Five Interesting Data Engineering Projects
As the role of the data engineer continues to grow in the field of data science, so does the number of tools being developed to support wrangling all that data. Five of these tools are reviewed here (along with a few bonus tools) that you should pay attention to for your data pipeline work.
By Dmitriy Ryaboy, VP Software Engineering at Zymergen.
There’s been a lot of activity in the data engineering world lately, and a ton of really interesting projects and ideas have come on the scene in the past few years. This post is an introduction to (just) five that I think a data engineer who wants to stay current needs to know about.
DBT
DBT, or “data build tool,” is a clean, well-executed take on what’s fundamentally a simple idea: SQL statements for doing important work should be version controlled, and it’d be nice if they could be easily parametrized, and maybe refer to each other. DBT is aimed at “data analysts” rather than data engineers (though there’s no reason data engineers wouldn’t use it). Everything is done in SQL (well, that and YAML).
By cleanly structuring how projects are laid out, how references between queries work, and what fields need to be populated in a config, DBT enforces a lot of great practices and vastly improves what can often be a messy workflow. With all this in place, it can run your workflow, and it can also generate documentation, help you run validation and testing queries, share code via packages, and more. It has a gentle on-ramp and gives the analyst many powerful tools if she chooses to take advantage of them, or stays out of the way if not.
If you or someone in your life does a lot of SQL, they need to check out DBT.
Prefect
Prefect is an up-and-coming challenger to Airflow: yet another data pipeline manager that helps set up DAGs of processes, parametrize them, react appropriately to error conditions, create schedules and processing triggers, and so on. If you look past their slightly arrogant marketing (apparently, Prefect is already a “global leader in dataflow automation,” and Airflow is just a “historically important tool”), Prefect has a few neat things going that earn it much praise from adopters:
- It cleanly separates the actual data flows from the scheduling/triggering aspect of job management, making things like backfills, ad-hoc runs, parallel workflow instances, etc., easier to achieve.
- It’s got a neat functional (as well as an Airflow-like imperative-style) API for creating DAGs.
- It avoids Airflow’s XCom trap of communicating data between tasks through a sort of weird side channel that occasionally blows up on you, and instead relies on transparent (except when it blows up on you) serialization and explicit inputs/outputs for individual tasks.
- It makes dealing with parameters more straightforward.
You can do all this in Airflow, but the Prefect team argues their APIs make for much cleaner and more intuitive ways of addressing these and other challenges. They seem to have gained quite a few fans who agree.
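To make the functional API concrete, here is a minimal sketch of a three-task flow in the Prefect 1.x style that this post describes; the task names, their bodies, and the default parameter value are made up for illustration.

```python
# A minimal sketch of Prefect's functional API (Prefect 1.x style);
# the task names and the CSV path are hypothetical.
from prefect import Flow, Parameter, task


@task
def extract(path):
    # Pretend this reads rows from the file at `path`.
    return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]


@task
def transform(rows):
    # Data moves between tasks as ordinary return values,
    # not through an XCom-style side channel.
    return [{**row, "value": row["value"] * 2} for row in rows]


@task
def load(rows):
    print(f"loading {len(rows)} rows")


with Flow("etl-sketch") as flow:
    path = Parameter("path", default="data/input.csv")
    rows = extract(path)
    load(transform(rows))

# Ad-hoc run with an overridden parameter.
flow.run(parameters={"path": "data/other.csv"})
```

Calling the tasks inside the Flow context only builds the DAG; nothing executes until the flow is run by you or by a schedule.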
Dask
Are people still sleeping on Dask? Stop sleeping on Dask.
Dask is a “flexible library for parallel computing in Python.” If you are using Python a lot for data work, mostly sticking to NumPy / Scikit-learn / Pandas, you might find that throwing Dask in makes things whirr incredibly easily. It’s lightweight and fast, it works great on a single machine or on a cluster, it works well with RAPIDS to get you GPU acceleration, and it’s likely going to be a much easier transition for scale-up than moving your Python code over to PySpark. They have a surprisingly well-balanced doc talking about pros and cons vs. Spark here: https://docs.dask.org/en/latest/spark.html .
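As a tiny illustration of how little code the switch can take, here is a sketch using Dask’s dataframe API; the file pattern and column names are hypothetical.

```python
# A minimal sketch of Dask's Pandas-like dataframe API;
# the file pattern and column names are hypothetical.
import dask.dataframe as dd

# Lazily reads many CSVs as one logical, partitioned dataframe.
df = dd.read_csv("events-2020-*.csv")

# Familiar Pandas-style operations build a task graph...
mean_duration = df.groupby("event_type")["duration_ms"].mean()

# ...and nothing executes until you ask for a concrete result.
print(mean_duration.compute())
```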
DVC
DVC stands for “data version control.” This project invites data scientists and engineers to a Git-inspired world, where all workflow versions are tracked, along with all the data artifacts and models, as well as associated metrics.
To be honest, I’m a bit of a skeptic on “git for data” and various automated data/workflow versioning schemes: various approaches I’ve seen in the past were either too partial to be useful, or required too drastic a change in how data scientists worked to get a realistic chance at adoption. So I ignored, or even explicitly avoided, checking DVC out as the buzz grew. I’ve finally checked it out and… it looks like maybe this has legs? Metrics tied to branches/versions are a great feature. Tying the idea of git-like branches to training multiple models makes the value prop clear. The implementation, using Git for code and datafile index storage, while leveraging scalable data stores for data, and trying to reduce overall storage cost by being clever about reuse, looks sane. A lot of what they have to say in https://dvc.org/doc/understanding-dvc rings true. Thoughtworks used DVC as their demo tool of choice to discuss “CD4ML”.
On the other hand, I’m not super keen on handing over pipeline definition to DVC — Airflow or Prefect or a number of other tools appear to offer much more on that front. A casual perusal of internet resources revealed multiple mentions of using DVC alongside MLFlow or other tools, but it’s not clear how well that works and what one gives up.
Still — DVC is the technology that keeps coming up whenever the problem of “git for data” or “git for ML” comes up. It’s definitely worth checking out and keeping an eye on.
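For a quick taste of what versioned data artifacts look like from the consuming side, here is a sketch that reads a DVC-tracked file pinned to a specific Git revision through DVC’s Python API; the repository URL, file path, and tag are hypothetical.

```python
# A minimal sketch of fetching a versioned artifact with DVC's Python API;
# the repo URL, path, and tag below are hypothetical.
import dvc.api

with dvc.api.open(
    "data/training_set.csv",
    repo="https://github.com/example/ml-project",
    rev="v1.0",  # any Git branch, tag, or commit that the repo tracks
) as f:
    # Stream the file without permanently adding it to your workspace.
    print(f.readline())
```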
Great Expectations
Great Expectations is a really nice Python library that allows you to declare rules to which you expect certain datasets to conform, and to validate that they do as you encounter (produce or consume) those datasets. These are expectations such as expect_column_values_to_match_strftime_format or expect_column_distinct_values_to_be_in_set.
It’s not wrong to think of these as assertions for data. Expectations can be evaluated using a number of common data compute environments (Spark, SQL, Pandas) and integrate cleanly into a number of workflow engines, including DBT and Prefect discussed above (as well as Airflow, of course). The introduction and glossary of expectations sections of their docs are fairly self-explanatory.
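Here is a minimal sketch of that “assertions for data” idea using the classic Pandas-backed API from around the time of writing; the dataframe and its columns are invented for illustration, and newer releases restructure the API, so treat this as a flavor rather than a reference.

```python
# A minimal sketch of "assertions for data" with Great Expectations'
# classic Pandas-backed API; the dataframe and columns are hypothetical.
import great_expectations as ge
import pandas as pd

df = ge.from_pandas(pd.DataFrame({
    "signup_date": ["2020-01-01", "2020-01-02", "2020-01-03"],
    "plan": ["free", "pro", "enterprise"],
}))

# Each expectation reports whether the data conformed, plus details on failures.
print(df.expect_column_values_to_match_strftime_format("signup_date", "%Y-%m-%d"))
print(df.expect_column_distinct_values_to_be_in_set("plan", ["free", "pro", "enterprise"]))
```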
On top of providing ways to define and validate these assertions, Great Expectations provides automated data profilers that will generate the expectations and clean HTML data documentation. How cool is that?!
It’s not a completely novel idea, but it appears to be well-executed, and the library is gaining traction.
Bonus Round
Maybe it’s a post-Hadoop effect, maybe it’s The Cloud, maybe it’s just that Python finally has type hints, but it’s downright difficult to narrow the list of interesting projects to five. Here are a few more that I personally would love to spend some time with, and think you, a reader so committed that you are still here, might enjoy as well, in alphabetical order:
- Amundsen is an interesting “data discovery and metadata platform” from Lyft. Every self-respecting tech unicorn seems to have one of these now. Can we stop and choose a winner?
- Cadence is a “fault-oblivious stateful code platform” or, in other words, a way to outsource some of the common concerns about having long-lived state in your functions to somebody else. Anyway, find time to watch this video and consider where this might apply in your life: https://www.youtube.com/watch?v=llmsBGKOuWI
- Calcite is the core of the deconstructed database, providing a SQL parser, a database-agnostic query execution planner and optimizer, and more. It can be found in a number of the “big data” projects that offer SQL support (Hive, Flink, Drill, Phoenix…)
- Dagster is a data workflow engine from a co-creator of GraphQL, and it aims to transform developer ergonomics for data engineers the way GraphQL did for frontend engineers. It’s good stuff, and probably deserves a separate post.
- Json-Schema is not at all new, but for whatever reason, people seem to not know it exists. It exists, it’s been growing, and you should define and validate your dang schemas. There are specs, there are tools, you can hang this on your existing JSON APIs and not suffer Avro/Thrift/Proto envy. A quick validation sketch follows below this list.
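For the Json-Schema item above, here is the promised sketch using the Python jsonschema package; the schema and payload are made up.

```python
# A minimal sketch of validating a payload against a JSON Schema with the
# `jsonschema` package; the schema and payload are hypothetical.
from jsonschema import ValidationError, validate

schema = {
    "type": "object",
    "properties": {
        "user_id": {"type": "integer"},
        "email": {"type": "string"},
    },
    "required": ["user_id", "email"],
}

try:
    validate(instance={"user_id": 42, "email": "a@example.com"}, schema=schema)
    print("payload conforms to the schema")
except ValidationError as err:
    print(f"schema violation: {err.message}")
```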
Original. Reposted with permission.