spark-rapids
soda-sql
| | spark-rapids | soda-sql |
|---|---|---|
| Mentions | 3 | 25 |
| Stars | 720 | 50 |
| Growth | 4.2% | - |
| Activity | 9.8 | 8.2 |
| Latest commit | 7 days ago | over 1 year ago |
| Language | Scala | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spark-rapids
-
Open source contributions for a Data Engineer?
His newer project, Ballista, was also donated to Apache Arrow. I hope to build up the Rust skills to collaborate with him on open-source work someday too. He's also doing really cool work on spark-rapids, FYI.
-
I am reading this article https://www.frontiersin.org/articles/10.3389/fnins.2015.00492/full and thinking about how to create an Amazon EMR infrastructure with PySpark. Why is the GPU server not one of the nodes in the Apache Spark cluster? Or is this just an abstract view, and the nodes are also the GPUs?
The spark-rapids project allows one to run multi-GPU ETL workloads on a Spark cluster. https://github.com/NVIDIA/spark-rapids In such a setup, the GPU nodes are part of the Spark cluster. Multi-GPU nodes are viable, although an executor is currently limited to a single GPU.
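In such a deployment the accelerator is enabled purely through Spark configuration rather than code changes. A minimal sketch of the relevant `spark-defaults.conf` entries is below; the plugin class and `spark.rapids.sql.enabled` key follow the spark-rapids documentation, while the GPU counts and discovery-script path are illustrative values you would tune for your cluster:

```
# spark-defaults.conf sketch (values are illustrative)
spark.plugins                                com.nvidia.spark.SQLPlugin
spark.rapids.sql.enabled                     true
# one GPU per executor, matching the current one-GPU-per-executor limit
spark.executor.resource.gpu.amount           1
# several tasks can share the executor's GPU
spark.task.resource.gpu.amount               0.25
spark.executor.resource.gpu.discoveryScript  ./getGpusResources.sh
```

With this in place, existing DataFrame/SQL jobs run unchanged and supported operations are moved to the GPU.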
-
Ballista: New approach for 2021
So, in my day job at NVIDIA, I work on the RAPIDS Accelerator for Apache Spark, which is an open-source plugin that provides GPU-acceleration for ETL workloads, leveraging the RAPIDS cuDF GPU DataFrame library.
soda-sql
-
Data Quality - Great Expectations for Data Engineers
I might be a bit biased, but that was my opinion even before I started contributing to Soda SQL.
- dbt vs R/Python for transformation
-
SodaCL - preview of a new "data reliability as code" language
I'm one of the developers of the open-source soda-sql data quality monitoring library. Over the past year we got some incredible feedback from our users, and based on that we started working on a new DSL for data reliability as code that we are calling SodaCL.
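To give a taste of the direction, SodaCL checks read roughly like the sketch below. The table and column names are hypothetical, and the exact syntax of a preview-stage DSL may differ from what ships:

```yaml
# checks.yml (illustrative; table and column names are made up)
checks for orders:
  - row_count > 0
  - missing_count(customer_id) = 0
  - duplicate_count(order_id) = 0
```

The idea is that reliability expectations live in version control next to the pipeline code, rather than in a separate monitoring UI.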
-
How do you test your pipelines?
You can also use soda-sql to do checks on your warehouses separately. Both Soda SQL and Soda Spark are OSS/Apache licensed.
-
Being constantly shut down by more senior team members when I mention adding some QA in our work
As many have said, there may be a business side of things to deliver. Somebody above promised delivery on tight deadlines. Trust me, I am not a fan either, but this is how the world works, and it sucks. I would say that in your free time you should explore tools like Great Expectations (https://greatexpectations.io/) or soda-sql (https://github.com/sodadata/soda-sql), which are modern approaches to data testing worth adding to your learning curve.
- Soda
- How heavily do you use Great Expectations?
-
What are some exciting new tools/libraries in 2021?
soda-sql - a really cool library for automating data quality checks on SQL tables.
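In soda-sql those automated checks live in a per-table scan YAML. A minimal sketch, assuming a hypothetical `orders` table, might look like:

```yaml
# tables/orders.yml (illustrative; table and column names are made up)
table_name: orders
metrics:
  - row_count
  - missing_count
  - missing_percentage
tests:
  - row_count > 0
columns:
  id:
    tests:
      - missing_percentage == 0
```

A scan is then run against the warehouse from the CLI, e.g. `soda scan warehouse.yml tables/orders.yml`, and fails when any test fails.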
-
How do I incorporate testing after the fact?
Look at SodaSQL. It's more enterprise focused than Great Expectations and you can pipe results to a database for downstream actions and analysis.
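One way to act on scan output downstream is to parse the results and filter out failures before writing them to a database or alerting system. The result shape below is an assumption for illustration only, not soda-sql's exact output schema:

```python
# Sketch: extract failed tests from a scan-result dict so they can be
# piped to a warehouse table or alerting system downstream.
# The dict shape here is assumed for illustration, not soda-sql's
# exact output schema.

def failed_tests(scan_result: dict) -> list[dict]:
    """Return only the test results that did not pass."""
    return [t for t in scan_result.get("testResults", []) if not t.get("passed")]

# Example with a hand-made result dict:
scan_result = {
    "testResults": [
        {"test": "row_count > 0", "passed": True, "value": 1200},
        {"test": "missing_percentage(id) == 0", "passed": False, "value": 3.5},
    ]
}

for t in failed_tests(scan_result):
    print(f"FAILED: {t['test']} (value={t['value']})")
```

From here the failed rows could be inserted into an audit table for the downstream analysis the comment mentions.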
-
Data Testing Tools, Pytest vs Great Expectations vs Soda vs Deequ
Certainly! It's not requested that much, but please add an issue on GitHub. I would love to add at least experimental support.
What are some alternatives?
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
streamlit - Streamlit: a faster way to build and share data apps.
pandera - A light-weight, flexible, and expressive statistical data testing library
ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.
sqlfluff - A modular SQL linter and auto-formatter with support for multiple dialects and templated code.
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
dbt-sessionization - Using DBT for Creating Session Abstractions on RudderStack - an open-source, warehouse-first customer data pipeline and Segment alternative.
dagster - An orchestration platform for the development, production, and observation of data assets.
re_data - fix data issues before your users & CEO would discover them.
meltano - Meltano: the declarative code-first data integration engine that powers your wildest data and ML-powered product ideas. Say goodbye to writing, maintaining, and scaling your own API integrations.
trino_data_mesh - Proof of concept on how to gain insights with Trino across different databases from a distributed data mesh