| | DataGristle | soda-sql |
|---|---|---|
| Mentions | 5 | 25 |
| Stars | 137 | 50 |
| Growth | - | - |
| Activity | 0.0 | 8.2 |
| Last commit | 3 months ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DataGristle
- What are your weekend side projects?
- Instant data model from 1000s of unique files?
- Using Hashing to detect data changes in ELT
- How do you sort a CSV file with several million rows?
  > DataGristle: this one contains some more unusual csv utilities, and what's in master includes the ability to sort by field names rather than offsets: https://github.com/kenfar/DataGristle
- Open source contributions for a Data Engineer?
  > DataGristle by u/kenfar, who influenced many of us in this sub.
soda-sql
- Data Quality - Great Expectations for Data Engineers
  > I might be a bit biased, but that was my opinion even before I started contributing to Soda SQL.
- dbt vs R/Python for transformation
- SodaCL - preview of a new "data reliability as code" language
  > I'm one of the developers of the open-source soda-sql data quality monitoring library. Over the past year we got some incredible feedback from our users, and based on that we started working on a new DSL for data reliability as code that we are calling Soda CL.
- How do you test your pipelines?
  > You can also use soda-sql to run checks on your warehouses separately. Both Soda SQL and Soda Spark are OSS/Apache-licensed.
- Being constantly shut down by more senior team members when I mention adding some QA in our work
  > As many have said, there may be a business side of things to deliver: somebody above promised delivery on tight deadlines. Trust me, I am not a fan either, but that is how the world works, and it sucks. In your free time, explore tools like Great Expectations (https://greatexpectations.io/) or soda-sql (https://github.com/sodadata/soda-sql), which are modern approaches to data testing.
- Soda
- How heavily do you use Great Expectations?
- What are some exciting new tools/libraries in 2021?
  > soda-sql: a really cool library to automate data quality checks on SQL tables
- How do I incorporate testing after the fact?
  > Look at Soda SQL. It's more enterprise-focused than Great Expectations, and you can pipe results to a database for downstream actions and analysis.
- Data Testing Tools, Pytest vs Great Expectations vs Soda vs Deequ
  > Certainly! It's not requested that much, but please add an issue on GitHub. I would love to add at least experimental support.
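To make "automate data quality checks on SQL tables" concrete, here is a generic sketch of the kind of checks tools like soda-sql run (row counts, missing values). This is not soda-sql's API; it is a plain illustration using the standard-library `sqlite3` module, with a hypothetical `run_checks` helper:

```python
import sqlite3

def run_checks(conn, table, not_null_col):
    """Run simple data-quality checks on a SQL table (illustration only).

    Tools like soda-sql declare checks like these in config files and
    execute them against a warehouse; here we issue the SQL directly.
    NOTE: table/column names are interpolated for brevity; a real tool
    must validate or quote identifiers to avoid SQL injection.
    """
    cur = conn.cursor()
    checks = {}
    # Total number of rows in the table.
    checks["row_count"] = cur.execute(
        f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    # Number of rows where a column expected to be populated is NULL.
    checks["missing"] = cur.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {not_null_col} IS NULL"
    ).fetchone()[0]
    return checks
```

The results dictionary is the kind of payload the comment above suggests piping to a database for downstream actions and analysis.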
What are some alternatives?
Skytrax-Data-Warehouse - A full data warehouse infrastructure with ETL pipelines running inside docker on Apache Airflow for data orchestration, AWS Redshift for cloud data warehouse and Metabase to serve the needs of data visualizations such as analytical dashboards.
deequ - Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
pandera - A light-weight, flexible, and expressive statistical data testing library
sqlfluff - A modular SQL linter and auto-formatter with support for multiple dialects and templated code.
didact-engine - The REST API and execution engine for the Didact Platform.
dbt-sessionization - Using DBT for Creating Session Abstractions on RudderStack - an open-source, warehouse-first customer data pipeline and Segment alternative.
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs
re_data - fix data issues before your users & CEO discover them
Metabase - The simplest, fastest way to get business intelligence and analytics to everyone in your company
trino_data_mesh - Proof of concept on how to gain insights with Trino across different databases from a distributed data mesh