Scio vs dbt-expectations
| | Scio | dbt-expectations |
|---|---|---|
| Mentions | 7 | 10 |
| Stars | 2,520 | 939 |
| Growth | 0.4% | 3.3% |
| Activity | 9.6 | 6.7 |
| Latest commit | 5 days ago | 27 days ago |
| Language | Scala | Shell |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Scio
- Are there any openly available data engineering projects using Scala and Spark which follow industry conventions like proper folder/package structures and object-oriented division of classes/concerns? Most examples I've seen have everything in one file without proper separation of concerns.
-
For the DEs that choose Java over Python in new projects, why?
I doubt it is possible, because I suspect the GIL would like a word. So I could spend nights trying to make it work in Python (and possibly, if not likely, fail), or I could just use this ready-made solution.
-
What popular companies use Scala?
Spotify, for its Apache Beam API called Scio. They open-sourced it: https://spotify.github.io/scio/
-
Scala or Python
Generally, Python is the lingua franca: I have never met a data engineer who doesn't know Python. Scala isn't used everywhere. Also, you should know that in Apache Beam (a data processing framework that's gaining popularity because it can handle both streaming and batch processing, and runs on Spark) the language choices are Java, Python, Go, and Scala. So even if you "only" know Java, you can get started with data engineering through Apache Beam.
-
Wanting to move away from SQL
I agree 100%. I haven't used SQL that much in previous data engineering roles, and I refuse to consider jobs that mostly deal with SQL. One of my roles involved using a nice Scala API for Apache Beam called Scio, and it was great. Code was easy to write, maintain, and test. It also worked well with other services like Pub/Sub and Bigtable.
-
ETL Pipelines with Airflow: The Good, the Bad and the Ugly
If you prefer Scala, then you can try Scio: https://github.com/spotify/scio.
-
ELT, Data Pipeline
To counter the problem mentioned above, we decided to move our data to a Pub/Sub-based stream model, where we would continue to push data as it arrives. Since fluentd is the primary tool used on all our servers to gather data, rather than replacing it we leveraged its plugin architecture and used a plugin to stream data into a sink of our choosing.

Initially, our inclination was towards Google Pub/Sub and Google Dataflow, as our data scientists/engineers use BigQuery extensively and keeping the data in the same cloud made sense. The inspiration for using these tools came from Spotify's Event Delivery – The Road to the Cloud. We did the setup on one of our staging servers with Google Pub/Sub and Dataflow. Neither really worked out for us: the Pub/Sub model requires a subscriber to be attached to the topic a publisher streams messages to, otherwise the messages are not stored, and on top of that there was no way to see which messages were arriving. The weirdest thing we encountered was that, when working with Dataflow, the topic would be orphaned, losing its subscribers.

We might have managed to live with Pub/Sub; the wall in our path was Dataflow. We started off using Scio from Spotify to work with Dataflow, but there is a considerable lack of documentation for it, and we found the community on GitHub to be very reserved, something quite evident in the world of Scala, which came up with a Code of Conduct for its user base to follow. What we required from Dataflow was support for batch writes to GCS. After trying our hand at Dataflow without success, Google's staff on Stack Overflow were quite responsive, and their response confirmed that this was not available in Dataflow; streaming data to BigQuery, Datastore, or Bigtable as a datastore was the supported option. We didn't do that because we wanted to avoid the high streaming cost of storing data in these services, as the majority of our data team's jobs are based on batched hourly data. The initial proposal for the updated pipeline is shown below.
dbt-expectations
-
Dbt tests vs Soda SQL
Have not used Soda, but dbt is indeed pretty good, especially when adding dbt-expectations.
-
Data-eng related highlights from the latest Thoughtworks Tech Radar
dbt-expectations
-
Data Quality Dimensions: Assuring Your Data Quality with Great Expectations
I highly, highly recommend the dbt-expectations extension from Calogica for dbt. It's a port of Great Expectations, except you can quickly drop it into your schema.yml files and have it run as part of your dbt test process. Super powerful, and it's prevented us from shipping bad data many times.
-
Managing SQL Tests
I'm used to utilising dbt and defining my tests there (along with dbt-utils or https://github.com/calogica/dbt-expectations): I simply add a list item to a column definition and can already define a great number of tests without having to copy code. I can even extend the pre-defined ones with generic tests. Writing custom tests also integrates nicely. Additionally, it's very convenient to tag tests or define a severity. The learning curve for a business engineer is almost flat as long as they know some SQL.
-
What are some Data Quality check related frameworks for datasets ranging from 100GB to 1TB in size?
Use dbt's testing functionality during your transformations with calogica/dbt-expectations (the Great Expectations framework ported to dbt).
-
Great Expectations is annoyingly cumbersome
Check out dbt-expectations https://github.com/calogica/dbt-expectations
-
CI/CD in data engineering - help a noob
There are certain things I would like to add, such as data quality checks. I could use something like dbt-expectations, but I am not sure how much more I should force it before getting an Airflow setup.
- How do you query and quality check data produced in intermediate steps in an analytics pipeline?
-
ETL Pipelines with Airflow: The Good, the Bad and the Ugly
[dbt Labs employee here]
Check out the dbt-expectations package[1]. It's a port of the Great Expectations checks to dbt as tests. The advantage of this is that you don't need another tool for these pretty standard tests, and they can be easily incorporated into dbt workflows.
[1] https://github.com/calogica/dbt-expectations
-
Unit testing SQL in DBT
Also check out dbt-expectations, a port of Great Expectations that greatly expands the set of configurable (non-assert) tests.
What are some alternatives?
Apache Spark - A unified analytics engine for large-scale data processing
dbt-utils - Utility functions for dbt projects.
Apache Flink - Stateful computations over data streams
dbt-oracle - A dbt adapter for oracle db backend
Apache Kafka - Mirror of Apache Kafka
materialize - The data warehouse for operational workloads.
beam - Apache Beam is a unified programming model for Batch and Streaming data processing.
NVTabular - NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems.
Reactive-kafka - Alpakka Kafka connector - Alpakka is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka.
cuetils - CLI and library for diff, patch, and ETL operations on CUE, JSON, and YAML
metorikku - A simplified, lightweight ETL Framework based on Apache Spark
dbt-fal - do more with dbt. dbt-fal helps you run Python alongside dbt, so you can send Slack alerts, detect anomalies and build machine learning models.