dbt-spark vs trino-getting-started

| | dbt-spark | trino-getting-started |
|---|---|---|
| Mentions | 7 | 2 |
| Stars | 364 | 228 |
| Growth | 1.6% | - |
| Activity | 8.6 | 5.1 |
| Latest commit | 2 days ago | 17 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dbt-spark
-
Trying Delta Lake at home
Spark + dbt => https://github.com/dbt-labs/dbt-spark/blob/main/docker-compose.yml
-
So now dbt is worth $4.2b! Yes, that's a "b" for billion.
So the idea is you land your data raw in a Delta bronze layer, then use dbt models to propagate that data forward to silver and gold, handle all of your data quality checks, etc. All of the actual execution happens on a Databricks SQL endpoint (or you can use the dbt-spark adapter and run your transforms as Spark on a cluster).
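As a sketch of what one of those models looks like: a silver-layer dbt model is just a SQL file. The snippet below assumes a hypothetical `bronze` source with an `orders` table and uses dbt-spark's Delta file format; all names are illustrative.

```sql
-- models/silver/silver_orders.sql (hypothetical model; a minimal sketch)
-- Materialized as a Delta table so downstream gold models can build on it.
{{ config(materialized='table', file_format='delta') }}

select
    order_id,
    customer_id,
    cast(order_ts as timestamp) as order_ts,
    amount
from {{ source('bronze', 'orders') }}
where order_id is not null  -- a basic data-quality rule applied on the way to silver
```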
-
Show HN: SpotML – Managed ML Training on Cheap AWS/GCP Spot Instances
Neat. Congratulations on the launch!
Apart from the fact that it could deploy to both GCP and AWS, what does it do differently than AWS Batch [0]?
When we had a similar problem, we ran jobs on spots with AWS Batch and it worked nicely enough.
Some suggestions (for a later date):
1. Add built-in support for Ray [1] (you'd essentially then be competing with Anyscale, which is a VC-funded startup, just to contrast it with another comment on this thread) and dbt [2].
2. Support deploying coin miners (might be good to widen the product's reach, and stand it up against the likes of ConsenSys).
3. Get in front of many cost optimisation consultants out there, like the Duckbill Group.
If I may, where are you building this product from? And how many are on the team?
Thanks.
[0] https://aws.amazon.com/batch/use-cases/
[1] https://ray.io/
[2] https://getdbt.com/
-
Replacing Segment Computed & SQL Traits With dbt & RudderStack Warehouse Actions
It will be helpful to set the stage, as no two technical stacks are the same and not all data warehouse platforms provide the same functionality. It's for the latter reason that we really like tools like dbt, and the sample files included here should give you a good starting point for your specific use case. Our instance leverages the cloud version of dbt and connects to our Snowflake data warehouse, where models output tables in a designated dbt schema.
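For reference, routing a model into a designated schema is a one-line config in dbt; by default dbt appends the custom schema to the target schema name (e.g. analytics_dbt). A minimal sketch with hypothetical model and source names:

```sql
-- models/traits/computed_traits.sql (hypothetical; a minimal sketch)
-- config(schema=...) routes this table into the designated dbt schema.
{{ config(materialized='table', schema='dbt') }}

select
    user_id,
    count(*) as event_count,
    max(received_at) as last_seen_at
from {{ source('rudderstack', 'tracks') }}  -- hypothetical source definition
group by user_id
```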
-
Your default tool for ETL
T: SQL - views and scheduled queries in BigQuery (planning to go hard with dbt as soon as I can find some breathing room)
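For anyone unfamiliar with that pattern, the "T" step here is plain DDL in BigQuery; a minimal sketch with hypothetical project, dataset, and table names:

```sql
-- A view as the "T" step; schedule a query over it (or materialize it)
-- in BigQuery as needed. All names here are hypothetical.
CREATE OR REPLACE VIEW `my_project.analytics.daily_orders` AS
SELECT
  DATE(order_ts) AS order_date,
  COUNT(*) AS order_count,
  SUM(amount) AS revenue
FROM `my_project.raw.orders`
GROUP BY order_date;
```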
-
7 Alternatives to Using Segment
Since all of the data is often already in the data warehouse, the logical choice is to simply use it as a CDP. A modern data stack should provide an end-to-end flow across data acquisition, collection, and transformation. In most cases, the easiest way to enable this is by leveraging tools that are purpose-built to handle a single task. Fivetran, Snowflake, and dbt are great examples of this; in fact, this combination is a core technology stack that many data-driven companies are adopting. Fivetran handles the data integration aspect, providing a simple SaaS solution that helps businesses quickly move data out of their SaaS tools and into their data warehouse. Snowflake gives organizations an easy way to consolidate their data into one location for analytics. Lastly, dbt provides a simple, SQL-based transformation tool that enables users to create reusable data models. Combined, these three solutions create an effective data management platform.
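The reusability dbt brings comes largely from ref(), which lets one model build on another while recording the dependency. A minimal sketch assuming hypothetical staging models:

```sql
-- models/marts/customer_revenue.sql (hypothetical; a minimal sketch)
-- ref() resolves to the staging models' output tables and tracks the
-- dependency, which is what makes dbt models composable and reusable.
select
    c.customer_id,
    sum(o.amount) as lifetime_revenue
from {{ ref('stg_customers') }} as c
join {{ ref('stg_orders') }} as o using (customer_id)
group by c.customer_id
```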
-
Dbt with Databricks and Delta Lake?
This is the issue: https://github.com/dbt-labs/dbt-spark/issues/161. Too bad they still haven't fixed it!
trino-getting-started
-
Trying Delta Lake at home
https://github.com/bitsondatadev/trino-getting-started/tree/main/delta-lake => Trino (a Presto "equivalent") + the Delta Lake format + MinIO (an S3 equivalent)
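Once that stack is up, everything is plain SQL through Trino. A minimal sketch, assuming the Delta catalog in the tutorial is named delta and MinIO holds the bucket (schema, table, and bucket names are hypothetical):

```sql
-- Create a Delta-backed schema and table on MinIO, then query it.
CREATE SCHEMA delta.staging WITH (location = 's3a://my-bucket/staging');
CREATE TABLE delta.staging.events (event_id BIGINT, payload VARCHAR);
INSERT INTO delta.staging.events VALUES (1, 'hello delta');
SELECT * FROM delta.staging.events;
```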
-
(Almost) open-source data stack for a personal DE project. Before jumping into the project, I'd like to get some advice on things to fix or improve in this structure! Do you think this stack could work?
Here’s a small deployment with MinIO to play with: https://github.com/bitsondatadev/trino-getting-started/tree/main/hive/trino-minio
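Same idea with the Hive connector in that deployment; a minimal sketch, assuming the catalog is named hive (check the repo's catalog properties, the name may differ) and the bucket name is hypothetical:

```sql
-- Create a Hive-backed schema and table on MinIO, then query it.
CREATE SCHEMA hive.tiny WITH (location = 's3a://tiny/');
CREATE TABLE hive.tiny.names (name VARCHAR) WITH (format = 'ORC');
INSERT INTO hive.tiny.names VALUES ('trino');
SELECT * FROM hive.tiny.names;
```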
What are some alternatives?
dbt-databricks - A dbt adapter for Databricks.
airflow-docker - Source code of the Apache Airflow Tutorial for Beginners on YouTube Channel Coder2j (https://www.youtube.com/c/coder2j)
rudderstack-docs - Documentation repository for RudderStack - the Customer Data Platform for Developers.
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, and APIs for Scala, Java, Rust, Ruby, and Python.
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
sqlglot - Python SQL Parser and Transpiler
damons-data-lake - All the code related to building my own data lake
docker-spark-deltalake - Docker image for running SparkSQL Thrift server
cargo-crates - An easy way to build data extractors in Docker.
fastapi-realworld-example-app - Backend logic implementation for https://github.com/gothinkster/realworld with awesome FastAPI
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
delta-docs - Delta Lake Documentation