|  | dbt-databricks | dbt-spark |
|---|---|---|
| Mentions | 15 | 7 |
| Stars | 180 | 364 |
| Growth | 1.7% | 1.6% |
| Activity | 9.5 | 8.6 |
| Latest commit | 14 days ago | 1 day ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dbt-databricks
-
Curious if anyone has adopted a stack to do raw data ingestion in Databricks?
Our current data infra looks a little something like this:
1. Airbyte deployed on EKS for supported data connectors. I’m using the alpha Databricks connector to load directly into Unity Catalog.
1a. S3 bucket for raw landing-zone storage when we cannot load directly into Databricks managed tables.
2. Orchestration, storage, and transformations are in Databricks. We call out to the Airbyte API in the EKS cluster to keep all orchestration inside Databricks.
2a. dbt-databricks for transformations and cleaning.
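A minimal sketch of step 2's "call out to the Airbyte API" idea, using only the Python standard library. The host, port, and connection ID below are placeholders (not from the original post), and the call targets Airbyte's open-source `POST /api/v1/connections/sync` endpoint:

```python
import json
import urllib.request

# Placeholders for illustration: the in-cluster Airbyte server address and
# the UUID of the connection that loads into Unity Catalog.
AIRBYTE_HOST = "http://airbyte-server.airbyte.svc.cluster.local:8001"
CONNECTION_ID = "00000000-0000-0000-0000-000000000000"


def build_sync_request(host: str, connection_id: str) -> urllib.request.Request:
    """Build the POST that asks Airbyte to run a sync for one connection."""
    payload = json.dumps({"connectionId": connection_id}).encode("utf-8")
    return urllib.request.Request(
        url=f"{host}/api/v1/connections/sync",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def trigger_sync() -> dict:
    """Fire the sync from a Databricks job and return Airbyte's job metadata."""
    req = build_sync_request(AIRBYTE_HOST, CONNECTION_ID)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A Databricks job can run `trigger_sync()` on a schedule, then poll the returned job until the load finishes before kicking off dbt.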
-
dolly-v2-12b
dolly-v2-12b is a 12-billion-parameter causal language model created by Databricks. It is derived from EleutherAI’s Pythia-12b, fine-tuned on a ~15K-record instruction corpus generated by Databricks employees, and released under a permissive license (CC-BY-SA).
-
Any suggestions for building a dbt project on Databricks?
Read this https://github.com/databricks/dbt-databricks
-
Clickstream data analysis with Databricks and Redpanda
Global organizations need a way to process the massive amounts of data they produce for real-time decision making. They often utilize event-streaming tools like Redpanda with stream-processing tools like Databricks for this purpose.
- Next step for my career..
-
DeWitt Clause, or Can You Benchmark %DATABASE% and Get Away With It
Databricks, a data lakehouse company founded by the creators of Apache Spark, published a blog post claiming that it set a new data warehousing performance record in the 100 TB TPC-DS benchmark. The post also claimed that Databricks was 2.7x faster than Snowflake and 12x better in price-performance.
- Would you use dbt with databricks? If so, why?
-
Welcome, DataEngHack online!
databricks
-
A Quick Start to Databricks on AWS
Go to Databricks and click the Try Databricks button. Fill in the form, then select AWS as your desired platform.
dbt-spark
-
Trying Delta Lake at home
Spark + dbt => https://github.com/dbt-labs/dbt-spark/blob/main/docker-compose.yml
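The linked compose file stands up a local Spark Thrift server for dbt to connect to. A matching dbt profile might look like the sketch below; the profile name and schema are arbitrary choices, and port 10000 is the Thrift default used by that setup:

```yaml
# ~/.dbt/profiles.yml -- sketch for the dbt-spark docker-compose setup
delta_at_home:
  target: local
  outputs:
    local:
      type: spark
      method: thrift
      host: localhost
      port: 10000
      user: dbt
      schema: analytics
```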
-
So now dbt is worth $4.2b! Yes, that's a "b" for billion.
So the idea is you land your raw data in a Delta bronze layer, then use dbt models to propagate that data forward to silver and gold, do all of your data quality checks, etc. All of the actual execution happens on a Databricks SQL endpoint (or you can use the dbt-spark adapter and run your transformations as Spark on a cluster).
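As a rough illustration of that bronze-to-silver step, a dbt model can select from the raw Delta table and materialize a cleaned result incrementally. All table and column names here are made up:

```sql
-- models/silver/orders_silver.sql (hypothetical names)
{{ config(materialized='incremental', file_format='delta') }}

select
    order_id,
    cast(order_ts as timestamp) as order_ts,
    trim(customer_email) as customer_email
from {{ source('bronze', 'orders_raw') }}
where order_id is not null
{% if is_incremental() %}
  and order_ts > (select max(order_ts) from {{ this }})
{% endif %}
```

The same model runs unchanged on a Databricks SQL endpoint via dbt-databricks or on a cluster via dbt-spark.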
-
Show HN: SpotML – Managed ML Training on Cheap AWS/GCP Spot Instances
Neat. Congratulations on the launch!
Apart from the fact that it could deploy to both GCP and AWS, what does it do differently than AWS Batch [0]?
When we had a similar problem, we ran jobs on spots with AWS Batch and it worked nicely enough.
Some suggestions (for a later date):
1. Add built-in support for Ray [1] (you'd essentially be then competing with Anyscale, which is a VC funded startup, just to contrast it with another comment on this thread) and dbt [2].
2. Support deploying coin miners (might be good to widen the product's reach, and stand it up against the likes of ConsenSys).
3. Get in front of many cost optimisation consultants out there, like the Duckbill Group.
If I may, where are you building this product from? And how many are on the team?
Thanks.
[0] https://aws.amazon.com/batch/use-cases/
[1] https://ray.io/
[2] https://getdbt.com/
-
Replacing Segment Computed & SQL Traits With dbt & RudderStack Warehouse Actions
It will be helpful to set the stage, as no two technical stacks are the same and not all data warehouse platforms provide the same functionality. It's for the latter that we really like tools like dbt, and the sample files provided here should provide a good starting point for your specific use case. Our instance leverages the cloud version of dbt and connects to our Snowflake data warehouse, where models output tables in a designated dbt schema.
-
Your default tool for ETL
T: SQL - views and scheduled queries in BigQuery; planning to go hard with dbt as soon as I can find some breathing room.
-
7 Alternatives to Using Segment
Since all of the data is often already in the data warehouse, the logical choice is to simply use the warehouse as a CDP. A modern data stack should provide an end-to-end flow covering data acquisition, collection, and transformation. In most cases, the easiest way to get there is with tools purpose-built for a single task; Fivetran, Snowflake, and dbt are great examples, and together they form the core technology stack that many data-driven companies are adopting. Fivetran handles data integration, providing a simple SaaS solution that helps businesses quickly move data out of their SaaS tools and into their data warehouse. Snowflake gives organizations an easy way to consolidate their data into one location for analytics. Lastly, dbt provides a simple SQL-based transformation tool that lets users create reusable data models. Combined, these three solutions make an effective data management platform.
-
Dbt with Databricks and Delta Lake?
This is the issue: https://github.com/dbt-labs/dbt-spark/issues/161. Too bad they still haven't fixed it!
What are some alternatives?
Neo4j - Graphs for Everyone
rudderstack-docs - Documentation repository for RudderStack - the Customer Data Platform for Developers.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
TimescaleDB - An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
damons-data-lake - All the code related to building my own data lake
sql_to_ibis - A Python package that parses sql and converts it to ibis expressions
cargo-crates - An easy way to build data extractors in Docker.
nutter - Testing framework for Databricks notebooks
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
bitcoin-etl - ETL scripts for Bitcoin, Litecoin, Dash, Zcash, Doge, Bitcoin Cash. Available in Google BigQuery https://goo.gl/oY5BCQ
nimbo - Run compute jobs on AWS as if you were running them locally.