spark-rapids vs dagster

| | spark-rapids | dagster |
|---|---|---|
| Mentions | 6 | 52 |
| Stars | 898 | 13,154 |
| Star growth (monthly) | 2.3% | 2.6% |
| Activity | 9.8 | 10.0 |
| Latest commit | 1 day ago | 5 days ago |
| Language | Scala | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
spark-rapids
- Launch HN: ParaQuery (YC X25) – GPU Accelerated Spark/SQL
- Open source contributions for a Data Engineer?
His newer project, Ballista, was also donated to Apache Arrow. I hope to get the Rust skills to collaborate with him on open source work someday too. He's also doing really cool work on spark-rapids FYI.
- I am reading this article https://www.frontiersin.org/articles/10.3389/fnins.2015.00492/full and thinking about how to create an Amazon EMR infrastructure with PySpark. Why is the GPU server not one of the nodes in the Apache Spark cluster? Or is this just an abstract view in which the nodes are also the GPUs?
The spark-rapids project allows one to run multi-GPU ETL workloads on a Spark cluster. https://github.com/NVIDIA/spark-rapids In such a setup, the GPU nodes are part of the Spark cluster. Multi-GPU nodes are viable, although an executor is currently limited to a single GPU.
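For concreteness, here is a minimal PySpark sketch of enabling the RAPIDS Accelerator on such a cluster. The plugin class and the `spark.rapids.sql.enabled` / GPU resource keys are the documented configuration; the jar path and the tasks-per-GPU ratio are placeholder assumptions.

```python
from pyspark.sql import SparkSession

# Minimal sketch: enable the RAPIDS Accelerator on a GPU-equipped Spark cluster.
# The jar path is a placeholder; use the rapids-4-spark jar matching your
# Spark and CUDA versions.
spark = (
    SparkSession.builder
    .appName("gpu-etl-sketch")
    .config("spark.jars", "/opt/sparkRapidsPlugin/rapids-4-spark.jar")  # placeholder path
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")    # RAPIDS SQL plugin
    .config("spark.rapids.sql.enabled", "true")               # run supported SQL ops on GPU
    .config("spark.executor.resource.gpu.amount", "1")        # one GPU per executor
    .config("spark.task.resource.gpu.amount", "0.125")        # i.e. 8 concurrent tasks share a GPU
    .getOrCreate()
)

# Ordinary DataFrame code; supported operators are executed on the GPU transparently.
df = spark.range(0, 10_000_000).selectExpr("id", "id % 10 AS bucket")
df.groupBy("bucket").count().show()
```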
- Ballista: New approach for 2021
So, in my day job at NVIDIA, I work on the RAPIDS Accelerator for Apache Spark, an open-source plugin that provides GPU acceleration for ETL workloads by leveraging the RAPIDS cuDF GPU DataFrame library.
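For context, a hedged sketch of the cuDF API the plugin builds on; cuDF mirrors a subset of the pandas API but executes on the GPU, and the column names below are invented for illustration.

```python
import cudf  # requires a CUDA-capable GPU and the RAPIDS cudf package

# cuDF mimics a subset of the pandas API, with execution on the GPU.
df = cudf.DataFrame({
    "store": ["a", "b", "a", "c", "b"],
    "sales": [10.0, 3.5, 7.25, 1.0, 4.0],
})

# The groupby/aggregation runs as GPU kernels rather than on the CPU.
totals = df.groupby("store")["sales"].sum().sort_index()
print(totals)
```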
dagster
- Personal Picks: Data Product News (March 19, 2025)
- Data Orchestration Tool Analysis: Airflow, Dagster, Flyte
Data orchestration tools are key to managing data pipelines in modern workflows. Apache Airflow, Dagster, and Flyte are popular options for this need, but they serve different purposes and follow different philosophies. Choosing the right tool for your requirements is essential for scalability and efficiency. In this blog, I compare Apache Airflow, Dagster, and Flyte, exploring their evolution, features, and unique strengths, while sharing insights from my hands-on experience with these tools in a weather data pipeline project.
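To make Dagster's asset-based model concrete, here is a minimal sketch loosely themed on the weather pipeline above; the asset names and the unit conversion are invented for illustration.

```python
from dagster import asset, materialize

@asset
def raw_weather() -> list[dict]:
    # Placeholder extract step; a real pipeline would call a weather API here.
    return [{"city": "Oslo", "temp_c": 4.0}, {"city": "Lima", "temp_c": 22.5}]

@asset
def weather_fahrenheit(raw_weather: list[dict]) -> list[dict]:
    # Dagster wires this asset to raw_weather via the parameter name.
    return [{**row, "temp_f": row["temp_c"] * 9 / 5 + 32} for row in raw_weather]

if __name__ == "__main__":
    # Materialize both assets in dependency order.
    result = materialize([raw_weather, weather_fahrenheit])
    assert result.success
```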
- Data Engineering with DLT and REST
This article demonstrates how to work with near-real-time and historical data using the dlt package. Whether you need to scale data access across the enterprise or provide historical data for post-event analysis, you can use the same framework to provide customer data. In a future article, I'll demonstrate how to use dlt with a workflow orchestrator such as Apache Airflow or Dagster.
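As a rough sketch of the dlt pattern the article describes (the pipeline name, REST endpoint, and table name below are placeholder assumptions):

```python
import dlt
import requests

def fetch_events():
    # Hypothetical REST endpoint; swap in the API you actually ingest from.
    resp = requests.get("https://api.example.com/events")
    resp.raise_for_status()
    yield from resp.json()

# dlt infers the schema and loads into the destination (DuckDB here for a local run).
pipeline = dlt.pipeline(
    pipeline_name="events_demo",
    destination="duckdb",
    dataset_name="raw",
)
info = pipeline.run(fetch_events(), table_name="events")
print(info)
```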
- Top 10 MLOps Tools for 2025
4. Dagster
- How I've implemented the Medallion architecture using Apache Spark and Apache Hadoop
The custom orchestrator I wrote should be replaced with a proper orchestration tool such as Apache Airflow, Dagster, etc.
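For readers unfamiliar with the pattern, a hedged PySpark sketch of the bronze/silver/gold flow; the HDFS paths, column names, and cleaning rules are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw data as-is (placeholder HDFS paths throughout).
bronze = spark.read.json("hdfs:///data/bronze/orders")

# Silver: deduplicated, cleaned, and conformed records.
silver = (
    bronze
    .dropDuplicates(["order_id"])                 # assumed business key
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.mode("overwrite").parquet("hdfs:///data/silver/orders")

# Gold: business-level aggregates ready for reporting.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.mode("overwrite").parquet("hdfs:///data/gold/daily_revenue")
```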
- AI Strategy Guide: How to Scale AI Across Your Business
Level 1 of MLOps is when you've put each lifecycle stage and their interfaces into an automated pipeline. The pipeline could be a Python or Bash script, or it could be a directed acyclic graph run by an orchestration framework such as Airflow, Dagster, or one of the cloud-provider offerings. AI- or data-specific platforms like MLflow, ClearML, and DVC also offer pipeline capabilities.
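As a minimal sketch of that Level-1 idea using Dagster's op/job API (the step names and the "training" logic are stand-ins, not a real model):

```python
from dagster import op, job

@op
def ingest() -> list[float]:
    # Stand-in for a data-ingestion step.
    return [0.2, 0.5, 0.9]

@op
def train(ingest: list[float]) -> float:
    # Stand-in "training": just average the inputs.
    return sum(ingest) / len(ingest)

@op
def evaluate(train: float) -> None:
    print(f"model score: {train:.3f}")

@job
def level1_pipeline():
    # Dagster builds the DAG from these call dependencies.
    evaluate(train(ingest()))

if __name__ == "__main__":
    level1_pipeline.execute_in_process()
```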
- Experience with Dagster.io?
- Dagster tutorials
My recommendation is to continue with the tutorial, then look at one of the larger example projects, especially the ones named “project_”, and you should understand most of it. For whatever you don't understand and are curious about, look up the relevant concept page for those functions in the docs.
- The Dagster Master Plan
I found this example that helped me - https://github.com/dagster-io/dagster/tree/master/examples/project_fully_featured/project_fully_featured
- What are some open-source ML pipeline managers that are easy to use?
I would recommend the following:
  - https://www.mage.ai/
  - https://dagster.io/
  - https://www.prefect.io/
  - https://metaflow.org/
  - https://zenml.io/home
What are some alternatives?
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
meltano - Meltano: the declarative code-first data integration engine that powers your wildest data and ML-powered product ideas. Say goodbye to writing, maintaining, and scaling your own API integrations.
Prefect - The easiest way to build, run, and monitor data pipelines at scale.
ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.
Mage - 🧙 The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai