spark-rapids
Apache Spark
| | spark-rapids | Apache Spark |
|---|---|---|
| Mentions | 3 | 101 |
| Stars | 720 | 38,378 |
| Growth | 4.2% | 1.3% |
| Activity | 9.8 | 10.0 |
| Latest commit | 6 days ago | 3 days ago |
| Language | Scala | Scala |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spark-rapids
-
Open source contributions for a Data Engineer?
His newer project, Ballista, was also donated to Apache Arrow. I hope to get the Rust skills to collaborate with him on open source work someday too. He's also doing really cool work on spark-rapids FYI.
-
I am reading this article https://www.frontiersin.org/articles/10.3389/fnins.2015.00492/full and thinking about how to create an Amazon EMR infrastructure with PySpark. Why is the GPU server not one of the nodes in the Apache Spark cluster? Or is this just an abstract view, and the nodes are also the GPUs?
The spark-rapids project allows one to run multi-GPU ETL workloads on a Spark cluster. https://github.com/NVIDIA/spark-rapids In such a setup, the GPU nodes are part of the Spark cluster. Multi-GPU nodes are viable, although an executor is currently limited to a single GPU.
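For a concrete picture, here is a minimal sketch of enabling the accelerator from PySpark. The plugin class and resource config keys are the documented ones; the app name, jar deployment, and resource amounts are illustrative assumptions.

```python
# Minimal sketch: enabling the RAPIDS Accelerator on a GPU-equipped Spark
# cluster from PySpark. Assumes the rapids-4-spark jar is already on the
# cluster classpath; resource amounts below are illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-etl-sketch")
    # Load the RAPIDS SQL plugin so eligible operators run on the GPU.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    # One GPU per executor, matching the single-GPU limitation noted above.
    .config("spark.executor.resource.gpu.amount", "1")
    # Let up to 4 tasks share that GPU (1 / 0.25).
    .config("spark.task.resource.gpu.amount", "0.25")
    .getOrCreate()
)
```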
-
Ballista: New approach for 2021
So, in my day job at NVIDIA, I work on the RAPIDS Accelerator for Apache Spark, an open-source plugin that provides GPU acceleration for ETL workloads, leveraging the RAPIDS cuDF GPU DataFrame library.
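A quick, hedged way to see the plugin at work: with the accelerator loaded (see the sketch above), accelerated operators appear in the physical plan with a "Gpu" prefix (e.g. GpuProject, GpuFilter). The query below is an arbitrary example, not from the post.

```python
# Hypothetical check that the plugin replaced CPU operators with GPU ones.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes the RAPIDS configs above

df = spark.range(0, 10_000_000).selectExpr("id", "id * 2 AS doubled")
df.filter("doubled % 3 = 0").explain()  # look for Gpu* nodes in the plan
```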
Apache Spark
- "xAI will open source Grok"
-
Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that runs on the JVM. I previously used Scala, a JVM language, to work with Apache Spark for data engineering workloads, but that's a topic for another post 😉.
-
🦿🛴 Smartcity garbage reporting automation w/ ollama
Consume the data into third-party software for analysis/data science (e.g. OpenSearch, Apache Spark, or Apache Pinot), GIS systems (so you can put reports on a map), or any ticket management system
-
Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, as this field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
-
Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: SeaTunnel's own Zeta engine, or wrappers around Apache Spark and Apache Flink. Be careful, as each engine comes with its own set of features.
-
Apache Spark VS quix-streams - a user-suggested alternative
2 projects | 7 Dec 2023
-
Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
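For reference, a hedged sketch of the Kafka source side in PySpark Structured Streaming; the broker address and topic name are placeholders, and the spark-sql-kafka-0-10 package must be on the classpath.

```python
# Sketch: consuming a Kafka topic with Structured Streaming and echoing
# decoded records to the console. Broker and topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before processing.
decoded = events.select(
    col("key").cast("string"),
    col("value").cast("string"),
)

query = (
    decoded.writeStream
    .format("console")   # console sink for demonstration only
    .outputMode("append")
    .start()
)
query.awaitTermination()
```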
-
Spark – A micro framework for creating web applications in Kotlin and Java
A JVM-based framework named "Spark", when https://spark.apache.org exists?
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
-
PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request that was merged into the apache/spark repository that apparently adds initial Python bindings for Spark on K8s. I posted a comment on the PR asking how to use spark-on-k8s from a Python Jupyter notebook, and was told to ask my question here.
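As an aside, a minimal sketch of what pointing a PySpark session at a Kubernetes master looks like; the API server URL, namespace, and container image are illustrative placeholders, not values from that PR.

```python
# Sketch: building a SparkSession against a Kubernetes master from Python.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("k8s://https://kubernetes.example.com:6443")  # placeholder API server
    .appName("pyspark-on-k8s-sketch")
    .config("spark.kubernetes.namespace", "spark")        # placeholder namespace
    .config("spark.kubernetes.container.image", "myrepo/spark-py:latest")
    .config("spark.executor.instances", "2")
    .getOrCreate()
)
```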
What are some alternatives?
airbyte - The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
streamlit - A faster way to build and share data apps.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
ballista - Distributed compute platform implemented in Rust, and powered by Apache Arrow.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
Scalding - A Scala API for Cascading
dagster - An orchestration platform for the development, production, and observation of data assets.
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
meltano - Meltano: the declarative code-first data integration engine that powers your wildest data and ML-powered product ideas. Say goodbye to writing, maintaining, and scaling your own API integrations.
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.