| | Dask | Apache Spark |
|---|---|---|
| Mentions | 32 | 101 |
| Stars | 12,022 | 38,414 |
| Growth | 0.8% | 0.7% |
| Activity | 9.6 | 10.0 |
| Latest commit | 2 days ago | 1 day ago |
| Language | Python | Scala |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Dask
- The Distributed Tensor Algebra Compiler (2022)
- A peek into Location Data Science at Ola
Data scientists work on phenomenally large datasets, and Dask is a handy tool for exploration within the confines of a single cloud VM or their local PCs. Location data visualization is an essential part of deciding further algorithm development and roadmap for projects. This lays the foundation for data engineering and science to work at scale, with petabytes of data.
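As a rough illustration of that kind of single-machine exploration, here is a minimal Dask sketch; the file pattern and column names are made up for the example.

```python
import dask.dataframe as dd

# Lazily point Dask at a directory of CSV files; nothing is read into memory yet.
# The path and column names are placeholders.
rides = dd.read_csv("data/rides-*.csv")

# Build the computation lazily, then trigger it with .compute(),
# which streams through the partitions instead of loading everything at once.
avg_trip_km = rides.groupby("city")["trip_km"].mean()
print(avg_trip_km.compute())
```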
- File format for large data with many columns
- What is the best way to save a CSV file as numbers only? My PC hangs when the file is more than 2GB
Dask
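One way Dask helps in that situation is by reading the oversized CSV in partitions and rewriting it to a columnar format such as Parquet. This is a hedged sketch with placeholder file names; writing Parquet assumes pyarrow or fastparquet is installed.

```python
import dask.dataframe as dd

# Read the large CSV in ~64 MB partitions instead of all at once,
# so a 2 GB+ file never has to fit in RAM.
df = dd.read_csv("big_input.csv", blocksize="64MB")

# Keep only the numeric columns, then write to Parquet, which is much more
# compact and faster to reload than CSV (requires pyarrow or fastparquet).
numeric = df.select_dtypes(include="number")
numeric.to_parquet("big_output.parquet")
```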
- Large Scale Hydrology: Geocomputational tools that you use
We're using a lot of Python. In addition to these, gridMET, Dask, HoloViz, and kerchunk.
- msgspec - a fast & friendly JSON/MessagePack library
I wrote this for speeding up the RPC messaging in dask, but figured it might be useful for others as well. The source is available on github here: https://github.com/jcrist/msgspec.
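For a sense of what the library looks like in use, here is a small sketch based on msgspec's Struct plus JSON encode/decode API; the message type and fields are made up.

```python
import msgspec

# A typed message definition; msgspec validates the fields on decode.
class TaskResult(msgspec.Struct):
    task_id: str
    ok: bool
    duration_s: float

# Encode to JSON bytes and decode back into the typed struct.
payload = msgspec.json.encode(TaskResult(task_id="t-1", ok=True, duration_s=0.42))
result = msgspec.json.decode(payload, type=TaskResult)
print(result)
```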
- What does it mean to scale your Python-powered pipeline?
Dask: Distributed data frames, machine learning and more
- Data pipelines with Luigi
To do that, we are efficiently using Dask, simply creating on-demand local (or remote) clusters in the task's run() method:
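The original snippet isn't reproduced above, but a minimal sketch of that pattern might look like the following; the task name, file paths, and aggregation are hypothetical.

```python
import luigi
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster

class AggregateSales(luigi.Task):
    """Hypothetical task that spins up a local Dask cluster only inside run()."""

    def output(self):
        return luigi.LocalTarget("sales_summary.csv")

    def run(self):
        # Create the cluster on demand and tear it down when the task finishes.
        with LocalCluster(n_workers=4) as cluster, Client(cluster):
            df = dd.read_csv("sales-*.csv")
            summary = df.groupby("region")["amount"].sum().compute()
        summary.to_csv(self.output().path)

if __name__ == "__main__":
    luigi.build([AggregateSales()], local_scheduler=True)
```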
- Is NumPy always more efficient than Pandas? And how much should we rely on Python anyway?
Look into Dask, see: https://dask.org/
- Ask HN: Is PySpark a Dead-End?
[1] https://dask.org/
Apache Spark
- "xAI will open source Grok"
- Groovy 🎷 Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe". Yes, languages, plural! Java isn't the only language that runs on the JVM. I previously used Scala, another JVM language, with Apache Spark for data engineering workloads, but that's for another post 😉.
- 🦿🛴 Smart city garbage reporting automation w/ ollama
Consume the data into third-party software (then let OpenSearch, Apache Spark, or Apache Pinot handle it) for analysis/data science, into GIS systems (so you can put reports on a map), or into any ticket management system.
- Go concurrency simplified. Part 4: Post office as a data pipeline
Also, this knowledge applies to learning more about data engineering, as this field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
- Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: the Zeta engine from SeaTunnel or a wrapper around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
- Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- Integrate PySpark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
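For context, the Spark side of such an integration typically reads from Kafka through Spark's Kafka source connector (confluent-kafka is usually used on the producer side). A minimal, hedged sketch with a placeholder broker and topic:

```python
from pyspark.sql import SparkSession

# Requires the Spark Kafka connector package on the classpath;
# the broker address and topic name below are placeholders.
spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast to string before further processing.
query = (
    events.selectExpr("CAST(value AS STRING) AS value")
    .writeStream.format("console")
    .start()
)
query.awaitTermination()
```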
- Spark – A micro framework for creating web applications in Kotlin and Java
A JVM-based framework named "Spark", when https://spark.apache.org exists?
- Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
- PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request merged into the apache/spark repository that apparently adds initial Python bindings for PySpark on K8s. I posted a comment on the PR asking how to use spark-on-k8s from a Python Jupyter notebook, and was told to ask my question here.
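For reference, pointing a SparkSession at a Kubernetes master generally looks like the sketch below; the API-server URL, container image, namespace, and executor count are placeholders, and the exact configuration a given cluster needs may differ.

```python
from pyspark.sql import SparkSession

# Placeholder API-server URL, container image, and namespace.
spark = (
    SparkSession.builder
    .master("k8s://https://kubernetes.example.com:6443")
    .appName("jupyter-on-k8s-sketch")
    .config("spark.kubernetes.container.image", "my-registry/spark-py:3.5.0")
    .config("spark.kubernetes.namespace", "spark-jobs")
    .config("spark.executor.instances", "2")
    .getOrCreate()
)

print(spark.version)
```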
What are some alternatives?
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
Numba - NumPy aware dynamic Python compiler using LLVM
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
NetworkX - Network Analysis in Python
Scalding - A Scala API for Cascading
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
Interactive Parallel Computing with IPython - IPython Parallel: Interactive Parallel Computing in Python
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.