Apache Arrow
Airflow
| | Apache Arrow | Airflow |
|---|---|---|
| Mentions | 75 | 169 |
| Stars | 13,480 | 34,485 |
| Growth | 2.2% | 2.1% |
| Activity | 10.0 | 10.0 |
| Last commit | 6 days ago | about 10 hours ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Arrow
-
How moving from Pandas to Polars made me write better code without writing better code
In comes Polars: a brand-new dataframe library, or, as its author Ritchie Vink describes it, a query engine with a dataframe frontend. Polars is built on top of the Arrow memory format and is written in Rust, a modern, performant, and memory-safe systems programming language comparable to C/C++.
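To make the "query engine with a dataframe frontend" idea concrete, here is a minimal sketch (assuming a recent Polars version; the column names and data are made up):

```python
import polars as pl

# A small Arrow-backed DataFrame (columns are illustrative).
df = pl.DataFrame({
    "category": ["a", "b", "a", "b"],
    "value": [1, 2, 3, 4],
})

# Lazy mode hands the whole expression tree to Polars' query engine,
# which can optimize the plan before executing it.
result = (
    df.lazy()
    .filter(pl.col("value") > 1)
    .group_by("category")
    .agg(pl.col("value").sum().alias("total"))
    .collect()
)
print(result)
```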
-
From slow to SIMD: A Go optimization story
I learned yesterday about Go's assembler (https://go.dev/doc/asm) after browsing how Arrow is implemented for different languages (my experience is mainly C/C++): https://github.com/apache/arrow/tree/main/go/arrow/math contains a bunch of .S ("asm") files, and I'm still not able to comprehend exactly how they work (I guess it'll take more reading); it seems very peculiar.
The last time I used inline assembly was back in Turbo/Borland Pascal, then a bit in 32-bit Visual Studio until it was disabled there. After that I did very little with gcc's stricter inline-assembly syntax (with the former you had to know how the ABI worked; with the latter too, but at least it was specced out).
Anyway, I wasn't expecting to find this in Go :) But I guess you can always start with .go code, produce assembly (-S), and then optimize it, or find/hire someone to do it.
-
Time Series Analysis with Polars
One issue stems from pandas's heritage of being built around the NumPy library, which is great for processing numerical data but becomes a problem as soon as the data is anything else. Pandas 2.0 has started to bring in Arrow, but it's not yet the standard (you have to opt in, and according to the developers it's going to stay that way for the foreseeable future). Also, pandas's Arrow-based features are not yet entirely on par with its NumPy-based ones. Polars was built around Arrow from the get-go. This makes it very powerful for exchanging data with other languages and reduces the number of in-memory copy operations, leading to better performance.
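As a rough illustration of that opt-in behavior (this sketch assumes pandas >= 2.0 with pyarrow installed; exact dtype strings may vary by version):

```python
import pandas as pd

# NumPy-backed by default: the missing value forces float64 with NaN.
s_numpy = pd.Series([1, 2, None])

# Arrow-backed only if you opt in explicitly: stays integer, uses Arrow nulls.
s_arrow = pd.Series([1, 2, None], dtype="int64[pyarrow]")

print(s_numpy.dtype)  # float64
print(s_arrow.dtype)  # int64[pyarrow]
```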
-
TXR Lisp
IMO a good first step would be to use the TXR FFI to write a library for Apache Arrow: https://arrow.apache.org/
-
3D desktop Game Engine scriptable in Python
https://www.reddit.com/r/O3DE/comments/rdvxhx/why_python/ :
> Python is used for scripting the editor only, not in-game behaviors.
> For implementing entity behaviors the only out of box ways are C++, ScriptCanvas (visual scripting) or Lua. Python is currently not available for implementing game logic.
C++, Lua, and Python can all interoperate through a C foreign function interface (FFI) for cross-language function and method calls.
"Using CFFI for embedding" https://cffi.readthedocs.io/en/latest/embedding.html :
> You can use CFFI to generate C code which exports the API of your choice to any C application that wants to link with this C code. This API, which you define yourself, ends up as the API of a .so/.dll/.dylib library—or you can statically link it within a larger application.
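A minimal embedding sketch along the lines of the CFFI documentation (the module name my_plugin and the add function are illustrative):

```python
import cffi

ffibuilder = cffi.FFI()

# The C API the shared library will export.
ffibuilder.embedding_api("""
    int add(int x, int y);
""")

ffibuilder.set_source("my_plugin", "")

# Python code that runs inside the library and implements the exported API.
ffibuilder.embedding_init_code("""
    from my_plugin import ffi

    @ffi.def_extern()
    def add(x, y):
        return x + y
""")

# Produces my_plugin.so / .dll / .dylib for a C application to link against.
ffibuilder.compile(target="my_plugin.*", verbose=True)
```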
Apache Arrow already supports C, C++, Python, Rust, and Go, and its C GLib bindings also support Lua:
https://github.com/apache/arrow/tree/main/c_glib/example/lua :
> Arrow Lua example: all of the example code uses LGI to access the Arrow GLib-based bindings
pyarrow.from_numpy_dtype:
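For reference, pyarrow.from_numpy_dtype maps a NumPy dtype to the corresponding Arrow type; a quick illustration:

```python
import numpy as np
import pyarrow as pa

# NumPy dtypes map to their Arrow equivalents.
print(pa.from_numpy_dtype(np.dtype("int64")))    # int64
print(pa.from_numpy_dtype(np.dtype("float32")))  # float (Arrow's 32-bit float)
```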
-
Show HN: Udsv.js – A faster CSV parser in 5KB (min)
-
Interacting with Amazon S3 using AWS Data Wrangler (awswrangler) SDK for Pandas: A Comprehensive Guide
AWS Data Wrangler is a Python library that simplifies the process of interacting with various AWS services, built on top of some useful data tools and open-source projects such as Pandas, Apache Arrow and Boto3. It offers streamlined functions to connect to, retrieve, transform, and load data from AWS services, with a strong focus on Amazon S3.
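A minimal sketch of the round trip the guide describes (the bucket name and path are placeholders; AWS credentials are assumed to be configured):

```python
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# Write the DataFrame to S3 as Parquet (Arrow handles the serialization).
wr.s3.to_parquet(df=df, path="s3://my-bucket/example/dataset.parquet")

# Read it back into pandas.
df2 = wr.s3.read_parquet(path="s3://my-bucket/example/dataset.parquet")
print(df2)
```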
-
Cap'n Proto 1.0
Worker should really adopt Apache Arrow, which has a much bigger ecosystem.
https://github.com/apache/arrow
-
C++ Jobs - Q3 2023
Apache Arrow
-
Wheel fails for pyarrow installation
I am aware that there are other posts about this issue, but none of the suggested fixes worked for me, and in some cases no fix was found at all. The issue was discussed in the wheel GitHub repo last December and seems to be solved, so does that mean I'm installing the wrong version? I simply used pip3 install pyarrow; is that wrong?
Airflow
-
Building in Public: Leveraging Tublian's AI Copilot for My Open Source Contributions
Contributing to Apache Airflow's open-source project immersed me in collaborative coding. Experienced maintainers rigorously reviewed my contributions, providing constructive feedback. This ongoing dialogue refined the codebase and honed my understanding of best practices.
-
Navigating Week Two: Insights and Experiences from My Tublian Internship Journey
In week two, I contributed to the Apache Airflow repository.
-
Airflow VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
-
Best ETL Tools And Why To Choose
Apache Airflow is an open-source platform to programmatically author, schedule, and monitor workflows. The platform features a web-based user interface and a command-line interface for managing and triggering workflows.
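As a rough sketch of the "programmatically author" part (the DAG and task names are illustrative; the API shown assumes Airflow 2.4+):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extracting...")

def load():
    print("loading...")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # run extract before load
```

The web UI then shows this DAG, its schedule, and the run history, and the same workflow can be triggered from the CLI with airflow dags trigger example_etl.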
-
Simplifying Data Transformation in Redshift: An Approach with DBT and Airflow
Airflow is the most widely used and well-known tool for orchestrating data workflows. It allows for efficient pipeline construction, scheduling, and monitoring.
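One common pattern for this setup (a sketch; the project path and layout are assumptions) is to have Airflow call dbt through a BashOperator:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="redshift_dbt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test",
    )
    dbt_run >> dbt_test  # only test once the models have built successfully
```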
-
Share Your favorite python related software!
Airflow: This is more of a library in my opinion, but Airflow has become an essential tool for scheduling in my work. All our ML training pipelines are ordered and scheduled with Airflow, and it works seamlessly. The dashboard it provides is also fantastic!
-
Ask HN: What is the correct way to deal with pipelines?
I agree there are many options in this space. Two others to consider:
- https://airflow.apache.org/
- https://github.com/spotify/luigi
There are also many Kubernetes based options out there. For the specific use case you specified, you might even consider a plain old Makefile and incrond if you expect these all to run on a single host and be triggered by a new file showing up in a directory…
- "Você veio protestar para ter acesso ao código fonte da urnas. O que é o código fonte?" "Não sei" 🤡
- How to build your own data platform. From zero to hero.
-
Is it impossible to contribute to open source as a data engineer?
You can try to contribute new connectors/operators for workflow managers like Airflow or Airbyte
What are some alternatives?
h5py - HDF5 for Python -- The h5py package is a Pythonic interface to the HDF5 binary data format.
Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
dagster - An orchestration platform for the development, production, and observation of data assets.
FlatBuffers - FlatBuffers: Memory Efficient Serialization Library
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
ClickHouse - ClickHouse® is a free analytics DBMS for big data
beam - Apache Beam is a unified programming model for Batch and Streaming data processing.
Dask - Parallel computing with task scheduling