bytewax vs flink-statefun
| | bytewax | flink-statefun |
|---|---|---|
| Mentions | 18 | 18 |
| Stars | 1,144 | 493 |
| Growth | 8.2% | 2.0% |
| Activity | 9.8 | 5.1 |
| Latest commit | 6 days ago | 5 months ago |
| Language | Python | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bytewax
- Building a streaming SQL engine with Arrow and DataFusion
- Near Real Time Ingestion to DB using Python
You can probably use Python to solve your problem; there are many ways to speed up your deserialization/flattening. I work on Bytewax (https://github.com/bytewax/bytewax), and I wouldn't mention it if it wasn't a good fit, but I think it's worth looking at here. It is a stream processor that makes it easy to scale, maintain order, and track progress, and you write native Python throughout.
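For what it's worth, much of that speedup often comes down to the parser plus a tight flatten loop. Below is a minimal sketch of the idea in plain Python; orjson and the field names are illustrative assumptions, not anything from the original thread.

```python
# A minimal sketch of one way to speed up JSON deserialization and
# flattening; orjson and the record shape are assumptions for illustration.
import orjson  # fast JSON parser; the stdlib json module also works, just slower


def flatten(record: dict, parent: str = "", sep: str = ".") -> dict:
    """Recursively flatten nested dicts into dotted keys."""
    flat = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, name, sep))
        else:
            flat[name] = value
    return flat


raw = b'{"user": {"id": 1, "geo": {"city": "Berlin"}}, "event": "click"}'
print(flatten(orjson.loads(raw)))
# {'user.id': 1, 'user.geo.city': 'Berlin', 'event': 'click'}
```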
- Stream processing framework for a new project in Python
Disclaimer: I work on Bytewax, but it feels like this could be a good fit and would save you some time looking around. If you need to do stateful operations (reduce, window, etc.), then you can use bytewax (https://github.com/bytewax/bytewax) with pub/sub, but you would need to build a custom connector. There are some guides on how to do that: https://www.bytewax.io/blog/custom-input-connector.
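To make the stateful part concrete, here is a hedged sketch of a keyed running count in bytewax. It assumes the operator-style API (roughly bytewax 0.19, where stateful_map's mapper receives None as the initial state); exact names vary by release, and a real pipeline would swap TestingSource for a pub/sub connector like the guide above describes.

```python
# A minimal bytewax sketch, assuming the operator-style API (~v0.19);
# signatures vary by release, so treat this as illustrative only.
import bytewax.operators as op
from bytewax.connectors.stdio import StdOutSink
from bytewax.dataflow import Dataflow
from bytewax.testing import TestingSource

flow = Dataflow("running_count")
events = op.input("inp", flow, TestingSource(["a", "b", "a", "a", "b"]))
keyed = op.key_on("key", events, lambda evt: evt)  # stateful ops need a keyed stream


def count(state, _value):
    # state is None the first time a key is seen
    state = (state or 0) + 1
    return state, state  # (new state, emitted value)


counts = op.stateful_map("count", keyed, count)
op.output("out", counts, StdOutSink())
# Run with: python -m bytewax.run my_module:flow
```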
- What are your favorite tools or components in the Kafka ecosystem?
- A Python package for streaming synthetic data
This is great; I definitely see the utility here. I have had to hack this together so many times while building streaming workflows with github.com/bytewax/bytewax and other tools.
- Snowflake - what are the streaming capabilities it provides?
When low latency matters you should always consider an ETL approach rather than ELT, e.g. collect data in Kafka and process using Kafka Streams/Flink in Java or Quix Streams/Bytewax in Python, then sink it to Snowflake where you can handle non-critical workloads (as is the case for 99% of BI/analytics). This way you can choose the right path for your data depending on how quickly it needs to be served.
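As a rough illustration of that ETL shape (not a production pipeline): consume from Kafka, filter and transform in Python, then publish to a topic that a Snowflake sink ingests from. The topic names and fields below are made up; it uses the kafka-python client.

```python
# Hedged sketch: Kafka in -> transform in Python -> Kafka out, with a
# Snowflake connector downstream on "clean-events". Names are placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode(),
)

for msg in consumer:
    event = msg.value
    if event.get("status") != "ok":  # transform/filter before loading
        continue
    producer.send("clean-events", {"id": event["id"], "ts": event["ts"]})
```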
- Sunday Daily Thread: What's everyone working on this week?
Working on how to use https://github.com/bytewax/bytewax to create embeddings in real time for ML use cases. I want to make a small library for embedding pipelines, but I'm still learning about vector DBs and the tradeoffs between the different solutions.
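A hedged sketch of just the embedding step, using sentence-transformers with an in-memory dict standing in for a vector DB; the model and names are assumptions, not the poster's actual code.

```python
# Illustrative only: embed incoming text and keep vectors for retrieval.
# The model choice and the dict "index" are assumptions for the sketch.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly model
index: dict[str, list[float]] = {}  # a real setup would use a vector DB


def embed_event(doc_id: str, text: str) -> None:
    """Embed one incoming document and store its vector for lookup."""
    index[doc_id] = model.encode(text).tolist()


embed_event("doc-1", "real-time stream processing in Python")
print(len(index["doc-1"]))  # 384 dimensions for this model
```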
- Arroyo: A distributed stream processing engine written in Rust
Project looks cool! Glad you open sourced it. It could use some comments in the code base to help contributors ;). I also like the DataFusion usage; that is awesome. BTW, I work on github.com/bytewax/bytewax, which is based on https://github.com/TimelyDataflow/timely-dataflow, another Rust dataflow computation engine.
- Launch HN: BuildFlow (YC W23) – The FastAPI of data pipelines
Cool, nice idea. Can you sub in a different backend like bytewax (https://github.com/bytewax/bytewax) for stateful processing?
- Kafka Stream Processing in Java or Scala
Just as an FYI, if you want to stay within your Python/SQL area of expertise (and by all means, I don't mean to discourage learning a new language): there are some non-Java/Scala tools out there, ranging from streaming databases like RisingWave and Materialize, to streaming platforms like Fluvio and Redpanda, to stream processors like Bytewax and Faust.
flink-statefun
- flink-statefun VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
- Snowflake - what are the streaming capabilities it provides?
When low latency matters you should always consider an ETL approach rather than ELT, e.g. collect data in Kafka and process using Kafka Streams/Flink in Java or Quix Streams/Bytewax in Python, then sink it to Snowflake where you can handle non-critical workloads (as is the case for 99% of BI/analytics). This way you can choose the right path for your data depending on how quickly it needs to be served.
- JR, quality Random Data from the Command line, part I
Sometimes we may need to generate random data of type 2 in different streams, so the "coherency" must also span different entities; think, for example, of referential integrity in databases. If I am generating users, products, and orders to three different Kafka topics and I want to create a streaming application with Apache Flink, I definitely need the data to be coherent across topics.
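A toy Python illustration of that cross-entity coherence: orders only ever reference user and product IDs that were actually generated. The Kafka wiring is omitted; in practice each list would be produced to its own topic.

```python
# Referential integrity across generated streams: foreign keys in orders
# are drawn from the users and products that already exist.
import random
import uuid

users = [{"user_id": str(uuid.uuid4())} for _ in range(100)]
products = [
    {"product_id": str(uuid.uuid4()), "price": round(random.uniform(1, 99), 2)}
    for _ in range(20)
]


def random_order() -> dict:
    """Build an order whose foreign keys reference existing entities."""
    return {
        "order_id": str(uuid.uuid4()),
        "user_id": random.choice(users)["user_id"],
        "product_id": random.choice(products)["product_id"],
    }


orders = [random_order() for _ in range(500)]  # produce to a third topic
```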
- Brand Lift Studies on Reddit
The Treatment and Control audiences need to be stored for future low-latency, high-reliability retrieval. Retrieval happens when we are delivering the survey, and informs the system which users to send surveys to. How is this achieved at Reddit’s scale? Users interact with ads, which generate events that are sent to our downstream systems for processing. At the output, these interactions are stored in DynamoDB as engagement records for easy access. Records are indexed on user ID and ad campaign ID to allow for efficient retrieval. The use of stream processing (Apache Flink) ensures this whole process happens within minutes, and keeps audiences up to date in real-time. The following high-level diagram summarizes the process:
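To make the retrieval side concrete, here is a hedged boto3 sketch of a DynamoDB query keyed on user ID and campaign ID; the table and attribute names are guesses, not Reddit's actual schema.

```python
# Illustrative DynamoDB lookup by the composite key the post describes.
# Table and attribute names are hypothetical.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("engagement-records")


def get_engagements(user_id: str, campaign_id: str) -> list[dict]:
    """Low-latency retrieval of a user's records for one ad campaign."""
    resp = table.query(
        KeyConditionExpression=Key("user_id").eq(user_id)
        & Key("campaign_id").eq(campaign_id)
    )
    return resp["Items"]
```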
- Query Real Time Data in Kafka Using SQL
Most streaming database technologies use SQL for these reasons: RisingWave, Materialize, ksqlDB, Apache Flink, and so on all offer SQL interfaces. This post explains how to choose the right streaming database.
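As one concrete flavor of a SQL interface over streams: RisingWave speaks the Postgres wire protocol, so a standard Postgres driver can define a continuously maintained view. The connection details and source name below are placeholders, sketched from memory rather than taken from the post.

```python
# Hedged sketch: define a streaming materialized view over a Kafka-fed
# source via RisingWave's Postgres-compatible interface. Names are made up.
import psycopg2

conn = psycopg2.connect(host="localhost", port=4566, user="root", dbname="dev")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(
        """
        CREATE MATERIALIZED VIEW IF NOT EXISTS clicks_per_user AS
        SELECT user_id, COUNT(*) AS clicks
        FROM click_events  -- a source fed from Kafka
        GROUP BY user_id;
        """
    )
    cur.execute("SELECT * FROM clicks_per_user LIMIT 5;")
    print(cur.fetchall())
```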
- How to choose the right streaming database
Apache Flink.
- 5 Best Practices For Data Integration To Boost ROI And Efficiency
There are different ways to implement parallel dataflows, such as using parallel data processing frameworks like Apache Hadoop, Apache Spark, and Apache Flink, or using cloud-based services like Amazon EMR and Google Cloud Dataflow. It is also possible to use dataflow-oriented tools like Apache NiFi and Apache Kafka to handle big data and distributed computing.
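For flavor, here is a minimal PySpark job showing the parallel-dataflow idea: the same transformation runs across partitions with no explicit threading. Paths and column names are illustrative only.

```python
# Illustrative PySpark job: reads are partitioned and the aggregation is
# shuffled across workers automatically. Paths/columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("parallel-dataflow").getOrCreate()

df = spark.read.json("events/*.json")  # read in parallel across partitions
daily = df.groupBy(F.col("date")).count()  # aggregated in parallel
daily.write.mode("overwrite").parquet("out/daily_counts")
```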
- Forward Compatible Enum Values in API with Java Jackson
We’re not discussing the technical details behind the deduplication process. It could be Apache Flink, Apache Spark, or Kafka Streams. Anyway, it’s out of the scope of this article.
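The deduplication idea itself is framework-agnostic; here is a toy version in plain Python. A real Flink/Spark/Kafka Streams job would keep the seen-set in managed, fault-tolerant state with a TTL rather than in process memory.

```python
# Toy dedup by event ID; illustrative only.
def deduplicate(events):
    seen: set[str] = set()
    for event in events:
        if event["id"] in seen:
            continue  # drop the duplicate delivery
        seen.add(event["id"])
        yield event


events = [{"id": "e1"}, {"id": "e2"}, {"id": "e1"}]
print(list(deduplicate(events)))  # e1 and e2 once each
```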
- Which MQTT (or similar protocol) broker for a few 10k IoT devices with quite a lot of traffic?
One can also consider https://flink.apache.org/ instead of Kafka for connecting a large number of devices.
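On the device side, a minimal subscriber sketch using the paho-mqtt 1.x client API (the 2.x release changed the Client constructor); the broker address and topic are placeholders.

```python
# Illustrative MQTT subscriber (paho-mqtt 1.x API); names are placeholders.
import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)


client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("devices/+/telemetry")  # '+' matches one topic level
client.loop_forever()
```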
- Apache Pulsar vs Apache Kafka - How to choose a data streaming platform
Both Kafka and Pulsar provide some kind of stream processing capability, but Kafka is much further along in that regard. Pulsar stream processing relies on the Pulsar Functions interface which is only suited for simple callbacks. On the other hand, Kafka Streams and ksqlDB are more complete solutions that could be considered replacements for Apache Spark or Apache Flink, state-of-the-art stream-processing frameworks. You could use them to build streaming applications with stateful information, sliding windows, etc.
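To make "sliding windows" concrete: a toy windowed count in plain Python, the kind of stateful operation Kafka Streams or Flink would manage for you, with fault tolerance, at scale.

```python
# Toy sliding-window count over event timestamps (seconds); illustrative.
from collections import deque

WINDOW_SECS = 60


def sliding_count(timestamps):
    """Yield (ts, number of events in the last WINDOW_SECS) per event."""
    window: deque[float] = deque()
    for ts in timestamps:
        window.append(ts)
        while window and window[0] <= ts - WINDOW_SECS:
            window.popleft()  # evict events older than the window
        yield ts, len(window)


print(list(sliding_count([0, 10, 30, 65, 130])))
# [(0, 1), (10, 2), (30, 3), (65, 3), (130, 1)]
```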
What are some alternatives?
timely-dataflow - A modular implementation of timely dataflow in Rust
opensky-api - Python and Java bindings for the OpenSky Network REST API
arroyo - Distributed stream processing engine in Rust
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
2022-bytewax-redpanda-air-quality-monitoring
debezium - Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
django-unicorn - The magical reactive component framework for Django ✨
redpanda - Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!
Django - The Web framework for perfectionists with deadlines.
Apache Pulsar - Apache Pulsar - distributed pub-sub messaging system
Pyramid - Pyramid - A Python web framework
faust - Python Stream Processing. A Faust fork