Apache Flink vs dvc

| | Apache Flink | dvc |
|---|---|---|
| Mentions | 10 | 110 |
| Stars | 23,248 | 13,189 |
| Growth (stars, month over month) | 0.9% | 1.2% |
| Activity | 9.9 | 9.6 |
| Latest commit | 6 days ago | 8 days ago |
| Language | Java | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Flink
-
First 15 Open Source Advent projects
7. Apache Flink | GitHub | tutorial
-
Pyflink : Flink DataStream (KafkaSource) API to consume from Kafka
Does anyone have a fully running PyFlink code snippet to read from Kafka using the new Flink DataStream (KafkaSource) API and just print the output to the console or write it out to a file? Most of the examples and the official Flink GitHub are using the old API (FlinkKafkaConsumer).
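A minimal sketch of that pattern, assuming PyFlink 1.16+ with the Kafka connector jar downloaded separately; the jar path, broker address, and topic names below are placeholders:

```python
from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import KafkaOffsetsInitializer, KafkaSource

env = StreamExecutionEnvironment.get_execution_environment()
# The Kafka connector is not bundled with PyFlink; point at the
# flink-sql-connector-kafka jar matching your Flink version (placeholder path).
env.add_jars("file:///path/to/flink-sql-connector-kafka.jar")

source = (
    KafkaSource.builder()
    .set_bootstrap_servers("localhost:9092")   # placeholder broker
    .set_topics("input-topic")                 # placeholder topic
    .set_group_id("pyflink-demo")
    .set_starting_offsets(KafkaOffsetsInitializer.earliest())
    .set_value_only_deserializer(SimpleStringSchema())
    .build()
)

# from_source attaches the new unified KafkaSource, replacing the
# legacy env.add_source(FlinkKafkaConsumer(...)) pattern.
ds = env.from_source(source, WatermarkStrategy.no_watermarks(), "kafka-source")
ds.print()  # or attach a FileSink to write to files instead
env.execute("kafka_source_to_console")
```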
-
I keep getting build failure when I try to run mvn clean compile package
I'm trying to use https://github.com/mauricioaniche/ck to analyze the CK metrics of https://github.com/apache/flink. I have the latest versions of Java and Apache Maven installed, my environment variables are set correctly, and I'm in the correct directory. However, when I run mvn clean compile package in PowerShell it always fails with a build error. I've tried looking up the errors, but there are so many: https://imgur.com/a/Zk8Snsa. I'm very new to programming in general, so any suggestions would be appreciated.
-
How do I determine what the dependencies are when I make pom.xml file?
Looking at the project on GitHub, it seems like they should have a pom in the root dir: https://github.com/apache/flink/blob/master/pom.xml
-
Akka is moving away from Open Source
Akka is used only as a possible RPC implementation, isn't it?
- We Are Changing the License for Akka
-
DeWitt Clause, or Can You Benchmark %DATABASE% and Get Away With It
Apache Drill, Druid, Flink, Hive, Kafka, Spark
-
Computation reuse via fusion in Amazon Athena
It took me some time to get a good grasp of the power of SQL, and it really kicked in when I learned about optimization rules. A query is a program that the engine rewrites, just like an optimizing compiler would.
You state what you want; there are different ways to fetch, match, and massage the data; and the optimizer searches through this space to produce a physical plan. Ideally it uses prior knowledge (table statistics) to weight which parts are worth optimizing, much as Java's JIT detects hot spots.
I find it fascinating to peer through database code to see what is going on. Lately, there have been new advances toward streaming databases, which bring a whole new design space. For example, you now have the latency of individual new rows to optimize for, as opposed to batching the dataset whole and optimizing its end-to-end latency. Batch scanning benefits from better use of your CPU caches.
And maybe you could have a hybrid system which reads history from a log, aggregates in a batched manner, and then switches to another execution plan when it reaches the end of the log.
If you want a peek at that, here are Flink's sets of rules [1], both generic and stream-specific. The names can be cryptic but usually give a good sense of what is going on. For example, PushFilterIntoTableSourceScanRule applies the WHERE clause as early as possible, to save CPU and network bandwidth further down; PushPartitionIntoTableSourceScanRule tries to make a fan-out/shuffle happen as early as possible, so that parallelism can be exploited.
[1] https://github.com/apache/flink/blob/5f8fb304fb5d68cdb0b3e3c...
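One way to watch those rules at work is to ask the planner for its optimized plan. A hedged PyFlink sketch follows: the table, schema, and query are made up, and the built-in datagen connector used here does not itself accept pushed-down filters, but the printed plan shows where the filter lands relative to the TableSourceScan.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical table backed by the built-in datagen connector.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id BIGINT,
        amount DOUBLE
    ) WITH (
        'connector' = 'datagen'
    )
""")

# Prints the plan the rule sets produced, including where the
# WHERE predicate ends up relative to the scan.
print(t_env.explain_sql("SELECT order_id FROM orders WHERE amount > 100"))
```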
-
Avro SpecificRecord File Sink using apache flink is not compiling due to error incompatible types: FileSink<?> cannot be converted to SinkFunction<?>
[1]: https://mvnrepository.com/artifact/org.apache.avro/avro-maven-plugin/1.8.2
[2]: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/sink/FileSink.java
[3]: https://ci.apache.org/projects/flink/flink-docs-master/docs/connectors/datastream/file_sink/
[4]: https://github.com/apache/flink/blob/c81b831d5fe08d328251d91f4f255b1508a9feb4/flink-end-to-end-tests/flink-file-sink-test/src/main/java/FileSinkProgram.java
[5]: https://github.com/rajcspsg/streaming-file-sink-demo
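The error in the title usually means the new unified FileSink is being handed to addSink, which expects the legacy SinkFunction; the unified sink API is attached with sinkTo instead. The same distinction exists in PyFlink; a minimal sketch (1.15+, placeholder output path):

```python
from pyflink.common.serialization import Encoder
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.file_system import FileSink

env = StreamExecutionEnvironment.get_execution_environment()
ds = env.from_collection(["a", "b", "c"])

sink = (
    FileSink
    .for_row_format("/tmp/out", Encoder.simple_string_encoder())  # placeholder path
    .build()
)

# FileSink implements the unified Sink API: attach it with sink_to(...),
# not add_sink(...), which only accepts the legacy SinkFunction.
ds.sink_to(sink)
env.execute("file_sink_demo")
```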
dvc
-
My Favorite DevTools to Build AI/ML Applications!
Collaboration and version control are crucial in AI/ML development projects due to the iterative nature of model development and the need for reproducibility. GitHub is the leading platform for source code management, allowing teams to collaborate on code, track issues, and manage project milestones. DVC (Data Version Control) complements Git by handling large data files, data sets, and machine learning models that Git can't manage effectively, enabling version control for the data and model files used in AI projects.
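As a small taste of that workflow, DVC's Python API can read a data file pinned to a specific Git revision; the repo URL, file path, and tag below are hypothetical:

```python
import dvc.api

# Stream a DVC-tracked file out of remote storage, addressed by the
# Git revision that versioned it (all names here are placeholders).
with dvc.api.open(
    "data/train.csv",
    repo="https://github.com/example/project",
    rev="v1.0",
) as f:
    header = f.readline()
```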
-
Why bad scientific code beats code following "best practices"
What you're describing sounds like DVC (at a higher-ish, 80%-solution level).
https://dvc.org/
See Pachyderm too.
-
First 15 Open Source Advent projects
10. DVC by Iterative | GitHub | tutorial
-
Exploring Open-Source Alternatives to Landing AI for Robust MLOps
Platforms such as MLflow monitor the development stages of machine learning models. In parallel, DVC (Data Version Control) brings version-control-system-style functionality to data sets and models.
- ML Experiments Management with Git
-
Git Version Controlled Datasets in S3
I was using DVC (https://dvc.org/) for some time to help solve this, but it was getting hard to manage the storage connections and I would run into cache issues a lot; this solves it using git-lfs itself.
- Ask HN: How do your ML teams version datasets and models?
-
Exploring MLOps Tools and Frameworks: Enhancing Machine Learning Operations
DVC (Data Version Control):
- Evaluate and Track Your LLM Experiments: Introducing TruLens for LLMs
-
[D] Is there a tool to keep track of my ML experiments?
I have been using DVC and MLflow since back when DVC had only data tracking and MLflow only model tracking. I can say both are awesome now; maybe the only factor I would mention is that, IMO, MLflow is a bit harder to learn, while DVC is practically just Git.
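For a sense of the difference, MLflow tracking is driven through its Python API, while DVC records experiments through Git itself. A minimal, illustrative MLflow run (parameter and metric values are made up):

```python
import mlflow

# Log one experiment run; MLflow stores params/metrics in its own
# tracking store, whereas DVC would version them via Git.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("val_accuracy", 0.93)
```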
What are some alternatives?
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL
MLflow - Open source platform for the machine learning lifecycle
Deeplearning4j - Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for Keras, TensorFlow, and ONNX/PyTorch; a modular and tiny C++ library for running math code; and a Java-based math library on top of the core C++ library. Also includes SameDiff: a PyTorch/TensorFlow-like library for running deep learning using automatic differentiation.
lakeFS - lakeFS - Data version control for your data lake | Git for data
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
H2O - Sparkling Water provides H2O functionality inside a Spark cluster
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, along with APIs in multiple languages
Scio - A Scala API for Apache Beam and Google Cloud Dataflow.
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
Apache Kafka - Mirror of Apache Kafka
aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.