mlrun vs feast
| | mlrun | feast |
|---|---|---|
| Mentions | 3 | 8 |
| Stars | 1,287 | 5,246 |
| Growth | 5.5% | 1.7% |
| Activity | 9.9 | 9.3 |
| Latest commit | 2 days ago | 2 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mlrun
- Discussion on Need of Feature Stores
- I reviewed 50+ open-source MLOps tools. Here’s the result
You should also add MLRun: https://github.com/mlrun/mlrun
- Has anyone here been able to deploy Mlrun successfully on Kubernetes cluster?
feast
- What's Happening with Feast?
- Running The Feast Feature Store With Dragonfly
Feast stands as an exceptional open-source feature store, revolutionizing the efficient management and uninterrupted serving of machine learning (ML) features for real-time applications. At its core, Feast offers a sophisticated interface for storing, discovering, and accessing features—the individual measurable properties or characteristics of data essential for ML modeling. Operating on a distributed architecture, Feast harmoniously integrates several pivotal components, including the Feast Registry, Stream Processor, Batch Materialization Engine, and Stores.
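The components named above (Registry, online/offline stores, materialization) come together in Feast's repo configuration file. As a rough sketch of a minimal local setup (the project name and file paths are illustrative, not taken from the article), a `feature_store.yaml` might look like:

```yaml
# feature_store.yaml -- minimal local Feast setup (illustrative values)
project: driver_ranking        # hypothetical project name
registry: data/registry.db     # the Feast Registry: catalog of feature definitions
provider: local
online_store:
  type: sqlite                 # low-latency store for real-time serving
  path: data/online_store.db
offline_store:
  type: file                   # file-based batch source backing materialization
```

With a config like this in place, `feast apply` registers feature definitions and `feast materialize` loads features from the offline store into the online store for serving.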
- Ask HN: How to Break into AI Engineering
AI Engineering is basically Data Engineering focused on AI. Whereas in "traditional" Data Engineering you create pipelines that store processed data in something like a Data Lake, in AI Engineering your end storage might be a specialized feature store (like Feast or GCP Vertex AI).
There are some AI Engineers with a strong scientific/mathematical background, but that's rare. Usually, you're paired with ML people who actually develop and evaluate the models.
So my advice is to start with Data Engineering and then find a specialization in AI. You should have a VERY solid foundation in scripting and programming, especially Python, plus a lot of "data wrangling" concepts: understanding how data flows from point A to point B, how the intermediate storages and streaming engines work, etc. Functional programming is key here.
- In Need of Guidance: Implementing MLOps in a Complex Organization as a Junior Data Engineer
A feature store usually stores features that are used for training ML models. It is a centralized place for collaboration between data engineers, ML engineers, and data scientists: data engineers write to the feature store while ML engineers and data scientists read from it. Hopsworks https://www.hopsworks.ai and feast https://github.com/feast-dev/feast are examples of open-source feature stores.
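The write/read split described above can be sketched with a toy in-memory store. This is plain Python for illustration only, not the Feast or Hopsworks API; it just shows the contract between the two sides:

```python
# Toy in-memory "feature store" -- illustrative only, not a real library API.
# Data engineers write features keyed by an entity ID;
# ML engineers / data scientists read feature vectors back by entity ID.

class ToyFeatureStore:
    def __init__(self):
        self._features = {}  # entity_id -> {feature_name: value}

    def write(self, entity_id, features):
        """Data-engineering side: ingest or refresh features for an entity."""
        self._features.setdefault(entity_id, {}).update(features)

    def read(self, entity_id, feature_names):
        """ML side: fetch a feature vector for training or serving.

        Missing features come back as None rather than raising.
        """
        row = self._features.get(entity_id, {})
        return {name: row.get(name) for name in feature_names}

store = ToyFeatureStore()
store.write("driver_1001", {"trips_today": 7, "avg_rating": 4.8})
vector = store.read("driver_1001", ["trips_today", "avg_rating"])
print(vector)  # {'trips_today': 7, 'avg_rating': 4.8}
```

Real feature stores add the hard parts this sketch omits: point-in-time correct joins for training data, materialization from batch sources, and low-latency online serving.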
- [D] Your 🫵 Preferred Feature Stores?
- [P] Announcing Feast 0.10: The simplest way to serve features in production
Github: https://github.com/feast-dev/feast
- [D] What’s the simplest, most lightweight but complete and 100% open source MLOps toolkit? -> MY OWN CONCLUSIONS
Have you looked at Feast as a Feature Store solution? It seems promising, but I haven't really looked into it yet.
- Feast: OSS Feature Store for Production ML
What are some alternatives?
dagster-example-pipeline - Template Dagster repo using poetry and a single Docker container; works well with CICD
kedro-great - The easiest way to integrate Kedro and Great Expectations
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
featureform - The Virtual Feature Store. Turn your existing data infrastructure into a feature store.
SmartSim - SmartSim Infrastructure Library.
Milvus - A cloud-native vector database, storage for next generation AI applications
phidata - Build AI Assistants with function calling and connect LLMs to external tools.
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
mosec - A high-performance ML model serving framework, offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine
great_expectations - Always know what to expect from your data.
ploomber - The fastest ⚡️ way to build data pipelines. Develop iteratively, deploy anywhere. ☁️
feathr - Feathr – A scalable, unified data and AI engineering platform for enterprise