Activeloop Hub VS evidently

Compare Activeloop Hub vs evidently and see what their differences are.

Activeloop Hub

Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake] (by activeloopai)

evidently

Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b (by evidentlyai)
Activeloop Hub vs evidently, at a glance:
  • Mentions: 31 vs 10
  • Monthly stars growth: - vs 3.9%
  • Stars: 4,807 vs 4,619
  • Activity: 9.9 vs 9.5
  • Latest commit: over 1 year ago vs 1 day ago
  • Language: Python vs Jupyter Notebook
  • License: Mozilla Public License 2.0 vs Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Activeloop Hub

Posts with mentions or reviews of Activeloop Hub. We have used some of these posts to build our list of alternatives and similar projects; the last post used for this was from 2022-04-19.
  • [Q] where to host 50GB dataset (for free?)
    1 project | /r/datasets | 25 Jun 2022
    Hey u/platoTheSloth, as u/gopietz mentioned (thanks a lot for the shout-out!!!), you can share them with the general public by uploading to the Activeloop Platform (for researchers, we offer special terms, but even as a general public member you get up to 300 GB of free storage!). Thanks to our open-source dataset format for AI, Hub, anyone can load the dataset in under 3 seconds with one line of code, and stream it while training in PyTorch/TensorFlow.
  • [D] NLP has HuggingFace, what does Computer Vision have?
    7 projects | /r/MachineLearning | 19 Apr 2022
    u/Remote_Cancel_7977 we just launched 100+ computer vision datasets via Activeloop Hub yesterday on r/ML (#1 post for the day!). Note: we do not intend to compete with HuggingFace (we're building the database for AI). Accessing computer vision datasets via Hub is much faster than via HuggingFace though, according to some third-party benchmarks. :)
  • [N] [P] Access 100+ image, video & audio datasets in seconds with one line of code & stream them while training ML models with Activeloop Hub (more at docs.activeloop.ai, description & links in the comments below)
    4 projects | /r/MachineLearning | 17 Apr 2022
    u/gopietz good question. htype="class_label" will work, but querying doesn't support multi-dimensional labels yet. Would you mind opening an issue requesting that feature?
  • Easy way to load, create, version, query and visualize computer vision datasets
    1 project | news.ycombinator.com | 28 Mar 2022
    Hi HN,

    In machine learning, we constantly work with tensor-based computations (that's the language ML models think in). I've recently discovered a project that makes it much easier to set up and conduct machine learning projects, and that lets you create and store datasets in a deep learning-native format.

    Hub by Activeloop (https://github.com/activeloopai/Hub) is an open-source Python package that arranges data in NumPy-like arrays. It integrates smoothly with deep learning frameworks such as TensorFlow and PyTorch for faster GPU processing and training. In addition, you can update data stored in the cloud, create machine learning pipelines using the Hub API, and interact with datasets (e.g. visualize them) in the Activeloop platform (https://app.activeloop.ai). The real benefit for me is that I can stream my datasets without needing to store them on my machine (my datasets can be 10GB+, but it works just as well with 100GB+ datasets like ImageNet (https://docs.activeloop.ai/datasets/imagenet-dataset), for instance).

    Hub lets us store image, audio, and video data in a way that can be accessed at lightning speed. The data can live in GCS/S3 buckets, local storage, or the Activeloop cloud, and can be used directly for training TensorFlow/PyTorch models, so you don't need to set up data pipelines. The package also comes with data version control, dataset search queries, and support for distributed workloads.

    For me personally, the simplicity of the API stands out. For instance:

    Loading datasets in seconds (a fuller streaming sketch follows this list of posts):

      import hub
      ds = hub.load("hub://activeloop/cifar10-train")
  • Easy way to load, create, version, query & visualize machine learning datasets
    1 project | /r/learnmachinelearning | 28 Mar 2022
    Hub by Activeloop (https://github.com/activeloopai/Hub) is an open-source Python package that arranges data in NumPy-like arrays. It integrates smoothly with deep learning frameworks such as TensorFlow and PyTorch for faster GPU processing and training. In addition, you can update data stored in the cloud, create machine learning pipelines using the Hub API, and interact with datasets (e.g. visualize them) in the Activeloop platform (https://app.activeloop.ai).
  • Datasets and model creation flow
    1 project | /r/mlops | 20 Feb 2022
    Consider this
  • [P] Database for AI: Visualize, version-control & explore image, video and audio datasets
    6 projects | /r/MachineLearning | 17 Feb 2022
    Please take a look at our open-source dataset format https://github.com/activeloopai/hub and a tutorial on htypes https://docs.activeloop.ai/how-hub-works/visualization-and-htype
    1 project | /r/MachineLearningKeras | 14 Feb 2022
    I'm Davit from Activeloop (activeloop.ai).
  • The hand-picked selection of the best Python libraries released in 2021
    12 projects | /r/Python | 21 Dec 2021
    Hub.
  • What are good alternatives to zip files when working with large online image datasets?
    2 projects | /r/datascience | 14 Dec 2021
    What solution have you used that you like as a data scientist when working with large datasets? Any standard Python API to access the data? Another solution? If anyone has used https://github.com/activeloopai/Hub or another similar API, I'd be interested to hear about your experience working with it!
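
Drawing on the posts above, here is a minimal sketch of the load-and-stream workflow based on the Hub 2.x Python API (the project has since moved to https://github.com/activeloopai/deeplake, where the same ideas live on under the deeplake package). The tensor names "images" and "labels" and the dict-style batches are assumptions about the cifar10-train layout, so treat this as illustrative rather than authoritative:

    import hub

    # Load the dataset lazily: samples are streamed on access,
    # not downloaded up front.
    ds = hub.load("hub://activeloop/cifar10-train")

    # Inspect a single sample as a NumPy array (assumed tensor name: "images").
    print(ds.images[0].numpy().shape)

    # Wrap the dataset in a PyTorch-compatible dataloader and stream
    # batches during training.
    dataloader = ds.pytorch(batch_size=32, shuffle=True, num_workers=2)
    for batch in dataloader:
        images, labels = batch["images"], batch["labels"]
        # ...forward/backward pass would go here...
        break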

evidently

Posts with mentions or reviews of evidently. We have used some of these posts to build our list of alternatives and similar projects; the last post used for this was from 2023-05-11.
  • [P] Free open-source ML observability course: starts October 16 🚀
    1 project | /r/MachineLearning | 15 Oct 2023
    Hi everyone, I’m one of the creators of Evidently, an open-source (Apache 2.0) tool for production ML monitoring. We’ve just launched a free open course on ML observability that I wanted to share with the community.
  • Free Open-source ML observability course
    1 project | news.ycombinator.com | 4 Oct 2023
    Evidently itself is an open-source ML monitoring tool with 3M+ downloads, so it's fairly popular: https://github.com/evidentlyai/evidently. The course will show it, but also other OSS tools like MLflow and Grafana.

    Disclaimer: I am one of the people working on Evidently.

  • Batch ML deployment and monitoring blueprint using open-source
    2 projects | /r/mlops | 11 May 2023
    Repo: https://github.com/evidentlyai/evidently/tree/main/examples/integrations/postgres_grafana_batch_monitoring
  • Looking for recommendations to monitor / detect data drifts over time
    3 projects | /r/datascience | 15 Apr 2023
  • evidently: Evaluate and monitor ML models from validation to production
    1 project | /r/coolgithubprojects | 8 Dec 2022
  • State of the Art data drift libraries on Python?
    3 projects | /r/mlops | 24 May 2022
    Thank you for your answer. I'm trying it today, along with the other libraries mentioned and https://github.com/evidentlyai/evidently
  • Package for drift detection
    2 projects | /r/mlops | 6 Apr 2022
    evidently: https://github.com/evidentlyai/evidently
  • The hand-picked selection of the best Python libraries released in 2021
    12 projects | /r/Python | 21 Dec 2021
    Evidently.
  • [D] 5 considerations for Deploying Machine Learning Models in Production – what did I miss?
    3 projects | /r/MachineLearning | 21 Nov 2021
    Consideration #5: For model observability, look to Evidently.ai, Arize.ai, Arthur.ai, Fiddler.ai, Valohai.com, or whylabs.ai.
  • Launch HN: Evidently AI (YC S21) – Track and Debug ML Models in Production
    1 project | news.ycombinator.com | 7 Jul 2021
    Hi HN, we are Evidently AI (http://evidentlyai.com). We're building monitoring for machine learning models in production. The tool is open source and available on GitHub: https://github.com/evidentlyai/evidently. You can use it locally in a Jupyter notebook or in a Bash shell. There's a video showing how it works in Jupyter here: https://www.youtube.com/watch?v=NPtTKYxm524.

    Machine learning models can stop working as expected, often for non-obvious reasons. If this happens to a marketing personalization model, you might spam your customers by mistake. If it happens to a credit scoring model, you might face legal and reputational risks. And so on. To catch issues with a model, it is not enough to just look at service metrics like latency. You have to track data quality, data drift (did the inputs change too much?), underperforming segments (does the model fail only for users in a certain region?), model metrics (accuracy, ROC AUC, mean error, etc.), and more.

    Emeli and I have been friends for many years. We first met when we both worked at Yandex (the company behind CatBoost and ClickHouse), creating ML systems for large enterprises. We then co-founded a startup focused on ML for manufacturing. Overall, we've worked on more than 50 real-world ML projects, from e-commerce recommendations to steel production optimization. We faced the monitoring problem ourselves when we put models in production and had to design and build custom dashboards. Emeli is also an ML instructor on Coursera (co-author of the most popular ML course in Russian) and in a number of offline courses. She knows first-hand how many data scientists end up implementing the same things over and over. There is no reason why everyone should have to build their own version of something like drift detection.

    We spent a couple of months talking to ML teams from different industries. We learned that there are no good, standard solutions for model monitoring. Some told us horror stories about broken models that went unnoticed and led to $100K+ in losses. Others showed us home-grown dashboards and complained that they were hard to maintain. Some said they simply have a recurring task to look at the logs once a month, and often catch issues late. It is surprising how often models are not monitored until the first failure: many teams told us they only started thinking about monitoring after the first breakdown. Some never do, and failures go undetected.

    If you want to calculate a couple of performance metrics on top of your data, it is easy to do ad hoc. But if you want stable visibility into different models, you need to consider edge cases, choose the right statistical tests and implement them, design visuals, define thresholds for alerts, and so on. That is a harder problem, combining statistics and engineering. Beyond that, monitoring often involves sharing results with different teams, from domain experts to developers. In practice, data scientists often end up sharing screenshots of their plots and sending files back and forth. Building a maintainable software system that supports these workflows is a project in itself, and machine learning teams usually do not have the time or resources for it.

    Since there is no standard open-source solution, we decided to build one. We want to automate as much as possible to help people focus on the modeling work that matters, not boilerplate code.

    Our main tool is an open-source Python library that generates interactive reports on ML model performance. To produce a report, you provide the model logs (input features, predictions, and ground truth if available) and reference data (usually from training). You then choose a report type, and we generate a set of dashboards. We have pre-built several reports to detect things like data drift and prediction drift, visualize performance metrics, and help you understand where the model makes errors. We can display these in a Jupyter notebook or as HTML. We can also generate a JSON profile instead of a report, which you can integrate with any external tool (like Grafana) to build whatever workflow you want, e.g. triggering retraining or alerts. (A minimal usage sketch follows this post.)

    Under the hood, we perform the needed calculations (e.g. Kolmogorov-Smirnov or chi-squared tests to detect drift) and generate multiple interactive tables and plots (using Plotly on the backend). Right now it works with tabular data only. In the future, we plan to add more data types and reports, and to make it easier to customize metrics. Our goal is to make it dead easy to understand all aspects of model performance and to monitor them.

    We differ from other approaches in a couple of ways. There are end-to-end ML platforms on the market that include monitoring features. These work for teams who are ready to trade flexibility in order to have an all-in-one tool. But most teams we spoke to have custom needs and prefer to build their own platform from open components. We want to create a tool that does one thing well and is easy to integrate with whatever stack you use. There are also some proprietary ML monitoring solutions on the market, but we believe that tools like these should be open, transparent, and available for self-hosting. That is why we are building it as open source.

    We launched under Apache 2.0 license so that everyone can use the tool. For now, our focus is to get adoption for the open-source project. We don’t plan to charge individual users or small teams. We believe that the open-source project should remain open and be highly valuable. Later on, we plan to make money by providing a hosted cloud version for teams that do not want to run it themselves. We're also considering an open-core business model where we charge for features that large companies care about like single sign-on, security and audits.

    If you work at a tech company, you might think that many ML infra problems are already solved. But in more traditional industries like manufacturing, retail, and finance, ML is just hitting adoption. Their ML needs and environments are often very different due to legacy IT systems, regulations, and the types of use cases they work with. Now that many are moving from ML proof-of-concept projects to production, they will need tools that help run models reliably.

    We are super excited to share this early release, and we'd love it if you could give it a try: https://github.com/evidentlyai/evidently. If you run models in production, let us know how you monitor them and whether anything is missing. If you need help testing the tool, we're happy to chat! We want to build this open-source project together with the community, and it is very important for us to hear your thoughts and feedback.
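
As referenced in the launch post above, here is a minimal sketch of the report workflow. Note that evidently's API has evolved since the 2021 launch: the Report/DataDriftPreset names below come from later releases (roughly evidently 0.2+), not the original Dashboard API, and the iris split is just a stand-in for real reference and production data:

    from sklearn import datasets
    from evidently.report import Report
    from evidently.metric_preset import DataDriftPreset

    # Stand-in data: treat the first half as the reference (training) sample
    # and the second half as current production logs.
    iris = datasets.load_iris(as_frame=True).frame
    reference, current = iris.iloc[:75], iris.iloc[75:]

    # Build a drift report; evidently picks a statistical test per column
    # (e.g. Kolmogorov-Smirnov for numerical, chi-squared for categorical).
    report = Report(metrics=[DataDriftPreset()])
    report.run(reference_data=reference, current_data=current)

    # Render in a Jupyter notebook (just display `report`), save as HTML,
    # or export JSON to feed an external tool like Grafana.
    report.save_html("drift_report.html")
    drift_json = report.json()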

What are some alternatives?

When comparing Activeloop Hub and evidently you can also consider the following projects:

dvc - 🦉 ML Experiments and Data Management with Git

great_expectations - Always know what to expect from your data.

petastorm - Petastorm library enables single machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as TensorFlow, PyTorch, and PySpark and can be used from pure Python code.

seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models

CKAN - CKAN is an open-source DMS (data management system) for powering data hubs and data portals. CKAN makes it easy to publish, share and use data. It powers catalog.data.gov, open.canada.ca/data, data.humdata.org among many other sites.

MLflow - Open source platform for the machine learning lifecycle

datasets - TFDS is a collection of datasets ready to use with TensorFlow, Jax, ...

whylogs - An open-source data logging library for machine learning models and data pipelines. 📚 Provides visibility into data quality & model performance over time. 🛡️ Supports privacy-preserving data collection, ensuring safety & robustness. 📈

TileDB - The Universal Storage Engine

ydata-profiling - 1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames.

postgresml - The GPU-powered AI application database. Get your app to market faster using the simplicity of SQL and the latest NLP, ML + LLM models.