deep-fast-vision vs deeplake

| | deep-fast-vision | deeplake |
|---|---|---|
| Mentions | 4 | 14 |
| Stars | 13 | 8,304 |
| Growth | - | 1.2% |
| Activity | 5.8 | 9.6 |
| Latest commit | about 1 year ago | about 9 hours ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
deep-fast-vision
- Making deep transfer learning vision easier to work with, Deep Fast Vision (new Python library)
- Deep Fast Vision (new Python library): Easy Auto-ML for Deep Transfer Learning Vision. Prototype your experiments fast with this new Python library!
Comprehensive documentation for Deep Fast Vision is available both in the docs folder and on the documentation page.
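Deep Fast Vision's own API isn't shown in these posts, so as a hedged illustration, here is the kind of transfer-learning boilerplate such a library automates, written in plain Keras (the backbone choice, head sizes, and data-directory layout below are assumptions, not the library's defaults):

```python
# Sketch of the transfer-learning workflow a library like Deep Fast Vision
# automates; plain Keras, not Deep Fast Vision's API.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load a pretrained backbone without its classification head.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained weights for transfer learning

# Attach a small trainable head for the new task (2 classes assumed).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Stream images from a class-per-folder directory (hypothetical path).
train = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), label_mode="categorical")
model.fit(train, epochs=5)
```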
deeplake
- Creation of the ApostropheCMS Documentation Chatbot
Finally, we stored these vectors in our chosen database: Activeloop's Deep Lake. This database is open source, something near and dear to our own open-source hearts. We will cover some additional details in a later section, but it is specifically designed to handle vector data and perform efficient similarity searches, which is crucial for quick and accurate retrieval during the RAG process.
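The post doesn't include its code, but the store-and-search step it describes maps onto the LangChain Deep Lake integration roughly as follows (a sketch against the classic `langchain` 0.0.x API; the dataset path, chunks, and query are placeholders):

```python
# Minimal sketch: embed document chunks into Deep Lake, then run the
# similarity search used at retrieval time in a RAG pipeline.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

embeddings = OpenAIEmbeddings()  # needs OPENAI_API_KEY in the environment

# Write the chunks and their embedding vectors into a Deep Lake dataset.
db = DeepLake.from_texts(
    ["ApostropheCMS supports in-context editing.",
     "Pieces are reusable content types."],      # placeholder doc chunks
    embeddings,
    dataset_path="hub://<org>/apostrophe-docs",  # hypothetical path
)

# Retrieval is a vector similarity search over the stored embeddings.
results = db.similarity_search("How do I edit content in context?", k=4)
print(results[0].page_content)
```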
- FLaNK AI Weekly 25 March 2025
- Qdrant, the Vector Search Database, raised $28M in a Series A round
I think Activeloop (YC) is too: https://github.com/activeloopai/deeplake/
- [P] I built a Chatbot to talk with any Github Repo. 🪄
This repository contains two Python scripts that demonstrate how to create a chatbot using Streamlit, OpenAI GPT-3.5-turbo, and Activeloop's Deep Lake. The chatbot searches a dataset stored in Deep Lake to find relevant information and generates responses based on the user's input.
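The repo's scripts aren't reproduced here, but the architecture it describes (Streamlit UI, Deep Lake retrieval, GPT-3.5-turbo generation) can be wired together with LangChain along these lines (a sketch, not the repo's actual code; the dataset path is a placeholder and the chain choice is an assumption):

```python
# Sketch of a Streamlit chatbot that answers questions from a Deep Lake
# dataset via a LangChain conversational retrieval chain.
import streamlit as st
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
from langchain.chains import ConversationalRetrievalChain

# Open an existing (pre-indexed) Deep Lake dataset read-only.
db = DeepLake(dataset_path="hub://<org>/github-repo-index",  # placeholder
              embedding_function=OpenAIEmbeddings(), read_only=True)
chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=db.as_retriever())

st.title("Chat with a GitHub repo")
if "history" not in st.session_state:
    st.session_state.history = []

question = st.text_input("Ask about the codebase:")
if question:
    result = chain({"question": question,
                    "chat_history": st.session_state.history})
    st.session_state.history.append((question, result["answer"]))
    st.write(result["answer"])
```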
- [P] Chat With Any GitHub Repo - Code Understanding with @LangChainAI & @activeloopai
Deep Lake GitHub
- [P] A 'ChatGPT Interface' to Explore Your ML Datasets -> app.activeloop.ai
- Build ChatGPT for Financial Documents with LangChain + Deep Lake
As the world generates ever larger amounts of financial data, the need for advanced tools to analyze and make sense of it has never been greater. This is where LangChain and Deep Lake come in, offering a powerful combination of technology for building a question-answering tool over financial data. After participating in a LangChain hackathon last week, I created a way to use Deep Lake, the data lake for deep learning (a package my team and I are building), with LangChain. I decided to put together a guide of sorts on how you can approach building your own question-answering tools with LangChain and Deep Lake as the data store.
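The guide itself isn't quoted here, but the pipeline it describes (load documents, split, embed into Deep Lake, query) typically looks something like this with classic LangChain (file names and the dataset path are illustrative, not from the post):

```python
# Sketch of a financial-document QA pipeline with LangChain + Deep Lake.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# Load and chunk a filing (hypothetical file).
docs = TextLoader("10k_filing.txt").load()
chunks = CharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks into a Deep Lake dataset (placeholder path).
db = DeepLake.from_documents(chunks, OpenAIEmbeddings(),
                             dataset_path="hub://<org>/financial-docs")

# Ask questions through a retrieval-augmented QA chain.
qa = RetrievalQA.from_chain_type(
    ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=db.as_retriever())
print(qa.run("What were the main risk factors reported last year?"))
```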
- Launch HN: Activeloop (YC S18) – Data lake for deep learning
Re: HF - we know them and admire their work (primarily, until very recently, focused on NLP, while we focus mostly on CV). As mentioned in the post, a large part of Deep Lake, including the Python-based dataloader and dataset format, is open source as well - https://github.com/activeloopai/deeplake.
Likewise, we curate a list of large open source datasets here -> https://datasets.activeloop.ai/docs/ml/, but our main focus isn't aggregating datasets (the focus of HF datasets); rather, it's providing people with a way to manage their data efficiently. That said, all of the 125+ public datasets we host are available in seconds with one line of code. :)
We haven't benchmarked against HF datasets in a while, but Deep Lake's dataloader is much, much faster in third-party benchmarks (see https://arxiv.org/pdf/2209.13705, and for an older version that was much slower than what we have now, see https://pasteboard.co/la3DmCUR2iFb.png). HF, to the best of my knowledge, uses Git-LFS under the hood and is not opinionated about formats, so LAION just dumps Parquet files on their storage.
While your setup would work for a few TBs, scaling to PB would be tricky, including maintaining your own infrastructure. And yep, as you said, NAS/NFS wouldn't be able to handle the scale either (especially writes with 1k workers). I am also slightly curious about your use of mmap files with compressed image/video data (zero-copy won't happen unless you decompress inside the GPU ;)), but would love to learn more from you! Re: pricing, thanks for the feedback; storage is one component and is custom-priced for PB-scale workloads.
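The "one line of code" claim above corresponds to Deep Lake's documented load API; a quick sketch (the dataset name is one of the public datasets listed at https://datasets.activeloop.ai, and the tensor names follow its documented layout):

```python
# Load a public Deep Lake dataset and stream it into PyTorch.
import deeplake

# Streams from Activeloop storage; no full download needed.
ds = deeplake.load("hub://activeloop/mnist-train")

# The open-source dataloader feeds the dataset straight into PyTorch.
loader = ds.pytorch(num_workers=2, batch_size=64, shuffle=True)
for batch in loader:
    images, labels = batch["images"], batch["labels"]
    break  # one batch is enough for this sketch
```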
- [P] Launching Deep Lake: the data lake for deep learning applications - https://activeloop.ai/
Deep Lake is fresh off the "press", so we would really appreciate your feedback here or in our community, and a star on GitHub. If you're interested in learning more, you can read the Deep Lake academic paper or the whitepaper (which talks more about our vision!).
- Researchers at Activeloop AI Introduce ‘Deep Lake,’ an Open-Source Lakehouse for Deep Learning Applications
Check out the paper and the GitHub repo.