| | imagenette | txtai |
|---|---|---|
| Mentions | 9 | 356 |
| Stars | 877 | 7,033 |
| Growth | 0.0% | 3.2% |
| Activity | 0.0 | 9.3 |
| Latest commit | over 1 year ago | 7 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
imagenette
- [P] Graph path traversal with semantic graphs
This idea isn't exclusive to text, the same can be done for images. See this example from the imagenette dataset.
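The core idea, path traversal over a similarity graph, can be sketched without any library at all. This is a toy stdlib illustration, not txtai's actual graph API: nodes are Imagenette class names, edges link "similar" items, and BFS finds a path between two of them.

```python
from collections import deque

def bfs_path(graph, start, end):
    """Find a shortest path between two nodes in an undirected similarity graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == end:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Toy similarity graph over a few Imagenette classes;
# the edges here are invented for illustration
graph = {
    "golf ball": ["parachute", "church"],
    "parachute": ["golf ball", "gas pump"],
    "gas pump": ["parachute", "church"],
    "church": ["golf ball", "gas pump"],
}

print(bfs_path(graph, "golf ball", "gas pump"))
# -> ['golf ball', 'parachute', 'gas pump']
```

In the real notebook, edges come from embedding similarity between images rather than a hand-written dictionary, but the traversal step is the same shape.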
- Ask HN: In 2022, what is the proper way to get into machine/deep learning?
FastAI has ready-to-run code that does just this. They seem to have an ImageNet package https://github.com/fastai/imagenette
- How can I download ImageNet dataset with only 20 or 30 classes?
Imagenette is a smaller subset with only 10 classes. https://github.com/fastai/imagenette
You can try this https://github.com/fastai/imagenette which is a subset of the main dataset.
- [D] How can I download ImageNet dataset with only 20 or 30 classes?
Download entire imagenet and annotations. Filter out all annotations that do not contain the classes you're interested in. Consider imagenette.
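The filtering step described above can be sketched like this. All paths here are assumptions, and real ImageNet ships per-image XML annotations, but the image file names do start with the synset (class) ID, which is enough for a simple filter:

```python
from pathlib import Path
import shutil

# Hypothetical synset IDs for the classes you want to keep
KEEP = {"n01440764", "n03000684"}  # e.g. tench, chain saw

def filter_dataset(src, dst, keep=KEEP):
    """Copy only images whose synset (class) ID is in the keep set."""
    src, dst = Path(src), Path(dst)
    copied = []
    for image in src.rglob("*.JPEG"):
        # ImageNet file names look like n01440764_10026.JPEG
        synset = image.name.split("_")[0]
        if synset in keep:
            target = dst / synset / image.name
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy(image, target)
            copied.append(target)
    return copied
```

If you only need 10 classes, skipping the full download and grabbing Imagenette directly is far cheaper.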
- [R] Dataset for research paper
You can try Imagenette, which is a 10-class subset of ImageNet but with the same number of images per class. There are also two datasets linked in the README for increased difficulty.
- ResNet from scratch - ImageNet
If you're looking for something more bite-sized, how about Imagenette?
- [D] Tiny-Imagenet original size images
Fastai has made something like that. Does this fit the bill https://github.com/fastai/imagenette?
- [R] AdasOptimizer Update: Cifar-100+MobileNetV2 Adas generalizes with Adas 15% better and 9x faster than Adam
You don't need ImageNet to verify whether it really works. Check out https://github.com/fastai/imagenette: the fastai folks have a small subset of ImageNet that comes in three variants, so test on those. If AdasOptimizer really works, you should be able to beat their results, or at least see where it stands.
txtai
- Show HN: FileKitty – Combine and label text files for LLM prompt contexts
- What contributing to Open-source is, and what it isn't
I tend to agree with this sentiment. Many junior devs and/or those in college want to contribute, then feel entitled to have a PR they worked hard on merged, often without guidance. I'm all for working with people, but projects have standards and not all ideas make sense. In many cases, especially with commercial open source, the project is the base of a company's identity. So it's not just a place for drive-by ideas to pad a resume or finish a school project.
For those who do want to do this, I'd recommend writing an issue and/or reaching out to the developers to engage in a dialogue. This takes work but it will increase the likelihood of a PR being merged.
Disclaimer: I'm the primary developer of txtai (https://github.com/neuml/txtai), an open-source vector database + RAG framework
- Build knowledge graphs with LLM-driven entity extraction
txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.
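To make "embeddings database for semantic search" concrete, here is a toy stdlib sketch of the concept. This is not txtai's actual API: txtai uses transformer embeddings and an approximate-nearest-neighbor index, while this example fakes "embeddings" with bag-of-words counts over a tiny invented vocabulary.

```python
import math

def embed(text):
    """Toy 'embedding': bag-of-words counts over a tiny fixed vocabulary."""
    vocab = ["semantic", "search", "graph", "llm", "workflow"]
    words = text.lower().split()
    return [words.count(term) for term in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

documents = [
    "semantic search over documents",
    "llm workflow orchestration",
    "graph networks and traversal",
]
# "Indexing" = storing each document alongside its vector
index = [(doc, embed(doc)) for doc in documents]

def search(query, limit=1):
    """Return the documents whose vectors are closest to the query vector."""
    scored = sorted(index, key=lambda item: cosine(embed(query), item[1]), reverse=True)
    return [doc for doc, _ in scored[:limit]]

print(search("semantic search"))
# -> ['semantic search over documents']
```

The real library replaces both `embed` and the linear scan with trained models and a vector index, but the index-then-rank-by-similarity flow is the same.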
- Bootstrap or VC?
Bootstrapping only works if you have the runway to do it and you don't feel the need to grow fast.
With NeuML (https://neuml.com), I've gone the bootstrapping route. I've been able to build a fairly successful open source project (txtai, 6K stars: https://github.com/neuml/txtai) and a revenue-positive company. It's a "live within your means" strategy.
VC funding can have a snowball effect where you need more and more. Then you're in the loop of needing funding rounds to survive. The hope is someday you're acquired or start turning a profit.
I would say both have their pros and cons. Not all ideas have the luxury of time.
- txtai: An embeddings database for semantic search, graph networks and RAG
- Ask HN: What happened to startups, why is everything so polished?
I agree that in many cases people are puffing up their feathers to try to be something they're not (at least not yet). Some believe in the fake-it-until-you-make-it mentality.
With NeuML (https://neuml.com), the website is a simple HTML page. On social media, I'm honest about what NeuML is, that I'm in my 40s with a family and not striving to be the next Steve Jobs. I've been able to build a fairly successful open source project (txtai, 6K stars: https://github.com/neuml/txtai) and a revenue-positive company. For me, authenticity and being genuine are most important. I would say that being genuine has been far more of an asset than a liability.
- Are we at peak vector database?
I'll add txtai (https://github.com/neuml/txtai) to the list.
There is still plenty of room for innovation in this space. Just need to focus on the right projects that are innovating and not the ones (re)working on problems solved in 2020/2021.
- Txtai: An all-in-one embeddings database for semantic search and LLM workflows
- Generate knowledge with Semantic Graphs and RAG
txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.
- Show HN: Open-source Rule-based PDF parser for RAG
Nice project! I've long used Tika for document parsing given its maturity and the wide range of formats it supports. The XHTML output helps with chunking documents for RAG.
Here are a couple of examples:
- https://neuml.hashnode.dev/build-rag-pipelines-with-txtai
- https://neuml.hashnode.dev/extract-text-from-documents
Disclaimer: I'm the primary author of txtai (https://github.com/neuml/txtai).
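The chunking point above can be sketched with the standard library alone. This is a simplification: Tika's XHTML output carries more structure than bare paragraph tags, but splitting on `<p>` elements is the basic move.

```python
from html.parser import HTMLParser

class ParagraphChunker(HTMLParser):
    """Collect the text of each <p> element as one RAG chunk."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._in_p = False
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self._buffer = []

    def handle_endtag(self, tag):
        if tag == "p" and self._in_p:
            text = "".join(self._buffer).strip()
            if text:
                self.chunks.append(text)
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self._buffer.append(data)

xhtml = "<html><body><p>First paragraph.</p><p>Second paragraph.</p></body></html>"
chunker = ParagraphChunker()
chunker.feed(xhtml)
print(chunker.chunks)
# -> ['First paragraph.', 'Second paragraph.']
```

In practice you would feed this the XHTML that Tika returns for a document and hand each chunk (or a merged window of chunks) to the embedding step.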
What are some alternatives?
tiny-imagenet
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
pytorch-optimizer - torch-optimizer -- collection of optimizers for Pytorch
tika-python - Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.
DemonRangerOptimizer - Quasi Hyperbolic Rectified DEMON Adam/Amsgrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging and decorrelated Weight Decay
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
AdasOptimizer - ADAS is short for Adaptive Step Size; unlike optimizers that just normalize the derivative, it fine-tunes the step size, making step-size scheduling obsolete and achieving state-of-the-art training performance.
faiss - A library for efficient similarity search and clustering of dense vectors.
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
paperai - Semantic search and workflows for medical/scientific papers
Milvus - A cloud-native vector database, storage for next generation AI applications
llmsherpa - Developer APIs to Accelerate LLM Projects