big_vision vs Queryable

| | big_vision | Queryable |
|---|---|---|
| Mentions | 5 | 5 |
| Stars | 1,746 | 2,439 |
| Growth | 14.6% | - |
| Activity | 7.1 | 7.9 |
| Last commit | 5 days ago | about 1 month ago |
| Language | Jupyter Notebook | Swift |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
big_vision
-
I accidentally built a meme search engine
I think this is based on Google Research: https://github.com/google-research/big_vision
-
Show HN: I made a Pinterest clone using SigLIP image embeddings
The vision training models are available here: https://github.com/google-research/big_vision/tree/main, which, I am assuming based on the research paper, is what was used for the project.
-
[D] What are the strongest plain baselines for Vision Transformers on ImageNet?
Found relevant code at https://github.com/google-research/big_vision
-
[P] Simple ViT Implementation in Flax
Official Github repository: https://github.com/google-research/big_vision
-
Open-Source Simple-ViT Implementation
An open-source implementation of the "Better plain ViT baselines for ImageNet-1k" research paper in Google's JAX and Flax.
An update from some of the same authors of the original paper proposes simplifications to ViT that allow it to train faster and better.
These simplifications include a 2D sinusoidal positional embedding, global average pooling (no CLS token), no dropout, a batch size of 1024 rather than 4096, and the use of RandAugment and MixUp augmentations (a sketch of the positional embedding appears after the links below). They also show that a simple linear classifier at the end is not significantly worse than the original MLP head.
Simple ViT Research Paper: https://arxiv.org/abs/2205.01580
Official Github repository: https://github.com/google-research/big_vision
Developer updates can be found on: https://twitter.com/EnricoShippole
In collaboration with Dr. Phil 'Lucid' Wang: https://github.com/lucidrains
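To make the 2D sinusoidal positional embedding mentioned above concrete, here is a minimal JAX sketch of the idea. The function name, signature, and defaults are illustrative assumptions loosely following the Simple ViT recipe, not big_vision's exact code:

```python
# Hedged sketch of a fixed 2D sin-cos positional embedding for ViT patches.
# Names and defaults are illustrative, not big_vision's exact implementation.
import jax.numpy as jnp

def posemb_sincos_2d(h, w, dim, temperature=10_000.0, dtype=jnp.float32):
    """Non-learned position embedding of shape (h*w, dim) for an h-by-w patch grid."""
    assert dim % 4 == 0, "dim must be divisible by 4 (sin/cos for each of y and x)"
    y, x = jnp.mgrid[:h, :w]                       # patch-grid coordinates
    omega = jnp.arange(dim // 4) / (dim // 4 - 1)  # exponents spread over [0, 1]
    omega = 1.0 / (temperature ** omega)           # frequency bands
    y = y.flatten()[:, None] * omega[None, :]      # (h*w, dim/4)
    x = x.flatten()[:, None] * omega[None, :]
    pe = jnp.concatenate([jnp.sin(x), jnp.cos(x), jnp.sin(y), jnp.cos(y)], axis=1)
    return pe.astype(dtype)

# e.g. a 224x224 image with 16x16 patches and model width 768:
print(posemb_sincos_2d(14, 14, 768).shape)  # (196, 768)
```

Because the embedding is fixed rather than learned, it adds no parameters; combined with global average pooling in place of a CLS token, the "plain" ViT stays very simple.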
Queryable
-
I accidentally built a meme search engine
You might be interested in this, https://github.com/mazzzystar/Queryable, https://queryable.app/
I run it on my iPhone.
Native app. Doesn't require a network connection (great for privacy).
-
Meta's Segment Anything written with C++ / GGML
I think you would want to use something like CLIP embeddings for image search.
Really enjoyed using this app for iOS: https://github.com/mazzzystar/Queryable
-
Shortcuts?
This project is open source, so maybe someone will help implement it in the future. :)
-
[P] I open sourced Queryable - a CLIP-based photo search app (SwiftUI)
Many Americans distrust Chinese developers, fearing that their photo album privacy would be violated, and are therefore reluctant to use the product. I often receive emails from some developers asking about technical details. Now that it's free, why not make the source code available too? The link is: https://github.com/mazzzystar/Queryable.
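To make the retrieval step behind such an app concrete: CLIP-based photo search precomputes one image embedding per photo, embeds the text query at search time, and ranks photos by cosine similarity. Queryable itself is a Swift app and its internals are not shown here; the following is only a toy JAX sketch of that ranking step, with random arrays standing in for real CLIP embeddings:

```python
# Toy sketch of CLIP-style retrieval: rank photo embeddings by cosine
# similarity to a text-query embedding. Real CLIP embeddings are assumed
# to be precomputed; random arrays stand in for them here.
import jax
import jax.numpy as jnp

def top_k_photos(query_emb, photo_embs, k=5):
    """query_emb: (d,), photo_embs: (n, d) -> indices of the k best matches."""
    q = query_emb / jnp.linalg.norm(query_emb)
    p = photo_embs / jnp.linalg.norm(photo_embs, axis=1, keepdims=True)
    sims = p @ q                   # cosine similarity of each photo to the query
    return jnp.argsort(-sims)[:k]  # most similar photos first

photos = jax.random.normal(jax.random.PRNGKey(0), (1000, 512))  # stand-in embeddings
query = jax.random.normal(jax.random.PRNGKey(1), (512,))
print(top_k_photos(query, photos))
```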
What are some alternatives?
clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them
natural-language-image-search - Search photos on Unsplash using natural language
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
natural-language-youtube-search - Search inside YouTube videos using natural language
Awesome-CLIP - Awesome list for research on CLIP (Contrastive Language-Image Pre-Training).
MoTIS - [NAACL 2022] Mobile Text-to-Image search powered by multimodal semantic representation models (e.g., OpenAI's CLIP)
Puddles - A native SwiftUI app architecture
Chinese-CLIP - Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
bark.cpp - Suno AI's Bark model in C/C++ for fast text-to-speech
ReduxUI - 💎 Redux like architecture for SwiftUI
sam.cpp
llm - An ecosystem of Rust libraries for working with large language models