Queryable VS big_vision

Compare Queryable vs big_vision and see their differences.

Queryable

Run OpenAI's CLIP model on iOS to search photos. (by mazzzystar)

big_vision

Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. (by google-research)
                 Queryable      big_vision
Mentions         5              5
Stars            2,424          1,560
Stars growth     -              4.4%
Activity         7.9            7.1
Last commit      18 days ago    16 days ago
Language         Swift          Jupyter Notebook
License          MIT License    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Queryable

Posts with mentions or reviews of Queryable. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-13.

big_vision

Posts with mentions or reviews of big_vision. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-13.
  • I accidentally built a meme search engine
    6 projects | news.ycombinator.com | 13 Apr 2024
    I think this is based on Google Research's https://github.com/google-research/big_vision
  • Show HN: I made a Pinterest clone using SigLIP image embeddings
    2 projects | news.ycombinator.com | 16 Feb 2024
    The vision training models are available here: https://github.com/google-research/big_vision/tree/main which, I am assuming based on the research paper, is what was used for the project.
  • [D] What are the strongest plain baselines for Vision Transformers on ImageNet?
    2 projects | /r/MachineLearning | 15 Dec 2022
    Found relevant code at https://github.com/google-research/big_vision + all code implementations here
  • [P] Simple ViT Implementation in Flax
    2 projects | /r/MachineLearning | 10 Jul 2022
    Official Github repository: https://github.com/google-research/big_vision
  • Open-Source Simple-ViT Implementation
    2 projects | news.ycombinator.com | 10 Jul 2022
    An open-source implementation of the "Better plain ViT baselines for ImageNet-1k" research paper in Google's JAX and Flax.

    An update from some of the same authors of the original paper proposes simplifications to ViT that allow it to train faster and better.

    These simplifications include 2D sinusoidal positional embeddings, global average pooling (no CLS token), no dropout, a batch size of 1024 rather than 4096, and the use of RandAugment and MixUp augmentations. They also show that a simple linear classifier head at the end is not significantly worse than the original MLP head.
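    One of the simplifications mentioned above, the fixed 2D sinusoidal positional embedding, is easy to illustrate in isolation. This is a minimal NumPy sketch (not the official big_vision code) of how such an embedding can be built for an h x w grid of patches; the function name and exact frequency schedule here are illustrative assumptions:

    ```python
    import numpy as np

    def posemb_sincos_2d(h, w, dim, temperature=10000.0):
        """Fixed 2D sin-cos positional embedding for an h x w patch grid.

        Returns an (h*w, dim) array. dim must be divisible by 4 so the
        sin/cos pairs split evenly across the y and x axes.
        """
        assert dim % 4 == 0, "embedding dim must be a multiple of 4"
        # Grid of patch coordinates, row-major (y first).
        y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        # Geometric frequency schedule, one frequency per sin/cos pair.
        omega = np.arange(dim // 4) / (dim // 4 - 1)
        omega = 1.0 / (temperature ** omega)
        y = y.reshape(-1, 1) * omega.reshape(1, -1)  # (h*w, dim/4)
        x = x.reshape(-1, 1) * omega.reshape(1, -1)  # (h*w, dim/4)
        # Interleave sin/cos features for both axes.
        return np.concatenate([np.sin(x), np.cos(x), np.sin(y), np.cos(y)], axis=1)

    # ViT-B/16 on a 224x224 image: 14x14 patches, 768-dim embeddings.
    emb = posemb_sincos_2d(14, 14, 768)
    ```

    Because the embedding is a fixed function of patch position rather than a learned parameter, it removes one learned tensor from the model and is computed once at construction time.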

    Simple ViT Research Paper: https://arxiv.org/abs/2205.01580

    Official Github repository: https://github.com/google-research/big_vision

    Developer updates can be found on: https://twitter.com/EnricoShippole

    In collaboration with Dr. Phil 'Lucid' Wang: https://github.com/lucidrains

What are some alternatives?

When comparing Queryable and big_vision you can also consider the following projects:

clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them

natural-language-image-search - Search photos on Unsplash using natural language

aphantasia - CLIP + FFT/DWT/RGB = text to image/video

natural-language-youtube-search - Search inside YouTube videos using natural language

Awesome-CLIP - Awesome list for research on CLIP (Contrastive Language-Image Pre-Training).

MoTIS - [NAACL 2022] Mobile text-to-image search powered by multimodal semantic representation models (e.g., OpenAI's CLIP)

Puddles - A native SwiftUI app architecture

Chinese-CLIP - Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.

bark.cpp - Port of Suno AI's Bark in C/C++ for fast inference

ReduxUI - 💎 Redux like architecture for SwiftUI

sam.cpp

llm - An ecosystem of Rust libraries for working with large language models