clip-as-service vs rclip-server
| | clip-as-service | rclip-server |
|---|---|---|
| Mentions | 15 | 12 |
| Stars | 12,181 | 30 |
| Growth | 0.6% | - |
| Activity | 5.2 | 0.0 |
| Latest commit | 3 months ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clip-as-service
- Search for anything ==> Immich fails to download textual.onnx
- I'm going insane trying to train large datasets for poses, any input would be greatly appreciated. I've been stuck for days
Training models with limited images can lead to overfitting, so try using a set of images with different poses. You might also try flipping or otherwise augmenting the images to help the model see different poses. You could also try CLIP-as-service, but keep in mind that pre-trained models aren't always the best solution. My $0.02.
- [D] Want to Search Inside Videos Like a Pro?
Imagine an AI-powered grep command, one that could process a film and find segments matching a text query. With CLIP-as-service, you can do that. Here is the repo link: https://github.com/jina-ai/clip-as-service.
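The idea reduces to nearest-neighbor search over frame embeddings. A minimal sketch of that matching step, using random numpy vectors as stand-ins for real CLIP embeddings (in practice one vector per sampled frame plus one for the text query, all produced by a CLIP service):

```python
import numpy as np

def cosine_sim(query, frames):
    """Cosine similarity between a query vector and each row of a matrix."""
    q = query / np.linalg.norm(query)
    f = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    return f @ q

# Stand-in embeddings: in practice these come from CLIP's encoders.
rng = np.random.default_rng(0)
frame_embeddings = rng.normal(size=(100, 512))   # 100 sampled frames, 512-d each
query_embedding = frame_embeddings[42] + 0.01 * rng.normal(size=512)

scores = cosine_sim(query_embedding, frame_embeddings)
best_frame = int(np.argmax(scores))              # index of best-matching frame
print(best_frame)  # 42: the frame whose (slightly noised) embedding we queried
```

The frame timestamps corresponding to the top-scoring indices are the video segments that "match" the text.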
- Image Similarity Score using transfer learning
- Best models for sentence similarity with good benefit-cost ratio?
you could try Jina.ai's CLIP-as-a-Service: https://github.com/jina-ai/clip-as-service
- Google launched multisearch last week, here's how you can create your own multisearch
Multisearch allows people to search with both text and images. With the open-source project CLIP-as-service, you can use CLIP (a deep learning model by OpenAI) to do the same. Ask me if you have any questions.
- Natural text to image search (without captions), using CLIP model. Notebook in comment.
Are you scraping these images or using a dataset? Do share the link, I'd love to play around with it. I'd also love to hear your feedback on clip-as-service (what I use in my example).
- Open-Source python package to find relevant images for a sentence
Built CLIP-as-service, an open-source library for creating embeddings of images and text using CLIP. These embeddings can be used to find the relevant images for any sentence. Note: you don't need to caption the images for this to work, and matching is based not just on objects in the image but on an overall understanding built by the CLIP neural network.
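Once sentence and image embeddings live in the same space, "find relevant images for a sentence" is just a ranking by cosine similarity. A sketch with stand-in numpy vectors (`top_k` is an illustrative helper, not part of the clip-as-service API; real embeddings would come from CLIP's text and image encoders):

```python
import numpy as np

def top_k(sentence_emb, image_embs, filenames, k=3):
    """Return the k image filenames most similar to a sentence embedding."""
    q = sentence_emb / np.linalg.norm(sentence_emb)
    m = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    order = np.argsort(m @ q)[::-1]          # highest cosine similarity first
    return [filenames[i] for i in order[:k]]

# Stand-in embeddings; no captions involved, only vectors.
rng = np.random.default_rng(1)
image_embs = rng.normal(size=(5, 512))
filenames = [f"img_{i}.jpg" for i in range(5)]
sentence_emb = image_embs[3] + 0.01 * rng.normal(size=512)  # sentence "describing" img_3

print(top_k(sentence_emb, image_embs, filenames))  # img_3.jpg ranks first
```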
- Built an ML library that can describe an image or find relevant images for a sentence
Built [CLIP-as-service](https://github.com/jina-ai/clip-as-service), an open-source library to create embeddings of images and text using CLIP.
- [P] Clip-as-service to embed images and sentences into fixed-length vectors with CLIP
Excited to share my new project CLIP-as-service, a high-scalability service for embedding images and text. It serves CLIP models via ONNX Runtime and PyTorch JIT at 800 QPS.
rclip-server
- Apple - Fruit = X? rclip update: query combos and snapcraft, homebrew, and pypi releases
This query-combination feature was initially introduced by GitHub user ramayer (/u/rmxz on Reddit). Thank you, /u/rmxz, for this amazing contribution! /u/rmxz also built rclip-server, an online web interface to an rclip database where you can play with such expressions: http://image-search.0ape.com/ (MIT-licensed source code: https://github.com/ramayer/rclip-server).
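"Apple - Fruit = X"-style queries boil down to vector arithmetic on CLIP embeddings: sum the positive terms, subtract the negatives, renormalize, and search with the result. A toy sketch with 3-d stand-in vectors (`combine` is a hypothetical helper for illustration, not rclip's actual API):

```python
import numpy as np

def combine(positives, negatives):
    """Sum positive embeddings, subtract negatives, renormalize to unit length."""
    v = np.sum(positives, axis=0) - np.sum(negatives, axis=0)
    return v / np.linalg.norm(v)

# Toy stand-ins: 'apple' sits between a 'fruit' direction and a 'company' direction.
fruit   = np.array([1.0, 0.0, 0.0])
company = np.array([0.0, 1.0, 0.0])
apple   = (fruit + company) / np.linalg.norm(fruit + company)

x = combine([apple], [fruit])        # "apple - fruit = ?"
print(x @ company > x @ fruit)       # True: the residual leans toward 'company'
```

Searching the image index with `x` instead of the plain `apple` embedding surfaces the non-fruit sense of the word.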
- Identifying Google Maps/Earth locations based on image dataset.
This is almost all based on manipulating OpenAI CLIP embeddings: directly comparing the embeddings and tweaking them with text prompts. Source code is on GitHub here for the back end and here for the front end.
- Can Dall-E output be interpolated?
Source code for that Wikimedia CLIP search can be found here. It does the embedding math here and here.
- Skeptical of this finding: "DALLE-2 has a secret language." Any thoughts?
Source for that CLIP-based search engine and Wikimedia indexer is on GitHub here.
- Natural text to image search (without captions), using CLIP model. Notebook in comment.
Sorry for the late reply. All my source is available in that git repo: https://github.com/ramayer/rclip-server.
- Can we find analogies with CLIP embeddings?
Source code here.
- ML at home?
Yes - to organize my own pictures with this project.
- Hi. Is there an AI that finds images based on a word in the title?
Source here: https://github.com/ramayer/rclip-server
- Weaviate Wikidata Vector Search Engine
I've been having fun with vector search of Wikimedia Images (live demo here).
- any free / public AI photo search apps with object recognition?
What are some alternatives?
BERTopic - Leveraging BERT and c-TF-IDF to create easily interpretable topics.
rclip - AI-Powered Command-Line Photo Search Tool
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
terrapattern - Enabling journalists, citizen scientists, humanitarian workers and others to detect “patterns of interest” in satellite imagery.
DeBERTa - The implementation of DeBERTa
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
electra - ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
OpenPrompt - An Open-Source Framework for Prompt-Learning.
ludwig - Low-code framework for building custom LLMs, neural networks, and other AI models
scibert - A BERT model for scientific text.
tensorflow-open_nsfw - Tensorflow Implementation of Yahoo's Open NSFW Model