| | VehicleFinder-CTIM | clip-as-service |
|---|---|---|
| Mentions | 1 | 15 |
| Stars | 5 | 12,528 |
| Growth | - | 0.5% |
| Activity | 3.1 | 5.2 |
| Latest commit | over 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
VehicleFinder-CTIM
-
FindVehicle and VehicleFinder: A NER dataset for natural language-based vehicle retrieval and a keyword-based cross-modal vehicle retrieval system
Natural language (NL) based vehicle retrieval is a task that aims to retrieve the vehicle most consistent with a given NL query from among all candidate vehicles. Because NL queries can be easily obtained, such a task has a promising prospect for building interactive intelligent traffic systems (ITS). Current solutions mainly focus on extracting both text and image features and mapping them into the same latent space to compare similarity. However, existing methods usually use dependency analysis or semantic role-labelling techniques to find keywords related to vehicle attributes. These techniques may require a lot of pre-processing and post-processing work, and can also extract the wrong keyword when the NL query is complex. To tackle these problems and simplify the pipeline, we borrow the idea from named entity recognition (NER) and construct FindVehicle, a NER dataset in the traffic domain. It has 42.3k labelled NL descriptions of vehicle tracks, containing information such as the location, orientation, type and colour of the vehicle. FindVehicle also adopts both overlapping entities and fine-grained entities to meet further requirements. To verify its effectiveness, we propose a baseline NL-based vehicle retrieval model called VehicleFinder. Our experiments show that by using text encoders pre-trained on FindVehicle, VehicleFinder achieves 87.7% precision and 89.4% recall when retrieving a target vehicle by text command on our homemade dataset based on UA-DETRAC. The time cost of VehicleFinder is 279.35 ms on one ARM v8.2 CPU and 93.72 ms on one RTX A4000 GPU, which is much faster than Transformer-based systems. The dataset is open-source at https://github.com/GuanRunwei/FindVehicle, and the implementation can be found at https://github.com/GuanRunwei/VehicleFinder-CTIM.
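To make the keyword-based retrieval idea concrete, here is a toy sketch of the matching step. VehicleFinder itself uses text encoders pre-trained on FindVehicle to extract entities; the dictionary lookup below is a hypothetical stand-in for that NER model, and all names and attribute sets are illustrative only.

```python
# Toy keyword-based vehicle retrieval. The dictionary lookup stands in for
# a trained NER model; attribute vocabularies are made up for illustration.

COLOURS = {"red", "blue", "white", "black"}
TYPES = {"sedan", "suv", "truck", "van"}

def extract_entities(query: str) -> dict:
    """Pull colour/type keywords out of an NL query (stand-in for NER)."""
    tokens = query.lower().split()
    return {
        "colour": next((t for t in tokens if t in COLOURS), None),
        "type": next((t for t in tokens if t in TYPES), None),
    }

def retrieve(query: str, candidates: list) -> dict:
    """Return the candidate vehicle matching the most query attributes."""
    wanted = extract_entities(query)
    def score(v):
        return sum(1 for k, val in wanted.items() if val and v.get(k) == val)
    return max(candidates, key=score)

vehicles = [
    {"id": 1, "colour": "red", "type": "sedan"},
    {"id": 2, "colour": "white", "type": "suv"},
]
best = retrieve("find the white suv near the junction", vehicles)
print(best["id"])  # 2
```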
clip-as-service
- Search for anything ==> Immich fails to download textual.onnx
-
I'm going insane trying to train large datasets for poses, any input would be greatly appreciated I've been stuck for days
I think training models with a limited number of images can lead to overfitting, so you could try using a set of images with different poses. You might also want to try flipping the images or other augmentations to help the model see different poses. You could also look at CLIP-as-service, but just know that pre-trained models aren't always the best solution. My $.02.
-
[D]Want to Search Inside Videos Like a Pro?
Imagine an AI-powered grep command, one that could process a film and find the segments matching a text query. With CLIP-as-service, you can do that. Here is the repo link: https://github.com/jina-ai/clip-as-service.
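The "AI-powered grep" amounts to ranking per-frame embeddings against a text embedding. In a real setup both would come from a CLIP encoder (e.g. served by clip-as-service); the vectors below are made-up toy values, so only the ranking step is real.

```python
# Rank video frames by cosine similarity to a text query embedding.
# Embeddings here are toy vectors; a CLIP encoder would produce real ones.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

frame_embeddings = {
    "00:12": [0.9, 0.1, 0.0],
    "01:05": [0.1, 0.9, 0.2],
    "02:30": [0.2, 0.8, 0.1],
}
query_embedding = [0.1, 1.0, 0.1]  # e.g. encoding of the search text

ranked = sorted(
    frame_embeddings,
    key=lambda t: cosine(frame_embeddings[t], query_embedding),
    reverse=True,
)
print(ranked[0])  # timestamp of the best-matching segment
```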
- Image Similarity Score using transfer learning
-
Best models for sentence similarity with good benefit-cost ratio?
you could try Jina.ai's CLIP-as-a-Service: https://github.com/jina-ai/clip-as-service
-
Google launched multisearch last week, here's how you can create your own multisearch
Multisearch allows people to search with both text and images. With the open-source project CLIP-as-service, you can use CLIP (a deep learning model by OpenAI) to do the same. Ask me if you have any questions!
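One simple way to build a multisearch-style query is to fuse the text embedding and the image embedding into a single query vector. Averaging two normalized CLIP embeddings is just one possible fusion strategy, shown here with toy vectors; real embeddings would come from a CLIP encoder such as clip-as-service.

```python
# Fuse a text embedding and an image embedding into one query vector by
# normalizing, averaging, and re-normalizing. Vectors are toy values.

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def fuse(text_emb, image_emb):
    """Average two unit vectors into one multimodal query vector."""
    t, i = normalize(text_emb), normalize(image_emb)
    return normalize([(a + b) / 2 for a, b in zip(t, i)])

query = fuse([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print([round(x, 3) for x in query])  # [0.707, 0.707, 0.0]
```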
-
Natural text to image search(without captions), using CLIP model. Notebook in comment.
Are you scraping these images or using a dataset? Do share the link; I'd love to play around with it. I'd also love to hear your feedback on clip-as-service (what I use in my example).
-
Open-Source python package to find relevant images for a sentence
Built CLIP-as-service, an open-source library to create embeddings of images and text using CLIP. These embeddings can be used to find the relevant images for any sentence. Note: you don't need to caption the images for this to work, and it is not limited to objects in the image — it reflects the overall understanding built by the CLIP neural network.
-
Built an ML library that can describe an image or find relevant images for a sentence
Built [CLIP-as-service](https://github.com/jina-ai/clip-as-service), an open-source library to create embeddings of images and text using CLIP.
-
[P] Clip-as-service to embed images and sentences into fixed-length vectors with CLIP
Excited to share my new project CLIP-as-service, a high-scalability service for embedding images and text. It serves CLIP models with ONNX Runtime and PyTorch JIT at 800 QPS.
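A throughput figure like 800 QPS can be sanity-checked from the client side by timing batched requests. This sketch times a hypothetical stand-in `encode()` function; against a real deployment you would replace it with an actual client call (e.g. via the clip-as-service client).

```python
# Client-side throughput (QPS) measurement. encode() is a stand-in for a
# real model request; swap it for an actual client call to benchmark a
# deployed service.
import time

def encode(batch):
    """Stand-in for a model call; returns one dummy vector per input."""
    return [[0.0] * 512 for _ in batch]

def measure_qps(batch, n_requests=100):
    """Send n_requests batches and return queries handled per second."""
    start = time.perf_counter()
    for _ in range(n_requests):
        encode(batch)
    elapsed = time.perf_counter() - start
    return n_requests * len(batch) / elapsed

qps = measure_qps(["hello world"] * 8)
print(f"{qps:.0f} queries/sec")
```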
What are some alternatives?
FindVehicle - FindVehicle: A NER dataset in transportation to extract keywords describing vehicles on the road
DeBERTa - The implementation of DeBERTa
x-clip - A concise but complete implementation of CLIP with various experimental improvements from recent papers
BERTopic - Leveraging BERT and c-TF-IDF to create easily interpretable topics.
Macaw-LLM - Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
ludwig - Low-code framework for building custom LLMs, neural networks, and other AI models
prismer - The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
tensorflow-open_nsfw - Tensorflow Implementation of Yahoo's Open NSFW Model
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
scibert - A BERT model for scientific text.
rclip - AI-Powered Command-Line Photo Search Tool
electra - ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators