Top 12 Python sentence-transformer Projects
-
beir
A Heterogeneous Benchmark for Information Retrieval. Easy to use: evaluate your models across 15+ diverse IR datasets.
-
StoryToolkitAI
An editing tool that uses AI to transcribe, understand content, and search for anything in your footage; integrated with ChatGPT and other AI models.
-
DiffCSE
Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings"
-
Python-Schema-Matching
A Python tool using XGBoost and sentence-transformers to perform schema matching on tables.
-
Llama-2-GGML-CSV-Chatbot
The Llama-2-GGML-CSV-Chatbot is a conversational tool built on the Llama-2 7B language model. It supports seamless multi-turn conversations grounded in uploaded CSV data.
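The core pattern behind a CSV-grounded chatbot can be sketched in a few lines: flatten each row into a text passage, pick the rows most relevant to the user's question, and hand them to the language model as context on every turn. The sketch below is not the project's actual code; it uses plain keyword overlap as a stand-in for Llama-2 plus real embeddings, and all function names and sample data are hypothetical.

```python
import csv
import io
import re

def rows_to_passages(csv_text):
    # Flatten each CSV row into a "column: value" passage the model can read.
    reader = csv.DictReader(io.StringIO(csv_text))
    return [", ".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

def _words(text):
    # Lowercased alphanumeric tokens, ignoring punctuation.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def top_rows(passages, question, k=2):
    # Score rows by word overlap with the question; a real chatbot would
    # rank by embedding similarity instead.
    q = _words(question)
    return sorted(passages, key=lambda p: -len(q & _words(p)))[:k]
```

The selected passages would then be prepended to the chat prompt on each turn, which is what keeps the conversation grounded in the uploaded CSV.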
-
balena
BALanced Execution through Natural Activation: a human-computer interaction methodology for code running.
Project mention: [D] Is it better to create a different set of Doc2Vec embeddings for each group in my dataset, rather than generating embeddings for the entire dataset? | /r/MachineLearning | 2023-10-28
I'm using Top2Vec with Doc2Vec embeddings to find topics in a dataset of ~4000 social media posts. This dataset has three groups:
RAG is very difficult to do right. I am experimenting with various RAG projects from [1]. The main problems are:
- Chunking can interfere with context boundaries
- Content vectors can differ vastly from question vectors; to handle this you have to use hypothetical embeddings (generate artificial questions and store them)
- Instead of saving just one embedding per text chunk, you should store several (text chunk, hypothetical question embeddings, metadata)
- RAG will miserably fail with requests like "summarize the whole document"
- To my knowledge, OpenAI embeddings aren't performing well; use an embedding model that is optimized for question answering or information retrieval and supports multiple languages. Also look into instructor embeddings and the MTEB benchmark: https://github.com/embeddings-benchmark/mteb
1 https://github.com/underlines/awesome-marketing-datascience/...
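The multi-embedding idea from the list above can be sketched as follows: each chunk is stored once but indexed under several vectors (its own embedding plus one per hypothetical question), and retrieval takes the best-scoring vector per chunk. The embed() below is a deliberately crude stand-in (character-bigram hashing) so the sketch stays self-contained; in practice you would call a sentence-transformers model instead. All names and data are illustrative.

```python
import math
from collections import defaultdict

def embed(text):
    # Placeholder embedding: character-bigram hashing into 64 dims,
    # L2-normalized. Swap in a real sentence-embedding model in practice.
    vec = [0.0] * 64
    for a, b in zip(text.lower(), text.lower()[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(u, v):
    # Vectors are unit-length, so the dot product is cosine similarity.
    return sum(a * b for a, b in zip(u, v))

class ChunkIndex:
    def __init__(self):
        self.entries = []  # (embedding, chunk_id) — several per chunk
        self.chunks = {}   # chunk_id -> (text, metadata)

    def add(self, chunk_id, text, questions, metadata=None):
        self.chunks[chunk_id] = (text, metadata or {})
        # One embedding for the chunk itself plus one per hypothetical question.
        for t in [text, *questions]:
            self.entries.append((embed(t), chunk_id))

    def search(self, query, k=1):
        q = embed(query)
        best = defaultdict(float)
        for vec, cid in self.entries:
            # A chunk's score is its best-matching vector.
            best[cid] = max(best[cid], cosine(q, vec))
        ranked = sorted(best.items(), key=lambda kv: -kv[1])
        return [(cid, self.chunks[cid][0]) for cid, _ in ranked[:k]]
```

A question phrased like one of the stored hypothetical questions now retrieves the right chunk even when it shares few words with the chunk text itself.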
The BEIR project might be what you're looking for: https://github.com/beir-cellar/beir/wiki/Leaderboard
There's StoryToolkitAI; it's free (but requires DaVinci Resolve Studio). It can transcribe and generate subtitles quite accurately, and it has a feature to translate subtitles to English. I haven't tried the translate feature yet, but I've been using this tool a lot for my work. It also supports more languages than Resolve's built-in transcription and auto-subtitle tool.
pip install git+https://github.com/jacobmarks/emoji_search.git
huggingface.co/Llama-2-GGML-CSV-Chatbot
By “this”, I mean an open-source semantic emoji search engine, with both UI-centric and CLI versions. The Python CLI library can be found here, and the UI-centric version can be found here. You can also play around with a hosted (also free) version of the UI emoji search engine online here.
Project mention: BALanced Execution Through Natural Activation: A HCI Methodology | news.ycombinator.com | 2023-12-31
Project mention: Tranformer-based Denoising AutoEncoder for ST Unsupervised pre-training | news.ycombinator.com | 2024-02-04
A new PyPI package for training sentence embedding models in just 2 lines.
Producing good sentence embeddings usually requires a substantial volume of labeled data, but in many fields labeled data is rarely available and costly to obtain. This project uses an unsupervised approach based on the pre-trained Transformer-based Sequential Denoising Auto-Encoder (TSDAE), introduced by the Ubiquitous Knowledge Processing Lab in Darmstadt, which can reach 93.1% of the performance of in-domain supervised methods.
TSDAE consists of two components: an encoder and a decoder. During training, TSDAE encodes corrupted sentences into fixed-size vectors, and the decoder must reconstruct the original sentences from these sentence embeddings alone. For good reconstruction quality, the encoder's sentence embeddings must capture the semantics well. At inference time, only the encoder is used to produce sentence embeddings.
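The corruption step described above can be sketched in isolation: TSDAE's default noise is token deletion (around 60% of tokens removed), and the encoder must still produce an embedding from which the decoder can recover the full sentence. Below is a simplified, self-contained version of that deletion noise; the real implementation in sentence-transformers (DenoisingAutoEncoderDataset) differs in its details.

```python
import random

def delete_noise(sentence, del_ratio=0.6, seed=None):
    # TSDAE-style corruption: independently drop each token with
    # probability del_ratio, always keeping at least one token.
    rng = random.Random(seed)
    tokens = sentence.split()
    kept = [t for t in tokens if rng.random() > del_ratio]
    if not kept:
        kept = [rng.choice(tokens)]
    return " ".join(kept)
```

During training, the pair (delete_noise(s), s) is what the encoder-decoder sees; at inference only the encoder runs, on clean sentences.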
GitHub : https://github.com/louisbrulenaudet/tsdae
Installation :
Python sentence-transformers related posts
-
Do you Know! Llama ?
-
BEIR: A Heterogeneous Benchmark for Information Retrieval
-
Benefits of hybrid search
-
Ideas on how to improve classification and scoring using Mean Pooled Sentence Embeddings
-
SetFit (Sentence Transformer Fine-tuning) - Fewshot Learning without prompts [D]
-
SetFit – Efficient Few-Shot Learning with Sentence Transformers
-
NaLCoS: Search commit messages in your repository in natural language
-
A note from our sponsor - WorkOS
workos.com | 30 Apr 2024
Index
What are some of the best open-source sentence-transformer projects in Python? This list will help you:
# | Project | Stars
---|---|---
1 | Top2Vec | 2,843 |
2 | mteb | 1,395 |
3 | beir | 1,372 |
4 | StoryToolkitAI | 577 |
5 | DiffCSE | 286 |
6 | nalcos | 53 |
7 | Python-Schema-Matching | 23 |
8 | emoji_search | 8 |
9 | Llama-2-GGML-CSV-Chatbot | 8 |
10 | emoji-search-plugin | 5 |
11 | balena | 5 |
12 | tsdae | 3 |