sentence-transformers vs paperai
| | sentence-transformers | paperai |
|---|---|---|
| Mentions | 45 | 19 |
| Stars | 13,793 | 1,196 |
| Growth | 4.5% | 3.3% |
| Activity | 9.2 | 5.9 |
| Latest commit | 2 days ago | 5 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sentence-transformers
- External vectorization
txtai is an open-source-first system. Given its own open-source roots, like-minded projects such as sentence-transformers are prioritized during development. But that doesn't mean txtai can't work with Embeddings API services.
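As a hedged illustration of that last point, here is a minimal sketch using txtai's documented "external" vectorization method; `embed_api` is a hypothetical stand-in for whatever hosted Embeddings API you call, and the random vectors are placeholders:

```python
# Minimal sketch of external vectorization in txtai; embed_api is a
# hypothetical placeholder for a hosted Embeddings API call.
import numpy as np
from txtai.embeddings import Embeddings

def embed_api(batch):
    # Replace with a real API call that returns one fixed-length
    # vector per input string.
    return np.random.rand(len(batch), 768).astype(np.float32)

# method="external" delegates vectorization to the transform function
embeddings = Embeddings({"method": "external", "transform": embed_api})
embeddings.index([(i, text, None) for i, text in enumerate(["first doc", "second doc"])])
print(embeddings.search("first doc", 1))
```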
- [D] Looking for a better multilingual embedding model
Ok great. My use case is not very specific, but rather general. I am looking for a model that can perform asymmetric semantic search for the languages I mentioned earlier (Urdu, Persian, Arabic, etc.). I have also looked into the sentence-transformers training documentation. Do you think it would be a good idea to use the XNLI dataset for fine-tuning? Or maybe you can suggest a better dataset. Furthermore, I am not sure fine-tuning is suitable for my task; because my use case is general, I could use an already-trained model.
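For context, a minimal sketch of that kind of asymmetric-style search with an off-the-shelf multilingual checkpoint. `paraphrase-multilingual-mpnet-base-v2` is one option whose training data covers Arabic, Persian and Urdu; treat the model choice as an assumption to validate on your own queries, not a recommendation from the thread:

```python
from sentence_transformers import SentenceTransformer, util

# One multilingual checkpoint among several; evaluate before committing.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

query = "What is the capital of Iran?"  # short query (asymmetric search)
passages = [
    "Tehran is the capital and largest city of Iran.",
    "Lahore is a major city in Pakistan.",
]

q = model.encode(query, convert_to_tensor=True)
p = model.encode(passages, convert_to_tensor=True)

# Returns the top-k passages ranked by cosine similarity
print(util.semantic_search(q, p, top_k=2))
```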
- Best pathway for Domain Adaptation with Sentence Transformers?
- Syntactic and Semantic surprisal using an LLM
The task you are looking for is semantic textual similarity. There are a few models and datasets out there that can do this. I'd probably start with the SemEval-2017 Task 1 task description and competition entries here, and then work outward from there (using something like Semantic Scholar or Papers With Code to find newer state-of-the-art work that cites these models if needed). For what it's worth, you might find that Sentence-BERT (SBERT) gives good vectors for cosine-similarity comparison out of the box for this task.
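A minimal sketch of the out-of-the-box SBERT approach mentioned above; `all-MiniLM-L6-v2` is assumed here only as one commonly used checkpoint:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

a = model.encode("A man is playing a guitar", convert_to_tensor=True)
b = model.encode("Someone strums an instrument", convert_to_tensor=True)

# Cosine similarity in [-1, 1]; higher means closer in meaning
print(util.cos_sim(a, b).item())
```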
- Mean pooling in BERT
Check out the sentence-transformers implementation. If I'm not missing anything, they don't exclude the CLS token when the pooling strategy is set to 'mean'.
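To make that concrete, here is a sketch of mask-weighted mean pooling in the style of sentence-transformers' Pooling module. Note the attention mask only zeroes padding, so [CLS] contributes to the mean, which is the behavior described above:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tokenizer(["Mean pooling keeps every real token"],
                  return_tensors="pt", padding=True)
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq, hidden)

# Mask is 1 for real tokens ([CLS] and [SEP] included), 0 for padding
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1)
print(sentence_embedding.shape)  # (1, 768)
```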
- I Built an AI Search Engine that can find exact timestamps for anything on Youtube using OpenAI Whisper
Break the transcript up into shorter segments and convert each segment to a 768-dimensional vector through a process known as embedding, using our second ML model: UKP Lab's BERT-based sentence-transformers model.
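A rough sketch of that second step; `all-mpnet-base-v2` is assumed here only because it outputs 768-dimensional vectors, and the original project may have used a different UKP Lab checkpoint:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")  # 768-dim output

# Whisper-style transcript segments with timestamps
segments = [
    {"start": 0.0, "end": 7.5, "text": "Welcome back to the channel."},
    {"start": 7.5, "end": 15.0, "text": "Today we test OpenAI Whisper."},
]

vectors = model.encode([s["text"] for s in segments])
print(vectors.shape)  # (2, 768); store alongside start/end for timestamp lookup
```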
- Seeking advice on improving NLP search results
Not sure what kind of texts you have, but these models have a max sequence limit of 512 tokens (roughly 350 words). If your texts are longer than that, consider splitting them into chunks, or creating a summary and taking an embedding of that. Some clustering algorithm may be the way to go here. Here's a bunch of examples. I use agglomerative clustering for my use case.
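A sketch of the chunk-then-cluster route described here, assuming a simple word-count heuristic for chunking and scikit-learn's agglomerative clustering; the placeholder text and threshold value are illustrative only:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text, size=300):
    # Rough word-based splitting to stay under the ~512-token limit
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

long_text = " ".join(["semantic search over long documents"] * 200)  # placeholder
chunks = chunk(long_text)
embeddings = model.encode(chunks)

# distance_threshold trades cluster count for cluster tightness; tune per corpus
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.0
).fit_predict(embeddings)
print(labels)
```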
- Dev Diary #12 - Finetune model
https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/data_augmentation (Augmented SBERT)
- [R] Customize size of Bio-BERT pre-trained embeddings
For a vector representation you can take the mean and then apply PCA to get the size you want, but if you have time, use sentence-transformers to train a vector representation instead.
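A sketch of that mean-then-PCA route, assuming the `dmis-lab/biobert-base-cased-v1.1` checkpoint and a synthetic corpus; note PCA needs at least as many vectors as target dimensions:

```python
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")

def embed(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (n, seq, 768)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()   # mean over tokens

corpus = [f"protein binding study number {i}" for i in range(300)]  # placeholder
vectors = embed(corpus)                                     # 768-dim per text

# Project 768 -> 256 dimensions; requires len(corpus) >= 256
small = PCA(n_components=256).fit_transform(vectors)
print(small.shape)                                          # (300, 256)
```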
- SentenceTransformer producing different sentence embedding results in Docker
paperai
- Oracle of Zotero: LLM QA of Your Research Library
Nice project!
I've spent quite a lot of time in the medical/scientific literature space. With regard to LLMs, specifically RAG, how the data is chunked is quite important. With that, I have a couple of projects that might be beneficial additions.
paperetl (https://github.com/neuml/paperetl) - supports parsing arXiv, PubMed and integrates with GROBID to handle parsing metadata and text from arbitrary papers.
paperai (https://github.com/neuml/paperai) - builds embeddings databases of medical/scientific papers. Supports LLM prompting, semantic workflows and vector search. Built with txtai (https://github.com/neuml/txtai).
While arbitrary chunking/splitting can work, I've found that integrating parsing that has knowledge of medical/scientific paper structure increases the overall accuracy and experience of downstream applications.
- Build Personal ChatGPT Using Your Data
https://github.com/neuml/paperai
Disclaimer: I am the author of both
- [P] Parse research papers into structured data
paperai | paperetl
- Show HN: Semantic search and workflows for medical/scientific papers
- Semantic search and workflows for medical/scientific papers
- Run txtai in native code
```
action: translate
input: txtai executes machine-learning workflows to transform data and build AI-powered semantic search applications.
output: txtai exécute des workflows d'apprentissage automatique pour transformer les données et construire des applications de recherche sémantique alimentées par l'IA.

action: translate
input: Traditional search systems use keywords to find data
output: Les systèmes de recherche traditionnels utilisent des mots-clés pour trouver des données

action: summary
input: https://github.com/neuml/txtai
output: txtai executes machine-learning workflows to transform data and build AI-powered semantic search applications. Semantic search applications have an understanding of natural language and identify results that have the same meaning, not necessarily the same keywords. API bindings for JavaScript, Java, Rust and Go. Cloud-native architecture scales out with container orchestration systems (e.g. Kubernetes)

action: summary
input: https://github.com/neuml/paperai
output: paperai is an AI-powered literature discovery and review engine for medical/scientific papers. Paperai was used to analyze the COVID-19 Open Research Dataset (CORD-19). paperai and NeuML have been recognized in the following articles: Cord-19 Kaggle Challenge Awards, Machine-Learning Experts Delve Into 47,000 Papers on Coronavirus Family.

real    0m22.478s
user    0m13.776s
sys     0m3.218s
```
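For reference, a hedged sketch of the same translate and summary actions driven from Python with txtai's pipelines; models download on first use, and the exact workflow configuration behind the console run above is not shown here:

```python
from txtai.pipeline import Summary, Translation

# Translate English text to French, mirroring the first two actions
translate = Translation()
print(translate("Traditional search systems use keywords to find data", "fr"))

# Summarize raw text, mirroring the summary actions
summary = Summary()
print(summary(
    "txtai executes machine-learning workflows to transform data and build "
    "AI-powered semantic search applications. Semantic search applications "
    "have an understanding of natural language and identify results that "
    "have the same meaning, not necessarily the same keywords."
))
```

Summarizing a URL, as the console run does, would additionally involve pulling the page text first; txtai ships a Textractor pipeline for that step.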
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
onnx - Open standard for machine learning interoperability
tika-python - Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
SciencePlots - Matplotlib styles for scientific plotting
Top2Vec - Top2Vec learns jointly embedded topic, document and word vectors.
faiss - A library for efficient similarity search and clustering of dense vectors.
scibert - A BERT model for scientific text.
datasets - 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
science-parse - Science Parse parses scientific papers (in PDF form) and returns them in structured form.