AnnA_Anki_neuronal_Appendix vs SimCSE
| | AnnA_Anki_neuronal_Appendix | SimCSE |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 55 | 3,242 |
| Growth | - | 2.2% |
| Activity | 8.4 | 0.0 |
| Last commit | 18 days ago | 7 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AnnA_Anki_neuronal_Appendix
- Revolution? Using machine learning to handle backlog, introducing AnnA: Anki neural network appendix
ping /u/AnKingMed /u/Glutanimate Here's the link: https://github.com/thiswillbeyourgithub/AnnA_Anki_neuronal_Appendix
- Gauging interest / seeking help for a long-term implementation of Anki alongside NLP language models to revolutionize second-language acquisition
- The Anki algorithm needs more research and development
It's currently on github and not at all finished. https://github.com/thiswillbeyourgithub/AMiMA_anki_mind_map/settings
SimCSE
- BERT-Based Clustering on a Corpus of Genre Samples Kinda Sucks. Why?
Base BERT sentence embeddings are just not good, for a couple of reasons, and there are research papers that show this. You can try SimCSE, Google's USE, or SBERT as mentioned previously and you'll get better output (see the sketch after these posts). It's an inherent flaw of base BERT that it can't produce good sentence embeddings; papers have shown you'll probably get better scores using plain GloVe embeddings than base BERT.
- State of the Art in Sentence Embeddings
To answer your question about sentence-embedding SOTA: it is not SBERT and hasn't been for a while. SimCSE officially takes the crown, since it has been presented at a conference, though according to the Papers with Code benchmark leaderboard there are other papers on arXiv, such as DCPCSE, that report higher performance on STS and similar tasks. Having tried both of these for my use case, I found SimCSE to be better, but YMMV.
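As a concrete follow-up to the SimCSE suggestion above, here is a minimal sketch of getting sentence embeddings from a pretrained SimCSE checkpoint through Hugging Face `transformers` and comparing them with cosine similarity. The checkpoint name (`princeton-nlp/sup-simcse-bert-base-uncased`) and the plain `[CLS]`-token pooling are assumptions based on the checkpoints the SimCSE repo publishes, not something stated in the posts quoted here.

```python
# Minimal sketch: sentence embeddings from a pretrained SimCSE checkpoint.
# Assumption: the princeton-nlp/sup-simcse-bert-base-uncased checkpoint on the
# Hugging Face Hub, with simple [CLS]-token pooling of the last hidden layer.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "princeton-nlp/sup-simcse-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = [
    "A man is playing a guitar.",
    "Someone is performing music on a guitar.",
    "The stock market fell sharply today.",
]

# Tokenize as one batch and embed; keep the [CLS] vector of each sentence.
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state[:, 0]

# Cosine-similarity matrix: the two guitar sentences should score clearly
# higher with each other than either does with the unrelated third sentence.
embeddings = torch.nn.functional.normalize(embeddings, dim=1)
print(embeddings @ embeddings.T)
```

The SimCSE repository also ships a small `simcse` wrapper package with `encode` and `similarity` helpers, so the manual tokenize-and-pool steps above can likely be replaced by that higher-level API if it fits your setup.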
What are some alternatives?
autocards - Accelerating learning through machine-generated flashcards.
PromCSE - Code for "Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning (EMNLP 2022)"
speed-focus-mode - Speed Focus Mode add-on for Anki
inltk - Natural Language Toolkit for Indic Languages aims to provide out of the box support for various NLP tasks that an application developer might need
incremental-reading - Anki add-on providing incremental reading features
DiffCSE - Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings"
dutch-word-embeddings - Dutch word embeddings, trained on a large collection of Dutch social media messages and news/blog/forum posts.
BERTopic - Leveraging BERT and c-TF-IDF to create easily interpretable topics.
experimentalCardEaseFactor - Adjusts ease factor for cards individually during review in Anki in order to hit an 85% success rate.
kanji-flashcard-generator - Simple script to generate flashcards for studying kanji
highlight-search-results - Highlight Search Results in the Browser add-on for Anki
amazon-sagemaker-examples - Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠Amazon SageMaker.