ERNIE vs unilm
| | ERNIE | unilm |
|---|---|---|
| Mentions | 4 | 40 |
| Stars | 6,165 | 18,358 |
| Growth | 0.0% | 1.5% |
| Activity | 2.7 | 9.0 |
| Latest commit | about 1 year ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ERNIE
- [N] Baidu to Unveil Conversational AI ERNIE Bot on March 16 (Live)
Found relevant code at https://github.com/PaddlePaddle/ERNIE, along with all code implementations there
- ERNIE - ViLG 2.0 by Baidu
- [R] Baidu’s Knowledge-Enhanced ERNIE 3.0 Pretraining Framework Delivers SOTA NLP Results, Surpasses Human Performance on the SuperGLUE Benchmark
unilm
- The Era of 1-Bit LLMs: Training Tips, Code and FAQ [pdf]
- The Era of 1-bit LLMs: ternary parameters for cost-effective computing
+1 On this, the real proof would have been testing both models side-by-side.
It seems that it may be published on GitHub [1] according to HuggingFace [2].
[1] https://github.com/microsoft/unilm/tree/master/bitnet
[2] https://huggingface.co/papers/2402.17764
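The "ternary parameters" in the title refer to weights constrained to {-1, 0, +1}. As a rough illustration of the idea (a minimal sketch of absmean-style quantization as described in the BitNet b1.58 paper, not the repository's actual implementation):

```python
# Sketch of absmean ternary quantization: scale weights by their mean
# absolute value, then round and clip each one to {-1, 0, +1}.

def ternary_quantize(weights, eps=1e-8):
    """Quantize a flat list of floats to ternary values.

    Returns (quantized, scale) so that quantized[i] * scale
    roughly approximates weights[i].
    """
    scale = sum(abs(w) for w in weights) / len(weights) + eps
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

q, s = ternary_quantize([0.9, -0.05, 0.4, -1.2])
# q == [1, 0, 1, -1]; each weight is now representable in ~1.58 bits
```

Because every weight is -1, 0, or +1, matrix multiplication reduces to additions and subtractions, which is where the claimed cost savings come from.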
- I'm an Old Fart and AI Makes Me Sad
- On building a semantic search engine
e5-mistral is essentially a distillation from GPT-4 to a smaller model. You can see here https://github.com/microsoft/unilm/blob/16da2f193b9c1dab0a69... that they actually have custom prompts for each dataset being tested.
The question would be: if you haven't seen the task before, what is a good prompt to prepend for your task?
IMO e5-mistral is overfit to MTEB
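The per-task prompts mentioned above follow an instruction template that gets prepended to the query before embedding (documents are embedded without one). A minimal sketch of that formatting step, assuming the "Instruct: ... / Query: ..." template from the model card; the task description here is an illustrative example, not one of the repository's evaluation prompts:

```python
# Build the instruction-prefixed query string that instruction-tuned
# embedding models like e5-mistral expect as input.

def build_query_input(task_description: str, query: str) -> str:
    """Prepend a one-line task instruction to a search query."""
    return f"Instruct: {task_description}\nQuery: {query}"

text = build_query_input(
    "Given a web search query, retrieve relevant passages that answer the query",
    "how do 1-bit LLMs work",
)
```

The open question in the comment is exactly which `task_description` to use for an unseen task, since retrieval quality can vary noticeably with the wording of the instruction.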
- Leveraging GPT-4 for PDF Data Extraction: A Comprehensive Guide
LayoutLM v1, v2 and v3 models [GitHub]; DocBERT [GitHub]
- Microsoft Publishes LongNet: Scaling Transformers to 1,000,000,000 Tokens
The repository is available here.
- Recommended open LLMs with image input modality?
It is missing Kosmos-2. I remember its image captioning (demo currently down) was really good, and it's almost as fast as LLaVA and LaVIN.
- LongNet: Scaling Transformers to 1,000,000,000 Tokens
Should be this: https://github.com/microsoft/unilm/
- [R] LongNet: Scaling Transformers to 1,000,000,000 Tokens
This is from Microsoft Research (Asia). https://aka.ms/GeneralAI
What are some alternatives?
ABSA-PyTorch - Aspect-based sentiment analysis, PyTorch implementations.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
involution - [CVPR 2021] Involution: Inverting the Inherence of Convolution for Visual Recognition, a brand new neural operator
clip-as-service - 🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
gensim - Topic Modelling for Humans
PaddleNLP - 👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
maelstrom - A workbench for writing toy implementations of distributed systems.
haystack - 🔍 LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
rasa - 💬 Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants
ERNIE-text-classification-pytorch - This repo contains a PyTorch implementation of a pretrained ERNIE model for text classification.
memprompt - A method to fix GPT-3 after deployment with user feedback, without re-training.