instructor-embedding vs gpt-2

| | instructor-embedding | gpt-2 |
|---|---|---|
| Mentions | 4 | 64 |
| Stars | 1,703 | 21,146 |
| Stars growth (monthly) | 3.1% | 1.1% |
| Activity | 5.9 | 2.5 |
| Last commit | 10 days ago | 27 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
instructor-embedding
-
My experience on starting with fine tuning LLMs with custom data
If you like embeddings and vector DBs, you should look into this: https://github.com/HKUNLP/instructor-embedding
-
Build Personal ChatGPT Using Your Data
If you look at an embeddings leaderboard [1], one of the top competitors, InstructorXL [2], is just a pip install away. It's neck and neck with Ada v2 except for a shorter input length and half the dimensions, with the added benefit that you'll always have the model available.
Most of the other options just work with the transformers library.
[1] https://huggingface.co/spaces/mteb/leaderboard
[2] https://github.com/HKUNLP/instructor-embedding
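That "pip install away" path looks roughly like the following, going by the instructor-embedding README; the instruction strings and example texts here are illustrative, not from the comment above:

```python
# pip install InstructorEmbedding sentence-transformers
from InstructorEmbedding import INSTRUCTOR

# Load InstructorXL; instructor-large and instructor-base are smaller options.
model = INSTRUCTOR("hkunlp/instructor-xl")

# Each input is an [instruction, text] pair; the instruction steers the embedding.
embeddings = model.encode([
    ["Represent the document for retrieval:", "Instructor embeddings run fully locally."],
    ["Represent the question for retrieving supporting documents:", "Which embedding models work offline?"],
])
print(embeddings.shape)  # (2, 768): half of Ada v2's 1536 dimensions
```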
-
I've made a customisable SMS personal assistant which has infinite and persistent semantic memory.
Use instructor-embedding to make it 100% local, and maybe even add quick relationship lookup (embed relationship info with a sentiment-analysis instruction).
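A sketch of what that instruction-steered relationship lookup might look like, assuming a plain cosine-similarity search over locally stored embeddings; the memory texts and instruction strings are invented for illustration:

```python
import numpy as np
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")  # smaller than XL; runs on a laptop

# A toy persistent memory store; real contents would come from the SMS history.
memories = [
    "Alice is my sister; we talk every Sunday and get along great.",
    "Bob is a coworker I find stressful to deal with.",
]
instruction = "Represent the personal relationship note for retrieval:"
memory_vecs = model.encode([[instruction, m] for m in memories])

query_vec = model.encode(
    [["Represent the question for retrieving relationship notes:",
      "Who do I have a tense relationship with?"]]
)[0]

# Cosine-similarity lookup over the stored embeddings.
sims = memory_vecs @ query_vec / (
    np.linalg.norm(memory_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(memories[int(np.argmax(sims))])  # expect the Bob note
```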
-
Whisper Transcription Formatting
First, I believe having SRT subtitles as the Whisper output would be better. Essentially, you don't need just a list of words like YouTube produces; you need something more structured. I don't remember exactly what Whisper outputs, so I might be wrong. There is whisperx for that, as an example. And then maybe run gpt-index over it, or something like the instructor model. That can work.
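For what it's worth, openai-whisper's Python API does return timed segments rather than bare text, and turning them into SRT takes only a few lines. A minimal sketch; the srt_time helper is mine, not part of whisper:

```python
# pip install openai-whisper
import whisper

def srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")  # returns {"text": ..., "segments": [...]}

with open("audio.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n"
                f"{seg['text'].strip()}\n\n")
```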
gpt-2
-
What are LLMs? An intro into AI, models, tokens, parameters, weights, quantization and more
Medium models: roughly 1B to 10B parameters. This is where Mistral 7B, Phi-3, Gemma from Google DeepMind, and WizardLM 2 sit. Fun fact: GPT-2 was a medium-sized model (GPT-2 XL had about 1.5B parameters), much smaller than the GPT models that followed.
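You can check GPT-2's parameter counts directly with Hugging Face transformers; a quick sketch (the plain "gpt2" checkpoint is the 124M small model, "gpt2-xl" the 1.5B one):

```python
from transformers import GPT2LMHeadModel

# Swap in "gpt2-xl" to count the largest released GPT-2 variant.
model = GPT2LMHeadModel.from_pretrained("gpt2")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # ~124M for plain "gpt2"
```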
- Sam Altman is still trying to return as OpenAI CEO
- Build Personal ChatGPT Using Your Data
-
Are the recent advancements in AI technology primarily driven by new discoveries, or by progress in hardware capabilities and the abundance of available data?
"Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper. "
-
BING IS NOW THE DEFAULT SEARCH FOR CHATGPT
They did release GPT-2 under the MIT License.
-
Don Knuth Plays with ChatGPT
Did you arrive at this certainty through reading something other than what OpenAI has published? The document [0] that describes the training data for GPT-2 makes this assertion hilarious to me.
[0]: https://github.com/openai/gpt-2/blob/master/model_card.md#da...
- What frustrates you about the use of AI, or the discussion around it?
- The AI
-
Help with pet project to learn - Running ChatGPT-2 at home
I made a clone of https://github.com/openai/gpt-2 on my local laptop
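The original repo is TensorFlow 1.x, so a common shortcut for running GPT-2 at home, distinct from cloning openai/gpt-2 itself, is the Hugging Face transformers pipeline; a minimal sketch:

```python
from transformers import pipeline

# Downloads the 124M "gpt2" weights once, then generation runs fully locally.
generator = pipeline("text-generation", model="gpt2")
out = generator("Running GPT-2 at home is", max_new_tokens=40)
print(out[0]["generated_text"])
```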
- On the danger of AI and the proposals to halt development for 6 months.
What are some alternatives?
h2ogpt - Private chat with local GPT with documents, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
dalle-mini - DALL·E Mini - Generate images from a text prompt
openai-cookbook - Examples and guides for using the OpenAI API
minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Nuggt - An Autonomous LLM Agent that runs on Wizcoder-15B
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
vlite - fast vector database made in numpy
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
easydiffusion - Easiest 1-click way to create beautiful artwork on your PC using AI, with no tech knowledge. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image.
sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.
lit-gpt - Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. [Moved to: https://github.com/Lightning-AI/litgpt]
jukebox - Code for the paper "Jukebox: A Generative Model for Music"