point-alpaca vs petals

|  | point-alpaca | petals |
|---|---|---|
| Mentions | 9 | 98 |
| Stars | 408 | 8,684 |
| Growth | 0.0% | 1.5% |
| Activity | 4.2 | 8.3 |
| Latest commit | about 1 year ago | 5 days ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
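The exact formula behind the Activity number isn't published, but the description above (recent commits weighted more heavily than older ones) suggests something like an exponentially decayed commit count. A minimal, purely hypothetical sketch; the half-life and scaling are assumptions, not the site's actual parameters:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Hypothetical recency-weighted activity metric: each commit
    contributes a weight that halves every `half_life_days`, so a
    commit from yesterday counts far more than one from last year."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: a project with one recent commit and two stale ones.
commits = [
    datetime(2024, 5, 20, tzinfo=timezone.utc),
    datetime(2024, 2, 1, tzinfo=timezone.utc),
    datetime(2023, 5, 1, tzinfo=timezone.utc),
]
print(f"activity ~ {activity_score(commits):.2f}")
```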
point-alpaca
- point-alpaca
- Pygmalion releases two new LLaMA-based models: Pygmalion 7B and the roleplay-oriented Metharme 7B. These are major improvements over the old Pygmalion models.
  How does this perform compared to something like https://github.com/pointnetwork/point-alpaca?
- What AI models do you want me to test and judge with GPT-4? Taking suggestions from the community!
  How does https://github.com/pointnetwork/point-alpaca compare? I was surprised how well the demo performed.
- Is it a good idea to buy more RTX 3090s?
  I don't think it's worth it. The smaller models are powerful enough for most purposes. Did you try Point Alpaca? https://github.com/pointnetwork/point-alpaca
- What's the current "best" LLaMA LoRA? Or rather, what would be a good benchmark to test these against? (HF links included in post)
  It's not a LoRA, but this is the best I've tried: https://github.com/pointnetwork/point-alpaca. It requires a GPU.
- Alpaca recreation without LoRA (released as a diff; see the sketch after this list)
- Goodbye Alpaca
- [D] Totally Open Alternatives to ChatGPT
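Releasing a model "as a diff", as in the point-alpaca item above, means publishing only the difference between the fine-tuned weights and the original LLaMA weights, so the restricted base weights are never redistributed. The actual repo ships encrypted diff files plus its own reconstruction script; the snippet below is only a sketch of the general additive-diff idea, with hypothetical file names:

```python
import torch

def apply_weight_diff(base_path, diff_path, out_path):
    """Recover fine-tuned weights from a base checkpoint plus a
    released diff (fine-tuned minus base). All paths are hypothetical;
    the real point-alpaca release uses its own scripts and format."""
    base = torch.load(base_path, map_location="cpu")
    diff = torch.load(diff_path, map_location="cpu")
    # Adding the diff back onto the base tensors reproduces the
    # fine-tuned checkpoint without ever shipping the base weights.
    merged = {name: base[name] + diff[name] for name in diff}
    torch.save(merged, out_path)

apply_weight_diff("llama-7b.pt", "point-alpaca-diff.pt", "point-alpaca.pt")
```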
petals
- Mistral Large
  So how long until we can do an open-source Mistral Large? We could make a start on Petals [0] or some other open-source distributed training network.
  [0] https://petals.dev/
- Distributed Inference and Fine-Tuning of Large Language Models over the Internet
  You can check out the project at https://github.com/bigscience-workshop/petals (see the usage sketch after this list).
- Make no mistake—AI is owned by Big Tech
- Would you donate computation and storage to help build an open source LLM?
- Run 70B LLM Inference on a Single 4GB GPU with This New Technique
  There is already an implementation along the same lines that uses a torrent-style architecture: https://petals.dev/
- Run LLMs in BitTorrent style
  Check it out at https://petals.dev/ (the project also hosts a chatbot demo).
- Is distributed computing dying, or just fading into the background?
- Ask HN: Are there any projects currently exploring distributed AI training?
  https://github.com/bigscience-workshop/petals
- Mistral 7B: The Complete Guide to the Best 7B Model
  https://github.com/bigscience-workshop/petals
  Inference only: https://lite.koboldai.net/
- Run LLMs at home, BitTorrent-style
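Several of the petals mentions above describe the same workflow: join a public swarm where volunteers each serve a slice of the model's transformer layers, and only activations cross the network. Below is a minimal generation sketch following the Petals README; the model name is an example from the project's docs, and the models available on the public swarm may change over time:

```python
# pip install petals
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example from the Petals docs

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Connects to the public swarm: transformer blocks run on remote
# volunteer GPUs, BitTorrent-style, while the embedding layer and
# LM head run locally on the client.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A distributed LLM is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```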
What are some alternatives?
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
llama - Inference code for Llama models
awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI ChatGPT, GPT-4)