YaLM-100B
petals
| | YaLM-100B | petals |
|---|---|---|
| Mentions | 35 | 98 |
| Stars | 3,721 | 8,710 |
| Growth | 0.1% | 1.8% |
| Activity | 0.0 | 8.3 |
| Latest commit | 10 months ago | 11 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
YaLM-100B
-
Elon Musk's Grok Exactly Echoes ChatGPT Responses: Identical Answers Raise Questions - EconoTimes
It's probably just open-source software/training sets repurposed... https://github.com/yandex/YaLM-100B
- OpenAI CEO suggests international agency like UN's nuclear watchdog could oversee AI
-
A few less Googleable questions about local LLMs
There is a 100B model published under the Apache 2.0 license, though there is no information about fine-tuning it or running it in 4-bit with something like llama.cpp. I'm trying to figure out how to try it without renting an extremely expensive GPU setup. https://github.com/yandex/YaLM-100B
-
Is it possible to use llama.cpp or create an Alpaca LoRA for the YaLM-100B model?
Hey everyone! I just discovered an open-source 100 billion parameter language model called YaLM, which is published under the Apache 2.0 license. The model is trained on more than 1 TB of Russian and English text. Here's the GitHub repo: https://github.com/yandex/YaLM-100B and an article explaining how it was trained: https://medium.com/yandex/yandex-publishes-yalm-100b-its-the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6
-
Kandinsky 2.1 - a new open-source text-to-image model
Yandex has already released an LLM: https://github.com/yandex/YaLM-100B
-
Just another casualty...
So there is this open project, YaLM 100B; it requires 200 GB of disk space and is trained on 1.7 TB of text.
- There's a lot of news about American/European AI. Do we know anything about what China, India, Russia and other countries are up to?
-
Suggestion. Chat mode.
You'd think so, but training a model like the one CAI uses would require a truly jaw-dropping amount of funds. That's why CAI is so suspicious, tbh. Just to give you an example, YaLM (100 billion parameters, which is probably fewer than CAI) took 65 days and 800 A100 graphics cards to train. A 175-billion-parameter model would not cost just 1.75 times as much, because training cost is not a linear function of size; it would probably be 10x or even more. IIRC, "Open"AI could only afford to train GPT-3 a single time...
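As a rough illustration of the scale that comment describes, here is a back-of-the-envelope estimate using only the figures quoted above (65 days, 800 A100s); the hourly price is an assumed placeholder, not a published number:

```python
# Back-of-the-envelope cost of the YaLM-100B training run quoted above.
# Only the GPU count and duration come from the comment; the hourly rate
# is an assumption for illustration.
gpus = 800               # A100 cards
days = 65                # training duration
usd_per_gpu_hour = 2.0   # assumed cloud price, purely illustrative

gpu_hours = gpus * days * 24
print(f"{gpu_hours:,} A100-hours")  # 1,248,000 A100-hours
print(f"~${gpu_hours * usd_per_gpu_hour:,.0f} at ${usd_per_gpu_hour}/GPU-hour")
```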
-
Ask HN: Can I download GPT / ChatGPT to my desktop?
I don't much follow AI news beyond what I randomly happen to see on HN, but this might still be the largest open source model: https://github.com/yandex/YaLM-100B . There's discussion of it here: https://old.reddit.com/r/MachineLearning/comments/vpn0r1/d_h... - at the bottom of that page is a comment from someone who actually ran it in the cloud.
-
[Rant] Siri is beyond horrendous and it’s even worse than ever
Hilariously, Yandex Alisa runs circles around it, because it's not just a collection of gimmicks but has an actual 100B-class language model (YaLM, open-sourced) as its core, plus lots of decent engineering. It's helpful, skillful and feels alive, almost like ChatGPT.
petals
-
Mistral Large
So how long until we can do an open source Mistral Large?
We could make a start on Petals or some other open-source distributed training cluster, possibly?
[0] https://petals.dev/
-
Distributed Inference and Fine-Tuning of Large Language Models over the Internet
You can check out their project at https://github.com/bigscience-workshop/petals
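For a sense of how Petals is used, here is a minimal client-side sketch based on the usage pattern shown in the project's README; the model name is just an example of one that has been hosted on the public swarm, and availability changes over time:

```python
# Minimal Petals client sketch: the tokenizer runs locally, while the
# model's transformer blocks are served by volunteers across the swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model; swarm availability varies
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```

The same README also describes contributing a GPU to the public swarm with a command along the lines of `python -m petals.cli.run_server <model_name>`.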
- Make no mistake—AI is owned by Big Tech
- Would you donate computation and storage to help build an open source LLM?
-
Run 70B LLM Inference on a Single 4GB GPU with This New Technique
There is already an implementation along the same lines using a torrent-style architecture.
https://petals.dev/
-
Run LLMs in BitTorrent style
Check it out at Petals.dev (there's a chatbot demo as well).
- Is distributed computing dying, or just fading into the background?
-
Ask HN: Are there any projects currently exploring distributed AI training?
https://github.com/bigscience-workshop/petals
-
Mistral 7B, the Complete Guide to the Best 7B Model
https://github.com/bigscience-workshop/petals
Inference only: https://lite.koboldai.net/
- Run LLMs at home, BitTorrent‑style
What are some alternatives?
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
SLIDE
llama - Inference code for Llama models
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
alpaca-lora - Instruct-tune LLaMA on consumer hardware
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
YaLM-100B - Pretrained language model with 100B parameters
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
ClickHouse - ClickHouse® is a free analytics DBMS for big data
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.