| | SLIDE | YaLM-100B |
|---|---|---|
| Mentions | 3 | 35 |
| Stars | 475 | 3,725 |
| Growth | -0.4% | 0.2% |
| Activity | 0.0 | 0.0 |
| Last commit | over 2 years ago | 11 months ago |
| Language | Python | - |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SLIDE
-
Yandex open-sources 100B parameter GPT-like model
That's pretty much what SLIDE [0] does. The original goal was to make CPU training competitive with GPUs, but presumably the same approach could apply to running inference on models too large to fit in consumer GPU memory (a sketch of the idea follows below).
https://github.com/RUSH-LAB/SLIDE
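For intuition, here is a minimal sketch of the SLIDE idea: hash neurons by their weight vectors with locality-sensitive hashing, then for each input compute only the neurons in the colliding bucket. The hash family, sizes, and single-table setup are illustrative assumptions, not SLIDE's actual implementation (which uses several tables and adaptive rehashing).

```python
# Minimal sketch of LSH-based sparse forward passes, in the spirit of SLIDE.
# All sizes and the SimHash family are assumptions for illustration.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
n_neurons, dim, n_bits = 10_000, 256, 8

W = rng.standard_normal((n_neurons, dim)).astype(np.float32)    # layer weights
planes = rng.standard_normal((n_bits, dim)).astype(np.float32)  # SimHash hyperplanes

def simhash(v: np.ndarray) -> int:
    """Random-hyperplane LSH: vectors with similar direction share codes."""
    bits = (planes @ v) > 0
    return int(bits.astype(np.int64) @ (1 << np.arange(n_bits)))

# Index every neuron's weight vector into the hash table once, up front.
table = defaultdict(list)
for i in range(n_neurons):
    table[simhash(W[i])].append(i)

def sparse_forward(x: np.ndarray) -> dict[int, float]:
    """Activations for only the neurons whose bucket matches the input."""
    return {i: float(W[i] @ x) for i in table.get(simhash(x), [])}

x = rng.standard_normal(dim).astype(np.float32)
print(f"computed {len(sparse_forward(x))} of {n_neurons} neurons")
```

Because an input hashes to the same bucket as weight vectors pointing in a similar direction, the retrieved neurons are exactly the ones likely to have large dot products, so each pass touches only a tiny fraction of the layer.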
- [R] CPU algorithm trains deep neural nets up to 15 times faster than top GPU trainers
- CPU-based algorithm trains deep neural nets up to 15 times faster than top GPU
YaLM-100B
-
Elon Musk's Grok Exactly Echoes ChatGPT Responses: Identical Answers Raise Questions - EconoTimes
It's probably just open-source software/training sets repurposed... https://github.com/yandex/YaLM-100B
- OpenAI CEO suggests international agency like UN's nuclear watchdog could oversee AI
-
A few less Googleable questions about local LLMs
There is a 100B model published under the Apache 2.0 license, though there is no information about fine-tuning it or using it in 4-bit with something like llama.cpp. I'm trying to figure out how to try it without renting an extremely expensive GPU setup. https://github.com/yandex/YaLM-100B
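Until someone actually wires YaLM-100B into llama.cpp, the 4-bit trick itself is easy to sketch. Below is a minimal block-wise quantizer in NumPy; the block size and symmetric [-7, 7] range are assumptions in the spirit of llama.cpp-style quantization, not its actual on-disk format.

```python
# Minimal sketch of block-wise 4-bit weight quantization. Real formats
# also pack two 4-bit codes per byte; that packing is omitted for clarity.
import numpy as np

def quantize_4bit(weights: np.ndarray, block_size: int = 64):
    """Quantize a flat fp32 weight array to 4-bit codes, one scale per block."""
    w = weights.reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0   # map each block to [-7, 7]
    scales[scales == 0] = 1.0                             # guard all-zero blocks
    codes = np.clip(np.round(w / scales), -7, 7).astype(np.int8)
    return codes, scales

def dequantize_4bit(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate fp32 weights from codes and per-block scales."""
    return (codes.astype(np.float32) * scales).ravel()

w = np.random.randn(4096).astype(np.float32)
codes, scales = quantize_4bit(w)
err = np.abs(w - dequantize_4bit(codes, scales)).max()
print(f"max abs error: {err:.4f}")
```

The per-block scale is what keeps the error tolerable: each block of 64 weights gets its own dynamic range instead of sharing one across the whole tensor.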
-
Is it possible to use llama.cpp or create an Alpaca LoRA for the YaLM-100B model?
Hey everyone! I just discovered an open-source 100 billion parameter language model called YaLM, which is published under the Apache 2.0 license. The model is trained on more than 1 TB of Russian and English text. Here's the GitHub repo: https://github.com/yandex/YaLM-100B and an article explaining how it was trained: https://medium.com/yandex/yandex-publishes-yalm-100b-its-the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6
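For a sense of the hardware involved, here is the rough sizing arithmetic for serving a 100B-parameter model in fp16; the per-GPU memory and overhead factor are assumptions, not figures from the YaLM-100B repo.

```python
# Rough GPU-count estimate for fp16 inference on a 100B-parameter model.
# Overhead factor and per-GPU memory are assumptions for illustration.
import math

n_params = 100e9
weights_gb = n_params * 2 / 1e9   # fp16 = 2 bytes/parameter -> 200 GB
overhead = 1.2                    # assumed slack for activations / KV cache
gpu_mem_gb = 80                   # e.g. an 80 GB A100
print(math.ceil(weights_gb * overhead / gpu_mem_gb), "GPUs (rough estimate)")
```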
-
Kandinsky 2.1 - a new open-source text-to-image model
Yandex has already released an LLM: https://github.com/yandex/YaLM-100B
-
Just another casualty...
So there is this open project, YaLM-100B: it requires 200 GB of disk space and was trained on 1.7 TB of text.
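The 200 GB figure checks out as plain parameter-count arithmetic; the 4-bit line is an assumption about a hypothetical quantized copy, not something the repo ships.

```python
# 100B parameters at 2 bytes each (fp16) is where the 200 GB comes from.
n_params = 100e9
print(f"fp16 checkpoint: {n_params * 2 / 1e9:.0f} GB")    # -> 200 GB
# A hypothetical 4-bit quantized copy would be about a quarter of that:
print(f"4-bit estimate:  {n_params * 0.5 / 1e9:.0f} GB")  # -> 50 GB
```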
- There's a lot of news about American/European AI. Do we know anything about what China, India, Russia and other countries are up to?
-
Suggestion. Chat mode.
You'd think so, but training a model like the one CAI uses would require a truly jaw-dropping amount of money. That's why CAI is so suspicious, tbh. Just to give you an example: YaLM (100 billion parameters, which is probably less than CAI) took 65 days and 800 A100 graphics cards to train. 175 billion parameters would not cost 1.75 times as much, because training cost doesn't scale linearly with parameter count; it would probably be 10x or even more. IIRC, "Open"AI could only afford to train GPT-3 a single time...
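Plugging the quoted figures in makes the point concrete; the dollar rates below are assumed cloud prices, not anything Yandex disclosed.

```python
# 65 days on 800 A100s, priced at assumed cloud rates per A100-hour.
gpus, days = 800, 65
gpu_hours = gpus * days * 24
print(f"{gpu_hours:,} A100-hours")                        # -> 1,248,000
for rate in (1.0, 2.0):                                   # assumed $/A100-hour
    print(f"at ${rate:.2f}/hr: ${gpu_hours * rate:,.0f}")  # -> $1.2M - $2.5M
```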
-
Ask HN: Can I download GPT / ChatGPT to my desktop?
I don't much follow AI news beyond what I randomly happen to see on HN, but this might still be the largest open source model: https://github.com/yandex/YaLM-100B . There's discussion of it here: https://old.reddit.com/r/MachineLearning/comments/vpn0r1/d_h... - at the bottom of that page is a comment from someone who actually ran it in the cloud.
-
[Rant] Siri is beyond horrendous and it’s even worse than ever
Hilariously, Yandex Alisa runs circles around it, because it's not just a collection of gimmicks but has an actual 100B-class language model (YaLM, open-sourced) at its core, plus lots of decent engineering. It's helpful, skillful, and feels alive, almost like ChatGPT.
What are some alternatives?
lc0 - The rewritten engine, originally for tensorflow. Now all other backends have been ported here.
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
goslide - SLIDE (Sub-LInear Deep learning Engine) written in Go
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
HashingDeepLearning - Codebase for "SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems"
YaLM-100B - Pretrained language model with 100B parameters
Stockfish - A free and strong UCI chess engine
ClickHouse - ClickHouse® is a free analytics DBMS for big data
metaseq - Repo for external large-scale work