| | rwkv.cpp | gpt4all |
|---|---|---|
| Mentions | 12 | 139 |
| Stars | 1,113 | 65,231 |
| Growth | 2.8% | 3.3% |
| Activity | 6.8 | 9.8 |
| Latest commit | about 1 month ago | about 9 hours ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
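The exact activity formula is not published on this page; the following is only a rough sketch of a metric consistent with the description above: recency-weighted commits turned into a percentile rank over all tracked projects and scaled to 0-10. The half-life, the exponential decay shape, and the scaling are all assumptions.

```python
import math
from datetime import datetime, timezone

def activity_score(commit_dates, all_project_scores, half_life_days=30.0):
    """Recency-weighted commit score turned into a 0-10 percentile rank.

    A score of 9.0 means the project's weighted commit count is higher than
    90% of tracked projects, matching the "top 10%" example above.
    """
    now = datetime.now(timezone.utc)
    # Recent commits have higher weight than older ones (exponential decay,
    # hypothetical 30-day half-life).
    weighted = sum(
        math.exp(-math.log(2) * (now - d).days / half_life_days)
        for d in commit_dates
    )
    # Percentile rank against every other tracked project, scaled to 0-10.
    below = sum(1 for s in all_project_scores if s < weighted)
    return 10.0 * below / len(all_project_scores)
```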
rwkv.cpp
- Eagle 7B: Soaring past Transformers
There's https://github.com/saharNooby/rwkv.cpp, which is related-ish[0] to ggml/llama.cpp
[0]: https://github.com/ggerganov/llama.cpp/issues/846
- People who've used RWKV, what's your wishlist for it?
- The Eleuther AI Mafia
Quantisation thankfully is as applicable to RWKV as it is to transformers. Most notably in our RWKV.cpp community project: https://github.com/saharNooby/rwkv.cpp
Tooling/ecosystem is something I am actively working on, as there is still a gap relative to the transformers level of tooling. But I'm glad that there is a noticeable difference!
And yes! Experiments are important to ensure improvements in the architecture. Even if "Linear Transformers" replace "Transformers", alternatives should always be explored, to learn from the trade-offs for the benefit of the ecosystem.
(This was lightly covered in the podcast, where I share my opinion that we should have more research into text-based diffusion networks.)
- Tiny models for contextually coherent conversations?
- New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth
Q8_0 models: only for https://github.com/saharNooby/rwkv.cpp (fast CPU).
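Q8_0 refers to ggml-style blockwise 8-bit quantization. Below is a minimal sketch of the idea, assuming the usual layout of 32 values per block with one scale each; the actual rwkv.cpp/ggml implementation is written in C and stores the scale as FP16.

```python
import numpy as np

BLOCK_SIZE = 32  # ggml's Q8_0 groups weights into blocks of 32 values

def quantize_q8_0(weights: np.ndarray):
    """Blockwise symmetric 8-bit quantization: one scale per 32-value block."""
    blocks = weights.astype(np.float32).reshape(-1, BLOCK_SIZE)
    scales = np.abs(blocks).max(axis=1) / 127.0   # per-block scale d
    scales[scales == 0] = 1.0                     # all-zero block: avoid division by zero
    q = np.clip(np.round(blocks / scales[:, None]), -127, 127).astype(np.int8)
    return q, scales

def dequantize_q8_0(q: np.ndarray, scales: np.ndarray):
    """Recover approximate FP32 weights: x ~ q * d."""
    return (q.astype(np.float32) * scales[:, None]).reshape(-1)

# Example: the reconstruction error stays small relative to the weight magnitudes.
w = np.random.randn(4096).astype(np.float32)
q, d = quantize_q8_0(w)
print(np.max(np.abs(dequantize_q8_0(q, d) - w)))
```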
- [R] RWKV: Reinventing RNNs for the Transformer Era
- 4096 Context length (and beyond)
There's https://github.com/saharNooby/rwkv.cpp which seems to work, and might be compatible with text-generation-webui.
- The Coming of Local LLMs
Also worth checking out https://github.com/saharNooby/rwkv.cpp, which is based on Georgi's library and offers support for the RWKV family of models, which are Apache-2.0 licensed.
- KoboldCpp - Combining all the various ggml.cpp CPU LLM inference projects with a WebUI and API (formerly llamacpp-for-kobold)
I'm most interested in that last one. I think I heard the RWKV models are very fast, don't need much RAM, and can have huge context lengths, so maybe their 14B could work for me. I wasn't sure how ready for use they were, but looking more into it, stuff like rwkv.cpp and ChatRWKV and a whole lot of other community projects are mentioned on their GitHub.
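For context on why RWKV inference can be fast and memory-light on CPU: RWKV runs as an RNN at inference time, so each new token updates a fixed-size state instead of growing a key/value cache. The sketch below is heavily simplified and only illustrates that constant-state idea; the real RWKV time-mixing uses learned per-channel decay, token-shift, and the WKV update, none of which are shown here.

```python
import numpy as np

D = 512  # hidden size (illustrative)

def rnn_style_step(state: np.ndarray, x: np.ndarray, decay: float = 0.9) -> np.ndarray:
    """RNN-style inference: O(1) memory per token.

    The whole "context" lives in a fixed-size state vector, so memory does
    not grow with sequence length (this is a toy stand-in for RWKV's update).
    """
    return decay * state + x

def transformer_style_step(kv_cache: list, x: np.ndarray) -> list:
    """Transformer-style inference: the KV cache grows with every token."""
    kv_cache.append(x)
    return kv_cache

state, cache = np.zeros(D), []
for token_embedding in np.random.randn(1000, D):
    state = rnn_style_step(state, token_embedding)          # always D floats
    cache = transformer_style_step(cache, token_embedding)  # now len(cache) * D floats
print(state.nbytes, sum(c.nbytes for c in cache))
```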
- rwkv.cpp: FP16 & INT4 inference on CPU for RWKV language model (r/MachineLearning)
gpt4all
- Show HN: I made an app to use local AI as daily driver
- Ollama Python and JavaScript Libraries
I don’t know if Ollama can do this but https://gpt4all.io/ can.
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Gpt4all is a local desktop app with a Python API that can be trained on your documents: https://gpt4all.io/
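The Python API mentioned here is the `gpt4all` package. A minimal sketch of local generation follows; the model filename is just an example (the library downloads it on first use), and document-grounded Q&A in the desktop app is handled by its LocalDocs feature rather than by actual training.

```python
# pip install gpt4all
from gpt4all import GPT4All

# Example model filename (an assumption); downloaded automatically on first use.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Summarize the following notes into three bullet points: ...",
        max_tokens=200,
    )
    print(reply)
```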
- WyGPT: Minimal mature GPT model in C++
The readme page is cryptic. What does 'mature' mean in this context? What is the sample text a continuation of?
Having a GIF of the thing in use would be great, similar to the gpt4all readme page. (https://github.com/nomic-ai/gpt4all)
- LibreChat
Check https://github.com/nomic-ai/gpt4all instead.
- OpenAI Negotiations to Reinstate Altman Hit Snag over Board Role
"I ran performance tests on two systems, here's the results of system 1, and heres the results of system 2. Summarize the results, and build a markdown table containing x,y,z rows."
"extract the reusable functions out of this bash script"
"write me a cfssl command to generate a intermediate CA"
"What is the regex for _____"
"Here are my accomplishments over the last 6 months, summarize them into a 1 page performance report."
etc etc etc
If you're not using GPT4 or some LLM as part of your daily flow, you're working too hard.
Get GPT4All (https://gpt4all.io), log into OpenAI, drop $20 on your account, get an API key, and start using GPT4.
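For the "get an API key and start using GPT4" route, here is a minimal sketch of calling GPT-4 directly with the official `openai` Python client; the model name and prompt are just examples, and the GPT4All desktop app has its own settings for entering a key instead of code like this.

```python
# pip install openai
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # the key obtained from your OpenAI account

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "Extract the reusable functions out of this bash script:\n<script here>"},
    ],
)
print(response.choices[0].message.content)
```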
- Banned from using ChatGPT at work: is this normal?
An offline version, though not as advanced - https://github.com/nomic-ai/gpt4all ; https://gpt4all.io/index.html
- GPT4All: An ecosystem of open-source on-edge large language models - by Nomic AI
- Why use OpenAI's ChatGPT3.5 online service, if you can instead host your own local llama?
Take a look at https://gpt4all.io, their docs are pretty awesome
- Ask HN: Are you using a local LLM? If yes, what for?
I run one. I built an iMessage-like frontend to it using plain JS and a Python websocket backend. I mostly just use it for curiosity and playing with different prompts. I only have 16GB of RAM to dedicate to it, so I use an 8B parameter model which is enough for fun and chitchat, but I don't find it good enough to replace ChatGPT.
https://github.com/nomic-ai/gpt4all
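A rough sketch of the kind of Python WebSocket backend described in that comment, assuming the `websockets` package and the `gpt4all` bindings for token streaming; the commenter's actual stack and model are not specified, and the model filename below is an example.

```python
import asyncio
import websockets
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model filename

async def handle(websocket):
    # Each message from the JS frontend is treated as a prompt;
    # generated tokens are streamed back one by one.
    async for prompt in websocket:
        # Note: the synchronous generator blocks the event loop, which is
        # acceptable for a single-user toy backend like this sketch.
        for token in model.generate(prompt, max_tokens=300, streaming=True):
            await websocket.send(token)
        await websocket.send("[END]")  # sentinel so the frontend knows the reply is done

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```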
What are some alternatives?
llama.cpp - LLM inference in C/C++
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
ChatRWKV - ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
mpt-30B-inference - Run inference on MPT-30B using CPU
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
verbaflow - Neural Language Model for Go
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)