openplayground vs dalai

| | openplayground | dalai |
|---|---|---|
| Mentions | 12 | 59 |
| Stars | 6,099 | 13,060 |
| Growth | - | - |
| Activity | 2.0 | 6.5 |
| Latest commit | 9 days ago | 6 months ago |
| Language | TypeScript | CSS |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openplayground
-
Show HN: Unified access to top AI models, supporting GPT4, Claude and more
https://github.com/nat/openplayground
I load $5 into my account using my credit card and reload it whenever it gets low. It also has a tab for comparing results from multiple models together.
-
I love how many people want a way bigger context window for, say, GPT-4 (like 100k-1M). May I introduce you to the cost of one API call at the full 32k context window? $2. So 1M tokens would cost you approximately $60. One call. $60.
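The scaling in that comment is easy to check. A quick sketch, taking the commenter's ~$2-per-full-32k-call figure as given (it is their estimate, not an official price) and extrapolating linearly:

```python
# Back-of-the-envelope cost scaling for larger context windows,
# based on the commenter's figure of ~$2 for one full 32k-token call.
cost_32k = 2.00            # dollars per call at a 32k context window
tokens_32k = 32_000

cost_per_token = cost_32k / tokens_32k   # ~$0.0000625 per token

for window in (100_000, 1_000_000):
    print(f"{window:>9,} tokens -> ~${cost_per_token * window:,.2f} per call")
```

This gives ~$6.25 for a 100k window and ~$62.50 for 1M, which matches the "approximately $60" in the comment.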
https://github.com/nat/openplayground https://discord.gg/uT98U9HJ
-
How good is the 100k context model?
Try here: https://github.com/nat/openplayground
-
Performance of GPT-4 vs PaLM 2
From there you have lots of other models. One of the easiest ways to start using multiple models is a multi-model UI program like GPT4All; there are also programs that provide access to more models or interface with them in different ways. Here are some of the best / most popular programs I've found for playing around with lots of different models and comparing them: LocalAI, text-generation-webui, open playground
- Show HN: Promptfoo – a tool for comparing LLM prompts and models
- Show HN: AI Playground by Vercel Labs
-
What is this subreddit about? I can't tell if it's waifus or locally run LLMs
Here's another interesting engine called AI playground that lets you do side-by-side comparisons of language models based on the same prompts: https://github.com/nat/openplayground
- An LLM playground you can run on your laptop
dalai
-
Ask HN: What are the capabilities of consumer grade hardware to work with LLMs?
I agree, I've definitely seen way more information about running image synthesis models like Stable Diffusion locally than I have LLMs. It's counterintuitive to me that Stable Diffusion takes less RAM than an LLM, especially considering it still needs the word vectors. Goes to show I know nothing.
I guess it comes down to the requirement of a very high end (or multiple) GPU that makes it impractical for most vs just running it in Colab or something.
Though there are some efforts:
https://github.com/cocktailpeanut/dalai
-
Meta to release open-source commercial AI model
If you're just looking to play with something locally for the first time, this is the simplest project I've found and has a simple web UI: https://github.com/cocktailpeanut/dalai
It works for the 7B/13B/30B/65B LLaMA and Alpaca models (fine-tuned LLaMA, which definitely works better). The smaller models, at least, should run on pretty much any computer.
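For reference, getting dalai running is only a couple of commands. A sketch based on the project's npx-driven install flow (model names and the default port may vary between versions):

```shell
# Download and build a LLaMA model (pick 7B / 13B / 30B / 65B),
# then the Alpaca variant if you want it.
npx dalai llama install 7B
npx dalai alpaca install 7B

# Start the web UI (served on http://localhost:3000 by default).
npx dalai serve
```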
- How can I run a large language model locally?
- meirl
-
FreedomGPT: AI with no censorship
I am not against easy-mode options, dude; for example, I used to run GANs through the command line and replaced them with Upscayl when I found it. Convenience is king, after all. Something about this one isn't right, though. They are advertising it as a model they built, while their own GitHub shows it to be a frontend for LLaMA. Why aren't they honest about it? Why use bots to spam about it? This makes me distrust that the executable they share is a 1-to-1 compilation of the source code, too. I would still recommend looking for more decent alternatives. Btw, running it directly isn't that complicated
-
Google removes the waitlist on Bard today and will be available in 180 more countries
https://github.com/ggerganov/llama.cpp https://github.com/oobabooga/text-generation-webui https://github.com/mlc-ai/mlc-llm https://github.com/cocktailpeanut/dalai https://github.com/ido-pluto/catai (this is super easy to install, but it doesn't provide an API or have integration with LangChain)
-
ChatGPT Data Breach BreakDown - Why it Should be a Concern for Everyone!
This was easy to get running: https://github.com/cocktailpeanut/dalai with Alpaca 13B (on my 16GB of RAM)
-
A brief history of LLaMA models
I had it running before with Dalai (https://github.com/cocktailpeanut/dalai) but have since moved to using the browser based WebGPU method (https://mlc.ai/web-llm/) which uses Vicuna 7B and is quite good.
-
Meet Atom the GPT Assistant, an AI-powered Smart Home Assistant. It's like Google Assistant but with endless possibility of ChatGPT, it's like Siri but with extensibility of Open Source power.
https://github.com/nsarrazin/serge lets you pick which model and runs in a container. For an API, https://github.com/cocktailpeanut/dalai looks super promising.
- Mercredi Tech - 2023-04-26
What are some alternatives?
llama.cpp - LLM inference in C/C++
gpt4all - gpt4all: run open-source LLMs anywhere
BetterChatGPT - An amazing UI for OpenAI's ChatGPT (Website + Windows + MacOS + Linux)
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama - Inference code for Llama models
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
ChatALL - Concurrently chat with ChatGPT, Bing Chat, Bard, Alpaca, Vicuna, Claude, ChatGLM, MOSS, 讯飞星火, 文心一言 and more, discover the best answers
galai - Model API for GALACTICA
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.