| | ChatGLM-6B | Open-Assistant |
|---|---|---|
| Mentions | 17 | 329 |
| Stars | 39,341 | 36,647 |
| Growth | 1.6% | 0.3% |
| Activity | 8.4 | 8.3 |
| Last commit | 2 months ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ChatGLM-6B
- What are the current fastest multi-gpu inference frameworks?
ChatGLM seems to be pretty popular, but I've never used this before.
- A CEO is spending more than $2,000 a month on ChatGPT Plus accounts for all of his employees, and he says it's saving 'hours' of time
There are also locally hosted options that approach the effectiveness of ChatGPT. This GLM, for example, was specifically trained to run on a single consumer-grade GPU.
- Open Source Chinese LLMs
- ChatGLM-6B: run locally on consumer graphics card (6GB of GPU memory required)
- Ask HN: Open source LLM for commercial use?
- Coding LLaMa model?
A link for y'all. Definitely gonna try to mess around with this!
- On GPT, AI, and some socioeconomic questions about the future: asking for your advice
- FLiPN-FLaNK Stack Weekly for 20 March 2023
- ChatGLM-6B - an open source 6.2 billion parameter English/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and Reinforcement Learning from Human Feedback. Runs on consumer grade GPUs
- ChatGLM: Open bilingual language model based on General Language Model framework
Open-Assistant
- Best open source AI chatbot alternative?
For Open Assistant, the code: https://github.com/LAION-AI/Open-Assistant/tree/main/inference
- GPT-4 Turbo for free with no sign up, and most importantly no Bing
Is this being used to collect chat results for synthetic data and/or training, like https://github.com/LAION-AI/Open-Assistant did? I believe they gave away GPT-4 API calls via a text interface and absorbed the cost to later build a dataset of chats.
- OpenAI now sends email threats?!
https://open-assistant.io seems to have the same guardrails as ChatGPT. Tried it on several prompts and it wouldn't comply.
- Rating ChatGPT answers with school grades
- Chat GPT Alternatives?
Open-Assistant [https://open-assistant.io/]
- What are the best AI tools you've ACTUALLY used?
Open Assistant by LAION AI on GitHub
- Keep Artificial Intelligence Free, protect it from monopolies: please sign this petition
To add to this: if you want something free, or at least close to free, contribute to open-source projects like https://open-assistant.io/
- If I had to get someone from total zero to ChatGPT power user
Also, there are fairly useful alternatives like GPT4All and Open Assistant that you can run locally.
- Compiling a Comprehensive List of Publicly Usable LLM Q&A Services - Need Your Input!
https://open-assistant.io - oasst-sft-6-llama-30b
- Proposal for a Crowd-Sourced AI Feedback System
What are some alternatives?
llama.cpp - LLM inference in C/C++
KoboldAI-Client
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
datagen - Generate authentic looking mock data based on a SQL, JSON or Avro schema and produce to Kafka in JSON or Avro format.
llama - Inference code for Llama models
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
gpt4all - gpt4all: run open-source LLMs anywhere
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support