| | sparsegpt | chat-ui |
|---|---|---|
| Mentions | 16 | 40 |
| Stars | 634 | 6,314 |
| Growth | 5.0% | 10.0% |
| Activity | 2.4 | 9.7 |
| Last commit | about 1 month ago | 4 days ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sparsegpt
- (1/2) May 2023
SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot (https://arxiv.org/abs/2301.00774)
- Why Falcon going Apache 2.0 is a BIG deal for all of us.
- New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
There is this: https://github.com/IST-DASLab/sparsegpt
- Webinar: Running LLMs performantly on CPUs Utilizing Pruning and Quantization
Check the paper here, it's interesting: https://arxiv.org/abs/2301.00774
- OpenAI chief goes before US Congress to propose licenses for building AI
There's no chance that we've peaked in a bang-for-buck sense - we still haven't adequately investigated sparse networks.
Relevantish: https://arxiv.org/abs/2301.00774
The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.
Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.
If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
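A toy way to make that concrete: fix a random sparsity mask at initialization and train only the surviving weights, so the topology is chosen before training rather than recovered by pruning afterwards. A minimal PyTorch sketch of the idea (an illustration only, not a recipe known to work at LLM scale):

```python
import torch

class SparseLinear(torch.nn.Module):
    """Linear layer trained under a fixed random sparsity mask.

    The mask is chosen at initialization ("pre-pruning"), so masked
    weights receive zero gradient and never enter the model.
    Illustrative sketch only - not a method from the paper.
    """
    def __init__(self, d_in: int, d_out: int, density: float = 0.1):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
        self.register_buffer("mask", (torch.rand(d_out, d_in) < density).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Masked entries contribute nothing to the output, so their
        # gradients are identically zero and they stay untrained.
        return x @ (self.weight * self.mask).T

layer = SparseLinear(512, 512, density=0.1)
y = layer(torch.randn(8, 512))
print(f"active weights: {layer.mask.mean().item():.0%}")  # ~10%
```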
- How to run Llama 13B with a 6GB graphics card
Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut down the precision quite a bit without losing a lot of accuracy. It seems you can cut down further for larger models. For the 13B Llama-based ones, going below 5 bit per parameter is noticeably worse, but for 30B models you can do 4 bits.
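Back-of-the-envelope numbers make that trade-off concrete. A small Python sketch counting weight storage only (it ignores quantization metadata such as scales, and activation memory):

```python
def weight_gib(n_params: float, bits: int) -> float:
    """Approximate weight storage in GiB at a given bit width."""
    return n_params * bits / 8 / 2**30

for name, n in [("13B", 13e9), ("30B", 30e9)]:
    sizes = ", ".join(f"{bits}-bit: {weight_gib(n, bits):.1f} GiB"
                      for bits in (16, 8, 5, 4))
    print(f"{name} -> {sizes}")

# 13B -> 16-bit: 24.2 GiB, 8-bit: 12.1 GiB, 5-bit: 7.6 GiB, 4-bit: 6.1 GiB
# 30B -> 16-bit: 55.9 GiB, 8-bit: 27.9 GiB, 5-bit: 17.5 GiB, 4-bit: 14.0 GiB
```

Even at 4 bits, a 13B model's weights are slightly over 6 GiB, which is why running it on a 6 GB card still requires offloading part of the model.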
The same group did another paper https://arxiv.org/abs/2301.00774 which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.
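SparseGPT itself prunes by solving a layer-wise reconstruction problem using approximate second-order information; as a much simpler sketch of one-shot pruning, here is plain magnitude pruning in PyTorch (explicitly not the paper's method):

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """One-shot magnitude pruning: zero out the smallest-|w| entries.

    Simplified illustration only - SparseGPT instead chooses which
    weights to drop (and updates the survivors) so that the layer's
    output on calibration data changes as little as possible.
    """
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

w = torch.randn(4096, 4096)
w_half = magnitude_prune(w, sparsity=0.5)
print(f"density after pruning: {(w_half != 0).float().mean().item():.2f}")  # ~0.50
```

Note that `w_half` is still stored densely - every zero still occupies memory - which is exactly the "loaded into RAM densely" problem the comment points out; realizing the savings needs a sparse storage format or structured (e.g. 2:4) sparsity.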
- SparseGPT: Language Models Can Be Accurately Pruned in One-Shot
chat-ui
- Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat
Zephyr 141B is a Mixtral 8x22B fine-tune. Here are some interesting details:
- Base model: Mixtral 8x22B, 8 experts, 141B total params, 35B activated params
- Fine-tuned with ORPO, a new alignment algorithm with no SFT step (hence much faster than DPO/PPO)
- Trained with 7K open data instances -> high-quality, synthetic, multi-turn
- Apache 2.0 licensed
Everything is open:
- Final Model: https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v...
- Base Model: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1
- Fine-tune data: https://huggingface.co/datasets/argilla/distilabel-capybara-...
- Recipe/code to train the model: https://huggingface.co/datasets/argilla/distilabel-capybara-...
- Open-source inference engine: https://github.com/huggingface/text-generation-inference
- Open-source UI code https://github.com/huggingface/chat-ui
Have fun!
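To try the released weights directly rather than through Hugging Chat, something like the following transformers sketch should work, assuming the full model id behind the truncated link above is `HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1` (verify on the Hub; the 141B checkpoint also needs several high-memory GPUs):

```python
import torch
from transformers import pipeline

# Model id assumed from the truncated link above - check it on the Hub.
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard the 8x22B MoE across available GPUs
)

messages = [{"role": "user", "content": "In one paragraph, what is ORPO?"}]
out = pipe(messages, max_new_tokens=256)
# The pipeline returns the whole conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```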
- AI enthusiasm - episode #2🚀
As long as you have a free Hugging Face account, you can sign up and use HuggingChat, a web-based chat interface where you will find 5 large language models to play with (Mixtral-7B-it v0.1 and v0.2, Command R plus, Gemma 1.1-7B-it, Dolphin). You will also be able to use the various assistants made by the Hugging Face community, or even create your own!
- OpenAI Startup Fund: GP Hallucination
I submitted something about this the other day (and it got flagged). I poked around a little, and the only interesting thing I could find is this: https://github.com/huggingface/chat-ui/issues/254 - I don't really understand what it is, but it references the stuff the person who wrote this is discussing. I had kind of written the whole thing off as someone with too much time on their hands who is just f'ing around with stuff for whatever reason.
I think they made this as well: https://chat.openai.com/g/g-KT4gusP3Y-a-l-i-s-t-a-i-r-e-earl... - it doesn't seem very useful.
¯\_(ツ)_/¯ To me, after spending an hour or so poking around, it seemed like a bored, tech-savvy young person playing around.
- ⚔️ Embeddings, Chatbot RAG Arena and OPT-NC Telecom plans
- Show HN: I made an app to use local AI as daily driver
- https://github.com/huggingface/chat-ui
- Deconstructing Hugging Face Chat: Explore open-source chat UI/UX for generative AI
Hugging Face Chat - the open-source repo powering Hugging Chat!
- What are you guys using local LLMs for?
If you don't want to do coding, I think Hugging Face's chat-ui can come in handy, with web-retrieval RAG and llama-cpp running as a server. Please check their documentation on how to set it up (see the "Running your own models using a custom endpoint" section on their GitHub).
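For reference, the pattern that section describes is adding an entry to chat-ui's `MODELS` variable in `.env.local` that points at your local server. A rough sketch, with field names taken from the chat-ui README at the time of writing (they change between versions, so treat this as illustrative and check the current docs):

```
# .env.local for chat-ui - illustrative only; see the README section
# "Running your own models using a custom endpoint" for current fields.
MODELS=`[
  {
    "name": "local-llama",
    "endpoints": [
      { "type": "llamacpp", "baseURL": "http://127.0.0.1:8080" }
    ]
  }
]`
```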
- The founder of OpenAI/ChatGPT is a Zionist calling people that are against Israeli genocide “antisemitist”, how dare the American left speak against genocide!?
Yes! It's proprietary and invasive: it harvests your data and uses it to improve the AI. Altman went to Israel weeks after ChatGPT was introduced; Israel, like any other tech-giant country, needs to make sure it has control over that data and/or can use it to achieve its goals. So it's better to find offline FOSS alternatives (if you have a decent enough PC) or use HuggingChat as an online FOSS alternative - I find it better than GPT 3.5 in many respects.
- Smartphone Brands Sorted Out, So You Don't Have To
I have categorized some smartphone brands by their parent company using HuggingChat (which is trained with RLHF), Google's Bard, ChatGPT, and Perplexity. All of them are powered by LLMs, and both ChatGPT and Perplexity use GPT-3.5.
- Accessing ChatGPT in non-official UI
I'm looking for something like https://huggingface.co/chat/ or OpenAssistant, but it should target OpenAI's API.
What are some alternatives?
StableLM - StableLM: Stability AI Language Models
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
github-copilot-product-specific-terms
DiscordChatExporter-frontend - Browse JSON files exported by Tyrrrz/DiscordChatExporter in a familiar Discord-like user interface
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
intel-extension-for-pytorch - A Python package for extending the official PyTorch that makes it easy to obtain performance on Intel platforms
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
coriander - Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices
AgileRL - Streamlining reinforcement learning with RLOps. State-of-the-art RL algorithms and tools.