llm-awq
langchain4j-examples
|  | llm-awq | langchain4j-examples |
|---|---|---|
| Mentions | 7 | 3 |
| Stars | 1,902 | 388 |
| Growth | 10.9% | - |
| Activity | 8.0 | 8.8 |
| Latest commit | 8 days ago | 1 day ago |
| Language | Python | Java |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm-awq
- TinyChat: Large Language Model on the Edge
TinyChat is an efficient, lightweight, Python-native serving framework for 4-bit LLMs quantized with AWQ. It delivers a 2.3x generation speedup on an RTX 4090.
Code: https://github.com/mit-han-lab/llm-awq/tree/main/tinychat
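As a hedged illustration of what running a 4-bit AWQ checkpoint from Python can look like (this is not TinyChat's own API; it goes through recent versions of Hugging Face transformers with the autoawq package installed, and the model id is only an example):

```python
# Sketch: loading a community AWQ-quantized checkpoint with transformers.
# Assumptions: transformers >= 4.35 and autoawq are installed, and the
# model id below is illustrative - any AWQ checkpoint on the Hub works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-AWQ"  # illustrative AWQ checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain activation-aware weight quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```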
- FLaNK Stack Weekly 23 Oct 2023
- New base model InternLM 7B weights released, with 8k context window.
I am having trouble finding any 8-bit GPTQ models at all; there don't seem to be any on HF. It's almost all 4-bit, with the odd 3-bit of the big ones. I suspect I will have to make my own for eval purposes, but it's lower priority on my list than finding a 4-bit that's GPU friendly but doesn't have such a performance penalty. Looking at AWQ, they have 3-bit and 4-bit versions.
- Llama33B vs Falcon40B vs MPT30B
Using the currently popular GPTQ, 3-bit quantization hurts performance much more than 4-bit, but there's also AWQ (https://github.com/mit-han-lab/llm-awq) and SqueezeLLM (https://github.com/SqueezeAILab/SqueezeLLM), which manage 3-bit without as much performance drop - I hope to see them used more commonly.
- New hardware-friendly quantization method
- Activation-Aware Weight Quantization for LLM Compression Outperforms GPTQ
Better quantization would have a direct and meaningful impact for everyone running local LLMs. The technique has already been applied to both Vicuna and the multimodal LLaMA variant LLaVA.
https://github.com/mit-han-lab/llm-awq
- New quantization method AWQ outperforms GPTQ in 4-bit and 3-bit with 1.45x speedup and works with multimodal LLMs
GitHub: https://github.com/mit-han-lab/llm-awq
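All of the AWQ and GPTQ mentions above concern weight-only quantization. As a rough, illustrative baseline for what those methods improve on, the sketch below implements plain group-wise 4-bit round-to-nearest quantization of a synthetic weight matrix; AWQ's activation-aware per-channel scaling is deliberately omitted, and the sizes and names here are assumptions for illustration only.

```python
# Sketch: group-wise 4-bit round-to-nearest weight quantization.
# AWQ adds activation-aware scaling on top of this idea; that step is omitted.
import torch

def quantize_groupwise_int4(w: torch.Tensor, group_size: int = 128):
    """Quantize a 2-D weight matrix to unsigned 4-bit values per column group."""
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    w = w.reshape(out_features, in_features // group_size, group_size)
    w_max = w.amax(dim=-1, keepdim=True)
    w_min = w.amin(dim=-1, keepdim=True)
    scale = (w_max - w_min).clamp(min=1e-8) / 15.0   # 4 bits -> 16 levels
    zero_point = (-w_min / scale).round()
    q = (w / scale + zero_point).round().clamp(0, 15)
    dequant = (q - zero_point) * scale               # reconstruction for error check
    return q.to(torch.uint8), scale, zero_point, dequant.reshape(out_features, in_features)

w = torch.randn(4096, 4096)                          # synthetic weights
q, scale, zp, w_hat = quantize_groupwise_int4(w)
print("max abs reconstruction error:", (w - w_hat).abs().max().item())
```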
langchain4j-examples
- Search in Documentation with a JavaFX Chat LangChain4j Application
The goal of LangChain4j is to simplify the integration of AI and LLM capabilities into Java applications. The project lives on GitHub and has a separate repository with demo applications. I first learned about LangChain4j at the Devoxx conference in Antwerp in October last year, where Lize Raes gave an impressive presentation with 12 demos. In the last demo, she asked the application to answer questions based on a provided text, which was exactly what I was looking for: a way to interact with an existing dataset.
- FLaNK Stack Weekly 12 February 2024
- FLaNK Stack Weekly 23 Oct 2023
What are some alternatives?
SqueezeLLM - [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
stable-audio-tools - Generative models for conditional audio generation
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
lang2sql - A tutorial for setting an SQL code generator with the OpenAI API
Voyager - An Open-Ended Embodied Agent with Large Language Models
CoC2023 - Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data
CML_AMP_AI_Text_Summarization_with_Amazon_Bedrock - CML_AMP_AI_Text_Summarization_with_Amazon_Bedrock
amazon-bedrock-with-command-patterns - A simple yet powerful Java implementation that lets developers write rather straightforward code to create API requests for the different foundation models supported by Amazon Bedrock.
kafka-streams-dashboards - Showcases Grafana dashboards for Kafka Streams applications, leveraging client JMX metrics.
data-in-motion - A repository of Data In Motion tutorials, starting with Data Distribution