-
llm-awq
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
I am having trouble finding any 8-bit GPTQ models at all; there don't seem to be any on HF. It's almost all 4-bit, with the odd 3-bit for the big ones. I suspect I will have to make my own for eval purposes, but that's lower priority on my list than finding a 4-bit that's GPU-friendly but doesn't have such a performance penalty... Looking at AWQ, they have 3- and 4-bit versions.
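For context on what the bit widths above trade off: a minimal sketch of round-to-nearest group-wise weight quantization, the storage scheme that 4-bit GPTQ/AWQ checkpoints build on (the real methods add error compensation or activation-aware scaling on top; the function name and group size here are illustrative, not from either library):

```python
import numpy as np

def quantize_groupwise(w, bits=4, group_size=8):
    """Simulate asymmetric group-wise weight quantization and return
    the dequantized weights, so reconstruction error can be inspected."""
    qmax = 2 ** bits - 1
    w = w.reshape(-1, group_size)
    # One (scale, zero-point) pair per group of weights.
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax
    scale[scale == 0] = 1.0  # constant groups quantize exactly
    q = np.clip(np.round((w - wmin) / scale), 0, qmax)
    return (q * scale + wmin).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=64).astype(np.float32)
err4 = np.abs(quantize_groupwise(w, bits=4) - w).mean()
err8 = np.abs(quantize_groupwise(w, bits=8) - w).mean()
print(err8 < err4)  # 8-bit reconstruction error is far smaller than 4-bit
```

Each extra bit roughly halves the per-group quantization step, which is why 8-bit checkpoints show much smaller degradation than 3/4-bit ones on the same model.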
NOTE:
The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives.
Hence, a higher number means a more popular project.
Related posts
-
OmniGlue: Generalizable Feature Matching with Foundation Model Guidance
-
Knowledge Base Support for the Generic Bedrock Agent Test UI
-
Ask HN: How does modern FreeCAD compare with Solidworks?
-
Show HN: Empower-functions, SOTA OSS function calling LLM
-
We created the first open-source implementation of Meta's TestGen-LLM