LLaMA-Factory vs HALOs
|  | LLaMA-Factory | HALOs |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 21,791 | 561 |
| Growth | - | 12.1% |
| Activity | 9.9 | 8.8 |
| Last commit | 1 day ago | 15 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLaMA-Factory
- FLaNK-AIM Weekly 06 May 2024
- Show HN: GPU Prices on eBay
It depends on what model you want to train, and how well you want your computer to keep working while you're doing it.
If you're interested in large language models, there's a table of VRAM requirements for fine-tuning at [1], which says you could do the most basic type of fine-tuning on a 7B parameter model with 8GB of VRAM.
You'll find that training takes quite a long time, and since a lot of the GPU's power goes to training, your computer's responsiveness will suffer - even basic things like scrolling in your web browser or changing tabs use the GPU, after all.
Spend a bit more and you'll probably have a better time.
[1] https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...
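For context, the lowest-VRAM rows in that table correspond to 4-bit quantized (QLoRA-style) fine-tuning, where only small adapter matrices are trained on top of a frozen, quantized base model. Here is a minimal sketch with Hugging Face transformers and peft; the model name and hyperparameters are illustrative assumptions, not values from the comment:

```python
# Minimal QLoRA-style sketch: 4-bit quantized base weights + trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative: any ~7B causal LM

# Quantize the frozen base model to 4-bit NF4 so it fits in roughly 6-8 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Only the small low-rank adapter matrices are trained; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```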
- FLaNK Weekly 31 December 2023
HALOs
- On Sleeper Agent LLMs
If you are using no-code solutions, increasing the frequency of an "idea" in a dataset will make that idea more likely to appear in the model's output.
If you are fine-tuning your own LLM, there are other ways to get your idea to appear. In the literature this is sometimes called RLHF or preference optimization; here are a few approaches:
Direct Preference Optimization
DPO learns pairwise preferences with a Bradley-Terry-style model, the same family behind the Elo scores used in chess and basketball to rank individuals who compete in pairs.
@argilla_io on X.com has been doing some work on evaluating DPO.
Here is a decent thread on this: https://x.com/argilla_io/status/1745057571696693689?s=20
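At its core, DPO reduces to a single logistic loss over log-probability ratios between the trained policy and a frozen reference model. A minimal PyTorch sketch of that loss, assuming the summed per-sequence log-probabilities are computed elsewhere:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss for a batch of preference pairs.

    Each argument is a tensor of summed per-sequence log-probabilities
    log pi(y|x) for the chosen (preferred) and rejected completions,
    under the trained policy and a frozen reference model respectively.
    """
    # Implicit reward: how far the policy has moved from the reference
    # on each completion, scaled by beta.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)

    # Bradley-Terry-style pairwise objective: push the chosen/rejected
    # reward margin through a logistic loss.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```

Note that no separate reward model or RL loop is needed; the preference pairs enter the loss directly.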
Identity Preference Optimization
IPO is research from Google DeepMind. It removes the reliance on Elo-style pairwise scoring to address overfitting issues in DPO.
Paper: https://x.com/kylemarieb/status/1728281581306233036?s=20
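Compared with DPO, IPO keeps the same log-ratio margin but regresses it toward a fixed target of 1/(2τ) instead of pushing it through a logistic loss, which bounds how far the policy can drift from the reference. A sketch under the same assumptions as the DPO snippet above:

```python
def ipo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, tau=0.1):
    """IPO loss sketch; inputs are the same per-sequence log-probabilities
    as in the DPO sketch above.

    Instead of a logistic loss, IPO regresses the log-ratio margin toward
    the fixed target 1/(2*tau), so the policy cannot drift arbitrarily far
    from the reference even on deterministic preference data.
    """
    margin = (policy_chosen_logps - ref_chosen_logps) \
             - (policy_rejected_logps - ref_rejected_logps)
    return ((margin - 1.0 / (2.0 * tau)) ** 2).mean()
```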
Kahneman-Tversky Optimization
KTO is an approach that uses unpaired ("mono") preference data: rather than comparing two responses, it asks only whether a single response is "good or not." This is helpful for a lot of real-world situations (e.g. "Is the restaurant well liked?").
Here is a brief discussion on it:
https://x.com/ralphbrooks/status/1744840033872330938?s=20
Here is more on KTO:
* Paper: https://github.com/ContextualAI/HALOs/blob/main/assets/repor...
* Code: https://github.com/ContextualAI/HALOs
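The HALOs report frames KTO as an asymmetric, prospect-theory-style value function applied to the same implicit reward as DPO, with a batch-level KL estimate standing in for a paired rejected response. A simplified sketch; β, the λ weights, and the KL reference point are assumptions drawn from that framing, not the repo's exact implementation:

```python
import torch

def kto_loss(policy_logps, ref_logps, is_desirable, kl_ref,
             beta=0.1, lambda_d=1.0, lambda_u=1.0):
    """KTO loss sketch for unpaired "good or not" feedback.

    policy_logps / ref_logps: summed log-probabilities log pi(y|x) of each
    completion under the policy and the frozen reference model.
    is_desirable: boolean tensor, True where a completion was labeled good.
    kl_ref: scalar reference point - an estimate of the policy/reference KL
    over the batch (detached from the graph in practice).
    """
    reward = policy_logps - ref_logps  # same implicit reward as in DPO

    # Asymmetric, prospect-theory-style value function: desirable and
    # undesirable examples are weighted and saturated differently.
    value = torch.where(
        is_desirable,
        lambda_d * torch.sigmoid(beta * (reward - kl_ref)),
        lambda_u * torch.sigmoid(beta * (kl_ref - reward)),
    )
    weight = torch.where(is_desirable,
                         torch.full_like(value, lambda_d),
                         torch.full_like(value, lambda_u))
    return (weight - value).mean()
```

Because each example carries only a binary label, this kind of objective can be trained on feedback that never comes in chosen/rejected pairs.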
What are some alternatives?
KVQuant - KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
argilla - Argilla is a collaboration platform for AI engineers and domain experts that require high-quality outputs, full data ownership, and overall efficiency.
seatunnel - SeaTunnel is a next-generation super high-performance, distributed, massive data integration tool.
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
machinascript-for-robots - Build LLM-powered robots in your garage with MachinaScript For Robots!
efficient-kan - An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).
generative-ai-python - The Gemini API Python SDK enables developers to use Google's state-of-the-art generative AI models to build AI-powered features and applications.
FLaNK-Ice - Apache Iceberg - Cloud Data Lakehouse
promptbench - A unified evaluation framework for large language models
kamal - Deploy web apps anywhere.
Stirling-PDF - #1 Locally hosted web application that allows you to perform various operations on PDF files
osgameclones - Open Source Clones of Popular Games