GrapheneOS-Knowledge
axolotl

| | GrapheneOS-Knowledge | axolotl |
|---|---|---|
| Mentions | 3 | 36 |
| Stars | 77 | 8,524 |
| Growth | - | 3.9% |
| Activity | 0.0 | 9.7 |
| Last commit | almost 3 years ago | 4 days ago |
| Language | HTML | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GrapheneOS-Knowledge
-
NitroPhone – “Most Secure Android on the Planet”
This is just one example (linked below), but I've seen a fair bit of this type of behaviour specifically from the project founder/leader. There do seem to be plenty of more level-headed folks involved with the project too, however, so I'm not sure how insurmountable the problem is.
https://github.com/Peter-Easton/GrapheneOS-Knowledge/issues/...
- Making Librem 5 Apps
axolotl
- Axolotl: Fine-tuning framework for various AI models
-
ORPO, DPO, and PPO: Optimizing Models for Human Preferences
Implementation: ORPO has been integrated into popular fine-tuning libraries like TRL, Axolotl, and LLaMA-Factory.
- Run Llama locally with only PyTorch on CPU
- Axolotl: Tool designed to streamline the fine-tuning of various AI models
-
Liger Kernel: +20% throughput, -60% memory for multi-GPU LLM training
Liger-Kernel support has already been merged to axolotl [0] along with an example config that makes use of it [1], if anyone would like to quickly try it out.
[0] https://github.com/axolotl-ai-cloud/axolotl/pull/1861
[1] https://github.com/axolotl-ai-cloud/axolotl/blob/main/exampl...
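For context, enabling Liger in an axolotl config looks roughly like the following sketch. The plugin path and flag names follow the merged example config referenced above, but they may differ across axolotl versions, so treat this as illustrative rather than authoritative:

```yaml
# Sketch of the Liger-related section of an axolotl config.
# Field names follow the merged example but may vary by version.
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true                        # fused RoPE kernel
liger_rms_norm: true                    # fused RMSNorm kernel
liger_swiglu: true                      # fused SwiGLU MLP kernel
liger_fused_linear_cross_entropy: true  # main memory saver: avoids materializing full logits
```

The fused linear cross-entropy is where most of the reported memory reduction comes from, since it avoids materializing the full vocabulary-sized logits tensor.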
- Axolotl: A tool to fine-tune AI models
-
Ask HN: Most efficient way to fine-tune an LLM in 2024?
The approach I see used is axolotl with QLoRA using cloud GPUs which can be quite cheap.
https://github.com/OpenAccess-AI-Collective/axolotl
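A minimal axolotl QLoRA config along those lines might look like this. The model id, dataset path, and hyperparameters are placeholder assumptions, not tested settings; the repo's examples/ directory has vetted configs:

```yaml
# Illustrative QLoRA fine-tune config for axolotl; all values
# are placeholders, not recommended settings.
base_model: meta-llama/Llama-2-7b-hf    # assumed model id
load_in_4bit: true                      # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true                # attach LoRA to all linear layers
datasets:
  - path: my_dataset.jsonl              # hypothetical local dataset
    type: alpaca
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
output_dir: ./qlora-out
```

Training is then typically launched with something like `accelerate launch -m axolotl.cli.train config.yml` on a rented GPU; only the adapter weights are trained, which is why a single cloud GPU is usually enough.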
- FLaNK AI - 01 April 2024
-
LoRA from Scratch implementation for LLM finetuning
https://github.com/OpenAccess-AI-Collective/axolotl
What are some alternatives?
os-issue-tracker - Issue tracker for GrapheneOS Android Open Source Project hardening work. Standalone projects like Auditor, AttestationServer and hardened_malloc have their own dedicated trackers.
unsloth - Finetune Llama 3.3, DeepSeek-R1 & Reasoning LLMs 2x faster with 70% less memory! 🦥
axolotl - A Signal-compatible cross-platform client written in Go, Rust and Vue.js
gpt-llm-trainer
Pine64-Arch - :penguin: Arch Linux ARM for your PinePhone/Pro and PineTab/2
tracecat - The open source Tines / Splunk SOAR alternative for security and IT engineers. Built on simple YAML templates for integrations and response-as-code.
README - Start here
LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
bromite - Bromite is a Chromium fork with ad blocking and privacy enhancements; take back your browser!
signal-cli - signal-cli provides an unofficial commandline, JSON-RPC and dbus interface for the Signal messenger.
xTuring - Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
OpenPipe - Turn expensive prompts into cheap fine-tuned models
