GrapheneOS-Knowledge vs axolotl

| | GrapheneOS-Knowledge | axolotl |
|---|---|---|
| Mentions | 3 | 29 |
| Stars | 72 | 5,987 |
| Growth | - | 12.0% |
| Activity | 0.0 | 9.8 |
| Last commit | about 2 years ago | 1 day ago |
| Language | HTML | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GrapheneOS-Knowledge
- NitroPhone – “Most Secure Android on the Planet”
This is just one example (linked below), but I've seen a fair bit of this type of behaviour, specifically from the project founder/leader. There do seem to be a lot of other, more level-headed folk involved with the project too, however, so I'm not sure how insurmountable the problem is.
https://github.com/Peter-Easton/GrapheneOS-Knowledge/issues/...
- Making Librem 5 Apps
The Librem 5 is not as open or libre as its marketing has tried to insinuate. Its binary-blob firmware is signed, validated, saved in write-protected read-only memory, and loaded by a secondary coprocessor, exploiting a loophole in the definition of "libre" hardware to let the device qualify for the FSF's definition of "Free" hardware. This renders the firmware impossible to update without shorting a connection: if a vulnerability is discovered in the modems or radios, the firmware cannot be updated without physically dismantling the phone. Firmware initialization is also no longer under the control of the host operating system, because initialization is carried out from outside the OS, so changing or updating software on the host will not address these design defects.

Although the modems and radios are not attached to the host via DMA, they rely on USB for isolation, which simply shifts the trust from the kernel driver to the kernel USB stack. USB was never designed to distrust the device plugged into it, unlike an SMMU/IOMMU, which is specifically designed to mitigate unconstrained DMA.
Current releases of the Librem 5 have been plagued by thermal-throttling issues and poor battery life, which in some cases has clocked in at less than an hour at idle.
The Librem 5 does not even ship with disk encryption, and no progress has been made toward adding even LUKS. It also lacks a secure element for any hardware binding of the encryption keys, so it would be entirely dependent on software-only encryption.
The rebranded version of Debian that the Librem 5 uses as its operating system has the same security model as the desktop stack: a perimeter, or "all or nothing," security model. In the future, applications may be installed via Flatpak, but the threat model Flatpak assumes, and the measures it takes to meet it, are as yet unclear.
From https://github.com/Peter-Easton/GrapheneOS-Knowledge/blob/ma...
axolotl
- Ask HN: Most efficient way to fine-tune an LLM in 2024?
The approach I see used is axolotl with QLoRA, using cloud GPUs, which can be quite cheap.
https://github.com/OpenAccess-AI-Collective/axolotl
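For context on what that recipe actually does: axolotl is configured through YAML files, but the QLoRA setup it automates reduces to loading the base model in 4-bit and attaching trainable low-rank adapters. A minimal sketch of that underlying setup using the transformers/peft/bitsandbytes stack directly (not axolotl's own API); the model ID and LoRA hyperparameters below are illustrative placeholders:

```python
# Rough sketch of the QLoRA recipe axolotl automates: a 4-bit quantized
# base model plus trainable low-rank adapters. Assumes transformers,
# peft and bitsandbytes are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Quantize the frozen base weights to 4-bit NF4 so they fit in one GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Only these small rank-r adapter matrices are trained; the quantized
# base weights stay frozen throughout.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params
```

The 4-bit quantization is what keeps the frozen base weights within a single GPU's memory, which is why this approach pairs well with cheap cloud GPUs.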
- FLaNK AI - 01 April 2024
- LoRA from Scratch implementation for LLM finetuning
https://github.com/OpenAccess-AI-Collective/axolotl
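Since the thread is about implementing LoRA from scratch, the core idea fits in a few lines: freeze the pretrained weight W and learn a low-rank update BA, scaled by alpha/r. A minimal PyTorch sketch of that idea (my own illustration, not code from the linked repo):

```python
# Minimal LoRA linear layer: y = W x + (alpha / r) * B(A(x)),
# where W is frozen and only A and B are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        # A projects down to rank r, B projects back up; B starts at zero
        # so training begins from the pretrained model's behaviour.
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.02)
        nn.init.zeros_(self.B.weight)
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))

# Usage: wrap an attention projection of a pretrained model.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 768))
```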
- Optimized Triton Kernels for full fine tunes
- Axolotl
- Let’s Collaborate to Build a High-Quality, Open-Source Dataset for LLMs!
One option is to look at what Axolotl uses. They have a list of different dataset formats that they support. They're mostly in JSON with specific field names, so you could start putting a dataset together with a text editor or a JSON editor.
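To make that concrete: alpaca-style instruction records, one of the formats in axolotl's list, are plain JSON objects with instruction/input/output fields, so a dataset file can be produced with nothing but the standard library. A small sketch; the record contents are invented placeholders, and the exact field names depend on which dataset type you target:

```python
# Write a tiny instruction dataset in an alpaca-style JSONL layout
# (one JSON object per line). The example record is a placeholder.
import json

records = [
    {
        "instruction": "Summarize the following text.",
        "input": "LoRA adds small trainable matrices to a frozen model.",
        "output": "LoRA fine-tunes by training only small added matrices.",
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```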
- Axolotl: Streamline fine-tuning of AI models
- Dataset Creation Tools?
You can save that overall set into a JSON file and load it up as training data in whatever you're using; I'm using axolotl for it at the moment. Though a GUI-based option is probably best for the first couple of tries, until you get a feel for the options.
- Progress on Reproducing Phi-1/1.5
Looking forward to the results! If it turns out the dataset is reproducible, then it might be a good candidate for ReLoRA training on axolotl!
What are some alternatives?
Pine64-Arch - :penguin: Arch Linux ARM for your PinePhone/Pro and PineTab/2
signal-cli - signal-cli provides an unofficial commandline, JSON-RPC and dbus interface for the Signal messenger.
bromite - Bromite is a Chromium fork with ad blocking and privacy enhancements; take back your browser!
gpt-llm-trainer
os-issue-tracker - Issue tracker for GrapheneOS Android Open Source Project hardening work. Standalone projects like Auditor, AttestationServer and hardened_malloc have their own dedicated trackers.
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
axolotl - A Signal-compatible cross-platform client written in Go, Rust and Vue.js
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
README - Start here
LMFlow - An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
OpenPipe - Turn expensive prompts into cheap fine-tuned models