Llama2.c Alternatives
Similar projects and alternatives to llama2.c
- JohnTheRipper (discontinued): John the Ripper jumbo, an advanced offline password cracker that supports hundreds of hash and cipher types and runs on many operating systems, CPUs, GPUs, and even some FPGAs. [Moved to: https://github.com/openwall/john]
llama2.c discussion
llama2.c reviews and mentions
Llama 3.1 in C
My bad, I linked directly to the C file instead of the project here:
It is a program that, given a model file, a tokenizer file, and a prompt, continues generating text from that prompt (a minimal sketch of that loop follows the commands below).
To get it to work, you need to clone and build this: https://github.com/trholding/llama2.c
The steps are as follows:
First, you'll need approval from Meta to download the Llama 3 models on Hugging Face.
Go to https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct, fill in the form, then check your acceptance status at https://huggingface.co/settings/gated-repos. Once accepted, do the following to download the model, export it, and run:
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
git clone https://github.com/trholding/llama2.c.git
cd llama2.c/
# Export the model, quantized to 8-bit
python3 export.py ../llama3.1_8b_instruct_q8.bin --version 2 --meta-llama ../Meta-Llama-3.1-8B-Instruct/original/
# Build the fastest quantized-inference binary
make runq_cc_openmp
# Test Llama 3.1 inference; it should generate sensible text
./run ../llama3.1_8b_instruct_q8.bin -z tokenizer_l3.bin -l 3 -i " My cat"
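For orientation, here is a minimal C sketch of the generation loop described above: feed the prompt tokens in one at a time, then repeatedly pick the model's next token and append it. The forward() stub and the toy vocabulary are hypothetical stand-ins for illustration only; the real run.c implements a full transformer forward pass and tokenizer.

/* Sketch of the generate-loop idea: given prompt tokens, repeatedly
 * predict and append the next token. forward() is a fake stand-in
 * for the real transformer forward pass in llama2.c. */
#include <stdio.h>

#define VOCAB 8  /* toy vocabulary size for this sketch */

/* hypothetical stand-in: a real implementation runs the model here */
static void forward(int token, int pos, float *logits) {
    for (int i = 0; i < VOCAB; i++) logits[i] = 0.0f;
    logits[(token + pos) % VOCAB] = 1.0f;  /* fake "prediction" */
}

/* greedy sampling: take the highest-scoring token */
static int argmax(const float *logits, int n) {
    int best = 0;
    for (int i = 1; i < n; i++) if (logits[i] > logits[best]) best = i;
    return best;
}

int main(void) {
    int prompt[] = {1, 4, 2};   /* pretend-tokenized " My cat" */
    int n_prompt = 3, steps = 10;
    float logits[VOCAB];
    int token = prompt[0];
    for (int pos = 0; pos < steps; pos++) {
        forward(token, pos, logits);          /* score all possible next tokens */
        int next = (pos + 1 < n_prompt)
                   ? prompt[pos + 1]          /* still consuming the prompt */
                   : argmax(logits, VOCAB);   /* past the prompt: sample greedily */
        printf("%d ", next);                  /* real code would decode to text */
        token = next;
    }
    printf("\n");
    return 0;
}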
What would an LLM OS look like?
Nice article. We did a demo of booting to an LLM and also running it as a kernel module: https://github.com/trholding/llama2.c The whole thing was funny and buggy, but since then we have been developing in stealth, even trying to raise VC capital. Our goal is to make the computer like a buddy you can talk to, explain things to, and get work done with, kind of like a Jarvis. The way we interact with computers hasn't changed for decades; it's time to disrupt that to get more productivity. I also believe that with this approach one can avoid installing different applications, since the computer (the models) can emulate the activities done through applications. For example, cutting and pasting a dog from a dog photo onto a banner for a dog racing competition would not require you to be a graphics artist or to use tools like Photoshop/GIMP. You could tell the computer, and it would use Segment Anything to cut out the dog, use text and SD for the banner text and background, paste the dog in, seek your approval, search for the fastest, best, and cheapest banner printing service, and submit the job. Ten years ago this would have been sci-fi, but now it is a possibility; we just need to connect the dots, package it, and polish it into a good product.
The Second Batch of the Open Source AI Grants
We are trying to create a proper OS that boots to an LLM. A toy demo is available in the releases: https://github.com/trholding/llama2.c
Can we apply?
- Llama 2 Everywhere (L2E): Standalone, Binary Portable, Bootable Llama 2
- Play a hidden framebuffer Doom on TempleDOS
- Show HN: Tiny OS that boots to LLAMA2, has DOOM and easter eggs
A Linux OS that boots to LLAMA2 and has a Star Trek-like UI
I feel you!
Here it is: https://github.com/trholding/llama2.c#new---l2e-os-linux-ker...
OS that boots to LLAMA2 also runs LLAMA2 as kernel module
404? Try: https://github.com/trholding/llama2.c
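As background on what "runs LLAMA2 as kernel module" means structurally: a loadable Linux kernel module is a C object with init/exit hooks that the kernel calls on load and unload. The skeleton below is only a generic illustration of that structure, not the project's actual module; the l2e_* names are made up here.

/* Generic loadable-module skeleton (illustrative only, not the L2E code). */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* Runs on insmod; a real module would set up its inference service here. */
static int __init l2e_init(void)
{
    pr_info("l2e: module loaded\n");
    return 0;
}

/* Runs on rmmod; tear everything down. */
static void __exit l2e_exit(void)
{
    pr_info("l2e: module unloaded\n");
}

module_init(l2e_init);
module_exit(l2e_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative skeleton only");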
Stats
trholding/llama2.c is an open source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of llama2.c is C.