Llama2.c Alternatives
Similar projects and alternatives to llama2.c
-
FLiPStackWeekly
FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...
-
chatgpt-retrieval-plugin
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
-
towhee
Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
-
micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
-
dify
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
-
symmetric-ds
SymmetricDS is database replication and file synchronization software that is platform independent, web enabled, and database agnostic. It is designed to make bi-directional data replication fast, easy, and resilient. It scales to a large number of nodes and works in near real-time across WAN and LAN networks.
-
CML_AMP_Churn_Prediction_mlflow
Build a scikit-learn model to predict churn using customer telco data.
-
api-for-open-llm
OpenAI-style API for open large language models, letting you use LLMs just like ChatGPT! Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. A unified backend API for open-source large language models.
llama2.c reviews and mentions
-
Stuff we figured out about AI in 2023
For inference, less than 1 KLOC of pure, dependency-free C is enough (if you include the tokenizer and command-line parsing)[1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.
Training wouldn't be that much harder; Micrograd[2] is 200 LOC of pure Python, and 1,000 lines would probably be enough to train an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could probably take dozens of years, but the results would, in principle, be the same.
If you were writing in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2 or 3 KLOC. You'd still be one or two orders of magnitude off compared to a GPU cluster, but a lot better than naive, loopy Python. (See the matmul sketch after the links below for the kind of kernel such code spends its time in.)
[1] https://github.com/karpathy/llama2.c
[2] https://github.com/karpathy/micrograd
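As a concrete illustration of where those ~1 KLOC go: nearly all of the inference time in a pure-C implementation is spent in a loop like this (a minimal sketch in the spirit of llama2.c's matmul; the exact upstream code may differ slightly):

    // W (d,n) @ x (n,) -> xout (d,): the kernel that dominates CPU inference.
    void matmul(float* xout, const float* x, const float* w, int n, int d) {
        for (int i = 0; i < d; i++) {
            float val = 0.0f;
            for (int j = 0; j < n; j++) {
                val += w[i * n + j] * x[j];  // row-major weight lookup
            }
            xout[i] = val;
        }
    }

Everything else (RMSNorm, softmax, RoPE, sampling) is a comparatively small amount of straightforward C around calls to this kernel.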
-
Minimal neural network implementation
A bit off topic, but ML guru Andrej Karpathy has implemented a state-of-the-art Llama 2 model in plain C with no dependencies on third-party libraries. See repo.
-
WebLLM: Llama2 in the Browser
Related. I built karpathy's llama2.c (https://github.com/karpathy/llama2.c) without modifications to WASM and ran it in the browser. It was a fun exercise to directly compare native vs. web performance. I'm getting 80% of native performance on my M1 MacBook Air and haven't spent any time optimizing the WASM side. (A sketch of the export step follows the links below.)
Demo: https://diegomarcos.com/llama2.c-web/
Code:
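For anyone curious how a C program like run.c gets called from the browser, here is a minimal sketch of the Emscripten side (the function name and build flags are illustrative assumptions, not the actual llama2.c-web code):

    // Mark a C entry point so Emscripten keeps it callable from JS.
    #include <emscripten/emscripten.h>

    EMSCRIPTEN_KEEPALIVE
    int generate(const char* prompt, int steps) {
        /* run the transformer forward pass and emit tokens */
        return 0;
    }

    /* built with something like:
       emcc run.c -O3 -sALLOW_MEMORY_GROWTH=1 -o llama2.js */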
-
Lfortran: Modern interactive LLVM-based Fortran compiler
Would be cool for there to be a `llama2.f`, similar to https://github.com/karpathy/llama2.c, to demo its capabilities.
-
Llama2.c L2E LLM – Multi OS Binary and Unikernel Release
This is a fork of https://github.com/karpathy/llama2.c
karpathy's llama2.c is similar to llama.cpp, but it is written in plain C and the Python training code is available in the same repo. llama2.c's goal is to be an elegant single-file C implementation of inference and an elegant Python implementation of training.
His goal is for people to understand how Llama 2 and LLMs work, so he keeps it simple and sweet. As the project progresses, features and performance improvements will be added.
Currently it can infer baby (small) Story models trained by Karpathy at a fast pace. It can also infer Meta's Llama 2 7B models, but at a very slow rate, around 1 token per second.
So currently this can be used for learning or as a tech preview.
Our friendly fork tries to make it portable, performant, and more usable (bells and whistles) over time. Since we mirror upstream closely, the inference capabilities of our fork are similar, but slightly faster if compiled with acceleration (see the OpenMP sketch below). What we try to do differently is make this bootable (not there yet) and portable. Right now you get binary portability: the same run.com works on any x86_64 machine running any OS (possible thanks to the Cosmopolitan toolchain). The other part that works is unikernels: you can boot this as a unikernel in VMs (possible thanks to the Unikraft unikernel and toolchain).
For now, see our fork as a release-early, release-often toy tech demo. We plan to build it out into a useful product.
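The "compiled with acceleration" part refers to the kind of OpenMP build upstream llama2.c supports; a minimal sketch of what that looks like (standard OpenMP idiom, not necessarily the fork's exact code):

    // Each output row of the matmul is independent, so rows can run in parallel.
    #include <omp.h>

    void matmul(float* xout, const float* x, const float* w, int n, int d) {
        #pragma omp parallel for
        for (int i = 0; i < d; i++) {
            float val = 0.0f;
            for (int j = 0; j < n; j++) {
                val += w[i * n + j] * x[j];
            }
            xout[i] = val;
        }
    }

    /* built with something like:
       gcc -Ofast -fopenmp -march=native run.c -lm -o run */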
-
FLaNK Stack Weekly for 14 Aug 2023
-
Adding LLaMa2.c support for Web with GGML.JS
In my latest release of ggml.js, I've added support for Karpathy's llama2.c model.
-
Beginner's Guide to Llama Models
I really enjoyed Andrej Karpathy's llama2.c project (https://github.com/karpathy/llama2.c), which runs through creating and running a miniature Llama 2 architecture model from scratch.
-
How to scale LLMs better with an alternative to transformers
- https://github.com/karpathy/llama2.c
I think there may be some applications in this limited space that are worth looking into. You won't replicate GPT-anything, but it may be possible to solve some nice problems much more efficiently than one would expect at first.
-
A simple guide to fine-tuning Llama 2
It does now: https://github.com/karpathy/llama2.c#metas-llama-2-models
-
Stats
karpathy/llama2.c is an open source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of llama2.c is C.