Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rllama
-
Ask HN: Who wants to be hired? (July 2023)
Location: San Francisco
Remote: No preference, as long as I don't have to move far from Bay Area
Willing to relocate: No
Technologies: C, Rust, Golang, Haskell, Lisp, Python, Lua, OpenGL, SQLite3, JavaScript, PostgreSQL, AWS EC2, S3, ECS, Batch.
Resume: https://www.linkedin.com/in/mikjuola
Email: [email protected]
---
I've been working in the Bay Area since 2015, most recently at Pinterest. At work I've built big data pipelines, designed some batch job systems, computed metrics, handled billing APIs, and written lots of Python, Go and Java while working with AWS, i.e. backend and data engineering stuff.
But I'm trying to look for work that's more in line with what I do in my free time: challenging low-level C or Rust programming, machine learning implementations (see e.g. this thing I made: https://github.com/Noeda/rllama/), graphics programming or research-type work, and uncommon programming languages.
If you scroll through my random crap repositories you can see what kind of things I'm interested in: https://github.com/Noeda?tab=repositories
-
State-of-the-art open-source chatbot, Vicuna-13B, just released model weights
No, my project is called rllama. No relation to GGML. https://github.com/Noeda/rllama
-
Where can I learn more about SIMD, CPU intrinsics and the like in the context of Rust?
I have seen some Rust attempts as well, such as https://github.com/Noeda/rllama/, but they are still way behind the C++ ones. This seems like an interesting space to get into.
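For a flavor of what CPU intrinsics look like in Rust, here is a minimal sketch of an AVX2 dot product using std::arch. This is a generic illustration under the assumption of an x86_64 target, not code taken from rllama:

    // Minimal AVX2 dot product with std::arch intrinsics (x86_64 only).
    // Generic illustration; not taken from the rllama codebase.
    #[target_feature(enable = "avx2", enable = "fma")]
    unsafe fn dot_avx2(a: &[f32], b: &[f32]) -> f32 {
        use std::arch::x86_64::*;
        assert_eq!(a.len(), b.len());
        let chunks = a.len() / 8;
        let mut acc = _mm256_setzero_ps();
        for i in 0..chunks {
            // Load 8 floats from each slice and fused-multiply-add into the accumulator.
            let va = _mm256_loadu_ps(a.as_ptr().add(i * 8));
            let vb = _mm256_loadu_ps(b.as_ptr().add(i * 8));
            acc = _mm256_fmadd_ps(va, vb, acc);
        }
        // Horizontal sum of the 8 accumulator lanes.
        let mut lanes = [0.0f32; 8];
        _mm256_storeu_ps(lanes.as_mut_ptr(), acc);
        let mut sum: f32 = lanes.iter().sum();
        // Scalar tail for elements that don't fill a whole 8-wide chunk.
        for i in (chunks * 8)..a.len() {
            sum += a[i] * b[i];
        }
        sum
    }

    fn main() {
        let a: Vec<f32> = (0..1000).map(|i| i as f32 * 0.01).collect();
        let b: Vec<f32> = (0..1000).map(|i| (i % 7) as f32).collect();
        // Check CPU support at runtime before calling a #[target_feature] function.
        if is_x86_feature_detected!("avx2") && is_x86_feature_detected!("fma") {
            let d = unsafe { dot_avx2(&a, &b) };
            println!("dot = {}", d);
        } else {
            println!("AVX2/FMA not available on this CPU");
        }
    }

The runtime feature check matters because a #[target_feature] function is only sound to call on CPUs that actually support those instructions.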
-
Show HN: Alpaca.cpp – Run an Instruction-Tuned Chat-Style LLM on a MacBook
I ran it on a 128 GB RAM machine with a Ryzen 5950X. It's not fast, 4 seconds per token, but it just about fits without swapping. https://github.com/Noeda/rllama/
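For a rough sense of why a large model only just fits, the weights alone take roughly parameter count times bytes per weight. A small back-of-the-envelope sketch, assuming 16-bit weights and the published LLaMA parameter counts (the comment above doesn't say which model size or precision was used):

    // Back-of-the-envelope RAM needed just for the weights of each LLaMA size.
    // Assumes 16-bit (2-byte) weights; real usage also needs activations and
    // the key/value cache, so treat these as lower bounds.
    fn main() {
        const GIB: f64 = 1024.0 * 1024.0 * 1024.0;
        // Approximate published parameter counts for the original LLaMA sizes.
        let models: [(&str, u64); 4] = [
            ("LLaMA-7B", 6_700_000_000),
            ("LLaMA-13B", 13_000_000_000),
            ("LLaMA-30B", 32_500_000_000),
            ("LLaMA-65B", 65_200_000_000),
        ];
        for (name, params) in models {
            let bytes = params * 2; // 2 bytes per weight at f16
            println!("{}: ~{:.0} GiB of weights at 16-bit precision", name, bytes as f64 / GIB);
        }
    }

At 16-bit precision the 65B model works out to roughly 121 GiB of weights, which would explain only just fitting in 128 GB of RAM.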
-
Llama.rs – Rust port of llama.cpp for fast LLaMA inference on CPU
I've counted three different Rust LLaMA implementations on the r/rust subreddit this week:
https://github.com/Noeda/rllama/ (pure Rust+OpenCL)
https://github.com/setzer22/llama-rs/ (ggml based)
https://github.com/philpax/ggllama (also ggml based)
There's also a GitHub issue on setzer22's repo discussing collaborating a bit on these separate efforts: https://github.com/setzer22/llama-rs/issues/4
- Rust+OpenCL+AVX2 implementation of LLaMA inference code
- Pure Rust CPU and OpenCL implementation of LLaMA language model
What are some alternatives?
llama.cpp - LLM inference in C/C++
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
litestar - Production-ready, Light, Flexible and Extensible ASGI API framework | Effortlessly Build Performant APIs
alpaca-lora - Instruct-tune LLaMA on consumer hardware
voodoo - Profile
ultraviolet - A wide linear algebra crate for games and graphics.
Resume
orblivion - My Open Source Portfolio
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
geoquest - A geography game