rllama vs litestar

| Metric | rllama | litestar |
|---|---|---|
| Mentions | 7 | 27 |
| Stars | 519 | 4,453 |
| Growth | - | 3.1% |
| Activity | 6.2 | 9.8 |
| Latest commit | 3 months ago | 4 days ago |
| Language | Rust | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
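The recency weighting described above can be sketched in a few lines. This is purely illustrative; the comparison site's actual formula is not public, and the half-life parameter here is an assumption:

```python
import math
from datetime import date, timedelta


def activity_score(commit_dates, today, half_life_days=90.0):
    """Illustrative recency-weighted activity: each commit contributes
    exp(-age / half_life), so recent commits count more than older ones.
    This is a sketch, not the comparison site's real formula."""
    return sum(
        math.exp(-(today - d).days / half_life_days) for d in commit_dates
    )


ref = date(2024, 1, 1)
recent = [ref - timedelta(days=k) for k in (1, 5, 10)]
stale = [ref - timedelta(days=k) for k in (300, 400, 500)]

# Three recent commits outweigh three old ones, as described above.
print(activity_score(recent, ref) > activity_score(stale, ref))  # True
```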
rllama
-
Ask HN: Who wants to be hired? (July 2023)
Location: San Francisco
Remote: No preference, as long as I don't have to move far from Bay Area
Willing to relocate: No
Technologies: C, Rust, Golang, Haskell, Lisp, Python, Lua, OpenGL, SQLite3, JavaScript, PostgreSQL, AWS EC2, S3, ECS, Batch.
Resume: https://www.linkedin.com/in/mikjuola
Email: [email protected]
---
I've been working in the Bay Area since 2015, most recently at Pinterest. At work, I've built big data pipelines, designed batch job systems, computed metrics, handled billing APIs, and written lots of Python, Go, and Java while working with AWS, i.e. backend and data engineering stuff.
But I'm trying to look for work that's more in line with what I do in my free time: challenging low-level C or Rust programming, machine learning implementations (see e.g. this thing I made: https://github.com/Noeda/rllama/), graphics programming or research-type work, and uncommon programming languages.
If you scroll through my random crap repositories you can see what kind of things I'm interested in: https://github.com/Noeda?tab=repositories
-
State-of-the-art open-source chatbot, Vicuna-13B, just released model weights
No, my project is called rllama. No relation to GGML. https://github.com/Noeda/rllama
-
Where can I learn more about SIMD, CPU intrinsics and the like in the context of Rust?
I have seen some Rust attempts as well such as https://github.com/Noeda/rllama/ but they are still way behind the C++ ones. This seems like an interesting space to get into.
-
Show HN: Alpaca.cpp – Run an Instruction-Tuned Chat-Style LLM on a MacBook
I ran it on a machine with 128 GB of RAM and a Ryzen 5950X. It's not fast, 4 seconds per token, but it just about fits without swapping. https://github.com/Noeda/rllama/
-
Llama.rs – Rust port of llama.cpp for fast LLaMA inference on CPU
I've counted three different Rust LLaMA implementations on r/rust subreddit this week:
https://github.com/Noeda/rllama/ (pure Rust+OpenCL)
https://github.com/setzer22/llama-rs/ (ggml based)
https://github.com/philpax/ggllama (also ggml based)
There's also a GitHub issue on setzer's repo discussing collaboration across these separate efforts: https://github.com/setzer22/llama-rs/issues/4
- Rust+OpenCL+AVX2 implementation of LLaMA inference code
- Pure Rust CPU and OpenCL implementation of LLaMA language model
litestar
-
Show HN: Mountaineer – Webapps in Python and React
I wonder what happened after. It looks like the commenter/creator moved on:
https://github.com/litestar-org/litestar/commits?author=Gold...
-
Litestar – powerful, flexible, and highly performant Python ASGI framework
What would you like to see here? Could you perhaps open an issue at https://github.com/litestar-org/litestar so we can track and implement this?
If you just need a client, what you need should be available out of the box, unless you want something more hands-off.
Here is also a good article for example: https://dev.to/pbaletkeman/secure-python-litestar-site-with-...
-
Show HN: Build your startup or side project faster with these SaaS templates
I thought Litestar was the recommendation these days over FastAPI. Is it not?
https://litestar.dev/
-
Killed by open-source software. Companies that have had a significant market share stolen from open-source alternatives.
Litestar - Litestar has been picking up quite a lot of steam in the past year, since the lead maintainer of its largest open-source competitor (FastAPI) seems unable to prioritize the community feedback and concerns people have about the project. You literally can't mention FastAPI on this site without people bringing up Litestar.
-
It's Christmas day. You wake up, run to the tree, tear open the largest package with your name on it... FastAPI has added _____?
A redirect to https://litestar.dev/
-
Django 5.0 Is Released
What's the preferred Python Web Framework these days?
I've read a lot of love for Litestar (formerly Starlite), since it seems people prefer it over FastAPI, Flask, etc.
https://litestar.dev
- Ask HN: If you were to build a web app today what tech stack would you choose?
-
Don't put your business logic in the controllers
If your project is built using litestar then you have controllers.
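The "thin controller" idea the comment alludes to can be sketched without any framework: the controller only translates request input and shapes the response, while business rules live in a plain service object that can be tested on its own. All names here (`OrderService`, `OrderController`, `create_order`) are hypothetical, not Litestar APIs:

```python
from dataclasses import dataclass


@dataclass
class Order:
    item: str
    quantity: int


class OrderService:
    """Business logic lives here: validation and domain rules,
    independent of any web framework."""

    def create_order(self, item: str, quantity: int) -> Order:
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        return Order(item=item, quantity=quantity)


class OrderController:
    """Thin controller: parses input, delegates to the service,
    and shapes the response. No business rules here."""

    def __init__(self, service: OrderService) -> None:
        self.service = service

    def post(self, payload: dict) -> dict:
        order = self.service.create_order(payload["item"], int(payload["quantity"]))
        return {"item": order.item, "quantity": order.quantity}


controller = OrderController(OrderService())
print(controller.post({"item": "widget", "quantity": "3"}))  # {'item': 'widget', 'quantity': 3}
```

The payoff is that `OrderService` can be unit-tested, reused from a CLI or background job, or moved to a different framework without touching the HTTP layer.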
-
Ask HN: Who wants to be hired? (October 2023)
System integration: Odoo, ERPNext, Authentik, Authelia, Teleport, and more.
We are Hexcode Technologies, a full-stack software development agency based in Myanmar. We offer competitive rates and are available for full-team or project-based development. With 30+ completed projects, we have a strong track record of delivering on time and within budget. Our flexible rates make us an ideal choice for startups, and we have experience building full-stack software for banks.
Our team consists of 3 Fullstack developers, 1 Scrum Master, 1 Frontend Specialist, 1 Deep Learning specialist, 1 Backend Specialist, and 1 UI/UX professional.
I am also available for hire as a consultant and full-stack developer/CTO. I have 20+ years of experience and actively contribute to open-source projects.
Focus: Python [Litestar, Django] + Typescript [Svelte, React] + Tailwind.
Contributor to Litestar: [litestar.dev](https://litestar.dev), [GitHub](https://github.com/litestar-org/litestar).
- Show HN: I built a Python web framework from scratch
What are some alternatives?
llama.cpp - LLM inference in C/C++
fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
apiflask - A lightweight Python web API framework.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
streamsync - No-code in the front, Python in the back. An open-source framework for creating data apps.
ultraviolet - A wide linear algebra crate for games and graphics.
Flask - The Python micro framework for building web applications.
dream-html - Render HTML, SVG, MathML, htmx markup from your OCaml Dream backend server
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
live_svelte - Svelte inside Phoenix LiveView with seamless end-to-end reactivity