ubicloud vs llama-cpp-python

| | ubicloud | llama-cpp-python |
|---|---|---|
| Mentions | 16 | 55 |
| Stars | 3,065 | 6,658 |
| Growth | 3.9% | - |
| Activity | 9.9 | 9.8 |
| Latest commit | 4 days ago | 1 day ago |
| Language | Ruby | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ubicloud
- FLaNK AI for 11 March 2024
- Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
The docs still say the Elastic license is used, but judging by https://github.com/ubicloud/ubicloud/blob/main/LICENSE the project seems to have switched to the GNU Affero General Public License v3.0 within the last day.
- GitHub - ubicloud/ubicloud: Open, free, and portable cloud. Elastic compute, block storage (non replicated), and virtual networking services in public alpha.
- Ask HN: How does your company balance test coverage and deploy speed?
At Ubicloud, we mandate 100% line and branch coverage on every PR (https://github.com/ubicloud/ubicloud). We also have an E2E test suite that runs periodically and with every commit. We don't feel the tests slow us down; they actually make us faster, since we trust each payload more and can safely skip many manual checks that would otherwise be needed.
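As a rough illustration of that kind of gate (not Ubicloud's actual setup, which is a Ruby codebase), the same "fail the build if line or branch coverage drops below 100%" rule could be expressed with Python's coverage.py API:

```python
import sys

import coverage
import pytest

# Illustrative sketch only: Ubicloud's real gate lives in a Ruby test suite;
# this shows the same "100% line and branch coverage or the PR fails" idea.
cov = coverage.Coverage(branch=True)   # branch=True also measures branch coverage
cov.start()
test_result = pytest.main(["tests/"])  # run the test suite under coverage
cov.stop()
cov.save()

total = cov.report()                   # prints the report and returns the total percentage
if test_result != 0 or total < 100.0:
    sys.exit(1)                        # block the PR on test failure or any uncovered line/branch
```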
- Ubicloud – open, free and portable cloud
> Taken from here: https://ubicloud.com/
Am I the only one getting a certificate error browsing there?
- Ask HN: Thoughts about Elastic V2, SSPL, or mixed software licenses?
Link to our project: https://github.com/ubicloud/ubicloud
We’re choosing Elastic V2 for three reasons: (1) We’re planning to monetize through a managed service and we’d like the license to support that, (2) Later if we change our mind, we think it’s easier on our users if we go from a restrictive license to a more permissive one, and (3) The Elastic V2 license is much simpler than its cousin, Server Side Public License (SSPL).
That said, Elastic V2 is a new license and doesn't seem to be as popular as SSPL. Also, some projects out there mix and match multiple licenses in their repo to be able to call themselves open source.
Any insights / feedback on Elastic V2 or software licenses in general?
- Attribute-Based Access Control (ABAC) Implementation in 130 Lines of Code
llama-cpp-python
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
There's a Python binding for llama.cpp which is actively maintained and has worked well for me: https://github.com/abetlen/llama-cpp-python
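For reference, basic usage of the binding is only a few lines; the model path below is a placeholder, and any llama.cpp-compatible GGUF file should work:

```python
from llama_cpp import Llama

# The GGUF path is a placeholder; point it at any model llama.cpp supports.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Returns an OpenAI-style completion dict with a "choices" list.
output = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```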
- FLaNK AI for 11 March 2024
- OpenAI: Memory and New Controls for ChatGPT
I'll share the core bit; it took a while to figure out the right format. My main script is a hot mess using embeddings with SentenceTransformer, so I won't share that yet. For example, last night I did a PR for llama-cpp-python showing how Phi might be used with JSON, only for the author to write almost exactly the same code at pretty much the same time. https://github.com/abetlen/llama-cpp-python/pull/1184
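For context, a minimal sketch of JSON-constrained output with llama-cpp-python's chat completion API (the model path is a placeholder and the exact contents of the linked PR are not reproduced here; this only illustrates the general pattern):

```python
from llama_cpp import Llama

# Model path is a placeholder; any chat-capable GGUF model should work here.
llm = Llama(model_path="./models/phi-2.Q4_K_M.gguf", n_ctx=2048)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You answer only with valid JSON."},
        {"role": "user", "content": "List three fruits and their colors."},
    ],
    response_format={"type": "json_object"},  # constrain the output to valid JSON
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```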
- TinyLlama LLM: A Step-by-Step Guide to Implementing the 1.1B Model on Google Colab
Python Bindings for llama.cpp
- Mistral-8x7B-Chat
- Running Mistral LLM on Apple Silicon Using Apple's MLX Framework Is Much Faster
If the model could be made to work with llama.cpp, then https://github.com/abetlen/llama-cpp-python might be more compact. llama.cpp only supports a limited list of model types though.
- Run ChatGPT-like LLMs on your laptop in 3 lines of code
- Code Llama, a state-of-the-art large language model for coding
https://github.com/abetlen/llama-cpp-python has a web server mode that replicates OpenAI's API, IIRC, and the readme shows it already has Docker builds.
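As a sketch of how that server mode can be used (the model path, port, and client snippet are assumptions based on the project's OpenAI-compatible defaults, not taken from the comment):

```python
# Start the server separately, e.g.:
#   python -m llama_cpp.server --model ./models/codellama-7b-instruct.Q4_K_M.gguf
# By default it listens on http://localhost:8000 and mimics the OpenAI API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # the server serves whatever model it was started with
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(resp.choices[0].message.content)
```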
- Meta: Code Llama, an AI Tool for Coding
LocalAI https://localai.io/ and LMStudio https://lmstudio.ai/ both have fairly complete OpenAI compatibility layers. llama-cpp-python has a FastAPI server as well: https://github.com/abetlen/llama-cpp-python/blob/main/llama_... (as of this moment the GGUF update hasn't been merged yet, though)
- First steps with llama
I went with Python, llama-cpp-python, since my goal is just to get a small project up and running locally.
What are some alternatives?
manageiq - ManageIQ Open-Source Management Platform
LocalAI - 🤖 The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and also offers voice cloning.
fog-azure-rm - Fog for Azure Resource Manager
intel-extension-for-pytorch - A Python package that extends the official PyTorch to easily obtain performance gains on Intel platforms
cloudfront-signer - Ruby gem for signing AWS CloudFront private content URLs and streaming paths.
llama.cpp - LLM inference in C/C++
AWS SDK for Ruby - The official AWS SDK for Ruby.
text-generation-inference - Large Language Model Text Generation Inference
forem - For empowering community 🌱
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
homebrew-portable-ruby - 🚗 Versions of Ruby that can be installed and run from anywhere on the filesystem.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.