| | omnitool | llama.cpp |
|---|---|---|
| Mentions | 2 | 791 |
| Stars | 117 | 59,389 |
| Growth | 3.4% | - |
| Activity | 6.7 | 10.0 |
| Latest commit | 13 days ago | 4 days ago |
| Language | TypeScript | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
omnitool
-
Nitro: A fast, lightweight 3MB inference server with OpenAI-Compatible API
Or you could use something like omnitool (https://github.com/omnitool-ai/omnitool) to interface with both cloud and local AI, not limited to LLMs.
-
A new old kind of R&D lab
Very interesting read. My name is Emmanuel Lusinchi, and together with my colleague Georg Zoeller, we represent Omnitool.ai, a fledgling platform in the AI landscape with a mission that resonates deeply with the ethos of Answer.AI.
Firstly, congratulations on the launch of Answer.AI and the vision you've set forth. We have been following your work with great admiration - I took the fast.ai course what feels like an eternity ago (three years) and have been recommending it to anyone interested in foundational AI. But more to your post's point: your commitment to harnessing AI's potential to create practical end-user products is not only inspiring but also aligns with our own philosophy.
We have developed an open-source "AI lab-in-a-box": a platform that seamlessly integrates a multitude of AI models, both local and cloud-hosted, through a single unified interface. The aim is to simplify access to the latest developments in AI, both on the technical side (the knowledge needed to run AI models and connect them together) and on the financial side (access to GPUs). We believe this accelerates experimentation and iteration, and also facilitates teaching AI - giving teachers a simple, consistent tool and giving students hands-on experience with the latest models, so they can experience firsthand complex and often too-abstract concepts such as bias. By reducing friction and lowering barriers to entry, our platform aims to democratize access to the latest AI technologies, providing almost anyone with the tools and flexibility needed to push the boundaries of what's possible with AI.
And we do believe that our platform could serve as a valuable tool in your R&D processes, speeding up Answer.AI's ability to rapidly prototype and refine applications that leverage foundational research breakthroughs.
Moreover, we share your concern about the widening gap in understanding AI's capabilities and its implications. We believe that transparency, education, and open-source collaboration are key to bridging this gap, ensuring that AI's benefits are widely distributed and its risks are responsibly managed.
We are reaching out to explore potential avenues for collaboration. Whether it's helping you evaluate and perhaps integrate our platform into your R&D workflow, co-developing new tools, or simply engaging in a dialogue to share insights, we are eager to contribute to the incredible work you're undertaking at Answer.AI.
We would be honored to discuss this further with you. Please find more information about our platform and its capabilities on our GitHub: https://github.com/omnitool-ai/omnitool. We are also open to setting up a demonstration or a meeting at your convenience to explore synergies between our organizations.
Warm regards,
Emmanuel Lusinchi
Co-founder, Omnitool.ai
[email protected]
llama.cpp
-
IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (which is what ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
the server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF model on Hugging Face.
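To make the server comment above concrete, here is a minimal Python sketch of calling the llama.cpp example server's /completion endpoint. The endpoint name and the `prompt`/`n_predict` fields follow the server's README; the host, port, model path, and helper names are assumptions for illustration.

```python
import json
import urllib.request

def build_completion_request(prompt: str, n_predict: int = 64) -> bytes:
    # The llama.cpp example server's /completion endpoint accepts a JSON
    # body with `prompt` and `n_predict` (max tokens to generate), among
    # other optional fields documented in its README.
    return json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()

def complete(prompt: str, host: str = "http://127.0.0.1:8080") -> str:
    # Assumes a server is already running with a local GGUF, e.g.:
    #   ./server -m model.gguf
    req = urllib.request.Request(
        host + "/completion",
        data=build_completion_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The server returns JSON whose `content` field holds the text.
        return json.loads(resp.read())["content"]
```

With a server running locally, `complete("Q: What is a GGUF file? A:")` would return the generated continuation as a string.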
-
Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
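As a back-of-the-envelope illustration of those limits, the arithmetic works out as below. The 1/2 and 3/4 fractions come from the comment above; the helper is ours and only does the math, it does not query the OS.

```python
def metal_gpu_budget(total_ram_gb: float) -> dict:
    # Default Metal limits as described above: roughly 1/2 of system RAM
    # usable by one process, and 3/4 of it by the GPU overall.
    return {
        "per_process_gb": total_ram_gb / 2,
        "gpu_overall_gb": total_ram_gb * 3 / 4,
    }

# A 64 GB Mac would give roughly 32 GB to a single llama.cpp process
# and 48 GB to the GPU overall, before any kernel-level tweaks.
budget = metal_gpu_budget(64)
```

So a model whose weights plus KV cache exceed half of RAM will not fit in one process under the defaults, which is why the linked discussion resorts to a kernel hack.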
- Xmake: A modern C/C++ build tool
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
gpt4all - gpt4all: run open-source LLMs anywhere
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
safetensors - Simple, safe way to store and distribute tensors
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
alpaca-lora - Instruct-tune LLaMA on consumer hardware