modal-examples vs llama.cpp

| | modal-examples | llama.cpp |
| --- | --- | --- |
| Mentions | 9 | 779 |
| Stars | 572 | 57,984 |
| Growth | 5.6% | - |
| Activity | 9.5 | 10.0 |
| Latest commit | 5 days ago | 4 days ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
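The exact formula isn't published; purely as an illustration of "recent commits weigh more", a score of this general shape could be computed with an exponential decay on commit age and then ranked against the other tracked projects. Everything below (function name, half-life, weighting) is a hypothetical sketch, not the site's actual metric.

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Illustrative only: each commit contributes a weight that halves every
    `half_life_days`, so recent commits count more than older ones.
    The real site presumably normalizes such a raw score into a 0-10
    relative ranking across all tracked projects."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += 0.5 ** (age_days / half_life_days)
    return score
```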
modal-examples
-
Show HN: Real-time image autocomplete in <100 lines of code with SDXL Lightning
We made a small app with SDXL Lightning, running our own Python code on GPUs. It generates images in real time.
https://potatoes.ai/
We know there was a fal.ai post yesterday, and that got a lot of interest, but we also made this demo yesterday and didn't share — just wanted to mention it as an alternative option for people who like running their own code and custom models instead of using a prebuilt API provider.
The backend code is open-source too and you can deploy it yourself: https://github.com/modal-labs/modal-examples/blob/main/06_gpu_and_ml/stable_diffusion/stable_diffusion_xl_lightning.py
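The generation itself is only a few lines of diffusers code. Here is a hedged sketch that roughly follows the SDXL-Lightning model card's 4-step UNet recipe rather than the exact code in the linked example; the prompt and output filename are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "ByteDance/SDXL-Lightning"
ckpt = "sdxl_lightning_4step_unet.safetensors"  # distilled 4-step UNet weights

# Load the Lightning UNet into the SDXL base pipeline.
unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo, ckpt), device="cuda"))
pipe = StableDiffusionXLPipeline.from_pretrained(
    base, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# The distilled checkpoint expects "trailing" timestep spacing.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

# 4 steps, no classifier-free guidance: fast enough to feel interactive.
image = pipe("a plate of potatoes, studio lighting",
             num_inference_steps=4, guidance_scale=0).images[0]
image.save("output.png")
```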
-
Our startup has docs issues and it is costing us prospects. What things can you share to help us?
The startup I work at is pretty good at documentation engineering. We have written code to test the code snippets in our docstrings (https://github.com/modal-labs/pytest-markdown-docs) and code to do synthetic monitoring of the examples in our examples repo (https://github.com/modal-labs/modal-examples). We are also diligent about using Python's warnings library to handle API deprecation, and we treat deprecation warnings as errors internally, which keeps our own code samples and examples up to date.
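A minimal sketch of that deprecation-warning pattern (illustrative names, not Modal's actual code): old entry points emit `DeprecationWarning`, and the internal test configuration promotes those warnings to errors so stale examples fail CI.

```python
import warnings

def new_api(*args, **kwargs):
    """The replacement entry point."""
    ...

def old_api(*args, **kwargs):
    # Kept for backwards compatibility; steer callers to new_api().
    warnings.warn("old_api() is deprecated, use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return new_api(*args, **kwargs)

# In the internal test suite, promote deprecation warnings to errors so that
# any example or code sample touching a deprecated path fails immediately.
# (Equivalently in pytest config: filterwarnings = error::DeprecationWarning)
warnings.filterwarnings("error", category=DeprecationWarning)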
-
OpenLLaMA: An Open Reproduction of LLaMA
You can get it running with one Python script on Modal.com :)
https://github.com/modal-labs/modal-examples/blob/main/06_gp...
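For a sense of what "one Python script" means here, a hedged sketch of a Modal app that loads OpenLLaMA with transformers on a cloud GPU; the GPU type, image contents, and prompt are illustrative, the API shown is the current modal client, and the linked example is the authoritative version.

```python
import modal

app = modal.App("openllama-demo")
image = modal.Image.debian_slim().pip_install(
    "transformers", "torch", "sentencepiece", "accelerate"
)

@app.function(gpu="A10G", image=image, timeout=600)
def generate(prompt: str) -> str:
    # Runs inside the cloud container defined by `image`.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "openlm-research/open_llama_3b"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.float16, device_map="auto"
    )
    ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
    out = model.generate(ids, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

@app.local_entrypoint()
def main():
    # `modal run this_script.py` builds the image, runs generate() on a
    # cloud GPU, and streams the result back to your terminal.
    print(generate.remote("The capital of France is"))
```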
-
Whispers AI Modular Future
This demo lets you choose the podcast, and is open-source: https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
https://github.com/modal-labs/modal-examples/tree/main/06_gp...
Transcribes 1hr of audio in roughly 1min, using parallelisation across CPUs.
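A hedged sketch of that chunk-and-fan-out idea, done locally with processes instead of Modal containers; the real example segments audio on silence, while this placeholder uses fixed 5-minute chunks and the openai-whisper and pydub packages.

```python
import tempfile
from concurrent.futures import ProcessPoolExecutor

import whisper  # openai-whisper
from pydub import AudioSegment

CHUNK_MS = 5 * 60 * 1000  # fixed 5-minute chunks; the real example splits on silence

def transcribe_chunk(path: str) -> str:
    model = whisper.load_model("base")  # one model per worker process
    return model.transcribe(path)["text"]

def transcribe(path: str, workers: int = 8) -> str:
    audio = AudioSegment.from_file(path)
    chunk_paths = []
    for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
        f = tempfile.NamedTemporaryFile(suffix=f"_{i}.mp3", delete=False)
        audio[start:start + CHUNK_MS].export(f.name, format="mp3")
        chunk_paths.append(f.name)
    # Fan the chunks out across worker processes; on Modal the same map
    # runs across many containers instead, which is where the speedup comes from.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return " ".join(pool.map(transcribe_chunk, chunk_paths))

if __name__ == "__main__":
    print(transcribe("episode.mp3"))
```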
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
This demo is open-source: https://github.com/modal-labs/modal-examples/tree/main/06_gp....
https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
-
Show HN: Stable Diffusion Pokémon Cards
It's become so easy to stick together ML models, often without training most or all of them yourself.
*video demo:* https://youtu.be/mQsMuM8d4Qc
*cloud platform:* https://modal.com
*code*: https://github.com/modal-labs/modal-examples/tree/main/06_gp...
-
How can machine learning help us learn languages better?
Transcription - OpenAI just released Whisper. Check out what it can do with podcasts
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
Here's the source code.
llama.cpp
-
IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF model on Hugging Face.
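Once the server is running with one of those GGUF files, querying it from Python is a single HTTP call to its /completion endpoint; a minimal sketch, assuming the server is listening on the default local port 8080 and that the prompt and sampling settings are placeholders.

```python
import json
import urllib.request

# Assumes a llama.cpp server is already running locally with a GGUF model loaded.
payload = {
    "prompt": "Write a Python function that reverses a string.",
    "n_predict": 128,     # maximum number of tokens to generate
    "temperature": 0.2,
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```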
-
Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
- Xmake: A modern C/C++ build tool
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
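Not the tutorial itself, but as a hedged sketch of the LoRA step using Hugging Face transformers and peft (the base model, dataset, and hyperparameters below are placeholders); the resulting adapters can afterwards be merged and converted to GGUF for llama.cpp.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with low-rank adapters on the attention projections.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy dataset: one training example per line of text; swap in your own data.
data = load_dataset("text", data_files={"train": "train.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the LoRA adapter weights
```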
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
gpt4all - gpt4all: run open-source LLMs anywhere
WAAS - Whisper as a Service (GUI and API with queuing for OpenAI Whisper)
EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
ggml - Tensor library for machine learning
brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM