| | Open-Llama | modal-examples |
|---|---|---|
| Mentions | 7 | 9 |
| Stars | 637 | 604 |
| Growth | - | 7.3% |
| Activity | 10.0 | 9.5 |
| Last commit | about 1 year ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Open-Llama
-
(1/2) May 2023
Training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF (https://github.com/s-JoL/Open-Llama)
- Open-Llama: A “real” open-source project to train LLMs, not just checkpoints
- Open-Llama: A real open-source project to train LLMs
- Open-Llama: An Open Source Project for Training Language Models
-
OpenLLaMA: An Open Reproduction of LLaMA
Really exciting how fast fully pre-trained new models are appearing.
Here's another repo (with the same "open-llama" name) that has also been available on Hugging Face for a few weeks, trained on a different dataset.
https://github.com/s-JoL/Open-Llama
-
Build your own LLM 101
Open-Llama
- Open-Llama is an open source project that provides a complete set of training processes for building large-scale language models, from data preparation to tokenization, pre-training, instruction tuning, and reinforcement learning techniques such as RLHF.
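As a toy illustration of the earliest stages that the description above mentions (data preparation and tokenization), here is a minimal whitespace tokenizer sketch. Open-Llama itself trains a proper subword tokenizer, so every name below is purely illustrative and not from the project's API:

```python
from collections import Counter

def build_vocab(corpus, vocab_size=32000):
    # Toy whitespace tokenizer: count tokens and keep the most frequent.
    # Real LLM pipelines train a subword (e.g. BPE/SentencePiece-style) model.
    counts = Counter(tok for line in corpus for tok in line.split())
    specials = ["<pad>", "<unk>", "<bos>", "<eos>"]
    tokens = specials + [t for t, _ in counts.most_common(vocab_size - len(specials))]
    return {tok: i for i, tok in enumerate(tokens)}

def encode(text, vocab):
    # Map text to token ids, wrapping with begin/end-of-sequence markers.
    unk = vocab["<unk>"]
    return [vocab["<bos>"]] + [vocab.get(t, unk) for t in text.split()] + [vocab["<eos>"]]
```

The pre-training, instruction-tuning, and RLHF stages then consume streams of such token ids; this sketch only covers the front of the pipeline.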
modal-examples
-
Show HN: Real-time image autocomplete in <100 lines of code with SDXL Lightning
We made a small app for SDXL Lightning, running your own Python code on GPUs. It generates images in real time.
https://potatoes.ai/
We know there was a fal.ai post yesterday, and that got a lot of interest, but we also made this demo yesterday and didn't share — just wanted to mention it as an alternative option for people who like running their own code and custom models instead of using a prebuilt API provider.
The backend code is open-source too and you can deploy it yourself: https://github.com/modal-labs/modal-examples/blob/main/06_gpu_and_ml/stable_diffusion/stable_diffusion_xl_lightning.py
-
Our startup has docs issues and it is costing us prospects. What things can you share to help us?
The startup I work at is relatively good at documentation engineering. We have written code to test the code snippets in docstrings (https://github.com/modal-labs/pytest-markdown-docs) and code to do synthetic monitoring of the examples in our examples repo (https://github.com/modal-labs/modal-examples). We are also diligent about using Python's warnings library to handle API deprecation, and we treat deprecation warnings as errors internally, ensuring our own code samples and examples stay up to date.
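The "treat deprecation warnings as errors" idea can be done with Python's standard warnings filters. A minimal sketch (the `old_api` function is a hypothetical stand-in, not from any of the repos above):

```python
import warnings

# Escalate DeprecationWarning to an exception so that deprecated calls
# fail loudly in tests instead of passing silently.
warnings.simplefilter("error", DeprecationWarning)

def old_api():
    """Hypothetical deprecated function, kept only to emit the warning."""
    warnings.warn("old_api() is deprecated; use new_api()", DeprecationWarning)
    return 42

try:
    old_api()
    deprecated_call_failed = False
except DeprecationWarning as exc:
    deprecated_call_failed = True
    print(f"caught: {exc}")
```

In a pytest-based suite the same effect is usually configured declaratively, e.g. `filterwarnings = error::DeprecationWarning` in the pytest ini options, rather than calling `simplefilter` in code.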
-
OpenLLaMA: An Open Reproduction of LLaMA
You can get it running with one Python script on Modal.com :)
https://github.com/modal-labs/modal-examples/blob/main/06_gp...
-
Whisper's AI Modular Future
This demo lets you choose the podcast, and is open-source: https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
https://github.com/modal-labs/modal-examples/tree/main/06_gp...
Transcribes 1hr of audio in roughly 1min, using parallelisation across CPUs.
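The speedup described above comes from splitting the audio into segments and transcribing them concurrently. A minimal sketch of that fan-out pattern, with a stub in place of the real Whisper call and all names hypothetical (the actual example fans out across many Modal containers rather than a local pool):

```python
from concurrent.futures import ThreadPoolExecutor

SEGMENT_SECONDS = 30  # assumed chunk length; the real example chooses its own

def transcribe_segment(start):
    # Stand-in for a real Whisper call on audio[start : start + SEGMENT_SECONDS].
    return start, f"<text for {start}-{start + SEGMENT_SECONDS}s>"

def transcribe(total_seconds):
    # Fan out one task per segment, then stitch results back in time order.
    starts = range(0, total_seconds, SEGMENT_SECONDS)
    with ThreadPoolExecutor() as pool:
        pieces = sorted(pool.map(transcribe_segment, starts))
    return " ".join(text for _, text in pieces)
```

Because the segments are independent, wall-clock time scales roughly with the number of workers, which is how an hour of audio can come back in about a minute.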
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
This demo is open-source: https://github.com/modal-labs/modal-examples/tree/main/06_gp....
https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
-
Show HN: Stable Diffusion Pokémon Cards
It's become so easy to stitch ML models together, often without training most (or any) of them yourself.
*video demo:* https://youtu.be/mQsMuM8d4Qc
*cloud platform:* https://modal.com
*code*: https://github.com/modal-labs/modal-examples/tree/main/06_gp...
-
How can machine learning help us learn languages better?
Transcription - OpenAI just released Whisper. Check out what it can do with podcasts
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
Here's the source code.
What are some alternatives?
open_llama - OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
My-Medium-Articles-Friendly-Links - Friendly link to all of my medium articles
WAAS - Whisper as a Service (GUI and API with queuing for OpenAI Whisper)
AgileRL - Streamlining reinforcement learning with RLOps. State-of-the-art RL algorithms and tools.
EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
mlc-llm - Universal LLM Deployment Engine with ML Compilation
promptfoo - Test your prompts, agents, and RAGs. Use LLM evals to improve your app's quality and catch problems. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.
brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product