brev-cli vs SRNet

| | brev-cli | SRNet |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 197 | 220 |
| Growth | 1.0% | 0.9% |
| Activity | 7.9 | 10.0 |
| Latest commit | 4 days ago | over 4 years ago |
| Language | Go | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
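The exact activity formula isn't published; the text only says that recent commits weigh more than older ones. A minimal sketch of one plausible recency-weighted score, assuming exponential decay with a hypothetical 30-day half-life (both the function name `activity_score` and the half-life are illustrative assumptions, not the site's actual method):

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0, now=None):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age / half_life), so a commit made one half-life ago
    counts half as much as a commit made today."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 2.0 ** (-age_days / half_life_days)
    return score

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
recent = [now - timedelta(days=i) for i in range(10)]       # 10 commits in the last 10 days
stale = [now - timedelta(days=300 + i) for i in range(10)]  # 10 commits ~10 months ago

# Same commit count, but the recent project scores far higher.
assert activity_score(recent, now=now) > activity_score(stale, now=now)
```

Under a scheme like this, brev-cli (last commit 4 days ago) naturally outscoring SRNet (last commit over 4 years ago) on raw recency would be expected; the relative ranking shown in the table also folds in per-project normalization that isn't documented here.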
brev-cli
- Brev: Start fine-tuning and training models in < 10 minutes
- OpenLLaMA: An Open Reproduction of LLaMA
- Using the cloud or buying a GPU
I don't have a PC right now that will run Stable Diffusion. I could build one, but I think I'd need a pretty powerful GPU, which I'm not sure I can afford right now. I started using something called Brev https://brev.dev/ (no, I don't work there, just found it searching). It's pretty affordable and super easy to set up.
- Is there a good guide on how to train an AI to simulate your own artwork?
I just finished listening to an episode of the Practical AI podcast, where they talked with Nader Khalil from brev.dev. They talked a little about setting up DreamBooth and training it with ten images in about 4 minutes. I haven't tested it, but it's worth a try. Brev.dev is a way to set up virtual machines and development environments. Would love to hear from people who have used it.
- New AI edits images based on text instructions (instructPix2Pix/imaginAIry)
- Tensorbook
R.I.P. battery.
Personally I've been using Brev [1] for my cloud training: you get a cloud GPU instance that you can upgrade/downgrade on the fly, and it supports VS Code out of the box.
[1] https://brev.dev/
- Brev
SRNet
- New AI edits images based on text instructions (instructPix2Pix/imaginAIry)
The authors of TextStyleBrush cite SRNet, which is available at https://github.com/youdao-ai/SRNet but probably has worse quality. I don't know of others, but I have not looked very hard either.
What are some alternatives?
EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
imaginAIry - Pythonic AI generation of images and videos
sd_dreambooth_extension
open_llama - OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
stable-diffusion-webui - Stable Diffusion web UI
modal-examples - Examples of programs built using Modal
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
Open-Llama - The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
llama.cpp - LLM inference in C/C++