metaseq vs cupscale

| | metaseq | cupscale |
| --- | --- | --- |
| Mentions | 53 | 81 |
| Stars | 6,389 | 2,067 |
| Growth | 0.4% | - |
| Activity | 6.2 | 0.0 |
| Last commit | 11 days ago | over 1 year ago |
| Language | Python | C# |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
metaseq
- Training great LLMs from ground zero in the wilderness as a startup
This is a super important issue that affects the pace and breadth of iteration in AI almost as much as raw hardware improvements do. The blog is fun but somewhat shallow, and not technical or very surprising if you’ve worked with clusters of GPUs in any capacity over the years. (I liked the perspective of a former Googler, but I’m not sure why past colleagues would recommend JAX over PyTorch for LLMs outside of Google.) I hope this newco eventually releases a more technical report about their training adventures, like the PDF file here: https://github.com/facebookresearch/metaseq/tree/main/projec...
- Chronicles of OPT Development
- See the pitch memo that raised €105M for four-week-old startup Mistral
The number of people who can actually pre-train a true LLM is very small.
It remains a major feat with many tweaks and tricks. Case in point: the 114-page OPT-175B logbook [1]
[1] https://github.com/facebookresearch/metaseq/blob/main/projec...
- Technology: "Austro-ChatGPT", but no money for testing
- OPT (Open Pre-trained Transformers) is a family of NLP models trained on billions of tokens of text obtained from the internet
- Current state-of-the-art open source LLM
- Elon Musk Buys Ten Thousand GPUs for Secretive AI Project
Reliability at scale: take a look at the OPT training log book for their 175B model run. It needed a lot of babysitting. In my experience, that scale of TPU training run requires a restart about once every 1-2 weeks—and they provide the middleware to monitor the health of the cluster and pick up on hardware failures.
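Much of that babysitting boils down to detecting a dead run, finding the newest usable checkpoint, and relaunching from it. A minimal sketch of the resume step, assuming a hypothetical `ckpt_<step>.pt` naming scheme (real trainers like metaseq have far more machinery around this):

```python
import glob
import os
import re

def latest_checkpoint(ckpt_dir: str):
    """Return (path, step) of the newest checkpoint, or (None, 0) if none exist."""
    best, best_step = None, 0
    for path in glob.glob(os.path.join(ckpt_dir, "ckpt_*.pt")):
        m = re.search(r"ckpt_(\d+)\.pt$", path)
        if m and int(m.group(1)) > best_step:
            best, best_step = path, int(m.group(1))
    return best, best_step
```

A supervising script would call this after each crash or hardware failure and relaunch training from `step`, which is why frequent checkpointing matters so much at this scale.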
- Is AI Development more fun than Software Development?
I really appreciated this log of Facebook training a large language model; it shows how troublesome AI development can be: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
- Visual ChatGPT
Stable Diffusion will run on any decent gaming GPU or a modern MacBook, meanwhile LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements - e.g., <https://github.com/facebookresearch/metaseq/issues/146>
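The memory gap shows up in back-of-envelope arithmetic: at fp16, weights alone take two bytes per parameter, before any activations or KV cache. A quick sketch (parameter counts are approximate):

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the weights in GiB, assuming fp16 by default."""
    return n_params * bytes_per_param / 1024**3

# OPT-175B: ~326 GiB of weights alone -- far beyond any single consumer GPU.
opt_175b = weight_memory_gb(175e9)
# Stable Diffusion's ~860M-parameter UNet: ~1.6 GiB, fits on a gaming GPU.
sd_unet = weight_memory_gb(0.86e9)
```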
- Ask HN: Is There On-Call in ML?
It seems so, check this log book from Meta: https://github.com/facebookresearch/metaseq/blob/main/projec...
cupscale
- Print Four Souls Cards at Home (Fixed Audio)
- What about game assets that target 1080p and you want 4K fidelity?
If you want to do more, there's chaiNNer and Cupscale. You need to download an AI model to use those. There are a lot of anime/cartoon models out there, so pick one that you like from here. (Note: Upscayl doesn't support these custom models.)
- Help selecting software
- Do you have Topaz AI?
I'm not 100% sure how it holds up against Topaz, but I've used Cupscale (a GUI for ESRGAN) to upscale most of my stuff. It's free (https://github.com/n00mkrad/cupscale) and you can find a million different ESRGAN models which are focused on different kinds of images (https://upscale.wiki/wiki/Model_Database).
- Hit-and-run accident, AI upscaling?
- (For FE Awakening in Citra) How can I change Robin's hair portrait?
Now upscaling isn't hard to do by itself, but the setup can be difficult. As I said earlier, ESRGAN is the preferable way to do it, and Cupscale (https://github.com/n00mkrad/cupscale) is my preferred tool for doing it this way. Gigapixel (https://www.topazlabs.com/gigapixel-ai) is another option that's easier for newcomers, but it may not produce results that are as good. They even have a free trial if you want to demo the tool.
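Tools like Cupscale and chaiNNer keep GPU memory bounded by splitting the image into tiles, running the model on each tile, and stitching the results back together. A minimal sketch of that loop, with a plain 2D pixel list standing in for the image and nearest-neighbour duplication standing in for the actual ESRGAN forward pass:

```python
TILE, SCALE = 2, 2  # tile size and upscale factor; real tools make both configurable

def upscale_tile(tile):
    # Stand-in for a 2x model pass: nearest-neighbour pixel duplication.
    return [[px for px in row for _ in range(SCALE)]
            for row in tile for _ in range(SCALE)]

def upscale_tiled(img):
    """Upscale img tile by tile, pasting each result into the output grid."""
    h, w = len(img), len(img[0])
    out = [[None] * (w * SCALE) for _ in range(h * SCALE)]
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = [row[x:x + TILE] for row in img[y:y + TILE]]
            up = upscale_tile(tile)
            for dy, row in enumerate(up):
                for dx, px in enumerate(row):
                    out[y * SCALE + dy][x * SCALE + dx] = px
    return out
```

Real implementations also overlap adjacent tiles and blend the seams, since a neural model's receptive field makes hard tile edges visible in the output.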
- What workflow is best for upscaling portraits taken by phone camera or DSLR?
- Now that they started banning Stable Diffusion on Google Colab, what's the cheapest and best way to deploy Stable Diffusion?
I use Cupscale for upscaling things. It allows chaining models and handles video.
- Are there any Google Colab scripts or other tools to upscale a bunch of images?
For local use, there's Cupscale and chaiNNer.
- A rustic cottage by the field [1920x1080]
What are some alternatives?
stable-diffusion - A latent text-to-image diffusion model
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
nlp-resume-parser - NLP-powered, GPT-3 enabled Resume Parser from PDF to JSON.
chaiNNer - A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Real-ESRGAN-ncnn-vulkan - NCNN implementation of Real-ESRGAN. Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
Real-ESRGAN - Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
manim - Animation engine for explanatory math videos
waifu2x - Image Super-Resolution for Anime-Style Art
ChatGPT.el - ChatGPT in Emacs
chaiNNer - A flowchart/node-based image processing GUI aimed at making chaining image processing tasks (especially upscaling done by neural networks) easy, intuitive, and customizable. [Moved to: https://github.com/chaiNNer-org/chaiNNer]