| | aitextgen | hivemind |
|---|---|---|
| Mentions | 19 | 40 |
| Stars | 1,826 | 1,837 |
| Growth | - | 1.5% |
| Activity | 1.8 | 5.4 |
| Latest commit | 10 months ago | about 1 month ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aitextgen
-
Where is the engineering part in "prompt engineer"?
It's literally a wrapper for the ChatGPT API (currently). I have another library for training models from scratch but haven't had time to work on it.
-
self-hosted AI?
I'm experimenting with https://github.com/minimaxir/aitextgen for some simple tasks. It is pretty much a wrapper around GPT-2 and GPT Neo models.
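For reference, the basic aitextgen workflow really is just a few lines. A minimal generation sketch following its README (with no arguments, it downloads the default 124M GPT-2 model):

```python
from aitextgen import aitextgen

# With no arguments, aitextgen downloads and loads the default 124M GPT-2 model.
ai = aitextgen()

# Generate three samples continuing the given prompt.
ai.generate(n=3, prompt="I believe in unicorns because", max_length=100)
```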
-
How would I go about implementing warmup steps from the Transformers library?
I'm sorry if this is the wrong place to ask, but I wasn't sure where else to turn. Several of us have already opened an issue with aitextgen, but it seems that the maintainer isn't particularly active these days. I'm a fairly proficient developer (self-taught), and I know my way around ML, but I wasn't formally educated in deep learning. A lot of PyTorch Lightning looks like black magic to me. I suspect I'm missing an important detail that would be fairly simple for many of you to identify.
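For anyone with the same question: in pytorch-lightning (which aitextgen uses under the hood), warmup is typically wired up in `configure_optimizers` using a scheduler from the Transformers library. A minimal sketch, assuming a LightningModule; the step counts are placeholder values:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Inside a pytorch_lightning.LightningModule subclass:
def configure_optimizers(self):
    optimizer = torch.optim.AdamW(self.parameters(), lr=1e-4)
    # Linearly ramp the learning rate up for the first 500 steps,
    # then decay it linearly to zero over the rest of training.
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=500,       # placeholder value
        num_training_steps=10_000,  # placeholder value
    )
    return {
        "optimizer": optimizer,
        # "interval": "step" tells Lightning to step the scheduler every
        # batch, which is what the Transformers schedulers expect.
        "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
    }
```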
-
NanoGPT
To train small gpt-like models, there's also aitextgen: https://github.com/minimaxir/aitextgen
-
Neuro-sama sings "Take On Me" with her Angelic Voice
It's actually relatively easy to train your own GPT model, and there are multiple tools out there that make it almost plug-and-play: https://github.com/minimaxir/aitextgen
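As a sense of how plug-and-play it is, fine-tuning GPT-2 on your own text with aitextgen is only a few lines (per its README; the file name and step counts here are placeholders):

```python
from aitextgen import aitextgen

# Load the pretrained 124M GPT-2 model.
ai = aitextgen(tf_gpt2="124M")

# Fine-tune on a plain-text file, generating samples and saving
# checkpoints periodically as training runs.
ai.train("input.txt", num_steps=3000, generate_every=1000, save_every=1000)

# Sample from the fine-tuned model.
ai.generate(prompt="Once upon a time")
```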
-
Is there a place with all the models indexed?
I've been learning python and for the past few days, I've been playing around with the aitextgen library.
-
I built an AI model to auto-generate Dominion cards. Here are the hilariously bad results.
Then I ran that through the AI and got it to spit out cards that looked like the training data. I used aitextgen. I let it run for about 4 hours, and it produced 10,000 rows of cards. But some of these cards are duplicates of each other or of cards that already exist, or use a card name that already exists in the original game, or have like 20 '|' characters in one row, or have zero '|'. So I ran a script to remove all of the cards like that, and I ended up with around 2,000-4,500 cards that are "functional".
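A cleanup pass like the one described, assuming one pipe-delimited card per line with a fixed field count (the count of 6 and the file names here are made-up assumptions), might look something like this:

```python
# Filter generated cards: drop malformed rows, duplicates, and names
# that collide with cards from the original game.
EXPECTED_FIELDS = 6  # hypothetical: depends on the actual training format

with open("original_card_names.txt") as f:
    existing_names = {line.strip().lower() for line in f}

seen, kept = set(), []
with open("generated_cards.txt") as f:
    for line in f:
        line = line.strip()
        fields = line.split("|")
        if len(fields) != EXPECTED_FIELDS:  # too many or too few '|' separators
            continue
        name = fields[0].strip().lower()
        if name in existing_names or line.lower() in seen:  # collision or duplicate
            continue
        seen.add(line.lower())
        kept.append(line)

with open("functional_cards.txt", "w") as f:
    f.write("\n".join(kept))
```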
-
Thoughts on GPT3?
If you search this subreddit, you should find lots of discussions about it, as well as alternatives like GPT-J (open source). If you'd like to experiment with GPT-2 for text generation, try https://github.com/minimaxir/aitextgen. It's fun to play with.
-
Show HN: Tensorpedia – Using GPT-2 to synthesize Wikipedia articles
Hey HN! I've been lurking for a while now and I've finally created something that I feel is worth sharing.
I've called this project "Tensorpedia." At its core, Tensorpedia takes in a title and uses it as a prompt for GPT-2 to synthesize the introductory part of a Wikipedia article. The machine learning side is built on a wonderful library called aitextgen [0], with Wikipedia's "Vital Articles" as the dataset [1]. The server is written in Node and uses Redis as an article cache. If you want to read my article about it (for some reason), you can check it out here [2]. (A rough sketch of the generation step follows the links below.)
I created this project to get more experience with server technologies. While I wouldn't say it's a complicated application, I learned quite a lot from it.
Additionally, this project is mostly for fun; I was inspired by all of those this-x-doesn't-exist projects from a while back. As such, I don't know how much practical use it has, but I've generated some pretty hilarious articles with it.
[0] https://github.com/minimaxir/aitextgen
[1] https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level...
[2] https://jonahsussman.net/posts/2022-01-this-wiki-dne/
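The generation step of a project like this might look roughly like the following sketch; the model folder, prompt format, and length are guesses, not Tensorpedia's actual code:

```python
from aitextgen import aitextgen

# Load a GPT-2 model fine-tuned on Wikipedia "Vital Articles" intros
# ("trained_model" is a placeholder folder name).
ai = aitextgen(model_folder="trained_model")

def synthesize_intro(title: str) -> str:
    # Use the requested article title as the prompt;
    # generate_one returns the generated text as a string.
    return ai.generate_one(prompt=title, max_length=256)

print(synthesize_intro("History of the Semicolon"))
```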
-
Downloaded GPT-2, Encode.py, and Train.py not found.
If by "downloaded" you mean cloned the gpt-2 GitHub repo, it doesn't come with those scripts. I personally played around with https://github.com/minimaxir/aitextgen, which is a simple wrapper around the GPT-2 code and comes with some very clear usage examples. (Shout out to minimaxir and everyone else involved in aitextgen for making GPT-2 easy to use!)
hivemind
-
You can now train a 70B language model at home
https://github.com/learning-at-home/hivemind is also relevant
-
Would anyone be interested in contributing to some group projects?
I really hope you'll join me for the Petals support, at least! A single docker-compose.yml file is all we need for now. If we are able to find enough people willing to host some smaller models, perhaps we could expand into Hivemind and create our own custom foundation model one day?
-
Hivemind: Train deep learning models on thousands of volunteers across the world
-
Could a model not be trained by a decentralized network? Like SETI@home, or kinda-sorta like Bitcoin. Petals accomplishes this somewhat, but if raw compute power is the only barrier to open source, I'd be happy to try organizing decentralized computing efforts.
Decentralized deep learning: https://github.com/learning-at-home/hivemind
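From hivemind's quickstart, the core idea is wrapping a regular PyTorch optimizer so parameter updates are averaged with peers discovered over a DHT. A sketch in which the model, run_id, and batch sizes are placeholders:

```python
import torch
import hivemind

model = torch.nn.Linear(784, 10)  # stand-in model
base_opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Start a DHT node; real peers would pass initial_peers=[...] to join
# an existing swarm instead of starting a new one.
dht = hivemind.DHT(start=True)

# hivemind.Optimizer runs an averaging round once the swarm has
# collectively processed target_batch_size samples.
opt = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",         # peers with the same run_id train together
    batch_size_per_step=32,    # samples this peer contributes per step
    target_batch_size=10_000,  # global batch size between averaging rounds
    optimizer=base_opt,
    use_local_updates=True,    # apply local gradients, average params in background
    verbose=True,
)
```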
-
Orca (built on llama13b) looks like the new sheriff in town
https://github.com/learning-at-home/hivemind - the same people are behind it; it was made before Petals, I think.
-
Do you think that AI research will slow down to a halt because of regulation?
Not if we rise to meet that challenge. Here are a few tools that facilitate AI research in the face of an advanced persistent threat: Hivemind, a distributed PyTorch framework.
-
LLM@home
Yeah, there's Hivemind. And there's research on how to chunk out the training workload so it can be scaled up. I'm not sure why there's commentary that latency issues would limit this sort of enterprise; the architecture typically isn't designed for liveness. Other subfields of distributed training/inference include zero-knowledge machine learning. Besides all of that, there's also adversarial computation, like SafetyNets and refereed delegation of computation.
-
[D] Google "We Have No Moat, And Neither Does OpenAI": Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI
We already have the software for it. There are some projects, but the one I'm most familiar with is https://github.com/learning-at-home/hivemind for training, and its sister project https://petals.ml/ for running large models distributed.
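For the inference side, Petals exposes a Transformers-style API. A sketch based on its README; the model id here is a placeholder and must be a model the public swarm actually serves:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom"  # placeholder model id

tokenizer = AutoTokenizer.from_pretrained(model_name)
# The model's layers are executed across volunteer peers, BitTorrent-style.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))
```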
-
Run 100B+ language models at home, BitTorrent‑style
I'm not entirely sure how the approach they're using works [0], but I study federated learning, and one of the highly-cited survey papers has several chapters (5 and 6 in particular) addressing potential attacks, failure modes, and bias [1].
0: https://github.com/learning-at-home/hivemind
1: https://arxiv.org/abs/1912.04977
-
SETI Home Is in Hibernation
The Hivemind project is just that
https://github.com/learning-at-home/hivemind
What are some alternatives?
lm-evaluation-harness - A framework for few-shot evaluation of language models.
replika-research - Replika.ai Research Papers, Posters, Slides & Datasets
DiscordChatAI-GPT2 - A chat AI discord bot written in python3 using GPT-2, trained on data scraped from every message of my discord server (can be trained on yours too)
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
Super-SloMo - PyTorch implementation of Super SloMo by Jiang et al.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
alpa - Training and serving large-scale neural networks with auto parallelization.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
trump_gpt2_bot - aitextgen (aka GPT-2) Twitter bot
HiveMind-core - Join the OVOS collective, utils for OpenVoiceOS mesh networking