rex-gym
gpt-2-simple
| | rex-gym | gpt-2-simple |
|---|---|---|
| Mentions | 1 | 13 |
| Stars | 957 | 3,366 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | about 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rex-gym
-
By moving the battery pack forward, you can make the popular SpotMicro design balance much better. We had trouble getting it to stand and walk because the center of mass was too far back.
Our work was based on [SpotMicro](https://github.com/michaelkubina/SpotMicroESP32) and [Rex Gym](https://github.com/nicrusso7/rex-gym). Our GitHub is [here](https://github.com/LSaldyt/laser-dog).
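The balance problem described above is at heart a center-of-mass calculation along the body axis. A toy sketch with made-up masses and positions (not measurements from the actual robot):

```python
# Center of mass along the body axis (x, in mm; 0 = body center).
# Masses and positions below are illustrative, not real SpotMicro numbers.
def com_x(parts):
    """parts: list of (mass_kg, x_mm) tuples; returns combined CoM x."""
    total = sum(m for m, _ in parts)
    return sum(m * x for m, x in parts) / total

frame = (1.2, 0.0)            # frame mass at body center (assumed)
battery_back = (0.4, -60.0)   # battery mounted toward the rear
battery_front = (0.4, 40.0)   # same battery moved forward

print(com_x([frame, battery_back]))   # CoM sits behind body center
print(com_x([frame, battery_front]))  # moving the pack forward pulls it ahead
```

With these toy numbers the rear-mounted battery puts the combined CoM 15 mm behind center; moving the pack forward shifts it 10 mm ahead, which is the effect the comment describes.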
gpt-2-simple
-
Show HN: WhatsApp-Llama: A clone of yourself from your WhatsApp conversations
Tap the contact's name in WhatsApp (I think it only works on a phone) and at the bottom of that screen there's Export Chat.
For finetuning GPT-2 I think I used this thing on Google Colab. (My friend ran it on his GPU, it should be doable on most modern-ish GPUs.)
https://github.com/minimaxir/gpt-2-simple
I tried doing something with this a few months ago, though, and it was a bit of a hassle to get running (I needed to use a specific Python version for some dependencies...). I forget the details, sorry!
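For what it's worth, turning a WhatsApp "Export Chat" file into fine-tuning text is mostly a cleanup step. A rough sketch, assuming the common `date, time - Name: message` export format (the regex and sample lines are illustrative; exports vary by locale):

```python
import re

# Matches lines like "1/2/23, 9:15 - Alice: hey" (AM/PM optional).
LINE = re.compile(
    r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2}(?: [AP]M)? - ([^:]+): (.*)$"
)

def to_training_text(export_lines):
    """Strip timestamps, keep 'Name: message' lines for fine-tuning."""
    messages = []
    for line in export_lines:
        m = LINE.match(line)
        if m:
            name, text = m.groups()
            messages.append(f"{name}: {text}")
        elif messages:
            # Lines without a timestamp continue the previous message.
            messages[-1] += "\n" + line
    return "\n".join(messages)

sample = [
    "1/2/23, 9:15 - Alice: hey, are you around?",
    "1/2/23, 9:16 - Bob: yes",
    "on my way though",
]
print(to_training_text(sample))
```

The output is plain `Name: message` text, which is the kind of single-file corpus gpt-2-simple's fine-tuning expects.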
-
indistinguishable
I mentioned in a different reply that I used https://github.com/minimaxir/gpt-2-simple
-
training gpt on your own sources - how does it work? gpt2 v gpt3? and how much does it cost?
You'll need a few hundred bucks, Python experience, and a simple implementation such as this repo: https://github.com/minimaxir/gpt-2-simple
-
I (re)trained an AI using the 36 lessons of Vivec, the entirety of C0DA, the communist manifesto and the top posts of /r/copypasta and asked it the most important/unanswered lore questions. What are the lore implications of these insights?
I just used the gpt-2-simple Python package and ran it overnight in a Jupyter notebook, but you could copy the code into any Python interpreter and it should also work.
-
How do I start a personal project?
I'll note that if you're just doing text generation, it's a simple project as far as ML goes; there are some nice libraries you can use that require minimal ML knowledge, e.g. https://github.com/minimaxir/gpt-2-simple
-
I created a twitter account that posts AI generated Canucks related tweets. I call it "Canucks Artificial Insider".
Then, I use the GPT-2 models, wrapped in the Python library gpt-2-simple, to generate the content. My actual code is basically just their code sample, about six lines of Python. With GPT-2, you train the existing model on your specific dataset, which in my case is this text file of tweets.
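The "six lines of Python" mentioned above are presumably close to the gpt-2-simple README sample. A minimal sketch (the file name `tweets.txt` is a stand-in for whatever training corpus you use; this downloads a model and needs an older TensorFlow, so it is not a drop-in script):

```python
import gpt_2_simple as gpt2

# Download the small 124M GPT-2 model once, then fine-tune it on a
# plain-text file and sample from the fine-tuned checkpoint.
gpt2.download_gpt2(model_name="124M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, "tweets.txt", model_name="124M", steps=1000)
gpt2.generate(sess)
```

`steps` controls how long fine-tuning runs; for small corpora like a tweet archive, fewer steps reduce the risk of the model memorizing the training data outright.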
-
Training GPT-2 with HuggingFace Transformers to sound like a certain author
gpt_2_simple is your best bet! It's super easy to use; you just need to downgrade TensorFlow and some other packages in your environment.
-
These Magic cards don't exist - Generating names for new cards using machine learning and GPT-2.
I used the GPT-2 Simple program by minimaxir to train the algorithm on every card in Magic's history that was released in a main expansion. Then I generated about 2,000 new names (it was actually more, but the algorithm really liked giving me cards that already exist), which I searched through to find the best ones.
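The filtering step described above, discarding generated names that already exist (and duplicates), can be sketched like this; the card names here are made up for illustration:

```python
# Hypothetical post-filter for generated card names: drop anything that
# already exists in the game, and drop repeats within the generated batch.
existing = {"Lightning Bolt", "Counterspell"}
generated = ["Lightning Bolt", "Ember Oracle", "Counterspell", "Ember Oracle"]

new_names = []
seen = set()
for name in generated:
    if name not in existing and name not in seen:
        seen.add(name)
        new_names.append(name)

print(new_names)
```

With a real run you would load `existing` from a card database dump and `generated` from the gpt-2-simple output file.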
-
No rush, mostly curious (training/finetuned models)
Might I suggest starting here, to learn on Simple GPT2. They have a Google Colab notebook if your CPU/GPU is shit, and what helped me learn best was dissecting the code and basically making my own Colab notebook piece by piece, learning what each function does.
-
Selecting good hyper-parameters for fine-tuning a GPT-2 model?
The last couple of months, I've been running a Twitter bot that posts GPT-2-generated content, trained off of Tweets from existing accounts using gpt-2-simple. In my more recent training sessions, it seems like the quality of the output has been decreasing; it often gives outputs that are just barely modified from the original training data, if not verbatim.
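One way to quantify the "barely modified from the original training data" problem is to check each generated sample for near-verbatim overlap with the training lines. A sketch using the standard library's difflib (the threshold and strings are illustrative):

```python
import difflib

# Flag generated samples that are near-verbatim copies of a training
# line - a common symptom of overfitting on a small corpus.
def near_duplicates(generated, training, threshold=0.9):
    flagged = []
    for g in generated:
        for t in training:
            if difflib.SequenceMatcher(None, g, t).ratio() >= threshold:
                flagged.append((g, t))
                break
    return flagged

training = ["go canucks go, huge win tonight"]
generated = [
    "go canucks go, huge win tonight!",   # near-copy of a training line
    "trade rumours heating up again",     # genuinely novel output
]
print(near_duplicates(generated, training))
```

If a large fraction of samples get flagged, fewer fine-tuning steps or a lower learning rate are the usual levers to try.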
What are some alternatives?
pybullet-gym - Open-source implementations of OpenAI Gym MuJoCo environments for use with the OpenAI Gym Reinforcement Learning Research Platform.
Style-Transfer-in-Text - Paper List for Style Transfer in Text
robot-gym - RL applied to robotics.
textgenrnn - Easily train your own text-generating neural network of any size and complexity on any text dataset with a few lines of code.
drl_grasping - Deep Reinforcement Learning for Robotic Grasping from Octrees
ctrl-sum - Resources for the "CTRLsum: Towards Generic Controllable Text Summarization" paper
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
openai-api-py-lite - OpenAI API Python bindings with no dependencies
PILCO - Bayesian Reinforcement Learning in Tensorflow
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]
gretel-synthetics - Synthetic data generators for structured and unstructured text, featuring differentially private learning.
AIdegger - Extended publications of Martin Heidegger uncovered using machine learning.