aitextgen vs whisper.cpp

| | aitextgen | whisper.cpp |
|---|---|---|
| Mentions | 19 | 187 |
| Stars | 1,826 | 31,174 |
| Growth | - | - |
| Activity | 1.8 | 9.8 |
| Latest commit | 10 months ago | 5 days ago |
| Language | Python | C |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aitextgen
-
Where is the engineering part in "prompt engineer"?
It's literally a wrapper for the ChatGPT API (currently). I have another library for training models from scratch but haven't had time to work on it.
-
self-hosted AI?
I'm experimenting with https://github.com/minimaxir/aitextgen for some simple tasks. It is pretty much a wrapper around GPT-2 and GPT Neo models.
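For context, a minimal generation run with aitextgen looks roughly like the sketch below; the prompt and generation parameters are illustrative, not taken from the comment above.

```python
from aitextgen import aitextgen

# Loads the default 124M-parameter GPT-2 model (weights are downloaded on first use).
ai = aitextgen()

# Generate a few samples from a prompt; n, max_length, and temperature are illustrative.
ai.generate(n=3, prompt="The meaning of life is", max_length=60, temperature=0.9)
```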
-
How would I go about implementing warmup steps from the Transformers library?
I'm sorry if this is the wrong place to ask, but I wasn't sure where else to turn. Several of us have already opened an issue with aitextgen, but it seems that the maintainer isn't particularly active these days. I'm a fairly proficient developer (self-taught), and I know my way around ML, but I wasn't formally educated in deep learning. A lot of PyTorch Lightning looks like black magic to me. I suspect I'm missing an important detail that would be fairly simple for many of you to identify.
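One common pattern, independent of aitextgen and sketched here only as a starting point, is to attach one of the Transformers warmup schedulers inside PyTorch Lightning's `configure_optimizers`; the tiny stand-in module, learning rate, and step counts below are placeholders.

```python
import pytorch_lightning as pl
import torch
from transformers import get_linear_schedule_with_warmup


class TinyModule(pl.LightningModule):
    """Minimal module showing only the scheduler wiring; a real GPT-2 model would go here."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 8)  # stand-in for the actual model

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(), lr=5e-5)
        # Linear warmup over the first 500 optimizer steps, then linear decay.
        scheduler = get_linear_schedule_with_warmup(
            optimizer, num_warmup_steps=500, num_training_steps=10_000
        )
        # interval="step" makes Lightning advance the scheduler every batch, not every epoch.
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
        }
```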
-
NanoGPT
To train small gpt-like models, there's also aitextgen: https://github.com/minimaxir/aitextgen
-
Neuro-sama sings "Take On Me" with her Angelic Voice
It's actually relatively easy to train your own GPT model, and there are multiple tools out there that make it almost plug and play: https://github.com/minimaxir/aitextgen
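As a rough illustration of that plug-and-play claim, finetuning GPT-2 on a plain text file with aitextgen can look like the sketch below; the file name and step counts are made up for the example.

```python
from aitextgen import aitextgen

ai = aitextgen()          # default 124M GPT-2
ai.train(
    "lyrics.txt",         # plain-text training file (hypothetical name)
    num_steps=3000,       # illustrative; more steps generally fits the data better
    generate_every=1000,  # print sample generations during training
    save_every=1000,      # checkpoint the model periodically
)
ai.generate(prompt="Take on me")
```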
-
Is there a place with all the models indexed?
I've been learning Python, and for the past few days I've been playing around with the aitextgen library.
-
I built an AI model to auto-generate Dominion cards. Here are the hilariously bad results.
Then I ran that through the AI and got it to spit out cards that looked like the training data. I used aitextgen. I let it run for about 4 hours, and it thinks it has made 10,000 rows of cards. But some of these cards are duplicates of each other or of cards that already exist, use a card name that already exists in the original game, have like 20 '|' characters in one row, or have zero '|'. So I run a script to remove all such cards, and I end up with roughly 2,000-4,500 cards that are "functional".
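A cleanup pass like the one described could be sketched as follows; the file names and the expected number of '|' separators are assumptions, since the actual card format isn't shown.

```python
EXPECTED_PIPES = 5  # assumed number of '|' separators in a well-formed card row

# Card names from the original game (hypothetical file name).
with open("existing_card_names.txt") as f:
    existing_names = {line.strip().lower() for line in f}

seen = set()
kept = []
with open("generated_cards.txt") as f:  # raw model output, one card per row
    for line in f:
        row = line.strip()
        if row.count("|") != EXPECTED_PIPES:
            continue  # malformed: too many separators, or none at all
        name = row.split("|")[0].strip().lower()
        if name in existing_names or row.lower() in seen:
            continue  # reuses an existing name, or duplicates a generated card
        seen.add(row.lower())
        kept.append(row)

with open("functional_cards.txt", "w") as f:
    f.write("\n".join(kept))
```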
-
Thoughts on GPT3?
If you search this subreddit, you should find lots of discussions about it, as well as alternatives like GPT-J (open source). If you'd like to experiment with GPT-2 for text generation, try https://github.com/minimaxir/aitextgen. It's fun to play with.
-
Show HN: Tensorpedia - Using GPT-2 to synthesize Wikipedia articles
Hey HN! I've been lurking for a while now and I've finally created something that I feel is worth sharing.
I've called this project "Tensorpedia." At its core, Tensorpedia takes in a title and utilizes it as a prompt for GPT-2 to synthesize the introductory part of a Wikipedia article. The machine learning stuff is written using a wonderful library called aitextgen [0], using Wikipedia's "Vital Articles" as a data set [1]. The server is written in Node, and it uses Redis as an article cache. If you want to read my article about it (for some reason), you can check it out here [2].
I created this project to get more experience with server technologies. While I wouldn't say it's a complicated application, I learned quite a lot from it.
This project was also inspired by all of those this-x-doesn't-exist projects from a while back, so it's mostly for fun. I don't know how much practical use it has, but I've generated some pretty hilarious articles with it.
[0] https://github.com/minimaxir/aitextgen
[1] https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level...
[2] https://jonahsussman.net/posts/2022-01-this-wiki-dne/
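The caching pattern described (Redis sitting in front of the generator, keyed by title) can be sketched in a few lines; this is in Python for consistency with the other examples here, even though the actual server is Node, and every name in it is hypothetical.

```python
import redis

r = redis.Redis()  # assumes a local Redis instance

def get_article(title: str, generate) -> str:
    """Return the cached article for `title`, generating and caching it on a miss."""
    key = f"article:{title.lower()}"
    cached = r.get(key)
    if cached is not None:
        return cached.decode("utf-8")
    article = generate(title)       # e.g. a call into the GPT-2/aitextgen backend
    r.set(key, article, ex=86_400)  # cache for a day; the TTL is an arbitrary choice
    return article
```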
-
Downloaded GPT-2, Encode.py, and Train.py not found.
If by downloaded you mean cloning the gpt-2 GitHub repo, it doesn't come with those scripts. I personally played around with https://github.com/minimaxir/aitextgen, which is a simple wrapper around the GPT-2 code and comes with some very clear usage examples. (Shout out to minimaxir and everyone else involved in aitextgen for making GPT-2 easy to use!)
whisper.cpp
-
Show HN: I created automatic subtitling app to boost short videos
whisper.cpp [1] has a karaoke example that uses ffmpeg's drawtext filter to display rudimentary karaoke-like captions. It also supports diarisation. Perhaps it could be a starting point to create a better script that does what you need.
--
1: https://github.com/ggerganov/whisper.cpp/blob/master/README....
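If it helps as a starting point, the flow described in that example boils down to two steps; the binary and model paths, input file, and the `-owts` flag below are assumptions to check against the current README rather than a verified recipe.

```python
import subprocess

# 1. Transcribe and ask whisper.cpp to emit a karaoke script (paths and flags are
#    assumptions; see the README linked above for the current option names).
subprocess.run(
    ["./main", "-m", "models/ggml-base.en.bin", "-f", "clip.wav", "-owts"],
    check=True,
)

# 2. The generated .wts script drives ffmpeg's drawtext filter to burn the
#    captions into a video alongside the audio.
subprocess.run(["bash", "clip.wav.wts"], check=True)
```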
- LLaMA Now Goes Faster on CPUs
-
LLMs on your local Computer (Part 1)
The ggml library is one of the first libraries for local LLM inference. It's a pure C library that converts models to run on several devices, including desktops, laptops, and even mobile devices. It can also be seen as a tinkering tool for trying new optimizations that are then incorporated into downstream projects. It is at the heart of several other projects, powering LLM inference on desktops and even mobile phones. Subprojects for running specific LLMs or LLM families exist, such as whisper.cpp.
-
Voxos.ai - An Open-Source Desktop Voice Assistant
I'm not sure if it is _fully_ openai compatible, but whispercpp has a server bundled that says it is "OAI-like": https://github.com/ggerganov/whisper.cpp/tree/master/example...
I don't have any direct experience with it... I've only played around with whisper locally, using scripts.
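For anyone curious, calling that bundled server from a script looks roughly like the sketch below; the port, the /inference route, and the form fields are assumptions based on the example's documentation and should be verified against the repo.

```python
import requests

# Assumes the whisper.cpp server example is already running locally on its default port.
with open("clip.wav", "rb") as audio:
    resp = requests.post(
        "http://127.0.0.1:8080/inference",
        files={"file": audio},
        data={"response_format": "json", "temperature": "0.0"},
    )
print(resp.json()["text"])
```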
-
Jarvis: A Voice Virtual Assistant in Python (OpenAI, ElevenLabs, Deepgram)
Unless I'm misunderstanding, `whisper.cpp` seems to support streaming, and the repository includes a native example [0] and a WASM example [1] with a demo site [2].
[0]: https://github.com/ggerganov/whisper.cpp/tree/master/example...
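Launching the native streaming example is essentially a one-liner; the binary name, model path, and flag values below are assumptions, so check the example's README for the options the current build actually supports.

```python
import subprocess

# Run whisper.cpp's streaming example against the default microphone
# (binary name, model path, and flags are assumptions; see example [0]).
subprocess.run(
    ["./stream", "-m", "models/ggml-base.en.bin", "-t", "4", "--step", "500", "--length", "5000"],
    check=True,
)
```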
- Wchess
-
I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now.
Usage 1: good for transcribing audio; an example use case could be summarizing YouTube videos or long courses. Usage 2: you talk to your AI with your voice and it responds with text (later with audio too). - https://github.com/ggerganov/whisper.cpp
-
Scrybble is the ReMarkable highlights to Obsidian exporter I have been looking for
whisper.cpp (offline speech-to-text transcription, models trained by OpenAI, CLI based, browser based)
- Whisper.wasm
-
Whisper C++ not working for me. Anyone else?
Has anyone played around with Whisper C++ for Swift? I'm hitting a snag even on the demo. I've downloaded the GitHub repo and everything matches up with this video [ https://youtu.be/b10OHCDHDQ4 ], but when he hits the transcribe button, it actually prints out the captioning. When I do it, it skips that part and just says "Done...". It does everything else (plays the audio, says it's transcribing), it just doesn't show me the transcription, and it's not in the debug window either. The demo isn't throwing any errors, and I haven't really messed with the code, so this is their example. https://github.com/ggerganov/whisper.cpp
What are some alternatives?
lm-evaluation-harness - A framework for few-shot evaluation of language models.
faster-whisper - Faster Whisper transcription with CTranslate2
DiscordChatAI-GPT2 - A chat AI discord bot written in python3 using GPT-2, trained on data scraped from every message of my discord server (can be trained on yours too)
Whisper - High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
bark - 🔊 Text-Prompted Generative Audio Model
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
trump_gpt2_bot - aitextgen (aka GPT-2) Twitter bot
llama.cpp - LLM inference in C/C++