You can either use Hugging Face Transformers, which provides many pre-trained models you can customize, or a fine-tuning toolkit (a "finetuner") that handles fine-tuning across multiple models.
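As a rough illustration of the Hugging Face route, here is a minimal fine-tuning sketch. The checkpoint name, toy data, and learning rate are illustrative assumptions, not taken from the original post:

```python
# Minimal fine-tuning sketch with Hugging Face Transformers.
# "distilbert-base-uncased" and the toy batch are assumed examples.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # assumed example checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labeled batch; in practice you would iterate over your own dataset.
batch = tok(["great product", "terrible service"], padding=True, return_tensors="pt")
batch["labels"] = torch.tensor([1, 0])

optim = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
out = model(**batch)   # forward pass returns a loss when labels are supplied
out.loss.backward()    # one gradient step is the core of fine-tuning
optim.step()
```

In a real run you would wrap this in an epoch loop (or use the library's `Trainer` class) and evaluate on held-out data.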
RWKV-LM
RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (training is parallelizable). It combines the best of RNNs and transformers: strong performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
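The dual RNN/GPT nature comes from RWKV's WKV mixing, which can be computed either token by token with a constant-size state (fast inference) or as a direct weighted sum over the whole sequence (parallelizable training). A toy NumPy sketch of both views, with simplified per-channel decay `w` and bonus `u` (an illustration of the idea, not the library's implementation):

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """RNN view: process tokens one at a time with O(C) state.
    k, v: (T, C) key/value sequences; w: per-channel decay (C,); u: bonus (C,)."""
    T, C = k.shape
    num = np.zeros(C)            # running sum of decayed e^{k_i} * v_i
    den = np.zeros(C)            # running sum of decayed e^{k_i}
    out = np.zeros((T, C))
    for t in range(T):
        ek = np.exp(k[t])
        # the current token gets an extra "bonus" weight e^{u}
        out[t] = (num + np.exp(u) * ek * v[t]) / (den + np.exp(u) * ek)
        # decay the state by e^{-w}, then absorb the current token
        num = np.exp(-w) * num + ek * v[t]
        den = np.exp(-w) * den + ek
    return out

def wkv_direct(k, v, w, u):
    """GPT view: each output is a direct weighted average over the prefix,
    so all positions can be computed in parallel during training."""
    T, C = k.shape
    out = np.zeros((T, C))
    for t in range(T):
        # weight e^{-(t-1-i)w + k_i} for past tokens, e^{u + k_t} for the current one
        wts = np.stack([np.exp(-(t - 1 - i) * w + k[i]) for i in range(t)]
                       + [np.exp(u + k[t])])
        vals = np.concatenate([v[:t], v[t:t + 1]])
        out[t] = (wts * vals).sum(axis=0) / wts.sum(axis=0)
    return out
```

Both views produce identical outputs, which is why RWKV can train like a transformer yet serve like an RNN.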