TheVault vs datablations
| | TheVault | datablations |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 78 | 289 |
| Growth | - | 8.7% |
| Activity | 7.9 | 6.9 |
| Latest commit | 3 months ago | about 1 month ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mentions of TheVault
-
(2/2) May 2023
A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation (https://github.com/FSoft-AI4Code/TheVault)
-
List of code generation datasets (open source)
TheVault
-
[P] Fine-tuning LLaMA on TheVault by AI4Code
I essentially want to fine-tune LLaMA on a dataset that's geared towards code generation. After a bit of research I found TheVault which seems good enough for the job (let me know if there are better datasets tho).
-
[R] Introducing The Vault: A new multilingual dataset for advancing code understanding and generation.
Github page: https://github.com/FSoft-AI4Code/TheVault
Mentions of datablations
-
Gemini is only 1x Chinchilla, so it's undertrained for production
1x Chinchilla means it's not really undertrained, but that more could be squeezed out without excessive difficulty https://arxiv.org/abs/2305.16264
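For context, "1x Chinchilla" refers to the compute-optimal recipe from Hoffmann et al. (2022), which works out to roughly 20 training tokens per model parameter. A minimal sketch of that rule of thumb, assuming the approximate 20:1 ratio and purely illustrative model sizes (not Gemini's actual configuration):

```python
# Rough Chinchilla rule of thumb: compute-optimal training uses ~20 tokens per parameter.
# The multiplier expresses "Nx Chinchilla", i.e. how far past the compute-optimal
# token budget a model is trained. All numbers here are illustrative assumptions.

TOKENS_PER_PARAM = 20  # approximate Chinchilla-optimal ratio (Hoffmann et al., 2022)

def chinchilla_tokens(n_params: float, multiplier: float = 1.0) -> float:
    """Return the training-token budget for `n_params` at `multiplier`x Chinchilla."""
    return n_params * TOKENS_PER_PARAM * multiplier

for n_params in (7e9, 70e9):          # hypothetical 7B and 70B models
    for mult in (1.0, 2.0, 5.0):      # 1x = compute-optimal, >1x = overtrained for cheaper inference
        tokens = chinchilla_tokens(n_params, mult)
        print(f"{n_params/1e9:.0f}B params at {mult:.0f}x Chinchilla -> {tokens/1e12:.2f}T tokens")
```

Training at only 1x of this budget is compute-optimal for a fixed training budget, but models intended for heavy inference are often trained well past it, which is the point the comment is making.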
- Can LLMs learn from a single example?
-
Chinchilla’s Death
You might want to give a read to "Scaling Data-Constrained Language Models" [1]. They basically generalized the Chinchilla scaling law by investigating behavior on multi-epoch runs.
[1] https://arxiv.org/abs/2305.16264
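The paper's key observation is that repeated tokens contribute progressively less than fresh ones. Below is a hedged sketch of that diminishing-returns idea, using an exponential decay in the spirit of the paper's effective-data parameterization; the decay constant `r_star` is an illustrative placeholder, not the paper's fitted value.

```python
import math

def effective_tokens(unique_tokens: float, epochs: float, r_star: float = 15.0) -> float:
    """Effective data when repeating a corpus: fresh tokens count fully, while
    repeated tokens decay in value. The exponential form mirrors the effective-data
    idea in arXiv:2305.16264; r_star here is an illustrative placeholder."""
    repeats = max(epochs - 1.0, 0.0)  # number of times the corpus is re-seen
    return unique_tokens + unique_tokens * r_star * (1.0 - math.exp(-repeats / r_star))

unique = 100e9  # hypothetical 100B-token corpus
for epochs in (1, 2, 4, 16, 64):
    eff = effective_tokens(unique, epochs)
    print(f"{epochs:>2} epochs: {epochs * unique / 1e9:6.0f}B raw tokens "
          f"~ {eff / 1e9:6.0f}B effective tokens")
```

Qualitatively, this matches the paper's reported finding: a handful of epochs over the same corpus is nearly as good as fresh data, with returns falling off sharply after that.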
-
RWKV Pile+ seems to be training on far more tokens than any LLM ever has
I would imagine that there is a lot of overlap, yeah. That said, training on repeated data does seem to be effective at this level.
-
(2/2) May 2023
Scaling Data-Constrained Language Models (https://arxiv.org/abs/2305.16264)
- How to Keep Scaling Large Language Models when Data Runs Out? A New AI Research Trains 400 Models with up to 9B Parameters and 900B Tokens to Create an Extension of Chinchilla Scaling Laws for Repeated Data
What are some alternatives?
DB-GPT - AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents
TinyLlama - The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
GirlfriendGPT - Girlfriend GPT is a Python project to build your own AI girlfriend using ChatGPT4.0
airoboros - Customizable implementation of the self-instruct paper.
tree-of-thoughts - Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%
code_contests
prompt-engineering - Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
waymo-open-dataset - Waymo Open Dataset
SuperAGI - <⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
whylogs - An open-source data logging library for machine learning models and data pipelines. 📚 Provides visibility into data quality & model performance over time. 🛡️ Supports privacy-preserving data collection, ensuring safety & robustness. 📈
chathub - All-in-one chatbot client