datablations vs prompt-engineering

| | datablations | prompt-engineering |
|---|---|---|
| Mentions | 6 | 18 |
| Stars | 290 | 7,988 |
| Stars growth | 3.8% | 2.3% |
| Activity | 6.9 | 5.1 |
| Last commit | about 1 month ago | 6 months ago |
| Language | Jupyter Notebook | - |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
datablations
-
Gemini is only 1x Chinchilla, so it's undertrained for production
1x Chinchilla means it's not so much undertrained as that more performance could be squeezed out without excessive difficulty: https://arxiv.org/abs/2305.16264
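For context, "1x Chinchilla" refers to training on roughly the compute-optimal token budget. A minimal sketch, assuming the commonly cited ~20 tokens-per-parameter Chinchilla heuristic (a rule of thumb, not a figure from this thread):

```python
def chinchilla_optimal_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training tokens for a model with `params`
    parameters, using the widely cited ~20 tokens/parameter heuristic."""
    return params * tokens_per_param

# A 70B-parameter model at "1x Chinchilla" trains on roughly 1.4T tokens.
print(f"{chinchilla_optimal_tokens(70e9):.2e}")
```

Training beyond this budget ("2x Chinchilla" and up) is where the data-constrained scaling question kicks in.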
- Can LLMs learn from a single example?
-
Chinchilla’s Death
You might want to give a read to "Scaling Data-Constrained Language Models" [1]. They basically generalized the Chinchilla scaling law by investigating behavior on multi-epoch runs.
[1] https://arxiv.org/abs/2305.16264
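The paper's central result is that repeated tokens lose value with an exponential decay. A sketch of that idea, assuming the exponential-decay functional form from the paper; the fitted constant of ~15.4 is approximate, so treat the numbers as illustrative:

```python
import math

def effective_data(unique_tokens: float, repetitions: float, r_star: float = 15.4) -> float:
    """Effective unique-data equivalent D' after `repetitions` extra passes
    over `unique_tokens`, per the decay form in "Scaling Data-Constrained
    Language Models" (r_star is the fitted decay constant, ~15.4)."""
    return unique_tokens + unique_tokens * r_star * (1 - math.exp(-repetitions / r_star))

# Repeating 100B unique tokens for 4 epochs (3 repetitions) is worth
# noticeably less than 400B fresh tokens.
print(f"{effective_data(100e9, repetitions=3):.3e}")
```

The practical takeaway matches the thread: a few epochs of repetition are nearly as good as fresh data, but returns diminish quickly after that.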
-
RWKV Pile+ seems to be training on far more tokens than any LLM ever has
I would imagine that there is a lot of overlap, yeah. That said, training on repeated data does seem to be effective at this level.
-
(2/2) May 2023
Scaling Data-Constrained Language Models (https://arxiv.org/abs/2305.16264)
- How to Keep Scaling Large Language Models when Data Runs Out? A New AI Research Trains 400 Models with up to 9B Parameters and 900B Tokens to Create an Extension of Chinchilla Scaling Laws for Repeated Data
prompt-engineering
- Ask HN: Any good collection of writing prompts for GPT 3.5/4?
-
Show HN: LLM Agent Paper List
An agent is a style of prompt that lets LLMs act as reasoning engines. It's also known as the ReAct pattern (a name some engineers avoid using because of namespace collisions).
You can read a good intro example here: https://github.com/brexhq/prompt-engineering#react
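A minimal sketch of one ReAct step, to make the pattern concrete: the model interleaves Thought/Action/Observation lines, and the harness executes the requested tool. Here `call_llm` is a hard-coded stand-in and the `tools` dict is a toy; neither reflects any particular library's API:

```python
import re

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a real LLM endpoint.
    return 'Thought: I should look this up.\nAction: search["Chinchilla scaling"]'

tools = {"search": lambda q: f"(top result for {q!r})"}

def react_step(question: str, transcript: str = "") -> str:
    prompt = (
        "Answer the question by interleaving Thought/Action/Observation steps.\n"
        f"Question: {question}\n{transcript}"
    )
    reply = call_llm(prompt)
    match = re.search(r'Action: (\w+)\["(.*)"\]', reply)
    if match:
        tool, arg = match.groups()
        observation = tools[tool](arg)  # run the tool, feed the result back
        return transcript + reply + f"\nObservation: {observation}\n"
    return transcript + reply  # no Action line: the model gave a final answer

print(react_step("What is the Chinchilla scaling law?"))
```

In a real agent this step runs in a loop, appending each Observation to the transcript until the model emits a final answer.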
- FLaNK Stack Weekly for 20 June 2023
-
What are your long-term career goals?
Well, if developers get replaced by AI, then who are the managers going to manage :). I personally don't think AI is just going to replace us. The way we work will continue to change as new AI tools come out. I'm taking time to tinker with new tools and to see how others are using them as well (e.g., I found Brex's tips and tricks for working with LLMs very insightful: https://github.com/brexhq/prompt-engineering).
-
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT
I recognize there's plenty of catnip here when it comes to calling this "engineering" or not; however, whatever you want to call it (prompt fiddling?), the techniques are crucial if you want to achieve reasonably consistent output from current-state LLMs. As models improve, concerns about context-window limitations will fade and it will be easier to discern user intent.
These are good straight-to-the-point guides:
- Prompt Engineering by BrexHQ: https://github.com/brexhq/prompt-engineering
- OpenAI guidance: https://help.openai.com/en/articles/6654000-best-practices-f...
- https://devblogs.microsoft.com/dotnet/gpt-prompt-engineering...
- (great examples): https://www.deeplearning.ai/short-courses/chatgpt-prompt-eng...
-
(2/2) May 2023
Brex's Prompt Engineering Guide (https://github.com/brexhq/prompt-engineering)
- GitHub - brexhq/prompt-engineering: Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
- Brex’s Prompt Engineering Guide
What are some alternatives?
TinyLlama - The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
airoboros - Customizable implementation of the self-instruct paper.
Prompt-Engineering-Guide - 🐙 Guides, papers, lecture, notebooks and resources for prompt engineering
tree-of-thoughts - Plug-and-play implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that elevates model reasoning by at least 70%
FinGPT - FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
SuperAGI - <⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
chathub - All-in-one chatbot client
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
canal - Alibaba's MySQL binlog incremental subscription & consumption component