| | Learn_Prompting | openai-cookbook |
|---|---|---|
| Mentions | 77 | 215 |
| Stars | 4,160 | 55,954 |
| Growth | - | 1.0% |
| Activity | 9.3 | 9.5 |
| Last commit | about 2 months ago | 4 days ago |
| Language | MDX | MDX |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Learn_Prompting
-
Ask HN: Where to learn the cutting edge of prompt engineering?
- https://learnprompting.org
-
Announcing HackAPrompt, the first-ever global Prompt Hacking competition (Sponsored by OpenAI, PreambleAI, ScaleAI, & HuggingFace)
In May 2023, our team at Learn Prompting hosted HackAPrompt, the first-ever global prompt hacking competition, aimed at improving AI safety. Over 3,000 hackers competed to outsmart and trick large language models (LLMs) for a share of over $35K in prizes, sponsored by industry leaders such as OpenAI, Scale AI, Preamble, StabilityAI, Snorkel, and Hugging Face. Their support enabled our team to collect over 600,000 adversarial prompts, which we analyzed to improve the overall safety of LLMs.
-
Enhancing AI Interaction: A Guide to Prompt Engineering
DeepLearning.AI Prompt engineering Learn Prompting
-
HackAPrompt 2023
The HackAPrompt competition, organized by Learn Prompting, challenges participants to exploit the vulnerabilities of LLMs (Large Language Models) through a process known as prompt hacking. This involves tricking the AI into expressing unintended outputs, specifically the phrase "I have been PWNED", while abiding by strict rules regarding punctuation and additional characters.
-
Discover and share interesting ChatGPT chats
I’m always curious about how others use ChatGPT. It’s awesome that there are lots of online tutorials (e.g., books, websites, and courses) teaching prompt engineering. Most of them follow a single Q&A style, that is, one question followed by a single answer, which is great and useful. However, I’m more interested in seeing longer conversations with better context, and it’s great that ChatGPT added the feature to share chats. I created a directory website/community for this, so that we can all discover, share, and be creative.
-
Seeking Your Top Recommendations for Resources on ChatGPT and Generative AI
Solid website on prompting
-
🧰 AI Tools +150 Tools in 6 Categories
Midjourney - An old dog in the new industry. Text-to-art; learn prompting to get better results.
- What job can you do so that you are in a different country every month?
-
Prompt Engineering Courses are a SCAM!!!!
https://learnprompting.org/ only thing you'll ever need...and all free
-
Want to learn prompt engineering
You can start here
openai-cookbook
-
Question-Answer System Architectures using LLMs
A pretrained LLM is a closed-book system: it can only access information that it was trained on. With domain fine-tuning, the system can draw on additional material. An early prototype of this technique was shown in this OpenAI cookbook: text from the target domain was embedded using an API, and at query time the most relevant embeddings were retrieved via semantic similarity search to formulate an answer. Although this approach evolved into retrieval-augmented generation, it's still a useful technique for adapting a Gen2 (2020) or Gen3 (2022) LLM into a question-answering system.
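The retrieval step described above can be sketched in a few lines. This is a toy illustration that substitutes bag-of-words vectors for API embeddings; the helper names (`embed`, `retrieve`, `build_prompt`) are made up for the example, not taken from the cookbook:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a word-count vector.
    # A real system would call an embeddings API here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=1):
    # Rank the corpus by similarity to the question, keep the top k.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(question, documents):
    # The retrieved passages become the "open book" the LLM answers from.
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQ: {question}\nA:"

docs = [
    "The 2020 Olympics were held in Tokyo.",
    "Python is a programming language.",
]
print(build_prompt("Where were the 2020 Olympics held?", docs))
```

The prompt built this way is then sent to the LLM, which is how a closed-book model ends up answering from material it was never trained on.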
-
Ask HN: High quality Python scripts or small libraries to learn from
https://github.com/openai/openai-cookbook/blob/main/examples...
- Collection of notebooks showcasing some fun and effective ways of using Claude
- OpenAI Cookbook: Techniques to improve reliability
- OpenAI Cookbooks
-
How to fine tune vit/convnet to focus on the layout of the input room image and ignore other things ?
It sounds like you are trying to tweak embeddings for similarity search. Rather than fine-tune the model's layers, you may want to try training a linear transformation on the existing model's output embeddings. OpenAI has a cookbook on how to do that. You will need some data, but I think you can try it with ~20 pieces of synthetically generated data.
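A minimal sketch of that idea: keep the base model frozen, and learn a matrix `W` applied to its output embeddings so that labeled similar pairs score higher than dissimilar ones. The synthetic pairs and the squared-error loss below are illustrative choices, not the cookbook's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def unit(v):
    return v / np.linalg.norm(v)

# Frozen base embeddings (stand-ins for the vision model's outputs).
# label 1.0 = the two images should embed near each other, 0.0 = not.
pairs = []
for _ in range(20):
    a = unit(rng.normal(size=dim))
    if rng.random() < 0.5:
        b = unit(a + 0.3 * rng.normal(size=dim))  # perturbed copy -> similar
        pairs.append((a, b, 1.0))
    else:
        b = unit(rng.normal(size=dim))            # unrelated -> dissimilar
        pairs.append((a, b, 0.0))

def sim(W, a, b):
    # Similarity in the learned space: (W a) . (W b)
    return (W @ a) @ (W @ b)

def loss(W):
    return np.mean([(sim(W, a, b) - y) ** 2 for a, b, y in pairs])

# Learn W by plain gradient descent; the base model's weights never change.
W = np.eye(dim)
lr = 1e-3
for _ in range(300):
    grad = np.zeros_like(W)
    for a, b, y in pairs:
        err = sim(W, a, b) - y
        # d/dW of (a^T W^T W b) is W (a b^T + b a^T)
        grad += 2 * err * (W @ (np.outer(a, b) + np.outer(b, a)))
    W -= lr * (grad / len(pairs))

print(f"loss before: {loss(np.eye(dim)):.4f}  after: {loss(W):.4f}")
```

At query time you embed as usual and multiply by `W` before the similarity search, which is why this works with only a handful of labeled (or synthetic) pairs.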
-
Best base model 1B or 7B for full finetuning
tutorial from OpenAI https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb
-
Resources to learn ChatGPT and the OpenAI API
OpenAI Cookbook
- OpenAI Cookbook
-
Another Major Outage Across ChatGPT and API
OpenAI community repo with lots of examples: https://github.com/openai/openai-cookbook
What are some alternatives?
Prompt-Engineering-Guide - 🐙 Guides, papers, lecture, notebooks and resources for prompt engineering
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
askai - Command Line Interface for OpenAI ChatGPT
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
ChatGPT_DAN - ChatGPT DAN and other jailbreak prompts
chatgpt-retrieval-plugin - The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
AgentGPT - 🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]
engshell - An English-language shell for any OS, powered by LLMs
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows