nn-zero-to-hero vs awesome-chatgpt-prompts

| | nn-zero-to-hero | awesome-chatgpt-prompts |
|---|---|---|
| Mentions | 10 | 157 |
| Stars | 10,499 | 104,393 |
| Growth | - | - |
| Activity | 2.4 | 7.0 |
| Last commit | 8 days ago | 7 days ago |
| Language | Jupyter Notebook | HTML |
| License | MIT License | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
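The page doesn't publish the actual formula, but a scale where 9.0 means "top 10%" behaves like a percentile rank rescaled to 0-10. A purely hypothetical sketch of such a score:

```python
# Purely hypothetical sketch -- the real formula is not published here.
# A score where 9.0 means "top 10%" acts like a percentile rank on 0-10.
def activity(project_weight: float, all_weights: list[float]) -> float:
    below = sum(w < project_weight for w in all_weights)
    return round(10 * below / len(all_weights), 1)

# A project more recently active than 24 of 100 tracked projects scores 2.4.
print(activity(25.0, [float(i) for i in range(1, 101)]))  # 2.4
```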
nn-zero-to-hero
- Understanding GPT Tokenizers
Andrej covers this in https://github.com/karpathy/nn-zero-to-hero. He explains things in multiple ways: both the matrix multiplications and the "programmer's" way of thinking about it, i.e. the lookups. The downside is that it takes a while to get through those lectures. I would say for each 1 hour you need another 10 to look stuff up and practice, unless you are fresh out of calculus and linear algebra classes.
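As a taste of the "two views" this comment mentions, here is a minimal NumPy sketch (mine, not from the course) showing that multiplying a one-hot vector by the embedding matrix and doing a plain row lookup compute the same thing:

```python
import numpy as np

vocab_size, emb_dim = 5, 3
rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, emb_dim))  # embedding table

token = 2
one_hot = np.zeros(vocab_size)
one_hot[token] = 1.0

via_matmul = one_hot @ E  # the matrix-multiplication view
via_lookup = E[token]     # the "programmer's" lookup view

assert np.allclose(via_matmul, via_lookup)
```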
- New to AI and ChatGPT - Where do I start?
- Let's Create Our Own ChatGPT From Scratch! — An online discussion group starting Tuesday May 16, monthly meetings
All the needed course material is here: https://github.com/karpathy/nn-zero-to-hero
- Any good content for software engineers looking to delve deeper into LLMs/AI/NLP etc?
- GPT in 60 Lines of NumPy
That concept is not the easiest to describe succinctly in a file like this, I think (especially as there are various levels of 'beginner' to take into account here). It is considered a very entry-level concept, and I think there might be others who would consider it noise if logged in the code or described in the comments/blogpost.
After all, there was a disclaimer up front in the blogpost that you might have missed: "This post assumes familiarity with Python, NumPy, and some basic experience training neural networks." So it is in there! But in the firehose of info we all get, maybe it is easy to miss.
However, I'm here to help! Thankfully the concept is not too terribly difficult, I believe.
Effectively, the loss function compresses the task we've described with our labels from our training dataset into our neural network. Ideally this includes 'all' the information the neural network needs to perform that task well, according to the data we have, at least. If you'd like to know more about the specifics, I'd refer you to the original Shannon-Weaver paper on information theory -- Weaver's introduction to the topic is in plain English and accessible to (I believe) nearly anyone off the street with enough time and energy to think through and parse some of the concepts. Very good stuff! An initial read-through should take no more than half an hour to an hour or so, and should change the way you think about the world if you've not been introduced to the topic before. You can read a scan of the book at a university-hosted link here: https://raley.english.ucsb.edu/wp-content/Engl800/Shannon-We...
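To make the "loss compresses the task" idea concrete, here is a tiny NumPy sketch (toy numbers of my own) of the cross-entropy loss typically used for next-token prediction; the loss is literally the number of nats needed to encode the true token under the model's predicted distribution:

```python
import numpy as np

# Toy logits over a 5-token vocabulary; token 0 is the true next token.
logits = np.array([2.0, 0.5, -1.0, 0.1, 0.3])
target = 0

probs = np.exp(logits - logits.max())
probs /= probs.sum()           # softmax -> predicted distribution
loss = -np.log(probs[target])  # nats needed to encode the right answer
print(loss)  # small when the model already puts mass on the right token
```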
Using some of the concepts of Shannon's theory, we can see that anything that minimizes an information-theoretic loss function should indeed also learn the prerequisites of the task at hand (features that identify xyz, features that move information about xyz from place A to B in the neural network, etc). In this case, even though it appears we do not have labels -- we certainly do! We are training on predicting the _next words_ in a sequence, and thus humans have already created a very, _very_ richly labeled dataset for free! In this way, getting the data is much easier and the bar to entry for high performance for a neural network is very low -- especially if we want to pivot and 'fine-tune' to other tasks. This is because, to learn the task of predicting the next word, the neural network has to learn tons of other sub-tasks that overlap with the tasks we want to perform. And because of the nature of spoken/written language, to truly perform incredibly well, sometimes it has to learn those alternative tasks well enough that little-to-no fine-tuning on human-labeled data for a 'secondary' task (for example, question answering) is required! Very cool stuff.
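A tiny sketch of the "richly labeled dataset for free" point, with made-up token ids:

```python
import numpy as np

# Shifting the sequence by one position turns raw text into
# (input, target) training pairs with zero human annotation.
tokens = np.array([5, 27, 3, 99, 14, 8])  # an encoded sentence

inputs  = tokens[:-1]  # what the model sees
targets = tokens[1:]   # what it must predict: the next token

for x, y in zip(inputs, targets):
    print(f"input {x} -> target {y}")
```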
This is a very rough introduction; I have not condensed it as much as it could be, and certainly it is wordier than it should be. But it's an internet comment, so this is probably the most I should put into it for now. I hope this helps set you forward a bit on your journey of neural network explanation! :D :D <3 <3 :)))) 🎆
For reference, I'm very interested in what I refer to as Kolmogorov-minimal explanations (see Wikipedia's 'Kolmogorov complexity' once you chew through some of that paper, if you're interested! I am still very much a student of it, but it is a fun topic). In fact (though this repo performs several functions), I made https://github.com/tysam-code/hlb-CIFAR10 as beginner-friendly as possible. One does have to make some decisions to keep verbosity down, and I assume a very basic understanding of what's happening in neural networks there, too.
I have yet to find a good go-to conceptual intro to neural networks (I started with Hinton -- love the man, but extremely mathematically technical as a foundation! D:). Karpathy might have a really good one; I think I saw a zero-to-hero course from him a little while back that seemed very promising.
Andrej (practically) got me into deep learning via some of his earlier work, and I really love basically everything I've seen the man put out. I skimmed the first video of this series and it seems pretty darn good; I trust his content. You should take a look! (GitHub and first video: https://github.com/karpathy/nn-zero-to-hero, https://youtu.be/VMj-3S1tku0)
For reference, he is the person who has made a lot of cool things recently, including his own minimal GPT (https://github.com/karpathy/minGPT) and the much smaller version of it (https://github.com/karpathy/nanoGPT). But of course, since we are in this blog post, I would refer you to the 60-line NumPy GPT first (A. to keep us on track; B. because I skimmed it and it seemed very helpful!). I'd recommend taking a look at outside sources if you're feeling particularly voracious about expanding your knowledge here.
I hope this helps give you a solid introduction to the basics of this concept. For anyone else reading this, feel free to let me know if you have any technically (or otherwise) appropriate questions. Many thanks and much love! <3 <3 :DDDD :))))
- Trending ML repos of the week 📈
6️⃣ karpathy/nn-zero-to-hero
- What can I do to start learning machine learning?
I'm a software engineer with zero experience in ML but an interest in learning. I am comfortable programming in any dynamic object-oriented language. My basic plan to get started is to spend some time with the mathematical foundations of ML (the Udemy course "Mathematical Foundations of Machine Learning" looks decent). It also covers these concepts in the context of popular ML frameworks such as TensorFlow and PyTorch, so that's kind of a two-for-one. I also stumbled upon this course: https://github.com/karpathy/nn-zero-to-hero.
- Neural Networks: Zero to Hero
- Artificial intelligence (Hungarian: Mesterséges intelligencia)
awesome-chatgpt-prompts
- Top ChatGPT prompts I could find with ranking system
- FLaNK Stack Weekly 12 February 2024
- 🌌 5 Open-Source GPT Wrappers to Boost Your AI Experience 🎁
Aside from the built-in prompts powered by awesome-chatgpt-prompts (Are you an ETH dev, a financial analyst, or a personal trainer today?), you can also create, share and debug your chat tools with prompt templates.
- Improving ChatGPT's responses with strategic prompts (Portuguese: Aprimorando as respostas do ChatGPT com prompts estratégicos)
- Ask HN: Daily practices for building AI/ML skills?
I've found the following resources helpful:
- 15 Rules For Crafting Effective GPT Chat Prompts (https://expandi.io/blog/chat-gpt-rules/)
- Awesome ChatGPT Prompts (https://github.com/f/awesome-chatgpt-prompts)
For more resources of this nature, you can search for "mega prompt".
- Prompt writing communities
Someone assembled an ad hoc page on GitHub that is amassing quite a large library of prompt ideas [GitHub]
- Ask HN: Collection of best GPT-4 prompts?
I like to use PromptLayer for this, but you could easily set up a simple CRUD web app to track prompts, average completion token length, and different variations (a rough sketch follows below).
There is also awesome-chatgpt-prompts (https://github.com/f/awesome-chatgpt-prompts) which has some interesting ones. What are you looking for?
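If you do roll your own tracker instead of PromptLayer, the record behind such a CRUD app can be tiny. A sketch, with hypothetical field names:

```python
import statistics
from dataclasses import dataclass, field

@dataclass
class PromptRecord:
    name: str       # e.g. a variation label like "summarizer-v2"
    template: str   # the prompt text with placeholders

    # completion token counts logged per run (hypothetical bookkeeping)
    completion_token_counts: list[int] = field(default_factory=list)

    def log_completion(self, token_count: int) -> None:
        self.completion_token_counts.append(token_count)

    @property
    def avg_completion_tokens(self) -> float:
        return statistics.mean(self.completion_token_counts)

record = PromptRecord("summarizer-v2", "Summarize the following text: {text}")
record.log_completion(212)
record.log_completion(187)
print(record.avg_completion_tokens)  # 199.5
```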
- Supercharge your writing with ChatGPT prompts
- Introducing YourChat: A multi-platform LLM chat client that supports the APIs of text-generation-webui and llama.cpp.
* Built-In Prompts: Channel creativity using integrated prompts sourced from github.com/f/awesome-chatgpt-prompts.
- Yet another ChatGPT generated workout... but modified.
So, I jumped on the ChatGPT fitness wagon to generate a New And Improved® workout that mixes bodybuilding and calisthenics. I used a pre-made prompt to generate a PPL+FB routine and specified things like fitness level, equipment, schedule, etc. to make it fit my current status. From there I adapted it to my needs and chose some exercises that I wanted to do every day: wrist and core.
What are some alternatives?
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
ChatGPT-pdf - A Chrome extension for downloading your ChatGPT history to PNG, PDF or a sharable link
minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
gpt-prompts-cli - CLI for selecting or defining prompts to use with the ChatGPT chatbot, which retrieves the prompts from the awesome-chatgpt-prompts repository.
llama.go - llama.go is like llama.cpp in pure Golang!
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]
tuning_playbook - A playbook for systematically maximizing the performance of deep learning models.
llm-workflow-engine - Power CLI and Workflow manager for LLMs (core package)
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
chatgpt-google-extension - A browser extension that enhances search engines with ChatGPT