chatgpt-failures vs stanford_alpaca

| | chatgpt-failures | stanford_alpaca |
|---|---|---|
| Mentions | 20 | 108 |
| Stars | 574 | 28,856 |
| Growth | - | 0.9% |
| Activity | 1.2 | 2.0 |
| Latest commit | about 1 year ago | about 2 months ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chatgpt-failures
-
OpenAI Research Says 80% of U.S. Workers' Jobs Will Be Impacted by GPT
Related / fun:
https://emaggiori.com/chatgpt-fails/
(I don't know the author or the book.)
https://github.com/giuven95/chatgpt-failures
"Plausible explanations
-
People still behaving like everything is normal!
There is a whole list of ChatGPT failures here, such as the time-traveling-user error. Why did it insist the user had time traveled? Because when it made the initial error, it fed that error into the "attention" state of the conversation, via matrix addition, as if it were an authoritative fact of the evolving conversation. That matrix is the only piece of the conversation it retains for transforming new input from that user. The matrix addition is nonlinear, so ChatGPT can't simply unwind such a mistake and redo the addition with valid information. So, when pressed, it generates the most "probable" explanation: the best fit to the existing matrix that defines the "true" state of the conversation. The "attention" part of ChatGPT, which is what makes it so convincingly powerful, is a matrix state that cannot unwind itself or reflect on how it came to be. There is no self-correction mechanism, and there cannot be one, when such a matrix defines the state of the conversation.
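The state-accumulation point is easier to see as a toy chat loop. A minimal sketch, assuming a plain autoregressive interface; `generate()` is a placeholder for the real model call, not any actual API:

```python
# Toy chat loop: every reply the model produces is appended to the running
# context, so a wrong "fact" in an early answer conditions all later answers.
def generate(prompt: str) -> str:
    raise NotImplementedError("call the language model here")

history: list[str] = []

def chat_turn(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)               # conditioned on the full history,
    history.append(f"Assistant: {reply}")  # including earlier erroneous replies
    return reply
```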
-
GPT-4
Is anybody compiling a list of errors specific to GPT-4?
This has been a great resource to date:
https://github.com/giuven95/chatgpt-failures
-
Thoughts on ChatGPT and the future of e-commerce
As a result, it confidently asserts obvious inaccuracies, like "it takes 9 women 1 month to make a baby."
-
Amid ChatGPT outcry, some teachers are inviting AI to class
Here's a GitHub repo of ChatGPT failures. There's a PopSci article describing what ChatGPT is doing and why it's susceptible to errors: it's trying to predict which words are used together, irrespective of whether the information is correct.
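As a toy illustration of that point (a real LLM uses learned contextual representations, not raw co-occurrence counts, but the incentive is the same), here is a hedged bigram sketch that always emits the most frequent next word, with no notion of truth:

```python
# A bigram "language model": count which word follows which, then always emit
# the most frequent continuation. Nothing here checks whether the output is true.
from collections import Counter, defaultdict

corpus = "the sun rises in the east . the sun sets in the west .".split()
nexts: dict[str, Counter] = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

word = "the"
out = [word]
for _ in range(4):
    word = nexts[word].most_common(1)[0][0]  # most probable next word
    out.append(word)
print(" ".join(out))  # fluent-looking, never fact-checked
```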
-
Just got Access!
You could try any of the prompts that ChatGPT failed on, listed here: https://github.com/giuven95/chatgpt-failures
-
The new Bing AI hallucinated during the Microsoft demo. A reminder these tools are not reliable yet
In this article, the author Dmitri Brereton shows some of the mistakes the Bing AI made during the recent Microsoft demo. I have archived more failure-case examples in this repo: https://github.com/giuven95/chatgpt-failures
- The ethics of robotics. Me against ChatGPT
- How worried are you about AI replacing you?
-
ChatGPT updated with improved factuality and mathematical capabilities.
Previously it said 1.
stanford_alpaca
-
How Open is Generative AI? Part 2
Alpaca is an instruction-oriented LLM derived from LLaMA, enhanced by Stanford researchers with a dataset of 52,000 instruction-following examples generated from OpenAI's InstructGPT via the self-instruct method. The self-instruct dataset, the details of data generation, and the model refinement code were all publicly released. The model complies with the licensing requirements of its base model. Because InstructGPT was used for data generation, it also falls under OpenAI's usage terms, which prohibit creating models that compete with OpenAI. This illustrates how dataset restrictions can indirectly constrain the resulting fine-tuned model.
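The self-instruct loop is simple to picture. A minimal sketch of the idea, not Alpaca's actual code: `generate()` stands in for a call to an InstructGPT-class model, and the prompt template and stopping rule are illustrative assumptions:

```python
import random

def generate(prompt: str) -> str:
    raise NotImplementedError("call an instruction-following model here")

pool = [  # a few human-written seed tasks
    {"instruction": "Give three tips for staying healthy.", "output": "..."},
    {"instruction": "Translate 'good morning' into French.", "output": "Bonjour."},
]

def self_instruct_step() -> dict:
    # Show the model a few existing tasks and ask it to invent a new one.
    examples = random.sample(pool, k=min(3, len(pool)))
    prompt = "Here are some example tasks:\n"
    for ex in examples:
        prompt += f"Instruction: {ex['instruction']}\nOutput: {ex['output']}\n"
    prompt += "Now write one new, different instruction and its output."
    text = generate(prompt)
    instruction, _, output = text.partition("\n")
    return {"instruction": instruction.strip(), "output": output.strip()}

# Repeat, de-duplicating near-identical instructions, until the pool reaches
# the target size -- 52,000 examples in Alpaca's case.
```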
- Ask HN: AI/ML papers to catch up with current state of AI?
- OpenAI board in discussions with Sam Altman to return as CEO
- Are there any AI like ChatGPT without content restrictions?
-
Fine-tuning LLMs with LoRA: A Gentle Introduction
In this article, we're going to experiment with LoRA and fine-tune a LLaMA-based Alpaca model on consumer hardware.
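For readers who want the gist: a minimal LoRA setup sketch using Hugging Face's peft library. The base-model id and hyperparameters below are illustrative assumptions, not the article's exact configuration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed id

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the base model
```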
-
Creating a new Finetuned model
Most papers I read showed at least a thousand examples, and in several cases even 10,000, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), with the minimum being openchat/openchat · Hugging Face; there are many more examples.)
-
Shock tick up for wage growth to 7.3% in blow for Bank of England
I'm not talking about OpenAI's ChatGPT; I'm talking about things like Alpaca. And where did they train these models? Off the existing models, for a fraction of a fraction of a fraction of the cost: https://crfm.stanford.edu/2023/03/13/alpaca.html
- Bye bye Bing
-
The idea maze for AI startups (2015)
I think there's a new approach for “How do you get the data?” that wasn't available when this article was written in 2015. The new text and image generative models can now be used to synthesize training datasets.
I was working on a typing autocorrect project and needed a corpus of "text messages". Most of the traditional NLP corpora, like those available through NLTK [0], aren't suitable. But it was easy to script ChatGPT to generate thousands of believable text messages by throwing random topics at it.
Similarly, you can synthesize a training dataset by giving GPT the outputs/labels and asking it to generate a variety of inputs. For sentiment analysis... "Give me 1000 negative movie reviews" and "Now give me 1000 positive movie reviews".
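A minimal sketch of that recipe, assuming the current OpenAI Python client; the model id and prompt wording are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def synth_reviews(sentiment: str, n: int = 10) -> list[str]:
    """Ask the model for n short movie reviews with the given sentiment."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model id
        messages=[{
            "role": "user",
            "content": f"Write {n} short {sentiment} movie reviews, one per line.",
        }],
    )
    return [line for line in resp.choices[0].message.content.splitlines() if line]

dataset = [(r, "neg") for r in synth_reviews("negative")]
dataset += [(r, "pos") for r in synth_reviews("positive")]
```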
The Alpaca folks used GPT-3 to generate high-quality instruction-following datasets [1] based on a small set of human samples.
Etc.
[0] https://www.nltk.org/nltk_data/
[1] https://crfm.stanford.edu/2023/03/13/alpaca.html
-
Repos and tutorials for a full finetune (not LoRA)
AFAIK, the original Alpaca repo was a full fine-tune. https://github.com/tatsu-lab/stanford_alpaca
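For orientation, a full fine-tune simply leaves every weight trainable, in contrast to adapter methods like LoRA that freeze the base model. A tiny sketch, with "gpt2" as a small stand-in for LLaMA:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for LLaMA

# Full fine-tune: nothing is frozen, so the optimizer updates every weight.
total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable: {trainable:,} / {total:,}")  # equal for a full fine-tune
```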
What are some alternatives?
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
Milvus - A cloud-native vector database, storage for next generation AI applications
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
reflex - 🕸️ Web apps in pure Python 🐍
llama.cpp - LLM inference in C/C++
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
othello_world - Emergent world representations: Exploring a sequence model trained on a synthetic task
Alpaca-Turbo - Web UI to run alpaca model locally