chatgpt-failures vs llama.cpp

| | chatgpt-failures | llama.cpp |
|---|---|---|
| Mentions | 20 | 776 |
| Stars | 574 | 57,463 |
| Growth | - | - |
| Activity | 1.2 | 10.0 |
| Latest commit | about 1 year ago | 5 days ago |
| Language | Python | C++ |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chatgpt-failures
-
OpenAI Research Says 80% of U.S. Workers' Jobs Will Be Impacted by GPT
Related / fun:
https://emaggiori.com/chatgpt-fails/
(I don't know the author or the book.)
https://github.com/giuven95/chatgpt-failures
"Plausible explanations
-
People still behaving like everything is normal!
There is a whole list of ChatGPT failures here, such as the time-traveling user error. Why did it insist the user had time traveled? Because when it made the initial error, it fed that error into the "attention" state of the conversation, via matrix addition, as if it were an authoritative fact about the evolving conversation. That matrix is the only piece of the conversation it retains for transforming new input from the user. The matrix addition is nonlinear, so ChatGPT can't simply unwind such a mistake and redo the addition with valid information. So, when pressed, it generates the most "probable" explanation: the one that best fits the existing matrix, which at that moment defines the "true" state of the conversation. The "attention" component of ChatGPT, the part that makes it so convincingly powerful, is a matrix state that cannot unwind itself or reflect on how it came to be. There is no self-correction mechanism, and there cannot be one, when such a matrix defines the state of a conversation.
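As an aside, here is a minimal NumPy sketch of causal self-attention (the toy tokens and sizes are my own; real models mix context via weighted sums rather than the literal "matrix addition" the comment describes). It illustrates the point being made: every later position's representation is computed from all earlier tokens, erroneous ones included, with no step that removes a mistake from the running state.

```python
# Minimal single-head causal self-attention sketch (NumPy).
# Every row of `out` is a weighted sum over ALL earlier tokens,
# so an early mistake keeps feeding into every later state.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # toy embedding size
tokens = ["user", "said", "they", "time", "traveled"]  # hypothetical context
X = rng.normal(size=(len(tokens), d))   # stand-in embeddings

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)
mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
scores[mask] = -np.inf                  # causal mask: no attending to the future
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

out = weights @ V                       # each position mixes in all prior tokens
print(weights[-1].round(3))             # last token's attention over the context
```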
-
GPT-4
Is anybody compiling a list of errors specific to GPT-4?
This has been a great resource to date:
https://github.com/giuven95/chatgpt-failures
-
Thoughts on ChatGPT and the future of e-commerce
As a result, it confidently asserts obvious inaccuracies, like "it takes 9 women 1 month to make a baby."
-
Amid ChatGPT outcry, some teachers are inviting AI to class
Here's a GitHub repo of ChatGPT failures. There's a PopSci article describing what ChatGPT is doing and why it's susceptible to errors: it's trying to predict which words are used together, irrespective of whether the information is correct.
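To make that concrete, here is a toy sketch of "predicting which words go together": a bigram counter that picks the statistically likeliest next word with no notion of truth. The corpus is invented for illustration.

```python
# Toy bigram "language model": the most frequent continuation wins,
# whether or not it is factually right.
from collections import Counter, defaultdict

corpus = ("the capital of france is paris . "
          "the capital of france is nice .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev: str) -> str:
    # Highest co-occurrence count wins; correctness never enters into it.
    return bigrams[prev].most_common(1)[0][0]

print(predict("is"))  # whichever word followed "is" most often in the corpus
```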
-
Just got Access !
You could try any of the prompts ChatGPT failed on, collected here: https://github.com/giuven95/chatgpt-failures
-
The new Bing AI hallucinated during the Microsoft demo. A reminder these tools are not reliable yet
In this article, the author Dmitri Brereton shows some mistakes the Bing AI made during the recent Microsoft demo. I have archived more failure-case examples in this repo: https://github.com/giuven95/chatgpt-failures
- The ethics of robotics: me against ChatGPT
- How worried are you about AI replacing you?
-
ChatGPT updated with improved factuality and mathematical capabilities.
Previously it said 1.
llama.cpp
-
IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile things yourself, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face.
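As a hedged sketch of that last step, here is how you might pull a GGUF file from Hugging Face with the `huggingface_hub` library; the repo and filename below are illustrative placeholders, not a recommendation.

```python
# Download a GGUF model file for llama.cpp from the Hugging Face Hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-GGUF",   # example repo (assumption)
    filename="llama-2-7b.Q4_K_M.gguf",    # example quantization file (assumption)
)
print(path)  # local path you can pass to llama.cpp, e.g. ./main -m <path>
```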
-
Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
- Xmake: A modern C/C++ build tool
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
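The paper's core idea, a shared trunk with several heads where each head predicts a token further ahead, can be sketched in a few lines. This is a toy PyTorch illustration with invented names and sizes, not the paper's or llama.cpp's implementation.

```python
# Toy multi-token prediction module (PyTorch): one shared trunk and
# n_future linear heads, where head i predicts the token (i+1) steps ahead.
import torch
import torch.nn as nn

vocab, d_model, n_future = 1000, 64, 4    # invented sizes

class MultiTokenPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab) for _ in range(n_future))

    def forward(self, x):                 # x: (batch, seq, d_model)
        h = self.trunk(x)                 # shared representation
        return [head(h) for head in self.heads]

logits = MultiTokenPredictor()(torch.randn(2, 16, d_model))
print([tuple(l.shape) for l in logits])   # 4 heads x (2, 16, 1000)
```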
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
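The tutorial itself walks through llama.cpp and KitOps; as a complementary sketch, here is the LoRA-attachment step using Hugging Face's `peft` library. The base model and hyperparameters are illustrative assumptions, not the tutorial's exact configuration.

```python
# Attach LoRA adapters so only small low-rank matrices are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                       # rank of the low-rank update matrices
    lora_alpha=16,             # scaling factor for the updates
    target_modules=["c_attn"], # GPT-2's fused attention projection
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```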
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
Well, there's https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
gpt4all - gpt4all: run open-source LLMs anywhere
Milvus - A cloud-native vector database, storage for next generation AI applications
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
reflex - 🕸️ Web apps in pure Python 🐍
ggml - Tensor library for machine learning
othello_world - Emergent world representations: Exploring a sequence model trained on a synthetic task
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM