test vs llama-dl

| | test | llama-dl |
|---|---|---|
| Mentions | 9 | 17 |
| Stars | 933 | 3,386 |
| Growth | - | - |
| Activity | 2.5 | 8.8 |
| Latest commit | 11 months ago | about 1 year ago |
| Language | Python | Shell |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
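The site's exact activity formula isn't published here; purely as an illustration, the sketch below shows the general idea of weighting recent commits more heavily than old ones. The exponential decay and the 90-day half-life are assumptions, not the real metric.

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=90):
    """Toy activity metric: each commit contributes a weight that decays
    exponentially with age, so recent commits count for more.
    NOTE: the decay shape and half-life are assumed for illustration."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400
        score += 0.5 ** (age_days / half_life_days)
    return round(score, 1)

# Example: two recent commits and one old one
dates = [
    datetime(2024, 3, 1, tzinfo=timezone.utc),
    datetime(2024, 2, 20, tzinfo=timezone.utc),
    datetime(2023, 1, 5, tzinfo=timezone.utc),
]
print(activity_score(dates))
```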
test
- Measuring Multitask Language Understanding
-
Mixtral 7B MoE beats LLaMA2 70B in MMLU
Sources:
[1] MMLU Benchmark (Multi-task Language Understanding) | Papers With Code https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
[2] MMLU Dataset | Papers With Code https://paperswithcode.com/dataset/mmlu
[3] hendrycks/test: Measuring Massive Multitask Language Understanding | ICLR 2021 - GitHub https://github.com/hendrycks/test
[4] lukaemon/mmlu · Datasets at Hugging Face https://huggingface.co/datasets/lukaemon/mmlu
[5] [2009.03300] Measuring Massive Multitask Language Understanding - arXiv https://arxiv.org/abs/2009.03300
-
BREAKING: Google just released its ChatGPT Killer
With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.
-
[Colab Notebook] Launch quantized MPT-30B-Chat on Vast.ai using text-generation-inference, integrated with ConversationChain
One method for comparison is the MMLU benchmark: https://arxiv.org/abs/2009.03300.
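The notebook above wires text-generation-inference into a LangChain ConversationChain; a minimal sketch of that pattern, assuming the older `langchain` import paths and a placeholder endpoint URL, might look like this:

```python
from langchain.llms import HuggingFaceTextGenInference
from langchain.chains import ConversationChain

# Placeholder URL: point this at a running text-generation-inference server,
# e.g. one launched on a Vast.ai instance as in the notebook.
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080",
    max_new_tokens=256,
    temperature=0.7,
)

chat = ConversationChain(llm=llm)  # keeps the chat history in memory between turns
print(chat.predict(input="What does the MMLU benchmark measure?"))
```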
- Partial Solution To AI Hallucinations
- Announcing GPT-4.
-
Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
Because there are many benchmarks that measure different things.
You need to look at the benchmark that reflects your specific interest.
So in this case ("I wasn't impressed that 30B didn't seem to know who Captain Picard was") the closest relevant benchmark they performed is MMLU (Massive Multitask Language Understanding) [1].
In the LLaMA paper they publish a figure of 63.4% for the 5-shot average setting without fine-tuning on the 65B model, and 68.9% after fine-tuning. This is significantly better than the original GPT-3 (43.9% under the same conditions), but as they note:
> "[it is] still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022))"
InstructGPT[2] (which OpenAI points at as most relevant ChatGPT publication) doesn't report MMLU performance.
[1] https://github.com/hendrycks/test
[2] https://arxiv.org/abs/2203.02155
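For readers unfamiliar with the "5-shot" setting mentioned above: the model sees five worked examples from a subject's dev split before the test question. A rough sketch of building such a prompt from the CSV layout used in the hendrycks/test repo (question, four choices, answer letter) follows; the file paths are assumptions about where the data lives.

```python
import csv

CHOICES = ["A", "B", "C", "D"]

def format_example(row, include_answer=True):
    # hendrycks/test CSV rows: question, choices A..D, answer letter
    question, a, b, c, d, answer = row
    text = f"{question}\n"
    for letter, choice in zip(CHOICES, (a, b, c, d)):
        text += f"{letter}. {choice}\n"
    text += "Answer:"
    if include_answer:
        text += f" {answer}\n\n"
    return text

def build_5shot_prompt(dev_rows, test_row, subject="college physics"):
    prompt = f"The following are multiple choice questions (with answers) about {subject}.\n\n"
    for row in dev_rows[:5]:  # five in-context examples
        prompt += format_example(row)
    prompt += format_example(test_row, include_answer=False)
    return prompt

# Assumed paths into the repo's data/ directory:
# dev = list(csv.reader(open("data/dev/college_physics_dev.csv")))
# test = list(csv.reader(open("data/test/college_physics_test.csv")))
# print(build_5shot_prompt(dev, test[0]))
```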
-
DeepMind's newest language model, Chinchilla (70B parameters), significantly outperforms Gopher (280B) and GPT-3 (175B) on a large range of downstream evaluation tasks
The benchmark result is 67.6%, a 7.6-point improvement over Gopher. MMLU is multiple-choice Q&A over a range of subjects. The questions can be found linked in this GitHub repo (see data).
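Since the questions ship as plain CSVs in that repo, scoring a model reduces to comparing predicted answer letters against the last column. A minimal sketch, where `predict_letter` is a placeholder for whatever model is under test and the macro-average stands in for the papers' headline number:

```python
import csv
import glob
import os

def mmlu_accuracy(data_dir, predict_letter):
    """Score answer letters against the MMLU test CSVs
    (columns: question, A, B, C, D, answer).
    `predict_letter` is a placeholder: any callable mapping a row to "A".."D"."""
    per_subject = {}
    for path in glob.glob(os.path.join(data_dir, "test", "*_test.csv")):
        subject = os.path.basename(path).replace("_test.csv", "")
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        correct = sum(predict_letter(row) == row[5] for row in rows)
        per_subject[subject] = correct / len(rows)
    average = sum(per_subject.values()) / len(per_subject)
    return average, per_subject

# e.g. a trivial always-"A" baseline (should land near 25%):
# avg, by_subject = mmlu_accuracy("data", lambda row: "A")
```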
llama-dl
-
Gitlab confirms it's removed Suyu, a fork of Nintendo Switch emulator Yuzu
There seems to be some confusion here. Let me step in as someone who has gone through this.
My repo https://github.com/shawwn/llama-dl was taken down last March by Facebook. They asserted copyright over LLaMA, which is obviously bogus since it was trained on data they do not own the copyright to. I was bummed about this, but after I mentioned on HN that I was willing to fight Meta, an anonymous person named L contacted me and sent $20k of Monero to cover legal fees. I was also contacted by an amazing lawyer who wanted to represent me in this. I was absurdly fortunate on both counts.
He drafted a counternotice, we sent it, and then my repo was restored within a week or so.
GitHub had no choice in the matter. Legally this is a required process. Ditto for GitLab. Both are US companies.
When YouTube-dl was taken down some time ago by a DMCA, Nat went to bat and got it restored, and GitHub made some sort of pledge to cover legal fees associated with bogus takedown requests.
Here’s the shitty part for this particular situation. A case can be made that the emulator is for the purpose of circumventing copyright protection mechanisms. This, sadly, is a solid legal basis for issuing a lawful takedown, as much as we all absolutely despise that idea. It’s pretty clear cut; Nintendo doesn’t want Switch games to be run on non-Nintendo platforms, and the emulator seeks to enable Switch games to be run on any platform. Therefore, the intent of the emulator is to circumvent Nintendo’s protection mechanisms.
So where does this leave us? Well, the team can file a counternotice. GitLab will restore the repo. But that opens up the team to a lawsuit by Nintendo. And as much as I want to stand up to bullies, there’s a difference between standing up to a guy shoving a kid in a locker vs standing up to a Silverback gorilla charging at you. Nintendo’s legal history implies the latter.
Welcome to Nintendo pain. The Smash community has been dealing with Nintendo’s BS for decades now. They shut down tournaments that use emulators for Smash Melee. And no one can do anything, because it’s their legal right to do so.
- [Chat GPT] Meta's LLaMA LLM has leaked – run uncensored AI on your home PC!
-
Run LLaMA and Alpaca on your computer
Your philosophical argument is interesting, but what the OP was saying is that one of the linked repos is inaccessible due to a DMCA takedown: https://github.com/shawwn/llama-dl
So while what you say may be true, the DMCA seems to have worth for these orgs because they can get code removed by the host, who is uninterested in litigating, and the repo owner is likely even less capable of litigating the DMCA.
Unfortunately, as a tool of fear and legal gridlock, the DMCA has shown itself to be very useful to those with ill intent.
- Meta DMCAs llama-dl Repository
- Load LLaMA Models Instantly
-
Is there some sort of open-source equivalent of this?
Here are some useful links: https://github.com/shawwn/llama-dl and https://rentry.org/llama-tard-v2#tips-and-tricks
- FLiP Stack Weekly for 13 March 2023
-
Using LLaMA with M1 Mac and Python 3.11
Sure. You can get the models via magnet link from here: https://github.com/shawwn/llama-dl/
To get running, just follow these steps: https://github.com/ggerganov/llama.cpp/#usage
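The linked steps cover the llama.cpp CLI; for the Python 3.11 setup this post is about, a minimal alternative sketch using the llama-cpp-python bindings is below. The model path is a placeholder and assumes the weights have already been converted and quantized as described in the llama.cpp README.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: a model file produced by llama.cpp's convert/quantize scripts
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin", n_ctx=512)

output = llm(
    "Q: Who is Captain Picard? A:",  # prompt
    max_tokens=64,
    stop=["Q:"],                     # stop before generating the next question
)
print(output["choices"][0]["text"])
```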
-
New JailBreak prompt + How to stop flagging/blocking!
https://rentry.org/llama-tard-v2#tips-and-tricks https://github.com/shawwn/llama-dl
- LLaMA, Meta's ChatGPT, leaks on the internet and can already be downloaded
What are some alternatives?
mmfewshot - OpenMMLab FewShot Learning Toolbox and Benchmark
llama.cpp - LLM inference in C/C++
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
llama - Inference code for Llama models
RAD - RAD Expansion Unit for C64/C128
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
ut - C++20 μ(micro)/Unit Testing Framework
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
elm-test-rs - Fast and portable executable to run your Elm tests
dalai - The simplest way to run LLaMA on your local machine
egghead - discord bot for ai stuff
llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2