elm-test-rs vs test

| | elm-test-rs | test |
|---|---|---|
| Mentions | 5 | 9 |
| Stars | 74 | 946 |
| Growth | - | - |
| Activity | 1.4 | 2.5 |
| Last commit | 7 months ago | 11 months ago |
| Language | Rust | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
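The site doesn't publish the exact formula behind these Activity numbers, but the stated idea (recent commits weighing more than old ones) is easy to picture. A purely hypothetical sketch, assuming a simple exponential half-life decay per commit:

```python
# Hypothetical illustration only: the comparison site does not disclose its
# real Activity formula. This sketch just models the stated idea that recent
# commits carry more weight than older ones, using exponential decay.

import math

def activity_score(commit_ages_in_days, half_life_days=90.0):
    """Sum of per-commit weights; a commit's weight halves every half_life_days."""
    return sum(math.exp(-math.log(2) * age / half_life_days)
               for age in commit_ages_in_days)

# A project with fresh commits outscores one whose commits are months old:
print(activity_score([1, 3, 10, 20]))    # recent activity -> higher score
print(activity_score([200, 250, 400]))   # stale activity  -> lower score
```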
elm-test-rs
-
elm-test and elm-test-rs
So I'm super interested in knowing more about your setup that makes elm-test so much faster than elm-test-rs for you. Would you mind opening an issue at https://github.com/mpizenberg/elm-test-rs?
-
Setting up an Elm project in 2022
To actually run the tests, however, there are currently two options. The first, as noted above, is node-test-runner, available from npm as elm-test. This utility runs the tests defined in your Elm code and reports the results. The second option, elm-test-rs, is written in Rust instead of Node. It has a handful of features that node-test-runner lacks, as well as some downsides (see the GitHub README for details), but in general both tools work very well for testing Elm code.
- Version 1.2 of elm-test-rs released (alternative to elm-test) with native ARM and Deno support
- Announcing elm-test-rs 1.0.0, a new test runner for the Elm language, built in Rust
-
Announcing elm-test-rs 1.0.0, a fast and portable executable to run your Elm tests!
More info in the readme at https://github.com/mpizenberg/elm-test-rs
test
- Measuring Massive Multitask Language Understanding
-
Mixtral 8x7B MoE beats LLaMA2 70B in MMLU
Sources:
[1] MMLU Benchmark (Multi-task Language Understanding) | Papers With Code: https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
[2] MMLU Dataset | Papers With Code: https://paperswithcode.com/dataset/mmlu
[3] hendrycks/test: Measuring Massive Multitask Language Understanding | ICLR 2021 - GitHub: https://github.com/hendrycks/test
[4] lukaemon/mmlu · Datasets at Hugging Face: https://huggingface.co/datasets/lukaemon/mmlu
[5] [2009.03300] Measuring Massive Multitask Language Understanding - arXiv: https://arxiv.org/abs/2009.03300
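For anyone who wants to poke at the benchmark itself, the lukaemon/mmlu mirror linked in [4] can be loaded with the Hugging Face datasets library. A small sketch, with the caveat that the per-subject config names and the input/A/B/C/D/target column layout are my reading of that mirror, not something stated in this thread:

```python
# Sketch: loading MMLU questions from the lukaemon/mmlu mirror on the
# Hugging Face hub. Column names (input, A-D, target) follow that mirror's
# layout as I understand it; treat them as assumptions.

from datasets import load_dataset  # pip install datasets

# Each of the 57 MMLU subjects is a separate config; "abstract_algebra" is one.
# Newer `datasets` releases may additionally require trust_remote_code=True.
ds = load_dataset("lukaemon/mmlu", "abstract_algebra", split="test")

row = ds[0]
print(row["input"])                            # the question text
print(row["A"], row["B"], row["C"], row["D"])  # the four answer options
print(row["target"])                           # the correct letter, e.g. "B"
```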
-
BREAKING: Google just released its ChatGPT Killer
With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.
-
[Colab Notebook] Launch quantized MPT-30B-Chat on Vast.ai using text-generation-inference, integrated with ConversationChain
One method for comparison is MMLU (https://arxiv.org/abs/2009.03300).
- Partial Solution To AI Hallucinations
- Announcing GPT-4.
-
Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
Because there are many benchmarks that measure different things.
You need to look at the benchmark that reflects your specific interest.
So in this case ("I wasn't impressed that 30B didn't seem to know who Captain Picard was"), the closest relevant benchmark they ran is MMLU (Massive Multitask Language Understanding) [1].
In the LLaMA paper they publish a figure of 63.4% for the 5-shot average setting without fine-tuning on the 65B model, and 68.9% after fine-tuning. This is significantly better than the original GPT-3 (43.9% under the same conditions), but as they note:
> "[it is] still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022))"
InstructGPT [2] (which OpenAI points to as the most relevant ChatGPT publication) doesn't report MMLU performance.
[1] https://github.com/hendrycks/test
[2] https://arxiv.org/abs/2203.02155
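For readers unfamiliar with the jargon: "5-shot" just means the prompt shows five already-answered example questions before the one being graded, and the reported number is plain accuracy over the multiple-choice questions, averaged across MMLU's 57 subjects. A minimal sketch of that loop, where format_example, five_shot_prompt, and score_fn are hypothetical names (this is not the hendrycks/test harness) and score_fn stands in for a model's likelihood of each answer letter:

```python
# Minimal sketch of 5-shot MMLU-style evaluation. NOT the official
# hendrycks/test harness; all names here are illustrative only.

from typing import Callable, List, Tuple

CHOICES = ["A", "B", "C", "D"]

# (question, [four option texts], gold answer letter)
Example = Tuple[str, List[str], str]


def format_example(question: str, options: List[str], answer: str = "") -> str:
    """Render one multiple-choice question in the usual MMLU prompt style."""
    lines = [question]
    lines += [f"{letter}. {text}" for letter, text in zip(CHOICES, options)]
    lines.append(f"Answer: {answer}".rstrip())
    return "\n".join(lines)


def five_shot_prompt(dev_set: List[Example], question: str, options: List[str]) -> str:
    """Prepend five solved dev examples (the '5-shot' part), then the test question."""
    shots = [format_example(q, opts, ans) for q, opts, ans in dev_set[:5]]
    return "\n\n".join(shots + [format_example(question, options)])


def mmlu_accuracy(score_fn: Callable[[str, str], float],
                  dev_set: List[Example], test_set: List[Example]) -> float:
    """score_fn(prompt, letter) stands in for the model's log-likelihood of
    continuing the prompt with that answer letter; MMLU reports accuracy."""
    correct = 0
    for question, options, gold in test_set:
        prompt = five_shot_prompt(dev_set, question, options)
        prediction = max(CHOICES, key=lambda letter: score_fn(prompt, letter))
        correct += prediction == gold
    return correct / len(test_set)
```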
-
DeepMind's newest language model, Chinchilla (70B parameters), significantly outperforms Gopher (280B) and GPT-3 (175B) on a large range of downstream evaluation tasks
The benchmark result is 67.6%, a 7.6 percentage point improvement over Gopher (which scored 60.0%). MMLU is multiple-choice Q&A over various subjects. The questions are linked in this GitHub repo (see data).
What are some alternatives?
node-test-runner - Runs elm-test suites from Node.js. Get it with npm install -g elm-test
mmfewshot - OpenMMLab FewShot Learning Toolbox and Benchmark
vite-elm-template - A default template for building Elm applications using Vite.
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
elm-review - Analyzes Elm projects, to help find mistakes before your users find them.
RAD - RAD Expansion Unit for C64/C128
vite-plugin-elm - A plugin for Vite that enables you to compile an Elm application/document/element
ut - C++20 μ(micro)/Unit Testing Framework
editor-plugins - List of editor plugins for Elm.
egghead - Discord bot for AI stuff
test - A library for writing unit tests in Dart.
llama-int8 - Quantized inference code for LLaMA models