ut
test
| Metric | ut | test |
|---|---|---|
| Mentions | 10 | 9 |
| Stars | 1,197 | 933 |
| Growth | 1.8% | - |
| Activity | 7.0 | 2.5 |
| Latest commit | about 1 month ago | 11 months ago |
| Language | C++ | Python |
| License | Boost Software License 1.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ut
-
[C++20][safety] static_assert is all you need (no leaks, no UB)
I don't think stepping through static_assert is a thing? Curious if it is, though. Since constexpr code runs at either compile time or run time and static_assert is not steppable, a poor man's debugging facility could be to compile with -Dstatic_assert(...)=assert(__VA_ARGS__) and gdb the code. Alternatively, a more refined solution would be to use a UT framework (for example https://github.com/boost-ext/ut) which helps with that. IMHO, TDD can also limit the need to step into the code, and with guarantees that the code is memory safe and UB free there is less need for sanitizers, valgrind, etc., depending on the coverage.
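A minimal sketch of that macro trick, assuming a GCC/Clang toolchain; the square function is a made-up example, and note that redefining the static_assert keyword through a macro is formally unsanctioned by the standard (both compilers accept it, but it can break if included headers use static_assert at namespace scope):

```cpp
// Normal build:    g++ -std=c++20 demo.cpp
// Debugging build: g++ -std=c++20 "-Dstatic_assert(...)=assert(__VA_ARGS__)" demo.cpp
#include <cassert>

constexpr int square(int x) { return x * x; }

int main() {
    // Checked at compile time by default; with the -D override it becomes
    // a runtime assert() that gdb can break on and step through.
    static_assert(square(4) == 16);
    return 0;
}
```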
-
snatch -- A lightweight C++20 testing framework
It was not easy; I had to modify Boost UT to get it to run my tests. It doesn't support type-parameterized tests when the type parameter is non-copyable, which was the case for me. This is a symptom of a larger issue: it relies on std::apply and std::tuple to generate the type-parameterized tests, which in turn requires instantiating the tuple and the contained objects (even though these instances aren't actually used). That's a no-go for me, since I need to carefully monitor when instances are created, and this was throwing off my test code. I had to effectively disable these checks to get it to run without failures. Then there was a similar issue with expect(), which doesn't work if part of the expression is non-copyable. I reported these issues to them.
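For reference, a sketch of the tuple-driven typed-test style being described, modeled on Boost.UT's documented `| std::tuple<...>{}` syntax; the tuple's element values are materialized and handed to each instantiation, which is why non-copyable types don't fit:

```cpp
#include <boost/ut.hpp>
#include <tuple>
#include <type_traits>
#include <typeinfo>

int main() {
    using namespace boost::ut;

    // Each type in the tuple produces one test instantiation, and the
    // framework constructs a tuple *value* to drive them, so every listed
    // type must be constructible and copyable -- the limitation described
    // above. Putting a non-copyable type here fails to compile.
    "typed"_test = []<class TArg>(TArg arg) {
        expect(std::is_fundamental_v<TArg> and typeid(TArg) == typeid(arg));
    } | std::tuple<bool, int>{};
}
```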
-
[C++20] New way of meta-programming?
https://github.com/boost-ext/ut (for better user interface when defining tests without macros)
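For a taste of that macro-free interface, a minimal sketch using Boost.UT's documented literal-based API:

```cpp
#include <boost/ut.hpp>

int main() {
    using namespace boost::ut;

    // The test name is a plain string literal; the _test user-defined
    // suffix registers the lambda -- no preprocessor macros involved.
    "addition"_test = [] {
        expect(1 + 2 == 3_i);  // _i wraps the constant so failures print both sides
    };
}
```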
-
Getting started with Boost in 2022
https://github.com/boost-ext/ut from Kris Jusiak is worth checking
- How to unit test
-
Calculate Your Code Performance
C++: C++ has quite a number of benchmarking libraries, some of the more recent ones leveraging C++20's flexibility; the most notable are Google Benchmark and UT. C does not have many dedicated benchmarking libraries, but you can easily integrate C code with C++ benchmarking libraries to test the performance of your C code, as in the sketch below.
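A minimal sketch of that C-from-C++ setup using Google Benchmark (https://github.com/google/benchmark); the c_strlen wrapper is a hypothetical stand-in for real C code:

```cpp
// Build (assuming Google Benchmark is installed):
//   g++ -std=c++20 -O2 bench.cpp -lbenchmark -lpthread
#include <benchmark/benchmark.h>
#include <cstring>

// In a real project this would be declared in a C header and compiled
// with a C compiler; extern "C" keeps the linkage compatible.
extern "C" std::size_t c_strlen(const char* s) { return std::strlen(s); }

static void BM_CStrlen(benchmark::State& state) {
    const char* text = "hello, benchmark";
    for (auto _ : state) {
        // DoNotOptimize prevents the compiler from eliding the call.
        benchmark::DoNotOptimize(c_strlen(text));
    }
}
BENCHMARK(BM_CStrlen);
BENCHMARK_MAIN();
```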
-
Benchmarking Code
UT
-
Another C++ unit testing framework without macros
In Boost.UT there are a number of different styles for adding a parameterized test case. All of them are pretty cryptic due to heavy usage of overloaded operators on custom "non-public" classes. Except for the for-loop method, in all the other styles the list of parameter values goes after the test procedure definition (both are sketched below). I find this inconvenient, as I want to see the list of parameter values next to the test name. This is what I was used to from the times when I wrote a lot of unit tests in C#.
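To illustrate the contrast, a sketch of two of Boost.UT's documented parameterized styles: the for-loop form (values up front, next to the name) and the pipe form (values trailing the body):

```cpp
#include <boost/ut.hpp>
#include <string>
#include <vector>

int main() {
    using namespace boost::ut;

    // For-loop style: the value list is visible before the body, and each
    // value produces a distinctly named test.
    for (auto i : std::vector{1, 2, 3}) {
        test("positive " + std::to_string(i)) = [i] {
            expect(i > 0_i);
        };
    }

    // Pipe style: the same cases, but the values come after the test body.
    "positive"_test = [](auto i) {
        expect(i > 0_i);
    } | std::vector{1, 2, 3};
}
```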
test
- Measuring Multitask Language Understanding
-
Mixtral 7B MoE beats LLaMA2 70B in MMLU
Sources:
[1] MMLU Benchmark (Multi-task Language Understanding) | Papers With Code: https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
[2] MMLU Dataset | Papers With Code: https://paperswithcode.com/dataset/mmlu
[3] hendrycks/test: Measuring Massive Multitask Language Understanding | ICLR 2021 - GitHub: https://github.com/hendrycks/test
[4] lukaemon/mmlu · Datasets at Hugging Face: https://huggingface.co/datasets/lukaemon/mmlu
[5] [2009.03300] Measuring Massive Multitask Language Understanding - arXiv: https://arxiv.org/abs/2009.03300
-
BREAKING: Google just released its ChatGPT Killer
With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.
-
[Colab Notebook] Launch quantized MPT-30B-Chat on Vast.ai using text-generation-inference, integrated with ConversationChain
One method for comparison is the MMLU benchmark: https://arxiv.org/abs/2009.03300.
- Partial Solution To AI Hallucinations
- Announcing GPT-4.
-
Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
Because there are many benchmarks that measure different things.
You need to look at the benchmark that reflects your specific interest.
So in this case ("I wasn't impressed that 30B didn't seem to know who Captain Picard was") the closest relevant benchmark they performed is MMLU (Massive Multitask Language Understanding) [1].
In the LLaMA paper they publish a figure of 63.4% for the 5-shot average setting without fine-tuning on the 65B model, and 68.9% after fine-tuning. This is significantly better than the original GPT-3 (43.9% under the same conditions), but as they note:
> "[it is] still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022))"
InstructGPT [2] (which OpenAI points to as the most relevant ChatGPT publication) doesn't report MMLU performance.
[1] https://github.com/hendrycks/test
[2] https://arxiv.org/abs/2203.02155
-
DeepMind's newest language model, Chinchilla (70B parameters), significantly outperforms Gopher (280B) and GPT-3 (175B) on a large range of downstream evaluation tasks
The benchmark result is 67.6%, a 7.6-point improvement over Gopher. MMLU is multiple-choice Q&A over various subjects. The questions are linked in this GitHub repo (see data).
What are some alternatives?
Boost.Test - The reference C++ unit testing framework (TDD, xUnit, C++03/11/14/17)
mmfewshot - OpenMMLab FewShot Learning Toolbox and Benchmark
Catch - A modern, C++-native, test framework for unit-tests, TDD and BDD - using C++14, C++17 and later (C++11 support is in v2.x branch, and C++03 on the Catch1.x branch)
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
FakeIt - C++ mocking made easy. A simple yet very expressive, header-only library for C++ mocking.
RAD - RAD Expansion Unit for C64/C128
doctest - The fastest feature-rich C++11/14/17/20/23 single-header testing framework
elm-test-rs - Fast and portable executable to run your Elm tests
test - A library for writing unit tests in Dart.
egghead - discord bot for ai stuff
KmTest - Kernel-mode C++ unit testing framework in BDD-style
llama-int8 - Quantized inference code for LLaMA models