It depends on the eval, but I think it's fair to say that it's close. Here are the AGIEval results organized into a table with averages (I also put in the new Hermes Llama2 13B model): https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W_jMwYp...
It beats out ChatGPT in every category except SAT-Math. We definitely need harder benchmarks.
So far, there's BIG-Bench Hard https://github.com/suzgunmirac/BIG-Bench-Hard and the just-published Advanced Reasoning Benchmark https://arb.duckai.org/
You will want to look at HumanEval (https://github.com/abacaj/code-eval) and Eval+ (https://github.com/my-other-github-account/llm-humaneval-ben...) results for coding.
While Llama2 is an improvement over LLaMA v1, it's still nowhere near even the best open models (currently, barring test contamination, WizardCoder-15B, a StarCoder fine-tune, sits at the top). It's really not a competition atm though; GPT-4 wipes the floor with everything else for coding.
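For reference, the HumanEval harness itself is simple to wire up. Here's a rough sketch using OpenAI's human-eval package; the generate() stub is a placeholder for whatever model you're testing, not part of the harness:

    from human_eval.data import read_problems, write_jsonl

    def generate(prompt: str) -> str:
        # Placeholder: call the model under test here and return just the
        # code completion for the given function prompt.
        raise NotImplementedError

    problems = read_problems()  # the 164 hand-written Python problems

    samples = [
        dict(task_id=task_id, completion=generate(problems[task_id]["prompt"]))
        for task_id in problems
    ]
    write_jsonl("samples.jsonl", samples)

    # Scoring executes the generated code against the unit tests, so run
    # it sandboxed:
    #   evaluate_functional_correctness samples.jsonl
    # With one sample per task, this reports pass@1.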
That's helpful!
I've done a lot of work in audio synthesis, which is notoriously difficult to measure. The gold standard is human ratings of audio quality, but it's tough to design good tests (raters fatigue easily) and the iteration time waiting for results is quite long.
Instead, there are now some projects that use neural networks trained on human ratings to predict audio quality, such as ViSQoL: https://github.com/google/visqol
This opens up fast iteration - scores going up generally correspond to higher quality - followed by human testing at major milestones (e.g., releasing a paper/model). ViSQoL has a harder time comparing 'unrelated' models, IMO - it ends up being not so great for comparing different techniques, but excellent for measuring incremental improvement or catching regressions.
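As an example of the regression-catching use, here's a rough sketch of a gate built around the ViSQoL command-line tool. The binary path, the output parsing, and the 0.1 MOS threshold are all illustrative assumptions, not anything from the project's docs:

    import re
    import subprocess

    def visqol_moslqo(reference: str, degraded: str) -> float:
        """Run the ViSQoL binary and parse the MOS-LQO score it prints."""
        out = subprocess.run(
            ["./visqol", "--reference_file", reference, "--degraded_file", degraded],
            capture_output=True, text=True, check=True,
        ).stdout
        # ViSQoL prints a line like "MOS-LQO: 4.123"; the exact format may
        # vary by version, so treat this parse as an assumption.
        match = re.search(r"MOS-LQO:\s*([0-9.]+)", out)
        if match is None:
            raise RuntimeError(f"no MOS-LQO in ViSQoL output:\n{out}")
        return float(match.group(1))

    baseline = visqol_moslqo("ref.wav", "model_v1.wav")
    candidate = visqol_moslqo("ref.wav", "model_v2.wav")

    # Flag a regression if the new model drops noticeably; 0.1 MOS-LQO is
    # an arbitrary example threshold, not a recommendation.
    if candidate < baseline - 0.1:
        raise SystemExit(f"regression: {candidate:.3f} vs {baseline:.3f}")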
But, in the end, yes - you can use NNs to measure the quality of other NNs, so long as you're careful about it and bring human raters back in from time to time as well.
The problem of test data leaking into the training data seems to be an especially pernicious issue with LLMs, and it isn't really arising in the audio synthesis space.
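For what it's worth, the usual first-pass contamination check is n-gram overlap between the eval set and the training corpus. A toy sketch - the 13-gram window follows what the GPT-3 paper used for its dedup, and real pipelines run this over tokenized shards at scale rather than raw strings:

    def ngrams(text: str, n: int = 13):
        """All word-level n-grams in a text, as a set of tuples."""
        words = text.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def contaminated(eval_examples, train_texts, n: int = 13):
        """Return the eval examples sharing any n-gram with training data."""
        train_grams = set()
        for doc in train_texts:
            train_grams |= ngrams(doc, n)
        return [ex for ex in eval_examples if ngrams(ex, n) & train_grams]

An overlap hit is strong evidence that the benchmark item leaked into the training set and the score for it should be discounted.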