TruthfulQA vs safari

| | TruthfulQA | safari |
|---|---|---|
| Mentions | 4 | 5 |
| Stars | 508 | 843 |
| Growth | - | 1.4% |
| Activity | 2.8 | 3.5 |
| Latest commit | 6 months ago | about 1 month ago |
| Language | Jupyter Notebook | Assembly |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
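The exact activity formula isn't given here, but a rough sketch of one plausible recency-weighted scheme (the exponential decay constant and the percentile mapping below are my assumptions, not the site's actual method) could look like:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=90.0):
    """Recency-weighted commit count: recent commits count more.

    half_life_days is an assumed decay constant, not the site's actual one.
    """
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)  # weight halves every half-life
    return score

def relative_activity(all_scores, target_score):
    """Map a raw score to a 0-10 scale by percentile rank across tracked projects."""
    rank = sum(s <= target_score for s in all_scores) / len(all_scores)
    return round(10.0 * rank, 1)  # e.g. 9.0 => above 90% of tracked projects
```

Under a scheme like this, an activity of 9.0 just means the project's weighted commit count sits above 90% of the projects being tracked.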
TruthfulQA
-
airoboros gpt-4 instructed + context-obedient question answering
Dataset: https://github.com/sylinrl/TruthfulQA
-
Scaling Transformer to 1M tokens and beyond with RMT
This is a great point.
Do you know of any benchmarks doing this today?
Given the acute need to evaluate models on contextual factuality, we're exploring how to create a benchmark for this purpose, but we'd prefer an existing benchmark if one exists.
OpenAI's TruthfulQA[0] is close, but it doesn't focus on contextual factuality and targets the much harder problem of absolute truth.
If none exist and people are interested in contributing, please reach out.
[0] https://github.com/sylinrl/TruthfulQA
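For what it's worth, a minimal sketch of what a contextual-factuality harness could look like (every name here, `ContextQA` and `model_fn` included, is hypothetical; the key difference from TruthfulQA is that the gold answer must follow the supplied context, even when the context contradicts world knowledge):

```python
from dataclasses import dataclass

@dataclass
class ContextQA:
    context: str      # passage the model must stay faithful to
    question: str
    gold_answer: str  # answer supported by the context, even if "false" in the world

def evaluate(model_fn, items):
    """model_fn(prompt) -> str. Scores exact-match faithfulness to the context."""
    correct = 0
    for item in items:
        prompt = (
            f"Answer using ONLY this context:\n{item.context}\n\n"
            f"Q: {item.question}\nA:"
        )
        pred = model_fn(prompt).strip().lower()
        correct += pred == item.gold_answer.strip().lower()
    return correct / len(items)
```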
-
[D] Is all the talk about what GPT can do on Twitter and Reddit exaggerated or fairly accurate?
I agree they show that you can brute-force mimic uncertainty estimates to some degree, and that the model is generally well calibrated (though on what is basically a set of trivia questions, so YMMV)... yet:
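For reference, "well calibrated" is usually quantified with expected calibration error (ECE): the gap between the confidence a model states and the accuracy it actually achieves. A minimal sketch, assuming you can extract a per-answer confidence from the model:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: confidence-vs-accuracy gap, averaged over confidence bins.

    confidences: model's probability for its chosen answer, in [0, 1]
    correct: 1 if the answer was right, else 0
    """
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    ece = 0.0
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of samples
    return ece
```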
-
[R] TruthfulQA: Measuring How Models Mimic Human Falsehoods
Code for https://arxiv.org/abs/2109.07958 found: https://github.com/sylinrl/TruthfulQA
safari
-
MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers
> Also, we know that transformers can scale
Do we have strong evidence that other models don't scale, or have we just put more time into transformers?
Convolutional ResNets look to scale on both vision and language: (cv) https://arxiv.org/abs/2301.00808, (cv) https://arxiv.org/abs/2110.00476, (nlp) https://github.com/HazyResearch/safari
MLPs also seem to scale: (cv) https://arxiv.org/abs/2105.01601, (cv) https://arxiv.org/abs/2105.03404
I don't see a strong reason to turn away from attention either, but I also don't think anyone has thrown a billion-parameter MLP or conv model at a problem. We've put a lot of work into attention, transformers, and scaling them; thousands of papers each year! You definitely don't see that for other architectures. The "ResNet Strikes Back" paper is great partly because it reminds us not to get lost in the hype: our advances are coupled, and applying the training techniques we've learned since the original ResNet days makes ResNets much better too, largely closing the gap, at least in vision (where I do research). It's easy to get railroaded in research given publish-or-perish incentives and hype-driven reviewing.
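For concreteness, here is a minimal sketch of the Mixer-style block from the first MLP paper linked above (arXiv:2105.01601), paraphrased from the paper rather than taken from its code: attention-free, just alternating MLPs over the token and channel dimensions.

```python
import torch.nn as nn

class MixerBlock(nn.Module):
    """Minimal MLP-Mixer-style block (after arXiv:2105.01601).

    Hidden sizes here are arbitrary placeholder choices, not the paper's.
    """
    def __init__(self, n_tokens, dim, token_hidden=256, channel_hidden=1024):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(  # mixes information across tokens
            nn.Linear(n_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, n_tokens),
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(  # mixes information across channels
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim),
        )

    def forward(self, x):                         # x: (batch, n_tokens, dim)
        y = self.norm1(x).transpose(1, 2)         # (batch, dim, n_tokens)
        x = x + self.token_mlp(y).transpose(1, 2) # token mixing + residual
        return x + self.channel_mlp(self.norm2(x))  # channel mixing + residual
```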
-
Unlimiformer: Long-Range Transformers with Unlimited Length Input
After a very quick read, that's my understanding too: It's just KNN search. So I agree on points 1-3. When something works well, I don't care much about point 4.
I've had only mixed success with KNN search. Maybe I haven't done it right? Nothing seems to work quite as well for me as explicit token-token interactions via some form of attention, which, as we all know, is too costly for long sequences (O(n²)). Lately I've been playing with https://github.com/hazyresearch/safari , which uses a lot less compute and seems promising. Otherwise, for long sequences I've yet to find anything better than https://github.com/HazyResearch/flash-attention for n×n interactions and https://github.com/glassroom/heinsen_routing for n×m interactions. If anyone here has other suggestions, I'd love to hear them.
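For readers who haven't seen it, the KNN idea being debated reduces to this: each query attends only to its top-k retrieved keys rather than to all n positions. A minimal sketch (brute-force top-k here; Unlimiformer itself retrieves from an index over encoder states, which this glosses over):

```python
import torch
import torch.nn.functional as F

def knn_attention(q, k, v, top_k=32):
    """Approximate attention: each query attends only to its top_k nearest keys
    (exact brute-force scores here; a real system would use an ANN index).

    q: (n_q, d), k: (n_kv, d), v: (n_kv, d)
    """
    scores = q @ k.T / (q.shape[-1] ** 0.5)           # (n_q, n_kv)
    top_scores, top_idx = scores.topk(top_k, dim=-1)  # keep k best keys per query
    weights = F.softmax(top_scores, dim=-1)           # softmax over retrieved set only
    return torch.einsum('qk,qkd->qd', weights, v[top_idx])
```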
-
How big a breakthrough is this "Hyena" architecture?
-
Hyena: This new technology could blow away GPT-4 and everything like it
Code: https://github.com/HazyResearch/safari
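The core subquadratic trick in Hyena is replacing attention with long convolutions computed via FFT; the filters are generated implicitly by a small network, which this sketch omits and treats as a given tensor:

```python
import torch

def causal_long_conv(u, k):
    """Causal 1-D convolution over the full sequence via FFT: O(L log L)
    rather than attention's O(L^2).

    u: (batch, L, dim) input, k: (L, dim) per-channel filter (in Hyena the
    filter is produced implicitly by a small network; here it is just given).
    """
    L = u.shape[1]
    u_f = torch.fft.rfft(u, n=2 * L, dim=1)  # zero-pad to avoid circular wrap-around
    k_f = torch.fft.rfft(k, n=2 * L, dim=0)
    y = torch.fft.irfft(u_f * k_f.unsqueeze(0), n=2 * L, dim=1)
    return y[:, :L]                          # first L outputs = causal part
```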
-
Scaling Transformer to 1M tokens and beyond with RMT
The code is here: https://github.com/hazyresearch/safari. You should try it and let us know your verdict.
What are some alternatives?
recurrent-memory-transformer - [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture.
heinsen_routing - Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
auto-evaluator
flash-attention - Fast and memory-efficient exact attention
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
JARVIS - JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf