-
Interesting to see the use of ruff and black in the same project. https://github.com/openai/transformer-debugger/blob/main/.pr...
-
We may well look back in future years and view the underlying approach introduced in Reexpress as among the more significant results of the first quarter of the 21st century. With Reexpress, we can generate reliable probability estimates over high-dimensional objects (e.g., LLMs), including in the presence of a non-trivial subset of the distribution shifts seen in practice. A non-vacuous argument can be made that this solves the alignment/super-alignment problem (the ultimate goal of the line of work in the post above, and why I mention it here), because this behavior can be achieved via composition with networks of arbitrary size.
Because the parameters of large neural networks are non-identifiable (in the statistical sense), we instead take labeled examples/exemplars (i.e., the observable data) as the unit of analysis, with a direct connection between the Training set and the Calibration set.
This has important practical implications: it works with essentially any generative AI model. For example, we can build an 'uncertainty-aware GPT-4' for use in enterprise and professional settings, such as law: https://github.com/ReexpressAI/Example_Data/blob/main/tutori...
(The need for reliable, controllable estimates is critical regardless of any notion of AGI, since the existing LLMs are already getting baked into higher-risk settings, such as medicine, finance, and law.)
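To make the calibration-set idea concrete, here is a minimal split-conformal sketch in Python. To be clear, this is not the Reexpress estimator; the function names and the toy Dirichlet data are illustrative assumptions. It only shows the general pattern of composing an arbitrary model's raw scores with a held-out Calibration set to obtain distribution-free uncertainty estimates at the level of labeled examples.

    # A generic split-conformal sketch (NOT the Reexpress algorithm): a held-out
    # Calibration set turns any model's raw scores into prediction sets with a
    # finite-sample coverage guarantee, independent of the model's parameters.
    import numpy as np

    def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
        # Nonconformity score: 1 - probability assigned to the true class.
        n = len(cal_labels)
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]
        # Finite-sample-corrected quantile for target miscoverage rate alpha.
        q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        return float(np.quantile(scores, q_level, method="higher"))

    def prediction_set(test_probs, threshold):
        # For each test example, keep every class whose score clears the threshold.
        return [np.where(1.0 - p <= threshold)[0] for p in test_probs]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-in for an arbitrary model's softmax outputs on a 3-class task.
        cal_probs = rng.dirichlet(alpha=[4.0, 1.0, 1.0], size=500)
        cal_labels = np.array([rng.choice(3, p=p) for p in cal_probs])
        test_probs = rng.dirichlet(alpha=[4.0, 1.0, 1.0], size=5)

        tau = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
        for probs, s in zip(test_probs, prediction_set(test_probs, tau)):
            print(np.round(probs, 2), "->", s)

The point of the split is that the coverage guarantee comes from the exchangeability of the calibration examples, not from anything about the model's parameters, which is why the same wrapper composes with networks of arbitrary size.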