Train CIFAR-10 in <7 seconds on an A100, the current world record.
I recently released a beta codebase that modernizes a tiny model achieving really good CIFAR-10 performance in roughly 18.1 seconds on the right single GPU. For perspective: a few years ago the world record was about 10 minutes, itself down from several days a few years before that.
While most of my work was porting and cleaning up parts of the code for a different purpose (a just-clone-and-hack experimentation workbench), I've spent years optimizing neural networks at a very fine-grained level, and many of the lessons learned while debugging this code reflected that.
There are, unfortunately, a few fundamentally NP-hard layers to this problem, but they are not hard blockers to progress. The model I mentioned above is extremely simple and carries little "extra fat" where it isn't needed. Importantly, it also seems to have good gradient flow throughout, which matters for a model that needs to learn quickly. There are a few reasonable priors baked in, like initializing the first convolution to whiten the inputs based on statistics from the training data, then freezing it. That alone does a shocking amount of work in stabilizing and speeding up training.
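The repo's actual code isn't reproduced here, but the whitening-initialization trick is essentially standard ZCA whitening and can be sketched in a few lines of numpy (the patch size, `eps` value, and synthetic data below are illustrative; in the real model the resulting rows would be reshaped into frozen first-layer conv filters):

```python
import numpy as np

def whitening_filters(patches, eps=1e-2):
    """Compute ZCA whitening filters from flattened training patches.

    patches: (N, D) array. Returns a (D, D) filter bank whose rows,
    applied to centered patches, decorrelate the input channels.
    """
    patches = patches - patches.mean(axis=0)      # center
    cov = patches.T @ patches / len(patches)      # (D, D) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    # ZCA: rotate into the eigenbasis, rescale, rotate back
    return eigvecs @ np.diag((eigvals + eps) ** -0.5) @ eigvecs.T

rng = np.random.default_rng(0)
# Stand-in "training data": strongly correlated 2x2 patches, flattened to D=4
base = rng.normal(size=(10000, 1))
patches = base @ np.ones((1, 4)) + 0.3 * rng.normal(size=(10000, 4))

W = whitening_filters(patches)
whitened = (patches - patches.mean(axis=0)) @ W.T
cov_w = whitened.T @ whitened / len(whitened)
# After whitening, cov_w is approximately the identity: decorrelated
# inputs with ~unit variance, which is what stabilizes early training.
```

The `eps` term keeps near-zero eigenvalue directions (mostly noise) from being amplified to unit variance.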
Ultimately the network is simple, and the handful of other methods that help it reach near-SOTA are as simple as they can be. As this project evolves and we get nearer to the goal (<2 seconds in a year or two), I think we'll keep uncovering puzzle pieces showing exactly what lets such a tiny network perform so well. There's a kind of exponential value to ultra-short training times: you can barrage-test your algorithm almost open-endedly, which has already led to a few interesting discoveries that I'd like to refine before publishing to the repo.
The code is here; the runnable part is a single .py file, with the upsides and downsides that come with that. If you're interested or have any questions, let me know! :D
A Symbolic Regression engine
If interpretability is sufficiently important, you could straight-up search for mathematical formulae.
My SymReg library pops to mind. I'm thinking of rewriting it in multithreaded Julia this holiday season.
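This isn't SymReg's actual machinery, but the brute-force version of the idea fits in a few lines: enumerate small expression trees and keep whichever one best fits the data (the operator set and formula shape below are deliberately tiny and illustrative):

```python
import itertools
import math

# Tiny building blocks for one-variable formulae; a real engine
# would use a far larger operator set and deeper trees.
UNARY = {"sin": math.sin, "sq": lambda v: v * v, "id": lambda v: v}
BINARY = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

xs = [i / 10 for i in range(-20, 21)]
target = [x * x + x for x in xs]  # hidden ground truth: x^2 + x

def mse(f):
    return sum((f(x) - t) ** 2 for x, t in zip(xs, target)) / len(xs)

best = None
# Exhaustively try every formula of the shape  op(u1(x), u2(x))
for (n1, u1), (op_name, op), (n2, u2) in itertools.product(
        UNARY.items(), BINARY.items(), UNARY.items()):
    f = lambda x, u1=u1, op=op, u2=u2: op(u1(x), u2(x))
    err = mse(f)
    if best is None or err < best[0]:
        best = (err, f"{op_name}({n1}(x), {n2}(x))")

print(best)  # recovers sq(x) + id(x), i.e. x^2 + x, with zero error
```

The catch, of course, is that the search space explodes combinatorially with formula size, which is why practical engines use evolutionary or other guided search instead of enumeration.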
Distributed High-Performance Symbolic Regression in Julia
Artificial Intelligence Projects (by RowColz)
> similar to genetic programming
A genetic algorithm was also what I was thinking of: come up with some symbolic (textual) way to represent a wiring/circuit diagram (a graph), then evolve the most efficient "learner" using mutation and cross-breeding (e-sex). The earliest GA work I read about did this by dicing and recombining Lisp expressions.
As far as the "easiest" AI for humans to work with goes, "factor tables" may be a way:
AI tuning then becomes more like accounting than a lab with Doc Brown.
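For a concrete sense of what evolving symbolic expressions looks like, here is a minimal genetic-programming sketch: formulae are nested lists, mutation swaps in fresh subtrees, and cross-breeding grafts a subtree from one parent into the other. Everything here (the operator set, tree encoding, target function, and hyperparameters) is illustrative, not any particular library's implementation:

```python
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_tree(depth=3):
    """A formula is either 'x', a constant, or [op, left, right]."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", float(random.randint(-2, 2))])
    return [random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, list):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree):
    """Replace a random subtree with a freshly grown one."""
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(2)
    new = list(tree)
    i = random.choice([1, 2])
    new[i] = mutate(new[i])
    return new

def random_subtree(tree):
    while isinstance(tree, list) and random.random() < 0.6:
        tree = random.choice(tree[1:])
    return tree

def crossover(a, b):
    """Cross-breeding: graft a random subtree of b somewhere into a."""
    if not isinstance(a, list) or random.random() < 0.3:
        return random_subtree(b)
    new = list(a)
    i = random.choice([1, 2])
    new[i] = crossover(new[i], b)
    return new

xs = [i * 0.5 for i in range(-6, 7)]

def fitness(tree):  # squared error against the target x^2 + 2x; lower is better
    try:
        return sum((evaluate(tree, x) - (x * x + 2 * x)) ** 2 for x in xs)
    except OverflowError:
        return float("inf")

random.seed(0)
pop = [random_tree() for _ in range(200)]
init_best = min(fitness(t) for t in pop)
for generation in range(60):
    pop.sort(key=fitness)
    survivors = pop[:50]  # elitism: the best trees always carry over
    pop = survivors + [crossover(mutate(random.choice(survivors)),
                                 random.choice(survivors))
                       for _ in range(150)]
final_best = min(fitness(t) for t in pop)
```

Because of elitism the best fitness never gets worse across generations; the hard part in practice is controlling tree bloat and choosing operators so the search space stays tractable.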
SymbolicRegression.jl – High-Performance Symbolic Regression in Julia and Python
2 projects | news.ycombinator.com | 15 Jul 2023
[D] Is there any research into using neural networks to discover classical algorithms?
2 projects | /r/MachineLearning | 1 Jan 2023
Symbolic Regression is NP-hard
1 project | news.ycombinator.com | 13 Nov 2022
[D] Inferring general physical laws from observations in 300 lines of code
1 project | /r/MachineLearning | 2 Aug 2021
Fine-tuning a Mistral Language Model with Anyscale
2 projects | dev.to | 1 Feb 2024