| | nano-neuron | ARC |
|---|---|---|
| Mentions | 2 | 18 |
| Stars | 2,244 | 3,169 |
| Growth | 0.3% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 2 years ago | 6 months ago |
| Language | JavaScript | JavaScript |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nano-neuron
-
Self-Parking Car in <500 Lines of Code
I've covered Multilayer Perceptrons in a bit more detail in my homemade-machine-learning, machine-learning-experiments, and nano-neuron projects. You may even challenge that simple network to recognize your handwritten digits.
-
JavaScript Algorithms and Data Structures
NanoNeuron - 7 simple JS functions that illustrate how machines can actually learn (forward/backward propagation)
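The forward/backward-propagation idea behind NanoNeuron can be sketched in a few lines. This is a minimal illustration, not the project's actual code: one neuron learns `w` and `b` in `y = w * x + b` so that it converts Celsius to Fahrenheit (the true values are `w = 1.8`, `b = 32`), using an invented training set and gradient descent.

```javascript
// Forward propagation: the neuron's prediction for input x.
function forward(w, b, x) {
  return w * x + b;
}

// Hypothetical training loop (not from the NanoNeuron repo):
// mean-squared-error cost, with gradients derived by hand.
function train(epochs, alpha) {
  const xs = [...Array(10).keys()];        // 0..9 °C
  const ys = xs.map((c) => 1.8 * c + 32);  // exact °F labels
  let w = 0;
  let b = 0;
  for (let e = 0; e < epochs; e += 1) {
    let dW = 0;
    let dB = 0;
    for (let i = 0; i < xs.length; i += 1) {
      // Backward propagation: accumulate d(cost)/dw and d(cost)/db.
      const err = forward(w, b, xs[i]) - ys[i];
      dW += err * xs[i];
      dB += err;
    }
    w -= (alpha / xs.length) * dW; // gradient-descent step
    b -= (alpha / xs.length) * dB;
  }
  return { w, b };
}

// train(5000, 0.05) converges to roughly { w: 1.8, b: 32 }
```

The learning rate and epoch count are tuned for this tiny input range; larger inputs would need a smaller step.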
ARC
-
AMA: I'm Dave Greene, an Accidental Expert on Conway's Game of Life
It's great for generating synthetic data for training LLMs to solve the Abstraction & Reasoning Corpus (ARC) by François Chollet. The Game of Life helps the LLMs develop a 2D understanding of the world.
https://github.com/fchollet/ARC
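The synthetic-data idea above can be sketched with a single Game of Life step: each `(grid, step(grid))` pair is one 2D before/after training example. This is a toy illustration, assuming grids as arrays of 0/1 with cells beyond the edge treated as dead.

```javascript
// One generation of Conway's Game of Life on a finite grid.
function step(grid) {
  const h = grid.length;
  const w = grid[0].length;
  return grid.map((row, y) =>
    row.map((cell, x) => {
      // Count live neighbours, ignoring cells outside the grid.
      let n = 0;
      for (let dy = -1; dy <= 1; dy += 1) {
        for (let dx = -1; dx <= 1; dx += 1) {
          if (dy === 0 && dx === 0) continue;
          const ny = y + dy;
          const nx = x + dx;
          if (ny >= 0 && ny < h && nx >= 0 && nx < w) n += grid[ny][nx];
        }
      }
      // Conway's rules: a dead cell is born with exactly 3 neighbours,
      // a live cell survives with 2 or 3.
      return n === 3 || (cell === 1 && n === 2) ? 1 : 0;
    })
  );
}
```

Stepping the horizontal blinker `[[0,0,0],[1,1,1],[0,0,0]]` yields its vertical phase, so even this 3x3 oscillator already produces an endless stream of labelled grid pairs.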
-
Large Language Models As General Pattern Machines
It's quite hard. You can download the dataset here [1] and it comes with a little webpage so that you can try it yourself.
It's worth noting that you are allowed to make three guesses.
[1]: https://github.com/fchollet/ARC
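The dataset and scoring described above can be sketched as follows. The task below is an invented toy ("fill the grid with the one non-zero colour"), but the shape follows the JSON layout of the fchollet/ARC repository: `train` and `test` lists of input/output pairs, with grids as 2D arrays of colour codes 0-9. The `score` helper models the three-guess rule from the comment.

```javascript
// Hypothetical toy task in the ARC JSON layout (not a real ARC file).
const task = {
  train: [
    { input: [[1, 0], [0, 0]], output: [[1, 1], [1, 1]] },
    { input: [[0, 2], [0, 0]], output: [[2, 2], [2, 2]] },
  ],
  test: [{ input: [[0, 0], [3, 0]], output: [[3, 3], [3, 3]] }],
};

// Grids match only if every cell is identical.
const sameGrid = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// A test pair is solved if any of at most three guesses matches
// the hidden output exactly.
function score(pair, guesses) {
  return guesses.slice(0, 3).some((g) => sameGrid(g, pair.output));
}
```

So `score(task.test[0], [[[3, 3], [3, 3]]])` counts as a solve, while a fourth guess would be ignored.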
-
Last chance of contributing to the ARC 2 dataset, ends 30 June 2023
https://github.com/fchollet/ARC
The ARC 2 dataset is crowdsourced. If you can come up with a challenging task, then please contribute it.
-
How long would you bet AGI WON'T happen?
In that case, per your definition, there will always be edge cases. Take a look at ARC for an example of something that is easy for humans to do, but not yet doable by any AI at anywhere close to a human level. With respect to forecasting, I couldn't honestly say with any degree of confidence. Scaling larger transformer models may hit a roadblock, or it may work for everything, or there may be another development which changes the game. The soonest I think AGI that meets your definition will happen is 10 years from now, but I'm not confident in that prediction.
- “In 2033 it will seem utterly baffling how a bunch of tech folks lost their minds over text generators in 2023 -- like reading about Eliza or Minsky's 1970 quote about achieving human-level general intelligence by 1975” - François Chollet at Google
-
Eight Things to Know About Large Language Models [pdf]
Yes, François Chollet released the ARC (Abstraction and Reasoning Corpus) benchmark for this in 2019, and the benchmark can be scored automatically. Humans solve 100% of the tasks, while GPTs solve 0% and made exactly zero progress from 2019 to 2022.
https://twitter.com/fchollet/status/1631699463524986880
https://github.com/fchollet/ARC
-
AGI 2023/2024?
Secondly, there are a few benchmarks that might actually be a good way to gauge the intelligence of AI. The Abstraction and Reasoning Corpus attempts to measure actual intelligence. Turns out that LLMs have not actually improved their score on this test since 2019! Whether GPT-4 will be able to do a better job, especially when images can be used, remains to be seen. However, initial results are not very promising.
-
Reflections on the "ARC" challenge proposed by François Chollet
Source
-
[D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches?
Neuro-symbolic systems where the neural network is tasked to _invent the system_. Take, for instance, the ARC task ( https://github.com/fchollet/ARC ): when humans do these tasks, (it appears to be the case that) we first invent a set of symbolic rules appropriate for the task at hand, then apply these rules.
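The "invent the rules, then apply them" idea can be sketched as a search over a tiny hypothetical space of symbolic grid transformations: pick the rule that explains every training pair, then apply it to unseen input. The rule set here is invented for illustration; a real system would have the network propose rules rather than enumerate a fixed list.

```javascript
// A hypothetical, hand-picked space of symbolic grid rules.
const rules = {
  identity: (g) => g.map((r) => [...r]),
  flipH: (g) => g.map((r) => [...r].reverse()), // mirror left-right
  flipV: (g) => [...g].reverse().map((r) => [...r]), // mirror top-bottom
};

const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// "Invent" a rule: return the name of the first rule that is
// consistent with every training pair.
function inventRule(trainPairs) {
  return Object.keys(rules).find((name) =>
    trainPairs.every((p) => eq(rules[name](p.input), p.output))
  );
}
```

Given training pairs such as `{ input: [[1, 2]], output: [[2, 1]] }`, `inventRule` picks `flipH`, and `rules.flipH` can then be applied to the test grid.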
-
Does anyone else feel that AI Art will be a total game-changer in society?
Current models are bad at using abstraction and reasoning to address new problems. They require training for each task, and perform poorly outside of the tasks they were trained on. Researchers are working on this but it's a hard problem - possibly the core of what it means to be "intelligent".
What are some alternatives?
self-parking-car-evolution - 🧬 Training the car to do self-parking using a genetic algorithm
amazona - Build Ecommerce Like Amazon By MERN Stack
conference-deadlines - :alarm_clock: AI conference deadline countdowns + Calendar overview with deadlines and conference dates.
ARC-Game - The Abstraction and Reasoning Corpus made into a web game
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
to-view-list-mern - Keep track of online stuff, which you may want to view later. Made using MERN stack.
igel - a delightful machine learning tool that allows you to train, test, and use models without writing code
gun - An open source cybersecurity protocol for syncing decentralized graph data.
incogly - Incogly is a video conferencing app aimed to remove any implicit bias in an interview and easing the process of remote collaboration.
metagol - Metagol - an inductive logic programming system
javascript-algorithms - 📝 Algorithms and data structures implemented in JavaScript with explanations and links to further readings
AI-Expert-Roadmap - Roadmap to becoming an Artificial Intelligence Expert in 2022