amazona
ARC
| | amazona | ARC |
|---|---|---|
| Mentions | 1 | 17 |
| Stars | 582 | 2,091 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 1 year ago | almost 2 years ago |
| Language | JavaScript | JavaScript |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
amazona
-
Can someone please help with this error in a React app? "TypeError: Cannot read property 'prototype' of undefined"
To be totally honest I'm kinda new to this. It uses an Express server; I'm not sure whether the error is in the browser, though. Here's a link to the tutorial I used, for a better understanding: https://github.com/basir/amazona
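A common cause of this error (an assumption here, not something confirmed by the tutorial's code) is a named import that doesn't actually exist on the module, so the binding ends up `undefined`, and something later reads its `prototype`. A minimal sketch in plain Node, with a hypothetical `expressLike` object standing in for the mis-imported module:

```javascript
// Minimal reproduction of the error, assuming a missing named export.
const expressLike = {};               // stands in for a module whose export is missing
const Router = expressLike.Router;    // no such export -> Router is undefined

let caught;
try {
  // Reading 'prototype' off undefined throws the TypeError from the question.
  Object.create(Router.prototype);
} catch (err) {
  caught = err;
}
console.log(caught instanceof TypeError); // true
```

When this happens in practice, checking that the imported name matches what the module actually exports (default vs. named export) is usually the first thing to verify.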
ARC
-
Large Language Models As General Pattern Machines
It's quite hard. You can download the dataset here [1] and it comes with a little webpage so that you can try it yourself.
It's worth noting that you are allowed to make three guesses.
[1]: https://github.com/fchollet/ARC
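For context, each ARC task is a small JSON file with `train` and `test` arrays of input/output pairs, where every grid is a 2D array of integers 0-9 (colors), and a test grid counts as solved if any of up to three guesses matches the hidden output exactly. A minimal sketch with a tiny made-up task:

```javascript
// Sketch of the ARC task JSON layout and the three-guess scoring rule.
// This two-by-two task is invented for illustration; real tasks are larger.
const task = {
  train: [
    { input: [[0, 1], [1, 0]], output: [[1, 0], [0, 1]] }, // rule: swap 0 <-> 1
  ],
  test: [
    { input: [[1, 1], [0, 0]], output: [[0, 0], [1, 1]] },
  ],
};

// Exact cell-by-cell grid equality.
const sameGrid = (a, b) =>
  a.length === b.length && a.every((row, i) => row.join() === b[i].join());

// A solver is allowed up to three guesses per test grid.
function score(testPair, guesses) {
  return guesses.slice(0, 3).some((g) => sameGrid(g, testPair.output));
}
```

The `score` helper here is illustrative, not part of the official repo, which ships its own testing interface.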
-
Last chance of contributing to the ARC 2 dataset, ends 30 June 2023
https://github.com/fchollet/ARC
The ARC 2 dataset is crowdsourced. If you can come up with a challenging task, then please contribute it.
-
How long would you bet AGI WON'T happen?
In that case, per your definition, there will always be edge cases. Take a look at ARC for an example of something that is easy for humans but not yet doable by any AI at anywhere close to a human level. With respect to forecasting, I couldn't honestly say with any degree of confidence. Scaling larger transformer models may hit a roadblock, or it may work for everything, or there may be another development that changes the game. The soonest I think AGI that meets your definition will happen is 10 years from now, but I'm not confident in that prediction.
- “In 2033 it will seem utterly baffling how a bunch of tech folks lost their minds over text generators in 2023 -- like reading about Eliza or Minsky's 1970 quote about achieving human-level general intelligence by 1975” - François Chollet at Google
-
Eight Things to Know About Large Language Models [pdf]
Yes, François Chollet released the ARC (Abstraction and Reasoning Corpus) benchmark for this in 2019, and the benchmark can be scored automatically. Humans solve 100% of the tests, GPTs solve 0%, and GPTs made exactly zero progress from 2019 to 2022.
https://twitter.com/fchollet/status/1631699463524986880
https://github.com/fchollet/ARC
-
AGI 2023/2024?
Secondly, there are a few benchmarks that might actually be a good way to gauge the intelligence of AI. The Abstraction and Reasoning Corpus attempts to measure actual intelligence. It turns out that LLMs have not improved their score on this test since 2019! Whether GPT-4 will be able to do a better job, especially when images can be used, remains to be seen. However, initial results are not very promising.
-
Reflections on the "ARC" challenge proposed by François Chollet
Source
-
[D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches?
Neuro-symbolic systems where the neural network is tasked to _invent the system_. Take, for instance, the ARC task (https://github.com/fchollet/ARC): when humans do these tasks, it appears that we first invent a set of symbolic rules appropriate for the task at hand, then apply those rules.
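As a toy illustration of that "invent rules, then apply them" loop (the rule set below is made up for the sketch, not taken from any real ARC solver): enumerate a tiny hypothesis space of symbolic grid transforms, keep the one consistent with the training pairs, and apply it to new input:

```javascript
// Tiny hypothesis space of symbolic grid transforms (hypothetical rules).
const rules = {
  identity: (g) => g.map((r) => r.slice()),
  flipHorizontal: (g) => g.map((r) => r.slice().reverse()),
  flipVertical: (g) => g.slice().reverse(),
};

const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// "Invent" a rule: pick the first transform consistent with all training pairs.
function induceRule(trainPairs) {
  return Object.values(rules).find((rule) =>
    trainPairs.every((p) => eq(rule(p.input), p.output))
  );
}

// Usage: one training pair showing a horizontal flip.
const train = [{ input: [[1, 2], [3, 4]], output: [[2, 1], [4, 3]] }];
const rule = induceRule(train);
console.log(rule([[5, 6]])); // [[6, 5]]
```

Real ARC solvers search a vastly larger space of composable rules; this sketch only shows the induce-then-apply shape of the idea.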
-
Does anyone else feel that AI Art will be a total game-changer in society?
Current models are bad at using abstraction and reasoning to address new problems. They require training for each task, and they perform poorly outside of the tasks they were trained on. Researchers are working on this, but it's a hard problem - possibly the core of what it means to be "intelligent".
-
AI conversation in 2011 vs. 2021
If you want an immediate proof of this, download this repository: https://github.com/fchollet/ARC and solve some problems, then ask yourself to articulate precisely how you solved them. You won't be able to - and if you can, please write out the algorithm in Python or something: you'll easily win the Nobel Prize this year and get a seven-figure job at whatever tech company you want.
What are some alternatives?
mern-stack-application - A MERN stack e-commerce website.
ARC-Game - The Abstraction and Reasoning Corpus made into a web game
accountill - Fullstack open source Invoicing application made with MongoDB, Express, React & Nodejs (MERN)
nano-neuron - 🤖 NanoNeuron is 7 simple JavaScript functions that will give you a feeling of how machines can actually "learn"
react-redux-firebase - Redux bindings for Firebase. Includes React Hooks and Higher Order Components.
to-view-list-mern - Keep track of online stuff, which you may want to view later. Made using MERN stack.
project_mern_memories - This is a code repository for the corresponding video tutorial. Using React, Node.js, Express & MongoDB you'll learn how to build a Full Stack MERN Application - from start to finish. The App is called "Memories" and it is a simple social media app that allows users to post interesting events that happened in their lives.
AI-Expert-Roadmap - Roadmap to becoming an Artificial Intelligence Expert in 2022
notion-clone - Edit Notes like in Notion.so. Full-Stack App using React/Express.
handtrack.js - A library for prototyping realtime hand detection (bounding box), directly in the browser.
Nodejs-rest-api-project-structure-Express - Nodejs project structure practices for building RESTful APIs using Express framework and MongoDB.
gun - An open source cybersecurity protocol for syncing decentralized graph data.