opencog vs opennars

| | opencog | opennars |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 2,304 | 369 |
| Growth | 0.0% | 1.9% |
| Activity | 3.8 | 0.0 |
| Latest commit | about 1 year ago | about 3 years ago |
| Language | Scheme | Java |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
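The exact formula behind the activity score isn't published, but the description suggests a recency-weighted count of commits. Here is a minimal sketch of such a metric, assuming an exponential decay with a made-up `half_life_days` parameter; the final 0-10 ranking relative to all tracked projects is omitted:

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Recency-weighted activity: recent commits count more than old ones.

    commit_dates: iterable of timezone-aware commit datetimes.
    half_life_days: assumed decay rate; a commit this many days old
    counts half as much as one made today.
    """
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score
```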
opencog
- Teaching a Bayesian spam filter to play chess (2005)
Oh man, reading what you wrote out, it just occurred to me that learning is actually caching.
We already have a multitude of machines that can solve any problem: the global economy, corporations, capitalism (Darwinian evolution cast as an economic model), organizations, our brains, etc.
So take an existing model that works, convert it to code made up of the business logic and tests that we write every day, and start replacing the manual portions with algorithms (automate them). The "work" of learning to solve a problem is the inverse of the solution being taught. But once you know the solution, cache it and use it.
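The "learning is caching" observation maps directly onto memoization: pay for the expensive search once, then serve the stored answer. A minimal sketch, where `solve_expensively` is a hypothetical stand-in for any costly learning or search step:

```python
from functools import lru_cache

def solve_expensively(problem: str) -> str:
    # Hypothetical stand-in for search, training, or manual human work.
    return problem[::-1]  # placeholder "solution"

@lru_cache(maxsize=None)
def solve(problem: str) -> str:
    # The expensive "learning" runs only on the first call; every later
    # call with the same problem is a cache hit.
    return solve_expensively(problem)
```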
I'm curious what the smallest fully automated model would look like. We can imagine a corporation where everyone has been replaced by a virtual agent running in code. Or a car where the driver is replaced by chips or (gasp) the cloud.
But how about a program running on a source code repo that can incorporate new code as long as all of its current unit tests pass? At first, people around the world would write the code. But eventually, more and more of the subrepos would be cached copies of other working solutions. Basically just keep doing that until it passes the Turing test (which I realize is passé by today's standards; look at online political debate with troll bots). We know that the compressed solution should be smaller than the 6 billion base pairs of DNA. It just doesn't seem like that hard of a problem. Except I guess it is:
https://github.com/opencog/opencog
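The comment above amounts to a test-gated merge policy: accept a candidate change only if the repo's existing suite still passes. A minimal sketch using git and pytest; the branch handling and the use of pytest are assumptions for illustration, not anything from the opencog repo:

```python
import subprocess

def tests_pass(repo_dir: str) -> bool:
    # Run the repo's existing unit tests; exit code 0 means all green.
    return subprocess.run(["pytest", "-q"], cwd=repo_dir).returncode == 0

def try_incorporate(repo_dir: str, candidate_branch: str) -> bool:
    """Merge candidate_branch only if the current test suite still passes."""
    merge = subprocess.run(
        ["git", "merge", "--no-commit", "--no-ff", candidate_branch],
        cwd=repo_dir,
    )
    if merge.returncode == 0 and tests_pass(repo_dir):
        subprocess.run(["git", "commit", "-m", f"incorporate {candidate_branch}"],
                       cwd=repo_dir, check=True)
        return True
    # Conflict or red tests: undo the attempted merge and reject.
    subprocess.run(["git", "merge", "--abort"], cwd=repo_dir)
    return False
```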
opennars
- AGI frameworks
- How will artificial general intelligence come?
(For research) Test Chamber, the most complex application, with its doors, keys, and pizzas, is nowhere near the complexity of the real ant world. Saw the car detection and the Lego bot too. https://github.com/opennars/opennars/wiki/Test-Chamber
- How to make/program an AI? Is it even possible?
- AI on the PC for fun
Non-Axiomatic Reasoning System
- What would the algorithm of imagination look like?
You could also look at OpenNARS, written by Pei Wang, who studied with Hofstadter: https://github.com/opennars/opennars. The theory behind the NARS system might approach what you are thinking about.
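For a taste of what "Non-Axiomatic" means in practice: NARS attaches a (frequency, confidence) truth value to every statement and combines evidence with rules such as revision and deduction. A rough sketch of those two truth functions, with the formulas taken from Wang's NAL publications; treat this as an approximation, not the real system:

```python
def revision(f1, c1, f2, c2):
    # Pool two independent bodies of evidence about the same statement.
    w1, w2 = c1 * (1 - c2), c2 * (1 - c1)
    f = (f1 * w1 + f2 * w2) / (w1 + w2)
    c = (w1 + w2) / (w1 + w2 + (1 - c1) * (1 - c2))
    return f, c

def deduction(f1, c1, f2, c2):
    # Chain "A -> B" and "B -> C" into "A -> C"; confidence shrinks
    # because the conclusion depends on both premises holding.
    f = f1 * f2
    return f, f * c1 * c2

# Two weak observations of the same fact reinforce each other:
print(revision(0.9, 0.5, 0.8, 0.5))  # frequency ~0.85, confidence ~0.67
```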
What are some alternatives?
gluon-nlp - NLP made easy
Choco - An open-source Java library for Constraint Programming
ccg2lambda - Provide Semantic Parsing solutions and Natural Language Inferences for multiple languages following the idea of the syntax-semantics interface.
Hodoku - A solver/generator/trainer/analyzer for standard sudoku.
nlp-recipes - Natural Language Processing Best Practices & Examples
awesome-rust-formalized-reasoning - An exhaustive list of all Rust resources regarding automated or semi-automated formalization efforts in any area, constructive mathematics, formal algorithms, and program verification.
learn - Neuro-symbolic interpretation learning (mostly just language-learning, for now)
OpenNARS-for-Applications - General reasoning component for applications based on NARS theory.
nli4ct
fastInvoiceAI - Automate accounting of Peppol and EHF invoices in Java.
Recaf - The modern Java bytecode editor
RoyalUr-Analysis - This repository is dedicated to the technical analysis of The Royal Game of Ur. We aim to answer: How much of the game is luck, and how much is skill?