Teaching a Bayesian spam filter to play chess (2005)

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • opencog

    A framework for integrated Artificial Intelligence & Artificial General Intelligence (AGI)

  • Oh man, reading what you wrote out, it just occurred to me that learning is actually caching.

    We already have a multitude of machines that can solve any problem: the global economy, corporations, capitalism (Darwinian evolution cast as an economic model), organizations, our brains, etc.

    So take an existing model that works, convert it to code made up of the business logic and tests that we write every day, and start replacing the manual portions with algorithms (automate them). The "work" of learning to solve a problem is the inverse of the solution being taught. But once you know the solution, cache it and use it.

    I'm curious what the smallest fully automated model would look like. We can imagine a corporation where everyone has been replaced by a virtual agent running in code. Or a car where the driver is replaced by chips or (gasp) the cloud.

    But how about a program running on a source code repo that can incorporate new code as long as all of its current unit tests pass? At first, people around the world would write the code. But eventually, more and more of the subrepos would be cached copies of other working solutions (a sketch of that test-gated loop follows below). Basically, just keep doing that until it passes the Turing test (which I realize is passé by today's standards; look at online political debate with troll bots). We know that the compressed solution should be smaller than the 6 billion base pairs of DNA. It just doesn't seem like that hard of a problem. Except I guess it is:

    https://github.com/opencog/opencog
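
    That last idea, a repo that accepts new code only if its existing unit tests still pass, is concrete enough to sketch. Below is a minimal, hypothetical Python version; the use of pytest as the test runner and git apply for patches are assumptions chosen for illustration, not anything specified in the comment:

        import shutil
        import subprocess
        import tempfile
        from pathlib import Path

        def tests_pass(repo: Path) -> bool:
            """Run the repo's existing unit tests; this is the only acceptance gate."""
            return subprocess.run(["pytest", "-q"], cwd=repo).returncode == 0

        def try_incorporate(repo: Path, patch: Path) -> bool:
            """Apply a candidate patch in a scratch copy; keep it only if the tests still pass."""
            with tempfile.TemporaryDirectory() as tmp:
                scratch = Path(tmp) / "candidate"
                shutil.copytree(repo, scratch)

                applied = subprocess.run(["git", "apply", str(patch)], cwd=scratch)
                if applied.returncode != 0 or not tests_pass(scratch):
                    return False  # reject: the patch is broken or regresses current behaviour

                # Accepted: the patch becomes part of the cached, working solution.
                subprocess.run(["git", "apply", str(patch)], cwd=repo, check=True)
                return True

    Whether a contribution comes from a person or from some automated candidate generator makes no difference to this loop; the cached tests serve as the entire specification of "working".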

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Suggest a related project

Related posts

  • Pydantic in prompt engineering

    1 project | /r/LangChain | 29 Nov 2023
  • 27-Jun-2023

    1 project | /r/dailyainews | 29 Jun 2023
  • Kor: Extract structured data using LLMs

    1 project | /r/hypeurls | 26 Jun 2023
  • Kor: Extract structured data using LLMs

    1 project | news.ycombinator.com | 26 Jun 2023
  • Information extraction in large documents with LLMs

    1 project | /r/MLQuestions | 10 Jun 2023