opencog VS nli4ct

Compare opencog vs nli4ct and see what their differences are.

                 opencog                                    nli4ct
Mentions         1                                          1
Stars            2,304                                      11
Growth           0.0%                                       -
Activity         3.8                                        4.4
Latest commit    about 1 year ago                           6 days ago
Language         Scheme                                     Jupyter Notebook
License          GNU General Public License v3.0 or later   -
  • Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
  • Stars - the number of stars a project has on GitHub.
  • Growth - month-over-month growth in stars.
  • Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track (a sketch of one possible formula follows below).
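The site does not publish the exact formulas behind these figures, so the following is only a plausible reconstruction: a minimal sketch of a month-over-month growth calculation and a recency-weighted activity score, where the exponential half-life decay is our assumption, not a documented method.

```python
from datetime import datetime, timezone

def star_growth(stars_now: int, stars_a_month_ago: int) -> float:
    """Month-over-month star growth, as a percentage."""
    if stars_a_month_ago == 0:
        return 0.0  # growth is undefined from zero; report 0.0 here
    return 100.0 * (stars_now - stars_a_month_ago) / stars_a_month_ago

def activity_score(commit_dates: list[datetime],
                   half_life_days: float = 30.0) -> float:
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits
    carry more weight than older ones."""
    now = datetime.now(timezone.utc)
    return sum(
        0.5 ** ((now - d).days / half_life_days)
        for d in commit_dates
    )

# Example: a project whose star count did not move this month,
# like opencog in the table above, shows 0.0% growth.
print(star_growth(2304, 2304))  # 0.0
```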

opencog

Posts with mentions or reviews of opencog. We have used some of these posts to build our list of alternatives and similar projects.
  • Teaching a Bayesian spam filter to play chess (2005)
    1 project | news.ycombinator.com | 30 Jan 2021
    Oh man, reading what you wrote out, it just occurred to me that learning is actually caching.

    We already have a multitude of machines that can solve any problem: the global economy, corporations, capitalism (Darwinian evolution cast as an economic model), organizations, our brains, etc.

    So take an existing model that works, convert it to code made up of the business logic and tests that we write every day, and start replacing the manual portions with algorithms (automate them). The "work" of learning to solve a problem is the inverse of the solution being taught. But once you know the solution, cache it and use it.

    I'm curious what the smallest fully automated model would look like. We can imagine a corporation where everyone has been replaced by a virtual agent running in code. Or a car where the driver is replaced by chips or (gasp) the cloud.

    But how about a program running on a source code repo that can incorporate new code as long as all of its current unit tests pass? At first, people around the world would write the code. But eventually, more and more of the subrepos would be cached copies of other working solutions. Basically, just keep doing that until it passes the Turing test (which I realize is passé by today's standards; look at online political debate with troll bots). We know that the compressed solution should be smaller than the 6 billion base pairs of DNA. It just doesn't seem like that hard a problem. Except I guess it is:

    https://github.com/opencog/opencog
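The "learning is caching" idea in this comment maps naturally onto memoization: pay the cost of finding a solution once, then store and reuse the result. A toy sketch, where `solve` is a hypothetical stand-in for any expensive search or learning procedure:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def solve(problem: str) -> str:
    """Hypothetical stand-in for an expensive search or learning step;
    imagine this takes hours of compute the first time it runs."""
    return problem[::-1]  # placeholder "solution"

solve("some hard problem")  # computed the slow way, then cached
solve("some hard problem")  # served instantly from the cache
```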

nli4ct

Posts with mentions or reviews of nli4ct. We have used some of these posts to build our list of alternatives and similar projects.
  • NLI4CT: Multi-Evidence Natural Language Inference for Clinical Trial Reports
    1 project | /r/BotNewsPreprints | 8 May 2023
    How can we interpret and retrieve medical evidence to support clinical decisions? Clinical trial reports (CTR) amassed over the years contain indispensable information for the development of personalized medicine. However, it is practically infeasible to manually inspect more than 400,000 clinical trial reports in order to find the best evidence for experimental treatments. Natural Language Inference (NLI) offers a potential solution to this problem, by allowing the scalable computation of textual entailment. However, existing NLI models perform poorly on biomedical corpora, and previously published datasets fail to capture the full complexity of inference over CTRs. In this work, we present a novel resource to advance research on NLI for reasoning on CTRs. The resource includes two main tasks. Firstly, to determine the inference relation between a natural language statement and a CTR. Secondly, to retrieve supporting facts to justify the predicted relation. We provide NLI4CT, a corpus of 2400 statements and CTRs, annotated for these tasks. Baselines on this corpus expose the limitations of existing NLI models, with 6 state-of-the-art NLI models achieving a maximum F1 score of 0.627. To the best of our knowledge, we are the first to design a task that covers the interpretation of full CTRs. To encourage further work on this challenging dataset, we make the corpus, competition leaderboard, website and code to replicate the baseline experiments available at: https://github.com/ai-systems/nli4ct
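The first NLI4CT task (deciding the inference relation between a statement and a CTR) is standard premise-hypothesis classification, so one way to see why generic models struggle is to run one directly. The sketch below is not the paper's baseline code: the roberta-large-mnli checkpoint is our arbitrary choice of off-the-shelf NLI model, and the premise and hypothesis strings are placeholders.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# A generic MNLI model; note it predicts three classes, while
# NLI4CT itself uses only Entailment and Contradiction.
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "Placeholder excerpt from a clinical trial report."    # the CTR text
hypothesis = "Placeholder statement to verify against the CTR."  # the statement

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label name.
print(model.config.id2label[logits.argmax(dim=-1).item()])
# -> one of CONTRADICTION / NEUTRAL / ENTAILMENT
```

A real system would also need the second task, evidence retrieval, and would have to cope with full-length CTRs rather than a short excerpt, which is where truncation alone breaks down.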

What are some alternatives?

When comparing opencog and nli4ct you can also consider the following projects:

opennars - OpenNARS for Research 3.0+

survey_kit - Flutter library to create beautiful surveys (aligned with ResearchKit on iOS)

gluon-nlp - NLP made easy

ccg2lambda - Provides semantic parsing and natural language inference for multiple languages, following the idea of the syntax-semantics interface.

TextFooler - A Model for Natural Language Attack on Text Classification and Inference

nlp-recipes - Natural Language Processing Best Practices & Examples

learn - Neuro-symbolic interpretation learning (mostly just language-learning, for now)

SurveyKit - Android library to create beautiful surveys (aligned with ResearchKit on iOS)