| | nli4ct | TextFooler |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 11 | 465 |
| Growth | - | - |
| Activity | 4.4 | 0.0 |
| Last commit | 18 days ago | over 1 year ago |
| Language | Jupyter Notebook | Python |
| License | - | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nli4ct
NLI4CT: Multi-Evidence Natural Language Inference for Clinical Trial Reports
How can we interpret and retrieve medical evidence to support clinical decisions? Clinical trial reports (CTRs) amassed over the years contain indispensable information for the development of personalized medicine. However, it is practically infeasible to manually inspect more than 400,000 clinical trial reports in order to find the best evidence for experimental treatments. Natural Language Inference (NLI) offers a potential solution to this problem by allowing the scalable computation of textual entailment. However, existing NLI models perform poorly on biomedical corpora, and previously published datasets fail to capture the full complexity of inference over CTRs. In this work, we present a novel resource to advance research on NLI for reasoning over CTRs. The resource includes two main tasks: first, to determine the inference relation between a natural language statement and a CTR; second, to retrieve supporting facts to justify the predicted relation. We provide NLI4CT, a corpus of 2,400 statements and CTRs annotated for these tasks. Baselines on this corpus expose the limitations of existing NLI models, with six state-of-the-art NLI models achieving a maximum F1 score of 0.627. To the best of our knowledge, we are the first to design a task that covers the interpretation of full CTRs. To encourage further work on this challenging dataset, we make the corpus, competition leaderboard, website, and code to replicate the baseline experiments available at: https://github.com/ai-systems/nli4ct
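The baselines above are reported as F1 scores on the entailment task. As a minimal sketch of how such a score is computed, here is a plain F1 calculation over toy entailment/contradiction labels; the data below is illustrative and not drawn from the corpus:

```python
# Minimal sketch of F1 evaluation for binary entailment predictions,
# the metric the NLI4CT baselines report. The gold/pred lists below
# are toy examples, not actual corpus data.

def f1_score(gold, pred, positive="Entailment"):
    """Compute F1 for the positive class from parallel label lists."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["Entailment", "Contradiction", "Entailment", "Contradiction"]
pred = ["Entailment", "Entailment", "Entailment", "Contradiction"]
print(round(f1_score(gold, pred), 3))  # 0.8
```

A maximum baseline F1 of 0.627 on this scale underlines how far current NLI models are from reliable inference over full CTRs.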
TextFooler
DeepMind’s New AI with a Memory Outperforms Algorithms 25 Times Its Size
I'd be interested to see whether these models are robust against algorithms like TextFooler [0]. I'm skeptical that this trend of 10x'ing the parameters will solve the "Clever Hans" problem.
[0]: https://github.com/jind11/TextFooler
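TextFooler's core idea is word-level substitution: greedily swap words for near-synonyms until the target model's prediction flips. Below is an illustrative toy sketch of that idea; the keyword classifier and hand-made synonym table are stand-ins, not TextFooler's actual components (which use counter-fitted word embeddings and a real NLP model):

```python
# Illustrative sketch of the word-substitution idea behind TextFooler-style
# attacks: greedily swap words for synonyms until a (toy) classifier flips
# its label. The classifier and synonym table are hypothetical stand-ins.

SYNONYMS = {"great": ["fine", "decent"], "terrible": ["poor", "mediocre"]}

def toy_sentiment(text):
    """Stand-in classifier: keyword lookup instead of a neural model."""
    words = text.lower().split()
    if "great" in words:
        return "positive"
    if "terrible" in words:
        return "negative"
    return "neutral"

def attack(text):
    """Greedily replace words with synonyms until the label changes."""
    original = toy_sentiment(text)
    words = text.split()
    for i, w in enumerate(words):
        for candidate in SYNONYMS.get(w.lower(), []):
            trial = words[:i] + [candidate] + words[i + 1:]
            if toy_sentiment(" ".join(trial)) != original:
                return " ".join(trial)
    return text  # no successful perturbation found

print(attack("the movie was great"))  # -> "the movie was fine"
```

The real attack additionally constrains substitutions by embedding similarity and sentence-level semantics, which is what makes the perturbations hard for humans to notice.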
What are some alternatives?
survey_kit - Flutter library to create beautiful surveys (aligned with ResearchKit on iOS)
TextAttack - TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
opencog - A framework for integrated Artificial Intelligence & Artificial General Intelligence (AGI)
FinBERT-QA - Financial Domain Question Answering with pre-trained BERT Language Model
gluon-nlp - NLP made easy
ccg2lambda - Provide Semantic Parsing solutions and Natural Language Inferences for multiple languages following the idea of the syntax-semantics interface.
nlp-recipes - Natural Language Processing Best Practices & Examples
SurveyKit - Android library to create beautiful surveys (aligned with ResearchKit on iOS)
ARElight - Granular Viewer of Sentiments Between Entities in Massively Large Documents and Collections of Texts, powered by AREkit
KitanaQA - KitanaQA: Adversarial training and data augmentation for neural question-answering models