opencog
nlp-recipes
| | opencog | nlp-recipes |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 2,304 | 6,020 |
| Growth | 0.0% | - |
| Activity | 3.8 | 0.0 |
| Latest commit | about 1 year ago | over 1 year ago |
| Language | Scheme | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opencog
-
Teaching a Bayesian spam filter to play chess (2005)
Oh man, reading what you wrote out, it just occurred to me that learning is actually caching.
We already have a multitude of machines that can solve any problem: the global economy, corporations, capitalism (Darwinian evolution cast as an economic model), organizations, our brains, etc.
So take an existing model that works, convert it to code made up of the business logic and tests that we write every day, and start replacing the manual portions with algorithms (automate them). The "work" of learning to solve a problem is the inverse of the solution being taught. But once you know the solution, cache it and use it.
I'm curious what the smallest fully automated model would look like. We can imagine a corporation where everyone has been replaced by a virtual agent running in code. Or a car where the driver is replaced by chips or (gasp) the cloud.
But how about a program running on a source-code repo that can incorporate new code as long as all of its current unit tests pass? At first, people around the world would write the code. But eventually, more and more of the subrepos would be cached copies of other working solutions. Basically just keep doing that until it passes the Turing test (which I realize is passé by today's standards; look at online political debate with troll bots). We know that the compressed solution should be smaller than the 6 billion base pairs of DNA. It just doesn't seem like that hard of a problem. Except I guess it is:
https://github.com/opencog/opencog
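The "test-gated repo" idea in the comment above can be sketched as a tiny loop: a patch is merged only if the full test suite still passes afterwards. Everything here (the `repo` dict, the `try_merge` helper, the toy tests) is illustrative, not a real system.

```python
# Sketch of a test-gated repository: the test suite is the specification,
# and passing it is the sole merge criterion. All names are illustrative.

def add(a, b):  # current (human-written) implementation
    return a + b

repo = {"add": add}

# Tests encode the business logic we want preserved.
tests = [
    lambda r: r["add"](2, 3) == 5,
    lambda r: r["add"](-1, 1) == 0,
]

def try_merge(repo, name, candidate):
    """Tentatively apply a patch; keep it only if every test passes."""
    patched = dict(repo, **{name: candidate})
    return patched if all(t(patched) for t in tests) else repo

# A broken automated replacement is rejected...
repo = try_merge(repo, "add", lambda a, b: a - b)
# ...while an equivalent one is cached in place of the original.
repo = try_merge(repo, "add", lambda a, b: b + a)
```

In this framing, "learning is caching" means the accepted candidates accumulate as the repo's working solutions, with the tests standing in for the problem being solved.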
nlp-recipes
-
Show HN: I turned my microeconomics textbook into a chatbot with GPT-3
https://github.com/topics/automatic-summarization
Microsoft/nlp-recipes lists current NLP tasks that would be helpful for a docs bot: https://github.com/microsoft/nlp-recipes#content
-
Show HN: DocsGPT, open-source documentation assistant, fully aware of libraries
https://github.com/topics/automatic-summarization
Though now archived,
> Microsoft/nlp-recipes lists current NLP tasks that would be helpful for a docs bot: https://github.com/microsoft/nlp-recipes#content
NLP Tasks: Text Classification, Named Entity Recognition, Text Summarization, Entailment, Question Answering, Sentence Similarity, Embeddings, Sentiment Analysis, Model Explainability, and Auto-Annotation
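For a docs bot, the summarization task in that list can be approximated without any model at all. The sketch below is a naive frequency-based extractive summarizer (score each sentence by the average corpus frequency of its words and keep the top ones); it is a baseline illustration, not the nlp-recipes approach.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Naive extractive summary: rank sentences by average word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:n_sentences])
```

Example: `summarize("Cats sleep a lot. Cats like fish. Dogs bark.")` picks the sentence whose words are most frequent overall.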
- ✨ 5 Free Resources for Learning Natural Language Processing with Python 🚀
-
Is there any utility software/bot that produces descriptor tags for a Reddit image post using the comments?
I found this (https://github.com/microsoft/nlp-recipes) resource and it has a list of pre-built or easily customizable NLP models that I'm going to try out.
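Before reaching for a pre-built model, the tagging idea can be prototyped as plain keyword extraction over a post's comments: count non-stopword tokens and propose the most frequent ones as tags. The stopword list and thresholds below are illustrative stand-ins for a real NER or keyword model.

```python
import re
from collections import Counter

# Minimal illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "it", "this", "that", "of", "in",
             "and", "to", "i", "so", "on", "for", "was", "at", "what", "my"}

def tags_from_comments(comments, n=3):
    """Suggest descriptor tags: the most frequent non-stopword tokens
    across a post's comments (a crude stand-in for an NLP model)."""
    counts = Counter()
    for comment in comments:
        for token in re.findall(r"[a-z']+", comment.lower()):
            if token not in STOPWORDS and len(token) > 2:
                counts[token] += 1
    return [word for word, _ in counts.most_common(n)]
```

For example, comments like "What a cute cat!" and "That cat looks like my cat" would yield `cat` and `cute` as the top tags.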
-
Building an aspect-based sentiment classifier
There is an NLP recipe from Microsoft on ABSA. Have you seen this? https://github.com/microsoft/nlp-recipes/blob/master/examples/sentiment_analysis/absa/absa.ipynb
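To make the task concrete: ABSA assigns a sentiment to each aspect term (e.g. "food", "service") rather than to the whole text. The sketch below is a toy lexicon-based version using sentence co-occurrence; the lexicons and aspects are illustrative and unrelated to the Microsoft notebook, which fine-tunes a trained model.

```python
import re

# Illustrative sentiment lexicons; a real ABSA system learns these signals.
POSITIVE = {"great", "excellent", "tasty", "friendly", "fast"}
NEGATIVE = {"slow", "rude", "bland", "terrible", "cold"}

def absa(review, aspects):
    """Map each aspect term to 'positive'/'negative'/'neutral' based on
    sentiment words appearing in the same sentence."""
    result = {}
    for sentence in re.split(r"(?<=[.!?])\s+", review.lower()):
        tokens = set(re.findall(r"\w+", sentence))
        for aspect in aspects:
            if aspect in tokens:
                pos = len(tokens & POSITIVE)
                neg = len(tokens & NEGATIVE)
                result[aspect] = ("positive" if pos > neg
                                  else "negative" if neg > pos
                                  else "neutral")
    return result
```

So "The food was tasty. The service was slow." yields a positive label for `food` and a negative one for `service`, which is exactly the per-aspect granularity that whole-document sentiment analysis misses.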
What are some alternatives?
opennars - OpenNARS for Research 3.0+
ludwig - Low-code framework for building custom LLMs, neural networks, and other AI models
gluon-nlp - NLP made easy
OpenPrompt - An Open-Source Framework for Prompt-Learning.
ccg2lambda - Provide Semantic Parsing solutions and Natural Language Inferences for multiple languages following the idea of the syntax-semantics interface.
rasa - 💬 Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants
learn - Neuro-symbolic interpretation learning (mostly just language-learning, for now)
deepsegment - A sentence segmenter that actually works!
nli4ct
pymarl2 - Fine-tuned MARL algorithms on SMAC (100% win rates on most scenarios)
Parrot_Paraphraser - A practical and feature-rich paraphrasing framework to augment human intents in text form to build robust NLU models for conversational engines. Created by Prithiviraj Damodaran. Open to pull requests and other forms of collaboration.