ModelRunner vs CoreNLP

| | ModelRunner | CoreNLP |
|---|---|---|
| Mentions | 1 | 11 |
| Stars | 57 | 9,469 |
| Growth | - | 0.5% |
| Activity | 0.0 | 9.1 |
| Latest commit | almost 2 years ago | 2 days ago |
| Language | Java | Java |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ModelRunner
-
Focus on the cool stuff, automate the rest and get a voice interface
Warning: this whole post is a blatant plug for my Open Source project https://github.com/etiennesillon/ModelRunner
There is a lot of discussion around no-code platforms and why developers don’t like them. My view is that they can be very useful for quickly getting through the boring parts of a project, like creating master data management screens. So I’ve built my own version, which interprets models at run time and, it turns out, understands natural language queries too!
Hi, my name is Etienne, I love coding and I’ve been doing it for a few decades now so I’d rather focus on code that keeps me interested. Unfortunately, I find that there is always a lot to code before I get to the interesting stuff. So, like every other half-decent programmer, I’ve always tried to automate as much as possible and build reusable libraries by adding levels of indirection and parameters.
I’ve been doing this for so long now that my code has become ‘hyper’ parameterised, so much so that I had to store all the parameters in configuration files. These evolved into complete models, basically a mix between ER models and UML diagrams: they include Entities and Attributes but also support all UML relationships (plus Back References), as well as formulas in object notation like “Product.Name” and “Sum(OrderLines.Amount)”. I’ve even extended the idea to include workflow models that specify what happens when an object is created, updated or deleted, or when a pre-requisite condition becomes true.
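To make the formula idea concrete, here is a minimal sketch of how an aggregate like “Sum(OrderLines.Amount)” could be evaluated against a map-based object model at run time. All names and the data layout here are my own illustrative assumptions, not ModelRunner’s actual implementation.

```java
import java.util.*;

// Hypothetical sketch: an "object" is a map of attribute name -> value, and a
// relationship like OrderLines is an attribute holding a list of child objects.
public class FormulaSketch {

    // Evaluates a path like "OrderLines.Amount" by summing the named
    // attribute over the related child objects.
    static double sum(Map<String, Object> obj, String path) {
        String[] parts = path.split("\\.");          // e.g. ["OrderLines", "Amount"]
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> children =
                (List<Map<String, Object>>) obj.get(parts[0]);
        double total = 0.0;
        for (Map<String, Object> child : children) {
            total += ((Number) child.get(parts[1])).doubleValue();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Object> order = new HashMap<>();
        order.put("OrderLines", List.of(
                Map.of("Amount", 10.0),
                Map.of("Amount", 32.5)));
        System.out.println(sum(order, "OrderLines.Amount")); // 42.5
    }
}
```

A real engine would resolve the path against the model’s relationship metadata rather than raw casts, but the run-time interpretation idea is the same.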
To simplify managing the models, I’ve written a graphical editor, starting with Eclipse GEF but, since I like to reinvent the wheel, moving to plain HTML5/JS. To make it even easier, I’ve added Google Speech Recognition, so I can now design models by just talking to Chrome and, when I’m done, deploy them with one click or by saying something like ‘please deploy the application’. This creates a schema for the data, and the ‘meta’ application is then ready to offer standard, web-based data management screens.
At this stage you’re probably thinking “Great, you can design and deploy data driven apps with your voice, so what?”
Ok, let’s move on to something more interesting then: what the ‘meta’ app can do because it has access to all the information in the model at run time, like, for example, manipulating the data using natural language queries.
This works because having access to the semantics in the model bridges the current gap between Machine Learning based Natural Language Understanding systems, which are very flexible but mostly ignorant of the domain model, and, on the other hand, old-fashioned back-end systems with very rigid APIs. You can find a more detailed discussion here: https://modeling-languages.com/modelrunner-open-source-no-co....
So I’ve also added Google Speech Recognition to the ‘meta’ application and I can now just speak to it and tell it to “create a city called Melbourne and set postcode to 3000 and set notes to the most liveable city in the world” or “get me a list of customers living in Sydney aged 40” which I think is pretty cool and almost justifies all the hours and late nights I’ve spent coding it!
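To show roughly how a spoken command could be turned into a model-driven create operation, here is a hedged sketch using a simple regex grammar. The grammar, class name and output shape are my guesses for illustration, not ModelRunner’s actual parser, which can lean on the model’s entities and attributes to do much better.

```java
import java.util.*;
import java.util.regex.*;

// Hypothetical sketch: map "create a city called Melbourne and set postcode
// to 3000" onto {entity=city, Name=Melbourne, postcode=3000}.
public class CommandSketch {

    static Map<String, String> parseCreate(String command) {
        Map<String, String> result = new LinkedHashMap<>();
        // "create a/an <entity> called <name>"
        Matcher head = Pattern
                .compile("create an? (\\w+) called ([\\w ]+?)(?= and set |$)")
                .matcher(command);
        if (head.find()) {
            result.put("entity", head.group(1));
            result.put("Name", head.group(2));
        }
        // any number of "set <attribute> to <value>" clauses
        Matcher attrs = Pattern
                .compile("set (\\w+) to ([\\w ]+?)(?= and set |$)")
                .matcher(command);
        while (attrs.find()) {
            result.put(attrs.group(1), attrs.group(2));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parseCreate(
                "create a city called Melbourne and set postcode to 3000"));
        // {entity=city, Name=Melbourne, postcode=3000}
    }
}
```

With the model available at run time, the parsed entity and attribute names can be validated against the schema before anything is written.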
I think this has pretty obvious applications, like being able to manage your data on the go by just talking to your phone instead of trying to use a GUI on a small screen.
So, I highly recommend the parameterised indirection approach, but if you don’t have a lot of time to write your own code, you might want to have a look at mine; it’s all Open Source under the MIT license: https://github.com/etiennesillon/ModelRunner.
Or, if you just want to try it or watch a demo, just head to https://modelrunner.org.
Now, it’s still very much a work in progress and I’ve spent more time on the core engine than on the UI so if you try to break it, you probably will! But, if you give it a try, please let me know how you went!
Thank you!
CoreNLP
-
How does "Reclaim.ai" use AI for smart rescheduling?
The Stanford CoreNLP Model
-
One does not simply "create a visualization" from unstructured data!
If you’re looking at spaCy, have a look at Apache OpenNLP and CoreNLP.
-
Has anyone here ever used the seaNMF model for short text topic modeling, and would you be willing to help me get started with it?
Tokenize with NLTK, SpaCy or CoreNLP
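As a rough illustration of what a tokenizer does, here is a minimal sketch using only the JDK’s `BreakIterator`. The real tokenizers in NLTK, spaCy or CoreNLP handle far more edge cases (abbreviations, URLs, language-specific rules); this just shows the basic idea of splitting text into word and punctuation tokens.

```java
import java.text.BreakIterator;
import java.util.*;

// Minimal word tokenizer built on the JDK's locale-aware boundary analysis.
public class TokenizeSketch {

    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        BreakIterator it = BreakIterator.getWordInstance(Locale.ENGLISH);
        it.setText(text);
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE;
                start = end, end = it.next()) {
            String token = text.substring(start, end).trim();
            if (!token.isEmpty()) {       // drop whitespace-only segments
                tokens.add(token);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("CoreNLP tokenizes text, doesn't it?"));
    }
}
```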
-
How to use CoreNLP with a large corpus (14.7 GB)?
It should not take nearly that long. However, again I must recommend you take this conversation to GitHub.
-
What universities are hubs for reinforcement learning research?
Stanford has a great program and the Stanford NLP Group maintains CoreNLP which I have used before.
-
POS-Tagger for declension of German words in Java?
So why not use the Stanford CoreNLP library?
-
A comparison of libraries for named entity recognition
If you need NER, there’s no need to implement it yourself. There are several popular libraries that can do this for you nowadays. Five of these libraries, Stanford CoreNLP, NLTK, OpenNLP, spaCy, and GATE, were already mentioned in the title.
-
Making my own AI assistant
Check something like this out to start: https://stanfordnlp.github.io/CoreNLP/
-
Good tutorials for PyTorch?
You don't actually even need to learn how to do deep learning if you're doing something fairly basic, which it sounds like you are. There are a lot of good tools you can use basically straight out of the box for something like this. Check out https://huggingface.co/course/chapter1, https://course.spacy.io/en/, https://guide.allennlp.org/ and https://www.nltk.org/book/. If java's more your thing, add https://stanfordnlp.github.io/CoreNLP/ to the list.
-
[D] Java vs Python for Machine learning
To give a contrasting perspective, I think the Java ecosystem is much better suited for many data science tasks, and has a growing and well-maintained set of libraries for general purpose machine learning. I won't list them all, but TF-Java, DJL et al. have implementations of many modern architectures and there are a number of excellent libraries (CoreNLP, Lucene et al.) for working with text.
What are some alternatives?
Apache OpenNLP - Apache OpenNLP
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
Mallet - MALLET is a Java-based package for statistical natural language processing, document classification, clustering, topic modeling, information extraction, and other machine learning applications to text.
Deep Java Library (DJL) - An Engine-Agnostic Deep Learning Framework in Java
DKPro Core - Collection of software components for natural language processing (NLP) based on the Apache UIMA framework.
CogCompNLP - CogComp's Natural Language Processing Libraries and Demos: Modules include lemmatizer, ner, pos, prep-srl, quantifier, question type, relation-extraction, similarity, temporal normalizer, tokenizer, transliteration, verb-sense, and more.
Apache Solr - Apache Lucene and Solr open-source search software
java - Java bindings for TensorFlow
SeaNMF - Short Text Topic Modeling
NLTK - NLTK Source