duckling
projects
| | duckling | projects |
|---|---|---|
| Mentions | 13 | 6 |
| Stars | 4,015 | 1,246 |
| Growth | 0.6% | 1.9% |
| Activity | 0.0 | 4.7 |
| Latest commit | 2 months ago | 23 days ago |
| Language | Haskell | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
duckling
-
Experimental library for scraping websites using OpenAI's GPT API
For the reasons others have said I don't see it replacing 'traditional' scraping soon. But I am looking forward to it replacing current methods of extracting data from the scraped content.
I've been using Duckling [0] for extracting fuzzy dates and times from text. It does a good job, but I needed a custom build with extra rules to turn that into a great job. And that's just for dates, one of the 13 dimensions supported. Being able to use an AI that handles them with better accuracy will be fantastic.
Does a specialised model trained to extract times and dates already exist? It's entity tagging but a specialised form (especially when dealing with historical documents where you may need Gregorian and Julian calendars).
[0] https://github.com/facebook/duckling
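To make the comment concrete, here is a rough stdlib-only sketch of the kind of rule-based date extraction the commenter is describing. This is not Duckling's API; the patterns and function names are hypothetical, and Duckling itself uses a far richer probabilistic grammar across its 13 dimensions.

```python
import re
from datetime import datetime

# Hypothetical, minimal extractor: find a few fixed date formats in free
# text. Each entry pairs a regex with the strptime format used to parse it.
PATTERNS = [
    (r"\b(\d{4}-\d{2}-\d{2})\b", "%Y-%m-%d"),
    (r"\b(\d{1,2}/\d{1,2}/\d{4})\b", "%d/%m/%Y"),
    (r"\b([A-Z][a-z]+ \d{1,2}, \d{4})\b", "%B %d, %Y"),
]

def extract_dates(text):
    """Return (matched_text, datetime) pairs found in the text."""
    results = []
    for pattern, fmt in PATTERNS:
        for match in re.finditer(pattern, text):
            try:
                results.append((match.group(1), datetime.strptime(match.group(1), fmt)))
            except ValueError:
                pass  # matched the shape but not a real date, e.g. month 13
    return results

found = extract_dates("The meeting moved from 2023-01-15 to January 20, 2023.")
```

The gap between this and "a great job" is exactly the long tail of fuzzy expressions ("next Tuesday", "mid-March") that rules like these miss.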
-
Automatically create calendar entries from emails with free-form dates
Ah, sorry: https://github.com/facebook/duckling
-
Transforming free-form geospatial directions into addresses - SOTA?
To understand what relative distance and direction is indicated from the reference point, I'd look into something like Facebook & Wit.AI's Duckling, and a custom classifier to identify if it's on the reference point ("corner of"), or some distance from ("200 meters southwest"). If you can parse out a distance and direction, then it's all logic to plot the point.
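The final "plot the point" step the comment mentions really is plain trigonometry. As a hedged sketch (names and coordinates are illustrative; this uses a flat-earth approximation that is fine for a few hundred meters, not long ranges):

```python
import math

# Once a parser has produced a reference coordinate, a distance in meters,
# and a compass direction, offsetting the point is simple geometry.
EARTH_RADIUS_M = 6_371_000

BEARINGS = {"north": 0, "northeast": 45, "east": 90, "southeast": 135,
            "south": 180, "southwest": 225, "west": 270, "northwest": 315}

def offset_point(lat, lon, distance_m, direction):
    """Move distance_m from (lat, lon) toward a compass direction."""
    theta = math.radians(BEARINGS[direction])
    # North component changes latitude; east component changes longitude,
    # scaled by cos(latitude) to account for converging meridians.
    dlat = distance_m * math.cos(theta) / EARTH_RADIUS_M
    dlon = distance_m * math.sin(theta) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

# "200 meters southwest" of a reference corner
new_lat, new_lon = offset_point(52.5200, 13.4050, 200, "southwest")
```

Duckling (or a custom classifier) would supply the `200`, `"southwest"`, and the reference point; everything after that is deterministic.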
-
Programming languages endorsed for server-side use at Meta
It also powers the backend of Wit.ai which FB owns. Wit's open-source entity parser, duckling, is written entirely in Haskell. https://github.com/facebook/duckling
- Data Cleaning using Machine Learning?
-
Unsplash chatbot for Discord, Pt. 2: more ways to bring pictures to Discord
Our RandomPicForLater intent will have one slot, reminderTime, of type @duckling.time. Duckling is a library that extracts entities from text, and it is one of the tools JAICP uses for this purpose. Entity types in Duckling are called dimensions, and a number of them are built in. Among them is Time, which suits us perfectly: we need to ask users when they want a post scheduled, then parse their text input into a datetime object.
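What resolving a Time dimension amounts to can be sketched in a few lines of stdlib Python. This is purely illustrative, not Duckling's grammar or API; real Duckling resolves a vastly larger set of expressions against a reference time (reftime) in the same spirit.

```python
import re
from datetime import datetime, timedelta

def resolve_time(text, ref):
    """Resolve a couple of relative time expressions against a reference time."""
    # "in 2 hours", "in 3 days", ...
    m = re.search(r"in (\d+) (minute|hour|day)s?", text)
    if m:
        amount = int(m.group(1))
        unit = {"minute": "minutes", "hour": "hours", "day": "days"}[m.group(2)]
        return ref + timedelta(**{unit: amount})
    # "tomorrow at 8" (defaults to 9:00 if no hour is given)
    if "tomorrow" in text:
        m = re.search(r"at (\d{1,2})", text)
        hour = int(m.group(1)) if m else 9
        base = ref + timedelta(days=1)
        return base.replace(hour=hour, minute=0, second=0, microsecond=0)
    return None

ref = datetime(2021, 6, 1, 12, 0)
when = resolve_time("remind me tomorrow at 8", ref)
```

The slot filled by @duckling.time hands the bot exactly this kind of resolved datetime, so the scheduling logic never sees raw text.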
-
Dependencies difference between cabal and stack
I'm working on a pretty interesting project right now and getting different results depending on the build tool: with cabal the test suite fails, but with stack it passes.
-
Running Duckling on Windows
Try downloading the v0.2.0.0 release, extracting it somewhere, opening that location in PowerShell, and running these commands:
-
[ANN] Duckling v0.2.0.0 released
Duckling (https://github.com/facebook/duckling) is a library for parsing text into structured data.
-
Extract name:value relationships from plain text
If you really want high precision, Duckling is a good project to check out https://github.com/facebook/duckling
projects
-
Identify custom labels as well as existing labels with Spacy v3
When I was doing the same task, I used their `spacy project` command-line interface and extended their `ner_drugs` project, which made things pretty easy. https://spacy.io/usage/projects https://github.com/explosion/projects/tree/v3/tutorials/ner_drugs
-
Build Spacy NER Loop for Dataframe
You could check out https://github.com/explosion/projects/tree/v3/tutorials for some sample code (this is the official spaCy GitHub).
-
Newbie question with Spacy Coreference Resolution
I used this example: https://github.com/explosion/projects/tree/v3/experimental/coref
-
Using pre-trained BERT embeddings for multi-class text classification
spaCy has an example project that uses BERT that you could use as a reference. It's multilabel but it should be easy to tweak the config to be just multiclass instead.
-
SpaCy v3.0 Released (Python Natural Language Processing)
The improved transformers support is definitely one of the main features of the release. I'm also really pleased with how the project system and config files work.
If you're always working with exactly one task model, I think working directly in transformers isn't that different from using spaCy. But if you're orchestrating multiple models, spaCy's pipeline components and Doc object will probably be helpful. A feature in v3 that I think will be particularly useful is the ability to share a transformer model between multiple components, for instance you can have an entity recogniser, text classifier and tagger all using the same transformer, and all backpropagating to it.
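The shared-transformer setup described above is wired up in the v3 config file. The fragment below is a sketch, not a complete working config: the component names are illustrative, and the registry strings should be checked against the spaCy docs, though `spacy-transformers.TransformerListener.v1` is the mechanism that lets several components listen to (and backpropagate into) one transformer.

```ini
[components.transformer]
factory = "transformer"

[components.ner]
factory = "ner"

[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0

[components.textcat]
factory = "textcat"

[components.textcat.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
```

Both the NER and textcat components here resolve their token-to-vector layer to the single `transformer` component, so there is one set of transformer weights receiving gradients from every task.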
You also might find the projects system useful if you're training a lot of models. For instance, take a look at the project repo [here](https://github.com/explosion/projects/tree/v3/benchmarks/ner...). Most of the readme there is actually generated from the project.yml file, which fully specifies the preprocessing steps you need to build the project from the source assets. The project system can also push and pull intermediate or final artifacts to a remote cache, such as an S3 bucket, with the addressing of the artifacts calculated based on hashes of the inputs and the file itself.
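For readers who haven't seen one, a project.yml fragment looks roughly like this. The script and asset names below are hypothetical; the schema (commands with `script`, `deps`, `outputs`, plus `remotes` for the cache) follows the spaCy projects docs.

```yaml
remotes:
  default: "s3://my-spacy-projects"

commands:
  - name: preprocess
    help: "Convert raw annotations to spaCy's binary format"
    script:
      - "python scripts/preprocess.py assets/raw.jsonl corpus/train.spacy"
    deps:
      - "assets/raw.jsonl"
      - "scripts/preprocess.py"
    outputs:
      - "corpus/train.spacy"
```

Because `deps` and `outputs` are declared per command, the project system can hash them and skip or re-run steps, and push or pull `corpus/train.spacy` from the remote cache as described above.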
The config file is comprehensive and extensible. The blocks refer to typed functions that you can specify yourself, so you can substitute any of your own layer (or other) functions in, to change some part of the system's behaviour. You don't _have_ to specify your models from the config files like this --- you can instead put it together in code. But the config system means there's a way of fully specifying a pipeline and all of the training settings, which means you can really standardise your training machinery.
Overall the theme of what we're doing is helping you to line up the workflows you use during development with something you can actually ship. We think one of the problems for ML engineers is that there's quite a gap between how people are iterating in their local dev environment (notebooks, scrappy directories etc) and getting the project into a state that you can get other people working on, try out in automation, and then pilot in some sort of soft production (e.g. directing a small amount of traffic to the model).
The problem with iterating in the local state is that you're running the model against benchmarks that are not real, and you hit diminishing returns quite quickly this way. It also introduces a lot of rework.
All that said, there will definitely be usage contexts where it's not worth introducing another technology. For instance, if your main goal is to develop a model, run an experiment and publish a paper, you might find spaCy doesn't do much that makes your life easier.
What are some alternatives?
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
syntaxdot - Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
ctparse - Parse natural language time expressions in python
Giveme5W1H - Extraction of the journalistic five W and one H questions (5W1H) from news articles: who did what, when, where, why, and how?
laserembeddings - LASER multilingual sentence embeddings as a pip package
rules - Durable Rules Engine
Kornia - Geometric Computer Vision Library for Spatial AI
BLINK - Entity Linker solution