SpaCy v3.0 Released (Python Natural Language Processing)

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • spaCy

    💫 Industrial-strength Natural Language Processing (NLP) in Python

    I'm sorry that this conflicted with your plans, but I feel strongly that distributing Python libraries via system package managers such as apt is very bad for users. The pain is felt especially by users who are relatively new to Python, who will end up with their system Python in a confusing state that is difficult to correct.

    We of course encourage anyone to clone the repo or install from an sdist if they want to compile from source. In fact you can do the following:

        git clone https://github.com/explosion/spaCy
        cd spaCy && pip install .  # one possible way to build and install from source

  • projects

    🪐 End-to-end NLP workflows from prototype to production (by explosion)

    The improved transformers support is definitely one of the main features of the release. I'm also really pleased with how the project system and config files work.

    If you're always working with exactly one task model, I think working directly in transformers isn't that different from using spaCy. But if you're orchestrating multiple models, spaCy's pipeline components and Doc object will probably be helpful. A feature in v3 that I think will be particularly useful is the ability to share a transformer model between multiple components: for instance, you can have an entity recogniser, a text classifier and a tagger all using the same transformer, and all backpropagating to it.
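
    As a concrete illustration, the pretrained en_core_web_trf pipeline is built this way: its tagger, parser and ner components all listen to a single shared transformer component. A minimal usage sketch, assuming that package has been downloaded:

        import spacy

        # Assumes spaCy v3 plus the en_core_web_trf pipeline, installed via:
        #   python -m spacy download en_core_web_trf
        # Its tagger, parser and ner components all listen to one shared
        # "transformer" component, so the encoder runs once per document.
        nlp = spacy.load("en_core_web_trf")
        doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

        print([(ent.text, ent.label_) for ent in doc.ents])  # named entities
        print([(tok.text, tok.tag_) for tok in doc])          # fine-grained POS tags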

    You also might find the projects system useful if you're training a lot of models. For instance, take a look at the project repo [here](https://github.com/explosion/projects/tree/v3/benchmarks/ner...). Most of the readme there is actually generated from the project.yml file, which fully specifies the preprocessing steps you need to build the project from the source assets. The project system can also push and pull intermediate or final artifacts to a remote cache, such as an S3 bucket, with the addressing of the artifacts calculated based on hashes of the inputs and the file itself.
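
    The remote caching works much like content-addressable storage. As a toy illustration of the general idea (not spaCy's actual scheme), an artifact's address can be derived from the command that produces it plus the contents of its inputs:

        import hashlib
        from pathlib import Path

        # Toy sketch of hash-based artifact addressing: the same command run on
        # unchanged inputs yields the same key, so the cached output can be pulled
        # from the remote (e.g. an S3 bucket) instead of being recomputed.
        def cache_key(command: str, input_paths: list) -> str:
            digest = hashlib.sha256(command.encode("utf-8"))
            for path in sorted(input_paths):
                digest.update(Path(path).read_bytes())
            return digest.hexdigest()

        # Example (hypothetical paths):
        #   cache_key("python scripts/preprocess.py", ["assets/train.json"])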

    The config file is comprehensive and extensible. The blocks refer to typed functions that you can specify yourself, so you can substitute your own layer (or other) functions to change some part of the system's behaviour. You don't _have_ to specify your models via config files like this; you can instead put them together in code. But the config system means there's a way of fully specifying a pipeline and all of the training settings, which lets you really standardise your training machinery.
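
    For example, the blocks resolve to registered, typed functions, so plugging in your own layer is just a matter of registering it and referencing its name from the config. A minimal sketch (the name "my_layers.tiny_linear.v1" is made up for illustration):

        import spacy
        from thinc.api import Linear, Model

        # Register a custom, typed architecture; a config block can then point at it
        # with @architectures = "my_layers.tiny_linear.v1" and pass nO/nI as values.
        @spacy.registry.architectures("my_layers.tiny_linear.v1")
        def tiny_linear(nO: int, nI: int) -> Model:
            # Swap in any Thinc layer (or a wrapped PyTorch/TensorFlow model) here.
            return Linear(nO, nI)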

    Overall, the theme of what we're doing is helping you line up the workflows you use during development with something you can actually ship. We think one of the problems for ML engineers is that there's quite a gap between how people iterate in their local dev environment (notebooks, scrappy directories, etc.) and getting the project into a state where other people can work on it, try it out in automation, and then pilot it in some sort of soft production (e.g. directing a small amount of traffic to the model).

    The problem with iterating only in that local state is that you're running the model against benchmarks that aren't real, so you hit diminishing returns quite quickly. It also introduces a lot of rework.

    All that said, there will definitely be usage contexts where it's not worth introducing another technology. For instance, if your main goal is to develop a model, run an experiment and publish a paper, you might find spaCy doesn't do much that makes your life easier.

  • syntaxdot

    Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.

    It's sometimes claimed that Huggingface Transformers only fills the need for task-based prediction when you have a GPU, but that's not quite right.

    With model distillation, it should be possible to annotate hundreds of sentences per second on a single CPU with a library like Huggingface Transformers.

    For instance, one of my distilled Dutch multi-task syntax models (UD POS, language-specific POS, lemmatization, morphology, dependency parsing) annotates 316 sentences per second with 4 threads on a Ryzen 3700X. This distilled model has virtually no loss in accuracy, compared to the finetuned XLM-RoBERTa base model.

    I don't use Huggingface Transformers myself; I ported some of their implementations to Rust [1]. That shouldn't make a big difference, though, since all the heavy lifting happens in C++ in libtorch anyway.

    tl;dr: it is not true that transformers are only useful for GPU prediction. You can get high CPU prediction speeds with some tricks (distillation, length-based bucketing in batches, etc.); a toy sketch of the bucketing idea follows below.

    [1] https://github.com/tensordot/syntaxdot/tree/main/syntaxdot-t...
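
    A toy sketch of the length-based bucketing trick mentioned above: batching sentences of similar length together minimises padding, and therefore wasted computation, which matters a lot for CPU throughput. This is a generic illustration, independent of any particular library:

        # Sort sentences by length, then slice into batches so each batch contains
        # sentences of similar length and needs very little padding.
        def bucket_batches(sentences, batch_size=32):
            order = sorted(range(len(sentences)), key=lambda i: len(sentences[i]))
            for start in range(0, len(order), batch_size):
                batch_ids = order[start:start + batch_size]
                yield [sentences[i] for i in batch_ids]

        # Example: for batch in bucket_batches(corpus, batch_size=64): predict(batch)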

  • Kornia

    Geometric Computer Vision Library for Spatial AI

    I haven't had a chance to use it yet, but I think Kornia looks cool: https://github.com/kornia/kornia

  • laserembeddings

    LASER multilingual sentence embeddings as a pip package

    I've been using LASER from Facebook Research via https://github.com/yannvgn/laserembeddings to accept multilingual input in front of the domain-specific models for recommendations and the like (which are trained on annotated English examples).
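
    A minimal usage sketch based on the laserembeddings README: sentences in different languages are embedded into one shared vector space, so a downstream model trained on English embeddings can accept other languages at inference time.

        from laserembeddings import Laser

        # Assumes the LASER model files have been fetched first, e.g. with
        #   python -m laserembeddings download-models
        laser = Laser()
        vectors = laser.embed_sentences(
            ["This movie was great.", "Ce film était super."],
            lang=["en", "fr"],  # one language code per sentence
        )
        print(vectors.shape)  # (2, 1024) language-agnostic sentence embeddings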

  • duckling

    Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.

  • rules

    Durable Rules Engine (by jruizgit)

    Currently I use https://github.com/nilp0inter/experta, but https://github.com/noxdafox/clipspy seems nice; I just shied away from it due to uneasiness about FFI and debugging, even though the original CLIPS is still awesome and has a very interesting manual. (A minimal experta sketch follows below.)

    There's also https://github.com/jruizgit/rules, but I haven't tried it yet.
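
    A minimal experta sketch (hedged; it mirrors the classic pyknow README example, and experta is a maintained fork of pyknow):

        from experta import Fact, KnowledgeEngine, Rule

        class Light(Fact):
            """A fact describing the traffic light."""
            pass

        class CrossStreet(KnowledgeEngine):
            @Rule(Light(color="green"))
            def green_light(self):
                # Fires whenever a Light fact with color="green" is in working memory.
                print("Walk")

        engine = CrossStreet()
        engine.reset()                        # initialise working memory
        engine.declare(Light(color="green"))  # assert a fact
        engine.run()                          # fires matching rules -> prints "Walk"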
