Beginner questions about NER model evaluation.

This page summarizes the projects mentioned and recommended in the original post on reddit.com/r/LanguageTechnology

  • seqeval

    A Python framework for sequence labeling evaluation (named-entity recognition, POS tagging, etc.)

    The standard way to evaluate NER (or any other sequence labelling problem) is to use the conlleval script (https://www.clips.uantwerpen.be/conll2000/chunking/output.html) or the seqeval package in Python (https://github.com/chakki-works/seqeval). Either way, you need a list of predicted labels and a list of gold labels (see the code example in the link; it should be trivial to convert your output to the same data format), as sketched below.

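For reference, here is a minimal sketch of using seqeval with two such lists. The tag sequences below are made-up IOB2 examples for illustration, not data from the original post.

    # pip install seqeval
    from seqeval.metrics import classification_report, f1_score

    # Gold labels and predicted labels: one list of IOB2 tags per sentence.
    y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "I-ORG", "O"]]
    y_pred = [["B-PER", "I-PER", "O", "O"],     ["O", "B-ORG", "I-ORG", "O"]]

    # seqeval scores at the entity level (not per token), like conlleval.
    print(f1_score(y_true, y_pred))
    print(classification_report(y_true, y_pred))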


