[D] How do pretrained tokenizers work?

This page summarizes the projects mentioned and recommended in the original post on reddit.com/r/MachineLearning

  • transformers

    🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

    I have been using the pretrained tokenizers available from the huggingface/transformers library, and they have been working well for my use case; a minimal usage sketch follows this list.

  • sentencepiece

    Unsupervised text tokenizer for Neural Network-based text generation.

    For papers, see the references listed at https://github.com/google/sentencepiece; a brief training and encoding sketch also follows this list.

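As a rough illustration of the transformers workflow mentioned above, here is a minimal sketch of loading and applying a pretrained tokenizer. The checkpoint name "bert-base-uncased" and the sample sentence are arbitrary examples, not taken from the original post.

```python
from transformers import AutoTokenizer

# Load a pretrained tokenizer from the Hugging Face Hub.
# "bert-base-uncased" is an arbitrary example checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "How do pretrained tokenizers work?"

# Split the text into subword pieces using the learned vocabulary;
# words not in the vocabulary are broken into smaller known pieces.
tokens = tokenizer.tokenize(text)
print(tokens)

# Calling the tokenizer directly produces model-ready input IDs,
# including any special tokens, plus an attention mask.
encoded = tokenizer(text)
print(encoded["input_ids"])

# decode() maps the IDs back to text.
print(tokenizer.decode(encoded["input_ids"]))
```
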
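And for sentencepiece, a minimal sketch of training an unsupervised subword model on raw text and using it to encode and decode. The file name "corpus.txt", the vocabulary size, and the model type are illustrative assumptions, not settings from the original post.

```python
import sentencepiece as spm

# Train a subword model directly on raw text, with no pre-tokenization.
# "corpus.txt" is a hypothetical plain-text file (one sentence per line);
# vocab_size and model_type are arbitrary example settings.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="example",
    vocab_size=8000,
    model_type="unigram",  # "bpe" is the other common choice
)

# Load the trained model and use it to encode and decode.
sp = spm.SentencePieceProcessor(model_file="example.model")

sentence = "How do pretrained tokenizers work?"
pieces = sp.encode(sentence, out_type=str)  # subword strings
ids = sp.encode(sentence, out_type=int)     # integer IDs
print(pieces)
print(sp.decode(ids))  # decoding recovers the original text
```
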
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
