Help with aligned word embeddings

This page summarizes the projects mentioned and recommended in the original post on /r/LanguageTechnology

  • MUSE

    (Discontinued) A library for Multilingual Unsupervised or Supervised word Embeddings

  • We currently train our own vocabularies on Wikipedia and other sources, and we align the vocabularies using MUSE with default settings (the 0-5000 dictionary entries for training, the 5000-6500 entries for evaluation, and 5 refinement iterations); the core alignment step is sketched below.
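
MUSE's supervised mode, with the settings quoted above, boils down to solving an orthogonal Procrustes problem over the training dictionary and then repeating it for the requested number of refinement iterations. Below is a minimal numpy sketch of that single Procrustes step, with random arrays standing in for real source/target vectors; it is an illustration of the math, not MUSE's actual pipeline.

```python
# Minimal sketch of the orthogonal Procrustes step behind MUSE's supervised
# alignment; the toy arrays below stand in for real paired embeddings.
import numpy as np

rng = np.random.default_rng(0)

# X: source-language vectors, Y: target-language vectors for the word pairs
# in the training dictionary (row i of X translates to row i of Y).
X = rng.normal(size=(5000, 300)).astype(np.float32)
Y = rng.normal(size=(5000, 300)).astype(np.float32)

# Orthogonal mapping W minimizing ||X @ W - Y||_F:
# with U, _, Vt = SVD(X.T @ Y), the solution is W = U @ Vt.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# Map the full source embedding space into the target space.
aligned_X = X @ W
```

MUSE's refinement iterations re-extract a dictionary from the aligned spaces and re-solve this same problem, which is why only the dictionary ranges and the number of refinements need to be configured.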

  • LASER

    Language-Agnostic SEntence Representations

  • You want LASER: it's a very large model trained on a huge number of languages. You can use it with sentence_transformers in Python to compute embeddings, then use faiss or datasketch to find the top-k matches (see the sketch below).
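
The retrieval pattern the comment describes is: embed sentences, index them, and search for the k nearest neighbours. LASER itself is usually run through its own toolkit rather than sentence_transformers, so the sketch below uses a multilingual sentence-transformers model as a stand-in (the model name is an assumption) together with faiss for the top-k search.

```python
# Sketch of "embed, then find matches at k": multilingual sentence embeddings
# plus a faiss inner-product index. The model is a stand-in, not LASER itself.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

corpus = ["Der Hund schläft.", "The cat sleeps.", "Il pleut aujourd'hui."]
queries = ["The dog is sleeping."]

# Normalized embeddings so inner product equals cosine similarity.
corpus_emb = model.encode(corpus, convert_to_numpy=True, normalize_embeddings=True)
query_emb = model.encode(queries, convert_to_numpy=True, normalize_embeddings=True)

index = faiss.IndexFlatIP(corpus_emb.shape[1])
index.add(corpus_emb.astype(np.float32))

# Top-2 matches per query.
scores, ids = index.search(query_emb.astype(np.float32), 2)
for query, row_ids, row_scores in zip(queries, ids, scores):
    print(query, [(corpus[i], float(s)) for i, s in zip(row_ids, row_scores)])
```

datasketch (MinHash LSH) is an alternative to faiss when approximate, hash-based matching is good enough.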

  • electra

    ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

  • If you have at least a decent gaming GPU, or are willing to use Colab, you could get a relevant dataset and use ELECTRA: https://github.com/google-research/electra
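
The linked repository is Google's TensorFlow code for pre-training and fine-tuning ELECTRA on your own data. As a smaller-scale alternative (an assumption, not the repo's own workflow), pretrained ELECTRA checkpoints can also be loaded through Hugging Face transformers to get contextual embeddings directly:

```python
# Sketch: extract sentence vectors from a pretrained ELECTRA checkpoint via
# Hugging Face transformers (not the linked repo's TensorFlow training code).
import torch
from transformers import AutoModel, AutoTokenizer

name = "google/electra-small-discriminator"  # small enough for a gaming GPU or Colab
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

sentences = [
    "Aligned word embeddings are useful.",
    "Word vectors can be mapped across languages.",
]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)

# Mean-pool the last hidden states over non-padding tokens: one vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1)
sentence_emb = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_emb.shape)  # (2, hidden_size)
```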

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
