esm

Evolutionary Scale Modeling (esm): Pretrained language models for proteins (by facebookresearch)

Esm Alternatives

Similar projects and alternatives to esm

NOTE: The mention count reflects how often a project appears in shared posts, plus user-suggested alternatives. A higher count therefore suggests a closer or more popular esm alternative.

esm reviews and mentions

Posts with mentions or reviews of esm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-13.
  • Large language models generate functional protein sequences across families
    3 projects | news.ycombinator.com | 13 May 2023
    When evaluating this work, it’s important to remember that the functional labels on each of the 290 million input sequences were originally assigned by an HMM as part of the Pfam project, so the model is predicting a prediction.

    Furthermore, the authors apply a lot of human curation to ensure the sequences they generate are active. First, they pick an easy target. Second, they apply classical bioinformatics techniques by hand to the generated sequences. For example, they manually align the sequences and select those that contain specific amino acids, at specific positions, that are present in 100% of functional proteins of that class and are required for function. All of this is done by a human bioinformatics expert before they test the “generated” sequences.

    One other comment: in protein science, a sequence with 40% identity to another sequence is not “very different” if the two are homologous. Since this model is essentially generating homologs within a particular class, it’s no surprise that, at the pairwise amino-acid level, the generated sequences show this degree of similarity. Take proteins in any functional family and compare them: they will share the same overall 3-D structure, called their “fold”, yet have pairwise sequence identities well below 30–40%.

    Not to be negative. I really enjoyed reading this paper and I think the work is important. Some related work by Meta AI is the ESM series of models [1] trained on the same data (the UniProt dataset [2]).

    One thing I wonder about is the vocabulary size of this model. The number of tokens is 26: the 20 amino acids plus some extras, whereas an LLM like Meta’s LLaMA has a vocab size of 32,000. I wonder how that changes training and inference, and how the transformer architecture can be adapted to this scenario.

    1. https://github.com/facebookresearch/esm

    2. https://www.uniprot.org/help/downloads
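As an aside on that vocabulary question, a protein tokenizer really can be this small. The sketch below builds a hypothetical 26-token vocabulary (the 20 standard amino acids plus a few invented special tokens; the actual token sets of ESM or other protein models differ), just to make the contrast with a 32,000-token text vocabulary concrete.

```python
# Illustrative sketch of a tiny protein "tokenizer" vocabulary. The special
# tokens here are hypothetical, not the actual alphabet of any real model.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")                     # 20 standard residues
SPECIAL = ["<cls>", "<pad>", "<eos>", "<unk>", "<mask>", "X"]  # invented extras

vocab = {tok: i for i, tok in enumerate(SPECIAL + AMINO_ACIDS)}

def encode(sequence):
    """Map a protein sequence to integer token IDs, one per residue."""
    return [vocab.get(res, vocab["<unk>"]) for res in sequence]

print(len(vocab))      # 26 — tiny next to a 32,000-token text vocabulary
print(encode("MKT"))   # one ID per residue
```

With a per-residue alphabet this small, the embedding table is negligible, so nearly all capacity sits in the transformer layers; one consequence is that a single token carries far less information than a subword token in a text LLM, which is part of why sequence lengths and context handling matter so much for protein models.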

  • Google DeepMind CEO Says Some Form of AGI Possible in a Few Years
    1 project | /r/neoliberal | 3 May 2023
  • Can anyone suggest some 3D protein function prediction software? I was using 3DLigandSite and they’ve gone down indefinitely.
    1 project | /r/labrats | 3 Apr 2023
    What does your input data look like? If you're predicting structures of mutants where a wild-type structure is available, you can use variant-prediction tools like ESM-IF or protein language models like ESM-2.
  • RFdiffusion: Diffusion model generates protein backbones
    3 projects | news.ycombinator.com | 30 Mar 2023
    Such an explosion of protein AI lately. It’s the absolute best time to be a protein scientist with an interest in ML. Every new model type is inevitably tried out on proteins. In this case, by grad students at a very famous protein design lab (Baker Lab at University of Washington). And they usually find some interesting application. Protein design presents tons of interesting challenges.

    The very largest plain transformer models trained on protein sequences (analogous to plain text) are about 15B parameters (I am thinking of Meta AI’s ESM-2 [1]). These can do for protein sequences what LLMs do for text: they can “fill in the blank” to design variations, generate new proteins that resemble their training data (essentially all known natural protein sequences), and tell you how likely it is that a given sequence exists.

    Some cool variations of transformers have applications for protein design, like the now-famous SE(3)-equivariant transformer used in the structure prediction module of AlphaFold [2], now appearing in TFA.

    1. https://github.com/facebookresearch/esm
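To make the “fill in the blank” and likelihood ideas concrete, here is a toy sketch in plain Python. The per-position probability tables are invented for illustration; in a real workflow they would come from a model such as ESM-2 (e.g. via the fair-esm package), but the scoring logic has the same shape.

```python
import math

def score_sequence(sequence, probs):
    """Sum of log-probabilities the model assigns each residue at its
    position (a pseudo-log-likelihood). Higher = more 'natural' to the model."""
    return sum(math.log(probs[i][res]) for i, res in enumerate(sequence))

def fill_blank(position, probs):
    """'Fill in the blank': pick the residue the model ranks highest there."""
    return max(probs[position], key=probs[position].get)

# Hypothetical model output: a distribution over candidate residues per position.
probs = [
    {"M": 0.90, "L": 0.05, "V": 0.05},   # position 0: strongly prefers Met
    {"K": 0.60, "R": 0.30, "A": 0.10},   # position 1
    {"T": 0.50, "S": 0.40, "G": 0.10},   # position 2
]

print(fill_blank(1, probs))                                        # "K"
print(score_sequence("MKT", probs) > score_sequence("MAT", probs)) # True
```

Design variation works the same way in reverse: mask a position, take the model’s distribution there, and sample or rank candidate residues instead of taking the argmax.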

  • Returning to snake's nest after a long journey, any major advances in python for science ?
    7 projects | /r/Python | 24 Jan 2023
    Likewise, PyTorch is seeing a lot of scientific-ML work, in particular to do with protein design. (See e.g. ESM-2.)
Stats

Basic esm repo stats
Mentions: 5
Stars: 2,833
Activity: 4.6
Last commit: 3 months ago

facebookresearch/esm is an open source project licensed under the MIT License, an OSI-approved license.

The primary programming language of esm is Python.

