jaxtyping vs esm

| | jaxtyping | esm |
|---|---|---|
| Mentions | 7 | 5 |
| Stars | 941 | 2,833 |
| Growth | 3.9% | 3.6% |
| Activity | 8.3 | 4.6 |
| Latest commit | 13 days ago | 3 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jaxtyping
-
Writing Python like it's Rust
Try using [jaxtyping](https://github.com/google/jaxtyping).
It also supports numpy/pytorch/etc.
-
Writing Python like it’s Rust
Since you mention ML use-cases, you might like jaxtyping.
-
Scientific computing in JAX
jaxtyping: rich shape & dtype annotations for arrays and tensors (also supports PyTorch/TensorFlow/NumPy);
-
[D] Have there been any attempts to create a programming language specifically for machine learning?
Heads-up that my newer jaxtyping project now exists.
-
Returning to snake's nest after a long journey, any major advances in python for science?
As other folks have commented, type hints are now a big deal. For static typing the best checker is pyright. For runtime checking there is typeguard and beartype. These can be integrated with array libraries through jaxtyping. (Which also works for PyTorch/numpy/etc., despite the name.)
- Type annotations and runtime checking for shape and dtype
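As a concrete illustration of the shape-and-dtype annotations described above, here is a minimal sketch combining jaxtyping with beartype as the runtime checker. Recent versions of jaxtyping expose `jaxtyped(typechecker=...)`; the function and dimension names here are made up for the example.

```python
import jax.numpy as jnp
from beartype import beartype
from jaxtyping import Array, Float, jaxtyped

# Dimension names ("batch", "m", "k", "n") are symbolic: jaxtyping checks
# that every axis with the same name has the same size at call time.
@jaxtyped(typechecker=beartype)
def batched_matmul(
    x: Float[Array, "batch m k"],
    y: Float[Array, "batch k n"],
) -> Float[Array, "batch m n"]:
    return x @ y

out = batched_matmul(jnp.ones((4, 2, 3)), jnp.ones((4, 3, 5)))  # OK: (4, 2, 5)
# batched_matmul(jnp.ones((4, 2, 3)), jnp.ones((4, 7, 5)))  # raises: "k" mismatch
```

The same annotations work with PyTorch or NumPy arrays by swapping the array type in the annotation.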
esm
-
Large language models generate functional protein sequences across families
When evaluating this work, it’s important to remember that the functional labels on each of the 290 million input sequences were originally assigned by HMMs as part of the Pfam project, so the model is predicting a prediction.
Furthermore, the authors engage in a lot of human curation to ensure the sequences they generate are active. First, they pick an easy target. Second, they apply classical, by-hand bioinformatics techniques to the predicted sequences after they are generated. For example, they manually align them and select those which contain specific important amino acids at specific positions, residues present in 100% of functional proteins of that class and required for function. This is all done by a human bioinformatics expert before they test the “generated” sequences.
One other comment: in protein science, a sequence with 40% identity to another sequence is not “very different” if it is homologous. Since this model is essentially generating homologs from a particular class, it’s no surprise that, at a pairwise amino-acid level, the generated sequences have this degree of similarity. Take proteins in any functional family and compare them. They will have the same overall 3-D structure (called their “fold”) yet have pairwise sequence identities much lower than 30–40%.
Not to be negative. I really enjoyed reading this paper and I think the work is important. Some related work by Meta AI is the ESM series of models [1] trained on the same data (the UniProt dataset [2]).
One thing I wonder about is the vocabulary size of this model. The number of tokens is 26 for the 20 amino acids and some extras, whereas for an LLM like Meta’s LLaMA the vocab size is 32,000. I wonder how that changes training and inference, and how we can adapt the transformer architecture for this scenario. (A rough sketch of what such a tiny vocabulary looks like follows the references below.)
1. https://github.com/facebookresearch/esm
2. https://www.uniprot.org/help/downloads
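For intuition, here is a hypothetical character-level protein tokenizer with a 26-token vocabulary. The 20 canonical residues are certain; the six special tokens are an illustrative assumption, not the paper's actual extras.

```python
# Hypothetical protein tokenizer: 20 canonical amino acids + 6 assumed specials = 26.
CANONICAL = list("ACDEFGHIKLMNPQRSTVWY")                        # 20 residues
SPECIALS = ["<pad>", "<bos>", "<eos>", "<unk>", "<mask>", "X"]  # illustrative extras
VOCAB = {tok: i for i, tok in enumerate(SPECIALS + CANONICAL)}  # 26 entries total

def encode(seq: str) -> list[int]:
    """Map a protein sequence to token ids, one id per residue."""
    return (
        [VOCAB["<bos>"]]
        + [VOCAB.get(aa, VOCAB["<unk>"]) for aa in seq]
        + [VOCAB["<eos>"]]
    )

print(len(VOCAB))        # 26
print(encode("MKTVRQ"))  # sequences tokenize character by character
```

One consequence of a 26-token vocabulary is that the embedding table and output softmax are tiny compared to a 32,000-entry subword vocabulary, so nearly all of the parameters sit in the transformer blocks themselves.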
- Google DeepMind CEO Says Some Form of AGI Possible in a Few Years
-
Can anyone suggest some 3D protein function prediction software? I was using 3DLigandSite and they’ve gone down indefinitely.
What does your input data look like? If you're predicting structures of mutants where there's a wild-type structure available, you can use variant prediction tools like ESM-IF or some of the protein language models like ESM-2.
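To make that concrete, here is a minimal sketch of masked-marginal variant scoring with ESM-2 via the fair-esm package. The model-loading calls follow the fair-esm README; the helper function, checkpoint choice, and example sequence are illustrative assumptions.

```python
import torch
import esm  # pip install fair-esm

# Load a small ESM-2 checkpoint (name per the fair-esm README).
model, alphabet = esm.pretrained.esm2_t12_35M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

def mutation_score(seq: str, pos: int, mut: str) -> float:
    """Masked-marginal score: log p(mut) - log p(wild type) at 0-based `pos`."""
    _, _, tokens = batch_converter([("query", seq)])
    tokens[0, pos + 1] = alphabet.mask_idx  # +1 skips the BOS token
    with torch.no_grad():
        logits = model(tokens)["logits"]
    log_probs = torch.log_softmax(logits[0, pos + 1], dim=-1)
    return (log_probs[alphabet.get_idx(mut)]
            - log_probs[alphabet.get_idx(seq[pos])]).item()

# Positive scores suggest the model prefers the mutant residue at that site.
print(mutation_score("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", pos=3, mut="L"))
```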
-
RFdiffusion: Diffusion model generates protein backbones
Such an explosion of protein AI lately. It’s the absolute best time to be a protein scientist with an interest in ML. Every new model type is inevitably tried out on proteins; in this case, by grad students at a very famous protein design lab (the Baker Lab at the University of Washington). And they usually find some interesting application. Protein design presents tons of interesting challenges.
The very largest plain transformer models trained on protein sequences (analogous to plain text) are about 15B parameters; I am thinking of Meta AI’s ESM-2 [1]. These can do for protein sequences what LLMs do for text: they can “fill in the blank” to design variations, generate new proteins that look like their training data (which consists of all natural protein sequences), and tell you how likely it is that a given sequence exists. (A sketch of that last use follows the reference below.)
Some cool variations of transformers have applications for protein design, like the now-famous SE(3)-equivariant transformer used in the structure prediction module of AlphaFold [2], now appearing in TFA.
1. https://github.com/facebookresearch/esm
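As a hedged sketch of the “how likely is this sequence” use: a common recipe is to mask each position in turn and sum the log-probabilities the model assigns to the true residues, a pseudo-log-likelihood. The loading pattern mirrors the fair-esm README; the function name and checkpoint are illustrative assumptions.

```python
import torch
import esm  # pip install fair-esm

model, alphabet = esm.pretrained.esm2_t12_35M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

def pseudo_log_likelihood(seq: str) -> float:
    """Sum over positions of log p(true residue | rest of sequence)."""
    _, _, tokens = batch_converter([("query", seq)])
    total = 0.0
    for i in range(1, len(seq) + 1):  # token 0 is BOS; residues start at 1
        masked = tokens.clone()
        masked[0, i] = alphabet.mask_idx
        with torch.no_grad():
            logits = model(masked)["logits"]
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[tokens[0, i]].item()
    return total  # higher (closer to 0) = model finds the sequence more plausible

print(pseudo_log_likelihood("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```

Note this runs one forward pass per residue, so for long sequences you would batch the masked copies in practice.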
-
Returning to snake's nest after a long journey, any major advances in python for science?
Likewise, PyTorch is seeing a lot of SciML work, in particular to do with protein design (see e.g. ESM-2).
What are some alternatives?
torchtyping - Type annotations and dynamic checking for a tensor's shape, dtype, names, etc.
progen - Official release of the ProGen models
MindsDB - The platform for customizing AI from enterprise data
beartype - Unbearably fast near-real-time hybrid runtime-static type-checking in pure Python.
diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/
typeguard - Run-time type checker for Python
plum - Multiple dispatch in Python
pyright - Static Type Checker for Python
madtypes - Python types that raise TypeError at runtime
RFdiffusion - Code for running RFdiffusion
pytype - A static type analyzer for Python code
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more