| | deepchem | OpenWorm |
|---|---|---|
| Mentions | 4 | 56 |
| Stars | 5,124 | 2,256 |
| Growth | 1.8% | 0.2% |
| Activity | 9.9 | 6.1 |
| Latest commit | 5 days ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deepchem
-
Query
You can see a number of splitter implementations from the DeepChem package here. The code is quite elaborate but for your case it may not need to be. Still, if you can use their package for splitting that would be easiest.
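If pulling in DeepChem just for splitting is overkill, the core idea behind its simpler splitters is small. Here is a minimal sketch of a random train/valid/test index split (a hypothetical standalone helper, not DeepChem's actual API; DeepChem's scaffold and stratified splitters add chemistry-aware logic on top of this):

```python
import random

def random_split(n_samples, frac_train=0.8, frac_valid=0.1, seed=0):
    """Shuffle indices and cut them into train/valid/test slices.
    Whatever isn't claimed by train or valid becomes the test set."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)
    n_train = int(frac_train * n_samples)
    n_valid = int(frac_valid * n_samples)
    train = indices[:n_train]
    valid = indices[n_train:n_train + n_valid]
    test = indices[n_train + n_valid:]
    return train, valid, test
```

You would then index your dataset with the returned lists. For molecular data, a random split like this can leak near-duplicate structures across sets, which is exactly the problem DeepChem's scaffold splitter exists to avoid.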
- Deepchem – Democratizing Deep-Learning for Drug Discovery
- Deepchem dataset load_tox21()
- How do I transition into bioinformatics from a senior software engineer (14 years of experience)?
OpenWorm
-
The baffling intelligence of a single cell: The story of E. coli chemotaxis
So I have three thoughts about this.
The first is cell specialization, particularly neurons. It seems like nature really came up with a universal neuron. There aren't separate neurons for eyesight vs. thinking, etc. They've experimented with this on frogs, where they've rewired the optic nerve to a different part of the brain and the frog seems to see just fine. They've even added an eye and the frog seems to cope and use it just fine.
The second is the OpenWorm project [1]. This is an attempt to simulate a relatively simple organism with 302 neurons. Despite lots of effort, the simulated version just doesn't match up to the real thing. In artificial neural networks we have a stupidly simplified model of neurons that tends to get reduced to a weighted signal and an activation function. This can do a lot, but it's clearly wholly inadequate for any realistic modelling. The protein interactions in a cell are mind-bogglingly complex.
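To make the contrast concrete, here is essentially the entire "neuron" used in a typical artificial network: a weighted sum pushed through an activation function (a generic sketch, not any particular framework's implementation):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The standard ANN abstraction: weighted sum plus sigmoid activation.
    Everything a biological neuron does with dendritic computation,
    protein signalling, and neuromodulators is collapsed into these lines."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the sum into (0, 1)
```

That this one-liner powers so much of modern ML, yet falls so far short of a real C. elegans neuron, is the point of the comment above.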
The third is the three-body problem. To summarize, we have a general solution for the gravity interactions of two bodies. Add one more and we don't. We have classes of solutions but no general solution. This is why JPL needs to use supercomputers to calculate flight plans with a relatively low number of bodies. We see a relatively simple set of interactions lead to massive complexity with protein folding. I imagine that it just won't be computationally viable to simulate even a single realistic cell given all the interactions that go on. We're simply left to make estimations.
[1]: https://openworm.org/
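The cost argument shows up directly in code: a direct-summation gravity step touches every pair of bodies, so the work per timestep grows as O(n²). A toy 2D sketch with unit masses and G = 1 (nothing like how JPL actually computes trajectories, which use far more sophisticated integrators):

```python
def gravity_accelerations(positions):
    """Direct-sum accelerations for unit-mass bodies with G = 1.
    positions is a list of [x, y] pairs. The nested loop over all
    pairs is why the cost explodes as bodies are added."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy
            inv_r3 = r2 ** -1.5  # 1/r^3 factor of Newtonian gravity
            acc[i][0] += dx * inv_r3
            acc[i][1] += dy * inv_r3
    return acc
```

For gravity there are clever approximations (Barnes-Hut, fast multipole) that tame the pairwise blow-up; the worry in the comment above is that a realistic cell has vastly more interacting parts and no comparably forgiving structure.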
-
Money Bubble
> And this will not just be in government, it will be everywhere. The scariest part is that as people start to spend less time developing a skill set, and instead deferring to AI answers, you will cross a point where this problem can't be fixed (because nobody has the skills to fix it and the AI is trained on the outputs of previous generations of humans).
I think that would require AI development to approximately halt at close to the current level for over a lifetime.
Conditional on development halting, I'd agree with you. By analogy, there's this single, very useful, very powerful set of "hidden methods that can be used to win all games, get rich, find love, determine the limits of thought itself!" — mathematics[0]. Do people like learning it? They do not. Calculator much easier. What a calculator does is none of that; calculators are merely arithmetic, but most people can't tell the difference between mathematics and arithmetic.
I think LLMs have the same effect on anything that can be expressed in words, and all the various image generator models have this effect on graphical arts. One must be extremely motivated to get past the "but the computer is better than me" hump.
However, I don't expect AI development to even approximately halt at anything close to the current level. There's a lot of room for self-play in domains like maths and computing where the proofs can be verified, and probably a lot of room for anything that can be RLHF'd, too. And that's also assuming we don't get any brain uploads; regardless of the question of "is such an upload of a human capable of consciousness", which absolutely matters, an upload may still be relevant to the economics of AI, depending on the cost of running one, which in turn depends on details I can't even begin to guess at at this point (last I heard, https://openworm.org was not actually measuring synaptic weights directly, but rather neural activity? I may be out of date, not my field).
Whatever happens, however good it does or doesn't get, I do expect something to go very weird before I reach the current state pension age — close enough that, if that something is "the machines break" or "society breaks", then there will still be plenty who remember the before times.
[0] https://www.smbc-comics.com/comic/secrets-2
-
Mind-reading devices are revealing the brain's secrets
Consider the case of a computer simulation of a worm [0]
If your simulation predicts the worm's behavior up to some tolerance, you then laugh at its supposed free will
There are some outside conditions that can't be controlled (the simulator is itself a subset of the same universe) and may deterministically affect the worm's behavior
To fully account for these factors and their corresponding deterministic chain, the simulation must grow more and more complex
Maybe free will could instead be thought of as a determinism ratio that takes compute limits into account
I suspect a logical contradiction arises if you presume that a subset of the universe can simulate the whole universe
If the simulation has to be limited, some "wiggle room" of free will must be granted
Considering the known vastness of the universe, I might as well wiggle
[0] https://openworm.org/
- Thousands of AI Authors on the Future of AI
- Openworm – a biological simulation of a worm with 302 neurons
-
Discussion Thread
this is important
-
If you were to try and upload a human mind to a computer. How would you do it?
There is a complete, precise connectome for C. elegans, a roundworm with a central nervous system of 302 neurons and about 7k synapses. You can simulate the worm on your computer right now. More invertebrate connectomes have been mapped since then. It's been difficult to tell, from these animals, whether the connectome is the be-all-end-all of CNS function (SciAm 2012 -- I'd like to find an updated source on the discussion).
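"Simulate the worm" means, at its very crudest, iterating activity over the connectome's adjacency matrix. Here is a toy firing-rate update over a made-up 3-neuron wiring (an illustrative sketch only; OpenWorm's actual models, e.g. c302, use detailed biophysical neuron and muscle dynamics, and the weights below are invented):

```python
import math

def step(rates, weights, tau=0.5):
    """One discrete update of a firing-rate model: each neuron relaxes
    toward a squashed weighted sum of its inputs. weights[i][j] is the
    (hypothetical) synaptic strength from neuron j onto neuron i."""
    new_rates = []
    for i, row in enumerate(weights):
        drive = sum(w * r for w, r in zip(row, rates))
        target = math.tanh(drive)            # squash drive into (-1, 1)
        new_rates.append(rates[i] + tau * (target - rates[i]))
    return new_rates

# Hypothetical 3-neuron ring: 0 excites 1, 1 excites 2, 2 inhibits 0.
W = [[0.0, 0.0, -1.2],
     [1.2, 0.0, 0.0],
     [0.0, 1.2, 0.0]]
rates = [1.0, 0.0, 0.0]
for _ in range(50):
    rates = step(rates, W)
```

The open question raised above is precisely whether this level of description — who connects to whom, and how strongly — is enough, or whether CNS function depends on dynamics the connectome alone doesn't capture.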
What are some alternatives?
torchdrug - A powerful and flexible machine learning platform for drug discovery
unknown-horizons - Unknown Horizons official code repository
deepqmc - Deep learning quantum Monte Carlo for electrons in real space
seagull - A Python Library for Conway's Game of Life
bidd-molmap - MolMapNet: An Efficient ConvNet with Knowledge-based Molecular Representations for Molecular Deep Learning
bindsnet - Simulation of spiking neural networks (SNNs) using PyTorch.
chemicalx - A PyTorch and TorchDrug based deep learning library for drug pair scoring. (KDD 2022)
Life-Simulator1 - A life simulator in Python inspired by "Bitlife - Life Simulator"
pytorch_tempest - My repo for training neural nets using pytorch-lightning and hydra
CoreNeuron - Simulator optimized for large scale neural network simulations.
caer - High-performance Vision library in Python. Scale your research, not boilerplate.
c302 - The c302 framework for generating multiscale network models of C. elegans