| | bonito | transformers |
|---|---|---|
| Mentions | 8 | 176 |
| Stars | 373 | 125,369 |
| Growth (stars, month over month) | 0.0% | 1.7% |
| Activity | 7.3 | 10.0 |
| Latest commit | 5 months ago | 3 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bonito
- miRNA Detection
There is a technology called Nanopore. I've never used it myself, but the concept is to sequence samples of nucleic acid out in the field. A quick PubMed search indicated that it can detect miRNA, though perhaps with some modifications. https://nanoporetech.com Best of luck with it!
- Oxford University Press’s new logo is unfathomably bad
> An unpopular opinion: there are too many other logos that look like the original logo
There are also too many logos that look like the new logo. The first blue circle logos that spring to mind are Blue Circle Cement/Tarmac/Lafarge [1] and Oxford Nanopore [2].
[1]: https://en.wikipedia.org/wiki/Tarmac_(company)
[2]: https://nanoporetech.com/
- PORTABLE DNA SEQUENCER!!!!
- Ask HN: Who is hiring? (May 2022)
Oxford Nanopore Technologies (https://nanoporetech.com/) | Front end developer | Full-time | Oxford | Remote (UK)
Oxford Nanopore Technologies is headquartered at the Oxford Science Park outside Oxford, UK, with satellite offices and commercial presence in many global locations across the US, APAC and Europe. Our DNA/RNA sequencing platform is the only technology that offers real-time analysis (for rapid insights), in fully scalable formats from pocket to population scale. Our goal is to enable the analysis of any living thing, by anyone, anywhere.
Tech stack: Electron, Stencil, React, TypeScript, RxJS, gRPC
For more details, please email: [email protected]
- The first complete human genome opens a new era in science
- Buying artificial membranes
OK, this isn't really my area, but I know there are labs/companies performing these kinds of electrical-current-disturbance measurements on membrane-type proteins, for both DNA sequencing (https://nanoporetech.com/) and protein sequencing (https://www.nature.com/articles/s41587-019-0401-y).
- ELI5: Why home blood tests do not exist, while we can measure our sugar levels with personal devices at home?
Now nanopore sequencing is solid state and gets much longer reads. https://nanoporetech.com/ and https://en.wikipedia.org/wiki/Nanopore_sequencing
- Raw nanowire sequencer data
Also, best of luck with the basecaller. The latest Guppy versions are very good, both in accuracy and speed: they are GPU accelerated and the most accurate I've seen. You may also be interested in Bonito, a tool for training your own GPU basecalling model or tweaking an existing model to fit your data. https://github.com/nanoporetech/bonito.
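As a rough sketch of what a Bonito run looks like when driven from Python (Bonito is normally invoked directly on the command line; the model name and read directory below are illustrative placeholders, not a verified recipe):

```python
import subprocess

# Minimal sketch, assuming bonito is pip-installed and a CUDA GPU is
# available. Bonito writes basecalls to stdout; model name and paths
# are illustrative placeholders.
with open("basecalls.bam", "wb") as out:
    subprocess.run(
        ["bonito", "basecaller", "dna_r10.4.1_e8.2_400bps_hac@v4.1.0", "/data/reads"],
        stdout=out,
        check=True,
    )
```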
transformers
- AI enthusiasm #9 - A multilingual chatbot📣🈸
transformers is a package by Hugging Face that helps you interact with models on the HF Hub (GitHub)
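A minimal sketch of that interaction (distilgpt2 is just an illustrative small checkpoint; any text-generation model on the Hub would do):

```python
from transformers import pipeline

# Downloads the checkpoint from the HF Hub on first use, then runs locally.
generator = pipeline("text-generation", model="distilgpt2")
print(generator("Hello, I am a multilingual chatbot and", max_new_tokens=20)[0]["generated_text"])
```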
- Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) and dm-haiku (https://github.com/google-deepmind/dm-haiku) are some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
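A minimal sketch of switching it on in a recent transformers version (gpt2 is just an illustrative checkpoint; the method only pays off when the continuation repeats spans already present in the context):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("def add(a, b):\n    return a + b\n\ndef add_three(a, b, c):\n", return_tensors="pt")
# prompt_lookup_num_tokens enables prompt lookup decoding: candidate tokens
# are proposed from ngram matches in the existing context rather than by a
# separate draft model, then verified by the main model.
out = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```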
- AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + fast.ai's Ranger21 optimizer was the go-to recipe for CNNs and worked fabulously well, but on transformers the learning-rate range finder said 1e-3 was best, while 1e-5 actually worked better. The 1-cycle learning-rate schedule, however, stuck. https://github.com/huggingface/transformers/issues/16013
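For reference, the 1-cycle policy referred to there is available in stock PyTorch; a minimal sketch (the toy model and step count are illustrative, not the commenter's setup):

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
# OneCycleLR ramps the learning rate up to max_lr and back down again
# over total_steps; this is the "1 cycle" schedule mentioned above.
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-3, total_steps=100)

for _ in range(100):
    opt.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()  # dummy loss on random data
    loss.backward()
    opt.step()
    sched.step()
```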
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral MoE
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
Reproducing the training pipeline is another matter: you couldn't do that even if you wanted to, because you don't have access to thousands of A100s.
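For tinkering you don't even need the weights; a minimal sketch that instantiates a randomly initialized, shrunk-down Mistral from the transformers implementation (the hyperparameters are deliberately tiny and illustrative, nothing like the real 7B config):

```python
from transformers import MistralConfig, MistralForCausalLM

# Deliberately tiny config so it runs on a laptop; the released 7B model
# uses much larger values for all of these hyperparameters.
config = MistralConfig(
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=4,
    num_attention_heads=8,
    num_key_value_heads=2,  # grouped-query attention
    sliding_window=128,     # Mistral's sliding-window attention
)
model = MistralForCausalLM(config)  # random weights, nothing downloaded
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```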