| | llm.f90 | mamba |
|---|---|---|
| Mentions | 13 | 34 |
| Stars | 48 | 6,280 |
| Growth | - | 2.9% |
| Activity | 8.4 | 9.5 |
| Latest commit | about 2 months ago | 7 days ago |
| Language | Fortran | C++ |
| License | MIT License | BSD 3-Clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llm.f90
- llm.f90: LLM Inference in Fortran
-
karpathy/llm.c
I'd like to think he took the name from my llm.f90 project https://github.com/rbitr/llm.f90
It was originally based off of Karpathy's llama2.c but I renamed it when I added support for other architectures.
Probably a coincidence :)
-
Winteracter – The Fortran GUI Toolset
I'm a Fortran hobbyist. I'm working (unfortunately less frequently now) on an LLM framework in Fortran: https://github.com/rbitr/llm.f90
- Fortran implementation of phi-2 LLM
- Fortran implementation of phi-2 language model
-
TinyLlama: An Open-Source Small Language Model
Also, I should promote the code I wrote for running this. It runs models in ggml format; the one I made available is an older checkpoint, though, and it's easy to convert the newer one. It's written in Fortran, but it should be easy to get gfortran if you don't have it installed.
https://github.com/rbitr/llm.f90/tree/optimize16/purefortran
- Mamba LLM Inference on CPU
-
Minimal implementation of Mamba, the new LLM architecture, in 1 file of PyTorch
The original Mamba code has a lot of speed optimizations and other machinery that make it difficult to follow at first, so this will help with learning.
I can't help but also plug my own Mamba inference implementation. https://github.com/rbitr/llm.f90/tree/master/ssm
- Mamba state-space LLM inference
-
Guide to the Mamba architecture that claims to be a replacement for Transformers
You may also be interested in https://github.com/rbitr/llm.f90/tree/master/ssm. It's my inference-only implementation of Mamba, which ends up being much simpler than the training code in the original repo.
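Inference for a state-space model like Mamba reduces to a simple per-token recurrence, which is why an inference-only port can stay much smaller than the training code. A minimal NumPy sketch of a discretized diagonal SSM scan (shapes and the selective-parameterization details are simplified for illustration; this is not the llm.f90 code):

```python
import numpy as np

def ssm_scan(x, A, B, C, dt):
    """Run a diagonal linear state-space recurrence over a sequence.

    x:  (T, D)   input sequence (T tokens, D channels)
    A:  (D, N)   diagonal state matrix entries (negative for stability)
    B:  (T, N)   per-step input projection
    C:  (T, N)   per-step output projection
    dt: (T, D)   per-step discretization step size
    """
    T, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))   # hidden state, one row of N entries per channel
    y = np.zeros((T, D))
    for t in range(T):
        # Zero-order-hold-style discretization of the continuous system
        A_bar = np.exp(dt[t][:, None] * A)        # (D, N)
        B_bar = dt[t][:, None] * B[t][None, :]    # (D, N)
        h = A_bar * h + B_bar * x[t][:, None]     # state update
        y[t] = (h * C[t][None, :]).sum(axis=1)    # readout
    return y
```

At inference time only `h` needs to be carried between tokens, so there is no attention cache and no parallel-scan machinery, which is what keeps such an implementation compact.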
mamba
-
Minimal implementation of Mamba, the new LLM architecture, in 1 file of PyTorch
>"everyone" seems to know Mamba. I never heard of Mamba
Only the people who know what Mamba is are the ones upvoting and commenting. Think of all the people who ignore it. For me, Mamba is the faster version of Conda [1], and that's why I clicked on the article.
https://github.com/mamba-org/mamba
-
Towards a New SymPy
Yes, this is a big disadvantage. But have you tried Mamba, which aims to reimplement conda more efficiently? It works really well in most cases.
https://mamba.readthedocs.io/
-
Why are the bioconda bioconductor packages so slow to update?
Because conda is very slow at resolving dependencies. Mamba (https://github.com/mamba-org/mamba) is faster if that is your goal
-
Is pip gaining on conda for python libs?
use mamba instead
-
Real-world examples of std::expected in codebases?
We started using tl::expected in https://github.com/mamba-org/mamba/ at the beginning of this year, along with some related projects like https://github.com/mamba-org/powerloader. I don't know of many other big open-source codebases that use that specific library.
- Mamba: A Drop-In Replacement for Conda Written in C++
-
What's Great about Julia?
Great writeup. Minor comment about the portion of the post mentioning Conda being glacially slow: Mamba [1] is a much better drop-in replacement written in C++. Not only is it significantly faster, but error messages are much more sane and helpful.
That being said, I do agree that Pkg.jl is much more sleek and modern than Conda/Mamba.
[1]: https://github.com/mamba-org/mamba
- Mamba Reaches 1.0
-
Given Rust’s rapidly growing popularity and wide range of use cases, it seems almost inevitable that it will overtake Python in the near future.
I thought that python could live a little longer when I learned about mamba. But then I found out it is written in C++? Why write a package manager for a dying language in a language that is almost dead???
-
Does anyone use virtual environments (Conan's virtual env. or Conda's) for C++
Yes, I use Conda environments (actually I use Mamba to manage them now).
What are some alternatives?
rwkv.f90 - Port of the RWKV-LM model in Fortran (Back to the Future!)
miniforge - A conda-forge distribution.
neural-fortran - A parallel framework for deep learning
conda - A system-level, binary package and environment manager running on all major operating systems and platforms.
inference-engine - A deep learning library for use in high-performance computing applications in modern Fortran
pip - The Python package installer
Fortran-code-on-GitHub - Directory of Fortran codes on GitHub, arranged by topic
pyenv - Simple Python version management
fastGPT - Fast GPT-2 inference written in Fortran
conda-lock - Lightweight lockfile for conda environments
mamba-minimal - Simple, minimal implementation of the Mamba SSM in one file of PyTorch.
pyre-check - Performant type-checking for python.