| | triton | codon |
|---|---|---|
| Mentions | 30 | 34 |
| Stars | 11,054 | 13,851 |
| Growth | 4.3% | 0.6% |
| Activity | 9.9 | 7.9 |
| Latest commit | 3 days ago | 8 days ago |
| Language | C++ | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
triton
- OpenAI Triton: language and compiler for highly efficient Deep-Learning
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
There's a ton of cool opportunity in the runtime layer. I've been keeping my eye on the compiler-based approaches. From what I've gathered many of the larger "production" inference tools use compilers:
- https://github.com/openai/triton
- Core Functionality for AMD #1983
- Project name easily confused with Nvidia triton
- Nvidia's CUDA Monopoly
Does anyone have more inside knowledge from OpenAI or AMD on AMDGPU support for Triton?
I see this:
https://github.com/openai/triton/issues/1073
But it's not clear to me whether we will see AMD GPUs as first-class citizens for PyTorch in the future.
- @soumithchintala (Cofounded and lead @PyTorch at Meta) on Twitter: I'm fairly puzzled by $NVDA skyrocketing... (cont.)
- The tiny corp raised $5.1M
I thought this was a good overview of the idea Triton can circumvent the CUDA moat: https://www.semianalysis.com/p/nvidiaopenaitritonpytorch
It also looks like they added an MLIR backend to Triton, though I wonder if Mojo has advantages since it was built on MLIR from the start? https://github.com/openai/triton/pull/1004
- Anyone hosting a local LLM server
I'm pretty happy with the setup, because it allows me to keep all the AI stuff and its dozens of conda envs and repos etc. separate from my normal setup and "portable". It may have some performance impact (although I don't personally notice any significant difference to running it "natively" on Windows), and it may enable some extra functionality, such as access to OpenAI's Triton etc., but that's currently neither here nor there.
- Triton: Runtime for highly efficient custom Deep-Learning primitives
- Mojo – a new programming language for all AI developers
Very cool development. There is too much busy work going from development to test to production. This will help to unify everything. OpenAI Triton https://github.com/openai/triton/ is going for a similar goal. But this is a more fundamental approach.
codon
- Should I Open Source my Company?
https://github.com/exaloop/codon/blob/develop/LICENSE
Here are some others: https://github.com/search?q=%22Business+Source+License%22+%2...
- Python running on the Dart VM?
I found at least one project that managed to compile python AOT to LLVM https://github.com/exaloop/codon. Even if LLVM is more expressive than Dart Kernel, that should at least be some evidence that this might not be too impractical.
- Codon: Python Compiler
Their fannkuch benchmark seems to be a bit dishonest. They claim an enormous perf delta on https://exaloop.io/benchmarks.html but fannkuch uses factorial a lot and they define factorial with a very small (n=20) table: https://github.com/exaloop/codon/blob/fb461371613049539654c1...
Disclaimer: I've worked on several Python runtimes and compilers, but I'm not by any means out to get Codon. Just happened across this by accident while looking at their inline LLVM, which is neat.
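To illustrate why a table capped at n = 20 matters: 20! is the largest factorial that fits in a signed 64-bit integer, so a compiler that special-cases that range can use fixed-width arithmetic where CPython would fall back to arbitrary-precision integers. A minimal sketch of the idea (the table size and fallback are assumptions drawn from the comment above, not Codon's actual source):

```python
import math

# Precompute factorials up to 20, the largest n whose factorial
# still fits in a signed 64-bit integer (20! < 2**63 <= 21!).
FACT_TABLE = [math.factorial(n) for n in range(21)]

def fact(n: int) -> int:
    """Table lookup for small n; full computation otherwise."""
    if 0 <= n < len(FACT_TABLE):
        return FACT_TABLE[n]
    return math.factorial(n)  # arbitrary-precision fallback

# The int64 boundary that makes a fixed table benchmark so
# differently from CPython's big-int arithmetic:
print(FACT_TABLE[20] < 2**63)       # True
print(math.factorial(21) >= 2**63)  # True
```

In a benchmark like fannkuch that hits factorial in a hot loop, this boundary is exactly where the comparison against CPython stops being apples-to-apples.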
- The father of Swift made another baby: Mojo: looks to be based on Python using MLIR
If you literally want Python, but compiled ... Look at Codon: https://github.com/exaloop/codon
- Mojo – a new programming language for all AI developers
Another "Python with high-performance compiled builds" would be https://github.com/exaloop/codon.
- MIT Turbocharges Python’s Notoriously Slow Compiler
This is the project being discussed: https://github.com/exaloop/codon
- Is there a way to turn a project into a single executable file that doesn't require anyone to do anything like install Python before using it?
Try Codon? https://github.com/exaloop/codon
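A sketch of what that workflow looks like: Codon compiles a plain Python source file ahead of time, so a script like the one below (a generic example file, not taken from Codon's docs) can be built into a standalone binary with the `codon` CLI, assuming it is installed.

```python
# fib.py -- ordinary Python: runs under CPython and compiles with Codon.
def fib(n: int) -> int:
    """Iterative Fibonacci with fib(0) == 0."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    print(fib(30))  # 832040
```

With Codon installed, `codon build -release -exe fib.py` produces a native executable with no Python runtime dependency; the same file still runs unchanged under `python3 fib.py`.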
- Since when did Python haters spread everywhere? Maybe DNF5 would be faster because it ditched Python, maybe.
- Budget HomeLab converted to endless money-pit
https://github.com/exaloop/codon might save you from the rewrite.
- What are your thoughts on the Codon compiler having a paid licence?
What are some alternatives?
cuda-python - CUDA Python Low-level Bindings
Nuitka - Nuitka is a Python compiler written in Python. It's fully compatible with Python 2.6, 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, and 3.11. You feed it your Python app, it does a lot of clever things, and spits out an executable or extension module.
Halide - a language for fast, portable data-parallel computation
Numba - NumPy aware dynamic Python compiler using LLVM
GPU-Puzzles - Solve puzzles. Learn CUDA.
Cython - The most widely used Python to C compiler
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
taichi - Productive, portable, and performant GPU programming in Python.
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
julia - The Julia Programming Language
cutlass - CUDA Templates for Linear Algebra Subroutines
Nim - Nim is a statically typed compiled systems programming language. It combines successful concepts from mature languages like Python, Ada and Modula. Its design focuses on efficiency, expressiveness, and elegance (in that order of priority).