awesome-taichi vs taichi_benchmark

| | awesome-taichi | taichi_benchmark |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 411 | 34 |
| Stars growth | 0.0% | - |
| Activity | 1.4 | 4.5 |
| Latest commit | about 1 year ago | about 1 year ago |
| Language | Python | |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-taichi
-
Beginner thread: Useful resources!
Awesome Taichi repo: https://github.com/taichi-dev/awesome-taichi
-
From molecular simulation to black hole rendering - Taichi-Lang makes life easier for digital content creators
To better understand the scenarios where Taichi is applied in the wild, we launched taichi-dev/awesome-taichi to collect and showcase top-notch Taichi-powered projects. Most of the examples given below are available in this repo.
-
ETH Zürich uses Taichi Lang in its Physically-based Simulation course (AS 21)
The Taichi community is active and provides a wide range of reference codes.
taichi_benchmark
- Taichi Lang: A high-performance parallel programming language embedded in Python
-
I compared the performance of numerical computations facilitated by different acceleration toolkits... And here are the results!
In fact, Taichi's compiler relies on substantial underlying engineering to achieve high performance. In the code snippet discussed in that post, the summation is an atomic operation, which cannot be parallelized naively and therefore limits throughput. The standard parallel optimization for vector summation is reduction, one of the essential techniques in parallel computing. The benchmark report concludes that, thanks to the automatic reduction optimization implemented by its compiler, Taichi achieves performance comparable to a hand-written CUB implementation and far better than Numba.
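The contrast between an atomic accumulation and a reduction can be sketched in plain Python (this is illustrative only, not Taichi's actual compiler output): the first version funnels every addition through one shared accumulator, which on a GPU forces serialized atomic operations, while the pairwise tree reduction halves the array on each pass, and all pairs within a pass are independent and could run in parallel.

```python
def atomic_style_sum(values):
    # Every addition updates one shared accumulator; in a parallel
    # kernel this pattern requires atomic operations and serializes work.
    total = 0.0
    for v in values:
        total += v
    return total

def tree_reduction_sum(values):
    # Pairwise (tree) reduction: each pass halves the array; the
    # additions within one pass are independent of each other,
    # so a parallel backend can execute them simultaneously.
    vals = list(values)
    while len(vals) > 1:
        if len(vals) % 2:            # carry an odd leftover element forward
            vals.append(0.0)
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
    return vals[0]

data = [float(i) for i in range(1, 101)]
print(atomic_style_sum(data))    # 5050.0
print(tree_reduction_sum(data))  # 5050.0
```

A reduction over n elements finishes in O(log n) parallel passes instead of n serialized atomic updates, which is the optimization the benchmark report credits Taichi's compiler with applying automatically.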
-
Accelerate Python code 100x by import taichi as ti
Sure, here are some benchmarks that could be helpful: https://github.com/taichi-dev/taichi_benchmark
-
ETH Zürich uses Taichi Lang in its Physically-based Simulation course (AS 21)
Some users are uncertain about Taichi Lang's performance relative to other frameworks such as CUDA and have asked for a comprehensive benchmark report. We published a systematic benchmark report when we released Taichi Lang v1.0.0: Taichi's performance is roughly comparable to CUDA's, but you will write far fewer lines of code with Taichi Lang!
What are some alternatives?
taichi - Productive, portable, and performant GPU programming in Python.
faster-python-with-taichi
Fast-Poisson-Image-Editing - A fast Poisson image editing implementation that can utilize multi-core CPU or GPU to handle a high-resolution image input.
2d-fluid-simulator - 2D incompressible fluid solver implemented in Taichi.
Pyjion - A JIT for Python based upon CoreCLR
BlackHoleRayMarching
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
LBM_Taichi - Fluid solver based on the Lattice Boltzmann method, implemented in the Taichi programming language
BlenderPythonRenderer - A Python GPU renderer for Blender using the Taichi package
taichimd - Interactive, GPU-accelerated Molecular Dynamics using the Taichi programming language
taichi_elements - High-performance multi-material continuum physics engine in Taichi