-
It's definitely possible to focus on the CUDA/GPU side without diving deep into the math. Understanding parallel computing principles and memory optimization is key. I've found that focusing on specific use cases, like optimizing inference, can be a good way to learn. On that note, you might find https://github.com/codelion/optillm useful – it optimizes LLM inference and could give you practical experience with GPU utilization. What kind of AI applications are you most interested in optimizing?
-
May I plug in ClojureCUDA, a high-level library that lets you write CUDA with almost no overhead, interactively from the Clojure REPL.
https://github.com/uncomplicate/clojurecuda
There are also tons of free tutorials at https://dragan.rocks
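To give a feel for the workflow, here is a rough sketch of what REPL-driven CUDA looks like with ClojureCUDA, paraphrased from memory of the dragan.rocks tutorials. The function names (`init`, `device`, `context`, `in-context`, `program`, `compile!`, `module`, `function`, `mem-alloc`, `memcpy-host!`, `grid-1d`, `parameters`, `launch!`) follow my recollection of the library's API and should be checked against the official docs before use.

```clojure
;; Sketch only: API names are my recollection of clojurecuda,
;; not verified against the current release.
(require '[uncomplicate.clojurecuda.core :as cuda]
         '[uncomplicate.commons.core :refer [with-release]])

(cuda/init)  ; initialize the CUDA driver

;; The kernel itself is ordinary CUDA C, compiled at the REPL.
(def kernel-source
  "extern \"C\" __global__ void add1(int n, float *a) {
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     if (i < n) a[i] = a[i] + 1.0f;
   }")

(cuda/in-context (cuda/context (cuda/device 0))
  (with-release [prog  (cuda/compile! (cuda/program kernel-source))
                 m     (cuda/module prog)
                 add1  (cuda/function m "add1")
                 gpu-a (cuda/mem-alloc (* 256 Float/BYTES))]
    ;; Upload, launch, download -- each form re-evaluable from the REPL,
    ;; so you can tweak the kernel and rerun without restarting anything.
    (cuda/memcpy-host! (float-array (range 256)) gpu-a)
    (cuda/launch! add1 (cuda/grid-1d 256) (cuda/parameters 256 gpu-a))
    (cuda/memcpy-host! gpu-a (float-array 256))))
```

The point of the interactive style is exactly this loop: edit the kernel string, recompile, relaunch, and inspect results, all from a live REPL session.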
-
Related posts
-
LLaMA-rs: Run inference of LLaMA on CPU with Rust 🦀🦙
-
Interactive Programming for Artificial Intelligence Book Series
-
Neanderthal, Deep Diamond, and ClojureCUDA now support the latest CUDA 11.7 GPU computing platform.
-
Uncomplicate releases with better CUDA compatibility (Deep Diamond, Neanderthal, ClojureCUDA)
-
Deep Diamond 0.22.0 released