tinygrad
FizzBuzz Enterprise Edition
| | tinygrad | FizzBuzz Enterprise Edition |
|---|---|---|
| Mentions | 58 | 329 |
| Stars | 17,800 | 20,524 |
| Growth | - | 0.7% |
| Activity | 9.7 | 0.0 |
| Last commit | 10 months ago | 5 days ago |
| Language | Python | Java |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tinygrad
- tinygrad: extreme simplicity, easiest framework to add new accelerators to
- GGML – AI at the Edge
Might be a silly question but is GGML a similar/competing library to George Hotz's tinygrad [0]?
[0] https://github.com/geohot/tinygrad
- Render neural network into CUDA/HIP code
At first glance I thought it might be like tinygrad, but it looks like it has many more ops than tinygrad, though most map to underlying hardware-provided ops?
I wonder how well tinygrad's approach will work out. Op fusion sounds easy: just walk a graph, pattern-match it, and lower to hardware-provided ops?
Anyway, if anyone wants to understand the philosophy behind tinygrad, this file is a great start: https://github.com/geohot/tinygrad/blob/master/docs/abstract...
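The "walk a graph, pattern-match, lower" idea the commenter describes can be sketched in a few lines. This is not tinygrad's actual code; the node structure and the MUL+ADD → FMA rewrite rule are illustrative assumptions:

```python
# Minimal sketch of op fusion as a recursive graph walk that
# pattern-matches ADD(MUL(a, b), c) and rewrites it to FMA(a, b, c).
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                          # e.g. "MUL", "ADD", "LOAD"
    srcs: list = field(default_factory=list)

def fuse(node: Node) -> Node:
    """Walk the graph bottom-up; fuse a*b + c into a single FMA node."""
    node.srcs = [fuse(s) for s in node.srcs]
    if node.op == "ADD" and node.srcs and node.srcs[0].op == "MUL":
        mul = node.srcs[0]
        return Node("FMA", [mul.srcs[0], mul.srcs[1], node.srcs[1]])
    return node

# a*b + c lowers to one fused hardware op
a, b, c = Node("LOAD"), Node("LOAD"), Node("LOAD")
fused = fuse(Node("ADD", [Node("MUL", [a, b]), c]))
print(fused.op)  # FMA
```

A real compiler pass has to worry about shared subgraphs and which fused ops the target actually provides, but the core really is a pattern-matching graph rewrite like this.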
- llama.cpp now officially supports GPU acceleration.
There are currently at least 3 ways to run llama on m1 with GPU acceleration:
- mlc-llm (pre-built, only 1 model has been ported)
- tinygrad (very memory efficient, not that easy to integrate into other projects)
- llama-mps (original llama codebase + llama adapter support)
- George Hotz building an AMD competitor to Nvidia.
- George Hotz ROCm adventures
Hopefully we will now see full support for AMD hardware on https://github.com/geohot/tinygrad. You can read more about it on https://tinygrad.org/
- The Coming of Local LLMs
tinygrad
https://github.com/geohot/tinygrad/tree/master/accel/ane
But I have not tested it on Linux since Asahi has not yet added support.
llama.cpp runs at 18ms per token (7B) and 200ms per token (65B) without quantization.
- Everything we know about Apple's Neural Engine
- Everything we know about the Apple Neural Engine (ANE)
- How 'Open' Is OpenAI, Really?
FizzBuzz Enterprise Edition
- FizzBuzzEnterpriseEdition
- Simple Lasts Longer
That "Hello World Enterprise Edition" looks dangerously under-engineered - I could understand it! Far better to follow the best practices demonstrated in the Fizz Buzz Enterprise Edition...
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
- Writing Clean Code with FastAPI Dependency Injection
Clean code is a balancing act - you’ll want to make sure you don’t turn your codebase into something like this.
- What useful GitHub repos do you know of?
- Is it worth leaving the software industry?
This one, my friend: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition
- oopWentTooFar
amidoingitright
- 7+ layer generic architecture libraries are crying rn
- Primeagen Code Review - EnterpriseQualityCoding/FizzBuzzEnterpriseEdition: FizzBuzz Enterprise Edition is a no-nonsense implementation of FizzBuzz made by serious businessmen for serious business purposes.
- Is Enterprise code unavoidable?
It seems to me that all large software projects eventually grow into "Enterprise" code. What I mean by this is something like FizzBuzz Enterprise Edition: large codebases with many layers where Design Patterns and SOLID principles are applied vigorously.
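For a taste of what that layering looks like in miniature, here is a deliberately over-engineered FizzBuzz sketch (in Python rather than the repo's Java; all class names are invented for illustration, and the real FizzBuzzEnterpriseEdition is far more elaborate):

```python
# FizzBuzz with a strategy interface and a factory, in the spirit of
# the Enterprise Edition. All names here are invented for illustration.
from abc import ABC, abstractmethod

class RuleStrategy(ABC):
    @abstractmethod
    def applies(self, n: int) -> bool: ...
    @abstractmethod
    def render(self) -> str: ...

class DivisibilityRule(RuleStrategy):
    def __init__(self, divisor: int, word: str):
        self._divisor, self._word = divisor, word
    def applies(self, n: int) -> bool:
        return n % self._divisor == 0
    def render(self) -> str:
        return self._word

class FizzBuzzEngine:
    def __init__(self, rules: list):
        self._rules = rules
    def evaluate(self, n: int) -> str:
        out = "".join(r.render() for r in self._rules if r.applies(n))
        return out or str(n)

class FizzBuzzEngineFactory:
    @staticmethod
    def create_default_engine() -> FizzBuzzEngine:
        return FizzBuzzEngine([DivisibilityRule(3, "Fizz"),
                               DivisibilityRule(5, "Buzz")])

engine = FizzBuzzEngineFactory.create_default_engine()
print([engine.evaluate(n) for n in range(1, 16)])
```

Each individual layer is defensible (new rules without touching the engine, construction behind a factory); the joke is that for a 3-line problem the indirection costs more than it buys.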
- Java 21 makes me like Java again
???
I'll answer your question with a question: Have you seen https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris... ? :)
I'm guessing that to those of us who remember when Java came out, "FizzBuzz: EE" is what we think of when we think of Java. :P
In Java I have to type a bazillion characters to get anything done! And make all these useless directories and files and InterfaceClassFactoryProtocolStreamingSerializer BS. And worry about how that executes.
C++? No bloat*, just speed
*Yes, there's some _optional_ bloat. But compared to Java? no contest.
What are some alternatives?
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Logback - The reliable, generic, fast and flexible logging framework for Java.
llama.cpp - LLM inference in C/C++
awesome-functional-python - A curated list of awesome things related to functional programming in Python.
openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.
Simple Java Mail - Simple API, Complex Emails (Jakarta Mail smtp wrapper)
llama - Inference code for Llama models
yGuard - The open-source Java obfuscation tool working with Ant and Gradle by yWorks - the diagramming experts
tensorflow_macos - TensorFlow for macOS 11.0+ accelerated using Apple's ML Compute framework.
bitburner - Bitburner Game
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
Java-Hello-World-Enterprise-Edition