| | truffleruby | line_profiler |
|---|---|---|
| Mentions | 25 | 17 |
| Stars | 2,963 | 2,481 |
| Growth | 0.1% | 1.3% |
| Activity | 9.9 | 8.2 |
| Latest commit | 4 days ago | 7 days ago |
| Language | Ruby | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
truffleruby
- TruffleRuby 24.0.0
-
Mir: Strongly typed IR to implement fast and lightweight interpreters and JITs
I think it would be worth mentioning GraalVM and https://github.com/oracle/truffleruby in the competitors section.
-
GraalVM for JDK 21 is here
GitHub page has some info: https://github.com/oracle/truffleruby#current-status
My question is, how viable is TruffleRuby vs JRuby?
-
Making Python 100x faster with less than 100 lines of Rust
I wonder why GraalVM is not more often used for these speed critical cases: https://www.graalvm.org/python/
Is the problem the Oracle involvement? (Same for ruby https://www.graalvm.org/ruby/)
-
Ruby 3.2’s YJIT is Production-Ready
Looks like it’s still a WIP
https://github.com/oracle/truffleruby/commits?author=eregon
- Implement Pattern Matching in TruffleRuby (GSoC)
- TruffleRuby – GraalVM Community Edition 22.2.0
-
Modern programming languages require generics
This comes at the cost of boxing ints inside Integer, though.
So, if you ignore primitive types for a moment, whenever you have generics everything boils down to a single method accepting Objects and returning Objects. What the JVM does is runtime profiling of what you are actually passing to the generic method, generating optimized routines for the "best case". In theory this is the best of both worlds: in general you have a single implementation of the method (avoiding code duplication), but if you use it in a hot spot you get the optimized code.
In a way it is quite wasteful, because you throw away a lot of information at compile time just to get it back (and maybe not all of it) at runtime through profiling, but in practice it works quite well.
A side effect is that this makes the JVM a wonderful VM for running dynamic languages like Ruby and Python, because that information is _not_ there at compile time. In particular, GraalVM/Truffle exposes this functionality to dynamic language implementations, allowing very good performance (according to their website [1][2], Ruby and Python on Truffle are about 8x faster than the official implementations, and JS in line with V8).
[1] https://www.graalvm.org/ruby/
-
GraalVM 22.1: Developer experience improvements, Apple Silicon builds, and more
I opened a ticket some time ago about performance with Jekyll and liquid templates. At least in that case, yjit was way faster. I'm happy to retest though. Anything that would make my jekyll builds faster would help.
https://github.com/oracle/truffleruby/issues/2363
-
Ruby YJIT Ported to Rust
Here's a benchmark [1] done in Jan '22 against many Ruby implementations; TruffleRuby [2] seems to be way ahead in most, and at least ahead in all. Why isn't TruffleRuby talked about much here?
[1] https://eregon.me/blog/2022/01/06/benchmarking-cruby-mjit-yj...
[2] https://github.com/oracle/truffleruby
line_profiler
- Ask HN: C/C++ developer wanting to learn efficient Python
- New version of line_profiler: 4.1.0
-
Making Python 100x faster with less than 100 lines of Rust
line_profiler is the best tool for learning to write performant Python and practicing code optimization.
https://github.com/pyutils/line_profiler
You can literally see the hot spots of your code, then try different algorithms or change the whole architecture to make it faster.
For example: replace short for loops with list comprehensions, vectorize all numpy operations (vectorizing only partially does not help), use 'not any()' instead of 'all()' for boolean checks, etc.
After doing this for a couple of weeks, you can recognize most bad code patterns at a glance.
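As a minimal sketch of the kinds of rewrites described above (function names are illustrative, not from the comment):

```python
# Two equivalent ways to build a list of squares; the comprehension avoids
# the repeated `out.append` lookup and is usually faster for short loops.
def squares_loop(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def squares_comp(n):
    return [i * i for i in range(n)]

# "No element is truthy": `not any(flags)` short-circuits on the first
# truthy element instead of testing every one.
def none_set(flags):
    return not any(flags)

print(squares_loop(5) == squares_comp(5))         # True
print(none_set([0, 0, 0]), none_set([0, 1, 0]))   # True False
```

Whether each rewrite actually pays off depends on the workload, which is exactly why profiling first matters.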
-
Why is my Pubmed plant search app so slow?
You may want to try using a package like line_profiler to narrow down where the time is spent.
-
How to make nested for loops run faster
When tuning for performance, always measure. Never assume you know where the slow parts are. Run a line profiler and see where all the time is actually going.
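A hedged sketch of "measure, don't assume", using only the standard library's `timeit` on two hypothetical implementations of the same nested-loop task:

```python
import timeit

# Naive nested lookup: for each x, `x in b` scans the whole list, O(n*m).
def common_naive(a, b):
    return [x for x in a if x in b]

# Same result, but membership checks against a set are O(1) on average.
def common_set(a, b):
    bset = set(b)
    return [x for x in a if x in bset]

a = list(range(2000))
b = list(range(1000, 3000))
assert common_naive(a, b) == common_set(a, b)

# Measure instead of guessing: timeit reports seconds for `number` runs.
t_naive = timeit.timeit(lambda: common_naive(a, b), number=10)
t_set = timeit.timeit(lambda: common_set(a, b), number=10)
print(f"naive: {t_naive:.3f}s  set: {t_set:.3f}s")
```

A line profiler goes one step further than `timeit` by attributing the time to individual lines rather than whole calls.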
-
I'm working on a world map generator, but I have one function in particular that is very slow and keeping me from being able to scale my maps to as large as I'd like... is there a way that I can optimize this depth first search function, or another way of grouping contiguous cells based on criteria?
Either way I would highly recommend running a profiler on your code to see where the program is spending most of its time. line_profiler is a very nice one, as it shows you execution time for each line.
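One common fix for exactly this situation (an illustrative sketch, not the poster's code): replace recursive depth-first search with an explicit stack, which avoids Python's recursion limit and per-call overhead on large maps.

```python
def group_cells(grid, match):
    """Group contiguous cells (4-connectivity) whose value satisfies `match`."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    groups = []
    for sr in range(rows):
        for sc in range(cols):
            if seen[sr][sc] or not match(grid[sr][sc]):
                continue
            # Explicit stack instead of recursion: no recursion-depth limit,
            # and cells are marked seen when pushed, never visited twice.
            stack, group = [(sr, sc)], []
            seen[sr][sc] = True
            while stack:
                r, c = stack.pop()
                group.append((r, c))
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not seen[nr][nc] and match(grid[nr][nc])):
                        seen[nr][nc] = True
                        stack.append((nr, nc))
            groups.append(group)
    return groups

grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]
print([sorted(g) for g in group_cells(grid, lambda v: v == 1)])
# [[(0, 0), (0, 1), (1, 1)], [(2, 2)]]
```

Running a line profiler over a function like this would show whether the time goes into the neighbor loop, the `match` calls, or the bookkeeping.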
-
Is it possible to make a function to check how many lines of code have been executed in the program so far (including said function’s lines)?
There are dedicated tools like line_profiler for python - if this doesn't do exactly what you need it can be easily modified.
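For the do-it-yourself route, a minimal line counter can be built on the standard library's `sys.settrace` hook (a hypothetical helper, not a line_profiler API):

```python
import sys

class LineCounter:
    """Count line events executed in traced frames while active."""
    def __init__(self):
        self.count = 0

    def _trace(self, frame, event, arg):
        if event == "line":
            self.count += 1
        return self._trace  # keep tracing nested frames line by line

    def __enter__(self):
        sys.settrace(self._trace)
        return self

    def __exit__(self, *exc):
        sys.settrace(None)

def work():
    total = 0
    for i in range(3):
        total += i
    return total

with LineCounter() as lc:
    work()
print("lines executed:", lc.count)
```

line_profiler uses the same tracing machinery but records per-line timing as well, which is why it can be "easily modified" for tasks like this.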
-
Why does sklearn.Pipeline with regex outperform spacy for text preprocessing?
It's surprising to me that an sklearn pipeline and a spacy pipeline both doing simple regexing are vastly different in performance. I would go one layer deeper with measurement with something like line_profiler, which I've used to great effect to get line-by-line perf stats. This should illuminate why.
-
Hot profiling for Python
This looks really nice! Does it use line_profiler or is it a different implementation for the profiling? Either way the interface is fantastic!
-
Profiling and Analyzing Performance of Python Programs
```
# https://github.com/pyutils/line_profiler
pip install line_profiler
kernprof -l -v some-code.py
# This might take a while...
Wrote profile results to some-code.py.lprof
Timer unit: 1e-06 s

Total time: 13.0418 s
File: some-code.py
Function: exp at line 3

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     3                                           @profile
     4                                           def exp(x):
     5         1          4.0      4.0      0.0      getcontext().prec += 2
     6         1          0.0      0.0      0.0      i, lasts, s, fact, num = 0, 0, 1, 1, 1
     7      5818       4017.0      0.7      0.0      while s != lasts:
     8      5817       1569.0      0.3      0.0          lasts = s
     9      5817       1837.0      0.3      0.0          i += 1
    10      5817       6902.0      1.2      0.1          fact *= i
    11      5817       2604.0      0.4      0.0          num *= x
    12      5817   13024902.0   2239.1     99.9          s += num / fact
    13         1          5.0      5.0      0.0      getcontext().prec -= 2
    14         1          2.0      2.0      0.0      return +s
```
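The function being profiled is a Decimal-based Taylor-series `exp`. A self-contained version, with the import the snippet omits and without the `@profile` decorator that kernprof injects, looks like this; the `s += num / fact` line dominates because each Decimal division runs at the full working precision:

```python
from decimal import Decimal, getcontext

def exp(x):
    """Taylor-series e**x with Decimal, as in the profile output above."""
    getcontext().prec += 2        # extra working precision for the sum
    i, lasts, s, fact, num = 0, 0, 1, 1, 1
    while s != lasts:             # stop once adding a term changes nothing
        lasts = s
        i += 1
        fact *= i
        num *= x
        s += num / fact           # the 99.9% line: Decimal division
    getcontext().prec -= 2
    return +s                     # unary + rounds back to the caller's precision

print(exp(Decimal(1)))  # ≈ 2.718281828...
```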
What are some alternatives?
JRuby - JRuby, an implementation of Ruby on the JVM
SnakeViz - An in-browser Python profile viewer
artichoke - 💎 Artichoke is a Ruby made with Rust
memory_profiler - Monitor Memory usage of Python code
graalpython - A Python 3 implementation built on GraalVM
reloadium - Hot Reloading and Profiling for Python
ruby-packer - Packing your Ruby application into a single executable.
pprofile - Line-granularity, thread-aware deterministic and statistic pure-python profiler
graaljs - An ECMAScript 2023 compliant JavaScript implementation built on GraalVM, with polyglot language interoperability support. Runs Node.js applications!
psutil - Cross-platform lib for process and system monitoring in Python
clj-kondo - Static analyzer and linter for Clojure code that sparks joy
prometeo - An experimental Python-to-C transpiler and domain specific language for embedded high-performance computing