austin vs line_profiler

| | austin | line_profiler |
|---|---|---|
| Mentions | 12 | 17 |
| Stars | 1,362 | 2,481 |
| Growth | - | 1.3% |
| Activity | 7.2 | 8.2 |
| Last commit | 24 days ago | 5 days ago |
| Language | C | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
austin
- Memray – A Memory Profiler for Python
I collected a list of profilers (also memory profilers, also specifically for Python) here: https://github.com/albertz/wiki/blob/master/profiling.md
Currently I actually need a Python memory profiler, because I want to figure out whether there is some memory leak in my application (PyTorch based training script), and where exactly (in this case, it's not a problem of GPU memory, but CPU memory).
I tried Scalene (https://github.com/plasma-umass/scalene), which seems to be powerful, but somehow the output it gives me is not useful at all? It doesn't really give me a flamegraph, or a list of the top lines with memory allocations, but instead it gives me a listing of all source code lines, and prints some (very sparse) information on each line. So I need to search through that listing now by hand to find the spots? Maybe I just don't know how to use it properly.
I tried Memray, but first ran into an issue (https://github.com/bloomberg/memray/issues/212); after using a workaround, it works now. I get a flamegraph out, but it doesn't really seem accurate? After a while, there don't seem to be any new memory allocations at all anymore, and I don't quite trust that this is correct.
There is also Austin (https://github.com/P403n1x87/austin), which I also wanted to try (have not yet).
Somehow this experience so far was very disappointing.
(Side note: I debugged some very strange memory allocation behavior of Python before, where all local variables were kept around after an exception, even though I made sure there was no reference anymore to the exception object, to the traceback, etc., and I even called frame.clear() for all frames to really clear them. It turns out frame.f_locals creates another copy of all the local variables, and the exception object and all the locals in the other frame stay alive until you access frame.f_locals again. At that point, it syncs f_locals again with the real (fast) locals, and then it can finally free everything. It was quite annoying to find the source of this problem and to find workarounds for it. https://github.com/python/cpython/issues/113939)
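A minimal sketch of that f_locals behavior (the Big class and the variable names are just for illustration; this reflects CPython 3.12 and earlier, before PEP 667 changed the f_locals semantics in 3.13):

```python
import sys
import weakref

class Big:
    pass  # stand-in for a large object we would like to see freed

def demo():
    obj = Big()
    ref = weakref.ref(obj)
    frame = sys._getframe()
    frame.f_locals                # first access copies the fast locals into the frame's locals dict
    del obj                       # drop the only "real" reference...
    print(ref() is not None)      # True: the stale f_locals copy still keeps the object alive
    frame.f_locals                # accessing f_locals again re-syncs it with the fast locals...
    print(ref() is not None)      # False: the stale entry is removed and the object is finally freed

demo()
```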
- Pystack: Like Pstack but for Python
- High performance profiling for Python 3.11
- What are my Python processes at?
- tqdm (Python)
Just wanted to add Austin: Python frame stack sampler for CPython written in pure C (https://github.com/P403n1x87/austin)
- Pyheatmagic: Profile and view your Python code as a heat map
- Spy on Python down to the Linux kernel level
If you follow the call stack carefully you should be able to get to the point where sklearn calls ddot_kernel_8 (indirectly in this case). Austin(p) reports source files as well, so that shouldn't be a problem (provided all the debug symbols are available). If you're collecting data with austinp, don't forget to resolve symbol names with the resolve.py utility (https://github.com/P403n1x87/austin/blob/devel/utils/resolve..., see the README for more details: https://github.com/P403n1x87/austin/blob/devel/utils/resolve...)
- (How to) profile python code?
- Spy on the Python garbage collector with Austin 3.1
- Austin 3: 0-instrumentation, 0-impact Python CPU/wall time and memory profiling
line_profiler
- Ask HN: C/C++ developer wanting to learn efficient Python
- New version of line_profiler: 4.1.0
- Making Python 100x faster with less than 100 lines of Rust
line_profiler is the best tool for learning how to write performant Python and how to optimize code.
https://github.com/pyutils/line_profiler
You can literally see the hot spot of your code, then you can grind different algorithms or change the whole architecture to make it faster.
For example: replace short for loops with list comprehensions, vectorize all numpy operations (vectorizing only partially doesn't help), use 'not any()' instead of 'all()' for boolean checks, etc.
After doing this for a couple of weeks, you can basically recognize most bad code patterns at a glance.
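A minimal sketch of that workflow using line_profiler's programmatic LineProfiler API (the toy functions are made up; the kernprof command-line flow produces the same kind of per-line table, as shown further below):

```python
from line_profiler import LineProfiler

def squares_loop(n):
    result = []
    for i in range(n):
        result.append(i * i)      # the per-line table should show this append as the hot spot
    return result

def squares_listcomp(n):
    return [i * i for i in range(n)]   # the list-comprehension rewrite suggested above

lp = LineProfiler()
profiled = lp(squares_loop)      # wrapping a function makes line_profiler time each of its lines
profiled(1_000_000)
lp.print_stats()                 # prints a Hits / Time / % Time breakdown per line
```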
- Why is my Pubmed plant search app so slow?
You may want to try using a package like line_profiler to narrow down where the time is spent.
- How to make nested for loops run faster
When tuning for performance, always measure. Never assume you know where the slow parts are. Run a line profiler and see where all the time is actually going.
- I'm working on a world map generator, but I have one function in particular that is very slow and is keeping me from scaling my maps as large as I'd like... is there a way to optimize this depth-first search function, or another way of grouping contiguous cells based on criteria?
Either way I would highly recommend running a profiler on your code to see where the program is spending most of its time. line_profiler is a very nice one, as it shows you execution time for each line.
- Is it possible to make a function to check how many lines of code have been executed in the program so far (including said function's lines)?
There are dedicated tools like line_profiler for Python; if this doesn't do exactly what you need, it can easily be modified.
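If rolling your own counter is easier than modifying line_profiler, here is a minimal sketch of the underlying idea using sys.settrace (names are illustrative; real tools hook this in C for speed):

```python
import sys

executed_lines = 0

def _tracer(frame, event, arg):
    # Called on every new frame ('call') and, because it returns itself,
    # on every line executed inside that frame ('line').
    global executed_lines
    if event == "line":
        executed_lines += 1
    return _tracer

def work():
    total = 0
    for i in range(5):
        total += i * i
    return total

sys.settrace(_tracer)            # count lines in every frame entered from here on
work()
sys.settrace(None)               # stop counting
print("lines executed:", executed_lines)
```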
- Why does sklearn.Pipeline with regex outperform spacy for text preprocessing?
It's surprising to me that an sklearn pipeline and a spacy pipeline, both doing simple regexing, are so vastly different in performance. I would go one layer deeper and measure with something like line_profiler, which I've used to great effect to get line-by-line perf stats. This should illuminate why.
- Hot profiling for Python
This looks really nice! Does it use line_profiler or is it a different implementation for the profiling? Either way the interface is fantastic!
- Profiling and Analyzing Performance of Python Programs
```
# https://github.com/pyutils/line_profiler
pip install line_profiler

kernprof -l -v some-code.py  # This might take a while...
Wrote profile results to some-code.py.lprof
Timer unit: 1e-06 s

Total time: 13.0418 s
File: some-code.py
Function: exp at line 3

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     3                                           @profile
     4                                           def exp(x):
     5         1          4.0      4.0      0.0      getcontext().prec += 2
     6         1          0.0      0.0      0.0      i, lasts, s, fact, num = 0, 0, 1, 1, 1
     7      5818       4017.0      0.7      0.0      while s != lasts:
     8      5817       1569.0      0.3      0.0          lasts = s
     9      5817       1837.0      0.3      0.0          i += 1
    10      5817       6902.0      1.2      0.1          fact *= i
    11      5817       2604.0      0.4      0.0          num *= x
    12      5817   13024902.0   2239.1     99.9          s += num / fact
    13         1          5.0      5.0      0.0      getcontext().prec -= 2
    14         1          2.0      2.0      0.0      return +s
```
What are some alternatives?
pyinstrument - 🚴 Call stack profiler for Python. Shows you why your code is slow!
SnakeViz - An in-browser Python profile viewer
memory_profiler - Monitor Memory usage of Python code
schema - Schema validation just got Pythonic
reloadium - Hot Reloading and Profiling for Python
yappi - Yet Another Python Profiler, but this time multithreading, asyncio and gevent aware.
pprofile - Line-granularity, thread-aware deterministic and statistic pure-python profiler
pystack - 🔍 🐍 Like pstack but for Python!
psutil - Cross-platform lib for process and system monitoring in Python
austin-python - Python wrapper for Austin, the CPython frame stack sampler.
prometeo - An experimental Python-to-C transpiler and domain specific language for embedded high-performance computing