| | DiskCache | austin |
|---|---|---|
| Mentions | 6 | 12 |
| Stars | 2,157 | 1,355 |
| Growth | - | - |
| Activity | 4.5 | 7.2 |
| Latest commit | 15 days ago | 22 days ago |
| Language | Python | C |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DiskCache
- This Week In Python
python-diskcache – disk-backed cache (Django-compatible). Faster than Redis and Memcached. Pure-Python
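A quick illustration of the Django-compatible part: diskcache documents a DjangoCache backend that plugs into the standard CACHES setting. A minimal sketch, where the cache directory is an arbitrary example path:

```python
# settings.py (sketch) -- route Django's default cache through diskcache.
# The LOCATION path is an arbitrary example; any writable directory works.
CACHES = {
    "default": {
        "BACKEND": "diskcache.DjangoCache",
        "LOCATION": "/var/tmp/django-diskcache",
        "TIMEOUT": 300,  # default expiry, in seconds
    }
}
```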
- Making a Password Manager, Should I Use MySQL or SQLite 3?
Based on your question about SQLite, it seems like you want to store the database locally inside the program rather than on the internet. Furthermore, your data doesn't seem to be especially relational, as far as I can tell. You might be better off using something like diskcache to store the data instead.
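To make that suggestion concrete: diskcache's Cache behaves like a persistent, thread- and process-safe dictionary backed by SQLite. A minimal sketch, where the directory name and keys are invented for the example (a real password manager would encrypt values before storing them):

```python
from diskcache import Cache

cache = Cache("./vault")  # directory is created if it doesn't exist

# Dict-style access persists across program runs.
cache["github.com"] = {"user": "alice", "secret": b"<encrypted blob>"}

# set() additionally supports per-key expiry.
cache.set("session-token", "abc123", expire=3600)

print(cache["github.com"])
print("session-token" in cache)

cache.close()
```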
- How I setup a sqlite cache in python
Give Grant some love and give grantjenks/python-diskcache a ⭐.
- What's new in Starlite 1.1
- tqdm (Python)
- Need help with an OD indexer that I am writing in Python
Do you know this project, which covers most of your needs? http://www.grantjenks.com/docs/diskcache/
austin
- Memray – A Memory Profiler for Python
I collected a list of profilers (also memory profilers, also specifically for Python) here: https://github.com/albertz/wiki/blob/master/profiling.md
Currently I actually need a Python memory profiler, because I want to figure out whether there is some memory leak in my application (a PyTorch-based training script), and where exactly (in this case it's not a problem of GPU memory, but CPU memory).
I tried Scalene (https://github.com/plasma-umass/scalene), which seems to be powerful, but somehow the output it gives me is not useful at all? It doesn't really give me a flamegraph, or a list of the top lines with memory allocations, but instead it gives me a listing of all source code lines, and prints some (very sparse) information on each line. So I need to search through that listing now by hand to find the spots? Maybe I just don't know how to use it properly.
I tried Memray and first ran into an issue (https://github.com/bloomberg/memray/issues/212), but after applying a workaround it works now. I get a flamegraph out, but it doesn't really seem accurate? After a while there don't seem to be any new memory allocations at all anymore, and I don't quite trust that this is correct.
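For reference, Memray's documented entry points are the `memray run` CLI and the `memray.Tracker` context manager; below is a minimal sketch of the latter, with a made-up workload and output file name:

```python
from memray import Tracker

def training_step():
    # Stand-in for the real workload being profiled.
    return [bytearray(1024) for _ in range(1_000)]

# Every allocation made inside the block is recorded into the capture file;
# `memray flamegraph memray-out.bin` then renders an HTML flamegraph from it.
with Tracker("memray-out.bin"):
    for _ in range(10):
        training_step()
```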
There is also Austin (https://github.com/P403n1x87/austin), which I also wanted to try (have not yet).
Somehow this experience so far was very disappointing.
(Side note: I previously debugged some very strange memory allocation behavior in Python, where all local variables were kept around after an exception, even though I made sure there was no reference left to the exception object, to the traceback, etc., and I even called frame.clear() on all frames to really clear them. It turns out frame.f_locals creates another copy of all the local variables, and the exception object and all the locals in the other frame stay alive until you access frame.f_locals again. At that point it syncs f_locals with the real (fast) locals, and then everything can finally be freed. It was quite annoying to find the source of this problem and to find workarounds for it. https://github.com/python/cpython/issues/113939)
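To spell out the mechanism the aside is about: traceback.clear_frames() calls frame.clear() on every frame in a traceback so the locals those frames pin can be collected; the gotcha reported above is that a frame whose f_locals was previously read keeps a stale copy of its locals alive until f_locals is accessed again. A minimal sketch of the basic cleanup pattern, with an invented helper and an oversized local:

```python
import sys
import traceback

def handler():
    big = [0] * 10_000_000  # large local we don't want kept alive
    raise RuntimeError("boom")

try:
    handler()
except RuntimeError:
    _, _, tb = sys.exc_info()
    # Drop the locals held by every frame in the traceback (calls
    # frame.clear() on each), so `big` becomes collectable right away.
    traceback.clear_frames(tb)
    del tb
```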
- Pystack: Like Pstack but for Python
- High performance profiling for Python 3.11
- What are my Python processes at?
- tqdm (Python)
Just wanted to add Austin: Python frame stack sampler for CPython written in pure C (https://github.com/P403n1x87/austin)
- Pyheatmagic: Profile and view your Python code as a heat map
- Spy on Python down to the Linux kernel level
If you follow the call stack carefully you should be able to get to the point where sklearn calls ddot_kernel_8 (indirectly in this case). Austin(p) reports source files as well, so that shouldn't be a problem (provided all the debug symbols are available). If you're collecting data with austinp, don't forget to resolve symbol names with the resolve.py utility (https://github.com/P403n1x87/austin/blob/devel/utils/resolve..., see the README for more details: https://github.com/P403n1x87/austin/blob/devel/utils/resolve...)
- (How to) profile python code?
- Spy on the Python garbage collector with Austin 3.1
- Austin 3: 0-instrumentation, 0-impact Python CPU/wall time and memory profiling
What are some alternatives?
cachetools - Extensible memoizing collections and decorators
pyinstrument - Call stack profiler for Python. Shows you why your code is slow!
Beaker - WSGI middleware for sessions and caching
SnakeViz - An in-browser Python profile viewer
flask-cache-redis - :fire: Implementation of API Caching with Flask, Redis and Docker
line_profiler - Line-by-line profiling for Python
python-diskcache - Persistent dict, backed by sqlite3 and pickle, multithread-safe.
schema - Schema validation just got Pythonic
dogpile.cache
yappi - Yet Another Python Profiler, but this time multithreading, asyncio and gevent aware.
HermesCache
pystack - Like pstack but for Python!