scalene
just
| | scalene | just |
|---|---|---|
| Mentions | 32 | 163 |
| Stars | 11,125 | 16,971 |
| Growth | 1.6% | - |
| Activity | 9.3 | 9.1 |
| Last Commit | 3 days ago | 3 days ago |
| Language | Python | Rust |
| License | Apache License 2.0 | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scalene
-
Memray β A Memory Profiler for Python
I collected a list of profilers (also memory profilers, also specifically for Python) here: https://github.com/albertz/wiki/blob/master/profiling.md
Currently I actually need a Python memory profiler, because I want to figure out whether there is some memory leak in my application (PyTorch based training script), and where exactly (in this case, it's not a problem of GPU memory, but CPU memory).
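For CPU-side leaks like this, the standard library's tracemalloc can complement a profiler by diffing snapshots taken before and after a stretch of work; a minimal sketch, where the leaking loop stands in for a real training loop:

```python
import tracemalloc

# Start tracing, keeping up to 25 frames of traceback per allocation.
tracemalloc.start(25)

baseline = tracemalloc.take_snapshot()

# Stand-in for a few training steps that leak memory; in a real script
# this would be your PyTorch training loop.
leak = []
for _ in range(1000):
    leak.append(bytearray(1024))

current = tracemalloc.take_snapshot()

# Diff the snapshots, grouped by source line, biggest growth first.
top = current.compare_to(baseline, "lineno")
for stat in top[:3]:
    print(stat)  # file:line with the largest net allocations since baseline
```

The top entries point at the exact source lines whose net allocations grew between the two snapshots, which is the "top lines with memory allocations" view described above.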
I tried Scalene (https://github.com/plasma-umass/scalene), which seems to be powerful, but somehow the output it gives me is not useful at all? It doesn't really give me a flamegraph, or a list of the top lines with memory allocations, but instead it gives me a listing of all source code lines, and prints some (very sparse) information on each line. So I need to search through that listing now by hand to find the spots? Maybe I just don't know how to use it properly.
I tried Memray, but first ran into an issue (https://github.com/bloomberg/memray/issues/212); after applying a workaround, it now works. I get a flamegraph out, but it doesn't really seem accurate: after a while, there don't seem to be any new memory allocations at all anymore, and I don't quite trust that this is correct.
There is also Austin (https://github.com/P403n1x87/austin), which I also wanted to try (have not yet).
Somehow this experience so far was very disappointing.
(Side note: I previously debugged some very strange memory-allocation behavior in Python, where all local variables were kept alive after an exception, even though I made sure there was no reference left to the exception object, the traceback, etc., and I even called frame.clear() on all frames to really clear them. It turns out that frame.f_locals creates another copy of all the local variables, and the exception object and all the locals in the other frame stay alive until you access frame.f_locals again. At that point it syncs f_locals with the real (fast) locals again, and only then can everything finally be freed. It was quite annoying to find the source of this problem and to find workarounds for it. https://github.com/python/cpython/issues/113939)
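A minimal sketch of the underlying lifetime issue (not the f_locals caching bug itself, which is version-specific): as long as anything still references the traceback, the frames it points to keep their locals alive; once the last reference is dropped, those locals become collectable. The `Payload` class and variable names here are illustrative.

```python
import gc
import weakref


class Payload:
    """Stand-in for a large object held in a local variable."""


def raises():
    payload = Payload()  # local we want freed once the frame is released
    raise ValueError("boom")


wref = None
try:
    raises()
except ValueError as exc:
    # The traceback keeps the frame of raises() alive, and that frame
    # keeps its locals (including `payload`) alive.
    frame = exc.__traceback__.tb_next.tb_frame
    wref = weakref.ref(frame.f_locals["payload"])
    del frame, exc  # drop our references to the frame and the exception

gc.collect()
print(wref() is None)
```

Once the exception and traceback are no longer referenced anywhere, the frame's locals are freed and the weak reference goes dead.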
- Scalene: A high-performance CPU, GPU, and memory profiler for Python
-
How can I find out why my python is so slow?
Use this my fren: https://github.com/plasma-umass/scalene
-
Making Python 100x faster with less than 100 lines of Rust
You should take a look at Scalene - it's even better.
-
Blog Post: Making Python 100x faster with less than 100 lines of Rust
I like seeing another Python profiler. The one I've been playing with is Scalene (GitHub). It does some fun things around letting you see how much data is moving across the system/Python memory boundary.
-
How could I improve Python's runtime?
Have you seen "Python Performance Matters" by Emery Berger (Strange Loop 2022)? It's essentially a presentation and demo of Scalene.
- Scalene: A Python CPU/GPU/memory profiler with optimization proposals
-
OpenAI might be training its AI technology to replace some software engineers, report says
I tried out some features of machine-learning models suggesting optimisations on code profiled by Scalene, and pretty much all of them would make the code less efficient, both time- and memory-wise. I am not worried; the devil is in the details, and ML will not replace all of us anytime soon.
just
-
Ask HN: What software sparks joy when using?
just - https://github.com/casey/just
-
GitHub switched to Docker Compose v2, action needed
Welp there is absolute chaos in that thread -- guess it's not an April Fools joke.
I wonder if relying on CI for anything other than provisioning machines is a mistake -- maybe we should have never moved from doing things from local scripts written in $LANGUAGE.
That said, I'm probably biased since I'm a massive fan of things like `make` and more appropriately for the current age, `just`[0]
-
Which command did you run 1731 days ago?
> When a command has some cognitive requirements I create a script with some ${1:-default} values and I store them all in $PATH enabled local/bin
I would consider using just for this:
-
Using Make β writing less Makefile
Your coworker's experience is more principled: Make is a mediocre tool for executing commands; it was never designed for that. Still, what you're describing is pretty common in projects, because Make doesn't require installing an extra dependency.
For a repo where an easy-to-install (single-binary) dependency is a non-issue, consider using just. [1] You get `just -l`, which shows all the commands available, the ability to use different languages, and overall simpler command writing.
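As a sketch, a minimal Justfile for a repo like the one described; the recipe names and commands are illustrative. Running `just -l` lists the recipes along with their doc comments:

```just
# Run the test suite
test:
    cargo test

# Deploy to the given environment (defaults to staging)
deploy env="staging":
    ./scripts/deploy.sh {{env}}

# Recipes can be written in other languages via a shebang line
stats:
    #!/usr/bin/env python3
    print("collecting stats...")
```

Default-valued parameters like `env="staging"` cover the `${1:-default}` shell-script pattern mentioned elsewhere on this page, and the shebang recipe shows the "different languages" point.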
-
Show HN: Just.sh β compiler that turns Justfiles into portable shell scripts
This is fantastic, but I'd say that this solution is somewhat in response to this open issue from 2019:
https://github.com/casey/just/issues/429
I really wish just was included as a package in distributions.
-
Sharing Saturday #496
So far, I didn't work on new features at all but on stabilizing the ground for further development:
1. The CMake lists and modules were rewritten a lot; managing builds and their configurations is now much less painful.
2. Brought in a Justfile for regular tasks, and it's great, no less.
3. Linters, formatters, and analyzers for almost all the code (except Janet for now; being a niche and young technology, it hasn't gotten enough attention yet).
4. An ECS stub. Now the runtime class doesn't look like a god object.
5. Started writing unit tests, which never happened with my personal projects before and maybe indicates how serious I am about this one :D
6. Some previously hardcoded data has been moved to INI files. Now, if I release the game in 10 years, and in 10 more years some eccentric person decides to make a variant of it, it will be slightly simpler.
-
Whatβs with DevOps engineers using `make` of all things?
i've grown to like this for my personal projects. https://github.com/casey/just
-
Show HN: Jeeves β A Pythonic Alternative to GNU Make
Reminds me of `just`. Which I love.
-
Dev Containers: Open, Develop, Repeat...
In my example above, I installed the developer tool "Just" as a Dev Container Feature. I could also install it by adding the install script to my Dockerfile, but then I would have to build my own Dockerfile and maintain that piece of code myself. The Dev Container Feature works across different architectures and base images, which makes it convenient to use.
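As a sketch, installing a tool as a Dev Container Feature is a one-line entry in devcontainer.json (which allows comments); the feature id below is a hypothetical registry path, so verify the actual one before using it:

```jsonc
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    // Hypothetical feature id for installing `just`; check the real path.
    "ghcr.io/guiyomh/features/just:0": {}
  }
}
```

The feature's install script then runs during the container build, regardless of which base image or architecture is used.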
-
Show HN: Togomak β declarative pipeline orchestrator based on HCL and Terraform
One primary design goal togomak had from the beginning was concurrency. All tasks run concurrently, unless a `depends_on` argument is mentioned. `just` didn't support that when I was initially building togomak, but there is a feature coming in soon which I am looking forward to: https://github.com/casey/just/pull/1562 .
While I was building togomak, I read through Dagger [1], Earthly [2], Concourse CI [3], Just, and Make, along with the stuff I was already working with: Jenkins, GitHub Actions, and GitLab CI. Dagger [1] is really great, and I like its design: it supports writing pipelines in Python, TypeScript, Go, and a few more languages. togomak tries to abstract away a lot of that, such as dependency management (in the case of Python, the requirement of a Python interpreter, its package managers, etc.). togomak is just a single statically-linked binary.
[1]: https://dagger.io/
What are some alternatives?
flask-profiler - a flask profiler which watches endpoint calls and tries to make some analysis.
Task - A task runner / simpler Make alternative written in Go
palanteer - Visual Python and C++ nanosecond profiler, logger, tests enabler
cargo-make - Rust task runner and build tool.
pytest-austin - Python Performance Testing with Austin
cargo-xtask
memray - Memray is a memory profiler for Python
Taskfile - Repository for the Taskfile template.
pyshader - Write modern GPU shaders in Python!
CodeLLDB - A native debugger extension for VSCode based on LLDB
viztracer - VizTracer is a low-overhead logging/debugging/profiling tool that can trace and visualize your python code execution.
cargo-release - Cargo subcommand `release`: everything about releasing a rust crate.