cargo-trace VS scalene

Compare cargo-trace vs scalene and see what their differences are.

                   cargo-trace            scalene
Mentions           1                      32
Stars              35                     11,240
Growth             -                      2.0%
Activity           10.0                   9.2
Last commit        about 3 years ago      5 days ago
Language           Rust                   Python
License            -                      Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cargo-trace

Posts with mentions or reviews of cargo-trace. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-29.
  • Dwarf-Based Stack Walking Using eBPF
    8 projects | news.ycombinator.com | 29 Nov 2022
    Are the authors here? Thanks for this! I'm always thrilled to see advances in profiling tools.

    I'm curious what they have to say about the complexity/necessity of interpreting all of DWARF. cargo-trace (a neat and conceptually similar, but I think abandoned, project) [1] says:

    > It can be empirically determined that almost all dwarf programs consist of a single instruction and use only three different instructions. rip+offset, rsp+offset or *cfa+offset, where cfa is the rsp value of the previous frame. The result of the unwinding is an array of instruction pointers.

    Do you find this to be true? Is more complex interpreting of DWARF necessary? (A conceptual sketch of these three rule shapes follows this post.)

    And in the LKML thread linked from the article, Linus is extremely pessimistic about DWARF unwinding [2], I'm sure not without justification. He's talking about kernel stacks, and I think the trade-off is different when you're trying to profile existing userspace applications and libraries, however they were compiled and implemented; but nonetheless I'm curious to hear the authors say how applicable they think his points are.

    [1] https://github.com/dvc94ch/cargo-trace

    [2] https://lkml.org/lkml/2012/2/10/356
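
As a rough illustration of the observation quoted in the post above: if almost every unwind rule really is rip+offset, rsp+offset, or *(cfa+offset), the unwinder reduces to a tiny table-driven loop instead of a full DWARF expression interpreter. The sketch below is conceptual only and is not cargo-trace's actual code; the names, the per-PC rule table, and the memory-read callback are illustrative assumptions.

    # Conceptual sketch of the "three rule shapes" observation; not cargo-trace's API.
    from typing import Callable, NamedTuple

    class Rule(NamedTuple):
        kind: str      # one of "rip+offset", "rsp+offset", "*cfa+offset"
        offset: int

    def eval_rule(rule: Rule, rip: int, rsp: int, cfa: int,
                  read_u64: Callable[[int], int]) -> int:
        if rule.kind == "rip+offset":
            return rip + rule.offset
        if rule.kind == "rsp+offset":
            return rsp + rule.offset
        if rule.kind == "*cfa+offset":
            return read_u64(cfa + rule.offset)
        raise NotImplementedError("anything else needs a real DWARF interpreter")

    def unwind(rip: int, rsp: int,
               table: dict[int, tuple[Rule, Rule]],  # pc -> (CFA rule, return-address rule);
                                                     # a real table is keyed by PC ranges
               read_u64: Callable[[int], int],
               max_frames: int = 128) -> list[int]:
        """Return the array of instruction pointers, one per frame."""
        frames = [rip]
        for _ in range(max_frames):
            entry = table.get(rip)
            if entry is None:
                break
            cfa_rule, ra_rule = entry
            cfa = eval_rule(cfa_rule, rip, rsp, 0, read_u64)   # CFA is typically rsp+offset
            rip = eval_rule(ra_rule, rip, rsp, cfa, read_u64)  # return address, typically *(cfa-8)
            rsp = cfa                                          # the caller's rsp is the CFA
            frames.append(rip)
        return frames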

scalene

Posts with mentions or reviews of scalene. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-10.
  • Memray – A Memory Profiler for Python
    10 projects | news.ycombinator.com | 10 Feb 2024
    I collected a list of profilers (also memory profilers, also specifically for Python) here: https://github.com/albertz/wiki/blob/master/profiling.md

    Currently I actually need a Python memory profiler, because I want to figure out whether there is some memory leak in my application (a PyTorch-based training script), and where exactly (in this case, it's not a problem of GPU memory, but CPU memory).

    I tried Scalene (https://github.com/plasma-umass/scalene), which seems to be powerful, but somehow the output it gives me is not useful at all? It doesn't really give me a flamegraph, or a list of the top lines with memory allocations; instead it gives me a listing of all source code lines, and prints some (very sparse) information on each line. So I need to search through that listing by hand to find the spots? Maybe I just don't know how to use it properly. (A short usage sketch follows this list of posts.)

    I tried Memray, but first ran into an issue (https://github.com/bloomberg/memray/issues/212); after using some workaround, it works now. I get a flamegraph out, but it doesn't really seem accurate? After a while, there don't seem to be any new memory allocations at all anymore, and I don't quite trust that this is correct.

    There is also Austin (https://github.com/P403n1x87/austin), which I also wanted to try (have not yet).

    Somehow this experience so far was very disappointing.

    (Side note: I debugged some very strange memory allocation behavior of Python before, where all local variables were kept around after an exception, even though I made sure there was no reference anymore to the exception object, to the traceback, etc., and I even called frame.clear() for all frames to really clear them. It turns out that frame.f_locals will create another copy of all the local variables, and the exception object and all the locals in the other frame stay alive until you access frame.f_locals again. At that point, it will sync f_locals with the real (fast) locals, and then it can finally free everything. It was quite annoying to find the source of this problem and to find workarounds for it. See https://github.com/python/cpython/issues/113939; a minimal reproduction sketch follows this list of posts.)

  • Scalene: A high-performance CPU, GPU, and memory profiler for Python
    1 project | /r/hypeurls | 18 Jun 2023
  • Scalene: A high-performance, CPU, GPU, and memory profiler for Python
    1 project | news.ycombinator.com | 18 Jun 2023
  • How can I find out why my python is so slow?
    2 projects | /r/Python | 30 May 2023
    Use this my fren: https://github.com/plasma-umass/scalene
  • Making Python 100x faster with less than 100 lines of Rust
    21 projects | news.ycombinator.com | 29 Mar 2023
    You should take a look at Scalene - it's even better.

    https://github.com/plasma-umass/scalene

  • Blog Post: Making Python 100x faster with less than 100 lines of Rust
    4 projects | /r/rust | 29 Mar 2023
    I like seeing another Python profiler. The one I've been playing with is Scalene (GitHub). It does some fun things related to letting you see how much is moving across the system/Python memory boundary.
  • How could I improve Python's run time?
    1 project | /r/programare | 14 Mar 2023
    Have you seen "Python Performance Matters" by Emery Berger (Strange Loop 2022)? It's basically a presentation and demo of Scalene.
  • Scalene - A Python CPU/GPU/memory profiler with optimization proposals
    1 project | /r/CKsTechNews | 19 Feb 2023
  • Scalene: A Python CPU/GPU/memory profiler with optimization proposals
    1 project | news.ycombinator.com | 19 Feb 2023
  • OpenAI might be training its AI technology to replace some software engineers, report says
    4 projects | /r/programming | 28 Jan 2023
    I tried out some features of machine learning models suggesting optimisations on code profiled by Scalene, and pretty much all of them would have made the code less efficient, both time- and memory-wise. I am not worried. The devil is in the details, and ML will not replace all of us anytime soon.
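
On the point in the Memray post above about Scalene's per-line listing being hard to search by hand: below is a hedged sketch of two ways to narrow the output, based on options described in Scalene's documentation. The flag names may differ between Scalene versions (check `scalene --help`), and the script name and the train_step function are made up for the example.

    # Command-line side (illustrative; verify flags with `scalene --help`):
    #   scalene --reduced-profile train.py               # report only lines with non-trivial usage
    #   scalene --html --outfile profile.html train.py   # write a browsable HTML report
    #
    # Programmatic side: limit profiling to a suspect region. Run the script with
    # profiling initially off (e.g. `scalene --off train.py`) and toggle it around
    # the code of interest.

    from scalene import scalene_profiler

    def train_step():
        ...  # hypothetical stand-in for the code suspected of leaking CPU memory

    scalene_profiler.start()   # begin profiling just before the region of interest
    for _ in range(100):
        train_step()
    scalene_profiler.stop()    # stop profiling; Scalene prints its report when the program exits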
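
And on the f_locals side note in the same post: a minimal reproduction sketch of the described behavior, assuming a CPython version where frame.f_locals is a snapshot dict rather than a live view (the linked issue, https://github.com/python/cpython/issues/113939, covers this; PEP 667 changes f_locals semantics in later releases). The Big class and the function names are invented for the example.

    import gc
    import weakref

    class Big:
        """Stand-in for a large object we expect to be freed once the exception is handled."""

    def fail():
        big = Big()
        raise ValueError("boom")

    def demo():
        try:
            fail()
        except ValueError as exc:
            frame = exc.__traceback__.tb_next.tb_frame   # the frame of fail()
            ref = weakref.ref(frame.f_locals["big"])     # f_locals materializes a copy of the locals
            del exc
            frame.clear()                                # clears only the "fast" locals
            gc.collect()
            print("after frame.clear():", ref() is not None)        # expected True on affected versions
            _ = frame.f_locals                           # re-access syncs the cached copy
            gc.collect()
            print("after re-syncing f_locals:", ref() is not None)  # expected False: finally freed

    demo()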

What are some alternatives?

When comparing cargo-trace and scalene you can also consider the following projects:

framehop - Stack unwinding library in Rust

flask-profiler - a flask profiler which watches endpoint calls and tries to make some analysis.

parca-agent - eBPF based always-on profiler auto-discovering targets in Kubernetes and systemd, zero code changes or restarts needed!

palanteer - Visual Python and C++ nanosecond profiler, logger, tests enabler

bcc - BCC - Tools for BPF-based Linux IO analysis, networking, monitoring, and more

pytest-austin - Python Performance Testing with Austin

memray - Memray is a memory profiler for Python

pyshader - Write modern GPU shaders in Python!

viztracer - VizTracer is a low-overhead logging/debugging/profiling tool that can trace and visualize your python code execution.

Dask - Parallel computing with task scheduling

Keras - Deep Learning for humans

magic-trace - magic-trace collects and displays high-resolution traces of what a process is doing