scalene VS PackageCompiler.jl

Compare scalene and PackageCompiler.jl to see how they differ.

                    scalene              PackageCompiler.jl
Mentions            32                   26
Stars               11,174               1,371
Growth              1.4%                 0.5%
Activity            9.2                  7.8
Latest commit       7 days ago           3 days ago
Language            Python               Julia
License             Apache License 2.0   MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

scalene

Posts with mentions or reviews of scalene. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-10.
  • Memray – A Memory Profiler for Python
    10 projects | news.ycombinator.com | 10 Feb 2024
    I collected a list of profilers (also memory profilers, also specifically for Python) here: https://github.com/albertz/wiki/blob/master/profiling.md

    Currently I actually need a Python memory profiler, because I want to figure out whether there is some memory leak in my application (PyTorch based training script), and where exactly (in this case, it's not a problem of GPU memory, but CPU memory).

    I tried Scalene (https://github.com/plasma-umass/scalene), which seems to be powerful, but somehow the output it gives me is not useful at all? It doesn't really give me a flamegraph, or a list of the top lines with memory allocations, but instead it gives me a listing of all source code lines, and prints some (very sparse) information on each line. So I need to search through that listing now by hand to find the spots? Maybe I just don't know how to use it properly.

    I tried Memray; I first ran into an issue (https://github.com/bloomberg/memray/issues/212), but after a workaround it works now. I get a flamegraph out, but it doesn't really seem accurate? After a while, there don't seem to be any new memory allocations at all anymore, and I don't quite trust that this is correct.

    There is also Austin (https://github.com/P403n1x87/austin), which I also wanted to try (have not yet).

    Somehow this experience so far was very disappointing.

    (Side note: I previously debugged some very strange memory-allocation behavior in Python, where all local variables were kept around after an exception, even though I made sure there was no remaining reference to the exception object, to the traceback, etc., and I even called frame.clear() on all frames to really clear them. It turns out that frame.f_locals creates another copy of all the local variables, and the exception object and all the locals in the other frame stay alive until you access frame.f_locals again. At that point it syncs f_locals with the real (fast) locals again, and then everything can finally be freed. It was quite annoying to find the source of this problem and to find workarounds for it. https://github.com/python/cpython/issues/113939)

  • Scalene: A high-performance CPU, GPU, and memory profiler for Python
    1 project | /r/hypeurls | 18 Jun 2023
  • Scalene: A high-performance, CPU, GPU, and memory profiler for Python
    1 project | news.ycombinator.com | 18 Jun 2023
  • How can I find out why my python is so slow?
    2 projects | /r/Python | 30 May 2023
    Use this my fren: https://github.com/plasma-umass/scalene
  • Making Python 100x faster with less than 100 lines of Rust
    21 projects | news.ycombinator.com | 29 Mar 2023
    You should take a look at Scalene - it's even better.

    https://github.com/plasma-umass/scalene

  • Blog Post: Making Python 100x faster with less than 100 lines of Rust
    4 projects | /r/rust | 29 Mar 2023
    I like seeing another Python profiler. The one I've been playing with is Scalene (GitHub). It does some fun things, like letting you see how much is moving across the system/Python memory boundary.
  • How could I improve Python's run time?
    1 project | /r/programare | 14 Mar 2023
    Have you seen "Python Performance Matters" by Emery Berger (Strange Loop 2022)? It's basically a presentation and demo of Scalene.
  • Scalene - A Python CPU/GPU/memory profiler with optimization proposals
    1 project | /r/CKsTechNews | 19 Feb 2023
  • Scalene: A Python CPU/GPU/memory profiler with optimization proposals
    1 project | news.ycombinator.com | 19 Feb 2023
  • OpenAI might be training its AI technology to replace some software engineers, report says
    4 projects | /r/programming | 28 Jan 2023
    I tried out some features of machine-learning models suggesting optimisations on code profiled by scalene, and pretty much all of them would make the code less efficient, both time- and memory-wise. I am not worried. The devil is in the details, and ML will not replace all of us anytime soon.

PackageCompiler.jl

Posts with mentions or reviews of PackageCompiler.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-04.
  • Potential of the Julia programming language for high energy physics computing
    10 projects | news.ycombinator.com | 4 Dec 2023
    Yes, Julia can be called from other languages rather easily: Julia functions can be exposed and called with a C-like ABI [1], and there are also various packages for languages like Python [2] or R [3] to call Julia code.

    With PackageCompiler.jl [4] you can even make AOT-compiled standalone binaries, though these are rather large. They've shrunk a fair amount in recent releases, but there's still a lot of low-hanging fruit for making the compiled binaries smaller, plus some manual work you can do, like removing LLVM and filtering stdlibs when they're not needed.

    Work is also happening on a more stable/mature system that acts like StaticCompiler.jl [5], except provided by the base language and by people who are more experienced with the compiler (i.e. not a janky prototype).

    [1] https://docs.julialang.org/en/v1/manual/embedding/

    [2] https://pypi.org/project/juliacall/

    [3] https://www.rdocumentation.org/packages/JuliaCall/

    [4] https://github.com/JuliaLang/PackageCompiler.jl

    [5] https://github.com/tshort/StaticCompiler.jl
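
    To make the C-ABI and PackageCompiler.jl route from the post above concrete, here is a minimal sketch; the package name MyLib, the exported function, and the build call are hypothetical, and a real build needs a proper package layout (Project.toml etc.):

```julia
# --- MyLib/src/MyLib.jl (hypothetical package) ---
module MyLib

# Base.@ccallable gives the function a C calling convention, so once it is
# compiled into a shared library it can be called from C, Python (ctypes), R, etc.
Base.@ccallable function mylib_mean(data::Ptr{Cdouble}, n::Cint)::Cdouble
    xs = unsafe_wrap(Array, data, Int(n))  # view the C buffer as a Julia array
    return sum(xs) / length(xs)
end

end # module

# --- build.jl, run in an environment with PackageCompiler installed ---
# using PackageCompiler
# create_library("MyLib", "MyLibCompiled")
# The output bundles the Julia runtime, which is why the binaries are "rather large".
```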

  • Strong arrows: a new approach to gradual typing
    1 project | news.ycombinator.com | 21 Sep 2023
  • Making Python 100x faster with less than 100 lines of Rust
    21 projects | news.ycombinator.com | 29 Mar 2023
    One of Julia's Achilles' heels is standalone, ahead-of-time compilation. Technically this is already possible [1], [2], but there are quite a few limitations when doing this (e.g. "Hello world" is 150 MB [7]), and it's not an easy or natural process.

    The immature AoT capabilities are a huge pain to deal with when writing large code packages or even when trying to make command line applications. Things have to be recompiled each time the Julia runtime is shut down. The current strategy in the community to get around this seems to be "keep the REPL alive as long as possible" [3][4][5][6], but this isn't a viable option for all use cases.

    Until Julia has better AoT compilation support, it's going to be very difficult to develop large scale programs with it. Version 1.9 has better support for caching compiled code, but I really wish there were better options for AoT compiling small, static, standalone executables and libraries.

    [1]: https://julialang.github.io/PackageCompiler.jl/dev/
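
    For reference, the sysimage caching mentioned above (documented in [1]) looks roughly like this; the package name, warm-up script, and output path are placeholders:

```julia
# Build a custom system image that caches compiled code for a package, so a
# fresh Julia session does not pay the JIT cost for that code again.
using PackageCompiler

create_sysimage(["DataFrames"];                          # placeholder package
                sysimage_path = "custom_sysimage.so",
                precompile_execution_file = "warmup.jl") # script exercising typical usage

# Then start Julia with the image:  julia --sysimage custom_sysimage.so
```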

  • What's Julia's biggest weakness?
    7 projects | /r/Julia | 18 Mar 2023
    Doesn’t work on Windows, but https://github.com/JuliaLang/PackageCompiler.jl does.
  • I learned 7 programming languages so you don't have to
    8 projects | news.ycombinator.com | 12 Feb 2023
    Also, you can precompile a whole package and just ship the binary. We do this all of the time.

    https://github.com/JuliaLang/PackageCompiler.jl

    And getting things precompiled: https://sciml.ai/news/2022/09/21/compile_time/
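
    A rough sketch of that precompile-and-ship workflow with PackageCompiler.jl; "MyApp" is a placeholder package that defines the julia_main() entry point create_app expects:

```julia
using PackageCompiler

# Compile the package in ./MyApp into a relocatable bundle with a bin/MyApp
# executable; the bundle also carries the Julia runtime and dependencies,
# which is what makes these binaries big.
create_app("MyApp", "MyAppCompiled";
           precompile_execution_file = joinpath("MyApp", "precompile_app.jl"),
           force = true)
```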

  • Julia performance, startup.jl, and sysimages
    3 projects | /r/Julia | 19 Nov 2022
    You can have a look at PackageCompiler.jl
  • Why Julia 2.0 isn’t coming anytime soon (and why that is a good thing)
    1 project | news.ycombinator.com | 12 Sep 2022
    I think by PackageManager here you mean PackageCompiler, and yes, these improvements do not need a 2.0. v1.8 included a few things that will, in the near future, allow building binaries without big dependencies like LLVM, and finishing this work is indeed slated for the v1.x releases. Saying "we are not doing a 2.0" is precisely saying that this is more important than things which change the user-facing language semantics.

    And TTFP (time to first plot) does need to be addressed. It's a current shortcoming of the compiler that native and LLVM code is not cached during the precompilation stages. If such code could be precompiled into binaries, startup time would drop dramatically, because a lot of package code would no longer have to be JIT-compiled. Tim Holy and Valentin Churavy gave a nice talk at JuliaCon 2022 about the current progress of making this work: https://www.youtube.com/watch?v=GnsONc9DYg0

    These issues are all tied up with startup time and are in some sense the same problem. Currently, the only way to get LLVM code cached, and thus startup time essentially eliminated, is to build it into what's called the "system image". That system image is the binary that PackageCompiler builds (https://github.com/JuliaLang/PackageCompiler.jl).

    Julia ships with a default system image that includes the standard library, in order to remove the major chunk of code that "most" libraries share, which is why all of Julia Base works without JIT lag. However, that means everyone wants to have their thing, be it sparse matrices or statistics, in the standard library so that it gets the JIT-lag-free build by default. This means the system image is huge, which is why PackageCompiler, which is simply a system for building binaries by appending package code to the system image, builds big binaries.

    What needs to happen is for packages to be able to precompile in a way that caches LLVM and native code. Then there is no major compile-time advantage to being in the system image, which will allow things to be pulled out of the system image for a leaner Julia Base build without major drawbacks, which would then help make the system compile. That will then make it so that an LLVM and BLAS build does not have to be in every binary (which is what takes up most of the space and RAM), which would then allow Julia to much more comfortably move beyond the niche of scientific computing.
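
    To illustrate the trade-off described above, PackageCompiler.jl exposes options for building a non-incremental, stdlib-filtered system image; the package name and output path here are placeholders:

```julia
using PackageCompiler

# incremental = false builds a fresh system image instead of appending to the
# default one; filter_stdlibs = true leaves out standard libraries the package
# does not use. This trims the image, but LLVM and friends still make it large.
create_sysimage(["MyPkg"];
                sysimage_path = "lean_sysimage.so",
                incremental = false,
                filter_stdlibs = true)
```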

  • Is it possible to create a Python package with Julia and publish it on PyPi?
    6 projects | /r/Julia | 23 Apr 2022
  • GenieFramework – Web Development with Julia
    4 projects | news.ycombinator.com | 6 Apr 2022
  • Julia for health physics/radiation detection
    3 projects | /r/Julia | 9 Mar 2022
    You're probably dancing around the edges of what [PackageCompiler.jl](https://github.com/JuliaLang/PackageCompiler.jl) is capable of targeting. There are a few new capabilities coming online, namely [separating codegen from runtime](https://github.com/JuliaLang/julia/pull/41936) and [compiling small static binaries](https://github.com/tshort/StaticCompiler.jl), but you're likely to hit some snags on the bleeding edge.

What are some alternatives?

When comparing scalene and PackageCompiler.jl you can also consider the following projects:

flask-profiler - a flask profiler which watches endpoint calls and tries to make some analysis.

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

palanteer - Visual Python and C++ nanosecond profiler, logger, tests enabler

julia - The Julia Programming Language

pytest-austin - Python Performance Testing with Austin

Genie.jl - 🧞The highly productive Julia web framework

memray - Memray is a memory profiler for Python

LuaJIT - Mirror of the LuaJIT git repository

pyshader - Write modern GPU shaders in Python!

Dash.jl - Dash for Julia - A Julia interface to the Dash ecosystem for creating analytic web applications in Julia. No JavaScript required.

viztracer - VizTracer is a low-overhead logging/debugging/profiling tool that can trace and visualize your python code execution.

Transformers.jl - Julia Implementation of Transformer models