PyCall.jl VS are-we-fast-yet

Compare PyCall.jl vs are-we-fast-yet and see what their differences are.

PyCall.jl

Package to call Python functions from the Julia language (by JuliaPy)

are-we-fast-yet

Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays (by smarr)
                  PyCall.jl            are-we-fast-yet
Mentions          28                   18
Stars             1,437                314
Growth            1.1%                 -
Activity          6.1                  8.8
Latest commit     about 1 month ago    2 months ago
Language          Julia                Java
License           MIT License          GNU General Public License v3.0 or later
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

PyCall.jl

Posts with mentions or reviews of PyCall.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-06.

are-we-fast-yet

Posts with mentions or reviews of are-we-fast-yet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-21.
  • Boehm Garbage Collector
    9 projects | news.ycombinator.com | 21 Jan 2024
    > Sure there's a small overhead to smart pointers

    Not so small, and it has the potential to significantly slow down an application when not used wisely. Here, for example, are some measurements where the programmer used C++11 and did everything with smart pointers: https://github.com/smarr/are-we-fast-yet/issues/80#issuecomm.... They show a slowdown of a factor of 2 to 10 compared with the C++98 implementation. Also remember that smart pointers create memory leaks when used with circular references (a minimal sketch follows below), and there is an additional memory allocation involved with each smart pointer.
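
    As a minimal sketch of the circular-reference pitfall (my illustration, not code from the linked benchmark): two objects holding std::shared_ptr references to each other keep each other's reference count above zero, so neither destructor ever runs; replacing one side of the cycle with std::weak_ptr breaks it.

    ```cpp
    // Minimal sketch: a shared_ptr cycle leaks, because each object keeps
    // the other's reference count at >= 1 after all outside owners go away.
    #include <cstdio>
    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // strong reference: participates in the cycle
        // std::weak_ptr<Node> next; // the fix: a weak reference breaks the cycle
        ~Node() { std::puts("Node destroyed"); }
    };

    int main() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a;  // cycle: a <-> b
        // When a and b go out of scope, both counts drop from 2 to 1, never
        // to 0: with shared_ptr, "Node destroyed" is never printed.
        return 0;
    }
    ```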

    > Garbage collection has an overhead too of course

    The Boehm GC is surprisingly efficient. See e.g. these measurements: https://github.com/rochus-keller/Oberon/blob/master/testcase.... The same benchmark suite as above is compared with different versions of Mono (using the generational GC) and with the C code (using the Boehm GC) generated by my Oberon compiler. The latter is only 20% slower than the native C++98 version, and still twice as fast as Mono 5.
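
    For readers who have not used the Boehm GC, here is a minimal usage sketch (my illustration, not code from the measurements above; the header location and the -lgc link flag vary by installation): memory comes from GC_MALLOC and is never freed explicitly, and the collector reclaims unreachable blocks.

    ```cpp
    // Minimal Boehm GC usage sketch (assumes libgc is installed; build with
    // something like: g++ demo.cpp -lgc). Not code from the posts above.
    #include <gc.h>   // on some systems: <gc/gc.h>
    #include <cstdio>

    int main() {
        GC_INIT();  // initialize the collector once at startup
        for (int i = 0; i < 1000000; ++i) {
            // Allocate from the GC heap; there is no free(): blocks that
            // become unreachable are reclaimed automatically.
            int *p = static_cast<int *>(GC_MALLOC(16 * sizeof(int)));
            p[0] = i;
        }
        std::printf("GC heap size: %lu bytes\n",
                    static_cast<unsigned long>(GC_get_heap_size()));
        return 0;
    }
    ```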

  • A C++ version of the Are-we-fast-yet benchmark suite
    2 projects | /r/cpp | 26 Jun 2023
    See https://github.com/smarr/are-we-fast-yet/blob/master/docs/guidelines.md.
  • The Bitter Truth: Python 3.11 vs. Cython vs. C++ Performance for Simulations
    1 project | news.ycombinator.com | 24 Dec 2022
    That's a very interesting article, thanks. It is notable that Cython is only about twice as fast as Python 3.10 and only about 40% faster than Python 3.11.

    The official Python site advertises a speedup of 25% from 3.10 to 3.11, whereas the article measured a speedup of 60%; speedups evidently depend strongly on the workload, so it usually makes sense to measure several different algorithms. Unfortunately, there is no Python or C++ implementation yet for https://github.com/smarr/are-we-fast-yet.

  • Comparing Language Implementations with Objects, Closures, and Arrays
    1 project | news.ycombinator.com | 20 Mar 2022
  • Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
    2 projects | /r/programming | 20 Mar 2022
    1 project | /r/SoftwarePerf | 20 Mar 2022
  • .NET 6 vs. .NET 5: up to 40% speedup
    15 projects | news.ycombinator.com | 21 Nov 2021
    > Software benchmarks are super subjective.

    No, they are not, but they are just a measurement tool, not a source of absolute truth. When I studied engineering at ETH we learned "Who measures, measures rubbish!" ("Wer misst, misst Mist!" in German). Every measurement has errors, and being aware of those errors and coping with them is part of the engineering profession. The problem with programming language benchmarks is often that the goal is to win by any means; to compare as fairly and objectively as possible, there must instead be a set of suitable rules adhered to by all benchmark implementations, as the sketch below illustrates. Such a set of rules is e.g. given for the Are-we-fast-yet suite (https://github.com/smarr/are-we-fast-yet).
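
    To illustrate the kind of discipline such rules impose (a generic sketch, not the actual Are-we-fast-yet harness): run the workload many times inside one process, discard warm-up iterations, and report per-iteration times rather than a single cold run.

    ```cpp
    // Generic benchmarking-harness sketch (illustration only, not the
    // Are-we-fast-yet harness): in-process repetition with warm-up.
    #include <chrono>
    #include <cstdio>

    // Hypothetical workload standing in for a real benchmark kernel.
    static long workload() {
        long sum = 0;
        for (long i = 0; i < 10000000; ++i) sum += i % 7;
        return sum;
    }

    int main() {
        using clock_type = std::chrono::steady_clock;
        const int warmup = 3, measured = 10;
        volatile long sink = 0;  // keep the optimizer from deleting the work
        for (int i = 0; i < warmup; ++i) sink = workload();  // discarded runs
        for (int i = 0; i < measured; ++i) {
            auto t0 = clock_type::now();
            sink = workload();
            auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                          clock_type::now() - t0).count();
            std::printf("iteration %d: %lld us\n", i, static_cast<long long>(us));
        }
        (void)sink;
        return 0;
    }
    ```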

  • Is CoreCLR that much faster than Mono?
    2 projects | news.ycombinator.com | 29 Aug 2021
    I am aware of the various published test results where CoreCLR shows fantastic speed-ups compared to Mono, e.g. when calculating MD5 or SHA hash sums.

    But my measurements based on the Are-we-fast-yet benchmark suite (see https://github.com/smarr/are-we-fast-yet and https://github.com/rochus-keller/Oberon/tree/master/testcases/Are-we-fast-yet) show a completely different picture. Here the difference between Mono and CoreCLR (both versions 3 and 5) is within +/- 10%, so nothing earth-shattering.

    Here are my measurement results:

    https://github.com/rochus-keller/Oberon/blob/master/testcases/Are-we-fast-yet/Are-we-fast-yet_results_linux.pdf comparing the same benchmark on the same machine run under LuaJIT, Mono, Node.js and Crystal.

    https://github.com/rochus-keller/Oberon/blob/master/testcases/Are-we-fast-yet/Are-we-fast-yet_results_windows.pdf comparing Mono, .Net 4 and CoreCLR 3 and 5 on the same machine.

    Here are the assemblies of the Are-we-fast-yet benchmark suite used for the measurements, in case you want to reproduce my results: http://software.rochus-keller.ch/Are-we-fast-yet_CLI_2021-08-28.zip.

    I was very surprised by the results. Perhaps it has to do with the fact that I measured on x86, or with the fact that the benchmark suite used includes somewhat larger (i.e. more representative) applications rather than just micro-benchmarks.

    What are your opinions? Do others have similar results?

  • Is CoreCLR really that much faster than Mono?
    6 projects | /r/dotnet | 29 Aug 2021
    There is a good reason for this; have a look at e.g. https://github.com/smarr/are-we-fast-yet/blob/master/docs/guidelines.md.
  • Why most programming language performance comparisons are most likely wrong
    1 project | /r/programming | 9 Feb 2021
    Apparently the SOM nbody program was then taken as the basis of a new Java nbody program.

What are some alternatives?

When comparing PyCall.jl and are-we-fast-yet you can also consider the following projects:

py2many - Transpiler of Python to many other languages

gleam - ⭐️ A friendly language for building type-safe, scalable systems!

Revise.jl - Automatically update function definitions in a running Julia session

crystal - The Crystal Programming Language

julia - The Julia Programming Language

fast-ruby - Writing Fast Ruby: a collection of common Ruby idioms.

Genie.jl - 🧞 The highly productive Julia web framework

Oberon - Oberon parser, code model & browser, compiler and IDE with debugger

Smalltalk - Parser, code model, interpreter and navigable browser for the original Xerox Smalltalk-80 v2 sources and virtual image file

libffi - A portable foreign-function interface library.

.NET Runtime - .NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.