php-spx VS FrameworkBenchmarks

Compare php-spx vs FrameworkBenchmarks and see what their differences are.

                php-spx                                   FrameworkBenchmarks
Mentions        7                                         366
Stars           1,872                                     7,373
Growth          -                                         1.0%
Activity        7.4                                       9.8
Last commit     3 months ago                              3 days ago
Language        C                                         Java
License         GNU General Public License v3.0 only      GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

php-spx

Posts with mentions or reviews of php-spx. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-07.
  • What are modern profiling tools?
    5 projects | /r/PHP | 7 Oct 2022
    Not used it in a while, but https://github.com/NoiseByNorthwest/php-spx is worth checking out.
  • How to profile your PHP applications with Xdebug
    3 projects | news.ycombinator.com | 7 May 2022
    https://github.com/NoiseByNorthwest/php-spx

    SPX could be loaded with docker-compose like this article does for Xdebug. But if you already have a PHP environment, the easiest way to install it is to compile it (sudo apt install php-dev && make && cp modules/spx.so /usr/lib/php/....).

  • How to use xdebug to pinpoint PHP in a large application?
    2 projects | /r/PHPhelp | 3 Oct 2021
    Looks like this one was not mentioned yet: you can try SPX (https://github.com/NoiseByNorthwest/php-spx)
  • Crystal Lang 1.0 Release
    16 projects | news.ycombinator.com | 22 Mar 2021
    (See also my other comment, which makes a totally different point that I decided to note separately because this got big and would have buried it)

    Well, I have ADHD. I've found the most effective approach (on top of treatment) that helps me retain focus is reexec-on-save, a la `while :; do tput clear; $thing; inotifywait -q -e moved_to .; done`. I usually have a dozen of those in old shell histories (^R FTW). (Ha, my laptop actually has exactly 12, and my other machine has 23 - although ignoredups is off...)

    $thing might be `bash ./script.sh` (because my text editor's atomic rename doesn't understand execute bits >.>), `php script.php` or `gcc -O0 script.c && ./script`. (Also, as an aside I used to use `-e close_write $file` until I realized watching even giant directories is equivalently efficient to watching a file.)

    Shell scripts (the small kind that run few subprocesses) are typically fast. Likewise, small C programs of <1000-2000 lines compile just about instantly on modern hardware; and where modern hardware isn't available and what I'm trying to do doesn't leverage too many libraries or whatnot, tcc has been able to swing the balance firmly in my favor in the past, which has been great.

    But for better or worse, PHP is currently the language I use the most. Because it's faster than Python and Ruby.

    A while back I wanted to do a bit of analysis on a dataset of information that was only published as a set of PDF documents... yayyy. But after timidly gunzipping the stream blocks and googling random bits of PDF's command language ("wat even is this"), I discovered to my complete surprise that it was trivial to interpret the text coordinate system and my first "haha let's see how bad this is" actually produced readable text on pretty much the first go. (To be pedantic, step #-1 was "draw little boxes", step #0 was "how to x,y correctly" and step #1 was "replace boxes with texWHAT it worked?!")
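    A loose PHP sketch of that "gunzip the stream blocks and read the text coordinates" step, purely for illustration: it assumes FlateDecode (zlib) compressed content streams, skips object/xref parsing entirely, and the regexes and the doc.pdf file name are assumptions rather than anything from the comment.

      <?php
      // Loose sketch, not the commenter's actual code: pull stream...endstream
      // bodies out of a raw PDF, inflate the zlib-compressed ones, and list
      // "x y Td ... (text) Tj" operators with their coordinates.
      $raw = file_get_contents('doc.pdf');   // assumed input file name

      preg_match_all('/stream\r?\n(.*?)endstream/s', $raw, $m);

      foreach ($m[1] as $body) {
          $content = @gzuncompress(rtrim($body, "\r\n"));  // FlateDecode streams are zlib data
          if ($content === false) {
              continue;                                    // not compressed, or not a content stream
          }
          // Very rough: a Td positioning operator followed by a (string) Tj text show.
          preg_match_all('/([-\d.]+)\s+([-\d.]+)\s+Td\s*\((.*?)\)\s*Tj/s', $content, $ops, PREG_SET_ORDER);
          foreach ($ops as [, $x, $y, $text]) {
              printf("%8.2f %8.2f  %s\n", $x, $y, $text);
          }
      }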

    With rendering basically... viable (in IIRC 300-500 LOC O.o), the next step was the boring stir-the-soup-for-8-hours bespoke state machine that cross-correlated text coordinates with field meanings ("okay, that's a heading, and the next text instruction draws the field value underneath. OK, assert that the heading is bold, the value is not, and they're both exactly the same (floating-point) Y position.")

    While that part took a while, it was mostly extremely easy, because I was pretty much linearly writing the script "from start to finish", ie just chipping away at the rock face of the task at hand until I processed an entire document, then the next document ("oh no"), then the next one ("ugh") and so forth ("wait, the edge cases are... decreasing? :D"). My workflow was pretty much founded entirely on the above-noted method.

    Loading/gunzipping a given PDF and getting to the point where the little pipeline would crash would typically complete in the span of time it would take me to release the CTRL key after hitting CTRL+S. So while the process was objectively quite like stirring soup, it did not feel like that at all and I was able to kind of float a bit as my brain cohesively absorbed the mental model of the architecture I was building without any distractions, pauses or forced context switches getting jammed in the mental encoding process like so many wrenches.

    Soon 15 documents were handled correctly, then 20, then 30, then 100 ("oooh, if all the items on the page add up exactly right it pushes line 2 of the summary heading down to the second page! Hmmm... how on earth to special-case that without refactoring to look at more than 1 page at a time..."), and then I hit some sort of threshold and it suddenly just started ticking through PDFs like crazy without asserting. Which was both awesome and a Problem™: the thing ran at something like ~60 PDFs/sec, and while jumping to just after the last successfully-processed PDF on restart worked great when the code crashed constantly, now I was sitting spinning for tens of seconds, getting distracted as I anticipated the next crash. ADHD(R)(TM).

    I wasn't surprised to learn from htop that the script was disk-bound; for some reason my ZFS mirror setup will happily read sequentially at 200MB/s, but thousands-of-tiny-files situations are... suffice to say apt unconditionally takes 60 seconds to install the smallest thing, unless the entire package db is in the FS cache. I'm not sure why. The PDFs were sharded sanely, but they were still in separate files. So I decided to pack them all into a giant blob, and since there weren't too many PDFs and they were numbered sequentially I used a simple offset-based index at the front of the blob where `fseek(data_start + ( * 4)); $o = fread(4); fseek($o);` would give me random seeking.
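    A minimal PHP sketch of that offset-index scheme. The layout (a 4-byte entry count, then 32-bit big-endian absolute offsets, then the concatenated PDFs), the function name and the $id variable that got eaten out of the snippet above are all assumptions, not the commenter's actual format.

      <?php
      // Assumed layout: [4-byte count][count x 4-byte absolute offsets][concatenated PDFs].
      function read_pdf(string $packFile, int $id): string
      {
          $fh = fopen($packFile, 'rb');

          $count = unpack('N', fread($fh, 4))[1];      // number of PDFs in the blob

          fseek($fh, 4 + $id * 4);                     // jump into the offset table
          $offset = unpack('N', fread($fh, 4))[1];     // where this PDF starts

          $end = ($id + 1 < $count)                    // next entry (or EOF) marks the end
              ? unpack('N', fread($fh, 4))[1]
              : fstat($fh)['size'];

          fseek($fh, $offset);                         // random seek straight to the document
          $data = fread($fh, $end - $offset);
          fclose($fh);
          return $data;
      }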

    Reading the blob instead promptly pegged a single CPU core (yay!), and gave me IIRC ~150+ PDFs/sec. This was awesome. But I was still just a tiny bit curious, so after googling around for a profiler and having a small jawdrop moment about SPX (https://github.com/NoiseByNorthwest/php-spx), I had a tentative look at what was actually using the most CPU (via `SPX_ENABLED=1 php ./script.php`, which will automatically print a one-page profile trace to stdout at graceful exit or ^C).

    Oh. The PDF stack machine interpreter is what's taking all the CPU time. That tiny 100-line function was the smallest in the whole script. lol

    So, I moved that function to the preprocessor/packer, then (after some headscratching) serialized the array of tokenized commands/strings into the blob by prefixing commands with \xFF and elements with \xFF\xFE\xFF so I could explode() on \xFF and tell commands from strings by checking if the previous entry was \xFE (and just skip entries of '\xFE' when I found them) :D. Then I reran the preprocessor to regenerate the pack file.
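    A small PHP sketch of that framing scheme as described: commands prefixed with \xFF, string elements with \xFF\xFE\xFF, decoded by exploding on \xFF. The token shapes are made up for illustration, and it assumes neither commands nor strings ever contain a raw \xFF byte.

      <?php
      // Commands are prefixed with \xFF, string elements with \xFF\xFE\xFF, so
      // explode()-ing on \xFF yields either a command chunk, or a "\xFE" marker
      // chunk followed by the string chunk.
      function encode_tokens(array $tokens): string
      {
          $out = '';
          foreach ($tokens as $tok) {
              $out .= is_array($tok)
                  ? "\xFF\xFE\xFF" . $tok[1]   // ['str', 'Hello'] -> string element
                  : "\xFF" . $tok;             // 'Tj'             -> command
          }
          return $out;
      }

      function decode_tokens(string $blob): array
      {
          $tokens = [];
          $prev = null;
          foreach (explode("\xFF", $blob) as $i => $chunk) {
              if ($i === 0 || $chunk === "\xFE") {   // leading empty chunk / string marker
                  $prev = $chunk;
                  continue;
              }
              // A chunk right after a "\xFE" marker is a string, otherwise a command.
              $tokens[] = ($prev === "\xFE") ? ['str', $chunk] : $chunk;
              $prev = $chunk;
          }
          return $tokens;
      }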

      $ php convert_dlcache.php
  • Don't blindly trust profilers
    2 projects | /r/PHP | 9 Mar 2021
    I've written a bit about this issue in php-spx's README https://github.com/NoiseByNorthwest/php-spx#notes-on-accuracy

FrameworkBenchmarks

Posts with mentions or reviews of FrameworkBenchmarks. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-25.
  • Why choose async/await over threads?
    11 projects | news.ycombinator.com | 25 Mar 2024
    Eh. Async and, to a lesser extent, green threads are the only solutions to slowloris HTTP attacks. I suppose your other option is to use a thread pool in your server - but then you need to hide your web server behind nginx to keep it safe. (And it is safe because it uses async IO.)

    Async is also usually wildly faster for networked services than blocking IO + thread pools. Look at some of the winners of the TechEmpower benchmarks. All of the top results use some form of non-blocking IO. (Though a few honourable mentions use Go - with presumably a green thread per request):

    https://www.techempower.com/benchmarks/

    I’ve also never seen Python or Ruby get anywhere near the performance of Node.js (or C#) as a web server. A lot of the difference is probably how well tuned V8 and .NET are, but I’m sure the async-everywhere nature of JavaScript makes a huge difference.

    11 projects | news.ycombinator.com | 25 Mar 2024
    Neat. Thanks for sharing!

    Interestingly, may-minihttp is faring very well in the TechEmpower benchmark [1], for whatever those benchmarks are worth. The code is also surprisingly straightforward [2].

    [1] https://www.techempower.com/benchmarks/

    [2] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...

  • Ntex: Powerful, pragmatic, fast framework for composable networking services
    2 projects | news.ycombinator.com | 23 Mar 2024
    ntex was formed after a schism in actix-web over Rust safety/unsafety, with ntex allowing more unsafe code for better performance.

    ntex is at the top of the TechEmpower benchmarks, although those benchmarks are not apples-to-apples since each uses its own tricks: https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...

  • A decent VS Code and Ruby on Rails setup
    8 projects | news.ycombinator.com | 21 Feb 2024
    Ruby is slow. Very slow. How much, you may ask? https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s... The fastest Ruby entry is at 272nd place. Sure, top entries tend to have questionable benchmark-golfing implementations, but it gives you a good primer on the overhead imposed by Ruby.

    It is also not the early 00s anymore; when you pick an interpreted language, you are not getting "better productivity and tooling". In fact, most interpreted languages lag significantly behind other major languages, with JS/TS, Python and Ruby each suffering from different woes when it comes to package management and publishing. I would say only TS/JS manages to stand apart by being tolerable, and sometimes Python too, by virtue of its popularity and the amount of information out there whenever you need to troubleshoot.

    If you liked Go but felt it was a bit too verbose for your liking, give .NET a try. I am advocating for it here on HN mostly for fun, but it is, in fact, highly underappreciated, considered unsexy and boring while it's anything but after a complete change of trajectory in the last 3-5 years. It is actually the* stack people secretly want but simply don't know about, because it is bundled together with Java in the public perception.

    *productive CLI tooling, high performance, works well across a really wide range of workloads from low level to high level, by far the best ORM across all languages, and a back-end framework that is easier to work with than Node.js while consuming 0.1x the resources

  • Ruby 3.3
    11 projects | news.ycombinator.com | 24 Dec 2023
    RoR and whatever C++-based web backend there is count as a valid comparison in my book. But comparing the languages themselves is maybe a bit off.

    On a side note, you can actually compare their performance here if you’re really curious. But take it with a grain of salt since these are synthetic benchmarks.

    https://www.techempower.com/benchmarks

  • API: Go, .NET, Rust
    3 projects | /r/dotnet | 9 Dec 2023
    Most benchmarks you'll find essentially have someone's thumb on the scale (intentionally or unintentionally). Most people won't know the different languages well enough to create comparable implementations, and if you let different people create the implementations, cheating happens. The TechEmpower benchmarks aren't bad, but many implementations put their thumb on the scale (https://www.techempower.com/benchmarks). For example, a lot of the Go implementations avoid the GC by pre-allocating/reusing structs or allocating arrays knowing how big they need to be in advance (despite that being against the rules). At some point, it becomes "how many features have you turned off." Some Go HTTP routers (like fasthttp and those built off it, such as Atreugo and Fiber) aren't actually correct, and a lot of people in the Go community discourage their use, but they certainly top the benchmarks. Gin and Echo are usually the ones that are well-respected in the Go community.
  • Rage: Fast web framework compatible with Rails
    12 projects | news.ycombinator.com | 4 Dec 2023
    TechEmpower has a few different classes of benchmark. https://www.techempower.com/benchmarks/

    Off the top of my head:

    - json serialization

    - fetching random objects from an actual mysql/psql database

    - cached queries

    - performing mutations / data updates

    writing "hello world" as a response is naturally going to do 75k per second

    12 projects | news.ycombinator.com | 4 Dec 2023
    There is certainly a lot of speculation around the TechEmpower benchmarks, and top entries can utilize questionable techniques like simply writing a byte array literal to the output stream instead of constructing a response, or (in the past) DB query coalescing to work around inherent limitations of the DB in the case of Fortunes or DB queries.

    And yet, the fastest Ruby entry is at 274th place while Rails is at 427th.

    https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...

  • Node.js – v20.8.1
    2 projects | news.ycombinator.com | 15 Oct 2023
    oh what machine? with how many workers? doing what?

    search for "node" on this page: https://www.techempower.com/benchmarks/#section=data-r21

  • Strong typing, a hill I'm willing to die on
    9 projects | news.ycombinator.com | 4 Oct 2023

What are some alternatives?

When comparing php-spx and FrameworkBenchmarks you can also consider the following projects:

zio-http - A next-generation Scala framework for building scalable, correct, and efficient HTTP clients and servers

django-ninja - 💨 Fast, Async-ready, Openapi, type hints based framework for building APIs

drogon - Drogon: A C++14/17 based HTTP web application framework running on Linux/macOS/Unix/Windows [Moved to: https://github.com/drogonframework/drogon]

LiteNetLib - Lite reliable UDP library for Mono and .NET

PHPSpy - low-overhead sampling profiler for PHP 7+

C++ REST SDK - The C++ REST SDK is a Microsoft project for cloud-based client-server communication in native code using a modern asynchronous C++ API design. This project aims to help C++ developers connect to and interact with services.

SQLBoiler - Generate a Go ORM tailored to your database schema.

Laravel - The Laravel Framework.

CoreWCF - Main repository for the Core WCF project

Spiral Framework - High-Performance PHP Framework

web-frameworks - Which is the fastest web framework?

bjoern - A screamingly fast Python 2/3 WSGI server written in C.