roapi
py-spy
|  | roapi | py-spy |
|---|---|---|
| Mentions | 24 | 25 |
| Stars | 3,070 | 11,850 |
| Growth | 1.7% | - |
| Activity | 6.9 | 6.4 |
| Last commit | about 1 month ago | 15 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
roapi
- Full-fledged APIs for slowly moving datasets without writing code
- Tuql: Automatically create a GraphQL server from a SQLite database
If your use case is read-only I suggest taking a look at roapi[1]. It supports multiple read frontends (GraphQL, SQL, REST) and many backends like SQLite, JSON, google sheets, MySQL, etc.
[1] https://github.com/roapi/roapi
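For a concrete picture of "without writing code", a roapi deployment is typically driven by a small YAML config; the sketch below is illustrative only (the table name and data path are placeholders — see the roapi README for the authoritative schema):

```yaml
# roapi.yml — hypothetical example config
addr:
  http: 0.0.0.0:8080
tables:
  - name: "launches"           # exposed over REST, GraphQL, and SQL frontends
    uri: "./data/launches.json"
```

With a config like this, a query would then look roughly like `curl -X POST -d "SELECT * FROM launches LIMIT 5" localhost:8080/api/sql`.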
- Who is using AXUM in production?
- Ask HN: Best way to provide access to large data sets
For smaller datasets, anywhere up to a few MB is reasonable with an API, but in theory, for historic data, it could be up to several GB. I've not seen datasette go that high (IIRC it has a 1,000-row return limit by default).
That's what got me intrigued with Atlassians offering, as data lakes tend to be something internal to a company, not something I've ever seen offered as an interaction point to users.
I've also tested out roapi [1] which is nice if the data is in some structured format already (Parquet/JSON)
[1] https://github.com/roapi/roapi
- "thread 'main' panicked at 'no CA certificates found'", when running application in docker container
https://github.com/roapi/roapi/issues/103
- Roapi 0.9 release adds support for all cloud storage providers
- SQLite-based databases on the Postgres protocol? Yes we can
Very cool and well executed project. Love the sprinkle of Rust in all the other companion projects as well :)
The ROAPI (https://github.com/roapi/roapi) project I built also happens to support a similar feature set, i.e. exposing SQLite through a variety of remote query interfaces, including the Postgres wire protocol, REST APIs, and GraphQL.
- Using Rust to write a Data Pipeline. Thoughts. Musings.
- PostgREST – Serve a RESTful API from Any Postgres Database
> why not just accept SQL and cut out all the unnecessary mapping?
You might be interested in what we're building: Seafowl, a database designed for running analytical SQL queries straight from the user's browser, with HTTP CDN-friendly caching [0]. It's a second iteration of the Splitgraph DDN [1] which we built on top of PostgreSQL (Seafowl is much faster for this use case, since it's based on Apache DataFusion + Parquet).
The tradeoff of letting the client run arbitrary SQL versus a limited API is that PostgREST-style queries have fairly predictable, low overhead, but they aren't as powerful as fully-fledged SQL with aggregations, joins, window functions, and CTEs, which are useful in interactive dashboards to reduce the amount of data that has to be processed on the client.
There's also ROAPI [2] which is a read-only SQL API that you can deploy in front of a database / other data source (though in case of using databases as a data source, it's only for tables that fit in memory).
[0] https://seafowl.io/
[1] https://www.splitgraph.com/connect
[2] https://github.com/roapi/roapi
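The point about CTEs and window functions being out of reach for row-filter-style APIs can be made concrete with a tiny self-contained sketch, using Python's bundled sqlite3 module (window functions require SQLite 3.25+, which ships with modern Python builds; the table and data are invented for illustration):

```python
import sqlite3

# In-memory toy table; names are made up for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
      ('alice', 10), ('alice', 30), ('bob', 5), ('bob', 45), ('bob', 50);
""")

# A CTE feeding a window function: one round trip, no client-side math.
# A PostgREST-style filter API cannot express this shape of query.
rows = conn.execute("""
    WITH totals AS (
      SELECT customer, SUM(amount) AS total
      FROM orders
      GROUP BY customer
    )
    SELECT customer, total, RANK() OVER (ORDER BY total DESC) AS rnk
    FROM totals
    ORDER BY rnk
""").fetchall()
print(rows)  # → [('bob', 100.0, 1), ('alice', 40.0, 2)]
```

The aggregation and ranking happen entirely server-side, which is exactly the "less data shipped to the client" benefit described above.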
- Command-line data analytics made easy
It could be the NDJSON parser (DF source: [0]) or a variety of other factors. Looking at the ROAPI release archive [1], it doesn't ship with the `columnq` binary from your comment, so it could also have something to do with compile-time flags.
FWIW, we use the Parquet format with DataFusion and get very good speeds, similar to DuckDB [2], e.g. 1.5s to run a more complex aggregation query `SELECT date_trunc('month', tpep_pickup_datetime) AS month, COUNT(*) AS total_trips, SUM(total_amount) FROM tripdata GROUP BY 1 ORDER BY 1 ASC` on a 55M-row subset of NY Taxi trip data.
[0]: https://github.com/apache/arrow-datafusion/blob/master/dataf...
[1]: https://github.com/roapi/roapi/releases/tag/roapi-v0.8.0
[2]: https://observablehq.com/@seafowl/benchmarks
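The shape of that aggregation can be sketched with stdlib sqlite3 standing in for DataFusion (SQLite has no `date_trunc()`, so `strftime('%Y-%m', ...)` plays that role here; the rows are made-up sample data, not the real taxi dataset):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tripdata (tpep_pickup_datetime TEXT, total_amount REAL);
    INSERT INTO tripdata VALUES
      ('2022-01-03 08:00:00', 10.5),
      ('2022-01-15 09:30:00', 7.0),
      ('2022-02-01 12:00:00', 20.0);
""")

# strftime('%Y-%m', ...) buckets timestamps by month, like date_trunc('month', ...).
rows = conn.execute("""
    SELECT strftime('%Y-%m', tpep_pickup_datetime) AS month,
           COUNT(*) AS total_trips,
           SUM(total_amount)
    FROM tripdata
    GROUP BY 1
    ORDER BY 1 ASC
""").fetchall()
print(rows)  # → [('2022-01', 2, 17.5), ('2022-02', 1, 20.0)]
```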
py-spy
- My journey optimizing a Django application (in Portuguese)
- Graphical Python Profiler
- Grasshopper – An Open Source Python Library for Load Testing
For CPU cycles, py-spy[0] is seeing more and more use. For RAM, I would like to know too...
[0] -- https://github.com/benfred/py-spy
- Debugging a Mixed Python and C Language Stack
There's also py-spy, a profiling tool that can generate flame graphs containing a mix of Python and C (or C++) calls.
https://github.com/benfred/py-spy
It's worked really well for my needs
- python to rust migration
You should profile your consumer to check the bottlenecks. You can use the excellent py-spy (written in Rust). IMO a few uses of Numba here and there should solve your performance issues.
- Has anyone switched from numpy to Rust?
So as a first step you'll want to profile your program to figure out where it's slow, and hopefully that'll also tell you why it's slow. I'm the (biased) author of the Sciagraph profiler which is designed for this sort of application (https://sciagraph.com) but you can also try py-spy, which isn't as well designed for data processing/analysis applications (e.g. it won't visualize parallelism at all) but can still be informative (https://github.com/benfred/py-spy). Both are written in Rust ;)
- Trace your Python process line by line with minimal overhead!
Any advantages/disadvantages compared to py-spy [1]?
[1]: https://github.com/benfred/py-spy
- Python 3.11 delivers.
Python profiling is available primarily through cProfile, and the output can be visualized as a flame graph with the help of tools like snakeviz. There are also memory profilers like memray, which does in-depth traces, and sampling profilers like py-spy.
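A minimal cProfile session, sketched with stdlib only (the profiled function is a toy stand-in; snakeviz and similar tools read the same stats data that pstats formats here):

```python
import cProfile
import io
import pstats

def fib(n):
    # Deliberately inefficient toy function so the profiler has work to see.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

profiler = cProfile.Profile()
profiler.enable()
fib(20)
profiler.disable()

# Render the collected stats, sorted by cumulative time, top 5 entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The recursive `fib` dominates the report's cumulative-time column, which is the same signal a flame graph renders visually.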
- Tales of serving ML models with low-latency
A good profiler is py-spy (https://github.com/benfred/py-spy). If you run your app/benchmark with it, it can draw a flame graph telling you where the majority of time is spent. The info here is quite fine-grained, so it would already tell you where the bottleneck is. Without a full-fledged profiler, you can also measure the timings in various parts of the code to understand where the bottleneck is.
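py-spy itself samples stacks from *outside* the target process, but the core idea of a sampling profiler can be sketched in-process with the stdlib. This toy version (all function names invented for illustration) snapshots the main thread's stack from a helper thread and counts which leaf function the samples land in:

```python
import sys
import threading
import time
from collections import Counter

def sample_stacks(thread_id, samples, interval=0.001, duration=0.35):
    """Snapshot the target thread's Python stack at a fixed interval.
    (py-spy does this externally by reading process memory; this
    in-process version only illustrates the principle.)"""
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        frame = sys._current_frames().get(thread_id)
        if frame is not None:
            stack = []
            while frame:
                stack.append(frame.f_code.co_name)
                frame = frame.f_back
            samples[tuple(reversed(stack))] += 1  # stored root -> leaf
        time.sleep(interval)

def slow_part():
    # Busy-loop for ~0.3s: should dominate the collected samples.
    end = time.monotonic() + 0.3
    while time.monotonic() < end:
        pass

def fast_part():
    time.sleep(0.001)

def workload():
    slow_part()
    fast_part()

samples = Counter()
sampler = threading.Thread(
    target=sample_stacks, args=(threading.get_ident(), samples))
sampler.start()
workload()
sampler.join()

# Aggregate by leaf function, flame-graph style: sample counts stand in
# for time, so the busiest function collects the most samples.
leaves = Counter()
for stack, n in samples.items():
    leaves[stack[-1]] += n
hottest = leaves.most_common(1)[0][0]
print(hottest)
```

Because sampling only reads stacks periodically, overhead stays low regardless of how hot the profiled code is — the same property that makes py-spy safe to attach to production processes.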
- Profiling a Python library written in Rust (Maturin)
Might be worth raising an issue on py-spy (a Python profiler written in Rust which "supports profiling native python extensions written in languages like C/C++ or Cython") to see if that can close the loop.
What are some alternatives?
php-parquet - PHP implementation for reading and writing Apache Parquet files/streams. NOTICE: Please migrate to https://github.com/codename-hub/php-parquet.
pyflame
qframe - Immutable data frame for Go
pyinstrument - 🚴 Call stack profiler for Python. Shows you why your code is slow!
materialize - The data warehouse for operational workloads.
python-uncompyle6 - A cross-version Python bytecode decompiler
delta-rs - A native Rust library for Delta Lake, with bindings into Python
memory_profiler - Monitor Memory usage of Python code
fluvio - Lean and mean distributed stream processing system written in rust and web assembly.
icecream - 🍦 Never use print() to debug again.
datasette - An open source multi-tool for exploring and publishing data
line_profiler