Open source Business intelligence platform made with Python
7 projects | news.ycombinator.com | 28 Nov 2021
Rows.com: Spreadsheets on Steroids
5 projects | news.ycombinator.com | 10 Nov 2021
Standalone Python virtual server example https://github.com/finos/perspective/tree/master/examples/to...
JupyterLab demo on Binder https://mybinder.org/v2/gh/finos/perspective/master?urlpath=...
DuckDB-WASM: Efficient Analytical SQL in the Browser
2 projects | news.ycombinator.com | 29 Oct 2021
Show HN: Vizzu – Open-source charting library focused on animating charts
5 projects | news.ycombinator.com | 17 Oct 2021
The best example I've seen of WASM being used to render to canvas (also for visualizations) is "Perspective":
"Perspective is an interactive analytics and data visualization component, which is especially well-suited for large and/or streaming datasets. Originally developed at J.P. Morgan and open-sourced through the Fintech Open Source Foundation (FINOS), Perspective makes it simple to build user-configurable analytics entirely in the browser, or in concert with Python and/or Jupyterlab. Use it to create reports, dashboards, notebooks and applications, with static data or streaming updates via Apache Arrow."
Open Source Is Finally Coming to Financial Services
3 projects | news.ycombinator.com | 15 Oct 2021
Man, the a16z marketing machine is working hard, unfortunately at the cost of quality.
For those interested in FS and open source today, especially with a capital markets lens, check out:
Lots of great projects; one I used recently and a favourite of mine was this:
Perspective 1.0.0, an open source BI tool built on WebAssembly
2 projects | reddit.com/r/programming | 15 Oct 2021
As far as customizing the Perspective datagrid, the story on this is evolving :) . With the 1.0 release, we've released an NFT demo with a more current version of the plugin API, as well as new plugin API docs. Replacing innerHTML is only costly if you trigger a relayout before the replacement, which you'd want to avoid - check the pudgy-penguins demo source for examples which replace these without the intermediate DOM tree being rendered (though this is browser-dependent). If you can't, e.g. if the replacement is async, the underlying regular-table component has an API that allows you to return the DOM elements themselves per cell, but you'd need to write a simple plugin to integrate this, as Perspective's version provides its own dataListener.
2 projects | reddit.com/r/Python | 13 Oct 2021
By the way, the link to the blog on the project website results in a 404.
1 project | news.ycombinator.com | 13 Oct 2021
Awkward: Nested, jagged, differentiable, mixed type, GPU-enabled, JIT'd NumPy
5 projects | news.ycombinator.com | 16 Dec 2021
Hi! I'm the original author of Awkward Array (Jim Pivarski), though there are now many contributors with about five regulars. Two of my colleagues just pointed me here—I'm glad you're interested! I can answer any questions you have about it.
First, sorry about all the TODOs in the documentation: I laid out a table of contents structure as a reminder to myself of what ought to be written, but haven't had a chance to fill in all of the topics. From the front page (https://awkward-array.org/), if you click through to the Python API reference (https://awkward-array.readthedocs.io/), that site is 100% filled in. Like NumPy, the library consists of one basic data type, `ak.Array`, and a suite of functions that act on it, `ak.this` and `ak.that`. All of those functions are individually documented, and many have examples.
The basic idea starts with a data structure like Apache Arrow (https://arrow.apache.org/)—a tree of general, variable-length types, organized in memory as a collection of columnar arrays—but performs operations on the data without ever taking it out of its columnar form. (3.5 minute explanation here: https://youtu.be/2NxWpU7NArk?t=661) Those columnar operations are compiled (in C++); there's a core of structure-manipulation functions suggestively named "cpu-kernels" that will also be implemented in CUDA (some already have, but that's in an experimental stage).
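The columnar layout described above can be sketched in a few lines of plain NumPy. This is an illustration of the Arrow-style offsets/content representation of variable-length lists, not Awkward Array's actual internals:

```python
import numpy as np

# A jagged list-of-lists like [[1.1, 2.2], [], [3.3, 4.4, 5.5]] stored
# Arrow-style: one flat "content" buffer plus an "offsets" array.
content = np.array([1.1, 2.2, 3.3, 4.4, 5.5])
offsets = np.array([0, 2, 2, 5])  # list i spans content[offsets[i]:offsets[i+1]]

def get_list(i):
    # Materialize list i only on demand; the storage stays columnar.
    return content[offsets[i]:offsets[i + 1]]
```

Every list, including the empty one, is just a pair of indices into the shared buffer, which is what makes whole-array operations possible without walking a tree of objects.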
A key aspect of this is that structure can be manipulated just by changing values in some internal arrays and rearranging the single tree organizing those arrays. If, for instance, you want to replace a bunch of objects in variable-length lists with another structure, it never needs to instantiate those objects or lists as explicit types (e.g. `struct` or `std::vector`), and so the functions don't need to be compiled for specific data types. You can define any new data types at runtime and the same compiled functions apply. Therefore, JIT compilation is not necessary.
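A minimal sketch of that point, assuming the same offsets/content layout as above: a structural operation like "drop the first element of every list" only computes new index arrays, and never instantiates list objects or copies the data buffer:

```python
import numpy as np

content = np.array([1, 2, 3, 4, 5, 6])
offsets = np.array([0, 2, 3, 6])   # three lists: [1, 2], [3], [4, 5, 6]

# Bump each list's start (capped at its stop). This touches only the
# tiny offsets-derived arrays; content is untouched and type-agnostic.
starts = np.minimum(offsets[:-1] + 1, offsets[1:])
stops = offsets[1:]

# Materialize at the very end, purely for inspection.
trimmed = [content[a:b].tolist() for a, b in zip(starts, stops)]
```

Because the operation is expressed on the index arrays, the same compiled kernel works whatever the element type of `content` happens to be.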
We do have Numba extensions so that you can iterate over runtime-defined data types in JIT-compiled Numba, but that's a second way to manipulate the same data. By analogy with NumPy, you can compute many things using NumPy's precompiled functions, as long as you express your workflow in NumPy's vectorized way. Numba additionally allows you to express your workflow in imperative loops without losing performance. It's the same way with Awkward Array: unpacking a million record structures or slicing a million variable-length lists in a single function call makes use of some precompiled functions (no JIT), but iterating over them at scale with imperative for loops requires JIT-compilation in Numba.
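The two styles can be contrasted with plain NumPy (a toy illustration, not Awkward or Numba code):

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Array-oriented: one call into precompiled kernels; no JIT needed.
vectorized = np.sqrt(x) + 1.0

# Imperative: the same computation as a Python loop. Each iteration
# pays interpreter overhead -- this is the style a JIT compiler like
# Numba turns back into fast machine code.
imperative = np.empty_like(x)
for i in range(10):            # only a slice, to keep the demo fast
    imperative[i] = x[i] ** 0.5 + 1.0

assert np.allclose(vectorized[:10], imperative[:10])
```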
Just as we work with Numba to provide both of these programming styles—array-oriented and imperative—we'll also be working with JAX to add autodifferentiation (Anish Biswas will be starting on this in January; he's actually continuing work from last spring, but in a different direction). We're also working with Martin Durant and Doug Davis to replace our homegrown lazy arrays with industry-standard Dask, as a new collection type (https://github.com/ContinuumIO/dask-awkward/). A lot of my time, with Ianna Osborne and Ioana Ifrim at my university, is being spent refactoring the internals to make these kinds of integrations easier (https://indico.cern.ch/event/855454/contributions/4605044/). We found that we had implemented too much in C++ and need more, but not all, of the code to be in Python to be able to interact with third-party libraries.
If you have any other questions, I'd be happy to answer them!
Test Parquet float16 Support in Pandas
3 projects | dev.to | 14 Dec 2021
https://github.com/apache/arrow/issues/2691 https://issues.apache.org/jira/browse/ARROW-7242 https://issues.apache.org/jira/browse/PARQUET-1647
Any role that Rust could have in the Data world (Big Data, Data Science, Machine learning, etc.)?
8 projects | reddit.com/r/rust | 4 Dec 2021
pigeon-rs: Open source email automation written in Rust
5 projects | reddit.com/r/rust | 20 Nov 2021
ConnectorX uses the arrow2 data format when fetching from a database. This format is optimized for columnar data:
Introducing tidypolars - a Python data frame package for R tidyverse users
9 projects | reddit.com/r/rstats | 10 Nov 2021
I think having a basic understanding of pandas, given how broadly it's used, is beneficial. That being said, polars seems to be matching or beating data.table in performance, so I think it'd be very worth it to take it up. Wes McKinney, creator of pandas, has been quite vocal about the architectural flaws of pandas -- which is why he's been working on the Arrow project. polars is based on Arrow, so in principle it's kinda like pandas 2.0 (adopting the changes that Wes proposed).
So the question is really - how is polars so fast? Polars is backed by Apache Arrow, a columnar memory format that is designed specifically for performance.
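A toy illustration of why a columnar layout helps, in plain Python/NumPy (a sketch of the general idea, not polars internals):

```python
import numpy as np

# Row-oriented: each record is a separate Python object; summing one
# field means hopping across scattered heap objects.
rows = [{"price": float(i), "qty": i % 7} for i in range(1000)]
row_total = sum(r["price"] for r in rows)

# Columnar (Arrow-style): each field is one contiguous buffer, so the
# same reduction is a single cache-friendly, vectorizable pass.
price_col = np.array([r["price"] for r in rows])
col_total = price_col.sum()

assert row_total == col_total
```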
Comparing SQLite, DuckDB and Arrow
5 projects | news.ycombinator.com | 27 Oct 2021
The Data Engineer Roadmap 🗺
11 projects | dev.to | 19 Oct 2021
C++ Jobs - Q4 2021
4 projects | reddit.com/r/cpp | 2 Oct 2021
Technologies: Apache Arrow, Flatbuffers, C++ Actor Framework, Linux, Docker, Kubernetes
How to use Spark and Pandas to prepare big data
3 projects | dev.to | 21 Sep 2021
Pandas user-defined functions (UDFs) are built on top of Apache Arrow. Pandas UDFs improve performance by letting developers scale their workloads and leverage pandas APIs in Apache Spark: you write ordinary pandas code inside the function, and Apache Arrow is used to exchange the data between Spark and Python.
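A sketch of the kind of function a pandas UDF wraps: it receives and returns whole pandas Series, so Spark can ship column batches via Arrow instead of serializing row by row. `normalize` is a made-up example; in real Spark code you would register it with `pyspark.sql.functions.pandas_udf`, while here it is simply called locally:

```python
import pandas as pd

# In Spark this would be decorated with @pandas_udf("double");
# the body is plain pandas either way.
def normalize(s: pd.Series) -> pd.Series:
    # Operates on the whole column batch at once, not row by row.
    return (s - s.mean()) / s.std()

batch = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
result = normalize(batch)
```

Because the function body is vectorized pandas, the per-row Python overhead of a classic UDF disappears.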
What are some alternatives?
h5py - HDF5 for Python -- The h5py package is a Pythonic interface to the HDF5 binary data format.
polars - Fast multi-threaded DataFrame library in Rust | Python | Node.js
ta-lib - Python wrapper for TA-Lib (http://ta-lib.org/).
arquero - Query processing and transformation of array-backed data tables.
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
spark-rapids - Spark RAPIDS plugin - accelerate Apache Spark with GPUs
arrow-rs - Official Rust implementation of Apache Arrow
Apache HBase - Apache HBase
duckdb_and_r - My thoughts and examples on DuckDB and R
cylon - Cylon is a fast, scalable, distributed memory, parallel runtime with a Pandas like DataFrame.
python-rust-arrow-interop-example - Example of using the Apache Arrow C Data Interface between Python and Rust