Show HN: Pidove, an Alternative to the Java Streams API

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • pidove

  • functionaljava

    Functional programming in Java

  • Sometimes passing a lambda (or other function) as an argument is a simpler approach to specialization than defining a subclass (a short sketch of both styles follows this comment). That, I think, is mainstream and accepted in Java today.

    There is a lot more to "functional programming" than that, such as the use of persistent collections. In some cases (such as managing the symbol table in a compiler, sketched below) those methods lead to good efficiency and great simplification; in other cases they are ways to make easy problems punishingly hard.

    pidove builds on top of ordinary Java Collections and doesn't push the more exotic approach of

    http://www.functionaljava.org/
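
    To make the lambda-versus-subclass point concrete, here is a minimal sketch in plain standard Java (not pidove's API; the Keeper class and select method are made up for illustration): the same specialization is written once as a subclass overriding a hook method and once as a lambda passed as an argument, both over ordinary java.util Collections.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    public class LambdaSpecialization {

        // Subclass-based specialization: override a hook method.
        static abstract class Keeper {
            abstract boolean keep(String s);
            List<String> select(List<String> input) {
                List<String> out = new ArrayList<>();
                for (String s : input) {
                    if (keep(s)) out.add(s);
                }
                return out;
            }
        }

        // Lambda-based specialization: the varying behavior is just an argument.
        static List<String> select(List<String> input, Predicate<String> keep) {
            List<String> out = new ArrayList<>();
            for (String s : input) {
                if (keep.test(s)) out.add(s);
            }
            return out;
        }

        public static void main(String[] args) {
            List<String> words = List.of("alpha", "beta", "gamma");

            // Subclass version: a whole anonymous class for one line of behavior.
            List<String> longWordsA = new Keeper() {
                boolean keep(String s) { return s.length() > 4; }
            }.select(words);

            // Lambda version: the same specialization as a one-line argument.
            List<String> longWordsB = select(words, s -> s.length() > 4);

            System.out.println(longWordsA + " " + longWordsB);
        }
    }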
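
    And to illustrate the persistent-collections point (my own toy example, not functionaljava's API), here is a tiny persistent symbol table: each scope extends the outer one by prepending a binding, the outer scope is shared rather than copied or mutated, and leaving a scope is just dropping a reference.

    public class PersistentSymbolTable {

        // Immutable linked list of (name, type) bindings; the tail is shared, never copied.
        record Scope(String name, String type, Scope outer) {
            static final Scope EMPTY = null;

            static Scope bind(Scope outer, String name, String type) {
                return new Scope(name, type, outer);
            }

            static String lookup(Scope scope, String name) {
                for (Scope s = scope; s != null; s = s.outer()) {
                    if (s.name().equals(name)) return s.type();
                }
                return null;
            }
        }

        public static void main(String[] args) {
            Scope global = Scope.bind(Scope.EMPTY, "println", "function");

            // Entering a block "extends" the table without touching the outer scope.
            Scope block = Scope.bind(Scope.bind(global, "x", "int"), "y", "String");

            System.out.println(Scope.lookup(block, "x"));   // int
            System.out.println(Scope.lookup(global, "x"));  // null: the outer scope is unchanged
            // Leaving the block is just dropping the reference to `block`.
        }
    }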

  • Reactive Streams

    Reactive Streams Specification for the JVM

  • There is a very big design space for "Stream" APIs.

    Microsoft's LINQ, for instance, can compile a stream operation into a SQL statement, and jOOQ does the same. That approach offers query optimization and efficient joins, which depend on the query system having complete visibility into the queries, indexes built ahead of time, and so on.

    Another extreme is a system like

    https://www.reactive-streams.org/

    which is especially good for applying filter, map, and other operations to a stream of real-time events: instead of a pull operation such as a for-loop over an Iterable, items are pushed into the system as they arrive.

    I've worked on systems that use the latter kind of streaming to run batch jobs, and you can get great performance (a 780% speedup with 8 CPUs) on crazily heterogeneous workloads. You do have to be careful, though, to shut the system down or flush it out, or else you get wrong answers. Frequently those frameworks don't shut themselves down properly unless you implement clean shutdown yourself.

    The point is that operators like "filter" and "map" and the rest are so powerful because they are portable all the way from the minimal pidove up to a Hadoop cluster (the sketches below show the same filter/map pipeline compiled to a SQL string and written in pull versus push styles).
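
    A toy sketch of the LINQ/jOOQ point (my own illustration, not either library's API): when the filter and the projection are represented as data the query system can see, the whole pipeline can be compiled into a single SQL statement and handed to the database to optimize.

    public class PipelineToSql {

        // A pipeline described as data rather than opaque lambdas; the names
        // (table, whereClause, selectExpr) are invented for this example.
        record Pipeline(String table, String whereClause, String selectExpr) {
            String toSql() {
                return "SELECT " + selectExpr + " FROM " + table + " WHERE " + whereClause;
            }
        }

        public static void main(String[] args) {
            // Conceptually: filter(age > 21) followed by map(row -> row.name) over "users".
            Pipeline p = new Pipeline("users", "age > 21", "name");
            System.out.println(p.toSql());   // SELECT name FROM users WHERE age > 21
        }
    }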
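
    And a sketch in plain Java (not the Reactive Streams API; all names here are made up) of the same filter/map pipeline written pull-style with a for-loop and push-style with composed stages, including the flush-on-shutdown step a push pipeline needs so the last buffered items are not silently dropped.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;
    import java.util.function.Function;
    import java.util.function.Predicate;

    public class PushVsPull {

        // Push-style stages: each one hands items to `downstream` as they arrive.
        static <T> Consumer<T> filter(Predicate<T> keep, Consumer<T> downstream) {
            return item -> { if (keep.test(item)) downstream.accept(item); };
        }

        static <T, R> Consumer<T> map(Function<T, R> fn, Consumer<R> downstream) {
            return item -> downstream.accept(fn.apply(item));
        }

        // A sink that batches its output, so it must be flushed when the input ends.
        static class BatchingSink implements Consumer<String>, AutoCloseable {
            private final List<String> buffer = new ArrayList<>();
            private final List<String> written = new ArrayList<>();

            @Override public void accept(String item) {
                buffer.add(item);
                if (buffer.size() == 2) flush();   // pretend the batch size is 2
            }
            void flush() { written.addAll(buffer); buffer.clear(); }
            @Override public void close() { flush(); }
            List<String> results() { return written; }
        }

        public static void main(String[] args) {
            List<Integer> events = List.of(1, 2, 3, 4, 5);

            // Pull style: a for-loop drives the pipeline by asking for each item.
            List<String> pulled = new ArrayList<>();
            for (int n : events) {
                if (n % 2 == 1) pulled.add("item-" + n);
            }

            // Push style: the event source drives; stages are composed back to front.
            BatchingSink sink = new BatchingSink();
            Consumer<Integer> pipeline =
                    filter((Integer n) -> n % 2 == 1,
                            map((Integer n) -> "item-" + n, sink));
            events.forEach(pipeline);

            // Without this shutdown/flush step, "item-5" would stay stuck in the buffer
            // and the push-style answer would silently disagree with the pull-style one.
            sink.close();

            System.out.println(pulled);          // [item-1, item-3, item-5]
            System.out.println(sink.results());  // [item-1, item-3, item-5]
        }
    }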

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

