I am reading this article https://www.frontiersin.org/articles/10.3389/fnins.2015.00492/full and thinking about how to create an Amazon EMR infrastructure with PySpark. Why is the GPU server not one of the nodes in the Apache Spark cluster? Or is this just an abstract view, and are the nodes also the GPUs?

This page summarizes the projects mentioned and recommended in the original post on /r/apachespark

  • spark-rapids

    Spark RAPIDS plugin - accelerate Apache Spark with GPUs

  • The spark-rapids project (https://github.com/NVIDIA/spark-rapids) lets you run multi-GPU ETL workloads on a Spark cluster. In such a setup, the GPU nodes are part of the Spark cluster. Multi-GPU nodes are viable, although an executor is currently limited to a single GPU. A configuration sketch follows below.

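A minimal PySpark sketch of how such a setup is typically wired together: the RAPIDS SQL plugin is loaded into ordinary Spark executors running on the GPU nodes, with one GPU per executor. The configuration keys come from the spark-rapids documentation; the jar availability (bundled on GPU-enabled EMR releases, otherwise added via spark.jars) and the per-task GPU share are assumptions about your cluster, not part of the original post.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("gpu-etl-example")
        # Load the RAPIDS SQL plugin so supported operators execute on the GPU.
        .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
        .config("spark.rapids.sql.enabled", "true")
        # Spark 3.x resource scheduling: one GPU per executor (the current limit),
        # shared here by four concurrent tasks.
        .config("spark.executor.resource.gpu.amount", "1")
        .config("spark.task.resource.gpu.amount", "0.25")
        .getOrCreate()
    )

    # A regular DataFrame job; the plugin transparently runs supported stages on the GPU.
    df = spark.range(0, 100_000_000).selectExpr("id", "id % 10 AS key")
    df.groupBy("key").count().show()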