How much does it actually cost, in terms of computing power, for OpenAI to respond?

This page summarizes the projects mentioned and recommended in the original post on /r/OpenAI

  • alpa

    Training and serving large-scale neural networks with auto parallelization.

  • alpa.ai states "You will need at least 350GB GPU memory on your entire cluster to serve the OPT-175B model. For example, you can use 4 x AWS p3.16xlarge instances, which provide 4 (instance) x 8 (GPU/instance) x 16 (GB/GPU) = 512 GB memory." (A back-of-the-envelope check of this arithmetic is sketched after this list.)

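The quoted requirement is just aggregate GPU memory arithmetic: instances × GPUs per instance × memory per GPU, compared against the 350 GB floor. As a rough illustration (not from the original post; the script and helper name are hypothetical), a few lines of Python reproduce the check:

```python
# Back-of-the-envelope check of the alpa.ai figures quoted above.
# Constants are the numbers from the quote; the helper is illustrative only.

REQUIRED_GB = 350          # alpa.ai: minimum cluster GPU memory to serve OPT-175B
GPUS_PER_INSTANCE = 8      # AWS p3.16xlarge: 8 GPUs per instance
GB_PER_GPU = 16            # 16 GB of memory per GPU

def cluster_gpu_memory_gb(num_instances: int) -> int:
    """Total GPU memory across the cluster, in GB."""
    return num_instances * GPUS_PER_INSTANCE * GB_PER_GPU

if __name__ == "__main__":
    instances = 4
    total = cluster_gpu_memory_gb(instances)   # 4 * 8 * 16 = 512 GB
    enough = "enough" if total >= REQUIRED_GB else "not enough"
    print(f"{instances} x p3.16xlarge -> {total} GB GPU memory "
          f"({enough} for the {REQUIRED_GB} GB requirement)")
```

Running it confirms that 4 instances provide 512 GB, comfortably above the 350 GB that alpa.ai quotes for OPT-175B serving.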
NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives, so a higher number means a more popular project.

Related posts

  • Alpa: Auto-parallelizing large model training and inference (by UC Berkeley)

    1 project | news.ycombinator.com | 23 Jun 2022
  • MatX: Faster Chips for LLMs

    2 projects | news.ycombinator.com | 5 Aug 2023
  • Run Llama2-70B in Web Browser with WebGPU Acceleration

    1 project | news.ycombinator.com | 24 Jul 2023
  • Ask HN: How to get good as a self taught ML engineer?

    1 project | news.ycombinator.com | 4 Jul 2023
  • Ask HN: What new programming language(s) are you most excited about?

    1 project | news.ycombinator.com | 2 Jul 2023