Relying on hosted LLM inference in production, such as via the OpenAI API, comes with challenges: API usage must be designed around unstable latency, rate limits, token counts, costs, and more. To make this observable, we've built tracing and monitoring specifically for AI apps. For example, the OpenAI Python library is instrumented automatically, with no setup required, and we'll be adding support for more libraries. If you'd like to give it a try, see https://github.com/graphsignal/graphsignal or the docs.
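Conceptually, automatic instrumentation amounts to wrapping each API call to record latency and the token usage reported in the response. Below is a minimal stdlib-only sketch of that idea; it is not Graphsignal's actual implementation, and the names `traced_completion` and `fake_llm_call` are hypothetical stand-ins for an instrumented client call.

```python
import time
from dataclasses import dataclass, field

@dataclass
class CallStats:
    """Aggregated stats for LLM API calls: latency and token usage."""
    latencies_ms: list = field(default_factory=list)
    prompt_tokens: int = 0
    completion_tokens: int = 0

    def record(self, latency_ms, usage):
        self.latencies_ms.append(latency_ms)
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

stats = CallStats()

def traced_completion(client_call, **kwargs):
    """Wrap an LLM API call, recording latency and token counts.

    `client_call` stands in for any callable that returns an
    OpenAI-style response dict with a `usage` field.
    """
    start = time.monotonic()
    response = client_call(**kwargs)
    latency_ms = (time.monotonic() - start) * 1000.0
    stats.record(latency_ms, response.get("usage", {}))
    return response

# Stub shaped like an OpenAI completion response, for demonstration.
def fake_llm_call(**kwargs):
    return {"choices": [{"text": "ok"}],
            "usage": {"prompt_tokens": 12, "completion_tokens": 5}}

traced_completion(fake_llm_call, prompt="hello")
```

In a real integration, the wrapping happens transparently by patching the client library, so application code does not change.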
NOTE:
The number of mentions on this list counts mentions on common posts plus user-suggested alternatives.
Hence, a higher number means a more popular project.
Related posts
- Show HN: Python Monitoring for LLMs, OpenAI, Inference, GPUs
- Show HN: Python Monitoring for AI: LLMs, OpenAI, Inference, GPUs
- [N] Monitor OpenAI API Latency, Tokens, Rate Limits, and More with Graphsignal
- [N] Easily profile FastAPI model serving