Why does this exist?

I've found myself checking the official OpenAI status dashboard when a model is slow or returning errors, but that dashboard usually shows all green even while I'm hitting API, Playground, or ChatGPT problems. The official status doesn't seem to capture non-catastrophic but still elevated rates of errors or slowness. This unofficial OpenAI status page fills that gap.

At a Glance

Compare current model performance with the previous two days of data. Colors reflect each value's percentile within that two-day window: lower values, shown in greener shades, indicate better performance. A rough sketch of the coloring logic follows the column list below.

For each model, the table reports: Time / 256 Tokens (Performance, Percentile), Difference, 2 Day Average, and Hourly Error Rate (Status).
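The coloring could be derived roughly as follows. This is a minimal sketch, assuming two days of per-model timing samples are kept; the function names, thresholds, and example values are illustrative, not this page's actual implementation.

```typescript
// Hypothetical sketch of percentile-based coloring.
// `samples` are completion times (seconds per 256 tokens) from the last two days;
// `current` is the latest measurement.

function percentileOf(current: number, samples: number[]): number {
  if (samples.length === 0) return 0;
  const atOrBelow = samples.filter((s) => s <= current).length;
  return (atOrBelow / samples.length) * 100;
}

function colorFor(percentile: number): string {
  // Lower percentile = faster than most of the last two days = greener.
  if (percentile <= 25) return "green";
  if (percentile <= 50) return "yellow-green";
  if (percentile <= 75) return "orange";
  return "red";
}

// Example: a 6.2 s run compared against recent history.
const history = [5.8, 6.0, 6.4, 7.1, 8.3, 9.0];
console.log(colorFor(percentileOf(6.2, history))); // "yellow-green"
```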

Completion Time / 256 Tokens

The time it takes for a model to generate 256 tokens. Lower values indicate better performance.

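As a rough illustration of how this measurement could be taken, the sketch below times a single non-streamed chat completion capped at 256 output tokens. The prompt, model name, and use of wall-clock timing are assumptions, not necessarily what this page runs.

```typescript
// Minimal sketch: wall-clock time for one chat completion capped at 256 tokens.
// Requires Node 18+ (global fetch) and OPENAI_API_KEY in the environment.

async function timeCompletion(model: string): Promise<number> {
  const start = Date.now();
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model,
      max_tokens: 256,
      messages: [{ role: "user", content: "Write a 300-word story." }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  await res.json(); // wait for the full (non-streamed) response body
  return (Date.now() - start) / 1000; // seconds to produce up to 256 tokens
}

timeCompletion("gpt-3.5-turbo").then((s) =>
  console.log(`${s.toFixed(2)} s / 256 tokens`)
);
```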

Errors

The hourly rate of errors returned by each model's API calls. Lower is better.

Latency

The response time of a call to the OpenAI API's models endpoint, measured from US-West. This reflects base API latency, independent of any model's generation time. Shorter durations are preferable.
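A minimal sketch of such a probe, assuming it times a lightweight request to the /v1/models endpoint (which involves no model inference); the helper name and output format are illustrative.

```typescript
// Sketch: base API latency measured as the round-trip time of a models-list call.
// Requires Node 18+ (global fetch) and OPENAI_API_KEY in the environment.

async function measureBaseLatency(): Promise<number> {
  const start = Date.now();
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  await res.json();
  return Date.now() - start; // milliseconds of API round-trip latency
}

measureBaseLatency().then((ms) => console.log(`API base latency: ${ms} ms`));
```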