Degraded Performance

Incident Report for Flow

Resolved

Our engineers have determined that the root cause of this outage was a spike in traffic between internal APIs, which caused cyclic crashes between two services within Flow as one repeatedly overwhelmed the other. We will be prioritizing work to distribute our API's boot-time request volume to prevent this manner of failure in the future.
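One common way to distribute boot-time request volume, as the mitigation above describes, is to add random jitter before a restarting service issues its startup requests, so that instances coming back up do not all hit a downstream API at the same instant. A minimal sketch (all names hypothetical, not Flow's actual code):

```python
import random
import time

def jittered_startup_delay(base_seconds=5.0, max_jitter=10.0, sleep=time.sleep):
    """Wait a randomized interval before issuing boot-time API requests.

    Random jitter spreads simultaneous restarts across a window,
    preventing a synchronized traffic spike against the downstream service.
    """
    delay = base_seconds + random.uniform(0.0, max_jitter)
    sleep(delay)
    return delay
```

Because each instance draws its own delay, a fleet restarting together spreads its boot-time requests over the jitter window instead of landing them all at once.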
Posted Oct 31, 2019 - 11:05 PDT

Update

All Flow services have returned to normal operation. We are still closely monitoring to ensure it stays that way, and will conduct a root cause analysis shortly. We appreciate your patience.
Posted Oct 31, 2019 - 10:11 PDT

Monitoring

Response times and error rates are returning to normal levels. We are closely monitoring the situation. Some notifications or chat messages may still be delayed as the backlog is processed.
Posted Oct 31, 2019 - 09:58 PDT

Update

Traffic to Flow Chat and notifications processing is slowly resuming. A hosting provider outage is limiting our ability to mitigate this issue in a timely manner.
Posted Oct 31, 2019 - 09:44 PDT

Update

Flow Chat and Notifications have been taken offline while we work to rectify the performance problems.
Posted Oct 31, 2019 - 09:26 PDT

Investigating

Flow is experiencing unusual load and some services may be slower to load than usual. We are investigating mitigations.
Posted Oct 31, 2019 - 08:54 PDT
This incident affected: API & Integrations, Messaging, and Notifications.