Degraded Performance
Incident Report for Flow
Resolved
Our engineers have determined that the root cause of this outage was a spike in traffic between internal APIs, which triggered cyclic crashes between two services within Flow as one repeatedly overwhelmed the other. We will be prioritizing work to distribute our API's boot-time request volume to prevent this manner of failure in the future.
Posted Oct 31, 2019 - 18:05 UTC
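The mitigation described above, spreading out boot-time request volume so one service cannot repeatedly overwhelm another on restart, is commonly implemented as exponential backoff with random jitter. The sketch below is illustrative only and makes no claim about Flow's actual implementation; the function name and parameters are assumptions.

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Illustrative "full jitter" backoff (not Flow's actual code).

    Returns a delay in seconds drawn uniformly from
    [0, min(cap, base * 2**attempt)], so a fleet of clients
    restarting at the same moment spreads its retries out
    instead of hammering the downstream API in lockstep.
    """
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0, ceiling)
```

Because each client picks an independent random delay, a simultaneous restart of many instances produces a smear of requests over time rather than a synchronized burst, which is what turns a recovering service's boot into a repeated crash loop.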
Update
All Flow services have returned to normal operation. We are still closely monitoring to ensure it stays that way, and will conduct a root cause analysis shortly. We appreciate your patience.
Posted Oct 31, 2019 - 17:11 UTC
Monitoring
Response times and error rates are returning to normal levels. We are closely monitoring the situation. Some notifications or chat messages may still be delayed as the backlog is processed.
Posted Oct 31, 2019 - 16:58 UTC
Update
Traffic to Flow Chat and notifications processing is slowly resuming. A hosting provider outage is inhibiting our ability to mitigate this issue in a timely manner.
Posted Oct 31, 2019 - 16:44 UTC
Update
Flow Chat and Notifications have been taken offline while we work to rectify the performance problems.
Posted Oct 31, 2019 - 16:26 UTC
Investigating
Flow is experiencing unusual load and some services may be slower to load than usual. We are investigating mitigations.
Posted Oct 31, 2019 - 15:54 UTC
This incident affected: Flow API, Flow Chat, and Flow Notifications.