As there have been no further occurrences of this issue in the past week, we are now closing this incident.
Jan 14, 13:39 EST
We've completed what we hope is the final deployment for lowering overall database load. We will continue to monitor for the next few hours before closing this incident.
Jan 11, 15:36 EST
Our monitors indicate that older manifests are now working as expected.
Jan 8, 15:25 EST
We've deployed additional changes to further reduce database load. Unfortunately, we've also encountered a bug in validation of some older manifests and are currently working to deploy a quick fix for that. During this time, pulls of these older manifests will block.
Jan 8, 14:29 EST
We are currently making changes to our database and expect to be back in service shortly.
Jan 2, 22:43 EST
We've been monitoring the occasional database outages over the last week. At this point, we're planning on making further changes to better support load in hopes that the lockups will stop.
Jan 2, 15:09 EST
We have seen no additional recurrences of the issue in the past 10 hours or so, but we will continue to monitor as we await a response from our database provider.
Dec 21, 16:52 EST
We've had another instance of the database losing its ability to process queries. We have deployed further changes to reduce database load and will continue to monitor.
Dec 21, 00:33 EST
We've deployed a change to reduce load on the database in the hope of preventing the massive lockups that we've been seeing. We'll continue to monitor and report.
At this time we'd like to once again apologize to our customers for these problems. We know you trust us to manage and deploy your software, and we recognize how extremely frustrating it must be to encounter these issues.
Dec 20, 19:51 EST
We're continuing to see spikes of timeouts, locks and overall erratic behavior on the database server. We're currently investigating workarounds to reduce load on the system while simultaneously looking for issues on the database server itself.
Dec 20, 19:34 EST
All locks have now resolved; we're continuing to monitor and investigate.
Dec 20, 18:23 EST
Unfortunately, the database restart did not solve the problem. We're continuing to investigate why the database locks up for random intervals. We'll continue updating as we find more information.
Dec 20, 17:24 EST
We've failed our primary database over to its secondary and are currently rescaling the fleet. We will continue to monitor for any unusual spikes or problems.
Dec 20, 16:43 EST
The database problems we thought we mitigated earlier have returned. We're currently investigating.
Dec 20, 16:35 EST