Our engineers are performing updates on the MyKinsta dashboard. We’re deeply sorry for any inconvenience. We’re working hard to get it back online soon. Thank you for your patience.
Updates have been successfully completed. Thank you for your patience.
Our engineers are investigating the issue.
This incident has been resolved.
We are investigating reports of an issue with one of our US-based Google Cloud Compute Engine instances.
The issue has been quickly resolved. Sorry for the inconvenience.
We are investigating reports of an issue with one of our US-based Google Cloud Compute Engine instances. We will publish an update as soon as we know more.
March 15, 1:00 pm
Google Compute Engine instances in us-central zones are experiencing a networking issue. Incident #17006
March 15, 2:12 pm
Google Compute Engine has restored internet connectivity. They are taking the necessary actions to prevent a recurrence. We apologize for the inconvenience.
One of our load balancers was producing intermittent connectivity issues for a number of sites. This was resolved within 15 minutes. We apologize for the inconvenience.
AWS is currently investigating slow propagation of DNS edits to the Route 53 DNS servers.
Record changes are propagating as AWS works through the backlog of pending changes that has accumulated. They expect full recovery to take several more hours and are continuing to throttle ChangeResourceRecordSets API calls.
Queries to existing DNS records remain unaffected.
For more details, please visit Amazon’s status page: https://status.aws.amazon.com/.
Adding new sites on the Kinsta dashboard is disabled until AWS resolves the issue. Please contact our support team for further help.
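For reference, once AWS lifts the throttling, the status of an individual record change can be checked with the Route 53 GetChange API. Below is a minimal sketch using boto3; the change ID is a placeholder, not a real change from this incident:

```python
import boto3

# Minimal sketch: poll a previously submitted Route 53 record change.
# "C2682N5HXP0BZ4" is a placeholder change ID returned by
# ChangeResourceRecordSets; substitute your own.
route53 = boto3.client("route53")

response = route53.get_change(Id="C2682N5HXP0BZ4")
status = response["ChangeInfo"]["Status"]  # "PENDING" until propagated, then "INSYNC"

print(f"Change status: {status}")
```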
March 14, 2:54 pm
All DNS update operations are now completing successfully. The ability to add new sites on the Kinsta dashboard has been restored.
Earlier this morning, Amazon Web Services started experiencing outages in some regions. Because many services, plugins, and dependencies rely on Amazon, this may result in failing scripts and/or timeouts on your WordPress site.
Amazon is actively working on restoring services; you can see more here:
AWS has restored services across most regions. We will continue to monitor the status.
The issue has now been resolved by AWS.
We are aware of a massive DDoS attack which is targeting our infrastructure.
It took a few minutes, but we’ve stabilized things now and everything is back to normal.
The host machine for one of our EU-based Google Cloud load balancer instances shut down unexpectedly, and the instance had to be restarted on a new host machine. Google Cloud usually live-migrates instances from one host machine to another without downtime; however, this was a serious hardware failure that didn’t allow the process to complete in time. The new load balancer instance was up and running 4 minutes after our engineers were notified, and after running the necessary tests to ensure its consistency, it began serving requests 7 minutes after the initial compute.instances.hostError event. We apologize for the inconvenience.
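As background, hostError events like this one are recorded in Compute Engine’s system event audit logs, so they can be watched for programmatically. Here is a rough sketch using the google-cloud-logging client; the project ID is a placeholder:

```python
from google.cloud import logging

# Rough sketch: list recent compute.instances.hostError events from the
# Compute Engine system event audit log. "my-project-id" is a placeholder.
client = logging.Client(project="my-project-id")

log_filter = (
    'logName="projects/my-project-id/logs/'
    'cloudaudit.googleapis.com%2Fsystem_event" '
    'AND protoPayload.methodName="compute.instances.hostError"'
)

for entry in client.list_entries(filter_=log_filter):
    print(entry.timestamp, entry.resource.labels.get("instance_id"))
```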
One of our load balancers was overwhelmed, and we resolved the issue as quickly as possible. The downtime was around 5 minutes, and our engineers have taken the necessary steps to prevent similar incidents in the future. We apologize for the inconvenience!
The issue has been resolved.
One of our host machines (which had been running for an extended period of time without any issues) experienced a kernel crash because of a bug in the Btrfs driver. After a quick assessment of the situation, we decided to hard-reboot the instance and restore the running containers one by one, checking data consistency as we went.
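Purely as an illustration of that per-container consistency check (the volume paths and layout below are hypothetical, not our actual setup), a check of this kind can be scripted along these lines:

```python
import subprocess

# Hypothetical sketch: scrub each container volume and only bring the
# container back once the scrub reports no errors.
volumes = ["/var/lib/containers/site-a", "/var/lib/containers/site-b"]

for volume in volumes:
    # "-B" runs the scrub in the foreground so the exit code is meaningful.
    result = subprocess.run(["btrfs", "scrub", "start", "-B", volume])
    if result.returncode == 0:
        print(f"{volume}: scrub clean, safe to restart the container")
    else:
        print(f"{volume}: scrub reported errors, keeping the container offline")
```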
We have reported the bug to the official Linux kernel development list, and the maintainers have confirmed that it is indeed a bug in the Btrfs driver.
We are awfully sorry for the inconvenience. We are working together with the developers to uncover the cause and to patch it as soon as possible!
Thanks for your understanding!
The issue has been resolved.
Google Cloud shut one of our VMs down without notice. One of the disks attached to the machine experienced a complete failure, and thus we are unable to restart the machine. Google Cloud is working on restoring the disk.
July 28, 3:44 pm
We are continuing to experience an issue with one of our US-based Google Compute Engine instances. We apologize for any inconvenience affected clients are experiencing. Google Cloud continues to provide updates to us as they work through this.
July 28, 4:32 pm
Google Cloud has determined the cause of the failure and is working to replace the disk. Access to our snapshots is also blocked at the moment. We apologize for any inconvenience affected clients are experiencing.
July 28, 7:05 pm
The affected disk is no longer reporting errors, and we have started creating a snapshot of the VM instance. We are also in the process of booting the machine now.
July 28, 7:10 pm
Snapshot creation took about 5 minutes, and once it finished we performed a manual hard reboot of the affected VM instance. We sincerely apologize for any inconvenience caused. We are re-architecting our entire system to prevent a situation like this in the future.
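For context, that snapshot-then-reboot sequence maps onto two Compute Engine API calls. Below is a simplified sketch using the google-cloud-compute Python client; the project, zone, and resource names are placeholders:

```python
from google.cloud import compute_v1

# Placeholders: substitute real project, zone, and resource names.
project, zone = "my-project-id", "us-central1-a"
disk_name, instance_name = "affected-disk", "affected-vm"

# 1. Snapshot the affected disk before touching the instance.
disks = compute_v1.DisksClient()
snapshot = compute_v1.Snapshot(name=f"{disk_name}-pre-reboot")
disks.create_snapshot(
    project=project, zone=zone, disk=disk_name, snapshot_resource=snapshot
).result()  # block until the snapshot operation completes

# 2. Hard-reset the VM instance once the snapshot exists.
instances = compute_v1.InstancesClient()
instances.reset(project=project, zone=zone, instance=instance_name).result()
```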