System Status

March 29, 2017

Intermittent Connectivity on Google Cloud US-Based VM Instance

Resolved

We are investigating reports of an issue with one of our US-based Google Cloud Compute Engine instances.

Updates

The issue was quickly resolved. We apologize for the inconvenience.

March 15, 2017

Google Compute Engine Experiencing Disruption

Resolved

We are investigating reports of an issue with one of our US-based Google Cloud Compute Engine instances. We will publish an update as soon as we know more.

Updates

March 15, 1:00 pm

Google Compute Engine instances in the us-central zones are experiencing a networking issue (Google incident #17006).

March 15, 2:12 pm

Google Compute Engine has restored internet connectivity, and Google is taking the necessary actions to prevent a recurrence. We apologize for the inconvenience.

March 14, 2017

Intermittent Connectivity on Load Balancer

Resolved

One of our load balancers was producing intermittent connectivity issues for a number of sites. This was resolved within 15 minutes. We apologize for the inconvenience.

March 14, 2017

AWS Slow DNS Propagation

Resolved

AWS is currently investigating slow propagation of DNS edits to the Route 53 DNS servers.

Record changes are propagating as AWS works through the backlog of pending changes that has accumulated. AWS expects full recovery to take several more hours and is continuing to throttle ChangeResourceRecordSets API calls.
Queries to existing DNS records remain unaffected.

For more details, please visit Amazon’s status page: https://status.aws.amazon.com/.

Adding new sites on the Kinsta dashboard is disabled until AWS resolves the issue; please contact our support team for further help.
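For reference, the record edits that were backing up are the kind submitted through the ChangeResourceRecordSets call. The following is only a minimal sketch, not our production code, of such an update with a simple backoff when the call is throttled; it uses boto3, and the hosted zone ID, record name, and IP address are placeholders.

import time

import boto3
from botocore.exceptions import ClientError

route53 = boto3.client("route53")


def upsert_a_record(zone_id, name, ip, retries=5):
    """Submit an UPSERT via ChangeResourceRecordSets, backing off when Route 53 throttles the call."""
    change_batch = {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }
    for attempt in range(retries):
        try:
            return route53.change_resource_record_sets(
                HostedZoneId=zone_id, ChangeBatch=change_batch
            )
        except ClientError as err:
            # Retry only when Route 53 reports throttling; re-raise anything else.
            if err.response["Error"]["Code"] not in ("Throttling", "ThrottlingException"):
                raise
            time.sleep(2 ** attempt)
    raise RuntimeError("ChangeResourceRecordSets still throttled after retries")


# Example call (placeholder values):
# upsert_a_record("Z0000000EXAMPLE", "www.example.com.", "203.0.113.10")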

Updates

March 14, 2:54 pm

All DNS update operations are now completing successfully. The ability to add new sites on the Kinsta dashboard has now been restored.

February 28, 2017

AWS Outage

Resolved

Earlier this morning, Amazon Web Services started experiencing outages in some regions. Because many services, plugins, and dependencies rely on Amazon, this may result in script errors and/or timeouts on your WordPress site.

Amazon is actively working on restoring services; you can see more here:
https://status.aws.amazon.com/

Updates

AWS has restored services across most regions. We will continue to monitor the status.

The issue has now been resolved by AWS.

February 15, 2017

DDoS Attack

Resolved

We are aware of a massive DDoS attack targeting our infrastructure.

Updates

It took a few minutes, but things have now stabilized and everything is back to normal.

January 20, 2017

Google Cloud EU Load Balancer Outage

Resolved

The host machine of one of our Google Cloud EU-based load balancer instances shut down unexpectedly, and the instance had to be restarted on a new host machine. Google Cloud usually live-migrates instances from one host machine to another without downtime; however, this was a serious hardware failure that didn't allow the process to complete in time. The new load balancer instance was up and running 4 minutes after our engineers were notified, and after the necessary tests were run to ensure its consistency, it began serving requests 7 minutes after the initial compute.instances.hostError event. We apologize for the inconvenience.
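For context, the compute.instances.hostError event mentioned above is recorded as a zone operation, so affected instances can be found by filtering the operations list. The snippet below is only an illustrative sketch using the google-cloud-compute client; the project and zone values are placeholders.

from google.cloud import compute_v1  # pip install google-cloud-compute


def list_host_errors(project: str, zone: str):
    """List compute.instances.hostError operations in a zone (hardware failures on the host)."""
    client = compute_v1.ZoneOperationsClient()
    request = compute_v1.ListZoneOperationsRequest(
        project=project,
        zone=zone,
        filter='operationType="compute.instances.hostError"',
    )
    for op in client.list(request=request):
        print(op.name, op.insert_time, op.target_link)


# Example call (placeholder values):
# list_host_errors("my-project", "europe-west1-b")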

December 16, 2016

Google Cloud US-Based Load Balancer Outage

Resolved

One of our load balancers became overwhelmed, and we resolved the issue as quickly as possible. The downtime was around 5 minutes, and our engineers have taken the necessary steps to avoid such incidents in the future. We apologize for the inconvenience!

Updates

The issue has been resolved.

September 15, 2016

Google Cloud US-Based Virtual Machine – Kernel Bug

Resolved

One of our host machines (which had been running for an extended period of time without any issues) experienced a kernel crash caused by a bug in the Btrfs driver. After a quick assessment of the situation, we decided to hard-reboot the instance and restore the running containers one by one, checking data consistency as we went.
We have reported the bug to the official Linux kernel development mailing list, and they confirmed that the bug is in the Btrfs driver.
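As an illustration of the kind of consistency check that can be run on a Btrfs volume after a hard reboot, a scrub verifies the checksums of all data and metadata. This is only a sketch; the mount point is a placeholder and the code is not tied to our internal tooling.

import subprocess


def scrub_btrfs(mount_point: str) -> bool:
    """Run a foreground Btrfs scrub and report whether it completed without errors."""
    # `btrfs scrub start -B` runs the scrub in the foreground and exits
    # non-zero if uncorrectable errors were found.
    result = subprocess.run(
        ["btrfs", "scrub", "start", "-B", mount_point],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode == 0


# Example call (placeholder mount point):
# scrub_btrfs("/mnt/containers")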

We are very sorry for the inconvenience; we are working together with the kernel developers to uncover the root cause and patch it as soon as possible!

Thanks for your understanding!

Updates

The issue has been resolved.

July 28, 2016

Google Cloud US-Based Virtual Machine Failure

Resolved

Google Cloud shut one of our VMs down without notice. One of the disks attached to the machine experienced a complete failure, and thus we are unable to restart the machine. Google Cloud is working on restoring the disk.

Updates

July 28, 3:44 pm

We are continuing to experience an issue with one of our US-based Google Compute Engine instances. We apologize for any inconvenience affected clients are experiencing. Google Cloud continues to provide updates to us as they work through this.

July 28, 4:32 pm

Google Cloud has determined the reason for the failure and is working to replace the disk. Access to our snapshots is also blocked for now. We apologize for any inconvenience affected clients are experiencing.

July 28, 7:05 pm

The affected disk is no longer showing errors, and we have started creating a snapshot of the VM instance. We are also in the process of booting the machine.

July 28, 7:10 pm

The snapshot creation took about 5 minutes, and once it finished we performed a manual hard reboot on the affected VM instance. We sincerely apologize for any inconvenience caused. We are re-architecting our entire system to prevent a situation like this in the future.
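For reference, the snapshot-then-hard-reboot sequence described above maps onto two Compute Engine API calls. The snippet below is a hedged sketch using the google-cloud-compute client, not a record of the exact commands we ran; the project, zone, disk, and instance names are placeholders.

from google.cloud import compute_v1  # pip install google-cloud-compute


def snapshot_then_reset(project: str, zone: str, disk: str, instance: str):
    """Create a disk snapshot, wait for it to finish, then hard-reset the instance."""
    disks = compute_v1.DisksClient()
    snapshot = compute_v1.Snapshot(name=f"{disk}-emergency-snapshot")  # placeholder name
    # Block until the snapshot operation completes before touching the instance.
    disks.create_snapshot(
        project=project, zone=zone, disk=disk, snapshot_resource=snapshot
    ).result()

    instances = compute_v1.InstancesClient()
    # reset() performs a hard reset, the API equivalent of the manual hard reboot above.
    instances.reset(project=project, zone=zone, instance=instance).result()


# Example call (placeholder values):
# snapshot_then_reset("my-project", "us-central1-a", "data-disk-1", "web-host-1")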

July 26, 2016

Google Cloud US-Based VM Instance: Degraded Disk Performance

Resolved

We are investigating reports of an issue with one of our US-based Google Cloud Compute Engine instances. We will publish an update as soon as we know more.

Updates

July 26, 2:10 pm

We are experiencing an issue with one of our US-based Google Compute Engine instances.

For everyone who is affected, we apologize for any inconvenience you may be experiencing. We are already working on tracking down the issue.

July 26, 7:28 pm

The issue with the US-based Google Compute Engine instance has been identified. The root cause is degraded disk performance in one of the ZFS pools on the VM instance, which may cause intermittent service for clients whose sites are hosted on this machine.

We are already working on resolving the issue and making appropriate improvements to prevent or minimize future recurrence.
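For illustration, degraded disk performance in a ZFS pool of the kind described above is usually visible in the pool's health output. The following sketch simply wraps zpool status; the pool name is a placeholder and this is not our internal monitoring code.

import subprocess


def zpool_is_healthy(pool: str) -> bool:
    """Return True if `zpool status` reports the pool as healthy."""
    result = subprocess.run(
        ["zpool", "status", "-x", pool],
        capture_output=True,
        text=True,
    )
    # `zpool status -x` prints "pool '<name>' is healthy" when nothing is wrong;
    # otherwise it prints the status of the degraded or faulted pool.
    return "is healthy" in result.stdout


# Example call (placeholder pool name):
# zpool_is_healthy("tank")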

July 27, 2:50 am

After identifying the root cause, we performed a manual hard reboot on the affected VM instance. The total downtime during the emergency maintenance was 60 minutes.

To prevent future incidents, we'll live-migrate the affected containers to another host machine. The process won't affect your sites in any way; there won't be any downtime.

We sincerely apologize for falling below the level of service you rely on!

April 22, 2016

Legacy VPS Servers: Network Connectivity Issues In Frankfurt And Tokyo

Resolved

The server provider for our legacy VPS architecture is currently investigating issues affecting sites hosted in the Frankfurt and Tokyo data centers, and possibly also in the Dallas data center.

Updates

April 22, 6:24 pm

The connectivity issue has been resolved.