September 15, 2016

Google Cloud US-Based Virtual Machine – Kernel Bug
Resolved

One of our host machines, which had been running for an extended period without any issues, experienced a kernel crash caused by a bug in the Btrfs driver. After a quick assessment of the situation, we decided to hard reboot the instance and restore the running containers one by one, checking data consistency as we went.
We have reported the bug to the official Linux kernel development mailing list, and the maintainers have confirmed that it is a bug in the Btrfs driver.
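
For those interested in the technical details, data consistency on a mounted Btrfs filesystem can be verified with a scrub; below is a minimal sketch of such a check (the mount points are illustrative placeholders, not our actual container volumes):

    import subprocess

    # Illustrative mount points only; the real container volumes differ.
    MOUNT_POINTS = ["/mnt/container-a", "/mnt/container-b"]

    for mount in MOUNT_POINTS:
        # "btrfs scrub start -B" runs a foreground scrub that re-reads all data
        # and metadata on the mounted filesystem and verifies their checksums.
        result = subprocess.run(
            ["btrfs", "scrub", "start", "-B", mount],
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            # A non-zero exit code means the scrub could not complete cleanly.
            print(f"Scrub reported problems on {mount}:\n{result.stderr}")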

We are very sorry for the inconvenience. We are working together with the kernel developers to uncover the root cause and get it patched as soon as possible!

Thanks for your understanding!

Updates

The issue has been resolved.

July 28, 2016

Google Cloud US-Based Virtual Machine Failure
Resolved

Google Cloud shut down one of our VMs without notice. One of the disks attached to the machine experienced a complete failure, and as a result we are unable to restart the machine. Google Cloud is working on restoring the disk.

Updates

July 28, 3:44 pm

We are continuing to experience an issue with one of our US-based Google Compute Engine instances. We apologize for any inconvenience affected clients are experiencing. Google Cloud continues to provide us with updates as they work through this.

July 28, 4:32 pm

Google Cloud has determined the reason for the failure and is working to replace the disk. Access to our snapshots is also blocked at the moment. We apologize for any inconvenience affected clients are experiencing.

July 28, 7:05 pm

The affected disk is no longer reporting errors, and we have started creating a snapshot of the VM instance. We are also in the process of booting the machine.

July 28, 7:10 pm

The snapshot took about 5 minutes to create, and once it was finished we performed a manual hard reboot of the affected VM instance. We sincerely apologize for any inconvenience caused. We are re-architecting our entire system to prevent situations like this in the future.
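
For reference, the same snapshot-then-reboot sequence can be driven with the gcloud CLI; the sketch below uses placeholder disk, instance, and zone names rather than our actual resources:

    import subprocess

    # Placeholder resource names; the real instance, disk, and zone differ.
    ZONE = "us-central1-a"
    INSTANCE = "affected-vm"
    DISK = "affected-disk"

    # Snapshot the attached persistent disk first, so there is a restore point.
    subprocess.run(
        ["gcloud", "compute", "disks", "snapshot", DISK,
         "--zone", ZONE, "--snapshot-names", f"{DISK}-pre-reboot"],
        check=True,
    )

    # Hard reboot the instance; "reset" is the equivalent of a power cycle.
    subprocess.run(
        ["gcloud", "compute", "instances", "reset", INSTANCE, "--zone", ZONE],
        check=True,
    )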

July 26, 2016

Google Cloud US-Based VM Instance – Degraded Disk Performance
Resolved

We are investigating reports of an issue with one of our US-based Google Compute Engine instances. We will publish an update as soon as we know more.

Updates

July 26, 2:10 pm

We are experiencing an issue with one of our US-based Google Compute Engine instances.

For everyone who is affected, we apologize for any inconvenience you may be experiencing. We are already working on tracking down the issue.

July 26, 7:28 pm

The issue with the US-based Google Compute Engine instance has been identified. The root cause is degraded disk performance in one of the ZFS pools on the VM instance, which may cause intermittent service for clients whose sites are hosted on this machine.

We are already working on resolving the issue and making appropriate improvements to prevent or minimize future recurrence.
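
For context, this kind of degradation is what routine ZFS health checks look for; a minimal sketch of such a check is below (it only inspects pool health and makes no changes):

    import subprocess

    # "zpool status -x" prints "all pools are healthy" when nothing is wrong,
    # and otherwise lists only the pools that are degraded or faulted.
    result = subprocess.run(
        ["zpool", "status", "-x"],
        capture_output=True,
        text=True,
    )
    output = result.stdout.strip()

    if "all pools are healthy" not in output:
        print("Degraded or faulted ZFS pool detected:")
        print(output or result.stderr)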

July 27, 2:50 am

After identifying the root cause, we performed a manual hard reboot of the affected VM instance. The total downtime during the emergency maintenance was 60 minutes.

To prevent future incidents, we’ll live-migrate the affected containers to another host machine. The process won’t affect your sites in any way, and there won’t be any downtime.

We sincerely apologize for falling below the level of service you rely on!

April 22, 2016

Legacy VPS Servers: Network Connectivity Issues In Frankfurt And Tokyo
Resolved

The server provider for our legacy VPS architecture is currently investigating issues affecting sites hosted in the Frankfurt and Tokyo data centers, and possibly in the Dallas data center as well.

Updates

April 22, 6:24 pm

The connectivity issue has been resolved.

February 02, 2016

Network Connectivity Issue: Atlanta
Resolved

Our server provider is currently investigating network connectivity issues in our Atlanta data center. We’re keeping an eye on the issue and will update you as soon as anything changes.

January 01, 2016

Network Connectivity Issue: Atlanta
Resolved

Our server provider is currently investigating network connectivity issues in our Atlanta data center. We’re in continuous contact with them and will have updates for you soon.

Updates

January 1, 8:44 pm

This is a distributed denial-of-service (DDoS) attack targeting the infrastructure in Atlanta. There is no ETA at the moment, but their engineers and leadership team are working closely to resolve the issue as soon as they can.

January 2, 3:44 am

Here’s an update from our server provider: “Our network operations and systems teams have been working non-stop for the last ~36 hours toward a resolution of the Atlanta outage. We have acquired a dedicated transit link that is now directly connected to the Linode network, and we are waiting for this transit provider to apply DDoS mitigation hardening, after which we believe that Atlanta should be restored to full service.”

January 2, 9:23 am

Our server provider believes they have closed all of the attack vectors that could lead to a DDoS taking down the entire Atlanta data center.