If you don’t have a second to spare, you soon will! On June 30, 2015 at precisely 23:59:60 UTC, the world will experience its 26th recorded leap second. It will be the third one experienced by Google. If you use Google Compute Engine, you need to be aware of how leap seconds can affect you.

What is a leap second?
It's sort of like a very small leap year. Generally, the Earth's rotation slows down over time, thus lengthening the day. In leap years, we add an extra day in February to sync the calendar year back up with the astronomical year. Similarly, an extra second is occasionally added to bring coordinated universal time in line with mean solar time.  Leap seconds in Unix time are commonly implemented by repeating the last second of the day.

When do leap seconds happen?
By convention, leap seconds happen at the end of either June or December. However, unlike leap years, leap seconds do not happen at regular intervals, because the Earth's rotation speed varies irregularly in response to climatic and geological events. For example, the 2011 earthquake in Japan shortened the day by 1.8 microseconds by speeding up the Earth's rotation.  

How does Google handle this event?
We have a clever way of handling leap seconds that we first posted about back in 2011. Instead of repeating a second, we “smear” away the extra second. During a 20-hour “smear window” centered on the leap second, we slightly slow all our servers’ system clocks (by approximately 14 parts per million). At the end of the smear window, the entire leap second has been added and we are back in sync with civil time. (This method is a little simpler than the leap second handling we described in 2011, but the outcome is the same: no time discontinuities.)
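To make the rate concrete: spreading one second over 20 hours (72,000 seconds) works out to roughly 14 parts per million. Below is a minimal sketch of what a linear smear of that shape looks like; the window size and dates come from this post, but the code itself is only an illustration, not our production implementation.

```python
# Illustrative sketch of a linear leap-second smear (not Google's production code).
# The 20-hour window centered on the June 30, 2015 leap second comes from the post;
# the simple linear ramp is an assumption for illustration.
from datetime import datetime, timedelta, timezone

SMEAR_CENTER = datetime(2015, 7, 1, 0, 0, 0, tzinfo=timezone.utc)  # just after 23:59:60 UTC
WINDOW = timedelta(hours=20)
START = SMEAR_CENTER - WINDOW / 2
END = SMEAR_CENTER + WINDOW / 2

def smear_offset(t: datetime) -> float:
    """Seconds by which smeared clocks lag un-smeared UTC at time t."""
    if t <= START:
        return 0.0
    if t >= END:
        return 1.0
    return (t - START) / WINDOW  # ramps from 0 to 1 second, i.e. ~14 ppm slowdown

# Halfway through the window, smeared clocks are 0.5 s behind.
print(smear_offset(SMEAR_CENTER))  # 0.5
```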

Why do we smear the extra second?
Any system that depends on careful sequencing of events can run into problems if it sees a repeated second. The problem is accentuated in multi-node distributed systems, because a one-second jump dramatically magnifies time-sync discrepancies between nodes. Imagine two events going into a database under the same timestamp (or, even worse, the later one being recorded under an earlier timestamp) when in reality one follows the other. How would you know later what the real sequence was? Most software isn't written to explicitly handle leap seconds, including most of ours. During the 2005 leap second, we noticed various problems like this with our internal systems. Instead of changing every piece of software that uses time to handle leaps correctly, we make leaps invisible by adding a little bit of the extra second to our servers' clocks over the course of a day, rather than all at once.
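To make that concrete, here is a toy illustration (ours, not from the post) of how a repeated 23:59:59 can invert the apparent order of two writes:

```python
# Toy example: during a classic leap second, Unix time repeats 23:59:59 UTC,
# so an event that happened later can end up with an *earlier* timestamp.
events = [
    ("write A", 1435708799.7),  # 23:59:59.7 UTC, first pass through :59
    ("write B", 1435708799.2),  # 23:59:59.2 UTC again, during the repeated second
]

# Sorting by timestamp now puts B before A, even though A happened first.
print(sorted(events, key=lambda event: event[1]))
```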

What services does this apply to on Google Cloud Platform?
Only virtual machines running on Google Compute Engine are affected by the time smear, because they are the only part of the platform where you manage time synchronization yourself. All other Google Cloud Platform services are unaffected; we take care of time synchronization for you.

How will I be affected?
All of our Compute Engine services will automatically receive this “smeared” time, so if you are using the default NTP service (metadata.google.internal) or the system clock, everything is taken care of for you automatically (note that the default NTP service does not set the Leap Indicator bit). If, however, you are using an external time service, you may see a full-second “step”, or perhaps several small steps. We don’t control how external NTP services handle the leap second, so we can’t say exactly how your clocks will be kept in sync. If you use an external NTP service with your Compute Engine virtual machines, you should understand how those time sources handle the leap second and how that behavior might affect your applications and services. If possible, avoid using external NTP sources on Compute Engine during the leap event.
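If you want to see for yourself how your VM’s clock relates to different sources during the smear, you can query each one and compare the reported offsets. The sketch below is purely for observation (it does not change your sync configuration) and uses the third-party ntplib package, which this post does not cover; run it from inside a Compute Engine instance so that metadata.google.internal resolves.

```python
# Hedged sketch: compare the offset this VM sees against Google's smeared internal
# NTP server and against an external (non-smeared) pool. Requires the third-party
# ntplib package and must run inside a Compute Engine VM.
import ntplib

SERVERS = ["metadata.google.internal", "pool.ntp.org"]

def report_offsets():
    client = ntplib.NTPClient()
    for server in SERVERS:
        try:
            response = client.request(server, version=3, timeout=5)
            # A positive offset means the server's clock is ahead of this VM's clock.
            print(f"{server}: offset {response.offset:+.6f} s, "
                  f"leap indicator {response.leap}")
        except ntplib.NTPException as exc:
            print(f"{server}: query failed ({exc})")

if __name__ == "__main__":
    report_offsets()
```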

The worst possible configuration during a leap second is to use a mixture of non-smearing and smearing NTP servers (or servers that smear differently): behavior will be undefined, but probably bad.

If you run services on both Google Compute Engine and other providers that do not smear leap seconds, you should be aware that your services can see discrepancies in time during the leap second.

What is Google's NTP service?
From inside a virtual machine running on Google Compute Engine, you can use metadata.google.internal. You can also just use the system clock, which is automatically synced with the smeared leap second. Google does not offer an external NTP service that advertises smeared leap seconds.

You can find documentation about configuring NTP on Compute Engine instances here. If you need any assistance, please visit the Help & Support center.

-Posted by Noah Maxwell & Michael Rothwell, Site Reliability Engineers

Back in March, we announced the availability of Google Cloud Launcher where (at the time) you could launch more than 120 popular open source application packages that have been configured by Bitnami or Google Click to Deploy. Since then, we have received many customer requests for additional solutions. We heard you!

Today, less than three months after launch, we have added 25 new solutions to Cloud Launcher. Recent additions include Chef, Crate, OpenCart, Sharelock, Codiad, and SimpleInvoices, and new solutions are being added on an ongoing basis.

We are also announcing the addition of 14 new operating systems to Cloud Launcher, including Windows, Ubuntu, Red Hat, SUSE, Debian, and CentOS. Moreover, we’ve simplified the initial creation flow to make things even faster and simpler.
Figure 1 - The updated Cloud Launcher operating system section

To help users compare these solutions, we’ve updated the Cloud Launcher interface with detailed information on pricing, operating system support, and free trial availability.

Figure 2 - The updated Cloud Launcher detailed solution interface

And finally, in line with our vision of providing customers with complete solutions that can be rapidly deployed, Google Cloud Monitoring is now integrated out of the box with 50 solutions. Built-in reports for components such as MySQL, Apache, Cassandra, Tomcat, PostgreSQL, and Redis provide DevOps teams with an integrated view into their applications.
Figure 3 - Google Cloud Monitoring Dashboard for Apache Web Server

You can get started with Cloud Launcher today to launch your favorite application packages on Google Cloud Platform in a matter of minutes. And do remember to give us feedback via the links in Cloud Launcher or join our mailing list for updates and discussions. Enjoy building!

- Posted by Ophir Kra-Oz, Group Product Manager

We know you have a choice of public cloud providers – and choosing the best fit for your application or workload can be a daunting task. Customers like Avaya, Snapchat, Ocado and Wix have selected Google Cloud Platform because of our innovation and proven performance, combined with flexible pricing models. We’ve recently made headlines for our latest product introductions like Google Cloud Storage Nearline and Google Cloud Bigtable, and today, we’re also raising the bar with our pricing options.

Compared to other public cloud providers, Google Cloud Platform is now 40% less expensive for many workloads. Starting today, we are reducing prices of all Google Compute Engine Instance types as well as introducing a new class of preemptible virtual machines that delivers short-term capacity for a very low, fixed cost. When combined with our automatic discounts, per-minute billing, no penalties for changing machine types, and no need to enter into long-term fixed-price commitments, it’s easy to see why we’re leading the industry in price/performance.

Price Reductions

Last year, we committed that Google Cloud Platform prices will follow Moore’s Law, and effective today we’re reducing prices of virtual machines by up to 30%.

Configuration    US Price Reduction
Standard         20%
High Memory      15%
High CPU         5%
Small            15%
Micro            30%

The price reductions in Europe and Asia are similar. Complete details on our compute pricing are available on our Compute Engine pricing page.
We have continued to lower our pricing since Google Compute Engine launched in November 2013; together, these price cuts have reduced VM prices by more than half.

Introducing Google Compute Engine Preemptible VMs

For some applications we can do even better: if your workload is flexible, our new Preemptible VMs will run your short-duration batch jobs 70% cheaper than regular VMs. Preemptible VMs are identical to regular VMs, except availability is subject to system supply and demand. Since we run Preemptible VMs on resources that would otherwise be idle, we can offer them at substantially reduced costs. Customers such as Descartes Labs have already found them to be a great option for workloads like Hadoop MapReduce, visual effects rendering, financial analytics, and other computationally expensive workloads.

Importantly, unlike other clouds’ Spot Instances, the price of Preemptible VMs is fixed, making their costs predictable.

Regular n1-standard-1    Preemptible n1-standard-1    Savings
$0.050/hour              $0.015/hour                  70%

For further information about Preemptible VM pricing, please visit our website.

Google Cloud Platform costs 40% less for many workloads vs. other public cloud providers

Our continued price/performance leadership goes well beyond list prices. Our combination of sustained use discounting, no prepaid lock-in, and per-minute billing offers users a structural price advantage that becomes apparent when we consider real-world applications. Consider a typical web application or mobile backend. Its development environment supports software builds and tests, presenting a bursty, daytime load on cloud computing resources. The production environment handles actual user traffic, with a diurnal cycle of demand, aggregate growth over time, and a larger overall footprint than the development environment. The development environment benefits from per-minute billing because it can be turned on and off quickly and you only pay for what you use. The production environment benefits from sustained use discounting, an additional discount of up to 30% with no upfront fee or commitment, because it always needs to be on.
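As a toy illustration of how those two billing features interact, here is a back-of-the-envelope sketch; the hourly rate, fleet sizes, and hours below are assumptions chosen for the example, not quotes from our price list.

```python
# Illustrative cost comparison (assumed numbers): a bursty development environment
# billed per minute vs. an always-on production environment earning the
# up-to-30% sustained use discount.

HOURLY_RATE = 0.050        # assumed list price per instance-hour
HOURS_IN_MONTH = 730

def dev_cost(instances, hours_per_weekday, weekdays=22):
    """Per-minute billing: pay only for the hours the dev VMs actually run."""
    hours = hours_per_weekday * weekdays
    return instances * hours * HOURLY_RATE

def prod_cost(instances, sustained_use_discount=0.30):
    """Always-on VMs run the full month, earning the maximum sustained use discount."""
    return instances * HOURS_IN_MONTH * HOURLY_RATE * (1 - sustained_use_discount)

if __name__ == "__main__":
    print(f"Dev (10 VMs, 8 h/weekday): ${dev_cost(10, 8):,.2f}/month")
    print(f"Prod (20 VMs, always on):  ${prod_cost(20):,.2f}/month")
```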

Our customer-friendly billing, discounting, and lack of prepaid lock-in, combined with lower list prices, leads to a 40% lower price on Google Cloud Platform for many real-world workloads. Our TCO Tool lets you explore how different combinations of development and production instances, as well as environmental assumptions, change the total cost of a real-world application hosted in the cloud.

Many factors influence the total cost of a real-world application, including the likelihood of design changes, the rate of decrease of compute prices, and whether you’ve been locked into price contracts which are now above market rates, or on instances that don’t fit your current needs anymore. With Google Cloud Platform’s customer-friendly pricing model, you're not required to make a long-term commitment to a price, machine class, or region ahead of time.

This graphic illustrates how our lower list prices and customer-friendly pricing practices can combine to produce a 40% total savings.
Your exact savings depend on your specific application, and may be even greater than what is shown here. To see the impact of our customer-friendly pricing on your specific workload, explore our TCO Tool.

If you have specific pricing questions, please visit the updated pricing page on our website. To get started with testing your own workload, we’ve made it easy with our free trial program.

- Posted by Urs Hölzle, Senior Vice President, Technical Infrastructure

Many developers are going beyond web services and are leveraging the cloud’s scalability and pay-as-you-go nature for other compute-intensive workloads. They’re accomplishing tasks such as video encoding, rendering for visual effects, and crunching huge amounts of information for data analytics, simulation, and genomics. These use cases are a great match for cloud computing, as they consume a large volume of compute resources but typically only run on a periodic basis.

Today we are introducing Google Compute Engine Preemptible Virtual Machines, in beta for all customers in all regions. Preemptible VMs are the same as regular instances except for one key difference - they may be shut down at any time. While that may sound disruptive, it actually makes them a great choice for distributed, fault-tolerant workloads that do not require continuous availability of any single instance. By not guaranteeing indefinite uptime, we are able to offer them at a substantial discount to normal instances. Preemptible VM pricing is fixed. You will always get low cost and financial predictability, without taking the risk of gambling on variable market pricing. The savings begin with your first minute of usage, with prices as low as $0.01 per core hour.

Some of our customers are already saving money with Preemptible VMs:
  • Citadel is using Preemptible VMs as part of their overall cloud solution: “This is efficiency, innovation and execution at its best,” said Joe Squeri, CIO of Citadel. “We are a major consumer of cloud compute resources and welcome Google Cloud Platform’s Preemptible VM offering as it provides competitive pricing without the complexities of navigating multiple geography-based pricing models that lack transparency.”
  • Descartes Labs, a deep learning AI company focused on understanding satellite imagery, recently completed a massive experiment using almost 30,000 CPUs to process 1 petabyte of NASA imagery in just 16 hours. Mark Johnson, CEO and co-founder said, “The Preemptible VM pricing model is a game changer for a seed-funded startup like ours, because of the significant cost reduction. We're excited to continue using them in the future as we increase the amount of data we process to identify and determine the health of global crops."
  • Google’s own Chrome security team runs their Clusterfuzz tool to perform non-stop randomized security testing in the cloud against the latest code in Chrome running on thousands of virtual machines. Having more compute power means they can find (and then fix) security bugs faster. Using Preemptible VMs, they were able to double their scale while decreasing their costs.

We carry spare capacity in our datacenters for a variety of reasons. Preemptible VMs fill this spare capacity but let us reclaim it if needed, helping us optimize our datacenter utilization. In exchange, we’re able to offer these VMs at a big discount. The tradeoff is that Preemptible VMs are limited to a 24-hour runtime and will sometimes be preempted (shut down) earlier than that. Other than that, you get all the same features of Google Compute Engine, such as fast and easy provisioning, consistently great performance, and data that is always encrypted when written to persistent disk.

Creating a Preemptible VM works with our current tools: it’s as easy as checking a box in the Google Developers Console, or adding “--preemptible” to the gcloud command line. When Preemptible VMs are terminated, they receive a 30-second notice, allowing you to shut down cleanly (including saving work, if applicable).
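For example, a shutdown handler might watch for that notice and checkpoint its work before the instance goes away. The sketch below is our illustration rather than official sample code; it assumes the instance metadata server exposes a v1/instance/preempted value, so check the Preemptible VM documentation before relying on it.

```python
# Hedged sketch of reacting to the 30-second preemption notice by polling the
# instance metadata server from inside a Compute Engine VM. The metadata path
# used here is an assumption to verify against the documentation.
import time
import urllib.request

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/preempted")

def is_preempted():
    request = urllib.request.Request(METADATA_URL,
                                     headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(request, timeout=2) as response:
        return response.read().decode().strip() == "TRUE"

def main():
    while True:
        if is_preempted():
            print("Preemption notice received: checkpointing work...")
            # save_state()  # hypothetical application-specific cleanup
            break
        time.sleep(1)

if __name__ == "__main__":
    main()
```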

For customers using Hadoop on Google Compute Engine, we’ve made things even easier. Our open-source bdutil tool helps you create a Hadoop cluster that gets data directly from Google Cloud Storage via the Connector for Hadoop. You can now use bdutil to mix in Preemptible VMs: just add “--preemptible .25” to run 25% of your cluster (or whatever portion you desire) as Preemptible VMs.

For more details about using Preemptible VMs, please check out the documentation. For more pricing information (including our pricing calculator tool and details on how to get a true cost comparison between cloud providers), you can consult our Compute Engine pricing page.  If you have questions or feedback, head over to the getting help page to get in touch with us.

We hope you will find Preemptible VMs useful in getting the most value out of Google Cloud Platform, and we look forward to hearing all about the great new applications you build with them!

-Posted by Paul Nash, Senior Product Manager

Google Cloud Platform is built with you, our users, in mind. We want to arm you with amazingly innovative tools, wrapped inside a beautiful experience, so you can move from idea to impact fast. We’ll help you automate the time-consuming infrastructure, security, and data parts of your cloud stack, allowing you to focus on building amazing applications, achieving your goals, and growing your business. Now, we’d like to show you in-person how exactly you can do that, better than you ever have before.

Introducing Next, a Google Cloud Platform event series that brings Google Cloud Platform to you live so you can learn about and try the tools to build your next great project.
This June, Next will make stops in New York, San Francisco, Tokyo, London, and Amsterdam. You’ll learn firsthand how developers and IT professionals like you are using Google Cloud Platform to build amazing new things and get from idea to application quickly. We’ll take you through our latest services and features, then get you trained via our hands-on labs.

We’re lining up an amazing group of industry leaders and experts from companies such as Avaya, Snapchat, DeNA, and Outfit7 to share their unique stories and explain how Google Cloud Platform has helped them reach their goals and thrive. They’re tackling today’s most difficult business challenges and succeeding - let us show you how.

We’ll also be joined by many partners, including PwC, Tableau, Fastly, and Equinix, who will share their solutions for reimagining business in the cloud, understanding customers with data-intensive workloads, delivering performance-driven scalability, and building a flexible cloud architecture.

And if Next isn’t coming to a city near you, you can always stream one of our keynotes online. Either way, we’re excited to show you how to Build What’s Next.

Please join us at Next - registration is open now.

All applications need at least some data to function. Big data applications like gaming analytics, weather modeling, and video rendering, as well as tools such as Flume, MapReduce, and database replication, are obvious examples of software that processes and moves large amounts of data. Even a seemingly simple website might have to copy dictionaries, articles, pictures, and all sorts of data across VMs, and that can add up to a lot. Sometimes that data must be accessible through a file system, and traditional tools like secure copy (scp) might not be enough to handle the increasing data sizes.

Big data applications commonly read data from disk, transform it, then use a tool like scp to move it to another VM for further computation. Scp is limited by several factors, from its threading model to the encryption hardware in the virtual machine's CPUs, and is ultimately bounded by the Persistent Disk read and write quota per virtual machine. In practice, it can transfer close to 128 MBytes/sec (single stream) or 240 MBytes/sec (multiple streams).

This is what the current flow looks like:
Diagram: a common data pipeline scenario


In this post we will describe an innovative new way of transferring large amounts of data between VMs. Google Compute Engine Persistent Disks offer a feature called Snapshots, which are point-in-time copies of all data on the disk. While snapshots are commonly used for backups, they can also rapidly be turned into new disks. These new disks can be attached to a different running virtual machine than the one where they were created, thereby moving the data from the first virtual machine to the second. The process of transferring data using snapshots involves three simple steps (a command-line sketch follows the list):
  1. Create a snapshot from the source disk.
  2. Restore the snapshot to a new destination disk.
  3. Attach and mount the destination disk to a virtual machine.
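Here is a hedged sketch of those three steps driven from Python via the gcloud CLI. The instance, disk, snapshot, and zone names are placeholders, and the exact flags may vary by Cloud SDK version, so treat it as an outline rather than a recipe.

```python
# Illustrative outline of the snapshot-based transfer using the gcloud CLI.
# All resource names are placeholders; verify flags against the gcloud docs.
import subprocess

ZONE = "us-central1-f"

def run(*args):
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# 1. Snapshot the source disk.
run("gcloud", "compute", "disks", "snapshot", "source-data-disk",
    "--snapshot-names", "transfer-snap", "--zone", ZONE)

# 2. Restore the snapshot to a new destination disk.
run("gcloud", "compute", "disks", "create", "dest-data-disk",
    "--source-snapshot", "transfer-snap", "--zone", ZONE)

# 3. Attach the new disk to the destination VM (then mount it from inside the VM).
run("gcloud", "compute", "instances", "attach-disk", "dest-vm",
    "--disk", "dest-data-disk", "--zone", ZONE)
```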


Using Persistent Disk Snapshots, you can move data between your virtual machines at speeds upwards of 1024 MBytes/sec (8 Gbps). That’s up to an 8x speed increase over scp! Below is a graph comparing data transfer with secure copy and with snapshots.
Diagram: Data Transfer comparisons
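The arithmetic behind the “up to 8x” claim is straightforward; the quick check below simply reuses the single-stream scp rate and the snapshot rate quoted above.

```python
# Quick arithmetic behind the "up to 8x" claim, using the rates quoted in the post.
TERABYTE_MB = 1024 * 1024              # 1 TB expressed in MBytes

scp_seconds = TERABYTE_MB / 128        # ~8192 s, about 2.3 hours (single-stream scp)
snapshot_seconds = TERABYTE_MB / 1024  # ~1024 s, about 17 minutes (snapshot transfer)

print(f"scp:      {scp_seconds / 3600:.1f} h")
print(f"snapshot: {snapshot_seconds / 60:.1f} min")
print(f"speedup:  {scp_seconds / snapshot_seconds:.0f}x")
```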


The huge advantage of the snapshot-based approach stems from the performance of Google Cloud Platform’s Persistent Disk snapshotting process. The following graph shows the time it takes to snapshot Persistent Disks of increasing size, along with the effective throughput (PD-SSD was used in this experiment). The time it takes to do the snapshot is roughly the same up to 500GB (bars in the graph) and steps up at the 1TB mark. Therefore, the effective throughput (i.e., “speed”) of the snapshot process, which is shown as the line in the graph, increases almost linearly.  
Google Compute Engine Persistent Disk Snapshot speed is outstanding in the industry. Below is a comparison graph with another cloud provider that also offers snapshots. As you can see, while Google Cloud Platform's upload times remain flat as the size increases, our competitor's upload time grows with size.
Google Compute Engine tests were performed in us-central1-f using PD-SSD. Snapshot sizes were 32GB, 100GB, 200GB, 500GB, and 1000GB.

There is a cost of 2.6 cents/GB/month for taking a Persistent Disk snapshot, which might seem like a lot on top of the hourly virtual machine price for copying data. However, the actual cost comes out to about $0.003 per 500GB of data transferred, because the snapshot used for the transfer is short-lived (under 10 minutes) and its price is prorated at per-second granularity. You can delete the snapshot immediately after the transfer is complete. That means for less than a penny you can move a terabyte of data at 8x the speed of traditional tools.
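The back-of-the-envelope check below reproduces that $0.003 figure from the numbers in this post, assuming a 30-day month for the proration.

```python
# Back-of-the-envelope check of the snapshot cost quoted above (assumes a 30-day month).
SNAPSHOT_PRICE_PER_GB_MONTH = 0.026   # $/GB/month, from the post
SNAPSHOT_SIZE_GB = 500
LIFETIME_MINUTES = 10                 # snapshot deleted right after the transfer

month_fraction = LIFETIME_MINUTES / (30 * 24 * 60)
cost = SNAPSHOT_PRICE_PER_GB_MONTH * SNAPSHOT_SIZE_GB * month_fraction
print(f"~${cost:.4f} for a {SNAPSHOT_SIZE_GB} GB snapshot kept {LIFETIME_MINUTES} min")
# Prints roughly $0.003, matching the figure in the post.
```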

For hands-on practice, you can find more about snapshot commands in our documentation, as well as a previous blog post about how to safely make a Persistent Disk Snapshot. Happy Snapshotting!

-Posted by Stanley Feng, Software Engineer, Cloud Performance Team