InfoWorld described yesterday how Compute Engine is Google's first unabashed IaaS (infrastructure-as-a-service) product, a cloud that allows users to spin up enormous numbers of virtual Linux machines that run on the same infrastructure that powers Google.
But how will customers decide whether to use Google Compute Engine, Rackspace Cloud, Windows Azure, HP Cloud, or another IaaS provider? For an informed answer to that question, InfoWorld turned to Michael Crandell, CEO and founder of RightScale, the cloud-management services company that helps customers work with everything from Amazon EC2 to Windows Azure.
RightScale has been testing Compute Engine for some time now as a run-up to integrating its services with Google's cloud. Crandell told InfoWorld that, in the course of working with Google over the past year, he got a feel for what Google is offering and how Compute Engine will differentiate itself from the competition.
"For the last decade, we've all thought of Google having their infrastructure as part of their 'secret sauce'," Crandell explained. "They're pretty upfront about saying, 'We're now exposing that same secret sauce infrastructure. We know how to run infrastructure really well on a global scale, so now we're exposing that to you.'"
Three points for Google Compute Engine
Even at this early stage, three major things about Compute Engine stood out for Crandell. First was the way Google leverages its own private network to make its cloud resources uniformly accessible across the globe.
"When you create a Google Compute Engine account and use their resources," he said, "they provide a private network, a LAN of sorts that spans different regions. For example, if you set up an architecture to replicate a database from region A to region B, in the Google cloud, you don't need to traverse the public Internet to do it. You're using their private network." How precisely that network is implemented (as Google's own private fiber or simply a very efficiently routed VPN) is something Google doesn't disclose. But the key point is that the whole structure appears as a single network from a programming point of view. "This makes it easier if you're building cross-regional architectures." (It's expected that Google will eventually expand Compute Engine to territories outside the United States.)
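To make that "single network" point concrete, here is a toy Python sketch of Crandell's region-A-to-region-B replication example. The instance names, regions, and addresses are all invented for illustration; the point is simply that on a network spanning regions, a peer is addressed by its internal IP, like a LAN host, with no public endpoint or VPN hop in between.

```python
# Toy model of a private network that spans regions: instances in
# different regions share one address space. (All names and addresses
# below are hypothetical.)
INSTANCES = {
    "db-primary": {"region": "region-a", "internal_ip": "10.240.0.2"},
    "db-replica": {"region": "region-b", "internal_ip": "10.240.0.3"},
}

def replication_target(name: str) -> str:
    # Because the network spans regions, the internal IP is directly
    # reachable -- replication traffic never leaves the private network.
    return f"{INSTANCES[name]['internal_ip']}:3306"

print(replication_target("db-replica"))  # → 10.240.0.3:3306
```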
Another key difference was boot times, which are both fast and consistent in Google's cloud. A basic Ubuntu image boots in about two minutes. "That [consistency and speed] matters in two contexts: Automation in scaling, which is more responsive if it works faster, and the daily rhythms of a dev-and-test environment, where folks are building up and tearing down multi-server environments, which allows faster iteration."
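As a rough, back-of-the-envelope illustration of why boot speed matters for scaling automation, this toy timeline compares how long a monitor-triggered scale-out takes to deliver usable capacity. Every number except the two-minute boot figure is an invented assumption:

```python
def time_until_capacity(spike_at: int, poll_interval: int,
                        boot_time: int) -> int:
    """Toy autoscaling timeline (all times in seconds): a monitor polls
    load every poll_interval seconds, then launches an instance that
    takes boot_time to start serving traffic."""
    # First poll at or after the load spike detects the need to scale.
    detect = -(-spike_at // poll_interval) * poll_interval  # ceiling
    return detect + boot_time - spike_at

# A two-minute boot (roughly what RightScale observed on Compute
# Engine) vs. a hypothetical ten-minute boot, with a 60-second poll:
fast = time_until_capacity(spike_at=10, poll_interval=60, boot_time=120)
slow = time_until_capacity(spike_at=10, poll_interval=60, boot_time=600)
print(fast, slow)  # → 170 650
```

The same arithmetic applies to dev-and-test iteration: tearing down and rebuilding a multi-server environment is gated on the slowest boot in the set.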
Third is encryption. Google offers at-rest encryption for all storage, whether it's local or attached over a network. "Everything's automatically encrypted," said Crandell, "and it's encrypted outside the processing of the VM so there's no degradation of performance to get that feature." (Amazon offers encryption for S3 objects, but it's an optional, per-object feature.)
Still revving up the Engine
Google has taken care to underscore how Compute Engine is still officially a beta-level, "limited preview" product, and may be that way for some time. Consequently, people aren't likely to yank their machine instances from Amazon and park them with Google en masse. If they want to, though, they won't find it hard: Crandell cited video transcoding tests originally run on another cloud service and then moved (using RightScale's own technology) to 325 Google Compute Engine cores with minimal effort.
Among the things that could be considered rough edges: the scheduled maintenance windows of Google Compute Engine, which are announced well in advance. These are not really an issue for large-scale data-processing workloads -- the kind run by the customers Google is trying to attract at first -- but they require more sophisticated deployment architectures for persistent workloads. More instance types are also to be expected in the future.
A key question that's come up about Google Compute Engine has been one of the simplest: What kept Google from offering this functionality before?
The first and most likely possibility is that they weren't fully confident they could do it in the form of a service open to others. Granted, Google has spent years refining and testing their own in-house cloud to the point where it has become an immensely predictable quantity, albeit just for Google. Up until now, all of us have consumed Google's services in terms of applications hosted on top of their infrastructure. The nature of that infrastructure itself has been invisible to us. Even with the introduction of GCE, it's still invisible -- we now just get to consume it in a more raw form, one that revolves around machine images rather than Google's predefined application sets.
The second possibility is that Google felt patience would be rewarded. The market is flooded with IaaS offerings right now, and perhaps Google felt that by biding its time and seeing what the market would become, they could better target their offerings. What they have now, though, is very much like what Amazon's EC2 was in its early years -- in other words, it's clearly not mature yet.
Amazon Web Services offers a broader palette of instance types with higher CPU and RAM caps than GCE does right now, so there's little danger of GCE eclipsing Amazon in the short run even if GCE's pricing is highly competitive. Plus, the sheer level of existing adoption of Amazon's services, and the de facto standard Amazon has brought to IaaS, makes switching away (even with tools like RightScale's) that much tougher.