By Damon Miller, Director of Technical Field Services
One of the most interesting trends in cloud computing is the emergence of “hybrid” solutions that span environments historically isolated from one another. A traditional data center offers finite capacity in support of business applications, but it is ultimately limited by obvious constraints (physical space, power, cooling, etc.). Virtualization has extended the runway a bit, effectively increasing density within the data center; however, the physical limits remain. Cloud computing opens the door to huge pools of computing capacity worldwide. This “infinite” capacity is proving tremendously compelling to IT organizations, providing on-demand access to resources to meet short- and long-term needs. The emerging challenge is integration—combining these disparate environments to provide a seamless and secure platform for computing services. CloudSwitch provides a software solution that allows users to extend a data center environment into the public cloud securely, without modifying workloads or network configurations. I’d like to discuss a specific example of how CloudSwitch delivered a solution that spanned a corporate data center and an external cloud.
A large financial services company approached us some time ago with an ambitious plan to leverage cloud computing as a strategic initiative within the organization. Their goals were to reduce operating costs, improve responsiveness to the various business units, and differentiate themselves within the industry through technological innovation. Security was a fundamental requirement, and a number of risk assessment groups were involved throughout the design and evaluation phases of the engagement. Finally, the company wanted to leverage a traditional colo environment from their cloud vendor to provide high-speed access to shared storage while also supporting their traffic monitoring equipment. After a period of technical due diligence, we established a reference architecture that satisfied all internal security requirements while remaining true to the fundamental goal of moving to a dynamic cloud environment. The result was a true realization of the hybrid model.
In the customer’s reference architecture, there are three primary components:
- Internal data center environment hosting the CloudSwitch Appliance (CSA)
- Private colo environment hosting the CloudSwitch Instance (CSI) and CloudSwitch Datapath (CSD) as well as shared storage for cloud instances
- Public cloud environment hosting customer workloads
The CloudSwitch Appliance is deployed into the customer’s data center environment to allow central management of one or more colo environments. Each of these environments supports an isolated cloud deployment, for example one dedicated to a particular business unit. CloudSwitch’s virtual switch and bridge components provide high-speed connectivity between cloud servers and shared storage. Finally, the public cloud environment hosts the actual customer workloads (operating systems). Network communication and local storage are protected through CloudSwitch’s secure overlay network and transparent disk encryption functionality.
This approach yields several benefits:
- Multiple instances of this dedicated environment can be independently deployed to support different business units
- High-speed access to the enterprise cloud environment is available since the colo environment is physically located in the same facility as the public cloud
- Physical infrastructure can be deployed into the colo environment in support of cloud servers—for example, shared storage devices
- Dedicated firewalls can be deployed and traffic inspection is possible, satisfying the security groups’ requirements
The reference architecture supports the organization’s high-level goals while remaining compliant with all existing security and regulatory requirements. Cloud servers have high-speed access to shared storage as a result of the colo deployment alongside the public cloud environment. All network traffic and stored data are encrypted automatically through CloudSwitch’s security capabilities, and through CloudSwitch’s role-based access controls (RBAC) the security team has centralized control over who is able to access each cloud environment. The end result is a deployment model that truly implements a hybrid environment, combining resources from the public cloud with traditional colo resources to deliver a secure, scalable platform for dynamic computing.
By Guest Blogger Erik Heels, Partner at Clock Tower Law Group, experts in patent law
Wikipedia defines "cloud computing" as "the logical computational resources (data, software) accessible via a computer network (through WAN or Internet etc.), rather than from a local computer." Managing local computers is hard: there are security issues, computer lifecycle issues, accessibility issues. Cloud computing, ideally, is easy: set it and forget it, access your data from anywhere, outsource your IT headaches to your service provider. To end users, whether individuals or companies, "the cloud" is an abstraction, a computing environment that can expand to suit users' needs.
What's The Problem?
One problem with cloud computing is that both cloud computing providers and law enforcement agencies can access your files, usually more easily than if you stored the files on your own computer.
Security breaches can also occur in the cloud, like the much-publicized Dropbox breach, during which all Dropbox accounts were accessible to all users without any password protection.
For users, it is important to know whether your data is secure, who can access it, and what happens when there is a security breach.
For service providers, it is important to comply with both US and non-US laws, including (1) data retention laws, which are ostensibly designed to help law enforcement entities do their job, and (2) data disclosure laws, which are ostensibly designed to help users know when their private information has been compromised.
Is Encryption The Answer?
Most cloud computing providers (1) authenticate users (e.g., transmitting usernames and passwords) via secure connections and (2) transfer data to/from their servers securely (e.g., via HTTPS), the so-called "data on the wire," but, as far as I can tell, none (3) encrypts stored data (so-called "data at rest") automatically.
So if you want your data to be secure in the cloud, then consider encrypting the stored data. And don't store your encryption keys on the same server! It is unclear whether a cloud computing provider could be compelled by law enforcement agencies to decrypt data that (1) it has encrypted or that (2) users have encrypted, but if the provider has the keys, decryption is at least possible.
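To make this concrete, here is a minimal sketch of client-side encryption in Python using the third-party cryptography package (the file names are illustrative, and a real setup should protect the key far more carefully, e.g., on a hardware token or a machine you control):

```python
# Encrypt a file locally before uploading it to a cloud provider, keeping
# the key on your own machine rather than on the cloud server.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep this key OFF the cloud server
with open("local.key", "wb") as f:     # e.g., local disk or a hardware token
    f.write(key)

fernet = Fernet(key)
with open("taxes-2011.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("taxes-2011.pdf.enc", "wb") as f:
    f.write(ciphertext)                # upload only the .enc file to the cloud
```

Even if the provider (or anyone who compels the provider) obtains the stored file, it is useless without the key held locally.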
I have used and abandoned both Microsoft's Encrypting File System (EFS) and Apple's FileVault for encrypting data on my desktop computers. But desktop encryption is painfully slow! Perhaps cloud computing providers can leverage the power of their data centers to make the performance hit of encryption-decryption imperceptible to the user. That would be cool. And would make the benefits of cloud computing greatly outweigh the risks.
Here are three security questions you should ask of your cloud computing provider:
- Data on the Wire. Are files transferred to/from cloud servers encrypted by default?
- Data at Rest. Are files stored on cloud servers encrypted by default?
- Data Retention. If files on cloud servers are encrypted and there is a request from law enforcement to decrypt the data, then what do you do? Bonus question: What if you have the key(s)?
I searched for answers to these questions for four cloud computing providers (sourced in part from TechTarget's list of top cloud computing providers and Wikipedia's list of cloud computing providers) that are popular with small businesses like mine: Amazon, Google, Apple, and Dropbox.
Simple Google searches of these providers' websites provided more questions than answers on the topic of encryption:
- search Amazon.com for encryption
- search Google.com for encryption
- search Apple.com for encryption
- search Dropbox.com for encryption
Cloud service providers need to do a much better job of communicating what is and what is not secure about their offerings. For example, I would characterize Dropbox's security page as misleading at best.
Just because your files are transferred securely to Dropbox does not mean they are stored in an encrypted format on Dropbox's servers. And it is the "rare exception" that is, or should be, the concern of users.
For More Information
- International Association of Privacy Professionals: Ten Steps Every Organization Should Take To Address Global Data Security Breach Notification Requirements. I would add "11. Get insurance" and "12. Get a good lawyer."
- Electronic Frontier Foundation (EFF): Surveillance Self-Defense. What can the government legally do to spy on your computer data and communications? And what can you legally do to protect yourself against such spying?
- Electronic Frontier Foundation: Mandatory Data Retention. Regarding controversial laws that require Internet Service Providers (ISPs) to collect and store records documenting the online activities of users.
- PrivacyLawCompliance.com. Law firm specializing in helping Massachusetts companies comply with privacy laws.
- ZDNet: Microsoft Admits Patriot Act Allows Access To EU-Based Cloud Data
- Centre for Commercial Law Studies (CCLS) at Queen Mary, University of London: 'Personal Data' In The UK, Anonymisation, and Encryption
As more individuals and companies move their computer files and computer applications from local client computers (over which they have a great deal of control) to remote server computers (over which they have limited control), security becomes a bigger concern, both for users and for service providers.
Erik J. Heels is an MIT engineer; trademark, domain name, and patent lawyer; Red Sox fan; and music lover. He blogs about technology, law, baseball, and rock 'n' roll at ErikJHeels.com. His law firm, Clock Tower Law Group, represents cool companies such as CloudSwitch.
By Dave Armlin, Director of Customer Support at CloudSwitch
Cloud security remains a top concern for enterprise cloud deployments. Unresolved policy and control issues make it difficult to meet the requirements of corporate security and networking teams. As a result, we frequently hear from our customers that they assume they can only put the lowest-risk data and applications into the cloud – or that their cloud projects are on hold until the security issues are resolved. This is a major limitation for cloud adoption, often creating a false belief that the cloud only works for apps “that don’t matter,” or for companies that are willing to take risks.
Customers Have the Right to Demand More
We believe that customers have the right to demand more from the cloud industry when it comes to security. They know the levels of security needed across the range of apps and data in their portfolios. And they shouldn’t have to settle for anything less than the security and control they’ve put in place internally.
Here’s what customers have the right to expect regarding cloud security:
- The right to control their data: In the shared environment of the cloud, customer data needs to be protected from unauthorized access at all times, and must be off limits to cloud providers and their technology partners. This means that data needs to be encrypted end to end, from inside the corporate firewall, across the Internet, and within the cloud — in storage, during processing, and in transit through the cloud network. The cloud should be a seamless extension of the customer’s IT environment, while the cloud provider sees only an encrypted connection running into its virtual servers and storage.
- The right to own their encryption keys: The biggest encryption challenge in the cloud involves managing the encryption keys used to decrypt data. The standard practice of storing the keys in the cloud and exposing them to the cloud provider greatly reduces the effectiveness of encrypting the data in the first place. Storing keys in virtual storage alongside the data also defeats much of the protection, since anyone who gains access to the disk has both the data and the keys needed to access it. Thus, control of the encryption keys needs to stay with the customer at all times, with keys delivered securely to the virtual machines in the cloud only when needed to decrypt the data for processing (a minimal sketch of this pattern follows this list).
- The right to their access policies: For many enterprise applications, the only way to use the cloud safely is for the customer to use their own security policies and remain in control of them in the cloud. System administrators already have controls in place, typically with Active Directory, and use Role-Based Access Control (RBAC) to define users, groups, and roles to control access to applications and computing resources. A customer should be able to extend the internal security policies out to the cloud, so roles and permissions are consistent regardless of where a workload runs.
- The right to their network services: Every enterprise has a unique network infrastructure and configuration settings for providing connectivity between servers and applications. This includes a combination of things like addressing, related services (DHCP/DNS), identity and directory services (LDAP/Active Directory), WAN optimizers, load balancers, and firewalls. Cloud providers have completely different network architectures designed to support their multi-tenant environments. Customers should be able to choose whether they want to use the cloud provider’s network services or extend the products they’ve already put in place internally (many of which are now available in the cloud as virtual appliances).
- The right to their compliance processes: If the business depends on the ability to demonstrate compliance with government or industry regulations, the customer already has proven processes in place. Customers should be able to extend those compliance processes into the cloud, rather than be required by the cloud provider to adopt a whole new set of guidelines and procedures.
- The right to put their data where they want: Often, data must legally reside in specific geographic locations (e.g., EU, Canada), but the rest of the app tiers can be located wherever it makes sense for performance and latency reasons. Customers should be able to put their data in the most suitable environment and move it when needed, whether to a preferred cloud or back to the data center, without being constrained by a particular cloud platform or technology stack. Applications should be able to run across multiple networks, geographic locations and computing environments, tying back seamlessly to processes running in the data center.
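To make the key-ownership point concrete, here is a rough sketch of the delivery pattern described in the second bullet above. Everything in it is an assumption for illustration (the key-server URL, certificate path, and use of Fernet are invented, and this is not CloudSwitch's actual protocol): a cloud VM fetches its key over authenticated TLS from a server the customer controls, holds it only in memory, and uses it to decrypt data at the moment of processing.

```python
# Sketch: a cloud VM requests its data key at runtime from a key server that
# stays under the customer's control. The key lives only in process memory
# and is never written to cloud storage alongside the data it protects.
# All endpoints, paths, and the Fernet recipe are illustrative assumptions.
import ssl
import urllib.request
from cryptography.fernet import Fernet  # pip install cryptography

KEY_SERVER = "https://keys.example-corp.internal/v1/instance-key"  # customer-side

def fetch_key():
    """Fetch this instance's key over TLS, authenticating with a client cert."""
    ctx = ssl.create_default_context()
    ctx.load_cert_chain("/etc/pki/instance.pem")   # instance identity (assumed)
    with urllib.request.urlopen(KEY_SERVER, context=ctx) as resp:
        return resp.read()                          # key stays in memory only

def decrypt_blob(path, key):
    """Decrypt a blob that was stored encrypted in the cloud."""
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read())

key = fetch_key()
records = decrypt_blob("/data/records.enc", key)
```

The design choice that matters is the trust boundary: the cloud provider stores only ciphertext, and the customer can revoke access at any time by refusing to serve the key.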
For Cloud Providers, It’s Time to Step Up
Making these rights available to cloud customers is not easy; otherwise cloud providers would have done it already. But if customers don’t set their standards high, they’ll start making compromises, either in the level of security they’re willing to accept or the types of workloads they’re willing to put in the cloud. For their part, cloud providers and their technology partners need to give customers the same security and control they already expect internally so they can use the cloud without risk and without constraints. Customers have the right to demand a safe environment for their apps and data — when the cloud industry can deliver it, everybody wins.
By John Considine
A few weeks ago Amazon released a new feature for Amazon Web Services (AWS) called CloudFormation. This allows a user to organize the process for provisioning and operating resources in the AWS environment and is an evolution of the AWS model of “some assembly required.” We have often viewed the features and functions within AWS as a box of parts, from which users are left to build their own creations. This model is highly biased towards developers, the kind of people who like to have a box of parts and are willing to put in the effort to build new and interesting creations from them.
CloudFormation allows a user to coordinate a number of features within Amazon’s environment, such as: launch a set of AMIs (virtual machine images with applications), configure a security group (pseudo firewall), set up an ELB (Amazon’s version of a web load balancer), and configure CloudWatch monitoring and alarms. All of this can be managed from a template that describes each of these setup steps, written in easy-to-use JSON.
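For a sense of what such a template looks like, here is a minimal illustrative sketch (the AMI ID and resource names are placeholders, not from any real deployment) that launches a single instance behind a security group:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative stack: one instance behind a security group",
  "Resources": {
    "WebSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow inbound HTTP",
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0" }
        ]
      }
    },
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "m1.small",
        "SecurityGroups": [ { "Ref": "WebSecurityGroup" } ]
      }
    }
  }
}
```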
So this new feature is pretty cool, but after working with it for a while, I’ve been wondering who the target user is. If you are a developer who is interacting with AWS through their API, then you already have a method of coordinating the resources and services in Amazon. By definition, up to this point, you had no choice. But more than that, if you are programming to the API, you want to have control over the details of your deployment, and to be able to monitor the steps and process. CloudFormation is an alternative to your current methods, but not necessarily a better one – if you are using the APIs, you still have to monitor the progress and deal with faults during the CloudFormation process.
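For example, a developer driving CloudFormation from code still ends up owning a polling loop along these lines (a sketch assuming boto's CloudFormation bindings; the stack and file names are made up):

```python
# Create a stack and poll until it reaches a terminal state. CloudFormation
# provisions the resources, but watching for success or failure (and deciding
# what to do about a rollback) remains the caller's responsibility.
import time
import boto.cloudformation

conn = boto.cloudformation.connect_to_region("us-east-1")
conn.create_stack("demo-stack", template_body=open("template.json").read())

while True:
    stack = conn.describe_stacks("demo-stack")[0]
    if stack.stack_status.endswith("_COMPLETE") or stack.stack_status.endswith("_FAILED"):
        break   # e.g., CREATE_COMPLETE, CREATE_FAILED, ROLLBACK_COMPLETE
    time.sleep(15)

print(stack.stack_status)   # fault handling starts here, not in CloudFormation
```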
On the other side of the spectrum, there are the “enterprise-class” users who are looking for full configuration management of their deployments – they want to control the full lifecycle of their system and software deployments, including change control of all of the components within the system. The CloudFormation solution is really a provisioning engine, and even at that, it leaves off the early and late parts of provisioning – the actual configuration of the base servers, and the “customization” aspects of running in Amazon. Configuration and customization include things like creating the base images, controlling the OS configuration (kernels, boot parameters, etc.), selecting device drivers for consistent integration and operation, adjusting for randomly-changing IP addresses in Amazon, configuring load balancing based on the notion of instance ID rather than IP address, and so on. The actual construction of the application and the configuration of the OS are done outside of CloudFormation, with CloudFormation operating as a provisioning engine.
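One of those customization quirks is easy to show: Amazon's load balancer is wired to instance IDs rather than IP addresses, so tooling has to re-register instances whenever they are replaced. A sketch using boto's ELB bindings (the region, tag, and balancer name are illustrative):

```python
# Register the current web-tier instances with an ELB by instance ID.
# If an instance is relaunched it gets a new ID and must be registered
# again, even if it comes back with the same private IP address.
import boto.ec2
import boto.ec2.elb

ec2 = boto.ec2.connect_to_region("us-east-1")
elb = boto.ec2.elb.connect_to_region("us-east-1")

reservations = ec2.get_all_instances(filters={"tag:tier": "web"})
instance_ids = [i.id for r in reservations for i in r.instances]

elb.register_instances("web-balancer", instance_ids)
```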
Given that the developers have the tools they need to coordinate the provisioning and the enterprises are looking for full configuration management, where does this leave the target market for CloudFormation? Clearly the Amazon console users who interact with Amazon through the AWS portal are best served by this new feature. CloudFormation gives these users a simple “portal” for provisioning and managing their cloud deployments – but it comes at the cost of programmatic access and integration with existing application lifecycle tools and processes. Console interaction drives cloud activity into its own silo, and fosters the concept of the cloud as a separate, foreign, and independent environment.
So what does this feature mean for CloudSwitch customers? Not much, really, since our customers are looking for tight integration with their existing systems and processes, and want to have end-to-end control over their virtual hardware, operating systems, and application configuration. While CloudFormation is designed to allow a user to coordinate a number of features and functions of AWS, the user still has to use the new and somewhat different components provided by AWS: AMIs for their VM images, limitations on the kernels, operating systems, and OS configurations, firewall and load balancing configurations that are non-standard, and behaviors in deployment and operation that deviate from what is expected in the enterprise.
In the CloudSwitch model, if a user wants to configure a firewall, they use a full-featured firewall with full configurability, not an Amazon-specific version; if a user wants to monitor their applications, they use their existing tools and processes; and if a user wants full configuration management of their deployments, they can control every detail of their servers’ virtual hardware, operating systems, networking, and applications – without conforming to the restrictions of the cloud provider. As our customers know, CloudSwitch is about giving the enterprise full control over cloud configurations and processes, rather than simply coordinating the components that a cloud provider delivers.
Today, Amazon announced a new version of their Virtual Private Cloud (VPC) offering. This move reflects the lessons learned and requirements gathered from enterprise customers who have tested or deployed VPC over the past year. Initially, VPC was developed to provide a secure tunnel (layer-3) from the customer’s data center to EC2, to make it easier to receive the approval of internal security groups for cloud deployments. Customers had noted that there were many aspects of VPC they felt could be enhanced, primarily around making the networking options more sophisticated and flexible. As Jeff Barr noted in his blog on the new release, a major focus was to enable public-IP access through a cloud-based firewall/load balancer to the cloud VMs. We’re glad to see this new set of capabilities, since we had definitely heard this requirement from customers as well, and responded to it last year.
We’re excited to support customers and prospects who are interested in evaluating the latest version of VPC and integrating it into their EC2 deployments. We believe that together, CloudSwitch’s award-winning software and Amazon’s VPC offering deliver enterprise-class security and integration with customers’ internal data centers and private clouds. CloudSwitch provides full encryption of all data and communications, and lets you extend your enterprise network topologies into the cloud without modifications. This means that your entire cloud deployment is secured end-to-end, and remains completely under your control, as if it were running locally behind your firewall.
It’s great to see Amazon’s ongoing commitment to innovation and to work with joint customers who want to make the cloud a truly secure extension of their internal environments.
By the CloudSwitch Team
Over the past year we've had the pleasure of working with Terremark as a partner, as we jointly engage with enterprise customers who want to leverage hybrid clouds. For these customers and prospects, hybrid means the flexibility to combine their traditional data centers, new private clouds and managed service/colo environments with public clouds such as Terremark's Enterprise Cloud. Please join us tomorrow, March 3rd from 1:00-2:00pm EST to learn about hybrid clouds based on our hands-on experiences with enterprise customers who are using Terremark for a full range of cloud services.
By Ellen Rubin
The way you know you’re in the midst of a technology shift and market disruption is when organizations don’t behave the way you expect them to based on past track records. Cloud computing has been filled with surprises and unexpected behavior from the get-go. First, Amazon, a retailer, turns out to be a technology powerhouse in disguise and changes the rules of IT infrastructure. Then, “real” technology leaders like IBM, Dell, EMC, HP and others make lots of announcements about cloud but essentially do little and re-brand existing offerings as “cloud-enabled.” Next, Verizon, the phone company, buys Terremark in a bid to become a global cloud leader. And of course, there’s always the fact that the federal government has embraced cloud widely and is spending large amounts of money to build private clouds and leverage public ones.
So, in a world that sometimes seems upside-down, how surprising is it really that the F500, and in particular, the corporate IT groups within these huge organizations, have often turned out to be the early adopters and drivers of cloud in all flavors – private, public and hybrid? When we started CloudSwitch, our hypothesis (based on all sorts of track records and past behaviors) was that within the enterprise market, mid-tier companies (defined loosely as several hundred million to a few billion dollars in revenues) would try cloud first. This was because we were betting that these organizations had enough pain from internal data center management (cost, over-provisioning, not their core business, lack of responsiveness to business users, etc.) that cloud computing’s benefits would overcome their initial concerns. And in fact, this is true of many mid-tier enterprises, who have indeed taken the leap into cloud over the past couple of years, along with the developer and start-up communities.
But the companies who seem to be driving enterprise adoption of cloud and defining many of the requirements for vendors in our experience are at the multi-billion-dollar revenue mark, and often within the F500. Our initial hypothesis here was that these companies would be too large and resistant to change to be early adopters, unlike the smaller, more nimble mid-tier players. But it turns out that these companies have such enormous capital expenditures in data centers and infrastructure investments that they’re determined to adopt cloud to move them to a lower cost curve (“get off the data center treadmill”) and help them break through the internal limitations on self-service provisioning and scaling that have frustrated their business users for years.
Even more unexpectedly, many of the people who are leading the way within these companies are managers and architects within the corporate IT group. It’s interesting to note that in previous technology shifts – SaaS and virtualization come to mind – the revolution was staged from within business units or at the developer level, and corporate IT came on board once these technologies were de facto standards. It’s possible that with these experiences in mind, corporate IT (and the CIO in particular) has decided to take the lead this time around, and not wait to find out what’s been going on without enterprise security, control or standards.
Last year, corporate IT was struggling to absorb the avalanche of information about cloud and to separate the hype from meaningful architectures and use cases. With some encouragement from the large technology vendors, corporate IT shops retreated into private clouds as the safe way to go. This year, with hybrid clouds all the rage, it feels like enterprises and IT managers are coming into their own. They’ve been speaking with more confidence based on their pilots and initial deployments, and have come to see cloud as something that can be shaped and driven by real enterprise requirements – not just a new set of processes/resources that need to be run as a separate and un-integrated silo.
In this hybrid model, F500 enterprises are working with vendor partners to build private clouds, and to identify application categories that can run completely in public clouds and those that need to span internal and external environments. They’re asking for management, orchestration and federation technologies that let them be vendor-agnostic and “position independent” (so apps can run in a given environment at a particular point in time, regardless of underlying infrastructures). This process is clearly a multi-year learning experience with the usual fits and starts as companies bump into the inevitable limitations of new technology and meet resistance from internal stakeholders. But the trend is clear. And although relatively few of these large enterprises are willing to go on record yet with their case studies, we can see first-hand the inroads cloud is making among some of the largest pharmas, banks and manufacturing companies in the world, and it’s exciting to be part of the paradigm shift.
By Ellen Rubin
Back in September, I blogged about a strange situation I had noticed in the cloud market: Where Are the Telcos? At that time, Verizon had made their initial announcement about their new CaaS offering for the SMB market, which only seemed to highlight the relative lack of leadership and progress by telcos in the cloud market. With yesterday’s big news about Verizon acquiring Terremark, it appears that the telcos are starting to show up.
Several people commented on the earlier blog that my arguments were mainly true for North American telcos, and as I’ve learned since, outside of North America, the telcos have been far more active in building clouds and embracing this new business model. But as a global leader, Verizon’s recent activities are a major step in the evolution of the cloud industry, likely to have significant impact on the market overall. While it’s too early to understand how the acquisition of Terremark will play out in terms of specific services, data center locations, etc., what’s clear is that one of the largest telcos with massive resources, broad reach and enterprise credibility has made a real commitment to integrating cloud computing into their business model.
Until now, there really hasn’t been a large, enterprise player to compete with Amazon – Terremark has made impressive progress over the past few years (as we've seen first-hand as a partner), but is still relatively small on its own. With the strength of Verizon behind it, Terremark now has the opportunity to scale in an unprecedented way and extend enterprise cloud computing, leveraging its technology stack and expertise. Stay tuned for more thoughts as the acquisition plans unfold, but one thing is certain: 2011 is off to a very interesting start…
Register for a live webinar with Terremark and CloudSwitch:
March 3, 2011: 1:00 PM - 2:00 PM EST
By John Considine
I started out writing a blog post about the state of cloud computing, to review how things have evolved in the cloud space over the last year (2010 was a good year for cloud computing), but I got sidetracked thinking about how clouds are converging, or in reality, not converging.
It’s clear that end users of cloud computing would like to see true interoperability. Companies want the freedom to pick a cloud that meets their needs, without worrying that choices made today will cost them big in the future or lock them in. Interoperability would mean that a company could choose a cloud for a given workload, and if conditions change, they could opt to bring the workload back in-house or move to another cloud environment – without requiring a major engineering project or a shift to a different computing paradigm.
However, there are several things working against this interoperability, making it unlikely to happen anytime soon based on emerging industry standards:
- There are many types of requirements from end users feeding into the cloud definition; customers are looking for architectures in the cloud that match their application configurations, performance requirements, geographic locations, and security concerns. They want specific infrastructure capabilities (think SANs, network gear, and hypervisors) because these are existing enterprise standards, and look for specific flavors of architecture/topologies/OS that most closely match what they already have.
- This range of customer requirements creates opportunities for cloud providers to differentiate based on features and services that let them serve specific market segments better than their competitors – think security, performance, specialties (like government or medical), or even different hypervisors (for compatibility with in-house platforms), networking architectures, and pricing models.
- The competition among cloud providers in turn leads to intense “land grabs” by technology vendors in the cloud market. This includes the big guys like VMware, Microsoft, and Citrix as well as startups like Eucalyptus, Cloud.com, and Nimbula. It also includes most of the networking players and many of the IT ops providers. Each of these vendors has a different view on how cloud infrastructure should be built and managed (using their solutions and core components), and these differences alter the design of the cloud as well as the attributes of the cloud that the end users can control.
In the end, although everyone is talking about standards and converging models for cloud computing, the customers, cloud providers, and technology vendors are in fact all working against standards – not because they don’t want or believe in standards, but because market forces make this inevitable. Customers demand variety and flexibility from the cloud to meet their specific needs, while technology and cloud providers rush to deliver what their customers want, to differentiate themselves and create “unfair advantage” in an infrastructure market that might otherwise commoditize them.
So how will all of this play out? Today, we see some basic moves around standardizing APIs (such as Eucalyptus/Amazon, the vCloud efforts, etc.). These only scratch the surface of interoperability, without addressing the underlying complexity of cloud infrastructure. It is possible that in a few “cloud generations” the industry will mature enough for some of the grand unification computing models to come into existence. These are very cool models where workloads are self-descriptive and the cloud will accept or reject loads based on its ability to satisfy the complete requirements encapsulated within the workload. I love this vision, but it requires a lot of different groups (software vendors, cloud providers, hypervisor vendors, application developers, and infrastructure component vendors) to get together and optimize for the whole instead of for their particular product or business. What this really means is: not in the near future.
Where does this leave those who want to use the cloud? Fortunately, there are a number of "cloud enablement" players out there focused on orchestration and interoperability, whose goal is to make it easy for companies to take advantage of cloud computing without worrying about all the differences between clouds. At CloudSwitch, we believe true interoperability lies well beyond simple API aggregation – what enterprises need is a solution that lets them create and migrate workloads in the cloud that are not only position independent, but also hypervisor and cloud provider agnostic.
By John Considine
Happy New Year! In this first post of 2011, I’d like to explore one of the primary ways the cloud landscape is evolving. Two of the pillars of cloud computing, Infrastructure as a Service (IAAS) and Platform as a Service (PAAS), are showing some interesting trends as cloud providers adapt to meet the needs of their customers. Over the coming year, we may see these familiar models evolving into something new since the ideal solution for most enterprises is not one approach or the other but some combination of both.
Traditionally these two methods of cloud computing have been quite distinct. Infrastructure as a Service providers like Amazon EC2, Terremark, and Savvis promise to remove the burden of managing physical infrastructure — everything from server installation and support to network and storage infrastructure build-out and management. Platform as a Service offerings like Force.com, Google App Engine, and Microsoft’s Azure provide these benefits of IAAS in addition to offloading management of the underlying system and application software. Operating system software, core services, and even high-level application building blocks are managed by the provider, freeing users from worrying about things like patching, updates, and core application configuration and management.
Many feel that IAAS does not go far enough to eliminate the unnecessary overhead of managing common software components. Significant effort is involved in managing operating system lifecycles and common software components, and the PAAS supporters are pushing for cloud computing to eliminate this wasteful effort.
PAAS also has its downsides, particularly transition costs and vendor lock-in. In order to transition your workloads into PAAS, you have to adopt and design for the specific offering that the PAAS provider has created. This creates the potential for vendor lock-in because your new applications are using the specific services built by your PAAS provider, and you are unlikely to find the same services from another vendor.
We have found that enterprises are using all forms of cloud computing, from SAAS offerings like Salesforce to PAAS offerings from Microsoft to IAAS from Amazon. Within any given enterprise, there are multiple groups, departments, and users that have specific problems and are seeking solutions wherever they can find them. For example, business users in the organization are turning to SAAS to solve their customer management and collaboration needs, while developers are building new solutions on PAAS platforms to speed development, and IT ops teams are utilizing the rapid provisioning and scalability of IAAS to get their work done. It is perhaps the pressure to provide broader solutions for these organizations that is leading to some interesting changes in the services offered by IAAS and PAAS cloud providers.
From IAAS providers, we’re seeing a trend to offer more PAAS services. This is apparent in Amazon’s offerings as they add services such as Relational Database Service (RDS) and Elastic MapReduce (Hadoop) to their SimpleDB and Simple Queue Service (SQS). These higher-level services extend beyond simple IAAS into the realm of PAAS since they are not plain virtual machines but full services managed by Amazon. They carry all the benefits and disadvantages of PAAS offerings, with the interesting characteristic of being integrated as part of the overall Amazon platform.
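A short sketch shows the PAAS-like flavor of these services: a queue is created and used entirely through API calls, with no virtual machine for the customer to provision or patch (this assumes boto's SQS bindings; the region and queue name are illustrative):

```python
# Use Amazon SQS purely as a managed service: no servers, no patching.
import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region("us-east-1")
queue = conn.create_queue("orders")       # Amazon runs everything behind this call

msg = Message()
msg.set_body("order-12345")
queue.write(msg)                          # enqueue

received = queue.read(visibility_timeout=30)
if received is not None:
    print(received.get_body())
    queue.delete_message(received)        # acknowledge after processing
```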
At the same time, we are seeing a new offering from Microsoft on their Azure platform, the VM Role. This soon-to-be-released offering from Microsoft extends the PAAS nature of Azure into the IAAS realm. Now the customer can control all aspects of instances in Azure, including all OS configurations and settings. Of course, this carries the downsides of IAAS as well, in that the customer must now completely manage the OS and applications.
These two providers give us insight into how the market is evolving — IAAS and PAAS are converging as vendors in each space adopt characteristics of the other. This convergence makes a lot of sense, since enterprises want just as much control as they really need, and the ability to offload work to a provider whenever feasible. In cases where they need to control the entire environment, they want low-level control. This is good news for providers as well as customers: providers can offer value-added services that are “sticky” for their customers, while different groups within a given enterprise can choose the level of service they want, from fully-managed to completely controllable, now perhaps from the same provider.
As IAAS and PAAS morph into a converged model, CloudSwitch provides the position independence and flexibility that allow enterprises to take advantage of this evolving market without having to adapt to each cloud provider. By making the entire application stack (or selected portions of it) completely portable, from low level infrastructure to high level application control, customers can run their applications where it makes sense, with the management capabilities they need. It’s all about giving customers the choices they want, without risk of lock-in.