By John McEleney
In Ray Ozzie’s thoughtful memorandum to employees, “Dawn of a New Day,” he implores everyone in the company to embrace the cloud or perish. What I found even more interesting are his comments about complexity. "Complexity kills," said Ozzie. "Complexity sucks the life out of users, developers and IT. Complexity makes products difficult to plan, build, test and use. Complexity introduces security challenges. Complexity causes administrator frustration."
I think Ray is correct on both fronts: people need to push forward towards the cloud as it transforms the way most companies build, manage and consume applications and infrastructure. The danger as we adopt this major platform shift is that we undermine its impact by adding huge amounts of complexity to our organizations or our technology platforms.
Let’s be clear: no one starts a project by saying, “I’m going to design the most complex system possible.” Unfortunately, it is simply human nature that complexity enters our thought processes and systems incrementally, and before we know it, we have a tangled mess. Why is this? Is it because it’s just too hard to make things simple? Is it simply a fact that these systems are just technically complex? Or have we created a tech culture that believes you get more “value” or “stickiness” by designing a complex solution?
Simplicity requires determination and focus. We must have the courage to stand up to our peers and assert that usability and simplicity are not synonymous with being underpowered, but rather the opposite: the system is even more powerful. This is often much harder to do as part of a broader organization than as an individual developer. It must be part of the DNA of the corporate culture – otherwise simplicity will be rejected by the organization’s “complexity antibodies.”
Of course, enterprise infrastructure and cloud infrastructures have real issues around security, control, automation, resiliency, performance… these are all complex, hairy problems that require some serious technical heavy lifting. But it’s equally clear to me that the cloud provides a new, fresh canvas on which we can innovate, create, design and dream about how to meet broad customer needs without drowning our innovation in a never-ending spiral of complexity.
Is it worth it for companies to invest in building a culture around simplicity? As the market cap of Apple, a company that is laser-focused on eliminating complexity, grows to $280B and outstrips Microsoft’s by almost 30%, I think the market has spoken.
By Ellen Rubin
We’ve written extensively about the benefits of hybrid clouds, since it’s a core part of our founding vision at CloudSwitch. For most of this past year, the cloud market has been focused on defining the differences between public and private clouds and weighing the costs and benefits. Slowly the conversation has shifted to what we believe is the central axiom of cloud: it’s not an all-or-nothing choice between on-premise and an external cloud; it’s the ability to federate across multiple pools of resources, matching application workloads to their most appropriate infrastructure environments.
To reiterate some key thoughts we’ve written about in the past, the idea of hybrid clouds encompasses several use cases:
- Using multiple clouds for different applications to match business needs. For example, Amazon or Rackspace could be used for applications that need large horizontal scale, and Savvis, Terremark or BlueLock for applications that need stronger SLAs and higher security. An internal cloud is another federation option for applications that need to live behind the corporate firewall.
- Allocating different elements of an application to different environments, whether internal or external. For example, the compute tiers of an application could run in a cloud while accessing data stored internally as a security precaution (“application stretching”).
- Moving an application to meet requirements at different stages in its lifecycle, whether between public clouds or back to the data center. For example, Amazon or Terremark's vCloud Express could be used for development, and when the application is ready for production it could move to Terremark's Enterprise Cloud or similar clouds. This is also important as applications move towards the end of their lifecycle, where they can be moved to lower-cost cloud infrastructure as their importance and duty-cycle patterns diminish.
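The first use case above boils down to a routing decision: match each application's dominant requirement to a suitable environment. A minimal sketch of that matching logic, where the requirement categories and provider names come from the examples above but the rule table itself is a hypothetical simplification:

```python
# Illustrative sketch: route an application to candidate environments
# based on its dominant requirement. The provider names follow the
# examples in the list above; the mapping itself is hypothetical.

CLOUD_FOR_REQUIREMENT = {
    "horizontal_scale": ["Amazon", "Rackspace"],
    "strong_sla": ["Savvis", "Terremark", "BlueLock"],
    "behind_firewall": ["internal cloud"],
}

def place_application(requirement: str) -> list[str]:
    """Return candidate environments for an application's dominant need."""
    try:
        return CLOUD_FOR_REQUIREMENT[requirement]
    except KeyError:
        raise ValueError(f"unknown requirement: {requirement}")

print(place_application("horizontal_scale"))  # ['Amazon', 'Rackspace']
```

In practice the decision would weigh several requirements at once (cost, SLA, compliance, scale), but even this one-dimensional version captures the federation idea: the application's needs, not a single vendor relationship, determine where it runs.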
CloudSwitch customers and prospects are clear that hybrid clouds are the way to go. Here are some examples of recent conversations:
“It’s going to take our internal IT group more than 18 months to build a private cloud; in the meantime we can use the public clouds now for on-demand capacity and scalability.” – VP of Business IT group at a large Wall Street firm
“We’re highly virtualized and we see external clouds as pools of virtualized resources that are available as extensions of our internal infrastructure.” – IT Director at a large healthcare company
“We have compliance data that will never leave our firewall but we like the idea of scaling out the computing resources in the cloud for peak periods.” – VP of Informatics at a large pharma
We’ve also been tracking some validation from more official sources on the growth of public clouds and the hybrid model. For example, a recent study by SandHill Group surveyed more than 500 IT executives and indicated that the biggest growth in cloud computing will be in hybrid clouds (from 13% now to 43% in three years). Another survey by Evans Data finds an even higher adoption rate among IT developers, suggesting that the hybrid cloud model is set to dominate the coming IT landscape.
It’s also interesting to see the importance of the hybrid model taking hold among industry insiders with many different perspectives. We saw this at VMworld 2010, where there was tremendous interest in hybrid clouds, from Paul Maritz’s keynote predicting a hybrid cloud future through many sessions and product announcements. Veteran cloud watcher James Urquhart points out that the hybrid approach lets you hedge your bets in cloud computing, using technology that allows you to decouple the application from the underlying infrastructure and move it to the right environment so you don’t get locked in. And even private cloud advocates acknowledge that hybrid has an essential role, where public cloud platforms serve as extensions of private cloud deployments.
It’s gratifying to see the CloudSwitch founding vision gain broad industry acceptance, with the hybrid model as a key enabler for cloud computing. It’s even more satisfying to see the vision coming to life as more and more customers leverage our technology to run their applications effortlessly in the right environment, whether an internal data center, private cloud, or public cloud. Enterprise users and their companies are the real winners.
By John Considine
Last week Citrix announced OpenAccess and OpenBridge, two new offerings for cloud computing. OpenAccess focuses on single sign-on and identity management while OpenBridge is designed to allow connections between local resources and cloud resources. The OpenBridge announcement highlights an interesting debate occurring around hybrid cloud computing – how should cloud networks be connected?
The debate centers on layer-2 versus layer-3 connectivity. Traditionally, network topologies for remote data centers, co-location facilities, and managed services have been built with layer-3 (routed) networks. This made sense since you were creating separate networks for each location and then creating rules for communication between the different locations. Setting up these networks requires lengthy planning and re-configuration to enable the organization’s core network to communicate with the new external resources. In addition, the rules and services for servers deployed both in the data center and remote facilities have to be updated. Although deploying layer-3 networks is time-consuming and complex, it’s the way things have always been done by the service providers.
Interestingly, most of the new cloud solutions are also following this layer-3 model because it’s so established and familiar. Amazon introduced their VPC offering last year that enabled connectivity between the customer’s data center and their cloud over a layer-3 network. VMware has released vShield Edge services that use layer-3 networks to connect between virtual data center (VDC) networks.
So where is the debate? Enterprise IT is discovering that the attributes and configuration of layer-3 networking work against some of the most powerful concepts in cloud computing. Most enterprises are looking to the cloud for dynamic applications and deployments. They want to be able to scale resources on demand, rapidly provision new resources for development and testing, and enable self-service models. If, for each new environment, they had to get permission to alter the core networking or edge devices and then actually get someone to do it, much of the advantage of the agility of cloud computing would be lost.
The layer-3 approach has two fundamental issues that make it problematic for cloud use cases: (1) layer-3 is location-dependent, and (2) changing configurations in the cloud involves changing core or edge services to match. If each cloud resource is an independent network with its own addressing scheme, then applications and services deployed to the cloud have to be updated relative to their location. Further, applications that want to interact with the cloud also have to be updated. Yes, this can be mitigated with DNS and other techniques, but that just leads back to problem #2.
Because of this realization, we looked for an alternative as we designed our CloudSwitch software that would allow enterprises to access the full power of cloud computing. With respect to networking, the answer was support for layer-2 connectivity between the cloud and the data center. Layer-2 networking allows for position independence, since the network in the cloud is a direct extension of the network in the data center. This means that all servers have the same addresses and routing protocols and thus become location independent (from the user and application level, the location of the server cannot be determined). With this solution, users can select where they want to run their applications, locally or in the cloud, without reconfiguring anything.
Of course, creating a layer-2 connection between the data center and a cloud can be challenging. The actual bridging part is not too hard since the networking technologies have existed for quite some time. The challenges lie in two factors: cloud provider control and security implications. In terms of cloud provider control, for a layer-2 bridge to work, the cloud provider must allow the customer to control the networking within the cloud offering. This means that the cloud provider must allow customers to specify the addressing for each server they deploy in the cloud. Most public clouds do not have this capability; they assign addresses (either in ranges or per server) and almost universally, these will not align with your internal addressing schemes. This means that a “standard” layer-2 solution is not compatible with most public clouds. Because we believe that having a layer-2 option is critical for enterprises looking to embrace cloud computing, we have worked hard to support this in all clouds, even when the native cloud doesn’t. This is one of the strengths of our Cloud Isolation Technology™ – adding value and capabilities to each cloud we support.
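The addressing mismatch described above is straightforward to check programmatically. A minimal sketch using Python's standard `ipaddress` module (the example networks are invented for illustration): a "standard" layer-2 bridge only works without re-addressing when the provider-assigned range fits inside the internal addressing scheme:

```python
import ipaddress

def cloud_range_aligns(internal_cidr: str, cloud_cidr: str) -> bool:
    """True if the cloud-assigned range fits inside the internal scheme,
    i.e. servers could keep their data-center addresses across a
    layer-2 bridge without any re-addressing."""
    internal = ipaddress.ip_network(internal_cidr)
    cloud = ipaddress.ip_network(cloud_cidr)
    return cloud.subnet_of(internal)

# A provider-assigned 10.252.0.0/16 does not fit a corporate 10.1.0.0/16:
print(cloud_range_aligns("10.1.0.0/16", "10.252.0.0/16"))  # False
# A range carved out of the corporate scheme does:
print(cloud_range_aligns("10.1.0.0/16", "10.1.64.0/20"))   # True
```

The first case is the common one with public clouds that assign addresses themselves, which is why a bridging layer that imposes the customer's own addressing on the cloud side is needed at all.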
The bigger challenge in extending your networks to the cloud is, of course, security. By bridging your networks to the cloud, you have to trust the cloud provider and their security measures. This can be difficult because as a customer, you have no control over what the cloud provider implements or changes over the course of operation. This is another reason we built our CloudSwitch software around our Cloud Isolation Technology. If you really want to create a hybrid cloud computing environment, you need the confidence to integrate tightly with the cloud. CloudSwitch enables this confidence by allowing the customer to separate their environment from the cloud provider’s infrastructure in a highly controlled fashion. This means that not only do we protect your network and storage traffic from being accessed by the cloud provider, but we prevent any traffic from outside our isolation layer from entering your data center.
In the end, we believe that to achieve true hybrid cloud computing, a solution must support both layer-2 and layer-3 networking, and that is what we have built. Our customers can choose to interact with their servers in the cloud utilizing an automated layer-2 connection, or create specific rules and routing to access via layer-3, and because of our Cloud Isolation Technology, we can support this even in clouds that don’t natively support full control over network addressing.
It is great to see that a major player like Citrix has embraced the idea of layer-2 bridging with their OpenBridge offering, as it helps highlight the importance of this network technology. Of course, there is a lot more to cloud federation than networking. Full security control, resource allocation and management, application migration, and lifecycle management are other key elements that are essential for a successful deployment, all automated and simplified by CloudSwitch.
By Guest Author, Kamesh Pemmaraju
In a typical enterprise today, one finds a heterogeneous mix of modern platforms and legacy platforms of many vintages. With the emergence of a variety of cloud service models (IaaS, SaaS, PaaS) and an array of deployment models (private, public, and community), we will most likely see a heterogeneous mix of cloud environments in the enterprise of the future. Furthermore, cloud computing may be a great fit for some applications and workloads, but there will always be some data, processes, and applications that will remain on-premises for reasons of regulatory compliance, mission-critical or classified data, control, and cost.
While the trend toward cloud computing is inevitable, security, privacy, lock-in, and performance continue to be major obstacles for accelerated public cloud adoption. The lack of standards is another barrier as one CIO of a large insurance company said during our research:
"The big topic we are discussing is if we are not happy with the SLA of an existing vendor, how quickly can we re-outsource? Lock-in, interoperability and standards are big issues for us. I can’t move my workload easily between clusters due to incompatibilities between vendors and between virtual machines. We have to think about compatibility of compute, storage, and network virtual resources.”
– CIO, insurance company
Because there aren't established industry standards just yet in cloud computing, most enterprises remain wary about getting locked into a single vendor architecture and API. As adoption increases, however, open standards will naturally emerge. While premature standards can stifle market innovation, CIOs believe proprietary standards can be worse (and history has proved that the half-life of such standards tends to be very short).
The nature of the beast is such that customers need to consider using multiple cloud providers to meet their specific scalability, security, flexibility, and functionality needs. One Fortune 500 financial company CIO we interviewed as part of our "Leaders in the Cloud" research study said their company will move 20% of their application portfolio, in the next 3-5 years, to specific clouds that meet the workload characteristics of their applications. With a typical large enterprise application landscape of between 10,000 and 15,000 applications, that 20% translates to 2,000-3,000 applications! The numbers are staggering when you scale that out to the Global 2000 companies.
Our study surveyed more than 500 IT executives and indicated that the biggest growth will be in hybrid clouds (from 13 percent now to 43 percent in three years). These executives are looking for ways to seamlessly migrate/interoperate their data and applications (both legacy and new) between clouds and their datacenters based on their own business needs, risks, and architectural considerations.
We will see a number of use cases and variations of the hybrid approach. Enterprise customers will pick and choose applications and their IaaS, PaaS, and SaaS (*aaS) vendors based on their business needs, thus creating a diverse and heterogeneous cloud environment. One healthcare company CIO emphasized that this is actually their preferred adoption model and explained the reason for it:
“Rather than stick to one [cloud] product that meets all of our needs, we have taken the approach of using multiple [cloud] vendors and solutions. Even though this may increase the integration complexity, we find that we get the most innovative solutions with the least amount of expenses and the fastest time.”
Examples of hybrid clouds include: bursting out from an internal to a public cloud when more capacity is needed; running logic and processing in the cloud and leaving the database in the data center; performing highly parallelized database processing in the cloud combined with other logic processing in the data center; and so on. We will also see many storage-related use cases where companies and organizations of all sizes will augment their on-premise storage with cloud storage (potentially from various vendors) in a hybrid model deployment.
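The first of these patterns, bursting, is at its core a simple placement decision: keep the workload internal while capacity allows, and send only the overflow to the public cloud during peaks. A minimal sketch, with the load and capacity numbers invented for illustration:

```python
def placement(load: int, internal_capacity: int) -> tuple[int, int]:
    """Split a workload between internal capacity and burst capacity
    in a public cloud. Returns (internal_share, cloud_share)."""
    internal = min(load, internal_capacity)
    return internal, load - internal

# Within capacity: everything stays in the data center.
print(placement(80, 100))   # (80, 0)
# Peak period: only the overflow bursts to the public cloud.
print(placement(150, 100))  # (100, 50)
```

The economics follow directly from this split: the enterprise sizes its data center for the steady-state load and pays the cloud provider only for the peaks, rather than over-provisioning internally for the worst case.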
Some of the unique aspects of heterogeneous clouds working in concert with on-premise infrastructure include:
- Managing federated identity and security
- Migrating data, workloads, and applications
- Creating/buying and maintaining integration or "glue" applications to connect the clouds and to manage workflow and business processes
- Managing metering, billings, and relationships with multiple cloud vendors
Hybrid models can increase complexity due to interoperability issues and the need to deal with different tools, APIs, and management frameworks. Customers would like to use their familiar existing technologies, tools, and user interfaces to handle hybrid cloud scenarios seamlessly and securely. The ideal scenario is when applications in the cloud look and behave exactly like their counterparts within the datacenter. This can be challenging if you are dealing with multiple cloud vendors and a variety of cloud architectures. In a recent interview, Ellen Rubin, VP of Products at CloudSwitch, discussed how they are delivering technologies which will enable companies to use all of their existing infrastructure tools, networking architecture, security policies, active directories, firewalls, CDN systems, identity management systems, load balancers, and so on to interoperate seamlessly — and securely — with the applications in the cloud as if they are running locally.
Because of the existing heterogeneous infrastructure and the emergence of multiple clouds within and without large enterprises, cloud management technologies are becoming increasingly critical. A cloud management layer provides abstraction and governance capabilities and an adapter architecture enabling a "single pane of glass" for managing all the physical and cloud sub-environments.
Our survey data suggests that Small and Midsize Enterprises (SMEs) are adopting the hybrid and external cloud model much more quickly than others and are also the most likely to use multiple cloud vendors in an integrated way. What I'm generally finding is that individual business units and departments in mid-tier and large enterprises are using a bottom-up strategy, deploying cloud services in isolated pockets to solve specific and tactical problems. According to Ellen Rubin, CloudSwitch is seeing hybrid adoption taking place among the early adopter enterprises (F1000 and even F500) as the dominant model.
To learn more, join me at CloudSwitch's upcoming Webinar “Making Hybrid Clouds Work in the Real World” on Wednesday, Oct. 13th, 1:00 pm - 2:00 pm EDT. As a guest speaker, I will discuss our research findings on where cloud reality stands today versus all the hype, including which types of enterprises are adopting cloud and why (or why not). I will also provide an overview of the hybrid cloud architecture and explain why hybrid clouds are poised for the greatest growth. Watch the recording on demand.
Kamesh Pemmaraju is the Director of cloud research at the Sand Hill Group. He consults with companies—enterprises and technology vendors—to help accelerate their transition to the cloud. He is the co-author of the critically acclaimed "Leaders in the Cloud" research study, the result of 70+ hours of one-on-one interviews with CIOs and IT executives from 30 companies. His blog has been recognized among the top 50 bloggers on cloud computing and in CloudTP's best cloud computing blogs list. He welcomes your comments, opinions, and questions. For information on developments, customers, vendors, people, solutions, trends, news, opinions, interviews, webcasts, events, and blog posts on cloud computing, follow Kamesh on Twitter @kpemmaraju or via his LinkedIn profile: http://www.linkedin.com/in/kpemmaraju.