By John McEleney
In Ray Ozzie’s thoughtful memorandum to employees, “Dawn of a New Day,” he implores everyone in the company to embrace the cloud or perish. What I found even more interesting were his comments about complexity. "Complexity kills," said Ozzie. "Complexity sucks the life out of users, developers and IT. Complexity makes products difficult to plan, build, test and use. Complexity introduces security challenges. Complexity causes administrator frustration."
I think Ray is correct on both fronts: people need to push forward towards the cloud as it transforms the way most companies build, manage and consume applications and infrastructure. The danger as we adopt this major platform shift is that we undermine its impact by adding huge amounts of complexity to our organizations or our technology platforms.
Let’s be clear: no one starts a project by saying, “I’m going to design the most complex system possible.” Unfortunately, it is simply human nature that complexity enters our thought processes and systems incrementally, and before we know it, we have a tangled mess. Why is this? Is it because it’s just too hard to make things simple? Is it that these systems are simply technically complex? Or have we created a tech culture that believes you get more “value” or “stickiness” by designing a complex solution?
Simplicity requires determination and focus. We must have the courage to stand up to our peers and assert that usability and simplicity are not synonymous with being underpowered – quite the opposite: the system becomes even more powerful. This is often much harder to do as part of a broader organization than as an individual developer. Simplicity must be part of the DNA of the corporate culture – otherwise it will be rejected by the organization’s “complexity antibodies.”
Of course, enterprise and cloud infrastructures have real issues around security, control, automation, resiliency, performance… these are all complex, hairy problems that require some serious technical heavy lifting. But it’s equally clear to me that the cloud provides a new, fresh canvas on which we can innovate, create, design and dream about how to meet broad customer needs without drowning our innovation in a never-ending spiral of complexity.
Is it worth it for companies to invest in building a culture around simplicity? As the market cap of Apple, a company that is laser-focused on eliminating complexity, grows to $280B and outstrips Microsoft’s by almost 30%, I think the market has spoken.
By Ellen Rubin
We’ve written extensively about the benefits of hybrid clouds, since they are a core part of our founding vision at CloudSwitch. For most of this past year, the cloud market has been focused on defining the differences between public and private clouds and weighing the costs and benefits. Slowly the conversation has shifted to what we believe is the central axiom of cloud: it’s not all-or-nothing, on-premise or in an external cloud; it’s the ability to federate across multiple pools of resources, matching application workloads to their most appropriate infrastructure environments.
To reiterate some key thoughts we’ve written about in the past, the idea of hybrid clouds encompasses several use cases:
- Using multiple clouds for different applications to match business needs. For example, Amazon or Rackspace could be used for applications that need large horizontal scale, and Savvis, Terremark or BlueLock for applications that need stronger SLAs and higher security. An internal cloud is another federation option for applications that need to live behind the corporate firewall.
- Allocating different elements of an application to different environments, whether internal or external. For example, the compute tiers of an application could run in a cloud while accessing data stored internally as a security precaution (“application stretching”).
- Moving an application to meet requirements at different stages in its lifecycle, whether between public clouds or back to the data center. For example, Amazon or Terremark's vCloud Express could be used for development, and when the application is ready for production it could move to Terremark's Enterprise Cloud or similar clouds. This is also important as applications move towards the end of their lifecycle, where they can be moved to lower-cost cloud infrastructure as their importance and duty-cycle patterns diminish.
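To make the matching idea concrete, here is a small sketch of a placement policy in Python. It is purely illustrative – the providers, requirement flags, and rules are hypothetical stand-ins for the use cases above, not a CloudSwitch feature:

```python
# Hypothetical placement policy: route each workload to the environment
# that best matches its requirements (per the use cases above).
PLACEMENT_RULES = [
    (lambda w: w["behind_firewall"], "internal cloud"),
    (lambda w: w["needs_strong_sla"], "Savvis / Terremark / BlueLock"),
    (lambda w: w["horizontal_scale"], "Amazon / Rackspace"),
]

def place(workload: dict) -> str:
    """Return the first environment whose rule the workload satisfies."""
    for matches, environment in PLACEMENT_RULES:
        if matches(workload):
            return environment
    return "internal data center"  # default: stay on-premise

web_tier = {"behind_firewall": False, "needs_strong_sla": False,
            "horizontal_scale": True}
print(place(web_tier))  # -> Amazon / Rackspace
```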
CloudSwitch customers and prospects are clear that hybrid clouds are the way to go. Here are some examples of recent conversations:
“It’s going to take our internal IT group more than 18 months to build a private cloud; in the meantime we can use the public clouds now for on-demand capacity and scalability.” – VP of Business IT group at a large Wall Street firm
“We’re highly virtualized and we see external clouds as pools of virtualized resources that are available as extensions of our internal infrastructure.” – IT Director at a large healthcare company
“We have compliance data that will never leave our firewall but we like the idea of scaling out the computing resources in the cloud for peak periods.” – VP of Informatics at a large pharma
We’ve also been tracking validation from more official sources on the growth of public clouds and the hybrid model. For example, a recent study by the Sand Hill Group surveyed more than 500 IT executives and indicated that the biggest growth in cloud computing will be in hybrid clouds (from 13% now to 43% in three years). Another survey by Evans Data finds an even higher adoption rate among IT developers, suggesting that the hybrid cloud model is set to dominate the coming IT landscape.
It’s also interesting to see the importance of the hybrid model taking hold among industry insiders with many different perspectives. We saw this at VMworld 2010, where there was tremendous interest in hybrid clouds, from Paul Maritz’s keynote predicting a hybrid cloud future through many sessions and product announcements. Veteran cloud watcher James Urquhart points out that the hybrid approach lets you hedge your bets in cloud computing, using technology that allows you to decouple the application from the underlying infrastructure and move it to the right environment so you don’t get locked in. And even private cloud advocates acknowledge that hybrid has an essential role, where public cloud platforms serve as extensions of private cloud deployments.
It’s gratifying to see the CloudSwitch founding vision gain broad industry acceptance, with the hybrid model as a key enabler for cloud computing. It’s even more satisfying to see the vision coming to life as more and more customers leverage our technology to run their applications effortlessly in the right environment, whether an internal data center, private cloud, or public cloud. Enterprise users and their companies are the real winners.
By John Considine
Last week Citrix announced OpenAccess and OpenBridge, two new offerings for cloud computing. OpenAccess focuses on single sign-on and identity management while OpenBridge is designed to allow connections between local resources and cloud resources. The OpenBridge announcement highlights an interesting debate occurring around hybrid cloud computing – how should cloud networks be connected?
The debate centers on layer-2 versus layer-3 connectivity. Traditionally, network topologies for remote data centers, co-location facilities, and managed services have been built with layer-3 (routed) networks. This made sense since you were creating separate networks for each location and then defining rules for communication between them. Setting up these networks requires lengthy planning and re-configuration to enable the organization’s core network to communicate with the new external resources. In addition, the rules and services for servers deployed both in the data center and in remote facilities have to be updated. Although deploying layer-3 networks is time-consuming and complex, it’s the way things have always been done by the service providers.
Interestingly, most of the new cloud solutions are also following this layer-3 model because it’s so established and familiar. Amazon introduced their VPC offering last year, enabling connectivity between the customer’s data center and their cloud over a layer-3 network. VMware has released vShield Edge services that use layer-3 networks to connect virtual data center (VDC) networks.
So where is the debate? Enterprise IT is discovering that the attributes and configuration of layer-3 networking work against some of the most powerful concepts in cloud computing. Most enterprises are looking to the cloud for dynamic applications and deployments. They want to be able to scale resources on demand, rapidly provision new resources for development and testing, and enable self-service models. If, for each new environment, they had to get permission to alter the core networking or edge devices and then wait for someone to actually make the change, much of the agility advantage of cloud computing would be lost.
The layer-3 approach has two fundamental issues that make it problematic for cloud use cases: (1) layer-3 is location-dependent, and (2) changing configurations in the cloud involves changing core or edge services to match. If each cloud resource is an independent network with its own addressing scheme, then applications and services deployed to the cloud have to be updated relative to their location. Further, applications that want to interact with the cloud also have to be updated. Yes, this can be mitigated with DNS and other techniques, but that just leads back to problem #2.
Because of this realization, we looked for an alternative as we designed our CloudSwitch software, one that would allow enterprises to access the full power of cloud computing. With respect to networking, the answer was support for layer-2 connectivity between the cloud and the data center. Layer-2 networking allows for position independence since the network in the cloud is a direct extension of the network in the data center. This means that all servers keep the same addresses and routing protocols and thus become location-independent (from the user and application level, the location of the server cannot be determined). With this solution, users can choose where they want to run their applications – locally or in the cloud – without reconfiguring anything.
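The addressing difference is easy to demonstrate. Here is a minimal sketch with Python’s ipaddress module, using hypothetical subnets: under the layer-3 model the provider assigns addresses from its own range, so a server’s address (and every config that references it) must change; under a layer-2 extension the data center subnet simply continues into the cloud.

```python
import ipaddress

datacenter_net = ipaddress.ip_network("10.1.2.0/24")  # your internal subnet
provider_net = ipaddress.ip_network("172.31.0.0/16")  # cloud-assigned range

server_ip = ipaddress.ip_address("10.1.2.10")

# Layer-3 model: the server's existing address is invalid in the cloud,
# so the server (and everything that talks to it) must be reconfigured.
print(server_ip in provider_net)    # False

# Layer-2 extension: the cloud network IS the data center subnet, so the
# server keeps its address and nothing needs to change.
cloud_side_net = datacenter_net
print(server_ip in cloud_side_net)  # True
```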
Of course, creating a layer-2 connection between the data center and a cloud can be challenging. The actual bridging part is not too hard since the networking technologies have existed for quite some time. The challenges lie in two factors: cloud provider control and security implications. In terms of cloud provider control, for a layer-2 bridge to work, the cloud provider must allow the customer to control the networking within the cloud offering. This means that the cloud provider must allow customers to specify the addressing for each server they deploy in the cloud. Most public clouds do not have this capability; they assign addresses (either in ranges or per server) and almost universally, these will not align with your internal addressing schemes. This means that a “standard” layer-2 solution is not compatible with most public clouds. Because we believe that having a layer-2 option is critical for enterprises looking to embrace cloud computing, we have worked hard to support this in all clouds, even when the native cloud doesn’t. This is one of the strengths of our Cloud Isolation Technology™ – adding value and capabilities to each cloud we support.
The bigger challenge of extending your networks to the cloud is, of course, security. By bridging your networks to the cloud, you have to trust the cloud provider and their security measures. This can be difficult because, as a customer, you have no control over what the cloud provider implements or changes over the course of operation. This is another reason we built our CloudSwitch software around our Cloud Isolation Technology. If you really want to create a hybrid cloud computing environment, you need the confidence to integrate tightly with the cloud. CloudSwitch enables this confidence by allowing the customer to separate their environment from the cloud provider’s infrastructure in a highly controlled fashion. This means that not only do we protect your network and storage traffic from being accessed by the cloud provider, but we also prevent any traffic from outside our isolation layer from entering your data center.
In the end, we believe that to achieve true hybrid cloud computing, a solution must support both layer-2 and layer-3 networking, and that is what we have built. Our customers can choose to interact with their servers in the cloud through an automated layer-2 connection, or create specific rules and routing for layer-3 access, and because of our Cloud Isolation Technology, we can support this even in clouds that don’t natively allow full control over network addressing.
It is great to see that a major player like Citrix has embraced the idea of layer-2 bridging with their OpenBridge offering, as it helps highlight the importance of this network technology. Of course, there is a lot more to cloud federation than networking. Full security control, resource allocation and management, application migration, and lifecycle management are other key elements essential for a successful deployment, all automated and simplified by CloudSwitch.
By Guest Author, Kamesh Pemmaraju
In a typical enterprise today, one finds a heterogeneous mix of modern platforms and legacy platforms of many vintages. With the emergence of a variety of cloud service models (IaaS, SaaS, PaaS) and an array of deployment models (private, public, and community), we will most likely see a heterogeneous mix of cloud environments in the enterprise of the future. Furthermore, cloud computing may be a great fit for some applications and workloads, but there will always be some data, processes, and applications that will remain on-premises for reasons of regulatory compliance, mission-critical or classified data, control, and cost.
While the trend toward cloud computing is inevitable, security, privacy, lock-in, and performance continue to be major obstacles for accelerated public cloud adoption. The lack of standards is another barrier as one CIO of a large insurance company said during our research:
"The big topic we are discussing is if we are not happy with the SLA of an existing vendor, how quickly can we re-outsource? Lock-in, interoperability and standards are big issues for us. I can’t move my workload easily between clusters due to incompatibilities between vendors and between virtual machines. We have to think about compatibility of compute, storage, and network virtual resources.”
– CIO, insurance company
Because there aren't established industry standards just yet in cloud computing, most enterprises remain wary about getting locked into a single vendor architecture and API. As adoption increases, however, open standards will naturally emerge. While premature standards can stifle market innovation, CIOs believe proprietary standards can be worse (and history has proved that the half-life of such standards tends to be very short).
The nature of the beast is such that customers need to consider using multiple cloud providers to meet their specific scalability, security, flexibility, and functionality needs. One Fortune 500 financial company CIO we interviewed as part of our "Leaders in the Cloud" research study said their company will move, over the next 3-5 years, 20% of its application portfolio to specific clouds that meet the workload characteristics of its applications. With a typical large enterprise landscape of 10,000 to 15,000 applications, that 20% translates to 2,000-3,000 applications! The numbers are staggering when you scale that out to the Global 2000 companies.
Our study surveyed more than 500 IT executives and indicated that the biggest growth will be in hybrid clouds (from 13 percent now to 43 percent in three years). These executives are looking for ways to seamlessly migrate/interoperate their data and applications (both legacy and new) between clouds and their datacenters based on their own business needs, risks, and architectural considerations.
We will see a number of use cases and variations of the hybrid approach. Enterprise customers will pick and choose applications and their IaaS, PaaS, and SaaS (*aaS) vendors based on their business needs, thus creating a diverse and heterogeneous cloud environment. One healthcare company CIO emphasized that this is actually their preferred adoption model and explained why:
“Rather than stick to one [cloud] product that meets all of our needs, we have taken the approach of using multiple [cloud] vendors and solutions. Even though this may increase the integration complexity, we find that we get the most innovative solutions with the least amount of expenses and the fastest time.”
Examples of hybrid clouds include: bursting from an internal cloud to a public cloud when more capacity is needed; running logic and processing in the cloud while leaving the database in the data center; performing highly parallelized database processing in the cloud combined with other logic processing in the data center; and so on. We will also see many storage-related use cases in which companies and organizations of all sizes augment their on-premise storage with cloud storage (potentially from multiple vendors) in a hybrid deployment.
Some of the unique aspects of heterogeneous clouds working in concert with on-premise infrastructure include:
- Managing federated identity and security
- Migrating data, workloads, and applications
- Creating/buying and maintaining integration or "glue" applications to connect the clouds and to manage workflow and business processes
- Managing metering, billing, and relationships with multiple cloud vendors
Hybrid models can increase complexity due to interoperability issues and the need to deal with different tools, APIs, and management frameworks. Customers would like to use their familiar existing technologies, tools, and user interfaces to handle hybrid cloud scenarios seamlessly and securely. The ideal scenario is when applications in the cloud look and behave exactly like their counterparts within the datacenter. This can be challenging if you are dealing with multiple cloud vendors and a variety of cloud architectures. In a recent interview, Ellen Rubin, VP of Products at CloudSwitch, discussed how they are delivering technology that enables companies to use all of their existing infrastructure tools, networking architecture, security policies, active directories, firewalls, CDN systems, identity management systems, load balancers, and so on to interoperate seamlessly – and securely – with applications in the cloud as if they were running locally.
Because of the existing heterogeneous infrastructure and the emergence of multiple clouds both inside and outside large enterprises, cloud management technologies are becoming increasingly critical. A cloud management layer provides abstraction and governance capabilities, along with an adapter architecture enabling a "single pane of glass" for managing all the physical and cloud sub-environments.
Our survey data suggests that Small and Midsize Enterprises (SMEs) are adopting the hybrid and external cloud model much more quickly than others and are also the most likely to use multiple cloud vendors in an integrated way. What I'm generally finding is that individual business units and departments in mid-tier and large enterprises are using a bottom-up strategy, deploying cloud services in isolated pockets to solve specific, tactical problems. According to Ellen Rubin, CloudSwitch is seeing hybrid adoption emerging as the dominant model among early-adopter enterprises (F1000 and even F500).
To learn more, join me at CloudSwitch's upcoming webinar, “Making Hybrid Clouds Work in the Real World,” on Wednesday, October 13th, 1:00-2:00 pm EDT. As a guest speaker, I will discuss our research findings on where cloud reality stands today versus the hype, including which types of enterprises are adopting cloud and why (or why not). I will also provide an overview of hybrid cloud architecture and explain why hybrid clouds are poised for the greatest growth. Watch the recording on demand.
Kamesh Pemmaraju is the Director of cloud research at the Sand Hill Group. He consults with companies – enterprises and technology vendors – to help accelerate their transition to the cloud. He is the co-author of the critically acclaimed "Leaders in the Cloud" research study, the result of 70+ hours of one-on-one interviews with CIOs and IT executives from 30 companies. His blog has been recognized among the top 50 bloggers on cloud computing and in CloudTP's list of best cloud computing blogs. He welcomes your comments, opinions, and questions. For the latest developments, trends, news, opinions, interviews, webcasts, events, and blog posts on cloud computing, follow Kamesh on Twitter @kpemmaraju or connect with him on LinkedIn at http://www.linkedin.com/in/kpemmaraju.
By Pavan Pant
We recently talked about the latest release of CloudSwitch Enterprise, and since that blog post went live we have garnered a lot of interest in our public IP gateway to the cloud – the ability to provide secure access from the public Internet to servers in the cloud. There are many cases in which customers want to deploy Internet-facing applications to the cloud to reduce bandwidth constraints within their data centers and improve performance by moving compute resources closer to their customers. To accomplish this, customers needed a firewall in the cloud to ensure secure Internet access to their cloud servers, which is exactly what CloudSwitch delivered.
As Director of Product Management at CloudSwitch, I have had the pleasure of speaking with our customers to understand their use cases, and I have found that they are thinking beyond just the migration of servers to the cloud. Customers have started adding servers to the cloud in a scalable fashion to handle surges in traffic, and have frequently requested public connectivity to their cloud servers via a firewall. This shows a broadening of the use cases and growing adoption of the cloud. Given that public IP access has been our most requested capability so far, I thought it would be useful to dive into some use cases and how our firewall in the cloud can be configured to meet them.
Use Case 1: Hosting Infrastructure in the Cloud
One of our customers was seeing heavy traffic spikes during major holidays and marketing campaigns. Rather than provision new equipment or rent more space in its colo – both expensive options – this company now leverages the cloud and CloudSwitch to handle peak overflow traffic easily, giving website visitors secure, public connectivity to cloud resources through public IP addresses, while managing these same resources through CloudSwitch’s secure data center connections. With the public IP gateway in the cloud our customers can now securely host a multi-tier application in the cloud.
Other requests include using the cloud for extra capacity during traffic surges when the market opens and closes. The idea is to use a firewall for public connectivity to the cloud and a load balancer to route overflow traffic automatically to the appropriate server in the cloud.
Consider a scenario where you have two front-end web servers in the cloud, a SharePoint 2007 server, a SugarCRM server, and database servers running SQL Server 2008. These servers have been migrated to the cloud using CloudSwitch, which means they have the same IP addresses as they did in the data center. This diagram shows how CloudSwitch can deploy a colo-type footprint in the cloud by hosting a multi-tiered application in a secure fashion with public connectivity.
Once you have your servers in the cloud, the next step in allowing public connectivity is to move our SmoothWall firewall (in the “Network Library” folder) to the cloud. Our network library only has one firewall at the moment, but we intend to add many more network-related infrastructure components in the near future.
You will notice that the new public IP access feature has one interface (the red interface) that is assigned a public address, while another interface (the green interface) can be placed on the LAN for your servers in the cloud. Once you have moved this firewall to the cloud and started it there, it connects to the Internet through the red interface and acquires a public IP address (e.g., an Amazon Elastic IP). It then connects to your data center through the green interface on the same subnet as the CSA. The public IP address is reserved in Amazon for as long as the appliance exists in CloudSwitch – the IP address is released only when you delete the appliance. This means you can power off the appliance and still keep the IP address. All of this can be configured by opening a console window through CloudSwitch.
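CloudSwitch handles the Elastic IP lifecycle automatically, but for readers curious about the raw mechanics, here is a minimal sketch of the equivalent EC2 API calls using boto3 (a modern AWS SDK, shown purely for illustration; the interface ID is a hypothetical placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Reserve a public IP (an Elastic IP). It stays reserved to your account
# even if the instance behind it is powered off.
alloc = ec2.allocate_address(Domain="vpc")
print("Reserved:", alloc["PublicIp"])

# Attach it to the firewall appliance's public-facing ("red") interface.
ec2.associate_address(
    AllocationId=alloc["AllocationId"],
    NetworkInterfaceId="eni-0123456789abcdef0",  # hypothetical placeholder
)

# Later, when the appliance is deleted, the address returns to Amazon's pool.
ec2.release_address(AllocationId=alloc["AllocationId"])
```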
Once you have the firewall in the cloud configured, you can create firewall rules in SmoothWall to determine what type of traffic from the public Internet should be sent to your servers in the cloud.
In addition, you can configure the firewall to send traffic to specific subnets or to exclude traffic from reaching specific servers on a subnet. Once these firewall rules have been configured, you will be leveraging a cloud provider’s bandwidth for public connectivity and have the flexibility to grow your footprint in the cloud instead of being limited by a traditional data center footprint.
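To make the rule behavior concrete, here is a toy model in Python of the kind of decisions such rules encode. This is not SmoothWall’s actual rule engine – the addresses, ports, and structure are hypothetical:

```python
import ipaddress
from typing import Optional

# Hypothetical port-forwarding table: (public port, protocol) -> cloud server.
FORWARD_RULES = {
    (80, "tcp"): "10.1.2.10",    # front-end web server 1
    (443, "tcp"): "10.1.2.10",
    (8080, "tcp"): "10.1.2.11",  # front-end web server 2
}

# Subnets that must never receive forwarded public traffic (e.g., the DB tier).
EXCLUDED_SUBNETS = [ipaddress.ip_network("10.1.3.0/24")]

def route(public_port: int, protocol: str) -> Optional[str]:
    """Return the internal address to forward to, or None to drop the packet."""
    target = FORWARD_RULES.get((public_port, protocol))
    if target is None:
        return None
    if any(ipaddress.ip_address(target) in net for net in EXCLUDED_SUBNETS):
        return None
    return target

print(route(443, "tcp"))  # -> 10.1.2.10
print(route(22, "tcp"))   # -> None (dropped)
```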
Use Case 2: Remote Office Scenario
Another use case we hear about involves giving remote users access to shared resource pools in the cloud. Instead of having remote users go through a VPN server in the data center and then out to the cloud, customers would like to conserve bandwidth by providing them with direct access to cloud resources via a secure layer-3 tunnel.
This scenario is also possible via CloudSwitch’s firewall in the cloud. SmoothWall can interoperate with any VPN product that supports IPSec and standard encryption techniques such as 3DES. As a result, customers’ employees can now access servers in the cloud from a remote office over a secure layer-3 tunnel.
Full Feature Set for Firewall in the Cloud
Unlike the simplistic firewalls provided by cloud providers, our SmoothWall firewall has some useful features that CloudSwitch allows customers to leverage in the cloud. It is worth going through some of these:
1. Timed Access
CloudSwitch’s firewall in the cloud can create rules that allow or disallow access at certain times of day for a specified group of servers in the cloud. The timed access controls are applied only to the listed machines. Customers enter one IP address or network-with-netmask per line in the supplied text box. For example, 192.168.168.0/24 will block or allow the entire range from 192.168.168.0 through 192.168.168.255; alternatively, it can be entered as 192.168.168.0/255.255.255.0.
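As a quick sanity check on the two notations, a short Python snippet (independent of SmoothWall) confirms that the /24 prefix and the dotted netmask describe the same 256-address range:

```python
import ipaddress

a = ipaddress.ip_network("192.168.168.0/24")
b = ipaddress.ip_network("192.168.168.0/255.255.255.0")
assert a == b  # the two notations are equivalent

print(a.num_addresses)                              # 256
print(ipaddress.ip_address("192.168.168.42") in a)  # True
print(ipaddress.ip_address("192.168.169.1") in a)   # False
```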
2. Traffic Prioritization
SmoothWall can decide that some network traffic is more urgent than the rest. Imagine your network connection as a multilane freeway or motorway: specific bandwidth (lanes) can be allocated to specific servers.
3. Logging
Logging for CloudSwitch’s firewall in the cloud includes reports of who was trying to do what. Much like any standard log viewer, customers can select the date they are interested in viewing using the drop-down boxes at the top of the page. The body of the page displays a table of packets that were dropped by the firewall, including the source and destination IP addresses and ports, as well as the protocol involved.
4. IP Block Configuration
This page enables the administrator to selectively block external IP addresses from accessing the SmoothWall and any machines behind it.
5. Dynamic DNS
If you have a connection with a dynamic IP address, the dynamic DNS section of SmoothWall lets you use a dynamic DNS service provided by dyndns.org, no-ip.com, hn.org, dhs.org and/or dyns.cx. These services allow people without a static IP address to have a subdomain name pointing to their computer, enabling them to run services like a web server, VNC, etc.
The first step in using dynamic DNS with SmoothWall is, of course, to subscribe to one of these free services from a supported provider. Once this is done, you simply fill in the required configuration information on SmoothWall's dynamic DNS configuration page.
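Under the hood, most of these providers speak a variant of the same simple HTTP update protocol (commonly called "dyndns2"). A minimal update client might look like the sketch below, assuming the requests library; the endpoint follows the classic dyndns.org convention, and the credentials and hostname are placeholders:

```python
import requests

USER, PASSWORD = "example-user", "example-pass"  # placeholder credentials
HOSTNAME = "myserver.dyndns.org"                 # placeholder hostname

def update_dns(new_ip: str) -> str:
    """Tell the dynamic DNS provider that HOSTNAME now points at new_ip."""
    resp = requests.get(
        "https://members.dyndns.org/nic/update",
        params={"hostname": HOSTNAME, "myip": new_ip},
        auth=(USER, PASSWORD),
        timeout=10,
    )
    # Typical replies: "good <ip>" on success, "nochg <ip>" if unchanged.
    return resp.text.strip()

print(update_dns("203.0.113.7"))  # 203.0.113.x is a documentation range
```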
While all these capabilities are great, it raises the question: why wouldn’t customers just use the firewalls provided by Amazon or Terremark? As mentioned earlier, cloud providers typically offer firewalls with only simplistic rule sets. Customers no longer need to be constrained by a cloud provider’s firewall – with CloudSwitch they can define a rich set of firewall rules, services and policies that control public Internet access to their servers in the cloud.
I hope that the use cases and feature set outlined in this blog post help customers grasp what it takes to provide secure, public connectivity to resources in the cloud.
We recognize that security is of paramount importance in the cloud especially when it comes to allowing users to access servers through the public internet. Our goal at CloudSwitch has always been to provide customers with a secure, simple way to leverage the cloud. Stay tuned for more exciting new enhancements as we continue to make it easier for customers to take advantage of the cloud.
By John Considine
Just a week after our blog post on the telcos, another big company has joined the cloud computing tsunami: Oracle announced its “cloud in a box” offering as well as new offerings of Oracle software running on Amazon’s EC2.
For a company whose leader shunned the term “cloud” last year, this is a lot of cloud announcements in one week. Oracle’s new Exalogic Elastic Cloud is perhaps the first “cloud in a box” solution that is actually delivered in a box (of hardware). Unlike the offerings we have seen from Eucalyptus, Nimbula, Azure, and VMware, the Exalogic product contains the control software as well as the hardware components to make a virtualized resource pool. The other vendors have focused on delivering a software solution that can be combined with the users’ choice of servers, storage, and networking gear to build a cloud.
Oracle, powered by Sun’s server and system technology, has decided to deliver a complete cloud solution that contains up to 360 CPU cores, 2.8TB of RAM, and 40TB of storage in a single rack of equipment. This big box is reportedly priced at just over $1M. Oracle’s motivation for this box is to deliver on the promise of building an entire stack of both hardware and software that has been engineered to work together to deliver better performance, reliability, and scale. Overall, the Exalogic system has impressive performance characteristics and may be a great solution for data center consolidation, but…
Placing the term “Elastic” in the name of this offering stretches the accepted definition of the term as it relates to cloud computing. The Exalogic server is a contained set of resources that is purchased, operated, and maintained as part of the enterprise infrastructure. You can scale your applications up and down within this solution, but in the end, you are limited to the number of cores, amount of RAM, and size of the storage you purchased. While you can add more racks to the solution, you are stuck paying for the whole thing regardless of what you actually use – not exactly elastic, and not pay-for-only-what-you-use. My only other problem with Exalogic is the range of supported operating systems – we like the Linux and Solaris support, but a quote from Rick Schultz of Oracle – “There is no demand for Windows at the moment” – makes me wonder who they are talking to. More than half the enterprise workloads CloudSwitch has deployed to the cloud are Windows-based; how can there be no demand for Windows in Exalogic?
The other interesting difference between the Exalogic solution and the big (public) cloud offerings is the design center for the hardware. Clouds like Amazon and Google were developed around “stripped down” servers that act as generic compute components. The redundant components normally used to improve the reliability of a server are removed from the compute nodes to reduce cost, and software and other application-level techniques make up for the components that can fail. Each of the servers in the Exalogic solution has redundant power supplies, two solid-state disk drives, and redundant InfiniBand controllers. This more expensive hardware allows the system to survive component failures with minimal disruption to the running applications – a traditional enterprise infrastructure design, with high reliability to support many VMs packed onto a single piece of hardware.
The difference between the two approaches highlights the upcoming battle between architectures in the cloud – stripped down commodity servers versus highly available high-end servers as the basis for cloud computing. The early leader in this space is the commodity server approach because of the types of applications initially targeted to clouds – stateless horizontally scalable web applications. But as we start putting more core enterprise applications into the cloud, the HA architectures become more interesting, and thus we expect this architecture to gain ground. We see these architectures gaining ground already with clouds like Terremark, BlueLock, and Savvis.
The other announcement this week from Oracle is expanded support for running Oracle software in Amazon’s Elastic Compute Cloud. Oracle has provided templates (AMIs) in Amazon for its database software since 2008, and this week it expanded the list of applications it will support in Amazon to include Oracle E-Business Suite, Oracle's PeopleSoft Enterprise, Oracle's Siebel CRM, Oracle Fusion Middleware, Oracle Database, and Oracle Linux. In addition to expanding the software supported on AWS, Oracle has taken the step of “certifying” the software for operation in Amazon. This means that customers can now get support from both Oracle and AWS for those applications. Although Oracle’s lead cloud story seems to be about the Exalogic box, I believe this announcement does more to advance cloud computing for enterprises. Support for these key Oracle products in Amazon’s cloud adds credibility to public cloud computing, as it allows enterprises to really use the cloud for their core applications. This is one of the areas that a cloud provider cannot fix on its own; it is up to the software vendors to expand their horizons and embrace the cloud, and Oracle is blazing the trail.
I think the only downside to the Oracle-Amazon announcement is the lack of integration with Oracle’s control software. The FAQs from Amazon and Oracle emphatically state that the management controls for Oracle deployments to the cloud are exclusively the Amazon console and tool set. This is a shame, since we believe that seamless integration between the data center and the cloud is key to a successful enterprise cloud deployment; creating a disjointed environment just adds work with no value for the enterprise and ultimately leads to cloud lock-in. Our enterprise customers have told us consistently that they want a “single pane of glass” from which they can manage pools of resources both internal and external.
Finally, while I like the architecture of the Exalogic Elastic Cloud, and believe that it could form the basis of a new class of cloud computing offerings, it too may be missing a critical point. If an enterprise decides to deploy their private cloud on this technology, there is no connection or relationship between the applications deployed to the private cloud and those running in the public cloud. This, once again, highlights the importance of cloud federation – you will never break the cycle of buying more hardware and infrastructure if you don’t embrace technology that allows you to access the public clouds.
By Ellen Rubin
This week’s Verizon announcement about their new CaaS offering for SMBs highlights a strange situation in the cloud computing market. While Amazon has been growing explosively and MSP/colo providers like Rackspace, Terremark and Savvis have rushed to embrace cloud in their business models, the telcos have been slow to enter the fray.
Telcos in many ways seem like the most likely players to lead and ultimately win the land-grab of cloud computing. They’ve got the huge scale, geographic coverage, existing enterprise relationships and experience in service delivery that would appear to give them an unfair advantage. As noted in the Verizon announcement and some recent blogs, telcos have a “unique opportunity to position cloud computing as an extension of their managed networking solutions (such as MPLS-based VPNs), by offering ‘on-net’ cloud computing capabilities backed up by end-to-end service-level agreements (SLAs).” In fact, the networking infrastructure and the ability to offer dedicated, secure access is one of the telcos’ greatest strengths, since it addresses some of the key concerns about cloud security and bandwidth.
So it’s worth considering why the telcos aren’t yet a dominant force in the industry. To a certain extent, it’s taken a couple of years for them to perceive the threat of Amazon et al. to their core businesses. The response has been primarily a defensive one, as noted by IDC’s Melanie Posey: “Right now they’re concerned with, ‘If our existing customers want cloud in addition to the traditional hosting we’re offering them, we have to have something too or they’ll take that incremental business to somebody else.’” Marketing announcements and pricing-model changes have so far been the fastest and lowest-cost response to this threat. For example, some telcos are now offering per-month pricing instead of the traditional annual or multi-year structures.
In parallel, the telcos are doing the heavy lifting required to build new cloud services. A lot of the real spending so far in the cloud market is being done by these players: buying new gear from the server, storage and networking vendors; installing new software and management tools from the hypervisor and service management players; designing new architectures with the help of consulting firms; leveraging existing infrastructures from Terremark, OpSource and others, etc. This all takes significant time and money.
While this investment is taking place, there’s relatively little to see in terms of live customer deployments. But in the meantime, the first-mover cloud providers and customer early adopters are moving full-speed to test and improve their offerings and cloud footprints. They’re shaping and defining cloud requirements and best practices based on real-life customer engagements. The risk for the telcos in being late to the party is that they’re not getting the customer insights first-hand and are missing the direct experience needed for successful scale-out and service delivery. Without this, they could end up delivering too little, too late. Still, given the size and projected growth of the cloud market opportunity, there’s no doubt it would be a mistake to count the telcos out.
By Pavan Pant
As with any transformative technology that is new to the market, both public and private clouds have generated massive amounts of hype, bold predictions, a whole lot of confusion and raging debates amongst the cloud cognoscenti. Opinions vary across the spectrum with some experts claiming that data centers will be rendered obsolete by the public cloud, while others are dismissive of the public cloud but support private clouds. It’s clear to us at CloudSwitch that a more likely scenario lies squarely in the middle of those two extremes. This week at VMworld (where we were exhibiting with our partner, Terremark), we were pleased to hear that VMware believes that “hybrid cloud is the tide coming in.” From Paul Maritz’s keynote through many sessions and product announcements (including the release of the long-awaited vCloud Director), the message was all about hybrid clouds.
One of our previous blog posts discussed the notion of hybrid clouds and the fact that most enterprises will follow such an approach in the future. Amazon, Terremark, Rackspace, Savvis, BlueLock and other public cloud providers give customers elasticity, better service delivery and low CapEx. Meanwhile, solutions such as Eucalyptus and VMware’s vCloud Director provide the interface and management tools to help organizations build private clouds while interfacing with public clouds to create hybrid cloud models.
The two use different APIs for their hybrid models: Eucalyptus delivers tight integration with EC2 using Amazon’s APIs, while VMware vCloud Director works with vCloud Datacenter Services (VMware’s terminology for public cloud providers, such as Terremark) that leverage the vCloud APIs. However, these technologies do not help create an environment that spans hypervisors and cloud providers without changing the applications. If customers build private clouds that do not use the same virtualization infrastructure as their preferred public clouds, then what does it really mean to hybridize their clouds?
Consider a scenario where a customer builds a private cloud using Eucalyptus or VMware vCloud Director. That private cloud still ends up being different from the data center (much like a public cloud): the networking may be different, the versions of the virtualization technology may be different, and the storage infrastructure may be different. All this means that applications in the data center will need to be changed before moving to the private cloud. For example, if your QA team runs servers on their own subnet in the data center, how can this be transitioned to a private or public cloud without incurring additional costs to change those servers?
CloudSwitch’s core value proposition lies in the ability to securely transport a customer’s existing virtual infrastructure to the cloud provider of their choice, independent of the provider’s underlying virtualization infrastructure (VMware, Xen, etc.). This effectively allows customers to securely move and operate servers from their data center, across hypervisors, to private or public clouds without requiring any modifications to their applications – we maintain the same IP address, MAC address, storage controllers, subnet information, etc. Once customers have moved their servers to the cloud, they can operate and manage them just as they would in their data center. CloudSwitch has an intuitive web-based interface that gives customers server lifecycle management options such as start, stop and clone.
Similarly, if customers have a private cloud that uses either Eucalyptus or VMware vCloud Director, CloudSwitch can speak to those APIs and facilitate transfer and management from these private clouds to public clouds. This enables a hybrid model where private clouds leverage public clouds for spikes in usage (cloudbursting), or for lab-on-demand use cases such as training and POCs. CloudSwitch does all the work of integrating the environments across these private and public cloud hypervisors, merging networks and transferring servers without modifying them in any way.
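The general pattern behind “speaking to those APIs” is an adapter layer: each provider-specific adapter implements the same small interface, so everything above it is provider-agnostic. A hypothetical sketch (not CloudSwitch’s actual internals):

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """One adapter per provider API; callers never see provider details."""

    @abstractmethod
    def start_server(self, server_id: str) -> None: ...

    @abstractmethod
    def stop_server(self, server_id: str) -> None: ...

class EC2Adapter(CloudAdapter):
    def start_server(self, server_id: str) -> None:
        print(f"EC2 API: starting {server_id}")    # would call Amazon's API

    def stop_server(self, server_id: str) -> None:
        print(f"EC2 API: stopping {server_id}")

class VCloudAdapter(CloudAdapter):
    def start_server(self, server_id: str) -> None:
        print(f"vCloud API: starting {server_id}")  # would call the vCloud API

    def stop_server(self, server_id: str) -> None:
        print(f"vCloud API: stopping {server_id}")

def cloudburst(adapter: CloudAdapter, server_id: str) -> None:
    """Identical call path regardless of which cloud sits underneath."""
    adapter.start_server(server_id)

cloudburst(EC2Adapter(), "web-01")     # EC2 API: starting web-01
cloudburst(VCloudAdapter(), "web-01")  # vCloud API: starting web-01
```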
Many years ago, I had the privilege of working on the first iterations of RSA’s identity federation product, both as an engineer and as a product manager. Federated single sign-on enabled the portability of identities across security domains and allowed for the secure exchange of sensitive data outside the firewall without requiring any changes to the identity itself.
While the markets for Identity Management and cloud computing are unambiguously different, the notion of federation to make portability and interoperability easier for enterprises is a common theme. CloudSwitch is in a unique position to help enterprises with true cloud federation by moving workloads seamlessly from the data center to the cloud (private or public), between private and public clouds (hybrid), across public clouds and back to the data center without requiring customers to make any changes to their applications. Regardless of the starting point, CloudSwitch offers customers an easy, effective method to leverage the benefits of the cloud while ensuring portability across clouds.
By Ellen Rubin
As we work with dozens of companies that are actively running pilots and doing early deployments in the cloud, it made me think about what the “new normal” will look like in enterprise IT infrastructure. A recent report from the Yankee Group shows that adoption of cloud is accelerating, with 24% of large enterprises already using IaaS, and another 37% expected to adopt IaaS within the next 24 months. It’s clearly a time of major shifts in the IT world, and while we wait for the hype to subside and the smoke to clear, some early outlines of the new paradigm are emerging. Here’s what it looks like to us at CloudSwitch:
- Hybrid is the dominant architecture: on-prem environments (be they traditional data centers or the emerging private clouds) will need to be federated with public clouds for capacity on demand. This is particularly true for spiky apps and use cases driven by short-term peaks such as marketing campaigns, load/scale testing and new product launches. The tie-back to the data center from external pools of resources is a critical component, as is maintaining enterprise-class security and control over all environments. Multiple cloud providers, APIs and hypervisors will co-exist and must be factored into the federation strategy.
- Applications are “tiered” into categories of workloads: just as storage has been tiered based on how frequently it’s accessed and how important it is to mission-critical operations, application workloads will be categorized based on their infrastructure requirements. In the end, app developers and users don’t really want to have to care about where and how the application is hosted and managed; they just want IT to ensure a specific QoS and meet specific business requirements around geography, compliance, etc. The cloud offers a new opportunity to access a much broader range of resources that can be “fit” against the needs of the business. In some cases, the current IT infrastructure over-provisions and over-delivers production gear for lower-importance or lower-usage apps; in other cases it’s woefully under-delivering.
- IT becomes a service-enabler, not just a passive provider of infrastructure resources: IT is now in a position to provide self-service capabilities across a large set of resources, internally and externally, to developers, field and support teams. This requires a new set of skills, as we’ve blogged about before, but the cloud gives IT the opportunity to meet business needs in a much more agile and scalable way, while still maintaining control over who gets to use which resources and how.
- The channel shifts from resellers to service providers: as noted by Andrew Hickey at ChannelWeb, the opportunities for resellers will need to shift as companies reduce their large hardware and software buys in favor of the cloud. The new focus will be on providing services and consulting with an opex model and monthly payments, and expertise in change management and predictive use models will become core competencies. We’ve already started to see this shift at CloudSwitch with a new crop of cloud-focused consulting/SI boutiques springing up in the market to help CIOs plan their cloud deployments.
For many enterprises, these shifts are still being discussed at a high level as CIOs formulate their cloud strategies. Other organizations are diving right in and selecting a set of applications to showcase the benefits of cloud to internal stakeholders. We’ve been fortunate at CloudSwitch to work with some of the earliest cloud adopters and with our cloud provider partners to help define some of the “new normal.”
By John Considine
We’ve been hard at work over the past two years building the underlying infrastructure for our CloudSwitch software, with the design goal of innovating rapidly on top of this architecture. The latest release of CloudSwitch Enterprise has proven that we’re able to introduce new capabilities quickly, and provides some insight into features that are on the way.
This release contains some great features and improvements driven by our early customers. We’ve introduced the #1 request from customers and prospects – public IP access. To understand the background on this feature, we have to start with the CloudSwitch security model: we’ve designed our system for maximum security when deploying applications to the cloud. In earlier versions of our software, this design meant that all access to machines deployed in the cloud was routed through the data center. This allowed customers to use their existing firewalls and rules to govern what happens in the cloud, and for many enterprises it remains the preferred mode for deploying the CloudSwitch solution. However, many of our customers want to deploy Internet-facing applications and are looking to the cloud to reduce bandwidth constraints within their data centers and improve performance by moving their computing closer to their customers. By routing all traffic back to the data center, we were neither relieving the bandwidth constraints nor letting customers take advantage of the geographical distribution of their computing.
What we needed to do was to let our customers control the internet access to their servers in the cloud—which sounds a lot like a firewall. In keeping with our philosophy of maintaining existing enterprise policies and procedures, we did not want to introduce a new and partial firewall solution into the customer’s environment. We wanted to allow the customers to deploy existing firewall solutions into the cloud so that they have the knowledge, trust, and control over their cloud resources.
The new public IP access feature allows the end user to assign a public IP address to one of the interfaces on a standard firewall appliance. The other interface can be placed on the LAN for your servers in the cloud. This allows the customer to define rules, services, and policies for how public Internet access is granted to resources in the cloud. This is immensely powerful – it brings services like VPN access, DHCP, dynamic DNS, proxies, full firewall rule sets, and logging to cloud deployments. You’re no longer limited to the set of functions that a cloud provider offers for control of firewall resources or load balancers.
A second new feature is the CloudSwitch Library. This is a resource area that contains virtual machines and infrastructure elements that can be deployed to the cloud. You may have seen the beginnings of this in the June commercial release of our product, where we introduced the “Sample VMs” folder. In the latest release, we have expanded this feature to allow different types of virtual machines and appliances to be deployed to the cloud. This release includes a “Network Appliances” folder for network-related infrastructure components – think firewalls, load balancers, and WAN optimizers. We have included a popular open-source firewall and load balancer (SmoothWall + HAProxy) so our customers have access to a full-featured firewall solution for the cloud.
The final improvement in this release is better geographic control over where you run your applications in the cloud. Since we have customers not only from all over the US but from all over the world, we have made it easier to select a geography within our user interface. Users can enable and select these regions quickly and easily to deploy workloads into data centers from Ireland to Virginia to Singapore. What is really cool to see in our product is a single network spanning all of these data centers, allowing the virtual machines to operate seamlessly, as if they were all local.
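For reference, the raw building block underneath this kind of region selection on Amazon looks like the sketch below, using boto3 purely for illustration (the CloudSwitch UI abstracts this away):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Each region is an independent pool of data centers (Ireland, Virginia,
# Singapore, ...); a workload is deployed into exactly one of them.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"], region["Endpoint"])
```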
CloudSwitch’s unique architecture and our powerful Cloud Isolation Technology™ make it possible to create and deliver these new features quickly. We’re constantly enhancing our software to make everything “just work” for enterprises in the cloud.