By Damon Miller, Director of Technical Field Services
One of the most interesting trends in cloud computing is the emergence of “hybrid” solutions that span environments historically isolated from one another. A traditional data center offers finite capacity in support of business applications, but it is ultimately limited by obvious constraints (physical space, power, cooling, etc.). Virtualization has extended the runway a bit, effectively increasing density within the data center, but the physical limits remain. Cloud computing opens the door to huge pools of computing capacity worldwide. This “infinite” capacity is proving tremendously compelling to IT organizations, providing on-demand access to resources to meet short- and long-term needs. The emerging challenge is integration: combining these disparate environments to provide a seamless and secure platform for computing services. CloudSwitch provides a software solution that allows users to extend a data center environment into the public cloud securely, without modifying workloads or network configurations. I’d like to discuss a specific example of how CloudSwitch delivered a solution that spanned a corporate data center and an external cloud.
A large financial services company approached us some time ago with an ambitious plan to leverage cloud computing as a strategic initiative within the organization. Their goals were to reduce operating costs, improve responsiveness to the various business units, and differentiate themselves within the industry through technological innovation. Security was a fundamental requirement and a number of risk assessment groups were involved throughout the design and evaluation phases of the engagement. Finally, this company also wanted to leverage a traditional colo environment from their cloud vendor to provide high-speed access to shared storage while also supporting their traffic monitoring equipment. After a period of technical diligence, we established a reference architecture which satisfied all internal security requirements while remaining true to the fundamental goal of moving to a dynamic cloud environment. The result was a true realization of the hybrid model.
In the customer’s reference architecture, there are three primary components:
- Internal data center environment hosting the CloudSwitch Appliance (CSA)
- Private colo environment hosting the CloudSwitch Instance (CSI) and CloudSwitch Datapath (CSD) as well as shared storage for cloud instances
- Public cloud environment hosting customer workloads
The CloudSwitch Appliance is deployed into the customer’s data center environment to allow central management of one or more colo environments. Each of these environments supports an isolated cloud deployment, for example for a particular business unit. CloudSwitch’s virtual switch and bridge components are implemented for high-speed connectivity between cloud servers and shared storage. Finally, the public cloud environment is used to host actual customer workloads (operating systems). Network communication and local storage are protected through CloudSwitch’s secure overlay network and transparent disk encryption functionality.
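To make the relationships concrete, here is a rough sketch of the topology in Python. The tier and component names come from the description above, but the structure and field names are purely illustrative, not CloudSwitch’s actual configuration format.

```python
# Illustrative model of the reference architecture (hypothetical fields;
# not CloudSwitch's real configuration format).
reference_architecture = {
    "data_center": {
        "component": "CloudSwitch Appliance (CSA)",
        "role": "central management of one or more colo environments",
    },
    "colo": {
        "components": ["CloudSwitch Instance (CSI)", "CloudSwitch Datapath (CSD)"],
        "role": "high-speed bridge between cloud servers and shared storage",
        "local_resources": ["shared storage", "dedicated firewalls",
                            "traffic monitoring equipment"],
    },
    "public_cloud": {
        "role": "hosts customer workloads (operating systems)",
        "protections": ["secure overlay network", "transparent disk encryption"],
    },
}

# One colo + cloud pair can be stamped out per business unit, all managed
# centrally from the single CSA in the data center.
```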
This approach yields several benefits:
- Multiple instances of this dedicated environment can be independently deployed to support different business units
- High-speed access to the enterprise cloud environment is available since the colo environment is physically located in the same facility
- Physical infrastructure can be deployed into the colo environment in support of cloud servers—for example, shared storage devices
- Dedicated firewalls can be deployed and traffic inspection is possible, satisfying the security groups’ requirements
The reference architecture supports the organization’s high-level goals while remaining compliant with all existing security and regulatory requirements. Cloud servers have high-speed access to shared storage as a result of the colo deployment alongside the public cloud environment. All network traffic and storage is encrypted automatically through CloudSwitch’s security capabilities, and through CloudSwitch’s role-based access controls (RBAC) the security team has centralized control over who is able to access each cloud environment. The end result is a deployment model which truly implements a hybrid environment combining resources from the public cloud with traditional colo resources to deliver a secure, scalable platform for dynamic computing.
By John McEleney
Today we’re extremely excited to announce that we are being acquired by Verizon and joining Terremark, its IT services subsidiary. This is major news for us, and we believe for the cloud industry as well.
We’ve been working with Terremark for almost two years and have built great relationships with both Verizon and Terremark, based on our hands-on experience with the Terremark clouds. It’s clear that F1000 companies are looking for enterprise-class cloud services that cover a broad range of their needs – not only commodity clouds, but also higher levels of SLAs, enterprise procurement processes, professional services, security models and dedicated systems. And they want these to be provided by a trusted name in enterprise IT services like Verizon.
The other critical aspects for enterprise cloud adoption are the ones we founded CloudSwitch to address: enterprise control, simple on-boarding, tight integration with enterprise networking, security and management systems, and the freedom to move application workloads to the right cloud without complex re-engineering or lock-in. The combined capabilities of Verizon, Terremark and CloudSwitch offer enterprises what they’ve been looking for and move the industry forward, helping to further define the enterprise-class cloud model.
It’s important to highlight (and very important to us at CloudSwitch) that Terremark is strategically committed to open policies and will maintain support for multiple clouds and hypervisors, since we believe that enterprises truly value this openness. We’re also impressed by Verizon’s commitment and leadership strategy in the enterprise cloud market: after acquiring Terremark for $1.4B and creating a subsidiary within Verizon, they’ve now brought on a software company to add software development and innovation capabilities to the team. That’s the kind of leadership the enterprise cloud market requires.
We’re looking forward to working even more closely with Verizon, and our whole CloudSwitch team will be staying right here in Boston to build and scale our software and deliver new software-based capabilities. We’ve been at the forefront of cloud innovation since 2008, and this begins a new chapter for us as we team with Terremark to take enterprise-class services to the next level.
By Dave Armlin, Director of Customer Support
New CloudSwitch customers and prospects are coming up to speed every week, and a number of questions show up frequently enough that I thought it would be helpful to cover them in a blog post. When we work with customers, our goal is to make their experience getting started in the cloud fast and easy, and to make sure they feel comfortable with the ongoing simplicity and security of the CloudSwitch model.
Here are their top 5 questions:
1. How do I move applications to the cloud?
CloudSwitch literally makes moving an application to the cloud a simple drag-and-drop operation. A virtual machine (or group of VMs) is selected from a VM location (vCenter, ESX machine, or CIFS share) in the CloudSwitch user interface, the target public cloud region/zone/location is selected, and the machine is moved over a secure tunnel to the cloud. Storage for the virtual machine in the cloud is automatically allocated and encrypted, and keys are kept under the customer’s control.
Virtual machines that are moved to the cloud retain their MAC and IP addresses, since the CloudSwitch appliance acts as a layer-2 bridge allowing these machines to appear as if they are running in the data center behind your firewall.
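To picture what a layer-2 bridge over an encrypted tunnel does, here is a deliberately minimal Python sketch of the general technique (my own illustration of frame encapsulation over TLS, not CloudSwitch’s implementation): raw Ethernet frames are carried verbatim to the other side, which is why a bridged machine can keep its MAC and IP addresses.

```python
import socket
import ssl
import struct

def forward_frames(frame_source, server_host, server_port=443):
    """Carry raw layer-2 frames over a TLS connection (conceptual sketch).

    frame_source: any iterable yielding Ethernet frames as bytes,
    e.g. frames read from a local TAP device.
    """
    context = ssl.create_default_context()
    with socket.create_connection((server_host, server_port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=server_host) as tls:
            for frame in frame_source:
                # Length-prefix each frame so the far end can re-delimit it
                # and inject it into the remote network unchanged; the MAC
                # and IP headers inside the frame are never rewritten.
                tls.sendall(struct.pack("!H", len(frame)) + frame)
```

Because the frames cross the tunnel unmodified, hosts on either side see one flat layer-2 network.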
2. What applications should I move to the cloud?
A wide variety of apps are good candidates to be moved to the cloud. As Ellen Rubin blogged about recently, legacy applications are certainly great candidates for offloading from your internal data centers. Web servers and web applications like SharePoint, .NET, J2EE/SOA, Drupal, Wordpress, Wikis, corporate intranets, or batch processing applications are all good candidates as well.
When selecting applications for the cloud, you need to be aware of latency between the data center and the cloud. Latency is primarily a function of the physical distance between the data center and the cloud region you’ve selected. For instance, a data center on the US East Coast should see around 20 ms of latency to the various public cloud regions on the East Coast.
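If you want to check this for yourself before committing to a region, a quick TCP round-trip test is usually enough. A minimal sketch in Python (the endpoint hostnames are placeholders for whichever regions you are evaluating):

```python
import socket
import time

def tcp_rtt_ms(host, port=443, samples=5):
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # the connect completes one TCP handshake round trip
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

# Placeholder endpoints -- substitute the regions you actually care about.
for region in ("ec2.us-east-1.amazonaws.com", "ec2.us-west-1.amazonaws.com"):
    print(region, round(tcp_rtt_ms(region), 1), "ms")
```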
Place applications in closest proximity to the virtual machines and data center services they access most. For instance, a web application that utilizes a database heavily may perform best if the web tier and the database are both deployed to the same cloud and region. A web application that utilizes a database infrequently and caches results may perform well with the database in the data center and the web tier in the cloud.
3. What changes to my network do I have to make to use CloudSwitch?
Minimal. Outbound port 443 to the Internet has to be opened for the CloudSwitch appliance to create a secure encrypted connection to the cloud. This is outbound traffic only, nothing inbound. There are no changes to your network configurations.
The CloudSwitch appliance requires promiscuous mode and forged transmits set to “Allow” on the Virtual Switch or Port Group for the network adapter assigned to CloudSwitch in your virtual environment. For more information, check out this blog article on networking and ESX.
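A quick way to verify the one firewall prerequisite before installing anything is to test the outbound connection yourself; a minimal Python sketch (the test host is a placeholder):

```python
import socket

def outbound_443_open(test_host="example.com"):
    """Return True if this machine can open an outbound TCP connection
    on port 443 -- the only firewall change CloudSwitch requires."""
    try:
        with socket.create_connection((test_host, 443), timeout=5):
            return True
    except OSError:
        return False

print("outbound 443:", "open" if outbound_443_open() else "blocked")
```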
4. Can I get the equivalent of a physical console for my machine in the cloud?
Yes. CloudSwitch provides a virtual console accessible from the CloudSwitch user interface via a browser that allows you to interact with the base system to make network changes or other tasks one might perform at a physical console. Access to this console can be secured to specific users or groups using Role-Based Access Controls (RBAC) in the CloudSwitch user interface.
5. Can I allow traffic from the Internet to reach my machines in the cloud directly, as opposed to going through my corporate firewall?
Yes, CloudSwitch supplies a cloud firewall that allows you to assign a public IP to a virtual machine and control access to VMs in the cloud from the Internet. Pavan Pant, our Director of Product Management, blogged about this a while back. You have full configurability for permissions/access to all cloud resources through this firewall.
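As a rough illustration of the kind of policy this enables (the rule structure below is invented for this sketch and is not CloudSwitch’s actual configuration interface), a VM is given a public IP and only the ports you explicitly open are reachable from the Internet:

```python
# Hypothetical per-VM cloud firewall policy; field names are invented
# for illustration, not CloudSwitch's actual interface.
web_server_policy = {
    "vm": "web-01",
    "public_ip": True,  # expose the VM directly to the Internet
    "inbound": [
        {"port": 443, "proto": "tcp", "source": "0.0.0.0/0"},   # public HTTPS
        {"port": 22,  "proto": "tcp", "source": "10.0.0.0/8"},  # SSH from corp only
    ],
    "default": "deny",  # anything not listed is dropped
}
```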
By Ellen Rubin
Last week, I was on a panel at the CompTIA Breakaway conference in DC, with Scott Crenshaw from RedHat and Ron Culler from Secure Designs. Scott made an interesting comment about the three types of applications out there: (1) new apps that are being architected from scratch for the cloud; (2) legacy apps that are being re-architected for the cloud; and (3) everything else. It was a useful framework for our discussion about cloud migration and security, but it also made me think a bit about the issue of legacy apps and why these remain so controversial for the cloud industry.
If I had a dime for every panel discussion that led to a heated debate around whether or not to re-architect for the cloud… I think the heat around this issue reflects some underlying confusion about how to handle all those “annoying” legacy apps. It’s an area of particular interest to us here at CloudSwitch, so I’d like to share our thoughts and hopefully generate some additional productive discussion in the industry.
Let’s start with (1) new apps. High-profile customer stories from companies like Netflix are creating momentum around the idea of building enterprise apps – even mission-critical ones – to run specifically in the cloud. Of course, start-ups and SMBs have been doing this for years, since they quickly realized that the cloud provides a low-capital way to get their businesses started and frees them from long-term expensive contracts with hosters and colos. But the idea of building greenfield enterprise apps that take advantage of the cloud’s agility and scalability is only slowly gaining traction.
This is due to several concerns among enterprise stakeholders. While individual developers love the idea of coding directly for the cool new platform of the cloud (without fighting corporate IT for access to servers), corporate IT often feels threatened by a new process/platform that may make it less relevant or less able to set policies and standards. Corporate IT also recognizes that as the cloud apps go into production, all the serious issues around reliability, performance and integration will fall on them and may be extremely complex and difficult to manage. Security and networking teams have the expected concerns about changes to existing policies and access, and overall loss of control. And all groups share a fear of cloud lock-in, since you’ve essentially built your app to run in a specific cloud.
Next there’s (2) teaching old legacy apps new tricks by re-architecting them for the cloud. This is appealing because it allows enterprises to move off outdated (and often costly) OSes and hardware. It also allows the app to get true benefit from the scalability, geographic distribution and rapid provisioning of the cloud—and to run better in an environment where server performance and availability can be highly variable. Traditional legacy apps are often limited to scaling up rather than out, and have requirements for network and storage configurations that may not exist in the cloud.
So why not re-architect? Most of the legacy apps we see at enterprise customers are either non-mission-critical (tier II, III) or less frequently used, with occasional bursts during peak periods. It’s not always economical to re-architect these or to spend precious developer resources on building/testing/supporting the new apps. Plus, the apps themselves may have some inherent limitations due to the age of their architectures (think SAP, SAS, Oracle Apps, etc. – apps that were designed long before the cloud gained attention, and that may not behave well if re-architected or may pose licensing challenges).
And finally, there’s (3) the “everything else” category – legacy apps that include all sorts of custom apps designed for specific purposes and business uses that may or may not still be important to the enterprise. You’d be amazed at how many of these there are. A typical F1000 enterprise can have hundreds or even thousands of apps, and very few are mission-critical or worth the effort to re-architect. But there they are, sitting in your data center, still important for some particular group or maybe for compliance reasons, so you don’t want to get rid of them, either. The cloud is a great place to relocate these apps, and provides options for closer geographic proximity to the actual users, as well as the cost benefits of shutting down apps when not in use.
I find that among the “clouderati” there’s often a lack of interest in this last category of apps, mainly because they’re not very sexy or high profile. Enterprises, on the other hand, are pretty interested in them since they represent a large plurality (if not majority) of the apps that need to be considered in a broader cloud strategy. Also, since by definition these are not the critical apps that the enterprise depends on, they’re the easiest to try first in the cloud to show a low-risk success story to potential cloud users.
Large cloud providers and cloud enablement vendors are starting to take greater notice of legacy apps (both the kinds that should be re-architected as well as those that should be left alone). Amazon’s VPC strategy and VM migration tool reflect a growing recognition of legacy app requirements, as does VMware’s vCloud Director strategy, and Citrix’s CloudStack/CloudBridge. The industry as a whole has begun to focus on making it easier to migrate legacy apps and keeping them integrated with the enterprise environment they rely on.
This is good news for enterprise customers, and no surprise to us at CloudSwitch, where legacy apps have always been part of the vision. We believe that unless legacy apps can be safely and seamlessly run in any cloud environment with full enterprise control, enterprises will hold off adopting cloud in a major way. For every Netflix out there, there are hundreds of enterprises that will not build apps specifically for the cloud, or will only do this for a tiny percent of their application portfolios. And regardless of which category of app we’re discussing, you still need these apps to tie into enterprise management/monitoring systems, data, networking and security. Legacy apps are the proving ground for the cloud’s enterprise-readiness and maturity, and the industry should embrace this challenge head-on.
By Guest Blogger Erik Heels, Partner at Clock Tower Law Group, experts in patent law
Wikipedia defines "cloud computing" as "the logical computational resources (data, software) accessible via a computer network (through WAN or Internet etc.), rather than from a local computer." Managing local computers is hard: there are security issues, computer lifecycle issues, accessibility issues. Cloud computing, ideally, is easy: set it and forget it, access your data from anywhere, outsource your IT headaches to your service provider. To end users, whether individuals or companies, "the cloud" is an abstraction, a computing environment that can expand to suit users' needs.
What's The Problem?
One problem with cloud computing is that both cloud computing providers and law enforcement agencies can access your files, usually more easily than if you stored the files on your own computer.
Security breaches can also occur in the cloud, like the much-publicized Dropbox security breach, during which all Dropbox accounts were accessible to all users without any password protection.
For users, it is important to know whether your data is secure, who can access it, and what happens when there is a security breach.
For service providers, it is important to comply with both US and non-US laws, including (1) data retention laws, which are ostensibly designed to help law enforcement entities do their job, and (2) data disclosure laws, which are ostensibly designed to help users know when their private information has been compromised.
Is Encryption The Answer?
Most cloud computing providers (1) authenticate via secure connections (e.g., transferring usernames and passwords securely) and (2) transfer data securely to/from their servers (e.g., via HTTPS), protecting so-called "data on the wire." But, as far as I can tell, none (3) encrypts stored data (so-called "data at rest") automatically.
So if you want your data to be secure in the cloud, then consider encrypting the stored data. And don't store your encryption keys on the same server! It is unclear whether a cloud computing provider could be compelled by law enforcement agencies to decrypt data that (1) it has encrypted or that (2) users have encrypted, but if the provider has the keys, decryption is at least possible.
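For illustration, here is a minimal sketch of that approach using Python's cryptography library: the file is encrypted before it ever leaves your machine, and the key lives somewhere other than the server holding the data.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate the key locally and store it somewhere OTHER than the server
# that will hold the encrypted data (a separate key server, a hardware
# token, even paper) -- which is the whole point of the advice above.
key = Fernet.generate_key()

with open("client-files.tar", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())

with open("client-files.tar.enc", "wb") as f:
    f.write(ciphertext)  # only this encrypted blob goes to the cloud

# Later, on any machine that holds the key:
plaintext = Fernet(key).decrypt(ciphertext)
```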
I have used and abandoned both Microsoft's Encrypting File System (EFS) and Apple's FileVault for encrypting data on my desktop computers. But desktop encryption is painfully slow! Perhaps cloud computing providers can leverage the power of their data centers to make the performance hit of encryption-decryption imperceptible to the user. That would be cool. And would make the benefits of cloud computing greatly outweigh the risks.
Here are three security questions you should ask of your cloud computing provider:
- Data on the Wire. Are files transferred to/from cloud servers encrypted by default?
- Data at Rest. Are files stored on cloud servers encrypted by default?
- Data Retention. If files on cloud servers are encrypted and there is a request from law enforcement to decrypt the data, then what do you do? Bonus question: What if you have the key(s)?
I searched for answers to these questions for four cloud computing providers that are popular with small businesses like mine (sourced in part from TechTarget's list of top cloud computing providers and Wikipedia's list of cloud computing providers): Amazon, Google, Apple, and Dropbox.
Simple Google searches of these providers' websites provided more questions than answers on the topic of encryption:
- search Amazon.com for encryption
- search Google.com for encryption
- search Apple.com for encryption
- search Dropbox.com for encryption
Cloud service providers need to do a much better job of communicating what is and what is not secure about their offerings. For example, I would characterize Dropbox's security page as misleading at best: just because your files are transferred securely to Dropbox does not mean they are stored in an encrypted format on Dropbox's servers. And it is the "rare exception" that is, or should be, the concern of users.
For More Information
- International Association of Privacy Professionals: Ten Steps Every Organization Should Take To Address Global Data Security Breach Notification Requirements. I would add "11. Get insurance" and "12. Get a good lawyer."
- Electronic Frontier Foundation (EFF): Surveillance Self-Defense. What can the government legally do to spy on your computer data and communications? And what can you legally do to protect yourself against such spying?
- Electronic Frontier Foundation: Mandatory Data Retention. Regarding controversial laws that require Internet Service Providers (ISPs) to collect and store records documenting the online activities of users.
- PrivacyLawCompliance.com. Law firm specializing in helping Massachusetts companies comply with privacy laws.
- ZDNet: Microsoft Admits Patriot Act Allows Access To EU-Based Cloud Data
- Centre for Commercial Law Studies (CCLS) at Queen Mary, University of London: 'Personal Data' In The UK, Anonymisation, and Encryption
As more individuals and companies move their computer files and computer applications from local client computers (over which they have a great deal of control) to remote server computers (over which they have limited control), security becomes a bigger concern - both for users and for service providers.
Erik J. Heels is an MIT engineer; trademark, domain name, and patent lawyer; Red Sox fan; and music lover. He blogs about technology, law, baseball, and rock 'n' roll at ErikJHeels.com. His law firm, Clock Tower Law Group, represents cool companies such as CloudSwitch.
By Pavan Pant, Director of Product Management
As customers continue their march to the cloud, we have heard from a large number who want to use SharePoint Server in the cloud. Two major concerns show up frequently: migration of existing custom deployments, and data security.
These organizations have spent years customizing their SharePoint deployments so they work just right in their environment, and moving to the cloud is a daunting proposition. Consider a scenario where a customer has deployed SharePoint and each department has its own intranet and individual sites for employees. The proliferation of these sites across the organization, and the customization they require, has created a situation where customers typically stay away from using the cloud for their existing SharePoint deployments and instead start from scratch in the cloud.
We’ve also heard from customers who already have SharePoint deployed in their data centers with sensitive content (e.g., personally identifiable information) and would love to take advantage of the elasticity the cloud has to offer, but have security concerns about using the cloud. In a shared multi-tenant environment, customer data needs to be protected from unauthorized access at all times, and must be off limits to cloud providers. This essentially means that customers need full disk and network encryption to protect their data while it is at rest and in motion.
CloudSwitch allows you to take your existing SharePoint deployments and run them in the cloud without requiring any changes to your application or networking. In addition, all your data remains secure – we provide full network and disk encryption (including encryption of the boot partition) in the cloud to ensure that your content remains secure while in transit to the cloud and in the cloud itself. Most importantly, the disk encryption keys remain in your control as opposed to being stored and managed with the cloud provider.
One of our customers is a large health insurance company that has sensitive patient data and other information in their SharePoint content management system. Their primary goal was to offload their ongoing management of the SharePoint servers in their data center and use Amazon’s public cloud. This would allow them not only to lower their costs but also to take advantage of the elasticity offered by the public cloud. The configuration in their data center is a two-tier SharePoint deployment – one server runs SQL while the other runs both the SharePoint Content Server and the Front-End IIS server.
With CloudSwitch’s software in place in their internal VMware environment, this customer was able to migrate their existing SharePoint deployment to the cloud securely, simply and without any changes whatsoever (IP address, MAC address, network configurations, etc.). Their end users can access and use the SharePoint sites for content management exactly as they did in the data center. SharePoint administrators are able to add servers to the farm, cluster the SQL server and burst in the cloud as needed just as they would in the data center but with all their security needs being met. Also, with the “infinite” scalability of the cloud, they no longer need to worry about the time it takes to buy and install new storage. They can allocate new resources to their cloud SharePoint deployment in minutes.
In addition to all this, the customer can also continue using their Active Directory installation in the data center to control authentication and authorization to the SharePoint portal – again, all of this without installing any agents or software on servers in the customer’s data center or any agents for the customer’s servers in the cloud.
Leveraging the Cloud
I recently attended a cloud computing panel where one of the panelists was lamenting that SharePoint was never architected with the cloud in mind, because cloud providers like Amazon impose networking and storage constraints (e.g., dynamic IP addresses and ephemeral storage) that SharePoint does not handle well. The main reasons to deploy SharePoint in a multi-tenant environment are to consolidate resources and take advantage of the scale the cloud offers, with multiple users in a single deployment and storage that grows as you do. Many enterprises have been shying away from using SharePoint in the cloud because of concerns around security, storage management and networking implications. But those concerns apply only if you think of the cloud as an opaque system where only the cloud provider can control networking, security and configuration. With CloudSwitch, all of that control shifts back to the enterprise, and users can run their existing processes and applications. We do the heavy lifting for you so you can move your SharePoint deployments to the cloud and get started today!
By John Considine
Last week I wrote about the Cloud.com acquisition and what it means for Citrix, Rackspace, OpenStack and the industry. Next, I’d like to dig into the VMware announcement about their cloud infrastructure suite. Citrix clearly wanted to announce their news just prior to VMware’s, and for good reason – Citrix is hitting VMware in a weak spot of their cloud strategy. It’s pretty clear that VMware is not getting the vCloud adoption they were anticipating from service providers and even enterprises.
In Paul Maritz’s presentation, he mentioned that VMware “…has been working closely with service providers because you need the same stacks on both sides [the private cloud and public cloud] to be able to ‘slide’ applications to the cloud…and back again.” At CloudSwitch we are dedicated to the notion that you don’t have to have identical infrastructure stacks between the data center and the cloud. You have to expect that what a cloud provider chooses will not necessarily be the same as what the enterprise has chosen, or that they will work together in lock-step. VMware seems committed to the strategy that they will provide the complete solution on both sides of the cloud, and that all parties will work together to stay coordinated. This is very different from Citrix’s positioning around more open and heterogeneous solutions.
Citrix and VMware have been competing in the virtualization space for years with a battle of features (mostly Citrix catching up with VMware and trying to gain share in the enterprise virtualization market), but the scope of the competition has been growing thanks to cloud computing. Cloud computing expands the server virtualization fight from hypervisor features to integrated stacks for deploying and managing infrastructure. The hypervisors remain important, but the new frontier contains everything from core networking to storage management, large-scale deployments and self-service IT. In this new battle, Citrix has some real strength in networking (NetScaler, etc.) and application delivery, and with their Cloud.com acquisition, they are capturing some proven orchestration technologies.
VMware is investing huge resources to expand their cloud offerings (Maritz claims a million man-hours). Their focus is on adding features to their hypervisor and layers to their stack (vSphere + vCenter SRM + vCenter Operations + vShield + vCloud Director). They have lots of expertise in this area and direct interaction with enterprise customers and requirements. On the service provider side, they are dependent on feedback from VMware-based partners to provide input and learn how to build and run large-scale infrastructure clouds. We’ll have to see how this approach plays out vs. Citrix’s CloudStack.
In the end, this competition is great for all of us as well as for CloudSwitch specifically. The competition in the cloud space will continue to drive innovation, new features, and simplification of deployment for this great new platform called cloud computing. CloudSwitch is all about choice and giving enterprises control and flexibility in their cloud architectures. As the world of cloud computing evolves, we love to see different options, technologies, and capabilities – because a world filled with different cloud choices needs a CloudSwitch to connect all of the pieces.
By John Considine
Here we are on July 12, mid-summer, when you’d think most people are thinking about going to the beach in 90-degree weather, and instead we have big cloud news. Early this morning we were greeted with the announcement that Citrix is buying Cloud.com for more than $200M. After the initial congratulations to Sheng and the team at Cloud.com, the Twitter-sphere and blogosphere went wild with thoughts and deal analysis. At the same time, everyone was waiting to hear what VMware was going to say in their “Cloud Infrastructure Launch” webcast.
I’ll start with some thoughts on the Cloud.com acquisition. Rather than go into the rationale and size of the deal, I’d like to focus on what this acquisition means to Citrix, OpenStack, and Rackspace.
It’s clear that Citrix has been a major supporter of OpenStack and that support makes sense since it’s a great way to compete against VMware. OpenStack is shaping up to be the answer to the closed source solutions being developed by VMware for building both public and private clouds. It’s also clear that Citrix needed a boost to gain traction and credibility in the market for producing cloud infrastructure. They have a good hypervisor and are busy building out more features, tools, and performance – but cloud infrastructure is a lot more than hypervisor + new features.
Enter the Cloud.com guys. They know how to build clouds, and already have traction with service providers and enterprises alike. They have proved that they know how to make a cloud scale – and this is not as easy as it sounds. Keep in mind that building clouds is more complicated than standing up virtualized clusters and running standard tools; there are complex networking, storage, and workload placement problems, not to mention versioning, maintenance, and operations.
So this is clearly a good thing for Citrix, and on the face of it, a good thing for Cloud.com – but what about OpenStack? The bigger question here is: what does this really mean for the OpenStack community? Is this a case of Citrix providing the enterprise/supported version of OpenStack as RedHat does for Linux? Will we see a set of capabilities delivered by Citrix that are built on OpenStack, but that are exclusive to Citrix’s CloudStack? Will OpenStack be increasingly driven by Citrix’s needs and integration with Xen (versus other hypervisors)?
If Citrix remains committed to its stated direction of providing the software for clouds (rather than building a cloud themselves), they are in a great position to capitalize on OpenStack. Rackspace, on the other hand, has a more complicated opportunity with OpenStack. They created the idea and built the community, but they are limited by the fact that they are themselves a cloud provider, so it would be hard for them to sell and support OpenStack software to other cloud providers and vendors. That leaves the door open for someone else to step in and become the enterprise software vendor for OpenStack. Clearly Citrix has been targeting this, and the addition of Cloud.com adds the full software stack and knowhow for building and deploying a cloud.
And in other news... VMware announced its Cloud Infrastructure suite today. Given that the VMware announcement was previously scheduled, I can only conclude that Citrix picked today so that its news would overlap with VMware’s. These two announcements landing on this sleepy mid-July date point to high activity in the cloud space, and the coming wars around who is going to provide the cloud infrastructure for both service providers and the enterprise. This may become a battle of closed-source solutions from VMware and open-source solutions from Citrix (and OpenStack)… I’ll write more about the VMware announcement in my next blog, but the battle between Citrix and VMware is definitely heating up.
By John McEleney
Gotta get-get… the Black Eyed Peas get it, heck they wrote a song about it: "That future boom boom, gotta get it now…"―we gotta get moving to the cloud!
We're officially halfway through 2011 and many senior IT professionals are probably looking at their 2011 objectives and thinking to themselves, "How am I going to get everything done?" I am sure all of the IT professionals have long lists of company-specific items, and I am equally confident that they all have something about "cloud strategy" on those lists.
We work with customers of all sizes, from regulated industries and purely commercial companies to government agencies. But the organizations that we see making real and measurable progress have one thing in common: they have a forward-thinking executive who is willing to take some risk and simply wants to get moving and demonstrate progress as they learn.
We also see a lot of fear about determining a cloud strategy, ultimately leading to analysis paralysis. Consider the following "Dilbert" advice from a technology author (I removed his name to save embarrassment):
As part of this process, firms should develop a high-level "cloud adoption vision," as well as a short-term business case for cloud computing anchored to the long-term vision. Align your strategy with your organization's business objectives and risk management framework. Establish a governance process and standards that address security requirements, support consistent and logical cloud adoption, and prevent the proliferation of random, uncoordinated initiatives around the enterprise. And recruit people who understand cloud services and can lead strategy development, vendor selection and ongoing management. Considering these factors and approaching the cloud thoughtfully will make for a smoother, more successful ascent.
This "motherhood and apple pie" strategy is very logical, hard to debate and in theory will eliminate risk; however, I would argue that gaining practical experience faster will ultimately yield better results. No amount of careful planning and organizational discussion will substitute for seeing how different cloud services really work and how specific applications behave in the cloud when end users test-drive them.
It's reasonable to start with lower-risk applications and less-sensitive data, but the most important thing for IT organizations that need to make progress this year is simply: Gotta get-get… get moving to the cloud!
By Ellen Rubin
Sometimes it’s fun to look back at your predictions to remember what you were thinking at the time and see how accurate you turned out to be. Based on some recent conversations, I decided to revisit one of our early blog posts from 2009, where we were envisioning the direction this industry would take and the role our technology would play in it.
That post, Dynamic Cloud Fitting — the Future in Automated Cloud Management, described a world where workloads would move automatically to the right environment to meet a customer’s business and technical requirements. It also explored how an entity called a “cloud broker” (initially defined by Gartner in 2009) would provide the technology and expertise to achieve that goal.
Fast forward to 2011. The idea of workloads being redirected on the fly across different clouds for real-time optimized cost/performance is still a vision for the distant future – both because the technology is not yet available and because customers have shown little interest to date. However, the main concept of a cloud broker “fitting” a workload to a cloud environment based on technical and business requirements turns out to be very important to customers, and is already possible today. By providing an intermediation layer spanning multiple clouds, cloud broker software from companies like CloudSwitch can provide a range of capabilities beyond the scope of an individual cloud provider or service.
In terms of cloud “fitting,” what customers want is the ability to set parameters on a number of dimensions in order to control usage and optimize workload performance. These parameters include (but are certainly not limited to):
- Which cloud services they want to make available (and for which users)
- Which geographic locations (regions, zones, data centers)
- Cost limitations – per hour, based on quotas, etc.
- Maximum latency that can be tolerated
- Virtual machine requirements for CPU, memory, etc.
- Maximum provisioning time that is acceptable
- Minimum SLA required for reasonable availability
What’s clear is that cloud broker software must incorporate an algorithmic approach to mapping the requirements of the enterprise, user groups, individual users, and specific workloads against the possible cloud services that have been enabled. This is a non-trivial process, based on capturing and tracking a mix of inputs from administrators and users, as well as real-time data from the virtual machines, networks, and cloud providers. Think of this as somewhat similar to recommendation engines on websites like kayak.com that help “fit” travelers’ requirements and preferences against available flights. Only in this case, the “flights” are instances that can be provided by one or more cloud providers or by internal virtualized resources.
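To make the fitting idea concrete, here is a deliberately simplified sketch in Python (a toy scoring function of my own, not CloudSwitch’s actual algorithm): hard constraints eliminate offerings outright, and a soft preference ranks the survivors.

```python
def fit_score(workload, offering):
    """Return None if a hard constraint fails, else a score where
    cheaper offerings (relative to budget) score higher.
    A toy illustration, not CloudSwitch's actual fitting algorithm."""
    if offering["region"] not in workload["allowed_regions"]:
        return None
    if offering["latency_ms"] > workload["max_latency_ms"]:
        return None
    if offering["vcpus"] < workload["min_vcpus"]:
        return None
    if offering["cost_per_hour"] > workload["max_cost_per_hour"]:
        return None
    return 1.0 - offering["cost_per_hour"] / workload["max_cost_per_hour"]

workload = {"allowed_regions": {"us-east", "us-west"}, "max_latency_ms": 40,
            "min_vcpus": 4, "max_cost_per_hour": 0.80}
offerings = [
    {"name": "cloud-a/us-east/large", "region": "us-east",
     "latency_ms": 22, "vcpus": 4, "cost_per_hour": 0.48},
    {"name": "cloud-b/eu-west/large", "region": "eu-west",
     "latency_ms": 95, "vcpus": 8, "cost_per_hour": 0.40},
]
candidates = [(fit_score(workload, o), o["name"]) for o in offerings]
print(max(c for c in candidates if c[0] is not None))  # best fit wins
```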
Another important aspect for the cloud broker software is to implement the “fitting” algorithm in the context of a role-based access control (RBAC) system. Think of this in terms of layers of enterprise controls and permissions that guide users’ options for self-service access to cloud resources. For example, a global administrator may set up the initial constraints based on which cloud services are available for the entire enterprise, while a business unit administrator may have more narrow limits for her users based on quotas, geographic constraints for certain teams, etc. – and the final end-user just wants to get his work done quickly and cost-effectively without worrying about any of this.
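The layering can be modeled as successive intersection of constraints, where each administrative level may only narrow what the level above allows; a minimal sketch of the idea (again my own illustration, not CloudSwitch’s RBAC implementation):

```python
def narrow(outer, inner):
    """Combine two constraint layers; the inner layer may only restrict."""
    return {
        "clouds": outer["clouds"] & inner["clouds"],
        "regions": outer["regions"] & inner["regions"],
        "hourly_quota": min(outer["hourly_quota"], inner["hourly_quota"]),
    }

# The global admin enables cloud services for the whole enterprise...
global_admin = {"clouds": {"cloud-a", "cloud-b"},
                "regions": {"us-east", "us-west", "eu-west"},
                "hourly_quota": 100.0}
# ...a business unit admin sets narrower limits for her users...
bu_admin = {"clouds": {"cloud-a"},
            "regions": {"us-east", "us-west"},
            "hourly_quota": 10.0}

# ...and the end user provisions against the effective intersection,
# without ever seeing the policy machinery.
print(narrow(global_admin, bu_admin))
```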
One point we didn’t foresee in our original blog post on cloud fitting was how the cloud broker’s role would expand. In addition to on-boarding applications into the cloud, customers now look to cloud brokers to fill important gaps in areas that cloud providers either don’t want to deliver (such as multi-cloud capability) or that are hard to deliver because of architectural limitations.
Security is a good example of the latter, where the shared environment of the cloud makes it hard to give individual customers control over encryption and key management, something that enterprises frequently require to get CSO sign-off. Extension of network configurations into the cloud with full configurability is also challenging for most cloud providers, since their network architectures are by definition fairly “flattened” and limited in options like multiple sub-nets (unless the customer is willing to pay for a dedicated network setup). This is another area where cloud brokers can help bridge the gap between what enterprise users need and what multiple cloud providers can deliver.
So the role of a cloud broker turns out to be evolving and growing broader over time, and no doubt will continue to do so. There’s a broad consensus that the broker’s role is important — not just here at CloudSwitch, but also among industry analysts like Gartner, other technology vendors, and our enterprise customers. The key insight is that cloud brokers allow enterprises to extend their control over their applications and data into the cloud. This ability to put control in the hands of the customer is what matters the most.