By Pavan Pant
We recently talked about CloudSwitch’s security model while highlighting our integration with Active Directory. Our architecture addresses three areas of protection that we believe are required to make the cloud secure for enterprises – security within the data center, between the data center and the cloud, and within the cloud itself. Given that this is an area of paramount importance for enterprises, I thought it would be useful to continue the security theme by discussing our role-based access control (RBAC) model. CloudSwitch’s RBAC capability is directly related to protecting resources in your data center from unauthorized access, while also controlling the privileges users have over cloud resources.
Years of experience in enterprise software development have taught us that retrofitting an access control model is not a viable option – it’s like closing the barn door after the horse has bolted. Our solution was built from the ground up with an RBAC mechanism in place: a granular model that allows a CloudSwitch administrator to delegate permissions across users and groups using roles and access control lists (ACLs).
This gives customers the ability to group users with similar job functions into roles, and to give them authorizations to perform actions on objects in CloudSwitch. Our objects are entities such as folders, virtual machines, cloud accounts, etc. Our RBAC capabilities let customers create a least-privileged access control model by providing users only with the access that is essential for cloud operations. Every object and action in our system can be assigned to an ACL so that an administrator can enforce policies for cloud usage, cloud control, and local resource control. This approach allows customers to select roles in which users can operate, with the capabilities of each role based on those users’ expected responsibilities. For example, a developer role might have permissions to create, clone, start, stop, and delete servers, whereas an operator role might only have start and stop permissions.
Another important point here is that administrators can grant or revoke privileges on a CloudSwitch object independent of what the role allows. As an example, you may have a set of IT administrators with privileges in CloudSwitch to start, stop, and delete servers that are running in the cloud. However, there could be a specific subset of production servers that you may not want even the IT administrators to control. With our RBAC model, you can grant users broad permissions across the system while still restricting access to specific servers.
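To make the model concrete, here is a minimal sketch (in Python) of how role permissions and per-object ACL overrides can interact. The role names, object names, and data structures are illustrative assumptions, not CloudSwitch’s actual implementation:

```python
# Hypothetical sketch of a role/ACL check -- for illustration only,
# not CloudSwitch's actual schema or code.

ROLES = {
    "developer": {"create", "clone", "start", "stop", "delete"},
    "operator": {"start", "stop"},
}

# Per-object ACLs can restrict access regardless of what a role allows.
# Here, only members of the "prod-admins" group may touch prod-db-01.
OBJECT_ACLS = {
    "prod-db-01": {"allowed_groups": {"prod-admins"}},
}

def is_authorized(user_groups, role, action, obj):
    """Allow an action only if the role permits it AND no object-level
    ACL excludes the user -- a least-privilege check."""
    if action not in ROLES.get(role, set()):
        return False
    acl = OBJECT_ACLS.get(obj)
    if acl and not (user_groups & acl["allowed_groups"]):
        return False
    return True

# An IT administrator with a broad role is still blocked on the
# restricted production server:
print(is_authorized({"it-admins"}, "developer", "delete", "prod-db-01"))  # False
print(is_authorized({"it-admins"}, "developer", "delete", "dev-vm-42"))   # True
```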
These controls take on even greater significance as customers move production workloads to the cloud. With that in mind I thought it would be useful to walk through some of the RBAC use cases we have heard about from our customers, and how CloudSwitch can be configured to meet those use cases.
RBAC Use Cases
Use Case 1: Creating Sandboxes in the Cloud for Developers and QA
One of our large customers in the pharmaceutical space was running into a problem: their research scientists increasingly faced delays in gaining access to computing resources, largely because requests had to flow through a large and growing IT organization. They wanted to use the public cloud for their computing needs as an alternative to internal IT resources.
Their primary objective was to deliver a streamlined solution to their developers which would allow them to clone read-only gold images created by administrators. The process needed to be as simple as possible with the appropriate security controls in place to prevent developers from modifying the images shared with them by administrators.
With CloudSwitch, this customer’s administrators were able to easily upload their gold image to the cloud, provision a server template in the cloud using the gold image, and place it in a folder structure within CloudSwitch that only developers could access. Once that step was complete, the customer used our RBAC model to ensure that developers had permissions to clone the server template made available by the administrators and to perform server lifecycle actions on the cloned server (start, stop, delete, power off, add NICs, add disks).
The end result was that developers could easily log in to CloudSwitch’s user interface, clone the administrator template that was made available to them, start the cloned server in the cloud, and shut it down when their work was complete. This was a much quicker and more cost-efficient way to get access to compute resources in the cloud, especially when compared to the customer’s previous approach of waiting for resources from their IT department. It also ensured that the developers had just the right amount of privileges to perform their daily activities in the cloud.
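For readers who like to see policy written down, here is a hypothetical sketch of the access rules behind this use case. The folder paths, role names, and permission names are assumptions for illustration, not CloudSwitch’s actual schema:

```python
# Hypothetical policy data for the sandbox use case -- illustrative only.

SANDBOX_POLICY = {
    # Administrators own the gold image and its server template;
    # developers may only see and clone it, never modify it.
    "/templates/gold-image": {
        "admins":     {"upload", "provision", "modify", "delete"},
        "developers": {"view", "clone"},
    },
    # Developers get full lifecycle control over their own clones.
    "/sandboxes/*": {
        "developers": {"start", "stop", "delete", "power_off",
                       "add_nic", "add_disk"},
    },
}
```

The key design point is that the read-only template and the writable clones live in separate folders, so a single folder-level ACL cleanly separates what administrators publish from what developers run.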
Use Case 2: Network Administrator Privileges
Our customers have also frequently asked us about separating out permissions such that only specific users have the ability to modify network settings for cloud networks. Customers wanted each department to control their own network mappings without allowing other users or groups to modify the networking configuration.
To solve this problem with CloudSwitch, you would simply create a role (e.g., a Network Admin role) and define an ACL so that only users in that role have the ability to configure the network and NIC settings for servers in the cloud, or even the networking configuration of CloudSwitch components. You could go a step further by creating a “Network Administrator Subnet 1” role for servers on a specific subnet in the cloud, and a “CloudSwitch Network Admin” role for users who only have permissions to manage networking configurations for CloudSwitch components such as the CloudSwitch Appliance, which resides in your data center or private cloud.
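Sketched as data, such a setup might look like the following; the role names, scopes, and action names are assumptions for illustration:

```python
# Hypothetical role definitions for the network-administration use case.

NETWORK_ROLES = {
    # May change network/NIC settings, but only for cloud servers
    # on one specific subnet.
    "network-admin-subnet-1": {
        "scope": "cloud-servers:10.1.0.0/24",
        "actions": {"configure_network", "configure_nic"},
    },
    # May manage networking for CloudSwitch components (e.g. the
    # CloudSwitch Appliance), but not for cloud servers.
    "cloudswitch-network-admin": {
        "scope": "cloudswitch-components",
        "actions": {"configure_network"},
    },
}
```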
Other Common Use Cases
Other customer scenarios involve using RBAC to define and limit which import sources can be moved to the cloud, and which target clouds can be used. CloudSwitch allows customers to migrate virtual machines from VMware or Xen to the public cloud without making any changes to the virtual machine (e.g., no changes to the kernel, OS, IP address, MAC address, storage controllers, subnet information, etc.). As part of this process, you can define import sources from VMware or Xen in the user interface, and specify which roles get access to those resources. You can create multiple import sources in CloudSwitch for different groups within the organization while ensuring that the appropriate people or groups (e.g., Development, Quality Assurance) have the right amount of access to these import sources.

We have also had cases where customers want to restrict which cloud regions (or cloud providers) certain groups have access to. For example, one of our customers wanted most of their users to deploy in Amazon’s US-East region since it is cheaper than US-West. However, there was a group on the west coast that really benefited from the geographic proximity of using US-West. CloudSwitch’s RBAC model allowed this customer to grant that one group access to the more expensive resources in US-West while the rest of the organization was restricted to using resources in US-East.
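As a rough sketch of the region-restriction idea, a per-group region map might look like this (the group and region identifiers are illustrative assumptions):

```python
# Hypothetical per-group region restrictions -- illustration only.

REGION_ACCESS = {
    "default":    {"us-east-1"},                # cheaper region for most users
    "west-coast": {"us-east-1", "us-west-1"},   # proximity justifies the cost
}

def allowed_regions(groups):
    """Union of the regions that a user's groups may deploy into."""
    regions = set()
    for g in groups:
        regions |= REGION_ACCESS.get(g, set())
    return regions or REGION_ACCESS["default"]

print(allowed_regions({"west-coast"}))  # includes us-west-1
print(allowed_regions(set()))           # falls back to us-east-1 only
```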
These are the types of granular access control capabilities that a growing number of customers have requested, especially as they move production workloads to the cloud. It has been great to see large enterprises across verticals using our RBAC capabilities to secure their cloud deployments – from the data center to the cloud, and within the cloud. CloudSwitch was designed with the hybrid cloud in mind, and our core value proposition lies in the ability to securely transport your virtual infrastructure to the cloud provider of your choice without requiring any modifications. A large part of that vision hinges on giving enterprises the ability to control the type of access people have, both in their data centers and in the cloud. We’ve built a solution that gives customers the ability to use their existing security policies and permissions in the cloud instead of creating new ones for their cloud deployments.
By the CloudSwitch Team
Over the past year we've had the pleasure of working with Terremark as a partner, as we jointly engage with enterprise customers who want to leverage hybrid clouds. For these customers and prospects, hybrid means the flexibility to combine their traditional data centers, new private clouds and managed service/colo environments with public clouds such as Terremark's Enterprise Cloud. Please join us tomorrow, March 3rd from 1:00-2:00pm EST to learn about hybrid clouds based on our hands-on experiences with enterprise customers who are using Terremark for a full range of cloud services.
By Ellen Rubin
The way you know you’re in the midst of a technology shift and market disruption is when organizations don’t behave the way you expect them to based on past track records. Cloud computing has been filled with surprises and unexpected behavior from the get-go. First, Amazon, a retailer, turns out to be a technology powerhouse in disguise and changes the rules of IT infrastructure. Then, “real” technology leaders like IBM, Dell, EMC, HP and others make lots of announcements about cloud but essentially do little and re-brand existing offerings as “cloud-enabled.” Next, Verizon, the phone company, buys Terremark in a bid to become a global cloud leader. And of course, there’s always the fact that the federal government has embraced cloud widely and is spending large amounts of money to build private clouds and leverage public ones.
So, in a world that sometimes seems upside-down, how surprising is it really that the F500, and in particular, the corporate IT groups within these huge organizations, have often turned out to be the early adopters and drivers of cloud in all flavors – private, public and hybrid? When we started CloudSwitch, our hypothesis (based on all sorts of track records and past behaviors) was that within the enterprise market, mid-tier companies (defined loosely as several hundred million to a few billion dollars in revenues) would try cloud first. This was because we were betting that these organizations had enough pain from internal data center management (cost, over-provisioning, not their core business, lack of responsiveness to business users, etc.) that cloud computing’s benefits would overcome their initial concerns. And in fact, this is true of many mid-tier enterprises, who have indeed taken the leap into cloud over the past couple of years, along with the developer and start-up communities.
But the companies who seem to be driving enterprise adoption of cloud and defining many of the requirements for vendors in our experience are at the multi-billion-dollar revenue mark, and often within the F500. Our initial hypothesis here was that these companies would be too large and resistant to change to be early adopters, unlike the smaller, more nimble mid-tier players. But it turns out that these companies have such enormous capital expenditures in data centers and infrastructure investments that they’re determined to adopt cloud to move them to a lower cost curve (“get off the data center treadmill”) and help them break through the internal limitations on self-service provisioning and scaling that have frustrated their business users for years.
Even more unexpectedly, many of the people who are leading the way within these companies are managers and architects within the corporate IT group. It’s interesting to note that in previous technology shifts – SaaS and virtualization come to mind – the revolution was staged from within business units or at the developer level, and corporate IT came on board once these technologies were de facto standards. It’s possible that with these experiences in mind, corporate IT (and the CIO in particular) has decided to take the lead this time around, and not wait to find out what’s been going on without enterprise security, control or standards.
Last year, corporate IT was struggling to absorb the avalanche of information about cloud and to separate the hype from meaningful architectures and use cases. With some encouragement from the large technology vendors, corporate IT shops retreated into private clouds as the safe way to go. This year, with hybrid clouds all the rage, it feels like enterprises and IT managers are coming into their own. They’ve been speaking with more confidence based on their pilots and initial deployments, and have come to see cloud as something that can be shaped and driven by real enterprise requirements – not just a new set of processes/resources that need to be run as a separate and un-integrated silo.
In this hybrid model, F500 enterprises are working with vendor partners to build private clouds, identify application categories that can run completely in public clouds, and determine which applications need to span internal and external environments. They’re asking for management, orchestration and federation technologies that let them be vendor-agnostic and “position independent” (so apps can run in a given environment at a particular point in time, regardless of underlying infrastructures). This process is clearly a multi-year learning experience with the usual fits and starts as companies bump into the inevitable limitations of new technology and meet resistance from internal stakeholders. But the trend is clear. And although relatively few of these large enterprises are willing to go on record yet with their case studies, we can see first-hand the inroads cloud is making among some of the largest pharmas, banks and manufacturing companies in the world, and it’s exciting to be part of the paradigm shift.
By Ellen Rubin
Seems like it was only yesterday when industry pundits were backing away from public clouds in favor of the safer, more big-vendor-compliant “private clouds.” After Amazon shook things up with its new paradigm for computing and storage clouds in 2007, and started to gain traction (along with Rackspace and other cloud providers) in 2008 and 2009, 2010 has so far been in many ways a retreat from the forces of innovation, marked by the emergence of much fear, uncertainty and doubt about the perils of the public cloud. But lately, I’m seeing the pendulum start to swing back in favor of public clouds, albeit with a twist.
Not surprisingly, private clouds look more familiar and comfortable to IT managers, big vendors and consulting/SI/service providers. They involve purchases of hardware, software and services through traditional enterprise procurement processes. They allow resources to stay behind the firewall under enterprise control. They fall within the existing legal, compliance and audit structures. With the addition of many flavors of “cloud in a box” offerings, they start to address the main issues that drove developers to the public clouds to begin with: self-service, provisioning on demand and the ability to get access to more scalable resources without requiring large upfront cap ex.
Public clouds have all the benefits that have been written about extensively (horizontal scaling, true on-demand capabilities, pure op ex, etc.). But for much of this year, the debate in the industry has been all about how worried everyone is about using public clouds (security, control, and so on), and how uncertain they are about whether IaaS will really take off.
But there are some recent indications that the public cloud is hot again. A great study by Appirio speaks to growing industry comfort with public clouds and the likelihood that these will have a dominant place in IT infrastructure. At the Up2010 cloud event this week in San Francisco, Doug Hauger, GM of Microsoft’s Azure cloud, referred to this study extensively to make the point that public clouds are gaining credibility. James Staten of Forrester recently blogged about his predictions for 2011, including: “You will build a private cloud and it will fail.” His point is not to discredit private clouds as an approach but to remind companies beginning this process how incredibly hard it is to build a large, scalable, on-demand, multi-tenant cloud – even just for internal users.
Staten’s predictions make the case for how the cloud market has evolved in 2010, as enterprises planned their cloud strategies, implemented their pilots and defined their cloud architectures. Rather than seeing public clouds as “the other alternative” to private ones, enterprises and vendors have begun to view these as compatible strategies in a more sophisticated hybrid cloud model.
We’re huge fans of the hybrid model at CloudSwitch, and it’s great to see customers embracing public clouds as extensions of their private ones (as well as of their traditional virtualized data centers). The critical point about public clouds is that they allow testing, innovation and quick success or failure to happen in a low-cost way. This learning is imperative for the hybrid model, and public clouds are here now, today, working well and allowing enterprises to gain experience and log cloud mileage as they build out the rest of their cloud infrastructures. With CloudSwitch, these companies are now able to view the public cloud as a safe and seamless extension of their internal environment, in effect turning the public cloud into a “private” cloud as well.
By Ellen Rubin
We’ve written extensively about the benefits of hybrid clouds, since it’s a core part of our founding vision at CloudSwitch. For most of this past year, the cloud market has been focused on defining the differences between public and private clouds and weighing the costs and benefits. Slowly the conversation has shifted to what we believe is the central axiom of cloud: it’s not an all-or-nothing choice between on-premise and an external cloud; it’s the ability to federate across multiple pools of resources, matching application workloads to their most appropriate infrastructure environments.
To reiterate some key thoughts we’ve written about in the past, the idea of hybrid clouds encompasses several use cases (a toy placement sketch follows the list):
- Using multiple clouds for different applications to match business needs. For example, Amazon or Rackspace could be used for applications that need large horizontal scale, and Savvis, Terremark or BlueLock for applications that need stronger SLAs and higher security. An internal cloud is another federation option for applications that need to live behind the corporate firewall.
- Allocating different elements of an application to different environments, whether internal or external. For example, the compute tiers of an application could run in a cloud while accessing data stored internally as a security precaution (“application stretching”).
- Moving an application to meet requirements at different stages in its lifecycle, whether between public clouds or back to the data center. For example, Amazon or Terremark's vCloud Express could be used for development, and when the application is ready for production it could move to Terremark's Enterprise Cloud or similar clouds. This is also important as applications move towards the end of their lifecycle, where they can be moved to lower-cost cloud infrastructure as their importance and duty-cycle patterns diminish.
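To make the federation idea concrete, here is that toy placement sketch in Python. The cloud names and workload attributes are made-up assumptions, illustrating the matching principle rather than any CloudSwitch API:

```python
# Toy workload-placement policy -- illustrative assumptions only.

CLOUDS = {
    "amazon":    {"scale": "high",   "sla": "standard", "location": "external"},
    "terremark": {"scale": "medium", "sla": "strong",   "location": "external"},
    "internal":  {"scale": "low",    "sla": "strong",   "location": "internal"},
}

def place(workload):
    """Match a workload to its most appropriate environment."""
    if workload.get("must_stay_behind_firewall"):
        return "internal"   # compliance data never leaves the firewall
    if workload.get("needs_strong_sla"):
        return "terremark"  # stronger SLAs and higher security
    return "amazon"         # default: large horizontal scale on demand

print(place({"needs_strong_sla": True}))           # terremark
print(place({"must_stay_behind_firewall": True}))  # internal
```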
CloudSwitch customers and prospects are clear that hybrid clouds are the way to go. Here are some examples of recent conversations:
“It’s going to take our internal IT group more than 18 months to build a private cloud; in the meantime we can use the public clouds now for on-demand capacity and scalability.” – VP of Business IT group at a large Wall Street firm
“We’re highly virtualized and we see external clouds as pools of virtualized resources that are available as extensions of our internal infrastructure.” – IT Director at a large healthcare company
“We have compliance data that will never leave our firewall but we like the idea of scaling out the computing resources in the cloud for peak periods.” – VP of Informatics at a large pharma
We’ve also been tracking some validation from more official sources on the growth of public clouds and the hybrid model. For example, a recent study by SandHill Group surveyed more than 500 IT executives and indicated that the biggest growth in cloud computing will be in hybrid clouds (from 13% now to 43% in three years). Another survey by Evans Data finds an even higher adoption rate among IT developers, suggesting that the hybrid cloud model is set to dominate the coming IT landscape.
It’s also interesting to see the importance of the hybrid model taking hold among industry insiders with many different perspectives. We saw this at VMworld 2010, where there was tremendous interest in hybrid clouds, from Paul Maritz’s keynote predicting a hybrid cloud future through many sessions and product announcements. Veteran cloud watcher James Urquhart points out that the hybrid approach lets you hedge your bets in cloud computing, using technology that allows you to decouple the application from the underlying infrastructure and move it to the right environment so you don’t get locked in. And even private cloud advocates acknowledge that hybrid has an essential role, where public cloud platforms serve as extensions of private cloud deployments.
It’s gratifying to see the CloudSwitch founding vision gain broad industry acceptance, with the hybrid model as a key enabler for cloud computing. It’s even more satisfying to see the vision coming to life as more and more customers leverage our technology to run their applications effortlessly in the right environment, whether an internal data center, private cloud, or public cloud. Enterprise users and their companies are the real winners.
By John Considine
Just a week after our blog post on the telcos, we find another big company joining the cloud computing tsunami – Oracle’s announcement of its “cloud in a box” offering as well as new offerings of Oracle software running on Amazon’s EC2.
For a company whose leader shunned the term “cloud” last year, this is a lot of cloud announcements in one week. Oracle’s new Exalogic Elastic Cloud is perhaps the first “cloud in a box” solution that is actually delivered in a box (of hardware). Unlike the offerings we have seen from Eucalyptus, Nimbula, Azure, and VMware, the Exalogic product contains the control software as well as the hardware components to make a virtualized resource pool. The other vendors have focused on delivering a software solution that can be combined with the users’ choice of servers, storage, and networking gear to build a cloud.
Oracle, powered by Sun’s server and system technology, has decided to deliver a complete cloud solution that contains up to 360 CPU cores, 2.8TB of RAM, and 40TB of storage in a single rack of equipment. This big box is reportedly priced at just over $1M. Oracle’s motivation for this box is to deliver on the promise of building an entire stack of both hardware and software that has been engineered to work together to deliver better performance, reliability, and scale. Overall, the Exalogic system has impressive performance characteristics and may be a great solution for data center consolidation, but…
Placing the term “Elastic” in the name of this offering is stretching the accepted definition of the term as it relates to cloud computing. The Exalogic server is a contained set of resources that is purchased, operated, and maintained as part of the enterprise infrastructure. You can scale your applications up and down within this solution, but in the end, you are limited to the number of cores, amount of RAM, and size of the storage you purchased. While you can add more racks to the solution, you are stuck paying for the whole thing independent of what you really use – not exactly elastic, and not pay-for-only-what-you-use. My only other problem with Exalogic is the range of supported operating systems – we like the Linux and Solaris support, but a quote from Rick Schultz of Oracle – “There is no demand for Windows at the moment” – makes me wonder who they are talking to. More than half the enterprise workloads CloudSwitch has deployed to the cloud are Windows-based; how can there be no demand for Windows in Exalogic?
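A quick back-of-the-envelope calculation shows why this matters. The $1M price comes from the reports above; the per-core-hour rate and utilization are purely hypothetical placeholders, not quoted cloud prices:

```python
# Back-of-the-envelope: fixed-capacity purchase vs. pay-per-use.

exalogic_cores = 360
exalogic_price = 1_000_000   # USD upfront, paid whether used or not

cloud_rate = 0.10            # hypothetical USD per core-hour
avg_utilization = 0.25       # suppose you only need 25% of capacity

hours_per_year = 24 * 365
pay_per_use = exalogic_cores * avg_utilization * hours_per_year * cloud_rate
print(f"Pay-per-use cost at 25% utilization: ${pay_per_use:,.0f}/yr")
# ~ $78,840/yr rented on demand vs. $1M upfront for capacity that mostly idles.
```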
The other interesting difference in the Exalogic solution as compared to the big (public) cloud offerings is the design center for the hardware. Clouds like Amazon and Google were developed around “stripped down” servers to act as generic compute components. The redundant components normally used to improve the reliability of a server are removed from the compute nodes to reduce the component cost, and software and other application-level techniques are used to compensate for the failure-prone components. Each of the servers in the Exalogic solution has redundant power supplies, two solid-state disk drives, and redundant InfiniBand controllers. This more expensive hardware allows the system to survive component failures with minimal disruption to the running applications – a traditional enterprise infrastructure design, with high reliability to support a lot of VMs packed on a single piece of hardware.
The difference between the two approaches highlights the upcoming battle between architectures in the cloud – stripped-down commodity servers versus highly available high-end servers as the basis for cloud computing. The early leader in this space is the commodity server approach because of the types of applications initially targeted to clouds – stateless, horizontally scalable web applications. But as we start putting more core enterprise applications into the cloud, the HA architectures become more interesting, and we expect them to gain ground – indeed, we already see this happening with clouds like Terremark, BlueLock, and Savvis.
The other announcement this week from Oracle is expanded support for running Oracle software in Amazon’s Elastic Compute Cloud. Oracle has provided templates (AMIs) in Amazon for its database software since 2008, and this week they have expanded the number of applications they will support in Amazon to include Oracle E-Business Suite, Oracle's PeopleSoft Enterprise, Oracle's Siebel CRM, Oracle Fusion Middleware, Oracle Database, and Oracle Linux. In addition to expanding the software supported on AWS, Oracle has taken the step of “certifying” the software for operation in Amazon. This means that customers can now get support from both Oracle and AWS for those applications. Although Oracle’s lead cloud story seems to be about the Exalogic box, I believe that this announcement does more to advance cloud computing for enterprises. Support for these key Oracle products in Amazon’s cloud adds credibility to public cloud computing, as it allows enterprises to really use the cloud for their core applications. This is one of the areas that a cloud provider cannot fix on its own; it is up to the software vendors to expand their horizons to embrace the cloud, and Oracle is blazing the trail.
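To illustrate how low the barrier is, launching one of these AMIs is an ordinary EC2 launch. Here is a minimal sketch using boto, the Python EC2 library of this era; the AMI ID and key pair name are placeholders, not real Oracle identifiers:

```python
# Minimal sketch: launching an Oracle-published AMI like any other EC2 image.
from boto.ec2.connection import EC2Connection

conn = EC2Connection("ACCESS_KEY", "SECRET_KEY")
reservation = conn.run_instances(
    "ami-00000000",            # placeholder -- look up Oracle's AMI IDs in the AWS catalog
    instance_type="m1.large",  # database workloads want memory/CPU headroom
    key_name="my-keypair",     # placeholder key pair for SSH access
)
print(reservation.instances[0].id)
```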
I think the only downside to the Oracle-Amazon announcement is the lack of integration with Oracle’s control software. The FAQs from Amazon and Oracle emphatically state that the management control for Oracle deployments to the cloud is exclusively the Amazon console and tool set. This is a shame, since we believe that seamless integration between the data center and the cloud is key to a successful enterprise cloud deployment; creating a disjointed environment just adds work with no value for the enterprise and ultimately leads to cloud lock-in. Our enterprise customers have told us consistently that they want a “single pane of glass” from which they can manage pools of resources both internal and external.
Finally, while I like the architecture of the Exalogic Elastic Cloud, and believe that it could form the basis of a new class of cloud computing offerings, it too may be missing a critical point. If an enterprise decides to deploy their private cloud on this technology, there is no connection or relationship between the applications deployed to the private cloud and those running in the public cloud. This, once again, highlights the importance of cloud federation – you will never break the cycle of buying more hardware and infrastructure if you don’t embrace technology that allows you to access the public clouds.
By Pavan Pant
As with any transformative technology that is new to the market, both public and private clouds have generated massive amounts of hype, bold predictions, a whole lot of confusion and raging debates amongst the cloud cognoscenti. Opinions vary across the spectrum with some experts claiming that data centers will be rendered obsolete by the public cloud, while others are dismissive of the public cloud but support private clouds. It’s clear to us at CloudSwitch that a more likely scenario lies squarely in the middle of those two extremes. This week at VMworld (where we were exhibiting with our partner, Terremark), we were pleased to hear that VMware believes that “hybrid cloud is the tide coming in.” From Paul Maritz’s keynote through many sessions and product announcements (including the release of the long-awaited vCloud Director), the message was all about hybrid clouds.
One of our previous blog posts discussed the notion of hybrid clouds and the fact that most enterprises will follow such an approach in the future. Amazon, Terremark, Rackspace, Savvis, BlueLock and other public cloud providers give customers elasticity, better service delivery and lower upfront CapEx. Meanwhile, there are solutions such as Eucalyptus and VMware’s vCloud Director that provide the interface and management tools to help organizations build private clouds while interfacing with public clouds to create hybrid cloud models.
Both use different APIs for their hybrid models, with Eucalyptus delivering tight integration with EC2 using Amazon’s APIs, and VMware vCloud Director working with vCloud Datacenter Services (VMware’s terminology for public cloud providers) such as Terremark that leverage the vCloud APIs. However, these technologies do not assist with creating an environment that spans hypervisors and cloud providers without changing the applications. If customers build private clouds that do not use the same virtualization infrastructure as their preferred public clouds, then what does it really mean to hybridize their clouds?
Consider a scenario where a customer builds a private cloud using Eucalyptus or VMware vCloud Director. That private cloud still ends up being different from their data center (much like a public cloud): the networking may be different, the versions of virtualization technology may be different, and the storage infrastructure may be different. All this means that applications in the data center will need to be changed before moving to the private cloud. As an example, if your QA team runs servers on their own subnet in the data center, how can this be transitioned to a private or public cloud without incurring additional costs to change those servers?
CloudSwitch’s core value proposition lies in the ability to securely transport a customer’s existing virtual infrastructure to the cloud provider of their choice, independent of the provider’s underlying virtualization infrastructure (VMware, Xen, etc.). This effectively allows customers to securely move and operate servers from their data center, across hypervisors, to cloud providers without requiring them to make any modifications to their applications – we maintain the same IP address, MAC address, storage controllers, subnet information, etc. Once customers have moved their servers to the cloud they can operate and manage them just as they would in their data center. CloudSwitch has an intuitive web-based interface which gives customers server lifecycle management options such as start, stop and clone.
Similarly, if customers have a private cloud which uses either Eucalyptus or VMware vCloud Director, CloudSwitch can speak to those APIs and facilitate the transfer and management of servers from these private clouds to public clouds. This enables a hybrid model where private clouds leverage public clouds for spikes in usage (cloudbursting), or for lab-on-demand use cases such as training and POCs. CloudSwitch does all the work of integrating the environments across these private and public cloud hypervisors, merging networks and transferring servers without modifying them in any way.
Many years ago, I had the privilege to work on the first iterations of RSA’s identity federation product, both as an engineer and as a product manager. Federated single sign-on enabled the portability of identities across security domains and allowed for the secure exchange of sensitive data outside the firewall without requiring any changes to the identity itself.
While the markets for Identity Management and cloud computing are unambiguously different, the notion of federation to make portability and interoperability easier for enterprises is a common theme. CloudSwitch is in a unique position to help enterprises with true cloud federation by moving workloads seamlessly from the data center to the cloud (private or public), between private and public clouds (hybrid), across public clouds and back to the data center without requiring customers to make any changes to their applications. Regardless of the starting point, CloudSwitch offers customers an easy, effective method to leverage the benefits of the cloud while ensuring portability across clouds.
By John McEleney
This weekend I participated in my ninth Pan-Mass Challenge, a 192-mile benefit ride for the Jimmy Fund and the Dana-Farber Cancer Institute (5,000 riders who will jointly raise $31M). With 12-plus hours of saddle time, I had plenty of time to think. What struck me, beyond the amazing logistics and incredible spirit of the riders and volunteers, was that this all started with a simple vision of one man—Billy Starr.
He had lost his mom to cancer and decided to organize an annual bike ride across Massachusetts as a fundraiser for cancer research. His goal was simple: grow the amount given to cancer research each year. His communication strategy was equally clear: cancer affects everyone; therefore, everyone should want to be involved.
Since he started this journey, he has raised over a quarter of a billion dollars (that’s a “B” as in billion) for cancer research. Today the PMC funds 50% of the research budget at Dana-Farber. So what does this have to do with cloud computing and CloudSwitch? Nothing and everything. From a technical standpoint, absolutely nothing. From a business standpoint, it has everything to do with cloud computing. Let me explain.
It is easy to be confused about how the future will evolve with cloud computing. Every day we hear and read the postulations from the optimists that everything will move to the cloud. We are equally confronted with the contrasting views of the fear mongers that nothing will move to the cloud. To get some perspective and a simple view of the cloud world, I think you need to step back and take a broader view of the transformation that is happening.
Every day, people are getting more and more comfortable with the idea that data is located somewhere else (other than on the physical device they can see). Email and Facebook are two proof points. Organizations are also getting more comfortable with not having all of their data and/or applications physically in their data center. Just look at the growth of Salesforce and the use of raw Amazon compute resources. We recently had a discussion with a senior IT person who pointed out that the finance team wanted to know why the IT people were buying so many books (they weren’t buying books, they were submitting their AWS charges on expense reports!).
We believe the future is clear: the public cloud WILL be part of the IT organization in the future. There may be many obstacles and objections during the adoption process, but the economics and business agility that public clouds provide are so compelling, that organizations will have to adopt them or risk losing competitive advantage. Our mission at CloudSwitch is to help organizations extend their data centers to the cloud.
If you want to make an impact, you have to be clear about what you are trying to accomplish. Billy was clear with the PMC and he aggressively pursued that goal. At CloudSwitch we are passionate about helping companies embrace the cloud. Our vision is clear—we think that the cloud will help businesses become more agile and we think we have a role to play in making this happen. Try CloudSwitch today.
By John Considine
There’s a long-running debate about the true role of Virtual Machines (VMs) in cloud computing. In talking with CTOs at the large vendors as well as the “Clouderati” over the last two years, there seems to be a desire to eliminate the VM from cloud computing. A colleague of mine, Simeon Simeonov, wrote a blog post a couple of weeks ago that made the case for eliminating the VM. While the argument is appealing, and there is growing support for the idea, I’d like to argue that there are compelling reasons to keep the Virtual Machine as the core of cloud computing.
Virtual Machines encompass “virtual hardware” and very real operating systems. VMs drive the economics and flexibility of the cloud by allowing complete servers to be created on demand and, in many cases, to share the same physical hardware. The virtual machines provide a complete environment for applications to run – just as they would on their own individual server, including both the hardware and operating system.
Sim and other cloud evangelists would like to see applications developed independent of the underlying operating systems and hardware. Implied in this argument is that developers shouldn’t be constrained anymore by an “outdated” VM construct, but should design from scratch for the cloud and its horizontal scalability. This reminds me of early conversations I had when we were just starting CloudSwitch that went something like: “If you just design your applications to be stateless, fault tolerant, and horizontally scalable, then you can run them in the cloud.” The message seemed to be that if you do all of the work to make your applications cloud-like, they will run great in the cloud. The motivation is cost savings, flexibility, and almost infinite scalability, and the cost is redesigning everything around the limitations and architectures offered by the cloud providers.
But why should we require everyone to adapt to the cloud instead of adapting the cloud to the users? Amazon’s EC2 was the very first “public cloud” and it was designed with some really strange attributes, driven by a combination of technology choices and a web-centric view of the world. We ended up with notions of “ephemeral storage” and effectively random IP address assignment, as well as being told that the servers can and will fail without notice or remediation. These properties would never work in an enterprise data center; I can’t imagine anyone proposing them, much less a company implementing them.
But somehow, and this is what disruption is really about, it was OK for Amazon to offer this because the users would adjust to the limitations. The process began with customers selecting web-based applications to be put in the cloud. Then a number of startups formed to make this new computing environment easier to use: methods of communicating the changing addresses, ways to persist storage, methods of monitoring and restarting resources in the cloud, and much more.
As cloud computing continued to evolve, the clouds started offering “better” features. Amazon introduced persistent block storage (EBS) to provide “normal” storage, VPC to allow for better IP address management, and a host of other features that allow for more than just web applications to run in the cloud. In this same timeframe a number of cloud providers entered the market with features and functions that were more closely aligned with “traditional” computing architectures.
The obvious question is: what is driving these “improvements”? Clearly the early clouds had captured developers and web applications without these capabilities – just look at the number of startups using the cloud (pretty much all of them). I’d assert that enterprise customers are driving the more recent cloud feature sets – since the enterprise has both serious problems and serious money to spend. If this is true, then we can project forward on the likely path both the clouds and the enterprises will follow.
This brings us back to the role of the Virtual Machine. Enterprises have learned over the years that details matter in complex systems. Even though we want to move towards application development that doesn’t touch hardware or operating-system objects, we must recognize that there is important work done at this level – hardware control, the creation and management of sockets, memory management, file system access, etc. No matter how abstract the applications become, there is some form of an operating system that works with these low-level constructs. Further, changes at the operating system level can affect the whole system – think of Windows automatic updates or Linux YUM updates: new packages or kernel patches have caused whole systems to fail, which is why enterprises tightly control these updates. This means in turn that the enterprise needs to have control of its operating systems if it wants to use its own software and management policies, and the way that you control your operating system in the cloud is with VMs.
Enterprise requirements are driving the evolution and adoption of the cloud and this will make the use of VMs even more important than it has been to date. Cloud providers know that enterprise customers are critical to their own success and will make sure that they deliver a cloud model that feels familiar and controllable to enterprise IT and developers.
By John Considine
When Rackspace first started talking with me about open sourcing their cloud software, I was truly intrigued. The idea of releasing the software behind their cloud was unexpected, given that most cloud providers treat their infrastructure, and particularly their control software, as a differentiator. One of the things that makes the software so valuable is the hard-earned lessons from building, scaling, and maintaining a cloud. An infrastructure that has actually been deployed and scaled to cloud size has real value to everyone trying to build a cloud. So when a company that has been in the cloud business for a long time in “cloud years” decides to open up and share its software, you have to stop and look.
Last week, Rackspace held an event that brought together a veritable who’s who in cloud computing “to validate the code and ratify the project roadmap.” The sheer size of the summit was a tribute to both Rackspace and those who are looking to advance cloud computing. What I found most interesting was the number of attendees that are potential competitors to Rackspace – other cloud providers, or hosters looking to get into cloud computing. Of course, open source means that anyone can use and improve the code, but the fact that Rackspace invited these companies, and that they attended, says a lot about the industry. When I talked to Lew Moorman and Jim Curry about this, they said it was simple: they want to compete in the cloud the same way they compete in their hosting business – with their service. During the design summit, the Rackspace crew stated that they are going to do everything in the open; this means that they are going to put it all out there and not hold back certain pieces as private code. Given this, I really believe that they want to compete on their “Fanatical Support”.
Rackspace and NASA are teaming up to release the source code for implementing a cloud – Rackspace is providing its Cloud Files software for building a scalable object store system, and NASA is providing its Nebula code for building a cloud server system. The developers from both Rackspace and NASA presented details about their software, lessons learned, and future directions, and then they turned to the attendees to solicit requirements and suggestions. Hot topics included APIs, controls and methods for distributing VMs into the cloud (scheduling), and networking.
The OpenStack project will utilize the Rackspace API, but will also support API “extensions” so that a number of APIs can be added. It is no surprise that there was a desire to support the Amazon API, since it is already a “standard” of sorts and is the primary API for NASA’s Nebula component. The question here is: if the OpenStack software supports multiple APIs for controlling clouds, what is the true API, and how will OpenStack help drive standards while supporting multiple options?
A lot of companies out there are spending a lot of money and resources to build clouds, and the biggest are rather secretive about how they do it. This is a bold move by Rackspace, NASA, and all of those supporting the effort to drive a fully open project to build clouds that compete against proprietary solutions. We look forward to more clouds to target, both inside the enterprise and in the public domain, because we believe that more options will help move everyone closer to a better way of computing – Cloud Computing.