Showing posts with label webcast.
Wednesday, March 13, 2013
What if the killer app for enterprise mobility and tablets is actually access to multiple apps?
Posted by Jay Fry at 9:47 PM
For every new technology, the holy grail is always to find the “killer app.” That phrase existed long before “app” referred to a little beveled square on your iPad.
The killer apps for the PC were arguably two: word processing and spreadsheets. (Anyone remember WordStar or VisiCalc?) For the Internet, it was email, later enabled by a notable app of its own: the web browser.
So, what’s the killer app for tablets?
What is the one thing that’s going to make tablets absolutely mandatory going forward? The question is even more interesting when you add an enterprise perspective. What is the thing that will make it a necessity for every employee to have (and use) an iPad or a Galaxy Note or a Surface?
In trying to answer this question, I think back to something our CTO, Stephen Vilke, said in a recent webinar about the dos and don’ts for bringing enterprise applications to the iPad. I’ve been posting the highlights of his “dos & don’ts,” and here’s one that’s directly related to this line of thinking:
DO understand that the next “killer app” for the enterprise to be delivered on a tablet is actually the blend of multiple apps.
Stephen’s comments boiled down to this: if it’s productivity you want to enable, you must provide access to the suite of applications and tools that users are comfortable using and can be productive with.
This means that an organization needs to make a number of their key business applications widely available to the workforce simultaneously via mobile, regardless of which device they bring. So, in fact, one single app is not the thing that will drive usage, but perhaps it's the ability to do productive work on a whole set of enterprise applications – the same ones they’ve already been using, and new ones that are being created.
As Stephen noted:
The benefits of a consolidated app workspace are clear: they offer an engaging and more user-friendly and productive experience to improve ROI on mobility investments. However, complexity increases, as application variety ranges from ERP through to the many hundreds or thousands of corporate applications required to support business operations – and finding the sweet spot might not be so easy.
There are a number of ways to achieve application mobilization in the enterprise, including custom client apps, multi-platform middleware, HTML5, software-as-a-service (SaaS) cloud-based solutions, or through a virtualized/remote desktop.
IT managers need to assess each route for cost, speed, business benefit, and practicality. They’ll need to decide if development is best done in-house or through partnerships with other providers. Will they want to build custom apps internally, or use an outsourced, hosted solution? Influencing factors will include the number and type of mobile platforms being supported, not to mention the security, performance, and UX issues that we hear about from IT, from application owners, and from users.
But coming back to the “killer app as multiple app access” idea, Gartner thought this concept was interesting enough to come up with a new category for it, something they call “workspace aggregators.” We’ve blogged about the concept here a couple times over the past few months, and think it’s an interesting way to describe a new approach (one that Framehawk itself is taking).
And, if you think about it, the rise of something new on the hardware side is generally matched by a parallel rise of a killer app on the software side. If not, the hardware is generally headed for the dustbin of history sooner rather than later.
Tablets have followed a bit of a new trajectory, though. Tablets have the interesting characteristic of trying to present all the killer apps from all previous computing platforms in this new form factor. Or at least, that’s the promise. For productivity apps like email, word processing, and spreadsheets, tablet makers have either tried to put forward their own versions (Apple) or have been a bit slow to market with tablet versions of their existing packages (Microsoft).
Real enterprise tablet usage has been hindered by two things. First, tablet providers have struggled a bit to deliver continuity with existing environments and tools, as I noted. Second, IT has had difficulty finding a safe way to incorporate tablets into the enterprise environment while still maintaining a user experience that won’t make users rebel.
So, it seems to me that a real tipping point for the official, legitimate adoption of tablets in the enterprise just might be the ability to get to those productivity apps AND intuitive, secure access for new and existing enterprise applications.
Put this all in one workspace for particular sets of employees and you just might have something, well, killer.
For more "confessions of a CTO" (from Framehawk's Stephen Vilke) about the 7 Dos & Don'ts for Bringing Enterprise Applications to the iPad, you can download our white paper.
This post also appears on the Framehawk blog.
Sunday, January 27, 2013
Want to avoid data leakage from mobile enterprise apps? Use the cloud
Posted by Jay Fry at 9:23 PM
You know the conventional wisdom: if you’re using mobile devices, the best way to secure enterprise application data is some combination of locked-down devices and strong data security measures.
However, both IT and users know the truth that comes with these approaches: they ratchet up hidden costs while killing user experience and productivity, all in the name of avoiding data leakage.
So what are the better options for mobile access to enterprise applications?
The problem is that there haven’t been too many. But there is one you might not have thought of: use cloud computing.
Hold on, you say, isn’t the cloud inherently insecure? And why would I add another wrinkle to communicating back and forth with tablets -- something that’s already pretty iffy over mobile networks? Isn’t that a big gamble? Actually, it’s not -- if you do it right. With a smart approach (and a technology partner who can deliver on a couple of key components), cloud computing can be a surprisingly effective technique for solving the security, performance, user experience, and cost issues plaguing enterprises in providing mobile access to enterprise applications.
Intrigued? We’re doing a free webcast on the topic with InformationWeek at 10 a.m. Pacific on Tuesday, Jan. 29, 2013. Join us and we’ll walk you through what I’m proposing here.
The speaker, our CTO and co-founder Stephen Vilke, will look at existing approaches and the trade-offs that enterprises are currently making in application mobilization. He’ll detail the architectural components (both pros and cons) of a cloud-based approach. And, he’ll show how IT can deliver both secure application data and a UX that employees rave about through the use of a cloud-based architecture.
Stephen will discuss:
- New architectural ideas that mean you don’t ever put any data on the mobile device
- A way for applications to communicate with tablets that’s fast and secure – even over unreliable mobile networks
- How smart use of the cloud can enable the security and usability required by enterprise mobility
- How IT can enable BYOD and still maintain control
- A way to future-proof your development and cost structure
If you're interested in hearing more about this approach, especially given existing application investments and tight application development budgets, join us on Tuesday. We’ll cover how to pull it off.
Stephen will also leave time to take live questions during the event. And we promise to keep the vendor sales pitch (yes, Framehawk can help you solve a lot of these issues) to a bare minimum.
Hope you can join us.
Click here to register for our InformationWeek webcast "How to Avoid Data Leakage from Mobile Enterprise Applications: Use the Cloud" at 10 a.m. Pacific (1 p.m. Eastern) on Tuesday, Jan. 29, 2013. The event will be moderated by Erik Sherman (@ErikSherman), blogger for CBS MoneyWatch and Inc.com.
This post also appears on the Framehawk blog.
Friday, January 25, 2013
Making mobile user experience ‘tablet-y’ for enterprise applications
Posted by Jay Fry at 2:54 PM
We’ve been checking off the various dos & don’ts for bringing enterprise applications to iPads and other mobile devices. There are a lot of them. So many that our CTO Stephen Vilke did an entire webcast about the topic (summarized in this white paper).
Last week I brought up mobile security. The issue that goes hand-in-hand with that is user experience (UX). In fact, mobile user experience is usually what suffers when IT operations and corporate compliance get their way.
Stephen, however, is not one to just say, “Oh, well, the users are just going to have to deal with it.” In fact, avoiding that mistake is core to his CTO tip this time around:
DON’T underestimate the importance of building a rich user experience.
From Stephen’s perspective, if security is king for tablets in the enterprise, then user experience is certainly next in line for the throne. IT departments simply must deliver a strong user experience, says Stephen. If the (albeit brief) history of mobile has taught us anything, it’s that if people don’t like it, they won’t use it.
"How many times have you heard, 'our sales team, managers -- insert group here -- are not using a new system because it’s not easy to use'? Or 'the users hate using the application because it’s hard to do anything with it'?
"The more time you spend at the beginning of a project making sure there is a rich user experience, the more user satisfaction will increase. This does not have to mean a full re-write for your legacy applications, but rather it is about researching how your audience interacts with applications on their current hardware (PC and laptop) and adding some iPadness to that application when you deliver it on a tablet. Make it tablet-y! No one wants a PC experience replicated exactly on a tablet.
"In fact, at the core of this 'consumerization of IT' revolution inside the enterprise is user experience -- employees asking to use their own iPad at work because it’s easy to use, and easy to be productive with. The only reason employees use the IT systems at work is because their job depends on it. If workers weren’t forced to execute expense reports with scanners, scissors, and tape, and instead could execute it faster with an iPhone app, they would likely opt for the quick route and actually spend a little more time doing their job. Moreover, they might even enter information into a CRM system more frequently if they could do it from their iPad wherever they happen to be."
User experience drives user adoption. And, as Stephen has noted more than a couple times in his career, good news travels fast. The more people use something, the more they will share their experiences with others, and the faster the rate of adoption.
Moreover, building a strong user experience is going to drive productivity across your range of use cases. Technology should not get in the way of a user’s productivity. Virtual desktop infrastructure (VDI) solutions are notorious for letting the user down when it comes to the user experience. A salesperson, physician, investment advisor, or whatever the role, does not want the mobile version of their virtual app to slow them down. Conversely, creating native applications unique to the user’s job can increase their productivity as well as their effectiveness, but can also be time consuming and very, very expensive. Says Stephen:
"Try to make it simple. If done right, UX can drastically decrease support costs. Leveraging a simple user experience, one that is intuitive and user-friendly means that there will be fewer knots to untangle down the line. The up-front costs distributing applications to tablets are one thing, sustaining their upkeep and performance is something else.
"Companies with successful implementations spend roughly 25 percent of their implementation costs on delivering user adoption – for things like training, communications, and change management. Larger implementations can spend roughly 30-35 percent on user adoption. Spending time at the beginning of a project on the user experience can lower these costs."
Think about it. Says Stephen: “no one trained you to use Google, Craigslist, or CNN.com.” He’s not saying to just drop all of those mobile security concerns. But remember this: UX is worth more time and effort than IT has been used to devoting to it. And in this more tablet-y world, that’s going to have to change.
This post also appears on the Framehawk blog.
Tuesday, January 8, 2013
One thing enterprises can't compromise on: mobile security
Posted by Jay Fry at 10:35 PM
The latest in our series of 7 dos & don’ts for bringing enterprise applications to iPads is likely so self-evident – and important – that it probably should have been listed first.
The topic is mobile security. No surprise. In fact, not thinking about how to avoid unauthorized access and data breaches would indeed be a serious (and job-threatening) confession from anyone in IT related to a mobile project.
Here’s what our CTO and co-founder Stephen Vilke recommended in our recent webcast:
DON’T even think about it if it’s not secure. That goes for both hardware and data.
While IT is often driven by end user expectations – especially when dealing with mobile devices – security is still an IT mandate. Naturally, enterprises wanting to make use of tablets, smartphones, and the like will have more stringent security requirements than those provided automatically by consumer devices. Stephen went to great lengths to emphasize that it is an absolute must for IT to properly secure both an organization’s data at rest and data in motion.
Ideally, no data should be stored on the mobile device itself. Instead, newer technologies can ensure that no data leaves your data center. Sophisticated communication protocols should be leveraged to provide a mobile connection to enterprise systems without the physical transfer of data between device and network.
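To make that idea a little more concrete, here is a rough sketch of the pattern -- purely illustrative, with hypothetical names, not Framehawk's actual protocol or code. The application and its data stay in the data center, the device sends only input events, and only rendered screen updates come back:

```python
# Illustrative sketch of a "no data on the device" remote session.
# All names are hypothetical; this is not any vendor's actual protocol or API.

from dataclasses import dataclass, field

@dataclass
class ServerSideSession:
    """The enterprise app runs entirely in the data center; only pixels leave it."""
    user: str
    app_state: dict = field(default_factory=dict)   # business data stays here

    def handle_input(self, event: dict) -> bytes:
        """Apply a touch event to the server-side app, then return a rendered frame."""
        if event.get("type") == "tap":
            self.app_state["last_tap"] = (event["x"], event["y"])
        # Render the updated UI to an opaque bitmap; no structured data is serialized.
        return self._render_frame()

    def _render_frame(self) -> bytes:
        # Stand-in for real rendering/encoding (e.g., a compressed bitmap of the UI).
        return f"frame with {len(self.app_state)} widgets".encode()

class ThinClient:
    """The tablet side: forwards input, displays frames, stores nothing locally."""
    def __init__(self, session: ServerSideSession):
        self.session = session   # stand-in for a network channel back to the data center

    def tap(self, x: int, y: int) -> None:
        frame = self.session.handle_input({"type": "tap", "x": x, "y": y})
        self.display(frame)

    def display(self, frame: bytes) -> None:
        print(f"displaying {len(frame)}-byte frame")  # nothing is persisted on the device

if __name__ == "__main__":
    client = ThinClient(ServerSideSession(user="jdoe"))
    client.tap(120, 340)
```

The design point is simply that the device becomes a display and input surface; everything worth stealing never leaves the server.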
On the hardware side, because most tablets lack USB ports and DVD drives, at least one element of security is easier to manage than for conventional laptops (although this may change in the future). However, in addition to being easier to misplace, tablets’ portability and desirability make them obvious targets for theft.
As a result, robust encryption and password enforcement are critical to ensure data security, and tracking and remote wipe can be important to make sure that lost or stolen devices do not lead to major breaches of confidentiality or disclosure of sensitive information. And given the rate of change, IT has to be on top of the latest, while remembering a few of the things from the past. Says Stephen:
“As companies develop security and mobility strategies to deal with these devices, it is worth bearing in mind the lessons we learned from managing laptops, and how we thought about securing those devices way back when.
“There are now more attack vectors than ever for the bad guys, so having policies, standards, and guidelines around security is a must. Education is key only if it’s enforced. This goes from two-factor authentication (2FA) to sensitive client data to VPN connections to credentials storage.
“The tablet is forcing us to build on what we’ve learned before and to rethink what needs to be secured – and when. Tablets won’t be powerful enough for the foreseeable future to run edge-point analysis, intrusion detection, anti-virus and yet still supply the user with app functionality. We (IT) crippled hugely powerful machines to the point of 10-minute boot times – these tablets have no chance. However, their simplicity offers new strategies – these need to be thought out.”
If it’s not obvious from these comments (or previous blog posts), Stephen sees security as one of the most important issues in enabling tablets in your enterprise IT environment.
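As a rough illustration of the earlier point about password enforcement, tracking, and remote wipe, here is a minimal, generic sketch -- hypothetical field names and thresholds, not any particular MDM product's API -- of how an IT team might check an enrolled fleet against a baseline policy and flag lost devices for a wipe:

```python
# Generic sketch of mobile device policy checks; names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    encrypted: bool
    passcode_set: bool
    min_passcode_len: int
    reported_lost: bool = False

# Baseline policy an organization might require before granting enterprise app access.
POLICY = {"require_encryption": True, "require_passcode": True, "min_passcode_len": 6}

def is_compliant(d: Device) -> bool:
    if POLICY["require_encryption"] and not d.encrypted:
        return False
    if POLICY["require_passcode"] and not d.passcode_set:
        return False
    return d.min_passcode_len >= POLICY["min_passcode_len"]

def devices_to_wipe(fleet: list[Device]) -> list[str]:
    """Lost devices get wiped; non-compliant ones would be blocked from enterprise apps."""
    return [d.device_id for d in fleet if d.reported_lost]

if __name__ == "__main__":
    fleet = [
        Device("ipad-001", encrypted=True, passcode_set=True, min_passcode_len=6),
        Device("ipad-002", encrypted=False, passcode_set=True, min_passcode_len=4),
        Device("ipad-003", encrypted=True, passcode_set=True, min_passcode_len=8,
               reported_lost=True),
    ]
    print("compliant:", [d.device_id for d in fleet if is_compliant(d)])
    print("wipe:", devices_to_wipe(fleet))
```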
It’s no surprise, then, that we’ve wrapped strict security measures into everything we’re working on here at Framehawk. In fact, we've taken some new approaches to enable a whole new level of security for applications that will be mobile-enabled. For more on the architectural differences that make security a big differentiator for Framehawk, you can start with this white paper (registration required).
To read more of our CTO Stephen Vilke's perspectives on enterprise mobility, you can download “Confessions of a CTO: 7 Dos & Don’ts for Bringing Your Existing Apps to the iPad,” the companion white paper to this series of blog posts and our recent webcast (registration required).
[This post also appears on the Framehawk blog.]
Labels: mobile security, mobility, webcast
Tuesday, November 20, 2012
For mobile enterprise applications, perfect is the enemy of done
Posted by Jay Fry at 3:17 PM
The pressure for IT to enable enterprise application access for iPads and the like is immediate and immense. The problem with trying to do something fast, however, is that it often requires a great deal of that IT budget of yours…which truthfully wasn’t built with many of the costs of mobilizing applications in mind.
How do you balance speed and cost when it comes to enterprise mobility? Here’s another one of the top dos & don’ts that our CTO Stephen Vilke talked about in his “Confessions of a CTO” webcast a few weeks back:
No matter what, deliver initial value quickly, said Stephen in his commentary. Time is of the essence, especially when it comes to mobility. Get some initial capabilities out there. Expose them to real-world users. Allow folks to experiment. See what they use and what they don’t, and then make further investment decisions based on that data.
Certainly choosing an approach that is scalable, affordable, and fast is no easy task. In many markets, competitive advantages are fleeting, as mass-market players scramble to create the “next big thing,” which is often defined (at least in the fashion sense) by the consumerization of IT.
As an IT team worried about enterprise mobility, where should you put your emphasis? Stephen suggests going for speed:
“Tackle the first part of the speed/cost equation first: pace. It’s rare that technology from the consumer world is adopted seamlessly into the enterprise – and accommodating tablets with legacy applications in a natural way has been no exception.
“The resource, effort, and policy demands on a firm trying to bridge this transition can take a considerable period of time. Perfect is the enemy of done. So, find a good – not perfect – solution that gets you started quickly. And one that you can learn some lessons from. That’s key. From there, you can mature and get a deeper understanding of your users and use cases.”
Once you find a solution that your risk, compliance, and regulatory groups can live with, next comes the task of getting all relevant stakeholders engaged. Having everyone on the same page will ensure that there are no unforeseen roadblocks in implementation. A big piece of this, Stephen recounts from his experiences, is (of course) around budget:
“After optimizing for speed, make sure that it is affordable. There are plenty of vendors in the market with blends of professional services, co-development partnership, and customized offerings that can be difficult for a company to digest fiscally.
“Mobility can be just like any other new hardware integration: expense can run high, sneak up on you fast – especially as maintenance of a new workflow creates a compounded cost structure. So have a sense of the operations and finance budget that your IT department can absorb before investigating options.”
As important as the cost factors can be, however, don’t let them derail your efforts to get something mobile actually done. Your users will thank you for it. After all, they are the ones trying to find ways to get their own jobs done using their newest mobile device.
Your employees won’t be shy about telling you whether you were successful with your enterprise mobility efforts. They’ll vote with their feet pretty quickly. Or, more likely (since they are using touch-screen tablets), with their fingers.
This is part of our continuing series of posts featuring insights on enterprise mobility from Framehawk’s CTO and co-founder Stephen Vilke. You can see a replay of Stephen’s webinar “Confessions of a CTO: 7 Dos & Don’ts for Bringing Your Existing Enterprise Apps to the iPad” here or download the white paper here.
This post also appears on the Framehawk blog.
Labels: IT consumerization, mobility, webcast
Monday, October 22, 2012
What are your dos & don’ts for bringing enterprise apps to the iPad?
Posted by Jay Fry at 1:00 PM
In technology areas as new as the push to use iPads with enterprise applications, the experiences of peers are often the best guide to success. Or at least in helping you steer clear of strategic errors. And chances to share those experiences are sometimes few and far between.
I'm expecting Wednesday to be one of those chances.
With the help of InformationWeek, Wednesday's the day that we here at Framehawk are holding a live webcast based around sharing useful IT experiences in delivering enterprise mobility. The speaker is our CTO and co-founder, Stephen Vilke, who has spent the past 2 decades not as a vendor, but as an IT guy, including a stint as a CIO.
Stephen collected his thoughts about the move to mobility that enterprises are undergoing currently and will be presenting them during the first part of Wednesday's webcast. Then, in the second half, he will take questions and comments from the audience about their experiences and issues to feed the discussion.
The goal is to continue the conversation that we’ve started here on the blog about what IT departments are learning as they work to incorporate tablets and other mobile devices into their enterprise application environments. The topics will very likely range quite broadly, and Stephen is planning to hit some very relevant insights and war stories from his past, including:
- How to adapt the lessons enterprises learned (good and bad) from managing laptops to the world of mobility
- How mobile user experience, if done right, can drastically decrease support costs
- How the threat of data leakage compares to other security concerns and how they impact BYOD policies
- What is the "killer app" for enterprise mobility, and how can IT deliver it?
The title of the whole event is “Confessions of a CTO: 7 Dos & Don’ts for Bringing Existing Enterprise Applications to the iPad.” Registration is free, so join us if you can.
Even more importantly, if you have your own “confessions” or real-world experiences that you’d like to share, leave a comment here for others to see and learn from. Or contribute during the live Q&A session on Wednesday’s webcast. I’ll be tweeting interesting questions and commentary (from Stephen and the audience) during the session (hashtag #CTOconfess), and I'll summarize the more intriguing and useful comments we received here on the blog afterwards. We're looking forward to some quality discussions Wednesday and beyond.
The InformationWeek Framehawk webcast “Confessions of a CTO: 7 Dos & Don’ts for Bringing Existing Enterprise Applications to the iPad” is being held at 10 a.m. Pacific on Wed., Oct. 24. Go here to register.
This post also appears on the Framehawk blog.
Labels: BYOD, mobile security, mobility, webcast
Sunday, May 15, 2011
Is a revolutionary, greenfield approach to cloud The Ultimate Answer? (Or is it still 42?)
Posted by Jay Fry at 9:18 PM
If you were looking for the Ultimate Answer to Life, the Universe, and Everything, chances are cloud computing is not high on your list of things to worry about. You’re probably more interested in downing your Pan-Galactic Gargleblaster and getting on with things.
However, my CA Technologies colleague Andi Mann (@AndiMann on Twitter) and I used our recent Cloud Slam presentation to try to provide some straightforward advice on different approaches enterprises and service providers can take to move to a cloud-based infrastructure – while paying tribute to Douglas Adams and his Hitchhiker’s Guide to the Galaxy series in the process. The result? "The Hitchhiker's Guide to Cloud Computing: Tips for Navigating the Evolutionary and Revolutionary Paths to Cloud."
Folks willing to wade through the egregious puns and overstretched sci fi references got a view of 2 different cloud computing approaches – one more evolutionary and one quite revolutionary – that customers are taking. (I touched on this topic in a previous blog myself a few months back.) For those that missed Cloud Slam, Andi posted his portion of the presentation – pros and cons of the more evolutionary approach – at his blog. I’m doing the same here, using this post to highlight what the revolutionary approach should get you thinking about.
Sometimes Marvin’s right and it’s easier to just start over
In looking at the pile of technology and processes that most IT shops are dealing with on a daily basis, the idea of building on top of what exists to slowly evolve your way to a more cloud-like environment sounds like a lot of work and a lot of complexity. Why?
• Your existing IT investments bog down new things
• New technologies don’t always fit easily in existing orgs
• Those existing IT processes can restrict your range of innovation
• IT organization politics & culture can put up some impressive resistance
Yes, the roadblocks seem big enough to depress even Marvin the Paranoid Android.
A ‘probable’ option: a turnkey cloud platform
One way of getting to the cloud through the logjam is to kick your Infinite Improbability Drive into high gear. A more probable way? Use a turnkey cloud platform that picks up where server virtualization leaves off.
Virtualization breaks the chains between the hardware resources and the applications, but is still weighed down by networking and storage concerns. A more revolutionary approach is to set up a pool of computing resources and then create a virtual business service to run on top of that. A virtual business service is a multi-tier application and its infrastructure packaged together as a single object that can be moved, scaled, and replicated as needed. This approach allows you to skip right past a lot of the drawbacks both of virtualization and of evolving your existing systems toward a cloud architecture. For example, load balancers, SANs, and switches become virtual, programmable items.
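If it helps to picture "an application and its infrastructure packaged together as a single object," here is a highly simplified sketch -- hypothetical names, not the actual CA 3Tera AppLogic model -- of a virtual business service that can be moved, scaled, and replicated as one unit:

```python
# Simplified, hypothetical model of a "virtual business service":
# a whole multi-tier app plus its virtual infrastructure treated as one movable object.
from dataclasses import dataclass, field, replace

@dataclass
class Component:
    role: str        # e.g. "load_balancer", "app_server", "database", "san"
    count: int = 1

@dataclass
class VirtualBusinessService:
    name: str
    provider: str                      # which data center / provider it currently runs in
    components: list[Component] = field(default_factory=list)

    def move(self, new_provider: str) -> None:
        """Relocate the entire service, infrastructure included, to another provider."""
        self.provider = new_provider

    def scale(self, role: str, count: int) -> None:
        """Grow or shrink one tier without touching the rest of the package."""
        for c in self.components:
            if c.role == role:
                c.count = count

    def replicate(self, name: str, provider: str) -> "VirtualBusinessService":
        """Stamp out an identical copy, e.g. for a new region or for DR."""
        return VirtualBusinessService(name, provider, [replace(c) for c in self.components])

if __name__ == "__main__":
    crm = VirtualBusinessService("crm", "internal-dc", [
        Component("load_balancer"), Component("app_server", 4), Component("database", 2),
    ])
    crm.scale("app_server", 8)            # scale one tier
    crm.move("hosting-provider-a")        # move the whole stack to a hosting provider
    dr_copy = crm.replicate("crm-dr", "hosting-provider-b")
    print(crm.provider, dr_copy.provider)
```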
Why use a revolutionary approach?
This revolutionary approach to cloud is an economic and agility game-changer for IT, enabling you to go far beyond what is enabled by simple server virtualization. It sets up huge potential improvements in the speed and operational expense of managing your applications and infrastructure – I can point you to at least one product in this space where you can literally draw what you need. This approach lets IT enter into conversations about your org’s business needs – conversations it wouldn’t have been in before. Service providers can use this approach to deliver cloud offerings faster, while building margin at the same time. Enterprises can make themselves ready to move apps to and from multiple different service providers.
In a world in which IT has been a bit timid about taking a stand on cloud, this approach gives IT a position of strength to deliver what the business is asking for. It’s not a bad way to come out looking like a hyper-intelligent, pan-dimensional being.
What the revolutionary approach looks like in the real world
At Cloud Slam, I talked about two customers taking this approach today that I know about from our work here at CA Technologies.
PGi: PGi turned to the cloud to help it quickly roll out new advanced meeting, conferencing, and collaboration services worldwide at a much faster pace and, as it turned out, a far lower price point than it could otherwise. As PGi enters new markets, it needs to quickly secure scalable data center resources near the markets it will be serving to ensure the best service for its new customers. Building a new data center is extremely expensive and time-consuming; it can typically take about 18 months. Instead, PGi subscribes to data center capacity from a set of different service providers.
PGi uses CA 3Tera AppLogic to create a movable “infrastructure stack” supporting the services it creates. This infrastructure stack consists of all of the servers, firewalls, storage and other components uniquely configured to support a given service, and can be moved with the click of a button from one service provider’s network to another. This way, PGi can choose the provider with the right price points and geographic coverage for each new market it enters. It can even easily move workloads between its own data centers and service providers’ networks.
PGi’s Chief Technology Officer David Guthrie (who has a video clip up on CA.com talking about all this) says that “using 3Tera, we’ve been able to spin up new virtual data centers to support 5 new locations for what it would have cost us to build one physical data center.” PGi’s time-to-market with new services has improved significantly. Previously, it would take up to 6 months to purchase new hardware resources and 4-6 weeks to deploy a new software application. Today, it takes between 2 and 5 days to complete these processes from start to finish.
ScaleMatrix: My second example was a new service provider customer of ours that I profiled in a recent interview. ScaleMatrix is a brand-new business that started from scratch last year. And, they’re already selling cloud services to their clients. By definition, that means no legacy systems. Instead they have taken this revolutionary approach to heart. They used the CA 3Tera AppLogic software and custom-designed hardware to get their offerings off the ground. Fast. If you want more details about what ScaleMatrix has been able to do, check out their profile here.
What are the challenges to taking this revolutionary path?
All is not simple with this revolutionary path to cloud, though.
You’ll break a lot of glass. You’re going to have to be ready to do a lot of learning about both the approach and a new use of technology. “Antibodies” from inside and outside IT will appear to resist this, often because it is such a different way to look at things. If you do take advantage of being able to move virtual business services among different providers, vendor management will take more of your time than it has in the past.
And, of course, fewer people have taken this path. The risks are certainly higher, but so are the rewards. Picture yourself as Arthur Dent, complete with his hitchhiking towel, his guidebook, and hopefully a whole lot more luck.
The good news…getting your organization to cloud fast
This revolutionary approach means you won’t have to wait around for generations for the Ultimate Answer, or even to see the initial benefits of the move to cloud. You jump right to the end state. You get application portability, mobility, and replication as part of the deal, plus some other very useful things for free, like standard application images, DR, and security.
A word of warning, however: to hear some people tell it, you’re not going to want to go back to doing things the old way.
Which path should I choose?
How do you translate all of this commentary (and the ones Andi gave) into real action? We provided a bit of a virtual Babel Fish at the end of our Cloud Slam presentation, to help you figure out whether to take Andi’s more evolutionary approach, or the revolutionary one I described here.
Here are a couple situations where you’d rather take the more revolutionary approach:
You could be a service provider that needs:
• To deliver a cloud offering now
• To get to market with new offerings
• To deliver multiple offerings (like IaaS and SaaS) while maintaining margin
Or, you could be an enterprise that:
• Wants an Amazon EC2-like infrastructure internally and can build up a greenfield infrastructure. To get the benefits, you need to control the components, have standardized (x86) hardware and a spiky usage profile, and be ready for a new development model
• Is out of time. Maybe a competitor has made a move you must counter, or maybe you’re just trying to deliver something to respond quickly to a new business initiative. Either way, you’ve figured out that the old way won’t work.
Either way – evolutionary or revolutionary – the answer that Andi and I gave in our 42-slide deck (and no, Douglas Adams fans, that wasn’t a coincidence) is “Don’t Panic.” Both are appropriate approaches in particular situations. In fact, most organizations will probably end up pursuing both. You’ll notice that the use cases that both of us gave are not mutually exclusive. Spend the time to think it through.
Hopefully this recap, alongside Andi’s, gives you some useful advice for having those Deep Thoughts. And, you’ll be happy to hear that gratuitous Hitchhiker’s Guide references are, well, “mostly harmless.”
Unlike Vogon poetry.
Wednesday, October 13, 2010
The first 200 servers are the easy part: private cloud advice and why IT won’t lose jobs to the cloud
Posted by Jay Fry at 9:48 PM
The recent CIO.com webcast that featured Bert Armijo of CA Technologies and James Staten of Forrester Research offered some glimpses into the state of private clouds in large enterprises at the moment. I heard both pragmatism and some good, old-fashioned optimism -- even when the topic turned to the impact of cloud computing on IT jobs.
Here are some highlights worth passing on, including a few juicy quotes (always fun):
Cloud has executive fans, and cloud decisions are being made at a relatively high level. In the live polling we did during the webcast, we asked who was likely to be the biggest proponent of cloud computing in attendees’ organizations. 53% said it was their CIO or senior IT leadership. 23% said it was the business executives. Forrester’s James Staten interpreted this to mean that business folks are demanding answers, often leaning toward the cloud, and the senior IT team is working quickly to bring solutions to the table, often including the cloud as a key piece. I suppose you could add: “whether they wanted to or not.”
Forrester’s Staten gave a run-down of why many organizations aren’t ready for an internal cloud – but gave lots of tips for changing that. If you’ve read James’ paper on the topic of private cloud readiness (reg required), you’ve heard a lot of these suggestions. There were quite a few new tidbits, however:
· On creating a private cloud: “It’s not as easy as setting up a VMware environment and thinking you’re done.” Even if this had been anyone’s belief at one point, I think the industry has matured enough (as have cloud computing definitions) for it not to be controversial any more. Virtualization is a good step on the way, but isn’t the whole enchilada.
· “Sharing is not something that organizations are good at.” James is right on here. I think we all learned this on the playground early in life, but it’s still true in IT. IT’s silos aren’t conducive to sharing things. James went farther, actually, and said, “you’re not ready for private cloud if you have separate virtual resource pools for marketing…and HR…and development.” Bottom line: the silos have got to go.
· So what advice did James give for IT organizations to help speed their move to private clouds? One thing they can do is “create a new desired state with separate resources, that way you can start learning from that [cloud environment].” Find a way to deliver a private cloud quickly (I can think of at least one).
· James also noted that “a private cloud doesn’t have to be something you build.” You can use a hosted “virtual private cloud” from a service provider like Layered Tech. Bert Armijo, the CA Technologies expert on the webcast, agreed. “Even large customers start with resources in hosting provider data centers.” Enterprises with CA 3Tera AppLogic running at their service provider and internally can then move applications to whichever location makes the most sense at a given point in time, said Armijo.
· What about “cloud-in-a-box” solutions? James was asked for his take. “Cloud-in-a-box is something you should learn from, not take apart,” he said. “The degree of completeness varies dramatically. And the way in which it suits your needs will vary dramatically as well.”
The biggest cloud skeptics were cited as – no surprise – the security and compliance groups within IT, according to the polling. This continues to be a common theme, but shouldn’t be taken as a reason to toss the whole idea of cloud computing out, emphasized Staten. “Everyone loves to hold up the security flag and stop things from happening in the organization.” But don’t let them. It’s too easy to use it as an excuse for not doing something that could be very useful to your organization.
Armijo also listed several tips for finding successful starting points in the move to creating a private cloud. It was all about pragmatic first steps, in Bert’s view. “The first 200 servers are the easy part,” said Armijo. “Because you can get a 50-server cloud up doesn’t mean you have conquered cloud.” His suggestions:
- Start where value outweighs the perceived risk of cloud computing for your organization (and it will indeed be different for each organization)
- Find places where you will have quick, repeated application or stack usage
- If you’re more on the bleeding edge, set up IT as an internal service provider to the various parts of the business. It’s more challenging, for sure, but there are (large) companies doing this today, and it will make profound improvements to IT’s service delivery.
Will cloud computing eliminate jobs? A bit of Armijo’s optimism was in evidence here: he said, in a word, no. “Every time we hit an efficiency wall, we never lose jobs,” he said. “We may reshuffle them. That will be true for clouds as well.” He believed more strategic roles will grow out of any changes that come as a result of the impact of cloud on IT.
“IT people are the most creative people on the face of the planet,” said Armijo. “Most of us got into IT because we like solving problems. That’s what cloud’s going to do – it’s going to let our creative juices flow.”
If you’re interested in listening to the whole webcast, which was moderated by Jim Malone, editorial director at IDG, you can sign up here for an on-demand, encore performance.
Tuesday, April 7, 2009
Webcast polls: Fast progress on internal clouds, but org barriers remain
Posted by Jay Fry at 10:59 AM
Today's free advice: you should never miss out on the opportunity to ask questions of end users. Surprise, surprise, they tell you interesting things. And, yes, even surprise you now and again. We had a great opportunity to ask some cloud computing questions last week, and found what looks like an interesting acceleration in the adoption -- or at least consideration -- of internal clouds.
As you've probably seen, Cassatt does an occasional webcast on relevant data center efficiency topics and we like to use those as opportunities to take the market's temperature (even if we are taking the temperature of only a small, unscientific sample). Back in November, we asked attendees of the webcast Cassatt did with James Staten of Forrester (covering the basics of internal clouds) some very, well, basic questions about what they were doing with cloud computing. The results: enterprises weren't "cloudy" -- yet. 70% said they had not yet started using cloud computing (internal or external).
Last Thursday we had another webcast, and again we used it as an opportunity to ask IT people what they actually are doing with internal clouds today. As expected, end users have only just started down this path and they are being conservative about the types of applications they say they will put into an internal cloud at this point. But you'd be surprised how much tire-kicking is actually going on.
This is a bit of a change from what we heard from folks back in the November Forrester webcast. In last week's webcast we were a little more specific in our line of questioning, focusing our questions on internal clouds, but the answers definitely felt like people are farther along.
Some highlights:
The webcast itself: tips on how to create an internal cloud from data center resources you already have. If you didn't read about it in my post prior to the event, we had Craig Vosburgh, our chief engineer and frequent contributor to this blog, review what an internal cloud was and the prerequisites your IT operations team must be ready for (or at least agree upon) before you even start down the path of creating a private cloud. He previewed some of what he said in the webcast in a posting here a few months back ("Is your organization ready for an internal cloud?"). The second part of the webcast featured Steve Oberlin, Cassatt chief scientist and blogger in his own right, covering 7 steps he suggests following (based on direct Cassatt customer experiences) to actually get an internal cloud implementation going.
On to the webcast polling question results:
IT is just beginning to investigate internal cloud computing, but there's significant progress. The biggest chunk of respondents by far (37%) were those who were just starting to figure out what this internal cloud thing might actually be. Interestingly, 17% had some basic plans in place for a private cloud architecture and were beginning to look into business cases. 7% had started to create an internal cloud and 10% said they were already using an internal cloud. Those latter two numbers surprised me, actually. That's a good number of people doing serious due diligence or moving forward fairly aggressively.
One word about the attendee demographics before I continue: people paying attention to or attending a Cassatt webcast are going to be more likely than your average bear to be early adopters. Our customers and best prospects are generally large organizations with very complex IT environments -- and IT is critical to the survival of their business. And, I'm sure that we got a biased sampling because of the title of our webcast ("How to Create an Internal Cloud from Data Center Resources You Already Have"), but it's still hard to refute the forward progress. Another interesting thing to note: we had more registrations and more attendees for this webcast than the one featuring Forrester back in November. I think that's another indication of the burgeoning interest level in the topic (and certainly not a ding at Forrester or their market standing -- James Staten did a bang-up job on the November one).
Now, if it makes the cloud computing naysayers feel any better, we did get 10% of the respondents to the first polling question saying they had no plans to create an internal cloud. And, there was another 20% who didn't know what an internal cloud was. We were actually glad to have that last group at the event; hopefully they had a good feel for some basic terminology by the end of the hour.
IT organizational barriers are the most daunting roadblocks for internal clouds. At the end of Craig's section of the webcast, he recapped all the prerequisites that he mentioned and then turned around and asked the audience what they thought their organization's biggest hurdles were from the list he provided. Only one of the technical issues he mentioned even got votes. Instead, 45% of the people said their organization's "willingness to make changes" was the biggest problem. A few (17%) also mentioned problems with the willingness to decouple applications and services from their underlying compute infrastructure -- an issue that people moving to virtualization would be having as well. 5% weren't comfortable with the shifts in IT roles that internal clouds would cause.
So, despite the 17% who said they had the prerequisites Craig mentioned well in hand, this seems to be The Big Problem: how we've always done things. Capturing a whole bunch of very valuable benefits still has to overcome some pretty strong organizational and political inertia.
IT isn't sure what its servers are doing. One of the 7 steps Steve mentioned was to figure out what you already have before trying to create an internal cloud out of it. Sounds logical. However, judging by our recent survey work and this webcast poll, this is a gaping hole. Only 9% said they had a minute-by-minute profile of what their servers were doing. Instead, they either had only a general idea (41%), knew what their servers were originally set up to do but weren't sure that was still the case (24%), or didn't have a good handle on what their servers were doing at all (18%). Pretty disturbing, and as Steve mentioned on the webcast, it's important to have this information in hand before you can set up a compute cloud to handle your needs. (We found this problem so prevalent among our customers that Cassatt actually created a service offering to help.)
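To make that "figure out what you already have" step a bit more concrete: even a simple collection script can move you from "general idea" to an actual minute-by-minute utilization profile. The sketch below isn't anything Steve showed on the webcast (and it isn't Cassatt's tooling) -- it's just a minimal illustration using the open-source psutil library, with the sampling interval and fields chosen by me for the example.

    import time
    import psutil  # open-source host-metrics library; any monitoring agent would do

    def sample_once():
        # One snapshot of what this server is doing right now.
        return {
            "timestamp": time.strftime("%Y-%m-%d %H:%M"),
            "cpu_percent": psutil.cpu_percent(interval=1),
            "mem_percent": psutil.virtual_memory().percent,
            "net_bytes_sent": psutil.net_io_counters().bytes_sent,
        }

    def profile(minutes=60):
        # Build a minute-by-minute utilization profile for this host.
        samples = []
        for _ in range(minutes):
            samples.append(sample_once())
            time.sleep(59)  # roughly one sample per minute, including the 1-second CPU measurement
        return samples

    if __name__ == "__main__":
        for row in profile(minutes=5):
            print(row)

Roll something like that up across a server estate and you at least know which machines are doing real work, which are idle, and when -- the raw material for deciding what can safely join an internal cloud pool.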
Test and development apps are the first to be considered for an internal cloud. In the final polling question (prompted by a question I posed on Twitter), we asked "What application (or type of application) was being considered to move to an internal cloud first?" And, despite the data nightmare that would ensue, we decided to let the answers be free-form. After sifting through and categorizing the responses, we came up with roughly 3 buckets. People were interested in trying out an internal cloud approach first with:
· Development/test or non-mission-critical apps
· Web servers, often with elastic demand
· New, or even just simple, apps
While a few people also said back-up or DR applications (you can read about one approach to DR using an internal cloud in a previous post) and some pointed to specific apps like electronic health records, most were looking to try something with minimal risk. A very sensible approach, actually. It matches the advice Steve gave everyone, to be honest (step #3: "Start small").
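Picking up on the second bucket (web servers with elastic demand): the appeal is that capacity can follow load instead of being sized for the peak. The toy loop below is purely illustrative -- it is not Cassatt's policy engine, and the thresholds, per-instance capacity, and function names are all invented -- but it shows the kind of decision an internal cloud controller makes over and over.

    # Toy sizing logic for a web tier with elastic demand (illustrative only).
    def desired_instances(requests_per_sec, capacity_per_instance=200, minimum=2, maximum=20):
        # Size the web tier to current demand, within a floor and a ceiling.
        needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
        return max(minimum, min(maximum, needed))

    def reconcile(current, target):
        # Report the scaling action an internal cloud controller would take.
        if target > current:
            return f"provision {target - current} more instance(s) from the shared pool"
        if target < current:
            return f"retire {current - target} instance(s) back to the shared pool"
        return "no change"

    if __name__ == "__main__":
        for load in (150, 900, 2600, 400):  # requests/sec at different times of day
            target = desired_instances(load)
            print(load, "req/s ->", target, "instances:", reconcile(3, target))

Run against a shared pool, that same logic is what lets lower-priority capacity (like those test/dev boxes) flex to cover the web tier's peaks and then hand the hardware back.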
For those who missed the webcast, we've posted the slides on our site (a quick registration is required). Helpful hint when you get the deck: check out the very last slide. It's a great summary of the whole thing, subtly entitled "The 1 Slide You Need to Remember about Creating Internal Clouds." The event recording will also be posted shortly (to make sure you are notified when it is, drop us an e-mail at info@cassatt.com).
In the meantime, let us know both what you thought was useful (or not) about the webcast content, and also what topic we should cover in the follow-on webcast that we've already started planning. Hybrid clouds, anyone?
Tuesday, March 24, 2009
Webcast: Using what you already have to create a cloud
Posted by
Jay Fry
at
9:04 PM
If there's one thing customers hate, it's a great idea that comes with a caveat. Especially a caveat that says something like: "In order to benefit from said great idea, you are required to tear everything out and start all over." That sort of behavior usually gets you kicked out of data center and IT operations meetings. Data centers don’t operate that way. Data center operators don’t operate that way.
However, if there's a way to link a new idea or technology incrementally with what's already underway or where investments have already been made, you're going to get a much warmer reception.
OK, so I'll apply the above truisms to cloud computing (and internal/private clouds especially). If organizations think that getting the benefits of an internal cloud means buying their infrastructure anew, they are going to be interested only where they have plans (and budgets) that can support that kind of spending and that kind of change. That's especially difficult in this economy. In fact, there's a non-zero chance that they'll be openly hostile and, despite the promised benefits of internal clouds, ignore the concept completely.
Instead, my belief is this: a really useful internal cloud is one that leverages what they already have. And, man, do people have a lot of messy, complex stuff -- VMs, physical servers, applications, networking, the works.
So, how do you build an internal cloud out of what you already have in your data center?
Based on our experiences with customers, there are a couple of things you'll need to know and do. Some of these are organizational and technological challenges (Craig Vosburgh has talked about some of these issues in his posts here before), like a willingness to change current procedures and roles, or even having your once-relatively-static CMDB shift into a more dynamic mode.
Once you know you're going to be able to address some of those high-level concerns, you then have to get a real, live project going around an actual set of applications. Rubber, meet road.
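One way to picture that "CMDB in a more dynamic mode" point: instead of someone editing records after the fact, the CMDB gets updated from provisioning events as the internal cloud reassigns hardware. The sketch below is hypothetical -- the event format and record fields are invented for illustration, and no real CMDB product's API is being shown.

    # Hypothetical sketch: a CMDB kept current by provisioning events, not manual edits.
    from datetime import datetime, timezone

    cmdb = {
        "web-042": {"role": "web server", "allocated_to": "order-entry", "state": "active"},
    }

    def on_provisioning_event(event):
        # Update the record whenever the internal cloud reassigns a host.
        record = cmdb.setdefault(event["host"], {})
        record.update(
            role=event["role"],
            allocated_to=event["application"],
            state=event["state"],
            last_changed=datetime.now(timezone.utc).isoformat(),
        )

    if __name__ == "__main__":
        # The cloud controller just repurposed web-042 for the month-end batch run.
        on_provisioning_event({"host": "web-042", "role": "batch node",
                               "application": "month-end-close", "state": "active"})
        print(cmdb["web-042"])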
To rattle off some of the things we've learned from our customers in getting internal cloud projects off the ground, we corralled a couple of our technical experts for a webcast on the topic next week. Steve Oberlin, Cassatt's chief scientist and newly minted blogger (check out Cloudology), and the aforementioned Craig Vosburgh, our chief engineer, are going to walk through what they've seen that works, what doesn't work, and -- probably most interestingly -- talk about how you can use the data center components you've already invested in as the basis for a cloud-style architecture on your own premises.
This webcast is a follow-on to one we did back in November with James Staten of Forrester, which covered what internal clouds were in the first place (you can view the playback of that one here -- simple registration required) and how they might help data center efficiency. This one is about the next step: where do you start?
One thing that might be of special interest: Steve and Craig will point out some internal cloud starter projects and the characteristics that make those projects good pilots. And, yes, they will talk a bit about how something like Cassatt's software can help, but mostly the webcast will be a synthesis of what we've learned in customer engagements in the hope that you can benefit from some of our war stories. And, the two of them are planning to talk for about 20 minutes each, leaving a bunch of time for live questions from the attendees. You have my word on that. (It helps that I'm the emcee.)
Thanks, by the way, to Eric Lundquist of Computerworld for pointing his readers to the webcast. "I don't usually plug vendor webcasts," says Eric in his blog, "but this upcoming one from Cassatt looks interesting. If you go to the webcast, let me know if you like it or not." We'll work hard not to disappoint, and we'll let you be the judge of that.
There's one caveat (of course): you'll have to show up to the webcast.
You can register for the Cassatt April 2 webcast "How to Create an Internal Cloud from Data Center Resources You Already Have" here.
Labels:
internal clouds,
IT operations,
private clouds,
webcast
Monday, December 1, 2008
Forrester-Cassatt webcast poll: enterprises not cloudy -- yet
Posted by
Jay Fry
at
9:32 PM
There are lots of great resources out there if you're interested in cloud computing (no, really?). Some are a little more caught up in the hype, some less so. The trick is distinguishing between the two. We dug up some interesting stats in our recent webcast with Forrester Research that we thought were worth highlighting and adding to the conversation, hopefully tending toward the "less hype/more useful" side of the equation. But you be the judge.
First, some quick background on the Cassatt-Forrester webcast: it was a live event held Nov. 20, featuring Cassatt's chief scientist Steve Oberlin and James Staten, principal analyst at Forrester Research. We had both speakers give their take on aspects of cloud computing. James gave a really good run-down of his (and Forrester's) view of what defines cloud computing and the positives and negatives organizations will run into, and discussed examples of what a few companies are actually doing today. He cited a project in which the New York Times made 11 million newspaper articles from 1851 to 1922 available as PDFs -- for a total cost of 240ドル using Amazon EC2 and S3 (and, no, I didn't forget a few zeroes; the actual cost was indeed 240ドル).
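That 240ドル number sounds like a typo until you do the arithmetic. The commonly reported breakdown (from the Times' own engineering write-up, not from our webcast) was roughly 100 EC2 instances running for about a day at the then-current small-instance rate of around 0ドル.10 per instance-hour -- so, as a rough, back-of-the-envelope check:

    # Rough check on the New York Times archive-conversion figure.
    # Instance count, runtime, and rate are the commonly reported numbers,
    # quoted from memory of the NYT write-up -- treat them as approximate.
    instances = 100                  # EC2 instances run in parallel
    hours = 24                       # approximate wall-clock runtime
    rate_per_instance_hour = 0.10    # small-instance price at the time, in USD

    compute_cost = instances * hours * rate_per_instance_hour
    print(f"~{instances * hours} instance-hours -> about ${compute_cost:.0f} of compute")
    # prints: ~2400 instance-hours -> about 240ドル of compute

Whatever the exact line items, the point James was making stands: a job that would once have required a capital request got done for lunch money.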
James also gave an overview of the platform-as-a-service and infrastructure-as-a-service offerings out there today, plus advice to IT ops on how to experiment with the cloud -- including a suggestion to build an internal cloud inside the four walls of your data center. All in all, he left you with the feeling that cloud computing is indeed real, but he was really asking IT ops folks what they're doing about it. (Translation: you can't afford to just sit on your hands on this one...)
Steve Oberlin's comments used James' thoughts about building an internal cloud as a jumping-off point. He explained a bit about how Cassatt could help IT build a cloud-style set-up internally using the IT resources they already have in their data centers. The main concept he talked about was using internal clouds to get the positives of cloud computing, but to do so incrementally -- no "big bang" approach -- and to help customers find a way around the negatives that cause concern about today's cloud offerings.
And that's where some of the interesting stuff comes in.
On the webcast, we asked some polling questions to get a feel for where people were coming from on the cloud computing topic. Some of the results:
To most, cloud computing is a data center in the clouds. There are many definitions of what cloud computing actually is. OK, that's no surprise. For the webinar attendees, it wasn't just virtualized server hosting (though 35% said it was). It wasn't just SaaS (though 49% said that was their definition). It wasn't just a virtualized infrastructure (the answer for 53%). By far the largest chunk -- 78% of the webinar attendees -- said it was an entire "data center in the clouds." And that was before James Staten offered his definition (one Steve liked, too): "a pool of highly scalable, abstracted infrastructure, capable of hosting end-customer applications, that is billed by consumption."
Most people haven't even started cloud computing yet. Of all the data we gathered from the webinar, this was the one that most starkly showed where people are with cloud computing. Or rather, where they aren't. We asked "what is your company currently using cloud computing for?" 70% replied that they are not using it yet. James said Forrester's most recent research echoes these results. So, there's a long way to go. Some were starting to experiment with websites/microsites, rich Internet applications, and internal applications of some sort. Those were all single-digit percentage responses. So what was the most frequently selected application type, aside from "none"? Grid/HPC applications (9%).
Security, SLAs, and compliance: lots and lots of hurdles for cloud computing. We asked about the most significant hurdles that webinar attendees' organizations faced with cloud computing. The answers were frankly not very surprising: they are the same things we've been hearing from large companies and government agencies for months now. 76% cited security as the cloud's biggest obstacle for their organization. 62% said service levels and performance guarantees. 60% said compliance, auditing, and logging. As if to underscore all of this, nobody clicked the "cloud computing has no hurdles" button. Not that we expected them to, but, hey, we're optimistic.
By the way, we don't take these results to mean that all is lost for cloud computing. On the contrary, it's these negatives and hurdles that people see that we think we've got some solutions for at Cassatt. In any case, these simple polls from our webcast just scratch the surface and beg for some follow-up research. In the meantime, if you're interested in hearing the webcast firsthand, you can listen to and watch a full replay here (registration required). If you'd like the slides, drop us an e-mail at info@cassatt.com.
Feel free to post your thoughts on these results.