Building block architectures

What, at the end of the day, is the promise of cloud computing?

Simplicity. What the cloud promises is information power without worrying about the mechanics of delivering it.

It's much like your home: when you replace the old television with a home theatre system, or go from one desktop PC to laptops, smartphones, tablets and chargers galore, you just fill up the plugs and the local utility delivers more power to run everything. It's pay as you go, and pay for what you use.

Really effective use of the promise of the cloud requires massive upheaval, though. Existing applications won't necessarily cut it: that idea of "power on demand" really only works well when your applications are modular, triggering the start-up and shut-down of processing instances to handle shifting transaction loads. Most of us don't have a portfolio that looks like that, yet.
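
To make that concrete, here is a minimal sketch of the kind of load-driven scale-out/scale-in loop a modular application makes possible. The names (get_load, start_instance, stop_instance) are hypothetical stand-ins, not any particular cloud provider's API:

```python
# Hypothetical autoscaling loop; illustrative only, not any provider's real API.
import math
import time

MAX_TPS_PER_INSTANCE = 100   # transactions/sec one instance handles comfortably
MIN_INSTANCES = 2            # small core kept running at all times

def desired_instances(current_tps: float) -> int:
    """Size the fleet to the current transaction rate."""
    return max(MIN_INSTANCES, math.ceil(current_tps / MAX_TPS_PER_INSTANCE))

def autoscale(get_load, start_instance, stop_instance, running: list) -> None:
    """Poll the load and start or stop instances to match it."""
    while True:
        target = desired_instances(get_load())
        while len(running) < target:
            running.append(start_instance())   # pay-as-you-go: add capacity
        while len(running) > target:
            stop_instance(running.pop())       # release capacity when load falls
        time.sleep(60)
```

The point of the sketch is simply that the application has to be built to tolerate instances appearing and disappearing; a monolith can't be scaled this way, which is why the existing portfolio is the obstacle.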

Despite the growing number of quality Canadian cloud suppliers that ensure information stays in the country, there are also still worries about just how much trust you can place in an all-cloud solution.

So perhaps it will make sense to move in stages. Some services — mail comes immediately to mind — could jump directly to a public cloud provider. A core application might move to a private cloud solution offered by that application vendor. There'll still be some elements with a business process outsourcer, or assets parked in a colocation facility or with a managed service provider.

In the interim, though, maybe some pre-configured infrastructure will also make sense.

Top-flight performance from infrastructure these days means managing a fair number of variables. The network interconnection components, processor mix and data stores all have to be balanced together. Microcode patches from one vendor can cause subtle performance changes that ripple through a configuration until other vendors have updated in turn.

Take a look at the infrastructure organization in any reasonably sized enterprise. Storage, processors and operating systems, and networks are usually three separate silos. Yet virtualizing and driving maximal utilization at high performance from the configuration requires integrated skills.

So instead, perhaps letting a vendor configure for you is the right answer. Something like VCE’s Vblock infrastructure might be a reasonable intermediate building block toward that cloud future.

A Vblock is a box containing a balanced, maintenance-synchronized configuration of networking, processing and storage, running virtualized workloads. Think of it as a private cloud operating within your own data centre. Instead of having to select components, balance the configuration and handle patches from multiple vendors, you get a single-vendor integration that lets IT infrastructure move "upmarket" (much as it would if everything came from the cloud).

Similarly, for analytics (BI-style against a data warehouse, or big-data-style including unstructured information), you could configure the hardware yourself and implement the necessary databases, analytics engines, Hadoop capabilities and the like — or you could buy a fully integrated data computing appliance such as Greenplum's.
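
For a sense of what the do-it-yourself path is assembling, here is a toy sketch of the map-and-reduce aggregation pattern such a stack exists to run at scale; plain single-machine Python stands in for what a Hadoop cluster or an integrated appliance would distribute across many nodes:

```python
# Toy map/reduce word-frequency count: the pattern a Hadoop cluster or an
# integrated analytics appliance runs across many nodes, shown on one machine.
from collections import Counter
from itertools import chain

def map_phase(document: str):
    """Emit (word, 1) pairs, as a mapper would for its slice of the data."""
    return ((word.lower(), 1) for word in document.split())

def reduce_phase(pairs):
    """Sum the counts per key, as reducers would after the shuffle."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return totals

documents = [
    "cloud computing promises simplicity",
    "integrated blocks simplify cloud infrastructure",
]
result = reduce_phase(chain.from_iterable(map_phase(d) for d in documents))
print(result.most_common(3))   # e.g. [('cloud', 2), ('computing', 1), ('promises', 1)]
```

The appliance route packages the same capability pre-integrated; the trade-off is configuration effort and skills versus vendor lock-in at the block level.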

What these sorts of integrated solutions point to is a shift in the way we plan and think about infrastructure, triggered by the emergence of the cloud. The decisions that lie ahead aren't about "standard computing" vs "cloud computing": they're about what mix of cloud-style offerings is part of the IT service set. A highly virtualized, workload-responsive, standardized infrastructure is the future — we can either provision it piecemeal and invest time in configuring and maintaining the elements, or we can move upmarket and configure standard blocks, adding more as application capacity requirements increase (as we update our portfolios to take advantage of the technology).

Then the decision whether to run it privately (whether on our own or through a partner) or in a public facility becomes a much more relevant one: one of dollars and cents (indeed, most will probably run a hybrid of the two for a decade or more, buying capacity on demand for peaks against a core kept close). Meanwhile, instead of spending a career racking, patching and troubleshooting configurations, we'll be moving our skills upward to value-added services for the enterprise.
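
A back-of-the-envelope comparison shows why the choice turns into dollars and cents; the figures below are invented placeholders, not quotes from any supplier:

```python
# Hypothetical cost comparison: a steady private core plus on-demand burst
# capacity for peaks, versus owning enough capacity to cover the peak year round.
CORE_UNITS = 10                      # capacity units kept in-house
PEAK_UNITS = 25                      # capacity needed in the busiest weeks
PEAK_WEEKS = 6                       # weeks per year the peak lasts
OWNED_COST_PER_UNIT_YEAR = 1000.0    # amortized annual cost of owned capacity (placeholder)
BURST_COST_PER_UNIT_WEEK = 40.0      # on-demand public-cloud rate (placeholder)

own_the_peak = PEAK_UNITS * OWNED_COST_PER_UNIT_YEAR
hybrid = (CORE_UNITS * OWNED_COST_PER_UNIT_YEAR
          + (PEAK_UNITS - CORE_UNITS) * BURST_COST_PER_UNIT_WEEK * PEAK_WEEKS)

print(f"Own everything at peak size: ${own_the_peak:,.0f}/year")
print(f"Hybrid (core + burst):       ${hybrid:,.0f}/year")
```

With these made-up numbers the hybrid comes in at roughly half the cost of owning peak capacity outright; with different utilization patterns the answer flips, which is exactly why it becomes a financial calculation rather than an architectural one.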

The market is shifting: it’s not about OS vendors or hardware manufacturers alone any more. Time to get familiar with the integrators and what their products can do for you.


Bruce Stewart is a 39-year veteran of IT management and above. He is an executive advisor serving CIOs and senior executives in the areas of governance, strategy, complex architectural transitions, portfolio yield and value generation.


  • DonSheppard

    Unfortunately I didn't make it to the EMC conference yesterday… you did, obviously!
    I fully agree with your idea that a basic promise of cloud computing is simplicity – it definitely is!

    But the road to simplicity is paved with standardization. We are moving from component standardization to sub-system and service standardization. The idea is "data centre as a service" – it may be in a single appliance box or a complete Google data centre.

    And, yes, we cannot have a big-bang conversion. It may take generations for cloud computing to fully unfold as a mainstream approach. Selecting a viable starting point and avoiding destructive competition among suppliers will be critical.

    Perhaps the "legacy" telephone system was one of the first globally interconnected, multi-supplier, cloud-based approaches to communications. I also remember, when the X.400 standards were first being developed, that there was a similar vision for email – essentially a cloud-based, multi-supplier, store-and-forward message-processing service.