It might be cliche to note that the dawn of distributed computing saw many companies lose the discipline that existed in the mainframe data centers they abandoned in favor of minicomputers - and ultimately server computing. The overarching model in the traditional mainframe environment was to plan carefully which IT resources would be provisioned to which business processes. Provisioning hardware and software resources was less of an issue then than it is in many companies today, in large part because of the single-vendor dominance of the mainframe provider. IBM dictated interoperability standards for all equipment connected to the central processor and for all software that would run on the platform. Adherence to these de facto standards made short work of resource provisioning.
This is not to say that the mainframe model should be the model for all computing today. In some cases, it was difficult or impossible to change resource allocations once committed, limiting the ability of IT (then Data Processing) to turn on a dime in the face of changing business requirements.
To the extent that distributed computing has improved the agility of business IT and its ability to flex in response to changing business priorities - usually by standing up another server, operating system and application on the fly - this should be acknowledged. However, the very nature of distributed computing makes coherent management and provisioning of resources that much more difficult.
The bottom line is that efficient IT resource provisioning requires disciplined processes for identifying changing requirements, an effective means of determining what resources are appropriate and available to meet those requirements, tools for reallocating resources with confidence in the outcome, a method for monitoring the effects of provisioning on the business and the infrastructure, and a method for returning resources to a pool once they are no longer needed. It almost goes without saying that a process is also needed for refining all of the preceding processes on a continuous basis.
The key to on-the-fly provisioning of information processing resources to business activities is to organize hardware and software components - together with the processes for their control and management - into a set of pre-defined services. That way, in theory, services can be selected and applied to a business process like ordering a meal from an a la carte menu at a restaurant.
Services may represent rudimentary resources such as storage capacity, CPU processing power, or network bandwidth, but they can also represent more specific, higher-level functions.
The services menu can become quite detailed and sophisticated. However, it is essentially an abstraction of a set of uniform components and processes that provide a reusable resource that can be placed in a pool and deployed for use as needed.
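To make the "menu" idea concrete, here is a minimal sketch, in Python, of what a catalog of pre-defined services might look like. The service names, attributes and tiers are hypothetical illustrations, not drawn from any particular product or standard.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceOffering:
    """One entry on the services 'menu': a reusable, pre-defined resource."""
    name: str            # e.g., "tier-2-storage"
    resource_type: str   # "storage", "compute", "network", ...
    unit: str            # unit in which the service is ordered
    attributes: dict = field(default_factory=dict)

# A hypothetical a la carte menu of rudimentary and higher-level services.
CATALOG = [
    ServiceOffering("tier-1-storage", "storage", "GB", {"raid": "10", "replicated": True}),
    ServiceOffering("tier-2-storage", "storage", "GB", {"raid": "5", "replicated": False}),
    ServiceOffering("standard-compute", "compute", "vCPU", {"memory_per_vcpu_gb": 4}),
    ServiceOffering("lan-bandwidth", "network", "Mbps", {"qos": "best-effort"}),
]

def order(service_name: str, quantity: int) -> dict:
    """'Order off the menu': turn a catalog entry into a provisioning request."""
    offering = next(s for s in CATALOG if s.name == service_name)
    return {"service": offering.name, "quantity": quantity, "unit": offering.unit}

if __name__ == "__main__":
    print(order("tier-1-storage", 500))   # e.g., 500 GB of replicated tier-1 storage
```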
Pooling is the other key to understanding a service-based provisioning model. Creating and pooling services is not quite as easy as it sounds. Not all hardware from all vendors works the same way. Storage technology provides perhaps the most obvious example of this phenomenon.
In the realm of data storage, vendors have designed their boxes differently. Engineers have found different approaches for routing data through their boxes of hard drives. Most vendors have added "value-add" features and functions to their array controllers (the brains of their storage boxes) in an effort to differentiate their products from one another and, perhaps more to the point, to increase the pain the consumer will experience if he ever seeks to re-host his data on a competitor's box of disk drives.
So, while disk drives may be commodity components of any array, and while the shells used to house the drives may come from only a few providers, arrays themselves are not commodity goods that can be mixed and matched at will without reference to the brand name on the cabinet. A workaround to this hurdle is to begin buying bare-bones storage rigs that are not encrusted with on-box value-add software, and to deploy those software functions instead on platforms external to the array of disks. Not only does this strategy help to eliminate "functionality stovepiping" and vendor lock-in, it also makes the game of building services easier and arguably makes the infrastructure easier to manage.
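One way to picture pulling value-add functions off the array is an abstraction layer that exposes the same operations regardless of whose brand name is on the cabinet. The sketch below is illustrative only; the vendor adapters and their operations are placeholders, not real product APIs.

```python
from abc import ABC, abstractmethod

class StorageArray(ABC):
    """Common operations every array must expose, regardless of vendor."""

    @abstractmethod
    def create_volume(self, size_gb: int) -> str:
        """Carve out a volume and return its identifier."""

    @abstractmethod
    def delete_volume(self, volume_id: str) -> None:
        """Return the volume's capacity to the array."""

# Hypothetical adapters: each one hides a single vendor's way of doing things.
class VendorAArray(StorageArray):
    def create_volume(self, size_gb: int) -> str:
        return f"vendor-a-lun-{size_gb}"      # placeholder for vendor A's own API
    def delete_volume(self, volume_id: str) -> None:
        pass                                  # placeholder for vendor A's own API

class VendorBArray(StorageArray):
    def create_volume(self, size_gb: int) -> str:
        return f"vendor-b-vol-{size_gb}"      # placeholder for vendor B's own API
    def delete_volume(self, volume_id: str) -> None:
        pass

def provision(array: StorageArray, size_gb: int) -> str:
    """Value-add logic lives outside the box, so any array can sit underneath."""
    return array.create_volume(size_gb)
```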
Pooling is an old concept. Again, in the storage realm, the idea of pooling capacity dates back to the mainframe, but found its distributed computing equivalent in the Enterprise Network Storage Architecture (ENSA) first advanced by Digital Equipment Corporation soon after the company was acquired by Compaq. (HP, which acquired Compaq, sought to disappear all copies of the ENSA whitepaper much in the way that a Third World dictator seeks to disappear his political opponents.)
ENSA was a strategy for aggregating all storage resources into a common virtual pool that could be dynamically allocated to applications on-the-fly. The strategy entailed the creation of a virtualized Storage Area Network (SAN) - though not the SANs (Fibre Channel Fabrics, to be accurate) that we have read about or deployed over the last decade. In reality, the ENSA writers never posited any sort of plumbing to interconnect the storage itself or to enable its quick allocation to specific servers and the applications they hosted. The term "network" reflected a subtle assumption that common networks would eventually provide all of the necessary interconnects. Since Fibre Channel isn't a network protocol, you couldn't technically build a SAN from it, at least not in the ENSA sense.
This didn't stop progress, however, on the idea of storage pooling. One tack pursued by vendors - in distributed storage and networks a decade ago, and more recently in the distributed server space - is virtualization. ENSA called for storage virtualization (though, again, it provided little guidance for actually virtualizing the storage itself) to simplify the dynamic allocation of capacity to applications that needed it from a common pool of resources. Network virtualization was pursued in part to create "private networks" through public network plumbing for purposes of security. Today's hypervisor-based server virtualization seeks to leverage the commoditized design of servers to enable application multi-tenancy on less hardware and to provide a means to re-host application workloads on the fly to other commodity servers.
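The pooling idea at the heart of ENSA can be illustrated with a toy allocator: capacity from several devices is aggregated into one logical pool and handed out to applications on demand, then returned when no longer needed. This is a sketch of the concept only, not of any vendor's virtualization engine; the device and application names are hypothetical.

```python
class CapacityPool:
    """Aggregate free capacity from many devices into one logical pool."""

    def __init__(self):
        self.devices = {}        # device name -> free GB
        self.allocations = {}    # application -> list of (device, GB) slices

    def add_device(self, name: str, free_gb: int) -> None:
        self.devices[name] = free_gb

    def allocate(self, app: str, size_gb: int) -> None:
        """Satisfy a request by drawing slices from whichever devices have room."""
        remaining, slices = size_gb, []
        for device, free in self.devices.items():
            if remaining == 0:
                break
            take = min(free, remaining)
            if take:
                self.devices[device] -= take
                slices.append((device, take))
                remaining -= take
        if remaining:
            # Roll back: the pool cannot satisfy the request.
            for device, take in slices:
                self.devices[device] += take
            raise RuntimeError(f"pool cannot satisfy {size_gb} GB for {app}")
        self.allocations.setdefault(app, []).extend(slices)

    def release(self, app: str) -> None:
        """Return an application's capacity to the pool once it is no longer needed."""
        for device, take in self.allocations.pop(app, []):
            self.devices[device] += take

pool = CapacityPool()
pool.add_device("array-1", 500)
pool.add_device("array-2", 300)
pool.allocate("order-entry", 600)   # spans both arrays transparently
```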
Virtualization doesn't truly address the differences between the underlying components of the infrastructure; it only masks them from view. We illustrate this idea humorously with the picture at right.
A less than attractive server administrator (female) and a less than attractive storage administrator (male) find their social life limited by their looks, so they go to a plastic surgeon and have some work done.
They emerge looking like supermodels and one morning they meet at the local coffee emporium, strike up a conversation, fall deeply in love, marry and reproduce.
Their offspring, however, is ugly because plastic surgery doesn't change DNA. It just masks it from view.
Virtualization does the same for technology infrastructure: it doesn't fix the infrastructure or make it more interoperable; it just masks complexity and presents an abstracted view.
Virtualization technologies are improving, of course. They are the centerpiece both of consolidation strategies in many companies and of the development of "clouds." Under the abstraction layer, however, much work remains to be done to get to real pooling and dynamic service allocation.
Assuming that resources and processes can be aggregated in an intelligent service-oriented way, and pooled for dynamic assignment to business processes, the other hurdle will be to find a means to translate business requests into technologist-friendly service requests. Much work is needed in this space to develop a ready path for business-centric IT service provisioning.
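As a thought experiment, the translation step might look like a simple rules table that maps business-level statements of need onto entries in a service catalog such as the one sketched earlier. The business requests, service names and sizing figures below are hypothetical placeholders.

```python
# Hypothetical mapping from business-level needs to catalog service requests.
TRANSLATION_RULES = {
    "keep 7 years of transaction history": [("tier-2-storage", "GB")],
    "launch a customer-facing web app":    [("standard-compute", "vCPU"),
                                            ("tier-1-storage", "GB"),
                                            ("lan-bandwidth", "Mbps")],
}

def translate(business_request: str, sizing: dict) -> list:
    """Turn a business request plus sizing estimates into technologist-friendly
    service requests against the catalog."""
    requests = []
    for service, unit in TRANSLATION_RULES.get(business_request, []):
        requests.append({"service": service,
                         "quantity": sizing.get(service, 0),
                         "unit": unit})
    return requests

print(translate("launch a customer-facing web app",
                {"standard-compute": 8, "tier-1-storage": 200, "lan-bandwidth": 100}))
```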
With all due respect to the ITILs, COBITs, Six Sigmas and others, common sense rules still apply. We offer ours in the sidebar at right.
Basic Components of Business-IT Service Provisioning
1. Define Needs: business requirements need to be identified and "translated" into a form intelligible to technologists. Needs must be mapped to resources.
2. Assign Resources: in geek speak, services refer to hardware resources and software processes (and human resources, too). If resources and processes adhere to a "services" structure, and if services are "pooled" or aggregated in a manner that makes their assignment similar to ordering dinner off an a la carte menu, this step is a snap. If not, the business needs to get to work on a pooling strategy.
3. Configure & Tune: before new services are released to the business, they need to be tested and validated under workload to make sure they fit the bill. Moreover, ancillary services are needed to ensure the compliance, security, auditability and protection of the new configuration and the new data it will generate.
4. Deploy & Operate: give the users the keys after ensuring that they have received any necessary training, and ensure that the new configurations and workload are well understood by IT operations staff.
5. Monitor & Manage: as new services are operated, they need to be monitored and managed on an ongoing basis, both to ensure operational efficiency and to document performance metrics that can help guide future service provisioning.
6. Teardown & Re-pool: when needs change, provisioned resources may become unnecessary and available for re-pooling and re-allocation. This process requires careful documentation and coordination (see the sketch that follows).
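The six steps in the sidebar can be read as a simple lifecycle. The minimal skeleton below shows one way the hand-offs between steps might be tracked and enforced; the step names mirror the sidebar, while the class, its methods and the sample evidence strings are purely hypothetical.

```python
from enum import Enum, auto

class Step(Enum):
    DEFINE_NEEDS     = auto()
    ASSIGN_RESOURCES = auto()
    CONFIGURE_TUNE   = auto()
    DEPLOY_OPERATE   = auto()
    MONITOR_MANAGE   = auto()
    TEARDOWN_REPOOL  = auto()

# Steps proceed in sidebar order; monitoring feeds future definitions of need,
# and teardown returns resources to the pool.
ORDER = list(Step)

class ProvisioningJob:
    """Hypothetical skeleton tracking one service request through the lifecycle."""

    def __init__(self, request: str):
        self.request = request
        self.completed = []      # list of (step, evidence) pairs

    def advance(self, step: Step, evidence: str) -> None:
        """Record completion of a step, enforcing the sidebar's ordering."""
        expected = ORDER[len(self.completed)]
        if step is not expected:
            raise ValueError(f"expected {expected.name}, got {step.name}")
        self.completed.append((step, evidence))

job = ProvisioningJob("500 GB of replicated storage for order entry")
job.advance(Step.DEFINE_NEEDS, "mapped to tier-1-storage in catalog")
job.advance(Step.ASSIGN_RESOURCES, "drawn from pool: array-1 + array-2")
job.advance(Step.CONFIGURE_TUNE, "validated under workload; backup policy attached")
job.advance(Step.DEPLOY_OPERATE, "handed to users; operations staff briefed")
```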