Infrastructure Software
Rebecca Buckman  |  November 17, 2014
Thirteen-Year-Old Blog Post Presages Datacenter Innovations

In April 2001, Battery published this blog post about the dawning age of scale-out computing and huge datacenters. This was pre-VMware and pre-Amazon Web Services. The piece outlined an investment thesis that led to the formation of BladeLogic, a seminal datacenter-automation company Battery helped create and build.

We’re sharing the piece in the spirit of Throwback Thursday (#TBT). The idea underlying the post was that a new breed of IT vendor would emerge with targeted products to address this new phase of computing—namely, those that would help one system administrator manage 1000 or more servers, up from the 10 or so many were managing at the time. Now, of course, nimble startups like Chef are allowing companies to manage millions of servers.
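
To make the “one admin, thousands of servers” idea concrete, here is a rough sketch of the declarative, inventory-driven style that tools like Chef popularized. It is purely illustrative, written in Python with hypothetical hostnames and stubbed helpers rather than any real tool’s API: the administrator declares desired state once, and an engine applies it idempotently to every machine in the fleet.

```python
# Illustrative sketch only -- not any real tool's API. The point is the model:
# declare desired state once, then converge an arbitrarily large inventory to it.
from dataclasses import dataclass

@dataclass
class PackageState:
    name: str
    version: str

# Desired state, declared once by the administrator (hypothetical packages).
DESIRED = [PackageState("nginx", "1.24"), PackageState("openssl", "3.0")]

def current_version(server: str, package: str):
    """Stub: a real tool would query the server via SSH or an agent."""
    return None  # pretend nothing is installed yet

def install(server: str, pkg: PackageState) -> None:
    """Stub: a real tool would invoke the platform's package manager."""
    print(f"{server}: installing {pkg.name}-{pkg.version}")

def converge(server: str) -> None:
    # Idempotent: act only where actual state differs from desired state.
    for pkg in DESIRED:
        if current_version(server, pkg.name) != pkg.version:
            install(server, pkg)

if __name__ == "__main__":
    inventory = [f"web{i:04d}.example.com" for i in range(1, 1001)]  # 1,000 servers
    for server in inventory:
        converge(server)
```

The leverage comes from the loop at the bottom: growing the fleet from 10 servers to 1,000 changes the inventory list, not the administrator’s workload.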

It was fun to watch the BladeLogic and Opsware teams emerge as the leaders in the category and duke it out; it was one of the most exciting recent rivalries in enterprise software.

But while we pegged many key trends of scale-out computing in this post, the Baha Men, who popularized “Who Let the Dogs Out,” have unfortunately not stood the test of time.

Echoing the catchy lyrics of the Baha Men’s smash hit “Who Let the Dogs Out?”, many IT system administrators are probably humming their own version of the song: “Who let the servers out?”

Just three years ago, a datacenter of one hundred boxes would have been considered enormous. Today, a datacenter of that size is commonplace for even modest Internet or intranet application deployments at service providers and enterprises. In fact, portals (Yahoo, Google), investment banks (CSFB), and managed hosters (Digex, USIx) routinely deploy and manage servers by the thousands, and the pace of growth is only increasing.

Applications that previously ran on a single mainframe now run in a modular, distributed-computing environment with racks of servers. These smaller-footprint servers allow corporations to scale their IT infrastructure incrementally, as opposed to the step-curve model necessitated by the large servers of the past. The resulting systems-management challenge is that the number of datacenter devices (servers, load balancers, routers, firewalls, etc.) is growing exponentially (on top of sales of 12 million servers and 15 million datacom devices in 2000), while the number of trained IT professionals is growing only linearly (according to IDC, 300,000 IT jobs went unfilled in 2000, a figure expected to reach 500,000 in 2002).

Similar to the architecture inflection points in the past (mainframes to minis to client-server), Battery believes new infrastructure management tools and processes will be required to successfully deploy and manage the “thousand server” environment for both service providers and enterprises.  We have been analyzing this trend over the last several months, and expect to see several significant changes, including the following:

IT organizations will become increasingly structured. 

As the ratio of boxes to IT staff increases, more disciplined processes will emerge for provisioning, change management, monitoring, security and performance management. Historically, IT organizations have been run in a relatively ad-hoc manner; the process discipline evident in a manufacturing operation has never been replicated in the IT organization. To some degree, we have already witnessed the first phase of this development: significant portions of corporate IT infrastructures are already outsourced to managed service providers such as Digex or USIx. With multiple organizations sharing responsibility for the uptime of mission-critical applications, strict operating processes must be enforced.

In the long run, we believe the operating discipline adopted by the managed-service providers will make its way back to internal enterprise IT departments. Already, business units are being presented with greater choice for their IT needs, such as:

  • Existing internal IT departments
  • Point-product outsourcing services (web, content, application, storage, colocation, security, etc.; e.g., Exodus, Akamai). Enterprise demand for hosting services (web, application, storage, and other) is expected to grow from $5.5 billion in 2000 to $74.5 billion in 2005 (MSDW report 11/20).
  • Integrated outsourcing providers (e.g., IBM, CSC, EDS)
  • IT services providers (e.g., D&T, Accenture, Viant)

As competition increases, we expect both enterprise and service-provider IT organizations to fundamentally rethink the way they deliver service to their business unit or enterprise customers.   For example, sloppy change management is still a large driver of server and application downtime. Corporations are now demanding service level agreements (SLAs) from their IT providers, and many service providers are adopting carrier-like discipline in their internal operations to deliver such SLA guarantees.
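
To make the SLA point concrete, here is a small back-of-the-envelope calculation (ours, for illustration; the figures are not drawn from any report) showing how little downtime an availability guarantee actually leaves a provider: roughly 43 minutes per month at 99.9%, and only about four minutes at 99.99%.

```python
# Downtime budget implied by an availability SLA, assuming a 30-day month.
def downtime_budget_minutes(sla_percent: float, minutes_in_period: float = 30 * 24 * 60) -> float:
    return minutes_in_period * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% SLA -> {downtime_budget_minutes(sla):.1f} minutes of allowed downtime per month")
# 99.0%  -> 432.0 minutes
# 99.9%  -> 43.2 minutes
# 99.99% -> 4.3 minutes
```

Numbers like these are why sloppy change management is so costly: a single botched rollout can consume an entire quarter’s downtime budget.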

New system management vendors will emerge.

In the past, one server ran many applications; today, one application runs on many servers.  While the fundamental principles of systems management are generally independent of system architecture, current products were designed for the client-server generation as opposed to the distributed environment of the future.

IT organizations will still need to manage the core system-management functions: performance/availability, capacity planning, configuration, change management, recovery/backup, security and audit. However, we believe a new breed of vendors will emerge with products specifically targeted at the new datacenter market.

Why are the current systems-management players not equipped to develop these applications? There are two reasons: 1) their business model, and 2) their lack of deep understanding of the newest points of pain. IBM Tivoli, CA Unicenter, BMC, HP OpenView and others are large, multi-hundred-million- or billion-dollar revenue businesses. Their core expertise is capitalizing on large markets. They are measured by earnings performance in those markets, which is generally driven by getting operating leverage out of their products and extensive distribution channels. They are not well equipped to make venture-style investments in nascent markets; said another way, the losses incurred creating new markets damage their earnings scorecard, particularly when the early market potential is measured in only tens of millions of dollars. Additionally, many of the traditional vendors are not purely focused on the current computing needs of the distributed enterprise or service provider. The needs of this new segment are quite specific, and cannot be met by simply repositioning existing offerings.

Our research in this space is ongoing, but early pain points we have identified include:

  • Software Provisioning, Change Management and Distribution
  • Performance Monitoring with Dynamic Resource Allocation (see the sketch after this list)
  • Total Cost-of-Computing Analysis Tools
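
As one example, the second pain point above, performance monitoring with dynamic resource allocation, implies a closed loop between measurement and capacity decisions. The sketch below is entirely hypothetical (Python, stubbed metrics, made-up hostnames) and simply illustrates the shape of that loop: observe utilization across a pool, then grow or shrink the pool in response.

```python
# Hypothetical sketch: couple a utilization check to an allocation decision.
from statistics import mean

SCALE_UP_THRESHOLD = 0.80    # grow the pool above 80% average CPU
SCALE_DOWN_THRESHOLD = 0.30  # shrink it below 30%

def sample_cpu(server: str) -> float:
    """Stub: a real monitor would pull this from an agent or SNMP."""
    return 0.85

def rebalance(pool: list) -> list:
    load = mean(sample_cpu(s) for s in pool)
    if load > SCALE_UP_THRESHOLD:
        new_server = f"app{len(pool) + 1:03d}.example.com"  # drawn from spare capacity
        print(f"load {load:.0%}: adding {new_server}")
        return pool + [new_server]
    if load < SCALE_DOWN_THRESHOLD and len(pool) > 1:
        print(f"load {load:.0%}: retiring {pool[-1]}")
        return pool[:-1]
    return pool

if __name__ == "__main__":
    pool = ["app001.example.com", "app002.example.com"]
    pool = rebalance(pool)  # one monitoring cycle; a real tool would run this on a schedule
```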

The shift from single-server to multi-server architecture is one more step in the eventual migration to distributed network computing. We expect IT organizations will experience near-term growing pains in their journey. The adoption of new system-management tools and more disciplined IT processes should help minimize these bumps along the road. And who knows, maybe one day sys admins will even be grateful that someone unleashed all these servers in the first place.

We believe there are several exciting investment opportunities in the broadly defined systems-management sector. We are continuously looking to partner with entrepreneurs seeking to capitalize on the emerging trends in this market. If you would like to discuss this area or this article in more detail, please feel free to contact Mark or Neeraj.
