Vendor lock-in still an issue in new data center servers

With converged infrastructure, data center servers are stepping back to the mainframe ideals of self-contained computing -- with a few differences.

Vendor lock-in is a pervasive issue, especially for data center servers.

Self-contained servers bundle the central processing unit with its supporting chipsets, memory, storage and network interface cards (NICs), allowing them to talk to the rest of the data center and the outside world. The mainframe is a prime example -- highly proprietary, with specialized connectors to deal with expansion, such as extra storage. With such a custom unit, the only standardization needed is at the NIC.

Servers talk to each other via standardized cables and plugs. The problem with self-contained servers is that the architecture works against availability: if any part of the monolithic server fails, the whole unit must be replaced. Alternatively, data centers spend huge amounts on high availability measures for these systems.

Deconstructing the data center server starts by moving storage into a separate environment. Storage area networks (SANs) create a shared pool of highly available storage, so servers can fail without affecting the data. Virtualization minimizes application downtime by spinning up new virtual servers to interface with that data.
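
The pattern is simple: compute becomes disposable while data persists on the shared pool. As a minimal sketch of the idea -- not tied to any particular hypervisor, and with a hypothetical app.py and SAN mount point -- a supervisor can keep replacing failed compute instances that all attach to the same shared storage:

    import subprocess
    import time

    SHARED_STORAGE = "/mnt/san/appdata"  # data lives on the SAN, not in the server
    # Hypothetical application launched against the shared data
    START_APP = ["python3", "app.py", "--data-dir", SHARED_STORAGE]

    # Supervisor loop: when a compute instance dies, start a fresh one.
    # The replacement attaches to the same shared storage, so no data is lost.
    while True:
        proc = subprocess.Popen(START_APP)
        proc.wait()  # blocks until the instance fails or exits
        print("Instance died; spinning up a replacement against the SAN")
        time.sleep(5)  # brief back-off before restarting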

With this improvement comes a hefty price tag, and a new proprietary conundrum. Few SANs are compatible with other vendors' kit, and heterogeneous storage management software promises more than it delivers.

Blade computing also addresses the proprietary issue of self-contained data center servers. Data centers can buy separate server, storage and network blades and build them into highly flexible server units. The downside? You have to know what you're doing. Configuration errors and poor chassis engineering can produce major hot spots and cooling failures, so blade computing can create more problems than it solves.

Blade computing is also a proprietary leap out of the frying pan and into the fire: each chassis is a design proprietary to its vendor. Even if the chassis servers no longer meet requirements, maintaining the commercial relationship with the existing vendor is often cheaper than adding new servers from elsewhere.

New approaches to an old problem

Data center servers follow two main architecture approaches: scale-out commodity clouds of pooled resources, and engineered converged systems.

Commodity equipment, with as many resources as possible pooled through a cloud platform, removes single points of failure. This works well for service providers and for IT shops with low-level workloads, where little hardware tuning is required to achieve adequate performance.
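
The resilience comes from treating every node as interchangeable. As a rough sketch -- the pool of hostnames and the port are hypothetical -- a client can simply try nodes until one answers, because no single box matters:

    import random
    import socket

    # Hypothetical pool of interchangeable commodity nodes
    POOL = ["node01", "node02", "node03", "node04"]

    def serve_request(payload: bytes) -> bytes:
        """Try nodes in random order; any healthy node can do the work."""
        for host in random.sample(POOL, len(POOL)):
            try:
                with socket.create_connection((host, 8080), timeout=2) as conn:
                    conn.sendall(payload)
                    return conn.recv(4096)
            except OSError:
                continue  # node is down -- just use another one
        raise RuntimeError("entire pool unavailable")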

With converged infrastructure servers, an engineered system of components is pre-configured to provide a high-performance platform. Examples from well-known data center vendors include Cisco UCS, VCE Vblock, IBM PureFlex, Dell Active Systems and HP Converged Systems. With as much standardization at the periphery as possible, IT teams can ignore the system's internals.

As a consumer, you are buying into a hardware platform and an approach in which the vendor optimizes the system for your workloads -- something that is very difficult to do with the pure commodity approach. Highly proprietary connection buses (such as IBM's CAPI), highly specific storage systems and other components therefore go inside to improve performance. The vendor pre-configures everything to work, with fans and wiring in safe, reliable setups, redundant power supplies and NICs where needed, and so on. We are back to the age of the mainframe -- almost.

Modern IT shops need systems that expand easily, which demands standardization at the edges of the converged infrastructure. If extra storage is required, data centers might want to go to a third party, such as EMC or NetApp, or to newer startups such as Pure Storage, Violin or Nimble, for systems that attach easily to the platform. They should be able to implement fabric networks with other vendors' network switches -- Cisco, Juniper or Brocade, for example -- rather than being locked in to the engineered system vendor.
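
One practical test of that edge standardization: whatever sits inside the box, a third-party array should expose standard protocols. A minimal sketch -- the hostname is hypothetical, and the ports are simply the well-known ones for iSCSI, NFS and HTTPS management -- probes which standard interfaces a device actually offers:

    import socket

    # Well-known ports for standard storage and management interfaces
    STANDARD_PORTS = {"iSCSI": 3260, "NFS": 2049, "HTTPS management": 443}

    def probe(host: str) -> None:
        """Report which standard interfaces a device exposes."""
        for name, port in STANDARD_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=2):
                    print(f"{host}: {name} (port {port}) reachable")
            except OSError:
                print(f"{host}: {name} (port {port}) not offered")

    probe("array01.example.com")  # hypothetical third-party array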

Don't get too hung up on the internals of these tailored systems -- it's how a system interfaces with the rest of the world that matters. As long as a proprietary server system can attach to external storage and networking systems, can integrate and interoperate with the rest of your IT platform, and can support the workloads you require, engineered is the way to go.

This was first published in December 2014
