Data-center consolidation: Think like the best

Aug. 8, 2014 - 09:45AM
By DAVE GWYN

Data-center consolidation continues to be a hot topic, and with good reason. Recent estimates indicate that the federal government is far from reaching its goal of closing or consolidating 40 percent of its data centers by 2015. What’s more, even as Congress moves to write the Federal Data Center Consolidation Initiative (FDCCI) into law, some in government are hinting that the initiative may, in fact, be losing momentum. One possible explanation is that agencies are attempting to modernize and consolidate using the same approaches that caused the data-center sprawl in the first place.

Modernizing the right way

Some of the largest, most successful companies in the world solved these challenges long before the government pondered them. Companies like Google, Facebook, Twitter and Yahoo consider every aspect of data-center efficiency, including power, cooling, space, infrastructure, maintenance, IT manpower and training costs. They all decided to move away from the expense and complexity of traditional, proprietary “three-tier” architectures, replacing specialized servers, switches and storage area networks (SANs) with simple, inexpensive, commodity hardware.

Why? Because hardware no longer needs to be purpose-built. As the iPhone has proven, software can define what a simple piece of hardware does. Just as software can vary an iPhone’s function minute to minute, it can define what an inexpensive, commodity server does. Google’s data center, for example, is filled with racks of generic, rack-mounted servers, each providing different functionality based on what software tells it to do. Yet the servers are simple, inexpensive and easy to maintain, scale and cluster, with no need for hardware- or vendor-specific specialists.
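To make the “software-defined” idea concrete, here is a minimal sketch in Python. All names are hypothetical and drawn from no vendor’s product; it simply illustrates how identical commodity nodes take on different functions purely through the software assigned to them.

    # Hypothetical sketch: identical commodity servers, roles set by software.
    ROLES = {"web", "cache", "storage", "compute"}

    class CommodityNode:
        """One generic rack-mounted server; the hardware is identical fleet-wide."""
        def __init__(self, hostname: str):
            self.hostname = hostname
            self.role = None  # no purpose-built function; software decides

        def assign(self, role: str) -> None:
            if role not in ROLES:
                raise ValueError(f"unknown role: {role}")
            self.role = role  # in practice: deploy and start the matching service

    # The same box serves different functions minute to minute,
    # just as software redefines what an iPhone does.
    node = CommodityNode("rack12-node07")
    node.assign("cache")    # acts as a cache tier today
    node.assign("compute")  # repurposed as a compute node tomorrow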

So what is needed to shift thinking in a new direction and realize the efficiencies and cost savings promised by FDCCI? Openness to the notion that the best examples of data-center efficiency all take this new “Web-scale” approach, which makes computing more efficient and less expensive and allows agencies to remain focused on their missions. Web-scale infrastructure optimizes an organization’s virtual environment, eliminating the need for SANs and the complex switch fabric required to connect them to virtualization hosts. It dramatically decreases an agency’s infrastructure footprint and its power and cooling requirements, resulting in lower costs than traditional approaches.

Addressing the myths

New technologies and approaches bring with them the fear of giving up control and the uncertainty of change. The human dynamic is a powerful factor, and resistance to change may be a bigger obstacle than the technology itself.

In addition to addressing any potential internal issues, agencies need to dispel several other myths:

■ The traditional, three-tiered SAN-based architecture is the best platform for organizations with high-performance demands and increasing amounts of data. This was once the case because key features of enterprise virtualization worked only when SANs were present, but with the emergence of software-defined storage in 2011, that is no longer true.

■ The primary focus of FDCCI should be on decreasing the number of data centers. Savings will not occur simply by closing down individual centers or moving servers from one building to another. The best results come from improving efficiency by scaling down across the board: reducing the maintenance costs of physical facilities and the labor force, energy and power consumption, cooling costs, capital expenses and operational costs.

■ Advanced technologies such as those that power leading Web-scale and cloud infrastructures are expensive. The same advanced software-defined technology that the big guys use is now within reach of government agencies of any size looking to modernize their system architecture, and it typically comes in at 40 to 50 percent of the capital expenditure and 35 percent of the operating expense of traditional hardware (a worked example follows this list).

■ New infrastructure procurement is a hassle and causes major project delays. A valuable benefit of a converged system is that it requires less procurement effort due to its drastically simplified bill of materials. And the solution is delivered as one package, compared with multiple shipments of storage and servers that arrive from different vendors at different times, delaying progress. Procurement does, however, warrant some scrutiny: previous-generation architectures represent significant revenue to some federal contract-vehicle holders, and any perceived disconnect between profit and modernizing data centers may slow adoption.
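To see what those cost ratios mean in practice, here is a small worked example. The dollar figures are purely hypothetical; only the 40-to-50-percent capital-expenditure and 35-percent operating-expense ratios come from the point above.

    # Illustrative arithmetic only: the budget figures are hypothetical.
    traditional_capex = 10_000_000  # hypothetical three-tier hardware purchase
    traditional_opex = 2_000_000    # hypothetical annual operating cost

    webscale_capex_low = 0.40 * traditional_capex   # $4,000,000
    webscale_capex_high = 0.50 * traditional_capex  # $5,000,000
    webscale_opex = 0.35 * traditional_opex         # $700,000 per year

    print(f"Web-scale capex: ${webscale_capex_low:,.0f} to ${webscale_capex_high:,.0f}")
    print(f"Web-scale opex:  ${webscale_opex:,.0f} per year")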

Software-defined, Web-scale infrastructure is transforming the data center as we know it, and hundreds of forward-thinking government agencies have embraced it to support virtualized environments for mobility, private cloud, big data, and continuity-of-operations and disaster-recovery initiatives. Agencies looking to eliminate the many challenges associated with data-center consolidation should consider Web-scale IT as a solution, with the understanding that leveraging the same old architecture cannot deliver the drastic evolution FDCCI intends.

Dave Gwyn is the vice president of Nutanix Federal.
