
High Density Design Approach

Is high density really here? What is high density? How much density can my data center handle? These questions have always been part of the IT/facilities dialogue. But now they resonate with more urgency due to several recent technical developments.

Today’s leading-edge IT deployments rely more and more on blade server technology. According to International Data Corporation, blade servers will become the dominant technology trend over the next few years, making up one quarter of the total server market by 2007. The trend toward blades will have a significant impact on IT infrastructure as organizations migrate away from the larger form-factor servers that have been the workhorses of the last five years.

Blade servers consist of a series of vertically oriented circuit cards, each containing multiple processors, memory, communications and networking chipsets, and spinning storage, usually packaged in a single chassis that also includes redundant hot-swappable power supplies and a common communications backplane connected to a managed switch. The computing horsepower available in a single rack filled with blade servers used to require 168 sq. ft. of raised-floor space, versus 24 sq. ft. for the single rack (including clearance and circulation). This seven-fold reduction in space, however, does not come with a corresponding reduction in power consumption or heat rejection. Most organizations will therefore see significant reductions in populated area on the raised floor while pushing their UPS and critical power systems to the limit.
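As a hypothetical illustration in Python, assume a fully populated blade rack drawing 24 kW (a figure consistent with the per-cabinet capacity cited in the cooling options below): the footprint shrinks seven-fold, so the local power density grows seven-fold.

    # Hypothetical illustration: the compute that once needed 168 sq. ft.
    # of raised floor now fits in a single 24 sq. ft. rack footprint, but
    # the power (assumed here to be 24 kW) does not shrink with it.
    old_footprint_sf = 168
    new_footprint_sf = 24
    rack_kw = 24  # assumed blade-rack load

    old_density_w_sf = rack_kw * 1000 / old_footprint_sf  # ~143 w/sf
    new_density_w_sf = rack_kw * 1000 / new_footprint_sf  # 1,000 w/sf

    print(f"{old_footprint_sf / new_footprint_sf:.0f}x less space, "
          f"{new_density_w_sf / old_density_w_sf:.0f}x the local power density")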

Cooling is also problematic: even if you spread these racks out, the difficulty is in getting enough directed airflow through the densely packed racks and returning the heated air to the heat rejection device without overheating adjacent equipment. A typical data center has a slow-moving, high-volume air stream flowing under the raised floor and wafting up through perforated floor tiles in the cold aisles. Depending on how return air is controlled or directed back to the air conditioning unit, very little of the cooled air may actually do its job of cooling the blades; instead it bypasses the racks or mixes with the heated exhaust on its way back to the return opening. In this scenario, the heat from the rack rises into the ceiling area, is cooled there and recycles through the raised floor. The blades, which are designed for a 34 deg. F rise (from a nominal 70 deg. F to 104 deg. F), experience higher and higher temperatures at the rack level, with entering air sometimes at 75 or 80 deg. F and discharge air well over the equipment rating. Equipment failures due to high temperature increase exponentially as temperatures exceed the equipment ratings.
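A rough Python sketch of the underlying airflow arithmetic, using the standard sensible-heat relationship for air at roughly standard density; the 24 kW rack load is an assumed figure:

    # Approximate airflow needed to carry away server heat at a given air
    # temperature rise: CFM = watts * 3.412 / (1.08 * delta_T_F), where the
    # 1.08 factor assumes air at roughly standard density.
    def required_cfm(load_watts, delta_t_f):
        return load_watts * 3.412 / (1.08 * delta_t_f)

    rack_watts = 24_000  # assumed blade-rack load
    print(f"{required_cfm(rack_watts, 34):,.0f} CFM at the design 34 deg. F rise")
    # If mixing warms the entering air so only a 24 deg. F rise remains before
    # the 104 deg. F limit, the same rack needs roughly 40% more air.
    print(f"{required_cfm(rack_watts, 24):,.0f} CFM at a 24 deg. F rise")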

For large blade installations there is another problem. When critical power systems draw more than 3,320 kVA, you exceed the maximum 4,000 A UL rating classification for low-voltage switchboard-style buses and circuit breakers (at 480 V, three-phase, 4,000 A corresponds to roughly 3,320 kVA). This matters because high-availability systems require a dual redundant-path architecture that tolerates component failures even when primary systems are down for maintenance. The right approach distributes the redundancy requirement across high-availability clusters that “share” or “distribute” the redundancy so that, depending on the total load, any two out of three, three out of four, or four out of five clusters can support the entire enterprise without exceeding the 4,000 A limit. This approach also increases reliability by keeping each cluster partially loaded, preventing the thermal-stress failures that can occur when a normally unloaded system is suddenly required to pick up a significant critical load. The shared-redundant concept provides reliability similar to the popular system-plus-system approach advocated for Tier 4 data centers by the Uptime Institute; system-plus-system is more reliable than shared-redundant below 3,320 kVA, but less reliable above that level.
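A minimal Python sketch of the shared-redundant sizing check, assuming 480 V three-phase distribution and a hypothetical 9,000 kVA critical load; the test is simply that any N-1 clusters can carry the full load while each cluster's bus stays under the 4,000 A limit:

    import math

    # Approximate kVA limit of a 4,000 A low-voltage bus at 480 V, three-phase.
    BUS_LIMIT_KVA = math.sqrt(3) * 480 * 4000 / 1000  # ~3,325 kVA

    def cluster_kva(total_load_kva, n_clusters):
        # Each cluster must carry its share with any one cluster out of service.
        return total_load_kva / (n_clusters - 1)

    def smallest_cluster_count(total_load_kva, max_clusters=6):
        for n in range(3, max_clusters + 1):
            if cluster_kva(total_load_kva, n) <= BUS_LIMIT_KVA:
                return n
        return None  # load too large even for max_clusters

    load_kva = 9_000  # hypothetical enterprise critical load
    n = smallest_cluster_count(load_kva)
    print(f"{n} clusters of about {cluster_kva(load_kva, n):,.0f} kVA each")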

History & Data

The blade phenomenon is only the latest chapter in the short history of information technology. Earlier epochs were punctuated by such advances as transistors, integrated circuits, Moore’s law and the ever-shrinking CPU chip, and the migration away from centralized industrial-strength data processing machinery (otherwise known as the mainframe) toward distributed computing platforms. Each era left its imprint on the raised floor, and the following table illustrates what was considered low, medium, high and ultra-high density.

Power density in watts per sq. ft.

Year   Low      Medium    High      Ultra-High
1970   10-15    20-30     40-50     60-75
1980   20-30    40-60     75-100    100-120
1990   30-45    50-80     90-120    130-150
2000   45-60    75-100    120-175   200-300
2010   60-80    100-150   200-300   300-500

High Density Characteristics

Low: A mix of mainframe and large-scale UNIX or midrange systems (such as the AS/400), with a high proportion of floor space occupied by both active and backup storage.

Medium: A mix of mainframe and Wintel servers with a high proportion of SAN cabinets. Tape is probably in another room or off-site. Most corporate enterprise data centers fall into this category.

High: A Wintel server environment with about 20% blade deployment and a high proportion of SAN cabinets. PDUs and cooling units are usually located outside the raised-floor environment.

Ultra-high: A maximum blade deployment with little or no storage, principally utilized in research and supercomputing architectures for simulations.

Design Approach

The right approach addresses the cooling, power and space-planning challenges together so that the resulting integrated solution aligns with project objectives, timeline and preliminary budget assumptions. The cooling team should select the best approach from three basic cooling models:

  1. Directed Air: This concept controls airflow by installing ductwork between the air handling unit supply outlet and the cold aisles, and between the air handling return inlet and the hot aisles. The benefit is tighter control of the airflow, reducing recirculation and bypass of cooled air into the return air stream. The capacity of this approach is dictated by the degree of control achieved over the air stream, and it comes at the cost of a more permanent, less flexible rack layout and the ceiling height needed for an extensive network of ducts.
  2. Water-Cooled Cabinets: This concept utilizes cabinets similar to Sanmina’s Ecobay product to support up to 24 kW per cabinet. The downside is that the cabinets are expensive and chilled water must be piped to each one.
  3. Ceiling-Mounted Refrigeration Coolers: These units utilize refrigerant rather than chilled water and can be installed to support up to 500 w/sf through a remote refrigerant pumping station that interfaces with the building chilled water loop. The ceiling-mounted units receive return air from the hot aisles and blow cold air down onto the cold aisles. Since the cooling medium is a refrigerant, any leak releases gas rather than liquid.

An additional problem with high-density facilities is the very short time that can be tolerated without adequate airflow or cooling. Depending on the outcome of the criticality analysis and reliability modeling, there will be a need to put fans and pumps on UPS power, as well as a thermal storage system in the form of a chilled water tank or ice storage system, so that cooling can continue for as long as the UPS carries the critical load, out to battery end-of-life. The goal is to prevent the IT equipment from failing on over-temperature conditions before the UPS itself shuts down. In some cases the piping can be sized to provide adequate ride-through of the cooling medium.
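A rough Python sketch of sizing a chilled-water ride-through tank; the load, ride-through time and usable tank temperature rise are illustrative assumptions, and ice storage or oversized piping would be evaluated with the same energy balance:

    # Chilled-water volume needed to absorb the cooling load during an outage:
    # energy (BTU) = load_kW * 3412 BTU/kWh * hours, and each gallon of water
    # absorbs 8.34 BTU per deg. F of usable temperature rise.
    def ride_through_gallons(load_kw, minutes, usable_delta_t_f):
        btu = load_kw * 3412 * (minutes / 60.0)
        return btu / (8.34 * usable_delta_t_f)

    # Assumed: 1,200 kW of critical load, 15 minutes to ride through,
    # 12 deg. F of usable tank temperature rise.
    print(f"{ride_through_gallons(1200, 15, 12):,.0f} gallons")  # ~10,200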

How Much Density Can I Handle?

Simply calculating the average power density from your existing power and cooling systems will not tell you what your tolerance for high-density platforms will be. Most medium-density data centers can tolerate up to 250 watts per sq. ft. in some limited deployment. How much can be deployed, and where the limits fall, depends on a number of factors, including ceiling height, average power density, available airflow (in CFM per sq. ft.) and spare UPS/PDU capacity. One example illustrates the concept.

One recently completed data center with 40,000 sq. ft. of raised floor required that 20% of the area support 150 w/sf while the overall average was 60 w/sf. The average dictated an overall UPS capacity of 2,400 kW, while the 150 w/sf requirement meant that half of that capacity (8,000 sf x 150 w/sf), or 1,200 kW, be reserved for high density. The remaining 32,000 sf is left with the other 1,200 kW, or about 37.5 w/sf. For every 10 w/sf that the high-density application does not use, 2.5 w/sf is made available to the remaining area.
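The arithmetic behind this example, restated as a short Python sketch using only the figures given above:

    # Capacity allocation from the example: 40,000 sf at an average 60 w/sf,
    # with 20% of the floor designed for 150 w/sf.
    total_sf = 40_000
    average_w_sf = 60
    hd_sf = 0.20 * total_sf       # 8,000 sq. ft. of high density
    hd_w_sf = 150

    total_kw = total_sf * average_w_sf / 1000                  # 2,400 kW of UPS
    hd_kw = hd_sf * hd_w_sf / 1000                             # 1,200 kW for high density
    remaining_sf = total_sf - hd_sf                            # 32,000 sq. ft.
    remaining_w_sf = (total_kw - hd_kw) * 1000 / remaining_sf  # 37.5 w/sf

    # Every 10 w/sf the high-density zone does not use frees 8,000 sf * 10 w/sf
    # = 80 kW, or 2.5 w/sf across the remaining 32,000 sf.
    freed_w_sf = hd_sf * 10 / remaining_sf                     # 2.5 w/sf

    print(total_kw, hd_kw, remaining_w_sf, freed_w_sf)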

To find out more about high density applications, or to determine how much capacity your data center can handle, contact Reliable Resources.