
Data Center Planning

A lot goes into figuring out where to put the data center environment, but once you have the room, how do you go about planning what goes inside?

The planning process for the data center layout is as important as the planning for the location and infrastructure supporting the data center. You have successfully sold management on the need for a new equipment room, committed to the functionality represented by that new space, and now that you have it, you need to make sure you optimize its use and make it last as long as possible.

The planning process is as follows:

  • Preparation
  • Signage, amenities and equipment
  • Circulation and unusable areas
  • Mechanical and electrical support equipment
  • Communication points of presence
  • Cable distribution and infrastructure
  • Network infrastructure
  • Tape
  • Large frame Unix
  • DASD
  • Servers
  • Hotspot management
  • Expansion plan

While this list is not comprehensive for large, complex data centers, it does provide a structure for the planning process that will help you get the most out of your facility.

Let’s take a closer look.

Preparation

To do a really good job, you need to be fully prepared. Remember your Boy Scout days? Preparation involves having proper documentation of what is going into the data center, including equipment inventories, cable schedules, branch circuit requirements, and power and cooling data. As with other things in life, a small amount of time spent preparing now will pay big dividends later. If you don’t have the staff or time to prepare yourself, hire an outside consultant to help. It will pay off later.

Categorize the equipment list in a spreadsheet by type of equipment, e.g., blade server, DASD cabinet, tape silo, large-frame Unix and so on. That way, you can later sort the equipment list by category and get an overall feel for how much space is required by each platform. If you need a template to start with, send us an e-mail and we’ll get you one you can modify for your own use.
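
Once the spreadsheet exists, it is easy to roll the inventory up by category. Here is a minimal sketch, assuming a CSV export with hypothetical column names (category, footprint_sqft, power_kw):

    # Roll up an equipment inventory by category (hypothetical CSV columns).
    import csv
    from collections import defaultdict

    totals = defaultdict(lambda: {"count": 0, "footprint_sqft": 0.0, "power_kw": 0.0})

    with open("equipment_inventory.csv", newline="") as f:
        for row in csv.DictReader(f):
            cat = row["category"]  # e.g., blade server, DASD cabinet, tape silo
            totals[cat]["count"] += 1
            totals[cat]["footprint_sqft"] += float(row["footprint_sqft"])
            totals[cat]["power_kw"] += float(row["power_kw"])

    for cat, t in sorted(totals.items()):
        print(f"{cat}: {t['count']} units, {t['footprint_sqft']:.0f} sq. ft., {t['power_kw']:.1f} kW")

The per-category totals give you the overall space and power picture before any layout work begins.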

Another good idea is to contact your service providers and ask them for their provisioning requirements for the data center at your level of service.

Signage, amenities and equipment

I’m surprised how often this item is ignored or forgotten. Having a good way to map out what’s going on by way of a grid coordinate system, based either on ceiling tile or raised floor tile, is vital not only to good planning but to good management. Locating fire extinguishers and tile lifters, and making sure all electrical and mechanical equipment is adequately labeled, is also a must. The small amount of money spent on these items will more than pay for itself. Deciding on which racks to use is also a good idea at this stage. Some newer models are deeper and wider to accommodate cable pathways and cooling fans, and knowing this now will be helpful.

Circulation and unusable areas

Now that you are prepared, look at the floor plan and identify the structural columns, entry doors, elevators, ramps, tape vaults or any other permanent part of the building that takes away from your ability to locate equipment there. Usually, a logical circulation path will present itself that flows from entry doors and ramp areas, past columns and along the perimeter of the room. When we talk about circulation, we are not necessarily talking about the aisles between racks. We are talking about the main aisles on which you will be rolling hand trucks, pallet jacks and computer equipment to get to the rack aisles. The circulation path should be identified because you will want to install heavier-capacity floor tile there to accommodate the heavy rolling loads. (The newest EMC storage cabinet weighs more than a Mercedes E-Class.)

Mechanical and electrical support equipment

If you have elected not to use a mechanical gallery (a separate corridor-like space where cooling units and PDUs are located), then you will need to look at where all of this equipment will go. In today’s high-density environment, the infrastructure equipment takes up more and more of the raised floor area. For example, in a 150 w/sf environment, at least one and probably two PDUs will be required for every 1,000 sq. ft. of active raised floor. A 150 kW PDU takes up about 50 sq. ft., including service clearances. Cooling units take up about the same space as a PDU (50 sq. ft.), but you will need three of them to handle the same 1,000 sq. ft. area, with one unit as a redundant spare. Taken together, the two PDUs and the three cooling units will take up 250 sq. ft., or 25% of the area they support.
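
The same arithmetic as a short sketch, using the unit counts and footprints above as planning assumptions rather than vendor data:

    # Support-equipment footprint for 1,000 sq. ft. of active raised floor at 150 w/sf.
    active_area_sqft = 1000
    pdu_count = 2                  # two 150 kW PDUs
    cooling_unit_count = 3         # two active units plus one redundant
    footprint_per_unit_sqft = 50   # each PDU or cooling unit, including service clearance

    support_area = (pdu_count + cooling_unit_count) * footprint_per_unit_sqft
    print(support_area)                      # 250 sq. ft.
    print(support_area / active_area_sqft)   # 0.25, i.e., 25% of the area served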

Grossing Factor

When we prepare space programs for data center customers, we use a grossing factor of 1.35 to 1.50 to accommodate circulation and mechanical and electrical support equipment.
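
A quick sketch of how the grossing factor turns a net equipment area into a gross room size (the 2,000 sq. ft. net figure is purely illustrative):

    # Apply a grossing factor to net equipment area to estimate gross room size.
    net_equipment_area_sqft = 2000   # illustrative net area from the inventory roll-up
    for grossing_factor in (1.35, 1.50):
        gross_area = net_equipment_area_sqft * grossing_factor
        print(f"factor {grossing_factor}: {gross_area:.0f} sq. ft. gross")
    # prints roughly 2700 and 3000 sq. ft.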

Communication points of presence

Somewhere, you will need to provision a space for service providers to install their termination equipment. Whether it is a separate room or simply open racks in the middle of the data center, it must be accounted for, and its location will dictate the location and planning of other equipment later in the process. Here is where your preparation pays off. Knowing what the requirements are will allow you to establish an optimum location that keeps cable installation costs down, reduces attenuation and enhances overall network performance.

Cable distribution and infrastructure

Whether you plan to devolve switches out to the edge, or keep them at the core, your cable distribution plan needs to support your topology. The topology will dictate the quantity and location of termination equipment and affect the overall planning process. Nailing this down now is essential to a smooth plan and installation.

Network infrastructure

Network infrastructure is critical to the reliability and performance of the data center. As alluded to above, deciding how to implement the switching and routing functions will affect the quantity of space in each rack devoted to connectivity, how much remains available for equipment, and how many racks of network gear will be required to handle the entire data center.
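
A back-of-the-envelope sketch of how that decision eats into rack space; the rack-unit figures here are assumptions for illustration only:

    # How much of a 42U rack remains for servers after connectivity is accounted for.
    rack_units_total = 42
    patch_panel_ru = 4    # assumed space for patch panels per rack
    edge_switch_ru = 2    # assumed space for a top-of-rack switch, if switching is at the edge

    for edge_switching in (True, False):
        overhead = patch_panel_ru + (edge_switch_ru if edge_switching else 0)
        print(f"edge switching={edge_switching}: {rack_units_total - overhead} RU left for equipment")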

Tape

Tape is the heaviest and biggest equipment to deal with. If you are using StorageTek libraries, they can take up a tremendous amount of room. They don’t use much power comparatively, but they weigh almost 3 tons. You’ll need heavier floor tile capacities and provisions for fire suppression tanks nearby. In most cases, if tape storage is kept in the data center, it should be protected with a gaseous fire suppression system such as FM-200, Inergen or another equivalent halon replacement.

Large frame Unix

These systems include such products as Sun Enterprise Servers, HP Superdome, etc. These boxes use a lot of electricity and generally need their own IDF for connectivity. Also, since the power use is quite high, additional clearance space should be planned to avoid a concentrated hot spot. The extra clearance gives the cooling units a little extra help in removing and conditioning the air in the immediate vicinity.

DASD

The new big-frame cabinets (like the EMC Symmetrix) can be heavier than a Mercedes E-Class! These boxes are typically served by Fibre Channel connectivity and require minimal IDF space, but they require huge amounts of power. The DMX 3000 requires 17 kVA of power and 4 tons of cooling. That’s 944 w/sf over its footprint area. To adequately serve this monster in a typical data center, you will need a minimum of 7.5 feet of clearance around the entire cabinet.
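
Working backward from those figures, 17 kVA at 944 w/sf implies a footprint of roughly 18 sq. ft. A sketch of the arithmetic, with the power factor treated as a simplifying assumption:

    # Power density of a DMX 3000-class cabinet, using the figures above.
    power_kva = 17.0
    power_w = power_kva * 1000            # treating kVA ~ kW for planning (assumed pf ~ 1.0)
    density_w_per_sqft = 944              # figure quoted above
    footprint_sqft = power_w / density_w_per_sqft
    print(f"implied footprint: {footprint_sqft:.1f} sq. ft.")   # about 18 sq. ft.

    # 4 tons of cooling expressed in kW (1 ton of refrigeration = 3.517 kW).
    cooling_kw = 4 * 3.517
    print(f"cooling load: {cooling_kw:.1f} kW")                 # about 14 kW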

Servers

Servers come in a variety of flavors, form factors, operating systems, etc. The trend is toward blade servers, which fit ten vertically oriented circuit-board servers in about 5 rack units. You should not fill the rack all the way up with these devices, because doing so will cause hot spots and premature failures due to overheating. A happy medium seems to be around 20-25 rack units maximum (up to 50 servers per rack). This provides adequate room for patch panels and prevents excessive heat buildup. Racks filled this way can be set side by side pretty much ad infinitum, with the odd IDF rack or aisle to interrupt the pattern. The rows of racks, however, should be spaced with at least two clear tiles between them (4 feet) and oriented so that all the cabinet fronts face each other. This results in a hot aisle/cold aisle arrangement in which every other row has perforated tile delivering conditioned air, while the air in the hot aisles floats upward to be returned to the cooling units.
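
The rack-fill arithmetic from the paragraph above, as a small sketch based on the ten-blades-per-5-RU density:

    # Blade server rack fill: how many servers fit in the recommended 20-25 RU.
    servers_per_5ru_chassis = 10
    servers_per_ru = servers_per_5ru_chassis / 5   # 2 servers per rack unit

    for fill_ru in (20, 25):
        servers = int(fill_ru * servers_per_ru)
        print(f"{fill_ru} RU filled -> {servers} servers per rack")
    # 20 RU -> 40 servers; 25 RU -> 50 servers (the "up to 50 per rack" figure)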

Hotspot management

To account for the fact that a data center is not a homogeneous collection of equal heat-producing pieces of equipment, the data center should be planned to handle about 20% more power and cooling than is required. This is because in the typical data center, about ten percent of the space will be occupied by equipment that uses about three times the average power. For example, if I have determined that my total load will yield a power density of 100 w/sf, then I can assume that about 10% of my area will require 300 w/sf. 90% x 100 w/sf = 90 w/sf, and 10% x 300 w/sf = 30 w/sf. So, 120 w/sf (90 + 30) would be a good planning number for power and cooling capacity. In addition to the capacity of the central power and cooling plant, the distribution of air and power feeders should all be designed for this extra factor, so that there is flexibility in where the hot spots occur and in recognition that they will change over time.
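
The same weighted-average calculation as a small sketch, so you can plug in your own average density and hotspot assumptions:

    # Planning density with a hotspot allowance: 90% of the floor at the average density,
    # 10% at three times the average.
    average_density_w_sf = 100
    hotspot_fraction = 0.10
    hotspot_multiplier = 3

    planning_density = ((1 - hotspot_fraction) * average_density_w_sf
                        + hotspot_fraction * hotspot_multiplier * average_density_w_sf)
    print(planning_density)   # 120 w/sf, i.e., about 20% above the nominal average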

Expansion plan

This is a big and commonly overlooked item. Giddiness over getting a new data center, or a bigger space for the data center, often prevents adequate planning for where expansion and growth will take place with as little impact as possible on either the data center or the facility. Identify an adjacent area that can be remodeled without affecting services to the data center, e.g., without requiring replacement or removal of major electrical feeders, chilled water piping or main sprinkler lines.

All of this can get quite complicated and overwhelming, which is why we are in business. We have planned and implemented millions of square feet of data center space, raised floor and otherwise. No matter how big or small your project, we can help ensure a certainty of outcome through reliable solutions with reliable technology from trusted resources.