Data Center Design Criteria

There are as many approaches to data center design as there are engineers and architects. Over the last twenty years, and roughly 3 million sq. ft. of raised floor, I have found the following approach very helpful in bringing clear alignment and common understanding between the IT organization (the customer or tenant of the data center), and the facilities organization (the builder and maintainer of the data center).

*criterion* (pl. criteria): a standard of judging; any approved or established rule or test, by which facts, principles, opinions, and conduct are tried in forming a correct judgment respecting them.

As used in facility design, criteria are the standards by which designs are evaluated in terms of the desired performance or functionality. We typically start new projects by following some version of this process:

  • Describe the goals and objectives
  • Establish the design criteria
  • Prepare conceptual designs for evaluation
  • Select the optimum concept that meets the goals and objectives and satisfies the design criteria.

As the design process evolves, the design criteria serve as a measuring device to help keep the design in line with the requirements of the project.

There are three categories of criteria to consider. The Planning Criteria provide high-level direction to the planning and capital budget request process to align project objectives with the schedule and funds available. The Program outlines the functional areas and establishes the overall scope and size of the facility. The Operational Criteria describe how the facility will be operated.

Planning Criteria

For data centers, there is a simple and universal relationship between reliability, power density and raised floor area that has proven very accurate and useful in establishing an early scope and budget for the project. The key to this exercise is to establish the key criteria for each of the three.


Reliability

Reliability planning is a key part of the success of the planning effort. There is as great a danger of overestimating the importance and criticality of the data center as there is of underestimating it. The appropriate way to determine how robust the facility must be is to consider three issues:

  1. Operational requirements describe how continuously the facility must operate without downtime. Normal office buildings can be maintained at night and on weekends. The nature of the IT infrastructure and the business processes it supports will dictate how often and how long the facility (and by extension the IT equipment) can be down for maintenance. I usually create four categories: over 400 hours a year of scheduled downtime; 200-400 hours; 0-200 hours; and 0 hours. Many organizations now have the ultimate requirement, 7 days by 24 hours by forever…no scheduled downtime, ever.
  2. Second, we need to look at the availability requirements for the IT infrastructure. The network will only be as good as the infrastructure behind it, so the higher the commitment on the part of IT to end-users, customers, etc., the higher the facility reliability needs to be.
  3. Finally, I look at the impacts of downtime. Sometimes the impact of a downtime event will be more severe than the previous two criteria would indicate; in other cases the opposite will be true. The key is to combine these three drivers to arrive at a reliability level that will be applied consistently throughout the design process.
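The four scheduled-downtime categories above can be sketched as a simple lookup; the function name and label strings here are illustrative, not a standard:

```python
# A minimal sketch of the scheduled-downtime categories described above.
# The category labels and function name are illustrative assumptions.
def downtime_category(hours_per_year: float) -> str:
    """Map an annual scheduled-downtime allowance onto the four
    planning categories described in the text."""
    if hours_per_year <= 0:
        return "0 hours (7x24xforever, no scheduled downtime)"
    if hours_per_year <= 200:
        return "0-200 hours"
    if hours_per_year <= 400:
        return "200-400 hours"
    return "over 400 hours"

print(downtime_category(0))
print(downtime_category(300))
```

In practice the category is set by the business processes the IT equipment supports, then carried forward as a fixed input to the reliability criteria.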

Power Density

I have written previously about the trend to put processing and storage into smaller and smaller form factors while power consumption stays static. This trend has led, and is still leading, to higher and higher power densities. However, power density is not a driver per se: it is an input to the design criteria, and an output of a thorough IT survey and analysis. Simply taking the existing watts per sq. ft. and applying it to a new data center because that’s what you’re currently experiencing is not the optimum process. The best way is to survey all existing technologies that will be housed in the data center, and re-populate them according to best practices in terms of rack densities, relationship to IDF locations, network cabling and service clearances. The total power demand can then be divided by the area indicated by best practices to arrive at the power density number. Today, the range is 35-75 watts per sq. ft. for most installations, with the highest concentrations in the 120-150 watts per sq. ft. range.
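The survey-driven calculation reduces to total measured IT load divided by best-practice floor area. A minimal sketch, with entirely hypothetical survey data:

```python
# Illustrative sketch of the survey-driven power-density calculation:
# total measured IT load divided by the best-practice floor area,
# rather than carrying forward the existing room's watts per sq. ft.
racks = [
    # (rack label, measured load in watts) -- hypothetical survey data
    ("web-servers", 4_500),
    ("storage", 6_000),
    ("network-core", 2_500),
]
# Area after re-populating racks per best practices (assumed value).
best_practice_area_sqft = 250

total_watts = sum(load for _, load in racks)
density = total_watts / best_practice_area_sqft
print(f"{total_watts} W over {best_practice_area_sqft} sq ft "
      f"= {density:.0f} W/sq ft")
```

With these assumed numbers the result lands at 52 W/sq ft, inside the 35-75 range typical of most installations.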

Raised Floor Area

Based on the best-practices survey of your IT technology, the total raised floor area can be determined that will appropriately house the equipment plus leave some room for growth. The relationship of area to power to reliability can be explained as follows:

  • An increase in area results in the need for more power.
  • An increase in reliability requires redundant components, which take up more space.
  • An increase in power density increases the size and cost of the backup power.
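The three relationships above can be put into a back-of-envelope form; every number below is an illustrative assumption, not a design value:

```python
# Back-of-envelope sketch of the area/power/reliability relationship.
# All inputs are illustrative assumptions, not design values.
area_sqft = 10_000            # raised floor area from the IT survey
density_w_per_sqft = 75       # power density from the survey calculation
redundancy_factor = 2.0       # e.g., 2N backup power for a high-reliability tier

it_load_kw = area_sqft * density_w_per_sqft / 1000
backup_kw = it_load_kw * redundancy_factor
print(f"IT load: {it_load_kw:.0f} kW; backup plant sized at {backup_kw:.0f} kW")
```

Note how each driver compounds the others: more area raises the load, higher density raises it further, and a higher reliability tier multiplies the backup plant that must be purchased and housed.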


The Program

The program is an accounting and listing of every space and functional area known to be needed.