Advances in data center operations bring new capabilities along with new management challenges. Recent changes to data center standards and design practice aim to maximize both operational and energy efficiency.

The modern data center is far more sophisticated than its predecessors. With proper maintenance and management, today's data centers can support new generations of hardware for years to come and significantly reduce energy consumption. Most of the changes revolve around airflow and higher operating temperatures within the data center. It is important to understand how to manage these new capabilities to take full advantage of the opportunities they offer.

Containment

Most modern data centers employ air containment to maximize the isolation of hot and cold aisles. This means solid panels or plastic strips and end-of-row doors are installed to block off either the hot or cold aisle.

Containment should fit tightly between the tops of cabinets and the ceiling. Doors should be self-closing and seal well to regulate air temperature and circulation. Cabinets should have bottom seals so air doesn’t leak under the wheels or leveling feet. If admins install new cabinets, they must redo the containment to maintain tight seals.

Containment requires that IT fill all unused cabinet openings with blanking panels. For small gaps, 1U and 2U blanking panels are available, but buying them in quantity can get expensive. For larger openings, blanking sheets fill the space more economically than stacks of individual panels. Sheets are 27U high but scored in 1U increments, so they can be cut to the appropriate size.
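As a quick illustration, the panels-versus-sheets decision is simple arithmetic. The following Python sketch plans how to fill a few hypothetical cabinet openings, assuming only the 1U and 2U panel sizes and the 27U scored sheets described above.

# Sketch: estimate how to fill unused cabinet openings using 1U and 2U panels
# for small gaps and 27U blanking sheets (cut to size) for larger ones.
# The gap sizes in the example are hypothetical.

SHEET_HEIGHT_U = 27  # a blanking sheet covers up to 27U and is scored every 1U

def plan_blanking(gaps_u):
    """Return a simple fill plan for a list of vertical gaps, in rack units."""
    plan = []
    for gap in gaps_u:
        if gap <= 2:
            plan.append((gap, f"one {gap}U blanking panel"))
        else:
            sheets, remainder = divmod(gap, SHEET_HEIGHT_U)
            pieces = sheets + (1 if remainder else 0)
            plan.append((gap, f"{pieces} blanking sheet(s), cut to size"))
    return plan

# Example: three vacant openings of 2U, 14U and 30U across a row of cabinets
for gap, fill in plan_blanking([2, 14, 30]):
    print(f"{gap}U opening -> {fill}")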

Blanking sheets are especially useful in modern data centers where vacant cabinets are being held for future growth. Blanking panels or sheets keep cold and hot air flowing properly through server cabinets; without them, hot air recirculates through the data center and forces the air conditioning units to run more often.

[Graphic: benefits of using blanking panels in server cabinets]

Temperature monitoring

Modern data centers run at higher operating temperatures, with power and cooling systems that can match their capacity to actual IT loads. Far more operational data is also available, both from the power and cooling systems and from the IT equipment itself.

Operating a state-of-the-art data center within the ASHRAE thermal envelope requires good temperature monitoring, since it is impossible to achieve a completely uniform air temperature across a room. Air conditioners have the usual discharge and return air temperature sensors, but those are not sufficient for thermal management.

Most modern cabinet power strips are equipped with accessory temperature and humidity probes, which are monitored through the same network connections as the power. Place these sensors in front of and behind the IT hardware, and at two or three heights up each cabinet, so that together they give a clear picture of thermal conditions across the entire room. The readings appear in the same monitoring interface as the power data.
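As a simple illustration of what that monitoring looks like in practice, the sketch below checks a handful of inlet readings against the ASHRAE recommended envelope of 18 to 27 degrees Celsius. The sensor names and values are hypothetical; a real deployment would poll them over the network from the iPDU-attached probes.

# Sketch: flag cabinet inlet sensors outside the ASHRAE recommended envelope
# (18 to 27 degrees C). Sensor names and readings are hypothetical; a real
# system would poll them from the iPDU-attached probes over the network.

ASHRAE_MIN_C = 18.0
ASHRAE_MAX_C = 27.0

inlet_readings_c = {
    "row1-cab04-front-top": 26.8,
    "row1-cab04-front-mid": 24.1,
    "row1-cab12-front-top": 28.3,     # farthest cabinet from the air conditioner
    "row1-cab12-front-bottom": 22.5,
}

for sensor, temp_c in sorted(inlet_readings_c.items()):
    if temp_c > ASHRAE_MAX_C:
        print(f"{sensor}: {temp_c:.1f} C is above the recommended inlet maximum")
    elif temp_c < ASHRAE_MIN_C:
        print(f"{sensor}: {temp_c:.1f} C is below the recommended inlet minimum")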

Thermal strips mounted directly on cabinets show only the temperature of the metal, not the surrounding air. Mounting panels that let air flow around the strips, however, allow them to track the environmental air temperature. Infrared spot meters likewise read only equipment surface temperatures; they cannot measure the air temperature.

Air balance

Data center air conditioners are usually set at 75 degrees Fahrenheit (24 degrees Celsius) so that the farthest cabinets don't exceed the 80.6 degrees Fahrenheit (27 degrees Celsius) maximum recommended server inlet temperature. This means hot aisles can reach up to 95 degrees Fahrenheit (35 degrees Celsius), which is no longer considered dangerous to equipment.

Close-coupled cooling, such as in-row units, is essentially self-balancing if its temperature sensors are properly placed. Under-floor air requires adding or removing airflow panels, or adjusting panel air dampers, as loads change from cabinet to cabinet. Overhead cooling may require adjusting the dampers on air diffusers. Making these adjustments without adequate temperature data wastes energy and causes unnecessary temperature fluctuations.
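The rebalancing decision itself boils down to comparing each cabinet's inlet temperature against a target band. The sketch below uses hypothetical cabinet readings, a 24 degrees Celsius target and a plus-or-minus 2 degree tolerance purely for illustration.

# Sketch: decide which cabinets may need an airflow panel or damper adjustment.
# Cabinet names, readings, target and tolerance are all hypothetical.

TARGET_INLET_C = 24.0   # roughly the 75 F supply setpoint discussed above
TOLERANCE_C = 2.0

cabinet_inlets_c = {"cab01": 22.0, "cab07": 26.7, "cab12": 20.9}

for cabinet, temp_c in sorted(cabinet_inlets_c.items()):
    if temp_c > TARGET_INLET_C + TOLERANCE_C:
        print(f"{cabinet}: running warm -- add an airflow panel or open a damper")
    elif temp_c < TARGET_INLET_C - TOLERANCE_C:
        print(f"{cabinet}: overcooled -- remove a panel or close a damper")
    else:
        print(f"{cabinet}: within band, leave as is")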

Power monitoring

Today’s cabinet power strips — intelligent power distribution units (iPDUs) — can monitor the load on each receptacle, as well as on each phase of the total strip. Maintaining phase balance is particularly important to maximize both UPS capacity and energy efficiency.


Balancing phases in European 240-volt systems is a matter of moving loads from one outlet to another. American 208-volt systems are trickier: each receptacle draws current from two of the three phases, so moving a load to a different receptacle changes the loading on two phases at once, and the effect on overall balance is not obvious. Visual phase readouts on the iPDUs provide instant feedback on how the balance has been affected when circuiting changes.
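The 208-volt complication is easier to see with a little bookkeeping. In the sketch below, each receptacle is fed line-to-line, so its current is attributed to two phases at once; the branch currents are hypothetical, and this simple sum ignores the vector addition of real line currents, so the iPDU's own phase readouts remain the numbers to trust.

# Sketch: why moving a load on a 208 V strip affects two phases at once.
# Each receptacle is wired line-to-line (A-B, B-C or C-A), so its current
# shows up on both of those phases. Branch currents are hypothetical, and
# this simple per-phase sum ignores the vector math of real line currents.

from collections import defaultdict

# (receptacle, phase pair, amps) -- hypothetical loads
branches = [
    ("r01", ("A", "B"), 4.2),
    ("r02", ("B", "C"), 6.1),
    ("r03", ("C", "A"), 3.0),
    ("r04", ("A", "B"), 5.5),
]

def per_phase_amps(branch_list):
    totals = defaultdict(float)
    for _name, (p1, p2), amps in branch_list:
        totals[p1] += amps
        totals[p2] += amps
    return dict(sorted(totals.items()))

print("before:", per_phase_amps(branches))

# Move r04 from an A-B receptacle to a B-C receptacle:
# phase A drops, phase C rises, phase B is unchanged.
branches[3] = ("r04", ("B", "C"), 5.5)
print("after: ", per_phase_amps(branches))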

Power monitoring is also critical to maintaining power redundancy. In a 2N system, the load on each UPS should be kept below 50% so the other UPS can assume the total load if one fails. An N+1 design requires knowledge of the UPS's modular structure. In a 100 kW UPS made up of 20 kW modules, N+1 means six 20 kW modules, and the total load must be kept below 100 kW so that if any one module fails, the remaining five can carry the load. Likewise, a 500 kW N+1 UPS built from 50 kW modules must be kept below 500 kW to preserve redundancy.
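The redundancy arithmetic above can be expressed as a quick headroom check. The module sizes and counts below come from the examples in the paragraph; the 2N capacity figure is hypothetical.

# Sketch: maximum safe load for the UPS redundancy examples above.
# For 2N, each UPS must stay below 50% of its capacity.
# For N+1, the load must stay at or below N modules' worth of capacity
# so the spare module can cover any single failure.

def max_2n_load_kw(ups_capacity_kw):
    return ups_capacity_kw * 0.5

def max_n_plus_1_load_kw(module_kw, module_count):
    # module_count includes the +1 spare module
    return module_kw * (module_count - 1)

# 100 kW of N+1 capacity built from six 20 kW modules -> keep the load under 100 kW
print(max_n_plus_1_load_kw(20, 6))    # 100

# 500 kW N+1 UPS built from eleven 50 kW modules (ten plus one spare) -> under 500 kW
print(max_n_plus_1_load_kw(50, 11))   # 500

# 2N pair of 300 kW UPS units (hypothetical size) -> keep each below 150 kW
print(max_2n_load_kw(300))            # 150.0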

Fire protection

Data centers need an aspirating smoke detection system to supplement code-compliant fire detection and suppression. These systems constantly sample the air throughout the facility and can be set for high sensitivity. This enables personnel to intervene, identify the smoke source and suppress it with hand-held extinguishers far earlier than conventional fire alarms would allow.

Early warning notifications should be displayed to both IT operations and security to reduce the risk of triggering a full fire alarm, and with it a fire department response and activation of the emergency power off button.

DCIM

The best way to monitor all these systems, and to convert the mass of available data into usable information, is with a data center infrastructure management (DCIM) software package. The systems mentioned above have hundreds of monitoring points that produce more data than the human mind can realistically grasp and assimilate.

A properly selected, installed and maintained DCIM system can turn that data into easily grasped information. In large operations, it can manage IT inventory from order placement through installation and eventual decommissioning. A DCIM system can also track whether software releases are up to date, and it can even predict potential power and cooling failures before they occur.
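At its simplest, turning data into information means collapsing hundreds of raw monitoring points into a short exception list. The sketch below is a hypothetical illustration of that idea, not a real DCIM interface; the point names, readings and thresholds are invented.

# Sketch: collapse many raw monitoring points into a short exception list,
# which is roughly what a DCIM dashboard does at its simplest.
# Point names, readings and thresholds are all hypothetical.

readings = {
    "row1-cab12-inlet-top-c": 28.3,
    "row2-cab03-inlet-mid-c": 23.9,
    "ups-a-load-percent": 47.0,
    "ups-b-load-percent": 56.0,
    "pdu-07-phase-a-amps": 14.8,
}

limits = {
    "inlet": 27.0,     # ASHRAE recommended inlet maximum, degrees C
    "ups": 50.0,       # 2N rule of thumb: keep each UPS below 50% load
    "phase": 16.0,     # hypothetical per-phase current limit, amps
}

alerts = [
    f"{point} = {value} exceeds limit {limit}"
    for point, value in readings.items()
    for key, limit in limits.items()
    if key in point and value > limit
]

print("\n".join(alerts) if alerts else "all monitored points within limits")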

Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom and Wilke LLC. McFarlane has spent more than 35 years in communications consulting and has experience in every segment of the data center industry.