Saturday, November 21, 2009

Best Practices in Data Center Cooling

The proliferation of modern, high-powered IT equipment is creating a new set of cooling challenges in the data center, reducing equipment resilience well before the cooling capacity of the room is reached. This forces data center operators to take a conservative approach to cooling and to pay more than necessary to operate the cooling systems. As shown in Figure 1, the cost of power and cooling has increased 400% over the past decade, and these costs are expected to continue rising. To make matters worse, the need to deploy more servers to support new business solutions and faster data access has produced a tenfold increase in the number of servers in today's data centers. At the same time, servers have shrunk, allowing more equipment to be packed into each cabinet: the latest blade server designs incorporate a large number of high-power servers in a single 2U or 3U enclosure.

Figure 1: IDC Report – Cost Structure and Trends

When server racks are filled with these high-density servers, the result is an increased power load, and a corresponding heat load, that is causing serious problems for today's data centers. The proliferation of high-density racks in very dense configurations increases the complexity of providing adequate power and cooling. The power consumption and heat output of these racks now approaches 10-15 kW per rack, far exceeding the 1-2 kW per rack of just 10 years ago.
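To put those rack figures in cooling terms, here is a rough arithmetic sketch using the standard conversions (1 kW = 3,412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr). The mid-range rack powers are taken from the text; the conversion itself assumes essentially all electrical load becomes heat.

```python
BTU_PER_KW = 3412    # heat equivalent of 1 kW of electrical load
BTU_PER_TON = 12000  # one ton of refrigeration

def cooling_tons(rack_kw):
    """Cooling tons needed to remove the heat of one rack."""
    return rack_kw * BTU_PER_KW / BTU_PER_TON

legacy = cooling_tons(1.5)   # mid-range of the 1-2 kW racks of a decade ago
modern = cooling_tons(12.5)  # mid-range of today's 10-15 kW racks

print(f"legacy rack: {legacy:.2f} tons, modern rack: {modern:.2f} tons")
```

A single modern rack can demand more than three tons of cooling on its own, which is why room-level averages no longer tell the whole story.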
If these trends continue, the ability of data centers to deploy new services will be severely constrained. To overcome this constraint, data center operators have three choices: expand power and cooling capacity, build new data centers, or employ an efficient solution that maximizes the use of existing cooling capacity. This takes us back to basic economics: maximum utilization of available resources. The increased power consumption also has an obvious byproduct, the heat output of these high-density servers, which in turn creates "hot spots". Dealing with hot spots is the data center manager's nightmare. Designing the air-conditioning system for a data center using a simple energy-balance method has therefore become increasingly difficult; it relies heavily on personal skill and may not meet demand in the near future. The solution, however, is close at hand and achievable.

Hot Aisles and Cold Aisles

This practice was first implemented by Dr. Robert (Doctor Bob) Sullivan. His concept was to arrange equipment in a front-to-front and rear-to-rear configuration, creating alternating hot aisles and cold aisles. This arrangement has become the industry-standard practice for supplying the cold air equipment needs and managing heat output efficiently: it optimizes cooling effectiveness by ensuring that only cold supply air is delivered to equipment inlets.

Source: The Uptime Institute

Avoid Gaps in Rows

Air is a fluid, and moving air tends to hug the outside surface of a server cabinet. Openings in equipment rows, and at the ends of rows, allow air from the hot aisles to wrap around the end cabinets and infiltrate the cold aisles. The mixing of hot and cold air raises the inlet temperature of nearby devices, and if this recirculation of warm air persists, the devices may overheat. Long, continuous rows of server cabinets allow better separation of hot exhaust air and cold supply air.


Illustration showing how hot exhaust air mixes with cold air when gaps between cabinets are present
Cabinet Ceiling Fans

With the evolution of many different cabinet designs, cabinets fitted with ceiling fans are now common. It is advisable, however, not to place more than three fan-equipped cabinets adjacent to each other. Cabinet ceiling fans serve no useful purpose when full-depth devices are installed at the top of the cabinet, and they disrupt the natural flow of hot exhaust air.

Cabinet Alignment

Locate computer cabinets so that the front of each cabinet is aligned with a raised-floor seam. For cold aisles to be effective, they must be at least 4 feet wide; in simpler terms, two floor panels wide. See below for an illustration showing the minimum recommended row spacing for effective cooling.

Source: ANSI/TIA-942
Cabinet Blanking Panels

The compaction of today's high-density equipment has forced the use of small-diameter fans with reduced velocity. Rear cabinet doors restrict airflow, so exhaust circulates inside the cabinet. When cabinets are not fully populated with devices, this exhaust air finds the path of least resistance toward the front of the cabinet, where it is drawn back into the inlets, eventually overheating devices. Installing blanking panels in the front of the cabinet blocks this exhaust migration.
In most situations, cabinets that are not fully populated and lack blanking panels will also allow air from the hot aisle to pass straight through the cabinet and infiltrate the cold aisle. Again, installing blanking panels eliminates this problem.


Row Orientation to Precision Air-conditioning Units

Orient computer equipment rows perpendicular to the front face of the computer-grade air-conditioners. This is important because it allows hot exhaust air to return to the CRAC unit along the row, or tunnel, created by the walls of cabinets. When multiple equipment rows are perpendicular to the face of the air-conditioners, the majority of exhaust air travels along either end of the aisle, working its way back to the nearest unit. Some air will move over the tops of equipment rows, but its impact on the cold aisles is much smaller. Ideally, hot aisles should run straight into the face of the air-conditioning units. See below for an illustration showing the recommended row orientation to precision air-conditioning units.



Source: ANSI/TIA-942

Cable Management

Implement methodical cable routing and dressing at the rear of computer cabinets. Cables should be neatly dressed away from exhaust outlets to ensure unrestricted airflow. It is also common, on lifting an under-floor tile, to find waterfalls of cables sitting directly in front of the airflow from the CRAC. These should be neatly stacked in cable trays; better still, route cabling overhead.


Ceiling Clearance

Provide a minimum of 18 inches of clearance above all computer cabinets; the industry recommends 36 inches for optimum cooling. Assuming the use of computer-grade downflow air-conditioners in the standard configuration, return air is intended to move along the data center ceiling back to the top inlet of the air-conditioner. For this return path to be effective, sufficient clearance above the cabinets is necessary.


Locating Airflow Panels

Raised-floor airflow panels should be installed only in cold aisles and must be accessible at all times. Use only the quantity required to maintain maximum static pressure; a general rule of thumb is one panel per sensible ton of cooling capacity. Another important consideration is damper thickness: unnecessarily thick dampers block airflow and defeat the purpose for which airflow panels are designed. See below for an illustration showing the types of airflow panels.

Using a mix of both types of tiles can be a good way to serve the varied cooling demands of today's high-density environments.
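The one-panel-per-sensible-ton rule of thumb above can be sketched as a quick sizing calculation. The 0.85 sensible-heat ratio used here is an illustrative assumption about a typical CRAC unit, not a figure from the article.

```python
def airflow_panels(total_tons, sensible_ratio=0.85):
    """Approximate panel count: one per sensible ton of capacity.

    sensible_ratio is the assumed fraction of the unit's total
    capacity that is sensible (temperature) rather than latent
    (moisture) cooling.
    """
    sensible_tons = total_tons * sensible_ratio
    return round(sensible_tons)

# A nominal 22-ton CRAC at an assumed 0.85 sensible ratio:
print(airflow_panels(22))  # 19 panels
```

Installing many more panels than this dissipates under-floor static pressure and starves the tiles farthest from the unit.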


Seal Cable Cutouts

Keep cable cutouts in raised-floor panels as small as possible, and seal them after cable installation. Unsealed cable cutouts are a tremendous source of static-pressure loss: these openings deliver cold air where it is not needed, leaving insufficient airflow volume at the airflow panels where it is truly needed. Sealing cable cutouts can increase static pressure by nearly 25%. Various types of floor grommets are available today. For new installations, the floor panel is cut, the grommet installed, and the cables routed through the floor. Grommets are also designed for existing installations: this type is a two-piece assembly that is separated, wrapped around the existing cables, joined together, and sealed to the raised floor. See below for an illustration showing the types of floor grommets.
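To see why a 25% static-pressure gain matters, note that airflow through a perforated tile behaves roughly like orifice flow, where volume scales with the square root of the pressure difference. This is a first-order physics sketch under that assumption, not a figure from the article.

```python
import math

def flow_gain(pressure_gain):
    """Fractional airflow increase for a fractional static-pressure
    increase, assuming orifice-like flow: Q proportional to sqrt(dP)."""
    return math.sqrt(1 + pressure_gain) - 1

# Sealing cutouts raises under-floor static pressure by ~25%:
print(f"{flow_gain(0.25):.1%} more air through each cold-aisle tile")
```

Under this model a 25% pressure gain yields roughly a 12% airflow gain at every tile, delivered exactly where the equipment inlets are.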


Return Air Travel Distance

Keep return airflow distance to less than 1.33 feet per sensible ton of cooling capacity.
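The rule above converts directly into a maximum return-path length for a given unit. The 30-ton example is illustrative.

```python
MAX_FT_PER_SENSIBLE_TON = 1.33  # rule of thumb from the text

def max_return_distance(sensible_tons):
    """Longest recommended return-air path for a unit of this capacity."""
    return sensible_tons * MAX_FT_PER_SENSIBLE_TON

# e.g. a unit delivering 30 sensible tons:
print(f"{max_return_distance(30):.1f} ft")  # ~40 ft
```

Equipment whose exhaust must travel farther than this back to the unit is a candidate for an additional or relocated air-conditioner.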



High-Density / Low-Density Areas

For deployment of self-contained high-density zones within an existing or new low-density data center, consider an under-floor barrier enclosing the air-conditioning and equipment. A 150 W/sq.ft. room design supports only about 3-4 kW per cabinet, while today's cabinets draw far more but are unlikely to fill the room. Flexibility is nice, but fitting out an entire room for a limited amount of high-density equipment is not practical. The independence of these high-density zones allows predictable and reliable operation of high-density equipment without a negative impact on the performance of the existing low-density power and cooling infrastructure. A side benefit is that these zones operate at much higher electrical efficiency than conventional designs. See below for an illustration showing a high-density zone in a low-density room.
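The "150 W/sq.ft. is only 3-4 kW per cabinet" figure can be checked with simple arithmetic. The 20-27 sq.ft. footprint, a cabinet plus its share of aisle and clearance space, is an illustrative assumption.

```python
def cabinet_kw(watts_per_sqft, sqft_per_cabinet):
    """Power budget per cabinet implied by a room-level design density."""
    return watts_per_sqft * sqft_per_cabinet / 1000

low = cabinet_kw(150, 20)   # tight layout: 3.0 kW per cabinet
high = cabinet_kw(150, 27)  # generous layout: ~4.1 kW per cabinet

print(f"{low:.1f} to {high:.1f} kW per cabinet")
```

A 12 kW blade cabinet therefore exceeds a 150 W/sq.ft. room budget roughly threefold, which is the case for a contained high-density zone rather than a whole-room retrofit.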



Ceiling Plenum Return

If a large, fairly open ceiling plenum is available, consider converting it to a return-air plenum. This space must be at least twice the depth of the raised floor. The ceiling plenum can be used as flexibly as the raised floor to remove heat directly from the hot aisles before it can migrate. Use egg-crate panels above the hot aisles, and install ductwork on top of the air-conditioner extending through the ceiling.

Locating Air-Conditioners

Rooms wider than 60 feet in either direction, with air-conditioners along the periphery only, will require additional air-conditioners located along the center of the room. A/C units have a defined area of operation: the largest computer-grade units have a maximum effective area of approximately 35 feet from the face of the unit.

Supplemental Cooling

When load densities exceed 150 W/sq.ft. over more than 200 sq.ft., or 4 kW per cabinet, supplemental cooling should be considered. Computer-grade downflow air-conditioning units available today can effectively cool load densities up to 150 W/sq.ft. Above that, long-distance under-floor distribution to cabinet inlets becomes ineffective and localized cooling is required.
Manufacturers have responded with products designed specifically for high-density loads: some attach to the rear of cabinets to improve local ventilation, while others mount overhead to supply air to multiple cabinets.
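The thresholds above lend themselves to a simple screening check. This is a hedged sketch of the stated rule, not a sizing tool; the example loads are illustrative.

```python
def needs_supplemental(watts_per_sqft, area_sqft, max_cabinet_kw):
    """Flag a zone for supplemental cooling per the rule in the text:
    >150 W/sq.ft. over >200 sq.ft., or any cabinet above 4 kW."""
    dense_zone = watts_per_sqft > 150 and area_sqft > 200
    hot_cabinet = max_cabinet_kw > 4
    return dense_zone or hot_cabinet

print(needs_supplemental(175, 300, 3.5))  # True: dense zone
print(needs_supplemental(120, 500, 6.0))  # True: one hot cabinet
print(needs_supplemental(120, 500, 3.0))  # False: within CRAC limits
```

Note that a single 6 kW cabinet triggers the rule even in a room whose average density looks comfortable.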

Under Floor Baffle system

Baffle systems are a passive solution that can be easily installed as a vertical under-floor partitioning system to direct airflow within the plenum space. These baffles direct cold air from the CRAC units to where it is needed, and away from where it is not. In rooms not yet fully populated, it is essential not to waste cold air on unoccupied areas.
Velocity is the time rate of motion; velocity pressure is therefore the pressure exerted by air in motion. When air from a CRAC unit is forced through a partitioned airflow space, static pressure is maintained. Without dedicated partitioning, air velocity decreases as the air moves farther from the CRAC unit. By maintaining static pressure farther from the units, baffles sustain the velocity pressure needed at particular hot zones, and are a simple way to cool thermal hot spots in IT equipment centers. The objective should be to create unobstructed, dedicated airflow paths to the equipment. Open floor penetrations must also be sealed to manage airflow effectively. See below for an illustration showing deployment of baffle systems.



Source: plenaform.com
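The velocity-pressure relationship behind the baffle discussion has a standard imperial form: for standard air, VP (inches of water gauge) ≈ (V / 4005)², with V in feet per minute. The velocities below are illustrative, not measurements from the article.

```python
def velocity_pressure_in_wg(velocity_fpm):
    """Velocity pressure of standard air, in inches of water gauge,
    for a velocity given in feet per minute."""
    return (velocity_fpm / 4005) ** 2

# Air near the CRAC discharge vs. air that has slowed far from the unit:
near = velocity_pressure_in_wg(2000)  # ~0.25 in. w.g.
far = velocity_pressure_in_wg(1000)   # ~0.06 in. w.g.
print(f"near: {near:.3f} in. w.g., far: {far:.3f} in. w.g.")
```

Halving the velocity quarters the velocity pressure, which is why un-partitioned plenums lose their ability to drive air up through distant tiles, and why baffles that preserve velocity help the far end of the room.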

Partitioning off the raised floor around command and control centers and similar areas also improves operator comfort.


Acknowledgements: Endless discussions with my dear friend David
