Soup to Servers - Data Center Design Considerations

By Carrie Goetz

The data center of today is built, run, and maintained by various disciplines within the space.  As data centers grow, decisions about which products to procure tend to be made in silos.  This is due to a few factors:

• Budgets are allocated to departments
• Products don’t have the same refresh rates
• People are too busy to pay attention to things that don’t have a direct impact on their departments
• People don’t have the expertise or diversity of skills to weigh in on decisions
• There is little upper-management support for cross-functional teams
• The fear of failure repercussions is strong
• The company doesn’t have a plan for commissioning and decommissioning that includes all stakeholders

One thing is for certain: the data center is an ecosystem.  One bad decision by one department can have a staggeringly negative impact on the rest of the ecosystem’s inhabitants.  It is difficult at best to rally all of the stakeholders for smaller decisions.  A better way is to cross-pollinate (so to speak) these stakeholders with knowledge that can be used in the ecosystem’s decision-making process.

Join Carrie Goetz, DCI Board member, Leaders Lab advisor, Global Director of Technology at Paige DataCom Solutions, 30-year industry veteran, and Woman Warrior, at Data Center World on March 19 in Phoenix from 8 to 11:30 a.m. She will be presenting an All-Access Workshop on The Fundamentals of Data Center Design. Be sure to sign in to Twitter and use #DCWSouptoServer to provide your input and ideas prior to the event.

This workshop is designed to accomplish a few things.  First, for those new to or interested in the industry, it is a means to understand the various specialties that combine to house, supply, and back up a company’s data.  For those thinking of moving to another arm of data center responsibility, it is a shortcut to understanding.  And finally, for those who have been in the space for a while, this is a first-of-its-kind interactive presentation where your input will become part of the final handout released to all participants, a sort of crowd-sourced design guide.

Some Nuts and Bolts

When beginning a data center design, the space and building envelope dictate some, but certainly not all, of the elements for the whitespace.  In a colocation scenario, where a company leases the space, the occupying company may have limited or no input into those decisions.  In a recent Leaders Lab, one comment from an engineering firm was that they are generally firewalled from the end user that will occupy the space.  This means that decisions such as whether to have a raised floor and whether to use passive or active cooling are made for the occupant by the building owner.

In a build-to-suit or self-built data center, there are a few considerations for including a raised floor or not.  Piping for chilled water, and sometimes pathways, are well suited to a raised-floor environment.  The cooling system may rely on having a raised floor.  And of course, preference plays a part as well.

For cooling, there are several options out there today.  Cooling is sized based on the power needs of the space.  Calculating power can be a bit of a crystal-ball routine, but history tells us that loads tend to shift and morph.  As data centers become more geographically diverse, the amount of power may shift from one location to another, and the amount of redundancy needed may decrease as well.  After all, two Tier 2 type facilities offer better redundancy than a single Tier 4, due to that geographic diversity.  The best way to size for power and redundancy is to calculate the risk factors for the applications.  This will be particularly useful when edge compute becomes part of the enterprise.
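
To make that geographic-diversity point concrete, here is a minimal Python sketch comparing the combined availability of two independent sites against a single higher-tier site. The availability figures used (99.741% for Tier 2, 99.995% for Tier 4) are commonly cited design targets, used here only as illustrative assumptions; a real risk calculation should use the application’s own failure and failover data.

```python
# Illustrative availability math: two lower-tier sites vs. one higher-tier site.
# Tier figures below are commonly cited design targets, used here as assumptions.

TIER2_AVAILABILITY = 0.99741   # roughly 22.7 hours of downtime per year
TIER4_AVAILABILITY = 0.99995   # roughly 26 minutes of downtime per year

HOURS_PER_YEAR = 8766

def combined_availability(site_availability: float, sites: int) -> float:
    """Probability that at least one of several independent sites is up."""
    return 1 - (1 - site_availability) ** sites

two_tier2 = combined_availability(TIER2_AVAILABILITY, sites=2)
single_tier4 = TIER4_AVAILABILITY

print(f"Two Tier 2 sites: {two_tier2:.7f} "
      f"(~{(1 - two_tier2) * HOURS_PER_YEAR * 60:.1f} min/yr down)")
print(f"One Tier 4 site:  {single_tier4:.7f} "
      f"(~{(1 - single_tier4) * HOURS_PER_YEAR * 60:.1f} min/yr down)")
# With independent sites and working failover, the paired Tier 2 facilities
# come out ahead, but only if the application can actually fail over.
```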

Once the building particulars are noted and the power has been calculated with growth assessed, it becomes necessary to determine whether the existing cooling in the space is sufficient or whether new supplemental cooling methods will be needed.  Traditional passive cooling methods only go so far.  Contained aisles or a contained cabinet can help with close-coupled cooling.
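
As a rough sanity check on whether existing cooling is sufficient, the sketch below converts an assumed IT load into required heat-removal capacity in BTU/hr and refrigeration tons (1 kW is about 3,412 BTU/hr; 1 ton is about 3.517 kW). The load, growth factor, and installed capacity are hypothetical placeholders; a real assessment would also account for airflow, containment, and hot spots, not just bulk capacity.

```python
# Rough cooling sufficiency check (bulk capacity only; airflow and hot spots
# need their own analysis). All inputs below are hypothetical placeholders.

BTU_PER_KW = 3412             # 1 kW of IT load is ~3,412 BTU/hr of heat
KW_PER_TON = 3.517            # 1 ton of refrigeration is ~3.517 kW

it_load_kw = 320.0            # assumed current IT load
growth_factor = 1.25          # assumed planning headroom for growth
installed_cooling_tons = 110  # assumed existing room cooling capacity

design_load_kw = it_load_kw * growth_factor
required_btu_hr = design_load_kw * BTU_PER_KW
required_tons = design_load_kw / KW_PER_TON

print(f"Design load: {design_load_kw:.0f} kW "
      f"({required_btu_hr:,.0f} BTU/hr, {required_tons:.1f} tons)")

if required_tons > installed_cooling_tons:
    print("Existing cooling is insufficient; plan supplemental or "
          "close-coupled cooling.")
else:
    print("Existing bulk cooling capacity covers the design load.")
```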

When laying out the floor area for a data center, it is important to note that there may be a mix of cooling methods; this is true if the data center will contain a high-density area, for instance.  If the IT load is spread out over the data center floor, the room cooling may be ample.  If the cooling is insufficient, it makes sense to add a high-density area to accommodate the higher loads.  It may make sense to use close-coupled cooling within, or in combination with, cabinets purpose-built for that heat-removal application.  Cooling (heat removal, technically) can be handled across an entire room, a row at a time, or in zones as needed.  The key is to provide the right balance across the floor, with optimized heat intake for cooling output efficiency.
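
To illustrate how a mixed layout might be planned, here is a minimal sketch that sorts planned cabinets into a room-cooled zone and a high-density (close-coupled) zone based on per-cabinet load. The 8 kW threshold and the cabinet loads are assumptions for illustration only; the real cutoff depends on the room’s airflow design and containment strategy.

```python
# Sketch: group planned cabinets into cooling zones by per-cabinet load.
# The 8 kW room-cooling threshold and the loads are illustrative assumptions.

ROOM_COOLING_LIMIT_KW = 8.0   # assumed max load room cooling handles well

planned_cabinets = {          # cabinet id -> expected load in kW (hypothetical)
    "A01": 4.5, "A02": 6.0, "A03": 7.5,
    "B01": 12.0, "B02": 18.0,   # candidate high-density cabinets
    "C01": 5.0,
}

room_cooled = {c: kw for c, kw in planned_cabinets.items()
               if kw <= ROOM_COOLING_LIMIT_KW}
high_density = {c: kw for c, kw in planned_cabinets.items()
                if kw > ROOM_COOLING_LIMIT_KW}

print(f"Room-cooled zone: {sorted(room_cooled)} "
      f"({sum(room_cooled.values()):.1f} kW total)")
print(f"High-density zone (close-coupled cooling): {sorted(high_density)} "
      f"({sum(high_density.values()):.1f} kW total)")
```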

When the cabinets and cooling are selected and the cabinet layouts are in place, it is time to be sure that everything is grounded and bonded before active equipment is installed.  This includes the flooring system, if used, all pathways, the cabinets themselves, and the bonding bars in the cabinets that are then available for the equipment the cabinets will house.  The requirements for grounding and bonding are outlined in the National Electrical Code and also referenced in the data center standards.  It is important to understand the proper grounding and bonding methods to have a single reference ground.

Cabling, both copper and fiber, is used to make the communications connections within the space, and there are a variety of options for both.  Some cabling systems are proprietary, while others are standards based.  The selection of copper and fiber is based on a few criteria: the media type of the port, the port interface, the cost of the equipment that will be connected, and the power drawn by that equipment as an operating cost.  While the standards provide some guidelines as to what should be used, there is no hard and fast rule that forces a data center to any particular type of cable.  With standards-compliant cabling systems, and even those that exceed the standards, options exist for switches, servers, SAN, and even the overhead lighting, security systems, WAPs, and sensors within the space.

From an equipment perspective, it is important to know that power and cooling must handshake with the equipment demands of the data center.  The number and placement of switches, SAN equipment, and servers will play an important part in the selection and location decision-making processes.  It is critical for IT to understand the facilities side of the house, and it is also critical for facilities to be part of the equipment selection process to ensure that capacity considerations like weight, power, and cooling are sufficient to handle the desired IT load.
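
One way facilities and IT can vet a proposed equipment selection together is a quick check of each cabinet’s planned weight and power draw against the floor-loading and per-cabinet power limits. The limits and equipment figures in the sketch below are made-up illustrative values, not code requirements or vendor specifications.

```python
# Sketch: validate a proposed cabinet build against weight and power limits.
# All limits and equipment figures are illustrative assumptions.

FLOOR_LIMIT_KG = 1100          # assumed allowable loaded-cabinet weight
CIRCUIT_LIMIT_KW = 11.0        # assumed usable power per cabinet feed

proposed_build = [             # (item, weight_kg, power_kw), hypothetical
    ("cabinet + PDUs", 180, 0.2),
    ("ToR switches x2", 12, 0.9),
    ("1U servers x20", 340, 7.0),
    ("storage shelf x2", 70, 1.6),
]

total_weight = sum(w for _, w, _ in proposed_build)
total_power = sum(p for _, _, p in proposed_build)

print(f"Planned weight: {total_weight} kg (limit {FLOOR_LIMIT_KG} kg)")
print(f"Planned power:  {total_power:.1f} kW (limit {CIRCUIT_LIMIT_KW} kW)")

if total_weight > FLOOR_LIMIT_KG or total_power > CIRCUIT_LIMIT_KW:
    print("Build exceeds a facility limit; revisit the selection with facilities.")
else:
    print("Build fits within the assumed facility limits.")
```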

Backup power systems also need to be sized correctly to accommodate failover should there be a power failure.  Without a commissioning/decommissioning plan, DCIM, or other means to balance and monitor the loads across the data center floor, the opportunity for failure is high.  Policies and procedures should ensure that IT staff are aware of the limits; further, IT should be an active participant in ensuring that the equipment they select is vetted for efficiency.  This is one area where facilities can assist in the IT decisions.
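
As a simple illustration of sizing backup power against the monitored load, the sketch below checks whether an N+1 UPS configuration still carries the full IT load after losing one module. The module rating, power factor, and load are hypothetical values; actual sizing must follow the manufacturer’s derating guidance and the facility’s redundancy policy.

```python
# Sketch: check that an N+1 UPS configuration survives loss of one module.
# Ratings, power factor, and load below are hypothetical assumptions.

module_kva = 100.0        # assumed rating of each UPS module
power_factor = 0.9        # assumed output power factor (kW = kVA * pf)
modules_installed = 5     # N+1 configuration: intended to tolerate one failure
it_load_kw = 340.0        # current monitored IT load (e.g., from DCIM)

def usable_kw(modules: int) -> float:
    """Deliverable kW with a given number of healthy modules."""
    return modules * module_kva * power_factor

normal_capacity = usable_kw(modules_installed)
after_one_failure = usable_kw(modules_installed - 1)

print(f"Capacity with all modules:  {normal_capacity:.0f} kW")
print(f"Capacity after one failure: {after_one_failure:.0f} kW")

if after_one_failure < it_load_kw:
    print("Load exceeds N capacity: add a module or shed load before "
          "redundancy is truly N+1.")
else:
    print(f"Headroom after a failure: {after_one_failure - it_load_kw:.0f} kW")
```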

When capacity is reached, there are several options to bring the facility back within the limits of its supporting infrastructure systems.  Cloud compute can play a role, as can offloading some of the resources to another site.  With failover sites, it is important to right-size resources across the data center framework, which consists of the original site and the backup site(s).  As applications grow smarter, every piece of equipment may no longer need multiple network, power, SAN, and management connections.  Eliminating some of the unnecessary redundancy can be a huge savings over time, both in capital and operational expenditures.
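
To put a rough number on that savings, here is a hedged sketch estimating the capital cost avoided by dropping a second network connection from servers whose applications already fail over at a higher layer. The per-connection costs and server count are invented for illustration only.

```python
# Sketch: rough capex saved by removing a redundant network connection from
# servers that no longer need it. All unit costs and counts are hypothetical.

servers_with_app_level_failover = 200   # assumed servers that can drop a link
cost_per_connection = {                 # assumed per-link costs (USD)
    "server NIC / optic": 150,
    "switch port share": 250,
    "structured cabling": 60,
}

per_server_saving = sum(cost_per_connection.values())
total_capex_saving = per_server_saving * servers_with_app_level_failover

print(f"Per-server saving:    ${per_server_saving:,}")
print(f"One-time capex saved: ${total_capex_saving:,}")
# Opex also drops: fewer powered ports, fewer optics drawing power, and less
# cabling to manage, which compounds over the life of the equipment.
```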

Takeaways

To recap, this session serves newcomers who want to understand the specialties that combine to house, supply, and back up a company’s data, those moving to another arm of data center responsibility, and industry veterans whose input will become part of the final handout released to all participants.

While not inclusive of every design element, the session is geared toward that mutual understanding and stewardship across the data center ecosystem.  With audience input shaping the final deck that will be delivered to participants, this is a one-of-a-kind presentation that will provide design guidance for years to come.

Participants who wish to contribute are encouraged to sign in to Twitter® and use #DCWSouptoServer prior to the presentation.