More or Less?
Fox Architects recently went through a unique schematic design exercise in planning a Green Data Center “Lab”. We were asked to engineer a data center that could test current design practices against “best practices” and “future practices”; not an easy task, since data centers are not typically set up to change monthly, though a lab is built for exactly that kind of testing. But it got us thinking about what type of physical infrastructure would, or would not, be required.
More or Less? If we had been asked this question three years ago for a corporate data center where the IT department got one shot at the funds available to build, I would have thrown everything we could at the problem: more, more, more.
But in today’s economy, with a greener philosophy for data center design, you have to think prudently; simply put, think better. Less is more in this case: no ceiling, no raised floor, no gas suppression system, nothing on the data floor but the racks themselves.
In our design exercise solution, we provided a clean (simple) box with exposed cable trays hung from a 2×2 grid of Unistrut framing recessed flush into the concrete deck pour above. We did end up with a 2’-high raised-access floor, but with no infrastructure under it. The raised floor was used to test older data center practices and provided the +1 air-distribution path for N+1 redundancy when testing in-row and other direct rack cooling solutions.
We attempted to support the data floor with only the infrastructure known to be needed at present, and then, in light of past experience, to plan for change by evaluating how it could be accommodated in the future without installing any unneeded infrastructure now. We took the same approach for the data center room itself, leaving one side of each axis of the room open for future expansion. We assumed we only needed the horizontal axis open, but to deal with future best practices and unknown technology we left two sides and the top open (you can expand south, east, and up). We located the mechanical and electrical distributions along the long axis of the data floor, and located the UPS/batteries, substations, switchgear, and mechanical chiller room remote from the data center itself. Remember, this is a data center lab; users of the lab need access only to the server racks, cooling distribution, and electrical distribution, and they have the ability to vary and monitor power and cooling. That was our ideal model.
In reality, placing the substation too far from the UPS/batteries and too far from the PDUs was unnecessarily expensive. So we concluded that we only needed to expand horizontally: like adding train cars on a track, the data center lab can expand to the east. Not being able to change the ceiling height is limiting for a data center lab. However, a great number of studies have been done on ceiling heights in data centers based on the cooling strategy, so with a little research we concluded that the ceiling only needed to vary two feet, from 8’ to 10’ above the data floor; we did not need infinite flexibility. How much physical space flexibility do you need to test unknown best practices? The question itself reminds me of a Calc 3 infinity equation, and it gives me a headache.
In conclusion, less and less infrastructure was provided, which in itself is good green practice. This is a lab, and its users can expect downtime to reconfigure if needed; that would not be acceptable in any other Tier 3 data center, but it is normal for a lab. We did isolate the data center into zones 1, 2, and 3. All three zones can share cooling and power, or all three can be isolated, so if you have to reconfigure you only affect one zone.
The result is a very good design approach for a multi-tier-rated data center design, providing the flexibility of control with enough backup cooling and power for only two of the three zones: N+1, +1, or N only. Not all applications need N+1, Tier 3 reliability.
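The three-zone redundancy idea can be sketched as a simple capacity model. This is a hypothetical illustration, not the lab's actual control logic; the zone names, unit counts, and the rule that shared backup covers at most two zones are all assumptions made for the example.

```python
# Hypothetical sketch: three zones share a backup pool of cooling/power,
# but the pool is sized to give the "+1" to only 2 of the 3 zones at once.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    demand_units: int        # N: units needed for normal operation (assumed value)
    has_backup: bool = False  # whether the shared +1 backup is assigned here

def assign_backup(zones, backup_slots=2):
    """Assign the shared backup (+1) to at most `backup_slots` zones."""
    for zone in zones[:backup_slots]:
        zone.has_backup = True
    return zones

def redundancy_level(zone):
    """A zone with backup assigned runs at N+1; otherwise it runs at N only."""
    return "N+1" if zone.has_backup else "N"

zones = assign_backup([Zone("Zone 1", 4), Zone("Zone 2", 4), Zone("Zone 3", 4)])
for z in zones:
    print(z.name, redundancy_level(z))
```

Under these assumptions, two zones report N+1 and the third reports N, matching the idea that not every application needs full Tier 3 reliability.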
Bob Dunn LEED AP BD+C