The Idiosyncrasies of Data Center White Space, Rack & Stack, Power and Conveyance Planning
The Fight for Internal Versus Outsourced Data Center Processing
By Mike Najarian, Consultant for Corporate Strategy, Execution & Integration of Infrastructure, Data Center, Cloud, Network & Security Programs
The Data Center landscape has been transforming for many years, from back-office number-crunching mystery to corporate-owned glass palaces, outsourced cloud computing facilities, and Hyper-Scale Data Centers. These trends are exciting but daunting. For years, the cost of internal Data Center operations has been prohibitive. Ownership costs, legacy applications and hardware, AC/DC/UPS power, HVAC, technical and facility staffing, DCIM software, and structured cabling drive capital and operational expenditures through the roof. Throw in Data Center expansion, and we have a money pit on our hands at a time when cost-cutting and efficient operations are the mantra of C-level executives and the Board of Directors. Outsourcing is a popular option, especially to the Cloud, but a thorough investigation of base contracts covering Network access, Security, Compute power, Storage, and Service Level Agreements is warranted. Running applications both internally and externally can double your access costs and may turn out to be cost prohibitive.
Exceptional IT Project Managers understand that complex Data Center initiatives require deep thought in planning and design. Defining the requirements and activities, timelines, budget, resources, facility personnel, risks, hardware and software, third-party vendors, scheduling, and communication is the path to clarity. If a PM does not know the Why behind a project, the project team will be lost.
Let’s start with White Space allocation and placement. Two groups are responsible for floor space allocation: the Data Center Engineering team and Facility Management. Here are some questions to ask: Where are we going to install the equipment? Is the space large enough? Can the floor (raised or slab) bear the load of a fully populated cabinet? Typically, cabinets are the same make and model for ease of rack and stack, cable management, PDU placement, security, and aesthetics. Cabinet dimensions, combined with upper and lower access for power, cable management and conveyance, cooling, and heat evacuation, must be validated.
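To make the floor-loading question concrete, here is a minimal sketch in Python. Every figure in it (cabinet weight, server weight, footprint, floor rating) is an illustrative assumption, not a vendor spec; substitute numbers from the cabinet data sheet and the facility's structural report.

```python
# Floor load sanity check for a fully populated cabinet.
# All figures below are illustrative assumptions; use the vendor's
# cabinet spec sheet and the facility's actual floor load rating.

CABINET_EMPTY_LBS = 300          # assumed empty cabinet weight
SERVER_LBS = 55                  # assumed weight per 2U server
SERVERS_PER_CABINET = 20         # assumed fully populated count
PDU_AND_CABLING_LBS = 60         # assumed PDUs, cable management, patching

FOOTPRINT_SQFT = (24 / 12) * (48 / 12)   # assumed 24 in x 48 in footprint
FLOOR_RATING_LBS_PER_SQFT = 250          # assumed raised-floor rating

total_lbs = (CABINET_EMPTY_LBS
             + SERVER_LBS * SERVERS_PER_CABINET
             + PDU_AND_CABLING_LBS)
load_psf = total_lbs / FOOTPRINT_SQFT

print(f"Cabinet weight: {total_lbs} lbs, load: {load_psf:.0f} lbs/sq ft")
if load_psf > FLOOR_RATING_LBS_PER_SQFT:
    print("Over the floor rating; involve Facility Management before install.")
else:
    print("Within the assumed floor rating.")
```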
Next up: Rack & Stack. It comes down to the type and quantity of hardware being installed. We’ll target servers. Verify the rack units (RU) each server occupies, and verify airflow: cool air intake in the front, heat exhaust in the rear. Calculate BTU/heat dissipation and power consumption. These calculations are critical for Data Center operations, as they affect AC power distribution, UPS capacity, and cooling. Understand hot- and cold-aisle arrangement, ceiling containment, and the potential for liquid cooling. Intelligent PDUs are installed on both sides at the rear of the cabinet and are directly connected to diverse AC power receptacles. It is very important that power cord and receptacle types are defined. High-capacity servers come with two power supplies for redundancy. Split the power cords evenly between the PDUs. Do not block airflow.
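As a back-of-the-envelope aid, here is a minimal sketch of the per-cabinet power and heat calculation, using the standard watts-to-BTU/hr factor of 3.412. The wattages and device counts are assumptions for illustration; use nameplate or measured figures from your own hardware.

```python
# Per-cabinet power draw and heat dissipation sketch.
# Wattages and counts are illustrative assumptions.

WATTS_PER_SERVER = 750       # assumed average draw per server
SERVERS = 16                 # assumed servers per cabinet
TOR_SWITCH_WATTS = 350       # assumed Top of Rack switch draw
BTU_PER_WATT_HR = 3.412      # standard watts -> BTU/hr conversion

total_watts = WATTS_PER_SERVER * SERVERS + TOR_SWITCH_WATTS
total_kw = total_watts / 1000
btu_per_hr = total_watts * BTU_PER_WATT_HR

print(f"Cabinet draw: {total_kw:.1f} kW")
print(f"Heat load:    {btu_per_hr:,.0f} BTU/hr")

# Split the load evenly across the two rear PDUs on diverse AC feeds;
# for redundancy, each feed must be able to carry the full load alone.
print(f"Per-PDU running load: {total_kw / 2:.1f} kW "
      f"(each feed rated for the full {total_kw:.1f} kW)")
```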
Network switch connectivity is extremely important. Network switches can be installed as Top of Rack (ToR) within each cabinet or in a Middle of Row (MoR) configuration. Verify with the Network Engineering team, as the choice affects cable routing and infrastructure capacity. If ToR, use a cable management system to guide and dress cables into the switch. If MoR, use cable tray or ladder rack conveyance to the switch.
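To illustrate how the ToR versus MoR choice drives infrastructure capacity, here is a rough sketch comparing total copper run length for a hypothetical ten-cabinet row. The row geometry and patch lengths are assumptions, not standards.

```python
# Rough copper run-length comparison for ToR vs. MoR switch placement.
# Distances are illustrative assumptions about row geometry.

CABINETS_IN_ROW = 10
SERVERS_PER_CABINET = 16
CABINET_WIDTH_FT = 2.0          # assumed cabinet width
IN_CABINET_PATCH_FT = 6.0       # assumed server-to-ToR patch length
OVERHEAD_ALLOWANCE_FT = 10.0    # assumed riser/dressing slack per MoR run

# ToR: every server patches inside its own cabinet; only fiber uplinks
# leave the cabinet.
tor_total = CABINETS_IN_ROW * SERVERS_PER_CABINET * IN_CABINET_PATCH_FT

# MoR: every server runs over tray/ladder rack to a mid-row cabinet.
mor_total = 0.0
middle = CABINETS_IN_ROW / 2
for cab in range(CABINETS_IN_ROW):
    horizontal = abs(cab + 0.5 - middle) * CABINET_WIDTH_FT
    mor_total += SERVERS_PER_CABINET * (horizontal + OVERHEAD_ALLOWANCE_FT)

print(f"ToR copper: {tor_total:,.0f} ft of in-cabinet patches plus uplinks")
print(f"MoR copper: {mor_total:,.0f} ft routed over conveyance")
```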
Most Network Engineers deploy an independent network management topology to access, configure, and troubleshoot servers, switches, and PDUs. Verify that Engineers have connectivity to all devices.
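A minimal sketch of that verification step: a reachability sweep over the management network. The device names, IP addresses, and the assumption that each device exposes SSH on its management interface are all hypothetical; in practice the inventory would come from your DCIM tooling.

```python
# Out-of-band management reachability sweep (minimal sketch).
import socket

MGMT_DEVICES = {                 # hypothetical inventory
    "server-r1-u10": "10.20.0.11",
    "tor-switch-r1": "10.20.0.12",
    "pdu-r1-a": "10.20.0.13",
    "pdu-r1-b": "10.20.0.14",
}
SSH_PORT = 22            # assumes SSH is enabled on the management NIC
TIMEOUT_SECONDS = 2.0

for name, ip in MGMT_DEVICES.items():
    try:
        with socket.create_connection((ip, SSH_PORT), timeout=TIMEOUT_SECONDS):
            status = "reachable"
    except OSError:
        status = "UNREACHABLE (check cabling/VLAN)"
    print(f"{name:18} {ip:15} {status}")
```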
I strongly advise creating a rack elevation drawing in CAD or Visio detailing rack positions, devices, and connectivity. It is a step most people skip, including Cloud providers.
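One way to keep that elevation useful beyond the drawing is to hold it as data. Here is a minimal sketch; the device names and rack-unit positions are hypothetical.

```python
# A rack elevation kept as data (sketch): RU position of the bottom of
# each device mapped to (device name, height in RU). Version-control it
# alongside the CAD/Visio drawing.

ELEVATION = {
    42: ("cable-mgmt-top", 1),
    40: ("tor-switch-r1", 1),
    20: ("server-r1-u20", 2),
    10: ("server-r1-u10", 2),
    1:  ("pdu-panel", 1),
}

print("RU  Device              Height")
for ru in sorted(ELEVATION, reverse=True):
    device, height = ELEVATION[ru]
    print(f"{ru:>2}  {device:<18}  {height} RU")
```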
AC/DC & UPS Power: Power comes in many flavors. In earlier eras, all network devices were DC-powered; just ask the good folks at Ma Bell. Back then, we had to plan for rectifiers (AC-to-DC conversion) and inverters (DC-to-AC conversion), and since conversion introduces power losses, calculating the power budget was intense. Today, most Data Center equipment is AC-powered. Understanding power consumption is critical to Data Center operations, as every device installed consumes power. Let’s go back to our server and network rack and stack installation. Each cabinet has a power budget, which is enforced for two reasons:
- Commercial power is provided by an external utility. If we consume more than what is provisioned, a spike will show up on the service bill, and the necessary upgrades are costly.
- Uninterruptible Power Supply (UPS) systems are sized to the AC load and are installed with batteries. A UPS system runs in parallel with AC power; when there is a failure, the batteries provide DC power, which is inverted back to AC for the devices. This is why power consumption calculations are so important. A properly sized and measured UPS system can carry the load for an extended period. Some locations have generators installed to back up commercial power. The caveat is that a generator takes 4 to 8 seconds to spin up and restore the necessary amount of power; the UPS system runs the Data Center equipment until the generators have come up to load. As you can see, each component works in tandem, as sketched below. Now, let’s revisit the cabinet power budget. The budget is measured in kilowatts. Older power budgets ran 3, 6, or 10 kilowatts per cabinet. With Artificial Intelligence, Machine Learning, Data Science, Gaming, and other demanding applications that consume CPU, GPU, and memory, power budgets now run 13 to 20 kilowatts per cabinet.
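Here is a minimal sketch of both calculations: the UPS bridge time against the generator's 4-to-8-second start window, and a cabinet check against a modern 13-to-20-kilowatt budget. Every rating in it is an illustrative assumption, not a design value.

```python
# UPS bridge-time sketch: can the battery string carry the load until
# the generator picks it up? All ratings are illustrative assumptions.

CRITICAL_LOAD_KW = 160          # assumed total IT load on the UPS
UPS_BATTERY_KWH = 40            # assumed usable battery energy
UPS_EFFICIENCY = 0.94           # assumed DC-to-AC inversion efficiency
GENERATOR_START_SECONDS = 8     # worst case of the 4-8 second window

usable_kwh = UPS_BATTERY_KWH * UPS_EFFICIENCY
runtime_minutes = usable_kwh / CRITICAL_LOAD_KW * 60
print(f"UPS runtime at {CRITICAL_LOAD_KW} kW: {runtime_minutes:.1f} minutes")
print(f"Generator bridge needed: {GENERATOR_START_SECONDS / 60:.2f} minutes")

# Cabinet power budget check against a modern 13-20 kW allocation.
CABINET_BUDGET_KW = 17          # assumed allocation in the 13-20 kW range
cabinet_draw_kw = 14.2          # assumed nameplate/measured cabinet draw
headroom = CABINET_BUDGET_KW - cabinet_draw_kw
verdict = "OK" if headroom > 0 else "over budget; redistribute load"
print(f"Cabinet headroom: {headroom:.1f} kW ({verdict})")
```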
Structured Cabling & Conveyance: We have all seen the jungle of cables strewn about cabinets in IDFs and MDFs. The hardest part is seeing the cable management guides go unused. Structured cabling routing and grooming is an art. Most Cloud providers are proud of their installations: trays of copper and fiber optics neatly routed and dressed. Power is typically separated from copper cabling so as not to cause electrical interference. Unshielded cabling acts like an antenna and will pick up electrical signals, causing data transmission errors and driving the network team bonkers.
Due to bandwidth capacity, Power over Ethernet (PoE), and Intelligent Building designs, my recommendation is to install Category 6a (Cat 6a). Pay special attention to distances and patch panels, as they all count toward the 100 meter (328 foot) channel limitation. Long-distance and campus fiber runs will require Single Mode Fiber (SMF). Use Multimode Fiber (MMF) for internal runs of around 100 feet, such as server to patch panel or direct to switch. Be very cognizant of the fiber connector type and polarity for all devices and patch panels. Do not play the blame game: verify and validate. If a Structured Cabling vendor is performing the installation, make sure they perform fiber and copper certification testing as part of the installation, including patch cables and patch panels. Request a copy of the test results for safekeeping. It is best to find a problem during installation as opposed to post-installation. You’re spending good money to get it right, so do it right.
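As a closing sanity check, here is a minimal sketch that totals a copper channel against the 100 meter limit. The segment lengths are assumptions from a hypothetical run; the certification tester's report remains the authority.

```python
# Cat 6a channel length check against the 100 m (328 ft) limit.
# Segment lengths are illustrative assumptions for a hypothetical run.

CHANNEL_LIMIT_M = 100.0

segments_m = {
    "server patch cord": 3.0,
    "permanent link (tray/wall)": 85.0,
    "cross-connect jumper": 5.0,
    "switch patch cord": 3.0,
}

total_m = sum(segments_m.values())
print(f"Total channel: {total_m:.0f} m of {CHANNEL_LIMIT_M:.0f} m allowed")
for name, length in segments_m.items():
    print(f"  {name:<28} {length:>5.1f} m")
if total_m > CHANNEL_LIMIT_M:
    print("Over the Cat 6a channel limit; shorten the run or re-route.")
else:
    print(f"Headroom: {CHANNEL_LIMIT_M - total_m:.0f} m")
```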