Data Center Knowledge | News and analysis for the data center industry
Monday, June 5th, 2017
3:00p |
Most Data Center Outages aren’t Caused by Tech Failure
Many critical industries such as nuclear energy, commercial and military airlines—even drivers’ education—invest significant time and resources in developing processes. The data center industry … not so much.
That can be problematic, considering that two-thirds of data center outages are related to processes, not infrastructure systems, says David Boston, director of facility operations solutions for TiePoint-bkm Engineering.
“Most are quite aware that processes cause most of the downtime, but few have taken the initiative to comprehensively address them. This is somewhat unique to our industry.”
See also: British Air Data Center Outage Feeds Outrage at Airline Cost Cuts
Boston is scheduled to speak about strategies to prevent data center outages at the Data Center World local conference at the Art Institute of Chicago on July 12. More about the event here.
He suggests that management is constantly compelled to replace aging infrastructure systems and components, or systems that have caused repeated problems, and is accustomed to adding system capacity to accommodate load growth. In terms of infrastructure, mechanical failure in cooling systems is the biggest generator of failures, but electrical system failures cause far more downtime events because there is so little time to react.
“Each of these efforts involve outside engineering support, so the time required of management is often only that of defining the project and overseeing it.”
While developing processes associated with the most common causes of data center outages may be more time-consuming for management, it’s time well spent. Here are the top three offenses and best practices that Boston recommends following:
- Failure to match a facility’s staff size and shift coverage with objectives for critical operations uptime.
Best practice: Quantify uptime objectives with senior IT management and ensure staffing matches them. If maximum uptime is desired, Boston suggests keeping two individuals on every shift, with additional personnel responsible for training and procedures programs; use single-shift coverage only if an occasional downtime event is acceptable.
- No site-specific training program, including dedicated practice time before the facility begins operation.
Best practice: Assign a single team member as the training program owner, with time to coordinate monthly emergency response training for all team members. Rotate each team member through hands-on practice, isolating an infrastructure system before a maintenance activity and restoring the system to service as activities pop up on the preventive maintenance calendar.
- Inadequate site-specific procedures.
Best practice: Assign a single team member as the procedures program owner, with time to develop (or work with a consultant to develop) the 100 to 200 critical procedures needed for virtually every critical facility. Have each one confirmed for technical accuracy and verify all are clearly understood by the least knowledgeable person on the team.
“I have long suspected that there is a reluctance to devote the initial time required to implement the programs described above,” comments Boston.
These processes should absolutely be implemented with respect to critical operations—those that would negatively impact an organization’s revenue or credibility if they fail. However, for non-critical operations, he suggests focusing on methods for quick restoration.
Data Center World Local, Chicago, is taking place July 12 at the Art Institute of Chicago. Register here for the conference.
3:30p |
Oregon School Licenses Dome Data Center Design to Startup
The unique dome-shaped data center design used for Oregon Health & Science University’s (OHSU) supercomputer facility in Hillsboro is being licensed by a new startup, Server Dome, which plans to build more on a commercially available basis, reported the Portland Business Journal.
Perry Gliessman, director of technology services for the university’s IT Group, is the mastermind behind the design, which encompasses 8,000 square feet, 4MW of power, and racks that can support 25kW of hardware.
More about the data center design: Geodesic Dome Makes Perfect Data Center Shell in Oregon
When Gliessman sat down to design the Hillsboro Data Dome, which opened in 2014, he said he took the need for structural integrity into consideration, as well as the need to support extreme power densities with an economy of space, while using as much free cooling as possible. That meant maximizing outside-air intake and exhaust surface area.
The dome data center design, according to Server Dome, helped reduce the required mechanical systems in the facility to just one type—air handling units. By using natural convection, the university eliminated cooling equipment like air conditioning units, exhaust fans, dehumidifiers, chillers, and CRACs, yielding considerable savings on electricity costs.
The design also resulted in at least a 30 percent reduction in construction costs and an 80 percent reduction in maintenance costs when compared to a traditional brick-and-mortar building.
OHSU built the $22 million Data Dome on the West Campus in Beaverton to share the workload of its primary data center in downtown Portland.
Server Dome described the facility as “virtually maintenance-free”, adding that it should last for 15 to 20 years.
4:51p |
DCK Investor Edge: Why RagingWire is a Data Center Company to Watch
A tailwind from the Internet of Things, 5G networking, and edge computing research by parent NTT Group gives data center provider RagingWire a distinct advantage in competing for large-scale deployments, according to CEO Doug Adams.
Many US data center professionals familiar with the Reno, Nevada-based company’s patented 2N+2 data center design may not be as familiar with its publicly traded corporate parent from Tokyo, NTT Communications. In 2014, NTT invested $350 million to purchase 80 percent of RagingWire, and plans are in place to consolidate 100 percent ownership later this year.
One notable exception would be DuPont Fabros Technology CEO Chris Eldredge. Prior to joining DFT in February 2015, Eldredge was executive vice president of data center services at NTT America. Today, RagingWire and DuPont Fabros are both building large-scale wholesale data centers, targeting hyper-scale public cloud providers and Fortune 1000 enterprise customers.
The latest RagingWire 2N+2 design allows for customization of 1MW vaults (each of which can be further subdivided into four 250kW data halls) within 14MW to 16MW data centers. The design maintains redundancy even during scheduled maintenance. RagingWire’s existing customer deployments range from four or five racks to 20MW.
“Our view is that the cloud is a catalyst for data center demand, not a one-time blip,” Adams said in an interview with Data Center Knowledge. “Over time, the Internet of Things, content, big data, and AI (Artificial Intelligence) will drive greater need for cloud computing and data centers.”
Read more: RagingWire Pursuing Cloud Providers with New Focus on Wholesale
NTT has top-level executives leading dedicated teams and research labs supporting IoT, 5G, edge data centers, and other IT initiatives, including a recent partnership announced with Toyota to research and build out a global data center network to support autonomous vehicles.
NTT Group
NTT Group generates over $100 billion in annual revenues, making it one of the largest telecom and data center players globally. It also has a complex corporate structure, with over 900 subsidiaries. NTT in aggregate has 240,000 employees located in 88 countries who “provide consultancy, architecture, security and cloud services to optimize the information and communications technology (ICT) environments of enterprises.” NTT companies include Dimension Data, NTT DOCOMO, and NTT DATA.
[Image omitted. Source: NTT – June 2017]
There are over 140 NTT-affiliated data centers located in major cities across 19 countries on four continents. In Asia-Pacific, the company serves Tokyo, Hong Kong, Singapore, Malaysia, Indonesia, the Philippines, and Thailand.
The scale of NTT’s data center footprint is easily overlooked when it comes to industry stats reported in the US, likely because of the historic concentration in Asia and multiple data center “brands” which operate semi-autonomously.
The NTT Nexcenter data center “umbrella” includes:
- RagingWire – North and South America
- Netmagic Solutions – India
- Gyron Internet – United Kingdom
- e-shelter facility services – Germany
The RagingWire US data centers are “carrier neutral.” However, the ability to coordinate with NTT engineers on IT architecture allows for some unique global solutions which can involve NTT sub-sea cables and “global tier-1 IP network, the Arcstar Universal One VPN network reaching 196 countries/regions.”
Notably, there is the ability for RagingWire enterprise customers to enter into “consistent contracts worldwide” to help support and distribute workloads globally if required. Customers can choose to contract globally with counterparties backed by NTT’s enormous balance sheet and impressive AA- investment grade rating.
RagingWire Tier 1 Strategy
US Tier 1 markets are attractive to a multinational telecom and data center player like NTT, because they represent such a large piece of the global data center pie.
[Image omitted. Source: RagingWire – June 2017]
Adams mentioned to us back in February that both Chicago and Silicon Valley are Tier 1 markets being targeted for expansion, although the precise timing remains under wraps.
Read more: NTT Names Adams RagingWire CEO, Takes Full Ownership of Company
The NTT sales force has business relationships with 80 percent of the Fortune Global 100. This deep penetration with multinationals should provide RagingWire’s business development team enhanced access to decision makers.
Adams, who recently returned from corporate meetings in Tokyo, now has a runway for CapEx budgets through 2019, making it easier for RagingWire to accelerate expansions into strategic markets. Stay tuned to DCK Investor Edge for further developments on those fronts.
Cost-of-Capital Advantage
The RagingWire 2N+2 design incorporates far more parts and pieces than low-cost competitor CyrusOne’s “Massively Modular” design. However, RagingWire’s cost-of-capital advantage helps to level the playing field when it comes to competing for hyperscale customers interested in large-scale deployments.
Read more: DCK Investor Edge: CyrusOne — Catch Me If You Can
NTT’s AA-/Aa3 bond rating paved the way for a 10-year yen-denominated bond due in March 2023 to be offered at an eye-popping 0.69 percent interest rate. By way of comparison, CyrusOne’s March 2017 private offering of $500 million of senior notes due 2024 and $300 million of senior notes due 2027 priced at 5.000% and 5.375%, respectively.
Digital Realty Trust, the only US REIT with an investment-grade balance sheet and a global wholesale data center footprint, has a BBB-rated balance sheet. Digital’s current weighted average coupon is 3.5 percent, for an average 5.2-year term. DuPont Fabros has a BB/Ba1 rated balance sheet, and unsecured notes average 5.8 percent, for a five-year remaining term.
While none of these are precisely apples-to-apples comparisons, NTT has a clear edge in the capital-intensive data center business.
Bottom Line
Clearly, RagingWire has the horsepower, as well as the scale and operational expertise, to compete with both private equity-backed and publicly traded US data center REITs across Tier 1 markets in North America.
“Our focus is on the top data center markets in North America,” Adams said. “We are listening to our customers tell us where they want their data centers and helping them get there.” He added, “Enterprises, particularly multi-nationals, will be the next wave, as they deploy hybrid data center and cloud architectures.”
RagingWire’s parent can afford to make strategic investments in the US for the long term, without concerns about quarterly performance typically associated with smaller publicly traded companies. Investors should keep a close eye on both RagingWire’s appetite for geographic expansion and its ability to gain market share going forward.
4:53p |
Playboy’s First Data Center, or Birth of the Internet Colo
Peter Ferris has sat in a front-row seat to some of the seminal moments in the development of internet infrastructure as we know it, both as a spectator and as an active participant. He was deeply involved in the establishment of some of the first data centers that turned the combination of real estate and access to networks into a business.
His early customers included Playboy, which at the time (in the early 90s) was one of the most visited websites, hosted on 15 to 20 racks’ worth of data center gear; an NBC executive channel for business users, one of the early internet resources geared toward business; and Netscape, Geocities, and the gaming site Happy Puppy, among others.
And Ferris has sat in that front-row seat ever since, mostly as one of the key execs at Equinix, which today is the world’s largest provider of interconnection and colocation services.
Because of his experience, we figured Ferris would be an ideal guest for the debut episode of The Data Center Podcast, which we’re launching today. We sat down with him after his keynote presentation at Data Center World Global in Los Angeles back in April for a mini lesson in internet history and to get a behind-the-scenes glimpse into how today’s cloud giants and enterprise data center users are building out their digital infrastructure.
Here it is: The Data Center Podcast, Episode 1, available to download or stream.
About The Data Center Podcast
On this show, produced by Yevgeniy Sverdlik and Data Center Knowledge, we’ll be interviewing top business and technology leaders in the data center industry about the latest trends to help you stay informed about the present, knowledgeable about the past, prepared for the future, and, hopefully, entertained. We promise to keep it interesting!
Expect new episodes of the podcast to come out once every two weeks (our next one will feature John Gilmartin, VP and GM, Integrated Systems Business Unit, VMware, to talk about VMware’s cloud strategy now that this pivotal company in the data center industry’s history is no longer trying to be a cloud service provider, and as it adjusts to its new parent Dell Technologies).
You can download or stream The Data Center Podcast on Soundcloud, and we’re working on making it available on iTunes and other platforms in the near future.
Enjoy, and please send us comments and suggestions for the show. We really love those. We also love when you help us spread the word about our content. Please do share it on social media and tell your colleagues and anyone else who’s interested in the data center industry that they should really check out The Data Center Podcast to hear from some of the brightest minds in the market.
5:21p |
Report: Bank of America to Close Three Data Centers
As part of its quest to move operations to a software-defined infrastructure, Bank of America announced that it will close three of its data centers, American Banker reported.
The company disclosed that it would record a $300 million charge in the second quarter as a result, but it’s just a matter of time before the closures save money.
The charge is small compared to the $1 billion to $1.5 billion in annual cost savings CEO Brian Moynihan told shareholders the company has identified from its plan to consolidate and digitize operations. BofA could also sell the three data centers, but Moynihan said no decision had been made yet.
Believing that such an infrastructure is the future of finance, BofA created a private cloud system a few years ago to accommodate massive network, storage, and server capacities to support its approximately 4,600 retail financial centers, 16,000 ATMs, and 34 million active online accounts.
See also: Bank of America Endorses Data Center Clean Energy Buying Principles
Today, Moynihan said, the goal is to have 80 percent of all the bank’s systems running in software-defined data centers within two years.
The bank has also become more savvy and efficient in mobile banking, tripling its digital budget in 2016, according to the report. It added 2.5 million new mobile customers over the past 12 months, increasing the total to 20 million active users of its mobile app. Now, mobile check deposits represent 17 percent of BofA’s deposit transactions.
Typically, its customers process 280,000 mobile deposits per day, an increase of 28 percent year-over-year and equivalent to the deposit volume of 800 financial centers. Each mobile deposit costs the company 90 percent less than an in-branch deposit.
See also: Wall Street Rethinking Data Center Hardware
Savings moving forward could also come from the move to replace paper statements with electronic versions and to convert its fixed-income and equities trading platforms to less laborious digital platforms, which Moynihan referred to as “electronification.”
As part of its long-time cost-cutting measures, BofA will record another $125 million of severance costs as it continues to reduce headcount throughout the company, Moynihan said. The severance costs are not related to the data center closings and are primarily tied to termination of higher-salaried managers.
5:53p |
Digital Realty SVP Schaap Named CEO of Aligned Energy
Aligned Energy, the recently launched data center provider with a unique business model enabled by the design of its cooling system, has appointed a new CEO. Andrew Schaap takes the company’s helm after 11 years at Digital Realty Trust, one of the world’s largest data center providers.
Aligned’s current CEO, Jakob Carnemark, who is also its founder, vice chairman, and CTO, will focus on technology and strategy. In a statement, Carnemark said he’s always planned to eventually bring on another chief exec:
“To continue building upon the successful foundation we have laid at Aligned Energy, it is the right time to bring on a proven data center executive to run the business. I have always said that as the business scales, I would like to bring in a CEO to manage our company so I may focus on accelerating our technology development to serve our clients.”
In his most recent senior VP role at Digital Realty, Schaap focused on global large-scale data center projects for clients, according to the announcement Aligned released Monday, which also said Schaap was part of the executive team that grew the San Francisco-based data center REIT’s revenue to $2 billion.
See also: Aligned Beefs Up Management Team with Hires from Rival Firms
[Photo: Andrew Schaap, CEO, Aligned Energy]
Aligned launched its first data center, located in Plano, Texas, in 2015, and earlier this year it announced that the first phase of the facility had been fully leased. It also announced the launch of its second data center earlier this year, this time in the Phoenix market.
The company has a unique business model for a colocation provider. It offers data center capacity on demand, allowing customers not only to expand capacity, and the corresponding lease cost, in small increments, but also to contract them.
Read more: Modular Cooling System Enables On-Demand Data Center Capacity
This is enabled by a modular cooling system designed by Aligned’s subsidiary Inertech. Aligned Energy, a holding company backed by the hedge fund BlueMountain Capital Management, provides data center services under the name Aligned Data Centers.
6:30p |
How To ‘Refactor’ Monolithic Applications into Microservices
Chris Stetson is Chief Architect of Microservices Engineering at NGINX.
Monolithic applications are extremely common in businesses today. These applications, built using traditional IDEs, are easy to develop and test and—at least at first—to deploy and scale. There’s just one thing: the monolith will only get bigger and more complex, eventually becoming a spaghetti-code monster for organizations looking to respond quickly, reliably, and efficiently to changing business needs and customer demand.
Monoliths are beneficial in certain situations, especially in the early days of app development, because they are a familiar and well-known way of developing applications. Monoliths put business logic into the core of the application, implemented by modules that define services, domain objects, and events. Database and messaging components, or web components that expose APIs or implement a user interface, surround the core and interface with the external world.
However, while developers built these applications in a modular way, they deployed them as monoliths—for example, Java applications packaged as WAR files and deployed on application servers or packaged as self-contained executable JARs, or Rails and Node.js applications packaged as a directory hierarchy.
There are many benefits to this model in the early stages of a project, but as applications grow and add features, things start to get complex and less agile over time. And the more successful an application is, the more likely it is to become difficult to manage, comprising the original code and line after line (after line after line) of added code.
At this point, continuous delivery just isn’t possible and the monolith needs to be broken down into smaller pieces. Regression testing alone on a large monolith can take a prohibitively long time, especially if you have manual UI or pen testing requirements in your process. Indeed, it will be difficult for developers—especially those new to the application—to work on it at all, let alone truly understand it. The time it takes to fix bugs and address vulnerabilities—not to mention add new features—will increase exponentially.
All of this would be problematic in any case, but companies today need to be able to respond efficiently and effectively to new customer demands and changing market dynamics. If making a change to a critical application is the equivalent of turning around an ocean liner, your company’s ability to compete will be severely compromised.
As a result, monolithic applications can be a barrier not only to innovation but to success: adopting new languages or technologies can be challenging, and a monolithic application can be difficult to scale, limiting the business’s ability to accommodate increased demand. With the popularity of DevOps, microservices, and containers, monoliths can also be a barrier to hiring new talent who want to work on the latest technologies.
So, what’s a company to do? Well, Amazon, eBay and Netflix are great examples of successful and very large companies deploying applications using microservice architectures. In this model, applications are split into a small set of interconnected services based on business capability. Each service implements a set of distinct features—in effect, serving as a mini-application with its own business logic, adapters and even database schema. Services can be grouped together in different ways, with the sum of their parts serving as new products.
The microservices model is “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API,” according to Martin Fowler, an author, software developer and early proponent of the idea of microservices. “These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”
The Time Is Now for Microservices
Microservices is not a new idea, but the concept is more important than ever in the current, dynamically changing business environment. Each service has a well-defined boundary and is accessed through some sort of API. The model enforces a level of modularity that makes individual services much faster to develop, test, and deploy. Your developers get the added benefit that services are much easier to understand and maintain.
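To make the boundary idea concrete, here is a minimal sketch of a single-capability service exposed only through an HTTP resource API. The service name, routes, and data are hypothetical, and Python with Flask is used purely for illustration; it is not an implementation prescribed by the article.

```python
# Minimal single-capability microservice sketch (hypothetical "inventory"
# service; names and routes are illustrative, not from the article).
# The service owns one narrow business capability and exposes it only
# through an HTTP resource API -- its well-defined boundary.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the service's private data store.
_inventory = {"sku-123": 42, "sku-456": 7}

@app.route("/inventory/<sku>", methods=["GET"])
def get_stock(sku):
    """Return current stock for one SKU, or 404 if it is not tracked."""
    if sku not in _inventory:
        return jsonify(error="unknown sku"), 404
    return jsonify(sku=sku, quantity=_inventory[sku])

@app.route("/inventory/<sku>/adjust", methods=["POST"])
def adjust_stock(sku):
    """Apply a delta to a SKU's stock level, e.g. {"delta": -3}."""
    delta = int(request.get_json(force=True).get("delta", 0))
    _inventory[sku] = _inventory.get(sku, 0) + delta
    return jsonify(sku=sku, quantity=_inventory[sku])

if __name__ == "__main__":
    app.run(port=8101)  # each service runs in its own process
```

A consumer sees only the HTTP contract (GET /inventory/&lt;sku&gt;, POST /inventory/&lt;sku&gt;/adjust); how the service stores or computes stock can change freely behind that boundary.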
The microservices model also makes it easier for teams of developers to focus on a single service rather than the entire monolithic application. Developers are not hamstrung by whatever technologies were used to build the original monolith, and they can choose the technology, and iteration timetable, that best suits the service task. Likewise, each service has its own database schema. This goes against the idea of an enterprise-wide data model, but it enables developers to use the type of database that is best suited to the service’s needs (known as the polyglot persistence architecture).
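As a sketch of the per-service schema point, the snippet below gives a hypothetical “orders” service its own private SQLite store; another team’s service could use a document or key-value database instead, and the two never share a schema. The table, columns, and file name are assumptions made for illustration.

```python
# Per-service persistence sketch (polyglot persistence): this schema belongs
# to a hypothetical "orders" service alone and is never shared with the
# monolith or with other services, which may use entirely different stores.
import sqlite3

def init_orders_store(path: str = "orders_service.db") -> sqlite3.Connection:
    """Create (if needed) and return the orders service's private database."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS orders (
               order_id   TEXT PRIMARY KEY,
               customer   TEXT NOT NULL,
               total_usd  REAL NOT NULL,
               created_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    conn.commit()
    return conn

if __name__ == "__main__":
    conn = init_orders_store()
    conn.execute(
        "INSERT OR REPLACE INTO orders (order_id, customer, total_usd) VALUES (?, ?, ?)",
        ("ord-1", "acme-corp", 99.50),
    )
    conn.commit()
    print(conn.execute("SELECT order_id, customer, total_usd FROM orders").fetchall())
```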
If you think that the microservices model can help tame your company’s data beast, here are three things to think about to start off (or continue) on the right foot:
- Quit while you’re “ahead.” (Or, don’t make an unwieldy application more unwieldy): When you are implementing new functionality, don’t add more code to the monolith. Instead, turn the new code into a stand-alone microservice (see the routing sketch after this list).
- Split the front end from the back end: With most monolithic applications, there is a clean separation between the presentation logic on one side and the business and data-access logic on the other, including an API that can act as a seam. “Splitting” the application along this seam creates two smaller applications.
- Turn existing modules into stand-alone microservices: Each time you extract a module and turn it into a service, the monolith shrinks. Once you have converted enough modules, the monolith will cease to be a problem.
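The routing sketch below illustrates the first tactic under stated assumptions: a thin gateway sends a new feature’s traffic to a stand-alone microservice while every other path still reaches the monolith. The backend addresses, the /recommendations feature, and the Flask-plus-requests approach are illustrative choices, not the author’s prescribed implementation.

```python
# Strangler-style routing sketch: new functionality is built as its own
# microservice and routed to directly, while unchanged paths still hit the
# monolith. Hostnames, ports, and the /recommendations feature are assumed.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

MONOLITH = "http://localhost:8000"         # existing monolithic app (assumed)
RECOMMENDATIONS = "http://localhost:8102"  # new stand-alone microservice (assumed)

def forward(base_url: str) -> Response:
    """Proxy the incoming request to the chosen backend and relay the reply."""
    upstream = requests.request(
        method=request.method,
        url=f"{base_url}{request.full_path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
    )
    return Response(upstream.content, status=upstream.status_code)

@app.route("/recommendations/<path:subpath>", methods=["GET", "POST"])
def recommendations(subpath):
    # New feature: never added to the monolith's code base.
    return forward(RECOMMENDATIONS)

@app.route("/", defaults={"subpath": ""}, methods=["GET", "POST"])
@app.route("/<path:subpath>", methods=["GET", "POST"])
def everything_else(subpath):
    # All existing functionality continues to reach the monolith untouched.
    return forward(MONOLITH)

if __name__ == "__main__":
    app.run(port=8080)
```

As modules are later extracted from the monolith (the third tactic), they get their own routes here, and the monolith’s share of traffic shrinks until it is no longer a problem.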
Microservices are not perfect, nor are they a cure for whatever application issues are ailing a company. However, microservices are helping companies to better align their business and IT needs—and to tame the monolithic beast.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.