Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 8th, 2013
1:00p | The Power of a Consumable Data Center Network Design
Cloud computing has created new types of services, new types of delivery infrastructure, and a new type of data center demand. Unfortunately, today's data communications networks are not able to keep pace with this dynamic business environment, and they're struggling to deliver consistent, on-demand connectivity. Enter the consumable data center network design.
Before the move to the cloud, enterprises had to purchase large compute systems to meet the peak processing needs of a limited set of specific events, such as financial milestones (month-end or year-end close) or annual retail events (holiday shopping). Outside of those events, the systems were under-utilized, making this approach expensive in terms of both CAPEX and OPEX and requiring significant outlay for power, space and air-conditioning. From this came the concept of the cloud-based data center: infrastructure that allows an organization and its IT department to consume additional compute and storage cycles on an as-needed basis. Peak demands can be provisioned "just in time," lowering operational costs and allowing compute resources to be shared across applications.
This white paper from Nuage Networks examines the real technology behind a consumable data center network: creating direct transparency between applications, building easier-to-manage networking environments, and eliminating vendor lock-in. This white paper outlines:
- The complete abstraction of the application from the data center
- Customer self-fulfillment – Cloud Management Systems (CMS)
- Maximizing network compute cycles with Virtualized Services Platforms (VSPs)
- Understanding next-generation networking with Virtual Service Controllers, Virtual Routing and Switching, and the Virtualized Services Directory
Download this white paper today to learn how a fully virtualized networking plane can help create a truly consumable data center network. As reliance on the modern data center continues to increase, the applications and services hosted on these platforms will become even more critical. With that in mind, an agile networking infrastructure can help meet the demands of a growing business.
2:00p | Alcatel-Lucent's CloudBand Hopes To Advance NFV Community
Alcatel-Lucent transforms itself with a focus on enabling the Network Functions Virtualization community, Emulex launches a purpose-built NetFlow appliance, and EdgeCast offers a globally distributed DNS service.
Alcatel-Lucent advances NFV with CloudBand Ecosystem
Speaking at the opening of its cloud research center in Tel Aviv, Alcatel-Lucent (ALU) CEO Michel Combes outlined The Shift Plan – the company’s industrial transformation from telecoms generalist to specialist in IP Networking and Ultra-Broadband Access – which will harness new technologies to accelerate the move to the cloud by service providers. The company hopes to leverage Network Functions Virtualization (NFV) technology, which enables network services to be deployed on a shared cloud infrastructure instead of dedicated, purpose-built hardware. To advance the cause, the company launched its CloudBand Ecosystem Program, a community of service providers, developers and vendors adopting NFV. Developers and vendors can access tools and test applications to ensure they scale and interact within a simulated cloud environment before they reach a service provider’s network. Alcatel-Lucent already uses the platform at its CloudBand Innovation Center (CIC) to successfully ‘onboard’ its own virtualized applications for the carrier cloud.
"Alcatel-Lucent is dedicated to helping service providers advance quickly in the area of NFV," said Roy Amir, Vice President, Strategy & Ecosystem, CloudBand at Alcatel-Lucent. "The CloudBand Ecosystem Program aims to do just that, providing a workspace where companies can access CloudBand and collaborate and learn from each other. This will help service providers adopt a completely new NFV operational model using services and solutions that have been developed with them and their customers' needs in mind."
Emulex launches NetFlow appliance
At Interop New York, Emulex announced the new EndaceFlow 3040 NetFlow generator appliance. The appliance is purpose-built for high-density 10Gb Ethernet networks and generates 100 percent accurate NetFlow records on up to four Ethernet links at line rates of up to 10Gbps. With 100 percent NetFlow generation, new threats to network security and performance can be detected, identified and resolved more easily, letting security teams spot a wider range of network anomalies and intrusions, identify network choke points that impact application performance, and follow up with packet-based network recording and analysis tools.
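Conceptually, NetFlow generation boils down to grouping packets into flows keyed by the classic 5-tuple and keeping per-flow counters that are later exported to collectors. The short Python sketch below illustrates only that general idea; the packet fields and sample data are hypothetical, and this is not Emulex's implementation.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets into NetFlow-style flow records with packet/byte counters.

    `packets` is any iterable of dicts carrying the 5-tuple fields plus a
    byte length -- a stand-in for whatever the capture hardware emits.
    """
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return flows

# Example: two packets in the same flow, one in another.
sample = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 443,
     "dst_port": 51200, "proto": "tcp", "length": 1500},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 443,
     "dst_port": 51200, "proto": "tcp", "length": 900},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2", "src_port": 53,
     "dst_port": 40000, "proto": "udp", "length": 120},
]
for key, record in aggregate_flows(sample).items():
    print(key, record)
```

The value of doing this at full line rate, as the appliance claims, is that no flow goes unrecorded, so anomalies can't hide in sampled-away traffic.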
“As enterprises move more deeply into the latest data center technologies, such as 10GbE, server virtualization and software-defined networking, they are finding that visualizing what is happening in their networks has become more challenging,” said Lee Doyle, principal analyst, Doyle Research. “This is compounded by the fact that many tools that worked well at 1Gbps speeds simply have not scaled up to 10Gbps. This has critical implications for the ways that enterprises approach security monitoring, forensics and network performance management, which can only be addressed by tools that are designed to enable network visualization at 10Gbps speeds and above.”
EdgeCast launches DNS service
EdgeCast Networks announced a new globally distributed DNS (Domain Name System) service. The new service offers enterprise-grade DNS features and functionality with a simple, cost-effective pricing structure, and the company says customers switching from competitors are likely to realize significant savings while benefiting from superior performance and functionality. "In third-party testing, EdgeCast Route was the fastest overall performer among all the major DNS providers – in many cases by a substantial margin," said Ted Middleton, VP of product management at EdgeCast Networks. "Coupling that performance with sophisticated features like near instant health-checks, automatic failover, load balancing and advanced policy (geography or network-based) routing, EdgeCast Route is clearly differentiated against any other managed DNS service. This combination of performance and features – offered at prices that are among the most aggressive in the industry – brings a very powerful new alternative to the market."
2:15p | Big Data News: ScaleOut Software Releases hServer V2
This week's big data news includes: ScaleOut Software releases a new edition of hServer that runs Hadoop MapReduce on live data; Concurrent partners with services companies to bring big data to the enterprise; and MapR's Hadoop platform helps Xactly enhance its offering.
ScaleOut Software Releases hServer V2
ScaleOut Software announced the release of hServer V2, incorporating new technology that runs Hadoop MapReduce on live data. ScaleOut hServer V2 provides a self-contained execution engine for Hadoop MapReduce applications that significantly accelerates performance and eliminates overheads inherent in standard Hadoop distributions. The new release provides low-latency access to Hadoop and enables fast, concurrent access to and updating of data sets held in its in-memory data grid while continuous MapReduce analyses are being performed. hServer is available in both a free community edition and several commercial editions. "This release marks a huge step forward for real-time data analytics," said Bill Bain, ScaleOut Software's CEO. "By enabling real-time analytics for Hadoop, which has emerged as by far the most popular platform for analyzing big data, we aim to dramatically improve the effectiveness with which organizations can manage their live data. Reducing Hadoop's execution time by more than an order of magnitude will make a tremendous difference in the ability to better understand – and predict – key patterns and trends with live, fast-changing data."
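For readers new to the model, MapReduce over in-memory data reduces to a map step over records already held in memory, a group-by-key, and a reduce over each group. The generic Python sketch below illustrates the pattern only; it is not ScaleOut's hServer API, and the trade records are invented.

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """Generic in-memory MapReduce: map each record to (key, value)
    pairs, group the values by key, then reduce each group."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return {key: reduce_fn(values) for key, values in groups.items()}

# Example: total trade volume per ticker over "live" in-memory records.
trades = [
    {"ticker": "ACME", "shares": 100},
    {"ticker": "ACME", "shares": 250},
    {"ticker": "GLOBEX", "shares": 75},
]
volume = map_reduce(
    trades,
    map_fn=lambda t: [(t["ticker"], t["shares"])],
    reduce_fn=sum,
)
print(volume)  # {'ACME': 350, 'GLOBEX': 75}
```

The point of running this against an in-memory grid rather than HDFS is that the records can be updated continuously while analyses like the one above keep executing.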
Concurrent Forges Enterprise Partnerships
Big data application platform company Concurrent announced a new partnership with Think Big Analytics, a provider of data science and engineering services for big data and analytics projects known for its strategic planning, implementation and training expertise. Think Big provides services that help enterprises execute a big data strategy quickly and with low risk, ensuring they have the right foundation in place to make the most of their big data investments. "In order to achieve big data breakthroughs, companies need to invest in both technology and people. Our partnership with Concurrent addresses both aspects as enterprises can tap into the power of the Cascading application framework while accessing the expertise and training of the Think Big team. After looking at other approaches, we've decided to support Cascading as it offers a proven solution for Big Data application development on Hadoop by solving real business problems at enterprise scale."
Xactly Deploys MapR Hadoop Distribution
MapR announced that Xactly, a provider of cloud-based incentive and sales performance management solutions, has deployed the MapR Distribution for Hadoop. Xactly decided to implement Hadoop within its IT infrastructure to scale effectively and efficiently to meet its growth demands, and embarked on a comprehensive review of available technologies, including MapR. "Right from the start, we saw that MapR had a very different mentality. They knew what it meant to operate Hadoop in a transactional production data center and understood that this was not an academic exercise; it was mission-critical to our business," said Ron Rasmussen, VP of engineering, Xactly Corporation. "We're running customer data and billions of transactions through our systems architecture 24×7, worldwide. Our customers demand high performance and the system software background of the MapR team was a major differentiator."
4:00p | MongoDB Closes $150 Million Financing Round
Seeking to further expand on its already explosive growth, NoSQL database company MongoDB announced that it has secured $150 million in financing, led by a global financial services company and by certain funds and accounts advised by T. Rowe Price Associates, Inc., with additional new investors Altimeter Capital and salesforce.com. The round includes participation from existing investors Intel Capital, NEA, Red Hat and Sequoia Capital. It marks the largest single funding round ever for a database company and brings the six-year-old company's total funding to more than $231 million.
“Adoption of MongoDB has grown explosively over the last few years,” said Max Schireson, CEO at MongoDB. “This funding will allow us to continue to invest in the technology and the global operation our customers require. Building the product and company to bring greater agility and scalability to how organizations manage data will require a large and sustained investment. With this additional funding we will have the staying power to make these investments.”
As a big data and NoSQL database leader, MongoDB has built a vast community of developers and supporters and attracted some big-name technology investors as well. The company will use the new funds to further invest in the core MongoDB project as well as in MongoDB Management Service, a suite of tools and services for operating MongoDB at scale. In addition, MongoDB will extend its efforts in supporting its growing user base throughout the world.
"With this round, MongoDB establishes itself as the database of the future, with by far the strongest product, community, team and financial backing in the industry," said Luis Robles, venture capitalist at Sequoia Capital. "They are in a very large and competitive market, but they have all the ingredients to be big winners and we are delighted to be their business partners."
5:30p | Finally, the Cure for Finger Pointing: Cloud and Network Monitoring
IT managers ask themselves every day: how can I run a cloud-ready environment as efficiently as possible? How do I ensure that our cloud platform doesn't fall victim to the "finger pointing" problem?
Here's the reality: without a truly robust cloud and network monitoring solution, monitoring your environment means relying on partial information from multiple sources. This prevents you from easily determining the root cause of, and solution for, performance issues within your cloud environment.
Join Kaseya for this webinar, which outlines the key concepts behind creating a robust cloud and network monitoring platform. As your cloud environment continues to grow, it will be vital to resolve problems quickly and to control resources from a single management console.
Register for this webinar today and learn how to:
- Perform rapid root cause analysis and quickly resolve issues
- Easily manage business services and prove Service Level Agreements
- Monitor and manage cloud, hybrid cloud, on-premises, virtualized and distributed environments from a single place
It's time to stop finger pointing and start managing your environment centrally, proving SLAs, and quickly resolving performance issues. Increasing efficiency within your cloud environment not only helps performance, it also has a direct impact on your users: they become more productive, and you can know with certainty that you are delivering a high-quality service to the business.
6:00p | Risky Business: When Disaster Strikes With No Recovery Plan
Donna Johnson is Director of Product Marketing at Talari Networks.
According to a recent survey by the Disaster Recovery Preparedness Council (DRP), “Seventy-two percent of survey participants, or nearly three out of four companies worldwide, are failing in terms of disaster readiness scoring ratings of either a D or F grade. Only 28 percent scored an A, B or C passing grade with the remaining 72 percent of respondents at risk.”
With experts estimating the average cost of downtime at $5,000 per minute, many organizations are at significant risk when it comes to the reliability of their data environments.
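To put that estimate in perspective, here is a quick back-of-envelope calculation, assuming the $5,000-per-minute figure holds:

```python
COST_PER_MINUTE = 5_000  # industry estimate cited above, in dollars

for outage_minutes in (15, 60, 8 * 60):  # a blip, an hour, a full workday
    cost = outage_minutes * COST_PER_MINUTE
    print(f"{outage_minutes:>4} min outage ≈ ${cost:,}")

#   15 min outage ≈ $75,000
#   60 min outage ≈ $300,000
#  480 min outage ≈ $2,400,000
```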
Preparing for a Disaster
A key part of almost every disaster recovery plan is backups. WAN Virtualization speeds data backup through its ability to use all the bandwidth in an aggregated WAN simultaneously for a single session. This removes the bandwidth restriction of any one link and sends data in parallel through multiple links, increasing throughput. WAN Virtualization also ensures backup processes have enough bandwidth to complete successfully: a minimum amount of bandwidth can be reserved for the backup process, preventing other, lower-priority processes from crowding out backups.
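Conceptually, that comes down to striping one backup session across every available link while guaranteeing the backup a reserved floor of bandwidth. The Python sketch below is a simplified illustration of that allocation logic; the link names and figures are invented, and this is not Talari's implementation.

```python
def allocate_backup_bandwidth(links_mbps, backup_reserved_mbps):
    """Aggregate all WAN links for one backup session, while keeping a
    reserved floor so lower-priority traffic can't crowd the backup out.

    links_mbps: dict of link name -> usable bandwidth in Mbps
    backup_reserved_mbps: minimum bandwidth guaranteed to the backup
    """
    aggregate = sum(links_mbps.values())
    # The backup may burst up to the full aggregate, but never drops
    # below its reserved floor even when other traffic is present.
    floor = min(backup_reserved_mbps, aggregate)
    return {"aggregate_mbps": aggregate, "backup_floor_mbps": floor}

# Illustrative site with one MPLS circuit and two broadband links.
links = {"mpls_t1": 1.5, "cable": 50.0, "dsl": 20.0}
print(allocate_backup_bandwidth(links, backup_reserved_mbps=10.0))
# {'aggregate_mbps': 71.5, 'backup_floor_mbps': 10.0}
```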
Preventing a Disaster
While true natural disasters are rare, network outages are not. According to the DRP study, 54 percent of disasters are caused by software or network failure. Handling these failures without downtime or application outage is key to a business's DR (Disaster Recovery) strategy. WAN Virtualization detects network outages within a split second and switches traffic to alternate links and routes, preventing an outage from becoming a disaster.
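The heart of sub-second failover is a simple control loop: probe each path continuously and steer traffic away from any path that misses too many health checks in a row. The stripped-down sketch below is purely illustrative, not vendor code.

```python
import time

def choose_active_path(paths, probe, max_missed=3):
    """Pick the highest-preference path that has not yet missed
    `max_missed` consecutive health probes.

    paths: dict of path name -> consecutive missed-probe counter,
           listed in order of preference
    probe: callable(path_name) -> True if the probe got a reply
    """
    for name in paths:
        if probe(name):
            paths[name] = 0             # healthy: reset its counter
        else:
            paths[name] += 1            # missed another probe
        if paths[name] < max_missed:
            return name                 # still within tolerance: use it
    return None                         # every path is down: raise an alarm

# Illustrative loop: probing every 100 ms means a dead link is abandoned
# within a few hundred milliseconds.
paths = {"mpls_primary": 0, "broadband_backup": 0}
probe = lambda name: name != "mpls_primary"   # simulate the primary going dark

for _ in range(5):
    print("active path:", choose_active_path(paths, probe))
    time.sleep(0.1)
# Traffic shifts to broadband_backup after the third missed probe.
```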
Recovering From a Disaster
Many organizations have a backup data center ready to take over when a primary data center fails. WAN Virtualization’s geographic high availability options mesh seamlessly with this type of recovery plan. This is done using a second WAN Virtualization unit positioned at the backup data center that is automatically maintained with the same configuration as the primary unit. When a disaster causes the backup data center to take over, WAN Virtualization units will immediately direct traffic using the inbound and outbound WAN links from the backup data center.
How Some Companies Are Using WAN Virtualization as Part of Their DR Strategy
Companies of all sizes, industries and geographic locations are using WAN Virtualization as part of their DR strategies. Although these companies otherwise have little in common, the one thing they do share is the need for 24/7 network resiliency and uninterrupted business continuity. Here are some examples:
- Maricopa Region 911 is a first-responder consortium of local municipalities in the greater Phoenix metro area that handles emergency VoIP 911 calls dispatched through 25 call centers. These calls are mission-critical and must be transmitted without the risk of any dropped links or failures. Since deploying WAN Virtualization, Maricopa has never experienced its call centers going down – it's about performance, reliability, cost savings and, ultimately, saving lives.
- TEAL Electronics specializes in the design and manufacturing of custom power subsystems for OEMs that require high power and contain sensitive electronics. Prior to using WAN Virtualization, its five manufacturing facilities worldwide were connected to the primary data center as remote sites over single or bonded T1s running MPLS for corporate-wide transmission. Data from every remote site was backed up to the main data center nightly, and the volume of backup data often exceeded bandwidth capacity, creating congestion on the network and causing backups to run beyond their off-hours window, forcing IT to manually stop and restart them. If a T1 link went down, there was no backup circuit and the site would be cut off from the data center and any applications hosted on the line; a failure could idle an entire plant, costing thousands of dollars. After deploying WAN Virtualization, TEAL swapped expensive MPLS circuits for less costly links offering two to five times the bandwidth at 74 percent of the cost; failover is now seamless and transparent to users and downtime non-existent; and productivity is high, as employees are assured full-time access to critical applications and no longer have to worry whether a data backup has finished.
- Meritrust Credit Union is the largest credit union in Kansas, with 220 employees and 14 branches. Each branch was connected to the data center over a single 1.544 Mbps T1 circuit; if a link went down, that branch would lose connectivity to both applications and data hosted at the central site. After switching to WAN Virtualization, Meritrust achieved seamless failover of network links through intelligent rerouting of traffic, full bandwidth utilization, no dropped sessions, and a fully integrated disaster recovery strategy that ensures a smooth switch to the backup data center.
By using WAN Virtualization technology as a key part of a DR strategy, network operators can ensure a more resilient network topology and make it much less likely that a single link failure, link congestion, or even a complete data center shutdown becomes a catastrophic disaster.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:17p | RagingWire Boosts DCIM Offering With CA Technologies
The new RagingWire data center in northern Virginia features colorful, branded air handler units. RagingWire has worked with CA Technologies to enhance its N-Matrix DCIM offering. (Photo: RagingWire)
RagingWire's N-Matrix Data Center Infrastructure Management (DCIM) solution is yet another example of a multi-tenant data center provider adding value beyond space, power and pipe. The company announced during Data Center World that it has further enhanced N-Matrix with the inclusion of CA DCIM from CA Technologies.
N-Matrix is RagingWire’s proprietary software which integrates more than 20 infrastructure management applications. Customers use it to receive detailed power and security reports via an interactive online portal. The partnership with CA means that N-Matrix is getting more capabilities, and the RagingWire customer is getting more value.
The companies are working together to tailor, integrate and leverage several CA DCIM capabilities, such as monitoring, analysis, asset management and 3D visualization, in RagingWire's data centers in Ashburn, Virginia, and Sacramento, California.
As part of RagingWire's N-Matrix system, CA DCIM technology further improves critical infrastructure reliability with data-trending capabilities that catch equipment operating deficiencies before failure, along with visualized and automated notification and alarm functions.
With the addition of CA DCIM, RagingWire’s customers will now have access to visualization and data feeds via a customer portal, through web services APIs, and through CA’s iPad and Android applications.
“While many other data center providers are still formulating their DCIM strategies, RagingWire is already delivering concrete benefits and added value to customers with the N-Matrix DCIM system it has running in production today,” said Terrence Clark, senior vice president, infrastructure management, CA Technologies. “By making the CA DCIM solution a core element of their N-Matrix system, RagingWire is both validating our industry leadership in DCIM and raising the bar for performance and transparency in the data center services market.”
The data center is getting smarter. This isn’t the first instance of a multi-tenant data center provider moving to add DCIM, or in the case of RagingWire, further improving DCIM capabilities to bring value to the customer.
“The data center industry has been criticized for a lack of transparency regarding data center environmental conditions, bandwidth, and power utilization. RagingWire is changing that,” said William Dougherty, vice president of information technology at RagingWire. “We vetted a large number of solution providers in our search for a DCIM solution that met our objectives of excellent customer experience and integrated deep-dive data analysis capabilities. CA Technologies was the provider that stood out to us with respect to our specifications.”
"Service providers and other data center operators increasingly recognize that deploying DCIM enables improved infrastructure monitoring and management," said Rhonda Ascierto, research manager, data center technologies and eco-efficient IT for 451 Research. "But the fuller potential of DCIM for a data center provider is reached when that value is delivered back to its customers. The approach of companies like RagingWire and CA Technologies illustrates what can be done when the offerings of data center and software companies are brought together."
6:30p | AMD SeaMicro Servers Will Power New Verizon Cloud
A customized version of the AMD SeaMicro SM15000 server will power a new enterprise IaaS offering from Verizon Business. (Photo: Verizon)
Seeking to optimize for speed and performance, Verizon will use AMD SeaMicro SM15000 servers to link hundreds of cores together in a single system to power its new Infrastructure as a Service (IaaS) cloud platform and cloud-based object storage service. Verizon and AMD co-developed additional hardware and software technology on the SM15000 server that the companies say provides unprecedented performance and best-in-class reliability, backed by enterprise-level service level agreements (SLAs).
“Verizon created the enterprise cloud, now we’re recreating it,” said John Stratton, president of Verizon Enterprise Solutions. “This is the revolution in cloud services that enterprises have been calling for. We took feedback from our enterprise clients across the globe and built a new cloud platform from the bottom up to deliver the attributes they require.”
In 2011 Verizon acquired Terremark, whose Enterprise Cloud offering made it an early leader in the corporate cloud computing space. With its new offering, Verizon has re-engineered the infrastructure and software driving the platform.
Compute and Storage
With Verizon Cloud Compute, virtual machines can be set up in seconds, and users will be able to determine and set virtual machine and network performance, providing predictable performance for applications even during peak times. Storage is an object-addressable, multi-tenant platform that can be configured to attach to multiple virtual machines. Verizon says its cloud storage overcomes the latency issues that have plagued many traditional storage offerings, providing improved performance.
“This is a breakthrough approach to how cloud computing is done,” said Bryson Koehler, chief information officer at The Weather Company, the nation’s leading provider of weather forecasts and information. “Weather is the most dynamic dataset in the world, and we also use big data to help consumers better plan their day and help businesses make intelligent decisions as it relates to weather. As a big data leader, a major part of The Weather Company’s go-forward strategy is based on the cloud, and we are linking a large part of our technical future to these services from Verizon.”
Verizon and AMD SeaMicro
Verizon and AMD's SeaMicro engineers worked for over two years to create the new platform. The SM15000-based infrastructure will provision virtual machines in seconds, offer precise server configuration options, and provide traffic isolation, data encryption, and data inspection with firewalls that achieve Department of Defense and PCI compliance levels.
“Verizon has a clear vision for the future of the public cloud services—services that are more flexible, more reliable and guaranteed,” said Andrew Feldman, corporate vice president and general manager, Server, AMD. “The technology we developed turns the cloud paradigm upside down by creating a service that an enterprise can configure and control as if the equipment were in its own data center. With this innovation in cloud services, I expect enterprises to migrate their core IT services and mission critical applications to Verizon’s cloud services.”
Early Kudos from Gartner
Cloud service launches are a dime a dozen these days. But upon its launch last week, the Verizon Cloud was dubbed “technically innovative” by Gartner analyst Lydia Leong, an influential voice among cloud analysts.
“The Verizon Cloud architecture is actually very interesting, and, as far as I know, unique amongst cloud IaaS providers,” Leong wrote on her CloudPundit blog. “It is almost purely a software-defined data center.
“It’s an enormously ambitious undertaking,” she added. “It is, assuming it all works as promised, a technical triumph — it’s the kind of engineering you expect out of an organization like AWS or Google, or a software company like Microsoft or VMware, not a staid, slow-moving carrier (the mere fact that Verizon managed to launch this is a minor miracle unto itself).”
DCK's Rich Miller contributed to this story.
7:00p | Online Poker a Potential Boost for Nevada Data Centers
Switch was the first Nevada data center approved to host online gambling sites, which recently became legal in Nevada. Online gaming may represent a $400 million business for Nevada. (Photo: Switch)
Several U.S. states are looking to legitimize and legalize online gambling, with Nevada leading the pack. The launch of online poker services represents an opportunity for data centers that can meet the regulatory standards for housing gaming infrastructure. Last May, colocation provider Switch announced that its SuperNAP data center had been approved as a registered hosting center for online gaming, and now ViaWest says it has gained approval for its new Lone Mountain facility in Nevada.
Nevada is breaking ground for online gaming nationally, with new laws allowing online poker passed in February of this year. The services are restricted to players at least 21 years old and physically located in Nevada. Station Gaming (Ultimate Poker) and Caesar’s (World Series of Poker) are the first companies to launch legal online poker operations in Nevada.
New Jersey and Delaware also have similar online gaming initiatives, but there haven’t been any announcements from data centers in those states.
Online gambling is projected to be a $7.4 billion business in the U.S. by 2017, according to researcher H2 Gambling Capital, which says Nevada will represent about $400 million of that total. After years of debate, the states adopting online gaming are tightly regulating early entries, and hosting in approved facilities gives players some assurance, so providers like Switch and ViaWest stand to benefit.
Nevada gaming regulators conducted rigorous inspections of both the Switch and ViaWest facilities and determined they met the requirements for hosting mission-critical gaming data. Gaining the new designation can assure licensed gaming entities of the highest availability and security for their outsourced critical data and disaster recovery.
The online poker scene imploded in 2011 when the biggest sites (Full Tilt Poker, PokerStars and Absolute Poker) were charged with fraud, money laundering, and violating gambling laws. While most online gambling is hosted offshore, a trusted data center located within the state goes a long way toward legitimizing what has been a fringe, albeit huge, industry.
Lone Mountain Approved
ViaWest's Lone Mountain data center, located in Las Vegas, has been registered as a hosting center with the State of Nevada Gaming Control Board (NGCB).
“As a registered hosting center, ViaWest can continue its positive contribution to the Las Vegas economy by housing business-critical operations for the gaming industry,” states Michael Vignato, Las Vegas General Manager and Regional Vice President for ViaWest. “Our newest Vegas facility, Lone Mountain, has ample space to serve this growing industry and offers the highest levels of fault tolerance, security and energy efficiency. We look forward to continuing to support and serve gaming organizations throughout Nevada.”
Lone Mountain is the first Tier IV Design-Certified multi-tenant facility in North America. The data center has over 70,000 square feet of raised floor (for a closer look, see our photo feature). The facility touts an expected Power Usage Effectiveness (PUE) rating of 1.2, thanks to Nevada's dry climate and the design of the facility.
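For context, PUE is total facility power divided by the power delivered to IT equipment, so a 1.2 rating implies roughly 0.2 watts of cooling and electrical overhead for every watt of IT load. A quick illustration with assumed numbers, not ViaWest's published figures:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_load_kw

# Hypothetical load: 5,000 kW of IT gear in a PUE 1.2 facility implies
# about 6,000 kW drawn from the utility, i.e. 1,000 kW of overhead.
it_load_kw = 5_000
total_kw = it_load_kw * 1.2
print(pue(total_kw, it_load_kw))   # 1.2
print(total_kw - it_load_kw)       # 1000.0 kW of overhead
```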
Switch First Approved by NGCB
Switch announced last May that its SuperNAP was the first hosting center approved for online gaming by the NGCB.
“Online gaming is poised for dramatic growth given Nevada’s approval of the industry earlier this year,” said Switch Executive Vice President of Colocation Missy Young. “Switch’s SUPERNAP is the first data center in the world with the official seal of approval from the Nevada Gaming Commission.”
Switch has a big footprint in Nevada, having unveiled a second SuperNAP there last April. The company is having no trouble growing out in the desert, selling a whopping 19 megawatts in a single month last August. The facility, which opened in April 2013, is built on CEO and founder Rob Roy's patents in data center design, systems, and related industry technologies.
While Nevada is leading the pack, New Jersey and Delaware are also moving toward legalization. All three states will allow anyone of age within their borders to gamble. Delaware is expected to offer online slots, poker, blackjack and roulette by the end of this month, with mobile gambling to follow sometime in 2014. New Jersey says its online poker will kick off on Nov. 26.
7:30p | NSA Data Center Plagued by Electrical Problems
The NSA data center in Bluffdale, Utah. (Photo by swilsonmc via Wikipedia)
The National Security Agency’s massive data center in Utah has been plagued by electrical problems that have delayed the opening of the 1 million square foot campus. The Wall Street Journal reports that there have been 10 incidents of arc flash “meltdowns” that have damaged equipment and delayed efforts to bring the facility’s power infrastructure online.
Citing project documents, the Journal reports that the failures have been the focus of more than 50,000 man hours of investigation and troubleshooting by contractors. The events have also caused significant equipment damage, incurring costs of up to $100,000 per incident.
An arc flash is an electrical explosion that generates intense heat, reaching up to 35,000 degrees Fahrenheit, which can damage and even melt electrical equipment. Arc flash incidents also represent a significant threat to worker safety.
Series of “Meltdowns”
There have been 10 arc fault "meltdowns" in the past 13 months at the NSA data center in Bluffdale, Utah, according to the Journal. The electrical challenges at the facility were confirmed by the NSA and the construction team, a joint venture between Balfour Beatty Construction, DPR Construction and Big-D Construction Corp. The architectural firm KlingStubbins designed the electrical system.
“Problems were discovered with certain parts of the unique and highly complex electrical system,” the joint venture said in a statement. “The causes of those problems have been determined and a permanent fix is being implemented.”
NSA spokeswoman Vanee Vines told the paper that “the failures that occurred during testing have been mitigated. A project of this magnitude requires stringent management, oversight, and testing before the government accepts any building.”
Digital Juggernaut? Or Complex System?
As the NSA’s data collection efforts have made headlines in recent months, the Utah data center has become a symbol of the agency’s technology aspirations, taking on the aura of a digital powerhouse to house the nation’s intelligence secrets. The latest reports serve as a reminder that data centers are complex facilities that must be thoroughly tested to ensure safe operations.
The first arc fault failure at the Utah site occurred in August 2012, according to project documents, with the most recent occurring on Sept. 25. More than 30 independent experts have conducted 160 tests over 50,000 man-hours, according to the Journal.
“Backup generators have failed numerous tests … and officials disagree about whether the cause is understood,” the paper wrote. “There are also disagreements among government officials and contractors over the adequacy of the electrical control systems, a project official said, and the cooling systems also remain untested.”
The NSA has said it will spend up to $1.5 billion on the Utah data center, which is approaching completion of its first phase after nearly four years of construction. The project will have a power capacity of 65 megawatts, making power a major component of its operations. The 1 million-square-foot Camp Williams facility in Bluffdale, Utah, will house 100,000 square feet of data center space, while the remaining 900,000 square feet will be used for technical support and administrative space.
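As a rough, hypothetical illustration of what those figures imply: even if every one of those 65 megawatts fed the 100,000 square feet of data halls (in practice cooling and support loads consume a large share), the density works out as follows:

```python
power_capacity_w = 65_000_000      # 65 MW cited for the project
data_center_sq_ft = 100_000        # data center space cited above

# Naive upper bound on power density, assuming all capacity feeds the
# data halls; real-world density will be lower once cooling and support
# infrastructure take their share.
print(power_capacity_w / data_center_sq_ft, "watts per square foot")  # 650.0
```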