Data Center Knowledge | News and analysis for the data center industry
Friday, June 14th, 2013
12:26p
Big Switch, Dell Power SDN Solution for Chinese Customer
Big Switch Networks and Dell power a solution for CSM Media Research, Alcatel-Lucent and Avelacom build a 100G network from Moscow to London, and Broadcom launches a high-performance multi-core communications processor.
Big Switch and Dell enable cloud network for Chinese customer. Big Switch Networks announced that the company’s Open SDN Suite has been selected as the software-defined networking (SDN) controller and application platform deployed with an OpenFlow-based Dell networking solution for CSM Media Research, one of China’s leading media research and measurement companies. The deployment will span both data center network virtualization and network monitoring functions riding on top of a Dell 10G network architecture, including end-to-end management of a 10G network, network virtualization and real-time traffic monitoring. The SDN application will help CSM Media Research improve security, deliver network automation for their virtualized data center, and support more cost-effective network operations. “Our mutual commitment to SDN allows Dell and Big Switch to jointly deliver integrated customer solutions built upon Dell’s OpenFlow switching infrastructure and our Open SDN Suite,” said Big Switch CEO and co-founder Guido Appenzeller. “We are excited to see customers like CSM and others deploying our commercial SDN solutions into production environments at scale.”
Alcatel-Lucent and Avelacom build 100G network. Alcatel-Lucent (ALU) and Russia-based telecommunications carrier Avelacom announced plans to expand Avelacom’s existing 100G optical backbone network from Moscow to London. The coherent transport network, based on Alcatel-Lucent’s Agile Optical Networking technology, will support speeds of 100 gigabits per second (100G), enabling high-capacity, high-speed distribution of data over extremely long distances. The solution is based on 100G Dense Wavelength Division Multiplexing (DWDM) technology on Alcatel-Lucent’s 1830 Photonic Service Switch (1830 PSS) platform. “This is our first deployment of a low latency 100G DWDM network in the Nordic and Baltic regions,” said Luis Martinez Amago, President of Alcatel-Lucent’s EMEA region. “Avelacom is a marketplace innovator, taking advantage of our solution to offer its customers flexible, high speed, high quality and low cost connectivity from Moscow and St. Petersburg to London. And Avelacom will be ready to upgrade to 400G speeds down the road thanks to our latest generation of the 400G Photonic Service Engine.”
Broadcom launches multi-core communications processors. Broadcom (BRCM) announced the world’s highest-performance multi-core communications processor manufactured in a 28nm process. The new XLP900 Series is optimized for deployment of network functions such as hardware acceleration, virtualization and deep packet inspection. “Our new XLP900 Series of processors integrates server-class CPU core performance with industry-leading networking and communications technology to deliver the industry’s highest performance, most scalable and intelligent processor for next-generation networks,” said Ron Jankov, Broadcom Senior Vice President and General Manager, Processor and Wireless Infrastructure. “By out-executing the industry and being first to market with a multicore solution capable of over a trillion operations per second, we once again raise the bar and further solidify our technical leadership.” The XLP900 Series of processors offers 160Gbps application performance, scalable to 1.28Tbps.
12:30p
Augmenting Power Monitoring and Control at Data Centers
Bhavesh Patel is Director of Marketing and Customer Support at ASCO Power Technologies, a business of Emerson Network Power located in Florham Park, New Jersey.
Reliability of emergency/backup power at data centers is vital. The true cost of data center downtime reflects not only the length of the power interruption itself, but also how long “business as usual” applications cannot be supported. Sustained downtime is expensive, with average costs upwards of $5,000 per minute, which adds up quickly to roughly $300,000 an hour. And, in addition to the negative impact on clients’ businesses, sustained downtime can also damage a data center’s reputation as a service provider.
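To see how that figure scales, here is a back-of-the-envelope sketch using the article's per-minute average; the outage lengths are hypothetical:

```python
# Rough downtime-cost arithmetic using the article's average figure.
COST_PER_MINUTE = 5_000  # dollars per minute, the average cited above

def downtime_cost(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Estimated direct cost of a power-related outage of the given length."""
    return minutes * cost_per_minute

print(downtime_cost(60))   # one hour  -> 300000.0 dollars
print(downtime_cost(90))   # 90 minutes -> 450000.0 dollars
```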

There are several ways to boost reliability and resiliency of emergency/backup power in the face of a utility power outage.
One increasingly popular trend is to install advanced power monitoring and control capabilities that take advantage of reporting from a variety of computing devices. Another is to take advantage of sophisticated power controls that complement or supplement a building management system (BMS) and data center infrastructure management (DCIM) system in order to handle the volume and/or speed of information required for advanced critical power applications. A third is increased use of power quality analytics.
More Monitoring Needed
With respect to the first trend, a recent survey of facility executives, including management at data centers around the country, indicates a strong desire for more power monitoring and control information than they currently have. The most common monitoring applications that can help a data center defend against problems with emergency/backup power systems are generator status and transfer switch status: 57 percent and 48 percent of surveyed executives, respectively, had these in place, while 18 percent and 20 percent, respectively, did not but would like to. The same survey indicated that about one-third of respondents (34 percent) can monitor circuit breaker status, with another 23 percent lacking that ability but wanting it, and that one-third (34 percent) have monitoring for power/energy trending, with another 24 percent wanting it.
The survey also showed a marked disparity between the percentage of facility load already on emergency power versus percentage of facility load executives would like to have. More than half (54 percent) have less than 25 percent of facility load on emergency power. Another 22 percent have between 25 percent and 49 percent. Only 24 percent have 50 percent or more connected to emergency power, with just 12 percent having 75 percent or more connected.
Increasing Sophistication of Controls
With respect to the second trend, more sophisticated power controls are gaining attention at least partly because many products in use today for power control applications lack “best practices” features in areas such as monitoring, control, reporting, and power quality analytics.
Analytics for Power Quality
As for the third trend, power quality analytics – which, unlike traditional monitoring, can analyze events that unfold over just milliseconds – are the leading edge of power control technology. Well-matched power control applications can boost the reliability of emergency/backup power systems and help a data center avoid any single point of failure in the primary electricity feed. When setting these applications up, a best practice is to use a dedicated critical power management system (CPMS) to monitor, control and analyze the emergency power. A typical CPMS monitors and/or controls generators, transfer switches, static transfer switches, generator paralleling switchgear, UPS units, load banks, bus bars and more, via a CPMS display terminal.
A high-end system can include sophisticated power controls that, operating at very high speeds, can share or cache large amounts of data (such as waveform captures or transient harmonic displays) from one device to another without disrupting building functions. This is typically accomplished via standalone proprietary networks that feed essential data to a BMS or DCIM system, or via vendor agreements to share proprietary software-generated critical data, provided the BMS and DCIM systems can handle the volume and speed of the information. For data centers whose DCIM cannot keep up with the speed of data generated for analytics, it is key to ensure that the DCIM can at least intelligently manage the crucial data points for major issues.
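To make the millisecond framing concrete, here is a minimal, self-contained sketch, not from ASCO or any CPMS vendor, of how analytics software might flag a voltage sag by computing RMS voltage over sliding one-cycle windows of a sampled waveform. The thresholds, nominal voltage and sample rate are illustrative assumptions:

```python
import numpy as np

# Illustrative assumptions: 120 V nominal RMS, 60 Hz mains,
# 256 samples per cycle (about 15.4 kHz sampling).
NOMINAL_RMS = 120.0
FREQ_HZ = 60
SAMPLES_PER_CYCLE = 256
SAMPLE_RATE = FREQ_HZ * SAMPLES_PER_CYCLE

def synthesize_waveform(cycles=30, sag_cycles=range(10, 14), sag_depth=0.7):
    """Simulate a sine wave with a sag (drop to 70% voltage) lasting a few cycles."""
    t = np.arange(cycles * SAMPLES_PER_CYCLE) / SAMPLE_RATE
    v = NOMINAL_RMS * np.sqrt(2) * np.sin(2 * np.pi * FREQ_HZ * t)
    for c in sag_cycles:
        v[c * SAMPLES_PER_CYCLE:(c + 1) * SAMPLES_PER_CYCLE] *= sag_depth
    return v

def detect_events(v, sag=0.9, swell=1.1):
    """Flag cycles whose RMS falls outside +/-10% of nominal (illustrative limits)."""
    events = []
    for c in range(len(v) // SAMPLES_PER_CYCLE):
        cycle = v[c * SAMPLES_PER_CYCLE:(c + 1) * SAMPLES_PER_CYCLE]
        rms = np.sqrt(np.mean(cycle ** 2))
        if rms < sag * NOMINAL_RMS or rms > swell * NOMINAL_RMS:
            # One 60 Hz cycle is ~16.7 ms -- the scale a CPMS must resolve.
            events.append((c, round(c * 1000 / FREQ_HZ, 1), round(rms, 1)))
    return events

for cycle, ms, rms in detect_events(synthesize_waveform()):
    print(f"cycle {cycle} (t={ms} ms): RMS {rms} V outside limits")
```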
Troubleshooting Help
Another extremely helpful function of power quality analytics is post-event troubleshooting, which can aid in determining why a facility lost a particular breaker that tripped the PDU and led to a chain of events that caused a switchover to the UPS. For example, possible reasons include an electrical spike, a short, or a floating ground. Lots of events occur within a very short time frame, often milliseconds.
The ability of power controls to send automatic alerts on system operation by email, pager, or system alarm to the BMS or DCIM is also helpful.
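As a generic illustration of what such an email alert hook might look like (not tied to any vendor's product; the SMTP host and addresses are placeholders):

```python
import smtplib
from email.mime.text import MIMEText

# Placeholder endpoints -- substitute your own SMTP relay and recipients.
SMTP_HOST = "smtp.example.com"
SENDER = "cpms-alerts@example.com"
RECIPIENTS = ["facilities-oncall@example.com"]

def send_alert(subject: str, body: str) -> None:
    """Email a power-system alert, e.g. a transfer-switch state change."""
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = SENDER
    msg["To"] = ", ".join(RECIPIENTS)
    with smtplib.SMTP(SMTP_HOST) as server:
        server.sendmail(SENDER, RECIPIENTS, msg.as_string())

send_alert("ATS-2 transferred to generator",
           "Automatic transfer switch ATS-2 moved to emergency source at 14:02:11.")
```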
Severe Weather Events Happen More Often
One trend that doesn’t require a survey is the increase in frequency of severe weather events that have caused power outages. Taking advantage of available monitoring and control capabilities to match a data center’s degree of tolerance for downtime is a smart, forward-thinking decision.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:57p
Against The Wind: Storm-Proofing Data Centers for Hurricanes and Tornadoes
Is your data center ready for an EF5 tornado (pictured above)? Or a major hurricane? Here’s a look at storm preparedness. (Photo by Justin Hobson via Wikimedia Commons)
How do you storm-proof your data center against increasingly fierce tornadoes, hurricanes, derechos and other windstorms? It’s a front-of-mind issue for the industry in the wake of a wave of powerful tornadoes and the first named storm of the 2013 hurricane season. On Wednesday and Thursday, violent thunderstorms blew through key data center hubs in suburban Chicago and northern Virginia.
There were no major outages from this week’s storms, reinforcing the basic value proposition of the data center. The severity of storms like Superstorm Sandy and the recent EF5 tornadoes in Oklahoma has prompted scrutiny of design standards for mission-critical facilities.
“From a structural standpoint, you can design and construct a facility to withstand just about anything,” said Ron Vokoun, Mission Critical Market Leader – Western Region at JE Dunn Construction. “It comes down to how much you want to spend, weighing the extra costs against the benefits of locating in an area with these types of risks.
“If you are located in Florida, is the pull of the ‘server hugger’ mentality so strong that you will spend extra money to build your data center close to home, or will you move it farther inland within the constraints of your latency requirements?” said Vokoun. “Do the incentives being offered by the Midwestern states offset the extra structural costs to build a data center that will withstand an F5 tornado? In many cases they do. It’s all about analyzing risk and TCO to make an informed business decision.”
SunGard Offers Early Warning System
Uptime during storms isn’t just a function of cement and steel, but also having strong policies and procedures in place. With fast-moving storms like the Oklahoma tornadoes, an early warning can make a huge difference in executing those procedures.
SunGard Availability Services this week announced a weather alert system that uses real-time data from the National Oceanic and Atmospheric Administration (NOAA), combined with big data analytics, to inform and prepare customers for weather-related disasters headed their way, keeping them abreast of threatening storms and other natural disasters.
“Many companies overlook business continuity during severe storms simply because they aren’t prepared,” said Bob DiLossi, director, Crisis Management, SunGard Availability Services. “Through the years, we’ve listened to our customers’ needs and are proud to be among the first IT service providers offering a weather alert system free of charge. By using real time weather information combined with business analytics, we’re preparing our customers for what’s to come, giving them time to plan and take action.”
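While SunGard's system itself is proprietary, NOAA's National Weather Service exposes public alert feeds that anyone can poll. Here is a minimal sketch of watching for severe alerts in a given state; the endpoint and field names are assumptions based on the current public api.weather.gov service, not SunGard's implementation:

```python
import requests

# NWS public alerts API (assumption: the api.weather.gov endpoint;
# SunGard's actual feed and analytics pipeline are not public).
ALERTS_URL = "https://api.weather.gov/alerts/active"

def severe_alerts(state: str):
    """Yield active Severe/Extreme alerts for a U.S. state, e.g. tornado warnings."""
    resp = requests.get(
        ALERTS_URL,
        params={"area": state},
        # NWS asks callers to identify themselves via User-Agent.
        headers={"User-Agent": "dck-example (ops@example.com)"},
        timeout=10,
    )
    resp.raise_for_status()
    for feature in resp.json()["features"]:
        props = feature["properties"]
        if props.get("severity") in ("Severe", "Extreme"):
            yield props["event"], props["headline"]

for event, headline in severe_alerts("OK"):
    print(f"{event}: {headline}")
```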
How Sturdy to Build?
Given the EF5 in Oklahoma, questions have been raised about whether a data center could withstand a direct hit. Data centers often claim to meet the “Miami-Dade standard” of withstanding 150 MPH winds. Tornadoes can generate winds far beyond that. So is withstanding 300 MPH going to be the new standard? Most likely not.
Despite the recent EF5 storms, tornadoes are a far lesser risk than hurricanes or earthquakes. Hurricanes and earthquakes can damage huge swaths of land, whereas tornadoes cause sporadic damage along a path a few miles long and perhaps a mile wide. Ninety-five percent of tornadoes are below EF3 intensity, and only 0.1 percent reach EF5, according to NOAA. The bigger threat lies outside the physical data center, in the form of power outages, flooding and network breaks.
Regardless, some providers are building and marketing facilities as “tornado proof.” Perimeter Technology has built its Oklahoma data center to withstand an EF3 tornado. The raised-floor portion of the data center is surrounded by 8.5-inch reinforced concrete walls. That core is ringed by offices, which are in turn protected by another 8.5-inch exterior wall. The roof is double reinforced, thick enough to handle a storm’s uplift; it also insulates the building, helping with cooling.
The EF5 on May 20 struck less than 20 miles from Perimeter’s data center.
In 2011, Data Cave built a tornado-proof 4.5 million pound roof in Columbus, Indiana. Excessive? Data Cave said the design is based on experience with Midwest tornadoes and the types of damage they can inflict on structures. The company’s Barry Czachura said in our original coverage that the roof is a key line of defense.
Hurricane Season is Here: Batten Down the Hatches
Hurricane season is upon us, having officially started on June 1. The National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center anticipates an “active to extremely active” season, according to NOAA’s annual report issued in May. NOAA predicts 13 to 20 named storms this year, including 7 to 11 hurricanes. Three to six of these could be major hurricanes. These ranges are above the typical seasonal average of 12 named storms, six hurricanes and three major hurricanes.
The first tropical storm of the season, Andrea, hit last week and was primarily a rain event.
In preparation for the upcoming season, companies with facilities in hurricane-prone areas are going the extra mile this year. Peak 10, which operates a network of data centers across the Southeast, has assembled a response team of IT leaders and engineers from several of its geographic operations, ready for deployment in the event of a natural disaster. Peak 10 also offers a “Recovery Cloud” option for companies that need the added security of rapid recovery of data lost during a disaster.
“People are the most critical part of any disaster recovery plan,” said Jeff Biggs, executive vice president of operations and technology for Peak 10. “Our national response team ensures that customers have the support they need locally while allowing our employees who are also affected by the disaster to tend to their families and homes. Deploying our response team is an added measure we take to ensure that our customers, and their mission-critical data and IT systems, are taken care of.”
Peak 10 has taken a number of operational steps, including standard operating procedure (SOP) reviews for each of its facilities along the East Coast. The SOP reviews ensure appropriate security and supplies are in place, test redundant network infrastructure and carrier connectivity, and confirm that arrangements are in place and vendors are on standby for emergency refueling if an extended utility outage occurs. The company also regularly tests its emergency power systems throughout the year to ensure they are ready in the event of utility power loss, including load testing of uninterruptible power supplies (UPSs) and emergency standby generator systems.
Steps to Readiness
SunGard blogged about National Hurricane Preparedness, telling folks not to roll their eyes at it given what we faced last year. Superstorm Sandy did more than $75 billion in damage. During Hurricane Sandy, SunGard AS received 342 alerts from customers and 117 disaster declarations. To support these declarations, it deployed almost one-third of its staff, sent out five mobile recovery units, and filled 1,500 workgroup seats for its customers’ employees. The company’s Carlstadt, NJ data center served as an impromptu command center for local authorities and took in nearby flood victims.
SunGard recently shared best practices and challenges, based on its experiences during Sandy and the 2,000 other disasters for which the company has provided support since 1990. These include:
- Data protection challenges: “Backing up onto tapes is good, but getting those tapes over to our recovery centers through flooded streets was a challenge,” writes SunGard’s Maryling Yu.
- Systems recovery challenges: “Not having the right operating systems and servers and storage and networks and hypervisors at the recovery site was a huge challenge for many of our customers. Fully one-third of our customers found themselves having to make serious changes to their recovery site specifications…and by ‘serious’ I mean, ‘We forgot to tell you about a mainframe that we had.’”
- People challenges: “Apparently, telework was not all that great of a DR strategy during Sandy,” writes Yu. “This was a regional disaster that saw power outages across a large swath of territory … which meant that work-from-home or work-from-Starbucks was impaired as well.”
- Process challenges: “Recovery runbooks were frequently out-of-date. It certainly doesn’t help to have a runbook for recovering Windows 2003 servers when you need to recover Windows 2008 servers,” says Yu.
- Program challenges: “Change management to sync up production and recovery environments is still not a focus for many of our customers. How can it be? They’ve got enough to do, because they’ve got to observe the ever-louder IT mantra of today: ‘Do more with less.’”
Yu’s full observations can be found here. The company has also offered a Hurricane Preparedness Toolkit (registration required) here.
2:33p
Cisco Introduces Carrier Routing System X
A look at Cisco Systems’ new Carrier Routing System X, a 400 Gigabit per second (Gbps) per slot system that can be expanded to nearly 1 petabit per second in a multi-chassis deployment. (Photo: Cisco)
Cisco (CSCO) has introduced the Carrier Routing System-X (CRS-X), the newest addition to the CRS family. The platform is pitched as providing economical scale and lasting investment protection for the more than 750 telecommunications service providers and organizations worldwide that have deployed more than 10,000 CRS systems as the foundation of their network infrastructures.
The new CRS-X is a 400 Gigabit per second (Gbps) per slot system that can be expanded to nearly 1 petabit per second in a multi-chassis deployment. The line card uses complementary metal oxide semiconductor (CMOS) photonic technology, called Cisco CPAK, to reduce power consumption, reduce the cost of sparing, and increase deployment flexibility.
“Cisco’s flagship networking platforms are designed with investment protection for decades and beyond, unlike other technology providers, which force operators to rip and replace their products on a regular basis,” said Surya Panditi, senior vice president and general manager, Cisco’s service provider networking group. “Service providers, large educational and research networks, and government agencies around the world are preparing for the next-generation Internet and the increasing demand for video, collaboration and distributed computing. Cisco CPAK technology and 400 Gbps per slot CRS-X demonstrate Cisco’s commitment to leading the industry in IP core technology and protecting the investment of our existing CRS customers.”
Additionally, the CRS-X improves the simplicity and scale of IP and optical convergence. Service providers can now choose between deploying integrated optics or the new Cisco nV™ optical satellite. Both allow for a single IP and optical system that utilizes Cisco’s nLight technology for control plane automation. The CRS-X uses IOS-XR software, a self-healing and self-defending operating system designed for “always on” operation while scaling system capacity.
3:30p
Big Data News: Intel, WalmartLabs, DataStax
Big data continues to be big news, as Intel expands Lustre to the enterprise, @WalmartLabs acquires Inkiru and DataStax helps move customers off of Oracle and over to Cassandra NoSQL databases.
Intel expands big data solutions. Intel (INTC) announced its Enterprise Edition for Lustre software, designed to make performance-based storage solutions easier to deploy and manage. A popular high-performance computing (HPC) technology, Lustre is an open source parallel distributed file system and key storage technology that ties together data and enables extremely fast access. When paired with the Intel Distribution for Apache Hadoop, the Intel Enterprise Edition for Lustre software allows Hadoop to be run on top of Lustre, significantly improving the speed with which data can be accessed and analyzed. With veteran Lustre engineers and developers at Intel contributing to the code, Intel will provide development and support as well as community releases. “Enterprise users are looking for cost-effective and scalable tools to efficiently manage and quickly access large volumes of data to turn valuable information into actionable insight,” said Boyd Davis, vice president and general manager of Intel’s Datacenter Software Division. “The addition of the Intel Enterprise Edition for Lustre to our big data software portfolio will help make it easier and more affordable for businesses to move, store and process data quickly and efficiently.”
WalmartLabs acquires Inkiru. @WalmartLabs announced that predictive intelligence platform provider Inkiru will be joining @WalmartLabs, the technology arm of Walmart Global eCommerce. Inkiru has developed an active learning system that combines real-time predictive intelligence, big data analytics and a customizable decision engine to inform and streamline business decisions. The platform accelerates the big data capabilities that @WalmartLabs has propelled forward at scale, including site personalization, search, fraud prevention and marketing. Inkiru’s data scientists and infrastructure engineers will join @WalmartLabs to help shape the future of commerce.
DataStax helps companies move to NoSQL. DataStax announced that dozens of industry-leading enterprises such as Netflix, Openwave Messaging and Ooyala have migrated from traditional Oracle relational database management systems (RDBMS) to DataStax and the Apache Cassandra NoSQL database platform. DataStax hosted its third annual Cassandra Summit in San Francisco this week. “Many customers such as Netflix, OpenWave and Ooyala are replacing Oracle with DataStax in their most critical line of business applications,” said Billy Bosworth, CEO, DataStax. “Relational databases such as Oracle are built on antiquated architectures that are inadequate for powering today’s online line-of-business applications due to their weak scaling capabilities, disaster vulnerability and massive price tags.”
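For readers unfamiliar with what such a migration looks like in practice, here is a minimal sketch using the DataStax Python driver; the keyspace and table are hypothetical examples, not from any of the named customers. It shows the denormalized, query-first table design Cassandra favors over a normalized relational schema:

```python
from datetime import datetime
from uuid import uuid4
from cassandra.cluster import Cluster  # DataStax Python driver

# Connect to a local Cassandra node (placeholder address).
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# In Cassandra, tables are modeled around queries rather than normalized
# entities -- e.g. "viewing history for a user, newest first".
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS media
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("media")
session.execute("""
    CREATE TABLE IF NOT EXISTS views_by_user (
        user_id   uuid,
        viewed_at timestamp,
        title     text,
        PRIMARY KEY (user_id, viewed_at)
    ) WITH CLUSTERING ORDER BY (viewed_at DESC)
""")

user = uuid4()
session.execute(
    "INSERT INTO views_by_user (user_id, viewed_at, title) VALUES (%s, %s, %s)",
    (user, datetime.utcnow(), "Pilot Episode"),
)
for row in session.execute(
        "SELECT viewed_at, title FROM views_by_user WHERE user_id = %s", (user,)):
    print(row.viewed_at, row.title)
```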
3:41p
Friday Funny: Hanging Around the Data Center
Congratulations! You’ve made it to Friday, the end of the work week and time to party. But before you go, take a peek at our new Data Center Knowledge cartoon, and tell us what caption you’d put on this comic.
Diane Alber, our favorite cartoonist, writes, “You don’t see a lot of data center managers hanging from the ceiling, but I’m sure there is a good reason for it!” And our data center funsters, Kip and Gary, are happy to “explore” the ceiling.
Also, hearty congratulations to Jim Leach of RagingWire, who won last week’s contest with the caption, “No, we won’t sell our souls for a lower PUE.”
New to the caption contest? Here’s how it works: We provide the cartoon and you, our readers, submit the captions. We then choose finalists and the readers vote for the funniest suggestion.
For the previous cartoons on DCK, see our Humor Channel. Please visit Diane’s website Kip and Gary for more of her data center humor.
5:00p
Cray Unveils New Hadoop Solution For Big Data Analytics
Cray announced a new Hadoop solution that will allow customers to apply supercomputing technologies and an enterprise-strength approach to high-value Hadoop applications for Big Data analytics. Available later this month, Cray cluster supercomputers for Hadoop will pair Cray CS300 systems with the Intel Distribution for Apache Hadoop (Intel Distribution) software.
“More and more organizations are expanding their usage of Hadoop software beyond just basic storage and reporting. But while they’re developing increasingly complex algorithms and becoming more dependent on getting value out of Hadoop systems, they are also pushing the limits of their architectures,” said Bill Blake, senior vice president and CTO of Cray. “We are combining the supercomputing technologies of the Cray CS300 series with the performance and security of the Intel Distribution to provide customers with a turnkey, reliable Hadoop solution that is purpose-built for high-value Hadoop environments. Organizations can now focus on scaling their use of platform-independent Hadoop software, while gaining the benefits of important underlying architectural advantages from Cray and Intel.”
Built on Linux, the solution features workload management software, the Cray Advanced Cluster Engine (ACE) management software and the Intel Distribution. The entire solution is integrated, optimized, validated and supported by Cray. The Cray CS300 series of cluster supercomputers are powerful, configurable cluster solutions based on industry-standard hardware that allow for a quick and reliable implementation of the Intel Distribution. The systems offer energy-efficient, air-cooled and liquid-cooled architectures featuring high-performance, high-availability computing.
“The convergence of data-intensive HPC and high-end commercial analytics is forming a new Big Data market IDC calls High Performance Data Analysis, and most of this work is and will be done on clusters,” said Steve Conway, IDC research vice president for HPC. “Cray’s CS300 line of cluster supercomputers has a strong track record for scaling cluster-based Big Data problems to extremely high levels. Pairing the Cray CS300 systems with Intel’s Hadoop Distribution creates a solution with the potential to tackle Big Data problems that would frustrate most clusters.”
6:50p
CenturyLink Savvis Acquires PaaS Provider AppFog
CenturyLink has acquired Platform as a Service (PaaS) provider AppFog to enhance its Savvis Cloud Suite. Portland, Oregon-based AppFog is tailored specifically for developers, counting more than 100,000 customers who have deployed more than 150,000 applications. Terms of the deal were not disclosed.
AppFog was previously available as a public cloud offering. Under CenturyLink’s Savvis organization, it will remain available as a public cloud offering through the savvisdirect online channel, and Savvis will offer private, dedicated deployments to its enterprise clients.
AppFog started life as PHP Fog, a name reflecting its initial focus on PHP programming. Like many PaaS providers, it quickly expanded into multi-language support, first with Ruby and Node.js, followed by others. This multi-language support is part of AppFog’s appeal as a PaaS offering, making for an attractive, flexible enterprise platform.
The acquisition greatly enhances Savvis’ offerings on the PaaS front. More and more cloud providers are figuring out ways to add value above and beyond raw compute and storage, and PaaS is a big way to do so. PaaS simplifies the development and deployment of applications and is gaining popularity in the enterprise, especially with the rise of DevOps.
“AppFog leads the way in Platform-as-a-Service capabilities and continues to see strong adoption in the developer community,” said Jeff Von Deylen, president of CenturyLink’s Savvis organization. “Combining AppFog’s market-leading Platform-as-a-Service capabilities with Savvis’ industry-leading Infrastructure-as-a-Service cloud services and CenturyLink’s global network will enable developers to securely and reliably operate and connect the applications they build and deploy.”
Under CenturyLink, AppFog has more of the resources it needs to enhance its platform than it did as a standalone company. Part of the apprehension about using a PaaS provider is that most are relatively young, standalone operations, so it’s hard to be sure they’ll stick around. This acquisition ensures AppFog isn’t going anywhere, giving enterprises a sort of ‘go-ahead’ to use it.
“We are excited to introduce the AppFog developer community to CenturyLink’s Savvis organization, which is recognized globally as an innovative cloud leader,” said Lucas Carlson, whose role prior to the acquisition was chief executive officer at AppFog and who now serves as vice president, cloud evangelist, at Savvis. “Developers can expect to see enhancements to our PaaS capabilities, and we look forward to making the full suite of AppFog services available to existing and prospective CenturyLink and Savvis clients.”
7:40p
Data Center News: The Friday Mega-Roundup
It’s been a busy week for data center news, so we’re playing catch-up on Friday afternoon with a mega-roundup of some of the items we’ve missed. Get ready, because here come the links!
tw telecom to Extend Fiber to Sentinel Durham Facility – Sentinel Data Centers today announced that tw telecom will extend its network into Sentinel’s NC-1 data center. The fiber extension will expand Sentinel users’ options for regional and national network connectivity to and through the facility. “We are excited that an industry leader like tw telecom is extending its fiber network and award-winning offerings into our NC-1 data center,” said Sentinel Data Centers Co-President Todd Aaron. “This relationship with tw telecom expands our roster of top tier network providers and further differentiates our NC-1 facility as the only carrier-neutral, enterprise-class data center in the region.”
EarthLink Opens San Jose Data Center – As part of its nationwide IT services and data center network expansion, EarthLink, Inc. (NASDAQ: ELNK), a leading IT services and communications provider, today announced the opening of its newest data center on its next-generation cloud hosting platform, located at 8 Great Oaks Boulevard in San Jose, CA. EarthLink has also opened a new local sales office and is growing its employee base to fulfill business demand for its highly secure, comprehensive IT solutions including cloud hosting, Cloud Workspace, managed security, colocation, Cloud Server Backup and application solutions in Silicon Valley.
DataChambers Building New Data Center in NC – Castle & Cooke, Inc., the developers of the North Carolina Research Campus (NCRC), finalized an agreement early this month with DataChambers, headquartered in Winston-Salem, NC, to build a 50,000 square-foot data center at Research Campus Drive and Main Street on the NCRC’s 350-acre campus. The building will be rated to withstand hurricane-force winds, feature state-of-the-art systems for power, security, HVAC and network connectivity, and incorporate the latest LEED standards developed by the US Green Building Council for energy-efficient operation. The new data center will provide services such as hosted and cloud-based infrastructure solutions, around-the-clock network management and a variety of data backup and business continuity solutions.
Internap Los Angeles Data Center Awarded LEED Gold – Internap Network Services Corporation (NASDAQ: INAP), a provider of high-performance hosting services, today announced that its Los Angeles data center has been awarded LEED Gold certification by the U.S. Green Building Council (USGBC). LEED is one of the primary rating systems for the design, construction and operation of energy-efficient buildings. A Gold certification is given for buildings that are designed and constructed with sustainable concepts and practices that substantially reduce the building’s impact on the environment as compared to other, similar facilities.
CyrusOne Selected by Hamilton-Clermont Schools – Global data center services provider CyrusOne (NASDAQ: CONE) announced today that the Hamilton-Clermont Cooperative Association (HCCA), which provides data and Internet services for approximately 100 public and non-public schools in the Greater Cincinnati metropolitan area, has selected CyrusOne’s Cincinnati data center to house and support their data technology. The HCCA provides technological services and support to the Hamilton and Clermont County school districts, including Internet protocol telephony, software support and applications, financial software for payroll and accounting, student grading and scheduling software, and parental accesses.
Stream Commissions Houston-Area Data Center – Stream Data Centers, a national data center developer and operator, today announced it has completed construction and commissioning of its Private Data Center (PDC) facility in The Woodlands, a northern suburb of Houston, Texas. The company began construction on the purpose-built data center in August 2012.
SkyWire Media Selects Cobalt Data Centers - SkyWire Media, Inc., a national Mobile Content Enabler (MCE), announced an agreement with Las Vegas-based Cobalt Data Centers to provide high-availability data center and interconnection services for its fast growing mobile solutions business. The agreement is a “win-win” for Las Vegas as it joins together two local businesses to handle innovative mobility applications across the country. “SkyWire Media knew they wanted a flexible, quality, customer-centric experience,” said Cobalt Data Centers Chief Executive Officer Mike Ballard. “Past experience with the ‘big box’ providers didn’t suit them, and they were pleased to find a new alternative in town…Cobalt.”