Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 22nd, 2013
11:30a
Data Center Jobs: CBRE
At the Data Center Jobs Board, we have a new job listing from CBRE, which is seeking a Chief Operations Engineer – Critical Facility in Chandler, Arizona.
The Chief Operations Engineer – Critical Facility is responsible for overseeing the maintenance and continuous operation of all building systems, including fire/life safety, mechanical (HVAC, plumbing, controls), electrical (lighting, UPS, PDU, generators, switchgear), cabling (data, voice, broadband), lighting and temperature control systems, critical environments, light construction (painting, doors, relites, locks), digital systems (fire alarm, duress, card access, radionics, CCTV), and audio/visual services, utilizing staff and contracting with outside vendors as necessary. The role also involves developing, reviewing and approving Maintenance Critical Environment Work Procedures that adhere to specific client requirements, and supervising and managing engineers and maintenance staff, including hiring, training and personal development. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
12:30p
Top 7 Reasons Data Centers Don’t Raise Their Thermostats
Ron Vokoun, DBIA, LEED AP BD+C, leads the Mission Critical Market for JE Dunn Construction. Ron was previously Director of Mission Critical for Gray Construction and also served in leadership roles with Qwest Communications and Aerie Networks. You can find him on Twitter at @RonVokoun.
In 2011, it was with great fanfare that ASHRAE released its updated Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance. The new guidelines created new classes of equipment ratings and corresponding wider ranges of operating conditions. Yet, here we are in 2013 and very few data centers are even raising their thermostats to the recommended limits prescribed by ASHRAE’s 2008 guidance.
Raising the thermostat is the simplest energy-saving move a data center can make, so why are operators so hesitant to do it? Generally speaking, raising the temperature setting 1.8°F (1°C) will save two to four percent of a data center’s overall energy use. What a great ROI for a simple flick of a switch!
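To put that rule of thumb in concrete terms, here is a back-of-the-envelope sketch; the facility load, setpoint increase, and electricity rate are illustrative assumptions, not figures from the article:

```python
# Rough estimate of annual savings from raising the setpoint, using the
# 2-4% savings per 1 degree C cited above, applied linearly.
# Facility load, setpoint increase, and price are assumed values.

HOURS_PER_YEAR = 8760

def annual_savings(facility_kw, delta_c, price_per_kwh, pct_per_degree):
    """Estimated annual dollar savings for a given setpoint increase."""
    baseline_kwh = facility_kw * HOURS_PER_YEAR
    savings_fraction = min(pct_per_degree * delta_c, 1.0)  # cap at 100%
    return baseline_kwh * savings_fraction * price_per_kwh

low = annual_savings(facility_kw=1000, delta_c=3, price_per_kwh=0.08, pct_per_degree=0.02)
high = annual_savings(facility_kw=1000, delta_c=3, price_per_kwh=0.08, pct_per_degree=0.04)
print(f"A 3°C increase could save roughly ${low:,.0f} to ${high:,.0f} per year")
```

Even at the low end of that range, a modest setpoint change on a megawatt-class facility pays for the effort many times over.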
As I often do when I have a question, I took to Twitter to find answers, or at least opinions. Specifically, I engaged Mark Thiele of Switch, Jan Wiersma of Data Center Pulse, Tim Crawford of AVOA, and Bill Dougherty of RagingWire in what became a spirited exchange of reasons why temperatures largely remain unchanged.
Without further ado, and with my apologies to David Letterman, I give you:
The Top 7 Reasons Why Data Centers Don’t Raise Their Thermostats
7. Some HVAC Equipment Can’t Handle Higher Return Air Temperatures
I will confess that I am not an engineer, but this one doesn’t make sense to me. I have been told by engineers in the past that the higher the return air temperature, the more efficient the system will be. I would be interested in hearing opinions, but until convinced otherwise, I’m going to call this one bunk.
6. Colocation Data Centers Have To Be All Things To All People
This one makes sense to me. Colocation providers can’t choose their customers, but rather they compete for them. If they have a potential customer that feels uncomfortable with the warmer temperatures, they will lose them to one of their competitors that keeps their data center unnecessarily cool. They also have to plan for the lowest common denominator in that many customers are still using legacy equipment that doesn’t fit into the ASHRAE standard classifications.
This makes me wonder if there might be the potential for a new colocation product. Given the energy savings, perhaps physically separated sections of the data center can be offered at a discounted rate in exchange for agreeing to operate at a higher temperature? This could be an attractive cost savings for a few enlightened souls.
5. Fear, Uncertainty, Doubt (FUD)/Ignorance
This one is very widespread throughout the industry. I am told that most colocation RFPs from CIOs specify 70°F (21°C). The industry is full of sayings like, “Nobody ever got fired for keeping a data center cold.” That may change if the CFO finds out how much money he can save by raising the temperature!
4. Intolerable Work Environment
I can say with confidence that I would not enjoy working in a hot aisle that’s reaching temperatures up to 115°F (46°C). With that said, construction workers in Arizona work in that heat every day during the summer. I’ll leave it to OSHA to say what’s appropriate here in the U.S. Jan Wiersma, who lives and works in Europe, informed me that the EU has a reasonable law for working in the hot aisle, so it can be done.
3. Cultural Norms and Inertia
I’ve always hated hearing, “Because that’s the way we’ve always done it.” But for legacy data centers, this is often the case. A more reasonable excuse that also fits into this category is that it’s probably nearly impossible to change an SLA without opening up all of the other terms to renegotiation.
2. Concern Over Higher Failure Rates and Performance Issues
The good folks at the Green Grid have debunked this one adequately already. A presentation at the Uptime Institute Symposium earlier this year from representatives of ASHRAE’s TC 9.9 agreed. A good qualification that Mark pointed out is that consistent environmental conditions are important to realizing lower failure rates.
And the number one reason why data centers don’t raise their thermostats (drum roll please)…
1. Thermal Ride-Through Time
If a data center suffers a cooling or power outage, an environment with a lower starting temperature provides a longer thermal ride-through time. This is magnified in a containerized data center, where the total volume of conditioned air is very limited compared to a more traditional open data center; a rough sensible-heat estimate of the effect is sketched below.
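As a crude illustration (all values below are assumptions for the sake of the example, not measurements from any particular facility), the room air can only absorb the IT load for as long as its heat capacity and the allowable temperature rise permit:

```python
# Crude sensible-heat ride-through estimate: how long can the room air absorb
# the IT load before warming from the setpoint to the allowable maximum?
# Ignores the thermal mass of equipment, raised floors and chilled-water loops,
# so real ride-through will differ; all inputs below are illustrative assumptions.

AIR_DENSITY = 1.2          # kg/m^3
AIR_SPECIFIC_HEAT = 1005   # J/(kg*K)

def ride_through_seconds(air_volume_m3, setpoint_c, max_allowed_c, it_load_kw):
    heat_capacity_j_per_k = air_volume_m3 * AIR_DENSITY * AIR_SPECIFIC_HEAT
    return heat_capacity_j_per_k * (max_allowed_c - setpoint_c) / (it_load_kw * 1000)

# Traditional open room vs. containerized deployment at the same IT load.
print(ride_through_seconds(air_volume_m3=5000, setpoint_c=21, max_allowed_c=32, it_load_kw=250))  # ~265 s
print(ride_through_seconds(air_volume_m3=80,   setpoint_c=21, max_allowed_c=32, it_load_kw=250))  # ~4 s
```

The same arithmetic shows why every degree of setpoint increase trims the buffer: a warmer starting point leaves less allowable temperature rise before equipment limits are reached.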
It seems there are very few good reasons why you should not raise the temperature in your data center, at least a bit. At the end of the day, you need to understand your business and the risks associated with its data center operations and make an informed decision. If your analysis indicates you can, flip that thermostat up a bit higher and enjoy the money you save as a result.
Many thanks to Mark, Jan, Tim, and Bill for sharing their wisdom on Twitter! I highly recommend following them if you don’t already.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:59p
Peak 10 Expands In Atlanta, Seeing A Big Data Opportunity
Racks inside a data center operated by Peak 10, which has been expanding its footprint in the Southeast. (Photo: Peak 10)
IT infrastructure provider Peak 10 is adding space in Atlanta, looking to capitalize on predicted high growth in the market. The new facility is located on Windward Parkway in Alpharetta’s High Tech Corridor, where Peak 10 will build out three 15,000 square foot phases for three separate data centers, with completion of the first phase expected in spring 2014. The site spans 12 acres, providing plenty of room for further expansion.
The company entered the Atlanta market in 2007 through greenfield facility acquisitions, and this latest expansion speaks to its continued success. The new building brings Peak 10’s entire Atlanta footprint to more than 100,000 square feet. Headquartered in Charlotte, N.C., Peak 10 operates data centers in 10 markets, primarily in the southeastern U.S. Less than a year ago, Peak 10 completed a 2,500 square foot addition to its second data center on its Norcross campus to accommodate additional customers.
“We have been very selective with this expansion in Atlanta,” said David Jones, the president and CEO of Peak 10. “We will be investing significant capital into this site, a campus that will afford Peak 10 exceptional expansion capabilities. We anticipate continued exponential growth for our Atlanta operations and the Windward/Alpharetta area affords us all of the essentials for demand growth, stable power infrastructure, multiple fiber carriers and access to a technology-oriented employment community.”
Big Data is big business in Atlanta. According to Reuters, Big Data will grow by 45% annually to reach a $25 billion industry by 2015. Peak 10 believes Atlanta is uniquely positioned to capitalize on this growth trend due to its diversity of data-centric sectors, from transportation to finance, logistics, retail, and healthcare.
A History of Calculated Expansions
Peak 10 has been around since 2000, safely navigating the dotcom bubble as well as the major economic downturn in 2008. How has the company survived, and even thrived? Historically it has followed a conservative game plan, choosing to focus on smaller, regional data centers rather than speculative builds. It staffs these regional data centers with local talent that knows the area, quickly establishing Peak 10 as a staple in local business communities. The company makes smart, calculated plays in markets showing potential.
Peak 10’s product strategy has evolved with the times, offering a unique mix of colocation, managed hosting, and most recently, cloud. It provides tailored solutions, often winning a piece of a customer’s IT infrastructure and further growing the relationship as time goes on.
The company survived by avoiding the irrational exuberance displayed by many providers. In the early 2000s, it began to target SMBs in addition to Fortune 500 companies, and now the company is equipped to accommodate a broad range of customers. In 2004, it brought managed services in house. In the last 10 or so years it has expanded to several regional markets with high demand.
Peak 10 has continued to add capacity in its core markets while expanding selectively into new markets. With each expansion to a new market came the hiring of local professionals and a general manager that knew the scene.
Expanding in Alpharetta
Alpharetta is a northern suburb of Atlanta that has become a popular destination for data center operators. T5 Data Centers, ByteGrid and BlackBerry all have data centers in Alpharetta.
“We chose Alpharetta for our expansion because of its strong commitment to the IT industry,” said Angela Haneklau, vice president and general manager for Peak 10’s Atlanta operations. “We look forward to serving as IT advisors and trusted business partners to the organizations in the North Fulton County technology and business communities and beyond.”
“The City of Alpharetta is excited to welcome Peak 10,” said Peter Tokar, economic development director for the City of Alpharetta. “As a major hub for technology and Big Data in the Metro Atlanta Region, Peak 10’s location in Alpharetta will provide yet another wonderful asset to our business community and our network of high tech companies.”
Its conservative roots gave the company a strong reputation. When Peak 10 chooses to expand, it’s because the company knows there is the demand to fill the space. Predominantly located in the Southeast, the company has been expanding westward in the last few years. Expansion has picked up significantly in the last five years, so it’s hard to still call it a conservative company; smart is more accurate. Conservative roots and a repeatable approach to establishing itself in new markets have made the company a major contender.
2:36p
Data Acceleration: A Game Changer for Converged Infrastructure
Boosting the performance of converged platforms is a potential game-changer for the data center industry.
Many organizations are seeking to create a more powerful infrastructure capable of doing more with a lot less. The digitization of the business world has forced IT administrators to rethink how they deploy their data centers.
This is what makes the Cisco acquisition of solid state memory specialist Whiptail so interesting. Right now, many high-end cloud shops will argue that there is a gap between what some integrated systems can provide and the performance required to run new types of workloads. For example, business intelligence and Big Data systems need a lot of compute and storage power to process vast amounts of information. In many cases, these resources are pulled from spinning disks or separate solid-state arrays. So why not integrate the entire process together? Why not create systems that can deliver powerful resources directly to the data and the application?
According to the news release from Cisco – “With the acquisition of Whiptail, Cisco is evolving the UCS architecture by integrating data acceleration capability into the compute layer.”
Yes, Cisco just shifted the playing field when it comes to the cloud and providing a unified front around storage, networking and compute. However, the really interesting part is what will happen with the industry once other large hardware makers realize that this is the way to go.
- Less Data Center Real Estate. Virtualization and high-density computing brought very real benefits from data center consolidation. We were simply able to place more users on better multi-tenancy systems. Now we’re taking that entire process to a new level. Already, converged infrastructures are becoming the foundation of cloud-ready systems. Couple that with flash-based processing power and you’ve got a whole new type of platform. Imagine this entire powerhouse of a system under one unified roof. This isn’t some little platform we’re talking about either. The Whiptail model allows for scaling from one node up to 30 nodes. From there, it can deliver over four million IOPS and 360 terabytes of raw capacity (a quick per-node breakdown follows this list). That is a truly staggering amount of resources that can now be delivered directly to applications, data and the user.
- Complete Unified Computing. Take all of your computing, storage and resource needs and place them under one unified computing platform. That’s it. You’re done. Manufacturers like Cisco clearly see that the future of cloud computing and almost all modern technologies will directly revolve around the capabilities of the data center. More systems will be placed within a data center model as this hub becomes the home of the Internet of Everything. Applications, data, workloads, and entire desktops are now being delivered from a public or private data center model. As more users join the cloud and virtualization revolution, there will need to be core, unified computing resources capable of handling this demand. This means greater levels of networking throughput, more consolidated computing power, and delivering data and applications from solid-state and flash resource pools.
- Creating the Micro-Cloud. With even greater amounts of density and processing power, the converged infrastructure is bound to get even more compact. Soon, it will be very feasible for organizations to deploy micro-cloud environments to extend their infrastructure to branches and remote offices. Furthermore, with micro-cloud capabilities, IT shops will be able to deliver even greater amounts of content to the end user. By incorporating caching and better methods of WAN optimization (WANOP), a converged infrastructure can help an organization extend its platform to the edge. These smaller systems will be built around solid-state technologies and incorporate a massive amount of compute and networking power. This amount of throughput, bandwidth and resource availability will create an even more robust cloud network.
- Big Data = Not a Big Problem. One of the big problems around big data and the ability to quantify massive amounts of information was the processing demand it placed on the compute and storage platforms. Now, with solid-state technologies built directly into a converged platform, we’re suddenly delivering quite a few direct IOPS to these big data workloads. Instead of having separate storage arrays for big data engines, the process can be built directly into a converged system. Not only that, big data processing can also be incorporated into the micro-cloud platform. Organizations that are widely distributed can utilize converged components at the edge to process and quantify critical data to make the right types of business decisions.
- Future Converged Infrastructure. The acquisition of Whiptail means a new type of Cisco UCS technology. It’s going to create a truly powerful platform with very diverse capabilities. This means that other converged infrastructure vendors are going to be examining their designs as well. IBM and HP are also aiming to create the next-generation in converged platform computing while storage vendors like EMC and NetApp continue to blaze their way into the flash and solid-state market. With this much movement around the unified computing platform, don’t be surprised if you see even more technological evolution around platforms like Violin Memory, Nimbus Data, Pure Storage, Kaminario and others. Also, don’t be surprised if some more big purchases or acquisitions are made around this technology as well.
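For the Whiptail figures cited in the first item, the per-node arithmetic is straightforward (assuming roughly linear scaling across nodes, which the published totals imply but do not guarantee):

```python
# Per-node breakdown of the published Whiptail scaling figures cited above.
# Assumes roughly linear scaling across nodes, which is an approximation.
max_nodes = 30
total_iops = 4_000_000
total_raw_tb = 360

print(f"~{total_iops / max_nodes:,.0f} IOPS per node")   # ~133,333 IOPS
print(f"{total_raw_tb / max_nodes:.0f} TB raw per node")  # 12 TB
```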
The converged infrastructure model makes sense. We’re able to put more under one roof and utilize fewer resources to deliver powerful platforms. Now, more organizations are finding the power of solid-state technologies to help offload certain types of processes. Here’s what you need to understand:
- Pricing for solid-state systems will continue to become more affordable.
- The reliability of flash and solid-state platforms is only improving.
- Entire cloud platforms and workloads are being designed to run on flash arrays.
- Storage pools and tiered data designs are helping organizations make the most of their storage environment.
Now you’re able to direct data to the appropriate type of storage while maintaining optimal user experience. As the data center model continues to evolve, the converged infrastructure platform will sit square in the middle of it all. Already, we are seeing the modern data center become the home of “everything” that is IT. Moving forward, both private and public data center models will be tasked with supporting even more users – and a lot more data. All of this will translate to the need for even greater resource optimization around the next-generation converged infrastructure platform.
3:09p
With New Analytics Unit, IO Focuses on the Software-Refined Data Center
IO’s software can provide a deep dive into data center operations. It is now using that data to develop intelligent solutions to boost customer efficiency. (Image: IO)
IT infrastructure provider IO is launching a new division called IO.Applied Intelligence that leverages its DCIM capabilities and may eventually lead to new products and services. The company continues to build out its vision for the software-defined data center. The new division will create new value-added services such as bill forecasting, capacity management and simulation, and will make product recommendations to IO’s hardware and software development teams.
IO.Applied Intelligence brings in some heavy thinkers in engineering and data analytics to improve product performance for customers and generally squeeze inefficiencies out of IT.
“The data center is the ideal place to fundamentally, comprehensively and enduringly address today’s IT and sustainability challenges,” said George Slessman, IO Chief Executive Officer and Product Architect. “Finding more efficient ways to operate digital infrastructure and intelligently manage demand will bring economic, environmental and social gains. As a result, IO.Applied Intelligence is positioned to attract the world’s most talented data experts – great minds who have a passion to solve energy and information challenges.”
A Software-Defined Era
IO.Applied Intelligence will leverage huge amounts of operating data collected by IO.OS, the company’s data center operating system, to develop and deliver capabilities in data mining and visualization, predictive modeling and simulation.
“We live in a software-defined era,” said Patrick Flynn, Group Leader, Applied Intelligence & Sustainability at IO. “The value of IT going forward will be driven by software; even hardware-design enhancements will be software derived and data driven. Data has brought smarter systems to city planning, logistics, and healthcare, but only now are we bringing that same intelligence to the design and operations of digital infrastructure itself.”
IO.Applied Intelligence will first reach customers via enhancements to the IO converged technology platform, which includes IO.Anywhere modules and the IO.OS software. Over time, product features and simulation services may be sold separately. Deployments are custom priced based on scope and delivery options.
Among the programs already underway at IO.Applied Intelligence are:
- Assessing power, cooling, networking and space capacity within IO’s global Data Center as a Service (DCaaS) footprint to improve performance;
- Composing analytical models to quantify value and cost for the data center environment to drive more efficient usage and operations;
- Working with IO’s CSO Bob Butler to continually evolve physical and logical security associated with the converged IO technology platform; and
- Partnering with leaders in data analytics including McLaren Applied Technology, leading consultants to the IO.Applied Intelligence research team.
IO remains focused on the software-defined data center and continues to innovate. The company says IO technology lowers the total cost of data center ownership compared to traditional data centers, enabling dynamic deployment and intelligent control based on the needs of IT equipment and applications in the data center.
3:42p
SGI Expands Big Data Portfolio
Expanding its portfolio of big data solutions, SGI introduced InfiniteData Cluster, SGI ObjectStore and SGI LiveArc AE for Infinite Storage Gateway.
“As businesses tackle the rising volume, velocity, and variety of big data, they face a growing challenge – how to unlock value at greater speed, scale and efficiency,” said Jorge Titinger, president and CEO of SGI. “SGI’s expertise in designing and building some of the world’s fastest supercomputers enables customers to fully optimize High Performance Computing for big data analytics to achieve business breakthroughs. As a Top 10 storage provider, and with many years’ experience helping customers manage some of the world’s largest data environments cost efficiently, we are applying our expertise in big data storage to deliver solutions increasingly needed in today’s enterprise.”
InfiniteData Cluster
With Intel Xeon E5-2600 v2 processors and up to twelve 4TB drives per tray, the InfiniteData Cluster delivers a 1:1 core-to-spindle ratio to optimize Apache Hadoop software. With high-speed interconnects it provides for up to 40 nodes and 1.9 petabytes per rack – more than twice the compute and storage density per footprint of other HPC solutions for big data analytics. Out of the box, InfiniteData solutions are pre-integrated with Cloudera Hadoop running on Red Hat Linux and SGI Management Center. SGI is also making an InfiniteData Cluster available online, pre-configured with Red Hat Enterprise Linux and Cloudera Hadoop, for customers to upload their own data and explore big data analytics. SGI’s Hadoop Sandbox is targeted for availability at year end.
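Those density figures are internally consistent: assuming one twelve-drive tray per node (an assumption consistent with the 1:1 core-to-spindle claim, not a confirmed SGI configuration), a fully populated 40-node rack works out to the quoted ~1.9 petabytes:

```python
# Consistency check on the quoted InfiniteData rack density, assuming one
# 12-drive tray per node (an assumption, not a confirmed SGI configuration).
nodes_per_rack = 40
drives_per_node = 12
drive_tb = 4

raw_tb_per_rack = nodes_per_rack * drives_per_node * drive_tb
print(raw_tb_per_rack, "TB raw per rack")  # 1920 TB, i.e. roughly the 1.9 PB quoted
```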
ObjectStore
The new SGI ObjectStore integrates Scality RING software, which SGI offers under an OEM agreement, with its Modular InfiniteStorage Server hardware. ObjectStore provides a proven, object-based, “scale-out” storage solution for petabyte environments. The RING peer-to-peer architecture allows SGI’s storage environment to overcome the strain massive data volumes place on conventional storage file systems. The ObjectStore system architecture delivers a shared storage pool supporting thousands to millions of users, with limitless file size and quantity, and performance rivaling block-based storage. Nodes can be added at any time to increase capacity – without interrupting users or performance levels.
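The property worth unpacking is how a peer-to-peer ring lets capacity be added without disruption. A common way to achieve that is consistent hashing: each object key maps to a position on a ring of nodes, so adding a node only remaps the keys between it and its neighbor. The sketch below is a generic illustration of that idea, not Scality’s actual RING implementation:

```python
# Generic consistent-hashing sketch: adding a node only remaps the keys that
# fall between the new node and its ring neighbor. This illustrates the general
# peer-to-peer ring idea, not Scality's actual RING implementation.
import bisect
import hashlib

def ring_position(name: str) -> int:
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((ring_position(n), n) for n in nodes)

    def add_node(self, node):
        bisect.insort(self.ring, (ring_position(node), node))

    def node_for(self, key):
        pos = ring_position(key)
        idx = bisect.bisect(self.ring, (pos, "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
before = {k: ring.node_for(k) for k in (f"object-{i}" for i in range(1000))}
ring.add_node("node-d")
moved = sum(1 for key, node in before.items() if ring.node_for(key) != node)
print(f"{moved} of 1000 objects remapped after adding a node")
```

Only a fraction of the objects move when a node joins, which is why expansion can happen without taking the storage pool offline.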
LiveArc AE
SGI is introducing SGI LiveArc AE, a special “appliance edition” of SGI LiveArc software to be embedded in the Gateway before year end. Delivering intelligent management for active archives, LiveArc AE provides fast search, facilitates compliance, and helps achieve disaster recovery. LiveArc will automatically index metadata and file content for infrequently accessed data, enabling users to quickly search across the archive using free-text queries to locate needed files. Version history can be captured with audit trails. Write Once Read Many (WORM) functionality, such as restricting file deletion or changes, can be implemented, and data retention periods can be set. Remote replication of archived data can be automated based on policies, with inline encryption for security and file-level de-duplication to reduce storage consumption.
4:03p
Savvis Introduces Big Data Solutions Around Hadoop
Savvis, a CenturyLink company, today announced the availability of Savvis Big Data Solutions, a suite of services to help organizations drive value from their data. This new offering combines existing Savvis services with new services.
“If you think about it from a very simple standpoint, what the foundation service is, is a fully managed Hadoop distribution,” said Milan Vaclavik, Senior Director and Solution Lead for Savvis’ Big Data Solutions.
Savvis Big Data Solutions gives enterprises and government organizations access to the compute, storage and high-bandwidth network capabilities required to power virtually any analytics application. The suite includes Savvis’ managed services for Cloudera and MapR platforms based on Apache Hadoop.
“We’ve had the network. We’ve done a lot with cloud services. The IaaS (Infrastructure as a Service) isn’t new. The physical boxes and the actual configuration isn’t the new piece,” said Vaclavik. “The new piece is that we’re taking all of what we’ve done and bringing that to big data. That Hadoop piece is really the new piece.”
Despite strong interest in Hadoop in the marketplace, confusion remains about how best to leverage it. A lot of customers want it, but don’t necessarily fully understand how to successfully implement it. “We’re approaching this in a workshop manner to come up with recommendations. This is not one size fits all,” said Vaclavik. “We need to understand data usage and expectations and what the customer wants to get out of their big data implementation. If there’s a scenario where Hadoop is not a proper fit, then we won’t suggest it. But this is more the exception to the rule.”
Savvis Big Data Solutions offers hosted, fully managed hardware and software services for optimizing data storage, integration, retrieval and analysis through:
- Enterprise-grade Infrastructure-as-a-Service capabilities, including scalable compute and storage platforms, software and security services;
- Secure, high-bandwidth network connectivity for accessing, integrating and processing massive amounts of data;
- Software licensing and operations management for Cloudera and MapR distributions of Apache Hadoop, including configuration, monitoring, upgrades and security;
- Big data planning and implementation services, including environment design, security planning and project management; and
- Consulting services for client business-case development.
Big Data Confusion Still Abounds
“Big data, big definitions, is what I like to say,” said Vaclavik. “There’s all kinds of definitions of what big data is. You put five people in a room, they’ll give you ten answers. The truth is that it’s very specific for particular customers. They might not have done an internal analysis of what they need. That’s one way we can help. This is more around business case development, to recommend an environment and establish what they need from an internal standpoint.”
Part of this offering is a consulting piece. “We’ll go in and help,” said Vaclavik. “Analytics are reaching ‘phase two’ and customers are looking to better leverage their data. Customers are also looking to offload data from more expensive data warehousing and bring it into Hadoop from a cost savings perspective.”
An Entry Point to Big Data
“One thing I want to emphasize is that this is our entry point to big data,” said Vaclavik. “We recognize that this is not a complete big data solution. When you start laying in analytics is when you get the full value. We will be enhancing the solution to include a broader set of services, including analytics. As we build these capabilities, we’ll be partnering with providers to bring these solutions to customers.”
He envisions two types of customers leveraging these solutions. The first is the customer that wants Savvis to host and manage its big data solution. The second is more hands-on: customers who need help implementing something and want to do so in a managed hosting setup.
“One of our key differentiators is that we’re bringing in network, managed services, all of our expertise and applying it to big data,” said Vaclavik. “The marriage between Savvis and CenturyLink has been great, and has greatly benefited the network piece in particular.”
6:15p
Datagram Expands to Higher Ground at Intergate.Manhattan
Hosting provider Datagram has opened data center space within Intergate.Manhattan, the new high-rise data center redeveloped by Sabey Corp. (Photo: Sabey)
New York hosting provider Datagram has expanded into space at Intergate.Manhattan, the new high-rise data center from Sabey Data Centers, the company said today.
Datagram was one of the companies hit hardest by Superstorm Sandy. The storm surge flooded the basements of its primary data center building at 33 Whitehall, knocking that location offline for days (see After Sandy: Datagram Recovers From Apocalyptic Flood). At the time, the company said it would take steps to improve the storm-readiness of its infrastructure, moving critical equipment to higher floors of 33 Whitehall and adding a second location in lower Manhattan.
Just under a year later, Datagram has completed commissioning its new Tier III space on the upper floors at Intergate.Manhattan at 375 Pearl Street. Datagram’s power will be delivered from four separate utility services with diverse points of entry into the building. The data center has N+1 redundancy built throughout the power supply chain, starting from the transfer switches, down to the uninterruptible power systems, power distribution units, generators and fuel farm systems. Datagram will use hot aisle containment to support high-density workloads of up to 20kW per cabinet.
“For twenty years Datagram has leveraged innovative technologies in order to improve and expand on the services we offer,” said Alex Reppen, CEO & Founder of Datagram. “Our customers expect us to be on the cutting edge of technology, and this expansion allows us to do just that.”
Upgraded Infrastructure at 375 Pearl
The building at 375 Pearl Street was developed in 1975 as a Verizon telecom switching hub and later served as a back office facility. Sabey acquired the property in 2011 and has redeveloped the building from the ground up, updating the power and mechanical infrastructure to support high-density hosting as well as traditional telecom use.
“We want to take full advantage of the infrastructure at 375 Pearl St,” added Mohammad Soliman, COO of Datagram. “We’re building a 24×7 Network Operations Center and extending our fully meshed dark fiber backbone into the building; there’s nothing we can’t offer at this new facility.”
Intergate.Manhattan came through Superstorm Sandy without any issues. The basement level at 375 Pearl remains more than a dozen feet above the high water mark seen during Sandy, but Sabey said it is taking no chances and has equipped the basement-level fuel depot with submersible pumps.
Datagram has begun its buildout at 375 Pearl St. and expects to be fully operational by the end of the year. The expansion marks the company’s fourth data center location in Manhattan, with additional facilities in New Jersey, Connecticut and California.
Sabey Data Centers now operates 3 million square feet of data center space as part of a larger 5.3 million square foot portfolio of owned and managed commercial real estate. The company has developed a national fiber network to connect its East Coast operations with its campuses in Washington state, where it is the largest provider of hydro-powered facilities. Sabey’s data center properties include the huge Intergate.East and Intergate.West developments in the Seattle suburb of Tukwila, the Intergate.Columbia project in Wenatchee and Intergate.Quincy.