Data Center Knowledge | News and analysis for the data center industry
Monday, July 27th, 2015
12:00p
Bloomberg Data Centers: Where the “Go”s Go
The kind of resiliency test Hurricane Sandy forced on Bloomberg’s Manhattan data center is not a test John O’Connor wants to go through again. As the storm surge in New York City in late October 2012 was flooding the streets of lower Manhattan, the water level in the facility’s basement reached 18 inches at one point.
There were fuel tanks and fuel controls in the basement, all of which could have easily malfunctioned had any more water entered the building, but “mother nature allowed us to win that battle,” O’Connor, manager of data center operations at Bloomberg, recalls. They were able to keep water from rising further; the facility switched to generator power and hummed through the crisis with no incident.
But the episode made upper management at Bloomberg “uncomfortable enough to write a big check” to build a new data center in the quiet New York suburb of Orangetown, as far away from Manhattan as practically possible. “We wanted to have more baskets to put our eggs in,” O’Connor said.
The data center came online this spring. The company hasn’t shut down the downtown facility, which has been in operation for about 18 years, but O’Connor’s team has been moving a lot of the workloads from there to the new one.
Where the “Go”s Go
Bloomberg’s data centers support the company’s bread-and-butter professional service, which today has more than 325,000 subscribers. Those subscribers pay the company “for this always-on, always-fast access to data and analytics about all things money,” O’Connor said.
He likes to say Bloomberg’s data centers are “where the ‘Go’s go,” referring to the “Go” key that replaces “Enter” on keyboards that come with the service. “When they hit ‘Go,’ it’s coming back to the data center to get the answers.”
Three Hot Sites
The older facility in Manhattan, the new one in Orangetown, and another one elsewhere in the New York metro are the three primary Bloomberg data centers. All three are “hot,” which means workloads can be quickly shifted from site to site as needed.
The load is split among the sites, sometimes 50-50 between two sites, and sometimes one-third per site. “It’s all designed to be very flexible,” O’Connor said.
If one of the sites goes down, some functionality – the top-tier workloads – will fail over automatically, and some will need to be transferred manually by data center operators.
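As a rough illustration of that split (this is not Bloomberg’s actual tooling; the workload names, tiers, and helper function below are hypothetical), tiered failover logic can be sketched like this:

```python
# Hypothetical sketch only: top-tier (tier 1) workloads fail over automatically
# to another hot site, everything else is queued for operators to move by hand.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    tier: int   # 1 = top tier (automatic failover), 2+ = manual transfer
    site: str   # site currently hosting the workload

def handle_site_failure(workloads, failed_site, healthy_sites):
    """Return (auto_moved, manual_queue) after a site outage."""
    auto_moved, manual_queue = [], []
    for w in workloads:
        if w.site != failed_site:
            continue
        if w.tier == 1:
            w.site = healthy_sites[0]   # shift to another hot site automatically
            auto_moved.append(w)
        else:
            manual_queue.append(w)      # left for operators to transfer manually
    return auto_moved, manual_queue

workloads = [Workload("terminal-queries", 1, "manhattan"),
             Workload("batch-reports", 2, "manhattan")]
auto, manual = handle_site_failure(workloads, "manhattan", ["orangetown", "third-site"])
print([w.name for w in auto], [w.name for w in manual])
```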
A Telco-Like Global WAN
Bloomberg’s own dedicated fiber infrastructure interconnects all three facilities to make failover faster. Given the nature of its business, the company invests a lot in network infrastructure. In addition to the three primary East Coast data centers, it has more than 100 nodes in data centers around the world, all communicating with the core infrastructure through Bloomberg’s sizable global Wide Area Network. “We’re essentially a telco provider,” O’Connor said.
The company uses other carriers’ physical fiber for connectivity between the nodes and the primary sites, but “’other’ is the wrong word,” he said. “All [carriers] is probably more accurate.” Each Bloomberg data center has two sizable network meet-me rooms where carriers interconnect with each other and with Bloomberg’s infrastructure. Most data centers only have one.
Contain Hot Air, Use Free Cooling, Put VFDs on Everything
The company expects its new data center to be 20 percent more energy efficient than the one in Manhattan. There is nothing extraordinary about the facility’s design that enables it to get to that level of efficiency, O’Connor said. His team simply followed as many efficiency best practices as they could find and implement.
The 7 MW facility has all the latest and greatest IT gear (new hardware is usually a lot more efficient than preceding generations) and uses as much free cooling as possible at any given point in time. But the biggest difference in efficiency is made by complete isolation of hot and cold air, O’Connor said.
Exhaust air escapes servers and rises into an overhead plenum through chimneys attached to the tops of the cabinets. It means the air that comes into the servers can be warmer than usual, which reduces the cooling system’s energy use. “The whole room is a cold aisle, and it doesn’t have to be as cold, because there’s no hot air mixing,” he said.
There are variable-speed drives “on everything,” and the system adjusts the amount of mechanical cooling automatically, based on outside-air temperature. It gets weather data from its own simple weather station on the building’s roof.
Automating this piece of data center management and having better weather data from a local weather station (as opposed to getting data from the nearest airport) enables the facility to use more free cooling and save energy. “If you can get an extra hour of free cooling – and you do that several times a year – that’s extra money,” O’Connor said.
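A minimal sketch of that kind of control loop, assuming a simple setpoint and deadband rather than whatever logic Bloomberg actually runs, might look like this:

```python
# Illustrative only: choose how much mechanical cooling to run based on the
# outside-air temperature from a local weather station. The setpoint, deadband,
# and linear ramp are hypothetical values, not Bloomberg's.
def cooling_split(outside_temp_c, setpoint_c=24.0, deadband_c=3.0):
    """Return (free_cooling_fraction, mechanical_fraction)."""
    if outside_temp_c <= setpoint_c - deadband_c:
        return 1.0, 0.0                  # cool outside air: economizer only
    if outside_temp_c >= setpoint_c:
        return 0.0, 1.0                  # warm outside air: mechanical cooling only
    mech = (outside_temp_c - (setpoint_c - deadband_c)) / deadband_c
    return 1.0 - mech, mech              # partial free cooling in between

for temp_c in (15.0, 22.5, 28.0):
    print(temp_c, cooling_split(temp_c))
```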
Keeping in Step With Peers
Like their counterparts at other big players in the financial-services market, Bloomberg’s IT and data center teams spend quite a bit of time experimenting with new technologies. The company has a “sizable” private OpenStack cloud, which is currently used for development and some minor customer-facing services, but there’s every intention to shift other types of workloads in that direction as the technology matures. All new applications Bloomberg is developing are created for elastic environments, O’Connor said.
Bloomberg has also “dabbled” in hardware designed to Open Compute Project specifications. OCP is a Facebook-led open source data center and hardware design initiative. There are some Open Compute servers running at Bloomberg, O’Connor said. The company is actively involved in OCP, and he expects to submit proposals for a rack design to the project later this year.
Like other participants in OCP, Bloomberg is involved in the project to drive the open source community in the direction that fits its needs, he explained.
Green Story Born Out of a Crisis
Sandy, doubtless, made lots of organizations with critical infrastructure located in downtown New York – and perhaps in other low-lying coastal areas – take a hard look at their choice of location. It’s hard to say Bloomberg would not have built a new data center in the suburbs had it not been for the hurricane and the flood that followed; companies expand infrastructure periodically and for different reasons, and more and more choose to build data centers outside of major cities. More often than not they do it for reasons other than natural disasters (mostly economic ones).
But Sandy was clearly a catalyst in Bloomberg’s case. As a result, its services gained in resiliency, and the company scored some “green” points for building a facility with advanced energy-efficiency features. It is one of the first data centers to receive certification under the US Green Building Council’s newest LEED v4 benchmark, which includes some requirements unique to certain types of buildings, including data centers. Bloomberg’s Orangetown data center received LEED Gold earlier this year.

3:00p
Storing All That Data: Web-Scale Solutions for a Connected World
Stefan Bernbo is Founder and CEO of Compuverde.
A recent data forecast from Cisco predicts that mobile data traffic will grow ten-fold globally from 2014 to 2019 – a compound annual growth rate of 57 percent. The same forecast expects 57 percent of mobile connections to be “smart” connections by 2019, up from 26 percent in 2014. Add the growth of mobile devices and cloud-based services, and it’s enough to make a database administrator’s head spin trying to figure out where all that data is going to be stored.
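A quick back-of-the-envelope check shows those two headline figures line up: 57 percent compound annual growth over five years is roughly a ten-fold increase.

```python
# Sanity check of the quoted forecast: 57 percent CAGR from 2014 to 2019.
cagr = 0.57
years = 2019 - 2014
print(round((1 + cagr) ** years, 1))   # ~9.5x, i.e. close to "ten-fold"
```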
It is obvious that more storage will be required than traditional architectures can provide.
These architectures have bottlenecks that, while merely inconvenient for legacy data, are simply untenable for the scale of storage needed today. To adapt to this exponential growth trajectory, major enterprises are deploying web-scale architectures that enable virtualization, compute and storage functionality on a tremendous scale.
Overcoming the Single-Point-of-Entry Challenge
A bottleneck that functions as a single point of entry can become a single point of failure, especially with the demands of cloud computing on Big Data storage. Adding redundant, expensive, high-performance components to alleviate the bottleneck, as most service providers presently do, adds cost and complexity to a system very quickly. However, a horizontally scalable web-scale system designed to distribute data among all nodes makes it possible to choose cheaper, lower-energy hardware while eliminating bottlenecks.
This is a huge win for cloud providers, which must manage far more users and greater performance demands than do enterprises. While the average user of an enterprise system demands high performance, these systems typically have fewer users, and those users can access their files directly through the local network. Furthermore, enterprise system users are typically accessing, sending and saving relatively low-volume files like documents and spreadsheets, using less storage capacity and alleviating performance load.
Outside the enterprise, though, the situation is quite different. The system is being accessed simultaneously over the Internet by exponentially more users, which itself becomes a performance bottleneck. The cloud provider’s storage system not only has to scale to each additional user, but must also maintain performance across the aggregate of all users. Significantly, the average cloud user is accessing and storing far larger files – music, photo and video files – than does the average enterprise user. Web-scale architectures are designed to prevent the bottlenecks that this volume of usage causes in traditional legacy storage setups.
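One common way to get that property of data distributed among all nodes with no single point of entry is consistent hashing. The sketch below is a generic illustration of the idea, not Compuverde’s implementation; the node names, replica count, and hash choice are arbitrary. Because every node can compute the same placement, a request can enter the system anywhere without passing through a central gateway.

```python
# Minimal consistent-hashing sketch: the object's key determines which nodes
# store it, so any node can accept a request. Not a production implementation.
import hashlib
from bisect import bisect

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Place several virtual points per node on a hash ring for even spread.
        self.ring = sorted(
            (self._hash(f"{node}-{i}"), node) for node in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def nodes_for(self, key, replicas=3):
        """Walk the ring clockwise and return the first `replicas` distinct nodes."""
        idx = bisect(self.keys, self._hash(key)) % len(self.ring)
        found = []
        while len(found) < replicas:
            node = self.ring[idx % len(self.ring)][1]
            if node not in found:
                found.append(node)
            idx += 1
        return found

ring = HashRing([f"node-{i}" for i in range(8)])
print(ring.nodes_for("user42/video.mp4"))   # e.g. ['node-3', 'node-6', 'node-0']
```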
Scaling Storage Economically
Freedom from reliance on hardware is important for web-scale architecture. Since hardware inevitably fails at a number of points within the machine, traditional appliances – storage hardware that has proprietary software built in – typically include multiple copies of expensive components to anticipate and prevent failure. These extra layers of identical hardware extract higher costs in energy usage and add layers of complexity to a single appliance. Because the actual cost per appliance is quite high compared with commodity servers, cost estimates often skyrocket when companies begin examining how to scale out their data centers. One way to avoid this is by using software-defined vNAS or vSAN in a hypervisor environment, both of which offer ways to build out servers at a web-scale rate.
Solving Problems at the Storage Level
To accommodate web-scale architecture, distributed storage offers the best model – even though the trend has been to move toward centralization. This is because there are now ways to improve performance at the software level that neutralize the performance advantage of a centralized data storage approach.
To minimize load time, service providers need to be able to offer data centers located across the globe, since users can access cloud services from anywhere at any time. Global availability, however, brings a number of challenges. Load concentrates in the data center serving a user’s region, yet all data stored in all locations must stay in sync. From an architecture point of view, it’s important to solve these problems at the storage layer instead of further up at the application layer, where they become more difficult and complicated to solve.
Events like natural disasters that cause power outages can take a local server farm offline, which means that global data centers must be resilient. If a local data center or server goes down, global data centers must reroute data quickly to available servers to minimize downtime. While there are certainly solutions today that solve these problems, they do so at the application layer. Attempting to solve these issues that high up in the hierarchy of data center infrastructure – instead of solving them at the storage level – presents significant cost and complexity disadvantages. Solving them directly at the storage level through web-scale architectures delivers significant gains in efficiency, time, and cost.
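As a simple illustration of rerouting handled at the storage level (the site names and latencies below are invented, and a real system would also weigh replication state and consistency), a storage client might pick the nearest healthy site like this:

```python
# Illustrative sketch: send each request to the lowest-latency site that is
# still reachable, falling over to the next-nearest site during an outage.
SITE_LATENCY_MS = {"us-east": 12, "eu-west": 85, "ap-south": 190}   # made-up values

def pick_site(healthy_sites):
    """Choose the lowest-latency site among those currently reachable."""
    candidates = {site: SITE_LATENCY_MS[site] for site in healthy_sites}
    if not candidates:
        raise RuntimeError("no storage site available")
    return min(candidates, key=candidates.get)

print(pick_site({"us-east", "eu-west", "ap-south"}))   # normal operation: us-east
print(pick_site({"eu-west", "ap-south"}))              # us-east down: eu-west
```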
Future-Proofed Through Web-Scale
The exponential demand for more storage means that companies can no longer rely on expensive, inflexible appliances in their data centers and remain financially viable. Otherwise, they will be forced to lay out significant funds to develop the storage capacity they need to meet customer demand.
Having an expansive, rigid network environment locked into configurations determined by an outside vendor severely curtails the ability of the organization to react nimbly to market demands, much less anticipate them in a proactive manner. Web-scale storage philosophies enable major enterprises to “future proof” their data centers. Since the hardware and the software are separate investments, either may be switched out to a better, more appropriate option as the market dictates, and at minimal cost.
This new model is necessarily the future of storage for organizations faced with the volumes of data that the modern world presents. Software-defined storage and hyper-converged infrastructures create an agile and cost-effective framework for major enterprises, global organizations and internet service providers to serve their constituents with high performance in a distributed framework that won’t break the bank.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:31p
Telx Joins Group Building US-Asia Submarine Cable System
Moving to give customers more options for connecting data center facilities on the West Coast of the US with Asia, Telx announced support for a new transpacific cable system called Faster. The network of submarine cables will span the Pacific Ocean by 2016, and a Telx data center in the Pacific Northwest will be one of its termination points.
The Faster consortium promises to provide access to the submarine cable system that’s being developed by China Mobile International, China Telecom Global, Global Transit, Google, KDDI, and Singtel. The consortium announced the $300-million project about one year ago. The system is expected to be operational by the second quarter of next year.
New York-based Telx has been a national player and has never had data centers in Asia. But that will change once the acquisition of Telx by its new parent company Digital Realty closes. The $1.9-billion deal was announced earlier this month. Digital has a substantial presence across the Asia-Pacific region, including data centers in Singapore, Hong Kong, Osaka, Sydney, and Melbourne.
Don Schuett, vice president of business development and strategy for Telx, said that by connecting Faster cables to a distribution Point of Presence (PoP) in the PRT1 data center that Telx operates in Hillsboro, Oregon, customers will have access to an initial capacity of 60 terabits per second.
“Most of the cables crossing the Pacific terminate at a carrier facility,” said Schuett. “We want to be able to give customers access to a carrier-neutral option that terminates inside our facilities.”
Since 2007 the volume of traffic moving between the US and Asia has increased 37 percent. As more businesses become truly global, conducting business in real time across multiple countries, the volume of network traffic moving across the Pacific is only expected to increase, said Schuett.
In fact, scaling up to meet those demands is one of the primary factors driving a wave of billion-dollar mergers involving companies like Telx and Digital. In addition, the nature of the traffic is changing, as organizations begin to embrace multimedia technologies needed to support, for example, video. As such, IT organizations are looking for more direct access to cabling systems around the globe.
While telephone companies have been laying submarine cable between continents since the 19th century, the Faster consortium is laying fiber-optic cable carrying 100 wavelengths at 100 gigabits per second each across the world’s deepest ocean. That requires a new lightweight, polyethylene-sheathed cable that is only 17 millimeters thick.
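Those figures square with the 60 terabits per second quoted above if, as publicly reported, the system carries six fiber pairs:

```python
# Back-of-the-envelope check of the quoted capacity. The six-fiber-pair count
# is taken from public descriptions of the system, not from this article.
fiber_pairs = 6
wavelengths_per_pair = 100
gbps_per_wavelength = 100
print(fiber_pairs * wavelengths_per_pair * gbps_per_wavelength / 1000, "Tbps")   # 60.0 Tbps
```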
Once operational, that level of capacity should not only make it more feasible for organizations to expand their IT operations into Asia, but conversely make it possible for many companies that operate in, for example, China to expand their IT operations into North America. The end result would be a rapid acceleration of globalization, enabled by direct access to unprecedented amounts of network bandwidth at costs that should be substantially lower than what most carriers provide today.

6:08p
Broadcom Controllers Offload Virtual Switch Traffic from Servers
Moving to provide IT organizations with a way to offload virtual switch traffic from servers, Broadcom today unveiled 10G/25G/40G/50G Ethernet controllers, called the NetXtreme C-Series, aimed at data center environments that need to make the most efficient use of server capacity possible. The controllers make use of a Truflow processing engine developed by Broadcom.
Relying on servers to process virtual switch traffic amounts to a waste of valuable CPU capacity that could be better allocated to applications that generate actual revenue, said Jim Dworkin, director of product marketing for controllers at Broadcom.
Rather than continuing to buy 10G and 40G Ethernet technologies, Dworkin said, some data center operators are already making a rapid shift to 25/50G Ethernet technologies that offer significantly better price-performance ratios. In fact, 40G Ethernet will soon represent a class of orphan technologies in the data center, he added.
Many IT organizations will make the shift to 25/50G Ethernet technologies when they upgrade their IT infrastructure. Broadcom has also established direct relationships with advanced operators of data centers that often have the engineering talent needed to build and deploy their own IT infrastructure.
While acknowledging that making use of controllers to offload virtual switch processing is something of a religious debate, Dworkin noted that each VM on a server creates its own virtual switch. Server vendors, he added, have a vested interest in processing that traffic on servers because it drives up overall demand for server capacity. Broadcom contends that traffic is more efficiently processed on network infrastructure, thereby freeing up server capacity.
“A virtual switch is a tax on the server that comes in the software,” Dworkin said. “Offloading that function to the controller can accelerate performance by as much as 50 percent.”
Broadcom claims it’s not unusual to see as much as 33 percent of server processing being allocated to virtual switches as the number of VMs that get deployed inside a data center starts to increase substantially.
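Putting those numbers together (the server size below is hypothetical), the savings Broadcom describes look roughly like this:

```python
# Rough illustration: if a third of a host's cores go to virtual switching,
# offloading that work to the NIC controller hands those cores back to apps.
cores_per_server = 24        # hypothetical host
vswitch_share = 0.33         # share Broadcom cites for dense VM deployments
print(round(cores_per_server * vswitch_share, 1), "cores reclaimed per server")
```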
While it may take a while for every IT organization to make the shift to 25/50G Ethernet, the combination of better price-performance and the ability to offload virtual switch traffic in dense data center environments will be fairly compelling. As such, Broadcom is betting many of the operators will not be waiting for permission from server vendors to make the transition.

8:34p
Report: Tata Mulling Sale of Global Data Center Business
Indian telecommunications giant Tata Communications may be considering a sale of its massive global data center business.
While it isn’t as widely known as western data center giants like Equinix or Interxion, Tata is one of the world’s largest data center providers. The company operates more than 1 million square feet of data center space in more than 40 locations around the world, including North America, Europe, and Asia-Pacific.
It has retained Jefferies, a US-based investment banking firm, to help it find the best buyer, The Times of India reported, citing anonymous sources. Tata’s data center business is worth about $500 million, according to the report.
A lot of data center assets have been changing hands recently, as data center providers try to put themselves in the best position to serve today’s market. There’s reportedly a lot of demand for data center services at global scale from cloud service providers and digital-media companies, and companies including Equinix, NTT, and Digital Realty have been making big acquisitions to increase their footprint.
Japanese telco NTT is buying German data center provider e-shelter in a €742-million deal announced in March; Equinix announced in May a $3.6-billion takeover of the European giant TelecityGroup; Digital said this month it will acquire US national player Telx for $1.9 billion.
Tata doesn’t break out financial results of its data center business separately, but its managed services unit, which includes data centers, mobility, unified communications, and other services, contributes about one-third of its annual revenue.

10:46p
Avoid the “Cookie-Cutter” Data Center Services Mistake
Part of planning any data center is establishing a baseline of operations for both today and tomorrow. Too often organizations find an attractive package without realizing the repercussions of that solution in the future. Contract or vendor lock-in can be a serious impediment to growth. This is why establishing a good IT plan and knowing the capabilities of a provider are vital to the success of the data center environment.
Data center scalability is one of the primary concerns for data center operators. So why is this becoming a challenge? Many organizations fail to plan their environments for long-term growth. Sometimes they are locked into some type of contract or inflexible data center that inhibits growth. This is why working with flexible providers and applying some data center selection best practices can help keep an organization agile.
Watch out for “cookie-cutter” contracts
- They can be limiting: The reality is that some data center services contracts can be limiting. This is especially the case when growth is a necessary component, which is true for most modern organizations. While for some companies such contracts might make sense, for rapidly expanding organizations working with a flexible data center services partner is the only way to stay ahead of the competition.
- Additional tools and features may end up costing a lot of money: In long-term leases, anything additional can cost some serious dollars. Part of the beauty of working with an agile data center is that tools, features, infrastructure, amenities, and even expansion can be done à la carte without blowing a budget away. This is why working with a flexible partner capable of adjusting to your needs is crucial.
- Infrastructure doesn’t lend itself to changing technologies: To support growing organizations, your data center partner must be able to grow and change with the needs of your company. In some scenarios a pre-built lease may hinder growth in that existing technologies don’t fare well with newer systems. Look for a flexible provider who is capable of adjusting systems and infrastructure to help your environment scale.
Work with dedicated, flexible solutions advisors
- Evolve the solution as your business evolves: This is one of the core underlying themes of this guide. Good support and the ability to change as the industry changes will help keep organizations ahead of their competition. It’s important to work with a data center services partner that clearly sees the business value in evolving in line with the needs of their customers.
- Adding new components proactively: Since the data center is at the core of almost any business, data center environments must be kept up and running at all times. This means shifting from a reactive model to a proactive one. Flexible data center services partners will actively analyze power flow, resource utilization, and even capacity needs for their customers.
- Deploy adaptable framework from day one: This is one of the key features of working with a truly flexible data center provider. By planning out and deploying flexible infrastructure from the onset, administrators are able to plan out their environments in the long run. Look for data center flexibility in the type of hardware they are providing. Can the data center adjust for more power and space? Will it support an increased number of users? Data center providers that deploy adaptable technologies can answer a resounding ‘yes’ to those types of inquiries.
Never forget about good management practices
- Set a good contract with the provider: As mentioned earlier, always take the time to develop a solid SLA and contract with your data center provider. Establish uptime metrics (the sketch after this list shows what those percentages mean in practice), determine critical systems, and ensure that your data center is capable of scaling. Under no circumstances should you enter into a rigid contract where new feature add-ons or environment expansions become far too costly to adopt. Flexibility ranges from infrastructure to contract development.
- Know your roles and don’t forget about good management: In working with a data center provider, it’s important to understand one simple fact: This is a two-way partnership. Look for a provider who will treat you as a partner, a customer, and a valued addition to their infrastructure. By partnering with a data center, you are able to align your vision and share that with the provider. From there, both the organization and the data center can build an infrastructure capable of growth, capacity, and efficiency.
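As noted above, a small helper makes uptime metrics concrete during contract negotiations; the SLA levels shown are only examples:

```python
# Translate an uptime percentage into the maximum downtime it permits per year.
def allowed_downtime_minutes(uptime_pct, period_hours=365 * 24):
    return period_hours * 60 * (1 - uptime_pct / 100)

for sla in (99.9, 99.99, 99.999):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} minutes of downtime per year")
```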
By applying best practices and working with a partner who is capable of being flexible, data center administrators are able to better adjust their infrastructure needs. Getting locked into a rigid agreement can have very detrimental results for any infrastructure, even when the upfront contract seems less expensive; the savings can be very thin in the long run. Flexibility within a data center infrastructure is a must for any organization looking to scale as its business and industry evolve, even if it comes at a premium.

11:01p
TierPoint Buys Florida Data Center Provider CxP
Data center services provider TierPoint has been a company on the move recently. Less than a week ago, the St. Louis-based company announced that Jerry Kent would take over the helm as CEO from Paul Estes, who will remain with TierPoint in a different capacity.
Perhaps best known as the owner of Suddenlink, a cable company he sold a few months ago for $9.1 billion to European telecom Altice SA, Kent had been overseeing TierPoint as chairman since a group of investors acquired the St. Louis-based company last year.
On the heels of that personnel shift, today TierPoint announced the expansion of its national data center footprint with a Florida data center through an acquisition of Jacksonville-based CxP Data Centers.
At 121,000 square feet, it is the largest colocation facility in northeast Florida and will include approximately 65,000 additional square feet of data center space once fully built out, according to the company. The building is designed to be hurricane-resistant and is ballistic-hardened. Its location, within hundreds of yards of the main fiber-optic lines that serve Florida, makes it attractive for companies planning disaster recovery and business continuity.
The Florida data center, which reopened in March of 2014, was renovated to include a state-of-the-art Network Operations Center and Disaster Recovery and Business Continuity space. In addition, it is capable of supporting a wide range of computing densities for cloud and managed hosting services.
The CxP acquisition is precisely what TierPoint hoped would happen following its own buyout. As CEO at the time, Estes spoke of only positive changes on the horizon.
“Our new financial partners bring a long-term orientation and expertise in building high-growth communications businesses,” he said in a statement. “With their involvement and this recapitalization we are well-positioned to continue investing in our infrastructure, technologies and people. We plan to acquire additional strategically sound assets and continue building TierPoint into an industry-leading company.”
Although this is the eighth acquisition for TierPoint (the largest being Xand in December 2014), the CxP deal is still a big step toward meeting the company’s vision.
“This acquisition is a great fit for TierPoint, expanding upon our growth strategy in Tier 2 markets,” said Kent. “We are bringing to Northeast Florida our culture of providing enterprise-grade cloud, colocation and managed services, delivered with local customer service that is second to none.”
With its prior acquisitions, TierPoint now owns and operates more than 365,000 square feet of data center space in 14 US markets.
CxP Data Centers will immediately be branded as TierPoint.

11:04p
HostingCon 2015: Why Stealing from Rackspace is OK
This article originally appeared at The WHIR
Crafting the right contract for your hosting business can seem like a daunting and pricey task, but it doesn’t have to be.
In a presentation on Monday at HostingCon Global, attorney David Snead walked through what it takes to write a contract that is easy for your customers to understand. The room represented both ends of the spectrum of hosting companies, with some attendees from small 5-10 person firms and others from companies with more than 150 employees.
The session, Stealing from Rackspace is OK, is certainly more applicable to smaller hosting businesses that don’t have the resources to have full-time legal counsel.
Snead said that he often sees customers coming to him with contracts that he calls the “Rackspace Contract.” In other words, provisions in their contract have been borrowed from Rackspace’s contract, which isn’t actually a bad thing.
Rackspace looks at its contract as a public service, realizing that a lot of web hosts out there don’t have a lot of money to hire legal counsel to write them a contract from scratch, Snead said.
Looking at Rackspace’s contract could help you determine the best language to use around provisions like raising prices when utility costs go up, for instance. But stealing contracts wholesale doesn’t work, because your business is unique and your concerns will differ from those of other companies. The worst thing you can do, Snead said, is borrow one paragraph from Rackspace, another from 1&1, another from GoDaddy, and so on.
“What keeps you up at night about your business?” Snead asked. It’s important to consider this so you know what your contract needs to cover.
Along with guarding against your concerns, your contract also needs to meet brand needs. Don’t say you’re customer-friendly if you have an inflexible no-refund policy, for example.
If you’re going to use a service like Rocket Lawyer or Legal Zoom, use it to help you understand what issues are important to your business, Snead said. These services can help you decide what provisions you might need legal help with.
Top 5 Contract Goals
- Exceeding customer expectations
- Supporting your brand
- Protecting your revenue
- Meeting contract obligations
- Litigation prevention
This first ran at https://www.thewhir.com/web-hosting-news/hostingcon-2015-why-stealing-from-rackspace-is-ok