Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 19th, 2016
1:00p
Digital Realty Sells Key Midwest Carrier Hotel to Netrality
Netrality, the company that in recent years has been buying network-rich buildings in active data center markets around the US, has acquired a four-property portfolio from Digital Realty Trust. Two buildings in the portfolio represent one of the most important network interconnection points in the country: 900 Walnut St. and 210 N. Tucker Blvd., both in St. Louis.
The two interconnected buildings, formerly known as the Bandwidth Exchange, were converted to carrier hotels in the 1990s by a group of real estate investors. Since then, they have attracted close to 90 network operators total, providing access to networks that serve traffic to internet users in the St. Louis market and to numerous long-haul fiber networks.
Digital Realty bought them in 2007 for $53 million. Its current strategy, however, is focused on building and operating large data center campuses in top markets around the world, and St. Louis is a secondary US market.
The sale of the four-building package to Netrality is one of the final deals in the effort Digital started about two years ago to sell off properties it doesn’t consider core to its business strategy.
Read more: Digital Realty Taking Its Medicine
Netrality paid $114.5 million for the four buildings, Michael Darragh, senior VP of acquisitions at Digital Realty, told Data Center Knowledge in an interview.
The other two buildings are smaller data centers in Virginia: 1807 Michael Faraday Ct. in Reston and 251 Exchange Pl. in Herndon. While both are fully occupied by a number of tenants, they are not strategically important to Netrality, which plans to sell them.
The company bought the Virginia properties only because Digital was selling them together with the St. Louis carrier hotels as a single portfolio it refused to break up, Gerald Marshall, Netrality president and CEO, told us. “It was offered to us as a portfolio, and so we plan on disposing of [the properties in Virginia] in the near future,” he said.
A Midwestern Network Hub
The 100,000-square-foot building on Walnut Street in St. Louis is about 90 percent occupied, while the 400,000-square-foot carrier hotel on North Tucker is about half full. The latter is practically an expansion of the former. They are interconnected by direct network links as well as by a fiber-optic loop that goes through the city.
The owners of 900 Walnut prior to Digital bought 210 N. Tucker because they were running out of space on Walnut, Marshall said.
The buildings fit well into Netrality’s model of buying core interconnection facilities in major data center markets. The joint venture got its current name early last year, but the partners behind it, Amerimar Enterprises and data center and interconnection entrepreneur Hunter Newby, have been buying network-rich properties for at least four years.
Read more: Amerimar and Newby Going Shopping for Carrier Hotels
Today, Netrality’s portfolio consists of carrier hotels in New York, Philadelphia, Kansas City, Houston, and now also St. Louis. When it buys a building, the company usually builds a network meet-me room if the building doesn’t already have one, or upgrades or expands an existing one, to make it easier for network and data center operators to interconnect.
The buildings form the region’s most important network interconnection hub, where content and service providers can reach last-mile networks that can carry their content to local end users. They are also a gateway to long-haul fiber lines, connecting St. Louis to other metros in the region, Marshall said.
One of the data center providers at 210 N. Tucker was 365 Data Centers, whose colocation facility there has also been acquired by Netrality.
New Demand for City’s Abundant Power Capacity
Besides connectivity, one of the characteristics that make St. Louis attractive as a data center location is an abundance of available power on the local grid. The grid was built to support the mass of warehouse and industrial companies that operated downtown in the past, according to The New York Times.
Those industrial users are no longer there, and little new office space has been built downtown in recent years, with most new construction going up in the suburbs. As a result, there is no shortage of power capacity on the grid for data centers, not to mention some of the lowest energy rates in the country.
“The infrastructure is a prime draw downtown,” Brad Pittenger, former CEO of Xiolink, which converted a 100,000-square-foot office building in St. Louis to a data center, told the Times in 2011. “We have access to more power than you do in most locations in the country and also to advanced fiber optic networks for carrier lines.”
In 2014, a data center provider called Cosentry acquired Xiolink, adding St. Louis to its portfolio of properties across the Midwest. Earlier this year, Cosentry was itself acquired by TierPoint, a quickly expanding national data center services player.
More on TierPoint: How TierPoint Quietly Built a Data Center Empire in Secondary Markets
Both the Xiolink building (at 1111 Olive St.) and 900 Walnut used to house printing presses. As digital media replaces print, more and more printing plants are being converted for data center use. A massive data center in Edison, New Jersey, operated by IO, is a former New York Times printing plant. The data center QTS Realty launched this month in Chicago is where the Chicago Sun-Times used to be printed.
3:00p
Red Hat Shoots to Solve Container Storage with Gluster and OpenShift
By The VAR Guy
Red Hat is the newest organization to take a stab at the persistent storage challenge for containers. Last month, the open source giant announced a new Gluster-based storage option for OpenShift, the company’s open source platform for running containerized apps.
Gluster and OpenShift are two key parts of the Red Hat technology stack. Gluster provides open source distributed storage, while OpenShift offers an integrated, one-stop platform for deploying and managing containers using Docker and Kubernetes.
Previously, Gluster and OpenShift were separate entities. But that changed with the announcement that Red Hat will make Gluster available as a storage option on OpenShift.
The Container Storage Holy Grail?
The integration translates to another option for storing data inside containers. That’s important because, to date, other persistent storage solutions for containers have tended to be clunky.
Here’s why: Docker containers are ephemeral. They spin up and down as needed, which is what makes containerized infrastructure so scalable and agile. But it also makes it hard to store data persistently, since you can’t store permanent data inside containers very effectively if the containers themselves are not permanent.
Previous attempts to solve this conundrum have centered on creating special containers dedicated to storage, or allowing containerized apps to access storage on the host system. The former approach is not highly persistent, and the latter undercuts the isolation of containerized apps, which is one of their selling points.
Red Hat is taking a more innovative approach. It will allow containers to access a persistent distributed storage system through Gluster. That means the data will have a permanent place to live even as the containers using it spin up and down. In addition, access to the data won’t require containers to access local storage on the host.
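To make the idea concrete, here is a minimal sketch of how a containerized app typically reaches a GlusterFS volume through the standard Kubernetes persistent-volume machinery that OpenShift builds on. It is illustrative only, not Red Hat's shipped configuration; the endpoints object, Gluster volume name, and capacity below are hypothetical placeholders.

```python
# Minimal sketch, not Red Hat's shipped configuration: the endpoints object,
# Gluster volume name, and capacity below are hypothetical placeholders.
import yaml  # PyYAML

persistent_volume = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "gluster-pv"},
    "spec": {
        "capacity": {"storage": "10Gi"},
        "accessModes": ["ReadWriteMany"],        # many pods can share the same data
        "glusterfs": {
            "endpoints": "glusterfs-cluster",    # Endpoints object listing the Gluster nodes
            "path": "app-volume",                # name of the Gluster volume to mount
            "readOnly": False,
        },
    },
}

persistent_volume_claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "gluster-claim"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# Pods reference "gluster-claim" in their volume spec; the underlying data
# lives in the Gluster cluster and outlives any individual container.
print(yaml.safe_dump_all([persistent_volume, persistent_volume_claim]))
```

Because the storage lives in the Gluster cluster rather than on any one host, containers can come and go while the data stays put, which is exactly the persistence problem described above.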
Echoes of Torus
If this idea sounds familiar, it’s probably because it’s similar to what CoreOS is trying to do with its new Torus distributed storage system. But whereas Torus is brand-new and still being developed, Gluster has already been in production for years.
So Red Hat is in the enviable position of being able to offer a persistent storage solution for containers that is both more elegant than previous options, and ready to use now — or later this summer, at least, which is when Red Hat says Gluster on OpenShift will become available.
This first ran at http://thevarguy.com/open-source-application-software-companies/red-hat-shoots-solve-container-storage-gluster-and-opensh
4:43p
Five Best Practices for Outsourcing Cybersecurity
Brought to you by MSPmentor
Data breaches are getting more sophisticated, more common, and more expensive; the average cost of a breach has reached $4 million, up 29% in the past three years. No organization, regardless of size or industry, can afford to ignore information security. The shortage of qualified cybersecurity personnel, combined with the tendency of modern organizations to outsource ancillary functions so they can focus on their core competencies, has led many organizations to outsource part or all of their cybersecurity operations, often to a managed security services provider (MSSP).
There are many benefits to outsourcing information security, including cost savings and access to a deeper knowledge base and a higher level of expertise than is available in-house. However, outsourcing is not without its pitfalls, and there are issues that organizations should be aware of when choosing a cybersecurity vendor. This article will discuss five best practices for outsourcing information security.
1. Never use an offshore cybersecurity provider
The bargain-basement prices offered by offshore cybersecurity providers are tempting to budget-conscious organizations, especially since many other IT functions, such as mobile app and software development, are routinely offshored.
However, mobile app and software development do not require giving contractors access to your organization’s network or sensitive data, and the work can be reviewed by an internal team before deployment. Cybersecurity contractors, by the nature of their work, have full access to your organization’s internal systems and data in real time. Meanwhile, there is no way to verify the education, skills, or experience levels of the offshore company’s employees, nor is there any way to ensure they have undergone comprehensive criminal background checks. Finally, if a breach occurs, you may have little or no legal recourse against the offshore provider, even if you have proof that the breach was due to negligence or a malicious insider at their company.
Information security is simply too important to entrust to an offshore contractor. There is also a practical matter to consider: Offshore providers are unable to provide on-site security staff at your location, which leads into our second best practice.
2. Steer clear of providers whose solutions are completely remote-based
Some cybersecurity companies provide services that are strictly remote, conducted entirely via telephone and the internet. However, a remote-only solution cannot fully protect your organization, especially since over half of all data breaches can be traced back to negligence, mistakes, or malicious acts on the part of company insiders. An MSSP can protect your organization from the outside and the inside through a hybrid solution that combines remote security operations center (SOC) monitoring with on-site security personnel who can work in tandem with your existing staff or function as a standalone, embedded SOC. These on-site personnel can help your organization establish cybersecurity policies and employee training, as well as immediately respond to security breaches.
3. Beware of providers that claim their solutions provide 100% protection against breaches
When evaluating cybersecurity vendors, you will inevitably come across providers who claim that their solutions are foolproof and will prevent all breaches. This is impossible. Cybersecurity experts are engaged in a never-ending war against hackers. As soon as one vulnerability is fixed, hackers devote themselves to finding the next one, and every new technology that is introduced presents brand-new vulnerabilities.
While a comprehensive cybersecurity solution will protect your organization against most breaches, the cold, hard reality is that there is no such thing as an impenetrable security system. Steer clear of providers who try to tell you otherwise. Not only are they being dishonest, they may also be unable to effectively respond when a breach does occur.
4. Ensure that the provider’s team has real-world experience in cybersecurity
Some cybersecurity providers hire recent college graduates or certificate-holders with plenty of classroom training in information security theory but little or no actual work experience protecting critical infrastructures. Cybersecurity expertise cannot be honed within the confines of a classroom. Entry-level trainees lack the experience to fully grasp the nuances of real-world information security procedures and challenges, which means they are far more likely to make mistakes than enterprise security professionals with years of experience. Make sure that your provider hires only seasoned security experts.
5. Beware of providers who talk about “magic hardware” and little else
Enterprise security hardware platforms are a hot topic in the information security industry right now, and many exciting new developments are being made in this area. However, security hardware is not a standalone solution, and you should be wary of any provider that tries to sell you on a “magic hardware” platform that will purportedly address all of your security needs. Security hardware is a tool for human security professionals; it does not replace them.
Outsourcing your organization’s information security is serious business. You are handing the keys to your kingdom – your company’s internal systems and sensitive data – to a third-party vendor. Asking critical questions and following best practices during the evaluation and selection process will ensure a successful, long-term relationship between your organization and your cybersecurity provider.
Mike Baker is founder and Principal at Mosaic451, a bespoke cybersecurity service provider and consultancy with specific expertise in building, operating and defending some of the most highly-secure networks in North America.
This first ran at http://mspmentor.net/guest-bloggers/5-best-practices-outsourcing-cybersecurity
5:15p
EMC Shareholders Approve Dell Merger With 98 Percent of Votes
(Bloomberg) — EMC shareholders approved the Dell merger with 98 percent of the votes, clearing a key hurdle on the way to finalizing the largest technology merger in history.
EMC, the maker of storage products, said nearly all shareholders voted in favor, based on a preliminary tally unveiled at a special meeting to decide the deal, according to a company statement. The merger is on track to close under the original terms, EMC said. Previously EMC said the deal would close by October. It’s still subject to regulatory approval from China.
“The board evaluated numerous alternatives to enhance shareholder value with an eye on execution and certainty and concluded that our proposed merger with Dell is by far the best outcome,” Joe Tucci, EMC chairman, said during the meeting, which was webcast.
EMC agreed to be bought by Dell for $67 billion, an agreement that will bring together two of the largest tech hardware companies in the world. EMC has been facing a challenging climate for storage machines, because more companies are avoiding its expensive devices for warehousing information in their own data centers and are instead signing up with companies like Amazon.com Inc. and Microsoft Corp. to store their data in the cloud.
5:45p
IBM Investors Get First Sign of Turnaround at Big Blue
(Bloomberg) — IBM gave investors a sign that Big Blue may finally be turning things around. Now it has to prove it can continue to drive momentum.
Revenue increased for the first time in a key unit — cognitive solutions, including its Watson artificial intelligence platform — that the company has been touting as crucial to future growth. The second-quarter results may signal that CEO Ginni Rometty is making good on her promise to shift IBM’s software and services offerings to match customers’ increasing appetite for cloud-based solutions. It’s been an uphill battle. Overall, sales have declined 17 quarters in a row while margins narrowed.
IBM’s results Monday “underscore that the company is beginning to find an inflection point,” said Bill Kreher, an analyst with Edward Jones & Co. “We may begin to see the company grow as a whole as soon as next year.”
Second-quarter sales gained 3.5 percent to $4.7 billion in the cognitive solutions group, including analytics and security software products. This is the first time since Armonk, New York-based IBM reorganized its segments that cognitive solutions has registered a revenue increase, after declining the previous five quarters.
CFO Martin Schroeter highlighted Cognos and Watson products in the analytics portfolio as the major reasons for the increase. The sales growth in the cognitive solutions unit was also buoyed by deals, he said, pointing to the integration of the Weather Co. assets and Truven Health Analytics into Watson.
Shares were little changed at 10:50 a.m. Tuesday in New York, the first time in four quarters the stock didn’t tumble in reaction to an earnings report. IBM is up 16 percent this year through Monday.
Higher Expectations
To keep up the momentum, IBM will have to prove that it can meet expectations for its full-year earnings forecast of at least $13.50 a share. This means that more than 60 percent of its profit will have to come from the latter half of 2016, with most of the pressure on the last quarter.
It won’t be easy — IBM’s gross profit margins have narrowed the last three quarters. Schroeter has said previously that the company’s investments and restructuring have compressed margins, along with the shift to the as-a-service software model. He said Monday evening that new businesses in software will become more profitable as they ramp up, an effect the company will start to see this half of the year.
Schroeter said third-quarter earnings per share should be in the mid-to-high range of 22 percent to 24 percent of the full year, suggesting a forecast of $3.11 to $3.24. As a result, IBM would have to report EPS of $4.96 to $5.09 for the last three months to keep its word. Acquisitions will have a smaller impact on profit for the remainder of the year, Schroeter said. IBM expects that profit during the final period of the year will be supported by savings from workforce rebalancing and reduced real estate expenses.
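As a rough check on that math (a sketch, not IBM's own model), the quoted ranges hang together; note that the first-half figure below is implied by the article's numbers rather than stated outright.

```python
# Back-of-envelope check of the guidance math (a sketch, not IBM's model).
# The first-half EPS is implied by the article's quoted ranges:
# $13.50 - $3.24 (high Q3) - $4.96 (low Q4) = $5.30 already delivered.
FULL_YEAR_EPS = 13.50
Q3_SHARE_LOW, Q3_SHARE_HIGH = 0.23, 0.24    # "mid-to-high" end of the 22-24% range
FIRST_HALF_EPS = 5.30                       # implied, see comment above

q3_low = FULL_YEAR_EPS * Q3_SHARE_LOW                 # ~ $3.11 in the article
q3_high = FULL_YEAR_EPS * Q3_SHARE_HIGH               # $3.24
q4_low = FULL_YEAR_EPS - FIRST_HALF_EPS - q3_high     # ~ $4.96
q4_high = FULL_YEAR_EPS - FIRST_HALF_EPS - q3_low     # ~ $5.09
second_half_share = (FULL_YEAR_EPS - FIRST_HALF_EPS) / FULL_YEAR_EPS

print(f"Q3 EPS range: ${q3_low:.2f} - ${q3_high:.2f}")
print(f"Q4 EPS range: ${q4_low:.2f} - ${q4_high:.2f}")
print(f"Second-half share of full-year EPS: {second_half_share:.0%}")  # ~61%, i.e. more than 60%
```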
To convince the skeptics, IBM also needs to prove it can break out of a pattern. For the last two years, the company missed its original full-year earnings projections, lowering forecasts both times in the third quarter.
IBM’s goal for the year is possibly achievable, but “unlikely” given the past two years, especially with projections of “relatively modest benefit from its restructuring and acquisitions,” Sanford C. Bernstein & Co. analyst Toni Sacconaghi wrote in a note. “Guidance points to Q4 needing to enjoy the highest sequential improvement in the last 9 years.”
In the first half of this year, IBM spent $5.4 billion on deals, closing 11 of them. Acquisitions added a two-percentage-point boost to revenue this quarter and are expected to add more to the top line during the second half of the year, Schroeter said on an earnings call late Monday. He didn’t specify how much acquisitions helped each group within the company.
IBM mixes the products from the acquisitions into software and services it sells to customers, which is why the purchases have helped multiple divisions of the company, Schroeter said in an interview.
“IBM has made some prudent investments, and now it appears they’re beginning to pay off,” Kreher said. “It certainly does ramp up expectations.”
6:15p
Expedient Shrinks Cages to Make Data Center Space Cheaper
Expedient has come up with a new design for colocation cages that it claims will lower the cost of data center space for its customers.
The design, called SlimLine, essentially shrinks the cage to more closely match the shape of equipment racks. Typical colocation cages have a lot of space inside them that isn’t used by equipment, usually enough for technicians to move around, even when all the actual rack space is utilized.
Expedient’s new cage design, which it hopes to patent, is a way to utilize data center space more efficiently. Instead of leaving extra space inside the cage, it gives techs full access to the front and back of the equipment via roll-up doors, which can be either solid or perforated, presumably depending on the equipment’s cooling needs.
The data center provider said it will tailor the cages to customer needs. The smallest cage available will closely match the physical profile of a row of four cabinets.
Multiple-row configurations include enclosed hot aisles that contain hot server exhaust air.
Customers save because they end up using less data center space to house the same amount of equipment than they would in traditional colocation cages. Those savings can reach as much as 40 percent, Expedient CTO Ken Hill said in a statement.
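A bit of back-of-envelope arithmetic shows where savings like that can come from; the cabinet and clearance dimensions below are illustrative assumptions, not Expedient's published measurements.

```python
# Illustrative only: in a conventional cage much of the leased floor is
# walking room rather than equipment, which is the space SlimLine reclaims.
CABINET_W, CABINET_D = 0.6, 1.2      # assumed 600 mm x 1200 mm cabinets
CABINETS_PER_ROW = 4

equipment_area = CABINETS_PER_ROW * CABINET_W * CABINET_D   # floor actually under the racks
cage_width = CABINETS_PER_ROW * CABINET_W + 2 * 0.3         # assumed side clearances
cage_depth = CABINET_D + 1.2 + 0.9                          # assumed front and rear work aisles
traditional_cage_area = cage_width * cage_depth

print(f"Equipment footprint:        {equipment_area:.1f} m2")
print(f"Traditional cage footprint: {traditional_cage_area:.1f} m2")
print(f"Share of cage used by equipment: {equipment_area / traditional_cage_area:.0%}")
```

How much of that unused area a SlimLine enclosure actually reclaims depends on the real cage dimensions and on how the space is billed, which is why Expedient frames its figure as savings of up to 40 percent.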
Here’s a photo of a two-row SlimLine configuration in one of Expedient’s data centers, featuring a perforated access wall and a solid door in the hot aisle (photo by Expedient):
8:20p
LinkedIn Pushes Own Data Center Hardware Standard
LinkedIn, the social network for the professional world that was acquired by Microsoft in June, has announced a new open design standard for data center servers and racks that it hopes will gain wide industry adoption.
It’s unclear, however, how the initiative fits with the infrastructure strategy of its new parent company, which has gone all-in with Facebook’s Open Compute Project, an open source data center and hardware design initiative with its own open design standards for the same components. When it joined OCP two years ago Microsoft also adopted a data center strategy that would standardize hardware on its own OCP-inspired designs across its global operations.
Yuval Bachar, who leads LinkedIn’s infrastructure architecture and who unveiled the Open19 initiative in a blog post Tuesday, told us earlier this year that the company had decided against using OCP hardware when it was switching to a hyperscale approach to data center deployment because OCP hardware wasn’t designed for standard data centers and data center racks. That, however, was in March, before LinkedIn was gobbled up by the Redmond, Washington-based tech giant.
“Our plan is to build a standard that works in any EIA 19-inch rack in order to allow many more suppliers to produce servers that will interoperate and be interchangeable in any rack environment,” Bachar wrote in the blog post.
See also: LinkedIn Data Centers Adopting the Hyperscale Way
The standard OCP servers are 21 inches wide, and so are the standard OCP racks. Facebook switched to 21 inches in its data centers several years ago, and announced its 21-inch rack design, called Open Rack, in 2012. Multiple vendors, however, have designed OCP servers in the traditional 19-inch form factor and racks that accommodate them.
There is more to LinkedIn’s proposed Open19 standard than rack width, however. Here is the full list of Open19 specifications:
- Standard 19-inch 4 post rack
- Brick cage
- Brick (B), Double Brick (DB), Double High Brick (DHB)
- Power shelf—12 volt distribution, OTS power modules
- Optional Battery Backup Unit (BBU)
- Optional Networking switch (ToR)
- Snap-on power cables/PCB—200-250 watts per brick
- Snap-on data cables—up to 100G per brick
- Provides linear growth in power and bandwidth based on brick size (see the sketch below)
Illustration of LinkedIn’s proposed Open19 rack and server design (Image: LinkedIn)
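To illustrate the last item on the list, here is a rough sketch of what "linear growth" implies. The per-brick budgets come from the specifications above; treating a Double Brick or Double High Brick as occupying two brick slots, and therefore getting twice the budget, is an assumption on our part rather than something LinkedIn has spelled out.

```python
# Sketch of the Open19 "linear growth" idea: power and bandwidth budgets
# scale with the number of brick slots a form factor occupies (assumed).
BRICK_SLOTS = {"Brick": 1, "Double Brick": 2, "Double High Brick": 2}

def power_envelope_watts(form_factor):
    """Snap-on power cables deliver 200-250 W per brick, per the spec list."""
    slots = BRICK_SLOTS[form_factor]
    return 200 * slots, 250 * slots

def bandwidth_envelope_gbps(form_factor):
    """Snap-on data cables deliver up to 100G per brick, per the spec list."""
    return 100 * BRICK_SLOTS[form_factor]

for form_factor in BRICK_SLOTS:
    low, high = power_envelope_watts(form_factor)
    print(f"{form_factor:17} {low}-{high} W, up to {bandwidth_envelope_gbps(form_factor)}G")
```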
Bachar and his colleagues believe designs that follow these specs “will be more modular, efficient to install, and contain components that are easier to source than other custom open server solutions.”
Making open hardware easier to source is an important issue and probably the strongest argument for an alternative standard to OCP. We have heard from multiple people close to OCP that sourcing components for OCP gear is difficult, especially if you’re not a high-volume buyer like Facebook or Microsoft. OCP vendors today are focused predominantly on serving those hyperscale data center operators, which substantially limits access to that kind of hardware for smaller IT shops.
Read more: Why OCP Servers are Hard to Get for Enterprise IT Shops
Still, the amount of industry support OCP has gained over the last several years will make it difficult for a competing standard to take hold, especially given that one of OCP’s biggest supporters is now LinkedIn’s parent company. Other OCP members include Apple, Google, AT&T, Deutsche Telekom, and Equinix, as well as numerous large financial institutions and the biggest hardware and data center infrastructure vendors.
11:04p
Google Cuts Its Giant Electricity Bill With DeepMind-Powered AI
(Bloomberg) — Google just paid for part of its acquisition of DeepMind in a surprising way.
The internet giant is using technology from the DeepMind artificial intelligence subsidiary for big savings on the power consumed by its data centers, according to DeepMind Co-Founder Demis Hassabis.
In recent months, the Alphabet unit put a DeepMind AI system in control of parts of its data centers to reduce power consumption by manipulating computer servers and related equipment like cooling systems. The system uses a technique similar to the one DeepMind software used to teach itself to play Atari video games, Hassabis said in an interview at a recent AI conference in New York.
The system cut power usage in the data centers by several percentage points, “which is a huge saving in terms of cost but, also, great for the environment,” he said.
See also: Google Has Built Its Own Custom Chip for AI Servers
The savings translate into a 15 percent improvement in power usage effectiveness, or PUE, Google said in a statement. PUE measures the total electricity a facility consumes relative to the electricity that actually reaches its computers; the rest goes to supporting infrastructure like cooling systems.
Google said it used 4,402,836 MWh of electricity in 2014, equivalent to the average yearly consumption of about 366,903 US family homes. A significant proportion of Google’s spending on electricity comes from its data centers, which support its globe-spanning web services and mobile apps.
See also: Here’s How Much Energy All US Data Centers Consume
Saving a few percentage points of electricity usage means major financial gains for Google. Typical electricity prices companies pay in the US range from about $25 to $40 per MWh, according to data from the US Energy Information Administration. (Prices in different regions range from a few dollars to more than $100). Either way, saving 10 percent on data center power consumption, for instance, could translate to hundreds of millions of dollars in savings for Google over multiple years. Google acquired DeepMind in 2014 for 400 million pounds, or more than $600 million at the time, according to The Guardian.
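The back-of-envelope arithmetic behind those figures looks roughly like this. The consumption number is Google's reported 2014 total, while the prices and the 10 percent saving are the illustrative assumptions above, so the result is an order-of-magnitude estimate rather than a Google disclosure.

```python
# Rough estimate only. Applying the 10% saving to Google's entire 2014
# electricity use overstates things somewhat, since only a (large) portion
# of that total is data center load.
ANNUAL_MWH = 4_402_836                  # Google's reported 2014 consumption
PRICE_LOW, PRICE_HIGH = 25, 40          # typical US prices, $/MWh (EIA range cited above)
SAVING_FRACTION = 0.10                  # the article's illustrative 10% cut

saving_low = ANNUAL_MWH * PRICE_LOW * SAVING_FRACTION
saving_high = ANNUAL_MWH * PRICE_HIGH * SAVING_FRACTION

print(f"Estimated annual bill: ${ANNUAL_MWH * PRICE_LOW / 1e6:.0f}M - ${ANNUAL_MWH * PRICE_HIGH / 1e6:.0f}M")
print(f"10% saving per year:   ${saving_low / 1e6:.0f}M - ${saving_high / 1e6:.0f}M")
print(f"Over a decade:         ${saving_low * 10 / 1e6:.0f}M - ${saving_high * 10 / 1e6:.0f}M")
```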
The application of DeepMind’s technology builds on previous efforts by Google to apply machine learning, a type of AI, to its data centers. Back in 2014, the company said it used neural networks, a type of pattern recognition system, to predict how its power usage would change over time, letting it arrange equipment in more efficient ways.
Read more: Google Using Machine Learning to Boost Data Center Efficiency
The DeepMind work goes a step further. Instead of making moves in an Atari game, the software changes how equipment runs inside the data centers to get the highest score — in this case more efficient consumption of electricity.
“It controls about 120 variables in the data centers. The fans and the cooling systems and so on, and windows and other things,” Hassabis said. “They were pretty astounded.”
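For readers curious what "getting the highest score" looks like in code, here is a deliberately toy sketch of the idea: an agent nudges a cooling setpoint and is rewarded for lower simulated PUE. It is a conceptual illustration only; DeepMind's actual system, its roughly 120 variables, and its models are not public in this form.

```python
# Toy illustration only -- NOT DeepMind's system. The "facility" is a fake
# quadratic model, and the agent is a simple epsilon-greedy search over a
# handful of candidate cooling setpoints, rewarded with negative PUE.
import random

CANDIDATE_SETPOINTS = [18.0, 20.0, 22.0, 24.0, 26.0]   # hypothetical cold-aisle temps, C

def simulated_pue(setpoint_c):
    """Fake facility model: PUE bottoms out near 22 C, plus measurement noise."""
    return 1.12 + 0.004 * (setpoint_c - 22.0) ** 2 + random.gauss(0, 0.005)

totals = {s: 0.0 for s in CANDIDATE_SETPOINTS}
counts = {s: 0 for s in CANDIDATE_SETPOINTS}

def best_guess():
    """Setpoint with the best average reward observed so far."""
    return max(CANDIDATE_SETPOINTS,
               key=lambda s: totals[s] / counts[s] if counts[s] else float("-inf"))

for _ in range(2000):
    # Explore occasionally, otherwise exploit the current best estimate.
    setpoint = random.choice(CANDIDATE_SETPOINTS) if random.random() < 0.1 else best_guess()
    reward = -simulated_pue(setpoint)       # higher score = less energy overhead
    totals[setpoint] += reward
    counts[setpoint] += 1

best = best_guess()
print(f"Learned setpoint: {best:.0f} C, estimated PUE ~ {-totals[best] / counts[best]:.3f}")
```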
This is just the beginning of the project, Hassabis said. Now that DeepMind knows the approach works, it also knows where its AI system lacks information, so it may ask Google to put additional sensors into its data centers to let its software eke out even more efficiency.
See also: What Cloud and AI Do and Do Not Mean for Google Data Centers