Data Center Knowledge | News and analysis for the data center industry
Wednesday, June 21st, 2017
12:00p
Open19: The Vendor-Friendly Open Source Data Center Project
In case you missed it, LinkedIn last month teamed up with GE, Hewlett Packard Enterprise, and a host of other companies serving the data center market to launch a foundation to govern its open source data center technology effort. The Open19 Foundation now administers the Open19 Project, which in many ways is similar to the Open Compute Project, started by Facebook, but also stands distinctly apart thanks to several key differences.
The most prominent point of contrast is Open19’s target audience: data center operators smaller than the hyperscale cloud platforms operated by Facebook, Microsoft, Apple, and Google, some of OCP’s biggest data center-operator members. Another major difference is Open19’s focus on edge computing in addition to core data center hardware.
There are other differences, but one that is especially telling about the nature of Open19 is the way its founders have chosen to treat intellectual property of the participating companies. Unlike OCP, which requires any company that wants to have a server or another piece of gear recognized as OCP-compliant to open source the entire thing, Open19 structured its licensing framework in a way that lets companies protect their IP and still participate. If HPE or one of the other participating hardware vendors wants to adopt Open19 standards for a server, for example, it doesn’t have to part with its rights to the technology inside that server for the foundation to recognize it as Open19-compliant.
“A lot of people are reluctant to be in an environment where they’re always required to put their IP out,” Yuval Bachar, a top infrastructure engineer at LinkedIn who is spearheading Open19, said in an interview with Data Center Knowledge. “We’re creating an environment where you’re not required [to open source IP] unless you participate actively and contribute to the project.”
See also: LinkedIn’s Data Center Standard Aims to Do What OCP Hasn’t
LinkedIn owns all the current Open19 IP, which includes a “cage” that goes inside a standard data center rack, four standard server form factors that slide into the cage, a power shelf, which is essentially a single power supply for all the servers in the cage, and a network switch. The company is planning to contribute all of the above to the foundation, Bachar said, but it doesn’t expect other members to do the same. It might also contribute the technology inside the servers, although server innards aren’t part of Open19’s current focus.
“In Open19 you don’t have to contribute what’s inside your server,” he said. “Potentially, we as LinkedIn will do that, because we don’t see a competitive commercial advantage in actually doing our own servers.” But the likes of HPE, Hyve, Flex, QCT, or Inspur have IP to protect, and the foundation doesn’t want that to hamper their participation.
Licenses Selected to Lower Risk
For cases where vendors do want to contribute technology to the project, Open19 has selected various types of licenses for different scenarios, all meant to further reduce friction associated with participation.
The default license for everything other than software, such as specification documents or schematics, is Creative Commons, Brad Biddle, legal counsel for the Open19 Foundation, said in an interview with Data Center Knowledge. Different flavors of CC that provide different levels of control apply depending on document type.
If a collaborative project results in a specification, or another document that others will implement in their own hardware, the parties that created it are required to grant a patent license for the parts they contributed on “RAND-Z terms.” RAND-Z (RAND stands for reasonable and non-discriminatory, Z for zero royalty) is a common scheme standards organizations use when somebody’s IP is essential to a standard.
See also: GE Bets on LinkedIn’s Data Center Standard for Predix at the Edge
Open19’s default license for software contributions is the MIT license, one of the most popular open source licenses. “It’s a very simple, permissive-style license, as opposed to a copyleft license,” Biddle said. The license is used by popular open source projects such as Ruby on Rails, Node.js, and jQuery, among others. Copyleft licenses, such as the GPL, essentially require that the code, including any modifications, remain open source and compliant with the same license. Permissive-style licenses impose no such restrictions. In other words, if a company takes a piece of open source code from Open19 and modifies it, it doesn’t have to open source the modified version. “We were sensitive to not wanting to force implementers to license away technology as a price of implementation,” Biddle said. “Our default licenses don’t require any licenses back from the technology recipients.”
The same set of default licenses would apply to single-source contributions, such as LinkedIn’s Open19 designs. Biddle said he doesn’t expect the foundation to run ongoing development projects for single-source contributions and has designed that framework for one-off releases.
The foundation’s open source licensing choices and the freedom to participate without having to give away trade secrets are meant to make participation less risky for vendors and anyone else who designs hardware or writes code. After all, the initiative’s success will depend on its ability to grow the ecosystem of participating companies. Growing an open source data center hardware community is a chicken-and-egg puzzle. Unlike the world of open source software, where vendor participation is not a prerequisite for a thriving project, an open source hardware project can only attract end users if a variety of vendors are willing to spend the resources to design and produce the hardware, so those users know they can actually source the technology, and source it from multiple suppliers. Conversely, vendors are attracted by end users. Making it easier for vendors to play helps solve half of the puzzle.
Christine Hall contributed to this article.
1:00p
Vapor IO to Sell Data Center Colocation Services at Cell Towers
Expecting development of the Internet of Things to drive demand for edge data centers that aggregate device data close to the devices themselves, Vapor IO, an Austin-based data center technology startup, is launching a colocation business, offering leased data center capacity at wireless network towers.
The company also has a new investor. Crown Castle, the largest wireless tower company in the US as of last December, has taken a minority stake in the startup. Crown Castle leases towers to all the top wireless carriers, including Verizon, AT&T, and T-Mobile. The companies did not disclose the size of the investment.
The service, called Project Volutus, includes everything from site selection to rack space, power, connectivity, infrastructure management software, and remote hands. The bet is that companies like content and cloud service providers will only get hungrier for edge computing capacity as technologies like connected and autonomous cars, augmented and virtual reality, and 5G wireless become a reality and start scaling.
Vapor IO announced the launch of Project Volutus and the investment by Crown Castle Tuesday. Simultaneously, it announced the appointment of Don Duet, former head of the technology division of Goldman Sachs (Vapor’s first investor) as president and COO.
The company will deploy its cylindrical data center enclosures called Vapor Chamber at wireless towers as customers sign on, Cole Crawford, Vapor IO founder and CEO, said in an interview with Data Center Knowledge. “Like any colocation company, we will be customer-driven,” he said.
Each chamber contains six racks and provides 150kW of power and the necessary cooling infrastructure. Vapor announced a hardened version of the chamber, designed specifically for wireless-tower deployments, earlier this year.
More on the Vapor Chamber: This Startup Challenges the ‘Data Center Aisle’ Concept
Like traditional colocation, the sites will be multi-tenant and carrier-neutral.
Contract manufacturer Flex will make Vapor Chambers for Project Volutus. Both Flex and Vapor are sponsors and active participants in Open19, the LinkedIn-led data center standard organization. While Vapor prefers that the chambers Flex produces for its colocation business are Open19-compliant, customers are free to choose any type of racks, network connectivity, and power distribution they need, Crawford said.
Vapor is rolling out an early-access program for Project Volutus and has identified two cities where users will be able to try the technology but hasn’t publicly named them yet.
3:00p
“If it Moves, Regulate it”
Chris Crosby is CEO of Compass Datacenters.
If you’ve been paying attention, you’ve noticed that everybody is pretty darn excited about IoT. Businesses, consumers, hackers, equipment companies and cloud providers are positively giddy about the ability to track anything, at any time, from anywhere. Some of you sharper readers out there are probably asking, ‘But Chris, I couldn’t help but notice that you slipped hackers in there. Why would they be so enthusiastic about IoT?’
First, thanks for paying attention, and second, because as we’ve seen, the proliferation of IoT-enabled devices increases the potential points of entry for those bent on spoiling everyone’s good time. Since no one wants their baby monitor harnessed for malevolent activities, security has become a potential speed bump on our road to IoT nirvana, and folks are beginning to demand that someone do something about it.
US Government to the Rescue?
Naturally, a small but growing number of interested parties are becoming adamant that there is only one group with the strength, power, integrity, selfless disregard and proven experience in dealing with sensitive matters such as this—say it with me kids—the US Government. A desperate cry for help that can only elicit one logical response: “Wait. What?”
Yes, ladies and gentlemen, the folks on Capitol Hill are the only ones capable of ensuring a future void of the potential for some miscreant in Belarus or Pyongyang to hijack a wireless camera watching the beer fridge in the back shed that makes sure that the local teens don’t steal any twelve packs.
These are the same guys who spent four years and $600 million to build a website that didn’t work, lost a few million personnel records to the Chinese, and “discovered” a few hundred more data centers every few months as part of their data center consolidation effort. If the prospect of “government regulation” of an emerging industry leaves you less than enthused, you probably understand that this phrase roughly translates into the following sequence of events:
Phase One: A bunch of guys and gals who understand nothing about the technology (we like to call them Congress) pass a law setting up a regulatory body, or if they’re feeling really efficient, give this new responsibility to an existing regulatory agency.
Phase Two: A group of people who also don’t understand the technology (we’ll call them “regulators”) establishes security rules and requirements that everyone in the IoT industry must follow. The shorthand for this is “we do what the companies that give us the most money tell us to do to make it more difficult for new companies to compete with them.”
Phase Three: Technological innovation moves much faster than the regulators can handle, so regulation becomes more draconian to slow things down while they catch up.
Phase Four: What should have happened finally does: a group of like-minded firms and interested parties gets together and develops industry standards that actually work; competition abounds, and everyone except the hacking community benefits from lower prices for superior equipment and functionality.
Before you get the wrong idea, I don’t think all regulation is bad. I like knowing that when I buy hamburger it’s been verified as coming from a cow and not from places and members of the animal kingdom that I’d prefer not to contemplate, for example. But when you’ve reached the point that it’s estimated that Americans pay more than $2 trillion annually in regulatory costs, things have gone just a little too far, and maybe we’ve reached the point of diminishing returns.
Cheap and Secure IoT Devices
To put this issue in practical perspective, does anyone care about Energy Star ratings for data centers? I’m terrified to think how much money was spent developing that program when we already had so much industry initiative and innovation at work. And we can only imagine the impact early government intervention might have had on industries and technologies like automobiles, home video technology, and the internet in their nascent stages. Imagine driving a car that can go no faster than a horse can run in order to preserve the livestock industry; or getting home to watch the show you taped on your Betamax; or surfing the web to review a plethora of websites that all look and sound like PBS and NPR. Not a pretty picture.
The challenge that industry must solve is how to make IoT devices cheaply yet still secure. As usual, this most likely will come from an approach that is not associated with the devices themselves (think network based, not device based). But the fact of the matter remains that there are already a few bazillion devices that have to run their lifespan since the cat is already out of the proverbial bag.
If regulation engages prematurely, it will be device-based, increase the cost of the devices (since firmware security is expensive), and thereby cripple adoption. But without adoption, demand won’t drive the business need to solve the problem, and industry will be discouraged from investing in solutions. Premature regulation in the tech space kills. We should let it play out within the industry, since tech has always been a shark tank in which today’s behemoth becomes tomorrow’s chum (anyone remember my alma mater, Nortel?).
Other than creating a jobs program for a few thousand federal bureaucrats, I’m a little skeptical as to the value of government regulation of IoT at this stage of its adoption curve. I say let the government do what it does well, and let the experts in the industry develop IoT that works securely and can be quickly adapted to address newly developed intrusion methods.
Once IoT falls into the “mature industry” category, then the government can go to town on it. While I’m sure all the folks calling for government regulation have the best of intentions, remember what they say about the road to hell. If you can’t remember, I’m sure that there is a federal agency that would be more than happy to tell you what you need to do to find out.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
6:03p
AMD Server Chip Revival Effort Enlists Some Big Friends
(Bloomberg) — Advanced Micro Devices Inc., trying to re-enter the lucrative server chip market, will get the help of Microsoft Corp. and Baidu Inc., which have committed to using its new Epyc product in their data centers.
“This is just the beginning of the engagement. You’ll see much more from us,” said Chief Executive Officer Lisa Su. “AMD at its best was a very strong player in the data center.”
Epyc, which goes on sale Tuesday, is AMD’s attempt to turn around a market share that’s shrunk to less than 1 percent, ceding the entire market to Intel Corp. Signing up data center operators such as Baidu, China’s largest search engine, as customers has become more important in the decade since AMD was a serious competitor to Intel. That’s because those customers are growing at a much faster rate than the overall industry and buy directly to build their own computers.
AMD announced versions of Epyc ranging in price from about $400 to $4,000 per chip. The server chips will be cheaper than their direct Intel equivalents and offer more performance, according to Forrest Norrod, a company vice president. The company made comparisons with Intel chips currently on sale and said it believes it will keep its leadership in some benchmarks even when the world’s largest chipmaker updates its product range.
Hewlett Packard Enterprise Co., Dell Technologies Inc., Lenovo Group Ltd., and other companies will offer servers based on Epyc, and Microsoft, Red Hat Inc., and VMware Inc. will make sure their software works on the new chips.
Microsoft, for its Azure services, intends “to be the first global cloud provider to deliver AMD Epyc, and its combination of high performance and value, to customers by the end of the year,” said Girish Bablani, Microsoft corporate vice president, Azure Compute.
Baidu said it plans to use the AMD server chips for search, artificial intelligence applications and the cloud.
“Choice is only important if we are able to get the performance we need for our workloads,” Liu Chao, senior director of the company’s system technologies department, said in a statement. “With AMD and their new Epyc processor, we are confident that innovation in the server market will accelerate.”
Since taking over AMD in 2014, Su has been working to turn around the chipmaker, which has struggled to compete with Intel through long periods of its 48-year existence. Last year Intel turned $17 billion of sales from its data center unit into $7.5 billion of operating profit.
“We take all competitors seriously, and while AMD is trying to re-enter the server market segment, Intel continues to deliver 20-plus years of uninterrupted data center innovations while maintaining broad ecosystem investments,” Intel said in a statement supplied ahead of AMD’s event to introduce the chip Tuesday in Austin, Texas. “With our next-generation Xeon Scalable processors, we expect to continue offering the highest core and system performance versus AMD.”
AMD’s last big surge in profits and revenue was more than a decade ago when its Opteron server part allowed it to grab more than 20 percent of the market. Follow-up models failed to arrive on time or fell short on performance promises and Intel improved its products — eventually all but throwing AMD out of the most lucrative area of the processor market.
AMD is aiming to get back to the level reached by Opteron’s initial success, but cautions it will take time.
“Today we’re at less than 1 percent. At our peak we were above 25 percent,” Su said, adding that the company’s interim target is a double-digit-percent market share. “We’re realistic in that [it] will take some time — across the next couple of years.”
8:33p
SoftBank Invests $100 Million in Security Startup Cybereason
(Bloomberg) — Cybereason Inc. raised $100 million from SoftBank Group Corp., bringing another cybersecurity business closer to the $1 billion valuation that has become the hallmark of a heavyweight technology startup.
The funding round gives Cybereason, founded by former operatives in Israel’s elite Unit 8200 military intelligence group, a valuation north of $850 million, after factoring in the latest cash infusion, according to a person familiar with the situation.
SoftBank financed the entire round, after leading Cybereason’s previous funding round of $59 million in 2015, the person added. They asked not to be identified speaking about a private investment.
The cash came from SoftBank’s own coffers rather than its new Vision Fund, a $93 billion vehicle with outside investors including Apple Inc., Qualcomm Inc., and Saudi Arabia’s Public Investment Fund.
Cybereason, like other cybersecurity startups such as Cylance Inc., uses artificial intelligence to detect breaches of computer networks. AI helps it work faster to identify problems as hackers switch tactics with increasing dexterity and aggression, said Chief Executive Officer Lior Div, in an interview.
“The cadence was switched from years, to months, and then to weeks,” he said. “The way different government agencies are thinking about terrorists, we’re thinking about cyber.”
Cybersecurity startups valued at more than $1 billion include Cloudflare Inc., Cylance, CrowdStrike Inc., and Zscaler Inc. Including the new funding round, Cybereason has raised $188.6 million from investors including Charles River Ventures, Lockheed Martin Corp., and Spark Capital.
9:04p
Sabey Data Centers Achieves Highest Level of Energy Savings
Sabey might be a lot smaller than eBay and Digital Realty Trust, but the Seattle-based company beat out both as the data center operator that achieved the highest level of energy savings last year, as noted in the Department of Energy’s (DOE) 2017 Better Buildings Progress Report.
Sabey Data Centers was recognized for having achieved its goals in 2016, and for having the highest percentage of savings so far of all data center operators enrolled in the Better Buildings Challenge program. When the program started in 2011, 60 organizations joined, representing almost 2 billion square feet of building space. Their goal: to improve the efficiency of their building portfolios by 20 percent or more.
The federal government committed to a goal of $2 billion in third-party financing, and the finance community committed to $2 billion in energy efficiency financing. Today, more than 310 organizations representing 4.2 billion square feet and 1,000 industrial facilities have taken the Better Buildings Challenge, and public and private sector financing commitments total well over $10 billion.
Rob Rockwood, president of Sabey, said, “To be cited as a company that has a proven approach to significant energy savings is a great honor. It’s also a testament to our operations staff’s everyday commitment to energy efficiency and the eagerness on our customers’ part to embrace these practices.”
The DOE highlighted Sabey’s 408,000-square-foot Quincy facility in Washington State as a “Leadership in Action” model for the industry. “Sabey Data Center Properties has demonstrated that high efficiency design can be applied effectively in colocated data center spaces by achieving 41 percent savings at the multi-tenant Intergate.Quincy facility.”
As one of the biggest data center landlords in Washington, Sabey has more than 20 years of experience in the data center business and is perhaps the largest provider of hydro-powered facilities in the United States. Sabey’s current tenants include Microsoft Corporation, JP Morgan Chase, Savvis, Internap, VMware and T-Mobile.
In addition to two data centers in Quincy, it has large facilities in Seattle and Wenatchee. The company also has data centers in New York City and Ashburn, Virginia.
9:23p
Digital Realty Study: Direct Connects to Cloud Bring 50X Less Latency
For a company thinking about leveraging the public cloud as part of its data center solution, latency can be a big concern. For some, unfortunately, latency isn’t considered until after they’ve already committed to using the public cloud, and it quickly becomes a costly issue. Others see the red flag ahead of time and perhaps only utilize public clouds in situations where latency won’t cause a problem.
The problem is the internet itself. It’s fast, but not instantaneous. Even under the best of conditions, data traveling to and from a server, whether located on-premises or sitting in a carrier hotel, takes enough measurable time to make some processes sluggish or inoperable. If a bottleneck arises somewhere along the way, which depending on location can happen often, the entire system might become all but unusable.
Security can also be a concern as organizations might be wary of sending sensitive data across the public internet, whether that data is encrypted or not.
The big cloud providers recognized the problem early on and offered enterprise users direct wired access — for a fee of course — under plans like DirectConnect, ExpressRoute and Interconnect. Data center providers were also quick to understand the issue, with all offering direct-connect services — with the added bonus of direct connections to multiple cloud providers. Customers like choice, it seems.
While it’s pretty obvious that sending data back and forth through a direct connection will be more efficient than having it transit the internet, there have been no metrics — other than anecdotal evidence — measuring the difference.
That is, until today, with the release of a study conducted for data center operator Digital Realty by Krystallize Technologies, which shows significant improvements when using IBM Direct Link Colocation to connect with IBM Cloud Bluemix servers located in its data centers when compared with connections made via the internet.
“We really wanted to get some metrics into the industry,” Digital Realty’s CTO Chris Sharp told Data Center Knowledge, “because I don’t believe many companies have really boiled it down to Use Case A and Use Case B and then represent the delta between the two. That’s why we think this is drastically different, by trying to educate the market with actual metrics.”
The study focused on IBM’s Bluemix because Digital Realty has at least five data centers where it can connect customers directly to Bluemix servers located on-premises. Although the study focused on comparing those on-premises direct connections with connections made via the internet, it also included metrics for direct connections made through a metropolitan area network, or MAN.
Three issues were examined, starting with “file-read latency,” the time it takes for a requested packet to be transmitted and arrive back at the file system. The second was “file-read throughput,” a measure of kilobytes of storage transmitted per second. Finally, the study looked at “application performance,” described as “how applications actually perform in the tested configurations.”
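To make the first two metrics concrete, here is a minimal sketch of how one might time a file read over a network-attached path and express the results in the same units the study reports. It is an illustration only, not Krystallize’s methodology; the file path and block size are assumptions, and a real benchmark would control for OS caching and average many runs.

```python
import time

# Illustrative sketch only -- not the study's actual methodology.
# PATH is a hypothetical file on network-attached storage reachable over
# the link being tested (internet, MAN, or direct cross connect).
PATH = "/mnt/remote/testfile"
BLOCK = 64 * 1024  # assumed 64 KB read size


def measure(path, block=BLOCK):
    # "File-read latency": time for one small read request to complete.
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read(block)
    latency = time.perf_counter() - start

    # "File-read throughput": kilobytes transferred per second over a full read.
    # (Note: the OS page cache can inflate repeat reads; a real test would flush it.)
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    throughput_kbps = (total / 1024) / elapsed
    return latency, throughput_kbps


if __name__ == "__main__":
    lat, tput = measure(PATH)
    print(f"file-read latency: {lat:.4f} s, throughput: {tput:.2f} kB/s")
```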
The differences were somewhat akin to comparing driving on a gravel road to driving on a NASCAR speedway.
The “file-read latency” test, with a time of 0.3 seconds for the internet connection and 0.0044 seconds for the direct connect, showed that the direct link cross connect delivers on average 1/50 the latency of the internet. The time for a direct connection utilizing a MAN was 0.088 seconds, still a marked improvement.

The “file-read throughput” test delivered 55.4 times better throughput using the direct connection, or a speed of 413.76 kB/s using the internet, 6,739.10 kB/s through a MAN, and 22,904.26 kB/s through a direct connection with an on-premises Bluemix server.
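As a quick back-of-the-envelope check using only the figures reported above, the throughput multiples follow directly from the kB/s numbers:

```python
# Throughput figures as reported in the study (kB/s)
internet_kbps = 413.76
man_kbps = 6739.10
direct_kbps = 22904.26

# Ratios relative to the plain internet connection
print(f"direct cross connect vs internet: {direct_kbps / internet_kbps:.1f}x")  # ~55.4x
print(f"MAN vs internet:                  {man_kbps / internet_kbps:.1f}x")     # ~16.3x
```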

Boiling this down into “application performance,” this means that a 5.5 MB unoptimized page will render in 0.3 seconds with the direct connection or 25.8 seconds when transiting the internet. For a best case scenario, using full caching and parallel processing, the direct connection rendering time drops to 0.2 seconds compared to 13.3 seconds when using the internet.
Although the test didn’t attempt to measure security in any way, direct connections also sidestep the exposure that comes with sending traffic over the public internet.
“If you do a private interconnection, you no longer have to have any public-facing infrastructure to achieve your desired end state,” Sharp said. “And so you’re not open to any DDoS attacks or any other malicious prying or hacking into your public infrastructure.”
The increased security afforded by directly connecting to cloud providers should be especially important to industries, such as health care and finance, in which security is mandated by law.