Data Center Knowledge | News and analysis for the data center industry
Wednesday, August 24th, 2016
12:10a
Utah City Says ‘No Thank You’ to Facebook Data Center
Officials in West Jordan, Utah, have terminated negotiations with Facebook about a potential data center project the company was considering there.
While representing a big potential investment, the project “would not include a long-term significant employment base,” a statement issued by the West Jordan city manager’s office late Tuesday afternoon read.
The Facebook data center, codenamed Project Discus, would have meant a $1.5 billion investment by the company. The incentive package that various state and local agencies at one point considered offering was valued at up to $260 million, but not all of the entities whose tax revenues would be affected agreed to it, which meant Utah could not match the incentive package being offered by the State of New Mexico, which was competing against Utah for the project.
Facebook, which the city did not name as the company behind Project Discus but which local press have identified as such, has not publicly announced plans to build a data center in either state. Instead, the company issued a standard statement saying it is always on the lookout for a good location for its next data center build.
The final decision to pull out of negotiations over the potential Facebook data center project came after the Utah State Board of Education voted against offering the company a property tax incentive at the expense of the Jordan School District, according to the city manager’s statement. Another vote against the tax breaks came last week from the Salt Lake County Mayor and County Council.
“While this incentive package was more than those previously offered to other companies wanting to locate in Utah, due to [the company’s] size, investment ($1.5 billion) and name recognition, it was not competitive enough as compared to the incentives offered by the State of New Mexico, who also has been courting this data center project,” the city manager’s statement read.

12:00p
Red Hat Virtualization Revamp Takes On Hyperconvergence
The promise of hyperconverged infrastructure is for data center hardware to include the provisioning tools needed to make workloads run across the entire platform, no matter how big it eventually gets. If hyperconvergence ends up winning the data center, it will be because open source alternatives did not step up to the plate in time.
With announcements of the latest innovations to VMware’s venerable virtualization platform just one week away, and with OpenStack becoming the focus of much of Red Hat’s development efforts, the question gets asked once again: Will Red Hat adjust its strategy for its enterprise virtualization platform, this time to keep up with the advances in hyperconvergence?
See also: Why Hyperconverged Infrastructure is So Hot
Wednesday morning brought Red Hat’s latest answer, in what is nearly becoming an annual event for the company: For version 4 of Red Hat Virtualization, the word “Enterprise” gets dropped from the platform’s title, and its mission becomes more focused upon establishing a single base layer for software-defined infrastructure for customer data centers.
“In this modern data center there shouldn’t be multiple software-defined infrastructures,” Scott Herrod, product manager for Red Hat Virtualization, told Data Center Knowledge. “There should be one, and Red Hat is trying to consolidate and unify that. So whether your workloads are running in Red Hat Virtualization, OpenStack, or Atomic Hosting and OpenShift platform, you don’t need to worry about differences in the infrastructure and having to change policies and procedures around how you effectively manage that infrastructure.”
Put another way, Red Hat is aiming to produce a single control point for administering software-defined infrastructure (SDI) — the compute, storage, and network resources involved in spinning up virtual workloads on private and hybrid clouds. In time, Herrod explained, it wants to provide a unified control center for managing OpenStack, OpenShift, and KVM hypervisor-driven workloads.
See also: Five Myths About Hyperconverged Infrastructure
But accomplishing that means removing the separations between the SDN networks established by all these platforms, so that they effectively run on the same layer. That’s not an easy task, and as Herrod admits, RHV 4 (we’d better get used to it without the “E”) is only at the start of this journey.
 Red Hat Virtualization screenshot (Source: Red Hat)
Neutron Bomb
With previous editions of RHEV, Red Hat had been tinkering with experimental support for Neutron, OpenStack’s native networking component — specifically with building SDN bridges between the two platforms, as well as with Open vSwitch for hypervisor-style virtualization. But that methodology required fully embracing Neutron. In turn, explained Herrod, that mandated that RHEV customers support at least half of the rest of OpenStack, including the Keystone authentication component and a message queue such as OpenStack’s Marconi.
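To get a sense of what that dependency looked like in practice, consider the round trip a client makes just to list networks through Neutron: it first has to obtain a token from Keystone. The sketch below is purely illustrative, with placeholder endpoints, project, and credentials rather than anything RHEV shipped, but it shows why Neutron support pulls the rest of the OpenStack identity machinery in with it.

```python
# Hypothetical sketch: even a simple Neutron query requires Keystone first.
# Endpoint URLs, project, and credentials are placeholders for illustration only.
import requests

KEYSTONE = "http://keystone.example.com:5000/v3"
NEUTRON = "http://neutron.example.com:9696/v2.0"

def get_token(user, password, project):
    """Request a project-scoped token from Keystone (v3 password authentication)."""
    payload = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": user,
                    "domain": {"id": "default"},
                    "password": password,
                }},
            },
            "scope": {"project": {"name": project, "domain": {"id": "default"}}},
        }
    }
    resp = requests.post(f"{KEYSTONE}/auth/tokens", json=payload)
    resp.raise_for_status()
    # Keystone hands back the token in a response header, not the body.
    return resp.headers["X-Subject-Token"]

def list_networks(token):
    """Ask Neutron for the visible networks, authenticating with the Keystone token."""
    resp = requests.get(f"{NEUTRON}/networks", headers={"X-Auth-Token": token})
    resp.raise_for_status()
    return resp.json()["networks"]

if __name__ == "__main__":
    token = get_token("rhev-admin", "not-a-real-password", "datacenter")
    for net in list_networks(token):
        print(net["id"], net["name"])
```

Behind that first call sits a running Keystone service, its database, and the message queue the OpenStack components use among themselves, which is operational weight a traditional virtualization shop never asked for.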
“It was unfriendly to the traditional virtualization user, and we got a lot of push-back,” Herrod admitted. While Neutron had the benefit of being a shiny new technology that generated a lot of interest, compelling the vendors who support RHEV to produce software drivers that addressed these intricate bridges proved “counter-productive.”
What’s more, certain customers in the financial sector (he would not name names, but we got clues that their Fortune numbers were in the low double-digits) refused to deploy infrastructure that mandated multiple logical networking layers.
So RHV 4 will continue to support Neutron as an option, because that’s the path that RHEV started paving. But Herrod was emphatic that RHEV’s customers were soundly rejecting the stratification of the data center into multiple layers. As the company’s documentation for version 3 explained it, virtual machines had virtual network interface cards (vNICs), so one layer connected them all. But another layer had to be designated for logical devices that were not VMs, and did not have vNICs or virtual bridges associated with them. Then there were cluster layers to manage the images of the networks that data center managers actually saw, and data center layers to converge the cluster layers.
This list didn’t even include the separate layer that OpenShift PaaS users thought they required for managing Docker containers, or the Kubernetes network above that. Does the modern data center really need all of these layers? “The answer is a resounding ‘No,’” said Herrod.
The Shim Layer
With RHV 4, he said, Red Hat has constructed a new shim layer for customers who happen to be utilizing any kind of third-party platform where overlay networks enter the picture. On the software side, Docker, Kubernetes, and overlays such as Weave come to mind; on the vendor side, so do Cisco’s ACI, Nuage Networks, and Midokura.
“What we can do within Red Hat Virtualization,” Herrod explained, “without having to write specific drivers for every vendor or every vendor having to explicitly write code to integrate with RHV, we’re able to take that API call and translate that into the Open vSwitch configuration that’s required to deliver the overlay networks, based on the external control planes. We have a common layer to interface with our API, as well as with the Neutron API; and they can consolidate their software-defined networking using open, proprietary, or third-party SDN technologies.”
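Herrod didn’t walk through the code, but the Open vSwitch end of such a translation is straightforward to picture. The sketch below, with an assumed bridge name, peer address, and tunnel key, shows the kind of VXLAN overlay plumbing a shim might emit after receiving an API call from an external control plane; it is an illustration of the general technique, not RHV’s implementation.

```python
# Illustrative only: the sort of Open vSwitch commands an SDN shim might issue
# when an external controller requests a VXLAN overlay segment. The bridge name,
# peer IP, and tunnel key below are invented placeholder values.
import subprocess

def ovs(*args):
    """Run an ovs-vsctl command and raise if it fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

def build_vxlan_overlay(bridge, port, remote_ip, tunnel_key):
    # Create the integration bridge if it does not already exist.
    ovs("--may-exist", "add-br", bridge)
    # Add a VXLAN tunnel port toward the remote hypervisor, keyed per overlay network.
    ovs("--may-exist", "add-port", bridge, port, "--",
        "set", "interface", port,
        "type=vxlan",
        f"options:remote_ip={remote_ip}",
        f"options:key={tunnel_key}")

if __name__ == "__main__":
    # One overlay segment (tunnel key 5001) toward a peer host at 192.0.2.10.
    build_vxlan_overlay("br-int", "vxlan-peer1", "192.0.2.10", 5001)
```

The value of the shim is that the controller never has to know these commands exist; it speaks its own API, and the translation to Open vSwitch happens once, in one place.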
It’s not a universal layer as of yet, he admits. There’s work underway to reach out to OpenDaylight and OVN, he said, to present open source alternative methodologies for integrating proprietary network components and appliances.
Herrod’s team is also looking for opportunities to apply something similar to this shim layer concept to storage. Until now, RHEV had been using Cinder, the OpenStack component for persistent block storage, especially for large database volumes. But last October’s acquisition of IT automation company Ansible is making it feasible for Red Hat’s forthcoming single management layer to orchestrate and automate a single storage plane “to help avoid storage reconfiguration, and some of the complexity behind it.” Presently, that project is in the experimental phase.
But Red Hat can’t afford for these experiments to remain in the laboratory for too long, as server makers — most notably HPE, Cisco, Dell, and Lenovo with Nutanix — are quickly gaining ground in the race to secure the lowest layer of the hyperconverged data center.
Read more: Nutanix Certifies Cisco UCS for Its Hyperconverged Infrastructure

3:59p
China’s $100 Billion Chip Supremacy Bid Deemed Unrealistic
(Bloomberg) — China faces an uphill battle in its push to become the global leader in computer chips because of a lack of technological know-how and talent, according to an analysis by Bain & Co.
China is one of the world’s largest consumers of semiconductor devices thanks to its manufacturing might, and it is planning to spend more than $100 billion to become a premier supplier to the rest of the world as well. The global consultancy estimates that by 2020 almost 55 percent of the world’s memory, logic, and analog chips will flow to or through the country. But the vast majority of the microchips that act as the brains of products like Apple’s iPhone are imported from companies like Intel and Samsung.
The government in 2014 moved to change this with plans to invest more than $100 billion and become a leading global player. It’s also driven consolidation among domestic suppliers to maximize its investments, including the $2.8 billion merger of Tsinghua Unigroup and Wuhan Xinxin Semiconductor Manufacturing announced last month.
See also: China’s Kingsoft Aims to Take On Alibaba in Cloud Computing
China’s effort to build a world-leading semiconductor industry is part of a broader push to wean itself off foreign technology. But financial investment will not be enough to buy leadership of the semiconductor sector, which is worth around $1 trillion, according to Singapore-based Bain partner Kevin Meehan.
The country currently makes just 15 percent of the semiconductors it consumes, Bain said in a report. And by 2020, the consultancy expects Chinese-based plants to produce just 7 percent of the world’s microchips, barely up from current levels.
“China is coming at it in a pretty smart and intelligent way,” he said. “But I don’t see a path for them to own leading-edge processor technology and that is the foundation of Intel, the foundation of Samsung’s success.”
Efforts by Chinese companies to buy rivals with intellectual property in the processor and memory markets have run afoul of regulators around the world. Plans for a $3.8 billion investment in Western Digital were scrapped amid a U.S. security review and investments in Taiwanese chip companies face regulatory obstacles and have stoked political tension.
See also: Intel Rolls Out First Silicon Photonics Products for Data Centers
“Assuming they can spend it all, the biggest risk is that they end up with a whole bunch of fragmented and weak follower positions,” Meehan said. “There’s ways for them to have influence over greater than 10 percent if they’re partnering well, but otherwise I think you’re essentially capped there.”
Meehan said China’s semiconductor makers could eventually learn from partnerships with global giants and become one of the larger suppliers of key components like computer memory. Both Intel and Qualcomm Inc. have agreed to build semiconductor fabrication plants there in partnership with local providers.
“If you take a long view — not five years but decades — then certainly you’d have to believe there’s some absorption,” he said. “But people are careful not to put their leading edge IP in China.”
“I don’t see a way to get there without licensing or buying.”

4:41p
Arc Flash: The Real Data Center Security Hazard Is Just a Spark
What good is protecting your data center from every possible incursion, from any known or unknown source, on account of any known or foreseeable vulnerability, if the greatest threat it faces today is a spark of electricity?
An arc flash study could become at least as valuable to your data center as a vulnerability assessment or a penetration test, says Joe Furmanski, the veteran facilities director for the University of Pittsburgh Medical Center.
“An arc flash study looks at all the electrical components, from the source at the power company, the whole way through to the plugs that you plug into your IT equipment,” Furmanski told us in an interview. “They look at how all the circuit breakers are set up — it’s called a coordination study — and they look at the power going through. They punch in all these formulas to figure out, will these breakers move fast enough if there’s an electrical short, or will they move too slowly and let the capability of an arc flash be created?”
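Furmanski didn’t recite the formulas, and real studies rely on the empirical equations in IEEE 1584, but the relationship they capture can be sketched simply: the energy that reaches a worker grows with the fault current and with how long the upstream breaker takes to clear it, and falls off with distance. The constant in the sketch below is a deliberate placeholder, so only the proportions mean anything, which is enough to show why breaker coordination is the heart of the study.

```python
# Deliberately simplified sketch of the relationship an arc-flash study quantifies.
# Real studies use the empirical IEEE 1584 equations; K is an arbitrary placeholder,
# so only the ratios below are meaningful, not the absolute numbers.
K = 1.0  # placeholder scaling constant, NOT an IEEE 1584 value

def relative_incident_energy(fault_current_ka, clearing_time_s, working_distance_mm):
    """Relative incident energy at the worker's position: ~ current * time / distance^2."""
    return K * fault_current_ka * clearing_time_s / working_distance_mm ** 2

# Same fault, same working distance, two breaker settings.
slow = relative_incident_energy(25.0, 0.5, 455)   # breaker clears in half a second
fast = relative_incident_energy(25.0, 0.05, 455)  # breaker clears in 50 milliseconds
print(f"slow clearing: {slow:.2e} (relative units)")
print(f"fast clearing: {fast:.2e} (relative units)")
print(f"energy reduction from faster clearing: {slow / fast:.0f}x")
```

A tenfold reduction in clearing time means a tenfold reduction in the energy a worker standing at the panel absorbs, which is exactly the kind of result the coordination study is hunting for.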
The typical electrical safety document [PDF], such as the kind published by the US Occupational Safety and Health Administration, reads like a brochure the fire chief would pass out during lecture time in junior high. Reads one passage, “There exists a number of ways to protect workers from the threat of electrical hazards,” ahead of a list that includes remembering to turn things off. It’s been difficult for OSHA, the National Fire Protection Association (NFPA), and their peers to strike the right tone when addressing data center professionals.
Furmanski will go into detail on the topics of prevention, remediation, and mitigation of power events in a comprehensive presentation at the Data Center World conference in New Orleans this September.
The NFPA tightens its rules every year, Furmanski told us, and OSHA maintains regulations stipulating that data centers must adhere to the NFPA rules. Since 2013, this turning up of the heat has had a positive effect on the conversations that data center engineers have among themselves, he said. But for some reason, that discussion has not yet carried over to their clients.
“If you have a data center that’s more than five to eight years old, it might not have been evaluated for [arc flash] when it was built,” he explained. “And some of the builders, even today, don’t do what I fully believe they should be doing to protect the people who will use that data center.”
In the eight years that Furmanski has spent with UPMC, he’s had three serious data center incidents on account of arc flash. One, he admits, was preventable: a new electrical box that was being put in and energized was rated for a lower voltage than what it was being fed. “Luckily, our folks were on the other side of a wall when they flipped a breaker to turn on that panel,” he said. “It blew the panel apart, but no one was injured.”
In another incident, a fellow cutting wires through the ceiling dropped some snippets onto the grid below. Not immediately, but a while later, one of those snippets crossed two electrical circuits. “As one of my guys walking down the aisle towards it said, it looked like a volcano spewing big hunks of orange out the top of it.”
See also: Electromagnetic Pulse: Can Space Weather Kill the Cloud?
Detecting electrical incidents such as these cannot be done, well, entirely electrically. An electrical fault and the signal announcing it travel through the same conductors, so any signal that an incident is happening probably won’t beat the incident itself to the protection gear. At least some sensors for power distribution units need to be optical, because the light from an arc reaches a detector practically instantly, giving the mechanism the opportunity to trip breakers before the power event catches up.
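Furmanski didn’t describe the trip logic itself, but optical arc-flash relays on the market commonly pair the light signal with a current check so that a camera flash or a dropped work light can’t open a breaker on its own. A minimal sketch of that widely used light-plus-current scheme, with invented thresholds:

```python
# Minimal sketch of light-plus-current arc-flash trip logic, a scheme commonly
# used in optical arc-flash relays. Thresholds and the sensor interface are
# invented for illustration; real relays make this decision in firmware within
# a few milliseconds.
from dataclasses import dataclass

@dataclass
class Sample:
    light_lux: float    # reading from the optical point sensor or fiber loop
    current_ka: float   # RMS current seen by the current transformer

LIGHT_THRESHOLD = 10_000.0  # placeholder: an arc flash is extremely bright
CURRENT_PICKUP = 2.0        # placeholder: kA level that distinguishes a fault

def should_trip(sample: Sample) -> bool:
    """Trip only when intense light AND fault-level current coincide, so ordinary
    lighting or a load surge alone never opens the breaker."""
    return sample.light_lux >= LIGHT_THRESHOLD and sample.current_ka >= CURRENT_PICKUP

if __name__ == "__main__":
    print(should_trip(Sample(light_lux=80_000, current_ka=18.0)))  # True: arc event
    print(should_trip(Sample(light_lux=80_000, current_ka=0.3)))   # False: just bright light
    print(should_trip(Sample(light_lux=200, current_ka=6.0)))      # False: overcurrent handled elsewhere
```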
Surprisingly, said Furmanski, few equipment vendors in the field actually sell these optical sensors. New data centers don’t require these retrofits, and some enterprises are simply moving to new facilities, with designs that are more practical and more safety conscious. But healthcare is among the sectors of the economy that don’t always have that luxury.
“Transformers are the killer of all things,” he remarked. “In the old way of distributing power, you’d put transformers out on the data center floor and from there you’d wire them to your cabinets where the IT equipment was. That, in many cases, is the source of the problem.” Modern designs separate transformers further from the equipment and have adopted much safer power distribution equipment such as Starline Track Busway.
Joe Furmanski will go into much further detail on the topics of prevention, remediation, and mitigation of power events at 9:10 am Central Time Thursday, September 15, in Room R215 at Data Center World, presented at the Morial Convention Center in Downtown New Orleans. Data Center World is presented by AFCOM, the association for educating data center and IT infrastructure professionals worldwide.
Register for Data Center World today!

5:10p
Hot Encryption Startup Virtru Raises $29M in Series A
Brought to You by The WHIR
Enterprise encryption and data protection provider Virtru announced this week that it has raised $29 million in a Series A funding round led by Bessemer Venture Partners. Virtru will use the funds to scale operations globally, build new products to protect different types of data, and extend its SDKs and APIs.
Virtru’s architecture is based on the open-source Trusted Data Format, which was invented by Will Ackerly, a former NSA cloud security architect. The company was co-founded by Will and John Ackerly and launched in 2014 after entering the Cloudant Accelerator Program. Its approach differs from network-based security measures like firewalls and intrusion prevention systems by protecting the data itself with a “secure envelope,” which allows access privileges to be revoked or changed at any time.
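Virtru’s wire format isn’t spelled out here, but the “secure envelope” idea itself is easy to sketch: each object is encrypted with its own data key, and that key lives with a policy service the sender controls, so revoking access later is just a matter of flipping a flag on the key record. The sketch below is a conceptual illustration only: the in-memory key server and every name in it are invented, and this is not Virtru’s Trusted Data Format.

```python
# Conceptual sketch of envelope encryption with revocable access, in the spirit
# of a "secure envelope." NOT Virtru's Trusted Data Format: the in-memory key
# server and all names here are invented for illustration.
from cryptography.fernet import Fernet

# Stand-in for the policy/key service the data owner controls.
key_server = {}  # object_id -> {"key": bytes, "revoked": bool}

def protect(object_id, plaintext):
    """Encrypt the payload with a fresh data key and register that key with policy."""
    data_key = Fernet.generate_key()
    key_server[object_id] = {"key": data_key, "revoked": False}
    return Fernet(data_key).encrypt(plaintext)

def access(object_id, envelope):
    """Release the plaintext only if the owner has not revoked the key."""
    record = key_server.get(object_id)
    if record is None or record["revoked"]:
        raise PermissionError("access revoked by data owner")
    return Fernet(record["key"]).decrypt(envelope)

def revoke(object_id):
    """The owner flips a flag server-side; already-shared ciphertext becomes useless."""
    key_server[object_id]["revoked"] = True

envelope = protect("memo-42", b"quarterly forecast")
print(access("memo-42", envelope))      # b'quarterly forecast'
revoke("memo-42")
try:
    access("memo-42", envelope)
except PermissionError as err:
    print("after revocation:", err)
```

The ciphertext can sit in Gmail or Drive indefinitely; what the recipient loses when the owner revokes is the key service’s cooperation.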
Virtru claims to have more than 4,000 customers.
It integrates with popular enterprise applications like Gmail, Google Drive, and Microsoft Outlook. The company said it will use the investment to increase its support for Office 365 and other cloud platforms, while also encouraging developers to integrate its encryption.
New Enterprise Associates, Soros Fund Management, Haystack Partners, Quadrant Capital Advisors, and Blue Delta Capital also participated in the funding round.
Virtru has announced several launches this year, including its SDK.
A version of this article first appeared at http://www.thewhir.com/web-hosting-news/encryption-provider-virtru-raises-29m-in-series-a-to-extend-global-reach

5:55p
T5 Buys Chicago Data Center from Forsythe, Its First in That Market
T5 Data Centers announced the acquisition of the large Forsythe data center in Elk Grove, Illinois, that it has been managing since the facility was commissioned last year, marking its entry into the Chicago data center market.
Chicago is one of the hottest data center markets in the country, where more than 30MW of wholesale data center space was leased last year, according to Jones Lang LaSalle. Among those deals were leases signed by Oracle, Salesforce, and Microsoft, cloud providers looking primarily to serve customers in the Chicago market, the commercial real estate firm said in a recent market report.
The data center is more than 200,000 square feet and partially occupied. The property includes a four-acre parcel that could accommodate another 28MW data center, according to T5, whose staff has been on site under contract through its data center facilities management business since the facility’s early days.
This is T5’s ninth market. Its other facilities are in multiple East Coast, West Coast, and southern markets.
There continues to be a lot of demand in the Chicago market, with several players looking to take advantage of it. Digital Realty Trust recently bought the site of a former Motorola headquarters next to its data center campus in Franklin Park, and CyrusOne announced plans for a huge expansion on the site of the Chicago Mercantile Exchange data center it acquired.
QTS brought some space to market inside the first 8MW phase of a data center it built at the former Chicago Sun-Times printing plant. DuPont Fabros Technology is also pursuing next-phase expansion in the Chicago data center market, according to JLL.
Another recent entrant to the market is TierPoint, which also arrived via acquisition, buying data center provider AlteredScale.