Data Center Knowledge | News and analysis for the data center industry
 

Thursday, February 9th, 2017

    1:00p
    GaN is Eyeing Silicon’s Data Center Lunch

    As deep learning proliferates, the question of data center power density is once again on the rise, creating new business opportunities for specialized cloud services hosted in facilities that can support north of 30 kW per rack, and for companies in the power conversion space, which can tackle the density issue by making systems more energy efficient.

    A material that’s enabling better vision in self-driving cars, richer augmented-reality experiences, and wireless power also promises extreme energy efficiency improvements in the data center. Replacing silicon with gallium nitride, or GaN, as the semiconductor material in power conversion chips yields much smaller, more energy efficient devices that switch much faster.

    Training deep neural networks today requires powerful computers filled with GPUs. These machines need a lot of power, and data centers that host them have to be designed for high power density, similar to the way supercomputer data centers are designed. The radical energy efficiency improvements promised by GaN power converters on motherboards mean more computing power can be stuffed in a single data center cabinet, making the technology appealing to a whole range of companies in the data center space, from those in the hardware supply chain to operators of hyper-scale cloud platforms.

    Cutting Conversion Losses in Half

    Interest in what GaN devices can do for data center power density is “high across the board,” Steven Tom, product line manager at Texas Instruments, said in an interview with Data Center Knowledge. According to him, the material “halves power losses” when used instead of silicon.

    A company that has been a major force behind GaN chips in the power conversion market is El Segundo, California-based Efficient Power Conversion. One of EPC’s two founders is Alex Lidow, who is probably GaN’s most prominent and enthusiastic evangelist. In the semiconductor business for 40 years, he was one of the inventors behind the power MOSFET, a silicon transistor widely used for power switching in servers and many other types of electronics. Now he’s advocating for GaN transistors to push those silicon MOSFETs out.

    See also: How Server Power Supplies are Wasting Your Money

    TI, one of the world’s largest semiconductor companies, partners with EPC for its GaN devices, which it uses to build systems-on-chip, or SoCs. An SoC is essentially a single chip that integrates multiple devices of different types. TI’s upcoming product for the data center market, for example, combines two EPC GaN transistors and other components into a single power-conversion chip.

    Hyper-Scale Cloud Firms Dabble in GaN

    TI has seen requirements come in for power distribution solutions for motherboards with multiple GPUs, Tom said. When they come in, customers don’t usually specify that they want GaN to be used, but the material has proven to be effective in addressing those requirements.

    Operators of web-scale platforms hosted in massive data centers – companies that according to Lidow include Facebook, Google, and Oracle – have been buying GaN chips and exploring the technology, although they are not yet building it into their servers at scale. Artificial Intelligence and cloud are making data center power density an acute issue for these firms, and GaN-based power conversion is one potential solution, he said.

    Silicon-based server power conversion device (left) vs. a GaN-based one (right). The blue chips on the smaller device are EPC’s GaN transistors (Photo: Yevgeniy Sverdlik)

    The LMG5200, the aforementioned power-conversion chip for servers with EPC’s transistors at its heart, is TI’s first GaN device for the data center market. EPC chips are in “several major servers” expected to ship this year, a company spokesperson wrote in an email. The volume of devices shipped for server use up to this point has been “less than a million units.”

    Cheaper Chips

    The second EPC founder is Archie Hwang, owner of Episil Technologies, which is based in Taiwan and has a silicon foundry there. EPC builds its wafers in that foundry and grows GaN crystals in a facility on the premises. That’s one of the ways the company has been able to bring down the cost, and it’s possible because GaN is grown on top of silicon – a process invented in 1999 by the Japanese scientist Hiroyasu Ishikawa. EPC also covers the layer of GaN transistors in its devices with multiple layers of glass, eliminating the need for processor packages, which constitute more than half the cost of silicon chips, according to Lidow. Finally, the material’s much higher density enables the company to fit many more transistors on a single wafer than silicon would allow.

    EPC and TI aren’t the only GaN game in town. Their competitors include Transphorm, a Goleta, California-based maker of GaN power conversion chips that has attracted hundreds of millions of dollars in funding from investors such as Google Ventures and Kleiner Perkins Caufield & Byers (Lidow and Hwang are EPC’s two sole financial backers). Another example is Macom, a Lowell, Massachusetts-based semiconductor company that three years ago acquired Nitronex, one of the first companies to productize GaN on silicon.

    “Textbook Waveforms”

    Lidow and Tom are both extremely enthusiastic about the future of GaN in the broader power conversion industry. “We think it’s a complete game-changer,” Tom said, describing TI’s position. “We have a very strong opinion of the technology; it really does things silicon as a technology just can’t reach.”

    People in the semiconductor industry have known for a long time that between GaN and silicon, GaN is the better semiconductor material. But until Ishikawa’s breakthrough, which made it possible to produce GaN wafers in the same fabs that produce silicon wafers, the cost of GaN chips made them infeasible, Lidow said.

    It’s a better semiconductor material because of its density. Because the chemical bond between atoms in GaN is stronger than in silicon, features can be packed much closer together without losing control of the flow of electrons.

    Compared to silicon, GaN allows for fewer unwanted imperfections in the electrical waveform inside the power supply. “In a power supply it’s all about making perfect square waves,” Tom said. What prevents those perfect square waves from forming is dissipation: ringing, overshoots, and undershoots that leave energy circulating on the board. With GaN chips “we can make textbook waveforms.”

    The improvements don’t come simply as a result of replacing silicon transistors with GaN ones. The devices are completely redesigned from the ground up. TI’s new devices cut the number of conversion steps energy goes through between the point of entry and its destination. Steps like AC input, isolation, intermediate bus converter, and point-of-load converter (where each step is accompanied by some power loss) are reduced to two: from 48V DC to 12V DC and from 12V to 1V at each point of load. “You get the same performance or better in that conversion because of the device properties,” Tom explained.
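
    To see why collapsing the conversion chain matters, consider that per-stage losses multiply. The minimal sketch below uses assumed, illustrative per-stage efficiencies (not figures from TI or EPC) to show how a two-step 48V-to-12V-to-1V chain can beat a longer chain even when each individual stage looks respectable.

    # Illustrative sketch only: the per-stage efficiencies are assumed round
    # numbers, not measurements from TI, EPC, or any specific product.

    def end_to_end_efficiency(stage_efficiencies):
        """Multiply per-stage efficiencies to get the overall conversion efficiency."""
        eff = 1.0
        for e in stage_efficiencies:
            eff *= e
        return eff

    # A traditional chain: AC input, isolation, intermediate bus, point-of-load.
    legacy_chain = [0.95, 0.96, 0.95, 0.90]
    # A two-step chain: 48V DC to 12V DC, then 12V to 1V at the point of load.
    two_step_chain = [0.97, 0.93]

    for name, chain in (("legacy", legacy_chain), ("two-step", two_step_chain)):
        eff = end_to_end_efficiency(chain)
        print(f"{name:9s} efficiency: {eff:.1%}  (loss: {1 - eff:.1%})")

    With these assumed numbers, the end-to-end loss drops from roughly 22 percent to under 10 percent, the kind of “halved losses” effect Tom describes.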

    From Beetle to Ferrari

    Lidow has no doubt GaN will disrupt a large portion of the $30 billion power conversion industry. “As sure as the sun comes up, GaN is going to replace silicon in power conversion,” he said. It is, however, unlikely to replace silicon in logic semiconductors (CPUs and GPUs). “It’s possible, but it’s not certain.”

    The only disadvantage of GaN he sees today is the lack of knowledge about it. “Where it loses to silicon is in the user base.” Nothing has ever come along that was so superior to silicon in cost and performance, and switching from one to the other is like switching from an old VW Beetle to a Ferrari, Lidow said, a switch that comes with a learning curve for the driver.

    4:00p
    Four Ways to Survive the IT Operations Big Data Deluge

    Peter Waterhouse is Senior Strategist for CA Technologies.

    The days of managing monolithic applications running on a single platform are over. Organizations are now committed to delivering their customers a far richer variety of digital services over multiple channels. This means applications are more likely to execute from the cloud, via a multitude of microservices interacting with virtualized resources, containers, and software-defined networks.

    In this new normal, teams can no longer afford to get bogged down with reactive fire-fighting and lengthy war room sessions. But with so many moving parts, increased application complexity, and dizzying rates of software delivery, what strategies can IT operations employ to avoid being wiped out by waves of operational big data?

    The traditional approach is to buy more monitoring tools: one for every new wave of technology adopted. But this doesn’t scale, negatively impacts margins, and only provides narrow views into the all-important customer experience. So, putting tools aside, where should organizations turn?

    Well, to the data for starters – or, more importantly, to the business problems and opportunities that data can solve and uncover.

    This is hardly an epiphany. Web-scale companies implicitly understand the importance of data and of gleaning valuable insights from it. By developing analytics-driven applications, implementing at scale, and democratizing usage, these businesses continuously raise the bar in terms of productivity, agility, and customer engagement.

    In a DevOps context these businesses thrive because their IT teams are equally analytics-driven. Not only do they surpass today’s expectations for delivery speed and quality, they leverage data insights to drive improvements at every stage of the digital service continuum. So before perusing the extensive tools catalog, stop and consider four higher value strategies.

    Build an Analytics-driven Culture Within IT

    Many teams collect masses of data points; however, what characterizes a strong analytics-driven culture is a focus on collectively leveraging metrics for the benefit of the business as a whole. In practice this will involve:

    • Empowering IT teams with the applications needed to uncover and share more powerful insights. These can include changes in customer engagement via new mobile app designs, emerging performance/security anomalies, or optimum cloud architecture patterns.
    • Incentivizing teams according to business performance goals and outcomes; avoiding persistent “vanity metrics” and operational outputs.

    • Fast, real-time action or recommendations when insights are uncovered. Nothing demotivates teams faster than finding something valuable and then not being able to act on it.

    Democratize Data and Analytics Across the Entire Business

    Analytics has limited value when only used by IT operations to support their daily grind. Better methods and techniques treat data as an enterprise asset that many teams can use, share, and leverage in a variety of different contexts. This could involve:

    • Delivering an ‘analytics as a service’ operational function where teams can build their own monitoring dashboards and reports to quickly gain the insights they need.
    • Ensuring every group is provided with analytical models that have production-level support, together with real-time data that’s ready to use and of known quality.
    • Analytics-driven monitoring applications that immediately surface performance insights in the context of different roles and tasks.

    • Rapid insight prototyping, which, when shown to have value, quickly becomes established in production processes and tools.

    Start Using Analytics Where They’re Most Effective – Customer Experience

    You can’t manage what you can’t measure, but collecting and measuring data that’s truly reflective of customer experience is tricky. While some individual metrics will work in specific situations, it’s more likely that combinations, mashups, and new derivations will be needed.

    To gain customer experience insights, analytics-driven applications need to give teams complete visibility into cloud and on-premises application infrastructure, application performance, and the underpinning network. By correlating many metric types across these elements (time series, logs, etc.), analytical models become a shared mechanism that DevOps teams use to drive improvements. For example, predictive models can assess the business outcomes of new code based on application performance or latency improvements.
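
    As a concrete illustration of correlating metric types, the short sketch below compares an application latency time series with per-minute error counts parsed from logs. The field names and sample values are invented for illustration; they do not come from any particular monitoring product.

    # Hypothetical example: correlate a latency time series with per-minute
    # error counts parsed from logs. Sample data is invented for illustration.
    from statistics import correlation  # requires Python 3.10+

    # Per-minute p95 latency (ms) and error-log counts over the same window.
    latency_p95 = [120, 125, 130, 240, 260, 255, 140, 135]
    error_counts = [2, 1, 3, 40, 55, 48, 4, 2]

    r = correlation(latency_p95, error_counts)
    print(f"Pearson correlation between latency and log errors: {r:.2f}")
    if r > 0.8:
        print("Strong relationship: investigate the change behind the spike.")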

    It’s always possible, of course, for teams to revert to a narrow (albeit analytical) perspective to attack the problem. Some teams will be metric-driven; others will use logs. The trick is understanding how a dovetailed approach will yield greater value. For example, correlating log analytics with network performance management enables faster, more accurate root-cause determination.

    Become an Effective Hunter-Gatherer – Take a Unified, System-Level Approach

    There’s no disputing that many IT organizations have invested time and money in acquiring a great set of tools to collect data, so why replace them? What’s really needed, however, is a scalable, open method to aggregate and normalize millions of metrics and logs into one unified data store. Call it an immutable data lake, an analytics warehouse, whatever – having a centralized store of data helps teams quickly search, locate, and visualize valuable trends, patterns, and correlations. Without this, different groups may resort to managing their own (often overlapping) data sets, using inconsistent access methods, tools, and data formats. That’s fine for narrow views, but woefully inadequate when teams have to relate data across multiple data stovepipes.
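
    A minimal sketch of what that normalization step might look like appears below. The record shapes and field names are assumptions made for illustration, not the schema of any specific data lake or analytics warehouse; the point is simply that once metrics and logs share one shape, a single query can span both.

    # Minimal sketch: normalize heterogeneous telemetry into one shared schema
    # before it lands in a central store. Record shapes and field names are
    # assumed for illustration only.
    from datetime import datetime, timezone

    def normalize_metric(record):
        """Map a metric sample onto the shared schema."""
        return {
            "timestamp": datetime.fromtimestamp(record["ts"], tz=timezone.utc).isoformat(),
            "source": record["host"],
            "kind": "metric",
            "name": record["metric"],
            "value": float(record["value"]),
            "message": None,
        }

    def normalize_log(record):
        """Map a log event onto the same shared schema."""
        return {
            "timestamp": record["time"],  # already ISO-8601 in this sketch
            "source": record["service"],
            "kind": "log",
            "name": record["level"],
            "value": None,
            "message": record["msg"],
        }

    unified_store = [
        normalize_metric({"ts": 1486659600, "host": "web-01",
                          "metric": "cpu_util", "value": 0.83}),
        normalize_log({"time": "2017-02-09T17:00:05+00:00", "service": "checkout",
                       "level": "ERROR", "msg": "payment gateway timeout"}),
    ]

    # With everything in one shape, a single query spans metrics and logs.
    errors = [r for r in unified_store if r["kind"] == "log" and r["name"] == "ERROR"]
    print(errors)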

    As these strategies illustrate, being analytics-driven is so much more than fixing problems faster. When organizations invest in building an analytics culture, enact new customer-centric methods, and share deep insights, the focus shifts towards treating every problem, pattern and anomaly as an opportunity to improve customer experience.

    So stop getting wiped out by waves of data. Start using analytics-driven applications and surf towards a brighter business future.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    6:05p
    Microsoft Adds Patent Suit Protections For Cloud Customers

    Dina Bass (Bloomberg) — Microsoft Corp. will help cloud customers fend off patent lawsuits and expand coverage of related litigation costs, seeking to distinguish its services from rivals in the fast-growing market for internet-based computing.

    As more companies host their applications and services on Microsoft’s Azure and other cloud providers, they are increasingly becoming the target of lawsuits from companies seeking to make money by claiming patent infringement.

    Microsoft, the second-biggest cloud infrastructure services vendor behind Amazon.com Inc., will help customers fight back by offering them one of its own patents to deter or defeat such suits. The software giant will also expand a program in which Microsoft provides funds or legal resources to fend off claims, known as indemnification.

    See also: Cloud Boom Buoys Microsoft, Intel, Alphabet Results

    The patent protection can provide an edge for Microsoft as the company competes with Amazon and Google in the cloud. As IT spending in the cloud is set to reach $1 trillion by 2020, according to Gartner, the industry faces growing risks of intellectual property lawsuits. Such suits have risen 22 percent in the past five years, according to the Boston Consulting Group. Meanwhile non-practicing entities — the industry’s term for firms that snap up patents to garner licensing fees and launch lawsuits — boosted their acquisition of such patents by 35 percent in the same period.

    “We create a patent umbrella and we let our customers stand underneath it,” said Microsoft President and Chief Legal Officer Brad Smith, in an interview. Because of its strong patent portfolio and experience in patent law, Microsoft can “play a positive and constructive role of helping our customers.”

    Cloud computing is letting companies in a variety of industries move into technology development by building their own applications using Microsoft or Amazon’s server farms, storage and pre-written services. The problem is that these businesses don’t have their own technology patents to defend against companies that might sue them. That’s where Microsoft wants to stand out above its cloud rivals, who don’t offer the same level of protection, said Julia White, a Microsoft vice president for Azure marketing.

    “All of our customers are at some level becoming software providers of their own,” White said. “That puts them in a different domain — an area where they don’t have a lot of experience.”

    Microsoft already offered the indemnification benefit to customers who use Azure cloud services written by Microsoft. The expansion adds protections for those who use open source technologies offered through Azure like Hadoop and Apache. Google offers indemnification but not on open source software.

    The patent program is new to Microsoft and hasn’t really been tried in the industry more broadly, according to Smith. Customers will be able to pick one patent from a pool of 10,000 offered — Microsoft has 60,000 patents total — to use in their defense. Microsoft is hoping that the mere existence of the offer will deter suits in the first place.

    6:57p
    Data Center Market Kicks Off 2017 With Flood of Acquisitions

    It hasn’t taken long for M&A activity to ramp up in the data center market this year.

    QTS Realty, Equinix, and CyrusOne all kicked 2017 off with strategic acquisitions. Equinix and QTS chose to double down on capacity in existing markets, while CyrusOne widened its footprint by acquiring two stabilized data centers, with adjacent land for expansion, in North Carolina and Northern New Jersey.

    Additionally, privately-held Digital Bridge Holdings continued a fast-paced acquisition program by its DataBank subsidiary (also a recent acquisition), purchasing data centers in Cleveland and Pittsburgh from 365 Data Centers.

    These largely tactical moves differ from how the acquisition cadence played out in the data center market at the end of last year, when major portfolio deals dominated.

    Read more: 2016 Was a Record Year for Data Center Acquisitions, but 2017 Will Be off the Charts

    Notably, large telecom portfolio deals, enterprise data center outsourcing, and private equity exits, including Vantage Data Centers (potentially) and Cologix, all suggest that 2017 activity will set new records.

    CyrusOne’s Enterprise Focus

    During 2016 CyrusOne was highly successful when it came to competing for and winning hyperscale cloud deployments. However, its latest $490 million acquisition from Sentinel Data Centers appears to be back in this REIT’s more traditional wheelhouse: providing colocation data center solutions for larger enterprise customers.

    Its announcement of the Raleigh-Durham, North Carolina, and Somerset, New Jersey, data center acquisitions highlighted the gain of more than 20 new enterprise logos, including five Fortune 1000 customers, among the 30 existing tenants. Additionally, the North Carolina campus represents CyrusOne’s closest pin on the map to data center markets in the Southeast.

    Here are the details on the two locations:

    Raleigh-Durham Data Center

    • Current power capacity: 10 MW (approximately 70 percent leased)
    • Average remaining lease term: ~9 years
    • Lease-up and expansion opportunity: 23 MW

    Somerset Data Center

    • Current power capacity: 11 MW (approximately 95 percent leased)
    • Average remaining lease term: ~8 years
    • Lease-up and expansion opportunity: 22 MW

    The purchase price was underwritten at 14.4x the pro forma $34 million EBITDA run rate, which would make the acquisition immediately accretive to existing shareholders. CyrusOne expects to be able to lease up the remaining 34,000 square feet and 8 MW of power with only $15 million in additional investment.
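
    For readers who want to check the multiple, it follows directly from the two figures quoted in the announcement:

    # Quick arithmetic check using only the figures quoted above.
    purchase_price = 490e6    # $490 million purchase price
    ebitda_run_rate = 34e6    # ~$34 million pro forma EBITDA run rate

    print(f"Implied EBITDA multiple: {purchase_price / ebitda_run_rate:.1f}x")  # ~14.4x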

    There is land available for expansion at both sites, where the company expects it can build out another 230,000 square feet and 37 MW of power “at a cost expected to be in line with the Company’s current build cost per MW.” CyrusOne has several “Massively Modular” projects underway, including outside of Chicago and in Northern Virginia. It is still too early to tell how the new normal for hyperscale cloud leasing will evolve during 2017.

    A Post-Brexit Vote of Confidence in London

    The recent Equinix announcement of a data center acquisition from IO for an undisclosed price is essentially additional bolt-on colocation capacity for Equinix in the Slough data center market, a financial hub outside of London.

    This desire for more capacity can be viewed as a vote of confidence that London will remain a major global financial center despite the UK’s Brexit vote. The Equinix Slough campus offers low-latency connectivity between London and other key financial markets: 30 milliseconds to New York and 4 milliseconds to Frankfurt, according to the company.

    Equinix currently operates LD4, LD5, and LD6 in Slough and plans to rename the IO data center LD10 after it is tethered to the existing campus. IO UK’s Slough data center contains Baselayer Modular Data Center units, which Equinix will maintain and expand on an as-needed basis, according to the press release.

    LD10 will add about 350 cabinets of sold capacity and total colocation space of 3,340 cabinets once the facility is completely built out. The IO data center currently has a customer mix similar to Equinix’s existing base, including financial services, enterprises, and networks. IO’s anchor tenant in Slough is Goldman Sachs.

    QTS Sticks to Its Formula

    QTS has once again executed on its core strategy, acquiring another large-scale facility at a low cost basis that can be retrofitted and leased to third parties.

    Last week, the company announced that it has acquired a 53-acre data center campus for $50 million from Health Care Service Corporation (HCSC) in the Dallas data center market. The transaction was structured as a sale/partial lease-back, where HCSC will remain as a 1 MW anchor tenant in the existing 8 MW data center. The purchase would represent just over $6 million per megawatt if the entire purchase price was allocated to the existing facility.

    There is currently 40,000 square feet available in a powered shell that can accommodate 80,000 square feet total and is served by a dozen carriers. At full build-out, the former HCSC campus can support a total of 300,000 square feet and up to 60 MW of power. The new QTS Dallas-Fort Worth campus is located adjacent to the $1 billion Facebook data center campus that’s currently under construction and 20 miles away from QTS’s existing 700,000-square-foot Irving facility.

    Big 2016 Deals: a Recap

    After a relatively slow start to 2016, Digital Realty and Equinix worked out a mutually beneficial arrangement in Europe to satisfy the EU Commission in conjunction with the TelecityGroup acquisition by Equinix. Equinix ended the year by announcing another blockbuster deal, the $3.6 billion Verizon Americas data center portfolio carve-out.

    Read more: Why Equinix is Buying Verizon Data Centers for $3.6B

    Additionally, Terremark founder Manny Medina put together a group to purchase the $2.6 billion CenturyLink data center portfolio. The other acquisitions during 2016 would be considered singles and triples compared with these towering home runs.

    Fast-forward to 2017, and thus far it has been a steady flow of granular acquisitions in targeted markets, as the public data center REITs continue to strategically expand their footprint.

    7:22p
    Cisco Warns of Catastrophic Clock Signal Flaw

    Brought to you by MSPmentor

    Cisco has gone public about a major problem with an outsourced clock signal component installed in a variety of its most popular products – a flaw that, if left unrepaired, will ultimately destroy the equipment.

    The faulty clock signal component – which functions like a type of metronome to synchronize the operation of digital circuits – affects some of Cisco’s best selling products, including ASA security devices, Nexus 9000 series switches and series 4000 integrated services routers.

    The manufacturer says the component is currently performing normally but that it expects to see increasing product failures after the hardware units have been in use for 18 months or more.

    “Once the component has failed, the system will stop functioning, will not boot, and is not recoverable,” Cisco officials said in a statement.

    The same clock signal component is also installed in hardware from other unnamed manufacturers, the Cisco statement said.

    Cisco said it is already reaching out to customers about fixes.

    “Cisco will proactively provide replacement products under warranty or covered by any valid services contract dated as of November 16, 2016, which have this component,” the company said. “Due to the age-based nature of the failure and the volume of replacements, we will be prioritizing orders based on the products’ time in operation.”

    In the near term, the advisory likely means big headaches for IT services providers, who must identify and replace affected units that might exist in on-premises environments they manage.

    At least one remote monitoring and management (RMM) platform vendor took the opportunity to remind service providers about the importance of having a powerful network inventory tool that can quickly identify all infrastructure devices.

    “Many MSPs don’t have accurate or up-to-date information about the devices on their client networks,” Alex Hoff, vice-president of sales and product at Auvik, said in a statement.

    “They’ll need to run potentially hundreds of commands to identify if a device is subject to the Cisco notice,” he continued. “It’s something an MSP must do, but it takes time—and we all know, time is money.”
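
    As a rough illustration of how that check could be scripted rather than run by hand, the sketch below scans “show inventory”-style output for product IDs matching the families named in the advisory. The PID prefixes are examples chosen for illustration, not Cisco’s authoritative affected-product list, and the inventory text is assumed to have been collected already by an RMM or inventory tool; the advisory itself remains the source of truth.

    # Illustrative sketch only: match inventory product IDs against product
    # families named in the advisory. The prefixes are examples, not Cisco's
    # authoritative list; always confirm against the advisory itself.
    import re

    # Hypothetical prefixes covering the families mentioned above
    # (ASA appliances, Nexus 9000 switches, ISR 4000 routers).
    SUSPECT_PID_PREFIXES = ("ASA55", "N9K-", "ISR44", "ISR43")

    def suspect_pids(show_inventory_text):
        """Return product IDs in a 'show inventory' dump matching suspect prefixes."""
        pids = re.findall(r"PID:\s*(\S+)", show_inventory_text)
        return [pid for pid in pids if pid.startswith(SUSPECT_PID_PREFIXES)]

    sample_output = """
    NAME: "Chassis", DESCR: "Cisco ISR4331 Chassis"
    PID: ISR4331/K9        , VID: V04, SN: XXXXXXXXXXX
    """

    print(suspect_pids(sample_output))  # ['ISR4331/K9'] -> flag for replacement review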

    This article originally appeared on MSPmentor.

