Data Center Knowledge | News and analysis for the data center industry
 

Monday, June 6th, 2016

    Time Event
    5:00a
    Dell’s Latest PowerEdge Servers Offer Muscle for High-Bandwidth Workloads

    With processors no longer the reliable wellsprings of periodic performance boosts they were in the past, server manufacturers are looking to more specialized use cases. Today, Dell is announcing a revamped PowerEdge R930 with Intel’s newly announced Xeon E7 v4 series processors and a new PowerEdge R830 with recently announced Xeon E5 v4 processors.

    In another era, new servers with new processors would be the new story. Today, to give the new product a boost, it needs more of a use case. So Dell is making the case for both new PowerEdge models as faster processing engines for high-bandwidth data workloads while enlisting SAP to help make it.

    PowerEdge R930

    The newly revised, top-of-the-line PowerEdge R930 is built around Intel’s E7-8800 series v4 processors, said Brian Payne, executive director for PowerEdge marketing at Dell. The server “is the product we position as having the most demanding, data-intensive applications. It’s great for mission-critical databases that need that high performance and can also be an excellent platform for consolidating a lot of virtualized data-oriented workloads.”

    The R930 will maintain its 4U chassis with 10 PCIe slots, one RAID slot, and one network daughter card (NDC) slot. Support for up to 96 DIMMs enables as much as 12 TB of memory, as before. Models will also be available with E7-4800 v4 processors. But swapping out the E7-8800 v3 series for an E7-8800 v4 processor, Payne promises, will generate “world record performance” on SAP database workload benchmarks.

    It’s SAP that keeps track of benchmarks involving its own NetWeaver 7.31 solution stack and SAP HANA in-memory database, and it will be SAP that makes public the final results of the tests to which Payne refers. In May 2015, a PowerEdge R930 with four Xeon E7-8890 v3 processors scored 320,940 ad hoc navigation steps per hour on SAP’s business warehouse simulation, Enhanced Mixed Load (BW-EML), with a 1 billion record test battery. The following September, a similar 4P R930 scored 191,170 steps per hour using the NetWeaver 7.40 solution stack and SAP HANA 1.0 with the 2 billion record test battery. According to SAP, these were the fastest scores reported until the E7 v4 processors arrived on the scene.

    Though the final numbers had yet to be made public at the time of this writing, Dell’s estimate of how much better the revamped R930 performed on the 2 billion record battery works out to a score of 238,523 steps per hour, an improvement of roughly 25 percent over the v3 result.

    In the BW-EML test, simulated users generate synthetic queries by logging onto the Web client, performing about 40 ad hoc navigation steps, and logging back off. A new user gets added to the workload every second until the benchmark reaches a “high load phase,” which then continues running for at least one hour. Then the workloads are powered down slowly, and a steps-per-hour figure is calculated for the one-hour high-load phase.
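
    The scoring arithmetic itself is simple. Below is a minimal, illustrative sketch of it in Python; this is not SAP's benchmark harness, and the user count and per-step time are assumptions chosen only to show how a steps-per-hour figure of the magnitude Dell cites could arise.

        # Hypothetical model of BW-EML scoring, not SAP's benchmark code.
        # All numbers below are illustrative assumptions.

        def steps_per_hour(total_steps_in_high_load_phase, high_load_hours=1.0):
            """The published score is navigation steps completed per hour of high load."""
            return total_steps_in_high_load_phase / high_load_hours

        def estimate(concurrent_users, avg_step_seconds, high_load_hours=1.0):
            """Estimate the score if each simulated user completes one ad hoc
            navigation step every avg_step_seconds during the high-load phase."""
            steps_per_user = (high_load_hours * 3600) / avg_step_seconds
            return steps_per_hour(steps_per_user * concurrent_users, high_load_hours)

        # Roughly 66 users each completing a step about once a second lands
        # near the 238,523 steps/hour figure Dell estimates for the new R930.
        print(round(estimate(concurrent_users=66, avg_step_seconds=1.0)))  # ~237,600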

    Why is this important? With the laws of physics catching up with Intel’s ability to deliver steady processor performance gains, Moore’s Law appears to be facing a dead end after the Broadwell generation. Both Intel and server makers need all the help they can get in keeping up appearances, so they’ve resolved to accomplish this by defining performance in more real-world terms and demonstrating performance increases in contexts that real-world users may more readily appreciate.

    “Larger core counts drive total cost of ownership down, and improving the per-core performance increases your response times,” said Lisa Spelman, Intel’s VP and general manager for Xeon and data center products, during a company presentation last March. “In 2015, we saw more than 80 percent of our top cloud service provider volume upgrading to higher-performing SKUs in our lineup. They move to higher core-count CPUs to get that better response time and greater TCO efficiency.”

    PowerEdge R830

    Spelman made that statement while unveiling her company’s Xeon E5 v4 series processors, one of which is being put to use in Dell’s all-new PowerEdge R830. The R830 takes over the top of Dell’s higher-performance 2U rack-mount line, with a four-socket system built on Xeon E5-4600 v4 series processors and support for up to 48 DIMMs.

    “This product is positioned as an ideal combination of density and performance capability,” said Payne about the R830. “When you’re looking at somebody who doesn’t necessarily have a need for that top-end performance and scalability, or who has a space constraint, in some cases, they’re looking at a more dense solution. This is a product category that Dell created: a 2U four-socket delivering this level of density. We’ve had a tremendous amount of success with this product over time.”

    “One of the greatest strengths of the Xeon E5 product line is that versatile workload performance across the widest range of workloads,” said Intel’s Spelman, in introducing the processor at the heart of the R830. “We’re delivering these increases in performance at the same power envelope from our previous generation, so you’re getting increases in your compute, your storage, and your networking of up to 44 percent.”

    Scaling Up in a Scale-Out Era

    Dell will be selling its new PowerEdge models into a data center market that is simultaneously being sold on the idea of more highly distributed workloads — specifically, making database operations more “liquid” and spreading them out across server nodes and cores to increase efficiency. That message — which is coming from the database and cloud communities — runs almost counter to Dell’s message, which paints a picture of huge bundles being managed adroitly by dense processor packages.

    So whose picture of the data center is more realistic?

    “We are absolutely, one hundred percent, behind and investing in scale-out architectures,” said Payne (whose employer is in the midst of purchasing EMC), “and third-platform, or new approaches to developing applications that are designed for scale-out. In fact, our legacy in the data center solutions phase of building out the largest, most efficient data centers in the world, where folks like Microsoft, Amazon, Facebook, etc., are building applications and database tiers the way you’ve described. We’ve optimized infrastructure in those environments, and will absolutely continue that in the future.

    “That being said,” he continued, “there’s still some traditional applications which are going in a different direction, which still can benefit from scale-up or consolidation, based on the way that they’re operated.” Payne counted the style of workloads the SAP benchmark simulates among the categories best served by a scale-up architecture.

    For this reason, he explained, external NAS arrays remain relevant; and performance boosts for workloads on these platforms should still be considered from a scale-up perspective — which has been the side of the proverbial bread that Intel has traditionally buttered.

    But it’s those watchwords — “tradition,” “legacy,” and “data warehouse” — that are used more and more frequently to decorate the marketing messages for scale-up product cycles. While that strategy may work for now, the fundamental changes that are still taking place at the software platform level will inevitably compel both Dell and Intel to look for new avenues for Moore’s Law, or some other reliable “law” of performance boosting, to be exploited.

    12:00p
    Vapor IO Adding Key Physical Dimension to Data Center OS

    Mesosphere’s promise to give enterprises the hyper-scalable distributed technology stack Google has built for itself has wooed a range of heavyweight investors, from some of Silicon Valley’s top IT vendors and venture capital firms to the CIA.

    While its technology surely isn’t simple, the idea is: all your IT resources, be they bare-metal servers and VMs in your own data center or virtual infrastructure in Amazon’s cloud, should be managed as a single system. That’s what the company built its Data Center Operating System to do – it’s quite literally an OS for the entire data center, abstracting disparate resources into unified pools applications can call on.

    DC/OS, which San Francisco-based Mesosphere recently open sourced, covers a wide scope of infrastructure types, but it’s hard to call it truly comprehensive because it’s missing one crucial part: the physical and starkly finite data center infrastructure resources – space, power, and cooling capacity.

    Mesosphere’s partnership with the Austin-based startup Vapor IO, announced last week, is about filling that gap. The two companies will integrate DC/OS with Vapor’s OpenDCRE, its up-to-date take on a nearly 20-year-old server management protocol that feeds data on hardware vitals – such as power, system temperature, and even a box’s on/off status – to any system that needs it via a modern open API.

    OpenDCRE is Vapor’s answer to IPMI (Intelligent Platform Management Interface), the group of server management and monitoring specs that came out in 1998 and continues to be in widespread use, supported by virtually all major hardware vendors. OpenDCRE basically offers the same functionality but provides it in a way a DevOps engineer can understand and use.
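
    As a rough illustration of what “a modern open API” means in practice, the sketch below polls hardware vitals over HTTP in the spirit of OpenDCRE. The base URL and endpoint paths are assumptions made for illustration, not the documented OpenDCRE routes; consult the project’s API reference for the real ones.

        # Illustrative only: endpoint shapes are assumed, not OpenDCRE's documented API.
        import requests

        OPENDCRE_BASE = "http://opendcre.example.internal:5000/opendcre/1.3"  # assumed base URL

        def read_device(device_type, rack, board, device):
            """Fetch one reading (e.g., temperature or power state) for one device."""
            url = f"{OPENDCRE_BASE}/read/{device_type}/{rack}/{board}/{device}"
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            return response.json()

        # Example: pull a server's inlet temperature and its power status.
        print(read_device("temperature", "rack_1", "00000001", "0002"))
        print(read_device("power", "rack_1", "00000001", "0001"))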

    Cole Crawford, Vapor founder and CEO, says the integration of this physical server management functionality into DC/OS is only part of the partnership with Mesosphere. Vapor is working on a product called Mist, which will combine the two but also provide infrastructure cost analysis and the ability to set up and enforce IT policies using all the operational data the combination provides.

    “If you think of DC/OS as an engine, Mist would be a car,” Crawford said. “You can think of it as a distribution of DC/OS.”

    One of the goals is to give IT teams visibility into the real cost of running any particular application on any type of infrastructure so they can decide on the most cost-effective way to go while complying with policies. An application may be running in a company-owned data center, taking up costly resources, while its security and performance requirements could be met perfectly well by a public cloud provider at a lower cost, for example. Mist will be able to give that insight, Crawford said.

    It will also give users the ability to automate workload scheduling based on policy they set up. If a server crosses a pre-defined temperature threshold, for example, Mist will be able to spin down some Docker containers running on that server and launch them on a different machine that has more headroom or in a cloud.
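
    A toy version of that policy, written as a sketch rather than as Vapor’s or Mesosphere’s actual code, might look like the following. The temperature source and the rescheduling hook are stand-ins for calls into an OpenDCRE-style API and DC/OS, respectively.

        # Illustrative policy loop; thresholds, readings, and hooks are all placeholders.
        TEMP_THRESHOLD_C = 75.0

        def read_server_temp_c(server_id):
            # Stand-in: a real implementation would query an OpenDCRE-style endpoint.
            fake_readings = {"rack1-node01": 71.2, "rack1-node02": 78.9}
            return fake_readings.get(server_id, 0.0)

        def reschedule_containers(server_id, target):
            # Stand-in: a real implementation would ask DC/OS (or a cloud API) to stop
            # the containers on this host and relaunch them where there is headroom.
            print(f"draining {server_id}; relaunching its containers on {target}")

        def enforce_policy(servers, target="public-cloud"):
            for server_id in servers:
                if read_server_temp_c(server_id) > TEMP_THRESHOLD_C:
                    reschedule_containers(server_id, target)

        enforce_policy(["rack1-node01", "rack1-node02"])  # only node02 is over threshold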

    “We apply a much more granular and vigorous policy rule set to federated cloud (than DC/OS does),” Crawford said.

    Mist is still in the works, but he expects the company to start private beta deployments quickly, planning for the first general-availability release before the end of the year. “We’re working with a company right now on the first implementation of Mist,” he said.

    3:56p
    Oracle, Ellison Accused of Misleading Investors on Cloud Revenue

    (Bloomberg) — Oracle and its top executives were sued by an investor who blames his stock losses on allegations by a former finance executive that the cloud computing giant doctored its quarterly results.

    Shares fell the most in almost three years Thursday to as low as $38.08, the day after the ex-employee filed her lawsuit alleging she was terminated after complaining to supervisors about accounting irregularities.

    In a federal securities complaint in San Francisco, an investor said the stock decline was a response to the company’s misleading statements about its cloud-computing revenue. Shareholder Grover Klarfeld is seeking class-action status on behalf of other investors who bought stock during the 10-month period ending June 1.

    Oracle Chairman Larry Ellison is quoted in Klarfeld’s complaint touting Oracle in March as the top company in the world by new cloud-computing revenue.

    Svetlana Blackburn, the former senior finance manager who sued Wednesday, said in her complaint that the company pushed her to “fit square data into round holes” to inflate results.

    Oracle spokeswoman Deborah Hellinger said in a statement Wednesday that the company is confident that its “cloud accounting is proper and correct” and that Blackburn was terminated for poor performance.

    Hellinger didn’t immediately respond to a phone call Thursday seeking comment on the securities complaint.

    Oracle closed Thursday at $38.66, down 4 percent in New York trading.

    The case is Klarfeld v. Oracle Corp., 3:16-cv-02966, U.S. District Court, Northern District of California (San Francisco).

    4:30p
    Cloudability Gets $24M to Help Companies Cut Cloud Costs


    Brought to You by The WHIR

    Cloud cost monitoring platform Cloudability announced on Monday that it has closed a $24 million Series B round of financing, led by the Foundry Select Fund.

    According to a blog post by Mat Ellis, the company’s CEO and founder, the funding will be used to invest in the product, which aims to create more transparency around cloud cost and utilization. Ellis said these improvements will “transform a company’s mountain of billing data into actionable insights to help them build bigger and more complex clouds with confidence and control.”

    “For five years now we’ve helped our customers manage and optimize their clouds at an insane pace. Collectively, our customers have spent nearly $4B on cloud services,” Ellis said. “Early adopters such as GE, Uber, and Atlassian have demonstrated conclusively that new tools and processes are needed to optimize cloud spending and avoid waste. And an agile and decentralized approach to cloud management raises the speed limit for innovation inside their organizations.”

    With the latest round, Cloudability’s total funding has reached over $40 million.

    Earlier this year, the company acquired self-service cloud business intelligence platform DataHero, gaining a San Francisco office in the process. Cloudability is headquartered in Portland, Oregon, where it was recently named one of Oregon’s top 100 best companies to work for.

    Last August, it also acquired RipFog to bring big data analytics capabilities into its portfolio.

    This first ran at http://www.thewhir.com/web-hosting-news/cloudability-gets-24-million-to-help-enterprises-save-on-cloud-expenses

    5:00p
    P-UPS: Stop Calling It a UPS

    Brian Olsen is the Eastern Regional Sales Manager for E1Dynamics.

    The critical infrastructure industry relies on uninterruptible power supplies (UPS) as an integral part of any mission critical system. The problem with static UPSs, and with referring to them as such, is that they are not uninterruptible. This moniker has led to reliance on an outdated topology, with functional limitations, that newer technology has made irrelevant.

    A vast majority of installed mission critical infrastructure has static UPSs as a major component, and this continues to be common practice. However, for mission critical applications, the static UPS needs to be used in conjunction with either automatic transfer switches or paralleling switchgear, as well as engine generator sets. Together, these components comprise the complete system. Referring to a power converter and a string of batteries as uninterruptible is where the industry goes wrong, and the error is exacerbated by the manufacturers of the equipment.

    A recent study conducted by Emerson Network Power cited the static UPS as the No. 1 cause of unplanned data center outages, accounting for one-quarter of all events. For something to be referred to as both “uninterruptible” and a source of “supply,” it must truly be both of those things.

    The reason for this argument is very simple: what’s commonly referred to as a UPS is really a P-UPS, or partially uninterruptible power supply. It is both part of a larger system and only capable of supplying power for as long as the batteries can support the load.
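
    The arithmetic behind “as long as the batteries can support the load” is worth making explicit. The sketch below is a deliberately naive autonomy estimate with made-up numbers; real UPS sizing uses discharge curves, aging, and derating factors, not a single division.

        # Illustrative only: a naive battery autonomy estimate, not a sizing tool.
        def battery_runtime_minutes(battery_kwh, load_kw, inverter_efficiency=0.95):
            """Minutes of autonomy before a generator (or the utility) must carry the load."""
            usable_kwh = battery_kwh * inverter_efficiency
            return (usable_kwh / load_kw) * 60

        # Example: 80 kWh of batteries behind a 500 kW load buys roughly 9 minutes,
        # which is why the static UPS is only one component of a larger system.
        print(round(battery_runtime_minutes(battery_kwh=80, load_kw=500), 1))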

    Therefore, this designation as a P-UPS is important. Advances in diesel rotary uninterruptible power supplies (DRUPS) clearly show the limitations of the P-UPS; it’s like comparing apples to oranges. The result is that end users and facility owners are the ones left dealing with the shortcomings of a system believed to be uninterruptible and, ironically, the leading cause of load loss. Only by understanding that a partially uninterruptible power supply (P-UPS) is completely dependent on the other components comprising the system will an owner realize that higher availability can be achieved.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:30p
    Flurry of Enterprise Software Deals Filling Semiconductor Dearth

    (Bloomberg) — Semiconductor mergers were so 2015. This year’s hottest club is business software deals.

    A flurry of enterprise software transactions have been announced in the past two months, helping the technology sector recover from a major drop in semiconductor merger activity. The latest was last week’s announcement that Qlik Technologies agreed to be bought by private equity firm Thoma Bravo for about $3 billion. It followed last Wednesday’s $2.8 billion takeover of Demandware by Salesforce, announced just a day after Vista Equity Partners’ $1.8 billion acquisition of marketing-software firm Marketo.

    The high demand for enterprise software stems from a decline in public valuations from the beginning of the year and a realization among private equity funds and strategics with lots of cash that the money needs to be spent, according to Joel Fishbein, an analyst at BTIG in New York. Once one deal is announced, it gets competitors moving and the cascade begins, he said.

    “I’ve never seen more deals and more things happening,” Salesforce CEO Marc Benioff said Wednesday in an interview with CNBC. “The M&A season is the most intense, most exciting I’ve ever seen.”

    Chip Deals

    While chip deals dominated US technology M&A last year, there’s barely been a peep among semiconductor companies in 2016. There have been no chip deals of more than $1 billion announced this year. That compares with at least 10 last year, according to data compiled by Bloomberg, including five valued at more than $10 billion, led by Avago Technologies’ $37 billion deal for Broadcom and Intel’s $16.7 billion takeover of Altera.

    Semiconductor companies spent a record $113 billion on acquisitions in 2015, triple the volume of the previous year.

    Some of this year’s lack of chip activity can be attributed to digestion, as many buyers work on integrating the companies they acquired last year and pay down debt, said Mark Edelstone, a Morgan Stanley managing director who heads the bank’s semiconductor group. The pause may not last long, he said.

    “The predictability of the semiconductor business is decent and valuations have recovered, which sets up a healthy M&A environment for the second half of the year,” Edelstone said. “There are about 60 pure-play public semiconductor companies left, and I fully expect that number to halve in the next five years.”

    Marketo, Cvent

    The cluster of software deals isn’t reaching the value levels of last year’s chip transactions. Apex Technology’s $3.4 billion takeover of Lexmark International is arguably the biggest of the year, and only 17 percent of Lexmark’s revenue stems from enterprise software.

    Still, the recent avalanche can’t be ignored. Vista agreed to buy Marketo last week, and in April it agreed to buy Cvent, which makes event management software, for $1.7 billion. Nice Systems said on May 18 it agreed to buy on-demand call center software company InContact for nearly a billion dollars.

    Dell is selling its enterprise-software businesses Quest and SonicWall, again targeting private equity firms.

    Mitel Networks said in April it agreed to merge with Polycom, only to see its offer challenged, yet again, by another private equity firm, Siris Capital Group, according to people with knowledge of the matter.

    Falling Values

    “When valuations for some of these software companies started falling in February, private equity firms started looking at them and said, ‘This is ridiculous,’ and started doing work on them,” said Pat Walravens, an analyst at JMP Securities in San Francisco.

    “Once that started happening, the strategics, who thought they’d have all the time in the world to own some of these assets, realized it would be a bummer if private equity bought them first,” he said. “The private equity firms could put the firms with other companies they own, and the strategics would never see some of these assets as standalone again.”

    The deal-making is expanding to include enterprise services as well: Computer Sciences Corp. agreed to merge with Hewlett Packard Enterprise’s services business last month in an $8.5 billion deal, and Oracle agreed to buy two companies in one week, Textura for more than $600 million and Opower for more than $500 million, about a month ago.

    ‘Legacy Providers’

    “You’re seeing guys looking for growth, especially some of the more legacy providers,” Fishbein of BTIG said. “Then you’re seeing guys like Salesforce that are saying, ‘Look, where are these big adjacent secular growth markets that we can go after?”’

    Enterprise software M&A is off to its best start in five years in terms of deal volume, according to UBS Group. Cloud application companies including NetSuite, Veeva Systems, and Cornerstone OnDemand could be next to be scooped up, according to Mandeep Singh, an analyst at Bloomberg Intelligence.

    “SAP, Oracle, and IBM may have to accelerate their cloud strategies after Salesforce’s pending acquisition of Demandware,” Singh said. “This puts focus on other pure-play cloud application companies.”

    9:02p
    QTS Buys DuPont Fabros Tech’s New Jersey Data Center

    QTS Realty Trust has acquired a 38-acre data center campus in Piscataway, New Jersey, from DuPont Fabros Technology, which has been shopping the site around since early this year following a decision to exit the New Jersey market.

    Washington, DC-based DFT entered New Jersey with the 360,000-square-foot data center about six years ago under the company’s previous CEO, Hossein Fateh, who was one of the founders. Eventually, its management team, currently headed by CEO Chris Eldredge, found that the market was better suited for retail colocation services than for its wholesale data center leasing model.

    While DFT has been knocking it out of the park in all other markets it’s in – Northern Virginia, Silicon Valley, Chicago, and more recently Toronto – New Jersey has been a sore spot, its only weak market.

    Read more: Why DuPont Fabros is Exiting the New Jersey Data Center Market

    QTS is a Better Fit for New Jersey

    Overland Park, Kansas-based QTS has a different business model, offering a wide variety of data center services, from wholesale and retail colo to cloud and managed services, which means it is better aligned with market needs in New Jersey than DFT is.

    QTS paid $125 million for what will become its third New Jersey data center, the company said in a statement issued Monday. The facility’s current power capacity is 18MW, but the new owner plans to add 8MW more over “the next few years.”

    Its current customers include a major pharmaceutical company, a large media company, and multiple financial services firms, according to QTS. These customers occupy 56,000 square feet of data center space and require 8.4MW of power.

    Read more: Data Center Market Spotlight: New Jersey

    The utilization rate illustrates just how slow take-up in the facility has been for DFT over the six years it’s been in the market: customers draw 8.4MW of the 18MW of capacity built out, less than half.

    The deal gives QTS much-needed new capacity in New Jersey, the company said. Its 32,000-square-foot Jersey City data center is 95 percent full, and the 58,000 square feet of data center space that’s been built out at its Princeton facility is at capacity.

    Steep Discount

    A core part of QTS’s business strategy has been buying large properties at steep discounts, and the DFT deal isn’t likely to have been different. When it announced its intent to sell the property, DFT said it expected to incur an “impairment charge” of between $115 million and $135 million as a result of the sale. That range approximates the size of the discount the buyer would receive.

    The Piscataway facility’s purchase price “represents an upfront cost per megawatt of below $7 million, which is materially below average cost to build in the New York-New Jersey market,” QTS said. ($125 million for 18MW of existing capacity works out to roughly $6.9 million per megawatt.)

