Thursday, June 16th, 2016
DriveScale Says Big Data Needs a New Kind of Data Center Infrastructure

DriveScale, the Silicon Valley data center technology startup founded by a group of Sun and Cisco veterans who were behind some of the two iconic companies’ core data center product lines, such as Sun’s x86 servers and Cisco’s Nexus switches and Unified Computing System (UCS), has built a scale-out IT solution geared specifically for Big Data applications. The company, which recently came out of stealth and announced a $15 million funding round, is addressing a growing need in the data center and has a founding team whose technical abilities are undeniable, but its product is only in its first generation and still has a way to go before it is proven in the market.
Let’s back up a little and discuss why a scale-out solution for Big Data is important. Creating virtual controllers that enable some kind of software-defined platform isn’t anything new. In storage, we’ve seen this with Atlantis USX and VMware vSAN; in networking, it’s Cisco ACI, Big Switch, and VMware NSX. The vast majority of these technologies, however, are designed for traditional workloads, such as virtual desktop infrastructure, databases, application virtualization, web portals, and so on.
What about managing one of the fastest-growing aspects of IT today? What about controlling a new critical source of business value? What about creating a virtual controller for Big Data management?
According to a recent survey by Gartner, investment in Big Data continued to increase in 2015. More than three-fourths of companies are investing or planning to invest in Big Data technologies in the next two years.
“This year begins the shift of big data away from a topic unto itself, and toward standard practices,” Nick Heudecker, research director at Gartner, said in a statement. “The topics that formerly defined Big Data, such as massive data volumes, disparate data sources, and new technologies are becoming familiar as Big Data solutions become mainstream. For example, among companies that have invested in Big Data technology, 70 percent are analyzing or planning to analyze location data, and 64 percent are analyzing or planning to analyze free-form text.”
According to Gartner, organizations typically have multiple goals for Big Data initiatives, such as enhancing the customer experience, streamlining existing processes, achieving more targeted marketing, and reducing costs. As in previous years, organizations are overwhelmingly targeting enhanced customer experience as the primary goal of Big Data projects (64 percent). Process efficiency and more targeted marketing are now tied at 47 percent. As data breaches continue to make headlines, enhanced security capabilities saw the largest increase, from 15 percent to 23 percent.
“As Big Data becomes the new normal, information and analytics leaders are shifting focus from hype to finding value,” Lisa Kart, also a research director at Gartner, said in a statement. “While the perennial challenge of understanding value remains, the practical challenges of skills, governance, funding, and return on investment come to the fore.”
Here are some more key Gartner forecasts on Big Data:
- By 2020, information will be used to reinvent, digitalize or eliminate 80 percent of business processes and products from a decade earlier.
- Through 2016, less than 10 percent of self-service business intelligence initiatives will be governed sufficiently to prevent inconsistencies that adversely affect the business.
- By 2017, 50 percent of information governance initiatives will have incorporated the concept of information advocacy to ensure they are value-driven.
So where is DriveScale aiming to make a difference?
Scale-Out Rack Data Center Architecture
DriveScale was born because of three big trends:
- Rise of the software scale-out stack and demands around Big Data. There is a clear need to make Big Data workloads a lot more resilient, available, and efficient. Most of all, these workloads need to be able to scale dynamically. Furthermore, there is a need for intelligent workload management on failure-prone hardware to ensure data sets are safe and available. Ultimately, DriveScale aims to create a more resilient ecosystem with greater provisioning capabilities for data.
- Commodity and white box technologies. You have network, storage, and compute already in your data center. Why replace it when you can just manage it more effectively for Big Data initiatives? A big challenge for organizations looking to build a better Big Data management ecosystem is that they have been using traditional means to manage large data sets. DriveScale comes in with a virtual controller, positioned as the software-defined management layer, which unifies critical resources for Big Data delivery.
- The network layer has evolved. We’re far beyond the days of 1GbE connections. We’re seeing more connectivity capabilities and a lot more intelligence at the networking layer. DriveScale’s technology aims to exploit this to deliver Big Data workloads much faster. Tight awareness of the connectivity and topology within the rack lets DriveScale gather more information about the drives, the data they’re processing, and the priority of that information. For example, it can see which drives are fewer hops away from the server, essentially creating “Ethernet in the rack” for data management and resource distribution (see the sketch after this list).
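To make the idea of topology awareness concrete, here is a minimal, hypothetical sketch of hop-aware drive selection; the inventory, hop counts, and greedy policy are invented for illustration and are not DriveScale’s actual software:

```python
# Hypothetical illustration of topology-aware drive selection:
# prefer drives that are fewer network hops away from the server
# that will process their data. Hop counts and sizes are invented.

drives = [
    {"id": "drive-01", "hops_from_server": 1, "free_tb": 4.0},
    {"id": "drive-02", "hops_from_server": 3, "free_tb": 8.0},
    {"id": "drive-03", "hops_from_server": 1, "free_tb": 2.0},
    {"id": "drive-04", "hops_from_server": 2, "free_tb": 6.0},
]

def pick_drives(drives, needed_tb):
    """Greedily pick drives, closest (fewest hops) first, until the
    requested capacity is covered."""
    picked, total = [], 0.0
    for d in sorted(drives, key=lambda d: d["hops_from_server"]):
        if total >= needed_tb:
            break
        picked.append(d["id"])
        total += d["free_tb"]
    return picked

print(pick_drives(drives, needed_tb=10.0))  # -> ['drive-01', 'drive-03', 'drive-04']
```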
“Our observation is that networking at 10GbE and beyond was becoming less expensive and more available,” said Tom Lyon, DriveScale’s chief scientist and co-founder. “So, the increased amount of bandwidth and network controls allowed for new kinds of architectures to take place.”
In the past, Lyon held key engineering roles at Nuova Systems, a startup acquired by Cisco in 2008 whose technology became the basis of Cisco’s UCS servers and Nexus switches.
DriveScale didn’t set out to solve the world’s software-defined infrastructure and convergence problems. Rather, the company focused its strategy on overcoming two big challenges:
- Difficulties around managing large data sets and Big Data environments. Organizations were relegated to managing siloed Big Data operations, often with traditional compute, storage, and network mechanisms. DriveScale not only works to resolve these challenges, it specifically focuses on the scale-out application market. Hadoop distributions from Cloudera, MapR, and others can be integrated via a REST API (a hypothetical sketch of such an integration follows this list).
- Server admins have real pain managing Hadoop and scale-out environments. Businesses are under pressure to get value from the data they process. Why? This data is becoming increasingly valuable to the entire business process. Rather than deploying traditional servers with “trapped” disks, DriveScale changes the way administrators control resources provisioned for Big Data workloads. Using the software, you can provision clusters of disks (and servers) to a Big Data application set, and these resources are indistinguishable from resources available in a regular rack-mount server. Basically, you’re no longer constrained by a server chassis and can truly scale out. Lyon calls it “software-defined sheet metal.”
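The article does not document DriveScale’s API, so the snippet below is only a hypothetical sketch of what wiring a Hadoop-style cluster to a rack-scale controller over REST could look like; the endpoint, fields, and hostname are all assumptions made for illustration:

```python
# Hypothetical sketch only: the endpoint, fields, and hostname are invented
# to illustrate REST-style integration with a rack-scale controller.
import json
import urllib.request

CONTROLLER = "https://rack-controller.example.com/api/v1"  # assumed URL

def create_node(name, cpus, drive_ids):
    """Ask the (hypothetical) controller to compose a software-defined
    node from a commodity server and disaggregated drives."""
    payload = json.dumps({"name": name, "cpus": cpus, "drives": drive_ids}).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/nodes", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A Hadoop data node backed by three disaggregated drives (illustrative only):
# node = create_node("hadoop-dn-01", cpus=16, drive_ids=["d-101", "d-102", "d-103"])
```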
To overcome these challenges, DriveScale had to create a new type of management architecture. “We invented a rack-scale architecture which maximized network, compute, and the storage environment,” said Tina Nolte, director of product management at DriveScale. “It’s a new type of logical layer which allows you to create software-defined nodes managing complex and scaling Big Data environments.”
The architecture, at a high-level, is fairly straightforward:
- Storage, network, and compute are totally up to you. Have a favorite vendor? Great, DriveScale will likely work with them, no problem.
- Your network layer acts as the connector. You use existing networking components to enable communication between resources. From here, you can balance load across the links between nodes, create cluster-level management with your own granular rules, and set access controls based on rights and app-level policies.
- The magic is in the software. The DriveScale software allows you to create the aforementioned software-defined Big Data nodes. Furthermore, it allows you to granularly rebalance the ratio of compute to storage. Basically, as your Big Data environment evolves and grows with new business demands, it adjusts dynamically (a generic sketch of such a rebalancing calculation follows this list).
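As a generic illustration of what rebalancing the compute-to-storage ratio means (not DriveScale’s actual algorithm), the sketch below computes how many drives each server in a pool should own to approach a target amount of storage per core; all numbers are made up:

```python
# Generic illustration (not DriveScale's software): given a pool of servers
# and drives, compute how many drives each server should own to approach a
# target storage-per-core ratio.

def drives_per_server(total_drives, servers, cores_per_server,
                      target_tb_per_core, tb_per_drive):
    """Return the number of drives to attach to each server."""
    wanted = round(cores_per_server * target_tb_per_core / tb_per_drive)
    available = total_drives // servers
    return min(wanted, available)

# Example: 120 drives of 8 TB across 10 servers with 32 cores each,
# targeting 2 TB of raw storage per core.
print(drives_per_server(total_drives=120, servers=10, cores_per_server=32,
                        target_tb_per_core=2.0, tb_per_drive=8.0))  # -> 8
```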
So, what’s the difference between DriveScale and other software-based hyperconverged-infrastructure solutions? Hyperconvergence focuses on traditional workloads with scale-out architecture and a virtual controller. DriveScale focuses on scale-out workloads (like Big Data), using commodity hardware, with scale-out software.
Final Thoughts
Again, DriveScale’s product and business are still in their early stages. The company is working on forging strategic partnerships and creating validated reference architectures. Creating those references and alliances (with companies like Hewlett Packard Enterprise, Cisco, Dell, Super Micro, and others) will go a long way toward enabling further adoption and greater validation. Furthermore, it will help with support should there be a problem. Many organizations like strategic partnerships, which allow them to have just one support line to reach out to.
The technology powering DriveScale aims to solve a growing problem in the industry. The scale-out application market is evolving very quickly, and organizations need help in this area. Big Data is constantly getting bigger and changing the way business intelligence shapes the modern organization. For now, DriveScale is the only company taking a specific “software-defined” aim at the scale-out application market. Based on the trends, however, it isn’t likely to stay lonely for long.
Federal Government Data Center Mandate Gets Ahead of the Public Sector

Mark Gaydos is Chief Marketing Officer for Nlyte Software.
Recent federal government policy targets data centers that consume too much power and seeks to block agencies from allocating money to new or expanded federal data centers without approval from the Federal CIO. This new mandate, in development for several years, basically leaves federal agencies no option but to “go green.”
Here is a bit of background to help make sense of these new policies:
- In 2010, the Office of Management and Budget (OMB) launched the Federal Data Center Consolidation Initiative (FDCCI) to promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers and reducing the cost of data center hardware, software, and operations.
- In December 2014, the President signed into law the Federal Information Technology Acquisition Reform Act (FITARA), which enacted and built upon the requirements of the FDCCI. FITARA requires agencies to submit annual reports that include comprehensive data center inventories; multi-year strategies to consolidate and optimize data centers; performance metrics and a timeline for agency activities; and yearly calculations of investment and cost savings.
FITARA also requires the Administrator of the Office of E-Government and Information Technology, now the Office of the Federal Chief Information Officer (OFCIO), to provide public updates on cumulative cost-savings and optimization improvements, review agency data center inventories, and implement data center management strategies. This government framework helps achieve FITARA’s optimization requirements.
See also: White House Orders Federal Data Center Construction Freeze
What Does DCIM Have To Do With It?
By 2018, all federal government data centers must achieve higher, specified levels of efficiency. One way to achieve this is to bring server utilization rates up to 65 percent from the current 5 percent. Utilization rates were kept at 5 percent to ensure spare capacity; with the elasticity of the cloud, flexible bandwidth-on-demand effectively squeezes more utilization out of the existing boxes. There is an efficient means of overcoming this challenge, and it’s called data center infrastructure management, or DCIM.
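To put those utilization figures in perspective, here is a rough back-of-the-envelope calculation; the fleet size of 1,000 servers is an assumed example, and only the 5 and 65 percent figures come from the mandate:

```python
# Back-of-the-envelope illustration: if the workload that keeps 1,000
# servers at 5% busy were packed onto servers running at 65%, far fewer
# boxes would be needed. The fleet size of 1,000 is an assumed example.
current_servers = 1000
current_util = 0.05
target_util = 0.65

work = current_servers * current_util   # total "work" being done today
needed = work / target_util             # servers needed at 65% utilization
print(f"{needed:.0f} servers (~{current_servers / needed:.0f}x consolidation)")
# -> 77 servers (~13x consolidation)
```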
DCIM is now required in all federal data centers and is the best solution to monitor energy and track inventory. DCIM offers:
- Ease of asset tracking for users to follow assets throughout their lifecycle, from loading dock to decommission.
- Accurate and real-time inventory audits, with floor plan views that allow data center operators easy consolidation and capacity planning.
- Real-time energy monitoring by automatically extracting current energy usage and accurately displaying the overall trending information.
- Facility and IT managers the ability to find and identify stranded, unused power and space capacity for the most efficient usage, reducing power consumption by 15-25 percent.
- Data center managers the ability to establish a power-use baseline and record data over a period of time to validate to the government that successful measures have been implemented (a minimal sketch of such a baseline calculation follows below).
And DCIM can accomplish all this without new hardware, which would otherwise have the adverse effect of drawing yet more power.
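As referenced in the list above, the baseline-and-trend idea can be illustrated with a minimal sketch; the monthly readings are invented and this is not any particular DCIM product’s code:

```python
# Minimal illustration of establishing a power baseline and reporting the
# trend against it. The monthly kW readings are invented example data.
readings_kw = [420, 415, 398, 380, 371, 360]   # one reading per month

baseline = readings_kw[0]                      # first month as the baseline
latest = readings_kw[-1]
savings_pct = (baseline - latest) / baseline * 100
print(f"Baseline: {baseline} kW, latest: {latest} kW, "
      f"reduction: {savings_pct:.1f}%")        # -> reduction: 14.3%
```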
Federal agencies selecting a DCIM solution should ensure that the vendor is well entrenched in federal data centers, understands the unique federal requirements, and offers a solution that meets the mandates of the recent Data Center Optimization Initiative (DCOI) from the Office of Management and Budget. This should include the ability to:
- Establish goal and target date configuration within charts, so facility managers can understand where mandates and objectives are being achieved.
- Perform multivariate analysis to establish a regression line that predicts a “realization date” for when the federal agency should ultimately be in compliance with the specific focus areas (see the sketch after this list).
- Add micro-permissions capabilities to grant qualified individuals specific and detailed dashboard access as necessary throughout any part of the data center complex or across their entire global portfolio.
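As a sketch of the regression idea mentioned above (an illustration under assumed data, not Nlyte’s implementation), the code below fits a straight line to quarterly utilization readings and extrapolates when a 65 percent target would be reached:

```python
# Illustration only: fit a straight line to quarterly server-utilization
# readings and extrapolate the quarter in which a target is reached.
# The readings and the 65% target are assumptions for the example.

def fit_line(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

quarters = [0, 1, 2, 3, 4]                  # elapsed quarters
utilization = [5.0, 9.0, 14.0, 18.0, 23.0]  # percent, invented readings
slope, intercept = fit_line(quarters, utilization)
target = 65.0
realization_quarter = (target - intercept) / slope
print(f"~{realization_quarter:.1f} quarters until the {target:.0f}% target")
```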
In conclusion, a complete DCIM solution can help organizations optimize their data centers and consolidate their hardware footprint while staying fully in sync with the efficiency objectives of the federal DCOI.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Samsung Buys Joyent to Expand Cloud Business

(Bloomberg) — Samsung Electronics will acquire Joyent to expand in cloud computing as the world’s largest smartphone maker looks beyond hardware for revenue growth.
The deal will help Samsung build its cloud infrastructure, the Galaxy maker said in a statement without disclosing a purchase price. The acquisition gives the Suwon, South Korea-based company its own platform to support mobile, internet of things and cloud services.
Samsung has been “actively looking” to acquire software developers, including in artificial intelligence, as it tries to overcome flat-lining sales for its devices, Executive Vice President Rhee In Jong said in March. Samsung had more than $60 billion in cash and equivalents at the end of the first quarter.
See also: Joyent Wants to Be the Bare Metal Cloud for Docker Containers
Vice Chairman Lee Jae Yong is trying to reduce the company’s focus on manufacturing, which had helped create the world’s biggest maker of phones, TVs and memory chips.
Shipments of Samsung’s Galaxy smartphones and other models fell for a second straight year in 2015 as Apple’s iPhones gained traction in the high-end category while models from Huawei Technologies and Xiaomi attracted budget buyers. Revenue and net income have fallen for two straight years while the stock has posted three consecutive annual declines.
Asia’s biggest technology company was involved in 12 deals totaling $456 million last year, according to data compiled by Bloomberg. Recent deals include last year’s purchase of LoopPay, a company that develops technology for mobile payments, and the August 2014 acquisition of SmartThings, which makes mobile applications to control electronics in homes.
Data Center Management Startup LogicMonitor Raises $130M

(Bloomberg) — LogicMonitor, which helps companies manage their technology systems in data centers, has raised $130 million to help expand its product lineup and global reach.
With the funding, the Santa Barbara, California-based company has raised more than $150 million, CEO Kevin McGibben said in an interview. The investment came from Providence Strategic Growth, the growth equity affiliate of Providence Equity Partners, which has more than $45 billion in assets. The cash infusion will help the company bolster engineering, expand sales and marketing and boost its overseas presence, including in Europe.
“It’s huge for us,” McGibben said. “We’ve been waiting to take on a much bigger investment until the time we felt like we were really ready — to make those bigger investments to build a much more significant player in the space.”
LogicMonitor helps companies manage servers, storage, and networking in their own data centers or in the cloud. The company, which competes with rivals such as IBM and Hewlett Packard Enterprise, has almost doubled sales over the past two years, McGibben said. It has more than 1,000 customers, including JetBlue Airways, Zendesk, National Geographic, and Trulia.
The funding comes amid sluggish investments for startups this year as concerns grow about valuations and profits. During the first quarter, venture capitalists invested $12.1 billion in 969 US deals — little changed from $12 billion in 1,021 deals a year earlier, according to the MoneyTree Report from PricewaterhouseCoopers and the National Venture Capital Association, based on data provided by Thomson Reuters.
While McGibben wouldn’t comment on profitability, he said LogicMonitor has a healthy financial model and good momentum, which helps it stand out as an investment.
“We’re not trying to increase our burn month over month — quarter over quarter — growth at all costs,” he said, while declining to comment on a valuation. “We’re actually trying to build a long-term, valuable company.”
Second Google Data Center Comes Online in Ireland

Google has launched its second data center on the outskirts of Dublin, the city where its European headquarters are located.
The company invested €150 million in the new data center, according to news reports. Including this latest investment, the company has now invested a total of €750 million in Irish capital assets, Irish Times reported, citing Google.
The facility is adjacent to the first Google data center in Ireland, launched in 2012 on the company’s campus in Clondalkin, a town 10 kilometers west of Dublin.
Enda Kenny, Ireland’s Taoiseach (head of government, equivalent to prime minister), spoke at the data center opening Thursday, applauding the company’s sizable investment in the country, creating jobs and being a “leader within Ireland’s digital community,” Irish Independent reported.
See also: What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy
At the event, Ronan Harris, Google’s head in Ireland, addressed the upcoming referendum on Britain’s exit from the European Union and the implications a potential Brexit may have for Google’s Irish operations.
“We are going to wait and see what the outcome of the referendum is and then we’ll assess what the British people have decided and the British government then decide to do,” Harris said, according to Irish Times. “At the moment we don’t have clarity on that so we haven’t made any decisions, accordingly.”
While data center expansion is always an ongoing process for Google, the company has been ramping up data center investment this year to support a push to grow its cloud services business. The company announced in March it would add 10 new data center locations that will host its public cloud infrastructure.
Google reported capital expenditures of $2 billion in the first quarter of this year, saying the spending reflected its “investments in production equipment, facilities, and data center construction.”
This expansion push includes both building and leasing data center capacity from third-party providers.
Read more: Google to Build and Lease Data Centers in Big Cloud Expansion
Report: Deutsche Telekom Considers Host Europe Acquisition

Brought to You by The WHIR
Host Europe Group’s search for buyers may be drawing to a close, according to a report that Deutsche Telekom is considering acquiring the company from private equity parent Cinven. Five different sources close to the situation told Reuters that Deutsche Telekom is seeking US private equity partners to help fund a deal to merge HEG with its web hosting subsidiary Strato.
Investment firms Hellman & Friedman and Blackstone were named by several sources as among those considering participation. None of the companies named in the report commented. Deutsche Telekom has business relationships with both firms, and CEO Tim Hoettges has expressed comfort with private equity partnerships, according to the report.
See also: Deutsche Telekom Touts New Data Center as Fort Knox for German Data
In April a report indicated that Cinven had put HEG up for sale with a €1.7 billion sticker price and mentioned Hellman & Friedman among firms that might be interested. That price puts HEG’s multiple to core earnings of €140 million at just over 12 times, which is comparable to GoDaddy’s multiple and significantly higher than those of competitors Rackspace and Endurance International Group.
Strato was acquired by Deutsche Telekom in 2009 for €275 million. One of Reuters’ sources said it has core earnings of around €30 million and that its enterprise value is about a quarter that of HEG.
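For reference, the multiple cited above can be checked with a quick calculation; the figures are the ones reported, and reading Strato’s worth as a quarter of HEG’s asking price is my assumption:

```python
# Quick check of the reported figures (the arithmetic, not the figures, is mine).
heg_price = 1.7e9      # EUR, reported asking price
heg_earnings = 140e6   # EUR, reported core earnings
print(f"HEG earnings multiple: {heg_price / heg_earnings:.1f}x")  # -> ~12.1x

# Assumption: "a quarter as much as HEG" read against the EUR 1.7B price tag.
strato_value = heg_price / 4
print(f"Implied Strato enterprise value: ~EUR {strato_value / 1e6:.0f}M")  # -> ~EUR 425M
```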
See also: Microsoft Continues Expanding Cloud Data Center Empire
Potential bidders for HEG would have to consider the value of both its mass market and managed hosting businesses, according to the report.
Deutsche Telekom announced intentions to double its business cloud revenue by 2018 and become the leading cloud platform provider for businesses in Europe when extending a partnership with Huawei to include public cloud a year ago. Adding HEG’s customers, products and team would be a significant move towards that goal.
This first ran at http://www.thewhir.com/web-hosting-news/report-deutsche-telekom-considers-host-europe-group-acquisition