Data Center Knowledge | News and analysis for the data center industry
 

Thursday, July 27th, 2017

    11:00a
    GigaSpaces Spins Off Cloudify, Its Open Source Cloud Orchestration Unit

    GigaSpaces Technologies announced today that its business unit for Cloudify, the open source orchestration and cloud management platform, will be spun off into a new company focused on its core markets — cloud and telecoms. GigaSpaces’ core product is its in-memory computing platform XAP. It also sells an in-memory analytics solution called InsightEdge.

    “It was always our plan to eventually spin off Cloudify,” said Nati Shalom, Cloudify’s CTO, in a statement. “Based on the impressive growth of the open-source Cloudify project, and increased market penetration of the commercially supported Cloudify product, it has become clear that now is the time to do so. This strategic move gives us the freedom to accelerate engineering development in both product lines.”

    The new company will retain Cloudify’s core engineering, product, and marketing teams, who will work from existing offices in New York City, San Jose, and Tel Aviv.

    Cloudify, first released by GigaSpaces in 2012, is a framework used to deploy, manage, and scale applications in the cloud. It ships with built-in support for private clouds, such as OpenStack and VMware, as well as public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform. In addition, it supports container technologies, such as Kubernetes, and configuration management tools, including Chef, Puppet, and Ansible.

    The platform is also increasingly used by large telecoms and carriers to manage network functions virtualization, and the company markets a separate Cloudify Telecom Edition for that segment.

    In 2016, Cloudify launched Project ARIA, an open source orchestration library, to accelerate adoption of the open source cloud orchestration language TOSCA. Although Cloudify has contributed the project to the Apache Software Foundation, ARIA remains part of the company's strategy of leveraging open source software.

    In the announcement of the spin-off, GigaSpaces said this action has been in the works since last year and was driven by Cloudify’s success in both the cloud and the carrier network orchestration markets. “It allows the new Cloudify entity to dedicate engineering, product development, marketing and customer support activities to the cloud market segment.”

    “GigaSpaces, with its long-standing track record in the in-memory computing market, will continue to capitalize on its technology investments and innovation in its XAP and InsightEdge product lines,” the company said in a statement.

    While the XAP/InsightEdge and Cloudify businesses are already functioning as separate business units, the structural spin-off is still subject to regulatory approval.

    12:00p
    Switch Offers to Build Custom-Size Data Centers Anywhere Clients Choose

    Switch, the Las Vegas-based data center provider that has traditionally been focused on building large, multi-tenant data center campuses, is now also offering to build single-user data centers of custom size, wherever customers need them, using the same patented design elements used to build its massive campuses.

    The new Modularly Optimized Design, or MOD, product line is aimed at companies whose needs range from relatively small edge deployments to large hyper-scale facilities. Switch speeds up deployment time by manufacturing modular components and shipping them to the site for quick assembly. Switch founder and CEO Rob Roy patented a Wattage Density Modular Design in 2007, according to the company.

    There are two variants of the MOD design: MOD 100, which can be customized to be deployed on customer premises or in dense urban environments; and MOD 250, which uses the same design specifications as Switch’s hyper-scale SuperNAP data centers in Las Vegas.

    The company said it took six months to build its 170,000-square-foot Las Vegas 12 data center using the MOD 100 design.

    While it already serves the likes of Amazon Web Services, eBay, and Intel out of its multi-tenant colocation facilities, Switch appears to now be targeting customers that either cannot share a building for security and compliance reasons, or have unique location requirements for their computing capacity. Those could include both the largest cloud providers, who have been spending billions on new core and edge data centers recently, and highly security-sensitive organizations in sectors like healthcare, financial services, or government.

    As it expands its product range, Switch is also expanding the geographic reach of the massive campuses it is known for. The company recently launched new campuses outside Reno, Nevada, and Grand Rapids, Michigan, and announced plans for a campus in Atlanta. It is also building in Italy and Thailand.

    4:39p
    DCK Investor Edge: QTS Aims New Design At Cloud Giants

    QTS Realty, the first data center REIT to report second-quarter results this earnings season, has struggled in the year's first half to post the large year-over-year gains investors have come to expect from the sector.

    But the future appears brighter: the company sees strong demand across most of its markets, including Dallas-Fort Worth, Chicago, and Piscataway, New Jersey. Leasing picked up considerably in the second quarter versus the first, for which the company reported weaker results than expected.

    On its second-quarter earnings call, QTS management focused on new initiatives that should bear fruit in the year’s second half and beyond, including connectivity partnerships with PacketFabric and Megaport, as well as QTS’s new Service Delivery Platform and the HyperBlock modular wholesale product, its new data center design aimed at hyper-scale customers.

    The data center REIT has had success with a land-and-expand strategy with some of the largest cloud, digital media, and software firms. HyperBlock, a reduced time-to-market product, was developed based on its experiences of the past two years.

    Read more:  DCK Investor Edge: QTS’s Hybrid IT Strategy — Short-Term Pain, Long-Term Gain?

    HyperBlock — a Deeper Dive

    There’s more to the HyperBlock announcement than the ability to build 2MW data center modules quickly. It is intended to target hyper-scale customer deployments proactively as a separate market segment, with tailored product offerings.

    HyperBlock represents a change in data center design philosophy for QTS, the company’s CTO, Brian Johnston, explained to Data Center Knowledge.

    “Some of the world’s leading hyper-scalers are both QTS customers and partners,” he said. “We continue to work closely with them and have become a fast follower to understand their unique requirements for speed-to-market, visibility, cost-reduction, and operational partnership, and have developed and optimized QTS HyperBlock to meet these needs.”

    Image: QTS

    Being a fast follower means QTS is willing to change its product road map to include options that will make its mega-data centers more attractive to this specific class of users.

    Wholesale Changes

    While QTS has leased large data halls — in the 10MW-to-20MW range — in the past, often to anchor a new campus, these deployments pre-date the latest initiatives by the largest public cloud providers.

    Its older C1 wholesale solution was originally geared to allow enterprise customers in the 500kW-to-6MW range freedom to co-create a custom data center design. These traditional wholesale customers often require a great deal of infrastructure redundancy. Enterprise customers can often take many months to decide on room layout, power distribution, and other customized elements.

    The RFP process has evolved during the past two years, as hyper-scale customers have worked with multiple third-party data center operators to standardize many of their requirements. Each of the public cloud giants has developed a program for what they need in a third-party data center build. This includes an accelerated schedule, from inking the deal all the way to timing and coordination required to roll in equipment.

    HyperBlock Design Flexibility

    Hyper-scale customers don’t need as many electrical system components in their data center designs as traditional enterprise tenants do. Therefore, modular HyperBlock power distribution and UPS configurations provide for N+1 and N solutions at 200 watts per square foot, along with flexibility for hot aisle/cold aisle containment if higher densities are required. This design approach lowers cost-to-build, while increasing customer power utilization in each 2MW block. These changes provide QTS with higher returns per square foot on lower-rent leases and help minimize stranded power capacity.
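
    To put those figures in perspective, here is a minimal back-of-the-envelope calculation in Python, assuming the 2MW block capacity and the 200-watts-per-square-foot design density cited above apply to the same sellable floor area (the article does not spell that out):

    # Rough footprint implied by the HyperBlock figures in the article.
    # Assumption: the 2 MW of critical load is spread across floor space
    # provisioned at 200 watts per square foot.
    block_capacity_w = 2_000_000        # one HyperBlock module, 2 MW
    design_density_w_per_sqft = 200     # watts per square foot

    floor_area_sqft = block_capacity_w / design_density_w_per_sqft
    print(f"Implied floor area per 2 MW block: {floor_area_sqft:,.0f} sq ft")
    # Prints: Implied floor area per 2 MW block: 10,000 sq ft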

    “As part of the creation of QTS HyperBlock, QTS spent the last two years perfecting our supply chain and construction capabilities to create … just-in-time approach to hyper-scale data center capacity,” Johnston said. “Today we are delivering an innovative, incremental hyper-scale growth product backed by rapid time-to-delivery service level agreements.”

    QTS can now offer 60-to-120-day data-hall build-out as part of its contract SLA. The company will continue deploying these 2MW modules on its traditional raised-floor data center space until it runs out.

    But the next version of HyperBlock will offer an option for customers to deploy their cabinets on concrete slabs, a more current design many of them use in their own data centers. This design option is expected to reduce cost and improve speed-to-market further, while also allowing for higher-density deployments.

    Inside QTS’s data center in Richmond, Virginia (Photo: QTS)

    HyperBlock is an engineered turn-key solution, meaning the design includes everything from electrical and mechanical system configuration to connectivity and logistics. It also includes software APIs clients can use to manage infrastructure, life-cycle management, and support.

    Since hyper-scale customers tend to operate at higher temperatures, each 2MW module comes standard with 30 temperature sensors that can be monitored on-site and remotely. The QTS Service Delivery Platform, which collects that data, exposes an API that lets customers export it into their own data management systems to help optimize performance using their own proprietary algorithms.
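
    The article does not document the Service Delivery Platform API itself, so the following is purely a hypothetical sketch of what exporting those sensor readings into an in-house system might look like; the endpoint URL, authentication scheme, and field names are invented for illustration and are not the actual QTS interface:

    # Hypothetical sketch only: the endpoint, token handling, and field names
    # are assumptions for illustration, not the documented QTS API.
    import requests

    API_URL = "https://sdp.example.com/api/v1/modules/{module_id}/temperatures"
    API_TOKEN = "replace-with-your-token"

    def fetch_module_temperatures(module_id: str) -> list:
        """Pull the latest readings from the 30 sensors in one 2 MW module."""
        resp = requests.get(
            API_URL.format(module_id=module_id),
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["readings"]

    readings = fetch_module_temperatures("module-01")
    hottest = max(readings, key=lambda r: r["celsius"])
    print(f"Hottest sensor: {hottest['sensor_id']} at {hottest['celsius']} C")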

    Image: QTS

    5:44p
    Schneider Buys Vertiv’s Transfer Switch Business for $1.25B

    Schneider Electric has agreed to acquire ASCO, the automatic transfer switch business of Vertiv, formerly Emerson Network Power.

    Vertiv said in a statement that the $1.25 billion deal will allow the company to focus on its core business, which is providing critical infrastructure solutions in data center, telecommunications, commercial, and industrial markets. The sale price is a multiple of 11.7 times ASCO’s 2016 earnings, according to Vertiv.

    “As Vertiv has repositioned itself after being sold to Platinum Equity in November 2016, it became clear that ASCO’s strengths in the automatic transfer switch arena fell outside the new organization’s more focused strategy,” Vertiv CEO Rob Johnson said in a statement.

    Emerson Electric sold its Network Power unit to Platinum last year because the unit was not as profitable as its other businesses, though it retained a 15-percent stake. The $4 billion unit was one of several Emerson spun off to boost profitability.

    See also: New Owners Pushing Vertiv to “Act Like a Startup”

    It became Vertiv this year, and its executives have said that unlike the common scenario where a private equity firm takes over a business and cuts expenses to squeeze out every last dollar before selling it off, Platinum is in it for the long haul, showing willingness to invest in growing the business.

    In a recent interview with Data Center Knowledge, Johnson said the initial investments have already started paying off. He declined to disclose its revenue projections for the year, but said, “We’re growing.”

    6:15p
    Selecting the Best Database for Your Organization, Part 1

    Franco Rizzo is Senior Pre-sales Architect at TmaxSoft. 

    The term “cloud” is so ubiquitous that it means different things to everyone. For these purposes, we will define it in relation to infrastructure: the cloud is the ability to auto-provision a subset of available compute/network/storage to meet a specific business need via virtualization (IaaS).

    As for applications, the cloud is browser-based access to an application (SaaS) and, importantly, the utility-based consumption model for paying for these services, which has caused a major disruption in the traditional models of technology.

    This has led to a paradigm shift in client-server technology. Just as the mainframe gave way to mini-computing, which in turn led to the client-server model and then to cloud computing and Amazon Web Services (AWS), the ubiquitous cloud is the next phase in the evolution of IT. In this phase, applications, data, and services are being moved to the edge of the enterprise data center.

    A CIO wanting to lower IT spend and mitigate risk has many options:

    • Move budget and functionality directly to the business (shadow IT) and empower the use of public cloud options
    • Move to a managed service – private cloud for the skittish
    • Create a private cloud with the ability to burst to a public cloud (i.e., hybrid cloud)
    • Move 100 percent to a public cloud provider managed by a smaller IT department

    Each one of the options listed above comes with pros and cons. With all the available database options, it can be difficult to determine which one is the best solution for an enterprise.

    The three key issues most central to an organization’s database needs are performance, security and compliance. So what are best practices for database management strategies for each deployment option to manage those priorities?

    Let’s briefly examine two use cases for deploying your enterprise database strategy: on-premise/private cloud and hybrid cloud. Part 2 of this article will address public cloud; appliance-based; and virtualized environments.

    On-premise/Private Cloud

    One of the main pros of this type of database deployment scenario is that an enterprise will have control over its own environment, which can be customized to its specific business needs and use cases. This boosts trust in the security of the solution, as IT and CIOs own and control it.

    One con is location: where users sit relative to where the data sits can impact legacy applications. Latency becomes an issue when users in a different part of the globe from the company access data via mobile devices, resulting in a poor overall user experience.

    Another con is capex. Traditionally, the break-even point for an on-premise deployment – counting hardware, software and all required components – is about 24 to 36 months, which can be too long for some organizations. Storage can also get expensive.
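
    As a simple illustration of that break-even math, here is a toy comparison in Python; the dollar amounts are invented assumptions, and only the 24-to-36-month horizon comes from the article:

    # Toy break-even comparison: upfront on-premise capex vs. pay-as-you-go cloud.
    # All dollar amounts below are illustrative assumptions, not figures from the article.
    onprem_capex = 600_000         # hardware, software, and required components
    onprem_monthly_opex = 5_000    # power, space, support contracts
    cloud_monthly_cost = 25_000    # renting equivalent capacity from a provider

    month = 0
    onprem_total, cloud_total = onprem_capex, 0
    while onprem_total > cloud_total:
        month += 1
        onprem_total += onprem_monthly_opex
        cloud_total += cloud_monthly_cost

    print(f"Cloud spend overtakes on-premise spend after ~{month} months")
    # With these assumptions the crossover lands at 30 months, inside the
    # 24-to-36-month window described above.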

    A feature that could be a pro or con, depending on how one looks at it, is that IT will have a greater involvement. This sometimes can impact an enterprise’s ability to go to market quickly.

    Before moving to an on-premise/private cloud database, it's important to examine the expected ROI – if the ROI timeline is more than two or three years out, this option can be justified, but that timeline may not work for all organizations.

    Perceived security and compliance are other considerations. Some industries have security regulations that require strict compliance, such as financial services and healthcare. Countries like Canada, Germany and Russia are drafting stricter data residency and sovereignty laws that require data to remain in the country to protect their citizens’ personal information. Doing business in those countries, while housing data in another, would be in violation of those laws.

    Security measures and disaster recovery both must be architected into a solution as well.

    Hybrid Cloud

    A hybrid cloud is flexible and customizable, allowing managers to pick and choose elements of either public or private cloud as needs arise. The biggest advantage of hybrid cloud is the ability to do “cloud bursting.” A business running an application on premise may experience a spike in data volume during a given time of month or year. With hybrid, it can “burst” to the cloud to access more capacity only when needed, without purchasing extra capacity that would normally sit unused.
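
    As a minimal sketch of the bursting decision described above, assuming a scheduler that knows current demand and fixed on-premise capacity (the numbers and function names are placeholders, not any particular vendor's API):

    # Illustrative cloud-bursting logic: keep steady-state load on premise and
    # rent public-cloud capacity only for the overflow. Capacity units are assumed.
    ONPREM_CAPACITY_UNITS = 100    # whatever unit the workload is measured in

    def plan_capacity(current_demand_units: int) -> dict:
        """Decide how much demand stays on premise and how much bursts to cloud."""
        on_premise = min(current_demand_units, ONPREM_CAPACITY_UNITS)
        cloud_burst = max(0, current_demand_units - ONPREM_CAPACITY_UNITS)
        return {"on_premise": on_premise, "cloud_burst": cloud_burst}

    # Example: a month-end spike of 160 units against 100 units of local capacity.
    print(plan_capacity(160))      # {'on_premise': 100, 'cloud_burst': 60}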

    A hybrid cloud lets an enterprise self-manage an environment without relying too much on IT and it gives the flexibility to deploy workloads depending on business demands.

    More importantly, disaster recovery is built into a hybrid solution and thus removes a key concern. An organization can mitigate some restraints of data sovereignty and security laws with a hybrid cloud; some data can stay local and some can go into the cloud.

    The main con of a hybrid cloud is that integration is complicated; tying an on-premise environment into a public cloud adds complexity that can lead to security issues. Hybrid cloud can also lead to sprawl, where the computing resources underlying IT services grow unchecked and exceed what the number of users actually requires.

    While hybrid gives the flexibility to leverage the current data center environment with some best-of-breed SaaS offerings, it’s important to have a way to govern and manage sprawl. Equally as important is having a data migration strategy architected into a hybrid cloud. This helps reduce complexity while enhancing security.

    Check back tomorrow to read Part 2.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    6:45p
    Oilfield Rush to High-Tech Helps Smaller Companies Thrive

    David Wethe (Bloomberg) — A wave of next-generation upstarts is hitting America’s oil patch, offering high-tech solutions aimed at an industry in flux following the worst crude-market crash in a generation.

    At a time when the five biggest oilfield servicers — still smarting from the price rout — have cut almost $1 billion from their research budgets, companies such as Ambyint Inc. are stepping into the breach. Ambyint uses iPhone-sized computers, digital signals and complex algorithms to control the flow of oil from older wells, boosting output and avoiding downtime.

    The old guard is taking note. This month, Halliburton Co., the largest provider of fracking services, bought Summit ESP, a company armed with 44 patents for technology to improve production. That came two months after Helmerich & Payne Inc.'s acquisition of Motive Drilling Technologies Inc., which holds 14 patents, has another dozen pending, and has software in hand that can robotically steer drill bits more than a mile underground.

    “We had the land rush to buy up acreage; now the land rush is going to be to start buying up intellectual property,” said Mark Mills, a partner in CVP, a digital oilfield venture fund. “The oil industry hasn’t been like that for a long time because it’s been locked into an old structure.”

    While the oil industry has largely recovered from its three-year rout, prices have settled far short of the $100-a-barrel levels seen prior to the plunge. At $45 to $50 a barrel, explorers are looking for every bit of help they can get to keep costs under control.

    Unique Systems

    In some cases, the new innovators offer unique software systems. In others, they’re fine-tuning existing hardware to smooth out the rough edges. And with less access to funding than their bigger rivals, the smaller companies have tended to focus on one issue at a time, allowing drillers to pick and choose the individual fixes they need, rather than being locked into a larger contract with one of the big servicers.

    “The shale revolution is changing America,” said Kevin Shuba, the chief executive officer of OmniTRAX, a railroad logistics provider, in a telephone interview. “There’s a great opportunity for smaller companies to enter this space because of their flexibility and because of their ability to move quickly.”

    Ambyint, a Canada-based provider of software and devices, was launched in 2014 by Nav Dhunay, previously known for his development of a home-automation technology system to control thermostats, lights and music remotely from mobile phones.

    His latest technology is designed to remotely control oil flow from miles away in wells that have slowed with age, and on a variety of systems.

    In changing his focus, Dhunay said he moved from “more of a consumer-tech, Silicon Valley-type of culture where it’s all about innovation and fast-paced change to an industry that is quite a bit slower when it comes to adoption of technology, quite a bit slower when it comes to innovation.”

    Many of the biggest players are “just continuing to do step-changes to what they’ve already got,” Dhunay said in an interview.

    The success of these smaller companies comes at a time when the biggest service providers have been retreating from their own in-house innovation.

    Techno Music

    When Baker Hughes Inc. blasted techno music and splashed lightning-fast graphics over a movie screen to a room full of energy analysts in a suburb outside Houston three years ago, the world’s third-biggest oilfield-services provider predicted its newly unveiled invention would someday replace the most iconic symbol in the century-old oilfield, the nodding donkey pump.

    Today the Baker Hughes technology, called LEAP, has hardly been discussed on earnings calls, and has failed to displace the traditional oil pumps in any significant way. In the meantime, Baker Hughes cut its research spending by 37 percent, to $384 million in 2016 from $613 million two years earlier.

    Halliburton has also slashed its research-and-development spending, dropping it by 45 percent to $329 million in 2016 from $601 million in 2014. And the two companies aren’t alone. Schlumberger Ltd., Weatherford International Plc, and GE Oil & Gas all reduced the amount of money they spent on research between 2014 and 2016.

    In this low oil-price environment, Halliburton’s attention to patent spending is sharper than ever, said Greg Powers, head of technology at the company.

    ‘Burning the Torch’

    “What we patent has to be monetized,” Powers said in a telephone interview. “That’s not the time when you go burning the torch of the patent machine as hard as you can. It’s a really expensive enterprise. What we did was consciously dial back blockbuster programs.”

    Many smaller contractors, meanwhile, have been ramping up their research budgets.

    Dril-Quip Inc., a provider of subsea gear to the offshore drilling industry, saw its research and engineering budget grow to 8.2 percent of sales last year from 4.9 percent in 2014. In the same period, Tesco Corp., an equipment provider, more than doubled its ratio of research spending to sales to 4.2 percent from 1.8 percent.

    To be sure, the biggest companies are not shutting down their patent machines entirely. Baker Hughes earlier this year unveiled a patented fracking device for deepwater oil wells which the company says will shave roughly 40 percent off the $100 million cost to stimulate an offshore well for production. The company, now majority owned by General Electric Co., can tap into GE’s FastWorks program to better emulate the feel of a Silicon Valley startup.

    Four-Year Lag

    There’s generally a lag of about four years from the time that a company commits research dollars on a new device to when it shows up as a patent. So, expect oilfield patents from the biggest servicers to continue rising through 2018, when the drop in research spending finally shows up.

    Alex Robart spent five years studying the oil field while running PacWest Consulting Partners with his brother Chris before selling to IHS Markit Ltd. in 2014. Soon afterward, Ambyint’s Dhunay snapped him up to run his new oilfield tech company.

    “It’s not the big guys who drive new technologies,” Robart said in an interview. “Big guys are great at incremental innovation and technology, but not so good at totally new technology.”

    7:15p
    Diversity to Drones: Black Hat Speakers Weigh in On Top Security Trends

    Brought to you by IT Pro

    In the 20 years since the first Black Hat conference in 1997, security hacks have become incredibly cheap to initiate, increasingly expensive and complex to mitigate, and have more real-world consequences than ever before, according to speakers and attendees at this year’s conference.

    The first day of sessions at this week’s conference not only touched on new technology but also the human element of security. Facebook chief security officer Alex Stamos shifted the lens on hackers themselves in his keynote session on Wednesday morning, urging them to reflect on their empathy for users.

    Here’s a look at the keynote and other highlights from day one at Black Hat conference.

    Facebook CSO: Hackers Need to Work on Empathy

    Facebook chief security officer Alex Stamos kicked off the Black Hat conference on Wednesday with a keynote that called on attendees – which include security practitioners, vendors, academics and others – to go beyond finding bugs and the next zero-day and recognize the potential human harm of less interesting security issues like phishing and spam.

    According to a report by ThreatPost, Stamos said that the community “is not yet living up to its potential. We’ve perfected the art of finding problems over and over without addressing root issues. We need to think carefully about what to do about it downstream after discovery.” He said that the security community tends to shy away from areas that create real harm, such as instances of abuse like doxing.

    “The security community has the tendency to punish those who implement imperfect solutions in an imperfect world,” Stamos said, according to ThreatPost. “We have no empathy. We don’t have the ability to put ourselves in the shoes of people we are trying to protect.”

    If you want to watch the full keynote, you can do so on Facebook here. (Stamos’ presentation starts at 45:42)

    Diversity in Cybersecurity Needs to Be Priority

    Stamos addressed the issue in his keynote and offline as others in the community continued to discuss how important it is to foster diversity in cybersecurity.

    Many believe that diversity is critical to ensuring that different minds come together to solve the complex security problems of the future. But in the last few years, even as Black Hat has brought more sessions and panels on the topic, the diversity numbers have not improved drastically; instead, they have essentially flatlined, according to Kelly Jackson Higgins, executive editor at Dark Reading, who put together a panel on Wednesday called “Making Diversity a Priority in Security.”

    The panel focused on real-world examples of how organizations are hiring diverse candidates, an effort that starts right in the job description. Jackson Higgins described on The Charles Tendell Show podcast how many security job descriptions are not geared toward attracting a diverse pool of candidates. Companies and advocates in the security community are trying to change this with internship programs that help underrepresented communities get their foot in the door.

    New Hacks Range from Cheap to Critical (Infrastructure)

    The human element to security may be interesting and topical, but this is a technology conference, and the sessions on technology are plentiful.

    This is no surprise to anyone who works in security, but it’s insanely cheap to hack stuff. I mean, if you know what you’re doing, you basically only need a USB key; or as a panel at Black Hat on Wednesday showed attendees, a $10 SD card reader.

    “Dumping firmware from hardware, utilizing a non-eMMC flash storage device, can be a daunting task with expensive programmers required, 15+ wires to solder (or a pricey socket), and dumps that contain extra data to allow for error correction. With the growing widespread use of eMMC flash storage, the process can be simplified to 5 wires and a cheap SD card reader/writer allowing for direct access to the filesystem within flash in an interface similar to that of using an SD card.”

    Researchers also discussed on Wednesday a new flaw in the cryptographic protocol in 3G and 4G networks, which can be exploited using a low-cost setup.

    Elsewhere, security experts showed attendees how a home-built system that emits sound and ultrasound can be used to launch attacks against VR products, smartphones, and drones.

    DIY projects and drones may seem small-time, but there are all kinds of attacks that have serious real-world security consequences, particularly when it comes to critical infrastructure.

    Ruben Santamarta, principal security consultant at IOActive, spoke Wednesday about how radiation monitoring devices, used in critical infrastructure like nuclear power plants and at border crossings, can be exploited. Jason Staggs, a security researcher at the University of Tulsa, explained how the control networks of wind farms, which are becoming a leading source of renewable energy, can be attacked to influence wind farm operations.

    This article originally appeared on IT Pro.

