Data Center Knowledge | News and analysis for the data center industry

Friday, November 4th, 2016

    7:04p
    As Cloud Adoption Grows, Canadian Businesses Struggle with Security: Report
    Brought to You by The WHIR

    More than three-quarters of Canadian organizations feel they are not adequately addressing cloud security, according to research released Thursday by IT consultancy Scalar Decisions.

    The report, Moving up the Value Chain: What We Can Learn from Experienced Cloud Users, shows that security remains the top concern for organizations of all experience levels after adopting cloud.

    IDC surveyed 355 Canadian IT decision-makers in August and September for the study, including two-fifths from on-premises-only organizations. Responses showed that a majority or near-majority of Canadian organizations have not adopted data classification and accountability (54 percent), client and endpoint protection (57 percent), identity and access management (48 percent), and application-level controls (59 percent).

    The findings suggest that security is a major concern, but not a barrier to cloud adoption, as respondents’ cloud-based workloads are expected to increase from 31 to 35 percent over the next year, and to 41 percent in three years’ time, with roughly corresponding budget increases. Neil Bunn, Chief Technology Officer at Scalar Decisions, sees a disconnect between the cloud security worries organizations have and the actions they are taking to deal with them.

    “Cloud benefits and business value become progressively more sophisticated as organizations’ experience with the cloud increases,” said Bunn. “Viewing cloud security and cloud adoption as non-severable concepts, coupled with investing in a continuous optimization approach in line with the Cloud Experience Model are key factors to achieve success in a rapidly changing marketplace of cloud based services and capabilities.”

    A survey by Trustwave earlier this year that included respondents from Canada, the U.S., U.K., Australia, and Singapore showed that over three-quarters of IT security professionals feel pressured to unveil projects before they are fully secured. In September, a report by Kaspersky showed that 77 percent of U.S. businesses, and a slightly higher share of international companies, have experienced security incidents in the past year.

    7:08p
    Why DDoS Mitigation Solutions Must Address Small-Scale Attacks
    Brought to You by The WHIR

    The massive DDoS attack against Dyn last month served as a wake-up call for many businesses. But it’s not just headline-grabbing, large-scale DDoS attacks like that one that businesses – or their service providers – should be concerned with.

    Many small-scale attacks hit networks far more frequently, and some legacy DDoS mitigation solutions are ill-equipped to mitigate them effectively, Corero Network Security COO Dave Larson says.

    These “tiny little vectors” – which Larson calls reconnaissance vectors – are often missed by legacy solutions whose thresholds for redirecting traffic are set much higher; attackers use these reconnaissance vectors to “determine who is weak, who is vulnerable, and who can be exploited,” Larson says.
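
    To make the threshold gap concrete, here is a toy Python sketch — the function names, rates, and thresholds are all invented for illustration and are not Corero’s actual logic — showing how a redirection threshold tuned for large floods can let a small reconnaissance probe pass, while a baseline-relative inline check flags it:

        # Toy model of the two detection postures Larson contrasts.
        # All numbers and names are invented for illustration only.

        def legacy_detector(pps: int, redirect_threshold: int = 1_000_000) -> bool:
            """Redirect traffic to a scrubbing center only when the rate
            crosses a very large packets-per-second threshold."""
            return pps > redirect_threshold

        def inline_detector(pps: int, baseline: int = 10_000, multiplier: int = 5) -> bool:
            """Flag any flow that exceeds a small multiple of its
            normal baseline rate."""
            return pps > baseline * multiplier

        probe = 60_000  # a small reconnaissance burst, far below flood scale
        print(legacy_detector(probe))  # False: the probe goes unnoticed
        print(inline_detector(probe))  # True: the probe is flagged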

    “Cleaning up your edge for the small attacks also has an ancillary benefit where you aren’t perceived as a vulnerable, likely victim,” he says.

    Another benefit is that IT security staff can spend less time on processing DDoS security incidents, which is part of the reason why Liquid Web recently deployed Corero’s SmartWall Threat Defense System.

    Liquid Web said that it has already seen a “dramatic improvement of service availability” after adding the Corero solution, which augments its existing system for detection and reactive mitigation.

    According to Corero, the inline protection of its SmartWall Threat Defense System (TDS) handles small-scale attacks far better, detecting and mitigating them in real time without disrupting the flow of good traffic. With legacy technology, Larson says, “there is typically a lag of anywhere from 5-10 minutes to half an hour between when the attack is launched, when the attack is detected by the system, and when the attack is effectively mitigated.”

    “In that period of time a website most undoubtedly will be down,” he says. “We don’t think that is something that online properties should tolerate anymore. The technology and the automatic proposition exists to defeat the problem instantaneously.”

    Research released by Corero in March showed the increasing trend toward these types of small-scale DDoS attacks. Larson said that these DDoS attacks negatively impact network performance and are often used as a “smokescreen” for more malicious attacks.

    “Sometimes we get wrapped around the axle in these big events like Krebs, and it’s not to say it’s not newsworthy,” he says. “But when we deploy the benefits of inline against a data center like Liquid Web we remove all of the DDoS, and 90 percent of the DDoS is below a GB.”

    “Even in the case of largescale attacks oversaturating the edge bandwidth, we are seeing networks ride those out if they have inline protection like ours,” Larson says.

    7:13p
    The NTT Data Acquisition of Dell Services is Complete. What Might It Mean for Partners?
    By The VAR Guy

    Back in March, Japan-based NTT Data, the IT services subsidiary of Nippon Telegraph & Telephone, announced its intention to acquire Dell Services, the information technology services business unit formerly known as Perot Systems. Today, the $3.1 billion deal closed.

    The new unit, to be called NTT DATA Services on an interim basis, should help NTT advance its goal of becoming one of the top five global IT services providers. The company already had a large share of this market, with about $16 billion in global revenue.

    “As a top 10 global business and technology services provider, we have been aggressively expanding our international business, boldly pursuing growth as a ‘Global IT Innovator,’” said Toshio Iwamoto, president and CEO of NTT DATA Corporation.

    NTT has made just over 20 acquisitions in the last five years, many of them focused on ‘soft’ IT services such as consulting and support. The acquisition of Dell Services is its largest to date.

    “There are few acquisition targets in our market that provide this type of unique opportunity to increase our competitiveness and the depth of our market offerings,” said NTT Data CEO John McCain, who will lead the combined business.

    In addition, the deal significantly increases NTT’s North American presence. In the U.S. alone, NTT says projections point to a $5 billion IT services company. Across the globe, it will gain about 28,000 employees from Dell Services, most of whom are based in North America or India.

    The two companies have very little customer overlap, allowing the new entity to broaden its service offerings in a market that increasingly focuses on providing end-to-end solutions for customers. NTT specializes in advisory services and application innovation, as well as some infrastructure services. Dell Services brings the company secure infrastructure, cloud services, and DTO services.

    The new IT and BPO powerhouse will aggressively target verticals such as healthcare, financial services, insurance, and the public sector. This wider range of full-lifecycle capabilities should be of interest to U.S. channel partners looking to decrease the amount of “piecemealing” they have to go through to offer comprehensive solutions to their customers.

    “With our combined expertise, services and resources, there’s simply no better team to enable clients to stay a step ahead in highly competitive markets,” said McCain. “We have extremely complementary portfolios. More importantly, both companies have always had an unwavering dedication to client success. Our clients and employees are very enthusiastic about the opportunities this acquisition creates.”

    Recent market-share gains by offshore providers and pure-play cloud providers such as AWS, combined with notable companies such as HP spinning off their services businesses, have created a shifting ecosystem that NTT hopes this acquisition will help it better navigate. Dell Services data centers in the U.S., U.K., and Australia will join NTT’s 230 data centers around the globe to significantly expand its infrastructure platform.

    “Welcoming Dell Services to NTT DATA is expected to strengthen our leadership position in the IT Services market and initiates an important business relationship with Dell,” said Iwamoto.

    For its part, the deal gave Dell some much-needed cash in advance of its $67 billion acquisition of EMC, which closed earlier this year.

    7:38p
    CenturyLink Sells Its Colo Business to Fund Level 3 Deal

    In a move that was anticipated before, during, and after the $34 billion deal to acquire Level 3 Communications it announced earlier this week, CenturyLink — America’s third-largest phone service provider — now says it is selling its colocation business, which includes some 57 data centers in North America, Europe, and Asia.

    The move will net CenturyLink around $2.15 billion in cash plus $150 million in stock in a new joint venture company, being formed by buyers Medina Capital and BC Partners.

    The new joint venture has yet to be named, although Data Center Knowledge has learned it will become a collection of several IT service providers, including cybersecurity services — not just a colo provider.  Co-buyer Medina Capital is based in Coral Gables, Florida.  According to records, Medina began funding security and analytics services businesses in 2012, and has made only two acquisitions since that time, both in 2013.

    BC Partners, on the other side of the scale, is a three-decade-old London-based private equity firm specializing in what the equities industry calls “defensive growth” plays: assets that are non-cyclical in nature, and whose gains come more from revenue returns than from capital growth.

    The Value of Savvis

    What these new venture partners are getting for their $2.15 billion plus stock is Savvis, which CenturyLink acquired in 2011 in a $2.5 billion cash-and-stock deal.  (It’s been reported that CenturyLink was seeking $2.5 billion for today’s deal, but settled for $2.15 billion.)  Collectively, Savvis boasts more than 2.6 million square feet of floor space worldwide, serving 3,500 customers connected by about 401,000 miles of cable.  Its expansions over the past few years in Canada earned praise from IDC MarketScape just last month.

    Savvis’ secret, as Data Center Knowledge reported in November 2015, is that it leases most of the properties in its portfolio, except for those acquired during the 2011 Qwest deal.  It was a way to keep costs down, but as CenturyLink CEO Glen Post told analysts during a quarterly earnings call that month, that strategy wasn’t paying off.

    “For us to really grow the colo business, it requires really more CapEx than we’ve been willing to put in the business,” said Post [our thanks to Seeking Alpha for the transcript].  “We said that, up-front, that we weren’t going to invest heavily in the data centers — that we felt they were a synergistic asset that we could grow with the rest of our business.  However, with the valuations. . . we think our cash flow could be used for investments that can drive higher returns, basically, and drive better shareholder value.”

    Last Monday, Post told analysts that growth for his firm’s colo business was basically flat — with revenue declining 0.2 percent annually.  This while his content delivery network service was growing 15 percent annually, and managed security services 16 percent.  When asked whether Level 3’s data centers would add anything to CenturyLink’s portfolio, Post sloughed off the question as not “a major consideration of ours.”

    The Security Angle

    For its part, the new and unnamed joint venture’s strategy will be to combine Savvis’ data centers with a treasure trove of established security services, all of whose deals were also announced today:

    • Addison, Texas-based Brainspace is one of the joint venture’s acquisitions, for an undisclosed sum. It specializes in machine learning over textual documents in multiple languages, building a collective database of conceptual connections deduced from that text.
    • Miami-based Easy Solutions, an electronic fraud protection service, is being acquired by the joint venture, again for an undisclosed sum. That company’s engineers have recently been applying an anomaly detection algorithm called the isolation forest to deduce behavior patterns from a system, an alternative to traditional supervised machine learning (see the sketch after this list).
    • Cryptzone, based in Waltham, Massachusetts, is a known player in the security space. It’s a leading proponent of a concept called the software-defined perimeter — a way to render virtualized networks more secure by concealing their “endpoints” from networks on the outside.
    • Scotts Valley, California-based Catbird is an advocate of a concept called microsegmentation — effectively rendering small components of a network asset as belonging to one domain without housing them all within the same volumes, VMs, or resources. It’s a way of breaking networks into very small pieces, distributing them everywhere, and yet having them all still work as though they were seamlessly woven together.
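
    As a minimal sketch of the isolation forest technique mentioned above — using scikit-learn’s public implementation on invented session data, not Easy Solutions’ actual pipeline, whose features and code are not public:

        # Minimal isolation forest sketch on invented data. The feature
        # columns (requests per minute, average payload bytes) are
        # hypothetical; Easy Solutions' real features are not public.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(42)
        normal = rng.normal(loc=[60, 500], scale=[10, 80], size=(500, 2))
        fraud = rng.normal(loc=[300, 4000], scale=[30, 300], size=(5, 2))
        sessions = np.vstack([normal, fraud])

        # Isolation forests isolate outliers in fewer random splits than
        # inliers require, so no labeled fraud examples are needed.
        model = IsolationForest(n_estimators=100, contamination=0.01,
                                random_state=0)
        labels = model.fit_predict(sessions)  # -1 = anomaly, 1 = normal
        print("flagged sessions:", np.where(labels == -1)[0])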

    Clearly, the new group’s plan is to use scientific security means as a differentiator for its colocation services, offering more than just a lot of space and reliable power.  But since Savvis was leasing a lot of that space from asset owners, it will be interesting to see how the new group plans to deploy all these new, and very high-minded, services on systems it does not (yet) own.

    On Friday, Medina Capital founder and managing partner Manuel D. Medina issued this statement:  “We’re combining a worldwide footprint of best-in-class data centers with cutting-edge security and analytics services, integrating these capabilities into a global, highly secure platform that meets today’s critical enterprise, public sector and service provider demands for cybersecurity, colocation and connectivity.  Our customers will be able to leverage a suite of on-net security and advanced analytics services deeply integrated into the data center.”

    In an interview with Boston Business Journal published Friday, Cryptzone CEO Barry Field said, “We don’t see anybody else in the infrastructure world who’s coming at it from a cybersecurity perspective.”

    The new group expects to close its deal with CenturyLink in the first calendar quarter of 2017, which is not a lot of time to execute this deep integration plan.  CenturyLink executives will likely provide much further comment on the deal next Thursday, during a presentation at a Wells Fargo conference on media and telecom in New York City.

    10:48p
    Equinix VP: New Power Models Make Open Source Necessary

    The 100 Gbps router and transponder device called Voyager, announced last Tuesday, may be recorded in history as the first such device ever to have been created by a social network and a colocation provider.  Facebook’s and Equinix’ joint laboratories are Equinix’ SV3 and SV8 data centers in Silicon Valley, two of the company’s prime locations.

    In an exclusive interview with Data Center Knowledge, Dr. Kaladhar Voruganti, Equinix’ vice president for technology innovation and formerly an IBM researcher, told us his company’s participation in Facebook’s Open Compute Project, and its networking offshoot Telecom Infra Project (TIP), is not some little experiment on the side.  It’s a campaign necessitated by a perfect storm of conditions: the status of the cloud services market, the architecture of servers, and the laws of physics.

    “How you design a data center to support this type of new hardware, is different than how traditional hardware is supported,” said Dr. Voruganti.

    “For example, traditional hardware relies on power from a centralized UPS system in the data center.  However, with the new hardware that is coming in — which is the TIP/OCP hardware — the power distribution has a decentralized model where the batteries are on the racks.  So how you do the AC-to-DC power conversion is different in this model, than how you would do it for traditional hardware.”
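
    Some back-of-envelope arithmetic makes the difference concrete. The per-stage conversion efficiencies below are illustrative assumptions, not Equinix, Facebook, or OCP figures:

        # Hypothetical comparison of the two power-delivery chains
        # Voruganti describes. All efficiency numbers are assumptions.
        IT_LOAD_KW = 100.0

        # Centralized double-conversion UPS:
        # grid AC -> rectifier -> battery DC -> inverter -> server PSU (DC)
        centralized_eff = 0.94 * 0.94 * 0.94  # three conversion stages

        # OCP-style rack power: one rectifier feeds a DC bus, and the
        # batteries float on that bus at the rack.
        rack_eff = 0.95  # a single conversion stage

        for name, eff in (("centralized UPS", centralized_eff),
                          ("rack-level battery", rack_eff)):
            draw_kw = IT_LOAD_KW / eff
            print(f"{name}: {draw_kw:.1f} kW drawn, "
                  f"{draw_kw - IT_LOAD_KW:.1f} kW lost as heat")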

    The Voyager and the Jeep

    Facebook’s and Equinix’ Voyager open source network router.

    For Equinix to support the lower-cost, highly modular OCP class of server hardware, it needed a more adaptable data center model — one that aligned better with modular devices using very localized power sources.  Alternatively, Equinix could have stuck with traditional models for all its data centers going forward.  But that would have made it more difficult, Voruganti said, for customers to move off of their traditional workloads onto a new class of more scalable, containerized workloads that use distributed servers and more automated orchestration.

    There’s a gathering multitude of hardware providers in the OCP space.  At the same time, new Equinix customers are demanding faster provisioning for their services, he told us.  That makes it incumbent upon Equinix to work with a broad number of hardware vendors simultaneously, on behalf of customers who no longer have the option of waiting weeks for their data centers to be provisioned.

    The implication here is that Equinix (or a large colo provider like Equinix, of which admittedly there aren’t all that many) must have a bigger stake in negotiations.  OCP gives this entity a role that an ordinary customer could never have had before: an architectural role.  Voruganti denied that Equinix wants to assume the role of “calling the shots;” nevertheless, by participating with Facebook, it is definitely assuming a place for itself at the table.

    Not too often in history has the customer been capable of stepping into the architect’s role, or been compelled by economic or other circumstances to do so.

    The original Jeep prototype as constructed by Bantam Motors, 1940. [Photo in the public domain]

    But there is one historical parallel, whose repercussions have affected manufacturing even to this day:  Just prior to World War II, when the War Department was preparing to help arm the Allies in Europe, the U.S. Government awarded contracts to Willys-Overland and Ford Motor Co. for prototyping, and eventually building, the concept we now call the Jeep.  The customer specified the urgency of the timeframe, and had the purchasing power to make design decisions.

    Today, open source has given the power to major corporate customers — what Dr. Voruganti classifies as the hyperscale users — to lay down specifications for servers and racks.  Microsoft and Google are among those users.  Because their specs are now becoming the standard for massive data centers, the networking specifications must follow suit, keeping up with newer, less traditional server architectures.

    Here is where Equinix seized the initiative: making itself available to Facebook (which leads OCP) as both the architect and the laboratory for a new kind of modular network router — one that functions more like SDN, but is manageable like hardware because it is hardware.  If software can be made smarter, Voruganti believes, hardware can be made faster.

    “If you have a centralized UPS, it takes up to a quarter of your data center space, and you need very specialized, skilled operators for the batteries,” he explained.  “So all these hyperscalers — Facebook, Microsoft, Google — said, ‘Let’s redesign this and take a fundamental look differently at how this is done.’  Then they started to publicize their designs for how they’re doing this.”

    Customer Demand

    Once the Tier-2 cloud service providers got wind of the OCP’s first specifications, they wanted a piece of the action.  These are Equinix’ customers, as Voruganti described them; they want freedom from the hyperscalers’ realm, but they don’t want to own and manage their own data centers.  They want the benefits of OCP’s advanced power distribution model.  And to the extent that Equinix couldn’t answer their demands, one gets the impression they weren’t very happy.

    “If that’s where the puck is going,” said the Equinix VP, “we want to be there.”

    Voruganti believes in the benefits of disaggregation from a network architecture perspective.  He’s noticed that it’s enabled software developers to enter what had historically been a hardware space, staffed with physicists and guarded by lawyers.  But he knows that customers don’t purchase disaggregated services.

    “All of these disaggregated components need to be aggregated somewhere,” he said.  “And in many cases, the management software is being provided as a SaaS model.  So we want to make sure that the ecosystem for the disaggregated model actually occurs and resides at Equinix.”

    Granted, when new customers become Equinix tenants, they don’t all flock to these new and disaggregated systems.  Voruganti acknowledged that they actually demand more conventional systems first, because they’ll be transitioning their existing, conventional (“legacy”) workloads into leased systems.  Those customers then want Equinix to be the one managing the transition to newer, more modular, more cost-effective systems that handle newer workloads.  If the risk in this software transition can be specified and quantified, they believe, it can be more efficiently marshaled.

    Rack Space, As It Were

    This creates a new and unanticipated problem, having to do with consistency.  Any major colo provider, but certainly Equinix most of all, must provide customers with consistent service levels across all of its data center facilities.  Equinix can’t afford to render its SV3 and SV8 facilities as some kind of “hard-hat zone” for customers; it has to maintain the same service levels there as for all its other facilities, even as they receive Voyager routers for the first time.

    The company’s strategy for addressing this potential customer headache is to equip its servers for classes of use cases on a per-rack basis.

    There will be a small window of time, Dr. Voruganti told us, in which certain metropolitan areas may observe service variation.  But the goal is to immediately ameliorate those effects, staggering some of the variations across particular racks, and assigning preferred use cases to those racks.  “Based on what the customers want, we will give them the proper types of racks and proper types of data center solutions,” he said.

    “We are going to help come up with a deployment model and an operational model, but we will not be doing it alone,” said the Equinix VP.  “We will be doing it with the CSPs, the MSPs, the hardware and software vendors, working as part of a consortium.  At the end of the day, the CSPs and MSPs are the major guys deploying their stuff in our data centers, and other software vendors deploying their software need to agree to deploy to that operational model.  And the community as a whole needs to make sure they’re all in agreement — that the model will operate and will be supported.  I think we’ll have a major role to play, but I would not say that we will be calling the shots.”

    It’s a sweet sentiment.  But Microsoft’s and Google’s contributions to the OCP have already dramatically altered every aspect of the server business, down to the semiconductor level.  Like it or not, Equinix is now occupying a seat at the same level of the networking table.
