Data Center Knowledge | News and analysis for the data center industry
 

Monday, June 12th, 2017

    12:00p
    HPE’s Gen10 Servers Will Have Security Drilled into Silicon

    Hewlett Packard Enterprise unveiled Gen10 at Discover in Las Vegas last week, the first major upgrade to its ProLiant line of servers since Gen9 was released in 2014. While the release of a new server is generally not big news in this age of commodity hardware, this one is more notable for the security features built into its hardware.

    The announcement was made by Alain Andreoli, head of HPE’s infrastructure group, with no shortage of hyperbole: “We have definitively created the world’s most secure industry standard server.”

    The security feature works at the firmware level, utilizing custom HPE silicon.

    “In each Gen10 server we have created a unique individual fingerprint for the silicon,” Andreoli explained. “Your server will not boot unless the firmware matches this print — it is just locked end to end.”

    This silicon-level approach to security is reminiscent of the approach used by Google, which has designed custom security chips for servers its cloud runs on inside Google data centers.

    Read more: Here’s How Google Secures Its Cloud

    According to HPE, the technology from the silicon to the firmware is proprietary. A mismatch is only possible if the firmware has been altered.
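
    HPE has not published the mechanism in detail, but the general pattern of a silicon root of trust can be shown with a minimal, hypothetical sketch: a fingerprint (hash) of the firmware image is checked against a reference value anchored in hardware before the server is allowed to boot. The reference digest and function names below are placeholders, not HPE's implementation, and real systems verify cryptographic signatures in hardware rather than in software like this.

        import hashlib

        # Hypothetical illustration of a silicon root of trust check.
        # The reference digest stands in for a value anchored in custom silicon.
        SILICON_REFERENCE_DIGEST = "<fused-in-at-manufacture>"  # placeholder

        def firmware_digest(image_path: str) -> str:
            """Hash the firmware image the way the verifier would."""
            h = hashlib.sha256()
            with open(image_path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        def allow_boot(image_path: str) -> bool:
            """Boot proceeds only if the firmware matches the silicon-anchored fingerprint."""
            return firmware_digest(image_path) == SILICON_REFERENCE_DIGEST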

    In addition, the servers have another level of built-in security protection that utilizes technology the company gained when it acquired the behavioral security analytics startup Niara earlier this year.

    “We have embedded proactive detection and recovery,” Andreoli said. “Your server has been turned into your own active spy. Every day it scans millions of lines of code to detect any potential malware. Then we decided to apply advanced machine learning to identify any malicious behavior. You can think of it this way: The system endlessly trains itself and learns again and again. It analyses patterns, identifies suspicious activity, and informs you if there is a threat so you don’t have to be paranoid anymore.

    “Finally, it’s all about the life cycle of the data. Security’s a long journey. We have even planned for the grave. When your server is being disposed of, its embedded data cannot be reconstructed or retrieved any longer. We protect it forever.”

    “This means that not only do we have the most secure industry standard servers,” Andreoli said, “but also that none of our competitors will be possibly able to catch up.”

    HPE’s competitors might have different ideas.

    In addition to these security features, the new servers will offer Scalable Persistent Memory with terabyte capacity and include another feature that Andreoli called “intelligent system tuning.”

    “What this does is optimize a new capability that Intel CPUs will have to tune their clock speeds for different levels of performance.” This will evidently allow the server to match workload profiles, boosting overall performance.
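
    HPE did not explain the mechanics in depth at the event. As a rough, generic illustration of matching clock behavior to a workload profile, the sketch below maps hypothetical profile names to Linux cpufreq governors, one standard OS-level way to trade frequency against power; it is not HPE's intelligent system tuning or the Intel silicon capability it builds on.

        from pathlib import Path

        # Hypothetical sketch: map coarse workload profiles to Linux cpufreq governors.
        # Generic OS-level illustration only; writing these sysfs files requires root.
        PROFILE_TO_GOVERNOR = {
            "low-latency": "performance",  # hold cores at high frequency
            "balanced": "schedutil",       # let the scheduler drive frequency
            "power-saver": "powersave",    # favor low clocks
        }

        def apply_profile(profile: str) -> None:
            governor = PROFILE_TO_GOVERNOR[profile]
            for gov in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_governor"):
                gov.write_text(governor)

        # Example: apply_profile("low-latency")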

    The servers will also be available with a pricing option that will allow users to scale up or down while only paying for what they use. Current users of Gen9 servers will be able to “upgrade to Gen10 with no upgrade in your payments.”

    The servers are expected to be available this summer.

    3:00p
    Infinera Doubles Data Center Interconnect Throughput

    In a bid to maintain its market share lead, Infinera has introduced Cloud Xpress 2, new 100 Gigabit Ethernet data center interconnect (DCI) equipment that offers much faster speeds for cloud, colocation, and communication service providers.

    The new version of Cloud Xpress, which became generally available this month, delivers 1.2 terabits per second (Tb/s) of throughput in a single rack unit, more than twice the bandwidth of the original product, which reached speeds of 500 gigabits per second (Gb/s).

    Analysts say Cloud Xpress 2 will enable data center operators to keep up with their ever-increasing bandwidth demands, while giving Infinera an opportunity to stay competitive as a growing number of networking companies try to grab a piece of the emerging, but fast-growing market.

    When Infinera first released Cloud Xpress in late 2014, it essentially ushered in a new product category that provided technology that internet content providers, cloud providers, and other data center operators craved: small and simple-to-use purpose-built optical networking equipment for interconnecting multiple data centers in a metro area.

    “Since the release of the original Cloud Xpress, there have been a number of new entrants in the market and announcements of competitive products expected to be available in 2017, so there is a window for Infinera to get this new product in the hands of customers – both to secure its existing market leading position and potentially to expand it,” said Tim Doiron, principal analyst of intelligent networking at ACG Research.

    The market for small-form-factor optical data center interconnect appliances is one of the fastest growing segments in optical networking, with sales expected to grow from $281 million in 2016 to $1.6 billion in 2021, Doiron said. Thanks to its first-mover advantage, Infinera is the market share leader, but competitors such as Cisco and Ciena have produced their own products in the space and now rank second and third in market share, he said.

    For example, Cisco sells the NCS 1002, which provides 2 Tb/s of throughput in a 2RU chassis, while Ciena has announced plans to ship Waveserver Ai, which will provide 2.4 Tb/s of throughput in a single rack unit.

    Andrew Schmitt, founder of market research firm Cignal AI in Boston, agrees that it was critical for Infinera to update Cloud Xpress. That’s because customers that purchase data center interconnect products in this category have zero vendor loyalty and are willing to switch suppliers every 12 to 18 months for the latest state-of-the-art equipment, he explained.

    The companies that use this equipment, such as cloud and colocation providers, can easily swap out components and pursue what’s called a “disaggregated optical network,” where they buy products from multiple vendors and have them co-exist on the same network, he said.

    “It’s a big announcement,” Schmitt said in reference to Infinera’s release of Cloud Xpress 2. “They invented this category. They are the big gorilla and have come under attack, and this is their response.”

    Infinera executives say Cloud Xpress 2 allows its customers to tackle the massive scaling challenges they have and that large internet content provider networks have already deployed the technology.

    Cloud Xpress 2, built using Infinera’s Infinite Capacity Engine, not only features increased throughput but also increases density by 4.8 times and uses half the power of its predecessor.

    For example, Cloud Xpress 2 can reach 6 Tb/s in 5 RU, while using only 0.57 watts of power per gigabit of bandwidth. In contrast, the predecessor reaches 6 Tb/s in 24 RU and uses one watt per gigabit of bandwidth, says Jay Gill, Infinera’s principal marketing manager.
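
    Those figures are internally consistent, as a quick back-of-the-envelope check shows (the inputs are Infinera's quoted numbers; the calculation itself is only illustrative):

        # Sanity check of the quoted density and power figures.
        cx2_tbps, cx2_ru, cx2_w_per_gb = 6, 5, 0.57   # Cloud Xpress 2
        cx1_tbps, cx1_ru, cx1_w_per_gb = 6, 24, 1.0   # original Cloud Xpress

        density_gain = (cx2_tbps / cx2_ru) / (cx1_tbps / cx1_ru)
        print(f"Density gain: {density_gain:.1f}x")                            # 4.8x
        print(f"CX2 power at 6 Tb/s: {cx2_tbps * 1000 * cx2_w_per_gb:.0f} W")  # 3420 W
        print(f"CX1 power at 6 Tb/s: {cx1_tbps * 1000 * cx1_w_per_gb:.0f} W")  # 6000 W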

    One of the product’s biggest advantages is that it’s easy to install and configures automatically, Gill says. In fact, one Cloud Xpress appliance can connect to another data center 130 kilometers away and deliver 1.2 Tb/s of capacity on a single fiber pair without having to add an external multiplexer or external amplifier, he says.

    Besides content and cloud providers, Infinera’s target audience for Cloud Xpress 2 includes traditional communications service providers, enterprises, and even educational and research networks, Gill says.

    5:50p
    How Software-Defined Storage Can Unify Your Infrastructure

    Charles Foley is SVP of Talon.

    Today, more than ever, information is a critical resource. Not only are the daily operations of most companies data driven, but the insights derived from big data analytics can uncover entirely new business opportunities that otherwise would be overlooked. That’s why the amount of data corporations generate and consume is growing at exponential rates.

    Yet for many companies, it’s basically impossible to get a comprehensive view of their data assets. That’s because information is scattered across the organization. Some, perhaps the majority, may be in the main corporate data center. But a lot of critical information resides in remote locations, or even on the desktop or laptop computers of individual employees.

    In many cases this fragmentation is the result of what’s called “shadow IT,” a practice where individual employees, or even whole departments or remote and branch offices (ROBOs), take meeting their IT needs into their own hands. Rather than relying on a centralized IT department, they make their own arrangements. That may mean that each ROBO location has its own servers and storage units and does its own data backups. Or workers may decide on their own to use a cloud storage service like Dropbox, or even to store critical company data on a USB memory stick they keep in a desk drawer.

    Even when the data is contained within a corporate or remote data center, information produced or used by different applications may be siloed, restricted to servers and storage units that are physically dedicated to specific workloads, and inaccessible to others.

    This information fragmentation, whether due to shadow IT or to the data silos that so easily arise when different business units each develop IT solutions to meet their own particular needs, can have a severe impact on an organization’s ability to take full advantage of its data resources.

    Why Data Fragmentation Is a Critical Issue

    Corporate data seems to follow an IT version of the second law of thermodynamics – unless deliberately constrained to do otherwise, data fragmentation within an organization’s IT infrastructure will automatically increase.

    For example, as a company grows and sets up remote locations, it’s natural for each site to take steps to secure the IT services required to meet its own peculiar needs. The result is often that the organization’s ROBOs each install and manage their own network, server, and data storage resources. Under those circumstances, securely sharing data with other locations, or with the home office, so that it is available across the organization in near real time can be a difficult, costly, and often unreliable process.

    When that process fails (or is not in place at all), critical information remains segregated in its own enclave, inaccessible by others in the company who may urgently need it. This can have a negative impact on decision-making at every level of the organization, since leaders have access to only limited portions of the available information. As David Rae, CEO of Allied Global, a call-center services company that had to deal with its own data silos, says, “We were probably making more decisions based on intuition or gut feeling than solid facts.”

    Other issues caused by data fragmentation include:

    • Data integrity exposures: When users in different locations each make their own changes to what is supposed to be a common data set, how are those updates propagated throughout the organization, and which of the differing local copies of that data is the authoritative version? (A sketch illustrating how such divergence can be detected follows this list.)
    • Data insecurity: Each time a user or location makes their own non-standard (and usually far from expert) provisions for data storage and backup, the potential for information to be lost or compromised is increased.
    • Inefficient use of resources: The server and storage resources associated with siloed data are typically underutilized, as each installation must overprovision to ensure it can always meet local capacity demands. The result is that across the organization, more CapEx and OpEx spending than would otherwise be required must be devoted to supporting the IT infrastructure.
    • Inefficient use of staff: When business-critical information is fragmented, some employee (or IT managed services vendor) must devote time and attention to supporting each island of data.
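
    As a concrete illustration of the data integrity exposure above, the hypothetical sketch below compares content hashes of what is supposed to be the same file held at different sites; more than one distinct hash means the copies have silently diverged and someone has to decide which version is authoritative. The site names and paths are invented for the example.

        import hashlib
        from pathlib import Path

        # Hypothetical example: detect divergent copies of a "shared" file across sites.
        SITE_COPIES = {
            "headquarters": Path("/data/hq/forecast.xlsx"),
            "robo-chicago": Path("/data/chicago/forecast.xlsx"),
            "robo-berlin": Path("/data/berlin/forecast.xlsx"),
        }

        def digest(path: Path) -> str:
            return hashlib.sha256(path.read_bytes()).hexdigest()

        digests = {site: digest(path) for site, path in SITE_COPIES.items()}
        if len(set(digests.values())) > 1:
            print("Copies have diverged; which one is authoritative?", digests)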

    How Software-Defined Storage Eliminates Data Fragmentation

    The key to minimizing information fragmentation is consolidating the organization’s data into a single centralized instance. Data need no longer be stored locally in ROBOs or in some shadow IT solution. Instead, all users, wherever located, are given concurrent access to the same centrally located data repository. The entire process is managed through the use of Software-Defined Storage.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    6:00p
    Report: Facebook to Move WhatsApp from IBM Cloud to Own Data Centers

    Brought to You by The WHIR

    Facebook is planning to move its WhatsApp messaging service from IBM’s cloud to its own data centers, according to a report by CNBC.

    WhatsApp, with its 1.2 billion users, was at one time one of IBM’s top five cloud customers in terms of revenue, spending $2 million a month with IBM. It is no longer in the top five, according to CNBC sources.

    The move is not necessarily surprising; when Facebook acquired WhatsApp in 2014, it was already in the process of migrating Instagram from AWS to its own data centers, according to the report. IBM told CNBC that it is “completely natural for Facebook to seek synergies” across its business.

    In an off-the-record case study, IBM said that WhatsApp runs across more than 700 servers split between data centers in San Jose, California, and Washington, D.C., CNBC reports.

    With roughly twice as many users as Dropbox, it seems that WhatsApp’s cloud bill could have been hefty. Facebook also has realized a lot of efficiencies within its own infrastructure through custom-designed servers, switches, power supplies and UPS units. [For more on Facebook data centers, check out the Data Center Knowledge Facebook Data Center FAQ]

    Companies like Dropbox have been moving workloads from the public cloud to on-premises infrastructure, citing cost savings, while continuing to use the public cloud where it makes sense as part of a hybrid approach.

    This article originally appeared on The WHIR.

    7:31p
    Managed Services and Cloud Hosting: What the Leading Cloud Hosts Offer

    Brought to you by MSPmentor

    From Rackspace to AWS to Azure, there’s no shortage of cloud hosting platforms available today.

    One thing that sets them apart is the degree of managed services available from each cloud host.

    This is a key differentiator that MSPs need to understand when building a managed service offering.

    All of the major cloud-hosting platforms provide the same basic thing: Cloud-based infrastructure that organizations can use to run physical and/or virtual servers that host their workloads.

    The details of the hosting plans, prices and features of each platform vary, but that’s fodder for a separate article.

    This article’s goal is to compare the extent to which managed services are built into each major cloud platform, how easy it is to obtain third-party managed services and which cloud platforms are most in need of additional managed-service help.

    Cloud Platforms Compared

    Toward that end, here’s how cloud hosts stack up on the managed services front:

    Amazon Web Services (AWS)

    AWS is probably the best known cloud host in the market today.

    It has a built-in Managed Services feature, but what AWS calls Managed Services is actually just an automation tool.

    That said, because AWS has been around for so long, the actual managed services market around AWS is already very crowded.

    AWS is an important cloud platform to support if you want to build a comprehensive MSP business that covers all cloud vendors.

    But if you’re trying to build a niche MSP offering based on cloud hosting, AWS isn’t a good place to start.

    The AWS managed services market is already saturated.

    Microsoft Azure

    Azure is also a well-established cloud-hosting platform.

    Its features and functionality mirror those of AWS, to a large extent.

    Azure doesn’t have any built-in managed services offering, and it’s somewhat harder to find third-party managed services support for Azure than it is for AWS.

    Still, the Azure managed services market is pretty mature.

    Rackspace

    Rackspace, which began as a cloud infrastructure company, has shifted gears and now focuses heavily on managed services.

    Its most recent move in this vein was its acquisition of TriCore.

    As a result, Rackspace is not a good cloud host to focus on if you want to build an MSP offering.

    Rackspace already provides managed support for its infrastructure.

    Indeed, Rackspace even offers managed services for other clouds, including AWS and Azure.

    This means Rackspace is now a competitor with MSPs in all areas of the cloud.

    DigitalOcean

    DigitalOcean, which markets itself as a cloud-hosting platform for developers, is not as big a cloud host as AWS or Azure, but it ranks on any shortlist of major cloud providers.

    DigitalOcean doesn’t offer managed services for its infrastructure, although third-party companies do.

    Because the DigitalOcean managed services market is smaller, it is easier for new MSPs to enter.

    Linode

    Linode is another cloud host that pitches itself as a platform for developers.

    It provides hosting on high-performance Linux servers.

    Like Rackspace, Linode has expanded its managed service offerings in recent years.

    Linode’s managed services aren’t totally comprehensive, but the company offers backups, incident management and other types of professional services.

    There is some opportunity for third-party vendors to add extra managed services around the Linode platform that are not offered by Linode itself.

    Vultr

    Vultr is a cloud host that focuses on high-performance virtual servers.

    The company doesn’t offer managed services itself, but it partners with Cloudways to provide professional services.

    Still, there is room in the Vultr managed services market for other MSPs.

    This article originally appeared on MSPmentor.

