Data Center Knowledge | News and analysis for the data center industry

Monday, March 28th, 2016

    12:00p
    Juniper CIO on Going All-In with Cloud

    When Bob Worrall joined Juniper Networks as CIO in July 2015, the multi-year transformation of the company’s IT strategy was mostly complete. Juniper had gone from an on-prem-first to a cloud-first approach to infrastructure and from operating 18 data centers around the world to a single facility in Sacramento, California, which hosts some remaining legacy applications, plus two sites that support its engineering efforts.

    DCK covered Juniper’s data center consolidation earlier.

    We caught up with Worrall earlier this month in Las Vegas, following his presentation at the Data Center World Global conference there, to get more details about Juniper’s infrastructure rethink and to ask him about some important industry trends.

    Below is a Q&A, in which the Juniper CIO reflects on the experience of the company’s switching from a “Why cloud?” attitude to a “Why not cloud?” one, his views on commodity hardware, which is eating into traditional sources of revenue for vendors like Juniper, the rise of Software Defined Networking and Network Function Virtualization, as well as the aftermath of the announcement he made last year that the company had found a “backdoor” in its data center security software.

    The interview has been edited for clarity and brevity:

    Data Center Knowledge: Can you describe Juniper’s use of cloud computing today?

    Bob Worrall: The short version [of the answer] is we use cloud for everything. We have pivoted away from ‘why cloud?’ to ‘why not cloud?’ and I think that has transformed our overall approach. We don’t even consider anything on-prem anymore. It’s just not part of the conversation. If someone even suggested that, they would be looked at as someone from a different planet or something; it’s just not part of the DNA of the company.

    We use cloud in all sorts of ways, from engineering to corporate applications to some custom applications, and everything in between. All of the flavors of it.

    DCK: Can you share what your cloud services spend is?

    Bob Worrall, CIO, Juniper Networks

    BW: I can’t; that’s an internal number. But the overall spend for IT in the last five years has not materially changed. The mix, obviously, has shifted: from depreciation on equipment and people to cloud services. We’ve harvested savings in one area and applied those monies to others.

    DCK: So, much of the spending has shifted from capital to operational cost?

    BW: Most of it. We still have a lot of people as well, but of the people we have, fewer and fewer are on the infrastructure side. We’ve invested more in roles like vendor management, compliance, and security. We still have a large development team doing custom development, applications and so on, for internal needs.

    DCK: Can you describe the tipping point when Juniper decided to go all-in with cloud?

    BW: Back in 2011, with some early wins in cloud adoption of [Microsoft] Office 365 and so forth, I think the company just realized that cloud was real, if you will. Real in the sense that the savings were real, the operational benefits were real, the agility was real. And so there was a fundamental change in the attitude: Let’s go chase it. Let’s go big.

    I think that was further supported by the realities of the customers we began selling into back then: large cloud providers who were running their businesses on all of these technologies and doing so very effectively. So, if it worked for them, it could certainly work for us.

    DCK: Many of your biggest customers, the big cloud providers, have switched to commodity hardware. Have you done the same?

    BW: On the hardware side we have various flavors of commodity hardware, up through name-brand providers, and everything in between. Really, it depends. In areas like engineering, we’re more apt to go with a white-box approach to things, but for some applications that have unique software requirements, or other requirements, we might choose a more traditional supplier.

    DCK: Can you elaborate on that last point, that some software requires hardware from traditional vendors?

    BW: Security, or DR, or high availability, or some of those kinds of requirements for the applications. It’s not that you can’t achieve those with commodity hardware, but in some cases it might just be easier to buy a solution that includes some name-brand server providers.

    DCK: Are services a big part of that calculation?

    BW: Services are a big part of it. We have applications that support our customer service team, for example, with points of presence all around the world, so it’s critical that we be able to get depots, spares, and those kinds of things to many locations. Some commodity suppliers may not have depots and points of presence in the Far East or Eastern Europe, so in some cases we’ll go with name-brand providers.

    3:00p
    How to Choose a Microsegmentation Solution to Protect VMs

    Tim Liu is Chief Technology Officer for Hillstone Networks.

    To ensure security, traditional networks are usually divided into “security zones,” where groups of assets such as servers or desktops are placed on different network subnets or segments. Security policies and inspections are then applied to the traffic between these security zones. The zones can be set up along departmental boundaries (e.g., R&D, finance), by function (e.g., web servers vs. databases), or by security requirement (e.g., DMZ). This physical segmentation creates regions where a breach in one security zone will not quickly spread elsewhere, and it was the basis of security enforcement before today’s cloud age.
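
    To make the zone model concrete, here is a minimal sketch, in Python, of how zone-based control is often expressed: zones are defined by IP subnets, and a small rule table governs which zone-to-zone flows are permitted. The zone names, subnets, ports, and rules below are illustrative assumptions, not taken from any particular product.

    ```python
    # Minimal sketch of traditional zone-based segmentation (illustrative only):
    # zones are IP subnets, and a rule table governs traffic between zones.
    import ipaddress

    ZONES = {
        "dmz":     ipaddress.ip_network("192.168.100.0/24"),
        "web":     ipaddress.ip_network("10.0.1.0/24"),
        "db":      ipaddress.ip_network("10.0.2.0/24"),
        "finance": ipaddress.ip_network("10.0.3.0/24"),
    }

    # (source zone, destination zone, destination port) tuples that are permitted;
    # all other inter-zone traffic is dropped.
    ALLOWED = {
        ("dmz", "web", 443),
        ("web", "db", 3306),
    }

    def zone_of(ip):
        """Return the name of the zone whose subnet contains the address, if any."""
        addr = ipaddress.ip_address(ip)
        for name, net in ZONES.items():
            if addr in net:
                return name
        return None

    def permitted(src_ip, dst_ip, dst_port):
        """Allow intra-zone traffic; check the rule table for inter-zone traffic."""
        src, dst = zone_of(src_ip), zone_of(dst_ip)
        if src is None or dst is None:
            return False
        if src == dst:
            return True
        return (src, dst, dst_port) in ALLOWED

    print(permitted("192.168.100.7", "10.0.1.20", 443))  # True: DMZ -> web on 443
    print(permitted("10.0.1.20", "10.0.3.5", 22))        # False: web -> finance
    ```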

    As we already know, virtualization blurs the physical boundaries between applications and workloads; those boundaries are becoming virtual as well. And because virtual machines in the cloud are dynamic, the boundaries are dynamic too, changing as new VMs are created, moved, or terminated. For a long time, companies have been looking for a technology that provides the same level of granularity for security control in the cloud, so they can effectively control the east-west traffic in today’s virtualized data centers.

    Microsegmentation is now that technology. It uses software to create and maintain security boundaries between virtual machines. The virtual machines can reside on the same server or on different servers, and they can be grouped as needed into logical segments, each isolated from the others. Access control can be applied and security inspections can be performed between these segments.
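
    The contrast with subnet-bound zones can be sketched in the same spirit: in the hypothetical example below, policy is attached to labels that travel with each VM, so a rule keeps applying no matter which host or subnet the VM lands on. The VM class, labels, and rules are assumptions made for illustration, not any vendor’s API.

    ```python
    # Minimal sketch of label-based microsegmentation (illustrative only):
    # policy follows the VM via its labels rather than its subnet, so two VMs on
    # the same host can sit in different segments, and a migrated VM keeps its policy.
    from dataclasses import dataclass, field

    @dataclass
    class VM:
        name: str
        labels: set = field(default_factory=set)  # e.g. {"app:web", "env:prod"}

    # (source label, destination label, destination port) rules between segments.
    SEGMENT_RULES = [
        ("app:web", "app:db", 5432),
        ("app:monitoring", "app:web", 9100),
    ]

    def allowed(src, dst, port):
        """East-west check: permit only flows that match a label-based rule."""
        return any(s in src.labels and d in dst.labels and p == port
                   for s, d, p in SEGMENT_RULES)

    web = VM("web-01", {"app:web", "env:prod"})
    db = VM("db-01", {"app:db", "env:prod"})
    print(allowed(web, db, 5432))  # True, regardless of which host either VM runs on
    print(allowed(db, web, 5432))  # False: no rule permits db -> web
    ```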

    Together with network virtualization, microsegmentation offers businesses an easy migration from their physical network into the cloud by maintaining the same logical network and security functions. In addition, microsegmentation brings a new level of manageability to the data center, allowing increased visibility into east-west traffic and the interactions between VMs.

    Microsegmentation, however, is not a panacea for security problems in the cloud. For example, it does not address the security of virtualization platforms or cloud orchestration. But it does offer a very important step forward for security in the data center.

    Different solutions implement microsegmentation in several ways. Some are offered on top of Software Defined Networking (SDN) solutions; others are implemented in the endpoint VMs through workload agents. Businesses choosing such a solution need to look at the requirements of a virtualized data center, and any microsegmentation technology they choose needs to deliver on several fronts:

    • The microsegmentation technology needs to offer the same level of elasticity that the data center provides, handling both changes in the size of the physical infrastructure and changes in the workloads that run on it. It needs to support the dynamic nature of virtualized workloads and provide security for a VM throughout its life cycle. It also needs to offer the performance and latency required by demanding applications.
    • The microsegmentation solution needs to work with a diverse set of hardware and software environments. There is an advantage to using a microsegmentation technology that is decoupled from the virtualization technology, in that its security can be independent of, and in addition to, any security features the virtualization layer supports.
    • In order to provide on-demand security in the virtualized environment, it is imperative for the microsegmentation solution to support changes to security functionality without changing the infrastructure. The traffic between a source and destination can be subjected to different security functions through service chaining, as dictated by security policies. Services can be added to and removed from the chain without reconfiguring the workloads or the VMs that contain them (see the sketch after this list).
    • The microsegmentation solution needs to integrate well with cloud orchestration and avoid intrusive changes to the cloud infrastructure. The solution should strive for zero disruption to existing applications during initial installation and subsequent updates.
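
    The service chaining mentioned in the list can be illustrated with a small hypothetical sketch: each segment pair maps to an ordered list of security functions, and editing that list changes the inspection path without touching the workloads. The function names and the policy table are assumptions for illustration only.

    ```python
    # Minimal sketch of service chaining (illustrative only): a flow between two
    # segments is steered through an ordered list of security functions, and the
    # chain can be edited without touching the workloads themselves.

    def firewall(pkt):
        return pkt.get("dst_port") in {443, 5432}

    def ips(pkt):
        return b"malicious" not in pkt.get("payload", b"")

    def dlp(pkt):
        return not pkt.get("contains_pii", False)

    # Policy maps a (source segment, destination segment) pair to its chain.
    # Adding or removing a function here changes the inspection path with no
    # reconfiguration of the VMs that carry the workloads.
    CHAINS = {
        ("app:web", "app:db"): [firewall, ips],
        ("app:web", "external"): [firewall, ips, dlp],
    }

    def apply_chain(src_seg, dst_seg, pkt):
        """Run the packet through each function in order; drop on the first failure."""
        for fn in CHAINS.get((src_seg, dst_seg), []):  # unlisted pairs get no inspection here
            if not fn(pkt):
                return False
        return True

    pkt = {"dst_port": 5432, "payload": b"SELECT 1", "contains_pii": False}
    print(apply_chain("app:web", "app:db", pkt))  # True: passes firewall and IPS
    ```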

    In summary, microsegmentation offers a powerful way to add security control to east-west traffic inside virtualized data centers. Its segmentation of virtualized infrastructure offers a familiar architecture to which traditional security practices can be applied. The technology will facilitate cloud acceptance and help transition more legacy IT onto the cloud.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:51p
    NTT to Buy Dell’s Perot IT Services Business

    NTT Data, a subsidiary of the Japanese giant NTT Group, which also operates an extensive global data center network, has agreed to acquire the IT services business of Dell, the companies announced Monday.

    The buyer said it will integrate Dell Services, a business Dell gained when it acquired Perot Systems for $3.9 billion, into its IT services and business process outsourcing division. Once closed, the deal will substantially increase NTT Data’s presence in North America, an expansion tactic other NTT subsidiaries have applied consistently around the world.

    The acquisition price is about $3.1 billion, the New York Times reported, citing a regulatory filing by NTT Data with the Tokyo Stock Exchange.

    Dell is in the midst of a $67 billion acquisition of EMC, and the sale of its services business is likely a move to raise money for that deal.

    Dell Services data centers will become part of NTT’s global platform, which is hosted in about 230 data centers around the world. The Dell division has data centers in the US, UK, and Australia.

    Along with additional services capabilities and infrastructure, NTT gains a plethora of big-name customers through the acquisition. Dell Services recently trumpeted new customer wins, including American Express and Blue Cross & Blue Shield, as well as longer-standing customers such as Gap, Hilton, and Staples, among others.

    One of the new customers is Dubai Health Authority. The Dubai government recently enacted a mandatory health insurance law, according to Dell, and the company is helping DHA modernize its IT infrastructure to process health insurance claims more efficiently.

    NTT and its various subsidiaries have been investing heavily in international expansion, with particular focus on data center providers. Data center companies NTT acquired in recent years include US-based RagingWire, India’s NetMagic, UK’s Gyron, Germany’s e-shelter, and Indonesia’s PT Cyber CSF.

    NTT has spent about $550 million on acquisitions outside Japan since 2011, according to Bloomberg. Its revenue outside Japan has increased steadily over the last 10 years, going from almost none in 2006 to more than $400 million last year, according to Bloomberg.

    6:39p
    Nlyte Integrates DCIM Software with HPE’s IT Management Solution

    Nlyte Software announced the most recent integration of its DCIM software on Monday. Its data center infrastructure management platform now integrates with OneView, the IT management software by Hewlett Packard Enterprise.

    Nlyte has historically focused on integration and interoperability with other management systems, most recently with IT service management platforms, such as the popular products by ServiceNow and BMC Software.

    Integration with HPE’s OneView fills a gap for data center managers that want to manage their infrastructure holistically, from data center resources through IT.

    Read more: Who is Winning in the DCIM Software Market?

    Nlyte enables users to:

    • Automate and optimize chassis and asset placement, management, and tagging
    • Monitor infrastructure resource utilization, such as power, CPU and temperature readings for chassis and mounted blades
    • Synchronize changes in infrastructure under HPE OneView management
    • Increase rack density within a stated power envelope

    According to the software company, these capabilities will help improve planning, allocation, and billing. IT managers can tie real-time usage and utilization data to specific customers.

    It also gives greater control over changes on the data center floor.
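
    As a rough illustration of the kind of chargeback such usage data enables, the hypothetical Python sketch below rolls per-chassis power readings up to the customer that owns each asset. The inventory, readings, and rate are invented for the example; they do not reflect the Nlyte or HPE OneView APIs.

    ```python
    # Rough sketch of a power-based chargeback roll-up (illustrative only; the
    # asset inventory, readings, and rate below are invented, not Nlyte/OneView data).
    from collections import defaultdict

    # Asset -> owning customer, as tracked in a DCIM asset inventory.
    ASSET_OWNER = {"chassis-01": "cust-a", "chassis-02": "cust-a", "chassis-03": "cust-b"}

    # Hourly average power draw in kW, as reported for each managed chassis.
    POWER_KW = {"chassis-01": 4.2, "chassis-02": 3.8, "chassis-03": 6.1}

    RATE_PER_KWH = 0.12  # assumed blended cost of power and cooling, in dollars

    def hourly_bill():
        """Aggregate one hour of power cost per customer."""
        totals = defaultdict(float)
        for asset, kw in POWER_KW.items():
            totals[ASSET_OWNER[asset]] += kw * 1.0 * RATE_PER_KWH  # kW x 1 h x $/kWh
        return dict(totals)

    print(hourly_bill())  # roughly {'cust-a': 0.96, 'cust-b': 0.73}
    ```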

    OneView’s capabilities include management of HPE’s new Composable Infrastructure products. The idea is to have fluid, expandable and contractible pools of compute, storage, and networking resources.

    This infrastructure comes as chassis filled with any mix of compute, storage, or networking modules. The goal is to have just the right amount of each for every application.
