Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, July 9th, 2014

    12:00p
    Can Dragonfly Attacks Cause Data Center Outages?

    Data security firm Symantec has been sounding alarm bells with reports of an ongoing cyber espionage campaign by a group dubbed Dragonfly aimed primarily at the energy sector. The group’s initial targets were defense and aviation companies in the U.S. and Canada, but in early 2013 the focus shifted to U.S. and European energy firms. According to Symantec, Dragonfly has managed to compromise a number of strategically important organizations for spying purposes and could potentially damage or disrupt energy supplies.

    A disruption to parts of the U.S. energy grid could be disastrous and put data center providers and customers through some rough times. While data centers generally have multiple layers of infrastructure redundancy and backup power supplies to ride out utility outages, prolonged grid-power interruptions could lead to data center outages.

    The Dragonfly group has a range of malware tools at its disposal and could launch attacks in multiple ways. Also known as “Energetic Bear,” it has been in operation since at least 2011. Symantec says it bears the hallmarks of a state-sponsored operation, displaying a high degree of technical capability. Based on an analysis of the timing of the attacks, the company believes the attackers are likely based in Eastern Europe.

    The group started by planting malware in phishing emails sent to personnel in target firms. It moved on to watering-hole attacks, compromising websites likely to be visited by employees in the energy sector and seeding them with exploit kits. The third phase of the campaign was the Trojanizing of legitimate software bundles belonging to three different Industrial Control System (ICS) equipment manufacturers. Two of them were identified as MB Connect Line, a German maker of industrial routers and remote-access appliances, and eWon, a Belgian firm that makes virtual private network software used to access industrial control devices. The third vendor has not been identified. Companies that downloaded software updates from these vendors for computers running ICS equipment unknowingly installed the Trojanized malware.

    The previous major malware campaign to target ICS equipment was Stuxnet, which specifically targeted Iran’s nuclear program with the goal of sabotaging it. Dragonfly’s goals are broader, focusing on espionage and persistent access now, with sabotage as an option down the line.

    Anything connected to the Internet

    Ron Bradburn, director of technology for Vancouver-based data center provider Peer 1 Hosting, says anything connected or able to connect to the Internet is vulnerable to attacks by such a sophisticated group. “What I found interesting about all of this is the possible linkage to state sponsored espionage, the level of sophistication that these groups are exhibiting, and the growing concerns in the market place to privacy and security,” he says. “The scale of this event is quite large, and the adept way they leveraged different attack vectors make it well organized and strategic in nature.”

    Long-term utility outages real threat to data center uptime

    It would be difficult to use the tactics employed against utilities to cause data center outages directly, but data centers rely on utilities for long-term power supply. “I don’t think data centers themselves could be as attackable as utilities because many of the building management systems run off the Internet,” said Vincent Rais, who does business development at EvoSwitch, an Amsterdam-based service provider. “There’s no remote turning on and off for most data centers.”

    Jason Yaeger, of Ann Arbor, Michigan-based Online Tech, however, says, “The scary truth is that the data center industry is not as prepared for this kind of electrical grid scenario as clients expect our industry to be. That’s because not all data center and cloud companies have the kinds of systems and protocols in place to be prepared for a lengthy power outage.”

    ITC Holdings, a major utility serving Michigan, where Online Tech’s data centers are located, recently filed a cyber-attack incident report but later said it was a false alarm. Two other utilities, Duke Energy and NRG Energy, each filed reports last year detailing suspected cyber attacks. Duke isolated and removed several computers from the rest of the company’s systems, and all software was stripped, reinstalled and retested.

    The only way for data center operators to maintain uptime during prolonged utility outages is to sign fuel delivery contracts with multiple vendors to keep their backup generators running.

    Mike Terlizzi, executive vice president of engineering and construction at New York-based Telx, said, “When we set up our [fuel] contracts we figure out logistically how they fulfill their SLAs.” If a distributor’s fuel truck has to cross a river to get to a data center, for example, there has to be a contract with another distributor whose trucks have a path without a river in the way.

    12:30p
    Maintenance Management and Your Data Center

    Jeff O’Brien is an industry specialist and blogger at Maintenance Assistant Inc., a provider of web-based CMMS software, a tool for managing facilities and infrastructure equipment at data centers.

    Unplanned maintenance costs three to nine times more than planned maintenance, mainly due to overtime labor costs, collateral damage, emergency parts and service call-outs. When data center infrastructure equipment goes down, there are a number of other costs to consider as well, such as data loss, data corruption, reputational damage, inefficient use of resources and safety issues.

    Can you afford to have one of your critical power distribution assets fail unexpectedly?

    According to the Ponemon Institute, downtime costs the average data center $7,900 per minute. With the typical outage lasting 86 minutes, that equates to approximately $700,000 for an average outage. As data centers grow, become more complex and host progressively more critical business data, you can expect this number to scale accordingly.
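
    As a quick back-of-the-envelope check on how those figures combine (a minimal sketch using only the Ponemon numbers quoted above; it introduces no data beyond what the paragraph already states):

    ```python
    # Downtime cost estimate from the Ponemon figures cited above.
    COST_PER_MINUTE = 7900        # average downtime cost, USD per minute
    TYPICAL_OUTAGE_MINUTES = 86   # average reported outage duration

    outage_cost = COST_PER_MINUTE * TYPICAL_OUTAGE_MINUTES
    print(f"Estimated cost of an average outage: ${outage_cost:,}")  # -> $679,400, i.e. roughly $700,000
    ```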

    Stop the damage before it starts

    As in any asset-intensive business, it is vital that critical infrastructure assets are kept in good working order and ready to function when needed. For the average data center, the basic “fix it when it breaks” strategy is not an acceptable approach for mission-critical assets such as backup generators, CRAC units and fire suppression systems. Yes, it is difficult to tell when a hard drive is going to fail, so you replace it when it breaks. But data center organizations cannot allow this reactive mentality to bleed into the maintenance approach for support systems such as HVAC, UPS and generators.

    Unplanned power outages can be seamless when backup UPS systems and generators activate flawlessly, but the equipment must be able to perform its function when required. Data center maintenance needs to be planned proactively so issues can be identified before they turn into something more serious.

    So how do you effectively plan and manage maintenance on critical infrastructure assets to limit emergency breakdowns and keep maintenance costs under control? The answer is simple – develop an asset management strategy for your infrastructure assets that focuses on planned preventative maintenance, set availability and reliability targets, and track it all using a Computerized Maintenance Management System (CMMS).

    What is a CMMS?

    A CMMS is a software tool that helps manage and track maintenance activities such as scheduled maintenance, work orders, parts inventory, purchasing and projects. It gives full visibility and control over maintenance operations so everyone can see what has been done and what needs to be done.

    Dashboard KPIs help measure current performance against defined goals. A CMMS also helps identify recurring tasks that need to be done or prioritized, ensuring nothing is overlooked.
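
    At its core, a CMMS is a structured record of assets, recurring preventive-maintenance tasks and the work orders generated from them. The sketch below is a minimal illustration of that idea in plain Python; the asset names, intervals and the on-time-completion KPI are invented for the example and are not tied to any particular CMMS product.

    ```python
    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import Optional

    @dataclass
    class PMTask:
        """A recurring preventive-maintenance task for one asset."""
        asset: str
        description: str
        interval_days: int
        last_done: date

    @dataclass
    class WorkOrder:
        """A single scheduled job generated from a PM task."""
        asset: str
        description: str
        due: date
        completed_on: Optional[date] = None

    def generate_work_orders(tasks, today, horizon_days=30):
        """Create work orders for every PM task falling due within the planning horizon."""
        orders = []
        for t in tasks:
            due = t.last_done + timedelta(days=t.interval_days)
            if due <= today + timedelta(days=horizon_days):
                orders.append(WorkOrder(t.asset, t.description, due))
        return orders

    def on_time_completion_rate(orders):
        """Dashboard-style KPI: share of closed work orders finished by their due date."""
        closed = [o for o in orders if o.completed_on is not None]
        return sum(o.completed_on <= o.due for o in closed) / len(closed) if closed else 0.0

    tasks = [
        PMTask("Generator-1", "Load bank test", 90, date(2014, 5, 1)),
        PMTask("CRAC-3", "Filter replacement", 30, date(2014, 6, 20)),
    ]
    orders = generate_work_orders(tasks, today=date(2014, 7, 9))
    for order in orders:
        print(f"{order.asset}: {order.description}, due {order.due}")
    print(f"On-time completion so far: {on_time_completion_rate(orders):.0%}")  # 0% until orders are closed
    ```

    A real CMMS layers scheduling, notifications and reporting on top of records like these, but the underlying bookkeeping is the same.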

    Benefits of using a CMMS

    One of the biggest benefits of a CMMS is increased labor productivity, as the system can help plan and track work so technicians can complete their tasks without interruption. It also helps optimize maintenance schedules and troubleshoot breakdowns, because you can see what has been done in the past.

    A CMMS can also help a data center improve health and safety compliance. Safety procedures can be included in all job plans, ensuring technicians are aware of the risks, and regular inspections of fire suppression systems can be planned and tracked, keeping the organization compliant and ready for audits.

    CMMS software tracks how much time and money is being spent on which assets, helping organizations make repair-versus-replace decisions with far less effort. Business intelligence reports built into the CMMS can be used to analyze and refine maintenance tactics. Rather than trawling through receipts and dockets at the end of the year, the manager can simply run a costing report to see where the budget was spent and what needs to be improved.
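
    A hypothetical illustration of the repair-versus-replace check such a costing report can feed is sketched below; the asset names, dollar figures and the 50-percent threshold are invented for the example, not drawn from any vendor’s tool.

    ```python
    # Hypothetical repair-vs-replace check driven by accumulated maintenance cost.
    # The 50% threshold is an illustrative rule of thumb, not a vendor recommendation.
    maintenance_log = {                 # asset -> maintenance costs (USD) logged this year
        "CRAC-3": [1200, 800, 2500, 1900],
        "UPS-A": [300, 450],
    }
    replacement_cost = {"CRAC-3": 11000, "UPS-A": 40000}

    for asset, costs in maintenance_log.items():
        spend = sum(costs)
        ratio = spend / replacement_cost[asset]
        verdict = "consider replacement" if ratio > 0.5 else "keep repairing"
        print(f"{asset}: spent ${spend:,} ({ratio:.0%} of replacement cost) -> {verdict}")
    ```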

    It takes more than just software

    A CMMS is not a magic bullet that will effortlessly turn your maintenance department into a well-oiled machine. Asset management is an ongoing process of continuous improvement and the CMMS is the tool to help manage it.  In time, it becomes a database of maintenance related information that can be used to outline best practices, identify workflow improvements, pinpoint cost savings and eliminate waste.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    CenturyLink Expands Canadian Cloud in Toronto Data Center

    CenturyLink has been expanding its global cloud footprint at a rapid pace, and its next expansion is in its Toronto data center, where the company has established a cloud node. It joins an existing Toronto node as well as one on the country’s west coast in Vancouver.

    Cloud in Canada is of growing importance, with a rising desire to keep data within the country’s borders. This need goes beyond legal requirements: customers worry that other countries could place restrictions that prevent data from being transferred out of the country where it is hosted. For businesses in Canada, or those that do business in Canada, the new node means CenturyLink continues to build a big-league cloud within those borders, with the benefits of data sovereignty and performance through proximity.

    “There’s lots of demand for cloud in Canada,” said Richard Seroter, director of product management for CenturyLink Cloud. There are more workloads in the market targeted for cloud, and many of them are large workloads, he noted, adding that the major public cloud providers haven’t established themselves much within the country’s borders.

    The CA3 cloud node will also appeal to customers in T3, CenturyLink’s new Toronto data center opening later this year.

    The provider offers compute, storage and networking services in its public cloud. Customers can create custom dimensions for CPU and RAM, use block storage for app data, and opt in for a premium storage option that auto-replicates data from Toronto to Vancouver.

    The company currently offers cloud managed services only in Santa Clara, California, and in Sterling, Virginia, but it is planning to add them in Canada in the coming months. Last month it rolled out on-demand managed services that customers can order through the same portal as cloud infrastructure.

    In addition to expanding its cloud locations, the company has had a very active 2014 in terms of data center expansion. CenturyLink’s data center services play came with the acquisition of Savvis, and its colocation roots remain core to its strategy. By the end of the year, it will have added more than 180,000 square feet and 20 MW to its global presence. It recently opened a new facility in Minnesota, which is actually further north than its Toronto facilities and boasts a better professional hockey team.

    2:52p
    EMC Goes for Hyper-Consolidation of Enterprise Storage With VMAX³, TwinStrata Acquisition

    In a significant move to advance its flagship VMAX storage line, EMC launched VMAX³ hybrid systems and announced that it has acquired cloud storage management company TwinStrata.

    The storage giant made the announcements, along with several others covering XtremIO all-flash arrays, Isilon updates and integration with Pivotal for Data Lake Hadoop bundles, at its Redefine Possible event Tuesday in London.

    Hyper-converged infrastructure

    The new VMAX³ has the Hypermax converged storage hypervisor and operating system as its foundation, which lets it embed storage infrastructure services like cloud access, data mobility and data protection directly on the array. This also gives it the ability to perform real-time, non-disruptive data services. In support of hybrid cloud deployments, the systems can dynamically allocate up to 384 Intel Ivy Bridge processor cores between mixed workloads.

    The addition of Hypermax and EMC’s Dynamic Virtual Matrix architecture to the VMAX³ family is significant on top of the sheer horsepower and scalability offered by the product family. It empowers diverse workloads across a range of hypervisors and virtual machine types and simplifies management across private storage and public cloud platforms. A variety of VMAX³ models fit various storage and performance needs, with the top-of-the-line 400K model offering up to 720 drives per rack, up to eight VMAX³ engines with 48 CPU cores each and a maximum raw capacity of 4.7 petabytes.

    The TwinStrata cloud-integrated appliance will be integrated with the VMAX product line to create what EMC calls its enterprise data services platform. TwinStrata co-founder and CEO Nicos Vekiarides said that, once integrated with VMAX³, the new data services platform will allow even more seamless automatic tiering of workloads for off-premises storage capacity expansion, data protection and disaster recovery.

    Massachusetts-based TwinStrata launched its CloudArray about four years ago as an easy yet powerful solution for primary storage, disaster recovery, multi-site file sharing, data archiving and offsite backup needs. With support for many of the leading public cloud vendors, TwinStrata’s cloud tiering technology allows customers to save money by moving infrequently accessed data to the public cloud.

    “The revolutionary transformation of VMAX³ – from enterprise storage to an open enterprise data service platform – delivers customers the ability to transform a traditional storage infrastructure to an agile data center infrastructure,” said Brian Gallagher, Midrange Systems Division president at EMC. “Simply put, it’s hyper-consolidation for existing workloads and their underlying infrastructure. As the first open enterprise data service platform, VMAX3 is the foundation for hybrid cloud as customers look to deliver Storage-as-a-Service with simple, policy-based service levels.”

    7:02p
    DCIM Vendor Modius Gets Patent for Data Collection Across Distributed Infrastructure

    Data Center Infrastructure Management software company Modius was awarded a patent that relates to the collection of data across distributed environments. The patent covers the company’s OpenData Collector, a scalable software component that actively collects unstructured performance data from serial and network-connected devices, normalizes this data, optionally combines it with other metrics and transports it to a centralized database for further analysis.

    The patent covers the technology and architecture OpenData uses to collect and normalize data from data centers, IT assets and supporting applications across a distributed environment. DCIM providers are looking to offer real-time data collection for data center monitoring across distributed infrastructures, and the patent addresses San Francisco-based Modius’ way of doing this.

    OpenData displays power, environmental intelligence and asset information across a distributed network of facilities on a single console. Data center personnel use this data to make informed decisions to reduce power consumption, prevent outages and proactively manage facilities in general.

    OpenData Collectors act as independent hubs for data collection in distributed environments across multiple locations – DCIM for a distributed data center footprint. The software can run on standard servers, dedicated appliances or the cloud, monitoring infrastructure everywhere. There’s also a specialized collector gateway for non-networked devices.

    It collects raw performance data, converts it to structured data and combines it with other metrics, making it available for advanced predictive analytics. It does this using a variety of standard protocols for networked devices, such as SNMP, BACnet and Modbus-TCP.
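
    Conceptually, the normalization step maps protocol-specific readings into one common record format before they are batched and shipped to the central database. The sketch below illustrates that general idea in plain Python; the field names, device types and scaling logic are invented for the example and do not describe Modius’ patented implementation.

    ```python
    from datetime import datetime, timezone

    def normalize(protocol, device_id, raw):
        """Map a raw reading from an SNMP-, BACnet- or Modbus-style source to a common schema."""
        if protocol == "snmp":
            metric, value, unit = raw["oid"], float(raw["value"]), raw.get("unit", "")
        elif protocol == "bacnet":
            metric, value, unit = raw["object_name"], float(raw["present_value"]), raw.get("units", "")
        elif protocol == "modbus":
            # Modbus registers often carry scaled integers; apply the configured scale factor.
            metric, value, unit = raw["register"], raw["raw_value"] * raw.get("scale", 1.0), raw.get("unit", "")
        else:
            raise ValueError(f"unsupported protocol: {protocol}")
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "device": device_id,
            "protocol": protocol,
            "metric": str(metric),
            "value": value,
            "unit": unit,
        }

    # A power reading from a hypothetical PDU polled over Modbus-TCP, ready to batch
    # and forward to the central analytics database.
    record = normalize("modbus", "pdu-12", {"register": "kw_total", "raw_value": 1234, "scale": 0.01, "unit": "kW"})
    print(record)
    ```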

    “We believe that real-time data collection is the foundation of every successful DCIM implementation,” said Craig Compiano, CEO of Modius. “Building a high-speed, highly scalable, open data collection architecture was our design goal for OpenData, and we are extremely pleased to now have a patent for this work.”

    Compiano added that the company intended to “strongly defend its intellectual property and the products covered by that intellectual property.”

    7:17p
    Cannon’s Latest Data Center Modules Can Be Assembled On Site

    Cannon Technologies introduced the T4 Granular Modular Data Centre, the latest product in its modular data center portfolio. To solve logistical challenges, T4 uses components that can be manually assembled on site without the use of a crane.

    Modular data centers are generally used for just-in-time infrastructure or for data centers in hard-to-reach places. The company says that virtually any size is possible, from a few racks to a few hundred.

    Building out in modules helps a data center start small and grow cost effectively. The lead times are also shorter, typically around 12 weeks from order to delivery to the customer site, according to Cannon.

    T4 can be located on a simple concrete slab in locations such as parking lots, hangars, warehouses or even on roofs. It’s designed to cope with temperature extremes and protect against environmental conditions. Thick insulated walls are surrounded by weatherproof, galvanized, powder-coated steel. Non-combustible materials are used throughout.

    It can be configured for different levels of resilience and meets Security Equipment Assessment Panel (SEAP) Level 3.

    It also uses efficient cooling. Close-coupled cooling using Cannon’s WithIn Row Cooling (WIRC) units eliminates the need for long ducting runs, deep raised floors and large, powerful fan systems. Combined with economical free-cooling chillers that draw maximum benefit from ambient air, they reduce compressor run time and cut energy consumption.

    The aisles are full-width for easier movement of equipment, and there are several options for racks, cable raceways and free-form pockets to accept third-party cabinets. There are also optional built-in DCIM (Data Center Infrastructure Management) capabilities via Cannon’s T4 Data Centre Manager software.

    “The T4 Granular Modular Data Centre is our most flexible and fully scalable solution yet and is the perfect option for areas that have traditionally found it difficult to accommodate this type of facility,” Mark Hirst, head of T4 Data Centre Solutions at Cannon Technologies Group, said. “Just as importantly, this can all be achieved using our existing best-in-class components, with all the cutting-edge features and benefits that our customers expect.”

    9:01p
    IBM Pumps $3B in Chip R&D to Keep Moore’s Law Alive

    As if to tell the world that it has no intention of getting out of the chip business, IBM announced a $3 billion investment program in research and development for processor technologies of the future.

    Reportedly close to selling off its money-losing Power chip manufacturing business, the Armonk, New York-based technology behemoth now says it is doubling down on its efforts to get beyond the 7-nanometer process technology threshold – the point at which fundamental physics may force Moore’s Law to break down.

    While $3 billion is an impressive figure, Big Blue – which said it would spend that money on chip R&D over the next five years – will need a lot more than that to compete in the kingdom of Intel. Commenting on IBM’s announcement, an Intel spokesperson told us Intel spends about $10 billion a year on processor R&D.

    IBM’s latest Power8 chips are built using 22 nm process technology, while Intel’s 14 nm Broadwell chips are already in production. “We’re on track to get to 7 nm,” the Intel spokesperson said. The chipmaker has not revealed a timeline for availability of 7 nm processors.

    Beyond 7 nm

    IBM, which claims to have twice as many patents (500 plus) for technologies that will drive beyond-7-nm progress as the “nearest competitor,” nevertheless strikes an optimistic tone, regardless of its chip business losses, which an anonymous source told Bloomberg were about $1.5 billion a year.

    The giant’s $3 billion investment will focus on two broad research programs. The one looking at process technology beyond 7 nm is meant to address physical challenges that the company says will make such chips extremely difficult and expensive to manufacture.

    “The question is not if we will introduce 7 nm technology into manufacturing, but rather how, when and at what cost?” John Kelly, senior vice president of IBM Research, said in a statement.

    IBM’s “noise-free” lab in the Binnig and Rohrer Nanotechnology Center of IBM and ETH Zürich (a university). The tools will be placed on custom-tailored tool bases. As part of the initiative, IBM is investing in research facilities in Switzerland, New York State and California. (Photo by Emanuel Lörtscher, courtesy of IBM)

    The second program is related to the first one. Its goal is to find materials that are better than silicon at handling 7 nm and below. Silicon transistors, IBM said, are close to reaching a size so small they will no longer work. “Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower-power, lower-cost and higher-speed processors that the industry has become accustomed to,” IBM said.

    There are a number of answers to this dilemma that IBM is planning to evaluate, including quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low-power transistors and graphene.

    Chip business sell-off?

    IBM is close to making a deal with GlobalFoundries to sell its chip manufacturing business, Bloomberg News reported, citing anonymous sources. The buyer is interested primarily in intellectual property and IBM engineers rather than its manufacturing facilities.

    Chip manufacturing is a costly affair, and getting rid of the money-losing business will help IBM’s bottom line. Divestiture of underperforming businesses has been a theme at IBM, which is currently in the process of selling its x86 server business to Lenovo, the same Chinese computer vendor that bought its PC business a decade ago.

    It is unclear how the GlobalFoundries deal may affect the OpenPower Consortium IBM launched together with Google, Mellanox, NVIDIA and Tyan in 2013. The consortium’s goal was to license intellectual property tied to the Power processor architecture to others to stimulate development of an ecosystem around the architecture.

    In addition to acting as a licensor of chip technology, IBM also acts as a licensee. Last year the company licensed processor architecture from UK-based ARM Holdings, which it said it would use to build custom chips for clients.

    IBM may be selling its processor manufacturing operations, a plan that remains unconfirmed, but given its various processor-related entanglements and the most recent $3 billion R&D program, it is clear that the company is not abandoning the chip business. It does, however, seem that Big Blue is changing its strategy in this space to one focused on creating technology rather than manufacturing products.

