Data Center Knowledge | News and analysis for the data center industry

Wednesday, July 13th, 2016

    12:00p
    Lowering Your Data Center’s Exposure to Insurance Claims

    There are multiple areas of potential risk in data center environments that can cause incidents resulting in an insurance claim. Risks include:

    • Accidents that damage the facility
    • Potential for workplace injuries
    • Business risks from downtime events that impact the data center’s or its customers’ business continuity.

    Organizations depend on 24 x 7 x 365 IT infrastructure availability to ensure that services to customers/end-users are available whenever needed.

    To provide and maintain this availability is not only a matter of designing and building the right facility infrastructure; it’s about how that facility is managed and operated on a day-to-day basis to safeguard the business-critical infrastructure.

    Importance of Insurance and Risk Management

    Relying solely on the physical characteristics of the data center like construction, type of fire protection system, and proximity to flood and earthquake-prone areas, although important, leaves out very important considerations in evaluating the effectiveness of a service provider’s risk management program. Typically, the redundant infrastructure of engineered data centers does present a low frequency of loss when compared to other types of operations. However, there is a significant increase in reliance on these data centers by end users as more companies outsource to the cloud or house their primary or backup networks offsite. This increasing dependency of end users on a centralized, outsourced infrastructure presents opportunities for technology service providers to set themselves apart from the competition and manage risks by formally addressing operational controls.

    In framing the risks that service providers are exposed to—and that insurers will be concerned with—it is important to view the operation in terms of what part of the “data supply chain” the service provider occupies or is responsible for. Infrastructure providers, such as a colocation provider, have a specific but related set of exposures as compared to a software as a service (SaaS) provider at the other end of the supply chain. The various entities in these increasingly complex supply chains must make decisions about the viability of accepting, avoiding, mitigating or transferring these risks. The risks to the data supply chain include not only first-party direct losses, but third-party liability losses as well. Even the first-party losses will differ based on the services provided. The primary risks to the data supply chain can be categorized as:

    Third Party (Liability)

    • Service Interruption: Service providers may be responsible for customer losses incurred as the result of unplanned outages.
    • Data Security / Privacy: Service providers may have statutory, contractual, or implied duty to protect data from unauthorized access or disclosure. Service providers may also be responsible for appropriate backup and recovery of data to prevent customer loss.
    • Damage to Property of Others in Care, Custody, or Control: Infrastructure service providers’ facilities typically house multiple customers’ assets with values in the millions of dollars for each customer. Contract terms, customer insurance requirements, and state laws may impact the degree to which the provider is responsible for damage to customer equipment.
    • Premises Liability: Responsibility to ensure that owned properties remain free of unsafe conditions.

    First Party (direct losses to insured)

    • Property Damage: Loss or damage to owned property, which in the case of an infrastructure provider may include real property and business personal property. Service providers who are dependent on others to provide infrastructure services may also have significant property values related to IT assets at widely distributed locations.
    • Business Interruption: Service interruptions create potential for direct loss of income. Interruptions may be caused by perils impacting an infrastructure or service provider directly or as contingent loss caused by perils impacting a service provider on which the operation depends.
    • Extra Expense: Additional expenses incurred to resume operations after a loss event can include additional staff, overtime costs, leased equipment, etc.
    • Equipment Breakdown: Service interruptions may also be caused by the breakdown or failure of machinery or equipment as opposed to the standard property insurance perils (fire, theft, weather related, etc.).

    Employee Health and Safety: Providing a safe workplace is the responsibility of all types of employers, and a key cost-management strategy related to workers’ compensation and health benefits costs.

    Regulations: Regulations create compliance risks at all levels of the data supply chain. Regulatory impact is greatly dependent on the types of services offered, industries served, and the complex shared responsibilities of infrastructure and service providers and their clients. A few examples of regulatory frameworks that may have impact down to the infrastructure level include U.S. regulations such as HIPAA, GLBA, FISMA; international regulations such as the EU Data Protection Directive and industry standards such as the PCI DSS. In these complex regulatory environments, regulatory enforcement actions are common and the impact of fines and penalties is growing.

    Management and Operations Critical

    Even the best facility infrastructure will not keep a site from having an outage or accident if the individuals running it fail to define effective policies and procedures, maintain staff training, and apply those procedures in practice. Additionally, existing centers may have vulnerabilities due to aging facilities or equipment, yet can still minimize downtime risk and limit exposure if the operations team is working effectively.

    From 20 years of collecting incident data, Uptime Institute has determined that human error (i.e., bad operations) is responsible for approximately 70% of all data center incidents. By comparison, fire as a root cause is rare: the data shows only 0.14% of data center losses are due to fire. This means bad operations practices are 500 times more likely to negatively impact a data center than fire. In fact, an outage at a mission-critical facility can result in hundreds of thousands of dollars or more in losses for everything from equipment damage and worker injuries to lost business and penalties for failure to maintain contractual Service Level Agreements.

    For both data center operators and insurers, there are some key questions to ask:

    • Are we looking in the right place to assess and mitigate data center risk?
    • Are we adequately protected from claims and losses due to data center downtime?
    • How can we improve data center risk management?
    • How do we know efforts are focused on those operating factors that have the most impact on risk and availability?

    Managing Liability Risks

    Managing liability risks starts with contracts. A clear scope of work and allocation of risk between the contracting parties is essential. Clauses such as service level agreements, limitation of liability, force majeure, waiver of subrogation and indemnification wording reinforce the intended allocation of risk. Complex multiparty contract disputes are common, particularly when significant losses are incurred. Claims of negligence are non-contractual, so even well-executed contracts may not mitigate significant liability losses.

    Data center operations credentials are another means of mitigating liability risks. In addition to reducing the probability of loss, clearly defined, repeatable procedures and processes demonstrate adherence to the duty of care that is foundational to most standards of care. As with any human endeavor, residual risk will remain regardless of mitigation efforts. Insurance provides a means of risk transfer that is particularly effective for high-severity risks.

    About the Authors

    Lee Kirby is President of Uptime Institute, an advisory organization focused on the performance and efficiency of business-critical infrastructure and administration of the global Tier Standards & Certification for data centers. He has more than 30 years of information technology and leadership experience in the military and private sector.

    Stephen Douglas is Risk Control Director for CNA – Technology, focused on the Technology Industry Segment. CNA provides insurance and risk control solutions to businesses in the software and IT services, electronics manufacturing, and communications industries. He has over 20 years of experience in risk engineering and information technology.

    1:00p
    IT Services Provider Pays $650,000 HIPAA Breach Fine
    Brought to you by MSPmentor

    There’s no longer much question about whether federal health authorities are serious about cracking down on technology solutions providers that don’t take cybersecurity seriously.

    Catholic Health Care Services of the Archdiocese of Philadelphia (CHCS) has agreed to pay $650,000 to settle “potential violations” of the Health Insurance Portability and Accountability Act of 1996 (HIPAA), after patient data was stolen from a smartphone.

    Mishandling HIPAA-protected data has generated more than $9 million in fines this year alone, federal authorities reported.

    By providing management and information technology services to six skilled nursing facilities, CHCS is deemed a “Business Associate” under HIPAA.

    Business Associates of “covered entities” can be held liable in the event of a breach or violation.

    “Business Associates must implement the protections of the HIPAA Security Rule for the electronic protected health information they create, receive, maintain, or transmit from covered entities,” said Jocelyn Samuels, director of the U.S. Department of Health and Human Services Office for Civil Rights. “This includes an enterprise-wide risk analysis and corresponding risk management plan, which are the cornerstones of the HIPAA Security Rule.”

    The Office for Civil Rights (OCR) launched a probe in April of 2014, after receiving a report that a CHCS-issued iPhone had been breached.

    Investigators determined that protected health information (PHI) belonging to 412 nursing home residents was illegally obtained, including social security numbers, diagnoses and treatments, medical procedures, and names of relatives and medications.

    “The iPhone was unencrypted and was not password protected,” HHS officials said in a statement announcing the settlement.

    “At the time of the incident, CHCS had no policies addressing the removal of mobile devices containing PHI from its facility or what to do in the event of a security incident,” the statement continued. “OCR also determined that CHCS had no risk analysis or risk management plan.”

    Liability costs under HIPAA rules have become a growing concern for technology solutions providers in recent years.

    Medical digitization requirements prompted by the Affordable Care Act offer lucrative new veins of revenue in the healthcare vertical.

    But MSPs and other solutions providers must weigh the market opportunity against the risk of criminal penalties, lawsuits or civil fines as high as $1.5 million per breach for mishandling PHI.

    Last March, Federal health authorities launched random audits – the second such round – aimed at assessing the compliance of covered entities, MSPs and other business associates with HIPAA privacy laws.

    In determining the CHCS penalty, federal authorities say they took into consideration that the firm provides important health services in the Philadelphia area that benefit the elderly, developmentally disabled, foster care recipients and those living with HIV/AIDS.

    The agreement, dated June 24, also includes a corrective action plan.

    “OCR will monitor CHCS for two years as part of this settlement agreement, helping ensure that CHCS will remain compliant with its HIPAA obligations while it continues to act as a Business Associate,” the government’s statement said.

    This first ran at http://mspmentor.net/msp-mentor/it-services-provider-pays-650k-hipaa-breach-fine

    3:00p
    Ocean-Cooled Data Center and Desalination Colocation

    Crises tend to inspire ideas for creative, unexpected solutions. One such crisis has been brewing in California’s Monterey County, which is experiencing water shortages because of the severe drought plaguing the state, and where a group of entrepreneurs and local officials came up with an idea to build a massive water desalination plant to address the crisis.

    There is obviously nothing new about a desalination plant. What is unique about the project is that it will take some serious data center capacity to make it work financially. Learn more about the project at Data Center World in New Orleans, Louisiana, this September:

    Presented by: Gary Cudmore, Global Director of Data Centers, Black & Veatch

    This Data Center World session (Tuesday, Sept. 13, 11:45-12:45) will provide updates to the DeepWater Desal project, including the recent Environmental Impact Report (EIR) submittal. Register for Data Center World today!

    More background on the project here: Desalination Plant and Data Center: Not as Odd a Couple as May Seem

    5:14p
    Here’s How Facebook Ensures It Doesn’t Drain Your Phone Battery

    Most of Facebook’s daily active users (989 million out of 1.09 billion) use the social network on mobile devices. That’s as of the end of March, the most recent user statistics the company has made available.

    This means Facebook’s software engineers have to ensure the application, which is in a constant state of change, works on thousands of kinds of smartphones, built by different manufacturers using different hardware components and running different versions of multiple operating systems. How do you test every single code change on such a maddening variety of devices?

    The answer to that question sits inside the Facebook data center in Prineville, Oregon. It is a lab that consists of custom racks designed specifically to hold and test software on thousands of smartphones at a time. Facebook unveiled the lab today, and said it plans to open source rack designs and some of the software its engineers use to test their code.

    Thousands of Code Changes Weekly

    Facebook needs the lab because its developers change its software code thousands of times per week, and they want to know how every change will affect user experience on as many different devices as possible. Make one mistake, and a particular phone model can run out of battery, or even memory.

    “Given the code intricacies of the Facebook app, we could inadvertently introduce regressions that take up more data, memory, or battery usage,” Antoine Reversat, a Facebook production engineer, wrote in a blog post.
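    As a rough illustration of the kind of comparison this implies, the sketch below checks per-device resource metrics from a baseline build against the same metrics after a code change and flags anything that grows past a threshold. The metric names, threshold, and data layout are hypothetical illustrations, not details of Facebook’s actual CT-Scan tooling.

    ```python
    # Illustrative sketch only -- metric names, threshold, and data layout are
    # hypothetical, not Facebook's actual CT-Scan implementation.

    # Per-device measurements for one build: {device_id: {metric: value}}
    BASELINE = {
        "android-42": {"data_mb": 12.1, "memory_mb": 180.0, "battery_pct_per_hr": 2.3},
    }
    CANDIDATE = {
        "android-42": {"data_mb": 12.2, "memory_mb": 205.0, "battery_pct_per_hr": 2.4},
    }

    THRESHOLD = 0.05  # flag a regression if a metric grows by more than 5%

    def find_regressions(baseline, candidate, threshold=THRESHOLD):
        """Compare candidate metrics against the baseline and collect regressions."""
        regressions = []
        for device, metrics in candidate.items():
            for metric, new_value in metrics.items():
                old_value = baseline.get(device, {}).get(metric)
                if old_value and (new_value - old_value) / old_value > threshold:
                    regressions.append((device, metric, old_value, new_value))
        return regressions

    if __name__ == "__main__":
        for device, metric, old, new in find_regressions(BASELINE, CANDIDATE):
            print(f"{device}: {metric} regressed from {old} to {new}")
    ```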

    The service that tests code changes on mobile devices is called CT-Scan, used in combination with Chef. Facebook developed CT-Scan last year, and its engineers used to run it on devices they had at their desks, but the team quickly found that this approach couldn’t scale, which is why there is now an entire lab in the Prineville data center, custom racks and all, dedicated just to this task.

    “We needed to be able to run tests on more than 2,000 mobile devices to account for all the combinations of device hardware, operating systems, and network connections that people use to connect on Facebook,” Reversat wrote.

    That number isn’t arbitrary. It was based on things like the number of commits per week and the number of iterations that had to be done during each test to get results that mattered statistically. The number of phones required is one of the big reasons this operation was moved to Prineville: a “slatwall” holding 240 phones (similar to a store display) was tried at one point, but scaling it would have required nine rooms at Facebook’s headquarters in Menlo Park, California.
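    As a rough, back-of-the-envelope illustration of why the device count climbs so quickly, the sketch below multiplies commits per week by iterations per test, test duration, and hardware/OS combinations to estimate how many phones must run around the clock. Every number in it is an illustrative assumption, not a figure Facebook has published.

    ```python
    # Back-of-the-envelope estimate; all parameters below are illustrative
    # assumptions, not Facebook's published figures.
    commits_per_week = 5_000         # hypothetical: code changes to test each week
    iterations_per_test = 20         # hypothetical: repeat runs for a statistically useful result
    minutes_per_iteration = 10       # hypothetical: install + exercise + measure + uninstall
    hardware_combos_per_change = 10  # hypothetical: distinct device/OS combinations per change

    minutes_per_week = 7 * 24 * 60   # one device running around the clock
    device_minutes_needed = (commits_per_week * iterations_per_test *
                             minutes_per_iteration * hardware_combos_per_change)
    devices_needed = device_minutes_needed / minutes_per_week

    print(f"~{devices_needed:.0f} phones needed")
    # ~1,000 with these assumptions -- the same order of magnitude as the
    # 2,000+ devices the article describes.
    ```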

    A closer look at Facebook’s custom rack for testing its software on smartphones (Photo: Facebook)

    The Challenge of Wi-Fi in the Data Center

    Every rack holds 32 phones, driven by either eight Mac Minis or four of Facebook’s custom OCP Leopard servers. The Minis oversee software tests on iPhones (four per Mini), while each Leopard server drives eight Android phones to install, test, and uninstall the software. The phones are controlled using custom Chef recipes, which the company is also planning to open source.
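    For a sense of what “install, test, and uninstall” looks like in practice, here is a minimal sketch of a driver loop over attached Android phones using standard adb commands. The APK path, package name, and test runner below are hypothetical placeholders; Facebook’s actual orchestration is done with custom Chef recipes whose details are not described here.

    ```python
    # Minimal install/test/uninstall loop over attached Android phones via adb.
    # APK path, package name, and test runner are hypothetical placeholders.
    import subprocess

    APK = "app-debug.apk"
    PACKAGE = "com.example.app"
    RUNNER = "com.example.app.test/androidx.test.runner.AndroidJUnitRunner"

    def list_devices():
        """Return serial numbers of phones currently attached and ready."""
        out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
        return [line.split()[0]
                for line in out.splitlines()[1:]
                if line.strip().endswith("device")]

    def run_on_device(serial):
        """Install the build, run instrumented tests, and clean up on one phone."""
        adb = ["adb", "-s", serial]
        subprocess.run(adb + ["install", "-r", APK], check=True)                       # install the build
        subprocess.run(adb + ["shell", "am", "instrument", "-w", RUNNER], check=True)  # run instrumented tests
        subprocess.run(adb + ["uninstall", PACKAGE], check=True)                       # clean up for the next run

    if __name__ == "__main__":
        for serial in list_devices():
            run_on_device(serial)
    ```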

    Designing these racks is a significantly different challenge than designing a typical data center rack, and that’s because of Wi-Fi. You have to be careful about Wi-Fi signals between the 32 phones in one rack, or between phones in different racks, interfering with each other, so every rack, in addition to having its own wireless access point, is designed as an electromagnetic isolation chamber.

    Engineers watch the way phones react to code changes during tests remotely, via cameras installed in the racks.

    Phone Density Unsatisfactory

    For the next iteration of the lab, Reversat and his team are looking to double each rack’s phone density, from 32 to 64 devices, and give engineers ways to test software with tools other than CT-Scan, since it doesn’t fit every use case. One of the reasons to open source the hardware design and the Chef recipes is to have engineers outside of Facebook potentially contribute their own ideas to improve the platform.

    6:08p
    How LinkedIn is Part of Microsoft’s Plan to Build a Cloud for Good
    Brought to You by The WHIR

    TORONTO — Microsoft has broken its silence on its acquisition of LinkedIn. Kind of.

    During the keynote on the final day of Microsoft’s Worldwide Partner Conference (WPC), Microsoft president and chief legal officer Brad Smith said that one of the reasons he is excited about the LinkedIn acquisition is that it will help people advance their education and connect with their next jobs, a particularly important tool as more and more jobs are replaced with automation.

    “There’s so much that technology can do,” Smith said. “We need to do more than advance the cloud; we need to build the cloud for good.”

    So what does he mean by building the cloud for good? Smith outlined several examples of how the Microsoft cloud is used to promote good around the world. In one example, Azure data analytics and machine learning are used in Tacoma, Wash., high schools to identify at-risk students and lower dropout rates.

    According to Smith, the cloud for good will need to be three things: trusted, responsible, and inclusive.

    See also: After Microsoft Deal, What Happens to LinkedIn Data Centers?

    Trusted Cloud

    Trusted cloud is something that Microsoft has been emphasizing throughout the conference, not just in terms of security but also in terms of privacy and transparency. The company has been vocal in its fight for user privacy, launching four lawsuits against the U.S. government. In April, Microsoft sued the Department of Justice over gag orders that prevent the company from telling customers when the government requests access to their data.

    Smith reinforced the idea of needing to stand up for transparency: “We need an internet that respects people’s rights, we need an internet that is governed by good law,” he said.

    “We need to practice what we preach. We have to do a great job of respecting people’s privacy,” he said.

    See also: LinkedIn Deal Means More Microsoft in Digital Realty Data Centers

    Responsible Cloud

    In Microsoft’s view a responsible cloud is an earth-friendly one.

    “We need to think about the environment,” Smith said. “We’re consuming more electricity than Vermont.”

    See also: Here’s How Much Energy All US Data Centers Consume

    Smith said that Microsoft is committed to transparency about how much electricity it consumes. He said the company has promised to use more renewable energy each year. In two years, Microsoft will surpass 50 percent renewable energy, up from 44 percent today.

    Read more: Microsoft Expands Green Data Center Ambitions

    Inclusive Cloud

    One of the defining issues of our time, according to Smith, is what the future of the workforce will look like as more jobs are replaced with technology.

    “What jobs will disappear?” Smith said. “Where are the new jobs going to come from? That’s what the world is asking.”

    Smith said there is a need to make sure that every business can grow and create new jobs. Microsoft is doing its part, he said, through partnerships that bring coding and computer science to schools.

    “We know that when we give young people these opportunities they take advantage of them,” he said.

    “Diversity is strength,” he said. “That’s one of the reasons we’re excited about LinkedIn.”

    This first ran at http://www.thewhir.com/web-hosting-news/microsofts-brad-smith-on-building-a-cloud-for-good-and-how-linkedin-is-part-of-the-plan

    6:30p
    One Year After EOL, Windows Server 2003 Still Running in 53 Percent of Companies
    By IT Pro

    If it ain’t broke, don’t fix it: That’s the attitude that has kept Windows Server 2003 chugging along despite a number of new and improved offerings from Microsoft over the years. In fact, a recent survey from Spiceworks found that 53 percent of businesses are still running Windows Server 2003, almost a year after the software’s July 14, 2015, end of life.

    But while Windows Server 2003 might still be going strong, it comes with plenty of its own risks: It’s no longer receiving security updates, and the longer it goes past EOL, the more duct tape it will take to keep running.

    Still, 13 years is a lot of value out of one piece of software.

    Read the full results of the survey, which dives into virtualization adoption and server hardware, at Spiceworks’ blog.

    This first ran at http://windowsitpro.com/windows/one-year-after-eol-windows-server-2003-still-chugging-along-53-companies

    11:24p
    Digital Realty Buys Wind Energy for Its Entire US Colocation Footprint

    Digital Realty Trust has become the third major US-based data center provider to buy enough renewable energy to offset 100 percent of its US colocation data center power consumption. The company has agreed to buy about 400,000 megawatt-hours of energy per year from a wind farm operator, according to a statement issued Wednesday, which will offset energy consumed by facilities where the company provides colocation and interconnection services, the footprint that consists mostly of facilities it gained through the acquisition of Telx.

    The agreement is the latest sign that renewable energy is becoming more and more important to data center customers, and that data center providers increasingly view the ability to power their facilities with renewable energy as a competitive advantage. Renewable energy has also become price competitive with regular grid energy, making it even more attractive to data center operators from a business perspective.

    Until recently, such long-term utility-scale data center power purchase agreements had been signed exclusively by web and cloud giants, such as Google, Facebook, and Microsoft. Last year, however, Equinix, the world’s largest data center provider, and Switch, a smaller but important provider, announced the first big renewable energy deals in the industry.


    While clean energy is becoming more and more of a focus for data center users and operators, getting energy from a wind farm or a photovoltaic installation directly to a data center site remains an often insurmountable challenge. The best place to build a wind farm is different from the best place to build a data center, and electrical transmission infrastructure and utility regulations in most markets make it difficult to transport energy from the former to the latter.

    Energy in Digital Realty’s recent deal will be generated by a wind farm in North Texas, but the company’s data centers are in many locations across the US.

    Read more: Cleaning Up Data Center Power is Dirty Work

    The predominant way of addressing the challenge has been buying renewable energy generated in one place on the grid, and consuming an equivalent amount of regular grid energy wherever the data center happens to be. While enabling a substantial amount of new renewable energy generation capacity to come online, such arrangements do little to reduce the amount of non-renewable energy that’s being generated currently.

    Overall, Digital Realty procured 2.9 billion kilowatt-hours of energy on behalf of customers last year, 600 million kWh of which came from wind, solar, and hydroelectric generation sources, the company said. The recent agreement increases the amount of renewable energy the company purchases by about two-thirds.
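    The “about two-thirds” figure follows from a simple unit conversion, sketched below: the new contract’s roughly 400,000 megawatt-hours per year equals 400 million kilowatt-hours, measured against the 600 million kWh of renewables the company already procured.

    ```python
    # Quick unit check on the figures quoted above (values taken from the article).
    new_wind_mwh_per_year = 400_000        # new wind contract, MWh per year
    existing_renewable_kwh = 600_000_000   # renewables already procured last year, kWh

    new_wind_kwh = new_wind_mwh_per_year * 1_000   # 1 MWh = 1,000 kWh -> 400 million kWh
    increase = new_wind_kwh / existing_renewable_kwh
    print(f"Renewable purchases grow by roughly {increase:.0%}")  # ~67%, i.e., about two-thirds
    ```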

    Last year, Digital Realty kicked off a program under which it would procure renewable data center power for its customers premium-free for one year. The program, called Clean Start, is available to customers at all Digital Realty data center locations around the world.

    See also: Here’s How Much Energy All US Data Centers Consume

    Corrected: The recent wind power deal will offset energy consumed by Digital Realty’s colocation and interconnection business in the US, not its entire US property portfolio as this article previously stated.
