Data Center Knowledge | News and analysis for the data center industry

Tuesday, July 9th, 2013

    12:31p
    What the HIPAA Final Rule Means for Data Centers and Cloud Providers

    Matthew Fischer is a partner in Sedgwick LLP’s San Francisco office. He focuses on intellectual property, media, data privacy and complex commercial litigation.

    MATTHEW FISCHER
    Sedgwick LLP

    The long-awaited HIPAA Omnibus Final Rule (“Final Rule”), which primarily amends regulations in the HIPAA Privacy and Security Rules and breach notification rules, went into effect on March 26, 2013 and the compliance date is fast approaching. Data centers and cloud providers servicing the health care industry should take particular note that the Final Rule clarifies that they are officially considered “business associates” under HIPAA and must therefore comply with all applicable privacy and security requirements.

    The Final Rule expands the definition of “business associate” to include an entity that “creates, receives, maintains, or transmits protected health information (PHI) on behalf of a covered entity.” While most data centers and cloud providers have operated under the assumption that they are considered business associates, the Final Rule leaves no doubt and explains in the preamble that “document storage companies maintaining [PHI] on behalf of covered entities are considered business associates, regardless of whether they actually view the information they hold.”

    Subcontractors Included

    The changes broaden the definition of a business associate even further to encompass all subcontractors that create, receive, maintain or transmit PHI on behalf of a business associate. Thus, not only must data centers enter into a business associate agreement (“BAA”) with covered entities pledging to maintain adequate administrative, physical and technical safeguards to protect PHI, they must also enter into BAAs with their subcontractors, who in turn must now institute the same privacy and security measures. This obligation continues down the vendor chain with respect to other subcontractors.

    Under the Final Rule, business associates are directly liable for the following Privacy Rule requirements, as well as those of their subcontractors, even if they never entered into a BAA:

    • Impermissible uses and disclosures of PHI;
    • Failure to enter into a BAA with subcontractors;
    • Failure to provide breach notification to the covered entity;
    • Failure to provide access to a copy of electronic PHI to either the covered entity or the owner of the data;
    • Failure to disclose PHI when required by HHS; and
    • Failure to provide an accounting of disclosures of PHI upon request.

    Covered entities and business associates that are considering contracting with data centers and cloud providers will carefully scrutinize whether their vendors have implemented adequate administrative, physical and technical safeguards as mandated by HIPAA. They also will likely require the disclosure of any vendors to which the business associate outsources those portions of its operations that involve PHI, in order to ensure that such subcontractors are HIPAA-compliant as well.

    Establish a Part of the Business as HIPAA-Compliant

    One cost-effective and practical option available to data centers and cloud providers is to make a select part of the business HIPAA-compliant and institute strict procedures to ensure that the receipt, maintenance or transmission of PHI occurs only in the compartmentalized HIPAA-compliant part of the system. Data centers and cloud providers should also have their own template BAA so they are not stuck using a covered entity’s proposed BAA, which may have onerous terms and obligations that are not even mandatory under HIPAA. Likewise, it is helpful to have a template subcontractor BAA in place that ensures protection from liability arising from vendors to which operations involving PHI have been outsourced.
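
    As a rough illustration of that compartmentalization, consider the following minimal placement check, a Python sketch with invented segment and workload names, showing how a provider might enforce that PHI-bearing workloads never land outside the compliant part of the system:

    # Hypothetical sketch only: segment names and the Workload type are
    # invented for illustration, not taken from any real platform.
    from dataclasses import dataclass

    HIPAA_SEGMENTS = {"hipaa-zone-1"}  # the compartmentalized, compliant part

    @dataclass
    class Workload:
        name: str
        contains_phi: bool
        target_segment: str

    def validate_placement(workload: Workload) -> None:
        """Reject any PHI-bearing workload routed outside the compliant segment."""
        if workload.contains_phi and workload.target_segment not in HIPAA_SEGMENTS:
            raise ValueError(f"{workload.name}: PHI must stay in {sorted(HIPAA_SEGMENTS)}")

    validate_placement(Workload("ehr-backup", True, "hipaa-zone-1"))    # passes
    # validate_placement(Workload("ehr-backup", True, "general-pool"))  # raises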

    The Office for Civil Rights (“OCR”), which is the enforcement arm of the Department of Health and Human Services (“HHS”), has significantly intensified its enforcement efforts and HIPAA compliance audits over the last few years, even going so far as to target small hospices. Civil monetary penalties can range from $100 to $50,000 per violation, with a cap of $1.5 million for multiple violations. With the issuance of the Final Rule, many in the health care industry expect that the OCR will start to directly investigate business associates for non-compliance.
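
    To see how that cap bites, here is a quick, purely hypothetical calculation in Python (the figures are illustrative, not legal guidance):

    # Hypothetical exposure: per-violation penalties range from $100 to
    # $50,000, capped at $1.5 million, as described above.
    PER_VIOLATION = 50_000     # assumed worst-case tier
    ANNUAL_CAP = 1_500_000

    violations = 40
    exposure = min(violations * PER_VIOLATION, ANNUAL_CAP)
    print(exposure)  # 40 * $50,000 = $2,000,000, capped to $1,500,000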

    The September 23, 2013 compliance deadline for the Final Rule is right around the corner, although companies operating under existing BAAs can continue to do so until March 26, 2014.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Planning for a Cloud-Ready Distributed Storage Infrastructure


    Storage array systems provide greater flexibility and business agility. Where direct-attached storage falls short, SAN, NAS and other types of shared storage environments can help a company scale. Many organizations are leveraging larger, more comprehensive storage arrays to distribute their environments. The important part to consider: in many cases, the storage platform has become the heart of a cloud solution. Intelligent replication and storage control mechanisms now allow cloud components to be distributed, including user information, workload replicas and, of course, big data.

    IT managers are seeing that intelligent storage platforms can help them stay agile and continue business operations should a site, or even a storage controller, fail. The idea is to create a resilient, distributed storage infrastructure that can support the user base, the workloads, and a growing business. In creating such an environment, engineers need to be aware of a few concepts central to building a successful storage solution.

    Consider bandwidth.

    A distributed storage environment requires thorough planning around bandwidth. The amount needed will depend on the following:

    • Distance the data has to travel (number of hops).
    • Replication settings.
    • Failover requirements.
    • Amount of data being transmitted.
    • Number of users accessing the data concurrently.

    There may be other requirements as well. In some cases, certain types of databases or applications being replicated between storage systems have their own resource needs. Make sure to identify where the information is going and create a solid replication policy. Undersizing bandwidth can create serious performance issues, while oversizing means overpaying for services. In some cases, WAN optimization is a good idea.
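
    As a back-of-the-envelope starting point, the following Python sketch estimates the bandwidth needed to replicate a day’s worth of changed data within a given window. The figures and headroom factor are assumptions; real sizing must also account for hops, failover and concurrent access, per the list above.

    def required_mbps(daily_change_gb: float, window_hours: float,
                      headroom: float = 1.3) -> float:
        """Bandwidth needed to replicate a day's changed data within a window."""
        bits = daily_change_gb * 8 * 1000**3      # GB -> bits (decimal units)
        seconds = window_hours * 3600
        return bits / seconds / 1e6 * headroom    # -> megabits per second

    # Example: 500 GB of daily change over an 8-hour window, with 30%
    # headroom for bursts and protocol overhead.
    print(f"{required_mbps(500, 8):.0f} Mbps")    # ~181 Mbps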

    Pick the right storage platform.

    Although this might seem like common sense, the process of selecting the right storage platform for a distributed environment is very important. In some cases, organizations forget vital planning steps and select storage systems that suit them now and only in the near future. In selecting the proper type of platform, consider the following:

    • Utilization – What is your utilization now, at three years, at five years, and at end of life? How well does the controller handle spikes in usage? Does it meet IOPS requirements?
    • Migration – How easy is it to migrate data once you outgrow your current needs or need to upgrade?
    • Data Management – Does the system have granular data control mechanisms? Does it do data deduplication – file or block level?
    • Policy Management – Ensure that the system you select has good integration with your internal systems and is able to support the storage policies that you require for your organization.

    For large deployments, many vendors will gladly offer a proof of concept (POC) or pilot program for their controllers. Although there may be some deployment costs associated with a pilot, it may be well worth it in the long run. By establishing which workloads, applications and data will reside on a distributed storage system, administrators can better forecast their needs and spend less time (and money) trying to fix an undersized environment.
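
    To sanity-check a platform against those three- and five-year horizons, a simple compound-growth forecast helps. The sketch below uses invented figures; substitute your own measured growth rate.

    def projected_tb(current_tb: float, annual_growth: float, years: int) -> float:
        """Compound capacity growth: capacity * (1 + rate)^years."""
        return current_tb * (1 + annual_growth) ** years

    current = 100.0   # TB used today (hypothetical)
    growth = 0.35     # 35% growth per year (hypothetical)
    for horizon in (3, 5):
        print(f"Year {horizon}: {projected_tb(current, growth, horizon):.0f} TB")
    # Year 3: 246 TB; Year 5: 448 TB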

    • Control the data flow. Distributed storage systems require special attention as information traverses the wide area network. As mentioned earlier, WAN optimization may be the right move to help support a more robust data transfer methodology. Furthermore, controlling where the other storage controllers reside can really help narrow down bandwidth requirements. By setting up dedicated links between data centers and using QoS to allocate the right amount of bandwidth, administrators can control the data flow process and still have plenty of room on the pipe for other functions. Basically, there needs to be consistent visibility into how storage traffic is flowing and how efficiently it is reaching its destination.
    • Use intelligent storage (thin provisioning/deduplication). Today’s enterprise storage solutions are built around direct efficiencies for the organization. Data control, storage sizing optimization, and intelligent deduplication all help control the data flow and management process. By reducing the number of duplicate storage items, administrators can quickly reclaim space on their systems (a minimal sketch of block-level deduplication follows this list). Furthermore, look for controllers that are virtualization-ready. This means that environments deploying technologies like VDI, application virtualization or even simple server virtualization should look for systems that intelligently provision space, without creating unnecessary duplicates.
    • Distributed storage as DR. Storage infrastructures deployed within a distributed environment can be used for a variety of purposes. Data resiliency, better performance or simply placing the storage closer to the user are all good business use cases. In some instances, companies deploy a distributed architecture for the purposes of disaster recovery, and that calls for special considerations. It’s recommended that an organization first conduct a business impact analysis (BIA) to establish some very important metrics. This includes isolating the systems, platforms and other data points that are deemed critical. Then, organizations can identify their recovery times and establish a scale of importance for their various workloads. Once that is done, it becomes much easier to select a distributed storage system capable of meeting those needs.
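
    Here is the deduplication sketch mentioned above: split data into fixed-size blocks, hash each one, and store only the unique blocks. Production systems use stronger indexing and often variable-size chunking; this Python example just shows the core idea.

    import hashlib

    BLOCK_SIZE = 4096  # bytes

    def dedupe(data: bytes) -> tuple[list[str], dict[str, bytes]]:
        """Return the block-hash sequence plus a store of unique blocks."""
        store: dict[str, bytes] = {}
        recipe: list[str] = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # keep first copy of each block
            recipe.append(digest)
        return recipe, store

    data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE   # three identical blocks, one unique
    recipe, store = dedupe(data)
    print(len(recipe), len(store))  # 4 logical blocks, only 2 unique blocks stored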

    Designing a good storage platform can become very expensive, very quickly. This is especially the case when the planning and architecture processes are skipped or rushed. Although modern storage arrays may be expensive, they’re built around efficiency. The ability to logically segment physical controllers, remove duplicate data and archive information are all features that help control the storage environment. When a solid storage platform is in place, organizations can see benefits in performance, agility and, very importantly, uptime.

    2:00p
    Brad Hokamp Is New CEO at Cosentry

    Data center service provider Cosentry has named industry veteran Brad Hokamp as its new Chief Executive Officer and a member of its Board of Directors. Hokamp has worked in the data center and hosting markets for over 27 years, including executive stints at Savvis, Telx and most recently Layered Tech, where he served as President.

    “The Cosentry Board of Directors is excited to have Brad join as Cosentry’s CEO, based on his extensive datacenter, hosting and cloud experience, and his strong reputation in the industry,” said Harry Taylor, Chairman of the Board of Directors at Cosentry. “Brad has a proven track record of driving high growth in the mid-size and enterprise markets for datacenter services, establishing thriving and dynamic corporate cultures, and building long-term, successful partnerships with customers. We look forward to similar success at Cosentry.”

    “Cosentry has done an excellent job in establishing itself as the leader in providing comprehensive data center services to businesses located throughout the Midwest,” said Hokamp. “In our core cities, our strategy is to operate as our customer’s local business partner in providing IT infrastructure solutions, in conjunction with delivering outstanding service. Based on a substantial capital investment from TA Associates, a leading private equity firm, Cosentry has excellent financial backing. Their initial investment and continued support will enable Cosentry to continue to meet our customer’s demand by expanding our existing data centers, provide industry leading services and customer support, and expand into new geographic markets. I am looking forward to working with the Cosentry team, the Board of Directors, and TA Associates to accelerate our position in this rapidly growing market.”

    TA Associates is a private equity firm that has invested in more than 425 companies around the world and has raised $18 billion in capital. The firm acquired Cosentry in 2011.

    2:30p
    Using DCIM to Create a Common Data Center Management Approach

    The modern data center is beginning to be considered the data center of everything: more platforms, services and users are deploying their workloads onto data center infrastructure, and dependence on the data center and the resources it provides keeps increasing. Even when smaller server rooms and server closets in small businesses and remote branch offices are excluded, the number of data centers in operation worldwide will increase from over 191,000 to almost 202,000 between 2011 and 2014. More significantly, the size of those data centers is also growing, with total data center square footage increasing from 569 million to 737 million square feet over the same period.

    Because of this new reliance on the data center environment, data center managers now face challenges around resiliency, uptime and risk. This is why a well-implemented data center infrastructure management (DCIM) solution is important. The white paper outlines the major reasons why a DCIM platform can help improve visibility into the modern data center. These include:

    • Fragmentation calls for a unified approach to data center management.
    • The full benefits of DCIM deployment require a broad scope of integration.
    • A DCIM solution can enable change in processes and culture.

    These types of platforms can help IT and facilities teams coordinate management tasks by providing a common view of the truth to boost data center operating efficiency in power and cooling as well as IT asset utilization. It also makes it easier for data center teams to better tie physical systems to virtual machines, applications, and business services.
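
    As a toy illustration of that “common view of the truth,” the Python sketch below ties a physical server and rack to the VMs it hosts, so a facilities question (power per rack) and an IT question (which host runs a VM) are answered from one model. All names and figures are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Server:
        name: str
        rack: str
        power_watts: int
        vms: list[str] = field(default_factory=list)

    inventory = [
        Server("srv-01", "rack-A1", 450, ["vm-web", "vm-db"]),
        Server("srv-02", "rack-A1", 380),
    ]

    # Facilities view: power drawn per rack.
    per_rack: dict[str, int] = {}
    for s in inventory:
        per_rack[s.rack] = per_rack.get(s.rack, 0) + s.power_watts
    print(per_rack)  # {'rack-A1': 830}

    # IT view: which physical host runs a given VM?
    print(next(s.name for s in inventory if "vm-db" in s.vms))  # srv-01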

    According to the white paper from CA Technologies, a well-implemented DCIM solution can help data center teams better leverage existing capacity, avoid costly buildouts, and optimize IT workloads in the face of growing complexity, enabling organizations to capitalize more quickly on business innovation. Furthermore, as the modern data center becomes more distributed, a DCIM solution can create a unified view of the entire infrastructure.

    In this white paper, IDC explores the key factors in data center utilization, where growth patterns are emerging and the important role that data center management plays. Download the white paper today to learn why over half of the data center managers IDC surveyed said there would be value in using an integrated DCIM system. Creating a unified view into the modern distributed data center not only improves management – it helps simplify the entire data center control process.

    3:58p
    Cobalt Data Centers Names Jeff Brown as President

    A look at the network operations center at the Cobalt Cheyenne data center in Las Vegas. (Photo: Cobalt Data Centers)

    Las Vegas colocation provider Cobalt Data Centers today announced the appointment of Jefferson Brown as President of the company, effective immediately. The company also announced that Mike Ballard, Cobalt’s previous CEO, has stepped down to pursue other interests.

    Brown brings more than 20 years of experience with high-tech operations to Cobalt. Most recently he served as Vice President of Sales for Savvis, the cloud infrastructure and hosted technology services division of CenturyLink. Before joining Savvis, Brown held various leadership positions at VeriSign, Equinix, iPass and CompuServe.

    “Attracting someone of Jeff’s caliber to Cobalt is enormous validation for our business model,” said Philip Lamb, a board member of Cobalt. “His experience and track record are exactly what Cobalt needs to ignite our growth and meet the growing demand for datacenter alternatives in Las Vegas. We wish Mike well in his future endeavors and look forward to helping Jeff build Cobalt into an industry leader.”


    In 2006, Brown also led a buyout of Equinix’s Honolulu data center business to form DRFortress. He served as the company’s original CEO, raising more than $15 million and building the foundation for the largest data center operator in Hawaii. Brown continues to serve on the company’s board of directors.

    “Jeff’s industry insight and strategic guidance helped build DRFortress into the most reputable datacenter operator in all of Hawaii,” said Steve Lee, Managing Director at The Bank Street Group. “It will be exciting to watch him do it again in the fast growing Vegas market. He’s got a great facility, an experienced staff and escalating demand.”

    In February, Cobalt Data Centers opened the doors on the first of two planned Tier 3-compliant data centers in Las Vegas. Cobalt Cheyenne is a 34,000 square foot facility backed by 5.5 megawatts of critical power.

    5:32p
    Equinix Plans Data Center In Osaka; Partners With CloudSigma

    Equinix is expanding its Asian presence and its global cloud capabilities by developing its first IBX data center in Osaka, Japan, in partnership with K-Opticom, and by partnering with CloudSigma to enable hybrid cloud services globally. CloudSigma joins Equinix’s growing roster of cloud providers within its ecosystem.

    The data center in Osaka, called OS1, will be Equinix’s first in the western region of Japan. The OS1 data center will provide a total capacity of 32,000 square feet and more than 800 cabinet equivalents. Partner K-Opticom is one of the largest access providers in the Osaka/Kansai area. Along with Kanden Energy Solutions (KENES), the partners will open the new data center in the fourth quarter of 2013, with the first phase providing initial capacity of 320 cabinets. The total investment for Phase 1 of the Osaka data center will be $12 million. The data center will connect directly to Dojima, the network core in Osaka.

    The announcement follows Equinix’s recent plan to build its fourth IBX in Tokyo in response to high demand for data center services in Japan. Equinix’s choice to partner in the region is a smart one, as partnering is standard practice in Japan’s insular business culture. Osaka is Japan’s second largest economy after Tokyo; from 2008 to 2012 it saw internet traffic grow a staggering 68 percent and bandwidth increase at a compound annual growth rate (CAGR) of 56 percent, rivaling Tokyo. More than 900 domestic and international carriers are accessible there, promoting the internationalization of Osaka overall.

    “Osaka is another important strategic market, along with Tokyo in Japan,” said Kei Furuta, managing director of Equinix Japan. “Many of our global customers have requested Equinix data center services in Osaka. With the support of K-Opticom, Kenes and O-BIC, we can meet that demand by opening our first IBX data center in Osaka. As the backbone of the digital economy Platform Equinix is used by more than 4,000 customers worldwide and serves as an interconnection platform to promote business growth among customers. I am honored to be able to help invigorate the Osaka economy and promote internationalization through OS1.”

    CloudSigma Deploys in Equinix Zurich and DC locations, More to Follow

    Infrastructure-as-a-Service provider CloudSigma has deployed in Equinix’s Zurich and Washington, D.C. facilities as part of a new partnership to market cloud and data center services in North America and Europe. Equinix will provide existing customers with direct-connect private-line virtual LANs to CloudSigma, making it easy for companies to adopt hybrid cloud architectures.

    These deployments represent the first phase in an ambitious targeted roll-out in selected markets covered by Equinix’s global footprint of more than 90 IBX data centers in 31 markets. There are future plans to deploy in emerging digital markets such as Brazil, United Arab Emirates and Asia Pacific.

    “CloudSigma is a global company with ambitions to expand its presence in Europe and North America, as well as newly emerging digital markets such as Brazil, the United Arab Emirates and Asia Pacific, fueled by our existing customers’ demand,” said Bernino Lind, Chief Operating Officer, CloudSigma. “Equinix is a perfect strategic match for this expansion due to its innovative culture, proactive partnership approach, global footprint and the IBX connectivity solution.”

    7:15p
    Toronto Flooding KOs Data Center Cooling Systems

    Toronto’s leading telecom hub at 151 Front Street stayed online during last night’s flooding and power outages in the city, but cooling systems experienced an outage. (Photo: Allied REIT)

    A massive rainstorm caused widespread flooding and power outages Monday night in Toronto, which created challenges for some tenants at the city’s largest data center hub. When the utility power from Toronto Hydro went offline, the carrier hotel at 151 Front Street was able to successfully switch over to generator power. However, the building’s district cooling system experienced problems, causing the heat to rise in some data centers at the building.

    Severe thunderstorms rolled through Toronto Monday evening, dumping heavy rain that overwhelmed drainage systems. The flash floods stranded passengers in cars and subways, and knocked out power to more than 300,000 residents. The storms brought major disruption to travel and commerce throughout the Toronto area, providing the latest illustration of how increasingly intense weather systems can test the infrastructure of major North American cities.

    That included 151 Front Street, the downtown building with more than 150 telecom and data center tenants, including Equinix, PEER 1, Rogers, Telus, IBM, Cologix, Cogent and Verizon, as well as the Toronto Internet Exchange. Some telecom tenants reported scattered service outages on Twitter, but most data center providers were able to continue service as they watched the rising temperature in their server rooms.

    “Due to the current severe weather conditions in the Toronto area, our 151 Front Facility experienced power issues which required the building to transfer to generator power,” PEER 1 reported Monday night. “The building’s chill loop provider also experienced power issues which impacted the cooling capacity of our AC units. Due to this our Data Center experienced higher than normal temperature levels. The building is currently back on commercial power and the chill loop provider is also restoring to normal operational levels.”

    151 Front Street is among the Toronto buildings served by district cooling from Enwave’s Deep Lake Water Cooling (DLWC) system, which taps cold water from the depths of Lake Ontario and uses it to chill a water supply loop serving downtown buildings. That system was affected by the flooding, leaving some parts of 151 Front without cooling. Enwave brought in an emergency chiller to provide some relief until the system was restored.


