Data Center Knowledge | News and analysis for the data center industry - Industry's Journal
 

Monday, June 26th, 2017

    2:07p
    Anthem Agrees to $115 Million Settlement Over Data Breach

    (Bloomberg) — Anthem Inc. agreed to pay $115 million to resolve consumer claims over a 2015 cyber-attack that compromised data on 78.8 million people, marking what attorneys in the case called the largest data-breach settlement in history.

    The proposed accord, which would end class-action lawsuits filed in several states, requires approval from a federal judge in San Jose, California. Anthem sells coverage under the Blue Cross and Blue Shield brand in 14 states.

    “We are pleased to be putting this litigation behind us, and to be providing additional substantial benefits to individuals whose data was or may have been involved in the cyber-attack and who will now be members of the settlement class,” the Indianapolis-based company said Friday in a statement.

    Anthem didn’t admit any wrongdoing in the settlement.

    The company said in February 2015 that hackers obtained data on tens of millions of current and former customers and employees that led to a probe by the Federal Bureau of Investigation. The information compromised included names, birthdates, Social Security numbers, medical IDs, street and e-mail addresses and employee data, including income, Anthem said at the time.

    Alternative Compensation

    As part of the proposed settlement, $15 million would be set aside to pay for out-of-pocket expenses incurred as a result of the data breach.

    The proposal filed Friday would require Anthem to establish a fund to buy at least two years of credit monitoring services for the class to help protect them from fraud.

    For individual class members who already have their own credit-monitoring services and don’t want to enroll in the settlement’s plan, the settlement provides alternative compensation of as much as $50 per class member.

    The plan also requires Anthem to spend an undisclosed amount to help protect members’ personal information over the next three years.

    In 2015, after the breach was made public, Anthem established a website, anthemfacts.com, where people affected by the breach could sign up for two years of credit monitoring.

    The case is In re Anthem Inc. Data Breach Litigation, 5:15-md-02617, U.S. District Court, Northern District of California (San Jose).

    3:00p
    Object-Based Storage Cost-Effective for Unstructured Data

    Erik Ottem is Director of Product Marketing, Data Center Systems, Western Digital

    Editor’s Note: In the first part of this two-part series, we explored how Object-Based Storage (OBS) cost-effectively delivers data at scale and why it is replacing traditional file-based Network-Attached Storage (NAS) architectures in today’s data centers.  In this second part, we present the key features associated with OBS: extreme scalability, advanced data availability and durability, and simplified data management.

    Extreme Scalability

    OBS platforms operate on a flat address space, and as such, massive scalability is achieved without the overhead associated with file system hierarchies, data look-ups, or block reassembly.  With traditional file storage architectures, indexes enable scaling beyond a single folder, but as the number of files increases, the file hierarchy and associated overhead become cumbersome, limiting performance and scalability.  Instead of indexes, OBS uses metadata to aggregate objects into buckets (or other logical associations), which delivers more efficient capacity scaling that equates to virtually unlimited data at scale.
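    The contrast between hierarchical lookup and flat, metadata-keyed addressing can be sketched with a toy in-memory model.  This is purely illustrative; the class and method names are hypothetical and not taken from any real OBS product:

```python
# Minimal in-memory sketch of a flat object store: every object lives
# under a single key in one flat address space, and metadata -- not a
# directory tree -- groups objects into buckets or other associations.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat address space: key -> (data, metadata)

    def put(self, key, data, **metadata):
        # One hash-table insert; no directory traversal is needed.
        self._objects[key] = (data, metadata)

    def get(self, key):
        # Constant-time lookup regardless of how many objects exist.
        data, _ = self._objects[key]
        return data

    def list_bucket(self, bucket):
        # "Buckets" are just a metadata attribute, not a folder hierarchy.
        return [k for k, (_, md) in self._objects.items()
                if md.get("bucket") == bucket]

store = ObjectStore()
store.put("obj-0001", b"sensor reading", bucket="telemetry", year=2017)
store.put("obj-0002", b"invoice scan", bucket="billing")
print(store.list_bucket("telemetry"))  # ['obj-0001']
```

    Because every lookup is a direct key access, the cost of a read does not grow with the depth of any hierarchy, which is the property the paragraph above attributes to the flat address space.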

    Advanced Data Availability

    In traditional storage architectures, RAID (Redundant Array of Independent Disks) is a common approach to ensure that data is available and accurate when it is read.  Striping data and parity across multiple drives protects against the failure of one or two of them; however, once a failure occurs, performance drops dramatically during the rebuild operation, and the likelihood that other drives in the group will fail increases as well.  RAID rebuilds can take hours, or even days, and may require immediate replacement of the failed drive.  If an unrecoverable read error occurs during a rebuild, data is permanently lost, placing business data and productivity at risk.

    With OBS, data availability is achieved through advanced erasure coding – a technique that divides an object, along with computed parity information, into chunks and distributes them across the local storage pool.  Erasure coding best practices require that no single drive hold more than one chunk of an object, and that no single node hold more chunks than the object can afford to lose.  This approach ensures data availability even if multiple components fail, since only a subset of the chunks is needed to rehydrate the data.  There is no rebuild time or degraded performance, and failed storage components do not need to be replaced at the time of the read error, but when it is convenient.  Rather than focus on hardware redundancy, OBS focuses on data redundancy.
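    The rebuild-from-a-subset idea can be demonstrated with the simplest possible erasure code: XOR parity.  This is a toy sketch, not what production OBS systems use (they rely on Reed-Solomon codes that tolerate several simultaneous losses), but the reconstruction principle is the same:

```python
# Toy erasure-coding sketch using XOR parity: k data chunks plus one
# parity chunk; any single lost chunk can be rebuilt from the survivors.

def xor_bytes(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def encode(data, k):
    # Split data into k equal chunks (zero-padded), then add one parity
    # chunk equal to the XOR of all data chunks.
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(size * k, b"\0")
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [xor_bytes(chunks)]

def rebuild(chunks, lost_index):
    # The missing chunk is simply the XOR of all surviving chunks.
    survivors = [c for i, c in enumerate(chunks) if i != lost_index]
    return xor_bytes(survivors)

chunks = encode(b"object payload spread across the storage pool", k=4)
original = chunks[2]
assert rebuild(chunks, lost_index=2) == original  # chunk recovered
```

    Note that only the affected chunk is reconstructed; no whole-drive rebuild is involved, which is the contrast with RAID drawn above.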

    An OBS system achieves data availability by geographically spreading data across three locations, but unlike the triple-mirroring data replication model, the total data is not replicated to each location.  Rather, roughly one-third of each object's coded data is stored in each location.  This approach not only reduces network traffic: maintaining this level of data availability incurs only about 67 percent capacity overhead, whereas triple mirroring requires replicating, storing, and managing 100 percent of the data at each of three locations.  The geo-spread model provides very high data accessibility and resiliency at a substantially lower cost in equipment and management than traditional triple-mirror data replication.
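    The cost difference is simple arithmetic.  Assuming the figures quoted above (a three-site spread with about 67 percent erasure-coding overhead), the raw capacity needed to keep 100 TB of usable data available works out as follows:

```python
# Raw capacity needed to keep 100 TB of usable data available
# across 3 sites, under the overhead figures quoted in the article.
usable_tb = 100

# Triple mirroring: a full copy at each of 3 sites -> 3x raw capacity.
mirror_raw = usable_tb * 3

# Geo-spread erasure coding at ~67% overhead -> ~1.67x raw capacity,
# with roughly one-third of the coded chunks at each site.
ec_raw = usable_tb * 1.67

print(mirror_raw)            # 300 TB raw for triple mirroring
print(round(ec_raw, 1))      # 167.0 TB raw for geo-spread erasure coding
print(round(ec_raw / 3, 1))  # ~55.7 TB per site
```

    So for the same usable capacity, the erasure-coded geo-spread stores a little over half the raw bytes that triple mirroring does.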

    Advanced Data Durability

    Data durability refers to long-term data protection.  A media failure such as bit rot, where a portion of the drive surface becomes unreadable, corrupts data and makes it impossible to retrieve it in its original, unaltered form.  Protecting chunks as they lie dormant on disk is of paramount importance in enterprise storage: simply protecting against a complete hard drive failure (as with RAID) does not protect against the gradual failure of bits stored on magnetic media.

    When combined with appropriate data scrubbing technology, OBS guards against bit failures: if a given chunk becomes corrupt, a replacement chunk can be constructed from the parity information stored in the remaining chunks that constitute the object.  It isn’t necessary to rebuild or replace an entire drive, just the affected data.  The combination of erasure coding with data scrubbing technology achieves extreme durability.  Some systems achieve up to 17 nines of data durability, or in simpler terms, for every 100,000 trillion (10^17) objects stored, on average only one would be unreadable.  This is why OBS is widely used in hyperscale data centers and cloud computing environments to meet the highest data durability requirements.
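    The “nines” figure translates directly into an expected loss rate.  Treating durability as an independent per-object annual probability (a common simplification), the scale works out as:

```python
# Convert "N nines" of durability into an expected object-loss rate.
def loss_probability(nines):
    # 17 nines -> durability 0.99999999999999999 -> loss chance 1e-17.
    return 1 / 10 ** nines

p = loss_probability(17)
print(p)  # 1e-17

# Expected losses when storing 100,000 trillion (1e17) objects:
print(round(1e17 * p, 6))  # 1.0
```

    By comparison, 11 nines (a figure often quoted for cloud object stores) would imply one expected loss per 100 billion objects; 17 nines is a million times stronger.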

    Simplified Data Management

    Unlike the hierarchical file storage used in NAS environments, OBS has a flat architecture known as a namespace, which collects all the objects held within the object store, even objects that reside on disparate storage hardware and in different locations.  The namespace provides an effective, cost-efficient way to manage multiple racks of storage as one entity, enabling a single, simplified management solution for all data.  Although geo-spreading distributes data across multiple storage systems in various locations, each operation is performed only once and is invisible to the end user.  A single namespace makes it easier to manage one system spanning multiple locations than to manage multiple sites individually.
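    A single namespace fronting several storage sites can be sketched as follows.  This is a hypothetical illustration (the class names and the single-copy hash placement are assumptions for brevity; a real OBS system would erasure-code each object across sites as described earlier):

```python
# Sketch of one flat namespace spanning multiple storage sites.
# Deterministic hash placement picks a site per key, so the caller sees
# one logical store even though objects live in different locations.
import hashlib

class Site:
    def __init__(self, name):
        self.name = name
        self.objects = {}

class Namespace:
    def __init__(self, sites):
        self.sites = sites

    def _site_for(self, key):
        # Hash the key and map it to a site; put and get agree because
        # the placement function is deterministic.
        digest = hashlib.sha256(key.encode()).digest()
        return self.sites[digest[0] % len(self.sites)]

    def put(self, key, data):
        self._site_for(key).objects[key] = data

    def get(self, key):
        # One lookup against one logical entity, wherever the data lives.
        return self._site_for(key).objects[key]

ns = Namespace([Site("us-east"), Site("eu-west"), Site("ap-south")])
ns.put("report-2017.pdf", b"quarterly report")
assert ns.get("report-2017.pdf") == b"quarterly report"
```

    The administrator manages the `Namespace`, not the individual sites, which is the “single management solution” benefit described above.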

    Summary

    When one looks at the exponential growth in data, one can easily see that the challenge of storing that data has become significant.  Object-Based Storage offers key benefits for today’s data centers as an alternative to traditional storage solutions.  Thanks to its high-density, highly distributed nature, more efficient data protection, and simplified management structure, OBS lets data centers support data at scale at lower capital and operational expense than traditional storage architectures.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    5:29p
    How to Change Executive Perceptions Around Digital Transformation

    Brought to You by The WHIR

    We’ve come to a point where almost every organization out there has become a digital entity. The extent to which they realize this indicates just how far along they are in their own digital transformation (DX) journey. Something I’ve learned over the past couple of years is that the digital journey can’t be defined by partners, vendors, or even specific technology solutions.

    Digital transformation journeys are being defined by the organization and the end-user. And, it’s up to a capable DX partner to help the organization build that vision.

    This means that a lot of these paths into the digital era are unique and custom. However, this doesn’t mean that a path has to be complex or even expensive. IDC has been chronicling the emergence and evolution of the digital ecosystem, built on cloud, mobile, big data/analytics, and social technologies, for nearly a decade. Over the past several years, the adoption of these technologies has accelerated as enterprises undergo digital transformation on a massive scale. IDC predicts that digital transformation will attain macroeconomic scale over the next three to four years, changing the way enterprises operate and reshaping the global economy. This is the dawn of the “DX Economy.”

    “We are at an inflection point as digital transformation efforts shift from ‘project’ or ‘initiative’ status to strategic business imperative,” said Frank Gens, Senior Vice President and Chief Analyst at IDC. “Every (growing) enterprise, regardless of age or industry, must become ‘digital native’ in the way its executives and employees think, what they produce, and how they operate. At the same time, 3rd Platform technology adoption and digital transformation is happening much faster than most expected and early competitive advantages will go to those enterprises that can keep pace with the emerging DX economy.”

    An IDC study offers ten predictions to help people understand the major trends in this new DX economy. Let’s look at three key predictions:

    1. By 2020, 50 percent of the Global 2000 will see the majority of their business depend on their ability to create digitally-enhanced products, services, and experiences. For industry leaders, the fastest revenue growth will come from information-based products and services. To facilitate the development of these products and services, worldwide investment in DX initiatives will reach $2.2 trillion in 2019, almost 60 percent more than in 2016.
    2. By 2020, 67 percent of all enterprise IT infrastructure and software spending will be for cloud-based offerings. Today’s “cloud-first” strategy is already moving toward “cloud-only.” The cloud will morph to become more distributed, more trusted, more intelligent, more industry-focused, more channel-mediated, and more concentrated with the top 5 IaaS/PaaS vendors controlling at least 75 percent share by 2020.
    3. By 2020, over 70 percent of cloud services providers’ revenues will be mediated by channel partners/brokers. By 2018, IDC expects major channel partners to have transitioned at least one third of their business from hardware sales to cloud services sales/brokering.

    The point is very simple: those organizations which leverage this digital revolution and the digital economy will be the ones to recognize real-world competitive advantages.

    So, what can you do to help change executive perceptions around the digital revolution? And, most of all, how can you get on the journey yourself?

    From working with a variety of verticals and customers, here are three great ways to get the conversation around digital transformation flowing:

    1. You’re already a digital entity; it’s up to you if you want to benefit from it. Take a look at your users. What kind of devices are they using? Do they have two or more ways to connect into business systems? Most importantly, are you doing anything to enable or optimize their experience? Your users are already very much a part of the digital world, and you can optimize how they interact with core business data and apps. Every year, we see a reduction in the number of laptops and traditional desktops being deployed. This is a direct reflection of how the business world and the end-user are evolving. Digitally-enabled organizations will find ways to optimize this mobile workforce, enable greater levels of cloud services utilization, and see where their business can continue to evolve. Legacy IT and business processes will only act as anchors in our quickly evolving digital economy. From the users all the way down into your data center, you can implement digital strategies that allow for automation, better user enablement, greater levels of density, better business economics, and more cloud.
    2. Your competition is looking at the digital economy and finding better ways to compete. You better believe they are. They’re looking at ways they can optimize hardware and software consumption, ways they can optimize user experiences, and ways they can leverage cloud services for even more business agility. Digital transformation isn’t a binary process; it’s not on or off. Rather, it’s a truly evolving strategy that morphs around your users, your business, and the market. Digitally-enabled market strategies will provide better competitive opportunities. This can be changing the way you manage your call center (by creating remote users and easier app access) all the way to enabling a consumption-based licensing and hardware model (where you never again pay for what you don’t use). All of these are digital strategies and considerations.
    3. You don’t have to go on the digital journey alone; just define your path, and a good partner can help get you there. I mentioned this earlier – it’s up to you to define your digital journey, and it’s up to a good partner to help get you there. This can be the creation of a hybrid cloud ecosystem or the enablement of a new mobile workspace solution. A good partner can sit down with you, understand your business and your users, and then apply a digital strategy that fits your unique use cases. The most successful digital strategies begin around the business, areas of improvement, and ways to optimize user interaction with core applications. Simply put – you don’t have to know off the bat what your digital strategy will look like, but you have to start building the path. Good ecosystem partners can help.

    Please don’t let this digital jumping point pass you by. We’re at a point in the market where you can actually capture all of the benefits of becoming a digitally-enabled organization. Whether you’re in healthcare, pharma, manufacturing, government, or any other vertical, you can absolutely leverage the benefits of becoming a digital entity. And remember, even though your journey is unique, it doesn’t have to be a bumpy one. Defining your own digital strategy can mean moving more into cloud, leveraging more virtualization, or even adopting better user mobility technologies. The point is that you recognize where your own digital path lies and that you begin that journey today.

    5:43p
    Beneath Red Hat’s and Oracle’s Earnings Reports

    By WindowsITPro

    There are probably a lot of smiles in Raleigh and Redwood Shores this week. Both Red Hat and Oracle turned in quarterly earnings reports that greatly beat expectations. Wall Street blessed both companies for the news, as the value of Red Hat’s stock rose by 10.6 percent in early trading after the announcement and Oracle saw a 9 percent spike.

    Although Red Hat’s gains were unexpected insofar as they beat both the company’s and Wall Street’s estimates by a long shot, they weren’t that much of a surprise. The open source company has been on a roll for a long while now.

    Red Hat reported sales of $677 million for a year-over-year increase of 19 percent, the company’s highest revenue growth rate in four years. Net income came in at $102 million, or 56 cents a share, an increase of 11 percent over the same quarter last year. According to the Motley Fool, Wall Street “would have settled for 6 percent earnings growth on 14 percent revenue gains.”

    According to CEO Jim Whitehurst, the company’s “subscription revenue for our infrastructure-related offerings, which is mainly comprised of our flagship RHEL technologies, is approaching a $2 billion annual run rate” and “our application development-related and other emerging technology subscription revenue … now has an annual run rate of over $500 million.”

    Speaking to a group of investors in the company’s earnings call, Eric Shander, the company’s executive vice president and CFO, said “there were a total of 44 deals over $1 million. Within these deals seven were greater than $5 million and four of those were greater than $10 million, including one deal greater than $20 million.”

    A look at what Red Hat is selling to raise its bottom line offers some insight into what’s going on in IT these days. Most of the company’s income, 88 percent of the total, came from subscriptions revolving around RHEL and its related technologies. Ansible, a DevOps automation engine that’s often used with Kubernetes deployments, was big, responsible for six of the quarter’s transactions of over $1 million. This included one deal valued at over $5 million — “our largest deal ever for Ansible,” according to Shander.

    Responsible for six of the million dollar plus deals was OpenStack, the cloud computing platform often deployed as infrastructure-as-a-service and used in hybrid cloud deployments. OpenShift, which is basically Red Hat’s supported distribution of Kubernetes using Docker containers and DevOps tools, was also a million dollar baby and responsible for five of the big deals.

    “The company also performed well in the public cloud,” Whitehurst told investors. “We’re seeing a continued movement to cloud, including more production workloads. I think that’s overall one of the reasons that you see growth in the RHEL numbers.”

    According to Whitehurst, a large portion of the company’s cloud income — customers taking existing subscriptions and moving them to the cloud — isn’t counted as such. “We don’t track where the subscriptions are run,” he said. “We don’t have good data there.” However, the “on demand” revenue from cloud deployments was considerable and growing. “[W]e expect to see a $200 million run rate in that business in the second quarter.”

    The cloud wasn’t so profitable for much larger Oracle, even though the company’s overall performance was stellar. This isn’t surprising, since everyone but Oracle seems to have given up on its cloud efforts long ago.

    On Wednesday, the company reported fiscal fourth quarter revenue of $10.94 billion, well above the $10.46 billion that Wall Street had been expecting. Its cloud and on-premises software revenue rose 6 percent to $8.9 billion. However, the company’s cloud-based infrastructure-as-a-service performance remained lackluster, with revenue of only $208 million. For perspective, consider that AWS earned $3.7 billion in the same period and that Red Hat earned $200 million merely on public cloud spin-ups of its software.

    When it came to the cloud, Oracle preferred to accent the positive, pointing out that in May, telecom AT&T moved its on-premises databases to Oracle Cloud, its public cloud offering. The company indicated that it expects more “big customers” to make similar moves in the near future. The company’s overall cloud-related earnings were buoyed somewhat by its acquisition in November of NetSuite, which markets web-based business software services.

    6:05p
    U.K. Parliament Maintains Restrictions After Email Hack

    (Bloomberg) — Staff at the U.K. Parliament remain hampered after a cyberattack that compromised about 90 lawmakers’ email accounts.

    To prevent the attackers from gaining access to vital data, Parliament has limited the ability of MPs to access the legislature’s computer network remotely. Those restrictions remained in place Monday. A House of Commons spokeswoman said in a statement that Parliament is planning to resume its wider IT services.

    Staff arriving at Parliament Monday morning were handed notices informing them that access to the parliamentary network was suspended for all users until they changed their passwords and put in place multi-factor authentication. This is an added layer of security that requires users to present another type of identifying information beyond just a password. In this case, Parliament is asking users for a mobile-phone number that can be used to text them a security code. The notice also reminded staff how to choose a strong password.

    The spokeswoman described the cyberattack as “sustained and determined.” Hackers gained access to lawmakers’ accounts that had used “weak passwords” that did not comply with government guidance, the spokeswoman said.

    Last week, The Times of London reported that passwords for thousands of U.K. government officials had been made available for purchase on Russian hacker forums. The passwords were believed to have been stolen in a 2012 hack of the business social network LinkedIn.

    The House of Commons said the investigation into the most recent cyberattack was ongoing. It said the attack had compromised fewer than one percent of the 9,000 accounts on Parliament’s network.

    “As they are identified, the individuals whose accounts have been compromised have been contacted and investigations to determine whether any data has been lost are under way,” the Commons statement said.

    Parliament has been working with the National Cyber Security Centre, a division of Government Communications Headquarters (GCHQ), the U.K. signals intelligence agency, to investigate the hack.

    Cybersecurity experts said the attack was a wake-up call. “This initial attack may have only affected one percent of parliamentary emails, but getting into one is enough,” said Jamie Graves, the chief executive officer of Edinburgh-based cybersecurity company ZoneFox. He said compromising a single account could allow hackers to penetrate vital government systems. “It really calls into question the security practices of government if, in 2017, we are still being compromised by the basics, such as weak passwords.”

    Neil Larkins, co-founder and chief operating officer of London-based Egress Software Technologies, which provides message-encryption technology, said that the attack demonstrated that human beings remain the biggest security vulnerability for most computer networks. “Hackers weren’t targeting the technology, but the people,” he said.

