Data Center Knowledge | News and analysis for the data center industry
 

Monday, August 7th, 2017

    12:00p
    IBM’s Tape Breakthrough May Change Cold Storage – But Not Anytime Soon

    It’s not uncommon to think that tape is dead — or at least on the way out. But this venerable storage medium keeps on going, with periodic jumps every couple of years increasing the amount of data that can be stored on a square inch of the magnetic material. The latest record, set recently by IBM Research, takes it to 201Gb per square inch, on a sputtered tape from Sony.

    Areal density is how we measure the amount of data, in bits, that can be stored in a specific area of computer storage (or in a specific volume, for 3D storage media like flash). That 201Gb is more than twenty times the areal density of current commercial tape – a huge jump in the amount of data we’ll be able to squeeze into a cartridge.

    Sony’s sputtering technique deposits a multilayer structure of magnetic and other particles more densely onto thinner tape with a smoother surface. There are also new signal processing techniques for reading data, at a density of 818,000 bits per inch, significantly improved head positioning, and new read heads that take advantage of the physical improvements to the tape. All that adds up to a track density of 246,200 tracks per inch.
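    Those two figures multiply out to the headline number. As a quick illustrative check in Python – the variable names below are informal labels, not anything from the IBM paper:

        # Rough sanity check: areal density = linear bit density x track density,
        # using only the figures quoted above.
        linear_density_bpi = 818_000    # bits per inch along a track
        track_density_tpi = 246_200     # tracks per inch across the tape

        areal_density_bits = linear_density_bpi * track_density_tpi
        print(f"{areal_density_bits / 1e9:.1f} Gb per square inch")  # ~201.4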

    Not Coming Soon to a Data Center Near You

    But if you’re hoping to see this new-generation tape arrive in your data center sometime soon, don’t hold your breath.  We spoke to Roy Cideciyan, one of the authors of the IBM research paper announcing the breakthrough, who expects a decade or so will pass before basic research like this is turned into a product. Technologies in the currently shipping TS1150 tape series, released in 2014, took eight years to go from the lab to commercial tape drives.

    This latest breakthrough does, however, put tape firmly on the growth roadmap laid out by INSIC, the Information Storage Industry Consortium, which currently projects areal density growing at 33 percent per year, reaching around 90Gb per square inch by 2025 and 270Gb per square inch in 2028. What looks at first like a huge leap in tape technology is really just the latest step in an ongoing improvement in storage capacity.
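    To get a feel for what 33 percent annual growth implies, here is a rough compound-growth sketch; the ~10Gb-per-square-inch starting point is an assumption for illustration only (roughly one-twentieth of the 201Gb demo, per the comparison above), not an INSIC figure:

        # Illustrative compound-growth projection at 33% per year from an assumed
        # ~10 Gb/in^2 base for today's commercial tape (assumption, not an INSIC number).
        base_gb_per_sq_in = 10.0
        annual_growth = 0.33

        for year in (2025, 2028):
            projected = base_gb_per_sq_in * (1 + annual_growth) ** (year - 2017)
            print(year, f"~{projected:.0f} Gb per square inch")
        # Prints roughly 98 (2025) and 230 (2028) -- the same ballpark as INSIC's
        # ~90 and ~270 Gb/in^2 roadmap figures quoted above.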

    Even so, it’s an impressive amount of data; thanks to the thinner tape, you can expect to fit 330TB of raw, uncompressed data into a cartridge the size of a familiar LTO cartridge. By comparison, a current LTO-7 cartridge holds 6TB natively, or around 15TB with 2.5:1 compression, and the highest-capacity enterprise cartridges store 15TB – still well above what a single spinning disk or flash drive holds today. With backward-compatible readers, old tape formats will sit alongside newer, higher-density tapes, reducing the risk of data loss in a transfer between formats.
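    Put in terms of today’s cartridges, a back-of-the-envelope comparison using only the raw capacities quoted above:

        # How many of today's cartridges one 330TB sputtered-tape cartridge would
        # replace, comparing raw, uncompressed capacities only.
        demo_tb = 330        # projected capacity of the sputtered-tape demo
        lto7_native_tb = 6   # current LTO-7 native capacity
        enterprise_tb = 15   # highest-capacity enterprise cartridge

        print(demo_tb // lto7_native_tb)   # 55 LTO-7 cartridges
        print(demo_tb // enterprise_tb)    # 22 enterprise cartridges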

    Waiting ten years for a new tape technology with a hefty storage upgrade could actually be a good match for the lifecycle of tape drives. Standard disk drives get replaced every three to five years, but tape drives are replaced every seven to ten years. Not only is tape “secure, reliable, energy efficient,” Cideciyan said; it is also far more dependable than most people assume. “We’re finding it three to four times more reliable than hard drives,” he said.

    The Future of Tape in the Data Center

    So where does this technology fit into the data center? “There’s lots of innovation in tape storage; it’s experiencing a renaissance,” Cideciyan suggests. It’s not just the hardware that’s being improved, he says; so are the software and the management tooling. Part of that is the move to the Linear Tape File System (LTFS), a standard that uses a metadata partition describing the files stored on a tape to make it look more like a hard drive, with transfer rates similar to hard drives and support for high-latency media in OpenStack. “It makes tape invisible to the user,” he says.

    Clive Longbottom, Service Director at QuoCirca, is more cautious about the role of tape. “Tape has its place — but that ‘place’ is getting smaller. There are certain heavily regulated verticals where long-term tape storage is still mandated, but for a general business seeking a means of creating a backup-and-restore capability, tape is now too slow and unreliable.” Alternative storage architectures are popular. “Tiered disk with cloud backup is becoming far more affordable, and the RTO-RPO difference is far better than tape,” he said.

    IBM’s research on improving how file systems operate – intended to develop ‘cognitive’ storage solutions that model users’ data needs, and store data appropriately – could fit into those architectures, though. This approach dynamically manages hot and cold data, from flash storage all the way down to tape. The research team is also working with massive scientific data sets, one of which will manage data from the world’s largest array of radio telescopes.

    Longbottom also has concerns about the amount of data that you’re going to be storing on a single cartridge. “IBM’s announcement sounds good,” he said. “It now means that one-third of a petabyte can be stored on a cartridge. That’s a lot of data. A lot of data that is one full load of eggs in one very fragile basket. Any damage to the cartridge itself or to the tape means that you’d better have multiple copies in different places.”

    In fact, IBM suggests that the very highest capacities of tape will be a better match for large-scale cold storage in cloud data centers – holding millions of photos uploaded to social networks like Facebook, or holding off-site backups for personal and business users – than for the economics of corporate on-premises data centers. The market for archive storage isn’t going away, and this new format from IBM and Sony should keep tape viable for quite some time.

    3:00p
    VMware Applies Polish in Latest vSphere Release

    The first update to VMware’s vSphere 6.5, the hypervisor and virtualization management platform, substantially expands scalability for large data centers and provides more value for smaller organizations.

    The update, released last month, proves that vSphere 6.5 is stable and ready for IT organizations to deploy if they haven’t done so already, said Martin Yip, VMware’s product line marketing manager. Any issues that have been discovered have been fixed in patches, and all the patches are rolled into the update.

    “Although this is just an update release, it’s a huge milestone,” he said. “It’s been production-tested for eight months, and we’ve gotten positive customer feedback that this is a robust release.”

    When VMware initially released vSphere 6.5 last November, it touted new features like improved security, including VM-level encryption; a simplified user experience through its new HTML5-based graphical user interface; and predictive analytics that automates data center management.

    vSphere 6.5 Update 1 vastly improves scaling. The number of powered-on VMs that can run on a vSphere Domain has increased from 30,000 VMs to 50,000 VMs. Furthermore, a Domain can now handle 5,000 hosts, up from 4,000. The number of registered VMs a Domain can handle has gone from 50,000 to 70,000.

    The company has also improved scalability of its vCenter central management tool. Organizations can now run 15 vCenter servers per Domain, up from ten.
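    Taken together, the new Domain limits look like this – a quick summary script whose labels are informal, with the numbers taken straight from the figures above:

        # vSphere Domain scale limits: 6.5 GA vs. Update 1, per the figures above.
        limits = {
            "powered-on VMs per Domain": (30_000, 50_000),
            "registered VMs per Domain": (50_000, 70_000),
            "hosts per Domain": (4_000, 5_000),
            "vCenter servers per Domain": (10, 15),
        }

        for name, (old, new) in limits.items():
            print(f"{name}: {old:,} -> {new:,} (+{(new - old) / old:.0%})")
        # +67%, +40%, +25% and +50%, respectively.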

    Most customers don’t reach those scale limits, but the improved scalability is important to large data center operators, VMware executives say.

    However, Adam Eckerle, VMware’s senior technical marketing architect, said the improved scalability of vCenter servers per Domain is significant for both small and large enterprises.

    In the past, a small retail chain could’ve had 15 locations and managed those locations with two separate Domains. “Now, by increasing it from 10 vCenter servers per Domain to 15, they can have one single management point and link those vCenter servers together,” he said.

    Gary Chen, research manager for cloud and virtualization system software at IDC, said VMware increases the scalability of its virtualization software with every new release, so the improvement itself is nothing new, and customers expect it. Still, better scalability can benefit customers and translate into cost savings.

    “It could mean more efficiency for them, and in some cases, they can get more use out of their VMware investment,” he said.

    Other notable upgrades:

    • VMware has upgraded the HTML5-based vSphere Client, which now supports up to 90 percent of general workflows. Early versions of the client only allowed IT administrators to power VMs on and off and edit their settings. With Update 1, the company has added more administrative tasks, such as the ability to add and remove ESX hosts and to adjust resource pool and networking settings. “We’ve added many host-centric and network-centric features,” Eckerle said.
    • VMware has improved the vCenter Server Foundation offering, the version of vCenter aimed at small organizations. In the past, small IT shops that purchased vCenter Server Foundation could only use it to manage three hosts; VMware has now expanded that to four. “Small customers have been saying, ‘If we could manage one more host, it would make all the difference,’ so we are giving them a 33 percent increase in capacity,” Yip said.
    • Improved support by hardware and software vendors. For example, because server vendors have improved their support for VMware’s Proactive HA feature, IT organizations that purchase those servers can now use vCenter to monitor the health status of server components, such as fans, memory and power supplies.
    • An upgrade path for users of vSphere 6 Update 3 to migrate to vSphere 6.5.
    • General support for vSphere 6.5 has been extended to a full five years, so support will be available until Nov. 15, 2021.

    Ultimately, Chen from IDC said, the latest update from VMware is important, but it is still an incremental release.

    “It’s a lot of refinement, but they are welcome enhancements,” he said.

    5:15p
    NTT’s Dimension Data Said to Attract Interest From MTN, Vodacom

    Nippon Telegraph & Telephone Corp. has attracted competing interest from MTN Group Ltd. and Vodacom Group Ltd. for the African operations the former Japanese monopoly acquired with the takeover of Dimension Data, according to people familiar with the matter.

    Didata’s management is also considering buying back the African business and relisting its shares, the people said, asking not to be identified because the matter hasn’t been made public. The main prize is Didata’s Internet Solutions unit, the people said, as the companies seek to tap growing demand for web-based services.

    MTN is weighing an offer of as much as $600 million to get Internet Solutions, the people said. The structure of a deal hasn’t yet been decided as many options are still being discussed, including a potential partnership agreement, said the people. Tokyo-based NTT, the owner of Japan’s biggest mobile-phone company, is open to approaches from interested buyers for the African business, other people said in June.

    Africa’s largest wireless carriers are seeking to expand into internet services to boost growth in their South African home market, where demand for data is growing much faster than demand for traditional voice services. The Johannesburg-based cross-town rivals also need to be able to compete with Telkom SA SOC Ltd.’s fast-growing BCX unit in winning more business customers.

    Didata “doesn’t respond to market rumor, speculation or hypothetical questions about the company or any of its affiliates or subsidiaries,” spokeswoman Hilary King said. Vodacom doesn’t comment on speculation, a spokesman said. A spokesman at MTN declined to comment.

    NTT bought all of Didata’s stock for 2.1 billion pounds ($2.7 billion) in 2010, which at the time was its second-biggest purchase of an overseas asset. Didata, which has operations across the globe, provides live data for the Tour de France cycle race as well as information-technology services from outsourcing to supplying computer and networking equipment.

    6:32p
    As Tech Execs Rally Around Kushner, Government Cloud Adoption Still Has Ways to Go

    Brought to you by IT Pro

    Six weeks after a group of tech executives traveled to Washington, D.C., for a June meeting with President Donald Trump and his advisers, Jared Kushner, the president’s son-in-law, and his team are starting to work with companies, including Apple and Google, to get government to use technology more effectively.

    According to a report by Recode, Kushner and other top advisers had a private call last week with major tech companies who are members of the American Technology Council, asking for input to modernize government IT. One of the ideas on the table is a system where “leading tech engineers do ‘tours of duty’ advising the U.S. government on some of its digital challenges,” Recode says.

    Though details are scarce at this point, that idea is not a new one. The U.S. Digital Service has run a similar program where it recruits “top technologists for term-limited tours of duty with the Federal Government.”

    The American Technology Council, which was formed in May, is led by Kushner’s White House Office of American Innovation (WHOAI), a small team focused on bringing “new thinking and real change” to the country’s toughest problems, according to a report by Politico.

    So far, opinion about the effectiveness of WHOAI is mixed, with critics worried that Kushner’s split focus will mean critical projects – like moving more agencies to the cloud – get left behind. On the other hand, proponents praise his ability to “spot problems, figure out who’s already working on it, and identify-then-provide whatever help they need to do a better job” – an approach that doesn’t cut into federal IT budgets. One of WHOAI’s tangible wins so far is fixing the VA’s electronic health records system.

    “Just as well-known tech companies use rapid experimentation to test new approaches, government can too, using existing resources,” a report by Brookings that looked at ways Kushner can modernize government said. “For example, the Department of Education ran quick, virtually cost-free tests to see which email messages worked best in reaching borrowers in default on student loans. Within a few weeks, it had the answers. It used that information to help thousands of individuals shift to more manageable repayment plans.”

    “As long as Kushner can keep persuading agency secretaries and CEOs and civil servants to get together and talk, he has a shot at making progress on some of the most intractable issues that have long stymied Washington, from federal agency mainframes to well-maintained roads and bridges,” Politico said.

    Arguably one of the biggest technology initiatives at the federal level that has carried over into this administration is the shift to cloud computing. Under former President Barack Obama, the government adopted a Cloud-First Initiative in 2011, under which agencies were encouraged to adopt cloud-based services in lieu of expensive on-premises data centers. Alongside that initiative, the government has been consolidating its data center footprint as part of its Data Center Optimization Initiative (DCOI), a move that has produced cumulative savings of $2.2 billion from 2012 to 2017. The government hopes those savings will reach $2.7 billion by 2018.

    Indeed, cost continues to be a primary driver for adopting cloud in the public sector. According to a report by MeriTalk last year, primary motivations for moving to the cloud are cost savings (46 percent), increased flexibility (42 percent) and legacy systems reaching their end of life (35 percent). This last point is particularly interesting as feds continue to spend more than 80 percent of their time and budgets on legacy system life support, according to a separate report by MeriTalk.

    According to government IT services provider CSRA, there are five key roadblocks that are preventing federal cloud adoption. These are: concerns around cloud security; organizational culture and maturity; lack of readiness to adopt cloud technologies; perceived lack of control; and immaturity of federal procurement models.

    To its credit, the government does acknowledge that roadblocks exist, and is slowly making headway on removing some of them. For example, a lot of the cultural barriers to cloud exist because of a lack of education. In a June report the USDS outlines its efforts in providing digital service training to help the government “become a smarter buyer of technology once it establishes a specialized procurement workforce that understands the digital and IT marketplace, agile software development methodology, cloud hosting, and the ’DevOps’ practice of integrating system operations with application development teams.”

    If WHOAI is going to be successful at modernizing government IT, public-private partnerships are just the start. The cultural change needed within the government to fully embrace technology could be what makes or breaks the initiative’s momentum, and it is one that IT pros will be watching play out over the coming months.

    7:29p
    Cisco Says it Lost Some Meraki Customer Data

    Brought to you by MSPmentor

    A botched configuration change to Cisco’s Meraki object storage has resulted in the loss of any user data that was uploaded during a half-day window last Thursday, the IT equipment and services giant confirmed.

    San Francisco-based Meraki offers cloud managed IT solutions, including wireless, switching, security and enterprise mobility management.

    Cisco said it has fixed the problem and is trying to help customers determine the extent of the data loss.

    “On August 3rd, 2017, our engineering team made a configuration change that applied an erroneous policy to our North American object storage service and caused certain data uploaded prior to 11:20AM Pacific time on August 3 to be deleted,” the Meraki advisory said. “The issue has since been remediated and is no longer occurring.”

    See also: Supply Chain Blunder Means Cisco Servers Could Lose Data

    “In the majority of cases, this issue will not impact network operations, but will be an inconvenience as some of your data may have been lost,” the statement continued. “Your network configuration data is not lost or impacted – this issue is limited to user-uploaded data.”

    Cisco said it would provide an update by the end of today with details about what resources would be made available to help “restore functionality.”

    “Our engineering team is working over the weekend to investigate what data we can recover, as well as what tools we can build to help our customers specifically identify what has been lost from their organization,” the company said. “We recommend waiting until we make these tools available prior to restoring files as we will be trying to design our tools to help our customers save time.”

    The company added: “We are deeply regretful for this error and apologize for the inconvenience caused.”

    Cisco recommends that customers with issues email support@meraki.com or call (415) 432-1203.

    Visit the Cisco Meraki advisory for a full list of affected services.

