Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, March 11th, 2015

    12:00p
    Wall Street Rethinking Data Center Hardware

    SAN JOSE, Calif. – Earlier this month, the Wall Street Journal reported that Bank of America, currently the second-largest banking firm in the country, was in the midst of a shift from traditional data center hardware to hardware designed through the collective efforts of the Open Compute Project, the Facebook-led open source data center and hardware design community. The article quoted the bank’s CTO, David Reilly, who said he planned to run about 80 percent of the bank’s workloads by 2018 on infrastructure fashioned after the web-scale systems that companies like Facebook and Google have been designing for themselves.

    But BofA is only one example of a U.S. financial services giant considering the big changeover. Other Wall Street heavyweights are testing the waters with Open Compute gear, preparing to buy it wholesale, or already running it in production. One of them is Goldman Sachs, whose engineers have been involved with OCP since the project’s early years and which has had a member on the OCP foundation’s board of directors since 2012.

    After being involved in the development of OCP hardware, firmware, and BIOS, Goldman is now gearing up for a wide-ranging deployment of OCP servers across its data centers. In an interview, Jon Stanley, senior technology analyst and vice president at Goldman’s IT and services division, said that 70 percent of the new servers the company buys this year will be OCP gear.

    The company started buying servers from Hyve Solutions, one of the official OCP hardware vendors, last year, Stanley said. But Goldman has other suppliers lined up, which is one of the things that make this approach to IT procurement so attractive. There are always multiple vendors selling hardware built to similar designs, which ensures supply continuity and drives down the price for the end user.

    Example of an Open Compute server by Hyve that Goldman Sachs has been buying for its data centers (Photo: Yevgeniy Sverdlik)

    ‘Inevitable Thing to Happen’

    The rise of something like OCP was an “inevitable thing to happen,” Grant Richards, Goldman’s managing director of global data center engineering, said while sitting on a panel of IT infrastructure heads from multiple major financial institutions at the Open Compute Summit in San Jose, California, Tuesday.

    The usual process, in which vendors like HP or Dell design proprietary boxes and have them manufactured in Asia before they are shipped to customer data centers, is slowly running its course. The latest sign was HP’s announcement this week that it had joined OCP and launched a line of OCP-compliant servers.

    Goldman had to make a lot of adjustments to be able to “consume” OCP hardware, which comes with far less vendor involvement beyond delivery of the boxes themselves than Richards’ department was used to. Goldman changed as a company as a result. Many enterprise IT shops still have organizational barriers that prevent them from deploying something like OCP infrastructure at scale, and whether it is actually cost-competitive with “incumbent” gear is a matter of some controversy. But if more incumbent vendors make announcements similar to HP’s, other end users won’t face a learning curve as steep as the one Goldman did.

    Fidelity Looks Beyond Servers

    Goldman is not the only pioneer financial services company driving open data center hardware forward. Another firm that’s been involved with OCP from its early days is Fidelity Investments. Like Goldman, Fidelity has been actively participating in development of the OCP specs and has been testing OCP servers in its data centers for several years. The company may now be considering a deployment at scale.

    “I’m starting to see equipment that we can absorb,” Bob Thurston, director of integrated engineering at Fidelity, said on the panel. “This year particularly could be a very good turning point.”

    Fidelity has made some substantial contributions to OCP and continues to do a lot of work. One of its biggest contributions was the “Bridge” rack, which can accommodate both traditional 19-inch IT gear and the 21-inch chassis used in some of the OCP designs.

    One big ongoing OCP project at Fidelity is called the Open Sensor Network, which aims to bring some data center infrastructure management intelligence to the Bridge rack. Thurston’s team has designed its own sensors that measure temperature, humidity, and the amount of particulate matter in the air, and detect when a cabinet door opens or closes. The sensors feed data into Raspberry Pi computers, the tiny low-cost devices powered by ARM chips, but the ultimate goal is to store the sensor data on a Hadoop cluster and write analytics applications that use it to help improve cooling and power efficiency.
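
    For a rough sense of what the collection side of such a pipeline could look like, here is a minimal Python sketch, assuming a hypothetical read_sensor() helper and a local spool file that a separate job later ships to HDFS; none of these names or values come from Fidelity’s actual implementation.

        import json
        import time
        from datetime import datetime, timezone

        SPOOL_FILE = "/var/spool/opensensor/readings.jsonl"  # hypothetical path

        def read_sensor(rack_id):
            """Placeholder for polling a rack-mounted sensor board.
            A real implementation would talk to the hardware over I2C or serial."""
            return {
                "rack_id": rack_id,
                "temperature_c": 24.1,
                "humidity_pct": 41.0,
                "particulates_ug_m3": 7.2,
                "door_open": False,
            }

        def collect(rack_ids, interval_s=60):
            """Append one timestamped JSON record per rack to a local spool file.
            A separate job would periodically ship this file to HDFS for analytics."""
            while True:
                with open(SPOOL_FILE, "a") as spool:
                    for rack_id in rack_ids:
                        record = read_sensor(rack_id)
                        record["ts"] = datetime.now(timezone.utc).isoformat()
                        spool.write(json.dumps(record) + "\n")
                time.sleep(interval_s)

        if __name__ == "__main__":
            collect(["rack-01", "rack-02"])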

    Other Giants More Than Curious About OCP

    Capital One, another major U.S. banking firm, also recently became a member. “Capital One’s gone to open source in a pretty significant way, and I think we’re going to do that for infrastructure,” Brian Armstrong, director of Open Compute and next-gen infrastructure at the company, said during the panel session.

    The company has just joined, so what exactly its involvement in the open source project will look like remains to be seen. Right now, Armstrong and his team are figuring out where they can contribute and how to start contributing as soon as possible, he said.

    Others on the financial services panel included Matthew Liste, managing director of cloud development at JP Morgan Chase (currently the largest banking firm in the U.S.), and Justin Erenkrantz, head of compute architecture at Bloomberg, the big financial information services company. Both companies are looking at ways to integrate Open Compute hardware into their environments.

    As financial services companies morph into technology companies, they are focused more and more on driving down the cost of their IT infrastructure while increasing what that infrastructure can do. IT and data center infrastructure is where the interests of the internet industry and those of the banking industry (and almost every other industry) are really similar.

    The internet giants built the stuff for their own needs, so it will take some more time for the Open Compute ecosystem to produce technologies that can be adapted to a wider range of users. We’re already seeing signs of progress, but this is only the beginning of what many say will be a complete transformation of the way data center hardware is designed, produced, sold, and consumed.

    3:35p
    Data Migration Strategies: Retiring Non-Production Backup Platforms

    Jim McGann is vice president of information management company Index Engines. Connect with him on LinkedIn.

    Many companies choose to move to a new backup platform that provides better functionality, support or simply superior integration within their storage environment. Others have inherited non-production backup environments through a merger or acquisition.

    Either way, many companies must maintain a legacy backup software instance in order to continue to access aged content on tape for legal or compliance purposes.

    But the benefits outweigh the urge to procrastinate: once data with business value is migrated off legacy tape, there is no need to maintain this legacy environment. This results in cost savings through the retirement of the old backup software maintenance as well as data center resources and management overhead. It also provides a key opportunity to decide what data needs to be preserved and what is redundant, outdated, or trivial, which is something your legal and records teams should have a grip on.

    There are three recommended backup data migration strategies that enable intelligent management of, and access to, relevant data for legal and compliance needs. The method ultimately chosen depends on the industry and company policies that determine what content has business value, and is best decided by working with legal and records management.

    Single Instance of All Data

    For organizations that cannot determine what should be preserved and what no longer has value, the solution is to migrate a single instance of the legacy backup data from highly redundant tape or disk into an accessible and manageable online archive.

    This allows legal and records teams to manage the data going forward, determining retention periods and purging what is no longer required. For IT organizations it represents a savings in offsite tape storage, as tapes can be remediated once the migration is complete. It also saves ongoing tape restoration costs and provides more efficient support for eDiscovery and compliance requirements.

    Single Instance Email, Specific File Types

    Many organizations are only concerned with legacy email or specific file types (e.g., PDF, Excel), as this content contains important corporate records or sensitive communications that must be preserved and archived.

    Preserving a single instance of email or specific files from legacy backups is a much smaller subset and simplifies the migration process, especially if you can define a date range and avoid extracting data that has outlived its retention requirements. This data can then be managed according to existing legal hold and retention policies, and content that no longer has value can be purged.
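
    As an illustration of the kind of filtering involved (not any particular vendor’s tooling), here is a minimal Python sketch that keeps a single instance of each file of a chosen type within a date range, deduplicating on a content checksum; the catalog-entry fields, extensions, and dates are hypothetical placeholders.

        from datetime import date

        KEEP_EXTENSIONS = {".pst", ".msg", ".pdf", ".xlsx"}   # email and file types to preserve
        RANGE_START, RANGE_END = date(2008, 1, 1), date(2014, 12, 31)

        def single_instance(catalog_entries):
            """Yield one entry per unique checksum, restricted to the types and
            date range above. Everything else has outlived its retention need."""
            seen = set()
            for entry in catalog_entries:           # entry: dict with 'path', 'ext', 'mtime', 'checksum'
                if entry["ext"].lower() not in KEEP_EXTENSIONS:
                    continue
                if not (RANGE_START <= entry["mtime"] <= RANGE_END):
                    continue
                if entry["checksum"] in seen:       # duplicate copy on another tape
                    continue
                seen.add(entry["checksum"])
                yield entry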

    Selective Culled Dataset

    The most efficient method of migrating data from backup images is using a culled dataset. A culled dataset is based on what business records are required for long-term preservation. If legal and records management have a defined policy as to what is required for legal hold, compliance, and other regulatory requirements, these criteria can be built into the migration strategy so that only this content is restored and preserved. This would typically represent less than 1 percent of the tape content.

    Once a strategy is chosen, the migration process can begin. The first phase, working from the backup catalog, is optional, but in many instances it can significantly streamline the effort. The catalog is the key to providing knowledge of, and access to, the legacy tape data. If it is not available or cannot be ingested, it can be recreated by scanning tape headers to determine the content.

    Once the catalog is ingested or created, the legacy backup software can be retired and eliminated from the data center as it will no longer be required for data access and restoration.

    Ingestion of the catalog allows all of the metadata to be indexed, including the backup policies. An assessment and analysis of the backup content can then be performed to further refine the migration strategy into one of the three approaches above.

    From there, data can be reported on further and culled down. Some of the migrated content may fall outside any retention period, some may be a file type with no long-term preservation value (databases, log files, etc.), and some may exist on hosts or servers with no sensitive content that would require archiving.
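
    A sketch of how such a culling pass might be expressed, using the same kind of hypothetical catalog entries as the earlier sketch; the thresholds, extensions, and host names are illustrative, not a recommended retention policy.

        from datetime import date, timedelta

        RETENTION_YEARS = 7
        LOW_VALUE_EXTENSIONS = {".db", ".log", ".tmp"}      # no long-term preservation value
        EXCLUDED_HOSTS = {"build-server-01", "test-lab-02"} # hosts with no sensitive content

        def disposition(entry, today=date(2015, 3, 11)):
            """Return 'purge' or 'archive' for a single catalog entry."""
            if entry["mtime"] < today - timedelta(days=365 * RETENTION_YEARS):
                return "purge"      # outside any retention period
            if entry["ext"].lower() in LOW_VALUE_EXTENSIONS:
                return "purge"
            if entry["host"] in EXCLUDED_HOSTS:
                return "purge"
            return "archive"

        def report(catalog_entries):
            """Summarize how much of the catalog would actually be migrated."""
            counts = {"archive": 0, "purge": 0}
            for entry in catalog_entries:
                counts[disposition(entry)] += 1
            return counts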

    Disposition strategies can include migration to cloud sources, archives, network storage and more, saving the organization money and the time of managing legacy data sources.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:33p
    SimpliVity Raises $175M for its Converged Infrastructure Cube

    Hyper-convergence startup SimpliVity has achieved a valuation of more than $1 billion less than two years after it started shipping its main product, Omnicube, after securing a $175 million Series D round of funding led by Waypoint Capital.

    The Switzerland-based growth equity fund may have contributed the most to SimpliVity thus far, but it’s not the only backer. Previous investors include Accel Partners, Charles River Ventures, DFJ Growth, Kleiner Perkins Caufield & Byers Growth, and Meritech Capital Partners. All told, SimpliVity has raised $276 million in funding.

    Omnicube provides a unified data center stack that replaces several disparate systems for storage, virtual servers, and networking. It reduces clutter and complexity and is tuned for performance and efficiency at scale.

    At its core, Omnicube is several products combined into one, optimized for everything from backup and archive to primary high-performance data. Among the promises of converged infrastructure are better system manageability, reduced costs and footprint, and improved service and performance. Needing less equipment, with more changes made through software, translates into decreased staffing costs.

    Waypoint certainly practices what it preaches. It has four data centers that are 100 percent SimpliVity-based. The transition to SimpliVity allowed it to future-proof and extend its IT services, reducing its number of servers by 30 percent while quadrupling computing power. The switch also cut Total Cost of Ownership by a factor of three, according to the company’s Chief Information Officer, Frederic Wohlwend, in a video.

    “As a customer, we experienced first-hand the transformational impact of SimpliVity’s hyper-converged infrastructure,” said Wohlwend via a press release. “After a thorough analysis of alternatives, we were convinced of SimpliVity’s technological superiority, and that its unique data architecture is years ahead of the market. When we learned SimpliVity was raising capital, we insisted on taking the lead.”

    More than 1,500 of the systems have been shipped to customers such as T-Mobile, Major League Baseball, and Swisscom, all looking to refresh their technology with the future in mind. So it’s no surprise that in 2014, the year SimpliVity partnered with Cisco, revenues grew by more than 500 percent year over year. It now has more than 400 employees worldwide and has added resellers in 50 countries.

    There are a variety of types and combinations of converged infrastructure, and competitors in the market range from giants increasingly getting into the game to pure-play upstarts. Competitor Nutanix recently highlighted the promise of converged infrastructure as well, raising a $101 million round last year. Others in this arena include EMC’s VCE, HP, and Oracle, as well as mid-market play Nimboxx.

    The round is a large one, but converged infrastructure vendors, and hardware vendors in general, require a lot of capital. Still, big money doesn’t always guarantee success: Calxeda, a maker of ARM-based server chips, went out of business despite raising over $200 million.

    IDC expects the converged infrastructure market to reach $17.8 billion in 2016, up from $4.6 billion in 2012. Gartner predicts it will grow by a 24 percent CAGR from 2013 through 2018, reaching a total of $19 billion.
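
    For reference, the growth rates cited can be sanity-checked with simple arithmetic; the short Python sketch below derives the compound annual growth rate implied by IDC’s two endpoints and the 2013 market size implied by Gartner’s figures (neither report states that base number, so it is inferred here).

        # Implied CAGR from IDC's figures: $4.6B in 2012 growing to $17.8B in 2016.
        idc_cagr = (17.8 / 4.6) ** (1 / 4) - 1
        print(f"IDC implied CAGR 2012-2016: {idc_cagr:.1%}")        # roughly 40%

        # Gartner: 24% CAGR from 2013 through 2018, reaching $19B,
        # which implies a 2013 base of about $19B / 1.24**5.
        gartner_base_2013 = 19.0 / 1.24 ** 5
        print(f"Implied 2013 market size: ${gartner_base_2013:.1f}B")  # roughly $6.5B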

    “Legacy IT infrastructure is due for a shake-up and the market continues to show signs of acceleration,” said Mark Bowker, senior analyst for ESG, in a press release. “There is significant interest in, and momentum behind hyper-converged infrastructure investments. Legacy architectures packaged and delivered with ‘a bow on top’ are failing to achieve the scale and operational efficiency we are seeing IT professionals achieve with a true hyper-converged platform like SimpliVity. This is the wave of the future and the path to modern IT infrastructure.”

    5:00p
    Business Intelligence Startup Looker Raises $30M

    Business Intelligence Software-as-a-Service provider Looker has raised a $30 million Series B financing round led by Meritech Capital Partners.

    Looker believes that business intelligence (or BI) needs to evolve to address modern data needs. The company views itself as a provider of tools that let data analysts create and curate custom data experiences from modern, big data sets, acting as web developers for BI, if you will, using its own data description language, LookML, for data modeling.

    A curated view means anyone can interpret and act on live data, helping to create a data-driven culture rather than limiting BI to a select few. The big emphasis in the BI space has been on self-service visualization, but Looker CEO Frank Bien believes that approach has been somewhat tapped out at this point.

    “Organizations are going after much bigger data,” he said. “We saw there was this big evolution in BI tools, but an even bigger revolution in the infrastructure—the ability to collect and store large amounts of structured data.”

    With the proper tools, the result is organizations that are thoroughly, and collaboratively, data-driven. Bien argues that while organizations have been able to deploy very large data infrastructure quickly (think Amazon Redshift, Google BigQuery), no solution existed to explain all of the data being stored.

    The message has resonated with customers. The company grew 400 percent in 2014, surpassing 250 customers, a sizable number for a BI provider. Customers include very data-driven organizations like Yahoo!, Warby Parker, Gilt Groupe, and Docker. Bien said the company started with tech-oriented companies and extended into e-commerce and enterprise. It has since bitten into the lion’s share of the market historically owned by the likes of BusinessObjects and Cognos.

    After a $16 million round raised in the summer of 2013, which was earmarked for building out the rest of the product, growth investor Meritech is now ready to back a big global push designed for rapid scaling and growth.

    Additional participation in the recent $30 million round came from Sapphire Ventures (formerly SAP Ventures), and existing investors Redpoint Ventures, First Round Capital, and PivotNorth.

    “These new investments reaffirm Looker’s commitment to creating data-driven cultures, not just providing another analysis tool,” said Dave Hartwig, managing director at Sapphire Ventures, in a press release.

    “At Sapphire Ventures we focus on identifying and accelerating transformation in enterprise IT. Looker is clearly set up to challenge and transform the established BI industry, helping entire companies become data-driven through transparent access to data, a powerful analysis environment, and unrivaled support.”

    6:31p
    Schneider Submits Data Center Operations Model to OCP

    Schneider Electric has submitted a data center Facility Operations Maturity Model to the Open Compute Project, the Facebook-led open source data center and hardware design community.

    The framework is different from The Green Grid’s maturity model in that it focuses specifically on facilities management, while TGG’s model takes a holistic approach, encompassing both facilities and IT.

    The submission is a step toward adding a facilities operations element to Open Compute’s collection of specs and designs for servers, storage, and networking gear, as well as power and cooling infrastructure.

    Today, data center operations are a fairly subjective matter, although there are industry best practices. Schneider’s submission attempts to provide a measuring stick for operational maturity. It is a framework to form best practices for people and processes across operations, including emergencies as well as day-to-day tasks such as maintenance and work-order management.

    “The Data Center Operations Maturity Model is an industry-first way of allowing organizations the ability to evaluate the operations of a data center with an open, non-proprietary framework,” Jason Schafer, one of the model’s creators and lead of the OCP Data Center Project, said.

    Wide in scope, FOMM is divided into seven disciplines that are further divided into elements and sub-elements. The model rates a data center team’s maturity on a scale of one to five, one being the initial, ad hoc stage and five meaning optimized or refined processes and practices.

    The assessment is done across several topics and sub-topics and is extremely detailed. The people-management category, for example, even includes things like charting career development. That’s in addition to guidelines on what people should do at the site, how they should monitor and measure efficiency, and how to optimize the data center.
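
    To make the structure concrete, here is a toy Python sketch of how such a nested scoring framework could be represented and rolled up; the discipline and element names are illustrative placeholders rather than the actual FOMM taxonomy, and averaging is just one plausible way to aggregate, since the model’s own roll-up method isn’t described here.

        from statistics import mean

        # Hypothetical scores on the model's 1-5 maturity scale,
        # keyed by discipline, then element.
        fomm_scores = {
            "people management": {"staffing": 3, "training": 2, "career development": 2},
            "maintenance": {"preventive maintenance": 4, "work-order management": 3},
            "emergency preparedness": {"incident response": 3, "drills": 2},
        }

        def discipline_maturity(elements):
            """Average the element scores within one discipline."""
            return mean(elements.values())

        def overall_maturity(scores):
            """Average the per-discipline maturity into a single site-level figure."""
            return mean(discipline_maturity(e) for e in scores.values())

        for name, elements in fomm_scores.items():
            print(f"{name}: {discipline_maturity(elements):.1f}")
        print(f"overall: {overall_maturity(fomm_scores):.1f}")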

    The model is currently under review. You can take a look at it on the Open Compute Wiki.

    Yevgeniy Sverdlik contributed to this report.

    6:55p
    Google Says Cold Storage Doesn’t Have to Be Cold All the Time

    Google has introduced a low-cost cold storage service called Cloud Storage Nearline. This type of cloud storage is meant for less-frequently accessed data, and Google’s service costs about a penny per gigabyte of data at rest.

    Competitor Amazon Web Services also has a cold storage offering, called Glacier, as well as a data warehousing service called Redshift. Google is playing catch-up with its cloud storage portfolio, but it has taken a different approach to cold storage. While pricing is comparable to Glacier’s, a big difference is that data stored in Nearline will be available within a few seconds, rather than the standard hour or so for other offerings.

    Cold storage normally implies a compromise in how quickly the data can be accessed: it is low-cost because it is effectively offline. Google’s approach is “nearline”: the data remains close to online, closing the latency gap while keeping the low price.

    It means customers don’t need to make that time compromise for data that isn’t accessed often but may be needed quickly every once in a while, for example when someone is searching through historical data or photos that users don’t frequently look at. It’s cold storage that doesn’t always have to be cold.
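
    Assuming the stated penny per gigabyte is a monthly at-rest figure, the storage cost is easy to estimate; the short sketch below does that arithmetic (retrieval and network fees, if any, are ignored here).

        NEARLINE_PRICE_PER_GB_MONTH = 0.01   # about a penny per gigabyte at rest

        def monthly_at_rest_cost(terabytes):
            """Approximate monthly storage cost for data parked in Nearline."""
            gigabytes = terabytes * 1024
            return gigabytes * NEARLINE_PRICE_PER_GB_MONTH

        for tb in (1, 10, 100):
            print(f"{tb:>4} TB ≈ ${monthly_at_rest_cost(tb):,.2f} per month")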

    Avtandil Garakanidze, a product manager at Google, highlighted the dichotomy of cold storage needs on the company’s blog: “Organizations can no longer afford to throw data away, as it’s critical to conducting analysis and gaining market intelligence. But they also can’t afford to overpay for growing volumes of storage.”

    Google’s Nearline is fully integrated with other Google Cloud Storage services and uses the same programming models and APIs. Redundant storage at multiple physical locations protects data.

    Google partnered with Veritas/Symantec, NetApp, Geminare, and Iron Mountain to roll the service out. The partners give their enterprise users easy on-ramps into Nearline.

    Geminare runs Disaster Recovery-as-a-Service on Google Compute Engine, and Nearline extends its portfolio. Iron Mountain, the large data archiving services company, will send customer data on physical disks to Google for secure upload into Nearline.

    8:54p
    Internal DNS Issue at Apple Causes Lengthy Outage for iTunes, App Store

    This article originally appeared at The WHIR

    An internal DNS error is to blame for a major outage affecting users of several Apple services, including iTunes and the App Store, on Wednesday.

    Reports of the outage started at 5 am ET, and the iTunes Store, App Store, Mac App Store and iBooks Store were all still down as of 1:30 pm ET. On Wednesday morning, Apple cloud services were also down, with iCloud recovering around 9 am ET.

    In a statement provided to CNBC this afternoon, Apple said that it is “working to make all the services available to customers as soon as possible.” Customers in the US, Switzerland, Spain and the UK were affected by the outage.

    ZDNet estimates that Apple is losing around $2.2 million for every hour its stores are down. The company makes around six percent of its global revenue from iTunes and App Store purchases.

    The prolonged outage is a rare occurrence for Apple, which just previewed its highly anticipated Apple Watch earlier this week and announced its official involvement in the Open Compute Project. Its last major outage, according to ZDNet, was in July 2013, when Apple’s Developer Center was offline for more than a week after a security breach.

    While its latest outage is not related to an external breach, it still doesn’t look great for the company, particularly as it continues to rebuild its public image after the PR nightmare that was the iCloud breach last fall.

    Apple is not the only technology company to fall victim to DNS errors recently. Enom, a wholesale domain registrar, fixed a DNS issue that impacted one-third of its customer base earlier this week.
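
    As a general illustration of the kind of monitoring that can catch resolution problems early (this is not Apple’s or Enom’s tooling), here is a minimal Python check that tries to resolve a list of service hostnames and reports any failures; the hostnames are examples only.

        import socket

        # Example endpoints to watch; substitute whatever your services depend on.
        ENDPOINTS = ["itunes.apple.com", "apps.apple.com", "www.icloud.com"]

        def check_resolution(hostnames):
            """Return a dict mapping each hostname to its resolved addresses,
            or to the error raised when resolution fails."""
            results = {}
            for host in hostnames:
                try:
                    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
                    results[host] = sorted({info[4][0] for info in infos})
                except socket.gaierror as err:
                    results[host] = f"resolution failed: {err}"
            return results

        if __name__ == "__main__":
            for host, outcome in check_resolution(ENDPOINTS).items():
                print(f"{host}: {outcome}")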

    This article originally appeared at http://www.thewhir.com/web-hosting-news/internal-dns-issue-apple-causes-lengthy-outage-itunes-app-store
