Data Center Knowledge | News and analysis for the data center industry

Tuesday, September 22nd, 2015

    12:00p
    Engineer: Lack of Procedure Plagues Data Center Industry

    NATIONAL HARBOR, Md. – Being the imperfect organisms that they are, humans are often the ones to blame for their problems.

    And that’s true for the biggest problem of the data center industry: downtime. It’s often cited that more than 80 percent of data center outages can be attributed to human error.

    While that may be true, there’s a degree of subtlety to that estimate. “It really comes down to the definition of what human error is,” Steven Shapiro, mission critical practice lead at Morrison Hershfield, said. Morrison Hershfield is a major US engineering firm with a substantial data center practice.

    The only way the 80-plus-percent estimate is true is if you take into account errors in things like system design, commissioning, and training, not errors made during operation, Shapiro said on the sidelines of our sister company AFCOM’s Data Center World conference taking place here this week. According to his company’s numbers, actual operator error, where somebody flipped the wrong breaker or shut the wrong valve after losing utility power and brought the facility down as a result, is responsible for more like 18 percent of data center outages.

    There is a way to bring the possibility of that kind of human error down, because it almost always “comes down to not following the procedure,” Shapiro said.

    The main problem isn’t failure to follow procedure, however, but lack of documented procedure, which is an industry-wide problem that people who work in the data center industry are generally reluctant to talk about, he said. “If the training is there, and the procedures are there, we find a facility that has that, there’s almost no human error associated with failure.”

    Most data center facilities teams today don’t have proper procedures in place, relying instead on the knowledge of staff who have a lot of experience in their specific facility. “The guy that built the facility is still there, and he feels that he knows everything that there is to know about it,” Shapiro said, offering an example. “And now there’s four guys that work for him, and he hasn’t told them everything they need to know, but he’s still around.”

    Sometimes, the team knows they need procedures written down, but they don’t have the budget to do it. “It’s a well-known issue, but nobody wants to talk about it. If the funding was there, it would get done.”

    The reasons funding for such projects doesn’t materialize vary. One common scenario is where the IT team controls the data center budget, and the facilities team doesn’t want to tell IT that they don’t have proper training and procedures in place. In other cases, the facilities team has the budget but always finds something more important to spend the money on.

    In either scenario, documenting procedures gets put on the back burner because the team doesn’t need that documentation during day-to-day operations. They only need it when there’s a failure or during maintenance, so it’s a problem that’s easy to ignore as long as things run smoothly.

    1:07p
    Switch Claims Reno Site Will Be World’s Largest Data Center

    Switch, the Las Vegas-based data center provider that’s building a massive campus in Reno, Nevada, said the first facility there will be the largest data center in the world. The company, known for its huge SuperNap Las Vegas data center campus, with a futuristic interior design and ex-military security personnel, announced plans to build the $1 billion Reno data center earlier this year with eBay as the anchor tenant.

    SuperNap Tahoe Reno 1 is only the first phase of development. Switch’s future plans call for as many as seven buildings, mostly of comparable size. The campus will neighbor a Tesla battery manufacturing plant currently under construction.

    A mega data center hub of sorts is forming in Reno, which has not historically been a major high-tech destination. Right across the highway from the Switch site is a growing Apple data center campus. Rackspace is eyeing the possibility of building a large data center there as well.

    Because companies that don’t sell data center services tend to keep details about their mission critical sites under wraps, it’s nearly impossible to verify with 100-percent certainty that there isn’t a bigger data center somewhere in the world. Switch spokesman Adam Kramer said the company used a dedicated internal team of researchers to confirm that the capacity of the future Reno facility will be bigger than any other data center facility out there.

    While companies periodically claim that they are building or have built the largest data center in the world, neither square footage nor the amount of utility power available at the site says much about a data center’s actual capacity. What’s more important is the ability to supply enough cooling and conditioned, UPS-backed power.

    Switch plans to provide 82,000 tons of cooling for its 150-megawatt data center, which will have a 300MVA substation. A power capacity of 150MW converts to about 43,000 tons of refrigeration. In a typical data center, IT loads contribute about 70 percent of the total thermal output, according to a white paper by Schneider Electric’s data center infrastructure subsidiary APC. The rest of the cooling capacity is consumed by UPS systems, lighting, power distribution, and personnel.
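
    A quick back-of-the-envelope check of that conversion (my arithmetic, not Switch’s or APC’s exact method): one ton of refrigeration removes roughly 3.517 kW of heat, which is where the 43,000-ton figure comes from, and dividing by the 70-percent IT share gives a sense of the extra capacity needed for everything else.

        # Rough check of the cooling figures quoted above; illustrative only.
        KW_PER_TON = 3.517                  # kW of heat removal per ton of refrigeration

        power_capacity_kw = 150_000         # the site's 150 MW power capacity
        tons_equivalent = power_capacity_kw / KW_PER_TON
        print(f"150 MW ~= {tons_equivalent:,.0f} tons of refrigeration")   # ~42,650

        # Per the APC figure cited above, IT loads are ~70% of total thermal
        # output; the rest comes from UPS losses, lighting, distribution, people.
        total_heat_tons = tons_equivalent / 0.70
        print(f"Total heat to reject ~= {total_heat_tons:,.0f} tons")       # ~61,000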

    Bigger of course doesn’t necessarily mean better in the data center market, but data center providers and web companies with massive facilities do benefit from economies of scale. Scott Davis, executive VP of operations at DuPont Fabros Technology, said during a presentation at Data Center World in National Harbor, Maryland, this week that operating a 6MW data center costs about $4 million more per year than operating 6MW of capacity within DuPont’s latest 42MW facility in Ashburn, Virginia. It costs about $5.4 million to operate a stand-alone 6MW data center, compared to $1.3 million to provide 6MW of capacity within the 42MW building, he said.

    Operators save primarily because a larger data center doesn’t mean a larger supporting staff, such as security personnel. “A lot of [the savings] is manpower,” Davis said. “Staffing doesn’t need to scale with the size of the facility.”

    Much of the savings achieved gets passed on to the data center provider’s customers in the form of lower rates, he said.
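
    Normalizing those figures to cost per kilowatt of capacity per year (simple arithmetic on the numbers Davis quoted, not figures from DuPont Fabros) makes the gap easier to compare across facilities:

        # Per-kW-per-year cost implied by the DuPont Fabros numbers above.
        capacity_kw = 6_000                          # 6 MW of capacity

        standalone_cost = 5_400_000                  # stand-alone 6 MW site, per year
        within_42mw_cost = 1_300_000                 # same 6 MW inside the 42 MW building

        print(f"Stand-alone: ${standalone_cost / capacity_kw:,.0f} per kW per year")         # ~$900
        print(f"Inside 42 MW site: ${within_42mw_cost / capacity_kw:,.0f} per kW per year")  # ~$217
        print(f"Annual difference: ${(standalone_cost - within_42mw_cost) / 1e6:.1f}M")      # ~$4.1M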

    Switch’s high-speed network loop will interconnect the Reno and Las Vegas campuses and link to connectivity hubs in Los Angeles and San Francisco. Customers will be able to use the connectivity between Reno and Vegas for the “active-active” resiliency topology, where infrastructure and applications are replicated at two sites and stay on around the clock in both locations.

    Switch says its Reno-Vegas link will provide latency under seven milliseconds.
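
    Propagation delay alone accounts for much of that budget. As a rough illustration (the ~700 km fiber route length and the refractive index below are assumptions, not figures from Switch):

        # Rough propagation-delay estimate for a Reno-Las Vegas fiber link.
        # Route length and refractive index are assumptions for illustration.
        SPEED_OF_LIGHT_KM_S = 299_792       # km/s in a vacuum
        FIBER_INDEX = 1.468                 # typical single-mode fiber

        route_km = 700                      # assumed fiber route length
        km_per_s_in_fiber = SPEED_OF_LIGHT_KM_S / FIBER_INDEX

        one_way_ms = route_km / km_per_s_in_fiber * 1000
        print(f"One-way propagation delay: {one_way_ms:.1f} ms")    # ~3.4 ms
        print(f"Round trip: {2 * one_way_ms:.1f} ms")               # ~6.9 ms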

    3:00p
    Schneider to Roll Out Open Compute Server Chassis, Enclosures

    NATIONAL HARBOR, Md. – Schneider Electric, the French energy distribution and automation giant, is working on a server chassis based on Facebook’s disaggregated rack design to make it easier for the common IT shop to use design concepts Facebook developed for its own data center needs. In parallel, Schneider is also working on an enclosed pod that will house multiple such chassis and supply them with cooling.

    Schneider, which for several years has been a member of the Open Compute Project, the open source data center and hardware design initiative Facebook started, has been designing custom chassis based on these concepts for hyperscale data center operators in Asia and Russia, Steve Carlini, senior director of global data center solutions at Schneider, said in an interview with Data Center Knowledge on the sidelines of the Data Center World conference here. The company is making them into products for a more general audience because demand has been growing, he said.

    “Our goal is to simplify the deployment of this new OCP architecture,” he said. There’s no definitive target date for launch of the new chassis, but the general expectation is that it will be on the market around the end of this year.

    Emerson Network Power, one of Schneider’s biggest rivals in the data center market, has had an Open Compute rack on the market for several years. Emerson’s rack was based on the first generation of OCP servers.

    The chassis is a rack where individual server components, such as CPUs, hard drives, memory, and network cards, share common resources, such as power supply, cooling, and network connectivity. Those components can be swapped out individually, making it possible to upgrade just the CPU if needed, without having to rip and replace an entire server.
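
    To make the shared-resources idea concrete, here is a toy model of such a chassis (purely illustrative; it is not based on any OCP specification text): compute sleds can be swapped slot by slot while the shared power, cooling, and networking stay in place.

        # Toy model of a disaggregated chassis: shared rack-level resources,
        # individually replaceable compute sleds. Illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class Sled:
            slot: int
            cpu: str
            memory_gb: int
            storage: str

        @dataclass
        class Chassis:
            power: str = "shared power shelf"
            cooling: str = "shared enclosure cooling"
            network: str = "shared uplink"
            sleds: dict = field(default_factory=dict)

            def swap_sled(self, new_sled: Sled) -> None:
                """Replace one sled without touching shared infrastructure."""
                self.sleds[new_sled.slot] = new_sled

        rack = Chassis()
        rack.swap_sled(Sled(slot=1, cpu="Xeon D", memory_gb=64, storage="2x SSD"))
        # Later: upgrade just the CPU/memory in slot 1; the shared power,
        # cooling, and network connectivity are untouched.
        rack.swap_sled(Sled(slot=1, cpu="Xeon E5 v3", memory_gb=128, storage="2x SSD"))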

    Another big difference is power. It enables the user to bring medium voltage directly to the rack instead of stepping it down first. Carlini said Schneider’s chassis will give customers the ability to choose the voltage they bring to the rack and how it gets treated.

    The enclosure Schneider is working on will provide cooling for the new chassis and allow the chassis to be rolled in for quick installation, he said.

    Earlier this year, Schneider submitted a data center facilities operations framework called Facility Operations Maturity model to OCP.

    Data Center World is organized by AFCOM, a sister company of Data Center Knowledge. Come back to DCK for more Data Center World coverage.

    3:30p
    Why Traditional Data Management Fails in the Era of NoSQL and Big Data

    Nitin Donde is CEO of Talena.

    The rapid growth of data has enabled exciting new opportunities and presented big challenges for businesses of all types. The opportunity has been to take this vast swath of human- and machine-generated data to better personalize e-commerce sites; more closely identify fraud patterns; and even sequence genomes more efficiently. And the NoSQL movement has been instrumental in delivering data platforms like Cassandra, MongoDB and Couchbase that enable rapid, distributed processing of these web-scale applications.

    Data Management Challenges in a Data-Rich World

    However, the downstream challenge of this new application class is that traditional data management techniques used in the world of relational databases no longer work. Data management includes the concepts of backup and recovery, archiving, and test/dev management in which engineering groups can test new application functionality with subsets of production data. So why do traditional processes fall short?

    Data management capabilities now have to handle hundreds of terabytes, if not petabytes, of data in a scale-out manner on commodity hardware and storage. Traditional data management products are built on top of a scale-up architecture that cannot handle petabyte-scale applications, nor do they offer the economics to handle these open source technologies.

    The advent of DevOps and other agile methodologies has led to the need for rapid application iteration, which implies that data management products need to help these teams refresh data sets to enable iterations. When was the last time you heard the words “DevOps” and “Veritas” in the same sentence? I thought so.

    Even within the world of NoSQL there are a variety of data formats, making it difficult for data management products to handle the storage optimization needs of each data platform. If your company uses both Cassandra and HBase, then your data management architecture needs to handle each of these unique application formats when it comes to backup, recovery, archiving and other key processes.

    NoSQL and Data Availability

    How should companies think about ensuring universal data availability for their NoSQL and other Big Data applications?

    At the volumes that Big Data applications handle, backups have to be incremental-forever after the first full backup. Otherwise you’ll constantly be scrambling for more storage, typically on your production cluster. This is often the origin of operational fire-drills and a big drain on resources. In addition, trying to do a full weekly backup on, for example, a 1-petabyte data set will never meet any corporate service-level agreement.
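
    A minimal sketch of the incremental-forever idea in generic Python (not any particular vendor’s product): the first run is effectively the full backup, and every later pass copies only files whose size or modification time changed since the last run.

        # Sketch of an "incremental forever" loop: the first run is the full
        # backup; every later run copies only new or changed files.
        import json, os, shutil, time

        def snapshot(src_dir: str, backup_root: str, catalog_path: str) -> str:
            try:
                with open(catalog_path) as f:
                    catalog = json.load(f)      # path -> [mtime, size] last seen
            except FileNotFoundError:
                catalog = {}                    # no catalog yet: full backup

            increment = os.path.join(backup_root, time.strftime("%Y%m%d-%H%M%S"))
            for root, _dirs, files in os.walk(src_dir):
                for name in files:
                    path = os.path.join(root, name)
                    stat = os.stat(path)
                    state = [stat.st_mtime, stat.st_size]
                    if catalog.get(path) != state:          # new or changed file
                        dest = os.path.join(increment, os.path.relpath(path, src_dir))
                        os.makedirs(os.path.dirname(dest), exist_ok=True)
                        shutil.copy2(path, dest)
                        catalog[path] = state

            with open(catalog_path, "w") as f:
                json.dump(catalog, f)
            return increment

        # Hypothetical usage:
        # snapshot("/data/production", "/backups/increments", "/backups/catalog.json")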

    With NoSQL implementations often running into the hundreds if not thousands of nodes, you have to think about a data management architecture that is agentless. The overhead of managing agents across production nodes that are constantly being commissioned or decommissioned is simply overwhelming.

    Your data science or DevOps teams will need access to production data to support ongoing analytics efforts or to iterate on new application functionality. However, production data may contain confidential information, the leakage of which can compromise your brand and reputation. It’s important to think through a data masking architecture that is irreversible (one-way) and consistent, so your analytics produce the same results on the masked data.
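
    One common way to get both properties is keyed hashing: the same input always yields the same token, so joins and aggregates on the masked field still line up, but the token cannot be reversed without the key. A minimal sketch using Python’s standard library (illustrative, not Talena’s implementation; the key below is a placeholder):

        # Deterministic, one-way masking of a sensitive field with HMAC-SHA256.
        # Illustrative sketch only; the key is a placeholder.
        import hmac, hashlib

        MASKING_KEY = b"store-me-in-a-secrets-manager"

        def mask(value: str) -> str:
            digest = hmac.new(MASKING_KEY, value.encode("utf-8"), hashlib.sha256)
            return digest.hexdigest()[:16]      # shortened, irreversible token

        # Consistent: the same plaintext always maps to the same token.
        assert mask("jane.doe@example.com") == mask("jane.doe@example.com")
        print(mask("jane.doe@example.com"))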

    Your backup architecture needs to be aware of the different data abstractions across the universe of NoSQL applications. For example, workflows associated with Cassandra have to be aware of and set up using keyspaces and tables. This applies to both the actual data as well as the metadata layer.
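
    For instance, a keyspace- and table-aware workflow might start by walking the cluster’s schema metadata and scoping each backup job to a (keyspace, table) pair. A sketch using the DataStax Python driver (the contact point and the system-keyspace filter are assumptions for illustration):

        # Enumerate Cassandra keyspaces and tables so backup jobs can be scoped
        # per keyspace/table. Contact point and filtering are illustrative.
        from cassandra.cluster import Cluster

        cluster = Cluster(["10.0.0.1"])             # hypothetical contact point
        cluster.connect()                           # populates schema metadata

        for ks_name, ks_meta in cluster.metadata.keyspaces.items():
            if ks_name.startswith("system"):        # skip internal keyspaces
                continue
            for table_name in ks_meta.tables:
                # A real workflow would snapshot and catalog each pair along
                # with its schema (the metadata layer mentioned above).
                print(f"would back up {ks_name}.{table_name}")

        cluster.shutdown()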

    Summary

    These are just a few of the ways the advent of NoSQL and other Big Data platforms has altered the thinking around traditional data management. Paying attention to these architectural considerations will ensure that the data powering your new applications is always available to the consumers of those applications.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Mirantis Partners with Citrix, Metaswitch, Overture Networks in NFV Initiative


    This article originally ran at Talkin’ Cloud

    By Michael Cusanelli

    OpenStack provider Mirantis is looking to help telecom providers reduce the amount of time and money it takes to bring applications and services to their customers with a new Network Function Virtualization initiative with Citrix, Metaswitch Networks, and Overture Networks.

    Mirantis formed the group around the idea that resellers can reduce overhead costs by virtualizing their network functions, eliminating the need to purchase proprietary hardware on a regular basis. With the growing need for MSPs and carriers to deliver instantaneous service to customers, the new reference architecture gives companies an opportunity to evolve their businesses ahead of the NFV market boom predicted by the end of the decade.

    “Telecommunications companies are increasingly drawn to NFV because it frees them from expensive proprietary hardware platforms, reduces operational expenses, and facilitates the launch of new applications and services quickly,” said Heather Kirksey, director of OPNFV, in a statement. “Developers can spend less time on administrative tasks and more time delivering innovative applications and services to consumers.”

    The initiative will include a whitepaper and deployment guide for customers to help them deploy NFV solutions on OpenStack, as well as Mirantis OpenStack validation for partner virtual network function solutions via the company’s Unlocked Technology Partner Program. Mirantis will also add several NFV features to its OpenStack distribution so that users can decrease their physical footprint and offer higher availability, as well as deliver single root I/O virtualization.

    Additional partner validations for Citrix NetScaler, Metaswitch’s Perimeta Session Border Controller and Overture Networks’ Ensemble Service Orchestrator have also been added, according to the announcement. The company plans to offer an NFV reference architecture and OpenStack validation program for partner VNFs.

    “The telecommunications industry is in the midst of a massive transformation,” said Kamesh Pammaraju, VP of partner marketing at Mirantis. “These companies must innovate, and to do so, their networking needs to be agile, scalable and cost effective. The best way to do this is through NFV.”

    This first ran at http://talkincloud.com/telco-hub/mirantis-partners-citrix-metaswitch-overture-networks-nfv-initiative

    4:30p
    Study Finds Most Corporate Networks Are Outdated


    This post originally appeared at The Var Guy

    By Michael Cusanelli

    With the number of corporate hacks making headlines each week, it has never been more important for companies to keep their network infrastructure up to date. But a new study from Softchoice found that more than half of the businesses surveyed in North America continue to use devices without manufacturer support, opening them up to data loss or information theft.

    After examining 52,000 networking devices at more than 200 organizations in North America, Softchoice found that 60 percent of businesses currently use end-of-support devices in their networks. Additionally, 95 percent of businesses currently utilize end-of-sale devices on their networks, meaning these solutions are still supported by the manufacturer but no longer in production.

    While using older tech within a corporate network doesn’t necessarily mean a company’s sensitive files will be breached, it does place these organizations at a higher risk of data loss, theft, and network downtime, according to Softchoice. As with the recent end-of-support date for Microsoft’s Windows Server 2003, these solutions still function, but without the safety net of manufacturer patches in case something goes wrong.

    “Most organizations struggle to get a basic understanding of the state of their network because few regularly check up on the different devices they have in play,” said David Brisbois, senior manager of Assessment and Technology Deployment Services Consulting at Softchoice, in a statement. “It isn’t until a breach occurs or their network crashes that most organizations react and realize their network is past its ‘best before’ date.”

    Information for the study was gathered using Softchoice’s Cisco Contract TechCheck assessment service, which examines the state and health of a company’s network infrastructure, according to the announcement.

    Overall, 51 percent of all devices analyzed were deemed to be end-of-sale, 4 percent were end-of-support, and 30 percent lacked Cisco’s SMARTnet technical support service, according to the announcement.

    For companies looking to upgrade their enterprise networks, Softchoice recommends businesses first assess their existing infrastructure and then evaluate their disaster recovery strategy before deciding whether and what to replace. Once these steps have been taken, Softchoice said it is important to remember that end-of-sale solutions normally have a maximum of two to five years of manufacturer support left, and it suggests replacing these solutions as soon as possible.

    While the suggestion to replace aging solutions is solid advice, budget restrictions and a general lack of IT knowledge will lead many companies to continue using out-of-date hardware and software until a disaster strikes. However, MSPs in particular can capitalize on the opportunity to help upgrade these networks by offering subscription-based solutions, helping customers stay up to date with the latest technology while securing monthly recurring revenue from each subscriber.

    This first ran at http://thevarguy.com/network-security-and-data-protection-software-solutions/092215/study-finds-most-corporate-networks-are-outd

    5:00p
    NetApp Names Mark Bregman CTO to Drive Innovation


    This article originally ran at Talkin’ Cloud

    Data management and cloud storage company NetApp announced Monday that it has named Mark Bregman chief technology officer. Bregman joins NetApp from machine-learning startup SkywriterRX, where he served as CTO and continues to serve as a board member and advisor.

    At NetApp, Bregman will be responsible for leading the company’s portfolio strategy and innovation roadmap. He will evaluate where the biggest opportunities lie to support the Data Fabric, the company’s vision for the future of data management.

    “Mark will work with NetApp’s Advanced Technology Group anticipating and capitalizing on new and emerging trends,” NetApp executive vice president of Product Operations Joel Reich said in a statement. “His wealth of experience across technology sectors will be invaluable to accelerating NetApp’s innovation leadership. We look forward to Mark helping to deliver outstanding value to our customers and partners, now and into the future.”

    Bregman has held senior positions at Neustar, Symantec, Veritas, AirMedia and IBM.

    “It’s an honor to join NetApp, especially at a time when our customers are rapidly transforming their IT environments,” Bregman said. “NetApp’s people, products, and intellectual property address customers’ evolving needs and ultimately played a big role in my decision to join the company.”

    In addition to his various senior roles, he has served as executive sponsor and member of the Women in Technology programs at his previous places of employment. He has served as a director of the Anita Borg Institute for Women and Technology. He is also on the boards of the Bay Area Science and Innovation Consortium, ShoreTel, and Skywriter.

    Earlier this year, NetApp launched three editions of its SteelStore backup and disaster recovery software that can be installed as an Amazon Machine Image.

    This first ran at http://talkincloud.com/cloud-computing/netapp-names-mark-bregman-cto-drive-innovation

