Data Center Knowledge | News and analysis for the data center industry

Tuesday, November 12th, 2013

    12:48p
    Cray Offers Tiered Adaptive Storage

    Cray launches Tiered Adaptive Storage through a partnership with Versity Software, Internap and Aerospike partner on a fast big data solution, and Permabit software exceeds one million IOPS of inline deduplication performance.

    Cray Tiered Adaptive Storage

    Cray introduced a complete and open storage archiving solution for Big Data and supercomputing with the launch of Cray Tiered Adaptive Storage (TAS). Through a strategic partnership with Versity Software, Cray has developed a deployment-ready, flexible tiered archiving solution designed to be easy to use, reduce costs and preserve data indefinitely. The latest addition to Cray’s line of storage and data management solutions, Cray TAS can be provisioned as a primary file storage system with tiers, as well as a persistent storage archive. It provides transparent data migration across storage tiers – from fast scratch to primary and archive storage. Cray TAS features up to four flexible storage tiers that can mix media types – solid state drives, disk or tape – and simplifies storage management with familiar, easy-to-use archive commands for storage operators.
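    To make the tiering concept concrete, here is a deliberately simplified sketch of an age-based migration policy across four hypothetical tiers. It is conceptual only – the tier names and thresholds are invented for illustration and are not Cray TAS or Versity commands.

        # Conceptual sketch only - not Cray TAS or Versity functionality. It illustrates
        # policy-driven tiering: data moves to colder, cheaper media as it ages.
        import time

        TIERS = ["ssd_scratch", "primary_disk", "nearline_disk", "tape_archive"]  # hypothetical tier names
        AGE_THRESHOLDS_DAYS = [7, 30, 365]  # older than each threshold -> next colder tier

        def target_tier(last_access_epoch, now=None):
            now = now or time.time()
            age_days = (now - last_access_epoch) / 86400
            for tier, limit in zip(TIERS, AGE_THRESHOLDS_DAYS):
                if age_days < limit:
                    return tier
            return TIERS[-1]

        # A file untouched for 90 days would be staged to nearline disk:
        print(target_tier(time.time() - 90 * 86400))  # -> "nearline_disk"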

    “Our new data management product gives supercomputing and Big Data customers the ability to more effectively access, manage, and preserve their exponentially increasing amounts of data,” said Barry Bolding, Cray’s vice president of storage and data management. “Today, Cray is at the forefront of scalable, parallel file system storage. With the launch of Cray TAS, we now have a complete tiered storage and archiving solution that combines our scaling expertise with proven block storage and tape technologies, while also providing integration with HSM features in future versions of the Lustre file system. To make Cray TAS even more compelling, we have integrated Versity’s open-format virtualization technologies that manage and optimize customer data across multiple storage tiers to provide best-in-class hierarchical storage management.”

    Internap and Aerospike offer turnkey solution

    Internap Network Services (INAP) and in-memory NoSQL database provider Aerospike announced an integrated solution that creates the industry’s first “fast big data” platform by offering Aerospike’s leading NoSQL database on Internap’s AgileSERVER bare-metal cloud. One of the early customers to benefit from the joint Internap-Aerospike solution is eXelate, the smart data company that powers digital marketing decisions worldwide for agencies, platforms, and publishers. eXelate runs Aerospike databases on Internap bare-metal cloud servers with direct-attached SSDs in four Internap data centers around the world. The combined solution enables eXelate to provide its customers with 8,000 distinct segments of demographic, behavioral and purchase intent data on more than 700 million consumers within strict, 100-millisecond service-level agreements (SLAs), cost-effectively and with 100 percent uptime.
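    For context, the sketch below shows how an application might read a profile record from Aerospike within a tight latency budget, using the open-source Python client. The namespace, set and bin names are hypothetical, and the exact read-policy key can vary by client version.

        # Minimal sketch: read a record from Aerospike with a millisecond-scale timeout
        # so slow reads fail fast instead of breaking a latency SLA. Namespace, set and
        # bin names are hypothetical. Requires the open-source `aerospike` Python client.
        import aerospike

        config = {"hosts": [("127.0.0.1", 3000)]}      # seed node; the rest of the cluster is discovered
        client = aerospike.client(config).connect()

        key = ("segments", "profiles", "consumer:12345")          # (namespace, set, user key)
        client.put(key, {"demo": "25-34", "intent": "auto"})      # store a couple of example bins

        try:
            # Cap the read at 50 ms; the policy key is "total_timeout" in recent client versions.
            _, meta, bins = client.get(key, policy={"total_timeout": 50})
            print(bins)
        finally:
            client.close()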

    Permabit Dedupe Tops One Million IOPS

    Permabit Technology announced that its Albireo Virtual Data Optimizer (VDO) software has exceeded the one million IOPS performance barrier for inline data deduplication, enabling it to be integrated into the highest performing enterprise storage arrays. By taking advantage of multi-core, multi-processor, scale-out architectures, Permabit has assembled a reference architecture that delivers over 1 million sequential 4K IOPS (4 GB/s) and 500,000 random 4K IOPS (2 GB/s) under workloads generated using the flexible I/O tester (fio). These results are three or more times faster than the published deduplication performance numbers of other high-end storage arrays. “Data efficiency has burst onto the storage scene as a high-value product requirement,” said Tom Cook, CEO of Permabit. “Breaking the 1M IOPS barrier with inline deduplication means even the highest performing storage arrays will have advanced data efficiency incorporated over the next two years. The result is a huge windfall for IT efficiency.”
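    As a quick sanity check on those figures, throughput is simply IOPS multiplied by block size, as the short sketch below shows. A 4 KiB block size is assumed from the stated 4K workload.

        # Sanity check: throughput = IOPS x block size.
        # 1,000,000 x 4 KiB ~= 4 GB/s sequential; 500,000 x 4 KiB ~= 2 GB/s random.
        def throughput_gb_per_s(iops, block_bytes=4096):
            return iops * block_bytes / 1e9   # decimal gigabytes per second, as vendors usually quote

        print(throughput_gb_per_s(1_000_000))  # ~4.1 GB/s sequential
        print(throughput_gb_per_s(500_000))    # ~2.0 GB/s random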

    1:30p
    Kinetic Open Storage: Why It Matters for Cloud Builders

    Cloud-based storage has really come a long way. The distributed nature of the cloud forced the logical evolution of the modern storage platform. Seagate recently announced Kinetic Open Storage, an interesting new platform that aims to remove another layer between applications and  storage.

    Here’s Seagate’s description: “The Seagate Kinetic Open Storage platform eliminates the storage server tier of traditional data center architectures by enabling applications to speak directly to the storage device, thereby reducing expenses associated with the acquisition, deployment, and support of hyperscale storage infrastructures. Companies can realize additional cost savings while maximizing storage density through reduced power and cooling costs, and receiving potentially dramatic savings in cloud data center build outs.”

    Major cloud and storage vendors can see direct benefits from adopting this type of storage solution, which allows applications to communicate with key resources that can optimize their performance. This model of cloud storage is the future of how applications will behave within a data center. By incorporating intelligent APIs into the entire stack, applications become more intelligent. As a result, your overall infrastructure will see serious gains.

    So what makes up the Seagate Kinetic Cloud Storage Model? What are the real benefits, and will this technology stick in the future? Let’s take a look.

    • Creating More Intelligent Storage Solutions. The Kinetic Open Storage architecture allows administrators to use cheaper, more scalable object storage and frees IT professionals from investing in hardware and software they don’t need. Organizations will be able to better utilize and optimize their storage resources.
    • Optimizing Your Applications and Cloud. The really great thing about applications in the cloud is the software and logical layer. We are able to do so much more now with applications than ever before. By breaking down physical resource barriers, administrators can create truly optimized cloud solutions.  As Seagate goes on to explain, “With the Kinetic Open Storage platform, applications can now manage specific features and capabilities and rapidly implement and deploy in any cloud storage software stack. The technology also increases I/O efficiency by removing bottlenecks and optimizing cluster management, data replication, migration, and active archive performance.”
    • New APIs and New Open Source Capabilities. There have historically been two ways to deliver a storage API: proprietary or open source. Fortunately, Seagate decided to integrate key API structures which will remain open-source technologies (a conceptual sketch of this kind of key/value interface follows this list). As Seagate points out, “Designed for rapid implementation and deployment in any cloud storage software stack, this technology can be deployed across a portfolio of storage devices enabling system builders and software developers to design new solutions that will deliver against a full array of cloud data center use cases.” This means true storage agnosticism. Furthermore, servers can utilize intelligent storage technologies to even better support the applications running on top. This new type of API structure for the Kinetic Cloud allows servers with internal storage – or full storage arrays – to leverage the performance optimizations of the platform.
    • Next-Generation Scale-Out Storage Systems. Cloud computing heavily revolves around our ability to be scalable and efficient. This means that data centers and cloud providers needed to redefine how they utilize hardware and software resources. During this shift, developers realized that they needed to reduce the amount of communication happening between the logical and the physical layer. Once that challenge was overcome, the result was a cloud environment capable of great scale. By better understanding hardware and software capabilities, the Kinetic Open Storage platform enables cloud service providers and independent software vendors to optimize scale-out file and object-based storage. There is already significant vendor buy-in for this design. “Key/value data storage and IP interface drive modules are an important trend for scale-out storage systems,” said Yuan Yuan, senior director of Huawei’s IT Massive Storage. Next-generation storage platforms will allow applications to use resources in a balanced manner and on an as-needed basis. Applications will be intelligent enough to ask for resources as required, further optimizing both cloud and application delivery.
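    To make the idea concrete, here is a purely illustrative Python sketch of the kind of key/value interface a Kinetic-style drive exposes in place of a block device. The class and method names are hypothetical and are not Seagate’s actual API.

        # Illustrative only: the style of key/value operations a Kinetic-like drive exposes
        # over Ethernet instead of a block interface. Names are hypothetical, not Seagate's API.
        class KeyValueDrive:
            def __init__(self):
                self._store = {}          # stand-in for the drive's on-media key/value store

            def put(self, key, value):
                self._store[key] = value  # application writes an object directly - no file system or storage server tier

            def get(self, key):
                return self._store[key]

            def delete(self, key):
                del self._store[key]

            def get_key_range(self, start, end):
                # Range scans let cluster software shard, replicate and migrate objects by key.
                return sorted(k for k in self._store if start <= k <= end)

        drive = KeyValueDrive()
        drive.put(b"bucket/object-001", b"...object bytes...")
        print(drive.get_key_range(b"bucket/", b"bucket/\xff"))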

    Data centers are already striving to create optimal systems which can process vast amounts of cloud traffic. These distributed cloud models require careful storage planning so that neither dollars nor resources are wasted. A recent article discussed how storage is shaping the cloud data center. As part of this process, the modern API has created new ways for applications, users and resources to communicate.

    The days of the PC are numbered. The evolution of big data, IT consumerization and cloud computing has shown administrators the power of the application. Most devices now are web-enabled and OS-agnostic. This means that the key focus becomes application delivery, rather than the operating system.

    The future of the user environment will heavily revolve around application delivery and performance. This is why removing hops as an application or resource is delivered becomes critical. As cloud continues to proliferate, the need to optimize the user experience will be ever-present. This means that data center and cloud administrators will have to continuously find new ways to optimize and deliver their workloads.

    2:16p
    Microsoft Will Use Fuel Cells to Create Self-Powered Racks

    Will these Microsoft racks soon include fuel cells? The company has outlined a plan to integrate methane-based fuel cells directly into racks. (Photo: Microsoft Corp.)

    Microsoft wants to bring power generation inside the rack, and make data centers cheaper and greener in the process. The company says it will test racks with built-in fuel cells, a move that would eliminate the need for expensive power distribution systems seen in traditional data centers.

    In a new white paper, Microsoft researchers say the use of methane-powered fuel cells at the rack level offers the greatest efficiency and savings. It’s a new twist on the convergence of data centers and renewable power, which has seen eBay use fuel cells as a building-level power source in its Utah server farm, eliminating the need for UPS units and generators. Microsoft says integrating fuel cells directly into data center racks can eliminate power distribution systems and even server-level power supplies, dramatically reducing energy loss.

    This approach builds on Microsoft’s goal of creating “data plants” that operate with no connection to the utility power grid. The company is deploying a proof of concept in Cheyenne, Wyoming featuring a modular data center housed at a water treatment plant. This waste-powered data center will use electricity from a fuel cell running on methane biogas from the plant.

    Microsoft now wants to take this a step further. Using a rack-level fuel cell can “collapse the entire energy supply chain, from the power plant to the server motherboard, into the confines of a single server cabinet,” said Sean James, Senior Research Program Manager for Microsoft Global Foundation Services.

    “The main distinction between this data plant concept and previous architecture ideas is the notion of bringing the power plant inside the data center, instead of putting the data center in the power plant,” James writes in a blog post. “A lot of energy is lost in today’s data center energy supply chain. We show how integrating a small generator with the IT hardware significantly cuts complexity by eliminating all the electrical distribution in the grid and data center.”

    James said Microsoft is in the “early stages” of exploring the concept, but believes this design could improve efficiency, reduce the total cost of operating a data center, and improve reliability by distributing risk. If a fuel cell fails, it would affect only one rack rather than an entire data center.

    “We plan to install a fuel cell with servers to get first-hand measurements,” the researchers write in the white paper.

    The Microsoft team explored several approaches to incorporating fuel cells into data centers, evaluating the cost and efficiency of using them at the utility power level and even at the server level. Using fuel cells to replace the utility feed improves efficiency, but allows a failure to affect the entire data center.  Small fuel cells can be integrated into servers, eliminating the need for power cabling, but this approach isn’t cost effective.

    Integrating fuel cells into racks eliminates the need for power infrastructure – including UPS units, generators and switchgear – by replacing it with gas pipes to distribute fuel, although this approach would add to cooling costs. Microsoft estimates this design would reduce capital expenses by 16 to 20 percent, while cutting operating costs by 3 percent or more.
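    For a rough sense of what those percentages could mean in practice, here is a back-of-the-envelope sketch. The baseline figures are entirely hypothetical; only the 16 to 20 percent capital and roughly 3 percent operating reductions come from Microsoft’s estimates.

        # Back-of-the-envelope only: the baseline costs are hypothetical, not Microsoft's.
        # Only the 16-20% capex and ~3% opex reduction figures come from the white paper.
        CAPEX_BASELINE = 200_000_000   # hypothetical build cost for a large data center, USD
        OPEX_BASELINE = 40_000_000     # hypothetical annual operating cost, USD

        capex_low = CAPEX_BASELINE * 0.16
        capex_high = CAPEX_BASELINE * 0.20
        opex = OPEX_BASELINE * 0.03

        print(f"Capex savings: ${capex_low/1e6:.0f}M to ${capex_high/1e6:.0f}M")
        print(f"Opex savings:  ${opex/1e6:.1f}M per year")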

    “Having fuel cells close to the servers makes direct DC power distribution possible,” the researchers write. “This can also eliminate the AC power supply unit in servers, currently used to convert AC input to internal DC power.”

    “We see tremendous potential in this approach, but this concept is not without challenges,” said James. “Deep technical issues remain – such as thermal cycling, fuel distribution systems, cell conductivity, power management, and safety training – that need further research and solution development. But we are excited about working to resolve these challenges.”

    3:30p
    How to Prevent a Data Breach When Refreshing Your Server Equipment

    Steve Skurnac is the president of Sims Recycling Solutions, the global leader in electronics reuse and recycling.


    As more people leverage cloud computing, data centers play a more critical role in supporting our individual and corporate IT requirements. Operating in the background, today’s data centers are much larger and more ubiquitous than previous centers as they offer back-end support to our expanding daily IT demands and increased cloud computing.

    In the last three years, 74 percent of data centers have added to their physical server count. In addition, sales figures for new server purchases rose from 8.9 million servers purchased in 2010 to 9.5 million purchased in 2011. However, more important than the purchase and growth of this new equipment is the method of disposal for the old servers being replaced. With 88 percent of unsecured data being shared electronically, according to IT News Online, responsible server disposal has proven to be a key component of IT security, environmental responsibility and corporate compliance.

    What some companies overlook is that the security of this unwanted equipment can be as important as the security of the working equipment still in use. The following information offers guidance on how to responsibly manage end-of-life IT equipment to help prevent a data breach and avoid potential litigation.

    Assignment of Responsibility

    In an ever-changing technology environment, IT and data center executives are constantly challenged with ensuring 24/7 availability of data center equipment and services. These challenges can make it easy to allow old devices to pile up in a storeroom.

    Assigning this responsibility to an individual within the workplace can help maintain continuous oversight of an ongoing technology and server disposal program, ensuring standardized and systematic processes are in place, and proper records are maintained regardless of who is physically conducting the disposal activities and tasks.

    Understanding the complex process of IT asset disposal along with costs and services can be a lot of work and at times overwhelming, but when compared to the potential risk of corporate digital data ending up in the wrong hands or equipment being illegally “landfilled,” the payback always proves worthwhile. According to a survey, 66 percent of executives with purchasing authority are unaware of the financial implications of ignoring environmental regulations when disposing of IT equipment, and may not even realize the significance of this role.

    Know Your Risks

    Data arrays, data servers, hard drives, tape drives, routers and switches are just some of the data-rich IT assets that can potentially expose your company’s confidential, proprietary or network information if not handled securely during the disposal process.

    Other factors to be mindful of when disposing of obsolete data-bearing devices include regulatory compliance, data protection, fiduciary accountability and environmental stewardship. Notably, the Environmental Protection Agency (EPA) can hold the equipment owner personally liable if equipment is improperly disposed of, even when disposal has been outsourced. Legislation governing disposition of obsolete data center equipment varies by location.

    There are many risks associated with equipment disposal, and many reasons why it is critical to work with a disposal company that will not only protect you and your company’s data, but also ensure you comply with the 550 U.S. laws that affect IT equipment disposal.

    Consider Your Options

    While the destruction of data residing in retired data center assets may occupy a small part of a company’s larger data security strategy, no policy is complete without it. If hard drives are to be reused, the digital data must be 100 percent overwritten using commercially certified software. Data on hard drives that will not be reused can be erased via degaussing, which uses strong electromagnetic fields to destroy digital data. Hard drives that are degaussed cannot be reused, and are typically also physically destroyed. Shredding of hard drives is a common commercial solution to ensure destruction of digital data. Special processing of solid state drives (SSDs) is required, as traditional data destruction methods such as degaussing are not effective for these devices.
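    To illustrate the overwrite approach for drives destined for reuse, here is a minimal sketch for a regular file or a Linux block device opened with sufficient privileges. Certified erasure tools also verify every pass and produce an audit record, which this sketch does not.

        # Minimal multi-pass overwrite sketch. Certified erasure tools also verify each
        # pass and issue certificates of destruction; this does neither.
        import os

        def overwrite(path, passes=3, chunk=1 << 20):
            with open(path, "r+b") as f:
                f.seek(0, os.SEEK_END)
                size = f.tell()               # works for files and, on Linux, block devices
                for _ in range(passes):
                    f.seek(0)
                    written = 0
                    while written < size:
                        n = min(chunk, size - written)
                        f.write(os.urandom(n))
                        written += n
                    f.flush()
                    os.fsync(f.fileno())      # force the pass out to the media

        # overwrite("/dev/sdX")  # run only against a drive you genuinely intend to wipe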

    IT asset disposal vendors such as Sims Recycling Solutions can perform these services on-site at the customer’s office leaving no question about 100% data destruction. Certificates of data destruction provide documented proof that assets have been properly managed and digital data destroyed.

    Choose Your Vendor Carefully

    Choosing a vendor to manage your IT assets can be a daunting task. It is important to take the time to ask questions, learn and understand the disposition process to know exactly where your IT assets and servers are ending up.

    When navigating the selection process, you may want to consider a few things.

    • Reliance on Subcontractors: Selecting an IT asset disposal vendor that doesn’t rely on subcontractors and manages every step of the process internally improves accountability, increases security and streamlines reporting.
    • Data Security Standards: Look for NIST-compliant data destruction, with validation of that destruction through certificates of data and physical destruction; the level of documentation offered varies by vendor.
    • Certifications: Look for a company that operates in accordance with industry best practices that govern environmental, health, and safety management systems (R2, e-Stewards, ISO 14001, OHSAS 18001), but also look for standards that regulate information destruction (NAID) and the secure handling, warehousing and transportation of equipment (TAPA).
    • Liability Insurance: An insured vendor is able to protect customers from and manage the potential financial risks associated with the proper disposition of obsolete electronics.
    • Location of Business: Strategically located facilities will minimize freight costs, reduce greenhouse gas emissions and simplify logistics.
    • Examine IT Asset Disposal Facilities: Conduct a site visit and evaluate the physical security measures in place. Determine whether employees are background-screened and drug-tested.

    It is important to remember that your organization will continue to be held accountable for the data in your IT equipment even after retirement. Data has never been more valuable or more vulnerable. By having a clear disposal plan for obsolete equipment, you make the security of the protected digital data entrusted to your organization a priority rather than a postscript.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:03p
    Open Compute Reports Progress on Open Network Switch

    Najam Ahmad, director of technical operations at Facebook, speaks at an Open Compute Project networking news announcement at Facebook headquarters. (Photo: Jordan Novet)

    MENLO PARK, Calif. - The Open Compute network switch is moving closer to reality. Broadcom, Intel and Mellanox have each submitted specifications for a top-of-rack switch to the Open Compute Project, and were on hand at Facebook headquarters Monday to promote their contributions to the open-source hardware initiative.

    Broadcom introduced and built a switch based on the Trident II chip architecture, while Mellanox put forward its SwitchX-2 switch, and Intel offered a specification for a switch that Quanta and Accton have been building. The three switches are now being tested in Facebook’s labs.

    Hardware vendors such as Hyve have released designs to meet the server needs of the Open Compute Project. Now the arrival of a webscale-approved open-source switch is just around the corner, six months after the Open Compute Project announced plans to come up with such a switch.

    But contributions to the project now go beyond hardware. Cumulus Networks, which delivers a Linux networking operating system, is providing the Open Network Install Environment for the Open Compute Project. The idea is to be able to run different operating systems on networking hardware, said JR Rivers, CEO and a co-founder of Cumulus.

    And that is, in fact, a use case Facebook would like to implement, said Najam Ahmad, director of technical operations at Facebook. Such capability would enable Facebook to let a switch sometimes run, say, OpenFlow-based vSwitches.

    “There is that interest from a lot of users to be able to write software to effect changes on the networks, and these closed platforms don’t provide enough visibility and control to be able to do that,” Ahmad said. “The desire from a lot of the user base to be able to use software to do things is growing rapidly.”

    Fortunately for Ahmad, the Open Compute Project has more than 30 contributions to consider for networking, and among them are “SDN-type solutions,” he said, referring to software-defined networking.

    In the coming weeks, Open Compute Project leaders will choose one or two of the specifications from the hardware vendors to be officially included in the Open Compute Project. Then Open Compute people want to run demonstrations with the switches and talk about the work at the Open Compute Summit in San Jose in January.

    The switches could be deployed first at Facebook’s data center in Altoona, Iowa, in late 2014 or early 2015, a Facebook spokesman said.

    For Ahmad, the switches are more than a business decision. They’re part of an effort to rethink networking.

    “We talk about hardware, and we talk about being able to build stuff cheaper,” Ahmad said. “The real gain is in the operations of things — transparency being the keyword, visibility, and being able to do something when you see something. Transparency starts with that visibility. (If you) have visibility and can’t do anything about it, it’s even more frustrating to me. This project is the start of something really big, where a few years from now networks will be built very differently.”

    7:20p
    New PowerMOD Power System for Modular Data Center Market

    GE’s Critical Power business this week rolled out its new PowerMOD, a modular power system that provides increased energy efficiency and greater cost savings, and can be configured with critical power protection and efficiency technologies.

    GE’s Critical Power business provides mission-critical applications with end-to-end power products and service solutions that maximize uptime and power efficiency. With the market for modular data centers expected to grow by nearly 33 percent each year over the next five years, according to a 2013 IHS report, the market for modular power systems is expected to expand as well. GE Critical Power, with its experience across industries, is poised to capture that opportunity.

    With the introduction of the PowerMOD, customers can see greater efficiency in the uninterruptible power supply (UPS), as well as lower operating expenses. The power module can contain a TLE Series UPS, which provides up to 97 percent power efficiency in double conversion mode and up to 99 percent efficiency in eBoost or multi-mode operation. GE’s efficient TLE UPS system helps lower system energy expenses and power usage effectiveness (PUE). GE PowerMOD provides backup critical power from 200kW to 1,500kW in standard design points, for both 50Hz and 60Hz configurations. Operating expenses can also be reduced by up to 44 percent through the greater efficiencies of the TLE UPS and the Free-Cooling Economizer, standard features in the PowerMOD. Environmental cooling options for the PowerMOD include direct expansion (DX), chilled water and evaporative/adiabatic.
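    To see how UPS efficiency flows through to PUE, here is a simplified sketch; PUE is total facility power divided by IT power. The IT load and cooling overhead below are hypothetical, and only the 97 percent and 99 percent UPS efficiencies come from GE’s figures.

        # Simplified PUE model: PUE = total facility power / IT power.
        # The 1,000 kW IT load and 30% cooling overhead are hypothetical; only the
        # UPS efficiencies (97% double conversion, 99% eBoost) come from GE's figures.
        def pue(it_kw, ups_efficiency, cooling_overhead=0.30):
            ups_input_kw = it_kw / ups_efficiency    # power drawn to deliver it_kw through the UPS
            cooling_kw = it_kw * cooling_overhead    # hypothetical cooling load
            return (ups_input_kw + cooling_kw) / it_kw

        print(round(pue(1000, 0.97), 3))   # ~1.331 in double conversion mode
        print(round(pue(1000, 0.99), 3))   # ~1.310 in eBoost / multi-mode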

    “GE PowerMOD delivers lower total cost of ownership and higher energy efficiency, and can be deployed on site faster than brick-and-mortar data centers,” said Jeff Schnitzer, general manager of GE’s Critical Power business. As owners and operators seek to grow the storage and processing capacity of fixed brick-and-mortar facilities, many are seeing modular data centers as a way to build out new capacity. In a survey conducted by the Uptime Institute in 2012, 41 percent of respondents viewed modular “power and cooling blocks” as part of their current data center expansion strategies. According to the company, GE’s PowerMOD can be deployed in four to five months, compared with 24 months for traditional facilities, with capital expense reductions of almost 25 percent.

    Liz Cruz, senior analyst in IHS’ data center and critical infrastructure research group, said, “The market for facility containers, or those that provide the power and cooling infrastructure for data centers, has an attractive future due to increased rack densities, which are causing data centers to run out of power and cooling capacity before IT capacity, presenting a unique market opportunity for facility containers. GE’s entry into the modular data center market with a power supply solution signals widening growth for this segment.”

    For further updates, follow GE Energy Management and its Critical Power business on Twitter @GE_EnergyMgmt and @GEcriticalpower.

