Data Center Knowledge | News and analysis for the data center industry

Tuesday, April 9th, 2013

    11:30a
    AFCOM Names Finalists for DC Manager of Year

    AFCOM, the world’s leading data center association, named the 2013 finalists for its annual “Data Center Manager of the Year” award, nominating Tate Cantrell, chief technology officer for Verne Global; Donna Manley, senior IT director, computer operations at the University of Pennsylvania; and David Shaw, senior vice president of IO. The award will be given at the Data Center World spring conference.

    These data center professionals are being honored for their outstanding leadership and excellence in the field. “We received many submissions this year for the DCMY award and, like in the past, the selection was difficult. These three individuals stood out as leaders for their companies and have shown a high level of commitment to the data center industry,” said Tom Roberts, president of AFCOM and chairperson of Data Center World. “Their methods, innovations and contributions inspire all of us to keep moving forward toward continual improvement.”


    Tate Cantrell, Chief Technology Officer at Verne Global

    Notable Achievement: Verne Global’s Icelandic campus is the industry’s first efficient, sustainable, recyclable, zero-emission data center.

    Tate Cantrell is responsible for product design and development, and data center operations. Mr. Cantrell has overseen the build-out and operations of some of the largest and most sophisticated data centers in the industry.

    • 15+ Years’ Experience
    • 25 Staff Members
    • Innovative designs utilizing geothermal and hydroelectric energy for power and cooling
    • ISO 27001 Certification Standard for Information Security

    Donna Manley, Senior IT Director at the University of Pennsylvania

    Notable Achievement: The University of Pennsylvania operates the only Ivy League university data center with ISO 9001:2008 certification and HIPAA, PCI and FISMA compliance.

    Donna Manley is a senior IT director at the University of Pennsylvania’s Information Systems and Computing (ISC) Systems Engineering and Operations organization. Ms. Manley evaluates evolving technologies and the changing demands of the Penn community, finding creative ways to prolong the life of the existing data center well beyond its expected capacity limits. Despite several additions to the Computer Operations services portfolio, process reengineering and workflow automation strategies have resulted in sustained staffing and budget levels over an eight-year period.

    • 25+ years’ experience in data center management across various industry verticals
    • 28 Staff Members
    • IT Infrastructure Library (ITIL) V2 and V3 Certified – Foundations
    • President – AFCOM Mid-Atlantic Chapter (6 years)

    David Shaw, Senior Vice President at IO

    Notable Achievement: Manages over 1.5 million square feet of data center capacity – including the world’s largest modular data center.

    As Senior Vice President at IO, David Shaw is responsible for IO’s global Data Center as a Service (DCaaS™). Mr. Shaw oversaw the opening of the world’s largest modular data center, IO.New Jersey, in Edison, New Jersey, and ensured that the data center was 100 percent operational during Hurricane Sandy and in its aftermath in late 2012.

    • 4 US data centers and currently implementing a new data center in Singapore
    • 50 Staff Members
    • 25 years of global data center operations, engineering management & information technology experience
    • Certified Specialist in Information Technology Infrastructure (ITIL)

    More detailed bios on the Data Center Manager of the Year Finalists can be found at http://www.datacenterworld.com.

    The Spring Data Center World 2013 will be held April 28 – May 2 at Mandalay Bay in Las Vegas. The schedule for the Spring 2013 Data Center World can be found at http://www.datacenterworld.com/spring2013/attend/schedule-of-events/.

    To track the DCW conference on Twitter, follow @datacenterworld and search for the hashtag: #datacenterworld.

    12:00p
    Carter Validus Buys Data Centers in Boston, Raleigh

    Carter Validus Mission Critical REIT continues to acquire data center properties. On Monday the company said it has paid $12 million to acquire a data center property in Andover, Massachusetts, about 25 miles north of Boston.

    The property is fully leased under a long term, net lease to a leading provider of advanced network communications, including cloud computing and managed services. The purpose-built facility, originally constructed as a build-to-suit for a major telecommunications company, totals 92,700 square feet and has benefited from extensive capital investment by the current tenant.

    “We are pleased to continue to expand our diversified portfolio of mission critical real estate assets throughout key markets across the United States,” said John Carter, CEO of Carter Validus Mission Critical REIT, Inc.

    The announcement follows Carter Validus’ announcement last week that it has purchased the Raleigh Data Center property for $19.5 million. The multi-tenant data center is located near Raleigh-Durham International Airport in Morrisville, NC. The 143,770 square foot property, originally constructed in 1997, is 100 percent leased to four tenants.

    “Given the property’s desirable location and long term leases with high-quality tenants, we believe that the Raleigh Data Center is a great addition to our growing portfolio of mission critical real estate,” said Carter.

    Carter Validus Mission Critical REIT is focused on two sectors, data center and healthcare, citing societal trends that it believes will boost demand for data storage and outpatient healthcare. The company owns properties in Dallas, Atlanta and Philadelphia.

    12:59p
    Quantifying the Cloud: Bringing the Science Back

    Mike Goodenough is global director of cloud field engineering for savvisdirect at Savvis, a CenturyLink company and global provider of cloud infrastructure and hosted IT solutions.


    Far back in the mists of time–at the pre-dawn of the digital age–scientists and engineers huddled in small rooms, devising ways to move electrons in such a manner as to represent the operations of logic upon the physical domain. No more would the slide rule be the sword of innovation and discovery. The future of the world rested firmly in the whirring circuits of that which would become: the computer.

    Computers weren’t things that rested on one’s desk. No, they were mighty monoliths erected as harbingers of mathematical, scientific and engineering feats that recreated the world of humanity in its own, towering image. To be master of the computer was to hold the destiny of the universe in your hand.

    A scant few years later, though, things changed.

    Where once a high degree of science was required to understand, navigate and create meaning from numbers, figures and formulas in a swarm of electro-mechanical interactions, now even high-school dropouts could construct powerful systems and arrange them strategically to illuminate the fabric of world commerce.

    Return to Science

    At some point along the way, the science was lost. When we abandoned the mainframes for personal computers and servers, math succumbed to convenience. Physics became a distant memory. And the culture that at once feared and admired the messengers of technology knelt down to worship tiny fruit-based entertainment.

    In this fashion, the corporate environment shifted. Enormous tasks could be tackled with stacks of small machines, instead of acres of memory core. Simplicity became the watchword, and when computer science was replaced by information technology, so too was knowledge traded for the total cost of space and power.

    But now cloud computing returns us full-circle to where we began, with enormous data centers housing collective systems so massive that a million businesses can fit inside. The science of “big computers” is here once again.

    Evolution of Cloud

    Bringing cloud into the mix changes the nature of modern business computing. Until recently, you would measure IT in terms of physical CPUs, disks, networks, blades and other manifestations of technology to be managed, replaced and amortized—entities of their own designs. However, the budgets with which these eventualities were addressed became so compressed that they began to collapse in upon themselves, ultimately resulting in an explosion of outsourced infrastructure.

    The equation thus mutated from purchasing power to operational effectiveness. Services now deliver what once was provided by legions of staff. And budgets for computers and software are instead shifted to create meaningful, lasting value for the enterprise. With the business itself rejoining this computing equation, the service model provides a solution to an evolutionary change in commercial mechanics.

    Measuring the Intangible

    When you compare it to the corporeal world in which we live, computer services are the energy in a universal system of hardware matter. Energy is mobile—it can transfer states, add or subtract properties, and alter its surroundings—yet exists regardless of its bindings. Cloud services are likewise portable, divorced from the platform on which they operate. Moving cloud energy around to satisfy the demands of a continually changing business environment does not end in chaos, but merely reflects the subtle alteration in technical trajectory.

    This is the promise of the future data center. Cloud meets the requirements of the operating budget. What was once implemented with CapEx servers costing $7 per day is delivered for $3 per day in the OpEx cloud. Your operational parameters go from return on investment (ROI) on assets to value realized from capabilities.
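
    To make the $7-versus-$3 comparison concrete, here is a minimal back-of-the-envelope sketch. The per-day figures are the ones quoted above; the fleet size and the 365-day year are illustrative assumptions, not numbers from the article.

    ```python
    # Back-of-the-envelope CapEx vs. OpEx comparison using the per-day
    # figures quoted above. Fleet size and the 365-day year are assumptions.
    CAPEX_PER_SERVER_DAY = 7.00  # traditional owned server, $/day
    OPEX_PER_SERVER_DAY = 3.00   # equivalent cloud capacity, $/day

    def annual_cost(per_day, servers, days=365):
        """Total yearly spend for a fleet at a given per-day rate."""
        return per_day * servers * days

    fleet = 100  # hypothetical fleet size
    capex = annual_cost(CAPEX_PER_SERVER_DAY, fleet)
    opex = annual_cost(OPEX_PER_SERVER_DAY, fleet)
    print(f"CapEx: ${capex:,.0f}/yr, OpEx: ${opex:,.0f}/yr")
    print(f"Savings: {(capex - opex) / capex:.0%}")  # ~57%
    ```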

    Indeed, cloud is more than just cost efficacy. It moves beyond hardware implementation and software licensing, and is instead quantified by the value of the services being provided. And while the basic variables in the algorithms of the cloud remain—compute, memory, storage and network—they become hidden by a universal architecture that focuses on the what, not the how.

    IT is not about defining “what is the cloud.” Rather, it is about deriving value by conjoining business principles with technology innovation. How does the equation fit you?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:30p
    Accelerate Your IT for Better Business Results

    With the emergence of cloud computing, big data and IT consumerization, many data centers and organizations have been redesigning their IT infrastructure. One of the biggest drivers within the modern data center is to design an environment capable of scalability, agility and, of course, efficiency. In creating such an infrastructure, many organizations are working with new types of converged technologies that can help achieve greater density.

    Based on a recent IDC study, customers who have implemented a converged infrastructure have been able to:

    • Shift more than 50 percent of their IT resources from operations to innovation—flipping the ratio from 70 percent of your people and budget focused on operations to 70 percent dedicated to innovation to improve customers’ experience, increase employee productivity, and make the business more competitive.
    • Cut time to provision applications by 75 percent. Based on the IDC research study, it takes IT organizations 20+ days to deploy a new application in traditional environments—and only five days in a converged infrastructure—a 75 percent decrease.
    • Reduce downtime by 97 percent. Go from an average of 10 hours of downtime per year down to less than 20 minutes. (The arithmetic behind both percentages is sketched below.)
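
    Both percentage claims are easy to sanity-check. A quick sketch, using only the before-and-after figures quoted in the list above:

    ```python
    # Arithmetic check of the IDC figures quoted above (illustrative only).
    provision_before, provision_after = 20, 5      # days to deploy an application
    downtime_before, downtime_after = 10 * 60, 20  # minutes of downtime per year

    prov_cut = (provision_before - provision_after) / provision_before
    down_cut = (downtime_before - downtime_after) / downtime_before
    print(f"Provisioning time cut: {prov_cut:.0%}")  # 75%
    print(f"Downtime cut: {down_cut:.0%}")           # ~97%
    ```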

    There are direct benefits to working with evolving converged infrastructure solutions. HP’s Converged Infrastructure offers data centers the flexibility and capability to expand as needed. The core functions of a data center are all tied into one framework where management overhead is reduced and efficiency is increased.


    [Image source: HP - Accelerate your IT for better business results]

    Download HP’s white paper on the HP Converged Infrastructure to see how this technology can bring direct benefits to your organization, including how to:

    • Accelerate innovation
    • Accelerate responsiveness
    • Accelerate cloud
    • Accelerate security
    • Accelerate disaster recovery
    • Accelerate ROI

    In HP’s white paper, you can see how intelligent and efficient technologies – like converged infrastructure – not only improve management processes but can also help drive business, reduce operating risks and lower costs.

    4:54p
    Sony Launches Media Storage Cloud Services

    Sony Corporation of America today announced the launch of Sony Media Cloud Services, a new subsidiary that will serve as a virtual workspace with media applications to store, share and manipulate content from any location in the world. The scalable cloud platform, Ci (pronounced “see”), will provide studios, broadcasters, filmmakers, independent producers, marketing teams and other creative individuals a “one-cloud” solution to collect, produce and archive high-value, high-definition content, allowing fast and secure collaboration on a global scale.

    “Every day, creative professionals around the world spend numerous hours and resources on non-creative tasks like moving and sharing content, figuring out how and where to store it, and getting the right assets to the right places and in the right hands,” said Naomi Climer, President, Sony Media Cloud Services. “Sony understands these complex challenges, which is why we designed Ci as a functionally rich, scalable and secure, media-focused cloud platform that can enhance and streamline traditional production workflows to make it easier to collaborate more effectively and cost-efficiently.”

    The Ci cloud platform will feature infinite scalability, pay-as-you-go pricing and a number of browser-based applications. Ci MediaBox collects, organizes, previews, shares and archives every media type and size using a studio-designed cloud storage suite. Ci VideoLog enables logging of frame-accurate events to prepare content for downstream opportunities, distribution and playout automation. Ci AudioSync uses analysis algorithms and audio pattern matching to reduce non-creative editing time in content-preparation workflows. Ci FrameMatch analyzes media files to automatically identify differences and similarities between two sets of video files. Ci ReviewApprove enables review, annotation and collaboration on media files across multiple locations in real time, simultaneously.

    “Built by Sony Pictures movie and television professionals, incubated by Sony’s hardware and technology, and brought to market by a global sales force who understand the needs of our industry, Ci goes beyond simply delivering innovative technology—it brings the total power of Sony into the cloud,” said Climer. “Together, we’ve built a platform with applications that not only addresses today’s media challenges, but serves as the foundation to develop innovative services to transform our industry for years to come. The possibilities are endless.”

    “The efficiency and flexibility that cloud solutions provide will radically change the way creative professionals collaborate,” said Chris Cookson, President, Sony Pictures Technologies. “Working with Sony’s Cloud Services team to further enhance Ci’s platform and applications will enable our production and distribution teams around the world to work together more efficiently, without sacrificing creativity or quality.”

    Ci is currently in beta production and will be on display at the 2013 NAB Show in the Sony booth.

    4:59p
    Compass Commissions Its First Data Center In Nashville

    Compass Datacenters has completed construction and commissioning of its first data center facility in Franklin, Tennessee, rolling out a 21,000 square foot, 1.2 megawatt stand-alone data center facility. Groundbreaking to customer handover occurred in just six months using Compass’ patent-pending “Truly Modular Architecture.”  The facility in suburban Nashville has been leased by a customer (previously identified as Windstream) that is taking possession of the data center this month.

    “Completing a stand-alone, hardened, Tier III-certified data center facility in only six months is a fraction of the time it typically takes for this kind of facility, but that is the standard timeline with Compass’ methodology,” said Chris Crosby, CEO of Compass Datacenters. “It’s not uncommon for this kind of project to take more than a year or two with traditional design and construction practices for data centers. Compass was founded to make that a thing of the past, and our very first project is a successful demonstration of the advantages of our methodology.”

    “There were 50 full days of rain in Nashville during the timeframe,” Crosby added. The company still hit its deadline. “For a greenfield build, it’s a big deal.”

    The facility in Franklin was built using Compass’ modular architecture, which makes it possible for companies to locate their data centers where they need them—at an affordable cost—rather than where their provider happens to have a facility. The centerpiece of the design is the CompassPod, which provides 10,000 square feet of column-less raised floor space supported by 1.2 MW of electrical power with 2N power distribution. The facility delivers a PUE of 1.2 to 1.5 or lower, even at loads as low as 25 percent. CompassPods are contained within, and protected by, the CompassStructure, a hardened, energy-efficient, highly secure structure for the facility’s mission critical IT systems.

    The CompassPowerCenter provides the UPS (2N) and switchgear (2N) equipment required to ensure uptime and reliability. Each facility includes a dedicated CompassSupport module that meets the needs of operational staff and logistics for data center operations, including a security center, lobby, office space, loading dock, break area and restrooms.

    “In terms of momentum, this is huge,” said Crosby. “Raleigh-Durham is going through level 5 commissioning next week. Once again, it’s within the six month time frame. With only six months from groundbreaking to delivery, this brings the concept of just-in-time delivery to data center facilities, enabling customers to take delivery of their new standalone facilities on a timeline that was never before possible.”

    In short, Crosby believes the architecture has proven itself. In terms of the ability to expand and to time capital outlays, Compass’ model is attractive to customers. Compass customer Windstream, as one example, has been able to add space as it adds revenue. It takes the guessing game out of the equation and allows a company to expand data center space in line with the business.

    One of the big differences Crosby sees is the hardened nature of the facilities. “The level of hardening is unique for our space and will continue to set us apart,” said Crosby. “During construction, there was a tornado that touched down basically across the street. Those folks were happy they were inside that facility at the time.”

    The company is seeing continued interest across the country. “The level of interest in the secondary markets is high. The next set of markets, Minneapolis and Columbus, also are seeing high interest,” said Crosby. “From an overall funnel perspective, we’ve been tracking 75-80 MW of opportunity. We feel pretty good where things are at. We’re at the negotiation stage with a few clients.” The company’s goal is to be able to manage up to 10 projects at the same time in 2014.

    “2012 was the year of the prototype, this year was the engine – we’ll probably work another six to 10 projects this year,” said Crosby. The company says that although it has improved the efficiency of its builds, it has a continuous improvement program in effect. Crosby gives the example of a car model undergoing tweaks from year to year to become better and better. Compass believes it has the blueprint to do things right, but will not stop looking for ways to improve at every step of the way.

    The company’s prospects have prompted it to add Jay Forester, formerly of Digital Realty Trust, to the talent pool. “We are getting an unbelievable resource here,” said Crosby. “It’s really an opportunity for him to take industrialization and move it to productization. Jay will lead that charge.”

    Forester was named Senior Vice President of Data Center Product Delivery, a new position at the company with responsibility for the construction and delivery of data center facilities across the United States.


    5:30p
    Why Consider a Modular Data Center?

    This is the third article in the Data Center Knowledge Guide to Modular Data Centers series. The initial black eye for containers and the modular concept was mobility. The Sun Blackbox was seen on oil rigs, in war zones and in places a data center is typically not found. As an industry of large brick-and-mortar facilities that went to all extremes to protect the IT within, the notion of this data-center-in-a-box being mobile was not only unattractive, but laughable as a viable solution. What it did do, however, was start a conversation around how the very idea of a data center could benefit from a new level of standardizing components and delivering IT in a modular fashion around innovative ideas.

    Faced with economic downturn and credit crunches, businesses took to modular approaches as a way to get funding approved in smaller amounts and mitigate the implied risk of building a data center. The two biggest problems typically cited with data centers are capital and speed of deployment: the traditional brick-and-mortar data center takes a lot of money and time to build. Furthermore, the quick evolution of supporting technologies further entices organizations to work with fast and scalable modular designs. Beyond those two primary drivers, there are many reasons a modular data center approach is selected.

    Design

    • Speed of Deployment: Modular solutions have incredibly quick timeframes from order to deployment. As a standardized solution, it is manufactured and able to be ordered, customized and delivered to the data center site in a matter of months (or less). Having a module manufactured also means that the site construction can progress in parallel, instead of a linear, dependent transition. Remember, this isn’t a container — rather a customizable solution capable of quickly being deployed within an environment.

    • Scalability: With a repeatable, standardized design, it is easy to match demand and scale infrastructure quickly. The only limitations on scale for a modular data center are the supporting infrastructure at the data center site and available land. Another characteristic of scalability is the flexibility it grants by having modules that can be easily replaced when obsolete or if updated technology is needed. This means organizations need to forecast technological changes only a few months in advance, so a cloud data center solution doesn’t have to take years to plan out.

    • Agility: Being able to quickly build a data center environment doesn’t only revolve around the ability to scale. Being agile with data center platforms means being able to quickly meet the needs of an evolving business. Whether that means providing a new service or reducing downtime — modular data centers are directly designed around business and infrastructure agility. Where some organizations build their modular environment for the purposes of capacity planning, other organizations leverage modular data centers for highly effective disaster recovery operations.

    • Mobility and Placement: A modular data center can be delivered wherever it is desired by the end user. A container can claim ultimate mobility, as an ISO-approved method for international transportation. A modular solution is mobile in the sense that it can be transported in pieces and re-assembled quickly on-site. Mobility is an attractive feature for those looking at modular for disaster recovery, as it can be deployed to the recovery site and be up and running quickly. As data center providers look to take on new offerings, they will be tasked with staying as agile as possible. This may very well mean adding additional modular data centers to help support growing capacity needs.

    • Density and PUE: Density in a traditional data center is typically 100 watts per square foot. In a modular solution the space is used very efficiently, with densities as high as 20 kilowatts per cabinet. The PUE can be determined at commissioning, and because the module is pre-engineered and standardized, PUEs can be as low as 1.1–1.4 (see the short PUE sketch after this list). The PUE metric has also become a good gauge of data center energy efficiency. Look for a provider that strives to break the 1.25–1.3 barrier, or at least one that’s in the +/- 1.2 range.

    • Efficiency: The fact that modules are engineered products means that internal subsystems are tightly integrated, which results in efficiency gains in power and cooling in the module. First generation and pure IT modules will most likely not have efficiency gains other than those enjoyed from a similar containment solution inside of a traditional data center. Having a modular power plant in close proximity to the IT servers saves money on costly distribution gear and reduces power losses. There are opportunities to use energy management platforms within modules as well, with all subsystems being engineered as a whole.

    • Disaster Recovery: Part of the reason to design a modular data center is for resiliency. A recent Market Insights Report conducted by Data Center Knowledge points to the fact that almost 50% of the surveyed organizations are looking at disaster recovery solutions as part of their purchasing plans over the next 12 months. This means creating a modular design makes sense. Quickly built and deployed, the modular data center can be built as a means for direct disaster recovery. For those organizations that have to maintain maximum uptime, a modular architecture may be the right solution.

    • Commissioning: As an engineered, standardized solution, the data center module can be commissioned where it is built, requiring fewer steps to be performed once placed at the data center site.

    • Real Estate: Modules allow operators to build out in increments of power instead of space. Many second generation modular products feature evaporative cooling, taking advantage of outside air. A radical shift in data center design takes away the true brick and mortar of a data center, placing modules in an outdoor park, connected by supporting infrastructure and protected only by a perimeter fence. Some modular solutions offer stacking also — putting twice the capacity in the same footprint.
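
    As referenced in the Density and PUE item above, PUE is simply total facility power divided by IT power. A minimal sketch with illustrative numbers (assumed loads, not measurements from any particular module):

    ```python
    # PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
    # The load figures below are illustrative assumptions.
    def pue(total_facility_kw, it_load_kw):
        """A PUE of 1.0 would mean every watt reaches the IT equipment."""
        return total_facility_kw / it_load_kw

    it_load = 1000.0  # kW drawn by IT equipment
    overhead = 200.0  # kW for cooling, power distribution and lighting
    print(f"PUE = {pue(it_load + overhead, it_load):.2f}")  # 1.20
    ```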

    Operations

    • Standardization: Seen as part of the industrialization of data centers, the modular solution is a standardized approach to building a data center, much as Henry Ford standardized building cars. Manufactured data center modules are constructed against a set model of components at a location other than the data center site. Standardized infrastructure within the modules enables standard operating procedures to be used universally. Since the module is prefabricated, the operational procedures are identical and can be packaged together with the modular solution to provide standardized documentation for subsystems within the module.

    • DCIM (Data Center Infrastructure Management): Management of the module and the components within is where a modular approach can take advantage of the engineering and integration that was built into the product. Many, if not all, of the modular products on the market will have DCIM or management software included that gives the operator visibility into every aspect of the IT equipment, infrastructure, environmental conditions and security of the module. The other important aspect is that distributed modular data centers will now also be easier to manage. With DCIM solutions now capable of spanning the cloud, data center administrators can have direct visibility into multiple modular data center environments (a toy monitoring sketch appears after the DCOS list below). This also brings up the question of what’s next in data center management.

    • Beyond DCIM – The Data Center Operating System (DCOS): As the modular data center market matures and new technologies are introduced, data center administrators will need a new way to truly manage their infrastructure. There will be a direct need to transform complex data center operations into simplified plug & play delivery models. This means lights-out automation, rapid infrastructure assembly, and even further simplified management. DCOS looks to remove the many challenges which face administrators when it comes to creating a road map and building around efficiencies. In working with a data center operating system, expect the following:
    – An integrated end-to-end automated solution to help control a distributed modular data center design.
    – Granular centralized management of a localized or distributed data center infrastructure.
    – Real-time – proactive – environment monitoring, analysis and data center optimization.
    – DCOS can be delivered as a self-service automation solution or provided as a managed service.
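
    As a toy illustration of the DCIM idea described above, the sketch below reports on a handful of hypothetical module readings and flags out-of-range inlet temperatures. Module names, thresholds and readings are invented; real DCIM products gather this data from SNMP, Modbus or BACnet endpoints rather than hard-coded values.

    ```python
    # Toy sketch of DCIM-style visibility across distributed modules.
    # All names, thresholds and readings here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        module: str
        inlet_temp_c: float  # cold-aisle inlet temperature
        it_load_kw: float    # power drawn by IT equipment
        total_kw: float      # total power drawn by the module

    def report(readings, max_inlet_c=27.0):
        """Print per-module PUE and flag inlet temperatures over threshold."""
        for r in readings:
            pue = r.total_kw / r.it_load_kw
            status = "ALERT" if r.inlet_temp_c > max_inlet_c else "ok"
            print(f"{r.module}: inlet {r.inlet_temp_c:.1f} C, PUE {pue:.2f} [{status}]")

    report([
        Reading("pod-east-1", 24.5, 950.0, 1140.0),
        Reading("pod-west-1", 28.1, 400.0, 520.0),  # over the 27 C threshold
    ])
    ```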

    Enterprise Alignment

    • Rightsizing: Modular design ultimately enables an optimized delivery approach for matching IT needs. This ability to right-size infrastructure as IT needs grow enables enterprise alignment with IT and data center strategies. The module or container can also quickly provide capacity when needed for projects or temporary capacity adjustments. Why is this important? Resources are expensive. Modular data centers can help right-size solutions so that resources are optimally utilized. Over- or under-provisioning of data center resources can be extremely pricey — and difficult to correct.

    • Supply Chain: Many of the attributes of a modular approach speak to the implementation of a supply chain process at the data center level. As a means of optimizing deployment, the IT manager directs vendors and controls costs throughout the supply chain.

    • Total Cost of Ownership:
    – Acquisition: Underutilized infrastructure due to over-building a data center facility is eliminated by efficient use of modules, deployed as needed.
    – Installation: Weeks or months instead of more than 12 months.
    – Operations: Standardized components to support, and modules engineered for extreme efficiency.
    – Maintenance: Standardized components enable universal maintenance programs.
    Information technology complies with various internal and external standards. Why should the data center be any different? Modular data center deployment makes it possible to quickly deploy standardized modules that allow IT and facilities to finally be on the same page.

    The complete Data Center Knowledge Guide to Modular Data Centers is available for download in a PDF format and brought to you by IO. Click here to download the DCK Guide to Modular Data Centers.


    5:30p
    Intel Selects LSI Server-side PCIe Flash

    LSI announced that it has entered into an expanded original equipment manufacturer (OEM) relationship with Intel (INTC), whereby the LSI Nytro MegaRAID technology will be available as part of the Intel RAID product family. The LSI Nytro MegaRAID takes server-side PCIe flash and dual-core RAID-on-Chip (ROC) technology and integrates intelligent caching software to enable transparent application acceleration and RAID data protection for direct attached storage (DAS) environments.

    “Customers in virtually every industry are facing competitive pressures to increase data center efficiency and lower IT costs,” said Noury Al-Khaledy, General Manager Intel Enterprise Platforms and Services Division. “Through our expanded relationship with LSI, we’re able to offer customers a single, integrated solution that enables exciting levels of application performance, data protection and a low TCO.”

    The Nytro MegaRAID technology will help to provide Intel server board and systems customers with high levels of random IOPS performance for data-intensive and latency-sensitive workloads such as databases and big data applications, Hadoop implementations and virtual desktop infrastructure (VDI). It integrates LSI SandForce flash storage processors to deliver performance, reliability and energy efficiency. Benchmark testing using Nytro MegaRAID cards has achieved up to a 33 percent improvement in the time it takes to complete Hadoop jobs and delivered support for up to twice as many VDI sessions compared to a non-caching storage implementation.
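
    For context, a 33 percent cut in job completion time corresponds to roughly a 1.5x throughput speedup. A one-line sanity check (the 33 percent figure is LSI’s benchmark claim; the conversion is an illustration):

    ```python
    # Convert "33 percent less job time" into an equivalent speedup factor.
    time_reduction = 0.33
    speedup = 1 / (1 - time_reduction)
    print(f"Speedup: {speedup:.2f}x")  # ~1.49x
    ```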

    “Intel’s selection of LSI Nytro MegaRAID technology is another significant validation of our strategic focus and investments in flash-based server acceleration technology,” said Gary Smerdon, senior vice president and general manager, Accelerated Solutions Division, LSI. “We’re excited to be working closely with Intel to bring the powerful performance, data protection and TCO benefits of Nytro MegaRAID technology to Intel customers.”

    Intel will offer LSI Nytro MegaRAID technology within its Intel RAID SSD Cache Controllers RCS25ZB040 and RCS25ZB040LX, which include embedded flash of 256 GB and 1 TB, respectively.

    7:44p
    DCK Webinar: Saving Up to 30% on Ops Costs

    Moderated by Data Center Knowledge Editor-in-Chief Rich Miller, the next Data Center Knowledge webinar, entitled “Is It Worth 60 Minutes to Save Up to 30% in Data Center Operations Costs?”, will feature an in-depth conversation with Intel’s Jeff Klaus, Director of Data Center Manager (DCM) Solutions.

    The webinar will revolve around specific use cases as they apply to real data center situations. For example, in a jointly tested proof of concept (POC) conducted over a three-month period in late 2011 at Korea Telecom’s existing Mok-dong Data Centre in Seoul, South Korea, results showed that a Power Usage Effectiveness (PUE) of 1.39 would result in approximately 27 percent energy savings. This could be achieved by using a 22°C chilled water loop.
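
    The announcement pairs a PUE of 1.39 with roughly 27 percent energy savings but does not state the baseline. If the savings are measured against a baseline PUE at constant IT load, the implied baseline works out to about 1.9; a short sketch of that inference (our assumption, not a figure from the webinar):

    ```python
    # Infer the baseline PUE implied by "PUE 1.39 => ~27% energy savings",
    # assuming constant IT load so total energy scales with PUE.
    improved_pue = 1.39
    savings = 0.27
    baseline_pue = improved_pue / (1 - savings)
    print(f"Implied baseline PUE: {baseline_pue:.2f}")              # ~1.90
    print(f"Savings check: {1 - improved_pue / baseline_pue:.0%}")  # 27%
    ```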

    Register today to join Rich Miller of Data Center Knowledge and Intel’s Jeff Klaus on April 25, 2013 (2:00pm-3:00pm EDT) to learn how these types of assessments can represent a significant savings in data center environment management.

    8:00p
    Calxeda: We’re Still in the Moonshot ARMs Race

    An HP Redstone Development Platform server, using ARM chip technology from Calxeda.

    When HP first announced its Project Moonshot server initiative in November 2011, the announcement centered on the potential for low-power ARM chips from Calxeda to be the agent of transformation. Yesterday HP began selling its first production servers … based on Intel Atom chips.

    As HP touted its new servers as game changers, Calxeda is again talking about how its ARM technology is still … well, the future. The company today reaffirmed its “ongoing commitment” to Project Moonshot, saying HP servers based on Calxeda technology will be in production “later this year.”

    “We were honored to be selected by HP to be among the very first Project Moonshot partners at the November 1, 2011 launch and to have our processors incorporated into HP’s first generation of extreme low energy servers,” said Calxeda CEO, Barry Evans. “Since that time, the two companies have worked closely to advance extreme low-energy processing technologies, which have received positive industry response, and outstanding early customer implementations.”

    HP says Moonshot consists of a “comprehensive roadmap” of workload-optimized ProLiant servers that will incorporate processors from partners including AMD, AppliedMicro, Cavium and Texas Instruments, as well as Calxeda and Intel. This approach is part of HP’s focus on flexibility, offering customers more options for processors and faster iteration of server designs.

    With Moonshot, HP is providing an architecture that adapts some of the concepts of blade chassis, centralizing components like power supplies and fans at the chassis level, with hot-swappable server cards that can be used to customize compute processing for particular workloads. This ability to match different processors to specific workloads is one of the most interesting features. And that’s where Calxeda and the other chip vendors come in.

    Calxeda says that once its Moonshot server arrives, it will offer better TCO than the Intel Atom Centerton-based products unveiled today. The server will feature four ECX-1000 servers, running at 1.4 GHz, each with 4 GB of DRAM.

    But by the time Calxeda’s Moonshot server is ready for production, Intel may be moving the goalposts again. In the second half of 2013, Moonshot servers will begin using the next generation of Intel’s Atom chips, known as Avoton. Intel says Avoton will quadruple the density of the current product, with 4 Avoton SoCs per server. Avoton is built on Intel’s 3D tri-gate 22-nanometer process technology and is based on a new microarchitecture codenamed “Silvermont.”

    “We have not only enabled the first Moonshot system to lift-off, but with Avoton we will also bring HP Moonshot’s customers a revolution in energy efficiency and performance per watt to drive major TCO improvements when processing lightweight web scale workloads,” writes Intel’s Raejeanne Skillern in a blog post.

    Will Moonshot reverse a trend in which the largest server purchasers have been working with original design manufacturers (ODMs) or next-generation designs from the Open Compute Project? It remains to be seen, but the progress of Moonshot will give data center operators initial options to manage the energy being used in their facilities. A key question is whether HP’s focus on converged infrastructure – standardized around HP hardware, software and services – will gain traction in the fast-moving landscape for web-scale servers.

