Data Center Knowledge | News and analysis for the data center industry

Thursday, July 9th, 2015

    6:05a
    IBM Takes Wraps Off 7nm Processor Tech

    In collaboration with GlobalFoundries, Samsung, and SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering, IBM Research revealed it has developed the industry’s first 7-nanometer node test chips with functioning transistors.

    While semiconductors based on 7nm technology are still a long time off from being manufactured, the breakthrough shows that many technical advances can still be wrung out of existing semiconductor technology.

    While the most advanced servers available today are based on 14nm and 22nm processors, combining new materials with traditional silicon will make it possible to extend existing application architecture for decades to come, Mukesh Khare, vice president of semiconductor technology for IBM Research, said.

    The fundamental challenge researchers face is that conventional approaches to shrinking chips have degraded performance and negated the expected benefits, such as lower cost and lower power requirements. Khare said that by employing, for example, Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, researchers have been able to create a 7nm processor that moves beyond what had been perceived to be a 10nm barrier.

    In moving to 7nm, IBM claims to have achieved close to a 50-percent improvement in area scaling over 10nm technology, while still delivering at least a 50-percent power-performance improvement.
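
    As a rough back-of-the-envelope illustration (treating node names as linear feature sizes, which is an approximation, not an IBM figure): chip area scales roughly with the square of the linear dimension, so (7 / 10)² ≈ 0.49, or about half the area per device at 7nm versus 10nm, which is consistent with a close-to-50-percent area-scaling improvement.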

    “We’re making use of new innovative materials to truly scale,” said Khare. “We’re also employing new lithography techniques.”

    Specific IBM contributions to the project include the invention or first implementation of single-cell DRAM, the Dennard scaling laws, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strain engineering, multicore microprocessors, immersion lithography, high-speed SiGe, high-k gate dielectrics, embedded DRAM, 3D chip stacking, and air gap insulators.

    GlobalFoundries took over IBM’s semiconductor business last year.

    If and when IBM brings 7nm semiconductors to market, Khare said, they will most likely manifest themselves in mainframes before being employed in other IT infrastructure platforms.

    The research itself is being conducted at SUNY Poly’s $500 million NanoTech Complex in Albany, New York, as part of a previously announced $3 billion, five-year investment in semiconductor research and development.

    In the meantime, there are a lot of other processor technologies under development that promise to usurp traditional approaches altogether. But in the case of IBM at least, most of the current research and development effort appears to be focused on extending existing architectures.

    1:00p
    Uptime Institute Kills Tier Certification for Commercial Data Center Designs

    Yesterday’s massive United Airlines and New York Stock Exchange outages served as a reminder of just how much the world today depends on the reliability of IT infrastructure.

    NYSE traced its 3.5-hour outage to a “configuration issue,” and United said it had to ground all of its flights around the world for hours because of a malfunctioning network router. Both were IT problems, but a lot of thought and money also goes into designing resilient underlying data center infrastructure – the power and cooling systems – to make sure IT systems stay online.

    The way the most important rating system for data center reliability works has recently changed. Starting at the beginning of this month, Uptime Institute, a division of the 451 Group, no longer issues Tier certification for design documents to North American companies that provide commercial data center services, from colocation to cloud.

    The Uptime Tier system, which rates data center reliability on a scale of I (least reliable) to IV (most reliable), has long been criticized for numerous reasons, but the biggest complaint has been widespread misuse of its terminology.

    Many data center providers have claimed that their data center design is Tier III or Tier IV without actually going through the expensive process of having Uptime certify their facility. Market benefits of such claims for service providers are clear: higher reliability means more sales.

    Design Certification Opened Doors to Abuse

    Complicating things further has been Uptime’s practice of issuing separate certifications for design documents and for constructed facilities. A company could get design documents certified without actually building the facility to those designs. And some did, proceeding to market their facilities as something they weren’t.

    To combat the abuse, Uptime put a two-year expiration on design certifications in 2014. Essentially, if you received a design certification, you had two years to build the data center and get it certified as a constructed facility or risk losing the design certification.

    All design and constructed-facility certifications are listed on the organization’s website, so it’s very easy to verify data center providers’ Tier claims.

    Uptime spokesman Matt Stansberry said the two-year limit was the first step in stopping abuse; this month’s changes were the second.

    “The main reason is to prevent folks from using a certification of their design (as) a marketing tool for their facility, when they haven’t actually certified the facility,” he said. “There’s a lot of things that can change between that plan and the final facility.”

    Certified Providers Put Pressure on Uptime

    Uptime has received lots of complaints from data center providers that had gone through the expensive facility certification process and were competing for the same business with providers that had not certified their facilities but claimed that they had high tier ratings, Stansberry said.

    Colorado-based ViaWest, a data center provider owned by Canadian telco Shaw Communications, was the subject of a complaint filed with the Nevada Attorney General that accused it of misrepresenting the Uptime Tier rating of its Lone Mountain data center in Las Vegas.

    Chris Crosby, CEO of Compass Datacenters, which has received Tier III constructed-facility certification for six data centers, said in a blog post that the practice of claiming Tier III or IV certification based on design documents alone was “patently deceptive, since customers believe the facility has been constructed to meet certification criteria, when in fact it hasn’t.”

    Another data center provider that has been complaining actively is Switch, a Las Vegas-based company with a massive campus there called SuperNAP. Switch is currently building another huge SuperNAP in Reno, Nevada, where eBay will be the anchor tenant.

    One of Switch’s data centers in Vegas has Tier IV certification for constructed facility. Two more have Tier IV design-doc certifications.

    The company has been complaining about abuse of the Uptime Tier system and even indicated to the organization that it would not pursue facility certification in the future if the rules did not change, Rob Roy, Switch founder and CEO, said in an interview.

    “It’s really sad that our industry has devolved to the point where (many) data centers misrepresent something about their sites,” Roy said.

    The recent changes by Uptime were a welcome development for Switch. “We’re 100 percent behind this,” Roy said. “Really happy that Uptime’s doing stuff.”

    Changes May Expand Internationally

    Changes in the Uptime Tier certification policy apply only to companies that make money by providing services out of their data centers and only to companies in North America. They apply to “folks who basically sell computing capacity in some form,” Stansberry said.

    Enterprises that operate their own data centers to support internal IT needs have little incentive to misrepresent their facilities’ Tier ratings, because they don’t compete for customers. A design certification is a useful benchmark for enterprise data center operators and can be an “important milestone,” Stansberry said.

    The reason Uptime limited the restriction to North America was that most of the pressure came from North American companies, he explained. While no decision has been made to expand the restriction globally, Stansberry said he would not rule that possibility out.

    Uptime will not retroactively annul design-doc certifications it issued before the changes went into effect on July 1, Stansberry said.

    Similarly, design certifications that were issued before the two-year expiration period was put in place in 2014 will not expire.

    “That doesn’t mean we will not find a way to identify all ‘stranded’ design certifications in the future,” he said. “It just means that they won’t expire.”

    3:00p
    Mirantis, Dell, Juniper Cook Up OpenStack Appliances

    As part of an effort to make it simpler to deploy the OpenStack cloud framework inside the enterprise, Mirantis unveiled an appliance that comes pre-loaded with its distribution of the open source cloud infrastructure system.

    The Mirantis Unlocked Appliances are built using Dell PowerEdge R730xd servers with dual Intel Xeon E5-2600 processors and Juniper QFX5100 and EX3300 top-of-rack switches, Jim Sangster, senior director of solutions marketing for Mirantis, said.

    The OpenStack appliances can be configured with anywhere from six compute nodes and 12 TB of usable storage to a full rack comprising 24 compute nodes and 24 TB of usable storage. The maximum configuration is two racks, sustaining over 1,500 virtual machines and 48 TB of usable storage.
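
    For a rough sense of the implied density (a back-of-the-envelope calculation from the figures above, not a Mirantis specification): two full racks of 24 compute nodes each is 48 nodes, so 1,500 virtual machines works out to roughly 31 VMs per compute node.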

    “The appliances fit in 1u to 2u racks,” said Sangster. “In all, there are six different sizes.”

    In the future, Mirantis plans to work with other IT infrastructure vendors to create versions of its OpenStack appliances that would appeal to IT organizations that have standardized on equipment other than what Dell and Juniper provide, he said. The goal is to create a family of turnkey OpenStack appliances that can be dropped into almost any data center environment.

    While enterprise IT organizations are not replacing investments in VMware and Microsoft management frameworks wholesale just yet, interest in building private clouds based on OpenStack is running high. The challenge that most organizations face is that the initial learning curve associated with deploying OpenStack can be quite high.

    Distributions of OpenStack that are managed by vendors such as Mirantis reduce that complexity. By bundling that distribution with hardware, Sangster said, Mirantis is now making it possible for the vast majority of enterprise IT organizations to easily get started with OpenStack.
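
    To give a sense of what getting started looks like once OpenStack is running, here is a minimal sketch that boots a virtual machine with the openstacksdk Python client. The cloud, image, flavor, and network names are hypothetical placeholders, and the snippet is not tied to any specific Mirantis tooling.

        import openstack

        # Connect using a named cloud defined in clouds.yaml (name is a placeholder)
        conn = openstack.connect(cloud='appliance-cloud')

        # Look up an image, flavor, and network by name (all placeholders)
        image = conn.compute.find_image('ubuntu-14.04')
        flavor = conn.compute.find_flavor('m1.small')
        network = conn.network.find_network('private')

        # Boot a small VM and wait until it is active
        server = conn.compute.create_server(
            name='demo-vm',
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{'uuid': network.id}],
        )
        server = conn.compute.wait_for_server(server)
        print(server.name, server.status)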

    As part of that effort, Mirantis is also certifying Mirantis Unlocked Appliance partners. The first of those partners is Redapt, a systems integrator based in Redmond, Washington.

    As OpenStack continues to mature, the biggest issue that IT organizations may have to contend with next is actually finding IT professionals with enough expertise to run it. While the potential savings on commercial IT management software can be substantial, it may take a while before enterprise IT organizations have the internal expertise needed to deploy OpenStack in production. As a result, most OpenStack adoption to date has been led by cloud service providers and enterprise IT organizations with ready access to internal engineering talent.

    But as deploying OpenStack becomes a more turnkey experience, the number of IT administrators with sought-after, hands-on OpenStack expertise should increase accordingly.

    3:30p
    Best Practices for Designing Your Physical Security Infrastructure System

    Scott Walters is Director of Security for INetU.

    One of the most critical aspects of designing a data center is the physical security infrastructure system. Here are five best practices for ensuring that it is effective and compliant:

    View Physical Security in Layers

    Physical security is much like information security in that it should be viewed in layers. For example, access control systems act as the primary keys to the castle and should use methods that cannot be shared, such as biometric access. Coupling a key card with biometrics requires the user to match the access card and the biometric such as fingerprint or retinal recognition.

    I’ve been to data center facilities where an employee who had forgotten his access card borrowed one from another employee in order to enter the data center. Sharing access is a strict no-no. Adding a third factor on top of this, such as a PIN code, is a best practice, as is installing video surveillance that covers all access points, both for real-time monitoring and for diagnosing past events.

    Keeping access lists up to date on a real-time basis is also critical. Only those with a true business need should be able to access the data center or secure area. For example, if job roles change and access is not truly needed in the new role, access should be revoked. While revoking access due to a job role change may not be easily received by the employee, it is in the best interest of the business.
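
    As a purely illustrative sketch of the layered checks described above (hypothetical names, not any vendor’s API), the logic amounts to requiring every factor to match and the employee’s role to still need access:

        from dataclasses import dataclass

        @dataclass
        class Employee:
            badge_id: str
            fingerprint_hash: str
            pin_hash: str
            role: str

        # Roles currently authorized for the secure area; kept current as roles change
        AUTHORIZED_ROLES = {"dc-ops", "security"}

        def may_enter(emp: Employee, badge_id: str, fingerprint_hash: str, pin_hash: str) -> bool:
            """Grant access only if all three factors match and the role still needs access."""
            factors_ok = (
                badge_id == emp.badge_id
                and fingerprint_hash == emp.fingerprint_hash
                and pin_hash == emp.pin_hash
            )
            return factors_ok and emp.role in AUTHORIZED_ROLES

        def on_role_change(emp: Employee, new_role: str) -> None:
            """Revoke secure-area access automatically when the new role does not require it."""
            emp.role = new_role  # if new_role is not in AUTHORIZED_ROLES, may_enter() now denies entry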

    Be Aware of Surroundings

    It is good practice to avoid building data centers against outside walls whenever possible. This provides an additional layer of protection against a variety of threats. Additionally, it is important to be mindful of what is above and/or below the data center. This is most commonly a threat in multi-floor facilities.

    Each environment needs to be evaluated separately for its unique risks. For example, a data center in a multi-floor building in Manhattan will have far different risks than a data center in Quincy, Washington. Physical barriers need to be evaluated room to room. Physical security is broken into two pieces: the physical elements such as cameras, access control systems and locks; and the operational processes such as visitor and contractor policies and general awareness training. If both elements are not addressed, neither will be 100 percent effective.

    Be Diligent Against the Biggest Threat: People

    Whether it is intentional sabotage, social engineering, carelessness or lack of following a defined policy, people working in the facility can be the biggest risk. For example, social engineering is a common threat because most people by nature want to be helpful. It’s important to train people to stick to the security policy and require them to be 100 percent accountable for their access.

    Provide Proper Training

    Creating a sound physical security policy can be relatively straightforward for an experienced operations professional, but properly training all of the people who determine the success or failure of that policy is often more challenging.

    Perform Regular Internal Audits

    Many data centers have some level of compliance requirements and, therefore, are audited on a regular basis. Even if external audits are performed, they do not replace the need for regular internal audits and checks. Internal assessments can also include bringing in an outside firm to evaluate the facility with a fresh set of eyes.

    By following these best practices when designing a data center, managers can reduce many of the common design pitfalls and avoid future physical security infrastructure system headaches.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:03p
    Dell and SGI Partner on SAP Hana Appliances

    In an alliance that could have broad ramifications, SGI announced that Dell will resell its high-end servers to customers adopting SAP Hana in-memory computing applications. What Dell will sell are essentially Hana appliances, aimed at making deployments of the German enterprise-software giant’s in-memory database system quicker and easier.

    While Dell has two- and four-socket servers capable of running Hana, it has not invested in eight-socket and larger systems at the higher end of the market. SGI CEO Jorge Titinger said the alliance will enable Dell to counter rivals with servers that range from two to eight sockets today and, eventually, as many as 32 sockets down the road.

    This is the second appliance-oriented partnership Dell announced this week. In a separate alliance, it partnered with Mirantis and Juniper to sell pre-integrated hardware-software systems that come with the Mirantis distribution of OpenStack.

    The shift to in-memory computing appears to be coming in multiple phases. The first wave involved analytics applications that were primarily used to bolster data warehousing environments. Now that SAP is delivering its S/4Hana transaction processing applications based on its in-memory database tech, customers at the high end of the server market are beginning to make the shift to the core Hana platform, Titinger said.

    SGI is best known for its high-performance computing systems, but in 2009 it merged with Rackable Systems, which developed a family of servers aimed at traditional enterprise IT applications.

    “The systems that Dell is reselling are based on our scale-up architecture,” said Titinger. “Our other two server families are built around a scale-out architecture.”

    As enterprise IT continues to evolve, Titinger said, many of the concepts initially created for HPC environments can now be applied to in-memory computing running in a private cloud.

    For its part, SAP has made it clear that it prefers that customers run S/4 applications on a cloud managed by SAP. But because of a range of compliance and security issues, many IT organizations will prefer to run S/4 on-premise or in a hosted environment.

    Given the cost of the IT infrastructure required to deploy those applications, competition for that business among server vendors is fierce. Titinger says the alliance with Dell gives SGI a way to cost-effectively expand the number of sales people selling its offerings.

    In terms of competing with rivals, Titinger said that the SGI systems are based on an appliance architecture that over time makes it simpler to scale in smaller four-socket increments.

    For example, SAP has certified SGI’s four- and eight-socket servers, as well as forthcoming 12- and 16-socket single-node systems configured with up to 12TB of memory. Eventually, Titinger said, SGI plans to extend that architecture out to 32 nodes.

    Because those systems are essentially appliances, Titinger added, it only takes a few hours to get them running, compared to traditional four-socket-and-above servers that can take days or weeks to configure.

    The degree to which Dell will be able to leverage SGI to usurp HP, IBM, and Lenovo remains to be seen. But one thing is certain: a lot more IT organizations are about to be, at the very least, exposed to SGI servers.

    9:04p
    Expert Reveals the Latest Variable Capacity Technology

    Cooling data centers and other mission-critical environments is an ever-changing challenge. Doing so in an energy-efficient manner is an even bigger challenge. Today the CRAC (Computer Room Air Conditioning) units in use at most facilities rely on mechanically modulating fixed-speed or fixed-capacity components. Although these units provide adequate climate control, they do so at a cost. Because the compressor, fans, and other vital parts are either fully on or fully off, air cooling units based on this old technology must constantly cycle on and off in order to reach the desired end result. This consumes a lot of energy and creates a great deal of wear and tear on the parts themselves. Learn about a better solution in this white paper.

    To understand the inherent problem with mechanically modulating compressors and other components, imagine if vehicles operated this way. What if in order to maintain speed you had to keep your foot on the gas, run your car engine flat out, but keep switching gears between neutral and drive? Not only would this waste a lot of fuel, it would also be extremely hard on the car.
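
    To illustrate why modulating capacity beats on/off cycling (a toy model, not a figure from the white paper): fan power scales roughly with the cube of fan speed, so a variable-speed fan serving a partial load continuously draws far less power than a fixed-speed fan that cycles on and off to average out to the same airflow.

        # Toy comparison based on the fan affinity laws: power ~ speed cubed.
        # Numbers are hypothetical and ignore compressors, wear, and control overhead.

        def fixed_speed_power_kw(load_fraction: float, full_power_kw: float) -> float:
            """On/off cycling: duty cycle equals the load, so average power is proportional to load."""
            return load_fraction * full_power_kw

        def variable_speed_power_kw(load_fraction: float, full_power_kw: float) -> float:
            """Variable capacity: run continuously at reduced speed; power scales with speed cubed."""
            return (load_fraction ** 3) * full_power_kw

        if __name__ == "__main__":
            full_power_kw = 10.0   # hypothetical fan motor rating
            load = 0.6             # facility needs 60 percent of full airflow
            print("fixed-speed, cycling:", fixed_speed_power_kw(load, full_power_kw), "kW")               # 6.0 kW
            print("variable-speed:      ", round(variable_speed_power_kw(load, full_power_kw), 2), "kW")  # 2.16 kW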

    Download this white paper today to learn about gForce Ultra CRAC equipment, the latest variable capacity technology to lower energy usage and increase reliability.

