Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, June 19th, 2013

    12:30p
    New Service Organization Control Standards Turn Two

    Hassan Sultan is a partner at Reckenen, which provides SOC audits to data centers and other service organizations.

    HASSAN SULTAN
    Reckenen

    Service Organization Control (SOC) examinations, which are performed by independent auditors to report on the internal controls of an organization, are part of a relatively new reporting framework issued by the American Institute of Certified Public Accountants (AICPA). As we celebrate the second birthday of the new SOC reporting framework (the effective date was June 15, 2011), I would like to look back at the past two years and point out some trends in the application of SOC to the data center industry.

    SOC Certification Represents a Competitive Advantage

    We surveyed 91 data centers about why they pursued SOC certification. They cited the following reasons:

    1. Their customers are asking for SOC certification.

    2. Their competitors have SOC certification, so they want one as well.

    3. Data centers without SOC certification are not being invited to bid on significant contracts, and they believe that certification will allow them to compete for that business.

    When we put these reasons together, a consistent message emerges: SOC certification represents a competitive advantage.

    Smaller Data Centers Can Benefit, Too

    Increasingly, smaller data centers are looking into SOC certification. In the past, it was mostly larger data centers, with staffs of more than 50 people, that pursued SOC certification. Recently, much smaller data centers (with fewer than 20 people on staff) have entered the market for SOC certification. These data centers understand that a SOC report allows them to compete more effectively and target more significant customers.

    Ninety Percent of Data Centers are Either SOC 1 or SOC 2 Compliant

    In our data center survey, we found that 90 percent of the colocation facilities are SOC compliant. The research also showed:

    1. 82 percent of the surveyed data centers chose to obtain SOC 1 (SSAE 16) certification only.

    2. 6 percent of the surveyed data centers chose to obtain both SOC 1 (SSAE 16) and SOC 2 certification.

    3. 3 percent of the surveyed data centers obtained SOC 3 certification.

    SOC 2 is Becoming Increasingly Popular

    Based on our survey, we are seeing a shift in the data center industry from SOC 1 certification alone toward combined SOC 1 and SOC 2 certification. In 2013, the number of data centers that obtained both SOC 1 and SOC 2 certification increased by 100 percent year-over-year.

    After the SOC reporting framework was issued in 2011, most data centers that had held SAS 70 certification initially obtained SOC 1 (SSAE 16) certification. SSAE 16, however, is oriented toward controls relevant to financial reporting and isn’t very specific to data center operations. The AICPA introduced SOC 2 for service organizations such as colocation providers and web hosts to provide a standard benchmark by which two data centers can be compared.

    Recently, an increasing number of data centers have been pursuing SOC 2 certification in addition to SOC 1, because it covers controls over security, availability, processing integrity, confidentiality and privacy, which are more relevant to the data center industry. SOC 3 is an overview of the SOC 2 report for selected controls; it comes with a public seal and can be shared with potential clients. Going forward, SOC 1 coupled with SOC 2 audits will become the standard for data centers and cloud-based service providers.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:30p
    Efficiency and Density-Boosting Upgrades for the Modern Data Center

    The modern data center is being asked to do a lot more for the ever-evolving business organization. New technologies around cloud computing, big data and IT consumerization have placed the data center environment in the spotlight. With these growing trends and more demands from companies trying to move to the cloud, data centers are looking for ways to optimize efficiency.

    Already, virtualization within the data center has reached all-time highs, and high-density computing is making many of these multi-tenancy technologies possible. So, in a world that demands ever-greater efficiency, how can the modern data center keep up?

    According to a new white paper from Eaton, cloud computing, virtualization and converged infrastructure solutions decrease IT overhead and increase business agility, while big data systems extract revenue-producing insights from masses of structured and unstructured information. Not surprisingly, businesses are adopting all four of those technologies quickly. Consider, for example, these statistics from analyst firm Gartner Inc.:

    • 82.4 percent of total operating system deployments will be virtualized by 2016
    • The global public cloud services market will grow a projected 18.5 percent in 2013 to $131 billion. Furthermore, over 75 percent of enterprises worldwide plan to pursue a private cloud strategy by 2014.
    • 42 percent of IT leaders globally have either invested in big data or plan to do so within a year


    [Image source: Eaton]

    Paired with intelligent Eaton technologies, power and cooling systems can help maximize capacity and minimize waste. Furthermore, these systems work without locking companies into a limited set of deployment options and vendors. It comes down to efficiency and flexibility within the data center. Download this white paper to learn why many data centers are ill-equipped to support today’s most important new technologies, why packaged power and cooling solutions can be a flawed way to upgrade existing facilities, and what the core components are of a data center upgrade strategy capable of enhancing efficiency and power density more completely and cost-effectively.

    3:00p
    Seven Tips to Help Keep IT Competitive

    As cloud computing has grown and matured, IT departments often find themselves squaring off against the ease and simplicity large public cloud providers offer.

    Dick Benton, principal consultant with Glasshouse Technologies, wrote a series of columns for Data Center Knowledge outlining how IT managers and staff can position themselves strategically within the organization by offering services that meet the needs of users and, ultimately, the business.

    His first column, Seven Tips to Help Keep IT Competitive, outlined ways to battle back against users buying and deploying servers with a few clicks and a credit card. Because users like the efficiency, elasticity and customizability they get from public cloud offerings, IT must deliver those same benefits, and in a way that not only improves the quality of services rendered but also delivers higher productivity and, ultimately, more revenue.

    The other columns dove into the details of each tip.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:30p
    AMD Reveals Plans for Low-Power ARM Solution in 2014

    Chipmaker AMD on Tuesday announced a strategy and roadmap aimed at enterprise and data center servers. The company unveiled products that address key technologies and meet the requirements of the fastest-growing data center and cloud computing workloads.

    64-bit ARM SoC

    Set to be released in 2014, AMD’s 64-bit ARM server CPU, code-named “Seattle,” will look to set the bar for power-efficient server compute. “Seattle” is an 8-core (and later 16-core) CPU based on the ARM Cortex-A57 core and is expected to run at 2 GHz or higher, with a significant improvement in compute-per-watt. It will deliver 128GB of DRAM support, extensive offload engines for better power efficiency and reduced CPU loading, and server-caliber encryption and compression. Most significantly, this SoC will be the first AMD processor to integrate the advanced Freedom Fabric for dense compute systems directly onto the chip.

    “Our strategy is to differentiate ourselves by using our unique IP to build server processors that are particularly well matched to a target workload and thereby drive down the total cost of owning servers,” said Andrew Feldman, general manager of the Server Business Unit, AMD. “This strategy unfolds across both the enterprise and data centers and includes leveraging our graphics processing capabilities and embracing both x86 and ARM instruction sets. AMD led the world in the transition to multicore processors and 64-bit computing, and we intend to do it again with our next-generation AMD Opteron families.”

    “Berlin” and “Warsaw”

    The x86-based processor code-named “Berlin” will be available both as a CPU and as an APU. Offering almost 8 times the gigaflops per watt of the current Opteron 6386SE processor, “Berlin” will have four next-generation “Steamroller” cores and is designed to double the performance of the recently released low-power Opteron X-Series “Kyoto” processor. It will be built on the AMD Heterogeneous System Architecture (HSA), which enables uniform memory access for the CPU and GPU and makes programming as easy as C++. It is expected to be available in the first quarter of 2014.

    “Warsaw” is an enterprise server CPU optimized to deliver unparalleled performance and total cost of ownership for two- and four-socket servers. Expected in the first quarter of 2014, it is fully socket-compatible, with identical software certifications, making it ideal for the AMD Open 3.0 Server.

    5:00p
    ISC13: NVIDIA Technology Powers Brain Simulator

    After Monday’s big announcement of Tianhe-2 becoming the world’s most powerful supercomputer, the buzz continues at the International Supercomputing Conference in Leipzig, Germany. NVIDIA uses its GPUs to build an artificial neural network, Cray launches an x86 storage solution, and Intel, Mellanox and Xyratex all tout their impact on the Top500 supercomputers.

    SGI and DoD launch supercomputer Spirit

    SGI announced that the United States Department of Defense (DoD) has deployed the SGI ICE X high performance computing (HPC) system for its supercomputer Spirit, making it the 14th fastest supercomputer in the world according to the TOP500 list. The SGI ICE X system has been deployed as part of the DoD’s High Performance Computing Modernization Program (HPCMP), which provides compute resources for the Air Force Research Laboratory (AFRL) DoD Supercomputing Research Center (DSRC). ICE X products now come with the newly released Intel Xeon Phi 5120D coprocessors.

    “Our customers are already flocking to the fastest system in the Department of Defense, finding that their applications are performing significantly better on the new system,” stated Jeff Graham, the director of the AFRL DSRC. “SGI has delivered a system that is exceeding their standard benchmark performance on DoD applications by an average of over 27%, which translates into higher productivity for scientists and engineers across the DoD.”

    NVIDIA GPUs to help build artificial neural network

    NVIDIA (NVDA) announced that it has collaborated with a research team at Stanford University to create the world’s largest artificial neural network built to model how the human brain learns. Google had previously developed a 16,000-CPU-core large-scale neural network, and NVIDIA hopes to build one 6.5 times bigger. A team led by Stanford’s Andrew Ng, director of the university’s Artificial Intelligence Lab, created a large network with only three servers, using NVIDIA GPUs to accelerate the processing of the big data generated by the network. With 16 NVIDIA GPU-accelerated servers, the team then created an 11.2 billion-parameter neural network.

    “Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modeling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at NVIDIA. “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”

    Cray launches x86 Linux Lustre storage solution

    Cray announced the launch of Cray Cluster Connect, a complete Lustre storage solution for x86 Linux clusters. A compute-agnostic storage and data management offering, Cray Cluster Connect allows customers to utilize their Linux compute environment of choice. The solution provides customers with a complete, end-to-end Lustre solution consisting of hardware, networking, software, architecture and support.

    “Cray has a long, rich history in the HPC storage space, and we have built some of the largest and fastest Lustre file systems in the world,” said Barry Bolding, Cray’s vice president of storage and data management. “With Cray Cluster Connect, we are applying our Lustre expertise and innovation, and taking all that we have learned, developed and invested in parallel storage solutions to an expanded customer base. We can now deliver end-to-end, Lustre storage solutions for customers’ existing x86 Linux environments. With the launch of Cray Cluster Connect, our storage and data management solutions are no longer limited to Cray supercomputer customers.”

    Companies showcase command over Top500 Supercomputers

    • Intel takes a look into its influence on the current state of the Top500 supercomputers in this infographic. Most notable: Intel powered 174 of the 177 newly added supercomputers on the June 2013 list.
    • Mellanox announced that the company continued its leadership as the global interconnect solution provider for the TOP500 list of supercomputers. The systems connected with Mellanox FDR InfiniBand doubled from November 2012 to June 2013.
    • Data storage technology provider Xyratex will showcase the new ways it is delivering value to HPC storage users at ISC13 this week. The company announced that it has released version 1.3 of the operating system software for its ClusterStor 6000 high-performance computing (HPC) storage solution. The enhancements in the new version will deliver added performance, capacity, security and fault tolerance to customers.

    7:30p
    The Web Has an Image Problem. How Photos Can Be Faster

    Guy Podjarny of Akamai Technologies outlined ways to optimize photos for speedy delivery in a session yesterday at Velocity 2013 in Santa Clara. (Photo: Rich Miller)

    SANTA CLARA, Calif. – The web is filled with photos. Photos of cats. Photos of kids. Pictures of what you just had for dessert. Animated GIFs of Hello Kitty riding a unicorn over a rainbow.

    Even as they touch and entertain us, these photos fill up storage arrays and the pipes that power the Internet. The explosion of mobile devices and high-resolution screens has meant more photos and bigger photos. For web performance engineers, optimizing photos to load quickly has become a major priority.

    “We have an image problem on the web today,” said Guy Podjarny, the CTO for Web Experience at Akamai Technologies. “We like images because they’re static and simple. They’re visually significant on a page. But they contend with other resources for bandwidth and CPU and connections.”

    Podjarny provided a deep dive on photo optimization Tuesday in a Velocity 2013 conference session titled “A Picture Costs A Thousand Words.”

    Images make up 61 percent of the bytes on the average web page, and 70 percent on pages delivered to mobile devices. The average image weight of a web page has grown to 881 kilobytes, Podjarny said.

    To address this problem, Podjarny outlined detailed strategies to optimize photo formats, delivery and loading, along with techniques to address the challenges of mobile devices.

    Formats

    Most of the images on the web use one of three formats: the Graphics Interchange Format (GIF), the Joint Photographic Experts Group (JPEG) and Portable Network Graphics (PNG). All three formats date to the mid-1990s or earlier. “These photo formats are ancient,” said Podjarny.

    But they’re widely supported in current web browsers, which isn’t yet true of two newer formats: the WebP format from Google and the JPEG XR format championed by Microsoft. Both formats can reduce image sizes by 25 to 33 percent compared to GIF, JPEG and PNG, but are supported by only about 25 percent of the current global browser footprint. Over time, Podjarny said, they will become more widely supported.

    In the meantime, web site operators can improve their page loading times by using progressive JPEGs rather than the standard “baseline” format. New research from Patrick Meenan and Ann Robson found that just 7 percent of JPEGs on major web sites use progressive loading, which can improve page load times by 7 percent on cable modem connections and 15 percent on DSL connections.
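
    As a concrete illustration of that switch, a baseline JPEG can be re-encoded as a progressive JPEG in a few lines. Below is a minimal sketch using the Pillow imaging library; the file names and quality setting are illustrative assumptions, not values from the session.

    ```python
    # Re-encode a baseline JPEG as a progressive JPEG with Pillow.
    # File names and quality are illustrative assumptions.
    from PIL import Image

    def to_progressive(src_path: str, dst_path: str, quality: int = 85) -> None:
        """Save a progressive-JPEG copy of src_path at dst_path."""
        with Image.open(src_path) as img:
            # progressive=True asks the encoder for progressive scans, so the
            # browser can paint a coarse preview before the download completes.
            img.convert("RGB").save(dst_path, "JPEG", quality=quality,
                                    optimize=True, progressive=True)

    if __name__ == "__main__":
        to_progressive("hero_baseline.jpg", "hero_progressive.jpg")
    ```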

    Which format is best? Podjarny outlined the current collective wisdom (a short sketch encoding these rules of thumb follows the list):

    • For tiny images (like 1×1 pixels) use GIFs
    • For most small images, use PNG
    • Where possible, use JPEG rather than PNG on larger images
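
    The rules of thumb above are simple enough to encode directly. Here is a hedged sketch using Pillow; the pixel-count thresholds are assumptions chosen for illustration, since the talk did not specify exact cutoffs.

    ```python
    # Encode the format rules of thumb above. The thresholds are assumptions.
    from PIL import Image

    def suggest_format(path: str) -> str:
        """Return a suggested output format for the image at path."""
        with Image.open(path) as img:
            pixels = img.width * img.height
        if pixels <= 4:          # tiny spacer/tracking images (e.g., 1x1)
            return "GIF"
        if pixels < 64 * 64:     # most small images: icons, buttons, badges
            return "PNG"
        return "JPEG"            # larger, photographic images
    ```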

    Delivery and Loading

    To optimize delivery, performance engineers should consider using a cookieless domain, caching content with a CDN (such as Akamai or Limelight), and leveraging browser caches to store images locally on user systems.
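
    One concrete way to leverage browser caches is to serve image responses with a long-lived Cache-Control header so repeat visitors read them from local cache. A minimal sketch using Python’s standard http.server module follows; the one-year max-age is an assumption for illustration, not a figure from the session.

    ```python
    # Serve static files, attaching a long-lived Cache-Control header to images
    # so browsers keep local copies. The one-year max-age is illustrative.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class CachingImageHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            if self.path.lower().endswith((".jpg", ".jpeg", ".png", ".gif")):
                self.send_header("Cache-Control", "public, max-age=31536000")
            super().end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), CachingImageHandler).serve_forever()
    ```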

    A particular challenge in image loading is a new design trend featuring extremely long pages laden with images. Many of these images are “below the fold” – meaning they slow down the page load time but don’t appear within the initial field of vision for readers. Podjarny presented several methods to manage this problem, including “lazy loading,” which uses a script or style sheet to load an image if it appears in the visible portion of the page, and then loads additional images as needed as the user scrolls down the page.

    Another option is LQIP – short for Low Quality Image Placeholders – in which smaller versions of images are loaded first, and then replaced by higher quality images as the page load completes.
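
    The server-side half of LQIP is generating the placeholder itself; the swap to the full-quality image is then handled by a small script, much like lazy loading. A minimal Pillow sketch follows; the placeholder width and quality are illustrative assumptions.

    ```python
    # Produce a Low Quality Image Placeholder: a small, heavily compressed copy
    # that loads quickly and is later swapped for the full-quality original.
    from PIL import Image

    def make_lqip(src_path: str, dst_path: str,
                  target_width: int = 64, quality: int = 20) -> None:
        with Image.open(src_path) as img:
            ratio = target_width / img.width
            small = img.resize((target_width, max(1, round(img.height * ratio))))
            small.convert("RGB").save(dst_path, "JPEG",
                                      quality=quality, optimize=True)

    if __name__ == "__main__":
        make_lqip("gallery_photo.jpg", "gallery_photo_lqip.jpg")  # names are examples
    ```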

    Mobile

    The growing universe of mobile devices includes screens of every shape and size. When full-size images are delivered to these smaller screens, they are displayed at smaller sizes. So why not use smaller images whenever possible? The catch is that it requires creating multiple versions of each image and sorting out which version to deliver to each device.
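
    Producing those multiple versions is easy to automate ahead of time. The sketch below renders a few widths of each source image with Pillow; the breakpoint widths and quality are assumptions for illustration, not values from the talk.

    ```python
    # Pre-render several widths of each source image so the page (or server)
    # can pick the smallest version that fits the requesting device.
    from PIL import Image

    BREAKPOINTS = (320, 640, 1024, 2048)  # assumed device widths, in pixels

    def render_variants(src_path: str) -> list:
        outputs = []
        with Image.open(src_path) as img:
            for width in BREAKPOINTS:
                if width >= img.width:
                    continue  # never upscale; serve the original instead
                height = round(img.height * width / img.width)
                out_path = "%s_%dw.jpg" % (src_path.rsplit(".", 1)[0], width)
                img.resize((width, height)).convert("RGB").save(
                    out_path, "JPEG", quality=85, optimize=True)
                outputs.append(out_path)
        return outputs
    ```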

    “There is value in serving smaller images to smaller devices,” said Podjarny. “We can’t do it easily, because it’s hard to detect mobile” with precision.

    Podjarny said scripts and CSS can be used to deliver custom images based upon the screen size of the device. There’s also an effort to extend markup languages, expanding the capabilities of web page tags to declare multiple sources for an image or delay image loads depending on their location on a page. See ResponsiveImages for more.

    “Images are not as simple as they seem,” said Podjarny. “But they’re worth optimizing.”

    8:00p
    Arizona Passes Incentives for Data Centers

    The Digital Realty Trust data center in Chandler, Arizona.

    The data center incentive arms race continues, as legislators in existing industry hubs are adopting beefed-up incentives to keep pace with up-and-coming states seeking to attract new projects.

    The latest state to pass enhanced incentives is Arizona, home to an active data center cluster in the Phoenix market. At the behest of a coalition of data center companies, the Arizona legislature has passed a package of incentives, which were signed into law Monday by Gov. Jan Brewer.

    “This legislation is a quantum leap forward for Arizona,” said Jim Grice, a partner with Lathrop & Gage LLP and a primary architect of the legislative effort. “I have analyzed every state’s standing in the competition for data centers. This will not go unnoticed in the data center industry and I fully expect investment and jobs to follow.”

    Industry Coalition Drives Initiative

    The effort was advanced by the Arizona Data Center Coalition, a group of data center operators, utilities, realtors and economic development groups that would benefit from increased data center business in the state.

    Arizona’s passage of incentives follows the adoption this year of tax breaks in Virginia and Texas, two of the most active data center markets. The moves are a response to aggressive incentives offered by states seeking to attract major server farm projects, such as North Carolina, Iowa and Wyoming.

    The coalition says that while other states have introduced legislation that primarily benefits single-occupant data centers, Arizona has tailored its legislation to assist colocation and multi-tenant facilities as well as single-tenant sites.

    The legislation allows data center operators and qualified tenants to receive an exemption from sales and use taxes attributable to data center equipment purchased for use in a qualified data center, defined as a facility representing new investment in the state of at least $50 million for urban locations or $25 million for non-urban locations.

    Data center operators can benefit from the tax exemption for 10 years. The incentives also reward sustainable redevelopment, adding an enhanced tax benefit for up to 20 years if an owner/operator seeks to redevelop a vacant structure or existing facility using sustainable development practices.
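
    Stated as a simple rule, qualification and the length of the benefit depend on location, investment and redevelopment practices. The sketch below restates the thresholds described above; it is an illustration of the article’s figures only, not a statement of the statute’s actual tests.

    ```python
    # Rough restatement of the thresholds described above: $50M minimum new
    # investment for urban sites, $25M for non-urban sites, a 10-year exemption,
    # extendable to 20 years for sustainable redevelopment. Illustrative only.

    def exemption_years(investment_usd, urban, sustainable_redevelopment):
        minimum = 50_000_000 if urban else 25_000_000
        if investment_usd < minimum:
            return 0  # does not qualify
        return 20 if sustainable_redevelopment else 10

    print(exemption_years(60_000_000, urban=True, sustainable_redevelopment=False))   # 10
    print(exemption_years(30_000_000, urban=False, sustainable_redevelopment=True))   # 20
    ```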

    Benefiting from the Cloud

    “This legislation will bolster Arizona’s standing in the data center industry and position it to benefit from the ever growing transition to the cloud, particularly as site selectors evaluate the relative benefits of locating in Arizona,” said David Caron, Senior Vice President, Portfolio Management with Digital Realty, a founding member of the Arizona Data Center Coalition. “These prudent tax framework adjustments can be a ‘game-changer’ for Arizona.”

    In 2012, Digital Realty founded the Arizona Data Center Coalition to publicly advocate for legislation to keep Arizona in the top tier of states to host data centers.

    “Arizona was losing ground … it was at a crossroads,” said Russell Smoldon, CEO of B3 Strategies and a founding member of the coalition. “The Governor and the legislature chose to boldly march forward to secure the jobs we have, expand the facilities that are here and attract new investments.”

    Coalition members included eBay, Microsoft, Digital Realty, CyrusOne, GPEC-Greater Phoenix Economic Council, The Arizona Technology Council, CBRE, Grand Canyon State Electric Cooperative, Jones-Lang-LaSalle, NextFort Ventures Chandler LLC, Salt River Project, and Tucson Electric Power.

    The state’s largest data center player, IO, also expressed support for the new measures.

    “We have long been committed to the Arizona market and to creating employment opportunities here in our home state,” said Troy Rutman, Director of Corporate Communications, IO. “We see this legislation as a real win for our substantial base of Arizona customers.”

    Virtuous Cycle for Data Centers

    The new round of state-level incentives creates a virtuous cycle for the data center industry, as providers get more tools to boost their attractiveness, and customers get more options to save money on their IT deployments.

    What about the states themselves? Thus far the incentives have been driven by two scenarios:

    • States seeking to win data center projects to boost their efforts to adapt to a digital economy, including the infrastructure to drive the Internet.
    • States with a large base of existing data centers seeking to maintain their competitive position amid changing tides in site location and data center geography.

    Both groups seem happy with their investments, which in most cases have been eagerly supported by legislators and governors. For now, the cycle seems to have no end in sight.

