Data Center Knowledge | News and analysis for the data center industry

Tuesday, May 6th, 2014

    4:01a
    StackStorm Out of Stealth to Give DevOps True Data Center Automation

    There needs to be more automation in DevOps, but data center managers still mistrust automation and generally avoid implementing it at scale. This is the problem StackStorm, a Palo Alto, Calif.-based startup, came out of stealth today to fix.

    The company’s operations automation software, delivered as a service, is designed to use management and monitoring tools data center managers already use to automate management tasks across their entire infrastructure.

    Funded by venture capital from the U.S. and Europe, the 12-person startup is yet another company going after the growing market for solutions that enable companies to build and change software quickly and constantly, as the likes of Facebook and Google do, and have a data center infrastructure that is dynamic enough to support constant iteration.

    StackStorm engineers have been involved in OpenStack, the popular open source cloud infrastructure project, and the company is currently focused on customers with OpenStack deployments.

    Co-founder and CEO Evan Powell said StackStorm’s team helped lead development of Mistral, a task-management service in OpenStack, which became a key part of its product. “The core workflow component comes right out of OpenStack,” he said.

    The company is not limiting itself to the OpenStack market, however. “We tend to be focused on OpenStack, but technology itself is pretty extensible,” Powell said.

    The solution can be used with popular management tools, such as Puppet, Chef and Salt, and implemented across heterogeneous cloud infrastructure, such as a combination of an in-house OpenStack cloud and an Amazon Web Services or a Heroku public cloud deployment.
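
    As a rough illustration of that idea (not StackStorm’s actual API), the sketch below shows how a single automation could shell out to whichever configuration-management tool already manages a given host, whether it lives in a private OpenStack cloud or on a public provider. The host names and inventory are invented for the example.

        # Illustrative sketch only -- not StackStorm's API. The point is reusing
        # the tools a team already runs (Puppet, Chef, Salt) as the execution
        # layer for one automation, regardless of which cloud the host lives in.
        import subprocess

        TOOL_COMMANDS = {
            "puppet": ["puppet", "agent", "--test"],       # re-apply the Puppet catalog
            "chef":   ["chef-client"],                     # converge the Chef node
            "salt":   ["salt-call", "state.highstate"],    # apply the Salt highstate
        }

        def enforce_desired_state(host, tool):
            """Run the host's own config-management tool to converge it."""
            cmd = ["ssh", host] + TOOL_COMMANDS[tool]
            return subprocess.run(cmd, capture_output=True, text=True)

        # The same call works whether the inventory entry points at an in-house
        # OpenStack VM or a public-cloud instance (hypothetical hosts).
        inventory = [
            ("web-01.openstack.internal", "puppet"),
            ("api-02.aws.example.com", "salt"),
        ]
        for host, tool in inventory:
            enforce_desired_state(host, tool)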

    Automation Future Still a Clean Slate

    At the moment, StackStorm is available as a private, invitation-only beta. Intelligence of the current version is at the level of a “reptile’s brain,” compared to what the founders envision it will become as it evolves, Powell said.

    The beta version uses algorithms to analyze data collected from management and monitoring tools and rank automations by performance. It tracks who did what to what system under what workload, he explained, determining which automations are most effective.
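
    A minimal sketch of that ranking idea follows, under the assumption that each run record captures the automation, the target system, the load conditions and whether the run succeeded. It illustrates the concept only; it is not the company’s algorithm.

        # Rank automations by observed success rate -- an illustration of the
        # idea described above, not StackStorm's algorithm. Records are made up.
        from collections import defaultdict

        runs = [
            {"automation": "restart-nova-api",   "system": "ctl-01", "load": "high", "ok": True},
            {"automation": "restart-nova-api",   "system": "ctl-02", "load": "low",  "ok": False},
            {"automation": "purge-rabbit-queue", "system": "msg-01", "load": "high", "ok": True},
            {"automation": "purge-rabbit-queue", "system": "msg-01", "load": "high", "ok": True},
        ]

        def rank_by_success(records):
            """Return automation names ordered by success rate, best first."""
            totals, wins = defaultdict(int), defaultdict(int)
            for r in records:
                totals[r["automation"]] += 1
                wins[r["automation"]] += r["ok"]
            return sorted(totals, key=lambda a: wins[a] / totals[a], reverse=True)

        print(rank_by_success(runs))   # ['purge-rabbit-queue', 'restart-nova-api']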

    While the solution today is basic in comparison to what it may become in the future, Powell said its existing capabilities were not to be underestimated. “Without us or some other system you can’t really learn which operations or automations are performing well,” he said.

    Powell and the rest of the team are not yet sure what exactly StackStorm will be able to do once it gains the self-learning capabilities they are building toward. “We’re really trying to understand that,” he said, explaining that this is what the private beta is all about.

    One possible use, which Facebook claims to have been automating successfully, is remediation of infrastructure problems, he said.

    A number of people in the OpenStack community have gotten very good at putting together troubleshooting guides, for example. These guides come in handy when unexpected performance problems arise, but it is still difficult to make good decisions under stress.

    Automating steps in these guides could be one thing StackStorm does one day, Powell said.
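
    To make that idea concrete, here is a hypothetical sketch of what an executable version of such a guide could look like: each step pairs a health check with the documented fix, and the fix runs only when the check fails. The service names and commands are invented for illustration.

        # Hypothetical runbook sketch: turn a written troubleshooting guide into
        # check/fix pairs. Service names are invented; commands are standard systemd.
        import subprocess

        def svc_active(name):
            """Return True if the systemd unit reports it is active."""
            return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

        RUNBOOK = [
            ("API service running", lambda: svc_active("nova-api"),
             lambda: subprocess.run(["systemctl", "restart", "nova-api"])),
            ("Message bus running", lambda: svc_active("rabbitmq-server"),
             lambda: subprocess.run(["systemctl", "restart", "rabbitmq-server"])),
        ]

        def execute(runbook):
            for title, check, fix in runbook:
                if check():
                    print(f"OK: {title}")
                else:
                    print(f"FAIL: {title} -- applying documented fix")
                    fix()

        execute(RUNBOOK)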

    Founding Team’s Business and Tech Credentials Solid

    This is not the first company Powell has founded and led as CEO. Before StackStorm, he was a founding CEO of Nexenta Systems, which developed software for storage and backup.

    Prior to that, he was on the founding team of Clarus Systems, which he also ran as CEO. Clarus, maker of service-management software for communications solutions, was acquired by OPNET Technologies in 2012, which was itself promptly gobbled up by Riverbed Technology.

    The other StackStorm co-founder, Dimitri Zimine, most recently ran R&D for cloud infrastructure at VMware. He earned his infrastructure automation credentials, however, when he served as senior director of engineering and chief architect at Opalis, a company that played a big role in the first wave of operations automation, according to a StackStorm news release.

    In 2009 Opalis was acquired by Microsoft and has since become System Center Orchestrator, Microsoft’s widely used automation product.

    12:00p
    AMD to Build Pin-Compatible x86 and ARM Server SoCs

    AMD has decided there is no need for a single server design to be limited to an x86 processor or an ARM chip and announced a plan to create a reference architecture for a system where the two are fully interchangeable.

    The plan is to enable hardware vendors to design and validate one server for both ARM and x86 chips. If it works, the approach may significantly speed up the adoption of low-power ARM chips – today used primarily in mobile devices – by server vendors and end users.

    The Sunnyvale, Calif., chipmaker calls the initiative “Project SkyBridge,” and promises to have the design framework and first products out in 2015. SkyBridge System-on-Chips (SoCs) will also carry AMD’s graphics processing units (GPUs) for more compute horsepower.

    Suresh Gopalakrishnan, vice president and general manager of AMD’s server business unit, said AMD was the only company able to bring x86 and ARM together and give hardware manufacturers an opportunity to leverage investment in a single system across two distinct chip architectures. “This is a unique thing that only AMD can do,” he said.

    AMD’s arch nemesis Intel does not license ARM chips, positioning its Atom chips as the alternative to ARM in the data center. AMD’s competitors in the ARM ecosystem today cannot match the amount of IP the Silicon Valley chipmaker has built around the x86 architecture over the years.

    ARM and x86 versions of the SkyBridge SoC will be pin-compatible and have the same IO, memory and GPU cores, Gopalakrishnan said.

    The company has not said whether there is interest from hardware vendors in its “ambidextrous” computing project.

    Both ARM and GPU-assisted computing have been key to AMD’s strategy in the data center market recently. The company has been struggling to win share of the x86 server market away from Intel and has been promoting ARM as the future of scale-out data centers.

    This week’s announcement is both an acknowledgment that the x86 architecture will continue to be a mainstay in the data center and a roadmap for addressing that reality while leveraging AMD’s existing investments in ARM and Heterogeneous System Architecture (its name for using GPUs to offload processing from the CPU).

    Permission to Tweak ARM Cores Acquired

    AMD also announced that it had bought an architectural license for ARM cores from the United Kingdom’s ARM Holdings, which means it can now modify the cores. Until recently, the company only had an implementation license for ARM’s Cortex-A57 core, which meant it could implement the cores without modifications.

    “The architectural license allows us to take the instruction set and … implement it in any way we want,” Gopalakrishnan explained. The license enables the company to leverage the institutional processor engineering expertise and IP it has accumulated over its lifetime to fine-tune the ARM architecture for its needs.

    AMD will be able to “make it faster, make it higher-performance, based on all the stuff that we’ve been doing for a while.” While there are many modifications AMD can make, the primary ones are increasing the number of instructions the chip can process in one clock cycle and increasing core frequency, he explained.
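
    As a back-of-envelope way to see how those two levers combine, per-core throughput scales roughly with instructions per cycle (IPC) times clock frequency. The figures below are made-up placeholders, not AMD specifications.

        # First-order arithmetic behind the two tuning levers mentioned above:
        # throughput scales roughly with IPC x frequency x cores.
        # All numbers are illustrative assumptions, not AMD specs.
        def relative_throughput(ipc, ghz, cores):
            return ipc * ghz * cores

        baseline = relative_throughput(ipc=2.0, ghz=2.0, cores=8)   # assumed stock core
        tuned    = relative_throughput(ipc=2.5, ghz=2.4, cores=8)   # assumed wider, faster core

        print(f"speedup from IPC + frequency tuning: {tuned / baseline:.2f}x")   # ~1.50x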

    Web Hosting on 64-bit ARM Platform Demoed

    At a press conference in San Francisco, Calif., Monday, AMD for the first time demonstrated in public a 64-bit ARM server in action. A motherboard powered by the company’s Seattle SoC ran a WordPress website and served video.

    “We showed a whole web hosting solution running on Seattle,” Gopalakrishnan said.

    AMD started sampling Seattle in January, becoming one of the first companies out with a 64-bit ARM processor. Another front-runner in this space is Sunnyvale-based Applied Micro.

    12:00p
    CenturyLink Follows Cloud Giants’ Lead With Big Price Cuts

    CenturyLink became the latest public cloud provider to slash prices, following major cuts by Amazon, Google and Microsoft. A typical CenturyLink cloud virtual machine will now cost at least 60 percent less.

    With such big cuts across the board, price is becoming less and less of a differentiator for cloud providers. CenturyLink believes its true differentiation in the cloud wars is its data centers: with 56 locations worldwide, it has an edge as the focus moves toward providing services as close to the customer as possible.

    Not all CenturyLink data centers host the company’s cloud, however. About 30 offer some kind of cloud services, but the flagship public cloud product is hosted in about 10. The company is actively expanding its cloud footprint.

    The company also differentiates with its built-in management, orchestration and platform solutions for hybrid cloud deployments. It has maintained an aggressive release cycle, adding several major new cloud capabilities since November.

    “Our Seattle Cloud Development Center – featuring our own championship lineup of Jared Wray, Lucas Carlson and a host of other standouts – delivers on an agile 21-day release cycle,” Andrew Higginbotham, senior vice president of cloud and technology at CenturyLink, wrote in a blog post announcing the price cuts.

    CenturyLink has been growing its cloud capabilities aggressively over the past several years, buying Platform-as-a-Service provider AppFog and Infrastructure-as-a-Service platform provider Tier 3 last year. Its first big cloud move came in 2011 when it acquired Savvis.

    12:30p
    Codero Expands Into Dallas Market With Digital Realty

    Data center service provider Codero Hosting has launched a data center in the Dallas-Fort Worth region of Texas. Located in a Digital Realty Trust facility, this is the fourth data center location for the company, which provides dedicated, managed, cloud and hybrid hosting services.

    The company announced it was expanding to Dallas earlier this year, but did not disclose what kind of facility the new location would be in. It has been operating out of data centers in Phoenix, Ariz., Ashburn, Va., and Chicago, Ill., and said it chose the Dallas market for its next phase of growth because of a high concentration of hosting providers in the region.

    “[Dallas-Fort Worth] is the hub of connectivity for U.S. bandwidth and offers the industry’s latest technology – everything from power and cooling to density, as well as a variety of choices in bandwidth providers, flexibility in labor pool and unbeatable power costs,” Robert Autenrieth, the company’s COO, said. “All these factors combined to make it a natural choice for our fourth data center in the U.S.”

    The new facility is part of the 70-acre Digital Dallas Data Campus situated along primary fiber routes and master planned for more than 100 megawatts of power.

    12:30p
    Will Cat 8 Cabling Force A Topology Change In The Data Center?

    Ken Hodge is Chief Technical Officer at Brand-Rex. He is a chartered engineer and a fellow of the Institution of Engineering and Technology.

    Work is advancing fast on Category 8 copper cabling, which will find its applications primarily in data centers. Like its predecessors, this “BASE-T” will be later to market than its co-ax and fiber-based competitors, but when it arrives, it will rapidly displace them because of its far lower cost.

    Cat 8 will become the mainstream technology for rack-level interconnect in the data center. However, unlike earlier gigabit and 10-gigabit technologies it will not have a 328 feet (100 meters) range and so it will not support centralized switching with passive patch-panels at row level, except in smaller server rooms. This column explains how data center network topologies will need to change to take advantage of this new low cost 40Gb/s connectivity.
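
    As a rough way to see why reach dictates topology, the sketch below compares assumed channel lengths for three switching layouts against a 100-meter Cat 6A-class reach and the roughly 30-meter reach of the kind being discussed for Cat 8. All distances are illustrative assumptions.

        # Rough channel-length check for three switching topologies, used to
        # illustrate why a shorter-reach BASE-T pushes switching closer to the
        # rack. Run lengths are illustrative assumptions, not measurements.
        TOPOLOGY_RUN_M = {
            "centralized (cross-room)": 80.0,   # server -> row patch panel -> central switch
            "end-of-row": 25.0,                 # server -> switch at the end of the row
            "top-of-rack": 3.0,                 # server -> switch in the same rack
        }

        def supported(reach_m):
            """Topologies whose assumed channel length fits within the reach."""
            return [name for name, run in TOPOLOGY_RUN_M.items() if run <= reach_m]

        print("100 m (Cat 6A-class) reach supports:", supported(100))
        print("~30 m assumed Cat 8 reach supports: ", supported(30))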

    The Need for Speed

    Cat 6A, being the fastest BASE-T solution currently available, is the de facto choice for data centers as specified by TIA/EIA for North America and by ISO/IEC internationally.

    In the data center, individual devices require ever faster interconnects. For example, one physical server now runs maybe ten virtual servers, so the physical interconnect must handle roughly ten times the data. Add to that the seemingly unstoppable move to streaming more and more video, the upcoming wide-scale adoption of ultra-high-definition ‘4K’ video and the ‘big data’ workloads that will affect a lot of data centers in coming years, and it would be very unwise of us as data center professionals not to forecast that much higher bandwidths will be needed.
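
    A quick back-of-envelope illustrates the consolidation math: if one host carries the traffic of many virtual servers, its uplink must carry roughly the sum of their individual demands. The per-VM figures and headroom factor below are assumptions for illustration.

        # Back-of-envelope for the consolidation argument above. All figures are
        # illustrative assumptions, not measurements.
        def required_uplink_gbps(vms_per_host, gbps_per_vm, headroom=1.25):
            """Aggregate NIC bandwidth a host needs, with some burst headroom."""
            return vms_per_host * gbps_per_vm * headroom

        print(required_uplink_gbps(10, 1.0))   # 12.5 Gb/s -- already past a single 10GbE port
        print(required_uplink_gbps(10, 3.0))   # 37.5 Gb/s -- the territory 40Gb/s links target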

    Already, in certain high-performance computing data centers (or HPC sections of data centers), we see that 10Gb/s is not enough. Solutions such as multiple bonded 10Gb/s copper or fiber channels and 40Gb/s or 100Gb/s fiber channels are being deployed.

    Also, as happened in the early days of gigabit and then again with 10Gb/s, high-speed short-range co-ax copper solutions are already handling the early-adopter need for very high speed interconnects at 40Gb/s.

    Unlike BASE-T, these short-range co-ax solutions don’t need all of the complex signal processing that is required for longer-length channels. So they are far quicker to develop and bring to market. The downside is that the cables and connectors are extremely expensive. Whilst these high costs are not really an issue in early-adopter applications, they are totally unaffordable in the data center mass-market. And that is where a BASE-T has historically come in around two years later at a fraction of the cost.

    I predict that a similar cycle will happen with 40Gb/s.

    Category 8 is still in its early days of development and it will be a good year or more before we’ll really know how it will look technically. But it’s almost inevitable that once standardized and productized its cost per link will quickly drop to a fraction of the co-ax and fiber based alternatives.

    What is Cat 8?

    Currently, there are a number of similar but different ‘Cat 8’ solutions being considered by the standards bodies for 40Gb/s over twisted-pair copper.

    In the USA, TIA/EIA is considering a Cat 8 based on an extended-performance Cat 6A cable. Meanwhile ISO/IEC, working internationally, is looking at two options: Cat 8.1, based on an extended-performance Cat 6A cable, and Cat 8.2, based on an extended Cat 7A cable. Interestingly, all of these are based on shielded cables and connectors because of alien crosstalk difficulties.

    As yet, there is no clear choice of connector, though opinion weighs significantly in favor of the RJ-45 footprint rather than the larger ‘square’ contender. This is partly in order to achieve high-density patch panel and switch configurations and partly because RJ-45 is what almost everyone in the industry is used to and comfortable with.

    It looks likely, however, that even if an RJ-45-profile jack is used, its pin configuration will have to change to ensure the necessary crosstalk performance, which means it will not be backward compatible with 10GBASE-T and lower-speed standards. Whilst this is unlikely to create a problem in the data center, it might not be so acceptable if Cat 8 ever reaches the enterprise LAN.

    2:00p
    Key Strategies to Reduce Risk and Increase Efficiency in the Data Center

    One of the biggest data center initiatives within the modern organization revolves around infrastructure efficiency, both physical and logical. With all of these advancements and new technologies, the hope was that adding, moving, decommissioning and renaming devices in the data center would become a simple process.

    However, in today’s complex data centers, these tasks have become something of a high-wire act, fraught with inefficiency and risk. Trends like virtualization are only compounding the difficulty. In these dense environments, add a server in the wrong place and the result can be an overloaded power feed and a lot of unplanned downtime.

    Ultimately, what can you do to reduce data center risk and improve efficiency? This whitepaper from Emerson describes how data centers are reducing risk and inefficiency by improving their strategic planning capabilities with data center infrastructure management (DCIM) solutions. These intelligent platforms help organizations understand the total environment, effectively predict and plan capacity needs, and cut energy consumption and costs.
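
    As a simplified illustration of the capacity-planning checks such platforms automate, the sketch below compares a server’s expected draw against the remaining budget on a target rack’s power feed before a move is approved. The rack names, capacities and derating factor are invented for illustration.

        # Minimal sketch of a pre-move power check of the kind a DCIM tool
        # automates. Rack names and figures are invented for illustration.
        RACKS = {
            "A01": {"feed_capacity_w": 5000, "allocated_w": 3600},
            "A02": {"feed_capacity_w": 5000, "allocated_w": 2100},
        }

        def can_place(rack_id, server_draw_w, derate=0.8):
            """Allow the move only if the feed stays under a derated ceiling."""
            rack = RACKS[rack_id]
            ceiling = rack["feed_capacity_w"] * derate
            return rack["allocated_w"] + server_draw_w <= ceiling

        print(can_place("A01", 600))   # False -- would push the feed past its derated ceiling
        print(can_place("A02", 600))   # True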

    Download this whitepaper today to learn about the key strategies that can directly improve your data center model. With a powerful DCIM solution, data centers and their administrators are able to operate a much more proactive compute model. Key strategies covered include:

    • The essential connection between planning and execution
    • Best-practice workflows that match your operations
    • Supporting the process from start to finish
    • Fully integrating planning and execution
    • Reporting on both the big picture and the fine details

    As the data center model continues to evolve, it will be critical to create an environment that is controlled, optimized and proactive. Remember, a healthy, proactively monitored data center platform is one that will have less downtime and keep your organization running smoothly.

    2:00p
    Dell Hardware Powers Digital Effects at Spin VFX

    Visual effects studio Spin VFX has selected Dell to supply IT infrastructure for its new location in Toronto. Spin VFX’s portfolio includes more than 70 feature films and 13 television series, among them Game of Thrones (HBO), Showtime’s The Borgias, and The Twilight Saga: Breaking Dawn Parts 1 and 2 (Summit/Lionsgate).

    Dell worked closely with third-party industry specialist Island Digital to design the updated technology infrastructure at Spin VFX’s new studio space and ensure the hardware was tested and ready to go as soon as the move was complete. The studio often has several shows under production simultaneously, with some generating upwards of 20 terabytes of data.

    To handle this workload, Spin VFX selected six compact, energy-efficient Dell PowerEdge R420 rack servers with Intel Xeon E5 processors, 48GB of memory and 600GB hard drives. The servers take up less space in the studio’s server room, deliver substantial energy savings and create an efficient hub that allows the artists to share their work and collaborate across the 10GbE network.
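
    A rough calculation shows why the network fabric matters for this kind of workload: even at wire speed, moving a 20-terabyte show across a 10Gb/s link takes hours. The efficiency factor below is an assumption; real transfers add protocol and storage overhead.

        # Back-of-envelope transfer time for a 20 TB show over the studio network.
        # The 90 percent efficiency figure is an illustrative assumption.
        def transfer_hours(terabytes, link_gbps, efficiency=0.9):
            bits = terabytes * 1e12 * 8
            return bits / (link_gbps * 1e9 * efficiency) / 3600

        print(f"{transfer_hours(20, 10):.1f} h at 10 Gb/s")   # ~4.9 h
        print(f"{transfer_hours(20, 1):.1f} h at 1 Gb/s")     # ~49.4 h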

    On the artists’ desks, Spin VFX installed Dell Precision T5600 workstations with quad-core Intel Xeon processors, NVIDIA Quadro 4000 video cards, 32GB of memory and 500GB SATA hard drives. These are paired with Dell UltraSharp U2413 PremierColor monitors: 24-inch displays that feature Widescreen Ultra Extended Graphics Array (WUXGA) technology, offering a native 1920×1200 resolution at a 16:10 aspect ratio as well as 1920×1080 at 16:9.

    “When the time came to move our studio we were able to minimize our production disruptions because Dell and Island Digital had everything staged and set up in advance,” said Neishaw Ali, president and executive producer at Spin VFX. “I’ve never seen such urgency and attentiveness in all my decades of working with technology suppliers. They were truly partners in our success and we’re so thankful to them.”

    “Over the years Dell has built up a large portfolio of clients in the film and entertainment sector,” said David Miketinac, vice president and general manager, commercial sales at Dell Canada. “We understand this incredibly dynamic and competitive business and take pride in delivering powerful and reliable technology solutions that allow our customers to push the boundaries of creativity and produce award-winning material. Our role is to ensure the hardware works seamlessly and flawlessly, enabling the creative talent to work its magic – on time and on budget.”

    3:30p
    Exinda Launches WAN Orchestration Platform

    Exinda announced the availability of Exinda Network Orchestrator, designed to provide network managers and administrators with management tools for wide area networks (WAN). The solution integrates monitoring, analytics, purpose-built reporting, prescriptive recommendations and actions such as traffic shaping and optimization into one orchestrated system.

    “As we look to leverage a variety of new applications and services, both premise-based and in the cloud, we can no longer afford to use a passive WAN solution,” said Mark Knight, IT operations controller for Yamaha Motor (UK) Ltd. “Given the complexity of our network today, and anticipating our future needs, we require a more active and intelligent approach to WAN management. Exinda’s Network Orchestrator fits our needs on every level.”

    The Exinda Network Orchestrator features a recommendation engine for studying patterns and network changes, interactive analytics to inspect and analyze network activities, and purpose-built reports designed to address common problems faced by network managers. The solution provides end-to-end orchestration to diagnose, recommend and repair application and network performance issues. This approach enables proactive network management to ensure a reliable user experience across applications, devices and activities.
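
    For readers unfamiliar with the term, “traffic shaping” generally means metering flows against a configured rate, as in the generic token-bucket sketch below. This illustrates the concept only; it is not Exinda’s implementation.

        # Generic token-bucket rate limiter, included only to make "traffic
        # shaping" concrete. Not Exinda's implementation.
        import time

        class TokenBucket:
            def __init__(self, rate_bps, burst_bytes):
                self.rate = rate_bps / 8.0        # refill rate in bytes per second
                self.capacity = burst_bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, packet_bytes):
                """Return True if the packet may be sent now; otherwise queue it."""
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_bytes:
                    self.tokens -= packet_bytes
                    return True
                return False

        # Cap a recreational traffic class at 2 Mb/s with a 64 KB burst allowance.
        shaper = TokenBucket(rate_bps=2_000_000, burst_bytes=64_000)
        print(shaper.allow(1500))   # True while tokens remain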

    “The network has evolved and network managers can no longer afford to apply passive, isolated solutions to a complex problem,” said Michael Sharma, CEO at Exinda Networks. “Exinda’s Network Orchestrator takes a proactive approach to managing the WAN environment, addressing multiple network performance challenges with one orchestrated system. As a result, we’re enabling IT to better support and manage the influx of users, devices and applications on their networks.”

    3:30p
    SanDisk Unveils 4TB Enterprise SAS SSD

    SanDisk (SNDK) has announced two new solid state drive lines: the 4 TB Optimus MAX Serial Attached SCSI (SAS) solid state drive (SSD) and the Lightning Gen. II family of enterprise-class 12Gb/s SSDs. The two announcements extend SanDisk’s SAS portfolio to cover the performance, capacity and endurance needs of enterprise applications.

    Optimus MAX

    The Optimus MAX is aimed at enterprises that are looking to replace under-performing disk drives while leveraging their current SAS storage infrastructures. The entire Optimus product family is being updated to take advantage of 19nm MLC NAND flash. The company is also renaming the previous Optimus and Optimus Ultra+ SSDs as the Optimus Ascend and Optimus Extreme SSDs, respectively.

    “Currently, SSDs are used to accentuate high-capacity HDDs in traditional enterprise, cloud and hyperscale data centers, however, increasing numbers of IT managers are finding that they need accelerated performance,” said Laura DuBois, Program Vice President for IDC’s Storage practice. “As SSDs, such as SanDisk’s new Optimus MAX, continue to increase in capacity while achieving greater cost-effectiveness, more enterprises will look to SSDs to replace their legacy HDD infrastructures in order to meet today’s high I/O applications and enterprise workload requirements.”

    Lightning Gen II

    The new Lightning Gen. II SSD product family for mission-critical and virtualized data center application workloads doubles interface speeds over previously available 6Gb/s SSDs. The drives feature error correction and detection technology, full data path protection, instant secure erase, thermal monitoring and die fail recovery. The SSDs come with a five-year warranty.

    The new products span a range of performance and endurance options, with varying random read/write performance and sequential read/write speeds. Capacities range from 200 GB to 1.6 TB.
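
    A little arithmetic shows where the doubling comes from: SAS uses 8b/10b line coding, so usable per-lane bandwidth is about 80 percent of the raw line rate, and a 12Gb/s lane therefore carries roughly twice the payload of a 6Gb/s lane.

        # Back-of-envelope behind the "double the throughput per slot" claim:
        # payload bandwidth per lane is the line rate minus 8b/10b coding overhead.
        def sas_lane_megabytes_per_sec(line_rate_gbps, coding_efficiency=0.8):
            return line_rate_gbps * 1e9 * coding_efficiency / 8 / 1e6

        print(f"6 Gb/s SAS:  ~{sas_lane_megabytes_per_sec(6):.0f} MB/s per lane")    # ~600 MB/s
        print(f"12 Gb/s SAS: ~{sas_lane_megabytes_per_sec(12):.0f} MB/s per lane")   # ~1200 MB/s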

    “Business data needs are becoming so performance-intensive that even applications that are already using SSDs need an additional boost,” said John Scaramuzzo, Senior Vice President and General Manager, Enterprise Storage Solutions at SanDisk. “The Lightning Gen. II SAS SSDs deliver double the throughput to each slot, empowering organizations to quickly and easily scale their performance infrastructure to meet this growing need.”

    4:56p
    Greenpeace Takes Clicking Clean Campaign to Pinterest HQ

    Greenpeace continued its unapologetic pursuit of the internet’s top brands, pressuring them to do something about the coal power that fuels much of the internet (as it does most other industries), with an action outside the San Francisco, Calif., office of Pinterest on Tuesday.

    The activists set up large Pinterest-style “pin boards” and eight “pins” designed by some of the social network’s top users on the street in San Francisco’s SoMa neighborhood. The artists that designed the pins have about 5 million Pinterest followers combined, according to Greenpeace.

    Like many other major internet properties, Pinterest uses public cloud infrastructure from Amazon Web Services. By Greenpeace’s standards, the cloud provider does not do nearly enough to reduce its global data center infrastructure’s reliance on coal power.

    “AWS has dropped further and further behind its competitors in building an internet that runs on renewable sources of energy, estimated at only 15 percent, and is the least transparent of any company we evaluated,” the activist organization said on its website.

    Amazon’s response has been to dispute Greenpeace’s data about its energy use. The provider has consistently said the environmental organization’s calculations relied on false data, but it has not released any information about its data center operations to support its claims.

    Pinterest spokesman Barry Schnitt said the company agreed with Greenpeace’s goal of a future powered by renewable energy. “While we can’t build a data center near the arctic circle, like Facebook, or finance a wind farm like Google, we are thinking about what we can do and talking to Greenpeace and others as part of that effort,” he wrote in an email.

    Besides Pinterest, popular online properties that rely on Amazon for infrastructure include Netflix, Spotify, Vine, Yelp, AirBnB and Reddit.

    The strategy behind Greenpeace’s campaign to clean up the internet’s power mix has been to pressure the biggest data center operators to use their buying power to push utilities to add more renewable sources to their generation mix.

    Some companies, such as Apple, Facebook and Google, have committed to pursuing 100-percent renewable energy for their operations, while others, including Amazon and Twitter, have not.

    In addition to the pin boards and pins outside of Pinterest’s headquarters, Greenpeace activists set up a solar-powered café and served cupcakes iced with a “green Pinterest” logo to the company’s employees.

    5:23p
    Alibaba’s Cloud Subsidiary Opens Beijing Data Center

    While all the hype is around Alibaba’s upcoming IPO, the company’s cloud computing subsidiary Aliyun has formally opened a data center in Beijing. The first phase consists of 10,000 servers, primarily serving clients in the Beijing and North China region.

    This is the third location focused on global services for Aliyun, which also has data centers in Hangzhou and Qingdao, China Tech News reported. The company is accelerating its expansion and is reportedly evaluating locations overseas, with the U.S. and Southeast Asia the most likely markets it will tackle next.

    The provider offers four types of services: cloud servers, relational databases, cloud storage and load balancing. More products are anticipated in the future. Its customers range from startups to government research institutions and financial organizations.

    Aliyun’s parent company Alibaba is an e-commerce giant in China. Its IPO could mean a valuation of more than $160 billion once the offering completes. Yahoo, which owns nearly a quarter of Alibaba, has been enjoying a jump in the value of its own stock as the market prepares for what is expected to be a blockbuster float.

    China remains an extremely compelling and interesting market, not just for homegrown companies, but for U.S. tech giants establishing cloud plays locally, such as Microsoft and Amazon.

    There are some government hurdles and politics standing in the way of the market truly opening up, with the Chinese government imposing strict control over both business and the internet. Because Aliyun is local, it stands a good chance of capturing an increasing share of the cloud market.
