Data Center Knowledge | News and analysis for the data center industry

Wednesday, February 13th, 2013

    1:59p
    Livestream Scales With AMD SeaMicro Server

    The AMD SeaMicro SM15000 many-core server has been deployed by Livestream to support its growing live video infrastructure.

    Event streaming provider Livestream has deployed AMD’s SeaMicro SM15000 server with SeaMicro Freedom fabric storage as the core platform to provide live video and collaboration services. Livestream said the SM15000 will double its computing density while reducing power consumption. 

    Released last year with a focus on big data, the SM15000 extends SeaMicro’s networking fabric beyond the chassis to connect directly to massive disk arrays, enabling a single 10 rack unit system to support more than 5 petabytes of storage.
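    For a rough sense of that density, the arithmetic below scales the 10U, 5 PB figure up to a full rack. The 42U rack height and the four-systems-per-rack packing are illustrative assumptions, not vendor specifications.

        # Rough density arithmetic based on the figures above. The 42U rack height
        # and the four-systems-per-rack packing are illustrative assumptions,
        # not vendor specifications.
        system_height_u = 10        # SM15000 chassis height in rack units
        system_storage_pb = 5       # "more than 5 petabytes" per system
        rack_height_u = 42          # common full-height rack (assumption)

        systems_per_rack = rack_height_u // system_height_u
        storage_per_rack_pb = systems_per_rack * system_storage_pb
        print(f"{systems_per_rack} systems per rack -> roughly {storage_per_rack_pb}+ PB per rack")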

    Livestream’s analysis, verified by the New York State Energy Research and Development Authority (NYSERDA), also showed that the SeaMicro SM15000 server provided significant energy savings compared to solutions from other leading server vendors.

    “Our data center has plenty of rack space, but we just could not fill them with servers because we could not get enough power to the racks,” said Thomas Bonnin, chief architect, Livestream. “SeaMicro technology provides the highest density servers on the market allowing us to get multiple racks of servers into a quarter of a rack with AMD’s SeaMicro SM15000 system. What’s more, the technology allows us to reduce power consumption and the resulting cost savings goes straight to our bottom line. The SeaMicro SM15000 server also allowed us to double our computing capacity while at the same time, retire our energy inefficient servers.”

    As a result of its rapid growth, Livestream is building out an architecture that will scale to support hundreds of millions of people. The SeaMicro SM15000 was selected for its reduced power consumption, its optimization for Internet applications, and the compute power needed to transcode live video. The platform was tested for use with front-end web applications, back-end application programming interfaces (APIs), workflow systems and streaming infrastructure.

    “We went through a rigorous set of testing benchmarks as Livestream evaluated servers from the leading vendors in the market,” said Andrew Feldman, corporate vice president and general manager, Data Center Server Solutions, AMD. “Real time video transcoding is one of the most demanding applications for a server, and our SM15000 server with the Freedom fabric’s 1.28 terabits-per-second bisectional bandwidth proved to be the winning platform. It not only delivers the computing power and low latency needed, but also creates compelling savings in both space and power.”

    2:08p
    A Forgotten Data Center Cost: Lost Capacity

    Sherman Ikemoto is General Manager of Future Facilities North America, a supplier of data center design and modeling software and services.


    Data center capacity is the amount of IT equipment intended to be loaded into the data center, typically expressed in terms of kW/sqft or kW/cabinet. This specification is derived from business-unit projections of the computing capacity required over the long term.

    But most data centers never achieve the capacity for which they were designed. This is a financial challenge for owner/operators because of the partial return on the capital expenditure and the need to raise additional capital to add capacity years sooner than originally planned. The cost of this lost capacity dwarfs all other financial considerations for a data center owner/operator.

    Often 30 percent or more of data center capacity is lost in operation. On a global scale, out of 15.5 GW of available data center capacity, a minimum of 4.65 GW is unusable. At industry averages, this amounts to about 31 million square feet of wasted data center floor space and $70B of unrealized capital expense in the data center. The losses are staggering.
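    The figures above can be sanity-checked with some back-of-the-envelope arithmetic. The density and cost-per-watt values below are implied by the article’s totals rather than stated in it, so treat them as rough assumptions.

        # Back-of-the-envelope check of the lost-capacity figures cited above.
        # The implied density and cost-per-watt values are derived from the
        # article's totals, not stated directly, so treat them as assumptions.
        global_capacity_gw = 15.5    # available data center capacity worldwide
        lost_fraction = 0.30         # "often 30 percent or more" lost in operation

        lost_gw = global_capacity_gw * lost_fraction
        print(f"Lost capacity: {lost_gw:.2f} GW")            # ~4.65 GW, matching the article

        wasted_floor_sqft = 31e6     # wasted floor space cited above
        unrealized_capex = 70e9      # unrealized capital expense cited above

        watts_per_sqft = lost_gw * 1e9 / wasted_floor_sqft
        dollars_per_watt = unrealized_capex / (lost_gw * 1e9)
        print(f"Implied density: {watts_per_sqft:.0f} W/sqft")    # ~150 W per square foot
        print(f"Implied build cost: ${dollars_per_watt:.0f}/W")   # ~$15 per watt of capacity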

    Given the stakes, why isn’t much being said about lost capacity? Because these losses are due to fragmentation of infrastructure resources (space, power, cooling and networking) that builds slowly and imperceptibly early in the data center life span. As resources fragment, the data center becomes less and less able to support the full, intended IT load. Only well into the operational life of the facility, when the margin on capacity has closed, is the problem discovered. Lack of visibility and the delay between cause and detection conceal the elephant in the room: lost capacity.

    Compute Capacity Fragmentation

    Fragmentation occurs when the actual IT build-out differs physically from the assumptions used to design the facility. For example, suppose a data center was designed around standard rack-mount servers as the installed form factor, but, due to changing requirements, blade servers were selected and installed instead. The power draw of a blade server might be the same as that of the standard server, but the space and cooling (airflow) utilization could be substantially different. Because those differences were not accounted for in the design of the infrastructure, space, power and/or cooling fragment and data center capacity is reduced.
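    To make the effect concrete, the toy sketch below fills a cabinet with each form factor until the first resource runs out; whatever remains of the other resources is stranded. All of the budgets and server profiles are made-up illustrative numbers, not figures from the article or any vendor datasheet.

        # Toy model of capacity fragmentation: a cabinet is "full" as soon as any
        # one resource (space, power or airflow) is exhausted, stranding the rest.
        # All numbers below are made-up for illustration.
        CABINET = {"space_u": 42, "power_kw": 8.0, "airflow_cfm": 1000}

        # Two hypothetical form factors: the blade chassis concentrates load into
        # less space per kW but needs more airflow per kW than the 1U server.
        STANDARD_1U = {"space_u": 1, "power_kw": 0.4, "airflow_cfm": 50}
        BLADE_CHASSIS = {"space_u": 10, "power_kw": 4.0, "airflow_cfm": 700}

        def fill_cabinet(server):
            """Install servers until the first resource limit is reached."""
            count = 0
            used = {k: 0.0 for k in CABINET}
            while all(used[k] + server[k] <= CABINET[k] for k in CABINET):
                for k in CABINET:
                    used[k] += server[k]
                count += 1
            stranded = {k: round(CABINET[k] - used[k], 1) for k in CABINET}
            return count, stranded

        for name, server in (("standard 1U", STANDARD_1U), ("blade chassis", BLADE_CHASSIS)):
            count, stranded = fill_cabinet(server)
            print(f"{name}: {count} installed, stranded: {stranded}")

    In this made-up example the cabinet designed around 1U servers hits its power and airflow limits with space to spare, while the blade build-out exhausts airflow first and strands both space and power, which is the kind of mismatch described above.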

    To better understand fragmentation, consider a computer hard drive. Given that you pay per unit of storage, your goal is to fully utilize the capacity you have before buying more. However, hard drive capacity will fragment incrementally as you load and delete programs and files. The amount of fragmentation that occurs depends on how the hard drive is used. Eventually, a point is reached at which the remaining available capacity is too fragmented to be of use. Only with defragmentation tools can you reclaim what has been lost and fully realize your investment in the device.

    The concept of resource fragmentation also applies to the data center. Data center capacity, like hard drive storage capacity, will fragment through use, at a rate that depends on how it is used.  The simple answer to fully realizing the capacity potential of the data center is continuous defragmentation. This, however, is where the similarities between hard drives and data centers end.

    The first difference is that hard drive capacity is defined by space only while data center capacity is defined by the combination of space, power, cooling and networking. This makes defragmentation of data center capacity significantly more complicated as it requires coordinated management of four data center resources that are traditionally managed independently.

    The second difference is that unlike hard drive capacity, data center capacity cannot be tracked by traditional means. This is because cooling – a component of capacity – is dependent on airflow that is invisible and impractical to monitor with sensors.  Cooling problems therefore can be addressed only if airflow is made “visible.” A simulation technique called computational fluid dynamics (CFD) is the only way to make airflow visible. Therefore, the only means to defragment data center capacity that has been affected by cooling problems is through the use of CFD simulation.

    The final difference is that, unlike the hard drive, defragmentation of data center capacity is often not an option because it puts IT service availability at risk. Therefore, to protect against data center capacity loss, fragmentation issues must be predicted and addressed before IT deployments are physically implemented.

    These differences have a significant impact on the techniques required to protect against data center capacity fragmentation.

    2:59p
    Rackspace Shares Slide as Cloud Revenue Moderates


    Shares of Rackspace Hosting dropped nearly 20 percent today after the company’s earnings raised concerns that the rate of adoption for cloud computing services may be moderating. Shares of RAX were down $14.80 to $60.30, a drop of 19.6 percent on the session. The slide reflects Wall Street’s high expectations for Rackspace, which saw its shares rise 72 percent in 2012 amid investor enthusiasm for cloud computing.

    Rackspace said sales of cloud services rose to $87.3 million in the fourth quarter of 2012, up 49 percent from the year-earlier quarter. While a strong gain, that represented a slowdown from year-on-year cloud growth of 69 percent in the second quarter and 57 percent in the third quarter. Rackspace’s overall earnings of 21 cents per share were in line with analysts’ estimates, but revenue of $353 million for the quarter missed analysts’ average estimate of $355.4 million.
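    The deceleration is visible in the numbers themselves; the quick check below derives the implied year-earlier cloud revenue from the stated growth rate (it is not a figure reported in the article) and lists the quarterly growth rates cited above.

        # Quick check of the cloud-revenue trend described above. The implied
        # Q4 2011 figure is derived from the stated 49 percent growth rate,
        # not reported directly in the article.
        q4_2012_cloud_revenue = 87.3e6     # Q4 2012 cloud revenue, dollars
        yoy_growth = 0.49                  # "up 49 percent from the year-earlier quarter"

        implied_q4_2011 = q4_2012_cloud_revenue / (1 + yoy_growth)
        print(f"Implied Q4 2011 cloud revenue: ${implied_q4_2011 / 1e6:.1f}M")   # ~$58.6M

        # Year-over-year cloud growth by quarter, as cited in the article:
        for quarter, rate in (("Q2 2012", 0.69), ("Q3 2012", 0.57), ("Q4 2012", 0.49)):
            print(f"{quarter}: {rate:.0%} YoY")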

    The company released its earnings after the market closed Tuesday and discussed them in an after-hours conference call with analysts. After the call, Stifel Nicolaus downgraded Rackspace’s shares, citing the revenue growth outlook for 2013. Other analysts also cited concerns that revenue growth may fall short of expectations.

    “Clearly, growth is slowing. That’s probably the primary driver as to why the stock is off so much,” Stephens Inc. analyst Barry McCarver told Reuters.

    “Year-over-year growth has now decelerated for five quarters in a row,” Cowen analyst Colby Synesael told Investors Business Daily.

    Rackspace CEO Lanham Napier described 2012 as a year of “execution and rebuilding, a year spent in the lab.”

    “Accelerated growth isn’t something that happens overnight or guaranteed to be linear given the variable on-demand nature of the cloud,” said Napier. “In the fourth quarter, we were pleased to see cloud growth accelerate. We are getting pilot projects with enterprise customers and these projects come before revenue.”

    Rackspace announced that office retailer Staples is using its new Open Cloud platform, based on OpenStack, which was deployed in October.

    “This project was one of the most difficult, exciting and strategic challenges that our company has ever pursued,” said Napier. “It was the largest product investment the company’s ever made and we believe it has tremendously improved the technical capabilities of our offering.”

    Napier remained enthusiastic about the “transformational” growth potential for cloud computing. “We believe the economics of the cloud will drive an explosion of new demand for computing, just as the proliferation of smartphones has driven explosive demand for new applications,” said Napier.

    As we’ve seen today, such projections cut both ways on Wall Street, where hot stocks walk a narrow line between enthusiasm and disappointment.

    3:45p
    Cloud News: CA Rolls Out CloudMinder Update

    News from the cloud computing sector includes developments from Dell and Wipro, Dimension Data and CA Technologies:

    CA rolls out CloudMinder update. CA Technologies (CA) announced the next rollout of its CA CloudMinder identity and access management (IAM) service. CloudMinder provides enterprise-grade IAM for both cloud-based and on-premises applications. New capabilities in the release include support for social identities (OpenID, OAuth, WS-Fed and SAML), extended support for on-premises and cloud applications, and more choice in how customers run their IAM operations on the back end. CloudMinder also presents a variety of opportunities for partners, including hosting the CA CloudMinder service operations in their own data centers and/or managing IAM administration on behalf of the customer in its tenant environment. “Identity and access management demands are evolving as organizations employ different ways to save money and grow the business, such as incorporating cloud services, adopting bring-your-own-device policies and supporting social identities as a way to get closer to the customer,” said Mike Denning, general manager, Security business at CA Technologies. “This evolution and customer demand drove our vision for CA CloudMinder to be able to function as that single bridge that manages the identities and access for employees, partners and customers to cloud or on-premise applications no matter what device or what identity they chose to use.”

    Dell Boomi and Wipro partner for Cloud-First.  Dell and Wipro Technologies announced that the two companies have partnered to enable enterprises with a ‘Cloud-First’ IT strategy to accelerate their on-demand agility and ability to painlessly scale operations. “Enterprises are looking for an affordable and fast on-ramp to cloud computing that will drive IT efficiency, connect their existing on-premise applications and data assets to those in the cloud, all while lowering capital expenditures,” said Robert Mahowald, research vice president, SaaS and Cloud Services, IDC. “Through this partnership Dell Boomi and Wipro will bring these capabilities to Wipro’s global customer base to optimize their investment in IT integration, management and services and help ensure their success as they incorporate cloud in their IT strategies.”  Dell also announced that its Boomi AtomSphere platform now exceeds 1 million cloud-managed integration processes per day.

    Dimension Data launches WAN optimization across cloud. Dimension Data introduced WAN optimization capabilities to its cloud globally. By deploying WAN optimization technology in Dimension Data’s Managed Cloud Platform (MCP) cloud data centers, Dimension Data clients are reporting a significant increase in application performance across the entire cloud. With WAN optimization controller (WOC) appliances deployed around the world, traffic moved between these sites is encrypted over a VPN tunnel and optimized for delivery using deduplication and application-specific protocol optimization. “With the addition of WAN optimization capabilities, we are helping our clients overcome the latency and bandwidth constraints often associated with public cloud services,” said Steve Nola, CEO of Dimension Data’s Cloud Solutions Business Unit. “No other cloud provider offers a core that is optimized for acceleration. By providing optimal network and cloud performance, in addition to flexibility and ease of use, we provide organizations the ability to speed the process of migrating their data and applications to the cloud.”
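    The data-reduction side of WAN optimization rests on deduplication: chunks the remote end has already seen are replaced by short references. The sketch below is a generic, fixed-size-chunk illustration of that principle, not Dimension Data’s implementation.

        # Generic illustration of WAN-optimization deduplication: only chunks the
        # far end has not seen before are sent in full; repeats become short
        # hash references. This is a toy sketch, not any vendor's implementation.
        import hashlib

        def dedup_transfer(data: bytes, seen_hashes: set, chunk_size: int = 4096) -> int:
            """Return the number of bytes that would cross the WAN link."""
            bytes_sent = 0
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i + chunk_size]
                digest = hashlib.sha256(chunk).hexdigest()
                if digest in seen_hashes:
                    bytes_sent += len(digest)      # already cached remotely: send a reference
                else:
                    seen_hashes.add(digest)
                    bytes_sent += len(chunk)       # first sighting: send the full chunk
            return bytes_sent

        seen = set()
        payload = b"A" * (4096 * 100)              # highly repetitive traffic
        first = dedup_transfer(payload, seen)      # first pass: one full chunk plus references
        repeat = dedup_transfer(payload, seen)     # repeat pass: references only
        print(f"first transfer: {first} bytes, repeat transfer: {repeat} bytes")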

    4:00p
    The Next-Generation Workspace Will Revolve Around Mobility and Virtualization

    The user landscape has evolved beyond the standard PC or even the laptop. Users now work across numerous devices, all requiring access to a central infrastructure. IT consumerization and BYOD have created a new type of user for whom mobility and virtualization play a vital role in productivity. In deploying new mobility and virtualization solutions, administrators must consider the underlying infrastructure as well as the all-important end-user experience.

    In April 2012, Cisco Systems commissioned Forrester Consulting to conduct a study to better understand mobility, virtualization, and other key technology initiatives enterprises are implementing to improve the productivity and flexibility of employees. The key areas of focus included the benefits, challenges and timelines for implementing workspace initiatives such as BYOD programs, desktop virtualization, application virtualization and security solutions. The study methodology was an in-depth survey of 325 global IT senior-level decision-makers in the US, Europe, and China.

    In this white paper, you can see the key findings that are shaping the direction of the mobile user market. Some of the findings include:
    • A new generation of workers requires IT policies and practices that support mass mobility.
    • Consumerization and mobility momentum are driving new technical and security requirements.
    • New delivery models and management strategies are evolving to help firms become more cost-effective and support these new initiatives.

    This white paper not only outlines the current mobile and virtualization market, it also helps visualize it: the responses gathered were measured and then graphed.

    Mobile workers are now truly globally distributed and will always require fast access to centralized resources. This is where good management and solid security practices can keep an environment running smoothly and optimally. The research presented in this white paper will help managers and administrators apply best practices to create a truly mobile, user-friendly environment. Some of the key recommendations include:
    • Make mobile devices and solutions a key platform for engaging with customers, employees and partners.
    • Anticipate supporting a wide variety of internally and externally provided mobile devices and solutions.
    • Prepare for complexity and variety to push the limits of policy, security and performance.

    As both the IT environment and the user continue to evolve, creating the right management and security policies will help administrators control the infrastructure – both internally and globally. Download this white paper today to see how the new mobile and virtual environment has shifted how IT organizations think about management and security.

