Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, September 11th, 2013

    1:41p
    Cisco Buys Flash Storage Specialist WHIPTAIL For $415 Million

    Cisco (CSCO) announced its intent to acquire solid state memory systems provider WHIPTAIL for approximately $415 million. As an addition to Cisco’s UCS strategy, WHIPTAIL will enhance application performance by integrating scalable solid state memory into the UCS fabric computing architecture.

    “We are focused on providing a converged infrastructure including compute, network and high performance solid state that will help address our customers’ requirements for next-generation computing environments,” said Paul Perez, vice president and general manager, Cisco Computing Systems Product Group.  “As we continue to innovate our unified platform, WHIPTAIL will help realize our vision of scalable persistent memory which is integrated into the server, available as a fabric resource and managed as a globally shared pool.”

    The acquisition evolves the Cisco UCS architecture by integrating data acceleration capability into the compute layer. Integrating WHIPTAIL’s memory systems with UCS at the hardware and manageability level will simplify customers’ data center environments by delivering the required performance in a fraction of the data center floor space, with unified management for provisioning and administration. UCS’s architectural advantages, such as built-in automation and high performance fabrics, complement WHIPTAIL’s high performance data services. UCS and WHIPTAIL, together with Cisco Nexus data center switches, will accelerate Cisco innovation and momentum in converged infrastructure.

    The Cisco-WHIPTAIL deal is the second SSD/Flash deal this week, following Western Digital’s acquisition of Virident.

    For more from WHIPTAIL, see “Data Storage in Flux – Time for a Radical Change?,” the Aug. 27 Industry Perspectives column from WHIPTAIL CEO Dan Crain.

    2:30p
    Best of the Data Center Blogs for Sept. 11

    Here are some of the notable items we came across in this week’s surfing of the data center blogs:

    The Soft Whisper That Big Data & Cloud Computing Should Treat as a Clarion Call - At Loose Bolts, AOL’s Mike Manos looks at the political component of site selection: “Big Data is becoming a dangerous game. To be fair, content and information in general has always been a bit of a dangerous game. In Technology, we just go on pretending we live under a Utopian illusion that fairness ultimately rules the world. It doesn’t. Businesses have an inherent risk collecting, storing, analyzing, and using the data that they obtain. Does that sound alarmist or jaded? Perhaps, but it’s spiced with some cold hard realities that are becoming ever more present every day and you ignore at your own peril.”

    Open Internet Exchange (Open-IX) – At the RagingWire blog, Annie George lays out the company’s perspective on the Open IX movement. “The biggest problem Open-IX is trying to solve, however, has nothing to do with geographic diversity or carrier treatment. It’s simple economics. In the United States, the major Internet exchanges are concentrated in the hands of a few data center companies and those companies charge carriers a premium for the right to participate in the exchange.”

    How to Make IaaS Work for Your Big Data Needs – At the Internap blog, Gopala Tumuluri looks at infrastructure choices for Big Data requirements: “The virtual, shared and oversubscribed aspects of multi-tenant clouds can lead to problems with noisy neighbors. Big data jobs are some of the noisiest, and ultimately everyone in the same shared virtual environment will suffer, including your big data jobs. An alternative is to build out dedicated infrastructure to alleviate these problems.”

    Why Virtualizing the Network is not the Same as Virtualizing the Server? - At the Cisco Data Center Blog, Archana Khetan reflects on news from VMworld: “In his keynote, VMware CEO Pat Gelsinger portrayed Network Virtualization as a very natural extension to what VMware accomplished in Server Virtualization. However market fundamentals and early drivers for Server Virtualization are not quite the same as Network Virtualization. Hence any comparison and contrast between the two should be understood and weighed on in their respective contexts.”

    Your In-Flight Internet experience is supported by the Enterprise Cloud - At the Verizon Terremark Enterprise Cloud Blog, Eric Horce of LiveTV provides a customer’s take: “For the past decade LiveTV (a wholly owned subsidiary of JetBlue Airlines) has enabled the flying public to “Enjoy the Journey” with the “LiveTV At Home in the Air” experience by providing live television directly to the passenger seat.  One of the challenges during the architecture of the product’s infrastructure was to be able to create a highly reliable and secure ground-based Order Processing and Content Management platform.”

    3:00p
    SGI Beefs Up HPC Installations With New Xeon Chips
    A row of SGI ICE X servers for high performance computing. SGI has updated the ICE X line with new Xeon E5-2600 v2 processors. (Photo: SGI)

    SGI announced support for the new Intel Xeon E5-2600 v2 product family and highlighted several customer success stories featuring the new Xeon in its ICE X, Rackable and Modular InfiniteStorage products. SGI says the Xeon-based servers, which incorporate Intel Turbo Boost, can lower total cost of ownership by as much as 66 percent while enabling greater workload consolidation and virtualization.

    SGI cited several cluster wins and installations using the Intel Xeon processor E5-2600 v2 at NASA, the Irish Centre for High-End Computing (ICHEC), T-Systems and AWE. ICHEC installed 10 racks of SGI ICE X servers with 8,320 cores of the new Intel Xeon processor E5-2600 v2, as well as a large 1.7TB UV2000 shared memory system and Intel Xeon Phi processors.

    Additionally, SGI installed 46 new racks of the ICE X product with the new Xeon processor E5-2690 v2 at NASA in less than four weeks, tying them into the existing Pleiades system for an overall 2.88 petaflop peak performance with no user downtime. NASA is using the expanded SGI ICE X machine to research 136 new planets, analyzing the data for signs of possible new Earths, among other questions. “NASA Ames continues to push the boundaries of compute performance,” said Piyush Mehrotra, chief of the NASA Advanced Supercomputer Division at Ames Research Center, Moffett Field, California. “The expansion of the Pleiades supercomputer provides increased computational power to support missions in aeronautics, Earth and space sciences, and space exploration across the agency.”

    “High-performance computing, once reserved to a privileged few, has become a fundamental driver of research and development competitiveness for all – ranging from small manufacturers to the largest national research labs,” said Rajeeb Hazra, vice president of Intel’s Datacenter and Connected Systems Group and general manager of its Technical Computing Group. “Intel Xeon processor E5-2600 v2 processors powering SGI’s unique platform solutions deliver outstanding performance and energy efficiency that enable customers to compete and win through a wide range of applications from modeling and analysis, to data driven science and analytics, and beyond.”

    3:30p
    Cedexis Updates Cloud Performance Comparison Tool

    Cloud and mobile app performance benchmarking company Cedexis announced significant upgrades to its Radar service, including new data sets and portal reports to help customers evaluate and select clouds, cloud regions and CDN platforms. The Radar service is a crowd-sourced collaboration spanning more than 80 clouds and CDNs and hundreds of leading enterprises, giving participants visibility into the performance of their clouds, CDNs and private data centers.

    New features include timeline trending of both the volume and geographic distribution of website and mobile app audiences. Users can also observe the time-based trending of page load times across the globe, or within specific countries, for correlation to announcements and campaigns.

    “As cloud and CDN adoption continues to accelerate, enterprise IT professionals need objective third-party data to make increasingly strategic cloud/CDN vendor and cloud region purchasing decisions,” said Robert Malnati, VP Marketing at Cedexis. “This latest upgrade of the Radar service provides IT decision makers with a much more detailed view of their existing and potential providers, and puts this information a tap or click away for quick decision support.”

    New features also compare cloud and CDN platforms. These include the ability to measure the real end-user experience impact of website and mobile app modifications, to visually compare the performance of existing cloud and CDN providers with platforms under consideration, to assess in real time the end-user impact of configuration changes, and to leverage more granular real end-user measurements for performance comparisons using the 25th, 50th, 75th and 95th percentiles, median and standard deviation.
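
    As an illustration of the kind of percentile comparison described above, the statistics involved can be reproduced in a few lines. This is a sketch only, not Cedexis code or its API; the provider names and latency samples are invented:

    ```python
    # Summarize simulated real-user page-load times (ms) the way the Radar
    # reports are described: 25th/50th/75th/95th percentiles and standard deviation.
    # Provider names and sample distributions are hypothetical.
    import random
    import statistics

    random.seed(0)
    samples = {
        "provider_a": [random.gauss(350, 80) for _ in range(1000)],
        "provider_b": [random.gauss(320, 120) for _ in range(1000)],
    }

    for name, times in samples.items():
        q = statistics.quantiles(times, n=100)           # 99 percentile cut points
        p25, p50, p75, p95 = q[24], q[49], q[74], q[94]  # p50 is the median
        print(f"{name}: p25={p25:.0f}  p50={p50:.0f}  p75={p75:.0f}  p95={p95:.0f}  "
              f"stdev={statistics.stdev(times):.0f}")
    ```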

    New compliance features include the comparative measurement of availability, latency and throughput between an enterprise’s origin server(s) and its cloud/CDN-delivered content, as seen by real end users, along with in-depth comparative measurements and detailed segmentation of page-load time data. Cedexis Radar will also automate performance alerts and provide the added context of peer-provider performance to help identify provider-specific issues.

    5:00p
    Cray, Super Micro, Mellanox Support the New Intel Xeon Processor

    In support of Intel’s launch of the Xeon E5-2600 v2, partners Cray, Super Micro and Mellanox all launched solutions centered on the new brawny processor.

    Cray

    Cray announced that the Cray XC30 series of supercomputers and the Cray CS300 line of cluster supercomputers are now available with the new Intel Xeon processor E5-2600 v2 product family. The new Intel Xeon processors will be featured across the complete line of Cray XC30 and Cray CS300 products, including both air and liquid cooled models. “Designing and building innovative, reliable and scalable supercomputing systems — that are also flexible — lies at the heart of our Adaptive Supercomputing vision, and adding the new Intel Xeon processors to our systems is another exciting step in that evolution,” said Peg Williams, Cray’s senior vice president of high performance computing systems.

    Super Micro

    Super Micro (SMCI) introduced new server and storage technologies ready to support the new Intel Xeon E5-2600 v2 families. Products taking advantage of the new processor include: FatTwin, new TwinPro² systems, 12Gb/s SAS3 solutions, SuperBlade, Xeon Phi coprocessor solutions, SuperStorage, SuperWorkstations and Embedded products.  “Our architecture advancements in FatTwin, TwinPro² and SAS3 12Gb/s solutions deliver the highest computing performance and energy efficiency with maximized PCI-E, memory and storage I/O bandwidth for unrivaled performance per watt, per dollar, per square foot. Our new server, storage and workstation solutions, combined with full integration and support services worldwide, help organizations minimize TCO and maximize ROI as they scale their business,” said Charles Liang, President and CEO of Supermicro.

    Mellanox

    Mellanox (MLNX) announced that its end-to-end, FDR 56Gb/s InfiniBand interconnect solutions provide industry-leading performance for compute-demanding applications running on Intel’s new Intel Xeon processor E5-2600 v2 product family. Applications running on the new Intel Xeon processors with Mellanox’s FDR 56Gb/s InfiniBand solutions demonstrate over 30 percent better performance. “We applaud Intel’s new Intel Xeon processor E5-2600 v2 product family, which will provide end-users with even greater application performance potential,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Together with Mellanox FDR 56Gb/s InfiniBand interconnect solutions, the new Intel-based platforms will ensure IT managers the highest performance for their compute demanding applications and provide the best return-on-investment in their data center server and storage upgrades.”

    7:00p
    Endurance International Group Files for $400 Million IPO

    Brought to you by The WHIR.

    Endurance International Group announced on Monday plans to raise $400 million in an IPO. EIG did not disclose how many shares of common stock it plans to sell or the price. The company plans to list on the NASDAQ under the ticker symbol EIGI.

    Based in Massachusetts, EIG is one of the top hosting companies in the United States, alongside GoDaddy, Web.com and United Internet, with brands including HostGator, Bluehost and FatCow in its hosting portfolio.

    United Internet was one of the first hosting companies to go public, in 1998, with IPO proceeds of around $60 million used to finance further growth. Web.com shares have strengthened considerably since its 2009 IPO. In June, Wix.com submitted a draft registration statement to the US Securities and Exchange Commission for a potential IPO of its ordinary shares.

    Over the last three years, EIG’s revenue tripled to $292.2 million, while its net losses grew from $44.3 million to $139.3 million, according to a report by Reuters.

    Endurance shareholders include Warburg Pincus and Goldman Sachs. Goldman Sachs, Credit Suisse and Morgan Stanley are lead underwriters to the offering.

    In August, Endurance brands HostGator, Bluehost, HostMonster and JustHost suffered an outage related to network issues in its Provo, UT data center.

    Original article published at: http://www.thewhir.com/web-hosting-news/endurance-international-group-files-for-400-million-ipo

    7:30p
    School Districts Hit by Data Center Failures

    It’s back to school time, but the new school year got off to a rough start for two public school systems due to data center failures that crippled their IT systems.

    In Oregon, the Beaverton School District experienced several days of disruption after an errant alarm set off a fire suppression system in the district’s data center, damaging hard drives and servers. That left Beaverton schools unable to use email or access class lists, student schedules and online textbooks. “It knocked all of the systems in the data center off line,” said Steve Langford, chief technology officer. “All of the systems that staff need to do their jobs.” District IT staff worked over the Labor Day weekend to replace the damaged systems.

    In California, the Davis Unified School District started the week without key IT services after the district’s servers overheated. An air conditioner unit failed Sunday, allowing the temperature in the server room to rise to 120 degrees F. “There’s incredible impact on everyone in the whole organization,” says the district’s Kim Wallace. “Students can’t access computers. Teachers can’t take attendance. Parents can’t email. We can’t email out. So I’ve seen more people on phones than I’ve ever seen in the last several months because there’s no other means of communication.” As of Tuesday, staff were still troubleshooting damaged equipment and lost data.

    8:00p
    From the Ground Up: Building an Efficient Data Center

    Shawn Mills is a technology entrepreneur, founding member and president of Green House Data. You can find him on Twitter at @tshawnmills.

    SHAWN MILLS
    Green House Data

    This article series focuses on new data center facility development for small or medium operators, people who focus more on managed services and infrastructure development than building construction. Previous entries included planning for expansion, selecting a site, finding incentives, and deciding whether a realty and/or design partner is right for you.

    Today, we will explore a bit more about designing an efficient facility, where design partners are the most useful and how site limitations and local ordinances force a compromise between the ideal infrastructure and realistic expectations.

    Using Space Efficiently to Minimize Building Footprint

    We covered some of the initial design process in our first post, namely deciding how large of a facility you need based on demand projections. A key factor for Green House Data was energy efficiency, a core aspect of our company value proposition and central to our business model. This also helped us determine facility design in many ways. When the design process reached a phase where we needed to settle on the building size, we worked backwards from our power goals.

    For instance, we knew we wanted an average of around 5-5.7 kW of cooling per cabinet in a 4 MW facility, and 4,000 kilowatts divided by this per-cabinet load works out to around 700-800 cabinets. We’re able to have a ratio of 7,500 sq ft of support space for every 15,000 sq ft of data center space, with a significantly higher cabinet density than comparably sized data centers, because we place our cooling equipment outside. Between CRAC units, air handlers and free cooling systems, the air conditioning equipment takes up a dramatic amount of traditional white space. By planning for the cooling systems to be installed outside, we were able to maximize the number of cabinets on the floor and squeeze every useful square foot out of the 35,000 sq ft building footprint. Of course, the smaller you can afford to make your building, the lower the total capital expenditure.
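
    The sizing arithmetic above is easy to sanity-check. Here is a minimal sketch using only the figures quoted in this post; the variable names and the script itself are illustrative, not Green House Data’s planning tool:

    ```python
    # Back-of-the-envelope cabinet count from a facility power budget.
    # Figures come from the paragraph above; everything else is illustrative.

    facility_power_kw = 4000                              # 4 MW facility
    kw_per_cabinet_low, kw_per_cabinet_high = 5.0, 5.7    # target average per-cabinet load

    max_cabinets = facility_power_kw / kw_per_cabinet_low   # ~800
    min_cabinets = facility_power_kw / kw_per_cabinet_high  # ~702

    print(f"Cabinet range: {min_cabinets:.0f}-{max_cabinets:.0f}")

    # Space budget: keeping the cooling plant outside the building frees white space.
    white_space_sqft = 15000
    support_space_sqft = 7500

    print(f"Support-to-white-space ratio: {support_space_sqft / white_space_sqft:.2f}")
    print(f"Cabinets per 1,000 sq ft of white space: {max_cabinets / white_space_sqft * 1000:.0f}")
    ```

    Run the same numbers with the cooling equipment on the floor, subtracting the CRAC footprint from the white space, and the density advantage of placing cooling outside becomes obvious.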

    Where a Design Partner Really Helps

    Designers are extremely helpful in reaching energy efficiency and physical space goals. We knew what kind of PUE we wanted in our limited space, and how many cabinets we wanted to fit. They made it happen, and we sorted out many other details along the way, such as:

    • What does the building look like?
    • How does it lay out on the site?
    • What are some of the electrical redundancy decisions to be made?
    • What is the right level of overall redundancy?

    The design process has three stages: schematic documents, design documents and construction documents. As of this posting, Green House Data is finalizing our schematic documents, which include a package with everything that needs to be built. This package is used for budgeting, and the budget in turn is used to create the design documents. You really see the balance between an over-engineered design and a “right-engineered,” highly reliable data center design. Budget and maximizing efficiency always play a factor in these decisions.

    We have set an efficiency investment target of a 5-7 year payback. Design engineers will attempt to meet your goals while maximizing reliability; however, it’s important to stay heavily involved to ensure you are getting the payback you are looking for. Once you get your hands on the schematic documents, you might need to dial back some of the expenditure, and the engineers will adjust the design accordingly for the next round.
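
    To make that 5-7 year target concrete, here is a hedged sketch of the simple payback math implied above. The dollar figures are hypothetical placeholders, not Green House Data’s actual numbers:

    ```python
    # Simple payback: years for an efficiency upgrade to pay for itself.
    # All inputs below are made-up examples for illustration only.

    def simple_payback_years(extra_capex: float, annual_savings: float) -> float:
        """Incremental capital cost divided by annual operating savings."""
        return extra_capex / annual_savings

    # e.g. a hypothetical $300,000 free-cooling upgrade saving $50,000/year in energy
    years = simple_payback_years(extra_capex=300_000, annual_savings=50_000)
    print(f"Payback: {years:.1f} years")   # 6.0 years -- inside the 5-7 year target
    ```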

    Moving from Design to Construction

    The design documents are used to get a bid from contractors for the actual construction. The construction crew and engineers work together to create the construction documents. While the plans are finalized, you can get rolling on site approval and permitting with local and state jurisdictions.

    Our experience is that you need to plan for a significant amount of time during this process. It just takes time. That was the case even though we worked with a city building department that was highly motivated to help move the process along efficiently. The closer you work with the permitting agencies, the smoother it can go. More on zoning and construction permits in our next post – stay tuned.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    8:30p
    AWS and DevOps Skills in High Demand, says Dice

    Recruiters are overwhelmingly searching for developers with skills using the Amazon Web Services (AWS) cloud platform, according to the latest “Mad Skills” report from Dice, which also reported strong demand for engineers with DevOps experience.

    Dice reported that 3,600 jobs were created in data processing, hosting and related services in the month of July. That’s the single best month of job growth in this category since June 1998.

    An analysis of recruiter searches also showed two very clear trends: AWS and DevOps are at the forefront of industry needs.

    “What we found was that within their searches, half included one specific provider: AWS,” said Howard Lee, CEO of Dice subsidiary WorkDigital and chief architect of Open Web. AWS was a clear frontrunner in recruiter searches, he said, with no clear number two in terms of providers. Nearly none of the searches distinguished between private, hybrid and public clouds.

    “We keep track of the types of searches, and we can look at how it changes over time,” said Lee. “August has been the peak for searches of cloud specific technologies.”

    Open Source Skills Loom Large

    The top 10 cloud-adjacent requests reveal an open source trend, with key requests including everything from Linux to configuration management systems like Chef and Puppet to programming languages like Python, Perl and Ruby.

    The driver is the cloud service providers that report in this category. Infrastructure positions are becoming more strategic and less task-based, and there’s a need for talent to support the rise of DevOps, the combined role of development and operations skills, and the continued rise of cloud.

    “What I would say is that we are seeing a trend, a transitioning of roles,” said Lee. “The skill set that you traditionally find in sysadmin people in non-cloud (scenarios) are migrating from those roles and into DevOps and cloud services roles.”

    What Dice calls the “superstar coupling” of development and operations now has nearly 500 jobs posted on any given day. This is a marked increase from slightly fewer than 200 last fall. The rise of DevOps has long been predicted by industry pundits, but this is hard employment data suggesting that companies are interested.

    9:00p
    With New Xeon Chips, Intel Addresses the Brawny Data Center
    The Intel Xeon E5 2600 v2 processor, which became available Tuesday.

    After addressing the growing market for optimized low-power workloads last week with the Atom C2000 processor, Intel turned to the brawny side of the data center Tuesday with the launch of the Xeon E5-2600 v2 product family, formerly code-named Ivy Bridge-EP.

    As a part of its journey to re-architect the data center, Intel says the new Xeon processors will provide a versatile solution for server, storage and networking workloads, and rapid delivery of data center services.

    Based on Intel’s 22-nanometer process technology, the E5-2600 v2 family features up to 12 cores, improves efficiency up to 45 percent over earlier Xeons, and delivers up to 50 percent more performance across a variety of compute intensive workloads.

    Unveiled at IDF

    Intel announced the Xeon E5-2600 v2 processor at its annual developer forum, IDF 2013, in San Francisco Tuesday. Following Intel’s software-defined infrastructure theme, the new Xeon processors provide a common, software compatible processing foundation and possess the features and tools to help transform data centers for the future.

    The new Xeon processors use less power when they are idle, a key factor in reducing energy usage, and support Intel Node Manager and Intel Data Center Manager software, which provide granular detail on power usage.

    “More than ever, organizations are looking to information technology to transform their businesses,” said Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel. “Offering new cloud-based services requires an infrastructure that is versatile enough to support the diverse workloads and is flexible enough to respond to changes in resource demand across servers, storage and network.”

    Intel Inside The Amazon Cloud

    To showcase the cloud capabilities of the new processors, Intel and Amazon Web Services (AWS) announced a new agreement to use the “Intel Inside” brand on Amazon cloud clusters, letting AWS customers know that the services it provides are using Intel technologies.

    Amazon and Intel have co-presented numerous sessions and product briefings on cloud, big data and HPC. AWS instances that exclusively use Intel Xeon processors – intended for basic to performance-intensive use cases – will now display the Intel brand. AWS is also adding the latest Xeon processor family to its data centers, with services available for customers later this year.

    Further support for Intel’s Xeon processor E5 family-based platforms will come from vendors such as Apple, Acer, Cisco, Dell, Fujitsu, HP, Hitachi, Oracle, Quanta, SGI, Supermicro and others. IBM introduced its NeXtScale Server platform Tuesday, combining high density and improved power efficiency and featuring Xeon E5-2600 v2 processors.

    The Xeon E5-2600 v2 is also featured in the current number one supercomputer in the world – the Milkyway-2, along with Xeon Phi coprocessors.

    Looking Beyond the Server

    While Intel chips are known for powering servers, the company’s new product rollouts have reflected its growing focus on networking and storage technologies.

    The new Xeon E5-2600 v2 product family accelerates efficient processing of network workloads commonly handled by proprietary offload engines and accelerators found in networking appliances. Using Intel’s Open Network Platform (ONP) server reference design, customers can use high-volume Xeon-based servers and industry open standards to consolidate virtualized networking applications. This allows customers to deliver the throughput and latency required for Software Defined Networking (SDN) and Network Functions Virtualization (NFV) workloads.

    Intel also announced the Intel Network Builders ecosystem, a program that allows partners to take advantage of Intel’s reference architecture platforms to accelerate SDN and NFV deployments.

    “Organizations are looking for more open, industry standard technology to support complex IT demands, whether they are cloud-based applications, support for virtualized environments or for replacing expensive appliances, such as firewalls, VPNs, and edge routers,” said Werner Schaefer, vice president of Market and Business Development, HP Servers. “These reference designs with innovative HP ProLiant Gen8 Servers and HP Networking solutions allow customers to consolidate networking workloads, reduce deployment costs and shorten provisioning time.”

    On the storage front, the new Xeon E5-2600 v2 processors enhance data reliability and enable in-line deduplication, delivering up to 2.2 times the hashing algorithm performance and a 3.5 times improvement in I/O bandwidth. Dell has selected the new processors for its upcoming storage solution. “The Intel Xeon processor E5 v2 family provides a great hardware base for Dell’s high performance, innovative solutions like our PowerEdge VRTX and intelligent tiered Compellent storage solutions,” said Forrest Norrod, vice president and general manager of Dell Server Solutions.

    The Intel Developer Forum 2013 conversation can be followed on Twitter hashtag #IDF2013.

