Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, October 30th, 2013

    12:30p
    Scaling Holiday Shopping With More Sales Per Second

    Ajay Nilaver is the Vice President of Products at Fusion-io where he oversees the development of flash memory solutions that accelerate IT systems for enterprises world-wide. Connect with Fusion-io on Twitter via @Fusionio.

    AJAY NILAVER
    Fusion-io

    “Black Friday-creep” is a new pop culture phrase emerging in the U.S. media as Macy’s makes headlines with its plan to open its department stores on Thanksgiving Day 2013 for the first time, rather than waiting until midnight on Black Friday. Amidst a global boom in online shopping, the trend highlights the differences in how brick and mortar retailers are trying to scale sales, compared to how online vendors prepare for the holiday rush.

    In the physical world, retailers process transactions at the cash register in minutes, whereas online retailers process numerous transactions per second. While payment processing companies are accelerating their infrastructure to complete point-of-sale credit card charges in seconds, cashiers still need to scan and bag items before payment can be processed. In retail stores, the transactions-per-minute sales equation means that adding more hours to the sales day may be one of the only ways to nudge sales records higher during this year’s Black Friday sales spree.


    In contrast to the brick-and-mortar approach, online retailers process transactions in seconds, removing many limitations on how many sales can be rung up during Black Friday mania. With many online shops lowering prices and adding deals once the clock strikes midnight on Thanksgiving Day, sales once held on Cyber Monday now take place all weekend long as digital retailers rack up sales around the clock.

    Online Retailers Get Ready

    Popular online shoe vendor Zappos.com prepared for the holiday sales crunch by adding flash memory to its IT infrastructure. With flash-powered servers, Zappos was able to handle many more concurrent users browsing its MySQL database as they searched for the shoes, sizes and colors showcased on its website. By using caching with flash memory, Zappos improved page load times, keeping customers clicking to support sales even under high holiday shopping demand.
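    The caching approach described here follows the common cache-aside pattern: check a fast, flash-backed cache first, fall back to the database on a miss, and populate the cache for subsequent requests. A minimal sketch of the pattern (the `cache` dict and `fetch_from_db` function are hypothetical stand-ins, not Zappos's actual stack):

```python
# Cache-aside lookup: a dict stands in for a flash-backed cache tier,
# and fetch_from_db() stands in for a slow MySQL query.
cache = {}

def fetch_from_db(product_id):
    # Placeholder for a database query (e.g., a shoe size/color lookup).
    return {"id": product_id, "sizes": [8, 9, 10], "colors": ["black", "brown"]}

def get_product(product_id):
    if product_id in cache:             # cache hit: served from the fast tier
        return cache[product_id]
    result = fetch_from_db(product_id)  # cache miss: query the database
    cache[product_id] = result          # populate the cache for the next shopper
    return result
```

    The payoff is that only the first shopper to request a given product pays the full database round-trip; everyone after is served from the faster tier.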

    In China, online retailers have an even larger audience to support compared to U.S. retailers. Compounding the visitor volume challenges, many more customers shop on mobile devices, meaning that the infrastructure needs to provide a seamless experience across computers, cell phones and tablets.

    Flash Memory Accelerates Databases

    China’s largest B2C online retailer, JD.com, was able to support its largest sales promotion event ever after adding flash memory to its infrastructure to accelerate its OLTP databases. The company delivered 9x faster responses to queries on its website, helping scale transactions and sales success.

    The current global record for one-day sales was actually set in China last November, with Alibaba Group conducting an astounding $3.06 billion in sales on Singles Day (11/11), China’s shopping equivalent to Black Friday. In contrast, comScore reported that total Black Friday sales in the U.S. broke through the one billion dollar barrier for the first time last year. Alibaba Group’s online payment processing provider, Alipay, used flash memory to support more than 100 million payments during the event, making the record possible.

    Shopping in stores on Black Friday is a tradition for many Americans, and it’s not going to vanish in favor of e-commerce any time soon. Whether on websites or in physical stores, we all hate to wait, so boosting transaction speeds in every sales channel improves the shopping experience for both the retailer and its customers. Long lines – on the scale of seconds online, or minutes in store – mean there is more latency between the shopper and the sale, so it will be interesting to see which retailers introduce new ways to streamline the checkout process during this year’s holiday shopping season.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:20p
    Research: Liquid Cooling Can Reduce Data Center Energy Use

    A server tray using Asetek’s Rack CDU Liquid Cooling system, which was the focus of research discussed at the SVLG Data Center Efficiency Summit. The piping system connects to a cooling distribution unit. (Source: Asetek)

    PALO ALTO, Calif. - Liquid-cooled servers at a data center at the Lawrence Berkeley National Laboratory used less energy in certain circumstances than air-cooled servers on site, a lab researcher said Tuesday at the sixth annual Data Center Efficiency Summit hosted by the Silicon Valley Leadership Group (SVLG).

    The tests evaluated Asetek’s RackCDU low-pressure hot-water CPU and memory cooling technology, connected to 38 Cisco servers in a rack at the data center. The servers’ power use was compared with that of more traditional air-cooled servers, said Henry Coles, who works on infrastructure cooling systems at the Lawrence Berkeley lab.

    The researchers tested for energy consumption with idle server loads, 50 percent server loads and 100 percent server loads. In most of the tests at full server loads, Coles said, the cooling water captured more than half of the servers’ heat.

    Coles and his team used modeling software to project how much energy the system would use if implemented at a larger scale.

    “This direct cooling technology should provide a significant reduction in total data center energy used,” he said. He estimated a 16 to 20 percent energy reduction for a server load of 50 percent.

    The research also indicated that, given the climate conditions of the modeling, using colder water when it’s economically feasible would translate to lower energy consumption. Different climates might yield different results, though, Coles said.
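    The logic behind projections like these can be sketched with a toy model: the more server heat the liquid loop captures, the less heat must be removed by energy-hungry air cooling. The overhead figures below are illustrative assumptions, not the lab's measured data or actual model:

```python
def projected_facility_energy(it_kw, heat_to_water, air_overhead=0.5, water_overhead=0.1):
    """Toy model: total facility power = IT power + cooling power.

    heat_to_water: fraction of server heat captured by the liquid loop.
    air_overhead / water_overhead: cooling energy spent per unit of heat
    removed by air vs. water (illustrative values only).
    """
    cooling = it_kw * ((1 - heat_to_water) * air_overhead + heat_to_water * water_overhead)
    return it_kw + cooling

baseline = projected_facility_energy(100, heat_to_water=0.0)  # all-air cooling
hybrid = projected_facility_energy(100, heat_to_water=0.6)    # 60% of heat to water
savings = (baseline - hybrid) / baseline                      # fractional energy reduction
```

    With these made-up overheads, capturing 60 percent of server heat in water yields a 16 percent facility-level reduction, in the neighborhood of the 16 to 20 percent range Coles estimated.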

    High-Density or Lower Energy Use?

    Liquid cooling has typically been used as a tool for handling high-density server installations, which generate more heat and thus are harder to manage using standard air cooling techniques. Water is significantly more efficient than air as a medium for heat removal. There are tradeoffs in using liquid cooling, however, including the cost of the equipment. If liquid cooling can reduce power usage, it could change the economics of purchasing different types of equipment.

    An audience member asked about return on investment for the liquid-cooling equipment, and Coles had no clear answers.

    “That’s harder to do than this (energy modeling),” he said. “You get more opinions.”

    While liquid cooling has been deployed in supercomputing facilities and evaluated by webscale companies, it has started to make inroads inside more traditional data centers as well.

    Asetek is certainly going after that market segment, and publicly available data on total cost of ownership with its equipment could boost the gear’s appeal. The company was showing Cisco, Cray and Hewlett-Packard prototype servers packed with hot-water cooling systems at its exhibition booth, and it has previously shown a similar configuration with a Fujitsu server.

    Indeed, commercially available products from such original-equipment manufacturers could increase adoption of liquid cooling, Coles said after his presentation.


    Henry Coles, a researcher at Lawrence Berkeley National Lab, discusses his research on liquid cooling and energy efficiency at Tuesday’s Data Center Efficiency Summit in Palo Alto, Calif. (Photo: Jordan Novet)

    5:20p
    Digital Realty Sees Weaker Outlook, Prompting Selloff in Data Center Stocks

    The exterior of 111 8th Avenue, where Digital Realty extended its lease in 2010. The company just took a $10 million charge related to the lease extension.

    Stocks of data center providers fell sharply today after the sector’s largest company, Digital Realty Trust, lowered its revenue guidance for the coming year, saying enterprise tenants were deploying new data center space more slowly than expected. The company also took a $10 million charge tied to an accounting error in revenue from a lease renewal at 111 8th Avenue in New York.

    Digital Realty’s stock plunged nearly 16 percent on the news, dropping $9.12 to $48.90 in midday trading. Shares of other publicly held data center providers headed lower on the news, including DuPont Fabros (down 6 percent) and CyrusOne (off 4.7 percent). Two other companies, CoreSite and Equinix, were 3 percent lower.

    Digital Realty said it would buy back up to $500 million of its own shares to support its stock price. The company said it would also focus on selling data center space to smaller “mid-market” companies that are less likely to experience delays in deploying new space.

    Slower Expansion Than Expected

    Executives of Digital Realty said that some of the company’s largest enterprise customers were delaying data center deployments within their leased space. Data center space is built and occupied in phases to manage the high cost of construction. Enterprise tenants lease large amounts of space to lock down capacity for future growth, but landlords don’t begin receiving rent until the tenant moves into the new space. Digital Realty builds space in 1.1 megawatt suites, known as “pods.”

    Some of these large tenants are moving into the first phase of their space, but then taking longer than expected to move into the second and third phases of their deployments. This is particularly true with tenants who are outsourcing for the first time, and have miscalculated how soon they would need additional space.

    “In many cases, their timeframes slid more than they thought, and we haven’t been conservative enough in leaving wiggle room in these timelines,” said Mike Foust, the CEO of Digital Realty.

    Slower Payments, Not Lower Revenue

    Foust emphasized that these tenants are still responsible for the leases, but it’s taking longer for Digital Realty to realize revenue from those deals.

    “We are disappointed with the third quarter financial results, but the robust leasing velocity gives us confidence in the underlying health of the business as well as customer demand for Digital Realty’s data center solutions,” said Foust. “While lease commencements have lagged our initial expectations, the solid backlog of leases signed-but-not-yet-commenced represents contractual obligations for future rental revenue, and sets the stage for healthy growth in cash flows over the intermediate term.”

    Given the delayed commencements, Digital Realty has trimmed its revenue guidance, lowering its projected funds from operations (FFO) to a range of $4.60 to $4.62 per share, down from the previous range of $4.73 to $4.82.

    “We believe our revised outlook is a realistic assessment,” said Foust.

    $10 Million Charge Tied to Lease at 111 8th Avenue

    The third quarter results include a $10 million non-cash rent expense adjustment related to the company’s lease at 111 8th Avenue in New York. In September 2010, Digital Realty signed a 10-year extension of its lease, pushing the expiration of its lease from June 2014 to June 2024. This appears to have occurred just prior to the building’s sale to Google, which is reportedly inclined to let data center leases in the building expire so it can claim that space for its growing New York business offices (see Internap to Move Out of Major Manhattan Data Hub).

    The lease extension allowed Digital Realty to lock down extremely valuable space in a key building, but the company failed to adjust the straight-line rent expense when the lease was modified in September 2010. The $10 million adjustment for this quarter reflects a “catch-up” of rent expense that should have been recorded.
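    Under straight-line rent accounting, total scheduled rent over the lease term is expensed evenly, so modifying a lease changes the periodic expense from the modification date forward. A sketch of how such a catch-up is computed, using made-up numbers rather than Digital Realty's actual figures:

```python
def straight_line_catch_up(annual_rents, recorded_expense, periods_elapsed):
    """Catch-up adjustment = cumulative straight-line expense owed to date
    minus the expense actually recorded.

    annual_rents: scheduled cash rent for each year of the (modified) term.
    """
    straight_line = sum(annual_rents) / len(annual_rents)  # even annual expense
    owed_to_date = straight_line * periods_elapsed
    return owed_to_date - recorded_expense

# Hypothetical: a 14-year modified term with rent escalating $1M/yr from $10M,
# while only a flat $10M/yr was expensed over the 3 years since modification.
rents = [10 + i for i in range(14)]  # $10M, $11M, ... $23M (in $M)
adjustment = straight_line_catch_up(rents, recorded_expense=30, periods_elapsed=3)
```

    Because the escalating rent schedule averages higher than the flat expense actually booked, the shortfall accumulates each year until a one-time non-cash adjustment brings the books current.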

    The drop in Digital Realty’s stock price may summon memories of a headline-making May presentation in which hedge fund Highfields Capital Management asserted that investors should short shares of Digital Realty Trust, saying the huge data center developer was understating the future investment in facilities that would be required to support its enterprise customers. Digital Realty said Highfields was “mischaracterizing” its disclosures, but subsequently adjusted its accounting for capital expenditures for data center maintenance.

    But Foust emphasized that leasing remains strong, and insisted that industry fundamentals were sound.

    “We signed new leases totaling $47 million of annualized GAAP rental revenue during the third quarter, including nearly $28 million since the second quarter earnings call in late July,” he said. “This represents the third-highest quarter in the company’s history, and the dollar volume of leases signed year-to-date has already exceeded the full-year activity in 2012.”

    5:44p
    Ixia Acquires Net Optics For $190 Million

    Ixia agrees to acquire Net Optics for $190 million, Fortinet boosts its ADC portfolio by acquiring Coyote Point, and Fusion-io enhances its ION Data Accelerator all-flash storage solution.

    Ixia acquires Net Optics

    Ixia (XXIA) announced that it has entered into a definitive agreement to acquire Net Optics, a provider of total application and network visibility solutions. The $190 million deal is a cash-free/debt-free transaction, subject to certain adjustments based on Net Optics’ net working capital, and is expected to close in the fourth quarter of 2013. With the acquisition, Ixia will be able to leverage Net Optics’ industry-leading active monitoring capabilities with its patented inline and bypass technologies, which go beyond passive network monitoring to deliver high availability of security and monitoring tools.

    “Next-generation cloud providers, mobility operators and enterprises demand more visibility into their global networks in order to maintain quality of service across virtualization, application and service delivery,” said Errol Ginsberg, Ixia Chairman and Acting CEO. “The acquisition of Net Optics solidifies our position as a market leader with a comprehensive product offering including network packet brokers, comprehensive physical and virtual taps and application aware capabilities. Additionally, the acquisition strengthens our service provider customer base, increases our footprint in the enterprise, and broadens our sales channel and partner programs.”

    Fortinet to acquire Coyote Point

    Fortinet (FTNT) announced that it has entered into a definitive merger agreement to acquire Coyote Point Systems,  a privately-held leading provider of enterprise-class application delivery, load balancing and acceleration solutions. “This acquisition complements Fortinet’s Network Security strategy and allows the company and our channel partners to accelerate and further deliver on our vision of providing complete and comprehensive security into the enterprise,” said Ken Xie, founder, president and chief executive officer of Fortinet. “Furthermore, we expect this acquisition will generate synergy among existing Fortinet products, including our FortiGate, FortiBalancer, FortiDDoS and FortiWeb platforms.”

    “While Coyote Point has built a top class ADC product portfolio and loyal customer base, there has always been a trade off around resources,” said Bill Kish, CEO and founder of Coyote Point. “We look forward to being a member of the Fortinet family and the opportunity to make strategic technology investments in our ADC platform.”

    Fusion-io enhances ION Data Accelerator

    Fusion-io (FIO) announced new updates to its ION Data Accelerator all-flash shared storage solution. With broader high availability, increased performance and scalability, as well as simplified sharing and manageability, the new features enhance the appeal of ION Data Accelerator as an ideal flash-based storage consolidation solution for enterprises seeking to add a record-breaking flash performance tier for Oracle and Microsoft SQL Server databases and other mission critical applications. ION Data Accelerator 2.2 now supports HA deployments in high performance InfiniBand and iSCSI environments for critical applications like Oracle and Microsoft SQL Server databases.

    “Fusion ION Data Accelerator delivers outstanding performance for enterprise database workloads and has also recently set a number of world records in VMware VMmark scores with our OEM partners,” said Afshin Daghi, Fusion-io Vice President of Systems Engineering. “Fusion ION Data Accelerator removes storage-related performance bottlenecks to maximize performance for enterprise applications.  This new release enhances the enterprise appeal of the ION Data Accelerator solution with features that deliver expanded options for high availability, scalability and management across the datacenter.”

    6:30p
    More Servers! Verizon Adds Capacity for HealthCare.gov

    Servers inside a Verizon Terremark data center. The company is adding more servers and storage to support the HealthCare.gov web site. (Photo: Verizon Terremark)

    Can more hardware solve the problems plaguing HealthCare.gov? The troubled web site was taken offline Tuesday night after the Obama administration asked service provider Verizon Terremark to beef up the infrastructure supporting the site. Verizon said it was adding server and storage hardware to try to stabilize the online health insurance marketplace, which has been experiencing downtime and performance problems since it launched on Oct. 1.

    “Since HHS asked us to provide additional compute and storage capacity, our engineers have worked 24/7 to trouble-shoot issues with the site,” said Jeff Nelson, vice president of global corporate communications for Verizon Enterprise Solutions. “At the request of HHS’s deputy CIO, we are now undertaking infrastructure maintenance, which should be complete overnight. We anticipate the strengthened infrastructure will help eliminate application downtimes.

    “Verizon is committed to supporting our HHS client and stabilizing their www.healthcare.gov website,” Nelson said.

    The downtime follows an outage that began Sunday when Verizon Terremark experienced networking problems, which impaired communications between parts of the Healthcare.gov application. Verizon Terremark emphasized that the outage was caused by a network problem within the infrastructure supporting the site, rather than an outage for an entire physical data center.

    The problems for HealthCare.gov are expected to be discussed on Capitol Hill today when U.S. Health and Human Services Secretary Kathleen Sebelius testifies before Congress.

    It remains to be seen whether adding server and storage capacity can bring meaningful performance improvements to HealthCare.gov. Amid finger-pointing among contractors and the administration, outside analysts have suggested the site has major weaknesses in its architecture, poor coding and that the site was inadequately tested prior to launch. Web site performance experts say the site was poorly optimized to handle heavy loads of users.

    “The trouble the government is facing when taking itself online is that people have come to expect the high level of availability and performance they get from Amazon and Google with every web site they use,” said Kent Asland, Vice President of Acceleration at Radware, which specializes in application delivery. “In our experience supporting high volume, high performance web sites, this is a challenge for vendors of all sizes. This is a solvable problem that will likely take a significant effort to bring up to standards we expect.”

    8:20p
    Snowden: NSA is Tapping Global Fiber Links for Google, Yahoo

    Citing confidential documents leaked by former sysadmin Edward Snowden, The Washington Post is reporting that the National Security Agency (NSA) is tapping fiber lines connecting the overseas data centers operated by Google and Yahoo.

    “According to a top secret accounting dated Jan. 9, 2013, NSA’s acquisitions directorate sends millions of records every day from Yahoo and Google internal networks to data warehouses at the agency’s Fort Meade headquarters,” the Post writes.

    “In the preceding 30 days, the report said, field collectors had processed and sent back 181,280,466 new records — ranging from ‘metadata,’ which would indicate who sent or received e-mails and when, to content such as text, audio and video.”

    The Post doesn’t specify the method being used to access this data, but outlines several methods by which these interceptions could be accomplished:

    • The NSA may have developed ways to tap directly into Google’s privately owned network between its data centers.
    • The NSA’s British counterpart, GCHQ, may have induced or compelled a third-party operating a cable landing station, multi-tenant data center or Internet exchange to install surveillance equipment on Google’s private cables.

    These possibilities are laid out in an infographic prepared by The Washington Post.

    We’d like to hear from the data center community, so we welcome your comments. Does the Post’s account sound feasible? What steps, if any, can be taken by data centers, cable landing firms and Internet exchange providers to address the methods described in the story?

    9:08p
    Internap Acquires Montreal Web Hosting Provider iWeb for $145M

    The exterior of Internap’s new expansion of its Boston data center. Internap said today that it will acquire hosting provider iWeb for $145 million. (Photo: Internap)

    Brought to you by The WHIR.

    Internap announced on Wednesday its plans to acquire iWeb, a Montreal-based web hosting and colocation services provider, for $145 million. Internap said that iWeb’s dedicated and cloud hosting offerings will complement its existing portfolio of bare-metal and virtual cloud, managed hosting and colocation services, while adding scale.

    iWeb has four data centers in Montreal, and serves more than 10,000 SMB customers in more than 100 countries, with approximately 200 employees. In its fiscal year ending September 30, 2013, iWeb delivered approximately $44 million revenue and $11 million EBITDA.

    “Internap’s acquisition of iWeb has a number of angles that could drive some interesting synergies for Internap. There are some common technologies running with both firms, and the billing system – typically something that is difficult or never properly integrated in mergers – runs Ubersmith (which Internap owns) on both sides,” said Philbert Shih, managing director of Structure Research. “Internap also acquires scale, hosting revenues, more data center footprint, entry into a new geography and language skills that cross over several countries and regions. Perhaps the area with the most potential to deliver value is the upside of a wider-ranging portfolio and migration path.”

    Internap and iWeb also are both active participants in the OpenStack Foundation. The combined team will double the size of Internap’s IaaS R&D resources, according to the announcement.

    iWeb also brings to Internap a number of server options, including its proprietary Smart Servers solution that combines virtualization with dedicated hosting.

    “iWeb fits perfectly into our strategy to deliver a comprehensive portfolio that can serve the needs of our customers at every stage of their business lifecycle, from an initial startup wanting a single dedicated server to a scale-out Internet app provider or global enterprise requiring a hybrid solution across multiple data centers around the world,” said Eric Cooney, president and chief executive officer of Internap. “This combination represents a milestone in the transformation of Internap’s business to a leading IT infrastructure services provider that can deliver on customers’ complete range of IaaS demands.”

    The acquisition of iWeb will build on Internap’s Voxel acquisition, which closed at the end of 2011 for $30 million. In 2012, Internap launched its Agile Hosting services and a new online configurator, based on its acquisition of Voxel.

    In the third quarter Internap reported revenue of $69.6 million, up 2 percent over Q3 2012, with data center services revenue of $45.5 million, up 8 percent versus Q3 2012.

    Original article published at: http://www.thewhir.com/web-hosting-news/internap-acquires-montreal-web-hosting-provider-iweb

    11:43p
    Windows Azure Cloud Hit by Management Issues

    Microsoft’s Windows Azure cloud computing platform experienced problems Wednesday, as customers were unable to perform management functions or upload files to web sites hosted on Azure.

    The Windows Azure status dashboard reported issues with using FTP to upload files to web sites, although sites could publish content using Web Deploy or Git. The dashboard also reported widespread management problems for the Azure Compute cloud.

    “We are experiencing an issue with Windows Azure Compute that may impact Service Management operations in the North Central US, South Central US, North Europe, Southeast Asia, West Europe, East Asia, East US and West US sub-region,” Microsoft reported at 6:20 pm Eastern time. “Manual actions to perform Swap Deployment operations on Cloud Services may error, which will then restrict Service Management functions. At this time we advise customers to delay any Swap Deployment operations. We are taking all necessary steps to mitigate this incident for the affected hosted services as soon as possible.”

