Data Center Knowledge | News and analysis for the data center industry
 

Thursday, November 3rd, 2016

    12:00p
    How To Reduce Data and Network Latency Where Others Fail

    David Trossell is CEO and CTO of Bridgeworks.

    Data is the lifeblood of business, and a slow data transfer rate makes it harder to analyze, back up, and restore it. Many organizations battle data latency on a daily basis, hampering their ability to deliver new digital products and services, remain profitable, manage customer relationships, and retain operational efficiency. Data latency is a serious business issue that needs to be addressed. Network latency, by contrast, is a technical issue, but the two are closely linked.

    Tackling Latency

    There is very little you can do to reduce network latency itself; the only real lever is to move data centers or disaster recovery sites closer together. That has often been the traditional approach, but from a disaster recovery perspective it can be disastrous, because your data centers can end up inside the same circle of disruption. Ideally, data centers and disaster recovery sites should be placed farther apart than is traditionally practiced, to insulate your data from the impact of any man-made or natural disaster.
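
    To put a number on the distance problem: the speed of light in optical fiber (roughly 200,000 km per second, a standard approximation rather than a figure from the article) sets a hard floor on round-trip time, before any routing or queuing delays are added. A minimal sketch:

        # Rough lower bound on round-trip time (RTT) imposed by distance alone.
        # Assumes ~200,000 km/s propagation in fiber (about two-thirds of c) and
        # ignores routing, queuing, and equipment delays.
        SPEED_IN_FIBER_KM_PER_S = 200_000

        def min_rtt_ms(distance_km: float) -> float:
            """Minimum round-trip time in milliseconds for a one-way fiber distance."""
            return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

        for km in (50, 500, 5000):  # nearby DR site, cross-country, intercontinental
            print(f"{km:>5} km  ->  at least {min_rtt_ms(km):.1f} ms RTT")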

    Yet companies are having to move data further and further away, at ever-increasing network speeds. Latency within the data center is very small these days, so latency has its greatest impact on transfer rates when data moves outside the data center to the cloud, or across the internet to customers.

    The response by many organizations is to address latency with traditional WAN optimization tools, which have little impact on latency or data acceleration. Another strategy is to increase the organization’s bandwidth with a high-capacity pipe, but again, this won’t necessarily accelerate the data or reduce latency and packet loss to the required levels. The practical answer, when data has to move over long distances, is not to eliminate latency but to mitigate its effects so that data can still be accelerated across the WAN.
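
    A short sketch of why a bigger pipe alone does not help: a single TCP stream's throughput is roughly capped by its window size divided by the round-trip time, so capacity above that ceiling simply goes unused. The 64 KB window and RTT values below are illustrative assumptions, not figures from the article:

        # Per-stream TCP throughput ceiling: roughly window_size / RTT, regardless of
        # how much bandwidth is provisioned. Real stacks vary (window scaling,
        # congestion control), so treat these numbers as order-of-magnitude only.
        def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
            """Upper bound on single-stream TCP throughput in Mbit/s."""
            return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

        window = 64 * 1024          # a classic 64 KB TCP window
        for rtt in (1, 20, 80):     # LAN, regional WAN, intercontinental WAN (ms)
            cap = max_tcp_throughput_mbps(window, rtt)
            print(f"RTT {rtt:>3} ms -> at most ~{cap:,.0f} Mbit/s per stream")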

    Traditional Response

    Traditional WAN optimization vendors give the impression of reduced latency by keeping a copy of the data locally, so the perception is that latency has been reduced because you aren’t going outside the data center. In reality, the latency still exists, because traffic must still cross the WAN between the two sites. More to the point, things have changed over the last 10 years. Eighty percent of data used to be generated and consumed internally and 20 percent externally. Disasters tended to be localized, and organizations had to cope with low bandwidth, low network availability, and a high cost per megabit. The data types were also highly compressible, and the data sets were small. WAN optimization was, therefore, the solution, because it used a local cache, compression and deduplication, and locally optimized protocols.
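
    For readers unfamiliar with how that local cache works, the sketch below shows the chunk-and-hash deduplication idea in miniature. The fixed chunk size, hash choice, and in-memory cache are illustrative assumptions; real appliances add compression, variable-size chunking, and protocol-level optimizations, and send a short reference in place of each cached chunk:

        # Minimal deduplication sketch: split data into chunks, hash each chunk, and
        # transmit only chunks the far-end cache has not seen before. (Sending the
        # references that let the far end reassemble cached chunks is omitted here.)
        import hashlib

        CHUNK_SIZE = 4096
        remote_cache = set()  # digests the far-end appliance already holds

        def chunks_to_send(data: bytes):
            """Yield (digest, chunk) pairs for chunks not yet in the remote cache."""
            for i in range(0, len(data), CHUNK_SIZE):
                chunk = data[i:i + CHUNK_SIZE]
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in remote_cache:
                    remote_cache.add(digest)
                    yield digest, chunk

        payload = b"A" * 8192 + b"B" * 4096   # two identical chunks plus one unique chunk
        sent = list(chunks_to_send(payload))
        print(f"{len(sent)} of 3 chunks actually transmitted")  # 2 of 3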

    Changing Trends

    Today, everyone has moved from slow-speed connections, where everything had to be compressed, to big pipes, now that prices have come down. In the past, data was produced faster than the pipe could carry it; now, large volumes of data can be accelerated without local caches. This is why many companies are transitioning from WAN optimization to WAN acceleration.

    It should also be noted that the data scenario of 10 years ago has been reversed: only 20 percent of data is now generated and consumed internally, with 80 percent coming from external sources. Disasters often have a wider impact today, and organizations have to cope with ever-larger data sets created by growing volumes of big data. Another recent trend is the increased use of video for videoconferencing, marketing, and advertising.

    Firms are also now enjoying higher bandwidth, increased availability, and a lower cost per megabit. Files tend to be compressed, deduplicated, and encrypted across globally dispersed sites. This means a new approach is needed, one that requires no local cache, does not alter the data, and works with any protocol to accelerate traffic and reduce both data and network latency.

    Strong Encryption

    Converged systems can help to address these issues, but strong encryption is also needed, along with the ability to shorten the window of opportunity for hackers to intercept data flows: if you are being attacked, you need to be able to move data offsite quickly. This can be achieved with tools that use machine intelligence to accelerate data across long distances without changing it.

    With data acceleration and mitigated latency, it becomes possible to site data centers and disaster recovery facilities far from each other, outside their own circles of disruption. This approach also offers a higher degree of security, because large volumes of data can be transferred within minutes or even seconds, denying a hacker the chance to do serious harm.
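
    A rough sense of the transfer windows involved (the 10 TB data set and the effective rates below are illustrative assumptions, not vendor figures):

        # Back-of-the-envelope transfer times: how long a data set stays "in flight"
        # at different effective transfer rates.
        def transfer_time_minutes(data_tb: float, effective_gbps: float) -> float:
            """Time to move data_tb terabytes at effective_gbps gigabits per second."""
            bits = data_tb * 1e12 * 8
            return bits / (effective_gbps * 1e9) / 60

        for rate in (0.5, 5, 50):  # latency-throttled link vs. progressively better acceleration
            print(f"10 TB at {rate:>4} Gbit/s effective -> {transfer_time_minutes(10, rate):,.0f} minutes")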

    Organizations should, therefore, look beyond the traditional WAN optimization players to smaller and more innovative ones that can mitigate data and network latency whenever data is transmitted, received, and analyzed over long distances. Their future competitiveness, profitability, customer relationships, and efficiency may depend on it. So it’s time to look anew at latency and accelerate your data no matter where it resides, in the data center or outside its walls.

    4:16p
    DCK Exclusive: Chinese Data Center Leader GDS Holdings – IPO Update

    On Wednesday, November 2, GDS Holdings Limited (NASDAQ: GDS) rang the bell at Nasdaq, becoming the latest data center operator to list on a US stock exchange.

    There are currently six publicly traded US data center REITs. However, GDS Holdings represents an entirely different type of investment opportunity.

    GDS Holdings is the largest operator of carrier-neutral data centers in mainland China, with a market share of 24.9 percent, according to 451 Research.

    Read more: Chinese Data Center Provider GDS Files for IPO on Nasdaq

    [Image: GDS F-1 fact sheet. Source: GDS Holdings F-1 filing]

    GDS Overview

    In an interview with Data Center Knowledge, GDS Holdings CFO Ben Newman explained that no single US data center operator is directly comparable to GDS.

    Newman suggested some similarities with Equinix, due to GDS’s primary focus on Tier 1 markets. However, GDS also actively leases to wholesale data center customers, much as Digital Realty or DuPont Fabros do. He also described parallels with QTS Realty when it comes to offering customers in-house hosting and managed service options.

    GDS Holdings is essentially a hybrid data center operator with ~50 percent of area committed to wholesale customers.

    The Opportunity

    China has a huge ecommerce market, as noted in the company SEC filing, “The e-commerce market in China measured by gross merchandise value, or GMV, was RMB3,877 billion (US$583 billion) in 2015, according to the National Bureau of Statistics in China, compared to US$342 billion in the United States, according to the United States Census Bureau.”

    As of September 30, 2016, GDS had two customers who accounted for 26.1% and 20.8%, respectively, of total committed square footage. No other end-user customer accounted for 10% or more of the total area committed.

    GDS has a partnership with Aliyun, the cloud services arm of Chinese internet giant Alibaba, which hosts cloud infrastructure in GDS facilities, and the company wants to strike similar deals with more cloud service providers.

    Read more: Alibaba’s Cloud Arm Set for Centerstage as E-Commerce Plateaus

    Notably, the company did not break out revenue by customer in the filing. However, in addition to Alibaba, GDS also counts titans Baidu and Tencent among its largest internet and cloud customers.

    IPO Is Funding Growth

    The company operates a fleet of eight purpose-built high-performance data centers totaling 428,000 square feet. In addition, GDS operates 10 facilities leased from third parties, which in aggregate comprise another 100,000 square feet of rentable space.

    GDS Holdings currently has five new data centers and two expansion phases under construction, with an aggregate net floor area of 400,000 square feet. GDS has another 215,000 square feet held for future development. Additionally, GDS has signed a memorandum to lease three data center shell buildings, totaling 325,000 square feet of additional space.

    The GDS average price for committed area ranged from US$900 to US$1,200 per rack per month in 2015, and this pricing is expected to remain largely stable from 2015 to 2018, according to the F-1 filing. By way of comparison, Equinix has recently reported average MRR (monthly recurring revenue) per cabinet of roughly $2,000.

    [Image: GDS F-1 strategy graphic. Source: GDS Holdings F-1 filing]

    What Is Driving Growth?

    GDS Holdings founder, chairman, and CEO William Huang told Data Center Knowledge that his original vision 15 years ago was to become a leader in the Chinese carrier-neutral data center space. He explained that he followed an entirely different strategy from the Chinese companies of that era that were simply licensed to resell state-owned carrier data center space and bandwidth.

    Initially, GDS customers were primarily large Chinese financial institutions, which at the time operated only centralized on-premises data centers. GDS pioneered disaster recovery and business continuity services to help banks with risk mitigation, which required the construction of new high-performance, enterprise-quality data centers.

    Today, large cloud, internet, and ecommerce customers account for 70.8 percent of GDS revenues, with financial institutions and large enterprises accounting for 15.1 and 14.1 percent, respectively. In 2015, approximately 70 percent of new leasing activity was driven by the 350 existing customers, according to Huang.

    When asked about future international growth, Huang replied, “Frankly speaking, there are no plans to expand outside of China.” Huang anticipates abundant growth opportunities in mainland China, making the point that there were currently about 13 million Chinese corporations. Huang believes GDS has the potential to grow to a $10 billion firm solely by focusing on the top 1,000-2,000 Chinese private sector companies.

    In the IPO announcement, Huang said, “There is opportunity in fulfilling the most demanding, mission-critical IT infrastructure requirements of top-tier internet companies, financial institutions, large enterprises and multinational corporations. We have the right resources in the right places to capture the tremendous growth opportunities on the horizon.”

    Competitive Landscape

    The major Chinese data center markets are Beijing, Shanghai, Shenzhen, Guangzhou, and, to a lesser degree, Chengdu. These five major markets accounted for approximately 90% of the total high-performance data center market of China in terms of revenue in 2015, according to 451 Research. GDS also serves customers in Hong Kong.

    CFO Newman pointed out that GDS is the only carrier-neutral provider active in all five Chinese Tier 1 markets, which he views as a competitive advantage.

    However, there is competition in each market from other carrier-neutral data center operators, including domestic providers Sinnet, Dr. Peng, and 21Vianet, and international players such as US-based Equinix and Japan’s KDDI and NTT.

    Tale Of The Tape

    On Wednesday, the GDS IPO priced at $10.00 per ADS, or American Depositary Share, and closed at $10.41. The 4.1% gain was in stark contrast to the data center REITs, which have struggled during the past two weeks.

    Read more: CyrusOne Q3 Earnings – Trick or Treat?

    However, GDS was not immune to this downturn in investor sentiment, as the offering had initially been priced in a range of $12.00-$14.00 per ADS. Other recent Chinese IPOs have also not performed well after listing, another headwind that GDS Holdings could not control.

    Notably, each ADS is equivalent to eight GDS Class-A common shares.

    Prior to the US listing, there were 567,000,000 shares. The 19,250,000 US ADS increased the outstanding shares by 154,000,000 shares, to just over 721,000,000 Class-A shares; plus, there is a 30-day underwriter option equivalent to 23,100,000 shares. Additionally, there are employee options, consultant options, and convertible bonds which could add ~40,000,000 Class-A shares to the fully diluted share count.
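
    For readers following the dilution math, the sketch below simply re-runs the arithmetic using the figures quoted above; the fully diluted total is an approximation, not a number taken from the filing:

        # Share math from the article: each ADS represents eight Class A shares.
        ADS_SOLD = 19_250_000
        SHARES_PER_ADS = 8
        pre_ipo_shares = 567_000_000

        new_shares = ADS_SOLD * SHARES_PER_ADS            # 154,000,000
        post_ipo_shares = pre_ipo_shares + new_shares     # ~721,000,000
        greenshoe = 23_100_000                            # 30-day underwriter option
        other_dilution = 40_000_000                       # options and convertibles (approx.)

        print(f"New Class A shares from ADS sale: {new_shares:,}")
        print(f"Shares outstanding after IPO:     {post_ipo_shares:,}")
        print(f"Approx. fully diluted count:      {post_ipo_shares + greenshoe + other_dilution:,}")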

    Investors interested in this offering should read the F-1 Prospectus prior to investing.

    9:54p
    Mellanox Open-Sources Its Network Processor Platform

    In a move designed to seed a new ecosystem around its NPS line of network processor units (NPUs), including its 400 Gbps NPS-400 model, Mellanox Technologies on Wednesday announced the launch of an open source initiative, and the release of an SDK, called OpenNPU.  After wallowing in the shallow end of open source development for the past two years with the Open Compute Project, the company now seems ready to dive deeper.

    The NPS series is already programmable using the classic C language, and features a built-in Linux operating system.  Mellanox has been pushing NPS as a platform for network functions virtualization (NFV) — for virtualizing the class of functions required to run applications and customer services on networks themselves.

    “The market for networking devices is undergoing an immense paradigm shift,” reads a statement published Wednesday on the project’s new, independent Web site, opennpu.org, “from closed proprietary equipment supplied by OEM manufacturers, to open whitebox/brite-box systems that end users can customize and configure at will according to their needs.  Mellanox recognizes this trend and has decided to open-source the entire SW [software] SDK of its NPS family of network processors in order to facilitate innovation within the open source SW community.”

    The company followed up that statement with a laundry list of the functionality it would like to see open source developers contribute to its chip.  At the top of that list is a “high-performance Layer 2/3 data path” allowing the Linux kernel to be accelerated using the open source SwitchDev interface.  A Linux kernel is necessary in a programmable ASIC (which is what the NPS series is) in order for switching applications to become bootable.

    ASIC vendors, such as Mellanox, would need to be able to provide drivers for interoperating with the on-chip kernel.  But to make certain that applications are always using the correct drivers, open source engineers suggest that vendors contribute their drivers upstream to the broader community.

    Put another way, Mellanox needs to interface with the open source community with more than just a pledge of support.  It needs to put up code that developers and engineers can put to use.  Producing an SDK for the NPS series may be one giant step in that direction.

    Mellanox may have needed to take an aggressive step quickly to stay competitive in this emerging space.  In October 2015, it acquired the NPS-400 chip by way of a merger with EZchip Semiconductor, in a move that some investors actively opposed at the time.  One of EZchip’s major stakeholders complained that the merging parties were being too friendly with one another, and refusing to open up the acquisition process to competitive bidding.

    Just prior to the merger, EZchip produced a product brief [PDF] outlining the prospects for NPS series in network appliances.  NPS would provide all the programmability one gets from SDN — the separation of the control plane from the data plane, and a development environment that’s based on Eclipse, which Java developers prefer — in a fast hardware appliance.

    “To achieve the performance requirements for data plane applications,” EZchip explained, “EZchip’s optimized Linux kernel provides deterministic performance for data plane applications on par with a bare metal application, while enjoying the benefits of a standard OS support.”

    One of the major complaints that telcos and other high-volume networking users have had against typical SDN systems is that they provide non-deterministic performance — because they operate in x86 boxes with CPUs with variable levels of cache, running some flavor of Linux, there’s no way for them to provide the synchronicity and determinism that a large-scale NFV user would require.  NPS-400 would endeavor to solve that problem, ironically but effectively achieving SDN in hardware.

    But it needs the support and the community that SDN projects related to NFV, such as ONOS and OpenDaylight, currently enjoy.  So Mellanox is making this critical move now.  In addition to its new .org Web site, Mellanox has opened a project page for a related effort — its ALVS accelerated software load balancer — on GitHub.

    9:56p
    54-Terabit Submarine Cable Linking Asian Nations Goes Live

    There would be a ribbon-cutting ceremony, except you’d need scuba divers:  One of the world’s most ambitious optical network connection projects is now officially complete, and beginning the process of opening itself up for business.  Japan’s NTT Communications, representing a partnership of major telcos in nine Asian countries, announced this week the commencement of service over the Asia Pacific Gateway (APG), a 10,400-kilometer span of fiber optic cable.

    NTT boasts a theoretical capacity for APG of 54 terabits per second — much higher than the partnership’s original goal of 40 Tbps.  However, Greg’s Cable Map, which depicts live data for submarine cable operations worldwide, lists its operational capacity at 38.4 Tbps.

    Still, the news will have already sparked celebrations in Vietnam, whose Internet service has been plagued by problems ever since the Asia America Gateway cable (AAG) was launched in November 2009.  AAG has suffered numerous ruptures, and efforts to repair it were finally suspended indefinitely last March, after repair crews reported they could not find the rupture point.

    Southeast Asia Gets a Boost

    APG will link Da Nang with both Osaka and Tokyo.  From both points, NTT will manage junctions to the existing PC-1 loop, a 640 Gbps cable with US landing points at Grover Beach, California, and Harbour Pointe, Washington.  However, PC-1 has been in active service since 2001, and sub-terabit capacity is not perceived as broad enough to support an Asia-based video service to North America.
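
    To see why sub-terabit capacity looks thin for trans-Pacific video, a rough stream count helps; the per-stream bitrates below are illustrative assumptions, not figures from the article, and in practice the cable is shared with all other traffic:

        # Rough count of concurrent video streams a 640 Gbit/s cable could carry.
        CABLE_GBPS = 640

        def max_streams(stream_mbps: float) -> int:
            return int(CABLE_GBPS * 1000 // stream_mbps)

        for label, mbps in (("HD video", 5), ("4K/UHD video", 25)):
            print(f"{label:>12} at {mbps} Mbit/s -> ~{max_streams(mbps):,} concurrent streams")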

    Other locations being linked by APG are: Chongming and Nanhui in China; Tseung Kwan O in Hong Kong’s Sai Kung District; Kuantan, Malaysia; Songkhla, Thailand; Tanah Merah, Singapore; Toucheng, Taiwan; and Busan, South Korea.  It is in Busan that Korea Telecom (one of APG’s partners) opened a major Submarine Network Operation Center (SNOC) facility last June.

    Since South Korea will be the host country of the 2018 Winter Olympics, KT has an interest in being able to deliver multiple channels of live, Ultra-HD resolution video, capable of reaching all points of the globe at 5G speeds.

    Three years ago, the Southeast Asia-Japan Cable (SJC) became operational, thanks in no small part to a considerable investment by Google.  Theoretical capacity for SJC is 23 Tbps.  But while it links Japan with China, Brunei, Singapore, and the Philippines, it skips over Vietnam and South Korea.

    Where Facebook Goes, Conspiracy Follows

    Facebook made an investment — albeit of undisclosed value — in the construction of APG in 2012.  Whenever Facebook gets into a new industry, either it immediately sparks controversy, or someone finds a controversy and attributes it to Facebook.

    In this case, a regional tech publication speculated at the time whether Facebook had specific designs on the Asian market.  Had Facebook secured a special agreement with the Chinese government to sidestep its national firewalls and censorship boundaries, for instance?

    One of these conspiracies has since been put soundly to rest: the question of why Facebook would invest in a cable project with no ties to America.  As we noted, NTT will handle the junction between Osaka, Tokyo, and two points on the American West Coast.

    But last May, Facebook teamed up with Microsoft to invest in the construction of an all-new submarine cable, dubbed MAREA, across the Atlantic Ocean.  The cable will be operated by Telxius Telecom, a new division of Spanish telco Telefónica.  In its announcement at the time, Telxius boasted an incredible capacity goal of 160 Tbps, some 250 times greater than PC-1.

    And just last month, Facebook and Google were named as co-investors in a separate submarine project, a 120 Tbps line dubbed PLCN.

    But one of the many 2012 Facebook conspiracies actually does still hold water, if only a few ounces:  Is Facebook interested in ensuring its Internet traffic across the Pacific steers clear of Russia, or perhaps Africa?

    It sounds like the basis for an Ian Fleming novel, were he still with us (some say he’s gone into hiding), but it is a surprisingly legitimate question.  For the last decade, Russia has put forth proposals to the U.N.’s International Telecommunication Union that would put a consortium of state-run telecom regulators in an oversight role over Internet traffic.

    While countries participating in a 2014 meeting mostly decided not to take that route, ITU study groups, operating under the auspices of Smart Cities and Internet of Things initiatives, are managing to raise the issue yet again.  And at an ITU conference in Tunisia just last week, the African Telecommunications Union raised the issue once again, in the form of proposed regulations for the makers of phone apps that could potentially be played on over-the-top (OTT) digital TV tuners.

    Russian telecom officials are on record as supporting the rights of nations, as Internet stakeholders, to impose tariffs on certain types of traffic entering their territories — for example, taxing servers that spam citizens’ browsers.  Were such tariffs to become enforceable as law (not an impossibility, given the apparently cozy relationship between one U.S. presidential candidate and Russia), theoretically, Facebook could become liable whenever a Facebook user posts material that a Russian regulation defines as “spam.”

    It would give a firm with the resources and capital of Facebook (or Microsoft, or Google) good reason to devote extra resources to hard-wiring the Internet, bypassing countries that would impose such tariffs.  The irony here is that China, in such a scenario, would be cast as the hero of free speech rights.  In any event, it’s not a bad reason, and at the very least a good excuse, to open wider channels of digital transactions across the Pacific.

    10:03p
    Facebook Joins the Big League of Internet Spenders

    (Bloomberg Gadfly) — Facebook Inc. has spent less than five years as a public company, yet its market value is already in the global big leagues. It’s now also entering the big leagues of spending, a position that bears watching.

    The three highest-spending operators of web and cloud computing services — Google parent company Alphabet Inc., Amazon.com Inc. and Microsoft Corp. — collectively dole out more than $20 billion in cash each year to buy computer equipment and run the digital data centers that are the engines of the world’s internet hangouts. Facebook is joining this capital expenditure superpower team.

     

    The company said on Wednesday that it expects its capital spending to reach $4.5 billion this year, which would be a 78 percent jump from 2015. Executives say that money is going largely to data centers, including a few new ones in the U.S. and Ireland; the servers packed inside those data centers; office buildings; and the pipes that ferry bytes among Facebook’s global computer networks and to the public internet.

    Facebook expects 2017 to be “an aggressive investment year,” including on capital expenditures, Chief Financial Officer David Wehner told analysts Wednesday. The forecast for a big jump in spending — combined with a warning about a meaningful slowdown in the company’s blistering revenue growth rate — drove a 5.7 percent decline in Facebook shares on Thursday.

    If we assume Facebook’s 2016 capex forecast holds, and outlays jump 78 percent again next year, that would take the 2017 tab to about $8 billion. That figure is almost equal to what Microsoft spent on its data centers, computer equipment and the like in its most recent fiscal year, when the software giant had about $85 billion in revenue. By contrast, analysts expect Facebook to generate 2017 revenue of $36.7 billion, according to the average of estimates compiled by Bloomberg.
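
    The projection above is simple compounding; the sketch below reproduces it and relates the result to the analyst revenue estimate cited. Repeating the 78 percent jump is the article's own what-if, not company guidance:

        # Reproducing the back-of-the-envelope capex projection in the paragraph above.
        capex_2016 = 4.5e9          # Facebook's forecast capital spending for 2016
        growth = 0.78               # same year-over-year jump as 2015 -> 2016
        revenue_2017_est = 36.7e9   # analyst consensus cited by Bloomberg

        capex_2017 = capex_2016 * (1 + growth)
        print(f"Projected 2017 capex: ${capex_2017 / 1e9:.1f}B")                                    # ~$8.0B
        print(f"Capex as a share of estimated 2017 revenue: {capex_2017 / revenue_2017_est:.0%}")   # ~22%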

    Facebook executives talk often about the company’s expertise in tweaking its computer networks to maximize efficiency. The company designs its own servers, internet network pipes, and even its data center buildings to squeeze as much digital work out of them as possible at the lowest cost. No detail is overlooked: Facebook even removes logos from servers so air can flow uninterrupted to cool the powerful computers.

    Still, Facebook isn’t yet that efficient with its capital spending. Capex through the first nine months of 2016 works out to about 17 percent of revenue in the same period. At Alphabet, capital spending is about 11 percent of gross revenue.

    Google in the last year or so has tightened its purse strings, after a new financial chief came in and imposed greater discipline. Facebook is still boosting spending to juice its growth, as the company should be. But the capex tab is one area investors will keep a close eye on. If the pace of revenue growth slows materially, then Facebook might need to take a breather on capital spending, too.

