Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, June 7th, 2017

    12:00p
    Land Shortage May Slow Hong Kong Data Center Market Growth

    While growing at a rate that’s similar to other top data center markets, the Hong Kong colocation market may see its growth slow if the territory’s government doesn’t allocate more land for data center construction.

    That’s according to the latest market report by Structure Research, which has made a slight downward adjustment to its five-year growth projections for Hong Kong to reflect the very real effect land scarcity is having on the market.

    “There has been a lack of new inventory, and without it, there will be fewer options in the market and less opportunity for growth,” Jabez Tan, research director at Structure and author of the report, said. “Despite these challenges, the market remains healthy and continues to move forward. While supply has largely been unchanged, there remains a pipeline of demand and the inventory to support it.”

    Hong Kong is one of the most important network hubs mainland Chinese companies use to expand to international markets by establishing data centers there; it is also a gateway to the mainland China market for international players.

    The Land Shortage

    Scarcity of available land is the biggest problem facing the Hong Kong market today. Land supply is generally low, and the amount of land that can be used for data center development is even lower. The territory’s government decides what gets built where, and very little new land has been earmarked for data center construction lately.

    Only two new data center builds are currently in the pipeline – one by Global Switch and another by SUNeVision – and only two plots of land are expected to become available for data center construction between now and 2020, Tan said. That, however, may change now that Hong Kong has had its election. (New land allocation had been on hold in the run-up to the election.)

    “It could change immediately because Hong Kong just had their election,” he said. “There could be new regulations that are put into place that will open up land for more data centers.”

    Wholesale Catching Up to Retail Colo

    In line with a global market trend, the mix between wholesale and retail colocation is shifting toward wholesale in Hong Kong, driven largely by demand for large-capacity wholesale data center leases by cloud giants. Unlike North America, where US-based cloud companies are driving that demand, however, it is Chinese providers that are gobbling up much of Hong Kong’s capacity, using it as a springboard for international expansion.

    These are the usual suspects, such as Tencent and Alibaba, both of which have been doubling down on their already expansive Hong Kong data center footprints, Tan said. Of the top US-based cloud providers, Microsoft has a sizable Hong Kong deployment with NTT. Its rivals Amazon and Google have cloud data centers in Singapore, the other big network gateway to Asia-Pacific markets.

    The split in Hong Kong today is 42 percent wholesale and 58 percent retail, according to Structure. The analysts expect it to even out by 2021.

    Growth in Line With Global Trends

    Hong Kong was a $706 million data center services market in 2016, up from $616 million in 2015, according to estimates by Structure, which projects it will grow 17 percent this year. The firm expects Hong Kong to grow at a compound annual growth rate of 16 percent between now and 2021. The projected growth rates are slightly above what Tan expects the global colocation market’s growth rate will be this year: 15.2 percent.

    For comparison, the North American colocation market generated nearly $14 billion in revenue last year, according to Structure. That’s about 42 percent of the global market.

    [Chart: global colocation market breakdown for 2016, courtesy of Structure]

    Three Providers Run the Game

    Japan’s NTT Communications has the biggest colocation market share in Hong Kong (24 percent), followed by Silicon Valley-based Equinix (20 percent) and Hong Kong’s own SUNeVision (16 percent). SUNeVision is the tech arm of one of the city’s top real estate developers, Sun Hung Kai Properties.

    [Chart: detailed 2016 Hong Kong colocation data center market share breakdown by Structure]

    5:25p
    ViaWest Inks Access Deal for Microsoft’s New Transpacific Cable

    ViaWest announced that one of its two Hillsboro, Oregon, data centers will provide direct access to the New Cross Pacific submarine cable between the US and Asia, set to be operational later this year.

    Construction on the partially Microsoft-backed, 13,000-km cable project began in 2015. Once completed, it will be accessible at ViaWest’s Brookwood data center.

    Access to NCP will also be available in other facilities on the Hillsboro Data Center Fiber Ring, a metro fiber network that includes facilities operated by EdgeConneX, Infomart, and Digital Realty Trust, as well as ViaWest’s second Hillsboro data center.

    Through this ring, the data centers link to the Tata Communications-owned TGN Hillsboro Landing Station, which serves as the US end of the currently active TGN Transpacific Cable System. The ring also serves as a cross-connect facility for several other transpacific submarine cables already in operation, including TPE and the Google-backed FASTER system.

    Read more: ViaWest Fiber Access Agreement Strengthens the ‘Hillsboro Ring’

    The NCP cable system will come ashore at a landing station in Pacific City, about 40 miles south of the TGN station in Nedonna Beach. ViaWest’s NCP deal will add access to yet another transpacific cable to Hillsboro’s metro fiber ring.

    Also under construction is the Hawaiki Submarine Cable, a transpacific cable system backed in part by Amazon Web Services that will also land in Pacific City.

    See also: Here are the Submarine Cables Funded by Cloud Giants

    The NCP network will link Hillsboro to Chongming, Nanhui, and Lingang in China; Busan in South Korea; Toucheng in Taiwan; and Maruyama in Japan. Transmitting multiple 100G wavelength channels through the amplified cable system will provide up to 80 Tbps of capacity.
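
    As a rough, back-of-the-envelope reading of those two figures (the actual distribution of wavelengths across the cable’s fiber pairs isn’t given here), 80 Tbps of total capacity at 100G per channel implies on the order of 800 wavelength channels across the system:

        # Back-of-the-envelope check of the stated NCP figures; illustrative only,
        # since the per-fiber-pair layout is not specified in the article.
        total_capacity_bps = 80e12   # 80 Tbps, as stated for the NCP system
        channel_rate_bps = 100e9     # 100G per wavelength channel

        channels = total_capacity_bps / channel_rate_bps
        print(f"Implied wavelength channels across the system: {channels:.0f}")  # -> 800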

    Microsoft has a 50 percent stake in NCP; the other investors are China Mobile, China Telecom, China Unicom, Chunghwa Telecom, and KT Corp.

    ViaWest is owned by the Canadian telco Shaw Communications. A report surfaced earlier this year saying the telco may be looking to sell the data center business.

    Shaw announced last year that it would offer access to Microsoft’s Azure cloud services out of all its data centers, including those operated by ViaWest.

    5:44p
    HPE’s Whitman Says Edge Will Drive On-Prem Data Center Demand

    Edge computing was front and center when Hewlett Packard Enterprise CEO Meg Whitman took the stage at the company’s Discover 2017 conference in Las Vegas Tuesday afternoon. She was there to talk about HPE’s vision for the future, which is all about taking business to the edge of the internet. In her vision of a future dominated by mobile devices and the Internet of Things, traditional on-premises data centers are the drivers, with public cloud along for the ride in the back seat.

    “Just to be clear,” she said about ten minutes into her talk, “this new intelligent edge does not make your data center less important. It actually makes it more important than ever, because your companies aren’t going to have only one edge, or a limited amount of edge devices — you will have many. This is going to require an even greater amount of centralized computing to get the most out of your digital operations.”

    This wasn’t surprising coming from a company that recently gave up its public cloud aspirations to jump on the hybrid cloud bandwagon. What was a little surprising was the hard sell. Whitman seemed to be there not so much as the company’s CEO, but as a senior sales rep selling the notion that soon everything is going to be computerized, and that with HPE’s help, companies can keep their data safe and secure in their own data centers while taking advantage of a new internet economy that will be centered on the network’s outer perimeter.

    See also: GE Bets on LinkedIn’s Data Center Standard for Predix at the Edge

    “While we keep hearing the hype that everything is moving to the public cloud, it’s just not happening,” she said, pointing out that market researcher IDC has reported that 53 percent of enterprises have left or are considering leaving the public cloud to bring their workloads back to on-premises data centers. “Public cloud is absolutely the right choice for certain applications and certain use cases, and it’s part of the right mix for hybrid IT.

    “Simplicity, time to deploy, and cost is what made the public cloud so popular,” she added. “But many customers have reached a point where they’re now asking us to help them optimize in a hybrid environment. And once they get to a certain point with the public cloud, they essentially hit what we call ‘the cloud cliff,’ where either for reasons of control, security, performance or cost, the platform they went with is no longer the best option.”

    She cited two examples, Dropbox and Smartsheet, of companies that began with a public cloud-based infrastructure but eventually decided to migrate to a hybrid approach anchored by on-premises data centers. “Dropbox is now cash flow positive, a key objective in its maturation as a business, as well as positioned to accommodate a lot faster growth in the enterprise market.”

    There were several reasons behind the decision by Smartsheet, a SaaS collaboration platform, to switch from a public cloud-centered approach, starting with a need to gain more control over its infrastructure, Whitman said. In addition, the company was finding that as it grew, its cloud-based model had become expensive, a particular concern since the company uses a freemium model to attract paying subscribers.

    Whitman pointed out that the move was daunting. “When the company considered moving to a hybrid IT model, it wrestled with another set of challenges. First, the sheer complexity of the migration. Second, Smartsheet also needed to fund the transition without cannibalizing its investments in other areas. And finally, it needed to manage both operations, retiring the cloud environment and a new on-prem build, without adversely impacting its business.”

    The long and short of it was that with HPE doing most of the heavy lifting, the transition was evidently smooth and affordable.

    The approach Whitman is advocating is not new, of course. Keeping most day-to-day operations running in traditional on-premises data centers and using the public cloud as an adjunct has long been recommended by other large solutions providers, such as Red Hat, which pioneered the approach. And decentralizing the network by moving to the edge with microservices and the like is increasingly being embraced in the age of mobile devices and IoT.

    6:07p
    Green House Data Enters Dallas Data Center Market

    Cloud and colocation services provider Green House Data secured a $16 million credit line from a private equity firm in late May and wasted no time putting it to work.

    The aggressively expanding company announced the addition of its first data center in the Dallas-Fort Worth metroplex, one of the hottest data center markets in the country (hottest in terms of demand, not temperature, although it does tend to get pretty hot there in the summer), to its portfolio.

    “Dallas has been a target expansion site for us due to business density and growth metrics,” Green House CEO and president, Shawn Mills, said in a statement. “As one of the largest metros in the United States, Dallas is home to many oil and gas, technology, healthcare, and finance organizations, all of which are well-served by the Green House model of high-touch, highly compliant hosting.”

    Dallas sits within the Texas Interconnection, a wide-area electrical grid that covers most of the state and is independently managed by the Electric Reliability Council of Texas (ERCOT).

    The new cloud and disaster recovery facility brings Green House’s national footprint to 10 data center sites within nine geographic areas, spanning the Pacific Northwest, Western and Central US, Southeast, and East Coast. The Cheyenne, Wyoming-based company says it serves customers across 49 US states, six Canadian provinces, and several states in Mexico.

    Earlier this year, Green House acquired Cirracore, an Atlanta-based enterprise cloud provider, and Seattle-based FiberCloud and three of its Washington data centers, including a crown-jewel facility in Seattle’s Westin Building.

    A relatively small provider, Green House differentiates itself from competitors by using wind, solar, and hydropower for all of its operations and by relying on free cooling in its data centers year-round.

    6:30p
    Simplifying Complex Cloud Hydrations

    Howard Young is a Solutions Architect at Zadara Storage.

    Call it cloud hydration, cloud migration, or whatever: the dirty secret of cloud deployments today is that the actual “lift and shift” of data storage to a target cloud can be a lengthy, complicated, and risky process. That’s particularly true when IT teams look beyond the low-hanging fruit of applications such as email and collaboration and seek to hydrate more complex, line-of-business applications. With proper planning, however, these concerns can be alleviated.

    With experts projecting 18 to 20 percent growth in cloud hydrations of formerly on-premises applications during both 2017 and 2018 (Cloud Technology Partners, January 2017), it’s a particular issue for production settings, multi-cloud deployments, or static deployments where data volumes are massive.

    Organizations performing cloud hydrations need to address:

    • Bandwidth – so that multi-petabyte applications transfer without degrading performance for the rest of business operations (a rough transfer-time estimate appears after this list)
    • Mixed media support – since the organization likely has SSD, hard disk, and tape media to import
    • Physical considerations – for example, whether the organization lacks cost-effective, high-speed internet connections
    • Security and governance requirements – so the hydration doesn’t itself expose the organization to intrusion or compliance violations.
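
    To make the bandwidth point concrete, here is a minimal sketch that estimates how long a bulk transfer takes over a given link; the data size, link speed, and utilization figure below are hypothetical examples, not figures from the article. At multi-petabyte scale the answer is typically measured in weeks or months, which is why physical appliances remain attractive.

        # Rough transfer-time estimate for a bulk cloud hydration.
        # The inputs are hypothetical; utilization accounts for protocol overhead
        # and for sharing the link with normal business traffic.

        def transfer_days(data_tb: float, link_gbps: float, utilization: float = 0.7) -> float:
            """Days needed to move data_tb terabytes over a link_gbps link."""
            bits_to_move = data_tb * 1e12 * 8              # terabytes -> bits
            effective_bps = link_gbps * 1e9 * utilization  # usable bits per second
            return bits_to_move / effective_bps / 86_400   # seconds -> days

        # Example: 2 PB over a 10 Gbps connection at 70 percent sustained utilization.
        print(f"{transfer_days(2000, 10):.0f} days")       # roughly 26 days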

    When cloud hydrations touch storage, it’s also important to:

    • Avoid the need to convert – which is especially important at petabyte scale, where what looks like a few hours of work can quickly balloon into much more.
    • Make sure there’s enterprise format support – for example, file protocols such as CIFS and NFS are not always broadly supported, and converting data may jeopardize the hydration timeline.

    New options exist for exporting data to a target cloud deployment non-disruptively and accurately, even for production data in hybrid and multi-cloud environments.

    Large Static Data Settings

    With static data, cloud hydration is straightforward. IT teams or their third-party resources can leverage physical media such as a NAS appliance to expedite the hydration process for file, block, or object storage with data volumes over 1 TB. This method is appropriate when data does not need to be continuously online, or when the only available internet connection is slow, unreliable, or expensive. With a few caveats, the approach is swift and painless:

    • The appliance should support the target environment (Windows vs. Linux) and storage protocol (e.g., NFS, CIFS, Fibre Channel)
    • It should include encryption, preferably 256-bit AES
    • A readily shipped form factor that is configurable with RAID for durability works best
    • For transferring over 30 TB of data, multiple appliances can be used – or the team can leverage one appliance and repeat the process several times to move data in logical chunks or segments (a sketch of this segmenting approach follows the list).
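
    As an illustration of that chunking approach, here is a minimal sketch that packs data sets into sequential appliance shipments of at most 30 TB each. The data set names and sizes are hypothetical, purely for illustration.

        # Minimal sketch: group data sets into appliance loads of at most 30 TB each,
        # so a single appliance can be reused over several round trips.

        APPLIANCE_CAPACITY_TB = 30

        # Hypothetical data sets and their sizes in terabytes.
        datasets = {"erp_archive": 22, "media_library": 18, "db_backups": 9, "file_shares": 12}

        def plan_shipments(datasets: dict[str, float], capacity_tb: float) -> list[list[str]]:
            """Greedy first-fit packing of data sets into appliance shipments."""
            shipments: list[list[str]] = []
            loads: list[float] = []
            # Placing the largest data sets first tends to waste less capacity.
            for name, size in sorted(datasets.items(), key=lambda kv: -kv[1]):
                for i, load in enumerate(loads):
                    if load + size <= capacity_tb:
                        shipments[i].append(name)
                        loads[i] += size
                        break
                else:
                    shipments.append([name])
                    loads.append(size)
            return shipments

        for trip, contents in enumerate(plan_shipments(datasets, APPLIANCE_CAPACITY_TB), 1):
            print(f"Shipment {trip}: {contents}")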

    While some cloud hydration providers require the purchase of the appliance, others allow for one-time use of the appliance during hydration, after which it is returned, and the IT team is charged on a per terabyte use basis, without a CapEx purchase or long-term commitment.

    Production Data

    This process requires some method of moving the data and resynchronizing once the data is moved to the cloud. To do so non-disruptively, some form of intermediary is required.

    Mirroring represents an elegant answer to the task of hydrating production data. It requires two local on-premises appliances that can keep track of incremental changes to the production environment while data is being moved to the new cloud target. First, production data is mirrored to the first appliance, creating an online copy of the data set. A second mirror is then created from the first mirror, creating a second online copy. That second mirror is “broken,” and the appliance is shipped to the cloud environment. The mirror is then reconnected between the on-premises copy and the remote copy, and data synchronization is re-established. An online copy of the data is now in the cloud, and the servers can fail over to it.
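
    To make the sequence easier to follow, here is a highly simplified model of that workflow; it is not vendor code, and real appliances do this bookkeeping at the block level, but it shows why only the changes made while the appliance is in transit need to be resynchronized.

        # Highly simplified model of the mirror / break / ship / resync workflow,
        # using a plain dict as the "volume". Illustrative only.

        production = {"blockA": 1, "blockB": 2, "blockC": 3}   # live on-prem data

        # Steps 1-2: create two synchronized mirrors of production.
        mirror_one = dict(production)    # stays on premises and keeps tracking writes
        mirror_two = dict(mirror_one)    # will be broken off and shipped

        # Step 3: break the second mirror and "ship" it to the cloud.
        cloud_copy = mirror_two          # frozen snapshot while in transit
        change_journal = {}              # mirror one records writes from here on

        def write(block, value):
            """Application write: hits production, the local mirror, and the journal."""
            production[block] = value
            mirror_one[block] = value
            change_journal[block] = value

        # Production keeps running while the appliance travels.
        write("blockB", 20)
        write("blockD", 4)

        # Step 4: reconnect the mirror and resynchronize only the journaled changes.
        cloud_copy.update(change_journal)
        assert cloud_copy == production   # cloud copy is current; failover is now possible
        print("Resynced blocks:", sorted(change_journal))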

    Hybrid- or Multi-cloud Considerations    

    With hybrid clouds representing 47 percent of all deployments – the most popular cloud deployment strategy – and multi-cloud deployments on the rise (North Bridge VC/Wikibon Future of the Cloud survey), cloud hydration best practices for hybrid and multi-cloud environments are increasingly important.

    Hydration approaches that allow asynchronous replication between cloud platforms make it easy for IT teams to optimize their cloud infrastructure for both performance and cost. Organizations can hydrate specific workloads to one cloud platform or another (e.g., Windows applications on Azure, open source workloads on AWS), or move them to wherever they can leverage the best negotiated prices and terms for given requirements. A cloud hydration approach that enables concurrent access to other clouds also enables ready transfer and almost instant failover between clouds in the event of an outage at one provider.
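
    From the application side, that kind of cross-cloud failover can be as simple as walking an ordered list of replicated storage endpoints until one answers a health check. The endpoint names below are hypothetical placeholders, not any particular vendor’s API.

        # Minimal failover sketch for storage that is asynchronously replicated
        # to more than one cloud. Endpoint URLs are hypothetical placeholders.

        import urllib.request
        from urllib.error import URLError

        REPLICA_ENDPOINTS = [
            "https://storage.primary-cloud.example.com/health",
            "https://storage.secondary-cloud.example.com/health",
        ]

        def first_healthy_endpoint(endpoints, timeout_s=2.0):
            """Return the first endpoint that answers a health check, in priority order."""
            for url in endpoints:
                try:
                    with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                        if resp.status == 200:
                            return url
                except (URLError, OSError):
                    continue  # unreachable or erroring replica: try the next one
            raise RuntimeError("no healthy storage replica available")

        # The application reads and writes through whichever replica is healthy;
        # asynchronous replication keeps the others close behind.
        # endpoint = first_healthy_endpoint(REPLICA_ENDPOINTS)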

    Longtime IT industry analyst James Governor presciently noted that “convenience is the killer app” when it comes to managing cloud infrastructure. New options make it even easier, and more convenient, to perform even complex cloud hydrations, so IT teams can spend more time using their cloud deployments to accelerate their organization’s agility while minimizing risk, cost, and hassle.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    6:54p
    HPE’s Whitman Sees Acquisitions as Bigger Part of Strategy

    Brian Womack (Bloomberg) — Hewlett Packard Enterprise Co. Chief Executive Officer Meg Whitman, who has been racking up acquisitions to stay competitive in the age of cloud computing, said the company’s shopping spree may not be over. “I think you will see acquisitions become a bigger part of our strategy,” Whitman said in an interview Tuesday in Las Vegas at the company’s Discover conference. The company, based in Palo Alto, California, has already unveiled purchases so far this year worth more than $1.5 billion.

    Whitman is hunting for tools that would help boost demand for the company’s main server and storage products, seeking to push back against direct competitors such as Dell Technologies, as well as cloud-computing providers such as Amazon.com Inc. She’s spent the past few years slimming down HPE, including splitting off the personal-computer and printer business and shedding some services and software units in multibillion-dollar deals. Now, she said, it’s more clear where the company’s resources should be spent.

    “Back when we were an enormous company with six or seven operating divisions, there were a lot of mouths to feed,” Whitman said. “Printing wanted to make acquisitions. PCs wanted to make acquisitions. Software wanted to make acquisitions. Now, we have a much more focused strategy.”

    See also: HPE’s Whitman Says Edge Will Drive On-Prem Data Center Demand

    Still, acquisitions aren’t the only way to bolster the company’s prospects — Whitman said it’s just one of the three main ways to drive success. Another is innovation in HPE’s main product lines, such as improvement in servers, expanded storage offerings and advances in networking.

    The third area is HPE’s Pathfinder program, which invests in younger companies. Whitman said it’s a great way to get innovation without taking a financial hit. “There are a number of companies that I think would be quite interesting to buy,” she said. “The problem is they have $20 million of revenue and they lose $150 million.”

    Whitman is holding the annual Discover event this week after the company gave a disappointing update on its financial picture last month. Profit excluding some costs will be 24 cents to 28 cents a share in the current quarter, the company said, while analysts had projected 32 cents. The company affirmed its fiscal-year outlook for adjusted profit of $1.46 to $1.56 a share compared with analysts’ estimates of $1.52.

    See also: Incumbents are Nervous about Hyperconverged Infrastructure, and They Should Be

    Revenue was $9.9 billion in the quarter that ended April 30, compared with an average analysts’ projection of $10.09 billion. The company has already been trying to improve those sales with acquisitions. In April, Hewlett Packard Enterprise bought Nimble Storage in a deal valued at about $1 billion, adding to its lineup that helps customers with data storage. That followed a handful of purchases earlier this year, including $650 million for SimpliVity, which also helps its storage line, and Niara, a security startup.

    As for purchases in the future, Whitman said the company would stick to its core business that includes servers, storage and networking that can help customers keep up with advances in connected devices — known as the Internet of Things — and hybrid computing, which includes cloud and on-site technology. And she said HPE will look for businesses that fit well with the company’s current offerings. “Is it a complementary technology that leverages our distribution system,” she said, “and have we bought it right?”

    7:54p
    Meet Christine Hall, Our New Writer

    We’re excited to announce that Christine Hall, one of our favorite technology writers, has joined Data Center Knowledge.

    Christine has been a journalist since 1971. In 2001 she began writing a weekly consumer computer column and began covering IT full time in 2002, focusing on Linux and open source software. Since 2010 she’s published and edited the website FOSS Force, and this week she joined the Informa family to cover IT for Data Center Knowledge and ITPro.

    When she’s not covering her beat, she lives in rural North Carolina, where she spends entirely too much time binge-watching Netflix and thinking about cleaning the house.

    Here’s Christine’s first piece for DCK: HPE’s Whitman Says Edge Will Drive On-Prem Data Center Demand

    Follow her on Twitter: @BrideOfLinux

    On a tangentially related note (OK, this is simply a shameless plug), did you know that we launched a podcast this week? This is the only podcast on the internet that’s focused on data centers and data centers only. We called it … wait for it… The Data Center Podcast. Check out the first episode, where Equinix’s Peter Ferris tells us about Playboy’s first data center, and the birth of the internet colo business.

