Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, April 11th, 2017

    3:00p
    Cologix Buyer Stonepeak Said to Seek $5B for Infrastructure Deals

    Melissa Mittelman (Bloomberg) — Stonepeak Infrastructure Partners, the firm founded by former Blackstone Group LP executives, is targeting $5 billion for its next fund, according to people familiar with the matter.

    Stonepeak is collecting money just a year after it finished raising its second pool, which got $3.5 billion in six months from 2015 to 2016. A representative of the New York-based firm declined to comment on fundraising.

    Stonepeak, which invests in mid-sized infrastructure deals in North America, last year deployed money in energy infrastructure company Sage Midstream Ventures and a liquefied natural gas venture called Golar Power Ltd. Stonepeak acquired Cologix Inc., a data center operator, last month.

    Infrastructure funds have gained renewed attention as President Donald Trump vows to direct more private money toward improving roads, bridges and airports. The asset class also fits the bill for liability-driven investors in the U.S. and abroad seeking current income amid near-zero or negative yields in fixed income elsewhere.

    Infrastructure deals hit a record $413 billion worldwide last year, according to research firm Preqin, and managers such as Global Infrastructure Partners and Brookfield Asset Management Inc. raised unprecedented pools of capital for their infrastructure strategies in 2016. Blackstone, the world’s biggest alternative-asset manager, would target as much as $40 billion for infrastructure if it re-enters the strategy, Joe Baratta, the firm’s global head of private equity, said in January.

    Stonepeak was started in 2011 by Mike Dorrell and Trent Vichie, who previously co-led Blackstone’s infrastructure push. Stonepeak’s first fund closed on $1.65 billion in 2013. The firm oversaw $7.3 billion as of Dec. 31, according to its website.

    3:30p
    Global Hacking Operation is Targeting MSPs, Stealing Customer Data

    Brought to you by MSPmentor

    A sophisticated global hacking operation emanating from China has compromised managed service provider (MSP) networks and is targeting additional MSPs in an effort to steal sensitive data and intellectual property from enterprise customers.

    That’s the conclusion of a new joint report from PwC UK and BAE Systems, which details an intricate cyber espionage campaign by a well-known threat actor, APT10.

    So-called “Operation Cloud Hopper” has been in effect since at least last year, and has intensified during 2017, the researchers said.

    “APT10 has vastly increased the scale and scope of its targeting to include multiple sectors, which has likely been facilitated by its compromise of MSPs,” the report states. “Such providers are responsible for the remote management of customer IT and end-user systems, thus they generally have unfettered and direct access to their clients’ networks.

    “They may also store significant quantities of customer data on their own internal infrastructure.”

    Evidence suggests that the hackers are working during business hours in China and even taking lunchtime pauses in activity, according to the report, which was made public in recent days.

    The APT10 group is known for cyber espionage and the researchers suspect the criminals view MSPs and cloud service providers as high-payoff targets.

    “Given the level of client network access MSPs have, once APT10 has gained access to (an) MSP, it is likely to be relatively straightforward to exploit this and move laterally onto the networks of potentially thousands of other victims,” the security experts wrote.

    “This, in turn, would provide access to a larger amount of intellectual property and sensitive data,” the report goes on. “APT10 has been observed to exfiltrate stolen intellectual property via the MSPs, hence evading local network (defenses).”

    MSPs are initially infiltrated through well-researched phishing campaigns.

    “Through our investigations, we have identified multiple victims who have been infiltrated by the threat actor,” the researchers wrote. “Several of these provide enterprise services or cloud hosting, supporting our assessment that APT10 are almost certainly targeting MSPs.

    “We believe that the observed targeting of MSPs is part of a widescale supply-chain attack.”


    This article originally appeared on MSPmentor.

    3:38p
    New Power Line for Amazon May Run Above Railroads Despite Opposition

    The Virginia State Corporation Commission cleared the way last Thursday for the state’s largest utility provider, Dominion Virginia Power, to seek the approval of local regulators to build a new electric substation, along with a 230 kV transmission line in a series of towers along an existing railroad line.  Both will serve, in the words of the Commission’s interim order, “a single retail customer,” whose identity is a secret to no one: Amazon.

    In issuing the order, the Commission rejected the contentions of opponents of the power line, including a citizens’ group called the Coalition to Protect Prince William County.  That group had been pushing for Dominion to adopt a plan to build the line partly along Interstate 66 and partly underground (shown in the map above in light blue).  And the power company had originally stated its preference for a completely overhead route along I-66 (the dark blue line running parallel in parts), the Commission acknowledged in its order.

    But the Commission chose the railroad route (red line), citing construction costs about one-third those of the Coalition’s preferred alternative, and arguing the route would have the least impact on local residents.

    “The Commission concludes that the Railroad Route ‘will reasonably minimize adverse impact on the scenic assets, historic districts and environment of the area concerned,’” the Interim Order reads, citing the language of state statute.  “While recognizing the adverse impacts of this route, including on wetlands and to the Town of Haymarket, the Commission finds that the Railroad Route will have significantly fewer impacts to local residents.”

    The order goes on to claim the railroad route will have no impact of any kind upon residences within 200 feet of its center line, and only a small impact within 500 feet.

    Of course, all this depends upon how one defines “impact.”  In a letter to the editor of the Richmond Times-Dispatch published last January, a resident of Gainesville wrote, “Originally, this seemed to be the ideal neighborhood to raise a family.  But if Dominion erects 112-foot-tall, 4-foot-wide towers along Carver Road, our promising futures will be destroyed.”

    The Carver Road route (green line) was one of four alternate routes to Dominion’s originally preferred I-66 overhead route.  Last year, Carver became the preferred route of hearing examiner Glenn Richardson, who heard evidence from February to May of last year.  According to the Commission, Richardson recommended, “The Carver Road Route reasonably minimizes the Project’s impact on the environment, scenic assets, and historic resources.”

    But Prince William County’s assessment could not have been more different.  In a letter to the Commission last December, Acting Planning Director Christopher Price presented a map clearly showing that the examiner’s preferred route plowed directly through a cemetery and would probably have cast shadows upon Manassas National Battlefield Park.

    In its order, the Commission stated it had concluded the Carver Road route had minimal impacts on the environment, agreeing with the examiner.  But it could not approve Carver, it said, due to an objection filed by a local homeowners’ association that rendered any power line plan impossible without Prince William County’s consent.

    “The County has indicated to the [power] Company,” the order reads, “that it will not permit an overhead transmission line to be constructed across its open space easement property interest as would be required for this routing alternative.”

    As of last Thursday, Dominion was given 60 days to show the Commission that it has procured all the necessary rights-of-way, and that all legal constraints have been removed, to build along the railroad route.

    While this could be seen as the best chance at a compromise, Coalition director Elena Schlossberg issued a never-say-die statement on Friday, stating in part: “In repeating its demand for the Railroad alternative – an option neutralized by the Prince William Board of County Supervisors (BOCS) – then insisting Carver Road is the only fallback option, the SCC [State Corporation Commission] is explicitly playing politics, forcing the BOCS to make a decision the SCC lacks the will to make.”

    One issue the Commission leaves outstanding is the somewhat important matter of who ends up paying for construction of the line and the power substation.  In fact, the order left it open explicitly, stating that in doing so, it “need not address the potential impact of economic development related to any direct assignment of such costs.”

    All of this squabbling is holding up plans Amazon announced in January 2015 to expand its Haymarket facilities, in the heavily competitive Northern Virginia data center market, to about 500,000 square feet.

    4:01p
    How to Avoid Runaway Costs in the Public Cloud

    Lynn LeBlanc is CEO of HotLink.

    The speed at which the enterprise has transitioned from a “cloud-never” to a “cloud-first” posture is astounding. The fear, uncertainty and doubt that gripped the industry when the public cloud was new and exotic has been replaced by a mindset that is already thinking past the mere deployment of cloud resources to the steps needed to optimize them. But the cloud is not the local data center, and both the tools and the techniques needed to keep a lid on costs are dramatically different when you rent infrastructure instead of owning it outright.

    According to Gartner, the consumption of public cloud services is set to grow 18 percent in 2017 to top $246 billion. The research firm says this will represent the height of growth for the remainder of the decade, although the ensuing years will feature only a mild tapering. Among the leading software vendors, however, we can expect to see cloud-only releases approaching 30 percent of all products by 2019. Perhaps most interesting is that with the rationale behind the cloud quickly shifting from simply off-loading traditional IT workloads to supporting fully cloud-native applications and services, the need to optimize consumption will take center stage.

    But how, exactly, can you manage something that you don’t own? For one thing, the cloud’s pay-as-you-go pricing model and the ability of line-of-business managers to circumvent IT and craft their own data environments make it challenging for IT to keep up with what’s happening in the cloud, let alone control the costs. And even if planned cloud deployments do come in well under budget compared to standard data center infrastructure, there is no guarantee that costs will not spiral out of control once the workloads scale, particularly if usage monitoring and management are an afterthought.

    This is the primary reason why monitoring and optimization need to be embedded as core principles in any enterprise cloud strategy. Only through continual, active engagement in the cloud’s infrastructure, abstraction, service and data layers can organizations gain sufficient leverage over resources, workflows, connectivity and all the other facets of cloud operations that define both costs and performance. This means the enterprise needs to determine how instances are being deployed, how applications are used, what will constitute effective monitoring, whether workloads are conforming to budgetary requirements and a host of other concerns. In addition, advanced analytics tools will need to track multiple data points, such as instance sizes, usage patterns, update requirements, network bandwidth utilization, data retention policies and both historical and predictive analyses.
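
    To make that kind of continual usage monitoring concrete, here is a minimal sketch, added for illustration rather than drawn from any tool named in this article, that uses the AWS SDK for Python (boto3) to flag running EC2 instances whose average CPU utilization over the past two weeks falls below an arbitrary threshold. The 14-day window and the 5 percent cutoff are assumptions; a real deployment would feed the results into reporting, chargeback or automated right-sizing rather than print them to a console.

        # Minimal sketch: flag potentially idle EC2 instances by average CPU.
        # Assumes AWS credentials are already configured for boto3.
        from datetime import datetime, timedelta
        import boto3

        ec2 = boto3.client("ec2")
        cloudwatch = boto3.client("cloudwatch")
        end = datetime.utcnow()
        start = end - timedelta(days=14)          # arbitrary look-back window

        reservations = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]

        for reservation in reservations:
            for instance in reservation["Instances"]:
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                    StartTime=start,
                    EndTime=end,
                    Period=86400,                 # one averaged data point per day
                    Statistics=["Average"],
                )
                points = stats["Datapoints"]
                avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
                if avg < 5.0:                     # arbitrary "idle" threshold
                    print(f"{instance['InstanceId']} ({instance['InstanceType']}): "
                          f"average CPU {avg:.1f}% over 14 days -- review for downsizing")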

    Since most corporate IT shops continue to have a dominant on-premise footprint, the integration of internal and external resources under hybrid management architectures is the logical step to manage the resources spanning public and private IT infrastructures. Many organizations, in fact, are working toward linking their local VMware environments to those provided by hyperscale providers like Amazon Web Services (AWS). By incorporating the management of public resources into the same software that governs the existing data center, IT gets a holistic management stack without adding operational or integration complexity. Not only does this provide full cloud control using the same management interface and workflows that IT has grown accustomed to, it provides a host of benefits like access to AWS’ GovCloud for applications that deal with public sector services, unified network and security management under VMware vCenter, and hybrid automation and orchestration under VMware’s PowerCLI stack. The enterprise also gains the ability to automate bi-directional workload conversions and migration for bursting to the cloud and other temporary use cases.
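
    At a very small scale, that single-pane-of-glass idea can be approximated with a script that pulls inventory from both vCenter and AWS into one list. The sketch below assumes the open source pyVmomi and boto3 libraries and uses placeholder hostnames and credentials; it illustrates the combined-view concept only, not any vendor’s actual hybrid management integration.

        # Minimal sketch: one combined inventory of on-premises VMware VMs and AWS
        # EC2 instances. Hostname and credentials below are placeholders.
        import ssl
        import boto3
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        context = ssl._create_unverified_context()   # lab only: skips certificate checks
        si = SmartConnect(host="vcenter.example.com",
                          user="administrator@vsphere.local",
                          pwd="password",
                          sslContext=context)
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        inventory = [("vmware", vm.name, str(vm.runtime.powerState)) for vm in view.view]
        Disconnect(si)

        ec2 = boto3.client("ec2")
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append(("aws", instance["InstanceId"], instance["State"]["Name"]))

        for source, name, state in inventory:
            print(f"{source:6} {name:45} {state}")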

    Most cloud experts are quick to say the true benefit of the cloud is not its low cost but its increased flexibility and the support it lends to digital transformation efforts. This is true enough, but costs are not an insignificant factor either, particularly as the cloud sets new lower benchmarks as to what the enterprise should be spending on data infrastructure.

    Increasingly sophisticated management techniques allow the enterprise to govern its entire data footprint at a highly granular level, which in turn helps it determine the most effective, productive ways to leverage its cloud-based resources and to prevent runaway costs in the public cloud.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:55p
    VMware Virtual Storage to Hook Up Flash with Docker

    Version 6.6 of VMware’s vSAN component (formerly Virtual SAN), to be made available May 5, will include a Docker volume driver that will enable containerized applications and microservices to address virtual storage volumes as though they were persistent data containers.  It’s a move that VMware hopes will ease the transition of many enterprise applications from first-generation virtual machines to more adaptable, scalable components, while at the same time connecting containerized apps to volumes backed by the latest flash hardware.

    “The new Docker volume driver gives you the ability to enable folks running native Docker to run vSAN on the back end with the native Docker APIs,” said Michael Haag, VMware’s group manager for product marketing, in a conversation with Data Center Knowledge.  “And we’re expanding the functionality to include our policy-based management capabilities, cloning, snapshots, and some other capabilities.”

    Volume Driver

    When the Docker wave was first catching on in 2014, there was a groundswell of excitement over the possibilities for data centers to distribute individual functions of running cloud- and Web-based applications on demand.  The first developers to get excited, however, ran smack into an architectural hurdle that felt at the time like a stone wall:  There was no architecture yet for multiple copies of the same function to securely access the same database.

    Proponents of a kind of “pure” containerization asserted that this roadblock was actually a guidepost toward a new kind of application development called the 12-factor methodology.  One of its tenets, statelessness, translates into a kind of isolationism: the idea that functions can scale up and down much more easily if they share no data between themselves.

    That kind of application architecture might just work if your first name is “Net-” and your last name “-flix.”  Out in the big world, however (breaking news, everyone), databases exist.  Existing applications use existing databases.  And you can’t migrate existing apps to new apps without carrying the data over with them.

    “Docker does a great job of managing applications at a high level, splitting things apart, and making them more consumable,” explained Haag.  “But they haven’t dug into the storage side, where if something happens and your container goes away, how do you make sure that storage lives, and you don’t have to rebuild and re-create it?”

    VMware’s Docker volume driver premiered in June 2016 as a beta project hosted on GitHub.  It has been one of the most ambitious applications of Docker’s platform extensibility, introduced the year before.  Its goal has been to enable vSphere administrators to create persistent storage volumes, components that are not instantiated and revoked like containers but that remain under the administrative control of vSphere.
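
    From the Docker side, the workflow looks roughly like the sketch below, written here with the Docker SDK for Python. The driver name (“vsphere”) and the size option are assumptions about how the beta plugin registers itself; the point is simply that a named volume created through the driver outlives any container that mounts it.

        # Minimal sketch: create a persistent, vSphere-backed Docker volume and
        # mount it in a container. Driver name and options are assumptions.
        import docker

        client = docker.from_env()

        volume = client.volumes.create(
            name="orders-db-data",
            driver="vsphere",                    # assumed plugin name
            driver_opts={"size": "10gb"},        # assumed option
        )

        # The container can be removed and re-created at will; the named volume
        # (and the vSphere-managed storage behind it) persists independently.
        client.containers.run(
            "postgres:9.6",
            name="orders-db",
            detach=True,
            volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
        )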

    Of course, this plan has the virtue of leaving vSphere in the picture, for data centers considering a full-on move to containerization.  And for those data centers that are more ambitious, vSAN 6.6 is being made a native part of Photon, VMware’s containerization environment for those weaning themselves off of vSphere.

    Plan B for Hyperconvergence

    One of the main problems many data centers face with respect to decisions over hyperconverged infrastructure is how to converge both legacy and modern workloads onto one platform.  Extraordinarily, some have suggested compartmentalizing systems into workload-specific blocks, enabling those that don’t play as nicely with HCI to be managed under separate policies.  Others suggest that workloads designed to run in parallel — specifically, those that use a lot of data, or that may generate variable amounts of latency — should perhaps be excluded from HCI platforms.

    You’re reading this accurately:  Experts are saying you should stratify your hyperconvergence for safety’s sake.

    By contrast, VMware has asserted that it’s taking a more converged (to coin a phrase) approach, by plotting routes for multiple components to be continually integrated onto single virtual platforms.  Last month, the company opted to reduce the number of virtual switch technologies it supported to one.  While some may be arguing that this reduces choice, there are certain areas of the infrastructure where choice for choice’s sake may be frivolous.

    On the other hand, there may be other areas where certain features of a data center’s existing hardware have already made those choices for it.  This is the curious situation regarding NSX, the company’s network virtualization layer.  Last year, confusion about whether customers should run vSAN and NSX together led then-principal architect Rawlinson Riviera (now the CTO of Global Field) to write a blog post stating 1) vSAN and NSX are “absolutely and unequivocally” compatible, but 2) that doesn’t mean you can just plug them together and run vSAN traffic over a VxLAN overlay.

    As of now, said VMware’s Haag, you can plug the two compatible things together.  But you should want to, first.

    “From an HCI standpoint,” he told us, “it really starts with vSphere and vSAN.  If customers want to evolve to what we call the ‘full software-defined data stack’ with full network virtualization, NSX can definitely be added on top of that.  One of the benefits we have is the flexibility to talk about multiple deployment options.”

    On the opposite side of the scale from Haag’s “full stack” is VxRail, which incorporates a turnkey version of VMware’s HCI stack, and which is currently being made available through sister company Dell EMC.

    ‘Native’ Moves to Software

    Then there’s the case of native data-at-rest encryption.  Historically, vSAN has hit a stone wall when attempting to interface with self-encrypting drives, even if they’re from parent company Dell.  For vSAN 6.6, VMware is adopting the tack of building a single cross-platform function around encrypted data, presenting this as the “native” layer — and, in so doing, enabling the presumption that any other encryption option is a one-off.

    “What we are introducing is a native, or software-defined, data-at-rest encryption,” remarked Haag.  “This is encryption built into the hypervisor, into the software stack, allowing us to fully encrypt all of the data that resides on any persistent media.”
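
    As a concept-level illustration of encrypting data in software before it ever reaches a drive, and emphatically not VMware’s own implementation, the sketch below uses authenticated AES-GCM from the Python cryptography package to protect a block of data bound for persistent media. In a real system the key would come from an external key manager rather than be generated in place.

        # Concept sketch of software data-at-rest encryption (not vSAN code):
        # encrypt in the software stack so any persistent device can hold the data.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)   # in practice, fetched from a key manager
        nonce = os.urandom(12)                      # must be unique for every write
        block = b"application data destined for persistent media"

        ciphertext = AESGCM(key).encrypt(nonce, block, None)

        # Write nonce + ciphertext to any drive; no self-encrypting hardware required.
        with open("encrypted.blob", "wb") as blob:
            blob.write(nonce + ciphertext)

        # Read back and decrypt in software, independent of the underlying device.
        with open("encrypted.blob", "rb") as blob:
            raw = blob.read()
        assert AESGCM(key).decrypt(raw[:12], raw[12:], None) == block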

    Other HCI vendors, he contended, achieve what they portray as security by anchoring it to some unique feature of the hardware, such as self-encrypting drives.  Because each of those hardware dependencies is unique, he concluded, they present compliance problems.  And as long as an enterprise can’t be fully compliant, it isn’t truly secure.

    “So by doing this in software, we’re allowing folks to deploy encryption on any certified hardware platform of their choice.  If they have existing SSDs in their systems, they can upgrade and deploy them on day one.  If they want to start leveraging the latest technology — the new Intel Optane NVME SSDs — and run in an encrypted environment, we’ll be able to support those out of the gate as well.”

    That gate officially opens May 5.  At that time, VMware will begin licensing vSAN 6.6 for $2,495 per CPU, or $50 per desktop for VDI-restricted workloads.

    9:12p
    Gartner Projects Worldwide IT Spending to Reach $3.5 Trillion in 2017

    Brought to You by The WHIR

    Worldwide IT spending will increase by 1.4 percent from last year, reaching $3.5 trillion in 2017, according to projections released Monday by Gartner. The growth rate was projected to be 2.7 percent in the previous quarterly forecast, but was hit by the rising U.S. dollar.

    “The strong U.S. dollar has cut $67 billion out of our 2017 IT spending forecast,” said John-David Lovelock, research vice president at Gartner. “We expect these currency headwinds to be a drag on earnings of U.S.-based multinational IT vendors through 2017.”

    Adding back the $67 billion cut out by currency fluctuation, the growth rate would be closer to 3.3 percent, suggesting that demand will continue to increase at a healthy rate. The U.S. dollar traded regularly for less than €0.90 a year ago, but closed Monday above €0.94.

    See also: Report: Dallas and Washington, DC See Strong Colocation Growth in 2016

    The data center segment is projected to grow only 0.3 percent, after declining by 0.1 percent in 2016, as the server market continues to drag the segment. Equinix chief evangelist Peter Ferris said during his keynote address at Data Center World last week that Microsoft, Google, and Amazon each spend more in two years than the $17 billion Equinix has invested in data centers over its 18 years.

    “We are seeing a shift in who is buying servers and who they are buying them from,” Lovelock said in a statement. “Enterprises are moving away from buying servers from the traditional vendors and instead renting server power in the cloud from companies such as Amazon, Google and Microsoft. This has created a reduction in spending on servers which is impacting the overall data center system segment.”

    Enterprise software remains the biggest growth segment, and is projected to grow by 5.5 percent to $351 billion in 2017, and by 7.1 percent in 2018, after growing 5.9 percent in 2016. The device segment is projected to swing from a decrease of 2.6 percent in 2016 to a 1.7 percent growth rate, driven by higher average prices for phones in Asia/Pacific and China and by iPhone sales. The IT services segment is expected to grow by 2.3 percent, down from 3.6 percent, to $917 billion, and Gartner expects the segment to see a slight positive impact from increased infrastructure spending by the U.S. government over the next few years.

    Communication services spending, which fell by 1.4 percent in 2016, is projected to shrink by 0.3 percent in 2017, though Gartner projects it will recover with 1.3 percent growth in 2018.

    Overall, Gartner projects growth in 2018 of 2.9 percent.

