Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 2nd, 2016
9:00a
Line-of-Sight Antenna Paves New Data Routes for NYC
Almost a century ago, it was the very symbol of technology and ingenuity: the giant transmitter tower beaming signals into the heavens. The crown jewel of the Empire State Building has always been an antenna.
Today, the word “wireless” has returned to the common vernacular; no longer does it sound antiquated, like “microcomputer,” “Hayes modem,” or “MTV music video.” And in the latest signal that everything old is new again, the biggest news in New York City data centers this week may be the completion of a very large antenna.
Its apex extends to 560 feet over lower Manhattan, atop what had been one of the city’s uglier properties: 375 Pearl Street, at the foot of the Brooklyn Bridge. It’s still known to locals as the Verizon Building, and was once known as One Brooklyn Bridge Plaza.
 The Verizon Building in Lower Manhattan in 2011.
Today, it’s Intergate.Manhattan, complete with the funky re-spelling of “integrate” and the dot. And beginning this year, the limestone-clad tower is getting a resurfacing. Its current principal owner, Sabey Data Center Properties, is also taking full advantage of its nearly unobstructed location, erecting an antenna with a 360-degree vantage point that provides line-of-sight wireless data center access to the boroughs, the financial district, and midtown Manhattan.
In Sight, It Must Be Right
“As you look at the wireless market, and you look at Manhattan,” said Dan Meltzer, Sabey’s vice president for sales and leasing, speaking with Datacenter Knowledge, “you realize it’s a very small space. But with line-of-sight, you actually extend your market. So you’re able to hit Brooklyn, Queens — any area of Manhattan, north, south, east, and west. And if you have height, you can hit points in New Jersey along the Hudson River.”
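Height matters here because of the radio horizon. As a rough illustration (not a figure from Sabey; just a back-of-the-envelope sketch using the standard 4/3-earth-radius approximation, ignoring obstructions), here is how far a 560-foot mast can “see,” with a hypothetical 30-meter rooftop standing in for a receiving site:

```python
import math

FEET_TO_METERS = 0.3048

def radio_horizon_km(height_m: float) -> float:
    # 4/3-earth-radius rule of thumb: distance (km) ~= 4.12 * sqrt(height in meters).
    return 4.12 * math.sqrt(height_m)

def max_link_km(h1_m: float, h2_m: float) -> float:
    # Two stations can see each other roughly up to the sum of their radio horizons,
    # assuming nothing stands in between.
    return radio_horizon_km(h1_m) + radio_horizon_km(h2_m)

mast_m = 560 * FEET_TO_METERS        # apex height cited in the article, ~171 m
rooftop_m = 30.0                     # hypothetical ten-story receiving site
print(f"Mast radio horizon: {radio_horizon_km(mast_m):.0f} km")          # ~54 km
print(f"Mast to 30 m rooftop: {max_link_km(mast_m, rooftop_m):.0f} km")  # ~76 km
```

Even on the conservative, single-antenna figure, that reach comfortably covers the boroughs and the New Jersey side of the Hudson.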
Sabey has housed and managed data centers in the Intergate building for clients since the start of the decade. What the antenna adds — for many Manhattan customers, for the very first time — is an alternative mode of access for carrier-grade connectivity. Whichever wired carrier happens to serve a business on the island will at some point connect with Verizon (the former Bell Atlantic, which had been NYNEX and New York Telephone Co. before that).
Among the antenna’s charter customers are: NuVisions, a provider of Wi-Fi hot spots throughout New York City; broadband ISP Windstream; Google Fiber competitor Brooklyn Fiber; and Transit Wireless, which provides free Wi-Fi service for the city’s subway system. Sabey gives these tenants the opportunity to house their data centers mere yards from the antenna.
This way, their own customers have a way to bypass — quite ironically — the service provided by the telco whose name still adorns the Intergate building.
Brooklyn Fiber, explained Sabey’s Director of Leasing George Panagiotou, drives down the cost of broadband with lower-cost, gigabit access for businesses. It started out doing business not in Brooklyn itself but in the village of Red Hook, about an hour’s drive north on Broadway from lower Manhattan (on a good day). Sabey’s line-of-sight antenna will now open up new avenues for potential Brooklyn Fiber customers further south.
The Other Way ‘Round
 An artist’s rendering of the final state of the Intergate.Manhattan building.
While line-of-sight may not necessarily be customers’ principal means of broadband connectivity, Sabey’s tower gives them the option of a redundant or bypass connection, Panagiotou explained. This way, Brooklyn Fiber and others can offer wireless ISP service for businesses that need more 9s on their SLAs.
“They eliminate what would be the cost of delivering fiber to individual buildings,” he told us, “where they will make agreements with landlords whose tenants are interested in getting high speed Internet bandwidth. They will put antennas on those buildings, and we’ll serve them from 375 Pearl Street. They’ll distribute that bandwidth to tenants inside the buildings, instead of trying to get one of the traditional carriers to pull in some fiber from the street — which would be unlikely, if the investment isn’t going to justify what kind of revenue they could generate from those types of buildings.”
In other words, just as early television transmitters changed the property values of businesses that were the first on the block to open their doors to viewers, line-of-sight connectivity could elevate the stature of business districts that suffer today from the logistical nightmares of obtaining high-bandwidth data service.
Many buildings in Manhattan can only be served by Verizon. And even when there are multiple providers for a property, explained Panagiotou, there are probably a handful of aggregation points — and potential points of failure — where the alternative providers connect with Verizon’s infrastructure.
“So even if you were to get more than one traditional fiber or copper service,” he said, “you’re still going to be hitting stuff that’s going to be in one manhole, one central office, or some other single point of failure. By going with a provider like Brooklyn Fiber or NuVisions, you can get wireless redundancy so that, if ConEd or someone else is down in the street, digging up sidewalks, and they accidentally rip out all the copper or fiber that’s going into one point of entry on the building… for customers who are truly mission-critical and want no chance of going down, they will be able to fail over to that wireless access from 375 Pearl Street.”
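What Panagiotou describes is a classic primary/backup arrangement. Purely as a hypothetical sketch (not anything Sabey or its tenants have detailed), a customer’s edge router could probe the wired path and fall back to the rooftop wireless link when the probe stops answering; real deployments would more likely use a routing protocol or an SD-WAN appliance. The gateway addresses and probe target below are placeholders, and the script assumes a Linux host with root privileges:

```python
import subprocess
import time

# Hypothetical next-hop gateways; real values depend on the customer's setup.
PRIMARY_GW = "203.0.113.1"      # wired fiber path into the building
BACKUP_GW = "198.51.100.1"      # rooftop wireless link back to 375 Pearl Street
PROBE_TARGET = "192.0.2.10"     # an address routed only via the primary path

def primary_is_up() -> bool:
    """Ping the probe target; returns False if all three probes time out."""
    result = subprocess.run(
        ["ping", "-c", "3", "-W", "2", PROBE_TARGET],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def set_default_route(gateway: str) -> None:
    """Point the default route at the given gateway (Linux 'ip', idempotent)."""
    subprocess.run(["ip", "route", "replace", "default", "via", gateway], check=True)

if __name__ == "__main__":
    while True:
        set_default_route(PRIMARY_GW if primary_is_up() else BACKUP_GW)
        time.sleep(30)   # re-evaluate the path every 30 seconds
```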
Of course, in the concrete jungle, not everyone can see this particular tower from where they stand. Some tenants, Panagiotou told us, have the means to triangulate a signal, relaying it from Intergate, around obstructions, and into receiving stations in Brooklyn.
Meltzer said that Sabey operates a meet-me room on the sixth floor of Intergate, where tenants have the option of connecting with the providers of their choice.
From here, Sabey is looking for opportunities to host temporary broadband access facilities for the city’s major events — conventions, sports, and outdoor gatherings. So far, said Meltzer, only a fraction of its roughly 300 antenna positions have been staked out — which may yet convert an old, grey box into the pearl of Manhattan.
Image of 375 Pearl Street in 2011 by Beyond My Ken, licensed through Creative Commons. Title image and artist’s rendering of the refinished building both courtesy of Sabey Data Center Properties. Title image by John Neitzel.

6:30p
HPE Cloud, Storage Chiefs Out in Another Strategy Shift
An HPE corporate blog post Monday, with the benign-sounding headline “Sharpening our Focus in Hybrid IT, Global Sales, & Storage,” announced a corporate realignment that leaves two of its most outspoken executives out of the company.
Bill Hilf, formerly HPE’s senior vice president and general manager for cloud [pictured above], is “leaving HPE to pursue other opportunities,” reads the post, attributed to the executive vice president of the Enterprise Group at Hewlett Packard Enterprise, Antonio Neri. The same fate is ascribed to Manish Goel, formerly the senior vice president for HPE Storage. The post does not say whether they left entirely of their own volition, nor does it apply the typical courtesy of wishing them well.
HPE’s Helion CloudSystem Group and its OpenStack Group are being merged, to form a single Software-Defined and Cloud Group, Neri writes. Ric Lewis, previously SVP/GM for data center infrastructure, will head this new group. Regular readers of Datacenter Knowledge will recall Lewis’ contributed post last March, in which he re-cast his company’s “composable infrastructure” strategy as an integration play.
“Composable infrastructure neatly resolves the dilemma by supporting both the traditional and the new environments,” Lewis wrote. “Composable infrastructure can be deployed incrementally, side-by-side with existing infrastructure, in a way that makes sense for the business.”
It was last October, at the HPE Discover conference in London, that Bill Hilf made his strongest case to date for a data center management strategy focused on workload orchestration. At his side were the head of HPE’s Enterprise Services unit, and the Microsoft general manager in charge of cloud. The three were announcing a partnership that made Azure HPE’s preferred public cloud partner, effectively replacing its own attempt at a public service, the original Helion. (Microsoft had been Hilf’s employer before HPE.)
“We exited the HP public cloud specifically because we were led to believe there was a lot more growth in the market,” Hilf told me at that conference, “and could participate in more growth by partnering with, in many cases, the classic partners that we had for years, as that market expanded from the drive to hybrid. What hybrid really means is, there’s a much broader ecosystem for cloud computing.”
But seven months later, HPE opted to spin off its Enterprise Services unit to be merged with CSC. Gone was the strategy that would see HPE and Microsoft effectively sharing partners and aligning their hybrid cloud platforms, potentially enabling composable infrastructure to link to compute and storage capacities from Azure.
Hilf’s role at the company’s next conference in Las Vegas the following June also appeared to be toned down. Yet it was Manish Goel who, in a meeting with reporters that also included Ric Lewis, defended the Microsoft partnership strategy, including with respect to questions about Microsoft’s intentions for Azure Stack. At that time, it appeared Microsoft would be agnostic about the makes and models of private servers on which Azure Stack would be installed.
![[left to right] Former HPE SVP Manish Goel, and newly appointed chief of the Software-Defined and Cloud Group, Ric Lewis, at HPE Discover 2016 last June.](http://www.datacenterknowledge.com/wp-content/uploads/2016/08/160608-Manish-Goel-Ric-Lewis-HPE-Discover-Vegas.jpg)

“In my view — one guy’s view — public cloud will continue to become more relevant,” remarked Goel last June. “No question. However, that does not obviate the need for private infrastructure. I think the rise of public cloud is making all of us better technology providers, because it’s forcing us as technology providers to take on challenges that we hadn’t historically taken on — like, making the consumption experience of our technology dramatically different, dramatically simpler. Which is why Ric’s entire agenda is driven towards, how do we simplify?
“Tomorrow, it may be something else,” Goel continued. “It may not be simplification, because that doesn’t have the value as maybe something else. But it is certainly making us better technology vendors.”
Replacing Goel in Storage will be Bill Philbin, who came to HPE from NetApp in 2010.
For his part, Lewis had outlined what was, as of June, HPE’s expansion plan for its Helion Cloud Suite. At the same time, he told reporters then, HPE would continue to produce and expand its existing OneView management software “to do more automation and orchestration.”
Cloud Suite, meanwhile, would be expanded to encompass cloud brokering, cloud-native development, “and all of those things you would see from those full-stack kinds of vendors,” said Lewis in June, “because, we believe, that some customers, based on what we’ve seen in ‘wave 1’ of convergence, they just want to buy the stuff, plop it down, and have it do one of three use cases.”
Those use cases were: virtual machine vending (a la Amazon); private cloud infrastructure-as-a-service with application vending; and what Lewis described as “multi-cloud, which is the true hybrid environment, in moving applications back and forth between public and private.
“There is a fourth, which is, give up on it all and run in public cloud,” Lewis continued. “That’s not what we think’s going to happen, by and large, for everybody.”
But just last week, HPE’s corporate blog outlined what was described as a five-phase plan for “digital transformation,” which did not appear to involve buying stuff or plopping down. So once again, HPE’s cloud strategy may be in mid-metamorphosis.

9:19p
Canada Considers Keeping Public Data Stored Within Borders
Brought to You by The WHIR
The Government of Canada released a cloud adoption plan this week that restricts cloud storage of much of its data to Canadian data centers. The plan calls for “secret” and “top secret” data to be stored internally, while “classified” information, including personally identifiable information, may be stored in the cloud, but only within Canada.
Under the plan, unclassified information can be stored anywhere, so long as it is encrypted when it crosses a border.
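Taken together, the draft amounts to a simple placement rule keyed to sensitivity. A minimal sketch of that rule follows; the enum names and dictionary fields are illustrative, not terminology from the Treasury Board documents:

```python
from enum import Enum

class Sensitivity(Enum):
    UNCLASSIFIED = 1
    CLASSIFIED = 2        # includes personally identifiable information
    SECRET = 3
    TOP_SECRET = 4

def placement_policy(level: Sensitivity) -> dict:
    """Rough encoding of the draft strategy's storage rules (illustrative only)."""
    if level in (Sensitivity.SECRET, Sensitivity.TOP_SECRET):
        # Never leaves internal government systems.
        return {"cloud_allowed": False, "location": "government-internal",
                "encrypt_cross_border": False}
    if level is Sensitivity.CLASSIFIED:
        # Cloud is fine, but only in Canadian data centers.
        return {"cloud_allowed": True, "location": "canada-only",
                "encrypt_cross_border": False}
    # Unclassified data can live anywhere, encrypted whenever it crosses a border.
    return {"cloud_allowed": True, "location": "anywhere",
            "encrypt_cross_border": True}

print(placement_policy(Sensitivity.CLASSIFIED))
# {'cloud_allowed': True, 'location': 'canada-only', 'encrypt_cross_border': False}
```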
The country’s Treasury Board, which has been tasked with modernizing the government’s IT practices, released the Cloud Adoption Strategy for public comment, along with Security Control Profile for Cloud and Right Cloud Selection documents, which together outline a plan based on three levels of data security.
Consultations with provincial governments and over 60 industry organizations over a two-year period inform the draft plan, and public feedback will be collected until the end of September.
“Canadians expect governments to make the most of available technologies to improve service delivery,” Scott Brison, president of the Treasury Board of Canada said in a statement. “Cloud computing will help the Government of Canada get better value for taxpayers’ dollars, become more nimble in its operations, and meet the evolving needs of Canadians.”
Shared Services Canada will procure public cloud services for “data of classifications up to Protected B inclusive” over the next year, and the services will be rolled out to federal departments ahead of other levels of government and public institutions.
Peer 1 research has previously shown that Canadian firms consider data sovereignty when selecting a hosting provider, and data sovereignty measures have been passed by many nations, notably including Russia, which restricts not just government data but all data concerning Russian citizens.
However, storing data exclusively in Canada does not necessarily ensure that it will remain in the country, as research indicates that data stored and accessed within Canada often follows routes that cross into the U.S.
This post originally appeared at The WHIR.

10:11p
AppDynamics Adds Microservices Monitoring for Hyperscale
It’s a new style of application development that is having an impact on how modern, hyperscale data centers are managed: microservices. It’s a design that enables individual functions of an application to scale up to meet high demand, as an alternative to replicating entire virtual machines. Following the example of Netflix, more organizations are shifting new workloads to microservice design, including in production. It’s more efficient, it uses less energy, and it may actually be a more sensible way to design applications in the end.
But it’s a bear, or something else that starts with “b,” to manage. Now, an emerging player in the performance management space named AppDynamics is building out the latest version of its App iQ APM suite (with a small “i”) with a new component that’s designed to monitor the performance of applications that utilize microservices.
“Amazon has up to 150 microservices that are hit any time a page is built,” remarked Matt Chotin, AppDynamics’ director of product management, in an interview with Datacenter Knowledge. “One of the big challenges that an organization is going to face is keeping track of the environment. Not having to manually configure your monitoring system, and the ability to automatically build an app, is huge. Manually monitoring and tracking that, would just be very difficult.”
More Than One Way to Slice a Loaf
Maybe. Theoretically speaking, any application is the collection of all its constituent services. If you slice an application into its services, while preserving the relationships between those services, you really shouldn’t be changing the application at all. So monitoring the application (if you’re doing it right) should not change.
But there’s one critical difference: When services scale up individually, the behavior of the application as a whole can change dramatically.
“Microservices are loosely coupled services that are maintained and deployed independently,” writes Donnie Berkholz, who directs research into development and DevOps practices for 451 Research, in a note to Datacenter Knowledge. “They resemble service-oriented architecture (SOA) but in a lightweight and composable form, without all the XML and monolithic middleware.
“Our data shows that more companies are moving toward a dual agility- and risk-driven approach to IT, versus the classic penny-pinching view. Microservices serve as a strategic investment to make that transition.”
AppDynamics’ approach to monitoring microservices-driven applications, in an environment that shares space with conventional applications (“monoliths”), involves what the company calls a business transaction. That’s a fairly common phrase for something that’s, in this context, quite specific: It’s a chain of events that represent a service taking place — for example, a request for data from a database, followed by a response to that request that may contain the data, or may contain an error message.
“AppDynamics creates business transactions,” reads the company’s documentation, “by detecting incoming requests at an entry point and tracking the activity associated with request at the originating tier and across distributed components in the application environment. A default detection scheme exists for each type of framework associated with an entry point (such as a Web Service).”
So the person or people charged with monitoring services with AppDynamics don’t actually need to understand the architecture of the application, or contact the person or people who do. In monitoring the normal behavior of the application, App iQ can detect when and where API requests take place. Once again, theoretically, these would be the same points for a monolithic app as for a microservices app.
But once App iQ has business transactions defined, it can ascertain and present key performance indicators for transactions both inside and outside of microservices contexts.
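AppDynamics does this with agents that instrument entry points automatically, so what follows is only a hand-rolled sketch of the underlying idea, not the App iQ API: mint a transaction ID at the entry point, carry it through each downstream call, and record one timing event per tier. Every name in the snippet is hypothetical:

```python
import time
import uuid
from contextlib import contextmanager

# A business transaction is the chain of events kicked off by one entry-point request.
EVENTS: list[dict] = []

@contextmanager
def span(txn_id: str, tier: str, operation: str):
    """Record how long one tier spends servicing part of a business transaction."""
    start = time.perf_counter()
    error = None
    try:
        yield
    except Exception as exc:          # record the failure, then re-raise it
        error = repr(exc)
        raise
    finally:
        EVENTS.append({
            "txn_id": txn_id,
            "tier": tier,
            "operation": operation,
            "duration_ms": (time.perf_counter() - start) * 1000,
            "error": error,
        })

def handle_checkout(request: dict) -> None:
    txn_id = str(uuid.uuid4())        # minted at the entry point, passed downstream
    with span(txn_id, "web", "POST /checkout"):
        with span(txn_id, "inventory-service", "reserve items"):
            time.sleep(0.01)          # stand-in for a downstream service call
        with span(txn_id, "payment-service", "charge card"):
            time.sleep(0.02)

handle_checkout({"cart": ["sku-123"]})
print(EVENTS)                         # one timing record per tier, tied to one txn_id
```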
Coalescence
“The important thing that you focus on, when you look at the broad overview,” said AppDynamics’ Chotin, “is how these [transactions] are impacted as all those microservices are assembled together. But, with microservices, you’re talking about teams. You can’t necessarily tell fifty distinct teams, ‘Hey, team, everybody look at what’s happening with the overall user experience, and then don’t worry about your individual services.’ Because in the end, one of those services might be a culprit.”
Put another way, in a microservices context, an appropriate APM will need to be able to ascertain performance metrics for the transaction model as a whole, and the individual components that comprise each instance of that transaction, as it traverses the spectrum of the organization’s data center assets.
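To make that concrete, here is a small, assumed example of the two roll-ups such a tool would produce from per-tier timing records: total time attributed to each transaction instance, and the latency profile of each individual service. The sample records are invented:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-tier timing records for two instances of one business transaction.
events = [
    {"txn_id": "t1", "tier": "web", "duration_ms": 120.0, "error": None},
    {"txn_id": "t1", "tier": "payment-service", "duration_ms": 80.0, "error": None},
    {"txn_id": "t2", "tier": "web", "duration_ms": 300.0, "error": None},
    {"txn_id": "t2", "tier": "payment-service", "duration_ms": 260.0, "error": "Timeout"},
]

# Whole-transaction view: time attributed to each transaction instance across tiers.
per_txn = defaultdict(float)
for e in events:
    per_txn[e["txn_id"]] += e["duration_ms"]

# Per-component view: which individual service is dragging the average down.
per_tier = defaultdict(list)
for e in events:
    per_tier[e["tier"]].append(e["duration_ms"])

print(dict(per_txn))                                                # {'t1': 200.0, 't2': 560.0}
print({tier: round(mean(t), 1) for tier, t in per_tier.items()})    # {'web': 210.0, 'payment-service': 170.0}
```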
“To gain the agility and low risk that microservices promise, it must be possible to innovate quickly and fail forward to minimize the time to recovery,” writes 451’s Berkholz. “Both of these require the ability to deploy quickly and independently of other microservices, and low risk is at odds with the manual tweaking that is common in many production settings, so automated pipelines and infrastructure are a must.
“If pieces are segmented into small chunks, and the overall user experience is not significantly impacted when one fails, this creates a dramatic benefit over monoliths that are either up or down.”
App iQ, Chotin told us, is constructed with an API that is already being put to use in environments with automated configuration management tools such as Jenkins, and load-balancing proxies such as NGINX Plus. That said, AppDynamics is angling to be the central repository for all application performance data.
“The challenge of the enterprise is that you have legacy infrastructure and legacy environments, new monitoring environments — you have so many different things going on,” remarked Chotin. “To have different monitoring tools for all of these pieces is difficult. Not only is your environment complex, but your monitoring is complex. We have a unified view of how monitoring should work, that stems from the business transaction.”