Data Center Knowledge | News and Analysis for the Data Center Industry
 

Monday, March 14th, 2016

    12:00p
    Transpacific Data Center Connectivity Hub Launches in Seattle

    As American technology companies expand their business in Asian markets, and as their Asian counterparts tackle markets in the US, demand for high-speed connectivity across the Pacific is skyrocketing.

    New cross-Pacific submarine cable systems are due to come online in the near future to address that demand. The most well-known example is the Faster cable system, backed by Google and several Asian telecommunications and IT services companies and expected to come online this year.

    The other big project is the New Cross Pacific Cable System, which is backed by Microsoft and a group of Asian telcos. NCP is expected to come online in 2017.

    Both systems will land in Oregon on the US side, which will drive demand for data center capacity in the Pacific Northwest by companies wanting to take advantage of all the new transpacific bandwidth.

    Read more: Top Data Center Providers Strike Submarine Cable Deals

    Gateway Between US and Asia Near Seattle

    Recently, a big new data center campus was launched in the Seattle area to take advantage of that demand. The company behind it is Centeris, a recently formed data center provider backed by the Benaroya Company, a real estate firm based in Bellevue, Washington, just outside Seattle.

    Simon Lee, Centeris board director, said the company’s leadership expects the campus to become a major US hub from where companies can access existing and new cable landing stations along the Pacific coast. “We look at ourselves as kind of one core from the US side, and there will be other cores in various countries [in Asia] that basically connect to us,” he said.

    The Centeris Transpacific Hub, located in Puyallup, will be the core from where Asian traffic will be backhauled to other interconnection hubs near the coast and further inland. The closest and most important such hub for Centeris will be the Westin building, the big carrier hotel in downtown Seattle. Other key West Coast hubs are in Silicon Valley and Southern California.

    Centeris has no plans to provide connectivity services itself. The company will focus on providing data center space and power, while relying on partners for connectivity.

    One such partner is Wave, whose network can take traffic to about 80 data centers on the West Coast. Another is Comcast, which has national reach.

    The first phase of the hub, a 50,000-square-foot data hall, is already online, currently serving about five initial customers. Lee could not name any of them, citing confidentiality agreements.

    Some of them, he said, are companies well known to people both inside and outside the IT industry. Others are fairly unknown but significant companies, such as a billion-dollar logistics company that’s currently using about five racks in the facility.

    Adjacent to the data center is a nearly 180,000-square-foot powered shell with access to 10MW of power. Centeris plans to use it for larger customers that need more capacity and more privacy.

    Enough Land and Power for Enormous Campus

    The Transpacific Hub has the potential to become an enormous data center campus, similar in scale to some of the hubs in Northern Virginia that connect to submarine cables carrying traffic between the East Coast and Europe. The site covers nearly 90 acres and has access to 50MW of power, which can be expanded in 25MW increments, Lee said.

    Centeris would welcome other data center providers to the campus, he said. If a carrier wants its own building there, the company will be happy to provide the building and the power as well.

    Lee considers the campus’s proximity to Seattle a big advantage over other data center clusters in the Pacific Northwest, places like Quincy, Washington, or Prineville, Oregon. He calls the city – home to Amazon headquarters and a neighbor to Redmond, where Microsoft is based – the “cloud capital.”

    In his opinion, a big data center and connectivity hub near Seattle that’s similar in scale to what can be seen in places like Northern Virginia or Dallas is “inevitable.” Land, power, and a skilled workforce are incredibly expensive in San Francisco and Silicon Valley, and there is no major tech business hub in Oregon, even though there is a high concentration of submarine cable landing stations on the Oregon coast.

    The infrastructure being built out today – the submarine cables, the landing stations, and the data centers on both ends – will power connectivity between two of the world’s largest economic regions, the US and Asia, for the next 100 years, Lee said. “I’ve been seeing it for years. All these things were kind of inevitable.”

    3:00p
    How to Avoid the Outage War Room

    Bernd Harzog is CEO of OpsDataStore.

    Most IT pros have experienced it: the dreaded war room meeting that starts immediately after an outage takes down a critical application or service. How do you avoid it? The only reliable way is to avoid the outage in the first place.

    First, you need to build in redundancy. Most enterprises have already done much of this work. Building redundancy and disaster recovery into systems has been a best practice for decades. Avoiding single points of failure (SPOF) is simply mandatory in mission-critical, performance-sensitive, highly distributed, and dynamic environments.

    Next, you need to assess spikes in load. Most organizations have put methods in place to “burst” capacity. This most often takes the form of a hybrid cloud, where the base system runs on premises and the extra capacity is rented as needed. It can also take the form of hosting the entire application on a public cloud like Amazon, Google, or Microsoft, but that carries many downsides, including the need to re-architect the applications to be stateless so they can run on an inherently unreliable infrastructure.
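
    To make the idea concrete, here is a minimal Python sketch of the kind of burst decision described above; the thresholds, the notion of a “capacity unit,” and the simulated utilization readings are assumptions for illustration, and a real deployment would call a cloud provider’s provisioning API instead of just returning a count.

        # Illustrative sketch of a hybrid-cloud "burst" policy: keep the base
        # workload on premises and rent public cloud capacity only during spikes.
        def plan_capacity(on_prem_utilization: float, rented_units: int,
                          burst_at: float = 0.80, release_at: float = 0.50) -> int:
            """Return how many cloud capacity units should be rented right now."""
            if on_prem_utilization > burst_at:
                return rented_units + 1      # load spike: rent one more unit
            if on_prem_utilization < release_at and rented_units > 0:
                return rented_units - 1      # load has dropped: give a unit back
            return rented_units              # steady state: no change

        # Example run over a short, simulated sequence of utilization readings.
        rented = 0
        for load in [0.55, 0.82, 0.91, 0.87, 0.60, 0.45, 0.40]:
            rented = plan_capacity(load, rented)
            print(f"utilization={load:.2f} -> rented cloud units={rented}")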

    However, even organizations that have designed their infrastructures to account for all of the common outage scenarios regularly encounter trouble. How can this be? The primary reasons are:

    1. Most enterprises do not have the monitoring tools in place to know the current state of their systems and applications in anything close to real time. Most of the monitoring tools enterprises rely on were designed for the environments that prevailed a decade ago, when systems were not distributed and dynamic and applications did not change daily.
    2. Enterprises do not accurately know the current state of the end-to-end infrastructure for a transaction (everything from the transaction itself down to the spindle on the disk that supports it), so they have no way to anticipate problems and deal with them before they become outages.
    3. Most outages are preceded by performance problems. In other words, in most cases, outages do not occur suddenly. Performance problems show up as increased response times and reduced transaction throughput. So “blackouts” are most often preceded by “brownouts” (a short illustrative sketch follows this list).
    4. Yet most enterprises have extremely immature approaches to understanding both end-to-end transaction response time and throughput (across the application stack) and full-stack transaction response time and throughput (from the click in the browser to the write on the hard disk).
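
    To illustrate the “brownout before blackout” point, here is a small Python sketch that flags a suspected brownout when response time climbs and throughput falls relative to a recent baseline; the window size and the 2x / 0.5x thresholds are assumptions for illustration, not values from any particular monitoring product.

        # Illustrative only: detect a "brownout" (rising response time, falling
        # throughput) before it turns into a full "blackout."
        from collections import deque
        from statistics import median

        class BrownoutDetector:
            def __init__(self, window: int = 60, latency_factor: float = 2.0,
                         throughput_factor: float = 0.5):
                self.latencies = deque(maxlen=window)    # recent response times (ms)
                self.throughputs = deque(maxlen=window)  # recent transactions/sec
                self.latency_factor = latency_factor
                self.throughput_factor = throughput_factor

            def observe(self, latency_ms: float, tps: float) -> bool:
                """Record one measurement; return True if a brownout is suspected."""
                self.latencies.append(latency_ms)
                self.throughputs.append(tps)
                if len(self.latencies) < self.latencies.maxlen:
                    return False  # not enough history yet to form a baseline
                return (latency_ms > self.latency_factor * median(self.latencies) and
                        tps < self.throughput_factor * median(self.throughputs))

        # Example: the last reading (latency up, throughput down) trips the alarm.
        detector = BrownoutDetector(window=5)
        for latency, tps in [(100, 500), (105, 490), (98, 510), (102, 495), (230, 220)]:
            print(detector.observe(latency, tps))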

    To address the above issues, a new approach to monitoring is necessary. Monitoring has to be:

    1. Focused on the correct metrics. Too many monitoring products attempt to infer performance from resource utilization metrics. This no longer works. Response time, throughput, error rates, and congestion are the crucial metrics, and they need to be collected at every layer of the stack: the transaction, the application, the operating system, the virtualization layer, the servers, the network, and the storage arrays (see the sketch after this list).
    2. Real time. Every monitoring vendor claims its product operates in real time, but most really don’t. There are delays of 5 to 30 minutes between the time something happens and the time the resulting metric or event surfaces in the product’s console. Look for monitoring that operates in true real time: milliseconds, not minutes.
    3. Comprehensive. Today each monitoring tool focuses on one silo or one layer of the stack. This leads to the dreaded “Franken-Monitor,” where enterprises own many (between 30 and 300) different tools that still somehow have gaps between them. A plethora of tools also means a plethora of disparate databases in which monitoring data is stored, none of them integrated with the others. Today’s enterprises need a monitoring tool that is comprehensive, with a view of the entire stack.
    4. Deterministic. Most monitoring tools rely on either statistical estimates of a metric or rolled-up averages that obscure the true nature of a problem. The focus needs to shift to actual values that measure the actual state of the transaction or the infrastructure, not an estimate or an average.
    5. Pervasive. Many organizations implement transaction and application monitoring for only a small fraction of their transactions and applications, leaving themselves blind when something happens with an unmonitored application or transaction. The APM industry needs to undergo a big change to make pervasive monitoring affordable for customers.
    6. Built on big data. Most monitoring tools are built around SQL back ends that limit the amount and frequency of the data that can be collected, processed, and stored. Monitoring needs to embrace real-time big data at scale, allowing metrics to be collected from a hundred thousand servers, processed in real time, and immediately turned around for analysis and consumption.
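
    As a concrete illustration of points 1 and 4, here is a minimal Python sketch of the kind of record such a pipeline might emit: raw, timestamped measurements tagged by layer rather than rolled-up averages. The field names, layer labels, and the print-instead-of-publish shortcut are assumptions for illustration only; at scale the samples would go to a streaming big data backend rather than standard output.

        # Illustrative only: emit raw, per-measurement samples tagged by layer.
        import json
        import time
        from dataclasses import dataclass, asdict

        @dataclass
        class MetricSample:
            timestamp_ms: int   # when the measurement was taken
            layer: str          # "transaction", "application", "os", "hypervisor", ...
            source: str         # host, VM, or device that produced the sample
            metric: str         # "response_time_ms", "throughput_tps", "error_rate", ...
            value: float        # the actual measured value, not an average

        def emit(sample: MetricSample) -> None:
            # A real pipeline would publish this to a streaming backend sized for
            # hundreds of thousands of servers; here we just print the JSON record.
            print(json.dumps(asdict(sample)))

        emit(MetricSample(
            timestamp_ms=int(time.time() * 1000),
            layer="transaction",
            source="web-frontend-01",
            metric="response_time_ms",
            value=412.7,
        ))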

    In summary, both vendors and enterprises need to take a completely different approach to performance monitoring if they want to avoid the dreaded outage war room.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:42p
    Obama Weighs Privacy Against Security in Age of Apple v. FBI
    By The WHIR

    File this under: politicians, they’re just like us.

    US President Barack Obama made a stop at Torchy’s Tacos in Austin, TX, before presenting a keynote at the South by Southwest Interactive (SXSW) festival on Friday afternoon.

    “I love Austin, Texas,” Obama said to a cheering crowd as he sat down with Evan Smith, founder, CEO and Editor in Chief of the Texas Tribune to discuss 21st century civic engagement in a conversation that touched on encryption, the digital divide, and the private sector’s role in modernizing government’s use of technology.

    Obama said that at this moment in history, when technology, globalization, and the economy are changing so fast, there are enormous opportunities, but the disruptions can be unsettling. In an hour-long conversation, Obama weighed in on several topics, including the controversial Apple v. FBI encryption case.

    Apple v. FBI: Where the President Stands on Privacy v. Security

    For obvious reasons, Obama couldn’t comment on the specific Apple v. FBI case, but he did talk about privacy v. security at a broad level – the delicate balancing act of ensuring citizens are safe while respecting their fundamental right to privacy.

    “All of us value our privacy, and this is a society that is built on the Constitution and the Bill of Rights, and a healthy skepticism of overreaching government,” he said, noting that the Edward Snowden revelations have elevated awareness of these issues, despite the dangers to US citizens being “vastly overstated.”

    Read more: Only 5% of Americans Unaware of Government Surveillance Programs in Post-Snowden Era

    “Our intelligence agencies are pretty scrupulous about people on US soil,” he said, though they did identify “excesses overseas.” Of course, this is to be taken with a grain of salt, as many reports over the past few years have found otherwise; in 2014, for example, The Washington Post ran an investigative report that found the NSA had been tracking the communications of ordinary US citizens.

    Obama said: “I am of the view that there are very real reasons that government can’t willy-nilly get into everyone’s smartphones that are full of very personal data.”

    “The question we now have to ask is if technologically it is possible to make an impenetrable device or system – the system is so strong; there is no key or no door – then how do we apprehend the child pornographer, how do we disrupt a terrorist plot; what mechanisms do we have available?”

    Part of the government’s plan here is to engage the private sector, as it has done in other areas with initiatives like the US Digital Service.

    Read more: Obama’s $19B Cybersecurity Budget to Address Talent Gap, Outdated Tech

    The US Digital Service team is a “world-class technology office inside of government” that helps across agencies, Obama explained. It was brought forth after the botched rollout of the government’s Affordable Care Act website.

    “This was a little embarrassing for me because I was the cool, early adopter president. My entire campaign had been premised on having really cool technology,” Obama said.

    He said the private sector, and top talent from companies like Google and Facebook, are making an “enormous difference” in areas like making sure veterans are getting services on time and fixing clunky and bloated government systems.

    “The reason I’m here really is to recruit all of you. It is to [ask] you as I’m about to leave office how can we come up with new ideas, platforms, approaches to solve some of the big problems we’re facing today.”

    “I want to make sure that the next president and government from here on out is constantly in improvement mode.”

    Read more: Where do the presidential candidates stand on encryption?

    Whether it’s encryption, addressing the digital divide where many Americans don’t have access to the Internet at home, or other issues, Obama said these are “solvable problems”: we just have to be willing to participate and not wait passively for someone else to solve them.

    This first ran at http://www.thewhir.com/web-hosting-news/what-obama-thinks-of-privacy-vs-security-in-the-age-of-apple-vs-fbi

    7:08p
    Vapor IO and BaseLayer Integrating Modular Data Center Tech

    This marriage has birthed a data center Russian Doll of sorts.

    Two of the more unusual data center technology companies, Vapor IO and BaseLayer, are integrating their modular data center technologies, including hardware and software. The combination results in one of the industry’s highest-power-density data center footprints, deployable just about anywhere with access to power and network connectivity.

    At least one customer is already taking advantage of the combo, according to Vapor IO CEO and founder Cole Crawford. The customer, whose name he could not disclose due to confidentiality agreements, is looking to deploy “north of 20” BaseLayer modules – which are basically full-featured data centers that come in a form factor similar to a shipping container – and many of them will have Vapor IO’s Vapor Chambers inside.

    The modular data center will be built inside a warehouse north of Austin, Crawford said.

    He didn’t specify which BaseLayer modules the customer will deploy, but the Vapor Chamber – a cylindrical pod, 9 feet in diameter, with six wedge-shaped IT racks – is now supported in two of the Chandler, Arizona-based company’s modules. One can house three chambers (up to 450kW), and the other can house five, “and you’re just shy of 1MW,” Crawford said.

    “Think about it,” he said. “Thirteen hundred square feet, almost a megawatt of power. That’s awesome. Very dense.”
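
    For a sense of scale, roughly 1MW spread across about 1,300 square feet works out to on the order of 750 watts per square foot, several times the power density of a typical enterprise data hall.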

    One of BaseLayer’s data center modules (Photo: BaseLayer)

    IO, the company BaseLayer came out of, was a pioneer in the data center provider space about six years ago, introducing a business model that combined being a technology company with being a provider of data center space and power. The company engineered and manufactured data center modules customers could buy or lease and keep either in one of IO’s warehouses or in their own locations.

    IO also created its own Data Center Infrastructure Management (DCIM) software, originally named IO.OS. The company used it to manage its customers’ data center infrastructure and also sold it as a stand-alone DCIM product.

    Last year, IO was split into two companies: IO the data center provider and BaseLayer the technology company. BaseLayer took over the modular data center technology and software products, and IO became its customer.

    Austin-based Vapor IO is a much younger company, which shook up the data center industry last year when it came out of stealth with its Vapor Chamber, challenging one of the most fundamental concepts in data center design: the straight data center aisle.

    Vapor Chamber from all angles (Image: Vapor IO)

    The chamber packs a lot of IT power into a relatively small footprint. Instead of the hot and cold aisles you’ll find in a typical data center, it takes in cold air from its surroundings and pushes hot exhaust air into a “hot air column” at its center. A variable-speed fan at the top of the chamber sucks the hot air out.

    Besides integrating their hardware, BaseLayer and Vapor IO are also integrating software. BaseLayer’s RunSmart OS, the DCIM software formerly known as IO.OS, will support Vapor IO’s open source server management platform OpenDCRE, Crawford said.

    Read more: Vapor IO Wants to Bring Server Management Tools Out of the 90s

    OpenDCRE does what Intel’s 17-year-old Intelligent Platform Management Interface does, surfacing server health data about the operating system, CPU, or firmware via Baseboard Management Controllers. It is an open source alternative to proprietary IPMI, but it also presents data in a way that’s more appropriate for modern systems, according to Crawford, using an API and the popular JSON format.
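
    To give a sense of what consuming a JSON-over-HTTP hardware-monitoring interface of this kind looks like, here is an illustrative Python sketch; the host, route shape, and response field below are placeholders rather than OpenDCRE’s actual API, which is documented by the project.

        # Illustrative only: read a sensor value from a hypothetical JSON/HTTP
        # endpoint instead of issuing a binary IPMI request to a BMC.
        import requests

        BASE_URL = "http://opendcre.example.internal:5000"  # placeholder host

        def read_temperature(board_id: str, device_id: str) -> float:
            # Hypothetical route shape; any HTTP client can consume the JSON reply.
            url = f"{BASE_URL}/read/temperature/{board_id}/{device_id}"
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            return float(response.json()["temperature_c"])  # assumed field name

        if __name__ == "__main__":
            print(read_temperature("00000001", "0001"))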

    At the Open Compute Summit this month, Vapor IO announced updates to OpenDCRE, which now supports encryption and will also support OpenBMC, Facebook’s open source alternative to proprietary BMCs, and Redfish, an open standard for hardware management.

