Data Center Knowledge | News and analysis for the data center industry
 

Thursday, April 21st, 2016

    12:00p
    Survey: Half of IT Pros Have No Edge Data Center Plans

    Edge data centers have been a hot topic for about two years, fueled by the grand expansion ambitions of the data center providers that chose to go after the edge market.

    Companies like EdgeConneX, whose expansion ambitions were the grandest (it went from zero data centers to 20 in a period of two years), and vXchnge, which also expanded quickly, primarily by buying existing facilities (in one deal last May, for example, it acquired eight SunGard facilities), have gone after the demand for data center space outside of the top markets.

    An edge data center, essentially, is a facility where long-haul network carriers interconnect with local ISPs and internet content providers who cache their data in the facility so that they don’t have to pay to transport it from the big cities. The effect is described as extending the internet’s edge, “edge” meaning the last stop from where content is delivered to the consumer.

    Read more: How Edge Data Center Providers are Changing the Internet’s Geography

    While edge data centers are an interesting and relatively new topic, the amount of interest IT professionals have in actually using them turns out to be mild, according to a recent survey by Green House Data, a service provider with data centers in Oregon, Wyoming, and New Jersey.

    The company surveyed close to 500 IT pros, almost 40 percent of whom were “executive level,” and found that the interest they do have in edge data centers is mostly for future deployments. Eighteen percent of respondents said they use an edge data center today, and 46 percent said they are planning to add one within the next 12 months.

    More than half of respondents said they have no plans to add an edge data center.

    [Chart: Edge data center plans for future deployments]

    The survey also found that not everybody agrees on what an edge data center is. More than half said the most important features of an edge data center were high-reliability design and carrier neutrality, which are really the top requirements for any colocation facility.

    Another 45 percent were closer to the mark, saying the top requirements were access to a wide variety of CDNs (Content Delivery Networks) and serving a major portion of bandwidth to the local population.

    [Chart: Factors respondents consider essential in an edge data center]

    Interestingly, only about one-quarter of respondents said edge data centers had to be located away from major metro areas. It’s important to make the distinction between a top data center market and a major metropolitan area, since there are many metros that have one characteristic but not the other.

    For example, EdgeConneX considers Phoenix an edge data center market, but the Phoenix-Mesa-Scottsdale metropolitan area ranks as the 12th largest in the country, much bigger than San Jose-Sunnyvale-Santa Clara, which ranks 35th in terms of population but which no one would dispute is a top data center market and a core location in the internet’s geography.

    Edge data center companies do want to be in major metros that don’t already have key network interconnection and data center hubs, since their core value proposition is reducing the cost of delivering content to end users.

    [Chart: Perceived advantages of edge data centers]

    3:00p
    The Future of Computing – Moore’s Law, but Not as We Know it

    Darren Watkins is Managing Director of VIRTUS Data Centres.

    Moore’s Law may well be coming to an end with respect to microprocessors, but if processing power is to keep growing, especially in today’s digital world of Big Data, other areas of computing need to be examined if computing as a whole is to progress and improve.

    Drawing on vast pools of number-crunching resources in the cloud is one of the main ways computing can continue to advance. By sharing computing capacity, processing capability improves, which enables businesses to be more effective and to innovate.

    I am old enough to remember SETI (the Search for Extra-Terrestrial Intelligence) when it was big in the ’90s. It was software you could download so that, when your computer was idle, its capacity could be shared with systems around the world to mine massive amounts of data in the search for extra-terrestrial intelligence. It was one of the first examples of the cloud principle: using shared resources.

    The principle is the same for public cloud today. Rather than having 20 owned computers in a data centre, by connecting to a multitude of other systems through the cloud, people can share the benefit of combined processing power.

    So how does this relate to Moore’s Law? The cloud has to live somewhere, and the data centre is its home.

    Cloud has been one of the most talked about subjects in the tech industry for more than 10 years, and it has taken that long to become mainstream. But now it is, and the take-up is meteoric. For a long time there was an oversupply of data centre space in the market, but space will quickly be consumed if data centre companies don’t continue to build. Today, capacity needs have increased exponentially and companies don’t buy what they used to a few years ago, when the average take-up was a couple of hundred kW. Now businesses are buying multiple megawatts in one go. And there aren’t many data centre providers who can deliver that kind of space.

    Moore’s Law is about the doubling of processing power every two years. If you look at the consumption of cloud to satisfy the future of computing, Moore’s Law still applies at the data centre level. If we think of a data centre as a silicon chip (because it effectively provides processing), capacity will need to continue to double year on year. VIRTUS is a good example. It has gone from 6MW when it had one site, to 40MW and three sites in 18 months. In these terms, VIRTUS will need 80MW or more of data centre capacity in a further 18 months. Data centre organisations need to work very closely with cloud providers to understand their prolific growth rates if they are to be able to meet the demands of the future of computing.
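
    To make the arithmetic behind that projection explicit, here is a minimal sketch in Python; the 40MW starting point and the 18-month doubling period are taken from the VIRTUS figures above, while the horizon is an illustrative assumption.

    ```python
    # Back-of-the-envelope projection of data centre capacity under a
    # Moore's-Law-style doubling assumption. The 40 MW starting point and the
    # 18-month doubling period come from the VIRTUS figures cited above;
    # the horizon is illustrative.

    def projected_capacity_mw(start_mw: float, months: float,
                              doubling_months: float = 18.0) -> float:
        """Capacity after `months`, assuming it doubles every `doubling_months`."""
        return start_mw * 2.0 ** (months / doubling_months)

    if __name__ == "__main__":
        for months in (0, 18, 36, 54):
            print(f"after {months} months: {projected_capacity_mw(40.0, months):.0f} MW")
        # Prints 40, 80, 160, and 320 MW; the 80 MW after a further 18 months
        # matches the projection in the text.
    ```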

    But how else can data centre providers prepare for the potential end of Moore’s Law for microprocessors? In the UK we don’t have an abundance of real estate on which to build data centres, so we need to look to new technologies if we are to improve computing capacity. Increasing speed and the availability of power will be major factors.

    Photonics is already being explored as a way to increase processing speed, albeit at an early stage of research. Some labs, for example Intel’s in Texas, are testing photonics, which uses light so data is processed more quickly and with no resistive losses. This prevents heat from being generated and enables processing at the speed of light, reducing the number of processors needed because each one is much faster. This would further extend Moore’s Law by increasing processing capabilities, thus starting the cycle again.

    For any computer to work, it needs power, and data centres need lots of it. If Moore’s Law applies to data centres, it will also apply to power. The danger is that the UK could face a power shortage in the future because of the rate of consumption and the time it takes to build power plants. The most innovative data centre providers are mitigating this potential risk by future-proofing their energy requirements. At VIRTUS, this is an area we are already focused on. We are investing time and resources to look into self-generation of power by standard means and by alternatives such as nuclear batteries. By looking ahead, we can continue to aid the future of computing.

    So, Moore’s Law may be coming to an end in terms of microprocessors, but it is only moving along the supply chain and the future of computing will continue to improve.

    5:52p
    Data Center Chief Dean Nelson Leaves eBay

    Dean Nelson, who has overseen eBay’s global data center strategy for more than six years, has left the company. Eddie Schutter, a former top infrastructure and product development engineer, who joined eBay last year, has stepped into Nelson’s role, a company spokesperson said in an email.

    Nelson has been the face of data center innovation at eBay during his time there. On his watch, the company deployed in production some of the more unusual critical infrastructure ideas, such as containerized, or modular, data centers, ultra-high-density power and cooling infrastructure, and fuel cells.

    One of his more recent projects, a data center eBay designed and built in South Jordan, Utah, is likely the world’s only data center powered entirely by fuel cells. The facility, which came online in 2013, has no diesel generators, using the local utility grid as its backup power source.

    Read more: A Closer Look at eBay’s Bloom-Powered Data Center

    Before Utah, the company launched a data center in Phoenix, which had both traditional raised-floor space inside and data center containers on the roof of the building for rapid, high-density capacity expansion. It was also a showcase for energy efficient design – a high-density data center that relied primarily on outside air for cooling in the hot Phoenix climate.

    Nelson joined eBay after about six years at Sun Microsystems, where he started as a technical manager at the company’s data center engineering labs and worked his way up to senior director of global lab and data center design services. In 2010, Sun was acquired by Oracle.

    Read more: eBay Shifts to Water-Cooled Doors to Tame High-Density Loads

    Nelson’s LinkedIn profile says he has founded a new organization called Infrastructure Masons, a non-profit collaboration and information sharing group for infrastructure professionals. Other “masons” in the group, making up its advisory council, include data center heads from Google and Microsoft, Switch founder and CEO Rob Roy, former Digital Realty Trust CTO Jim Smith, and eBay’s Schutter, according to the description on LinkedIn.

    “These leaders have personally developed more than $1B of technical infrastructure each,” the description reads.

    We have reached out to Nelson and will write an update with more details once we hear back.

    Schutter started at eBay in July 2015 as head of global data center network and security services, reporting to Nelson. He joined the company after more than four years at AT&T, where he first worked as lead principal technical architect of global technical space operations and later as director of new technology product development engineering for AT&T Labs.

    7:01p
    Microsoft: Bigger Underwater Data Center in the Works

    Microsoft’s mad-scientist data center research crew appears to have liked the results they’ve seen after submerging a relatively small underwater data center pod somewhere off the coast of California last year as a test. The team has stepped up its underwater data center ambitions, the project’s lead told a conference in New York Wednesday.

    While still in preliminary planning stages, the next underwater deployment may be about four times the size of the first pod, or about the size of a shipping container, Ben Cutler, the project’s manager, said, according to Data Center Frontier.

    The first pod, a 10-by-7-foot cylindrical shell that contained a single rack of servers, went underwater around August of last year. The Project Natick team pulled it out and brought it back to the Microsoft headquarters in Redmond, Washington, in December to collect experimental data.

    Read more: Cloud Underwater? Microsoft Tests Submarine Data Center

    There are multiple motivations behind the experiment, one of them being that half of the world’s population lives close to ocean shores, and it’s a lot easier to get permits to submerge equipment underwater than it is to construct data centers on land, Cutler told DatacenterDynamics.

    He presented on the project at this week’s DatacenterDynamics Enterprise conference in New York.

    Microsoft also likes the consistency of deploying close to the ocean floor, which has relatively consistent water temperature and doesn’t get disturbed by storms and currents. “The ocean is more of a standard place,” Cutler told DCD. “It’s more consistent, both physically, and in the laws in the ocean, which are more consistent.”

    Read more: Microsoft Moves Away from Data Center Containers

    There’s also a sustainability aspect to the project. The vision, in the long run, is to have underwater data centers that are only connected to shore by a network cable. They would be powered by turbines that leverage tidal energy and cooled by ocean water. These submarine IT facilities could also become artificial reefs, becoming ecosystems filled with marine life.

    [Photo: Microsoft Project Natick assembly]

    After 105 days of operation at a depth of 30 feet, the results Cutler’s team has observed from the first capsule are encouraging. None of the hardware in the pod failed, and its cooling system performed more efficiently than the researchers expected.

    They are now planning a much larger experimental deployment, which could have as much as half a megawatt of IT capacity, he said. In the long run, Cutler envisions entire 20 MW seaborne server farms providing low-latency cloud services to densely populated coastal areas around the world.

    There are many barriers Project Natick will have to overcome before the idea becomes a viable solution.

    One potential problem could be warming of the water around a larger server farm. While the artificial-reef aspect of it may be good for marine life, as our contributor Mark Monroe writes, the heat it would generate could create microclimates and attract unexpected species.

    Cutler told DCD that the next Natick deployment will probably be in deeper waters, where there is less marine life, and have a “spaghetti-like” heat exchanger, making it less attractive for plants and animals.

    Another problem the researchers will have to address is slack tide. They will have to figure out how to power the servers during the two periods each day when there is no tidal energy to move the turbines.

    There’s also the question of cost. According to Monroe, it can cost 10 to 20 percent more to build water-tight modules than it costs to build on-land modular data centers in use today.

    Read more: Reality Check: Can Underwater Data Centers Really Work?

    7:21p
    Capital One Builds Tool to Cut Its AWS Usage
    By Talkin’ Cloud

    Capital One, an Amazon Web Services customer, has released a new open source tool that helps organizations define policies around cloud usage.

    Called Cloud Custodian, the tool is a rules engine that Capital One has used internally to reduce its AWS resource usage by 25 percent, according to a report by TechCrunch.

    The announcement was made this week at AWS Summit in Chicago where AWS made a number of releases and updates.

    Cloud Custodian uses CloudWatch Events, which AWS released in January to provide an efficient way to monitor events. The tool also leverages the Lambda service, announced at last year’s re:Invent conference, which can launch a set of resources for a given set of rules for a set period of time, according to TechCrunch.

    By using Cloud Custodian, AWS customers can monitor resources and turn off instances when they aren’t being used. The result is cost savings that could be fairly significant, depending on the number of instances in use.
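
    Cloud Custodian expresses such rules as declarative policies; purely as an illustration of the underlying idea, here is a minimal, hypothetical Python sketch using the AWS SDK (boto3) that finds running EC2 instances with low average CPU and stops them. The 5 percent threshold, the three-day window, and the hourly metric period are assumptions made for the example, not Capital One’s actual policy.

    ```python
    # Hypothetical sketch of the idea behind Cloud Custodian: find running EC2
    # instances whose average CPU has been low and stop them. The threshold and
    # window below are illustrative assumptions, not Capital One's policy.
    from datetime import datetime, timedelta, timezone

    import boto3

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")

    IDLE_CPU_PERCENT = 5.0        # assumed idleness threshold
    LOOKBACK = timedelta(days=3)  # assumed observation window

    def average_cpu(instance_id: str) -> float:
        """Average CPUUtilization over the lookback window, via CloudWatch."""
        end = datetime.now(timezone.utc)
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=end - LOOKBACK,
            EndTime=end,
            Period=3600,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        # Treat instances with no datapoints as busy rather than stopping them.
        return sum(p["Average"] for p in points) / len(points) if points else 100.0

    def stop_idle_instances() -> None:
        """Stop running instances whose average CPU is below the threshold."""
        paginator = ec2.get_paginator("describe_instances")
        pages = paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        for page in pages:
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    iid = instance["InstanceId"]
                    if average_cpu(iid) < IDLE_CPU_PERCENT:
                        print(f"Stopping idle instance {iid}")
                        ec2.stop_instances(InstanceIds=[iid])

    if __name__ == "__main__":
        stop_idle_instances()
    ```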

    Cloud Custodian is available on GitHub.

    Netflix is another AWS customer that has released open source tools developed internally for managing its own AWS cloud infrastructure, including a tool it developed to monitor security threats.

    This first ran at http://talkincloud.com/cloud-computing/capital-one-open-sources-tool-help-monitor-aws-usage

    8:00p
    IT Innovators: Using Virtualization to Realize Agility and Cost-Savings in Network Services

    As the movement to virtualize and automate applications and services for faster deployments and cost savings gains momentum, network function virtualization (NFV) has become an important technology to watch. Ahead of his recent presentation at the NFV World Congress, “Getting the Stars to Align,” Ron Haberman, vice president of Nokia’s CloudBand Business Unit, sat down for a Q&A with IT Innovators to discuss NFV and how it affects IT professionals.

    What is the promise of NFV?

    NFV is changing the paradigm for the way network services are created. It has been, and continues to be, an evolutionary process with many pieces in the mix. That means we’re right now in a bit of a chaotic environment. We have standards bodies defining the architecture, open source organizations creating software, multiple ecosystems building specific virtual network functions (VNFs) to work with different technologies, and different vendors creating their own VNFs—all separate from each other. Now, the focus is on bringing those parts together and lining them up to provide end-to-end solutions that can be deployed. To do that, the industry needs to shift from creating technology to enabling specific use cases.

    How is this promise realized?

    In my view, it’s primarily about focusing on specific use cases and growing them into full-blown systems, as opposed to going about it the other way around. There are megatrends around what is happening in the Internet of Things (IoT) and the move to 5G; the aim is to create some alignment between all of these activities and the megatrends, and to understand how they correlate.

    What use cases would you prioritize?

    Today we’re looking primarily at Voice over LTE (VoLTE), security services, service chaining and virtualized customer premise equipment (vCPE), which is an instance of software running in the cloud that effectively creates the IP connectivity for a particular branch in an enterprise to connect to a VPN, firewall, Internet or other services.

    What is the link between NFV and the software-defined data center (SDDC)?

    Moving a function to the cloud in a SDDC, being able to launch it on demand and scale it dynamically as necessary, requires a lot of factors to be in place. You need to have an appropriate data center based on proximity, ownership, capacity, and other factors. So as we talk about NFV, we look at a segment of the generic cloud type of deployments that add feature needs in the different layers, whether it’s the data center itself, the orchestration or the VNF management.

    Is virtualization a stepping stone in the transition to SDDC?

    Yes. The ability to virtualize the type of communication you have in the enterprise—whether voice, data, text messaging, or text collaboration in groups—is just one example we see more and more in the market. Another example is the creation of private access point names (APNs) as an extension of the mobile network such that mobile devices that are part of a particular enterprise can access the network into its own instance.

    What innovations in NFV will have the most impact on the industry?

    At a very high level, the goal of NFV is to automate and create as much agility in the ability to deploy services as possible. The normal model of bringing an application, or VNF, and upgrading it in a network to provide it to the enterprise customer base, has been measured in months, sometimes even many months. What NFV brings to the table is, first and foremost, uniformity of the hardware. With the cloud, you have a resource that you can use for pretty much any application you bring. Furthermore, the automation coming from the orchestration software is such that you can actually shorten the length of time to deploy a particular VNF and test it automatically. The result is that you can now connect it to an enterprise in just a matter of seconds.

    What does this innovation mean to IT professionals?

    Their ability to get a service attached to their virtual private network (VPN) today will happen much, much faster. Instead of waiting for service providers to add that over time, it will be available faster. And the management, the expansion of an instance, for example, will be automated and in their control. They won’t need to wait for service providers to deploy more appliances because it’s available in the cloud.

    Can IT professionals influence the adoption of NFV?

    I think that IT professionals are actually driving the use cases quite a bit. They can influence which type of VNF is prioritized and, with that, the type of use cases that service providers would care to deploy. The more of those use cases and software they identify and discuss with their service providers, the more likely it will be that those use cases and software actually make it into these networks in the short term.

    How does this collaboration take place?

    It takes place via discussions and presentations in environments like this conference. We normally don’t see even large enterprises present their needs. I think that presenting and participating at industry events, particularly the OpenStack Summit, would go a long way in helping define those needs. Working with vendors like us would be very beneficial to help connect the dots.

    Christy Peters is a writer and communications consultant based in the San Francisco Bay Area. She holds a BS in journalism and her work covers a variety of technologies including semiconductors, search engines, consumer electronics, test and measurement, and IT software and services. If you have a story you would like profiled, please contact her at christina_peters@comcast.net.

    The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.

