Data Center Knowledge | News and analysis for the data center industry
 

Monday, June 16th, 2014

    11:56a
    Data Center World Asia-Pacific Symposium

    The Data Center World Asia-Pacific Symposium will be held Monday, September 1 through Wednesday, September 3 at the Intercontinental Hotel in Melbourne, Australia.

    This three-day event will bring together industry leaders throughout the region, providing a comprehensive conference covering the entire data center – from day-to-day operations to facilities-related issues, including the latest technology and management techniques, best practices, disaster recovery, power and cooling, asset management, cloud computing and more.

    For more information about the symposium and to register for the event, follow this link.

    To view additional events, return to the Data Center Knowledge Events Calendar.

     

    12:00p
    Iceland Utility Touts Cheap Clean Energy to Lure Data Centers

    There’s a lot to like about Iceland as a data center location. For one, it offers an immense amount of renewable energy and lets a provider lock down a rate for several years. With tech giants building in other Nordic countries, such as Facebook in Sweden and Google in Finland, Iceland might finally be primed to attract data center business on a big scale.

    Iceland has long been discussed as a potential emerging data center market, and a number of groups in the country have recently been stepping up their promotion of it as such. Icelandic companies such as Iceland Air and Bank of Iceland, the city of Reykjavík, and the foreign ministry are cooperating to build the Iceland brand.

    Rikki Rikardsson, director of sales at the country’s national power company, Landsvirkjun, is a member of this group. The utility is targeting the data center industry and has a compelling power proposition.

    “We offer contracts for 12 years, which is unusual for greenfield investments,” Rikardsson says. “It gives visibility for 12 years into your power rates, providing a fixed price.” Paying 4 cents per kilowatt-hour for 12 years is a compelling proposition. Verne Global is one example of a data center company that has taken advantage of fixed-cost power.
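    To put that rate in perspective, here is a minimal back-of-the-envelope sketch; the 10 MW load and the 7-cent comparison rate are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope sketch: what a fixed 4 cents/kWh rate means for a
# hypothetical 10 MW data center load over a 12-year contract.
# The load and the comparison rate are assumptions for illustration.

LOAD_MW = 10            # assumed constant facility load
HOURS_PER_YEAR = 8760

def annual_energy_cost(rate_usd_per_kwh, load_mw=LOAD_MW):
    """Annual energy spend in USD for a constant load at a given rate."""
    return load_mw * 1000 * HOURS_PER_YEAR * rate_usd_per_kwh

iceland = annual_energy_cost(0.04)   # ~ $3.5M per year
typical = annual_energy_cost(0.07)   # assumed comparison rate, ~ $6.1M per year
print(f"12-year savings at the fixed rate: ${(typical - iceland) * 12:,.0f}")
# 12-year savings at the fixed rate: $31,536,000
```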

    Many companies looking to build data center space in Europe already have Iceland on their short lists, Rikardsson says. “I can confirm that we have several companies looking at building data centers in the several tens of megawatts today in Iceland. Whether they build or not, time will tell. The data centers that we have are growing. Verne Global is growing, doubling their business every few months. Data centers are an emerging industry in Iceland. Time will tell if we can build a big cluster out of it.”

    Latency: not everyone’s concern

    There have been some worries regarding distance and latency when it comes to housing infrastructure in Iceland, but Rikardsson says these concerns are going away.

    “The vast majority of those we talk to are not too concerned about latency. With the number of routers and switches you go through, you can have a situation where even if you’re in New York, you’ll get a similar or the same response time from Iceland as from something in Virginia. The bigger effect is the routers and switches in between, rather than the distance.”

    There are also many applications that aren’t as latency-sensitive as others. “We realize we won’t be the ultimate solution for everyone,” he says. “But companies like Google say that distance is manageable. It’s less manageable to build if you don’t have power security.”

    Hundreds of megawatts? No problem

    In terms of power security, Iceland is touting its success with another power-hungry industry: aluminum production. “We now have Europe’s second-largest production of aluminum, a very power-intensive industry,” says Rikardsson. “We’ve built an aluminum industry in 40 years. We went from nothing to second in Europe in one of the world’s most power-intensive industries.”

    Aluminum production facilities consume 400 to 600 megawatts — roughly a hundred times the power requirement of an average data center. “Our record of supplying power to aluminum has been very reliable.”

    Choosing Iceland also “protects” companies from the carbon tax programs many nations are working to implement, since potential future carbon taxes make non-renewable energy less attractive. Public pressure from Greenpeace to clean up data center power also makes the abundant renewable energy in Iceland and the other Nordic countries more valuable.

    It’s hard to miss the number of tech giants building massive data centers in the Nordics. “They are basically looking at Iceland and include it as an option as they look at Europe. We’re happy about that. We’re also happy that Google and Facebook are moving to the Nordics because it proves the concept. There’s renewable power, a suitable climate and a generally stable political environment.”

    Just don’t build on top of a volcano

    Iceland is relatively safe from natural disasters, save for one concern: volcanic activity. “Just as any other place, there will always be risks,” Rikardsson says. “In New York there’s flooding; in Silicon Valley there are earthquakes. Risks are very manageable in Iceland. It’s one of the better places out there. Just don’t locate on top of the volcano,” he jokes.

    In terms of data security and policy, Iceland is on the forefront of the debate. “Iceland aims to be compliant and in line with European data protection policies. Europeans are looking to consolidate the way they address this. We’ll continue to upgrade standards. It’s something current today and being talked about a lot.

    “The aim is you don’t need to put data centers in every country. You want to be able to place it in a region and be sure that the data privacy and security interests are addressed. We certainly aim to be part of that.”

    If the data center industry does start focusing more on clean power — cheap power has always been its priority — Iceland has a real shot at becoming a popular destination.

    12:30p
    Internet of Things: What Does it Mean for Data Centers?

    Bhavesh Patel is Director of Marketing and Customer Support at ASCO Power Technologies, Florham Park, NJ, a business of Emerson Network Power.

    At its most basic, the Internet of Things (IoT) is the widespread usage of any sensing device that has a chip and an IP address and can communicate. Put even more succinctly, it is communication among physical objects.

    Sophisticated machine-to-machine (M2M) applications are on the rise: connected sensors collect data about the real-time operating conditions of equipment, and software analyzes the information and sets up responses when certain data points coincide or when the data triggers an alarm.

    While a lot of the growth is in the vertical markets of home automation, medical connected devices, wearable devices, and connected cars, there will also be dramatic growth in data center and other facility management.

    An example of the IoT at work in a data center is when portions of HVAC systems are operating in a smart building environment using “big data.” Just as occupancy sensors are used to control lighting circuits in offices, multiple smart thermostats located throughout a data center feed information and control to HVAC circuits.

    Today, sensors capture the presence of an individual; however, the information is not analyzed and the benefits of the data are not maximized. Mining the data, rather than just capturing it, can lead to profitable analysis of patterns and trends.
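    To make the idea concrete, here is a minimal, hypothetical sketch of how readings from several smart thermostats might drive a cooling decision; the zone names, setpoint and deadband are illustrative assumptions, not any particular vendor's control logic.

```python
# Hypothetical illustration: aggregate smart-thermostat readings per zone and
# decide whether that zone's cooling should be stepped up, stepped down or held.
from statistics import mean

# Illustrative readings (deg C) from thermostats spread across one data hall.
zone_readings = {
    "cold-aisle-1": [22.1, 22.4, 23.0],
    "cold-aisle-2": [24.8, 25.2, 25.6],  # trending warm
}

TARGET_C = 24.0    # assumed cold-aisle setpoint
DEADBAND_C = 0.5   # ignore small fluctuations around the setpoint

def cooling_action(samples, target=TARGET_C, deadband=DEADBAND_C):
    """Return a simple control decision based on the average reading."""
    avg = mean(samples)
    if avg > target + deadband:
        return "increase_cooling"
    if avg < target - deadband:
        return "decrease_cooling"
    return "hold"

for zone, samples in zone_readings.items():
    print(zone, cooling_action(samples))
# cold-aisle-1 decrease_cooling
# cold-aisle-2 increase_cooling
```

    Mining a history of such readings, rather than reacting to single samples, is what turns raw sensor data into the patterns and trends described above.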

    Growth in the Internet of Things

    Growth in the IoT is staggering to think about. Approximately 90 percent of the data in today’s world did not exist two years ago, so it is not surprising that much of it is not being optimized.

    What’s more, the number of Internet-connected devices surpassed the world’s population three years ago. By 2020, there will be 5 to 10 times as many devices sold with native Internet connectivity as Internet-connected PCs or smartphones. Multiple sources that analyze the IoT believe that by 2020 there will be more than 25 billion Internet-connected devices.

    The beginning and beyond

    The IoT starts with sensors. In data center management, the IoT relates to systems that sense, transfer and act on information wirelessly; that adapt to and anticipate facility needs and drive decisions about operation, efficiency and capability; and that proactively manage the environment.

    As the IoT evolves, data centers will migrate to clusters of management systems. Each cluster will feature detailed monitoring, measurement and control capability within itself and feed overview and status information to others via a Building Management System (BMS) for decision-making using aggregated data.
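    As a rough illustration of that aggregation pattern, the hypothetical sketch below shows management clusters reporting compact status summaries upward to a BMS-style aggregator; the cluster names and fields are assumptions rather than any specific vendor's data model.

```python
# Hypothetical sketch: each management cluster keeps detailed metrics locally
# and forwards only a compact status summary to a BMS-level aggregator.
from dataclasses import dataclass

@dataclass
class ClusterStatus:
    name: str
    power_kw: float          # current draw reported by the cluster
    max_inlet_temp_c: float  # hottest inlet temperature in the cluster
    alarms: int              # count of active alarms in the cluster

def bms_overview(statuses):
    """Aggregate per-cluster summaries into a facility-level view."""
    return {
        "total_power_kw": sum(s.power_kw for s in statuses),
        "hottest_inlet_c": max(s.max_inlet_temp_c for s in statuses),
        "clusters_with_alarms": [s.name for s in statuses if s.alarms > 0],
    }

statuses = [
    ClusterStatus("row-A", power_kw=180.0, max_inlet_temp_c=23.9, alarms=0),
    ClusterStatus("row-B", power_kw=210.5, max_inlet_temp_c=26.4, alarms=2),
]
print(bms_overview(statuses))
# {'total_power_kw': 390.5, 'hottest_inlet_c': 26.4, 'clusters_with_alarms': ['row-B']}
```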

    Making better use of big data also requires dynamic visualization, which can help predict a facility’s future behavior. The components required to effectively visualize and use data in data centers include sensors, microcontrollers, actuators, communication chips, platform and application software, and wireless or wired communications.

    As identified by Gartner, potential challenges in dealing with the velocity, volume and structure of IoT data include security; consumer privacy; storage management, including which types of generated data should be stored, accessed and used cost-effectively; increased investment in server technologies; and growing inbound data center bandwidth requirements to process all the data.

    Gartner also notes that the IoT will likely force facilities to collect data in multiple mini data centers, where initial processing can occur before the data is forwarded to a central site. This influx of data will require enhanced and optimized management at data centers, not only of the data itself but also of the facility’s primary and backup power.

    Newly evolving sophisticated technologies for gathering, monitoring, analyzing and acting on data from building systems will gain traction in data centers. For example, a proactive Data Center Infrastructure Management (DCIM) system which gathers information from many types of sensors and associated components can provide a holistic view of data center performance, enabling a broad range of informed decisions in multiple areas, from IP performance to asset management.

    Data centers will also take advantage of more specialized Critical Power Management Systems (CPMS), designed to help ensure continued operations and to assist with what-if analysis or in preventing power problems. These systems are not mutually exclusive; rather, they work well in concert with other facility management systems.

    What this all means

    What does “Internet of Things” mean in data center management? It could mean a system that senses, transfers, and acts on information wirelessly. It could refer to a system that adapts to and anticipates facility management needs. It could be a system that proactively manages your environment.

    In 2014, data centers are only at the beginning of the transition to the IoT. Whereas today monitoring power and backup power still calls for someone physically walking up to the monitoring equipment, once everything becomes digital, both the monitoring of the information and the control of the power will be handled through digital technology over the Internet.

    There are elements of that futuristic concept in use in data centers today, but they are not integrated; rather, they are used independently. Down the line, when the data collected at data centers is connected to the Internet, analyzed and used intelligently, it will be used to predict the future and facilitate better business decisions. This is an evolution that will happen at data centers, changing the structure of data center management.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    12:30p
    QTS Enterprise Business Chief Mertz Takes Helm at Battery Analytics Firm Canara

    Canara, a San Rafael, California-based company whose devices continuously analyze data center backup batteries to predict when they are going to die, appointed former QTS exec Tom Mertz as its new CEO.

    He replaces the company’s previous CEO Thomas Barton, who left the company last year. The company has not provided a reason for his departure.

    Mertz comes to Canara (formerly IntelliBatt) after about two and a half years at QTS, where he oversaw the data center services giant’s enterprise business. Prior to QTS, he led national sales at Lee Technologies, a data center operation outsourcer acquired in 2011 by the French energy management conglomerate Schneider Electric.

    His reasons for leaving QTS to join Canara were simple: “Canara had fantastic clients and a great technology,” he said. Its predictive analytics technology is unique in the market and can make a great positive impact on a data center operator’s capital and operating budget.

    Tom Mertz, CEO, Canara


    “We can predict, typically within a very small window [of time], when a specific battery is going to die,” he said. This can save a lot of money, since the common industry practice is to use a standard battery replacement schedule (every three to five years), which means lots of batteries get replaced whether it’s necessary or not.
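    The article does not describe Canara's model, but as a rough, hypothetical illustration of the general idea, a predictive system can extrapolate a trending health metric (internal resistance is a common proxy for battery aging) to estimate when a cell will cross a failure threshold; the readings and threshold below are made-up values, not Canara's algorithm.

```python
# Hypothetical sketch: fit a linear trend to periodic internal-resistance
# readings and estimate when a battery will cross an assumed failure threshold.
# Illustrative only; not Canara's actual predictive model.

def predict_failure_day(days, resistance_mohm, threshold_mohm):
    """Least-squares linear fit; returns the estimated day the threshold is crossed."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(resistance_mohm) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(days, resistance_mohm)) / \
            sum((x - mean_x) ** 2 for x in days)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no upward trend, no failure predicted
    return (threshold_mohm - intercept) / slope

# Quarterly readings (day number, milliohms) for one battery jar; values are made up.
days = [0, 90, 180, 270]
readings = [3.0, 3.3, 3.7, 4.1]
print(round(predict_failure_day(days, readings, threshold_mohm=5.0)))  # about day 494
```

    Replacing only the jars that are actually trending toward the threshold, instead of swapping batteries on a fixed three-to-five-year schedule, is where the savings described above come from.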

    Among Canara’s biggest customers are Digital Realty Trust, Equinix, ViaWest, Charles Schwab and Fidelity Investments.

    The company’s portfolio includes a number of products other than its monitoring and predictive analytics offerings. They include batteries themselves, as well as battery racks and cabinets. It also provides installation and maintenance services for backup power systems.

    • 270,000 batteries monitored by Canara daily
    • 2,400 UPS systems monitored total
    • 1 million-plus batteries monitored to date

    Going after the enterprise

    The majority of Canara’s current customer base consists of data center providers, Mertz said. A small portion of users are enterprise operators, which is one of the things he plans to change.

    Because about 80 percent of enterprises still operate their own data centers and most of them have acute budget concerns, he expects this space to be a major source of growth for the company.

    His former employer QTS is not a Canara customer, but Mertz thinks it could very well become one at some point.

    “A lot of large providers like QTS are struggling now to figure out how to preserve capital,” he said. With 3.8 million square feet of data center space under management, publicly traded QTS would be a juicy account for Canara.

    Future plans

    In the near term, Mertz expects Canara to continue pursuing sales and growing. The company is also in the midst of fine-tuning its product-development process to make sure products are developed efficiently and the process is effective enough to meet future growth demands.

    The long-term vision is for Canara to be the best provider of predictive-analytics solutions for data center power backup systems. Every step the company makes in the future, including potential acquisitions, will be made in line with that vision, Mertz said.

    He also plans to open an office in Atlanta where he and other executive staff will work. Canara will retain its current San Rafael offices as well.

    4:43p
    OnApp Blends 170 Cloud Providers Into One Cloud

    The big cloud infrastructure service providers, such as Amazon Web Services and Microsoft, may now have some serious cloud competition: a united front of about 170 providers worldwide running on OnApp’s new federated cloud.

    The service lets cloud providers share compute resources (CPU, RAM and storage) through a central location. Providers on the platform can leverage a globally distributed federated cloud that launches with twice as many locations as the Amazon and Microsoft clouds combined.

    Service providers use OnApp as a platform to launch clouds, and the company is now leveraging its unique position to unite those clouds and win through the strength in numbers it has amassed. Having been the basis of more than 2,000 clouds worldwide, OnApp is providing a central place to buy and sell excess compute resources through a single API, a single SLA and a single bill.

    The OnApp Federation is a network of 167 service providers offering capacity at 170 locations in 113 cities, across 43 countries. “The federated compute model offers access to infrastructure in hundreds of locations worldwide, and it’s all handled by a local service provider,” said Kosten Metreweli, chief operating officer at OnApp. “Say a customer wants to deploy an app in 10 countries, normally they’d have to find a service provider in each region. Now they can do it all through their current provider.”
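    As a purely illustrative sketch of that single-API idea (a generic example, not OnApp's actual API or data model), one request can be planned against a shared provider catalog by choosing a local provider per requested country; all provider names and fields below are hypothetical.

```python
# Hypothetical illustration of federated deployment planning: pick one local
# provider per requested country from a shared catalog, then place one order.
# Generic sketch only; not OnApp's actual API.

catalog = [
    {"provider": "ExampleHost-DE",  "country": "DE", "price_per_hour": 0.052},
    {"provider": "ExampleHost-DE2", "country": "DE", "price_per_hour": 0.047},
    {"provider": "ExampleCloud-SG", "country": "SG", "price_per_hour": 0.061},
    {"provider": "ExampleVPS-BR",   "country": "BR", "price_per_hour": 0.058},
]

def plan_deployment(countries, catalog):
    """For each requested country, choose the cheapest listed provider."""
    plan = {}
    for country in countries:
        offers = [o for o in catalog if o["country"] == country]
        if not offers:
            raise ValueError(f"no federated capacity listed in {country}")
        plan[country] = min(offers, key=lambda o: o["price_per_hour"])
    return plan

# One "order" spanning three countries, handled through a single relationship.
print(plan_deployment(["DE", "SG", "BR"], catalog))
```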

    Building on federated CDN success

    The company started by offering a federated content delivery network (CDN), which allowed service providers of any size to offer a geographically diverse CDN to their customers. This offering competes directly with large CDN providers like Akamai and has had a lot of success.

    Extending that model to a different set of services, the new federated cloud offering lets any service provider in the ecosystem compete with the largest clouds without having to invest in that global infrastructure.

    The new global compute capability is built on the OnApp market, a wholesale cloud marketplace for service providers. OnApp simplifies running a workload across multiple clouds and managing all the commercial relationships the customer would otherwise have to maintain.

    “We’re all about helping service providers drive out the cost of doing cloud. It’s the power of shared economies,” said Metreweli. “Companies like Airbnb and so on are able to grow extremely rapidly because it helps people monetize existing assets. It’s similar to that. It’s a much more modern way to build, taking advantage of what’s there already.”

    OnApp created its own virtual service provider that sits on top of the market. “In order to do that, we’ve been working with a set of launch partners to run this virtual provider,” said Metreweli. “At launch, we’ll have two to three times the number of locations that AWS and Microsoft have combined.”

    In the preview phase, 50 OnApp providers will provide compute capacity for the OnApp Federation, with nine locations available for testing in the US, UK, Ireland, the Netherlands, Russia and Belgium. The company expects this number to grow at least ten-fold in the next 12 months.

    “In this first release we’re providing a spot market,” said Metreweli. “Going forward we’ll introduce new capabilities and financial products, like the ability to buy futures, etc. All of that is relatively easy to do.”

    Cloud Brokerages vs. Cloud Federation

    There have been several attempts, by the likes of SpotCloud, to establish cloud brokerages: third parties that create a marketplace where cloud compute is bought and sold. However, many of these attempts have not taken off. Cloud federation is a different model because the aggregator adds value beyond providing a centralized marketplace.

    “One of the reasons this is much easier for us is that we’re a service provider to the service provider market,” said Metreweli. “The problem is keeping supply and demand in step. It’s easy to generate supply, but not demand. We’ve solved that: all of our supply and demand is the same group of people, service providers. If you’re in the market, you’re selling and buying already. It’s a self-reinforcing thing.”

    OnApp’s federation gives service providers the flexibility to be as big and diverse or as local and niche as they want to be without investing in more infrastructure, Ditlev Bredahl, CEO of OnApp, said. “By connecting hundreds of OnApp cloud providers around the world, we’re helping each one of them break through the scale barrier, using the OnApp Federation to host customer applications exactly where they are needed, on infrastructure with the right performance, price and service level. And they can do it on demand with no capex. This is what cloud is supposed to look like: true global coverage, pay-as-you-go and real diversity.”

    5:00p
    HostingCon 2014: The Future of Cloud Services for Managed Service Providers with Ashar Baig, GigaOM


    This article originally appeared at The WHIR.

    The cloud is a $160 billion market, and yet 80 percent of service providers in North America don’t offer cloud services.

    In a presentation Monday morning at HostingCon 2014, GigaOM research director Ashar Baig discussed cloud adoption trends as they apply to managed service providers and said that, in order to survive, service providers can’t be a “one-trick pony” and will have to bundle cloud services. They will also have to keep an eye on broader trends such as the Internet of Things, wearables and robots.

    “A lot of you guys are concerned about day-to-day, but you have to be aware of these trends,” Baig says. “MSPs should stay abreast of these types of technologies and what is happening in the market.”

    If service providers fail to adapt, the weaker ones won’t be able to stay in business as larger providers and telcos come into the market with a wide range of cloud services.

    The MSP market consists mostly of break-and-fix companies that Baig says will not be sustainable long term as they try to compete on price in smaller, regional markets.

    Cloud brokers, who compete with MSPs, sell all kinds of branded cloud services, and while the market is growing, Baig calls it a “stop-gap” until larger providers start offering the same services.

    In terms of the larger cloud providers like Google, Microsoft and Amazon Web Services, it is pretty much a level playing field, according to Baig. In fact, market share doesn’t really matter, since the name of the provider is not one of the five decision criteria for enterprises or CIOs. And despite all of the public cloud price cuts, companies will soon choose their service provider based on core competencies and what they bring to the table rather than what they cost.

    Customer acquisition is a growing challenge for MSPs. Baig says service providers are looking for strategies to survive. One of those strategies, which many MSPs are using with success, is establishing two separate sales teams: one for new customers and one for existing customers.

    According to Baig, bundling is very important for stickiness. In 2013, Baig says there was a lot of bundling from cloud providers, including backup, virtual disaster recovery and proactive network monitoring. The fastest growing areas in cloud are big data, mobile, systems management, backup/DR, IT helpdesk, and security, which remains a primary inhibitor of cloud adoption.

    Baig says the reality is that people aren’t buying backup anymore and that they want to buy DR as a service because it’s what “the CIO can relate to.” He says MSPs are looking to provide higher-value services because it is easier to sell disaster recovery than data backup.

    The future of cloud services is hybrid clouds, Baig says, with 55 percent of MSPs offering hybrid or multi-cloud services in the next 2-3 years. In five years, hybrid cloud will be the core cloud strategy.

    “There will always be a need for mainframes, tapes, HDDs and private cloud,” he says.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/hostingcon-2014-future-cloud-services-managed-service-providers-ashar-baig-gigaom

    5:07p
    Verizon Lets Other Carriers Sell Services in More of Its Data Centers

    When Verizon acquired data center provider Terremark in 2011, forming the foundation of its cloud play, there were questions as to whether Terremark would remain carrier neutral following acquisition by a massive carrier.

    Carrier neutrality has not traditionally been common practice for Verizon, but that is something the company has been changing.

    Verizon has been gradually adding more and more carrier-neutral data centers to its footprint, where customers now have the option of selecting third-party network providers for carrier diversity and redundancy. The most recent example is the addition of five facilities – in Boston, Denver, New York, Virginia and Seattle – to the roster of its carrier-neutral facilities.

    While Verizon operates one of the largest, most diverse networks in the world, customers demand options, and the company is now giving them what they need. It now has a total of 16 colocation facilities that offer unrestricted interconnection with multiple carriers, which is still a small part of its overall fleet of data centers.

    Verizon data centers normally have dual-carrier network connectivity, with Verizon acting as the default network and an alternate, pre-selected carrier providing backup network services.

    Carrier neutrality has become an increasingly common requirement for colocation and managed services facilities, and it is very likely that Verizon’s enterprise hosting business has lost some business in the past because it did not offer it.

    “More and more enterprise and government clients are opting to utilize colocation services to augment existing data center capacity rather than build their own facilities,” said Guy Tal, manager of data center interconnection services for Verizon Enterprise Solutions. “By loosening previously restrictive interconnection policies, we are meeting customer requirements for flexibility and choice, thereby enabling an easier migration path to using a third-party provider for critical data center services.”

    Benefits of these facilities becoming carrier-neutral include:

    • Greater choice and flexibility in selecting a network service provider to connect the Verizon colocation data center to client offices and facilities, as well as universal Internet connectivity for partners and customers.
    • Enhanced carrier redundancy.
    • Ability to fully leverage existing relationships with network service providers.
    • Streamlined processes and lower costs associated with carrier management.

    8:11p
    Raritan’s Latest DCIM Release Automates Change Management, Adds Capacity Map

    In the latest release of its data center infrastructure management (DCIM) software, Raritan has added a comprehensive single-pane-of-glass map of the data center’s capacity and a change-management workflow engine.

    Capacity and change management are two of the most important features of DCIM. Both address efficiency: one from a capacity-utilization standpoint and the other from an operational-efficiency angle.

    “DCIM solutions … help organizations do more with their data center resources by providing operations insight and automating key manual tasks so that work is done quickly and accurately,” IDC analyst Jennifer Koppy said in a statement.

    Raritan’s new Data Center Map provides a comprehensive capacity overview, including space, power, cooling, network ports and power connections. It uses red, yellow and green indicators to show capacity and health of the resources.

    There is also a new “inspector widget,” which compares capacity budgeted to capacity utilized in real time.
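    As a rough illustration of how such an indicator can work (the thresholds and resource figures here are assumptions, not Raritan's implementation), a capacity view typically compares utilized against budgeted capacity and maps the ratio to a red, yellow or green status:

```python
# Hypothetical sketch: map budgeted-vs-utilized capacity to a traffic-light
# status, similar in spirit to a DCIM capacity map. Thresholds are assumptions.

def capacity_status(utilized, budgeted, warn=0.75, critical=0.90):
    """Return 'green', 'yellow' or 'red' based on the utilization ratio."""
    ratio = utilized / budgeted
    if ratio >= critical:
        return "red"
    if ratio >= warn:
        return "yellow"
    return "green"

resources = {
    "power_kw":      (42.0, 60.0),   # (utilized, budgeted)
    "cooling_kw":    (55.0, 60.0),
    "rack_units":    (30,   42),
    "network_ports": (46,   48),
}

for name, (used, budget) in resources.items():
    print(f"{name}: {capacity_status(used, budget)} ({used}/{budget})")
# power_kw: green, cooling_kw: red, rack_units: green, network_ports: red
```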

    The Bidirectional Workflow Engine automates the process of moving or adding equipment. The feature more tightly integrates Raritan’s DCIM with IT service management (ITSM) systems from other companies, such as BMC and ServiceNow.

    Integration with ITMS is a key feature for DCIM software. Most data center operators who buy DCIM want it to work in concert with ITSM software they use, according to Mark Harris, vice president of marketing and strategy for DCIM vendor Nlyte.

    Other features added in the latest release of Raritan DCIM:

    • Enhanced Analytics and New Automated Reports. Automatic generation and distribution of key data center reports, such as energy bill-back reports by customer, reports identifying ghost and power-hog servers and monthly peak power reports.
    • Quick Move. Select and quickly move assets to the desired location with support from the Intelligent Capacity Search feature, which recommends placement locations based on how much space, power, cooling and connectivity is available in a cabinet (a minimal placement sketch follows this list). All disconnect and reconnect change requests are automatically validated and issued, and the DCIM system captures all requests and work orders in an audit log.
    • Automatic Alert Notifications of Cabinet-Level Events. The release expands threshold-violation alerts to the cabinet-monitoring layer, warning of cabinet threshold violations such as capacity levels.
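    The sketch below, referenced in the Quick Move item above, is a hypothetical illustration of capacity-aware placement (not Raritan's actual Intelligent Capacity Search): it keeps only the cabinets that can satisfy an asset's space, power and connectivity needs and prefers the one with the most power headroom.

```python
# Hypothetical sketch of capacity-aware asset placement: keep only cabinets
# that meet every requirement, then rank by remaining power headroom.

cabinets = [
    {"name": "A-01", "free_u": 4,  "spare_kw": 1.2, "spare_ports": 2},
    {"name": "B-07", "free_u": 10, "spare_kw": 3.5, "spare_ports": 6},
    {"name": "C-03", "free_u": 12, "spare_kw": 0.8, "spare_ports": 8},
]

asset = {"size_u": 2, "power_kw": 0.9, "ports": 2}  # illustrative requirements

def recommend(asset, cabinets):
    """Return cabinets that fit the asset, best (most spare power) first."""
    fits = [
        c for c in cabinets
        if c["free_u"] >= asset["size_u"]
        and c["spare_kw"] >= asset["power_kw"]
        and c["spare_ports"] >= asset["ports"]
    ]
    return sorted(fits, key=lambda c: c["spare_kw"], reverse=True)

print([c["name"] for c in recommend(asset, cabinets)])  # ['B-07', 'A-01']
```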

