Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, February 17th, 2016

    1:00p
    Data Center Extends Cloud’s Edge to Minneapolis


    This month we focus on data centers built to support the Cloud. As cloud computing becomes the dominant form of IT, it exerts a greater and greater influence on the industry, from infrastructure and business strategy to design and location. Webscale giants like Google, Amazon, and Facebook have perfected the art and science of cloud data centers. The next wave is bringing the cloud data center to enterprise IT… or the other way around!

    Just like a popular YouTube video is cheaper to deliver from a data center that’s in the same geographical region than from a remote one, both providers and users of enterprise cloud services benefit if the services are delivered from a local data center.

    Rapidly growing adoption of cloud services by enterprises has driven edge data center specialist EdgeConneX to locate its latest facility in Minneapolis. The Minneapolis-St. Paul metro has a population of about 3.8 million, yet digital content and cloud services consumed by its residents and companies have traditionally been served from data centers 400 miles away in Chicago, said Clint Heiden, chief commercial officer at EdgeConneX.

    “When you have a [market] the size of Minneapolis-St. Paul pulling from another core market like Chicago, that to us screams like an edge market,” he said.

    Over the last two to three years, EdgeConneX has built 23 edge data centers in markets around the US, where, like in the Twin Cities, users of digital services had traditionally been served from data centers located elsewhere.

    Serving content from afar becomes expensive for content providers, who end up paying a lot of money for transporting data over long distances, while the quality of user experience suffers. An edge data center is a place where content and cloud providers cache data that’s in high demand in that particular location, and where local Internet Service Providers, the so-called “eyeball networks,” access that data and deliver it to their customers.

    By building data centers with those essential elements – content providers and eyeball networks – in markets that don’t already have them, companies like EdgeConneX in effect extend the internet’s edge.

    Read more: How Edge Data Center Providers are Changing the Internet’s Geography

    The big ISPs at the new Minneapolis data center, which came online late last year, are Charter Communications, Comcast Corp., and one other company, which Heiden declined to name, citing confidentiality agreements. Comcast is an investor in EdgeConneX and has a presence in most of the data centers in its portfolio.

    He also declined to name any content or cloud providers that were using the facility, but said the primary purpose of Comcast being there was to enable direct private network connections to major cloud providers. These types of cloud onramps enable companies to bypass the public internet when using cloud services, leveraging private links which are reportedly faster and more secure.

    Earlier this year, EdgeConneX started building out its presence in Europe. Starting with a data center in Amsterdam, the company plans to take its edge data center model to Ireland, Italy, France, Austria, Germany, and the UK.

    Sites Managed Remotely with DCIM Built in-House

    You are not very likely to encounter an EdgeConneX employee at one of its data centers. Save for a few sites, the company manages its substantial portfolio remotely, using EdgeOS, a Data Center Infrastructure Management system designed in-house.

    None of the DCIM software tools on the market ticked all the boxes on the company’s list, so it had to live with the expense of developing its own solution, Heiden said. EdgeConneX needed a system that would enable it to automate management of a massive fleet of data centers and also enable it to offer customers similar price points to wholesale data center providers.

    EdgeOS can authenticate users remotely, using facial recognition and badge-controlled access, and monitor customer SLAs all the way down to the component level, including humidity and temperature. It can also be used to accept shipments, enter customer tickets, or monitor rack density and overall energy efficiency.

    Uniform Design across all Edge Data Center Markets

    As much as possible, EdgeConneX sticks to the same design in every data center build it undertakes. “Each of our data centers is roughly 90 percent similar to the rest,” Heiden said.

    It’s cheaper to replicate a design as much as possible than to start from scratch every time, especially if the company is expanding at EdgeConneX’s rate. But the service provider is also aiming to make its data centers in different locations feel similar to customers – the same uniform approach retail chains have adopted.

    The Minneapolis facility came online with 2MW of power capacity and 15,000 square feet of raised floor, but there’s room to expand north of 4MW if necessary, Heiden said.

    The most important design element that distinguishes edge data centers, or any data centers that host cloud services, is high density. EdgeConneX designs its facilities to provide 15kW to 20kW per rack. Customers that need densities like that “tend to be your internet content and cloud companies, and also the broadband community,” he said.
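
    As a rough illustration of what those numbers imply, here is a back-of-the-envelope sketch in Python using the capacity figures quoted above; it is illustrative only and ignores cooling overhead, redundancy, and space reserved for growth.

    ```python
    # Illustrative arithmetic based on the figures quoted in this article.
    # Real deployments reserve capacity for cooling, redundancy, and growth.
    facility_kw = 2000             # 2 MW of initial power capacity
    raised_floor_sqft = 15_000     # raised-floor area in square feet

    print(f"Overall density: {facility_kw * 1000 / raised_floor_sqft:.0f} W per sq ft")

    for rack_kw in (15, 20):       # the per-rack densities EdgeConneX designs for
        print(f"At {rack_kw} kW per rack, {facility_kw} kW supports "
              f"roughly {facility_kw // rack_kw} fully loaded racks")
    ```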

    Not Expanding as Quickly as Expected

    In early 2015, Heiden told us EdgeConneX would add 10 data centers to its portfolio before the end of the year, taking it from 20 data centers at the time to 30. With Minneapolis as its 23rd location, the company obviously didn’t expand as rapidly as he had expected.

    “We didn’t quite get to 30; we ended up at 23. A number of mergers got in the way and slowed some of the growth,” he said this week but declined to elaborate. However, the company continues to believe that the number of edge data center markets in the US is “far north of 30.”

    Some of the 23 facilities EdgeConneX has built filled up quickly, and some have seen slower fill-up rates, which was expected, since market dynamics differ from region to region. “Miami has filled very quickly; Austin fills very quickly; Denver, Seattle, Santa Clara, Phoenix – those markets fill quicker than Jacksonville and Tallahassee,” Heiden said.

    “But in general, the footprint has very good utilization. We’re happy with the fill rate.”

    4:00p
    Planning for IoT Analytics Success

    Srinath Perera, Ph.D., is Vice President of Research at WSO2.

    For a growing number of companies, the compass points toward the Internet of Things (IoT) as a pathway for improving customer service, enhancing operations, and creating new business models. In fact, IDC predicts that by 2020, some 32 billion connected IoT devices will be in use. The challenge is extracting timely, meaningful IoT data to enable these digital transformations. Following are five critical demands enterprises need to consider in developing their IoT analytics strategies.

    IoT Analytics Must be Distributed

    Most enterprise IoT environments are inherently distributed. Like spider webs, they connect a myriad of sensors, gateways and collection points with data flying between them. Moreover, these webs constantly change as components are added and subtracted, and data flows are modified or repurposed.

    Such environments place multiple demands on analytics. First, the software has to handle a variety of networking conditions, from weak 3G networks to ad-hoc peer-to-peer networks. It also needs to support a range of protocols, often Message Queuing Telemetry Transport (MQTT) or the Constrained Application Protocol (CoAP) at the messaging layer, and ZigBee or Bluetooth Low Energy (BLE) at the radio layer.
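
    As a small illustration of the protocol side, the sketch below publishes a single sensor reading over MQTT using the widely used paho-mqtt Python client (1.x API); the broker address, topic, and payload are hypothetical placeholders, not values from the article.

    ```python
    # Minimal MQTT publish sketch (paho-mqtt 1.x API). Broker address, topic,
    # and payload are hypothetical placeholders.
    import paho.mqtt.client as mqtt

    client = mqtt.Client(client_id="sensor-gateway-01")
    client.connect("mqtt.example.com", 1883, keepalive=60)   # plain TCP; use TLS in production

    # QoS 1 makes the broker acknowledge receipt, which helps on weak 3G links.
    client.publish("plant1/line3/temperature", payload="22.4", qos=1)
    client.disconnect()
    ```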

    The dynamic quality of IoT implementations means analytics solutions should have the flexibility to expand or contract to match the load. Deploying analytics in the cloud is one option. However, many IoT deployments have on-premises aspects, such as machines on the factory floor or kiosks in stores. Therefore, an IoT analytics solution may need to scale across a hybrid environment leveraging both the cloud and on-premises systems. Additionally, the software must have a distributed architecture with the ability to run multiple queries across multiple systems—and scale while doing it.

    Some Analytics Should Occur at the Edge

    IoT data gets really big, really fast. Consider the Distributed Event-Based Systems (DEBS) Grand Challenge 2014 use case: 40 houses with 2,000 sensors generated about 6 billion events in four months. Imagine the sea of data generated by 4 million such homes. That’s billions of events per second being pushed out for processing.

    However, many businesses only need an average over time or insights into trends that exceed established parameters. The answer is to conduct some analytics on IoT devices or gateways at the edge and send aggregated results to the central system. This facilitates the detection of important trends or aberrations, such as temperature changes or failed access attempts, and significantly reduces network traffic to improve performance.
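
    A minimal sketch of that pattern in plain Python: the gateway keeps raw readings local, forwards threshold breaches immediately, and sends only a periodic average upstream. The threshold, window size, and forward_to_cloud() function are hypothetical placeholders.

    ```python
    # Edge-aggregation sketch: keep raw readings local, forward only summaries
    # and anomalies. forward_to_cloud() is a hypothetical stand-in for whatever
    # uplink (MQTT, HTTPS, etc.) a real gateway would use.
    from statistics import mean

    TEMP_ALERT_C = 80.0          # hypothetical alert threshold
    WINDOW_SIZE = 60             # aggregate every 60 readings (e.g., one per second)

    def forward_to_cloud(message: dict) -> None:
        print("uplink:", message)  # placeholder for the real transport

    buffer = []

    def on_reading(sensor_id: str, temp_c: float) -> None:
        # Forward aberrations immediately; they are rare and time-sensitive.
        if temp_c >= TEMP_ALERT_C:
            forward_to_cloud({"sensor": sensor_id, "alert": "over_temp", "value": temp_c})

        buffer.append(temp_c)
        # Forward only the aggregate for normal traffic, cutting uplink volume ~60x.
        if len(buffer) >= WINDOW_SIZE:
            forward_to_cloud({"sensor": sensor_id, "avg_temp": round(mean(buffer), 2)})
            buffer.clear()
    ```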

    Such edge analysis requires very lightweight software, since IoT nodes and gateways are low-power devices with limited compute capacity available for query processing. To address this challenge, several companies are working on edge analytics products and reference architectures. Still, because edge computing is heavily contextual, there is no one-size-fits-all solution.

    IoT Analytics are Event-Driven

    IoT data are essentially streams of events. Therefore, analysis to support real-time interactions, whether triggering a thermostat or a fraud alert, requires some form of complex event processing (CEP) and streaming analytics. The software should handle time-series data, time windows, moving averages, and temporal event patterns. Two popular open source technologies for real-time event processing are Apache Storm, which should be used in combination with a CEP engine, and Apache Spark. Another option is the cloud-based Google Cloud DataFlow. With each offering, there are tradeoffs, so an IoT implementation’s specific requirements will determine the technology approach.
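
    The windowing idea itself is simple enough to sketch without a full CEP engine. Below is a plain-Python illustration of a time-based sliding window and moving average over an event stream; a production deployment would get this from Storm, Spark, or a CEP engine, as noted above.

    ```python
    # Sliding time-window moving average over a stream of (timestamp, value) events.
    # Plain-Python illustration of the windowing a CEP/stream engine would provide.
    from collections import deque

    class SlidingWindowAverage:
        def __init__(self, window_seconds: float):
            self.window = window_seconds
            self.events = deque()          # (timestamp, value), oldest first
            self.total = 0.0

        def add(self, timestamp: float, value: float) -> float:
            self.events.append((timestamp, value))
            self.total += value
            # Evict events that have fallen out of the time window.
            while self.events and self.events[0][0] <= timestamp - self.window:
                _, old = self.events.popleft()
                self.total -= old
            return self.total / len(self.events)   # current moving average

    # Usage: a 5-minute window over temperature readings
    avg = SlidingWindowAverage(window_seconds=300)
    print(avg.add(timestamp=0, value=21.0))
    print(avg.add(timestamp=60, value=23.0))
    print(avg.add(timestamp=400, value=25.0))   # the earlier readings have expired
    ```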

    IoT Data Comes With Uncertainty

    The ordering of inbound IoT data is important. For example, a progression of events may indicate that an engine part is heading for failure. At the same time, lots of nodes are pushing data through low-bandwidth IoT networks, and sometimes nodes fail, creating issues about whether sensors keep data and send it later. Other challenges include collection latency, duplicate messages, and reliability.

    IoT analysis needs to handle these concerns. For example, time windows and temporal sequence-based queries will require special algorithms to ensure the proper order of inbound data. Google Millwheel addresses some problems in this space by providing fault-tolerant data stream processing and is worth evaluating. However, at this time, many IT organizations will need to develop custom rules and queries to support their IoT analytics implementations.
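
    One common pattern for out-of-order arrival is a watermark-style reorder buffer: hold events briefly, then release them in timestamp order once no earlier event can plausibly still arrive. A minimal Python sketch follows; the allowed-lateness value is an assumption, and duplicates are dropped by event ID.

    ```python
    # Watermark-style reorder buffer: emits events in timestamp order, tolerating
    # a bounded amount of lateness and dropping duplicate event IDs.
    import heapq

    class ReorderBuffer:
        def __init__(self, allowed_lateness: float):
            self.lateness = allowed_lateness   # how long to wait for stragglers
            self.heap = []                     # min-heap ordered by event timestamp
            self.seen_ids = set()              # simple duplicate suppression
            self.max_ts = float("-inf")

        def add(self, event_id: str, timestamp: float, payload: dict):
            if event_id in self.seen_ids:
                return []                      # duplicate message: ignore
            self.seen_ids.add(event_id)
            heapq.heappush(self.heap, (timestamp, event_id, payload))
            self.max_ts = max(self.max_ts, timestamp)

            # Watermark: anything older than (newest timestamp - lateness) is
            # assumed complete and can be released in order.
            watermark = self.max_ts - self.lateness
            ready = []
            while self.heap and self.heap[0][0] <= watermark:
                ready.append(heapq.heappop(self.heap))
            return ready

    buf = ReorderBuffer(allowed_lateness=30)          # seconds; an assumption
    buf.add("e2", 100, {"rpm": 900})
    buf.add("e1", 95, {"rpm": 880})                   # late arrival, still ordered
    print(buf.add("e3", 140, {"rpm": 910}))           # releases e1, then e2
    ```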

    Predictions Produce More Value

    Most IoT implementations calculate descriptive analytics, such as mean, median, and standard deviation. However, the maximum impact will come from applying predictive analytics for applications, such as fraud detection, proactive maintenance, and health warnings, to name a few.

    Increasingly, machine-learning algorithms complement statistical models for handling prediction. These algorithms automatically learn from examples, providing an attractive alternative to rules-only systems, which require professionals to maintain the rules and evaluate their performance.

    Several frameworks for machine learning have emerged in recent years. These include Apache Spark MLlib, Dato GraphLab Create, and Skytree. Meanwhile, other organizations continue to develop new algorithms. While more research is needed, a thorough understanding of a company’s IoT scenario can help in determining the best alternative.
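
    As a generic illustration of the rules-versus-learning tradeoff (not tied to any of the frameworks named above), the sketch below trains a small scikit-learn classifier on synthetic sensor data to score failure risk, instead of hand-tuning a threshold rule.

    ```python
    # Predictive-maintenance sketch: learn a failure predictor from labeled
    # examples instead of hand-maintaining threshold rules. Data is synthetic
    # and purely illustrative; real deployments need far more features and data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Features: [vibration (mm/s), bearing temperature (C)]
    healthy = rng.normal(loc=[2.0, 60.0], scale=[0.5, 3.0], size=(200, 2))
    failing = rng.normal(loc=[4.5, 75.0], scale=[0.8, 4.0], size=(40, 2))

    X = np.vstack([healthy, failing])
    y = np.array([0] * len(healthy) + [1] * len(failing))   # 1 = failed soon after

    model = LogisticRegression().fit(X, y)

    # Score a new reading: probability that this asset is heading for failure.
    print(model.predict_proba([[4.2, 72.0]])[0][1])
    ```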

    One final note: The market for IoT analytics technologies is still nascent. So adopting a flexible and open architecture for today’s analytics challenges will best position an enterprise to capitalize on emerging technologies in this arena tomorrow.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:17p
    Google’s Tweakable Cloud VMs Now in General Availability

    Even though the Amazon cloud now offers close to 80 different kinds of rentable cloud servers, while Microsoft’s cloud provides about half that amount, Google says neither of its big public cloud rivals necessarily has the perfect option for every user.

    For that reason, Google today announced the launch of customizable cloud VMs, which let a user adjust CPU core count and memory size independently of each other. Since the service launched in beta in November, Google has seen users create cloud VMs with virtual CPU-to-memory ratios not available as predefined instance types from any major cloud provider, Sami Iqram, product manager for Google Cloud Platform, wrote in a blog post Wednesday announcing the launch of Custom Machine Types into general availability.

    The point is to make public cloud cheaper for customers. Presumably, the better they can match the configuration of their cloud infrastructure to their application needs, the less they will spend on capacity they don’t need. Google started charging for its cloud VMs by the minute instead of by the hour several years ago for the same reason. Microsoft followed, while Amazon did not, continuing to charge by the hour.

    [Image: Google Custom Machine Types (Source: Google)]

    The three giants have essentially commoditized cloud infrastructure services, announcing big price cuts one after another in recent years. Increasingly granular customization options that let users save money, along with the various usage-based discount schemes the providers have devised, illustrate that there is a limit to how far simple price cuts can go. Cloud data centers are expensive to build and operate at the scale required of leading cloud service providers today, so it isn’t a race to zero as much as a race to get closer to zero than the competition.

    Read more: The Billions in Data Center Spending behind Cloud Revenue Growth

    Pricing for Custom Machine Types is simple: there’s a flat rate for each virtual CPU core and for each gibibyte (GiB) of memory you spin up.
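
    Because the price is a simple linear function of vCPUs and memory, the cost of any custom shape is easy to estimate. The hourly rates below are hypothetical placeholders, not Google’s published prices.

    ```python
    # Cost sketch for a custom machine type: flat per-vCPU and per-GiB rates.
    # The hourly rates here are hypothetical; check the provider's price list.
    VCPU_RATE_PER_HOUR = 0.033   # USD per vCPU-hour (placeholder)
    GIB_RATE_PER_HOUR = 0.0045   # USD per GiB-hour (placeholder)

    def custom_vm_hourly_cost(vcpus: int, memory_gib: float) -> float:
        return vcpus * VCPU_RATE_PER_HOUR + memory_gib * GIB_RATE_PER_HOUR

    # A CPU-heavy shape that standard machine-type menus may not offer
    print(f"${custom_vm_hourly_cost(6, 12):.4f}/hour")          # 6 vCPU, 12 GiB
    print(f"${custom_vm_hourly_cost(6, 12) * 730:.2f}/month")   # ~730 hours/month
    ```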

    Custom cloud VMs come with CentOS, CoreOS, Debian, OpenSUSE, Ubuntu, and now also Red Hat Enterprise Linux and Windows. They are supported by Google Container Engine and Deployment Manager tools.

    6:32p
    IBM Embraces Blockchain with New Bluemix Cloud Services and Code


    By The VAR Guy

    Is blockchain — the distributed database behind cryptocurrencies like Bitcoin — ready for prime time? IBM clearly thinks so. This week, the company announced new Blockchain-as-a-Service offerings in the cloud, a move that follows its recent contribution of 44,000 lines of open source code to the Hyperledger project.

    Blockchain is a type of distributed database. It’s designed in a way that ensures that all transactions are public, yet no centralized party has exclusive control over them. The technology has become explosively popular because it powers Bitcoin, the open source, peer-to-peer payment system — although it can be used for much more than that.
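
    To make the “no single party can quietly rewrite history” property concrete, here is a minimal hash-chained ledger sketch in Python. It is a conceptual illustration only, not IBM’s implementation or Hyperledger code, and it omits the distributed consensus that makes a real blockchain decentralized.

    ```python
    # Minimal hash-chained ledger: each block commits to the previous block's hash,
    # so altering any past transaction invalidates every later block. Conceptual
    # sketch only; real blockchains add distributed consensus among many parties.
    import hashlib
    import json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain: list, transactions: list) -> None:
        previous = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": previous, "transactions": transactions})

    def verify(chain: list) -> bool:
        for i in range(1, len(chain)):
            if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
                return False        # an earlier block was tampered with
        return True

    ledger = []
    append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
    append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
    print(verify(ledger))                                  # True

    ledger[0]["transactions"][0]["amount"] = 500           # attempt to rewrite history
    print(verify(ledger))                                  # False
    ```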

    Read more: What the Bitcoin Shakeout Means for Data Center Providers

    IBM’s new cloud-based blockchain offering is available from its Bluemix cloud. It’s designed to help DevOps teams build applications that use blockchain technology. Those applications can also be deployed directly on IBM z Systems servers, the company says.

    IBM is also promoting integration of blockchain apps with IoT devices for purposes that extend beyond payment. “Devices will be able to communicate to blockchain-based ledgers to update or validate smart contracts,” IBM says. “For example, as an IoT-connected package moves along multiple distribution points, the package location and temperature information could be updated on a blockchain. This allows all parties to share information and status of the package as it moves among multiple parties to ensure the terms of a contract are met.”

    Late last year, IBM became a founding member of Hyperledger, a collaborative project organized by the Linux Foundation to promote open source blockchain technology. It has contributed 44,000 lines of code to that initiative so far.

    To be sure, IBM is not the only organization backing blockchain. But it is one of the biggest to be investing in it in a major way. Just as Big Blue played a pivotal role in convincing the industry that Linux was ready for the big leagues when it announced a billion-dollar investment in the kernel nearly two decades ago, it is now throwing its support behind another emerging technology that is likely to become hugely important in niches from banking to IoT.

    This first ran at http://thevarguy.com/open-source-application-software-companies/ibm-embraces-blockchain-new-bluemix-cloud-services-and-co

    8:26p
    What IT Managers Need to Know about Data Center Cooling

    For any data center cooling system to work to its full potential, the IT managers who put servers on the data center floor have to be in contact with the facilities managers who run the cooling system, and they need at least some understanding of data center cooling themselves.

    “That’s the only way cooling works,” Adrian Jones, director of technical development at CNet Training Services, said. Every kilowatt-hour consumed by a server produces an equivalent amount of heat, which has to be removed by the cooling system. That makes the complete separation between IT and facilities functions in typical enterprise data centers simply irrational, since both teams are essentially managing a single system. “As processing power increases, so does the heat.”
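
    That one-to-one relationship between power draw and heat load is easy to quantify with standard conversion factors (1 kW ≈ 3,412 BTU/h; 1 ton of refrigeration = 12,000 BTU/h). A minimal sketch:

    ```python
    # Convert IT electrical load into the heat load the cooling plant must remove.
    # Essentially all power drawn by servers is dissipated as heat in the room.
    BTU_PER_HOUR_PER_KW = 3412.14      # 1 kW of electrical load ≈ 3,412 BTU/h of heat
    BTU_PER_HOUR_PER_TON = 12000.0     # 1 ton of refrigeration = 12,000 BTU/h

    def cooling_required(it_load_kw: float):
        btu_per_hour = it_load_kw * BTU_PER_HOUR_PER_KW
        tons = btu_per_hour / BTU_PER_HOUR_PER_TON
        return btu_per_hour, tons

    # Example: a 20 kW rack (a typical high-density cloud rack)
    btu, tons = cooling_required(20)
    print(f"{btu:,.0f} BTU/h ≈ {tons:.1f} tons of cooling")   # ~68,243 BTU/h ≈ 5.7 tons
    ```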

    Jones, who spent two decades designing telecoms infrastructure for the British Army and who then went on to design and manage construction of many data centers for major clients in the UK, will give a crash course in data center cooling for both IT and facilities managers at the Data Center World Global conference in Las Vegas next month. The primary Reuters data center in London and a data center for English emergency services – police and fire brigade – are two of the projects he’s been involved in that he’s at liberty to disclose.

    If IT managers simply communicate parameters of the equipment they have in the data center or are planning to install, facilities managers should be able to determine the optimal spot for that equipment on the IT floor. Facilities managers need to know the thermal profile and power requirements of IT equipment in order to utilize data center cooling capacity efficiently.

    Jones’s presentation will include a quick overview of the basic concepts in data center cooling and guidance on matching operational parameters of IT equipment to areas of the data center with the most appropriate temperature and humidity ranges. He will also go over newer cooling efficiency concepts, such as containment and free cooling, as well as the essential need to continuously measure the system’s performance.

    The presentation will not be basic, “but it’s not in-depth where we’ll go into cooling equations,” he said. “It would cover a good cross-section of IT professionals, as well as technicians and managers.”

    Another portion of the presentation will cover the basic steps of creating a preventative maintenance program for the cooling system. It starts with measuring and monitoring, which includes gathering sensor data, doing static pressure checks, using thermal imaging, and applying appropriate metrics to understand how the system is operating.

    The next steps are functional testing, or how to make sure the equipment is tested properly, and controlling and improving airflow management – things like making sure perforated raised-floor tiles and air vents are aligned correctly.

    Jones will go through basic visual checks that can be done to better understand capacity of the cooling system and give an overview of affinity laws – the physics of pumps and fans. A fan, for example, doesn’t necessarily move more air if it spins faster. If it’s spinning too fast, some air “slips” off the blades, which means the fan is wasting energy. Jones will explain how to determine the optimal fan speed.
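
    The affinity laws themselves are compact: for a given fan, airflow scales roughly linearly with speed, static pressure with the square of speed, and power with the cube of speed. The simplified sketch below, which ignores the “slip” losses Jones describes, shows why modest speed reductions yield large energy savings.

    ```python
    # Fan affinity laws: flow ~ speed, pressure ~ speed^2, power ~ speed^3.
    # Simplified model; real fans deviate at the extremes (e.g., blade "slip").
    def affinity_scaled(speed_ratio: float, flow: float, pressure: float, power: float):
        return (flow * speed_ratio,
                pressure * speed_ratio ** 2,
                power * speed_ratio ** 3)

    # Baseline fan: 10,000 CFM, 1.0 in. w.g. static pressure, 5.0 kW
    flow, pressure, power = affinity_scaled(0.80, 10_000, 1.0, 5.0)
    print(f"At 80% speed: {flow:,.0f} CFM, {pressure:.2f} in. w.g., {power:.2f} kW")
    # -> 8,000 CFM, 0.64 in. w.g., 2.56 kW: a 20% slowdown cuts fan power ~49%
    ```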

    Data center managers make a lot of mistakes that result in inefficient data center cooling. One of the biggest problems is poor understanding of airflow management, which is something both IT and facilities staff play a role in.

    The company can spend a lot of money delivering cold air to the data center floor, but if technicians install cabling in a way that obstructs air flow, neglect to cover empty rack spaces with blanking panels, or simply don’t know how to determine where in the rack is the best spot for a particular piece of equipment, a lot of conditioned air doesn’t get to the equipment at all or gets mixed with hot exhaust air.

    Another common mistake is overcooling. A lot of modern IT equipment works well in higher temperatures than most data centers provide. The unfortunate reality is that most data centers have a mix of old and new IT gear, which means data center managers need to have a finer understanding of their cooling system and airflow on their data center floor to take advantage of higher operating temperatures while making sure older equipment stays sufficiently cooled.

    Greater understanding of how data center cooling systems work by everyone who works in the data center can make a big difference in how efficiently the facility runs, optimizing its energy use and use of the company’s resources.

    Want to learn more? Join Cnet’s Adrian Jones and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.

     

    10:17p
    VMware Refreshes Cloud Management Platform


    By WindowsITPro

    I recently met with Marke Leake, senior director of product marketing for the Cloud Management business unit at VMware, to talk about the launch of vRealize Suite 7, the latest version of VMware’s cloud management platform. The suite is an enterprise-level hybrid cloud management platform designed to help IT better manage both on-premises and hybrid cloud infrastructure, as well as to enable faster, automated deployment of applications and services.

    Marke pointed out that since Q4 2015, VMware has completely refreshed all of the components in the vRealize Suite. These updates are the result of months of interviews and surveys with existing customers.

    The vRealize Suite 7 release updates two of the suite’s main components: vRealize Operations 6.2 and vRealize Log Insight 3.3. In Q4 2015, VMware delivered updates to the other two members of the suite: vRealize Automation 7.0 and vRealize Business for Cloud. vRealize Automation uses a policy-based framework to automate the deployment of IT services. vRealize Business for Cloud 7.0.1 is one of the components that separates the vRealize Suite from other cloud management products, as it employs industry-standard metrics to help you understand the cost of deploying an application, as well as of running a service in the cloud versus running it on-premises.

    Some of the new features in vRealize Suite 7.0 include:

    vRealize Operations 6.2

    The new vRealize Operations 6.2 provides an intelligent workload placement capability. It now integrates tightly with Distributed Resource Scheduler (DRS) and can move workloads across servers, clusters, or data centers. A new Workload Utilization Dashboard enables you to visualize the workloads.

    You may also like: Eight Key Features for IT Managers in Latest Docker Release

    vRealize Log Insight 3.3

    Enhancements to vRealize Log Insight 3.3 include a new simple Query API for easy integration into existing processes and Web Hooks support for third-party application integration. The new release also provides support for pure IPv6 environments.

    New Editions and Portable Licensing Unit model

    VMware also introduced a new Standard edition of the vRealize Suite. The Standard edition includes vRealize Log Insight, vRealize Operations, and vRealize Business. The suite also comes in Advanced and Enterprise editions, which add features for more advanced automated data center operations, application deployment, and updating.

    With vRealize Suite 7, VMware introduced a new licensing model called the Portable Licensing Unit (PLU). The PLU model enables customers to manage workloads regardless of whether they run on physical servers, VMware vSphere, third-party hypervisors, or supported public clouds. vRealize Suite 7 is priced at $3,745 per PLU for the Standard edition, $6,245 per PLU for the Advanced edition, and $7,745 per PLU for the Enterprise edition.

    This first ran at http://windowsitpro.com/hybrid-cloud/vmware-refreshes-their-cloud-management-platform

