Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, July 22nd, 2015

    1:00p
    CenturyLink Expands Data Centers in Six Markets

    Data center service provider CenturyLink has completed data center expansion projects in six markets in the first half of the year, adding a total of 10.8 MW of power capacity to address growing demand.

    While the Monroe, Louisiana-based company provides a plethora of upper-layer services, from dedicated hosting to Platform-as-a-Service, its colocation customers continue to take up the bulk of the space in its 55-plus data centers in North America and Europe, Keith Bozler, senior director of colocation product management at CenturyLink, said.

    Decisions on where to expand are driven largely by demand patterns, Bozler explained, and the latest round of expansion was to address growing demand in Boston, Minneapolis, Phoenix, Seattle, Washington, D.C., and London. The company also recently entered new markets: central Washington State, where it launched an 8 MW data center earlier this year, and Australia, a market it entered through a partnership with its Australian peer NextDC.

    While most of the space in CenturyLink data centers is occupied by colocated customer gear, the company rarely signs pure colo deals, Bozler said. Customers usually combine colo space with its other services and ultimately deploy some form of hybrid infrastructure.

    But data center expansion projects provide capacity to grow the colocation business unit first and foremost, he explained. “There’s always some sort of combination of colocation and network [or] colocation and managed,” he said.

    The foundation of CenturyLink’s colo business was its $2.5 billion acquisition of data center service provider Savvis in 2011. Since the acquisition, the company has aggressively pursued the services higher up the stack, but colocation has always been core to its strategy, as Drew Leonard, VP of colocation product management at CenturyLink, told us in an interview last year.

    The general aim is to become a provider to whom customers come for all of their data center services needs, be they colocation, hosting, or Infrastructure-as-a-Service. The company has built a technology platform that unifies all of its services under a single pane of glass, aiming to make it easy for customers to stand up infrastructure that combines a variety of types of IT resources.

    It uses the same platform to make its data center capacity expansion decisions. CenturyLink has a group of data scientists on staff analyzing usage data generated by the platform and helping make a variety of operational decisions.

    Earlier this month, CenturyLink launched a bare-metal cloud service companies can use to spin up dedicated physical servers almost the same way they spin up cloud VMs.

    3:00p
    The Paradox of Complexity and Efficiency in Modern IT Infrastructure

    Omer Trajman is Co-Founder and CEO of Rocana.

    What’s in your data center? The question seems innocuous, but increasing levels of abstraction such as IaaS, PaaS, and SDNs present a real challenge to IT operations and security teams. The dynamic allocation of applications and components adds yet another level of complexity to IT operations. How do IT teams inventory what systems and software are running when they are constantly changing? How do you debug a performance problem when the application code may be migrating from on-premises to off-premises servers and back again?

    Consider the case of Thor, a 44-year-old IT admin for a Fortune 500 telecommunications firm. When Thor started his career, he managed a small set of servers that ran enterprise applications. Each application was installed on a specific server, and Thor and other admins gained familiarity with each of these servers and their interconnections. This “tribal knowledge” was the basis for troubleshooting. Thor would hear other IT admins say things like, “Oh, yeah, that server connects to the Sun workstation in building 4200. That network connection is a little flaky.” Now consider Thor’s plight as he manages an SOA application with several thousand Java components that talk to cloud servers managed by SaaS application providers. Those SOA components are deployed on a PaaS, which dynamically scales the number of nodes up and down to meet demand. How can Thor determine whether users are experiencing performance problems because node scaling is not keeping up, or because there is a systemic problem with the connection to the cloud-based application database?

    Infrastructure complexity has brought about the death of tribal knowledge. At the same time, monitoring and management tools haven’t kept pace with the rate of change of underlying technology. Of course, vendors have tried to solve the problem for their part of the stack, leading to a proliferation of monitoring and management silos. In order to answer a seemingly simple question like, “What systems and software are running and where?” in a modern infrastructure, Thor might have to consult a half dozen tools or more to get raw data, and then struggle to merge the data into something sensible. It may be possible for Thor to “brute force” his way to a monthly report, but it certainly would be a time-consuming, error-prone, and headache-inducing process.

    Often these siloed, domain-specific tools also limit the data available by retaining data only from the sources they deem important, or by limiting data retention to extremely short periods, or both. Much like politics, the perspective provided by each of these silos creates factions, with differences of opinion that are difficult to reconcile. Since every group is working from a different set of information, there is no common base of information across teams. So, how are performance problems and outages resolved?

    Here are “Seven Must-Haves for Monitoring Modern Infrastructures” as a way to help people like Thor answer the previous question:

    1. Collect data from all systems in one repository so there is a “single source of truth” for all to share.
    2. Maintain all machine data (syslogs, application logs, metrics, etc.) for extended periods of time – months for performance data and years for security data – so you can go back in time for forensics and to test models.
    3. Create a fault-tolerant, lossless data collection mechanism.
    4. Ensure monitoring systems are more available and scalable than the systems being monitored (credit to Adrian Cockcroft of Netflix).
    5. Perform real-time analysis on data so issues can be surfaced before they become crises.
    6. Use anomaly detection and machine-learning algorithms to create an “augmented reality” for IT admins, helping guide them through TBs or even PBs of data (a brief sketch of this idea follows the list).
    7. Provide a publish/subscribe mechanism that has real-time aggregation, transformation, and filtering of data for sharing with visualization tools, R-based models, and other tools.
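
    To make item 6 a little more concrete, below is a minimal sketch of one way anomaly detection over centrally collected machine data could work, using a simple rolling z-score over a metric stream. It is an illustration under assumed data shapes (timestamped numeric samples), not Rocana’s algorithm, and the window size and threshold are arbitrary.

        # A minimal anomaly-detection sketch (item 6 above), assuming metrics have already
        # been collected into a single repository as timestamped numeric samples.
        # This rolling z-score approach is illustrative only, not Rocana's algorithm.
        from collections import deque
        from statistics import mean, stdev

        def detect_anomalies(samples, window=60, threshold=3.0):
            """Flag samples deviating more than `threshold` standard deviations
            from the mean of the preceding `window` samples."""
            history = deque(maxlen=window)
            anomalies = []
            for ts, value in samples:
                if len(history) == window:
                    mu, sigma = mean(history), stdev(history)
                    if sigma > 0 and abs(value - mu) > threshold * sigma:
                        anomalies.append((ts, value))
                history.append(value)
            return anomalies

        # Example: flag a latency spike in per-second response-time samples such as
        # [("2015-07-22T15:00:00", 12.1), ("2015-07-22T15:00:01", 11.8), ...]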

    With this powerful “hammer” in hand, IT admins like Thor can begin implementing solutions that bypass the brute force approach and start augmenting operations. You will be able to answer the question, “What’s in your data center?”, and gain awareness so that you can also answer the question, “What’s going on in your data center?”

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    3:30p
    Nomadic Virtual Machines May Be Key to Lowering Energy Costs

    In theory at least, virtual machines enable IT organizations to pursue more flexible approaches to maximizing IT infrastructure utilization that reduce energy costs. The trouble is that not many organizations have the processes in place to do anything more than manage virtual machines in much the same way they manage a physical server.

    Virtual machines make it simpler to optimize when application workloads run, which by definition reduces energy costs by driving up the utilization rates of physical IT infrastructure, according to Clemens Pfeiffer, president and CEO at Tier44 Technologies, the data center management software startup that took over intellectual property from the defunct outfit Power Assure.

    Pfeiffer is speaking on the subject at the Data Center World conference in National Harbor, Maryland, this September.

    “Most of the approaches to deploying virtual machines are still fairly static,” he said. “There’s not much dynamic allocation of workloads across the data center.”

    According to him, most IT organizations are being overly conservative when it comes to taking advantage of virtual machine management software such as VMware vMotion to move application workloads around the data center. Most application workloads don’t need to run 24/7.

    By figuring out which workloads need to run when across a virtual data center environment, not only can energy costs be sharply reduced, but the whole IT environment becomes more resilient, Pfeiffer said.
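
    As a rough illustration of the kind of placement logic Pfeiffer is describing (not Tier44’s actual algorithm), the sketch below packs virtual machine workloads onto as few hosts as possible with a first-fit-decreasing heuristic so that idle hosts can be powered down; the utilization figures and the single-dimension capacity model are assumptions made for brevity.

        # A minimal consolidation sketch, assuming each VM's demand and each host's
        # capacity can be expressed as a single normalized utilization number.
        # First-fit-decreasing is illustrative only, not Tier44's placement method.
        def consolidate(vm_demands, host_capacity=1.0):
            """Pack VMs onto as few hosts as possible; returns per-host placements."""
            hosts = []  # each entry: {"free": remaining capacity, "vms": [names]}
            for vm, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
                for host in hosts:
                    if host["free"] >= demand:
                        host["free"] -= demand
                        host["vms"].append(vm)
                        break
                else:  # nothing fits, so another host stays powered on
                    hosts.append({"free": host_capacity - demand, "vms": [vm]})
            return hosts

        # Example: six lightly loaded VMs fit on two hosts, so four hosts could be
        # powered down or repurposed.
        placements = consolidate(
            {"web1": 0.3, "web2": 0.25, "db1": 0.5, "batch1": 0.4, "cache1": 0.2, "dev1": 0.15}
        )
        print(len(placements), "hosts needed")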

    Historically, of course, IT managers have been conditioned to think of stability of the IT environment as their ultimate goal. As such, many IT managers think the less dynamic the IT environment is the more resilient it is. In reality, the ability to move virtual machines quickly actually introduces more resiliency, because in the event of an IT equipment failure, the virtual machines running on a server can be moved to another server in a matter of minutes, he said.

    At some point every piece of IT infrastructure equipment for one reason or another is going to fail. Given that assumption, it’s clear that virtual machines will at some point need to move.

    Pfeiffer is simply making the case that rather than viewing the movement of virtual machines as an occasional event, the time has come to put processes in place that make moving those virtual machines a routine exercise, one that saves money on energy by driving utilization rates well above the 20-percent range seen in the typical data center environment.

    That may require some additional IT expertise and reliance on IT automation technologies, but the cost savings will make the effort well worth the trouble.

    For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Pfeiffer’s session titled “Optimizing IT Reliability, Performance, and Energy Efficiency in Virtualized Data Centers.”

    4:36p
    Nlyte Integrates DCIM Software With Three ITSM Platforms

    Looking to make it simpler to integrate its data center infrastructure management software with a variety of IT service management (ITSM) offerings, Nlyte Software has repackaged its software into three editions that come with all the connectors required to integrate with ITSM solutions by BMC Software, HP, and ServiceNow.

    Integration with leading ITSM platforms is becoming an increasingly important feature for DCIM software, and numerous vendors in the space have been working to ensure their solutions gel with the likes of ServiceNow or BMC. Such integration links data center asset management with service management and, in some cases, even cloud management, providing a more holistic view of the infrastructure from top to bottom.

    Nlyte has developed a framework meant to make integration with a variety of ITSM solutions quicker and easier. CommScope, one of Nlyte’s competitors, recently integrated its iTRACS DCIM software with HP’s ITSM platform.

    Mark Gaydos, chief marketing officer for Nlyte, said Nlyte for BMC ITSM, Nlyte for HP ITSM, and Nlyte for ServiceNow ITSM will make it simpler for IT organizations to integrate the management of infrastructure within the data center into larger management frameworks.

    “When it comes to ITSM the data center is the last frontier,” said Gaydos. “In a lot of places data center administrators are still using spreadsheets and Visio drawing tools.”

    Overall, IT organizations have been moving toward managing IT as a service by invoking APIs to programmatically manage IT environments at scale. As part of that trend, Gaydos said, Nlyte enables seamless integration between ITSM systems and the workflows used to manage IT infrastructure.

    Specifically, IT organizations can use Nlyte’s DCIM software to ensure information within their configuration management database (CMDB) and asset management systems is synched in real time with their ITSM software, including providing the exact location of assets, resources consumed by those assets, and how assets are interconnected with one another. Nlyte also sends alerts when new resources are brought online or when resource limitations might be reached. In addition, alerts relating to scheduled downtime and migrations and consolidations are also sent.
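
    As a simplified illustration of what such a sync can look like at the API level (this is not Nlyte’s connector), the sketch below pushes a single asset record into a ServiceNow-style CMDB through its REST Table API; the instance URL, credentials, and field mapping are assumptions.

        # A minimal sketch of pushing one asset record into a ServiceNow-style CMDB via
        # its REST Table API. This is NOT Nlyte's connector; the instance URL, the
        # credentials, and the field mapping below are illustrative assumptions.
        import requests

        def push_asset(asset):
            url = "https://example.service-now.com/api/now/table/cmdb_ci_server"  # hypothetical instance
            payload = {
                "name": asset["name"],
                "location": asset["rack"],      # e.g., "DC1 / Row 4 / Rack 12 / U20"
                "cpu_count": asset["cpu_count"],
                "ram": asset["ram_mb"],
            }
            resp = requests.post(url, json=payload, auth=("api_user", "api_password"), timeout=10)
            resp.raise_for_status()
            return resp.json()["result"]["sys_id"]  # identifier of the new CMDB record

        # Example: register a newly racked server discovered by the DCIM tool.
        # push_asset({"name": "web-042", "rack": "DC1/R4/U20", "cpu_count": 16, "ram_mb": 65536})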

    Given the volume and density of modern data center environments, Gaydos said, it is becoming next to impossible for IT organizations to manually keep track of all the changes. At the same time, senior IT leaders want to be able to dynamically scale those environments, which is hard to do without knowing what resources are actually available at any given moment.

    In general, ITSM software is widely seen as a mechanism for unifying application and IT infrastructure management. Leveraging the APIs that providers of applications and manufacturers of IT infrastructure now routinely expose, it’s become possible to manage IT using programming tools to dynamically scale IT resources up and down as required.

    The challenge, said Gaydos, is making sure that what the IT organization thinks is occurring within its ITSM frameworks actually aligns with what is occurring inside the data center.

    5:01p
    Linux Container Standards Org Adds Members, Including Oracle

    Roughly 30 days after launching the project, Docker revealed that 11 additional vendors, including Oracle, have now signed on to support the Open Container Initiative. The company made the announcement at the Open Source 2015 Conference (OSCON) in Portland today.

    The initiative is intended to make it possible to share images across multiple types of Linux containers. The announcement comes on the heels of the formation of the Cloud Native Computing Foundation, announced this week, which incorporates OCI technology into its core specification.

    Launched as the Open Container Project, the effort was renamed to Open Container Initiative to avoid confusion with other OCPs, such as the Open Compute Project or the Linux Foundation’s Open Compliance Program.

    In addition to signing up new members, David Messina, vice president of marketing for Docker, said the company is also making available today a draft of the charter under which OCI will operate.

    “We’ve made a lot of progress in a month,” said Messina. “The end goal is to create a specification that enables Docker to easily run across multiple data center environments.”
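
    For a sense of what such a specification governs, the sketch below writes out a minimal container runtime configuration of the kind OCI set out to standardize (later published as config.json in the runtime spec); the field names follow the spec as it was eventually published and are an assumption relative to the draft that existed at the time of this announcement.

        # A minimal sketch of an OCI-style container bundle configuration. Field names
        # follow the runtime spec as later published; at the time of this announcement
        # the draft was still in flux, so treat this as an assumption, not the standard.
        import json

        config = {
            "ociVersion": "1.0.0",
            "root": {"path": "rootfs", "readonly": True},  # unpacked image filesystem
            "process": {
                "terminal": False,
                "user": {"uid": 0, "gid": 0},
                "args": ["/usr/bin/myapp", "--port", "8080"],  # hypothetical entrypoint
                "env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],
                "cwd": "/",
            },
            "hostname": "myapp",
        }

        with open("config.json", "w") as f:
            json.dump(config, f, indent=2)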

    While Docker initially evolved as an alternative to running virtual machines on a Linux platform, the number of places where IT organizations run distributed applications has increased substantially. In most cases Docker containers are deployed on bare-metal servers by developers.

    But in some production environments IT operations teams have taken to deploying Docker containers on VMs in order to leverage their existing management tools. At the same time, Platform-as-a-Service environments, such as Cloud Foundry and Deis, have emerged as vehicles for deploying Docker containers as well. The end result is a broad mix of Docker container deployment scenarios, across which OCI is meant to ensure a level of transparency and interoperability.

    In fact, with enough vendors signaling OCI support, any initial concerns IT organizations might have had about becoming locked into a particular Linux container implementation are quickly becoming a non-issue, said Messina. That should give IT operations teams greater confidence in making use of Docker containers in production environments.

    While there is no doubt that Docker containers are all the rage among developers, it’s clear that IT operations teams are still coming to terms with all the implications of supporting those applications in production. But with more vendors signaling support for OCI it’s now only a matter of time before those applications start to be deployed in ever increasing numbers.

    Vendors and IT organizations that have committed to OCI now include:

    AT&T, ClusterHQ, Datera, Kismatic, Kyup, Midokura, Nutanix, Oracle, Polyverse, Resin.io, Sysdig, Suse, Twitter, Verizon, Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat, and VMware.

    6:41p
    IBM’s Machine Learning Tech Takes on Solar Power’s Flakiness

    By Joanna Glasner

    Some of the biggest data center operators, companies like Google, Facebook, and Microsoft, buy a lot of renewable energy to clean up the power supply of their cloud infrastructure. Yet, their ability to rely on wind and solar has long been limited by changing weather patterns and their effect on generation levels, putting renewables in the category of “intermittent” energy sources.

    Now, through the power of Big Data and machine learning, new forecasting technology promises to alleviate some of that uncertainty.

    Earlier this month, IBM disclosed that solar and wind forecasts it co-developed using machine learning technologies are proving to be as much as 30 percent more accurate than ones created using conventional approaches. Called the Self-learning weather Model and renewable forecasting Technology, or SMT, it continuously analyzes and improves solar forecasts derived from a large number of weather models.

    Forecast accuracy is better primarily thanks to the sheer enormity of the datasets available, Hendrik Hamann, manager of the Physical Analytics group at IBM Research, said. He is part of a team of more than a dozen in-house researchers, working in collaboration with the Department of Energy, who spent over two years developing the forecasting technology.

    The project used up a big chunk of the petabyte of storage dedicated to it, tapping into the DoE’s high-performance computing facilities for processing power. Data sources include sensor networks, local weather stations, cloud motion tracked by sky cameras and satellites, and historical records going back several decades. Variables are plugged into multiple forecasting models, with the system continuously tracking how they work under varying conditions.
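
    As a simplified illustration of the model-blending idea (not IBM’s SMT itself), the sketch below weights several solar forecasts by the inverse of their recent error and blends them into one prediction; the model names, error window, and megawatt figures are assumptions.

        # A simplified sketch of blending multiple solar forecasts by recent skill.
        # Each model's weight is the inverse of its mean absolute error over a recent
        # window. Illustrative only; not IBM's SMT system.
        def blend_forecasts(forecasts, recent_errors):
            """forecasts: {model: predicted output in MW}
            recent_errors: {model: list of recent absolute errors in MW}"""
            weights = {
                m: 1.0 / (sum(errs) / len(errs) + 1e-9)  # inverse mean absolute error
                for m, errs in recent_errors.items()
            }
            total = sum(weights.values())
            return sum(forecasts[m] * w / total for m, w in weights.items())

        # Example with two hypothetical numerical weather models:
        blended = blend_forecasts(
            forecasts={"model_a": 820.0, "model_b": 760.0},            # MW, next hour
            recent_errors={"model_a": [40, 55, 35], "model_b": [90, 120, 80]},
        )
        print(round(blended, 1), "MW")  # weighted toward the historically better model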

    “We can actually see which one of those models or forecasting systems has performed better than others,” said Hamann, adding that the technology’s applications aren’t confined to solar. Researchers are looking to apply similar machine learning systems for wind and hydro power, and Hamann sees potential for application in other industries, such as tracking soil moisture levels for agriculture.

    Over the next year, the focus will be on integrating the solar forecasting technology into the operations of energy producers. IBM and the DoE are working with operators including ISO New England, an electricity provider for six New England states, on putting the system to practical use. Implementation is particularly timely for the New England region, which in the last five years has gone from just 44 MW of installed solar photovoltaic resources to 1,000 MW.

    Meanwhile, power companies in virtually all geographies are adding solar capacity at a brisk pace. In 2013, solar was the second-largest source of new electricity generating capacity in the US, exceeded only by natural gas. However, the difficulty in producing accurate solar and wind forecasts has required electric utilities to hold higher amounts of energy reserves as compared to conventional energy sources.

    A key goal for the DoE is to make it easier and more cost-effective to implement solar. Funding for the forecasting project came from the department’s SunShot Initiative, a federal research program aimed at making solar fully cost-competitive with traditional energy sources before the end of this decade. If progress is on track, SunShot researchers predict that solar power could provide as much as 14 percent of US electricity demand by 2030 and 27 percent by 2050.

    The team of scientists from IBM and the National Renewable Energy Laboratory is presenting a paper on its preliminary findings this week at the European Control Conference in Linz, Austria.

    7:42p
    Alibaba Cloud Division Aliyun Shares Cloud Privacy Commitment as it Vies for Global Business


    This article originally appeared at The WHIR

    Alibaba’s cloud division Aliyun held its inaugural Data Technology Day in Beijing on Wednesday, where it revealed a new lineup of cloud products and solutions and a “Data Protection Pact” to assure enterprises of the security of data in the cloud.

    Aliyun shared its vision for the future of what the company calls “the Data Technology economy,” an economy in which it plans to be a major global force, with over 2,000 developers, entrepreneurs, government agencies, industry players, and partners.

    Aliyun president Simon Hu also said that the company will invest in data centers in the US, India, and the Middle East as part of a bid to challenge AWS for the cloud market share lead in three to four years, Bloomberg reports. Aliyun announced a joint venture with Dubai holding company Meraas to deliver services to the Middle East in May.

    The Data Protection Pact is essentially a data privacy commitment to customers, but also a suggestion or challenge to other cloud service providers to treat data similarly to how the financial industry treats customers’ money. The pact includes a three-point proposal “to the technology industry and the entire society.”

    Aliyun asserts that data generated on its platform is owned by the customer, who has rights to free and safe access, and to share, exchange, transfer or delete it at any time. It says the selection of services to securely process data is the right of the customer, and that data cannot be altered or transferred by Aliyun. Finally, it says that customer data protection on the platform is Aliyun’s responsibility, as money deposited in a bank becomes that institution’s responsibility. This means that similar management, control and internal auditing systems, as well as threat protection, data recovery and related security practices must all be adopted.

    The company presented more than 14 cloud products and 50 solutions it has developed for enterprises and developers across eight sectors, including gaming, multimedia, government, medical, IoT, and finance. More than 200 partners provide additional solutions, and Aliyun expects that number to grow to over 2,000 in the next few years.

    “The huge amount of data and advanced computing capacity has brought great business opportunities to the industry,” said Wensong ZHANG, Chief Technology Officer of Aliyun. “Deep learning and high-performance computing have been widely adopted in Alibaba Group for internal use. Aliyun will roll out high-performance computing services and accelerators based on GPU (Graphics Processing Unit) technology that could be applied in image recognition and deep learning to expand the boundaries of business.”

    New solutions include SSD cloud storage servers, batch computing services used in gene sequencing and graphics rendering, and Virtual Private Cloud systems for cloud and hybrid databases compatible with Oracle systems.

    Aliyun partnered with EnterpriseDB last week to offer customers a diversified relational database suite, and launched an international partnership program in June. Those moves forecasted Wednesday’s announcement of a global enterprise services push, but the company will also need to avoid disruptions like the one it suffered in June to leverage its huge investments.

    This first ran at http://www.thewhir.com/web-hosting-news/alibaba-cloud-division-aliyun-shares-cloud-privacy-commitment-as-it-vies-for-global-business

