Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, May 13th, 2015

    12:00p
    What Digital Realty’s New CIO is Up To

    SAN FRANCISCO – About 10 years ago, when he was CIO at Align Technology, a Silicon Valley-based maker of dental aligners, Michael Henry applied Lean, the waste-elimination methodology developed by Japanese manufacturers during the second half of the last century, to optimize the processes between the moment a dentist images a customer’s dental structure and the moment the finished aligner arrives back at the dentist’s office.

    Today, a decade later, he says the same concepts can be applied to streamline operations at Digital Realty. Henry, 52, recently became the global data center real estate and services giant’s first CIO.

    “While the business models are different, and certainly some of the pain points in the market segments are different, the way technology adds value for companies is not dramatically different,” he says. It’s ultimately about using technology to deliver whatever service or product the company provides faster and to improve quality.

    At Align, he used a combination of process redesign, automation, and making operational data available to staff in real time. Manual packaging was replaced by automation, for example, and so were some aspects of creating digital models of customers’ dental structures.

    Michael Henry, CIO, Digital Realty

    As CIO at Rovi, his most recent gig prior to joining Digital, Henry and his team transformed infrastructure into “infrastructure as code,” building infrastructure and platform services to automate and bring a manufacturing mindset to software development with the goal of improving time to market and code quality.

    In ‘Data Acquisition Mode’

    When we sat down with Henry in April at the company’s sunny head office in the city’s Financial District, its windows overlooking San Francisco Bay, it was too early to talk specifics about what changes he had in mind for Digital’s processes. He had only started at the company early that month and was still in “data acquisition mode,” as he put it.

    But one of the directions he’s looking in is a direction set by Bill Stein, Digital’s recently appointed CEO, who had served as the company’s CFO since its IPO in 2004. Henry is looking across industrial, operational, and enterprise systems to “help achieve Bill’s vision of getting information out of those systems and unlocking it, so that people can make better decisions faster to drive value,” he says.

    Data and data analytics have always been a CIO’s best friends, but the capabilities for data collection and analysis made possible by modern technology are unprecedented, so that’s one area where we can expect more focus from Digital. One option could be applying analytics and visualization across operational data from data center facilities to respond to outages faster and more effectively, Henry says.
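    As a rough illustration of what that could look like in practice (a minimal sketch, not Digital’s actual tooling), the snippet below scans facility sensor readings for out-of-range values so operators can be alerted quickly; the column names and thresholds are assumptions made for the example.

    ```python
    # Minimal sketch: flag out-of-range facility readings for faster incident response.
    # Column names and thresholds are hypothetical, for illustration only.
    import pandas as pd

    THRESHOLDS = {
        "supply_air_temp_c": (18.0, 27.0),  # illustrative recommended band
        "ups_load_pct": (0.0, 90.0),
        "humidity_pct": (20.0, 80.0),
    }

    def flag_anomalies(readings: pd.DataFrame) -> pd.DataFrame:
        """Return rows where any monitored metric falls outside its allowed band."""
        mask = pd.Series(False, index=readings.index)
        for column, (low, high) in THRESHOLDS.items():
            if column in readings:
                mask |= (readings[column] < low) | (readings[column] > high)
        return readings[mask]

    sample = pd.DataFrame({
        "facility": ["SFO-1", "SFO-1", "DAL-2"],
        "supply_air_temp_c": [22.5, 29.1, 24.0],
        "ups_load_pct": [71.0, 72.0, 93.5],
        "humidity_pct": [45.0, 44.0, 41.0],
    })
    print(flag_anomalies(sample))  # rows 1 and 2 are flagged
    ```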

    The Power of Modern Analytics Tech

    He seems generally excited about the analytics capabilities that are accessible nowadays. The way companies have traditionally addressed Big Data has been building out internal data warehousing systems and analytics applications. But with the volume of data companies gather now growing exponentially, many are realizing that more often than not you just cannot buy enough hardware and evolve your in-house analytical capabilities quickly enough.

    The good news is that’s no longer necessary. “There are companies now on the market that are starting to make it very easy for you to do that,” Henry says, noting companies like Tableau and Birst as examples.

    “They’re beginning to take some of Birst’s data warehousing in the cloud and analytical capabilities and Tableau’s visualization capabilities and allow people to get stuff done in days that used to take months before,” he says.

    Digital Starts New Chapter

    Henry joins Digital during a time of big changes, and his role is one of them. As the data center industry evolves, the company, whose traditional bread and butter has been big long-term space-and-power leases, finds itself needing to evolve too.

    Digital has placed a lot more emphasis on partnering with service providers that can add things like managed hosting, cloud, or cloud connectivity services on top of its traditional offerings. The company has also been selling non-core real estate assets and more actively promoting its retail colocation services in markets where it has robust colocation facilities, such as San Francisco and Dallas.

    Now that its new management team is fully rounded out – the company also appointed a new COO and CFO in April – Digital is set to begin the next chapter, eyeing international expansion and maximizing the value of its existing footprint in the U.S., Europe, and Asia.

    12:48p
    Loud Partners Taps IO Modular Data Center Infrastructure To Double Capacity

    IO colocation is powering New York City-based Loud Partners’ managed services business. Loud Partners has doubled its capacity at IO, and moved from a shared data center environment to modular data center infrastructure.

    IO allows Loud Partners to offer colocation services without having to invest resources in building and maintaining its own data center. Alexander Zhivov, director of infrastructure at Loud Partners, said shifting to a modular data center enabled the company to quickly and incrementally provision capacity to support the growing business.

    “This flexibility has been essential in helping us grow while also managing our costs,” said Zhivov. The company was also able to install one of its largest clients alongside its deployment.

    Loud Partners is a customer in IO’s Phoenix and New Jersey data centers. The company hopes to expand internationally into IO London (Slough) and Singapore.

    IO’s massive New Jersey data center overlooks the New Jersey Turnpike and was once a printing plant for The New York Times. It serves as the East Coast beachhead for IO and as a proving ground for the company’s bid to transform the way data centers are built and deployed.

    Like many managed services providers, Loud Partners has to deliver a hybrid portfolio of both cloud and traditional IT services to clients. The company offers disaster recovery, colocation, and application hosting to firms ranging from small to large in the Tri-State area. Its customers include financial, healthcare, marketing, and law firms, as well as non-profits.

    Alexander Mikhaylov, Loud Partners director of support services, touted IO’s ability to fix problems remotely through software.

    Last December, IO split into two companies. IO continues to operate as a data center provider, while the other, BaseLayer, is a technology vendor selling data center containers and the IO.OS data center infrastructure management software.

    1:00p
    CenturyLink Opens Hydro-Electric Powered Washington Data Center

    CenturyLink opened a new data center in central Washington State and will tap the region’s abundant hydroelectric power in support of its hybrid IT services portfolio. More than 85 percent of the utility power supplied to the facility is hydroelectric. The data center will have an initial 8 megawatts of capacity but will ultimately support up to 30 megawatts of IT load.

    The data center is located in Moses Lake, Washington, with electricity supplied in part by the nearby Columbia River. Data centers have appeared all along the river in both Washington State and Oregon. The river and its hydroelectric power are supporting a growing northwest data center scene, one also complemented by low natural disaster risk (lowest seismic rating in the western United States) and friendly sales tax breaks on data center equipment purchases.

    In addition to low-cost hydropower, the Washington climate allows for significant use of free-air cooling, which drives a better Power Usage Effectiveness (PUE).
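    For readers unfamiliar with the metric, PUE is simply total facility energy divided by the energy delivered to IT equipment, so a lower value is better; the numbers in the sketch below are made up to show the calculation and are not CenturyLink’s figures.

    ```python
    # Power Usage Effectiveness = total facility energy / IT equipment energy.
    # The figures below are illustrative only, not CenturyLink's reported numbers.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        if it_equipment_kwh <= 0:
            raise ValueError("IT load must be positive")
        return total_facility_kwh / it_equipment_kwh

    # Free-air cooling cuts mechanical cooling overhead, pulling PUE toward 1.0.
    print(pue(1_300_000, 1_000_000))  # 1.3 with conventional cooling (hypothetical)
    print(pue(1_150_000, 1_000_000))  # 1.15 with heavy free-air cooling (hypothetical)
    ```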

    The new facility provides power costs and efficiency metrics that rank among the best in the industry, according to Dave Meredith, senior vice president of CenturyLink.

    The Washington data center is yet another example of CenturyLink’s recent commitment to environmental sustainability. In Sunnyvale, CA, the company is using natural gas, which may help the colocation provider make its services more attractive to customers that care about powering their infrastructure with clean energy. It also qualified for the climate change tax break in the UK.

    There reportedly isn’t a lot of interest in clean energy among typical colocation customers; however, the tide is rising, and CenturyLink is thinking ahead. Very visible renewable energy moves and investments on the part of cloud providers have put data center renewable energy use in the spotlight. Among customers themselves, one very recent example is Etsy, which noted the importance of renewable energy when selecting a provider.

    Last year, the NRDC said colocation providers needed to play a bigger renewable energy role.

    Moses Lake is about 35 miles from Quincy, Washington, one of the state’s primary data center markets and home to Microsoft’s massive campus. Dell and Yahoo have data centers in Quincy, as do multi-tenant providers Vantage Data Centers and Sabey, which recently kicked off construction of a second massive Quincy data center.

    “The central part of Washington State is one of the geographies in which I see substantial potential for further growth as a data center hub,” said Kelly Quinn, research manager with IDC, in a press release.

    Quinn said the location should help CenturyLink’s new data center provide customers with the ability to achieve higher levels of density without incurring high power costs. It also addresses the needs of “green”-conscious customers.

    The central Washington data center will offer on-site data center and network services in addition to cloud access, colocation and managed services.

    CenturyLink continues to expand both in North America and internationally. It recently partnered with NextDC to enter the Australian market and built its first cloud center in Asia.

    The company is expanding its services in addition to its footprint. CenturyLink grew its managed database offerings through the acquisition of NoSQL database-as-a-service firm Orchestrate in April. It acquired cloud disaster recovery provider DataGardens in late 2014.

    2:17p
    Web-Based Transient Detection Can Enhance Data Center Electrical System

    Bhavesh Patel is Director of Marketing and Customer Support at ASCO Power Technologies, Florham Park, NJ, a business of Emerson Network Power.

    Today’s data center customers want and expect constant access to stored data. Therefore, clean power and system reliability are paramount.

    Many electrical components at data centers are very sensitive to power anomalies that can damage equipment and put a data center at risk of downtime, a situation management always wants to avoid.

    The monetary cost of unplanned partial and total outages from business disruption and lost revenue can be significant, with overall costs related to the duration of the outage and the size of the data center.

    Surge events, transients, swells, voltage total harmonic distortion (VTHD), and other power system anomalies can be problematic and put a facility at risk.

    To help optimize clean and reliable power, data centers should proactively monitor, measure, and manage their facilities’ electrical systems on a 24/7 basis. Understanding the severity, type, and timing of a power quality event allows personnel to more effectively manage a data center’s electrical system.

    Implementing a web-based transient detection monitoring system can contribute to more effective management of the electrical system. Such a system combines surge suppression hardware with dedicated software that proactively monitors and measures the data center’s electrical system, providing a way to detect abnormal power quality events. It gives data center management knowledge that can be used to predict and address potential problems before they happen. The combined technology goes beyond what is typically available with standard power meters.

    This type of advanced transient detection system can monitor RMS voltage in real time at every connected panel. It can also track system anomalies such as transients, surges, swells, crest factor, phase loss/outages, and VTHD, and it includes preset conditions that trigger alarms.
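    As a rough illustration of the kind of preset threshold logic such a system applies, the sketch below classifies RMS voltage samples against sag and swell limits and flags alarms; the nominal voltage and the ±10 percent band are illustrative assumptions, not vendor defaults.

    ```python
    # Minimal sketch of preset power-quality alarm thresholds on RMS voltage.
    # Nominal voltage and band limits are illustrative assumptions, not vendor defaults.
    NOMINAL_V = 480.0
    SWELL_LIMIT = 1.10 * NOMINAL_V  # sustained RMS above +10% -> swell
    SAG_LIMIT = 0.90 * NOMINAL_V    # sustained RMS below -10% -> sag

    def classify_rms(rms_voltage: float) -> str:
        """Classify a single RMS voltage measurement against the preset limits."""
        if rms_voltage > SWELL_LIMIT:
            return "SWELL"
        if rms_voltage < SAG_LIMIT:
            return "SAG"
        return "NORMAL"

    def check_panel(samples: list[float]) -> list[tuple[int, str]]:
        """Return (sample index, event type) for every out-of-band reading."""
        events = [(i, classify_rms(v)) for i, v in enumerate(samples)]
        return [(i, label) for i, label in events if label != "NORMAL"]

    print(check_panel([479.2, 481.0, 531.8, 478.5, 417.3]))  # [(2, 'SWELL'), (4, 'SAG')]
    ```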

    A system with an embedded web page interface gives management, and others with a need to know, easy and full access to the accrued data, making it simple to scroll through real-time measurements and analyze data from any installed location.

    Such a solution enables real-time power quality measurements, date- and time-logged events, identification of the sources of power quality issues, comparisons between locations, and statistical summaries. Typically, surge suppressors with integrated advanced detection and power quality analysis capabilities should be installed at electrical panels throughout the facility: at the service entrance, at distribution panels, at branch panels, and at individual equipment locations feeding the most critical business operations.

    A full-featured system could monitor and analyze recently recorded system anomalies and may also include multiple user-configurable alarm thresholds. Data can be accessed 24/7 directly at the device onsite or over a browser-accessible network, which enables remote monitoring.
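    A minimal sketch of what remote polling of such an embedded web interface might look like follows; the endpoint path and response fields are hypothetical assumptions, since the actual interface is vendor-specific.

    ```python
    # Hypothetical sketch: poll an embedded web interface for recent power-quality events.
    # The URL path and JSON fields are assumptions; consult the device documentation.
    import json
    import urllib.request

    def fetch_recent_events(device_ip: str) -> list[dict]:
        url = f"http://{device_ip}/api/events/recent"  # hypothetical endpoint
        with urllib.request.urlopen(url, timeout=5) as response:
            return json.loads(response.read())

    for event in fetch_recent_events("10.0.12.41"):
        print(event.get("timestamp"), event.get("type"), event.get("magnitude"))
    ```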

    Incorporating advanced transient detection monitoring into a surge protection device is worth looking into not only for new installations but for retrofits and additions. Understanding the severity, type, and timing of a surge or other anomaly and analyzing detected trends with timely and accurate information will provide added business intelligence to better manage the data center’s electrical system.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:14p
    Venom Security Vulnerability Threatens Data Centers

    A security research firm is warning that a new zero-day vulnerability called Venom could allow a hacker to take over vast portions of a data center from within, reports ZDNet.

    Venom is potentially bigger than Heartbleed. A common legacy component in ubiquitous virtualization software such as Xen, KVM, and Oracle’s VirtualBox can allow a hacker to infiltrate potentially every machine in a data center network. VMware and Microsoft Hyper-V are not affected.

    That component is a legacy virtual floppy disk controller. For younger readers, the floppy disk is the physical object behind the save icon that pops up in games; the disks are largely ignored these days, much like the legacy component carrying the vulnerability. The bug has gone unnoticed for more than a decade, security expert Dan Kaminsky told ZDNet.

    Virtualization has led to more densely packed servers filled with cordoned off virtual machines that share resources controlled by a hypervisor. Venom, which stands for Virtualized Environment Neglected Operations Manipulation, allows access to the entire hypervisor, as well as every other network-connected device.

    The bug is found in the open source computer emulator QEMU. Specially crafted code sent to the virtual floppy disk controller can allow a hacker to break out of their own virtual machine and access other machines, regardless of the owner. Root privileges are needed to exploit the flaw.
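    Patching QEMU and the hypervisors that embed it is the real fix, but one way an operator might triage exposure is to inventory which guest definitions still carry a virtual floppy device. The sketch below scans libvirt domain XML files; the directory is the common default and may differ on your systems, and on unpatched builds the vulnerable controller code can be reachable even without a configured floppy drive, so this is only a rough inventory, not a mitigation.

    ```python
    # Triage sketch: list libvirt guests whose XML defines a virtual floppy device.
    # /etc/libvirt/qemu is the common default path; adjust for your distribution.
    # Note: on unpatched QEMU builds the vulnerable FDC code can be reachable even
    # without a configured floppy drive, so patching remains the actual remediation.
    import glob
    import xml.etree.ElementTree as ET

    def guests_with_floppy(xml_dir: str = "/etc/libvirt/qemu") -> list[str]:
        flagged = []
        for path in glob.glob(f"{xml_dir}/*.xml"):
            root = ET.parse(path).getroot()
            name = root.findtext("name", default=path)
            if any(disk.get("device") == "floppy" for disk in root.iter("disk")):
                flagged.append(name)
        return flagged

    if __name__ == "__main__":
        for guest in guests_with_floppy():
            print("floppy device defined:", guest)
    ```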

    The bug was found by Jason Geffner, a senior security researcher at CrowdStrike, which worked with software vendors to patch it before the vulnerability was publicly disclosed today.

    Heartbleed allowed those with malicious intent to grab data from the memory of servers running affected versions of widely used and open source OpenSSL encryption software.

    3:30p
    Big Switch Networks Embraces VMware NSX

    Looking to bridge a growing divide inside the data center, Big Switch Networks announced today at the Open Network User Group (ONUG) Spring 2015 conference that its software-defined network (SDN) can now be integrated with NSX network virtualization software from VMware.

    Prashant Ghandi, vice president of product management and strategy for Big Switch Networks, says that because NSX is widely deployed, IT organizations want to be able to embrace an open SDN platform that supports both NSX and emerging open source technologies that are deployed directly on top of x86 servers or commodity silicon, rather than switches based on proprietary ASIC processors.

    In addition to supporting NSX, version 2.6 of the Big Cloud Fabric (BCF) controller from Big Switch Networks can also be integrated with VMware vCenter Server management software to simplify administration across both environments.

    Finally, Big Switch Networks has also included flow tracing across the fabric topology, regardless of which source leaf switch, associated spine switch, or destination leaf switch is involved. In addition, Ghandi says BCF lets administrators track virtual machine and host properties, virtual machine mobility events, and host connectivity to the BCF leaf switches.

    Ghandi says Big Switch Networks is trying to introduce a level of network agility that is currently missing from most data center environments. While virtual machines can be configured in a matter of minutes, it still takes days or weeks to provision the associated networking resources. Because BCF doesn’t require a physical switch to be deployed, the SDN environment running on a bare metal server is simpler to deploy and manage, says Ghandi.

    “We’re trying to reduce capex costs while also providing operational simplicity that provides more agility,” says Ghandi. “Today everybody else gets to have fun while the network guy still has to do all the monkey work back in the lab.”

    Ghandi also notes that introducing higher levels of automation into the networking environment will soon enable Big Switch Networks to begin applying deeper levels of analytics to maximize the overall efficiency of the network.

    Just about everybody at this point agrees that the future of networking, along with the rest of the data center, is software defined. Not nearly as clear is the path by which SDN will be achieved inside the data center. Cisco, for example, makes a case for adding a layer of software on top of networking infrastructure that is already widely deployed inside data centers. Relative upstarts such as Big Switch Networks are making the case for moving SDN onto commodity IT infrastructure that is substantially less expensive than proprietary networking equipment.

    Regardless of the path chosen, one thing is certain: for better or worse, network management inside the data center will soon be tightly integrated with every other piece of IT infrastructure.

    4:29p
    Learn About a New Next-Gen Data Center

    As an integral part of its 25-year expansion plan, premier anti-aging company Nu Skin needed a new next-gen data center to expand its global headquarters and provide a unique interactive experience. Its existing data center was showing signs of aging (which didn’t seem right for an anti-aging company).

    Fortunately, the company owned land directly across the street from its primary corporate complex. It purchased additional property from the city and kicked off an ambitious plan to link the existing facility to a new building that would house a Tier III data center. Nu Skin leadership envisioned a new sales associate experience dubbed Innovation & Data Center spotlighting nine locations across the new complex through an interactive Nu Skin narrative.

    The original plan was to build it internally, but after evaluating the scope of the project, Nu Skin joined forces with CommScope. In this white paper, we learn about the collaboration between Nu Skin and CommScope to build a new next-gen data center. The year-long project was completed on time and has already resulted in increased reliability and performance – jumping from a Tier I to a Tier III data center! Read more…

    Download Now

    4:30p
    Chinese Cloud Provider Aliyun Enters Middle East Partnership


    This article originally appeared at The WHIR

    Aliyun, the cloud computing division of Chinese ecommerce powerhouse Alibaba, is pursuing a joint venture with Dubai-based holding company Meraas to deliver system integration services in the region.

    The joint venture will provide application development, service-oriented architecture, testing, validation, citizenship e-services and big data operations with a special focus on analytics, revenue-generation and payment solutions.

    Aside from the technology venture, Meraas will also develop a “technology-oriented master-planned integrated community” that includes a Tier 3 data center facility that will be used to deliver services. The development will also include hospitality, residential and commercial spaces, retail, and restaurants, and will be positioned as part of Dubai’s “Smart City” plan aimed at attracting ICT, media, finance, and R&D companies to Dubai.

    Public cloud services in the Middle East and North Africa region are expected to grow 17.1 percent this year, reaching $851 million in 2015, according to a recent report by Gartner. Software-as-a-service is the largest segment of the cloud services market in MENA, and is expected to grow 25 percent in 2015 to a total of $205.7 million. In the UAE alone, IDC anticipates managed and data center services markets to grow nearly 20 percent per year on average between 2013 and 2018, reaching $971.8 million by 2018.

    It’s also interesting to note that Aliyun opened its first overseas data center in March, with the launch of its Silicon Valley facility. While initially aimed at Chinese enterprises based in the US, the launch signals a clear intent to serve clients more globally. The Meraas deal marks its first step into the MENA region, but could be the first of many international expansions.

    With 23 percent of the Chinese cloud computing market, Aliyun is China’s largest cloud computing platform, and it has been expanding its capabilities with recent acquisitions, including the Dropbox-like cloud storage service Kanbox in September.

    The similarities between Alibaba and Amazon are striking, and Aliyun has become widely known in China, much like Amazon Web Services in the Americas and Europe. Until recently, MENA has largely been ignored by major cloud providers, and could be a viable market for Aliyun’s international expansion.

    This first ran at: http://www.thewhir.com/web-hosting-news/chinese-cloud-provider-aliyun-enters-middle-east-partnership

    5:00p
    Optimizing Cloud and Virtualization – With the End-User in Mind

    Let’s take a step back from the usual infrastructure design conversation and dive into a new architecture metric: end-user performance. Often, some of the best cloud and virtualization designs fail to take the end-user into consideration. Sure, they’ll understand the workload or what needs to be delivered. But what about truly optimizing the computing experience of the user when the data reaches its destination?

    Creating a platform with the end-user in mind is a somewhat new, reversed approach to designing a complete solution. Although not all-encompassing, the following technologies and solutions can play a big role in how end-users process and manipulate data on their end.

    • WAN Optimization. This is something that we actually cover fairly regularly. WAN optimization has come a long way in helping data get from one point to another faster and with better manageability. Good WAN optimization technologies don’t only help data transfer – they help that information arrive faster, resulting in happier and more productive end-users.
    • Switch and Network QoS. QoS on the LAN and WAN has been used since the technology was introduced. Prioritizing traffic based on its demands is a crucial part of keeping the right data transmitting at the right speeds. Whether it’s video conferencing or VoIP – QoS can help speed up network data transmission. Here’s one more note – make sure to set QoS metrics on your wireless architecture as well. The proliferation of wireless devices has created density headaches for some wireless controllers – always keep an eye on how many users are connecting and how you can continue to optimize those sessions.
    • User Virtualization. User virtualization essentially abstracts the user’s environment from the underlying hardware. This means that all user settings are transferred between operating systems, software components, and end-point devices. This gives users the freedom of knowing their software settings will remain whether they’re using an iPad or a Windows-based machine. Remember, allowing users to carry their settings in their “pocket” also lets you integrate better with cloud resources. This makes the user a lot more agile with their apps and data sets.
    • Mobility/BYOD. Organizations are actually creating projects with BYOD and mobility at the forefront of their thinking. The mentality now is “how can we enable our user base to use their devices and rely on us less?” This is where BYOD and efficient control mechanisms are set in place. BYOD can be a very powerful end-user performance enhancer – when properly managed, of course.
    • Profile Controls. Almost every IT engineer has had to work with a profile at one point or another. With so many different systems out there, managing a user’s profile has become challenging. This is where software tools can really come into play. Controlling a user’s personal settings, profile metrics and other variables within a user setup can stop profile corruption and create a smooth profile management system. Users enjoy having their personal backgrounds, icons in a specific spot or other elements of personalization. Granular profile control can help transfer those settings seamlessly.
    • Content Redirection. This is all about latency control. There are now technologies that can gauge latency and allow administrators to render content either server-side or client-side (see the sketch after this list). Why is this important? Because this process can be done dynamically! Creating a transparent environment for the user while still delivering high-end content will make the user happier and more apt to use the system.
    • Data “on-demand.” We now live in an always-on society. There is a direct need for data to be available at all times on any device. So, how can IT administrators accommodate? Better cloud-based file sharing technologies are becoming available with a direct capability to tie into the corporate network for security and better delivery. Integration with security policies, AD, and even mail systems is taking technologies like ShareFile in a whole new direction.
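    Below is a minimal sketch of the latency-driven decision described under “Content Redirection” above: measure round-trip time toward the client and choose server-side or client-side rendering accordingly. The 60 ms threshold and the single TCP-connect probe are illustrative assumptions, not tied to any specific product.

    ```python
    # Illustrative sketch: pick where to render rich content based on measured latency.
    # The threshold and the single TCP-connect RTT probe are assumptions for the example.
    import socket
    import time

    RTT_THRESHOLD_MS = 60.0

    def measure_rtt_ms(host: str, port: int = 443) -> float:
        """Rough RTT estimate from one TCP connect (illustrative, not production-grade)."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        return (time.perf_counter() - start) * 1000

    def choose_rendering(endpoint: str) -> str:
        # High-latency clients get server-side rendering so heavy content stays close to
        # the data center; low-latency clients can render client-side for interactivity.
        return "server-side" if measure_rtt_ms(endpoint) > RTT_THRESHOLD_MS else "client-side"

    print(choose_rendering("example.com"))
    ```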

    Creating a positive computing experience has always been a goal of the IT department. After all, a happy end-user is one that won’t be calling the help desk. Still, organizations are continuously trying to improve that experience with concepts like BYOD, optimization mechanisms, and WAN-based technologies.

    There will be growing focus on the end-user. Remember, new concepts around IoE and IoT will be making an impact on current business functions. Furthermore, conversations around wearable technologies are creating new kinds of business use-cases for a number of different verticals. In June, Cisco released its Visual Networking Index report, which helped paint a clear picture of the emerging IoT trend.

    • The number of devices connected to IP networks will be nearly twice as high as the global population in 2018. There will be nearly three networked devices per capita by 2018, up from nearly two networked devices per capita in 2013. Accelerated in part by the increase in devices and their capabilities, IP traffic per capita will reach 17 GB by 2018, up from 7 GB in 2013 (a quick check of the implied growth rate follows this list).
    • Traffic from wireless and mobile devices will exceed traffic from wired devices by 2018. By 2018, wired devices will account for 39 percent of IP traffic, while Wi-Fi and mobile devices will account for 61 percent of IP traffic. In 2013, wired devices accounted for the majority of IP traffic at 56 percent.
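    For a quick back-of-the-envelope check of the per-capita figure above: growing from 7 GB to 17 GB per capita between 2013 and 2018 implies a compound annual growth rate of roughly 19 percent.

    ```python
    # Implied compound annual growth rate for per-capita IP traffic, 2013 -> 2018.
    def cagr(start: float, end: float, years: int) -> float:
        return (end / start) ** (1 / years) - 1

    print(f"{cagr(7, 17, 5):.1%}")  # about 19.4% per year
    ```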

    Building an environment capable of supporting these kinds of new demands will allow your organization to stay very agile and competitive in the market. Most of all – you’ll be creating an ecosystem focused on the end-user and their productivity. This kind of architecture almost always pays dividends when it comes to helping your business out-compute and out-compete others in the market.

