Data Center Knowledge | News and analysis for the data center industry

Monday, September 29th, 2014

    11:45a
    HP Starts Shipping 64-bit ARM Servers

    Nearly three years after HP first introduced Project Moonshot and showed Redstone, a prototype platform based on a server-class ARM system-on-chip (SoC) by now-defunct Calxeda, the company announced on Monday general availability of two models of ARM servers in its Moonshot line.

    The ProLiant m400 is a general-purpose machine, built primarily for large-scale cloud service providers and Internet companies and powered by X-Gene, the 64-bit ARM SoC by Applied Micro. The ProLiant m800 is powered by a 32-bit ARM SoC by Texas Instruments and is more of a niche product, aimed at highly specialized workloads that can take advantage of TI’s advanced digital signal processing capabilities.

    Cambridge, England-based ARM Holdings licenses its processor architecture to chip manufacturers. Its low-power chips power most of the world’s smartphones, but there has been interest in adopting the architecture for the server market, which has become increasingly conscious of energy consumption.

    HP is the first major server vendor to bring an ARM-based machine to market. Applied Micro was first to market with a 64-bit ARM SoC for servers, and its closest competitor in the space is AMD. Calxeda was considered another leader, but late last year it went out of business, failing to raise enough money to sustain itself.

    ProLiant m400: Built for web caching

    Paul Santeler, vice president and general manager of HP Moonshot, said HP expected web-scale service providers to adopt the m400 ARM servers for web caching workloads. The server has high memory bandwidth and high I/O throughput, making it good at moving a lot of data into memory quickly.

    Caching applications, such as the widely used memcached, perform well on such memory-optimized servers. “It’s really about getting all that data not on disk drive but bringing it into memory,” Santeler said.
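
    To make the workload concrete, here is a minimal sketch of the cache-aside pattern a memcached fleet like this would serve, using the pymemcache Python client. The host address, key scheme and fetch_from_database helper are illustrative assumptions, not details from HP’s announcement.

    ```python
    # Cache-aside sketch against a memcached node (host, key scheme and
    # fetch_from_database() are hypothetical placeholders).
    from pymemcache.client.base import Client

    cache = Client(("10.0.0.5", 11211))  # hypothetical m400 node running memcached

    def get_user_profile(user_id):
        key = f"user:{user_id}"
        value = cache.get(key)        # fast path: served from RAM, not disk
        if value is None:             # cache miss: fall back to the database
            value = fetch_from_database(user_id)  # hypothetical helper; returns bytes/str
            cache.set(key, value, expire=300)     # keep it hot for five minutes
        return value
    ```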

    A single Moonshot chassis packs 45 m400 nodes. Each cartridge, carrying an eight-core chip, memory, flash storage and dual-channel 10Gb connectivity, consumes 75 watts at full speed and 42 watts when idling.
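
    For rough capacity planning, those per-cartridge numbers imply that a fully populated 45-cartridge chassis draws about 45 × 75 W ≈ 3.4 kW of cartridge power at full speed and about 45 × 42 W ≈ 1.9 kW at idle, before counting the switch and power supplies.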

    A single chassis with 15 cartridges, a network switch and three power supplies starts around $58,000.

    The m400 cartridge

    The server comes preloaded with Canonical’s Ubuntu operating system, including Juju, Ubuntu’s orchestration software.

    With Moonshot, HP is not going after the largest of the web-scale operators – the likes of Facebook or Google – who design their own hardware. The m400 is made for service providers who are large but not large enough to merit their own hardware design shops.

    Another market where HP sees a lot of potential for ARM servers is the mobile developer space.

    “This is actually a very good software development platform for … mobility applications,” Santeler said. “So we think there’s going to be a market there.”

    ProLiant m800: A DSP-heavy niche server

    The m800 is optimized for real-time data processing. The magic here is not the ARM chip but the integrated digital signal processors, or DSPs, which are a TI specialty.

    Each SoC features four ARM cores and eight DSPs, and all SoCs within a chassis are interconnected. “We actually were able to connect all of the SoCs inside this system … so they can share and pass memory back and forth,” Santeler said.

    TI has libraries of DSP applications for tasks like video processing, encoding and audio analysis. The m800 was built for customers with special needs in one of these areas. eBay’s PayPal, for example, uses the m800’s real-time processing capabilities for data analytics, according to Santeler.

    Each ProLiant m800 cartridge has four processors, each combining four ARM cores and eight DSP cores. The cartridge also includes memory, optional flash storage and networking.

    The m800 cartridge

    A single cartridge draws 87 watts maximum and 48 watts when idling. A full chassis fits 45 m800 cartridges.

    Starting price for a 15-cartridge m800 chassis with a switch, 32GB of flash memory and four power supplies is about $82,000.

    12:00p
    Thirty Companies Buy Into Cisco’s ACI-Dependent Cloud of Clouds

    Cisco has secured a lot of new partners in its $1 billion initiative to build a Cisco cloud of clouds, a global network of data centers operated by a mass of service providers offering nearly every cloud service imaginable.

    The company said on Monday that 30 new partners (scroll down for the full list) have signed on since it first announced the Intercloud initiative in March. They include data center and cloud providers, as well as “cloud aggregators,” resellers and software vendors.

    The 25 cloud providers on the list have also bought into Application Centric Infrastructure (ACI), Cisco’s answer to competing software-defined networking (SDN) technologies, including the OpenFlow standard. The Intercloud will be based on OpenStack and automated using ACI technology.

    Hooking the ecosystem on ACI

    Cisco has taken a very different approach to tackling the cloud services market than other incumbent IT vendors have. HP, for example, has been building its own cloud services business, acting itself as the provider. Dell’s strategy has been to act as a reseller of other providers’ cloud services and do cloud consulting. IBM bought SoftLayer and made that its cloud services business.

    Instead of becoming just another service provider, the networking giant decided to build out a global Cisco cloud platform based on OpenStack and, to a large extent, on its own proprietary technology (ACI). While it will also provide a lot of cloud services itself, a big part of the strategy is selling the technology that enables Intercloud into the massive ecosystem of partners the company hopes to build.

    Cisco will even finance partners’ purchases of its gear needed to participate in Intercloud. Cisco Capital has set aside $1 billion to loan to companies that want to buy and implement ACI.

    Commitment from the 30 new companies potentially expands the reach of the Intercloud network to about 250 data centers in 50 countries, according to Cisco. ACI is part of the Intercloud architecture, so that is 250 potential data centers that will need it.

    Equinix joins the party

    One of Cisco’s new Intercloud partners is Equinix. Its role in the partnership is to stand up a hosted private cloud solution together with Cisco that will be offered in its data centers through the Equinix Cloud Exchange.

    The colocation provider also bought some new Cisco networking products, including Nexus 9000 switches and Cisco APIC (Application Policy Infrastructure Controller), both components of the ACI infrastructure, for its cloud exchange.

    Lots of native Cisco cloud services promised

    Cisco will provide its own suite of cloud services on the network, and so will all the providers connected to it.

    The list of services Cisco said it would provide is about 15 items long, ranging from Platform- and Infrastructure-as-a-Service to collaboration, virtual desktop and energy management.

    Intercloud Fabric now on sale

    As part of Monday’s announcement, Cisco kicked off general availability of its Intercloud Fabric solutions.

    Two flavors of the solution start shipping this week: one for businesses’ internal use and one for service providers.

    The business version includes the Fabric Director, an IT admin portal for lifecycle management of physical, virtual and cloud workloads, and the Fabric Secure Extender, which connects multiple clouds.

    The provider solution is a virtual appliance that enables providers to offer hybrid cloud services without adding their own APIs.

    Cisco has already sold the provider fabric to BT. The large London-based telco will use it to build hybrid cloud services that connect to Cisco’s cloud and to those of its other provider partners.

    List of Cisco’s new Intercloud partners:

    Adapt, ANS Group, BT, CGI Group, Cirrity, CTI, Data#3, Deutsche Telekom, Ethan Group, Infront Systems, Lightedge Solutions, Logicalis, Long View Systems, Netelligent, OneNeck IT Solutions, OnX Enterprise Solutions, Oi, Optus, Peak 10, PT Portugal, Proxios, Quest Technology Management, Groupe Steria, Virtustream, Dimension Data, Forsythe Technology, Presidio, World Wide Technology, Comstor, Ingram Micro, Tech Data

    3:30p
    You, Your Mobile Life, and the Disruptive Technology Behind it All

    David Nicholson is the Chief Strategist with EMC’s Emerging Technologies Division.

    This story isn’t about someone like you; it’s about you: your unique habits, preferences and expectations.

    It’s about how you like to start your holiday shopping early and how you are the master of multi-tasking, regularly completing your banking on your smartphone while you’re in transit on your daily commute. It’s the expectation that you have instant access to the right information to help you manage everything important to you – from the big stuff like managing your finances and your health to the more personal stuff like updating your social media status to stay connected with your friends.

    You are now a “Market of One” (breaking free from the “Market of Many”). Technology has given you free agent status to the brands that want your business and more importantly, your loyalty.

    Why does this matter? Because a remarkable, nearly-invisible technology rests at the intersection of billions of people around the world and their every waking moment…flash technology.

    The technology behind the change

    Flash enables a unique window to our own personalized world, and for some is woven throughout life as a constant companion as they run, sleep, wake up, talk, eat, sit and walk. It makes today’s on-the-run lifestyle possible.

    We now expect brands to know us and earn our loyalty by delivering all of our information and preferences to our mobile devices as a Market of One versus the traditional Market of Many. With billions of people, devices and brands competing for customer loyalty and attention, flash is a core technology that was born in the consumer world, has grown up quickly—gotten more affordable—and now is a strategic element of most next generation data centers.

    Powering the Market of One experience

    Flash storage technology remains one of the biggest technology disruptors in the history of computing—both consumer grade and enterprise grade flash. Whether a powerful small chip to turbocharge the performance in a phone or an enterprise drive that is turbocharging a database application—flash is central to delivering orders of magnitude better performance to the databases, virtual environments and analytics powering this Market of One experience to users.

    For instance, not long ago two diners standing on the same street corner using Google to search for a nearby restaurant would get the same list of results. Contrast that with today and the consumer’s elevated expectation: the vegetarian expects the results to factor in their food preferences, and the diner who doesn’t own a car expects only options within walking distance. This personalization, paired with the “give it to me now” performance expectations set by our daily interactions with our devices, simply could not be delivered without flash technology.

    It’s not often talked about… it’s buried deep within the IT infrastructure—sometimes within a storage array and sometimes within the server—but its impact on performance is immense.

    How the Market of One is transforming IT

    In the past, a query from a mobile device for something a user desires would be fulfilled in what’s known as a “Web 1.0” fashion: the device retrieves static data and serves it up as quickly as possible, averaging tens of back-end data queries. The IT infrastructure could handle this retrieving and serving at the speed of traditional spinning disk drives because it was built to serve a Market of Many.

    Contrast that with today’s Market of One world, whereby mobile users expect their personal flash experience to be just that, their own PERSONAL flash experience, and not a dumbed-down experience serving static information. The ability to master this is setting companies apart from others, building brand loyalty, and truly changing how consumers select the brands they want and use in their lives.

    For brands to deliver the Market of One experience, tremendous change is underway in their back-end technology and how their IT departments operate. It takes hundreds of queries to construct what is presented to a user as a Market of One, a 10X increase in the back-end activity required to deliver that end user experience. It’s a massive change, on an enormous scale, with billions of people expecting a Market of One experience.

    It takes two: the global analytics engine

    This is not a one-way street where users simply request and receive information to their device. Mobile also means “mobile as sensors”—in an era of Big Data. Machine data generated by an increasing number of devices is also fed back into the “global analytics engine” for organizations to slice and dice as ingredients to improve the customer experience.

    What does this mean for organizations? It means their IT workloads will require varying levels of scale, performance and capacity. These enterprises will have an insatiable appetite for flash. Many of these performance-hungry workloads will continue to evolve to not just use, but rely upon, enterprise flash.

    Flash, which forever changed how consumers learn, shop and navigate, is also transforming how organizations build their data centers to deliver the personalization and convenience of a Market of One – to you, for you and by you.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Data Center Jobs: ViaWest

    At the Data Center Jobs Board, we have a new job listing from ViaWest, which is seeking a Vice President of Critical Infrastructure in Englewood, Colorado.

    The Vice President of Critical Infrastructure is responsible for recommending and overseeing implementation of critical facility best practices in alignment with the strategic plan; partnering with the data center innovation team to evaluate expansion and acquisition opportunities; ensuring systems are in place to execute master planning for power distribution and cooling; overseeing capacity planning and forecasting standards; executing data center operational standards; and building an engineering and leadership pipeline for the future. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    4:02p
    Financial Services Firm Takes 1.2MW at Ascent’s Chicago Data Center

    Ascent and Carter Validus Mission Critical REIT announced that TransUnion has taken a newly constructed suite at the CH2 Chicago data center. The credit and information management services provider has signed for 1.2 megawatts of capacity at the site.

    Ascent collaborated with TransUnion to custom design the data center to suit its specific needs. The project was completed in seven months, from design to build-out, with the suite delivered to TransUnion in May 2014.

    The Northlake, Illinois, data center is a 250,000-square-foot multi-tenant facility. Carter Validus purchased it from Ascent in May for $211.7 million, but Ascent remained in charge of facility management.

    Ascent offers a purpose-built “Dynamic Data Center Suite” product (customizable wholesale colocation) and a shared-infrastructure colocation suite (closer to retail colocation). Offering a blend of wholesale and retail, and working with customers to customize space, are two major trends in the wholesale market.

    TransUnion is using a Dynamic Data Center Suite.

    Each customer in the multi-tenant facility can have its own entrance, security access and shipping and receiving area, as well as dedicated mechanical and power infrastructure. The approach provides Ascent with the flexibility to offer different suite designs within the same property.

    The data center offers free cooling for approximately 45 percent of the year, hot aisle containment and an on-site substation.

    Other tenants at CH2 include the cable company Comcast Corp. and an unnamed national retail chain.

    “The new data center represents a significant investment in the transformation of TransUnion’s technology and data center infrastructure and our growing commitment to efficient facilities,” said Josh Neyer, global head of data centers for TransUnion. “With this new environmentally friendly data center, we can expand capacity and better serve our customers. The partnership with Ascent and numerous other suppliers allowed us to successfully deploy a uniquely efficient space on a fast-tracked timeline.”

    The Chicago market remains strong in both the city and the suburbs. Ascent, around since the late 90s, has built several data centers in the suburbs and announced its first facility in the city proper, CH3, last year.

    4:15p
    Rackspace Reboots Cloud Servers to Apply Xen Security Patch

    Rackspace rebooted its Xen hypervisor-based cloud servers over the weekend. Last week, Amazon Web Services told customers it was rebooting a small portion of its EC2 fleet for the same reason.

    The reboot is needed to patch a known vulnerability that affects all Xen environments. Every cloud provider that uses Xen as a foundation will undergo some patching over the next few days.

    Given the security-sensitive nature of the problem, Rackspace is withholding some details, citing concerns about customer safety, which was Amazon’s approach as well.

    “Our engineers and developers continue to work closely with our vendors and partners to apply the solution to remediate this issue,” wrote the company. “While we believe in transparent communication, there are times when we must withhold certain details in order to protect you, our customers.”

    The reboot will be necessary for all Standard, Performance 1 and Performance 2 cloud servers within Rackspace’s Infrastructure-as-a-Service offering.

    The reboot started on Sunday and will continue until Wednesday as the company, much like AWS, rolls through its regions one at a time. Maintenance for the next region doesn’t begin until the previous one is complete.

    The company is urging customers to take proactive steps to ensure proper operations after the reboot (a minimal check script follows the list below). Customers should:

    • Verify all necessary services (Apache, IIS, MySQL, etc.) are configured to start on server boot
    • Ensure server images are up-to-date and file-level backups are enabled. Confirm that you have backups of all critical data
    • Confirm that any unsaved changes, such as firewall rules and application configurations, are saved
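
    A quick way to verify the first item is a small script like the following, a minimal sketch assuming a systemd-based Linux guest; the service names are examples, and upstart- or SysV-based images would use initctl or chkconfig instead.

    ```python
    # Sketch: check that critical services are enabled to start on boot.
    # Assumes a systemd-based guest; service names are illustrative.
    import subprocess

    SERVICES = ["apache2", "mysql"]  # adjust to your stack

    for svc in SERVICES:
        result = subprocess.run(
            ["systemctl", "is-enabled", svc],
            capture_output=True, text=True,
        )
        status = (result.stdout or result.stderr).strip()
        print(f"{svc}: {status}")
        if result.returncode != 0:
            print(f"  WARNING: {svc} may not come back up after the reboot")
    ```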

    Rackspace will communicate with customers via email and a status page.

    4:23p
    MemSQL Raises Series C from In-Q-Tel, Updates Database

    Database startup MemSQL released version 3 of its database, combining an in-memory row store with a flash-optimized column store, and announced a strategic investment from In-Q-Tel.

    A key feature of the new release is automated cross-data center replication, which lets businesses maintain a backup copy of data for disaster recovery or for use in read-heavy operations. The other big performance push is an integrated tiered storage architecture, which joins the transactional in-memory row store with highly compressed flash- or disk-based column stores.
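
    To illustrate the tiered model, here is a hedged sketch that creates a rowstore table and a columnstore table over MemSQL’s MySQL-compatible wire protocol using the pymysql Python client. The host, credentials and schema are invented for the example, and the USING CLUSTERED COLUMNSTORE clause reflects MemSQL’s documented columnstore DDL as we understand it, not syntax quoted in the announcement.

    ```python
    # Hedged sketch: pairing MemSQL's in-memory rowstore (the default table
    # type) with a compressed flash/disk-backed columnstore. Host, credentials
    # and schema are invented; the columnstore DDL is our reading of the
    # MemSQL 3.x-era syntax.
    import pymysql

    conn = pymysql.connect(host="memsql-host", port=3306,
                           user="root", password="", database="analytics")
    with conn.cursor() as cur:
        # Hot transactional writes land in the in-memory rowstore.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS events_hot (
                id BIGINT PRIMARY KEY,
                ts DATETIME,
                payload VARCHAR(255)
            )
        """)
        # Historical data lives in a compressed columnstore on flash or disk.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS events_history (
                id BIGINT,
                ts DATETIME,
                payload VARCHAR(255),
                KEY (ts) USING CLUSTERED COLUMNSTORE
            )
        """)
    conn.close()
    ```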

    The company said advanced data compression techniques and the use of flash, SSD or disk as a storage medium allowed its column store to offer good price performance while maintaining MemSQL’s data loading, concurrency and query execution speed.

    Additional features include bulk data loading from a file system or Amazon S3 and support for views as well as table, column and row level security.

    MemSQL also raised a Series C round led by In-Q-Tel, which has previously invested in various technology solutions, such as Palantir, in support of the U.S. Intelligence Community.

    MemSQL said it has taken in $45 million over four rounds since it was founded in 2011 but did not disclose the size of the latest round. The company said the investment will help it further develop in-memory databases for U.S. government applications.

    In-Q-Tel has reportedly invested $50 million in Big Data analytics company Palantir.

    “MemSQL’s unique offering delivers fast data processing to allow organizations instant access to key information, while effectively managing their infrastructure and preventing anomalies,” said George Hoyem, partner on IQT’s investment team. “As Big Data analytics increasingly becomes a priority for the government agencies we support, we are confident MemSQL’s technical excellence and strong engineering team will provide our customers with the capabilities to extract greater value from their data.”

    5:00p
    Latest OpenDaylight Release Helium Out

    OpenDaylight is an open source framework for software-defined networking (SDN) that has been gathering steam. The project has released its second open source codebase to the public, dubbed Helium. The previous release, called Hydrogen, came out in February.

    OpenDaylight is an open platform for network programmability to enable SDN and Network Functions Virtualization (NFV) for networks of any size and scale. The goal of the project is to create a common SDN controller, and Helium is the code base that powers the controller.

    OpenDaylight is in a good position to do for SDN what OpenStack did for cloud. It has increasing support among the biggest vendors in the networking space and an active, growing base of contributors. Brocade, Cisco, Red Hat, IBM and Citrix are among the supporters.

    That list includes a few companies that have taken the proprietary path when it comes to network management, leading to some skepticism. There is also concern that the framework is not mature enough.

    The project hopes the Helium release will dispel that skepticism, and it has won over some former holdouts, such as HP, which recently stepped up its support.

    There are improvements in the Open vSwitch Database integration project and a technology preview of advanced OpenStack features.

    New in Helium is the ability to perform clustering for better failover, as well as enhanced security, authorization and permissions, and some general bug fixes. The release also allows users to manage a network through declarative policies.
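
    For a sense of how applications drive the controller, here is a minimal sketch that reads the operational network topology over the controller’s northbound RESTCONF interface using Python’s requests library. The port (8181), URL path and admin/admin credentials are assumptions based on common Helium-era defaults, not details from the release announcement.

    ```python
    # Sketch: query an OpenDaylight controller's operational topology via
    # RESTCONF. Port, URL path and admin/admin credentials are assumed
    # Helium-era defaults.
    import requests

    CONTROLLER = "http://localhost:8181"
    url = f"{CONTROLLER}/restconf/operational/network-topology:network-topology"

    resp = requests.get(url, auth=("admin", "admin"),
                        headers={"Accept": "application/json"})
    resp.raise_for_status()

    for topo in resp.json().get("network-topology", {}).get("topology", []):
        nodes = topo.get("node", [])
        print(f"topology {topo.get('topology-id')}: {len(nodes)} node(s)")
    ```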

    Brocade released an OpenDaylight-based SDN controller called Vyatta earlier this month.

    “The momentum behind the OpenDaylight Project is unlike anything else the networking industry has experienced and that is because the customer demand for an open, software-defined platform is louder than ever before,” said Neela Jacques, executive director of the OpenDaylight Project, commenting on Brocade’s controller release.

    6:16p
    IT Monitoring Solution Dataloop.IO Granted $800k in Early Seed Funding


    This article originally appeared at The WHIR

    London-based infrastructure monitoring software start-up Dataloop.IO has closed an early seed round of $800,000 (£480,000) which will help Dataloop.IO accelerate its product development and hire new talent in preparation for a public launch later this year.

    The company’s monitoring software is designed to eliminate the hassle of setting up open-source monitoring tools, deeply monitor online services, and quickly alert companies to broken features and downtime.

    In an interview with The WHIR, Dataloop CEO and co-founder David Gildeh said Dataloop’s monitoring solution offers several advantages over custom-coding a solution from scratch or building one using open-source code.

    Launched in October 2013, the company has been working with Rackspace, Blinkbox, and Hive Home from British Gas to develop its software.

    Alfresco, an open-source document management company, is where Gildeh met his fellow Dataloop co-founders. He had sold his first startup, which provided file sharing and collaboration software, to Alfresco. Following the sale, he headed Alfresco’s enterprise cloud business, where he met future Dataloop co-founders Steven Acreman and Colin Hemmings, who were running operations for the cloud service.

    Dataloop was chosen in 2013 to be part of the first cohort of Microsoft Ventures’ London Accelerator.

    Dataloop’s latest funding round was led by Forward Partners, and angel investors include Alfresco co-founder John Powell, Just-Eat.com chairman John Hughes, Huddle co-founder Andy Mcloughlin, and SecretEscapes.com co-founder Troy Collins.

    After interviewing dozens of COOs and operations teams in the UK and the US, the Dataloop team found that there’s no “great product” in the infrastructure monitoring space yet. “It’s a massive pain; companies are spending loads of time and resources building their own monitoring systems,” Gildeh said.

    In their research, they found that some companies use Zabbix and Sensu to power their monitoring, but around half of companies were monitoring their IT infrastructure with the open-source Nagios platform, which can be difficult to set up. Having debuted in 1999, Nagios is also showing signs of age and has hardly kept pace with the explosion in DevOps and modern cloud infrastructure.

    Companies with the development resources, like Netflix and Twitter, have managed to build custom monitoring solutions from scratch that suit their needs, but this is unrealistic for smaller and less tech-savvy organizations.

    In effect, Dataloop provides its customers with IT monitoring features similar to the ones those companies built for themselves.

    Dataloop also paid special attention to ease of use and to built-in, colorful visualizations that display data. “We are really focusing on making [it] as simple to use as possible and wrapping it up in a really nice UI,” Gildeh said.

    Dataloop is pioneering a real-time alert service that can be compared to IFTTT. Clients can write rules around multiple metrics, such as CPU or disk space use, to ensure that only important alerts are sent.

    “That’s important because right now one of the biggest problems in monitoring is that people are just spammed constantly with emails coming out of their monitoring tools,” he said. “One of the companies we know gets 5,000 a day. When you get 5,000 emails a day, you’re not going to look at any of them. In fact, it gets so bad that you only get notified of errors when users start complaining, which is not where you want to be with your monitoring.”
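
    Dataloop hasn’t published its rule syntax, but the idea Gildeh describes can be sketched as a predicate over multiple metrics that only fires when the combination is genuinely actionable. Everything in this Python sketch (metric names, thresholds, the send_alert helper) is hypothetical.

    ```python
    # Hypothetical sketch of an IFTTT-style alert rule over multiple metrics:
    # alert only when CPU and disk pressure coincide across several samples,
    # instead of emailing on every individual spike. All names and thresholds
    # are invented for illustration.
    from collections import deque

    WINDOW = 5  # consecutive samples that must all breach before alerting
    history = deque(maxlen=WINDOW)

    def breached(sample):
        """True when CPU and disk are simultaneously under pressure."""
        return sample["cpu_percent"] > 90 and sample["disk_free_percent"] < 10

    def on_sample(sample, send_alert):
        history.append(breached(sample))
        if len(history) == WINDOW and all(history):
            send_alert("sustained CPU and disk pressure")  # one alert, not 5,000 emails
            history.clear()
    ```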

    In upcoming releases, Dataloop will be adding the ability to send scheduled reports that give a bird’s eye view of IT systems.

    Dataloop is currently working with 20 companies in a private beta of sorts. It was chosen by Hive Home from British Gas, the UK’s largest energy and home services company, to monitor the cloud solution hosting an initiative to bring customers’ hot water tanks online.

    “What they’re launching is an Internet-of-Things play,” Gildeh said. “For £200 you get a device on your boiler and you can use your mobile phone to control it. It’s kind of like Nest for the UK, except they have more users than Nest.”

    As it nears wide release, Dataloop is also working on ways to make its solution easier and less expensive for smaller companies to deploy so that they, too, can identify problems with their IT delivery before users even notice.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/next-gen-monitoring-solution-dataloop-io-granted-800k-early-seed-funding

    6:52p
    Senate Aims to Make Government Data Center Consolidation Law

    Earlier this month, the U.S. Senate voted in favor of legislating consolidation of the federal government’s out-of-control data center portfolio.

    The bipartisan bill, titled the Federal Data Center Consolidation Act, seeks to ensure that the 24 agencies that are part of the Federal Data Center Consolidation Initiative (now in its fifth year) follow the action plan set out by the initiative and report regularly on their progress. It would also legislate oversight of the initiative’s progress by the Office of Management and Budget and the Government Accountability Office.

    By the most recent count, the agencies involved have about 9,600 data centers around the country. GAO estimates that the government stands to save about $3.1 billion if agencies shut down unnecessary data center capacity.

    FDCCI, rolled out by Vivek Kundra (the first White House CIO ever) in 2010, set out a series of deliverables on a timeline for agencies, which consisted of the basic stages of inventory, identification of unnecessary facilities, creation of consolidation plans and execution. There are also reporting requirements along the way.

    But, as GAO has found repeatedly throughout the initiative’s life, many of the agencies have struggled to meet the deadlines. Some even failed to complete inventories.

    The latest GAO report on FDCCI’s progress was published last week. It found that not only were many agencies still behind on creating consolidation plans, they were also unable to accurately track or forecast savings achieved through consolidation. Some couldn’t even produce baseline power consumption figures for their data centers.

    The GAO found that the agencies’ total savings estimate was about $880 million, as opposed to its own estimate of $3.1 billion. Some couldn’t calculate their baseline data center costs, while others were simply underreporting the savings they were expected to achieve.

    The bill, whose next stop is the House of Representatives, would set hard deadlines and make data center inventories and consolidation strategies a requirement. GAO would be required to verify the inventories, and OMB would be required to report to Congress routinely on savings realized.

    Senator Tom Coburn, an Oklahoma Republican and one of the bill’s chief sponsors, said GAO itself considered the piece of legislation essential to ensuring progress. “The bill is a crucial component in our efforts to reform the way the federal government acquires and manages IT,” he said in a statement.

    The GAO has endorsed the bill, which also enjoys support by the Professional Services Council and the IT Industry Council.

    6:59p
    QTS Realty Expands Disaster Recovery Portfolio with DRaaS

    QTS Realty Trust on Monday announced a new disaster recovery solution dubbed Disaster Recovery as a Service (DRaaS). DRaaS is a cloud-based counterpart to the company’s existing managed disaster recovery portfolio.

    The DRaaS offering is a software-based product that installs seamlessly onto existing IT infrastructure, according to QTS. DRaaS is application- and hardware-independent and suitable for physical and virtual environments.

    It works with different storage and server technologies, aiming to extend the life of legacy assets.

    It provides real-time protection and instant workload recovery on any combination of physical, cloud or virtual servers.

    Users can select the specific virtual machines they want to protect, replicate single or multiple VMs and protect multiple Virtual Machine Disks (VMDKs) connected to the same VM. No storage configuration is necessary.

    Customers are able to manage and customize how systems will run in the event of a disaster without additional hardware or data center application dependencies. Parameters for replication and recovery can be set at desired intervals, be it every 15 minutes or every five hours, depending on the business need.

    The offering expands the company’s disaster recovery lineup, which includes website failover, QTS DR On-Demand and QTS DR High Availability; DRaaS adds an entirely software-based product.

    QTS launched the On-Demand and High Availability DR offerings in August of last year. On-Demand is a fully managed, image-based replication service for customers with virtual server environments. It utilizes geographically dispersed online servers that are activated in the event of a disaster. The DR High Availability service provides real-time, continuous data replication of both physical and virtual server environments.

    “Today’s complex IT infrastructures and demanding business requirements make being prepared for unplanned interruptions more critical than ever,” said Jim Reinhart, chief operating officer, development and operations, QTS. “We’ve designed QTS Disaster Recovery as a Service to be consistent with our overall mission to provide high-value, high-benefit, cost-effective customer solutions.”

    11:00p
    CenturyLink Launches Shanghai Data Center, its First in Mainland China

    CenturyLink Technology Solutions has established a data center in Shanghai, its first location in mainland China. The company has two in Hong Kong, two in Singapore and one in Tokyo.

    The facility is operated by GDS, a major Chinese data center provider. CenturyLink also partnered with local IT services provider Neusoft, which buys the hardware and leases the data center space on its behalf.

    The world’s second-largest economy is also one of its fastest-growing IT markets. A recent Gartner estimate is that IT spending in China will reach $375 billion in 2015.

    Staying within the Great Firewall of China

    CenturyLink went through the trouble of setting up a data center in mainland China, even though it already has an extensive footprint elsewhere in Asia Pacific, primarily to address customers’ concerns about data sovereignty and performance. Latency always varies somewhat from location to location, but the difference is bigger in China, where all cross-border network traffic passes through what is popularly referred to as the Great Firewall of China.

    Officially dubbed the Golden Shield Project, it is essentially an Internet surveillance and censorship system. Not only does it slow down the movement of data to and from mainland China, there is also a lot of potential for errors, since the system can shut off entire blocks of IP addresses, sometimes including addresses it did not mean to shut off, explained Brian Klingbeil, CenturyLink’s senior vice president of international development.

    Therefore, a service that caters to customers within the country’s borders generally performs better if it is hosted within those borders and doesn’t have to pass through the Golden Shield. “Enterprises wanting to compete in China need to host their IT in China,” Klingbeil said.

    Partner acting as eyes and hands on the ground

    It is difficult for an outside company to set up a business in China. Chinese regulations preclude a company like Monroe, Louisiana-based CenturyLink from owning or physically operating IT equipment in the country, for example, hence the partnership with Neusoft.

    CenturyLink designed the environment within the GDS data center and manages it remotely, but Neusoft is its eyes and hands in the facility. The partnership also gives CenturyLink access to a local firm that can help its non-Chinese customers establish operations in the country.

    CenturyLink’s pod in the GDS facility is the same as its pods elsewhere around the world, Klingbeil assured. It is a multi-tenant managed hosting environment, but the company can also provide some colocation space in the facility for customers who need a hybrid setup.

    Customers using the Shanghai location get the same portal, the same management team and the same support desk as they get in other CenturyLink locations around the world, of which there are close to 60.

    Growing Chinese data center provider

    GDS has 17 data centers in mainland China and Hong Kong. In July, an investment company called Singapore Technologies Telemedia bought a 40-percent stake in the data center provider with the plan to grow its footprint further.

    In the past, ST Telemedia also invested in Savvis, which CenturyLink bought in 2011 and used as the foundation of its data center services business. Klingbeil used to be chief operating officer at Savvis.

    Another past recipient of ST Telemedia investment is Equinix, the U.S.-based colocation giant.

    The other GDS owners are SB China Venture Capital, International Finance Corporation and China Everbright.

