Data Center Knowledge | News and analysis for the data center industry
 

Thursday, August 21st, 2014

    12:00a
    VMware Buys CloudVolumes, Which Divorces Application from OS

    VMware has bought CloudVolumes, a startup whose software makes an application independent from the underlying operating system deployed on a virtual machine. The idea is to abstract all underlying infrastructure, including the OS, to enable the application to be moved from one environment to another instantaneously.

    While CloudVolumes works for deploying applications on servers, for VMware the move is aimed primarily at bulking up its desktop virtualization product portfolio, collectively called Horizon. Sumit Dhawan, senior vice president and general manager of desktop products and end-user computing at VMware, said the combination of CloudVolumes and Horizon would enable clients to build real-time application delivery systems.

    Such a system can allow a company to manage all applications centrally, including availability and updates. It also enables an IT admin to deploy an application to virtualized environments on desktops, servers or in the cloud.

    “Customers are looking to modernize their existing Windows application delivery architecture to be more like mobile IT,” Dhawan said.

    VMware did not disclose the acquisition price.

    Essentially, CloudVolumes splits the elements needed to deploy an application into two buckets (it calls them “application management containers”): middleware, applications and settings in one; configuration, licenses, data and files in the other. The former sits on shared volumes and the latter on a unique volume for each application.

    CloudVolumes effectively disaggregates the software from the operating system running on a VM. With the components of the application stack organized into separate volumes, those volumes can be quickly attached or detached, making it possible to move applications between environments almost instantaneously.
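
    To make the idea concrete, here is a minimal, hypothetical sketch (not VMware’s or CloudVolumes’ actual code) of application volumes being attached to and detached from virtual machines independently of the guest OS image; all class and volume names are illustrative.

```python
from dataclasses import dataclass, field

# Hypothetical model of "application management containers": the application
# stack lives on volumes that can be attached to or detached from any VM,
# independent of the guest OS image. Names are illustrative only.

@dataclass
class AppVolume:
    name: str
    contents: list            # e.g. ["middleware", "application", "settings"]
    shared: bool = True       # shared app volume vs. per-app writable volume

@dataclass
class VirtualMachine:
    os_image: str
    volumes: list = field(default_factory=list)

    def attach(self, volume: AppVolume):
        self.volumes.append(volume)   # app appears without reinstalling

    def detach(self, volume: AppVolume):
        self.volumes.remove(volume)   # app disappears; OS image untouched

office = AppVolume("office-suite", ["middleware", "application", "settings"])
vm_a = VirtualMachine("windows-7-gold")
vm_b = VirtualMachine("windows-8-gold")

vm_a.attach(office)   # deliver the app to one desktop...
vm_a.detach(office)
vm_b.attach(office)   # ...then move it to another almost instantly
```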

    CloudVolumes has published a graphic illustrating how its model works compared with traditional virtualization.

    12:30p
    Data Center Security Firms vArmour, Guardicore Close Funding Rounds

    Virtualization of IT, from servers to networks, is giving rise to the need for a new breed of security, different from the current security solutions that are tailored for traditional data centers and networks. Israeli security startup GuardiCore raised $11 million in a Series A funding round, and another company working in the field, called vArmour, raised a $15 million Series C.

    Both companies are addressing increasing and changing security needs that come with virtualization. Both argue that traditional security methods don’t address the rapid evolution of data center architecture.

    The rise of the software-defined data center means traffic rates within the data center are climbing, while current security solutions remain tailored for traditional data centers and networks.

    Virtualization means more software and data are packed into a single server. It’s tougher to protect files and locate intruders because a file’s location is far less fixed than it is in traditional, physical data center infrastructure.

    vArmour secures data-defined perimeter

    vArmour is taking an entirely software-based approach to securing increasingly virtualized data center and cloud environments.

    As massive, growing amounts of data continue to be distributed on a global scale, security controls need to move deep into the data center and be as dynamic as the applications and data they protect. There, at the “data-defined perimeter,” vArmour provides needed protection dynamically and securely by giving enterprises instant visibility and control of their east-west traffic flows for both old and new data center architectures.
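
    The “east-west” distinction simply describes traffic that stays inside the data center rather than crossing its perimeter. The sketch below is not vArmour’s product logic; it only illustrates the classification, assuming RFC 1918 ranges stand in for the internal address space.

```python
import ipaddress

# Assumed internal address space for illustration (RFC 1918 private ranges).
INTERNAL = [ipaddress.ip_network(n) for n in
            ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL)

def classify_flow(src: str, dst: str) -> str:
    """East-west: both endpoints inside the data center; otherwise north-south."""
    return "east-west" if is_internal(src) and is_internal(dst) else "north-south"

print(classify_flow("10.1.2.3", "10.4.5.6"))      # east-west
print(classify_flow("10.1.2.3", "203.0.113.9"))   # north-south
```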

    Former Palo Alto Networks CEOs Dave Stevens and Lane Bess also recently joined vArmour’s board.

    “Virtualization is driving a revolutionary change in architecture across the data center,” Stevens said. “Enterprises are simply not able to use legacy security and networking technologies to protect today’s highly virtualized environments. The vArmour team has been able to design a solution, unencumbered by legacy thinking or technological trappings, that meets these challenges in a radically new and more effective way. I fully expect them to reform how enterprises protect their new, data defined perimeter in today’s reality of pervasive virtualization, constant threats and ongoing security breaches.”

    vArmour has raised $36 million to date, previously raising a $15 million Series B in December 2013. Columbus Nova Technology Partners, Citi Ventures and Work-Bench Ventures led the latest round, while the previous round was led by Menlo Ventures.

    GuardiCore sets up an ‘ambush’ server

    GuardiCore’s first platform component is called Active Honeypot. Without the attacker’s knowledge, it dynamically reroutes suspicious traffic to a highly monitored stealth “ambush” server and then provides insights into the nature of the attack. The company is currently testing the technology with potential customers.
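
    Conceptually, the rerouting is a policy decision made in the virtual network. The following sketch is purely illustrative and not GuardiCore’s implementation; the honeypot address and flow fields are made up.

```python
# Illustrative sketch only -- not GuardiCore's implementation. Flows flagged as
# suspicious are transparently forwarded to a monitored honeypot instead of the
# real workload, while the attacker still believes it reached the original target.

HONEYPOT = "10.99.0.250"   # hypothetical address of the monitored "ambush" server

def next_hop(flow: dict, suspicious: bool) -> str:
    """Return the destination the virtual switch should actually forward to."""
    if suspicious:
        # Responses are translated back so the redirection stays invisible
        # while every action on the honeypot is logged and analyzed.
        return HONEYPOT
    return flow["dst"]

flow = {"src": "10.1.2.3", "dst": "10.20.0.15", "dport": 22}
print(next_hop(flow, suspicious=True))    # -> 10.99.0.250
print(next_hop(flow, suspicious=False))   # -> 10.20.0.15
```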

    “GuardiCore is developing a completely new breed of network security,” said CEO Pavel Gurvich. “Powered by software-defined networking methodologies and recent advances in virtualization, our solution is scalable to multi-terabit traffic rates.”

    GuardiCore’s founders Gurvich and Ariel Zeitlin are both veterans of Israeli Defense Forces’ technology units.

    Architectural changes in data centers have led to an explosion of intra-data center (east-west) traffic at terabit levels. GuardiCore says that state-of-the-art security techniques such as IDS, IPS, sandboxing, deep packet inspection and threat emulation cannot scale to these traffic rates and are therefore largely considered impossible to apply inside data centers. GuardiCore aims to make these techniques scalable enough to keep pace with data center traffic.

    “As the data center evolves to a more software-defined model, enterprises need to think about security in radically different ways,” noted Scott Tobin, general partner, Battery Ventures. “Traditional security techniques have focused on keeping the bad guys out of the perimeter. But as we’ve seen in recent high-profile security breaches, these methods are far from complete. GuardiCore’s approach assumes you have already been compromised and provides levels of visibility and protection that were previously unattainable.”

    GuardiCore’s investment round was led by Battery Ventures, with participation from Greylock IL, an affiliate fund of Greylock partners, as well as undisclosed strategic partners.

    1:00p
    Survey: IT Workers Not Confident in Federal Data Center Reliability

    Data center reliability is a top federal agency priority, but a recent survey by MeriTalk found that government IT workers don’t have too much confidence in federal data center uptime.

    Eighty percent of federal IT workers said data center reliability was a top agency priority, and 42 percent of field workers said downtime has affected their ability to deliver on their mission.

    The federal IT space is undergoing massive consolidation while adopting a cloud-first approach. The survey, however, reveals that IT is calling for additional budget and data center capacity. The average agency has half the storage, power and personnel it says it needs to ensure reliability and agility.

    Agencies are trying to cut costs and consolidate while trying to maintain reliability. However, the survey reveals that reliability is not being handled as well as it could be.

    Nearly three-quarters of those surveyed (73 percent) believe it is possible to successfully consolidate while maintaining scalability. Currently, 74 percent of agency systems run on-premise and 26 percent run in the cloud.

    While federal IT pros say data center reliability is a top priority, only 19 percent are fully confident in their department’s ability to meet their most critical uptime and failover service level agreements (SLAs). Tight budgets and legacy technology are common reasons IT workers give for their lack of confidence, according to the survey.

    In the past month, 70 percent of agencies experienced downtime of 30 minutes or more, and 90 percent of field workers said the downtime affected their ability to do their jobs. Three quarters of IT pros said reliability has been improved over the last two years, however, with active investment in upgrades, backups and security.

    The leading causes of downtime were network or server outages at 42 percent, followed by connectivity loss at 29 percent. Just four percent of incidents were caused by a natural disaster. Sixty-four percent of field workers gave their IT department an A or B for recent downtime management, though respondents emphasized better communication as a top priority.

    In addition to losing productivity, one in three field workers admitted to using a personal device during downtime, while one in four used a workaround like Google apps. Both activities are potential security risks. A number of employees are choosing productivity over security in the event of extensive downtime.

    Real-time information access saves the average federal field worker more than 800 hours in productivity each year. That equates to about $32.5 billion in annual productivity savings.

    Thirty-six percent of field workers gave their IT departments a grade of C or lower for recent downtime management, and just 29 percent said they believe their IT departments fully understand the effect that downtime has on their ability to work.

    1:16p
    Transforming Data Center Generalists into Specialists

    Tom Roberts is President of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.

    As demand for data centers and interconnection services surges in the Asia Pacific, many providers are flocking to the region and setting up shop. In fact, some 200 colocation facilities have taken root, with India, Japan and Hong Kong holding the top three spots.

    And Melbourne isn’t far behind. Australia’s second-largest and fastest-growing city is fast becoming a technology hub with more than 8,000 information and communication technology companies. IBM has its Asia-Pacific Software Solutions center in Melbourne, Microsoft and Ericsson have R&D centers, and Fujitsu has software development operations in the city. Agilent, eBay, Cisco, NetApp, EMC and others also have offices there. Digital Realty Trust, Inc. opened its second data center in Melbourne last year.

    It also happens to be the location for the upcoming Data Center World—Asia/Pacific Symposium, September 1st through 3rd at the InterContinental, The Rialto in Melbourne.

    Skills shortage

    This influx of data centers and technology had created a shortage of 4,600 IT professionals in Australia alone by the end of 2013. Don’t get me wrong. It’s not a lack of bodies causing the problem; it’s a lack of pros trained in and knowledgeable about the newest technologies and trends facing today’s data centers.

    The generalists of yesterday must become specialists today in order to fill the gap. That’s where Data Center World comes in. Our goal is to contribute to the ongoing need for education in the Asia-Pacific region so that a “western” migration, so to speak, doesn’t take place to fill the vacant positions.

    Forrester Consulting recently conducted a survey on behalf of Digital Realty to find out what will drive the next wave of data center capacity—and the need for more specialists—in Singapore, Hong Kong, Japan and Australia. Virtualization, Big Data and consolidation topped the list, followed closely by business growth, business continuity and storage growth.

    Here’s a breakdown according to country:

    • Hong Kong: data center consolidation; and Big Data related technology investments
    • Australia: Big Data related technology investments; virtualization; and data center consolidation
    • Singapore: Big Data related technology investments; and virtualization
    • Japan: data center consolidation; virtualization; and storage growth

    It’s not surprising that 50 percent of survey respondents also said their budgets will grow up to 10 percent over the next year, with nearly 60 percent from Australia and Singapore saying so. The majority indicated that they are planning some form of expansion within the next four years as well.

    Embracing new technologies

    The data center industry is flourishing across the globe, and there are more opportunities than ever for professionals in this field. The ones most likely to succeed moving forward will embrace new technologies, expand their skills and stay current with the drivers of next generation data centers.

    Regardless of the region in which you live and work, if you find yourself falling behind the data center times, commit to furthering your education and broadening your knowledge. It is the only way to guarantee your position in the industry.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:30p
    Speaker on Cooling Trends: Innovative Economization Increases ROI

    Cooling is changing in the data center, and the latest technological advances, while not yet implemented universally, are making headway in specific instances. Generally, indirect air-side and water-side economization are prevalent and effective techniques for controlling data center temperatures.

    At the upcoming Orlando Data Center World conference, Scot Heath, PE, senior engineer at CLEAResult, will present a session on cooling titled “A Case Study on Innovative Economization to Maximize ROI.” Heath joined CLEAResult last summer after nearly 30 years at HP. Over the course of his career, he has held a variety of design and management roles, from integrated circuits to data center cooling.

    Data Center Knowledge asked him a few questions about cooling trends in today’s data center.

    “There’s a chasm between what’s being touted and what’s being implemented. The solutions being touted include immersion cooling, liquid cooled heat sinks, and direct and indirect air-side economization. What’s being implemented in volume is a much smaller set,” Heath said.

    Some newer technologies are being deployed to bring down cost and increase efficiency. “Direct air-side economization is one of the larger slices of the pie for new construction. In the U.S. and many other countries, it’s often a very viable alternative, but never without the ability to ‘close the windows’ so to speak.”

    He added that environmental hazards, such as large dust storms, pollen and airborne gaseous contaminants, could pose a threat to data center systems by clogging filters and potentially inducing long-term damage to IT equipment. 

    Other approaches, such as indirect air-side and water-side economization, circumvent the issues associated with bringing in outside air, Heath said. “These methods typically both use evaporative cooling, so areas with significant wet bulb hours in the usable range are targets for such implementations. Water-side economization in particular is attractive for retrofits since the internal infrastructure of the data center often doesn’t need to change to employ this technique. Air-side is typically limited to new construction due to the volume of air which must be moved through the heat exchange devices, or in and out of the building in the case of direct air-side.”
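
    The “wet bulb hours” Heath refers to can be estimated from ordinary weather data. The sketch below uses the Stull (2011) wet-bulb approximation and an assumed 22°C design limit (not a figure from the article) to count hours in which evaporative economization could plausibly carry the cooling load.

```python
import math

def wet_bulb_c(dry_bulb_c: float, rh_pct: float) -> float:
    """Stull (2011) approximation of wet-bulb temperature from dry bulb (C) and RH (%)."""
    t, rh = dry_bulb_c, rh_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

def economizer_hours(readings, wet_bulb_limit_c=22.0):
    """Count hourly readings whose wet-bulb temperature falls under an assumed design limit."""
    return sum(1 for t, rh in readings if wet_bulb_c(t, rh) <= wet_bulb_limit_c)

sample = [(30.0, 40.0), (35.0, 20.0), (28.0, 80.0)]   # (dry bulb C, RH %)
print(economizer_hours(sample), "of", len(sample), "hours usable")   # 2 of 3
```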

    Asked about liquid cooling, he said, “The various techniques which directly couple a fluid (state change or not) to the heat sources in the IT equipment are boutique. They have found some niche acceptance, such as high performance computing where ultimate density is required for ultimate performance, and existing data centers where space and geometry constraints made their deployment attractive, but the overwhelming inhibitor is the cost and lack of support from the IT vendors. Support of these cooling solutions is a chicken and egg problem. If the demand and volume were big enough, all the major players would offer these solutions in their mainstream products.” 

    “Google, Facebook, Amazon, etc. all use front-to-back air-cooled devices, so the market for direct liquid cooling is extremely limited. Air is still king and will be for the foreseeable future, making just enough air that is just cool enough as inexpensively as possible is the name of the game,” he said.

    What is ahead in the future of cooling?

    “As for the future, I can only offer advice in the form of insurance,” he said. “For both the enterprise and service provider, new construction should always include the ability to support close-coupled water cooling. Whether it’s a single large application/customer that has a unique need for liquid in the rack or just a very dense air-cooled solution that requires more air than can economically be delivered, the ability to provide water to the rack will become increasingly important.

    “I just witnessed a service provider bring up a 1 MW single-customer load in 3 days, and they won that business because they could support that density. In this case, it was air cooled, but the next could be water, and having some valves on the loop that can be simply opened is pretty cheap insurance. Dr. Roger Schmidt, IBM Fellow, spoke recently at the New England 24×7 Exchange chapter meeting just outside Boston and focused entirely on direct liquid cooling, showing multiple examples of installations which are already in operation. The volume is small, but the rewards are great if it happens in your data center.”

    Find out more about the latest cooling technologies and increasing your ROI

    Want to learn more? Attend Scot Heath’s presentation at Orlando Data Center World, or any of the other 20 topical sessions on data center trends curated by Data Center Knowledge. Check out the conference and register at the Orlando Data Center World conference page. Early bird registration pricing ends this week, on August 22.

    3:17p
    Violin Beefs Up All-Flash Arrays With Dedupe, Compression

    Violin Memory announced a new Concerto 2200 Data Reduction solution, adding data deduplication and compression capabilities to its all-flash Violin 6000 and 7000 arrays. As an extension of its all-flash array solutions, the company says the new data reduction offering will deliver up to 672 terabytes of usable storage, with deduplication rates commonly between 6:1 and 10:1.

    Multiple workload environments

    Deduplication is an essential capability for maximizing usable capacity in the enterprise, and the new offering brings Violin up to par with competitors that already have the feature in their products. The company aims to deliver maximum performance and capacity improvements for customers with virtualized environments, and the new solution achieves that through granular control at the file, share and share group level.
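
    For readers unfamiliar with how a deduplication ratio such as 6:1 arises, here is a minimal, generic sketch of block-level dedupe (not Violin’s implementation): identical chunks are stored once, and the ratio is logical chunks divided by unique chunks. The clone count and chunk size are illustrative.

```python
import hashlib
import os

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Split data into fixed-size chunks, store each unique chunk once,
    and report the logical-to-physical reduction ratio (6.0 means 6:1)."""
    seen = set()
    total = 0
    for offset in range(0, len(data), chunk_size):
        seen.add(hashlib.sha256(data[offset:offset + chunk_size]).digest())
        total += 1
    return total / len(seen) if seen else 1.0

golden_image = os.urandom(256 * 1024)   # stand-in for a small VDI base image
clones = golden_image * 100             # 100 identical clones, highly redundant
print(f"dedupe ratio = {dedupe_ratio(clones):.0f}:1")   # ~100:1 in this toy case
```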

    “Granular, inline deduplication and compression are powerful tools for customers to maximize storage efficiency while optimizing performance at the application level,” said Eric Herzog, chief marketing officer and senior vice president of alliances at Violin. “We see competitors who offer ‘always on’ deduplication and compression, but we know that, depending on the customers’ workloads, performance may suffer as a result of the ‘always on’ approach.” 

    The deduplication and compression features are initially targeted at virtual desktop infrastructure (VDI) and virtual server infrastructure (VSI). The company says its Concerto 2200 dashboard will present critical information on data reduction rates so that customers can see the effective rate of deduplication for their workload and use that information to remove shares from deduplication or to add similar workloads that will benefit from data reduction.

    “Violin Memory’s new Concerto 2200 array update with inline deduplication capabilities brings value to customers with performance and capacity improvements for workload demands,” Randy Kerns, senior strategist at the Evaluator Group, said. “Combining the tier one all-flash array performance with features for scalable virtualization implementations, Violin has a solution for enterprises to improve their economics.”

    5:10p
    OVH Drops 10,000-Server Container Into Montreal Data Center

    OVH is adding capacity to its data center just outside of Montreal using a container with 10,000 servers. The European hosting giant opened the facility last year to serve North American customers. It now touts 50,000 hosting customers in the region and is adding 10,000 additional servers to meet the demand.

    OVH first announced its intentions to enter North America in 2012 and launched the Quebec facility (its second in the region) last January with capacity for 360,000 servers. The OVH team built two hosting towers of 10,000 servers each within six months of opening. The third operational phase brings another 10,000 servers.

    The container of 10,000 servers used for the most recent expansion was designed in Poland and brought to Canada by ship. The company says the container was less expensive to develop than the first two towers and more energy efficient.

    It will be directly powered through existing electrical structures for one of the towers and run on lower voltage (415 volts instead of 480 volts). According to OVH, this enables it to leave transformers and modulators out of the design.

    The company raised $181 million to build data centers in North America. Quebec was a logical choice for the French company given language and cultural similarities.

    It focuses on green design and has taken a unique approach, overseeing every aspect of the manufacturing process. It builds custom servers and containers, and in France it has built data centers shaped like giant cubes.

    The plan is to eventually build three data centers to cover the vast territory of North America.

    OVH’s first North American data center in Beauharnois, Quebec, is housed in a former Rio Tinto Alcan aluminum plant with an airflow design reminiscent of the Yahoo Computing Coop, designed to allow waste heat to rise and exit through a central ceiling vent. The building is located alongside a dam that will provide 120 megawatts of hydropower to support the facility.

    Founded in 1999 by Octave Klaba, the company now has more than 170,000 dedicated servers in its 15 data centers.

    5:54p
    Chinese Firm to Stand Up HP Helion Community Clouds in China

    HP has partnered with Chinese content delivery and cloud services company Beijing UnionRead Information Technology to build and operate OpenStack-based HP Helion cloud infrastructure for enterprise customers.

    Chinese regulations make it difficult for foreign companies to do business in China unless they do it together with a Chinese partner. UnionRead is the first provider in the country to deploy Helion solutions – a cloud services portfolio HP unveiled as part of a $1 billion investment program in May.

    HP has been doing business in China since the mid-80s and has hundreds of private cloud customers in the country, Robert Mao, chairman of the China region for HP, said.

    UnionRead will stand up Helion community clouds. These are clouds shared by companies doing business within certain industry verticals or geographies.

    The idea is to provide a shared infrastructure to a set of customers with similar security, privacy, performance and compliance needs. A dedicated UnionRead team will build, operate and support these community clouds.

    The clouds will provide services in all three of the big cloud infrastructure services categories: Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service.

    HP Executive Vice President and CTO Martin Fink said he thought community clouds would be successful in China with certain “communities of interest,” such as automotive supply chain companies or city governments.

    UnionRead Chairman Peng Yang added healthcare, education, transportation and finance verticals to the list.

    6:55p
    Level 3 Opens Northern Virginia Data Center

    Level 3 opened a 12,000 square foot, 1.5 megawatt data center in Herndon, Virginia. The data center is one of the company’s “Premier Elite” facilities, its highest tier of data center.

    The facility is a standalone building. It has three 480-volt utility feeds distributed through a fully redundant configuration. On-site backup power comes from two 2-megawatt diesel generators and one 1,600 kW unit, and a 9,000-gallon main fuel tank stores more than 36 hours of fuel.
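
    As a rough sanity check on the quoted runtime, fuel autonomy is simply tank volume divided by burn rate. The roughly 72 gallons per hour per megawatt figure below is a generic full-load diesel rule of thumb (about 13-14 kWh per gallon), not a number from Level 3, so treat the output as an estimate only.

```python
# Hedged back-of-the-envelope estimate; the burn-rate figure is an assumption.
TANK_GALLONS = 9_000
BURN_GAL_PER_HR_PER_MW = 72        # assumed full-load diesel consumption

def runtime_hours(load_mw: float) -> float:
    """Hours of fuel autonomy at a given sustained generator load."""
    return TANK_GALLONS / (load_mw * BURN_GAL_PER_HR_PER_MW)

print(f"{runtime_hours(5.6):.0f} h at full 5.6 MW generator output")    # ~22 h
print(f"{runtime_hours(3.5):.0f} h at ~3.5 MW (consistent with 36+ h)") # ~36 h
print(f"{runtime_hours(1.5):.0f} h at the 1.5 MW critical IT load")     # ~83 h
```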

    In its Premier Elite-branded facilities, Level 3 provides biometric security and power densities of 5 kilowatts per rack and higher. Premier-branded facilities offer power densities of up to 3 kW per rack, and Premier Select facilities offer up to 5 kW. The distinctions between the company’s two other tiers of facilities are largely related to security and power density.

    This is the company’s third Premier Elite facility in the U.S. The company operates another 17 throughout Europe and Asia. Premier Elite facilities represent a tiny but growing fraction of the company’s overall data center footprint of more than 350 facilities.

    Northern Virginia is a top-tier market and continues to see massive amounts of data center activity.

    “The northern Virginia market is one of the top three data center markets in the world according to Ovum’s global data center tracking information,” Mike Sapien, principal analyst from Ovum Global Enterprise Services, said. “Enterprise customers are seeing improved economics in using outside data centers, as well as avoidance of the capital to upgrade their aging private data centers. The use of external third-party data centers are necessary to address expanding their global coverage, increased use of cloud services, and improving application performance.”

    7:00p
    OpenStack’s Search for True Self

    While the name OpenStack appears everywhere you turn in the IT world, there isn’t really an agreed-upon set of technologies the trademark describes. The number of technologies in the OpenStack ecosystem is growing, and there is a push underway within the OpenStack Foundation to define what is and what isn’t OpenStack.

    The goal of this push is to ensure interoperability between OpenStack clouds, Mark Collier, chief operating officer of OpenStack, said. “One of the things people envision is interoperation,” he said. “If you want to use the same tools, you want to use [them] for any OpenStack cloud. Now that we’ve gotten to this critical mass, we want to live up to that vision.”

    The community operates around a six-month release cycle with frequent development milestones. Given the frequency of updates, definitions trail behind the releases. The foundation’s board is close to completing definition work for Havana, the eighth release of the open source cloud architecture, which came out about one year ago. It will then move to the most recent Icehouse release, but the next one, Juno, is already slated for release in a few weeks.

    The board is trying to come to a consensus across the ecosystem on a minimum set of requirements a cloud needs to meet to rightfully use the OpenStack trademark. Board member and Dreamhost CEO Simon Anderson said components of the baseline set – Nova for compute and Swift for object storage – are fundamental building blocks, but the definition is “soft.”

    Dreamhost, a hosting company, has an OpenStack cloud but uses Ceph instead of Swift, which would disqualify it had the definition been a hard one.

    “Smart end users are seeing it as, ‘What do I need to use in this broad set of software?’ The trademark issue will definitely help, specifically around API compatibility,” he said.

    Same tests for everyone

    In addition to better defining itself, the OpenStack Foundation is working on a set of tests to qualify interoperability between different companies’ technologies in the OpenStack ecosystem. “We’re just confirming that a cloud operates the way you’d expect, that it has common behaviors,” Collier said.

    “What we’re basically doing as a community is we’re taking the same kinds of tests. As we release new versions of OpenStack, we make sure it doesn’t break downstream.”

    Behind the testing effort is Tempest, an open source project that contains many different types of integration tests.

    “We’re getting to the point where the tests can be run by different companies that use the [OpenStack] trademark,” Collier said. “There will be a grace period to get used to the idea of passing all of these tests. Right now we’re socializing it, getting [it in] the hands of users and seeing if there are any red flags. We absolutely didn’t want to dictate that day one you have to pass all of these tests.”
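
    The mechanics being described amount to a must-pass capability checklist. The sketch below is only a conceptual illustration of that idea, not the foundation’s actual DefCore or Tempest tooling; the test identifiers are made up for illustration.

```python
# Conceptual sketch of a must-pass interoperability check; test IDs are illustrative.
MUST_PASS = {
    "tempest.api.compute.servers.create_server",
    "tempest.api.compute.servers.delete_server",
    "tempest.api.object_storage.container_services",
}

def qualifies(passed_tests):
    """Return (eligible_for_trademark, tests_still_failing)."""
    missing = MUST_PASS - set(passed_tests)
    return not missing, missing

ok, missing = qualifies([
    "tempest.api.compute.servers.create_server",
    "tempest.api.compute.servers.delete_server",
])
print("eligible:", ok)                    # False until every must-pass test passes
print("still failing:", sorted(missing))  # the object storage capability
```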

    Why now?

    OpenStack is now four years old, while the OpenStack Foundation is two. Collier explained that standards around definitions and testing for interoperability have become a focus now because OpenStack has reached a critical mass of real-world installations and users.

    “There wasn’t a framework for how to talk about it. Now there is,” Anderson said. “My sense being an insider, is that there’s a lot of will to get things done, a consensus from a large number of companies.”

    The foundation’s governance consists of bodies with separate but overlapping responsibilities. “During the last couple of board meetings, we’ve done joint meetings with the technical community,” he said. “Combining commercial governance together with the technical community has brought a lot of discussion of the strategy around OpenStack.”

    Jonathan Bryce, executive director of the foundation, said there were two aspects to the definition work: one about the expected capabilities and the other about specific lines and modules of code under the hood. “That gets into a lot of detail,” Bryce said about the latter. “That’s where there’s the most discussion.”

    User-driven decisions

    Just as users have been instrumental to the evolution of the technology behind OpenStack, they have played an important role in the process of documentation, testing and development, as well as the interoperability programs, Bryce said.

    “Here’s what I always come back to: users are who ultimately should have the strongest voice,” Collier said. “Four years ago there were not a lot of users; it was prototyping and a lot of companies experimenting. The users are someone to go to as a tie-breaker. It’s much different than a traditional proprietary software model. We strive to involve the users in the process. We also have some analytics, a user survey, all that data to leverage. That has been a godsend in tie-breakers during our discussions.”

