Data Center Knowledge | News and analysis for the data center industry
 

Monday, August 25th, 2014

    12:30p
    Advanced Analytics for Better Business Results

    TeamQuest Director of Market Development, Dave Wagner, currently leads the cross-company Innovation Team and serves as worldwide business and technology thought leader to the market, analysts and media.

    The concepts behind Big Data have been around for a while, and many successful business analytics solutions have been adopted by businesses over the last few years. A key challenge of Big Data is determining how best to glean advantages that improve IT’s ability to efficiently balance infrastructure cost, risk, and performance in support of those lines of business.

    Beyond descriptive and through predictive

    Many IT veterans know that predictive analytics predate Big Data by several decades. What is evolving, however, is an understanding that the two technologies work well together. Further increasing interest is an emerging approach going beyond descriptive, through predictive, to prescriptive analytics – technology that recommends specific courses of action and forecasted decision outcomes.

    Per Gartner, more than 30 percent of analytics projects by 2015 will provide insights based on structured and unstructured data. The promise of predictive and prescriptive analytics is appealing to IT decision makers because it adds a highly proactive future view of mission critical processes and resources – not only potential issues, but also optimization decisions.

    Speaking at a TeamQuest ITSO Summit, Mark Gallagher of CMS Motor Sports Ltd. described how Formula One data analysts analyze data not only to ensure the safety of team drivers but also to win races.

    “In 2014 Formula One, any one of these data analyst engineers can call a halt to the race if they see a fundamental problem developing with the system like a catastrophic failure around the corner. It comes down to the engineers looking for anomalies. Ninety-nine percent of the information we get, everything is fine,” Gallagher said. “We’re looking for the data that tells us there’s a problem or that tells us there’s an opportunity.”

    Picking up on Gallagher’s Formula One example, there is an overarching theme of exception-based, predictive and prescriptive analytics, as well as the game-changing nature of measuring and analyzing what matters. In my opinion the best way to describe this is as an equation: good data + powerful analytics = better results.

    Prescriptive analytics tools for IT and business

    Having progressed along the spectrum from descriptive to diagnostic to predictive analytics, the resulting business intelligence can be used to see ahead, plan, and make decisions when there are too many variables to evaluate the best course(s) of action without help from advanced technology.

    Prescriptive analytics tools develop business outcome recommendations by combining historical data, business rules and objectives, mathematical models, variables, and machine-learning algorithms. This enables virtual experimentation when real-world trials are too risky or expensive. Beyond insight, prescriptive analysis can foresee possible consequences of action choices, and conversely, recommend the best course of action for the desired outcome(s).
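    To make the idea concrete, here is a minimal, hypothetical sketch of a prescriptive step layered on a simple predictive model: candidate actions are scored against a forecast and a business constraint, and the cheapest acceptable action is recommended. The model, actions and thresholds are illustrative placeholders, not any vendor’s implementation.

```python
# Illustrative sketch only: a toy "prescriptive" step on top of a toy predictive model.
# The actions, costs and thresholds below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    added_capacity_pct: float   # extra headroom the action buys (0.30 = +30%)
    cost_usd: float             # estimated cost of taking the action

def predict_peak_utilization(current_util: float, weekly_growth: float, weeks: int) -> float:
    """Stand-in predictive model: simple compound growth of utilization."""
    return current_util * ((1 + weekly_growth) ** weeks)

def prescribe(current_util: float, weekly_growth: float, weeks: int,
              actions: list[Action], max_util: float = 0.80) -> Action:
    """Recommend the cheapest action that keeps forecast utilization under the SLA ceiling."""
    forecast = predict_peak_utilization(current_util, weekly_growth, weeks)
    feasible = [a for a in actions if forecast / (1 + a.added_capacity_pct) <= max_util]
    if not feasible:
        raise RuntimeError("No candidate action keeps utilization within the SLA ceiling")
    return min(feasible, key=lambda a: a.cost_usd)

if __name__ == "__main__":
    candidates = [
        Action("do nothing", 0.00, 0),
        Action("rebalance workloads", 0.10, 2_000),
        Action("add two hosts to the cluster", 0.30, 18_000),
    ]
    choice = prescribe(current_util=0.62, weekly_growth=0.03, weeks=12, actions=candidates)
    print(f"Recommended: {choice.name} (estimated cost ${choice.cost_usd:,.0f})")
```

    Real prescriptive tools replace the toy forecast with statistical or machine-learning models and encode far richer rules and variables, but the decision structure is the same.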

    Advanced analytics is still in the early stages; Gartner surveys show most organizations are assessing current and past performance (descriptive analytics), with only 13 percent making extensive use of predictive analytics. Under 3 percent use prescriptive capabilities. Growth is certain. Gartner analyst Rita Sallam claims, “Those that can do advanced analytics on top of Big Data will grow 20 percent more than their peers.”

    Continuous optimization vs. assured performance

    The heart of IT infrastructure optimization lies at the intersection of financial efficiency, resource use effectiveness, and assured customer service. As the IT systems and software that comprise data center infrastructure become ever more sophisticated, automated, and complex, so must management and optimization approaches.

    Advanced analytics and intelligent automation deployed across all infrastructure domains will equip IT to cost-effectively scale and optimize resources ahead of the demand curve, yielding improved business agility, market share and customer experience with increased revenue and reduced risk.

    Understanding of application performance must become deeply integrated with data center management tools and data if automatic provisioning of resources is to be simultaneously cost-effective and service-risk minimizing. Automated provisioning of storage, bandwidth, and computing power is a primary benefit of virtualization and a powerful capability of software-defined data centers (SDDCs). But without integrated business intelligence, all that is likely to happen is that sub-optimal decisions will be automatically implemented more quickly than ever – with no assurance of continuous, acceptable service performance.
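    As an illustration of that point, the sketch below shows a provisioning loop that consults both a cost model and a service-risk signal before acting, rather than reacting to utilization alone. The thresholds, pricing and decision names are assumptions made up for the example, not the logic of any particular SDDC product.

```python
# Illustrative sketch only: an automated provisioning decision that weighs cost against
# service risk. Thresholds and pricing are hypothetical.
HIGH_UTIL = 0.75              # above this, the risk of missed SLAs grows quickly
LOW_UTIL = 0.30               # below this, capacity is probably being wasted
HOURLY_COST_PER_NODE = 0.45   # assumed blended cost per node-hour, USD

def decide(cpu_util: float, nodes: int, sla_violation_prob: float) -> str:
    """Return a provisioning decision that balances cost against service risk."""
    if cpu_util > HIGH_UTIL or sla_violation_prob > 0.05:
        return "scale_out"    # risk dominates: add capacity ahead of demand
    if cpu_util < LOW_UTIL and sla_violation_prob < 0.01 and nodes > 1:
        return "scale_in"     # cost dominates: shed idle capacity
    return "hold"             # neither signal is strong enough to justify a change

def estimated_monthly_cost(nodes: int) -> float:
    return nodes * HOURLY_COST_PER_NODE * 24 * 30

if __name__ == "__main__":
    action = decide(cpu_util=0.82, nodes=6, sla_violation_prob=0.02)
    print(action, f"-- current spend ~${estimated_monthly_cost(6):,.0f}/month")
```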

    Bridge all the silos!

    When teams and tools bridge silos, the synergy becomes the basis for competitive advantage. Gathering good data streams—metrics that matter to both business and IT— and correlating them through powerful analytics will amplify bottom line results.

    As an example, by measuring and analyzing more than just power usage effectiveness (PUE), the focus of continuous optimization shifts to risk reduction, revenue growth, decreased capital and operating expenditures, and enhanced customer experience. What does it mean for a data center to be as efficient as possible according to the industry-standard PUE metric? What are you getting for your use of that “efficiently” delivered power? How much work is accomplished? If that power goes to servers that are not cost-effectively accomplishing useful work in service to the business, is that really efficient?
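    A quick, illustrative calculation shows why PUE alone can mislead: two facilities with identical PUE can deliver very different amounts of useful work per kilowatt. The figures below are invented for the example.

```python
# Illustrative arithmetic only; the facility and workload figures are made up.
facility_power_kw = 1200.0     # total power drawn by the facility
it_power_kw = 1000.0           # power delivered to IT equipment
useful_work_tps = 150_000      # business transactions per second actually served

pue = facility_power_kw / it_power_kw
work_per_kw = useful_work_tps / facility_power_kw

print(f"PUE: {pue:.2f}")                        # 1.20 -- looks "efficient"
print(f"Useful work: {work_per_kw:.0f} tps/kW") # says nothing about idle servers
```

    A facility full of mostly idle servers would report the same PUE but a far lower work-per-kilowatt figure, which is the point of measuring what matters.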

    IT teams will be more successful if they’re able to look at the right data in combination with powerful analytics. To do so, IT must understand what’s important to the business and deliver accurate, strategic advice – sometimes in a matter of seconds.

    The spectrum of analytic approaches

    It’s important for IT to use the “descriptive, predictive, prescriptive” spectrum of analytic approaches, which reinforces that it’s not just about getting good information; it’s about knowing what to do with that information, when, and importantly, why.

    This journey along the spectrum can be started at whatever level of tool, process and skill maturity already exists within an IT environment, and it can yield immediate, game-changing results on the way toward complete data center optimization.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    12:53p
    Skanska Named Contractor For Facebook Data Center Project

    Swedish construction company Skanska was named general contractor for Facebook’s new data center project in Sweden. The project is estimated to be worth SEK 530 million (about $76.5 million).

    The Facebook data center began construction in March and is expected to be completed in the first half of 2015. Skanska will be responsible for all infrastructure on the large project, including utilities and roads. It will also build all sub-structure, concrete works and office and facilities buildings.

    This will be Facebook’s second data center in Sweden. Facebook said it initially chose to build in Sweden because the country offered access to low-cost renewable energy, a cool climate and a strong pool of skilled workforce. The social network continues to recognize the benefits of building data centers in Sweden.

    Facebook’s decision to initially build a data center in Luleå directly created nearly 1,000 new jobs and generated local economic impact that amounts to hundreds of millions of dollars, according to a recently completed study by the Boston Consulting Group, which the social network company hired to assess its impact on the local economy.

    Since Facebook first broke ground in Luleå in 2011, it has created 900 direct jobs in Sweden and generated SEK 1.5 billion (about $225 million) in domestic spending. The project’s overall economic impact in the country so far has been about SEK 3.5 billion (about $524 million). That impact is growing.

    The study commissioned last July also estimated the economic impact of the new facility. It is expected to generate SEK 9 billion ($1.35 billion) of economic impact in Sweden and directly create nearly 2,200 jobs, two-thirds of them locally in Luleå. This will contribute about 1.5 percent of the local region’s total economy, with Facebook’s activity in the region benefiting a total of 4,500 full-time employees.

    Skanska Sweden is one of Sweden’s largest construction companies, with approximately 11,000 employees and 2013 revenue of SEK 30 billion.

    1:00p
    Hosting and Cloud Conference Launches First European Event


    This article originally appeared at TheWHIR.

    West Chester, OH – HostingCon, the premier industry conference and trade show for hosting and cloud providers, along with their partner ResellerClub, Asia’s largest domain name registrar, announced that registrations are now open for ResellerClub Presents HostingCon Europe. The event will take place in Amsterdam, Netherlands on October 14th and 15th at the Passenger Terminal Amsterdam.

    “We are extremely excited about our first-ever European event,” Kevin Gold, HostingCon Conference Chair and VP of Marketing at iNET Interactive commented. “Along with our partner, ResellerClub, HostingCon is thrilled to bring its ‘Network, Learn, and Grow’ value proposition to the European hosting and cloud community.”

    ResellerClub Presents HostingCon Europe is expected to attract over 400 attendees and exhibitors from Europe, North America and Asia. The conference has been met with tremendous enthusiasm and is already being sponsored by a number of prominent companies including cPanel, NSFOCUS, Cloud Linux, Marketgoo, Leaseweb and GoMobi.

    Shridhar Luthria, General Manager at ResellerClub had this to say, “We wanted to bring the World’s premier Hosting Event to Europe and where better to start than the fountainhead of innovation in Europe? At ResellerClub presents HostingCon Europe in Amsterdam we hope to bring together attendees from all across Europe and give them a platform to Innovate & Collaborate. We look forward to hosting you and being a part of this very exciting phase for the Web Hosting industry!”

    In addition to HostingCon Europe, ResellerClub and HostingCon also produced an event in China that saw more than 700 attendees and 35 exhibitors. Their event in India in December is expected to attract more than 2,000 attendees and 60 exhibitors. These events cater to the Chinese and Indian hosting and cloud provider markets respectively. More information is available at hostingcon.com.

    About HostingCon
    HostingCon is the premier industry conference and trade show for hosting and cloud providers. In its tenth year, HostingCon connects the industry, including hosting and cloud providers, MSPs, ISVs and other Internet infrastructure providers who make the Internet work, to network, learn and grow. HostingCon is an iNET Interactive event. For details about HostingCon, visit www.HostingCon.com.

    About ResellerClub
    The ResellerClub platform powers some of the World’s most popular Web Hosts, Domain Resellers, Web Designers and Technology Consultants. ResellerClub provides scalable and secure Shared Hosting, Reseller Hosting as well as VPS solutions, in addition to a comprehensive suite of gTLDs, ccTLDs and other essential Web Presence Products.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/hosting-cloud-conference-launches-first-european-event

    1:30p
    Dell Updates Foglight Virtualization Management Software

    Virtualization management tool Foglight for Virtualization (FVE) 8 is now in preview. The two major additions are enhancements to Capacity Director and the new Change Analyzer, a feature that allows virtual administrators to track, and optionally roll back, change events.

    IT administrators use FVE for insight into, and control of, changes across the virtual environment. It gives visibility across virtual and physical resources and helps to identify and fix problems and plan for growth.

    Advanced optimization and capacity management tools are key to meeting performance SLAs and to capturing capital and operational cost efficiencies.

    New additions to FVE 8 enhance the ability to better control change, logically prepare for growth and optimize the placement of future workloads. FVE 8 also includes an updated dashboard, providing a single pane of glass view showing a unified environment across virtual storage, network technologies and cloud deployments.

    The added Change Analyzer allows virtual administrators to track, and optionally roll back, change events across several areas, from data center to cluster to VM and everywhere in between. It provides cross-hypervisor change tracking capabilities, allowing organizations to track, report and alert on any event that can impact overall virtual infrastructure performance. Impact Analysis reports can be run to see the expected impact before changes are made, as well as how changes have affected performance.
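    Conceptually, this kind of capability amounts to keeping a scoped, time-ordered log of change events, some of which carry a known inverse action. The sketch below illustrates the general idea only; it is not Foglight’s implementation or API.

```python
# Conceptual sketch of change tracking with optional rollback -- not Foglight's code.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, Optional

@dataclass
class ChangeEvent:
    scope: str                                   # e.g. "dc1/cluster-a/vm-42"
    description: str                             # e.g. "vCPU count 2 -> 4"
    undo: Optional[Callable[[], None]] = None    # inverse action, if one is known
    timestamp: datetime = field(default_factory=datetime.utcnow)

class ChangeLog:
    def __init__(self) -> None:
        self._events: list[ChangeEvent] = []

    def record(self, event: ChangeEvent) -> None:
        self._events.append(event)

    def events_for(self, scope_prefix: str) -> list[ChangeEvent]:
        """Report every change under a given scope (data center, cluster, host, VM)."""
        return [e for e in self._events if e.scope.startswith(scope_prefix)]

    def rollback_last(self, scope_prefix: str) -> bool:
        """Undo the most recent reversible change within the scope, if any."""
        for event in reversed(self.events_for(scope_prefix)):
            if event.undo is not None:
                event.undo()
                return True
        return False

log = ChangeLog()
log.record(ChangeEvent("dc1/cluster-a/vm-42", "vCPU count 2 -> 4",
                       undo=lambda: print("reverting vm-42 to 2 vCPUs")))
log.rollback_last("dc1/cluster-a")   # prints the revert message and returns True
```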

    Capacity Director is enhanced in FVE 8 to provide easier server and infrastructure migration planning. It helps plan for infrastructure upgrades and migrations with little to no downtime. The following features were added:

    • Advanced scenario modeling to predict the impact of future workloads
    • Hardware refresh to see the effect of changing current hardware to new devices or adding additional hardware capacity
    • Physical to Virtual (P2V) modeling to see the virtual resources needed to migrate physical workloads to virtual (a rough sizing sketch follows this list)
    • Virtual to virtual (V2V) to see what a workload would look like on multiple hypervisors and/or to model a workload moving from one hypervisor to another
    • Physical capacity modeling to predict resource requirements for non-virtualized workloads
    • Performance optimization to recommend and allocate future virtual machines (VMs) based on CPU, memory and storage requirements
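    As a rough illustration of the P2V-style sizing noted above, the arithmetic can be as simple as converting measured physical utilization into a virtual resource request with headroom. The formula and headroom factor below are hypothetical, not Capacity Director’s actual model.

```python
# Rough, hypothetical P2V sizing arithmetic -- not Capacity Director's model.
def p2v_estimate(physical_cores: int, avg_cpu_util: float,
                 physical_ram_gb: float, avg_ram_util: float,
                 headroom: float = 0.25) -> dict:
    """Estimate the virtual resources a physical workload would need after migration."""
    vcpus = max(1, round(physical_cores * avg_cpu_util * (1 + headroom)))
    vram_gb = round(physical_ram_gb * avg_ram_util * (1 + headroom), 1)
    return {"vcpus": vcpus, "ram_gb": vram_gb}

# Example: a 16-core host averaging 20% CPU, with 64 GB of RAM averaging 50% used
print(p2v_estimate(16, 0.20, 64, 0.50))   # -> {'vcpus': 4, 'ram_gb': 40.0}
```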

    “Initially, virtualization was about getting better ROI and total cost of ownership on hardware, but the drivers of virtualization growth are changing,” said John Maxwell, vice president, virtualization monitoring, Dell Software. “Cost efficiency is still important, but tools in the virtualization space – management, provisioning and monitoring – have been innovative and have enabled the next logical phase, which is reducing operating expenses by allowing customers to automate many tasks.”

    Version 8 follows FVE 7.2, released last month. FVE 7.2 delivered vSwitch support. It added virtual Ethernet network management support, delivering packet level analysis with netflow technology.

    Foglight supports leading hypervisors, a wide range of storage arrays, virtual network technologies, Virtual Desktop Infrastructure (VDI) and many cloud platforms.

    2:00p
    Five Ways to Optimize Your Hybrid Cloud Platform

    Modern cloud technologies are continuing to evolve. The really cool part here is seeing how all of these next-generation cloud-based solutions impact business operations. The proliferation of IT consumerization, various cloud services and a new type of user have all created new types of demands around data center technologies.

    We’re at a point where cloud computing has a firm foothold on the industry and organizations are clearly understanding where they can benefit from this type of technology. Now, one of the most prevalent types of cloud models revolves around a hybrid cloud configuration. Many organizations are now spanning their private cloud architecture to leverage services and resources outside of their data center. In the past, methods of connectivity, application compatibility and the transfer of data were all pretty serious challenges when it came to spanning a cloud platform. Now, with better connectors and a lot more resource power, creating your own hybrid cloud is much more feasible.

    The recent Cisco Global Cloud Index report indicates:

    • Annual global cloud IP traffic will reach 5.3 zettabytes by the end of 2017. By 2017, global cloud IP traffic will reach 443 exabytes per month (up from 98 exabytes per month in 2012).
    • Global cloud IP traffic will increase nearly 4.5-fold over the next 5 years. Overall, cloud IP traffic will grow at a CAGR of 35 percent from 2012 to 2017 (a quick arithmetic check follows this list).
    • Global cloud IP traffic will account for more than two-thirds of total data center traffic by 2017.
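    The figures above are internally consistent, as a quick check shows (illustrative arithmetic only):

```python
# Sanity check of the Cisco Global Cloud Index figures quoted above.
start_eb_per_month = 98    # 2012 global cloud IP traffic, exabytes per month
end_eb_per_month = 443     # projected 2017 figure, exabytes per month
years = 5

growth_multiple = end_eb_per_month / start_eb_per_month
cagr = growth_multiple ** (1 / years) - 1
annual_zb = end_eb_per_month * 12 / 1000

print(f"Growth multiple: {growth_multiple:.1f}x")    # ~4.5x
print(f"Implied CAGR: {cagr:.0%}")                   # ~35%
print(f"Implied annual traffic: {annual_zb:.1f} ZB") # ~5.3 ZB, matching the first bullet
```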

    Those are some pretty staggering numbers considering the dynamic growth around cloud technologies that we’re already seeing. With that in mind – what are some of the latest tools which directly optimize a hybrid cloud deployment? What can organizations – of all sizes – do to optimize their hybrid cloud model? Let’s look at a few ways.

    1. Incorporate automation. New technologies now allow you to automate how you build, deploy and manage your infrastructure. This can scale from within your data center all the way out to your cloud. There are a few ways to approach the automation layer. You’ve got infrastructure automation with tools like Chef and Puppet, as well as cloud-layer automation with technologies like OpenStack and Eucalyptus. The point is that you begin to create powerful automation scripts and policies to control every aspect of your cloud platform. You’re able to create a proactive environment where workloads are dynamically controlled and resources are optimized. All the while, your administrators can continue to focus on improving your overall platform (a minimal, tool-agnostic policy sketch follows this list).
    2. Open-source vs. proprietary – both can be great. There are new tools which allow you to dynamically control the numerous resources which make up your cloud environment. For example, Apache CloudStack is an open-source cloud computing platform designed for creating, managing, and deploying diverse infrastructure cloud services. What’s great is that you can deploy this type of platform on a variety of hypervisors including KVM, vSphere and others. On the other hand, proprietary technologies allow you to aggregate and control resources from a single management layer as well. VMware vCenter Orchestrator allows you to adapt and extend service delivery and operational management for your cloud environment. When running a VMware architecture, this is one of the best ways to optimize policies, resource control, and even content delivery.
    3. Use next-gen load balancers and WAN optimization (WANOP). Your WAN optimization platform doesn’t have to be all physical. If your hybrid cloud is a diverse platform with data centers of various sizes, optimization must span both the logical and the physical. Silver Peak’s VX virtual WANOP appliances can power through 256,000 certified connections while allowing for 1Gbps in WAN capacity. Similarly, the NetScaler VPX virtual load balancer and application delivery controller can handle up to 3,000 Mbps of system throughput. All of these devices are highly agile and virtual. They can be deployed within a number of network paths and can optimize application delivery, traffic flow, and even security processes.
    4. Software-defined technologies to the rescue. There are some very real use cases for software-defined technologies. You can now abstract physical resources at the network, storage, compute and even data center layer. For example, Atlantis USX integrates policy-based controls for all storage resources pointed to the virtual appliance. From there, the platform pools, accelerates and optimizes existing SAN, NAS, RAM and any type of DAS (SSD, Flash, SAS). What’s amazing is that you can incorporate automation policies and allow these optimizations to span into your hybrid cloud architecture. VMware took the software-defined conversation to a new level. Its platform converges compute, storage, network, and management under one control layer. Now, these control mechanisms can span into a VMware hybrid cloud architecture.
    5. Integrate next-generation security. New security solutions are allowing cloud services to be a lot more agile. Now, organizations can incorporate virtual security appliances for even greater control of their data. Palo Alto Networks’ virtual firewall system allows administrators to abstract security services. This creates managed security multi-tenancy and improved scalability. Another great example is deploying application- and service-specific policies. If your environment spans multiple data centers and cloud instances, it’s important to have a platform that is capable of going from the cloud and back. Using NetScaler as an example again, you have the capability to deploy application firewall capabilities which can span a large array of clustered NetScaler appliances. These appliances can be local or at a completely different data center. The cool part is that they’re all working in concert around managed security and network policies.
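    To make item 1 concrete, here is a minimal, tool-agnostic sketch of the kind of placement policy an automation layer might enforce in a hybrid cloud. The thresholds and tag names are hypothetical; in practice this logic would be encoded in Chef or Puppet runbooks, or in an orchestration policy on a platform such as OpenStack.

```python
# Minimal, hypothetical hybrid cloud placement policy -- thresholds and tags are assumptions.
BURST_THRESHOLD = 0.80            # private-cloud utilization above which new work bursts out
SENSITIVE_TAGS = {"pci", "phi"}   # workloads that must stay on the private side

def place_workload(private_util: float, workload_tags: set) -> str:
    """Decide where a new workload should run in a hybrid cloud."""
    if workload_tags & SENSITIVE_TAGS:
        return "private"          # compliance rules win regardless of utilization
    if private_util >= BURST_THRESHOLD:
        return "public"           # burst to the public side when private capacity is tight
    return "private"

print(place_workload(0.85, {"web"}))          # -> public
print(place_workload(0.85, {"web", "pci"}))   # -> private
```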

    In working with the cloud we’ve begun to see a number of organizations openly adopt a variety of cloud technologies. Whether it’s an app or two sitting in the cloud, or an entire cloud-based desktop delivery architecture, companies are utilizing cloud resources. Moving forward, one of the dominant cloud models will be the hybrid cloud platform. It provides flexibility and the ability to dynamically scale outside of your own data center. In creating your own hybrid cloud, make sure to always align IT and business goals. Remember, modern businesses now directly rely on the capabilities of their technology infrastructure.

    7:06p
    DDoS Attack Takes PlayStation Network and Sony Entertainment Network Offline


    This article originally appeared at TheWHIR.

    The PlayStation Network and Sony Entertainment Network are now back online after a DDoS attack over the weekend.

    In contrast to a major breach in 2011, PlayStation reports no evidence of any intrusion to the network and no evidence of any personal user information being compromised.

    “We will continue to work towards fixing this issue and hope to have our services up and running as soon as possible,” PlayStation noted in a message to customers. “We regret any inconvenience this may have caused.”

    Due to the outage, the planned maintenance on Monday has been postponed indefinitely.

    A group called “Lizard Squad” took responsibility for the DDoS attack, and also threatened on Twitter to blow up the American Airlines plane carrying Sony Online Entertainment President John Smedley. The flight was diverted due to the potential risks.

    Last year, PlayStation said Rackspace would be providing consulting and support for PlayStation’s OpenStack cloud.

    PlayStation servers had been the subject of a security breach in April 2011, which resulted in the company shutting down its PlayStation Network for weeks and losing out on at least $171 million in revenue.

    PlayStation, however, isn’t the only game network to suffer downtime. Late last year, Microsoft’s Xbox One gaming system release was marred by performance issues, which Microsoft attributed to DNS issues outside of its own Windows Azure cloud platform.

    Security and availability have been major concerns for online gaming services, which store customer payment information and process financial transactions. And while gaming services are far from essential, they are a popular way to unwind, and any disruption is bound to frustrate users and may send them elsewhere.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/ddos-attack-takes-playstation-network-sony-entertainment-network-offline

    7:51p
    VMware Adds Object Storage, Mobile Capabilities To vCloud Air

    VMware added object storage to its hosted hybrid cloud service vCloud Air (previously vCloud Hybrid Service). The company also added a new lineup of third-party mobile cloud application services to support mobile in the enterprise.

    The new object storage tier joins two existing block storage tiers, Standard and SSD-Accelerated. Object storage is low-speed, long-term storage for unstructured data. The object storage is based on EMC’s ViPR technology and offers the S3 API. One differentiator is that vCloud Air supports large objects of up to 20TB in size. The object storage is available in beta during Q3 2014.
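    Because the service exposes the S3 API, generic S3 tooling should in principle be able to talk to it. The sketch below uses the boto3 S3 client with a placeholder endpoint, bucket and credentials; none of these values are real vCloud Air parameters.

```python
# Hedged sketch: talking to an S3-compatible object store with boto3.
# Endpoint, bucket and credentials are placeholders, not real vCloud Air values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstorage.example-vcloud-air.invalid",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Store a small unstructured object. Very large objects (the service advertises support
# for objects up to 20TB) would use multipart upload rather than a single put_object call.
s3.put_object(Bucket="backups", Key="2014/08/25/app-logs.tar.gz", Body=b"...")

for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```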

    In a bid for enterprise mobile cloud needs, the company has partnered with providers of Mobile Backend as a Service (mBaaS), front-end mobile application development tools, mobile testing and analytics tools and mobile application development and integration services.

    “With a mobile backbone, IT can support the explosive growth and compressed development cycles of mobile applications, while handling all of the security, connectivity, and governance requirements,” wrote Ajay Patel, vice president Application Services, Hybrid Cloud Business Unit.

    A wide array of pre-validated solutions offered by independent software vendors (ISVs) will be available on vCloud Air. Pre-validated mobile capabilities include:

    • Enterprise Mobility: AirWatch helps an organization manage its mobile footprint across employee-owned, corporate-owned and shared devices from a centralized console.
    • Mobile Backend as a Service (mBaaS): Scale and integrate mobile applications with backend systems and third party cloud services through mBaaS solutions on vCloud Air. The initial offerings are from BaaS provider Kinvey, enterprise mobile platform built.io, and Node.js community leader StrongLoop.
    • Mobile Application Development Platform (MADP): Mobile web application development tools from Sencha, including the HTML5-based Sencha Touch Bundle, and cross-platform application development capabilities from Appcelerator, including Appcelerator Titanium and the extensible, integrated Appcelerator Platform.
    • Pivotal CF Mobile Services: Platform-as-a-Service provider Pivotal is extending Pivotal CF on vCloud Air with mobile backend capabilities such as Push Notifications, API Gateway and Data Sync, all at enterprise standards of compliance and security.
    • Rapid Application Delivery: Customers can create, deploy, manage and change both mobile and web applications on vCloud Air with the high productivity application platform from OutSystems.

    “Businesses today look to modern, mobile applications to drive revenue, differentiation and loyalty,” said Bill Fathers, executive vice president and general manager, Hybrid Cloud Services Business Unit, VMware. “We are continuing to push the pace of our cloud expansion, intent to make VMware vCloud Air the industry’s leading hybrid cloud platform for building, deploying, scaling and operating next-generation, mobile-cloud applications.”

    vCloud Air is available in eight data centers worldwide, with the company committing to further expansion. It recently announced its hybrid cloud expansion in the Asia-Pacific region via a joint venture with SoftBank Telecom Corp. and SoftBank Commerce & Service Corp. which brought vCloud Air to Japan. VMware recently announced a strategic partnership with China Telecom to build a hybrid cloud service in Beijing.

    8:00p
    Desalination Plant and Data Center: Not as Odd a Couple as May Seem

    Crises tend to inspire ideas for creative, unexpected solutions. One such crisis has been brewing in California’s Monterey County, which is experiencing water shortages because of the severe drought plaguing the state, and where a group of entrepreneurs and local officials have come up with an idea to build a massive water desalination plant to address the crisis.

    There is obviously nothing new about a desalination plant. What is unique about the project is that it will take some serious data center capacity to make it work financially.

    There are lots of problems with desalination plants. They are expensive; they consume a lot of electricity; they kill marine life in areas they draw water from.

    But the group behind the project, which is being put together by a company called DeepWater Desal in Moss Landing, California, has found ways to address each of those problems.

    Funded by future water purchases

    Several water agencies in the area have proposed smaller desalination plants of their own, and DeepWater Desal is proposing that they instead join the project for a much bigger plant, which will cost less because of its scale. For that to happen, the agencies would have to form a Joint Power Authority – an entity that would enable them to jointly own the plant and one that would be able to issue a public bond measure to fund the construction.

    Dave Stoldt, general manager of the Monterey Peninsula Water Management District, said the bonds would be secured by water purchase agreements with municipalities that are in dire need of new sources of water.

    Lifeless depths

    To address the plant’s potential impact on marine life, it would draw water from the nearby Monterey Submarine Canyon. One of the largest such canyons on the U.S. Pacific coast, it is so deep that hardly any life exists beyond a certain point.

    Grant Gordon, chief operating officer of DeepWater Desal, said the plant’s intake would be at a depth virtually no light reaches. Because there is so little light, there is very little food and therefore not a whole lot of life down there.

    Too cold for desalination, perfect for data centers

    That leads to another problem. Water from such depth is extremely cold and would take a lot of energy just to bring it to the appropriate temperature for reverse osmosis – a process used to desalinate seawater.

    This is where the data center part comes in. The idea is to have somebody build one or more data centers adjacent to the desalination plant and push the seawater through data center cooling systems before putting it through the desalination process.
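    A back-of-the-envelope heat balance gives a sense of the scale involved. Assuming the roughly 150 megawatts of potential data center load mentioned later in this article is rejected into the intake water, and assuming a modest temperature rise, the flow of seawater that could be pre-warmed works out as follows (illustrative assumptions only):

```python
# Back-of-the-envelope heat balance; every figure here is an assumption for illustration.
it_load_mw = 150.0                    # potential data center load cited later in the article
heat_rejected_w = it_load_mw * 1e6    # assume essentially all IT power ends up as heat
specific_heat = 3990.0                # J/(kg*K), approximate for seawater
delta_t = 8.0                         # assumed warming of the intake water, in kelvin

mass_flow_kg_s = heat_rejected_w / (specific_heat * delta_t)
volume_flow_m3_s = mass_flow_kg_s / 1025.0    # seawater density ~1025 kg/m^3

print(f"~{mass_flow_kg_s:,.0f} kg/s of seawater warmed by {delta_t:.0f} K")
print(f"~{volume_flow_m3_s:.1f} m^3/s (~{volume_flow_m3_s * 86400:,.0f} m^3/day)")
```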

    There is at least one example of a massive data center that uses seawater cooling. Google retrofitted a building in Hamina, Finland, that formerly housed a paper mill into a data center that uses seawater for cooling.

    Eyeing wholesale electricity market

    An adjacent data center in Monterey would also be used to lower energy costs for the desalination plant. Addition of the data center element to the project spurred the idea of creating a municipal utility that would be able to buy energy for the entire complex on the wholesale market, at rates about half the retail rates in the area.

    Industry response positive

    There are obviously a lot of moving parts to the concept, and a lot has to happen for DeepWater Desal’s idea to come to fruition. But the company has support from many local officials because of the very real need for a quick resolution to the water crisis.

    The company was founded three years ago to find a solution, and the combination of a desalination plant and a data center campus is the result. The plans are currently being evaluated for the project’s impact on the environment, and an Environmental Impact Report will be completed in about one year, Stoldt said.

    DeepWater Desal’s officials have met with multiple data center developers to pitch the location, and the response, according to both Gordon and Stoldt, has been positive. James Hamilton, vice president and distinguished engineer at Amazon Web Services, was so impressed after a conversation with Gordon that he took to his personal blog to write about the company’s plans in Monterey earlier this month.

    There is potential for up to 150 megawatts of power to be available for data centers at the Moss Landing site, Gordon said, and the plant would provide the data centers there with an unlimited supply of incredibly cold water at almost no cost. “It’s like a perfect synergy,” he said.

