Data Center Knowledge | News and analysis for the data center industry
 

Monday, February 16th, 2015

    1:51p
    Why Hybrid Cloud Continues to Grow: A Look at Real Use-Cases

    Cloud computing isn’t going anywhere. In fact, there are more cloud platforms, services and environments being developed each and every day.

    The evolution of the cloud has seen many organizations move from private to public and now to hybrid cloud platforms. In reality, almost every cloud environment within a public cloud has some sort of connection back to the central data center. So, at some level, all public clouds are some percentage hybrid.

    Moving forward, the hybrid cloud model will give end-users even more services and benefits. A recent Gartner report found that the use of cloud computing is growing and that by 2016 cloud will account for the bulk of new IT spend. 2016 will be a defining year for cloud as private cloud begins to give way to hybrid cloud. By the end of 2017, nearly half of large enterprises will have hybrid cloud deployments.

    But where are organizations really benefiting from a hybrid platform? What are some logical use-cases and what does the future hold?

    • Use-Case 1: A healthcare organization needed extra resources in a public cloud to process large amounts of data. The organization did not want to spend the money on internal resources and realized that a pay-as-you-go model was optimal for its environment, so it turned to AWS. By utilizing a key partnership between Citrix and AWS, this healthcare organization was able to directly link its public cloud with its private data center. Now it uses this cloud platform to migrate applications, workloads and data between AWS and its private data center, which has helped with security, data analytics and even efficiency. Furthermore, the organization can leverage ShareFile because Citrix has signed a Business Associate Agreement (BAA), which allows Citrix to take on additional liability for managing protected health information (PHI). As a result, the healthcare organization can use ShareFile for data collaboration and still remain HIPAA compliant.
    • Use-Case 2: A marketing and multimedia organization needed extra resources for applications running during peak utilization timeframes, such as shopping seasons, holidays and other peak usage moments. With the help of a private cloud environment hosted on a platform like Eucalyptus, the organization is capable of sending API commands to Amazon for extra burst resources. Furthermore, integration with technologies like RightScale allows organizations to directly interconnect resources between public and private cloud platforms. From there, the auto-scaling and cloud-bursting capabilities of the Eucalyptus platform allowed this organization to seamlessly connect to its Amazon cloud resources and burst extra workloads when peak utilization was hit (a minimal sketch of this bursting pattern follows the list). Commands sent to the auto-scaler can help administrators better manage resource utilization within both the private and public cloud environments. Even further integration with the vCloud API allows administrators to centrally control VMware vCloud instances located both privately and publicly.
    • Use-Case 3: Your organization is bound by compliance, regulations and other factors that have previously prevented you from moving to the cloud. Or, you have data points which are extremely sensitive and must be absolutely controlled. You have a need to distribute data alongside applications to a widely distributed user base. Fortunately, cloud and compliance have come a very long way. Akamai, Lockheed Martin, Microsoft, AWS and the U.S. Department of Agriculture are all running government clouds; to be exact, they are FedRAMP Compliant Cloud Service Providers (CSPs). If you examine the Amazon AWS compliance matrix you’ll quickly see that you can now run cloud-based workloads under PCI DSS, ISO, FedRAMP, and even DoD compliance standards.
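
    The bursting pattern in Use-Case 2 can be reduced to a small amount of client code because Eucalyptus exposes an EC2-compatible API. The sketch below is illustrative only, not the organization’s actual tooling: the private endpoint, AMI ID, instance type and utilization threshold are placeholder assumptions, and in practice RightScale or the Eucalyptus auto-scaler would typically issue these calls.

        import boto3

        PRIVATE_ENDPOINT = "https://euca.example.internal:8773/"  # hypothetical Eucalyptus endpoint
        BURST_THRESHOLD = 0.80                                    # assumed utilization trigger

        def ec2_client(endpoint_url=None):
            # With endpoint_url=None this talks to AWS; pointing it at the private
            # endpoint would address the Eucalyptus cloud with the same code.
            return boto3.client("ec2", region_name="us-east-1", endpoint_url=endpoint_url)

        def burst_if_needed(current_utilization, burst_count=2):
            # Launch extra workers in the public cloud only when the private pool is saturated.
            if current_utilization < BURST_THRESHOLD:
                return []
            aws = ec2_client()  # public side of the hybrid setup
            resp = aws.run_instances(
                ImageId="ami-00000000",    # placeholder image
                InstanceType="m3.medium",  # placeholder instance type
                MinCount=1,
                MaxCount=burst_count,
            )
            return [i["InstanceId"] for i in resp["Instances"]]

    When utilization drops back below the threshold, the same API can terminate the burst instances, so the organization pays for public cloud capacity only during the peak.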

    Cloud Platforms of the Future

    There is more competition in the cloud market and many more organizations are finding ways to place their data centers or key services within the cloud. As more companies utilize resources from a distributed cloud data center, there will be even greater need to interconnect private and public infrastructures. Look for platforms like CloudStack, OpenStack and Eucalyptus to play an even greater role in providing powerful cloud platforms for organizations with direct integration with public and private resources.

    It is important to understand that there are simply more use-cases for organizations to adopt some kind of cloud model. Whether it’s as simple as offloading an application or as complex as big data management, hybrid systems allow for flexibility for both business and data center. Deploying complexity has been reduced, bandwidth and networking capabilities have improved, and even compliance-driven organizations can now adopt cloud computing.

    The above examples are just a small sample – there are new use-cases being created every day. Now, open-source technologies and software-defined systems make it even easier to extend your data center into the cloud.

    4:00p
    Cloud Foundry Foundation Names CEO

    Cloud Foundry Foundation has named Sam Ramji its CEO. The foundation is the independent governing body of the open source Platform-as-a-Service project. Nine new inaugural board members were added as well.

    The governing body is fairly young but active, hitting 40 member companies last December. Meetups organized by the community are up 350 percent in the last year.

    Pivotal, an EMC and VMware spin-off company and original overseer of the project, decided to create the independent foundation to help grow the ecosystem and establish a formal open governance model.

    Ramji is a 20-year veteran of the Silicon Valley and Seattle technology scenes and has no ties to any of the foundation’s current member companies. This is an important factor in keeping the balance of power from tipping in any one company’s favor, the very reason for the foundation’s creation.

    “Our goal is to run cloudfoundry.org with the speed and agility of a startup, not a traditional consortium,” said Ramji in a release. “This will help us continue the rapid growth and momentum Cloud Foundry has achieved to date.”

    Some notches on Ramji’s belt include leading strategy for API powerhouse Apigee, designing and leading Microsoft’s open source strategy, and driving product strategy for BEA WebLogic Integration.

    Cloud Foundry is a popular open source PaaS heavily used in numerous fairly new commercial PaaS offerings, such as Pivotal CF, IBM Bluemix, and HP Helion.

    “Cloud Foundry is quickly becoming the enterprise platform of choice for deploying and operating applications in the cloud,” Ramji said. “Major corporations on the supply and demand sides of the cloud market are putting significant resources behind this community-built platform. They’re doing so because they know they can commit to Cloud Foundry as their long-term cloud platform with confidence.”

    A member of multiple industry advisory boards, Ramji is board secretary of the Outercurve Foundation, a non-profit initially created by Microsoft as the CodePlex Foundation. Outercurve’s goal is enabling the exchange of code and understanding among software companies and open source communities. The name was changed to Outercurve to drive wider participation and contribution, and it now supports the wider Free and Open Source Software (FOSS) community.

    Outercurve’s evolution is similar to what’s occurring with Cloud Foundry. The foundation wants to drive wider open source participation after a Cloud Foundry charge mostly led by Pivotal.

    “The Cloud Foundry project was already keen on keeping some common ground between CF distributions with the notion of ‘core’ CF,” said Ovum senior analyst Laurent Lachal via email. “It seems to want to strengthen the approach [with the foundation].”

    The nine new inaugural board members added are:

    • ActiveState’s Bart Copeland
    • EMC’s John Roese
    • HP’s Bill Hilf
    • IBM’s Christopher Ferris
    • Intel’s Nicholas Weaver
    • Pivotal’s Rob Mee
    • SAP’s Sanjay Patil
    • Swisscom’s Marco Hochstrasser
    • VMware’s Ajay Patel
    4:30p
    Application Delivery in a Software-Defined Data Center

    Robert Haynes has been in the IT industry for over 20 years and is a member of the F5 Marketing Architecture team where he spends his time taking technology products and turning them into business solutions. Follow Robert Haynes on Twitter @TekBob.

    Plus ça change, plus c’est la même chose, or in more common IT parlance—same stuff, different day. Many of us know this sentiment well. As disruptive technologies and trends transform how we do business, at the heart of it all, we’re still grappling with the same core issues—driving better performance, ensuring security, and managing costs.

    As IT, and IT delivery, continues to evolve, the software-defined data center represents the next major advancement in app delivery. This data center of the future promises us a more efficient, responsive, and streamlined model for delivering enterprise IT. At the same time, we must still solve familiar issues, such as how to best supply application services.

    Bigger, Stronger, Faster Applications

    Application services are functions delivered in the data path between end users and applications that make those applications more secure, faster, or more available. Firewalling, load balancing, authentication, and encryption, for instance, will be ongoing requirements, but they must now be aligned to this on-demand, highly orchestrated design.

    The starting point for all design discussions, whether about technology or topology, is the requirements. While there is a range of application service functions to deliver, there are three universal requirements to implement:

    1. A comprehensive API that lays the foundation for integrating with the orchestration tools needed to create the on-demand infrastructure (a minimal sketch follows this list).
    2. On-demand creation of new services using resources that are delivered from a common pool or platform to avoid delays in acquisition or provisioning.
    3. Ubiquitous deployment of application services that may require compatibility with networking overlays, like VXLAN or NVGRE, or the ability to work across multiple virtualization platforms or public cloud offerings.
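
    To make requirement 1 concrete, here is a minimal sketch of an orchestration tool provisioning a load-balancing service through a REST API. The endpoint paths, payload fields and credentials are hypothetical stand-ins; the exact API depends on the application delivery platform you choose.

        import requests

        API_BASE = "https://adc.example.internal/mgmt"  # hypothetical management address
        AUTH = ("orchestrator", "example-password")     # placeholder credentials

        def create_lb_service(app_name, member_addresses, vip):
            # Create a pool of back-end servers, then a virtual server fronting them.
            pool = {
                "name": app_name + "_pool",
                "members": [{"address": addr, "port": 80} for addr in member_addresses],
            }
            requests.post(API_BASE + "/pools", json=pool, auth=AUTH, timeout=10).raise_for_status()

            virtual_server = {
                "name": app_name + "_vs",
                "destination": vip + ":80",
                "pool": app_name + "_pool",
            }
            requests.post(API_BASE + "/virtual-servers", json=virtual_server,
                          auth=AUTH, timeout=10).raise_for_status()

    An orchestration workflow would call create_lb_service("webapp", ["10.0.0.10", "10.0.0.11"], "10.0.1.100") immediately after spinning up the application’s virtual machines, which is what makes the service on-demand rather than ticket-driven (requirement 2).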

    With these requirements in mind, there are a few preferred modes of app service delivery in the software-defined data center: specialized hardware devices, virtual machines, and a virtualization platform.

    Three Preferred Modes of Delivery

    For decades, specialized hardware devices—firewalls, VPN concentrators, and application delivery controllers—have been the first choice for mission-critical production environments, because they deliver specialized processing hardware, high availability, and high capacity. Can they be integrated successfully into the software-defined data center? The answer, as ever, is that it depends. If the hardware platform can be API driven, scale seamlessly with security controls, and can connect into the data center fabric (including support of overlay and tunneling protocols), then the answer is yes. This design makes the application services a function of the infrastructure rather than a specific entity within the application stack. As a result, you can greatly simplify and standardize the delivery of services and help combat virtual machine sprawl.

    Like most things, however, this approach does have its drawbacks. Hardware devices might be efficient at scale, but they inevitably concentrate services into a small number of physical locations. This is likely to result in significant “tromboning,” which happens when application traffic must leave a physical host to be serviced by a separate device before it returns and then reaches the next virtual machine in the application stack. This is particularly significant for services such as east-west firewalls, which can generate a lot of extra network traffic.

    Focusing on the virtual space, specifically virtual machines, services can be deployed where and when they are required and can be placed close to the application servers to potentially remove additional network hops. While virtual devices conceptually fit well with the software-defined data center, the need for orchestration extends beyond creating services and must now encompass the creation, licensing, and deletion of the devices themselves.

    Additional integration with the chosen server virtualization platform is required, and organizations need to be sure that the licensing models offer the flexibility required to meet the demands of a more dynamic data center. In addition, it’s important to check that virtual versions from key vendors are available across a wide range of server virtualization and cloud platforms, especially if a hybrid data center model is a future goal.

    Virtualization platforms are also a very attractive way to deliver application services since many of them now include these services as part of their core functionality. Additionally, since these services are controlled and orchestrated by the core virtualization technology, they are usually bundled into the platform costs. Integrated into the hypervisor kernel, these services are embedded and available to all virtual machines. They are applied to the traffic as it passes across the hypervisor, and often no additional visible network hops are created.

    Again, there are some drawbacks to this approach. In general, the range of functionality that’s embedded in virtualization platforms is far smaller than with the other options. Where organizations have benefitted from advanced functions and programmability often offered by third party suppliers, integrated solutions can feel decidedly limited. Additionally, a virtualization platform often creates a degree of vendor lock-in given that configurations are not easily portable between different platforms or into a different supplier’s public cloud.

    The Model That’s Right for You

    So how do you know which model is right for your organization? Again, it’s the same decision process we’ve faced time and again. First, understand your current and (as much as possible) future needs before assessing the benefits and drawbacks of each model. Work with your key vendors to get a realistic picture of how their solution will work for you, then test and pilot as much as is feasible. After all, the great benefit of the software-defined data center is the ability to rapidly deploy, test, and destroy.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:18p
    Cloud Firm Linode to Add Singapore, Germany Data Centers

    Linode is one of those technology success stories that start with Linux and a dream. The company started in 2003, three years before Amazon Web Services hit the scene, and was 100 percent bootstrapped. Now it’s an international hosting and cloud provider expanding into two new markets: Frankfurt and Singapore. Both were identified as high-growth cloud markets.

    The company chose Pacnet for its Singapore data center. That location will be up and running in the next couple of weeks. It is down to a short list of providers for its upcoming Germany data center. That list is a little shorter following the recently announced merger between Interxion and TelecityGroup. The company does have data center space with Telecity in the U.K., but it’s hard to predict who it will choose given its process for selecting data centers.

    “Currently we deal with four colo providers in the U.S. and two internationally,” said Tom Asaro, Linode chief operations officer. “Interestingly, we use a different company in each location. We try to find the top provider in each market.”

    Linode’s growth has largely been through grassroots and positive word of mouth rather than marketing blitzes. The company focuses on service and technology rather than positioning. Its name usually comes up on “most reliable clouds” lists from the likes of CloudHarmony or on Internet message boards when developers are seeking a high-powered option.

    The Singapore location will complement a presence in Tokyo opened in 2011, which Linode said has been extremely successful. The reports and forecasts on cloud adoption in Singapore say the region is due to surpass even the U.S. in the coming years, according to Asaro.

    Data privacy laws and a requirement that certain kinds of information be hosted on German soil made a Germany data center location a logical choice for Europe.

    The company did not disclose how big its initial deployments are, other than saying it chose to be practical in new locations.

    The list of requirements for a provider includes room to grow and a very reliable remote-hands department. There needs to be a certain level of communication, according to Asaro, as Linode doesn’t directly staff many of these locations and needs a solid relationship with the providers.

    Part of the reason Linode doesn’t need local in-house staff is the amount of automation built into its platform. All of its tools are written in house, with the exception of Xen, the hypervisor.

    Automation benefits Linode because it streamlines operations. While the company prides itself on customer service and answering trouble tickets in a matter of minutes, automation means reduced customer service needs overall.

    Linode does a lot of business with the developer crowd, but it’s very much tuned for production. “We take a lot of pride in our build stack,” said Brett Kaplan, operations team lead. “We use some of the fastest CPUs, SSD. Top-of-the-line hardware.”

    The company committed $45 million to boosting its infrastructure last year, upgrading to SSD and doubling the amount of RAM in its plans for free. The upgrade followed a significant investment to bolster the network.

    With the likes of Google, Amazon, IBM, and Microsoft investing heavily in cloud, Linode has proven that the little guy can grow unimpeded in the space despite questions of commoditization.

    In addition to competing with giants like AWS, a comparable competitor is DigitalOcean, a recent entrant also tuned to developers that has seen great success.

    5:45p
    Brocade and EMC Unveil IP Storage Switch

    Brocade and EMC have extended their longstanding OEM relationship with the announcement of new Brocade VCS Ethernet fabric solutions for EMC’s Connectrix family of network switches.

    Designed for modern IP networks, the new Connectrix VDX-6740B IP storage switch improves performance, resiliency, and agility in mission critical IP storage networks, according to Brocade. The switch will be available for EMC’s Data Domain, Isilon, VMAX, VNX, and XtremIO products.

    The idea of an IP storage switch, which mirrors the role Fibre Channel switches have long played in dedicated storage networking, is not new.

    The target for the VDX-6740B is a modern, flexible, and open network architecture for dedicated storage networks powering new IP workloads. Brocade notes that the new switch has up to 48 10GbE ports and 4 40GbE ports. To assist with orchestration, multiple systems connect into a self-forming fabric, and are managed as a single switch by Connectrix Manager Converged Network Edition (CMCNE).

    Brocade acquired Ethernet networking company Foundry Networks almost seven years ago.

    The Connectrix VDX-6740B does add something new, with a patented feature for load-balanced multi-pathing as well as on-chip buffers for extremely high throughput. Resiliency is another item Brocade highlights, saying that the new switch has millisecond link failover, allowing I/O to continue non-disruptively after a path or link failure.

    “Today’s IP storage workloads can benefit from a dedicated IP storage network which establishes the same performance, predictability, availability and operational simplicity that we have been delivering to our Fibre Channel customers for the past 15-plus years,” Jonathan Siegal, vice president of marketing for EMC’s Core Technologies Division, said in a statement.

    6:44p
    Survey: IT Concern With Green Data Centers Marginal

    If you’re paying attention to the data center industry, you occasionally see a story about a colocation company touting the sustainability of its data center operations.

    Big tech companies, including public cloud service providers, have been making massive investments in renewable energy. The most recent examples include Apple’s 130-megawatt solar deal in California, and Amazon Web Services’ 150-megawatt wind power purchase agreement in Indiana.

    Global wholesale data center giant Digital Realty said in January it would buy renewable energy credits for new customers anywhere around the world to make their energy consumption 100 percent carbon-neutral for one year free of charge.

    Reading stories like that always raises the question: How much do colocation and cloud customers actually care about whether or not the IT gear they use is housed in a green data center, meaning one that’s reasonably easy on the environment?

    Data center provider Green House Data, which has sites in Oregon, Wyoming, New York, and New Jersey, cares about this question a lot more than its peers do. Sustainable operations are the way it has chosen to make itself stand out among competitors. It buys Renewable Energy Credits to cover 100 percent of its energy use and optimizes data center design for maximum efficiency (free cooling, hot and cold aisle containment, Energy Star equipment, and so on).

    As a company that cares a lot about this question, it regularly poses it to the market, and today, the answer is “a resounding … maybe,” as Green House Data Marketing Specialist Joe Kozlowicz put it in a blog post.

    In its most recent study, the company surveyed about 170 IT pros, from sys admins to CTOs, all but two of whom took part in making IT and infrastructure decisions. While the respondents generally agreed that the operational cost savings that come with a green data center make business sense, most IT departments don’t really look at energy efficiency or sustainability when comparing service providers.

    Most of those surveyed plan to expand or rejigger their infrastructure in some way over the next three years, be it extending infrastructure to the cloud to create a hybrid setup, taking more colocation space, building, buying, or leasing a data center, expanding or migrating entirely to a public cloud, or consolidating data centers. Less than one-third of those respondents said they would choose green data centers, but when asked to explain in detail what “green” factors they would consider, their responses illustrated mild commitment.

    For 10 percent of respondents, cooling design efficiency was a primary focus in selecting a service provider. Renewable energy and total energy consumption were a primary focus for 7 percent. For 3 percent, green technology investment and data center PUE (power usage effectiveness) were a primary focus.

    Those for whom cooling efficiency, green tech investment, renewable energy, total energy consumption, data center PUE, and overall environmental impact were a “minor consideration” ranged from 20 percent to 30 percent.

    Environmental impact was a “very important” factor for 27 percent; and total energy consumption was very important for 33 percent. No respondent said overall environmental impact was a primary focus.

    In sum, users tend to care more about energy efficiency (as a vehicle to deliver cost savings) than they do about their service providers’ carbon footprint.

    Here is the detailed breakdown of the results, courtesy of Green House Data:

    [Chart: Green House Data green survey results. Respondents rated each factor on a scale from “not important” through “minor consideration,” “low priority but important,” and “very important” to “primary focus.”]
    8:00p
    Indian Cloud Host ESDS Software Raises $4 Million for Data Center Growth


    This article originally appeared at The WHIR

    Indian cloud hosting provider ESDS Software Solution is raising $4 million from Canbank Venture Capital Fund to open three data centers over the next couple of years.

    Initially, the investment will be put towards the first phase of a 200,000-square-foot data center in Navi Mumbai to be open by April 2015.

    According to a report by Economic Times, Canbank is a subsidiary of public sector bank Canara Bank.

    ESDS is also planning a 100,000-square-foot data center in Bengaluru and a 50,000-square-foot data center in Nashik, India, and will raise additional funds from a combination of debt, internal accruals and further private equity investment.

    The company was founded in 2005 by Piyush Somani, a former director of WebHosting UK.

    ESDS has already raised $1.6 million from German bank KFW and Small Industries Development Bank of India. It has customers including Pizza Hut, Kafila, Essel Propack and Maharashtra State Election Commission.

    As an emerging market, India brings with it lots of investment opportunity and room for infrastructure development. Recently, Facebook launched an app with Reliance Communications to bring free Internet access to underserved markets in India.

    More Internet access is a good thing for service providers, as businesses will require additional services to support their online efforts. Ecommerce in particular is expected to grow, with the online shopping market projected to reach $15 billion by 2016.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/indian-cloud-host-esds-software-raises-4-million-data-center-growth

    9:05p
    TIA 2015: The Network of the Future

    The Telecommunications Industry Association will host TIA 2015: The Network of the Future June 2-4 at The Hyatt Regency in Dallas, Texas.

    Over three days, TIA 2015 will gather global thought leaders, top executives, and policy decision-makers for a unique look at mobile networks, big data, tech policy, emerging technologies, new business models and more. The conference will also spotlight new services and solutions that are allowing network operators and enterprises to more powerfully connect with customers, increase revenues and take advantage of new profit pools.

    Participants will have the opportunity to meet, learn from, and collaborate with:

    • Network operators who are exploring ways to maintain profitability, Quality of Service, and Quality of Experience, and to create additional revenue streams through new value-added services and cloud-based platforms.
    • Manufacturers and suppliers who realize that, to differentiate themselves within a marketplace that favors vendor-agnostic interoperability, they must find routes for greater collaboration with partners up and down the supply chain and for upselling new managed services and offerings to network operators and enterprise clients.
    • Enterprise innovators who are ready to take advantage of opportunities for transformation enabled by Information and Communications Technology (ICT) – connecting people in new ways, improving efficiencies and increasing margins – and made more accessible as the availability of open source solutions and network convergence continues to reduce barriers to adoption.

    For more information about this year’s conference, visit the TIA 2015 website.

    To view additional events, return to the Data Center Knowledge Events Calendar.

    10:53p
    Firmer Pricing in US Data Center Market Expected in 2015

    Despite healthy market dynamics and strong leasing activity, pricing trends in the U.S. data center market have been somewhat at odds with those dynamics, according to a recent Cushman & Wakefield report. Pricing was largely soft in 2014, but the real estate company expects firmer pricing in 2015.

    Deal momentum from late 2013 carried over into 2014 and the pipeline points to sustained demand. Despite healthy activity, wholesale pricing in 2014 was flat, at $125-$140 per kW per month.
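
    To put that range in perspective, here is a purely illustrative calculation; the 1 MW deployment size is an assumption, not a figure from the report.

        LOW_RATE, HIGH_RATE = 125, 140  # USD per kW per month, the range quoted above
        COMMITTED_LOAD_KW = 1000        # hypothetical 1 MW wholesale deployment

        monthly_low = LOW_RATE * COMMITTED_LOAD_KW    # $125,000 per month
        monthly_high = HIGH_RATE * COMMITTED_LOAD_KW  # $140,000 per month
        print("1 MW lease: ${:,}-${:,} per month, ${:,}-${:,} per year".format(
            monthly_low, monthly_high, monthly_low * 12, monthly_high * 12))
        # -> 1 MW lease: $125,000-$140,000 per month, $1,500,000-$1,680,000 per year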

    “The most interesting takeaway is that, over the past 12 months, pricing has not moved with supply-demand dynamics,” said Jeff West, director of data center research at Cushman. “I think in the fourth quarter, we started to see that change. Pricing will start to move up in 2015.”

    Factors contributing to “soft” pricing in 2014 included concern that subleased space re-entering the market would create a supply-demand imbalance, as well as aggressive pricing for larger deals and for deals seen as having high growth potential.

    However, large providers are becoming more stringent in concessions, according to West.

    The firmer pricing is also a result of providers being more disciplined about phased construction, leading to a better supply and demand balance. Publicly traded real estate investment trusts are showing more discipline when it comes to pricing, because they could take a bit of a perception hit, according to West. A lot of private providers will still price aggressively, but on the whole pricing will be firmer.

    West noted that there were fewer purely speculative big builds in 2014 with more providers taking a phased approach. The ability to perform “just in time” builds doesn’t skew the supply-demand dynamic toward the buyer as it has done in the past.

    On the other hand, because providers are not building too far ahead, in many markets it takes only a few deals to leave no immediately available capacity, said West. With fewer speculative builds, some markets might see temporary pockets of supply constraint similar to the situation in Northern Virginia this time last year.

    Despite healthy market dynamics, wholesale pricing was soft and flat during most of 2014. Market maturity suggests firmer wholesale pricing in 2015. (Source: Cushman & Wakefield Data Center Snapshot Winter 2015)

    West pointed out that Dallas, a market considered largely oversupplied a year ago, now has a dwindling number of large wholesale options. Digital Realty, the largest landlord in the market, has no new capacity available until later in the year. CyrusOne and QTS are the only providers with large blocks that are immediately sellable.

    Dallas and Northern Virginia were the most vibrant data center markets of 2014; each saw nearly 40 megawatts’ worth of new data center leases.

    Other markets highlighted were Minnesota, which has emerged as a market capable of attracting sizable deployments following several investments, and Santa Clara, California, which only has a handful of large first-gen wholesale space options available.

    The report also noted that there was a lot of sublease space on the market at the beginning of the year, along with concern that this would negatively impact market dynamics. That space has seen healthy interest from Facebook, Yahoo, and Zynga in Northern Virginia and Northern California.

    Vantage Data Centers, a major Santa Clara player, recently discussed the sublease dynamic in California with DCK.

    The Cushman report also mentions some of the larger lease deals, acquisitions, and other activity. Large wholesale deals included LinkedIn (7 MW), T-Mobile (4.5 MW) and State Farm (over 6 MW). A big acquisition was Zayo gobbling up Latisys and its 180,000 operational square feet for $675 million. Notable builds include T5 breaking ground in Portland with a multi-megawatt anchor tenant and Switch’s $1 billion entry into Reno, Nevada, with eBay as anchor tenant.

    A recent report published by commercial real estate firm North American Data Centers examined wholesale leasing, determining it was up 37 percent in 2014.

