Data Center Knowledge | News and analysis for the data center industry
 

Monday, February 23rd, 2015

    Time Event
    1:00p
    Energy Sector’s Data Center Outsourcing Healthy

    Oil-and-gas-industry customers often have unique data center requirements and are increasingly outsourcing them. Whether it's upstream demand for High Performance Computing in the world's most remote locations or a desire to get out of the business of operating data centers and outsource core infrastructure on the distribution and supply side, opportunity for data center service providers abounds.

    The industry’s computing needs can be split into two categories: upstream and downstream (downstream also includes midstream). “Upstream” refers to where the oil comes from; this is where the sector focuses on exploration and research and development. “Downstream” refers to where the oil and gas are ultimately going, or the needs on the distribution and supply side.

    While dropping oil prices have some equity analysts concerned about the effect on colocation providers that rely on energy-sector customers for big portions of their revenue, lower oil prices don’t necessarily hurt data center outsourcing demand. Colocation becomes more attractive downstream as companies look to save by outsourcing, and companies upstream have constant computing requirements for R&D and exploration.

    Mansfield Outsources to QTS

    The downstream sector is increasingly turning to data center outsourcing. One example is Mansfield Oil, which recently transitioned from company-owned data centers to a QTS Realty Trust facility in Suwanee, Georgia — in the wider Atlanta Metro.

    Mansfield is on the distribution and supply side. It is a North American energy supply, logistics, and services enterprise. It partners with clients to provide technology-based solutions to support their energy supply chain needs.

    “Mansfield is consolidating a couple of data centers into QTS, so it’s all about increasing performance on what they do, which is providing technology,” said Chad Giddings, vice president of marketing at QTS.

    QTS will help Mansfield stabilize its core infrastructure platforms, transition network carriers, reconfigure and upgrade hardware, and more. In addition to migrating its IT infrastructure, Mansfield also moved its IT infrastructure team to the massive 370,000-square-foot QTS data center.

    “We realized that it doesn’t make sense for Mansfield Oil to be in the data center business, when experts like QTS build and maintain data centers at a level beyond what we can manage in terms of security, monitoring, and technology,” said Hercu Rabsatt, an IT infrastructure director at Mansfield, in a release.

    Sector Has Unique Reliability Requirements for HPC

    CyrusOne is a data center provider that does a lot of business with companies on the R&D side of the energy industry.

    These customers’ HPC needs require special data center environments. “Top of mind [with HPC] is the level of redundancy,” said John Hatem, senior vice president of data center design at CyrusOne. “Over the past five years, it’s gone from single generator and UPS to either only UPS or just straight utility power. We’ve seen a shift from a need of five nines reliability to three nines. Geographic diversity of two sites with three nines in a single site is more favorable, as there’s less operational expense.”

    Not having to maintain generators and UPS systems means CyrusOne is able to deliver at a much lower cost per megawatt.

    HPC customers typically run at much higher densities than a traditional enterprise client, said Hatem. High densities make customers more open to water-cooled environments, which CyrusOne said it has been deploying for over seven years. There are also opportunities to innovate on the air side, as R&D customers are generally willing to push the ASHRAE envelope.

    “They’re typically knowledgeable and much more open with temperature and humidity,” said Hatem, noting that some customers operate at 80F. The second edition of the ASHRAE guidelines, published in 2008, recommended an upper limit of 81F.

    Operating at higher temperatures means many more hours of free cooling. “These customers are trying to save up front and on the operational side by driving down PUE,” he said.
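    The arithmetic behind that claim is simple: PUE (Power Usage Effectiveness) is total facility power divided by IT power, so every kilowatt of mechanical cooling avoided during free-cooling hours pushes the ratio toward the ideal of 1.0. A minimal sketch with hypothetical figures (not CyrusOne's actual numbers):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power (ideal = 1.0)."""
    return total_facility_kw / it_load_kw

# Hypothetical 1 MW IT load: 400 kW of mechanical-cooling overhead
# vs. 100 kW of overhead during free-cooling hours
print(pue(1400, 1000))  # 1.4
print(pue(1100, 1000))  # 1.1
```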

    Prefab Modules Popular Choice for Exploration

    Sometimes, these data centers are needed in some of the most remote parts of the world. For remote data center needs, pre-fabricated is the way to go, according to George Sutton, director of AST Modular at Schneider Electric. The company’s AST modular offerings are used a lot for energy exploration in remote areas.

    “As the price of oil has dropped, the size of opportunity is hard to explain,” Sutton said. While the opportunity seemed immense when prices were at an all-time high, lower prices don’t mean fewer exploration needs, and these upstream needs are extremely unique, according to Sutton.

    “Imagine having to build an Uptime-compliant, Tier-rated data center in a location that doesn’t produce any materials,” said Sutton. “It becomes quite an issue. Even things like concrete, they have to send over from U.S.” In other words, it’s the logistics of building a facility in a remote location that makes pre-fab a good option.

    Schneider’s AST delivers high-end pre-fab data centers to these locations in short order, for large companies whose projects are often legally complex. Sutton said the structures tend to be hardened depending on the environment.

    3:18p
    Apple to Spend $2B on Two Massive European Data Centers

    Apple will undertake its biggest European data center project to date, investing around $1.9 billion on two massive data centers in Ireland and Denmark. The new data centers will be powered entirely by renewable energy. Both data centers are expected to begin operations in 2017 and will employ hundreds.

    Apple has used 100 percent renewable energy across its data center footprint since 2013. The data centers will power Apple’s online services such as the iTunes and App store, iMessage, Maps and Siri. Apple expects investment of almost $2 billion to be split evenly between the two new locations.

    The data center in Ireland will be in the west, located in Athenry, close to Galway. The Denmark data center will be in Viborg, western Denmark.

    The new data centers will also be some of the largest, at around 1.8 million square feet. In addition to using 100 percent renewables, both data centers have additional design considerations to make them even more environmentally friendly.

    In Denmark, the data center will be located next to one of Denmark’s largest electrical substations, which Apple said will help it eliminate the need for additional generators. The facility is designed to capture excess heat from equipment inside and conduct it into the district heating system to help warm homes in the neighboring community.

    Excess heat recycling has been a major trend in the last five years or so. Two other recent examples are Amazon’s Seattle project and Bahnhof’s Swedish data center.

    In Athenry, Apple will recover land previously used for growing and harvesting non-native trees and restore native trees to Derrydonnell Forest. Apple said in a statement that the project could mean more outdoor education space for local schools and a walking trail for the community.

    “We believe that innovation is about leaving the world better than we found it, and that the time for tackling climate change is now,” said Lisa Jackson, Apple’s vice president of Environmental Initiatives. “We’re excited to spur green industry growth in Ireland and Denmark and develop energy systems that take advantage of their strong wind resources. Our commitment to environmental responsibility is good for the planet, good for our business and good for the European economy.”

    Ireland’s government reacted positively, with Irish Prime Minister Enda Kenny calling the project “an extremely positive step in the right direction.” Ireland seeks to cut unemployment to below 10 percent, and 300 jobs will be added during the multiple phases of the project.

    Apple directly employs over 18,000 people across 19 European countries and has added over 2,000 jobs in the last 12 months. Investment in European companies and suppliers last year was close to $9 billion.

    “We are grateful for Apple’s continued success in Europe and proud that our investment supports communities across the continent,” said Apple CEO Tim Cook in a statement. “This significant new investment represents Apple’s biggest project in Europe to date. We’re thrilled to be expanding our operations, creating hundreds of local jobs and introducing some of our most advanced green building designs yet.”

    Apple recently signed a 25-year agreement to buy the output of 130 megawatts of generation capacity of a future solar project in Monterey County, California.

    4:00p
    Internap’s Earnings Show Continued Shift to Hybrid Services

    Internap reported results for its fiscal 2014, a year that was quite eventful for the company. The year was a tipping point for a transition from colocation provider to hybrid services. The company moved out of the Google-owned carrier hotel at 111 8th Ave. in New York and into its brand new data center in New Jersey. Its revenue stream also widened during the year due to the acquisition of iWeb toward the end of 2013.

    Full-year revenue was $335 million, up from $283.3 million in 2013. Data center services drove top-line growth, and IP services continued a slow decrease in revenue.

    While IP service revenue has been flat or shrinking for a while, the company believes it to be a differentiator in the market. Ninety-five percent of data center customers use the IP service.

    The acquisition and impact of hosting provider iWeb was thoroughly discussed on an earnings call. iWeb revenue growth was in the low single digits, lower than the initially projected estimate of 10 percent. Churn of two large hosting customers and a modest exchange-rate impact affected results. However, iWeb was more profitable this year under Internap, with very healthy EBITDA.

    One big benefit of the iWeb acquisition has been gaining a different route to market via iWeb’s e-commerce selling capabilities. These capabilities are promising as Internap continues the transition into a provider of hybrid services. Selling through the web has seen faster growth than traditional enterprise sales models, and it means much shorter sales cycles than traditional colo deals.

    Internap had 57 percent utilization across its data center footprint. It was a bit of an odd year, given the expansions and the big migration from 111 8th.

    However, the services mix means utilization rates aren’t an apples-to-apples comparison with a pure colocation provider.

    That unused space is potentially more valuable. The potential margin on that remaining footprint is higher, given the company’s growing push into cloud and hosting and the higher margin per square foot these services bring.

    Another benefit of the hybrid strategy is that, if some of the colo space is unattractive to the market or goes unused for an extended period, the space can be filled and capitalized on with hosting or cloud services.

    When it comes to colocation, the company is squarely in retail, so it very rarely sees any deals over 250 kilowatts. Those deals are not its sweet spot, and it is not targeting these larger deals. The company said it is typically seeing three-percent price increases on retail colocation per year.
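    Those escalations compound: a hypothetical retail cabinet billed at $1,000 per month would cost roughly $1,159 per month after five years of three-percent annual increases. A quick sketch (illustrative figures, not Internap's actual pricing):

```python
def escalated_price(base: float, rate: float, years: int) -> float:
    """Price after compounding an annual escalation rate (e.g. 0.03 for 3%)."""
    return base * (1 + rate) ** years

print(round(escalated_price(1000, 0.03, 5), 2))  # 1159.27
```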

    Internap’s capital expenditures were split about evenly between colocation assets and hosting and cloud assets.

    4:30p
    Understanding CloudStack, OpenStack, and the Cloud API

    OpenStack vs. CloudStack is not so much a battle as a push for advanced cloud management. Let’s start here: these platforms were designed as cloud computing became an integral part of many organizations. The big push was for a logical cloud-management layer with many ways to control various workloads.

    With that, let’s dive into the latest and greatest from both of these guys.

    CloudStack: Running on hypervisors like KVM, vSphere, XenServer, and now Hyper-V, CloudStack is an open-source cloud management platform designed for creating, controlling, and deploying various cloud services. With its growing API-supported stack, CloudStack already fully supports the Amazon AWS API model.
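    As a concrete illustration of that API model, CloudStack's native API authenticates each call by signing the request's query string: lowercase the parameters, sort them by name, compute an HMAC-SHA1 over the result with the account's secret key, and Base64-encode the digest. The sketch below follows that documented scheme using made-up credentials:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_cloudstack_request(params: dict, secret_key: str) -> str:
    # Lowercase and sort the parameters, then join them into a query string
    query = "&".join(
        f"{k.lower()}={urllib.parse.quote(str(v), safe='*').lower()}"
        for k, v in sorted(params.items())
    )
    # HMAC-SHA1 over the normalized string, Base64-encoded
    digest = hmac.new(secret_key.encode(), query.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials and command
params = {"command": "listVirtualMachines", "apikey": "MY-API-KEY", "response": "json"}
signature = sign_cloudstack_request(params, "MY-SECRET-KEY")
```

    The resulting signature is URL-encoded and appended to the request as the `signature` parameter.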

    • What’s good: It really does keep getting better. The latest release of CloudStack is actually pretty nice. The deployment is really smooth, consisting of only one VM running the CloudStack Management Server and another acting as the actual cloud infrastructure. In reality, you could deploy the whole thing on one physical host.
    • The challenges: The first stable release of CloudStack is less than two years old, and some still question the rate of CloudStack adoption. Even with some big advancements, some complain that the architecture and installation process – although simplified – still require quite a bit of knowledge and time to deploy.
    • What’s new: 4.1 (with 4.4.2 just released) sees improved security, hypervisor agnosticism, and advanced network-layer management. Other big updates revolve around:
      • Improved Storage Management
      • Virtual Private Cloud tiers can now span guest networks across availability zones
      • Support for VMware Distributed Resource Scheduler
      • Improved Support for Hyper-V Zones, VPC and Storage Migration
    • Who’s using it: Datapipe currently deploys its global cloud infrastructure on CloudStack. According to Datapipe, its reasons for moving to the platform include:
      • Paused VMs maintain machine state without compute charges
      • Scale storage independent of compute
      • Single security zone across all regions
      • Access to Hong Kong Economic Zone, and Shanghai (Mainland China)
      • Additional cost savings as a result of high-performance VMs that require fewer computing resources

    Outside of Datapipe – CloudStack’s largest current user – there have been other smaller but important adopters as well, including Shopzilla, SunGard Availability Services, CloudOps, Citrix, WebMD Health, and several others.

    The general consensus is that CloudStack, although strongly gaining popularity, is still in the shadow of OpenStack. Nevertheless, ever more organizations are moving toward CloudStack, which has now graduated from the Apache Incubator, especially since many early adoption pains have been resolved.

    OpenStack: Managed by the OpenStack Foundation, the platform consists of multiple interrelated stack-based projects. These all tie into one management interface to provide a cloud computing management platform.

    • What’s good: It’s definitely a more mature product. Furthermore, more than 150 companies (including AMD, Brocade, Dell, HP, IBM, VMware, and Yahoo) are contributing to development. It’s seen as the leader in cloud platform management, and momentum around its growth continues.
    • The challenges: Even with so much adoption and development around the platform, OpenStack is still challenging to deploy and, in many cases, needs to be managed from various CLI consoles. The fragmented architecture consists of a number of modular components, including Compute, Object Storage, Block Storage, Networking, Dashboard, Identity Service, Image Service, Telemetry, multi-tenant cloud messaging, Elastic Map Reduce, and others. The good news is that there are a lot of configuration and installation scripts out there to use as templates.
    • What’s new: Yes, there are still some technical and deployment challenges. Has this stopped adoption momentum? Not at all. The latest release, Juno, touts 342 new features. It adds enterprise features such as storage policies and a new data processing service that provisions Hadoop and Spark, and it lays the foundation for OpenStack to be the platform for Network Functions Virtualization (NFV), a major transformation driving improved agility and efficiency in telco and service provider data centers.
    • Who’s using it: Oh yeah, this list is impressive and yes, it’s growing. Jointly launched by NASA and Rackspace Hosting, OpenStack had some serious backers from the onset. Now, OpenStack is utilized by such organizations as AT&T, CERN, Yahoo!, HP Public Cloud, Red Hat OpenShift and several others.

    Let’s face facts: OpenStack is a more mature and more widely adopted platform. But that doesn’t mean it’s not facing the heat of other players in the market. There is a lot of money being pumped into platforms like CloudStack and even Eucalyptus. Right now, OpenStack is enjoying a mature product set with some very high profile users.

    Cloud services are fighting for market share and developing the next generation of cloud management systems. Arguably the four biggest players in the market are OpenStack, CloudStack, Eucalyptus, and OpenNebula. Each is creating new ways for organizations to connect various cloud services. There’s no doubt that, right now, OpenStack is leading the way. However, cloud interconnectivity, API architectures, and the influence of the end user will ultimately dictate the future of the cloud. Whatever the outcome, the business, and of course the end user, will certainly benefit.

    4:30p
    Look Ma, No Fans!

    Herb Zien is CEO of LiquidCool Solutions, a technology development firm with patents surrounding cooling electronics by total immersion in a dielectric fluid.

    Today, virtually all data centers circulate conditioned air around the data processing room and through the racks to cool IT equipment. Separate hot and cold aisles are maintained to conserve energy, and blanks are inserted to prevent short circuiting through empty slots in the racks. In most installations cold air is forced up through holes in the floor. Humidity control is critical to avoid condensation on IT equipment if too high or electrostatic discharge if too low.

    Get Rid of the Horse!

    None of this makes sense. Air is a thermal insulator with an extremely low heat capacity and virtually no thermal mass. Cold air sinks. Contact between air and electronics promotes oxidation and corrosion. Pollutants in the air can cause additional damage. Fans can fail, affecting reliability. Earplugs are required in some data centers due to excessive fan noise. Heat generation at the device level is bumping up against the thermodynamic limit.
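    The gap is easy to quantify: at room temperature, water stores thousands of times more heat per unit volume than air, which is why air-based cooling has to move so much volume to do the same job. A back-of-the-envelope comparison using textbook property values:

```python
# Volumetric heat capacity = density (kg/m^3) * specific heat (J/(kg*K)), ~20C values
air_vhc = 1.2 * 1005       # air:   ~1,206 J/(m^3*K)
water_vhc = 998 * 4186     # water: ~4,177,628 J/(m^3*K)
ratio = water_vhc / air_vhc
print(round(ratio))  # ~3464: water absorbs thousands of times more heat per cubic meter
```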

    Some engineers argue that the day for liquid cooling will come when racks reach 25 kilowatts or hell freezes over, whichever comes first. But the ability to accommodate high power densities is among the least important benefits of liquid cooling. My great grandparents did not trade up from a horse and carriage to a horseless carriage because they wanted to go 30 miles per hour; they did it to get rid of the horse! The horse wasted energy, took up space and had a negative environmental impact, sort of like fans in a data center. Data center owners and operators should want to get rid of the fans regardless of power density.

    But Why Are Fans a Problem?

    • Fans are inefficient and add to the heat load that must be dissipated: 15 percent of data center energy is used to move air, and onboard fans in the chassis can use up to 20 percent of the energy at the device level when servers are operated at full load.
    • Fans waste space because racks need room to breathe and CRAC units and in-row coolers take up expensive floor space.
    • Fans reduce reliability because they are prone to failure, electronics are exposed to air, and thermal fluctuations and vibration drive solder joint failure.
    • Fans are noisy, blow dust around and generally create environmental issues for equipment operators.


    So the issue has nothing to do with liquid cooling; it is all about getting rid of fans. The engineering problem is how to replace fan-based cooling with a cost-efficient platform that is scalable and provides quick and easy access for maintenance.

    There are companies that offer technologies that eliminate fans. These technologies travel different paths to reach the objective of getting rid of fans, but they all isolate electronics from air, significantly reduce or eliminate the need for mechanical refrigeration, save space, reduce thermal fluctuations, reduce noise, eliminate the need for humidity control, eliminate raised floors or high ceilings, and facilitate energy recovery. As icing on the cake they also can dissipate heat from high density racks. Some are more practical than others, but my guess is that all of these technologies are more cost efficient than current data center cooling practice.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    8:02p
    TelecityGroup Building Fourth Dublin Data Center

    TelecityGroup, whose recently announced merger with Interxion stands to make it the largest European data center provider, is building a data center in Dublin, its fourth facility in Ireland.

    Dublin is an attractive European data center location for U.S. companies expanding to the continent, according to the provider. Google and Microsoft have large data centers there, and so does San Francisco-based wholesale provider Digital Realty. Apple announced today it will build a massive data center in the country’s west, near Galway.

    Construction on the new data center site in Northwest Business Park in Blanchardstown, a Dublin suburb, has commenced. The company expects to launch Phase I this summer.

    The location has access to lots of network carriers and is close enough to TelecityGroup’s other three data centers in the region for direct intercampus connectivity.

    One of its existing facilities is in the same business park. That data center is being expanded, along with nine other existing sites around Europe.

    The company’s other two data centers in the country are also in Dublin suburbs.

    The new Dublin data center will provide about 27,000 square feet of data-hall space and employ 10 people in engineering roles.

    Maurice Mortell, vice president of emerging markets and managing director of TelecityGroup Ireland, said the company had seen more demand from customers for lots of low-latency network capacity, especially to the U.S.

    “It was this high demand for TelecityGroup data centers in Ireland that drove our major expansion in Ireland in 2011,” he said in a statement. “Since this expansion just over three years ago, our business in Ireland has grown considerably.”

    10:33p
    Box Buys Airpost, a Startup that Brings Visibility into Corporate Clouds


    This article originally appeared at The WHIR

    Cloud storage and collaboration platform Box has bought Airpost, a startup whose service allows organizations to know what cloud services their employees are using, and apply full management and reporting to them, making them safer.

    According to a blog post from Airpost co-founder and CEO Navid Nathoo, Airpost will be closing operations as of Mar. 1, 2015. The Airpost team will be joining Box, and, as TechCrunch notes, the wording of Nathoo’s announcement makes the deal seem like an “acqui-hire.”

    Nathoo wrote, “We are excited to bring our experiences from Airpost to Box and continue to build world-class products so organizations all over the world can feel secure that their content is safe in the cloud, and that their employees will be more productive than ever before.”

    Airpost was founded two years ago, and attempted to deal with the issue of “shadow IT” – essentially staff members using outside cloud services (such as Box) without administrators knowing.

    Security is obviously a major concern for enterprises seeking cloud solutions, and it’s been an area of competition in the cloud storage and collaboration space, where the major players are Dropbox, Microsoft, and Box. Security is, for instance, a major focus of the alternative cloud platform Huddle, and peer-to-peer-based offerings such as BitTorrent Sync and Mega also offer some interesting security features.

    Meanwhile, Box launched a long-delayed Initial Public Offering on the New York Stock Exchange last month, and it has been making a concerted effort towards growing its enterprise customer base partly through the addition of new features. Just prior to the IPO, it had bought CloudOn, a startup that provides cloud-based productivity applications that can be accessed on any device.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/box-buys-airpost-startup-brings-visibility-corporate-clouds

    11:10p
    QTS Reports Positive First Year as Public REIT

    QTS Realty has reported results for its first full year as a publicly traded Real Estate Investment Trust. Results were on the whole positive, but were greatly assisted by a blockbuster 19-megawatt deal with an unnamed tenant closed during the year.

    The company converted into a public REIT as part of its IPO late in 2013. Other data center providers that recently converted to public REIT status are CyrusOne and Equinix, although the latter is still awaiting official regulatory approval.

    QTS kept busy since the IPO, investing $300 million in expansion and bringing 180,000 square feet of data center space online. In 2014, it reached 2 million square feet of powered-shell space across its portfolio, up from 1.8 million square feet at the end of 2013.

    The company will hit a major milestone this year, reaching 1 million square feet of built-out space. It currently has 927,000 square feet and plans to bring close to 100,000 square feet online this year.

    Planned expansions in 2015 include over 40,000 square feet in Richmond, Virginia, 31,000 square feet in Dallas, and 25,000 square feet in Atlanta.

    In addition to the 19-megawatt deal, highlights of the year include bringing online a Dallas-Fort Worth data center and key acquisitions and deals in Chicago and New Jersey. Its Princeton, New Jersey data center is now also acting as showcase for a recently launched critical facilities management practice. The company also achieved the elusive FedRAMP certification, which puts it in a strong position to win federal government deals, and continued to expand its services offerings, adding disaster recovery as a service in September.

    Expectations were high for QTS and data center REITs in general. QTS slightly beat analyst expectations, with higher funds from operations than expected. Funds from operations is a common REIT measuring stick.
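    For readers unfamiliar with the measuring stick: under the standard NAREIT definition, funds from operations adds real-estate depreciation and amortization back to net income and excludes gains on property sales, since straight-line depreciation understates the economic performance of real estate. A minimal sketch with illustrative figures (not QTS's actual numbers):

```python
def funds_from_operations(net_income: float, depreciation_amortization: float,
                          gains_on_property_sales: float) -> float:
    """FFO per the standard NAREIT definition (simplified)."""
    return net_income + depreciation_amortization - gains_on_property_sales

# Illustrative figures in millions
print(funds_from_operations(30.0, 25.0, 5.0))  # 50.0
```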

    Revenue for the total year was $217.8 million, up 22 percent. The company had $59.6 million in revenue in the fourth quarter, 3 percent higher than in Q3.

    QTS focuses on its 3 Cs portfolio, the three Cs being Custom data centers, Colocation, and Cloud. C1 is big, strategic wholesale deals; C2 is retail colo; C3 is cloud and managed services. C3 deals generate higher revenue per square foot.

    There was no C1 activity in the fourth quarter, but new and modified leases were up 10 percent, which suggests an improving services mix. Average revenue per square foot for C2 and C3 is $1,025, much higher than for C1, where deals are big, strategic customers.

    Pricing for new and modified deals was above the four-quarter average because of increased C2 and C3 activity.

    Revenue guidance for 2015 is between $225 million and $275 million, with the company expressing comfort in its booked-not-billed pipeline, new strategic assets, and the growth of its higher-revenue-per-square-foot C3 cloud and managed services portfolio.

    Room for Growth

    QTS is using less than half of its total building-shell space and has capacity to more than double the amount of raised floor in its inventory. Additionally, the company owns land adjacent to all of its mega data centers. The room to grow is there.

    Of the space online and available, utilization as of the third quarter was 85 percent, not including space that’s booked but not billed. There was no space brought online in Q4, as planned, but over 80,000 square feet was added in Q3.

    Utilization rates are good, but utilization is not a simple metric to evaluate. QTS is one of many providers attempting to diversify its services mix beyond big wholesale deals. The utilization rate doesn’t include powered shell or booked-but-not-billed space, and revenue per square foot ranges widely across its offerings.

    Many of the impressive growth metrics for 2014 can be directly attributed to that 19-megawatt deal across Richmond and Atlanta metro facilities. The company’s net operating income highlights were Atlanta (up 19 percent) and Richmond (up 53 percent). The third-biggest net operating income gain was California, up 12 percent.

    Richmond was a bright spot beyond the big deal, however. This time last year, it had 52,000 square feet leased; it ended the year with 137,000 square feet leased. Last April, Richmond was highlighted in a discussion with DCK.

    The company signed a new customer in the healthcare industry, which took multiple cloud and managed services in Richmond. An insurance provider that used to take colocation space in Richmond expanded its services to include a Disaster Recovery as a Service offering out of Atlanta.

    Focus on Chicago, Jersey

    Chicago and Princeton are important strategic markets for 2015. The company believes Chicago will play out very similarly to the Dallas facility. Both locations are key markets on the must-serve list for national providers. Dallas had great pre-selling activity, but the market has also been supply-constrained as of late.

    “Chicago is one of the dots we’ve been trying to put on the map for some time,” one QTS official said on the recent earnings call.

    The company’s strategy is acquiring infrastructure-rich assets in strategic new markets on a low-cost basis, which it succeeded in doing in Chicago.

    Chicago is really two markets: downtown, where the carrier hotel at 350 E Cermak is the big dog, and the suburbs. It’s more expensive in the city. The new QTS Chicago location is strategically located a few miles south of the financial district.

    Princeton is the flagship for what the company wants to accomplish with the critical facility management practice. The 200-acre property has about 600,000 square feet of facilities with 175,000 square feet of shell.

