Data Center Knowledge | News and analysis for the data center industry
Monday, July 29th, 2013
| 12:30p |
Bridging the Cloud Storage Gap
Ranajit Nevatia is VP of Marketing, Panzura. He brings more than 15 years’ experience in enterprise storage and software. Prior to Panzura, Ranajit was responsible for business strategy, cloud alliances, product management and marketing for Riverbed’s Cloud Storage Business Unit.
 RANAJIT NEVATIA
Panzura
Cloud is one of the biggest buzzwords in technology these days. After cloud’s initial success with software as a service (SaaS) applications like e-mail, CRM, and payroll, the next frontier has become infrastructure as a service (IaaS), especially storage. The idea that you could store your data in the cloud and eliminate expensive on-site storage systems is very compelling to companies that want to reduce capital spending in their IT budgets. CFOs and CIOs are strongly attracted to the idea that they could move corporate storage to the cloud and eliminate potentially millions of dollars spent on buying, maintaining, and upgrading storage systems. And cloud providers like Amazon and Google, along with Dell, EMC, HP and IBM, are eager to acquire enterprise storage customers, offering virtually unlimited storage in their clouds for pennies per gigabyte.
Storage Needs Drive Cloud Storage Adoption
The number one driver for the growth of cloud storage is that storage isn’t necessarily a core competency of most large organizations. Many enterprises adopting cloud storage are doing so to move from a CAPEX-oriented expenditure model to an OPEX-oriented model. They’re trying to get out of the mundane task of managing large amounts of storage. CIOs at large corporations are basically saying they’re done buying storage gear because it doesn’t add value to their lines of business in terms of delivering services. For example, the U.S. Department of Justice is being asked to provide more e-discovery and better case management for its offices. That’s how the DOJ’s IT staff wants to spend its time, not arranging the storage to support those applications. They’re trying to get the storage function into someone else’s hands so their IT department can focus on value-added services for their courts and their lawyers.
Multiple Barriers to Cloud Storage Adoption
So if cloud storage solves this problem, why the slow adoption? Even though cloud storage seems like an unalloyed benefit for enterprises, it’s not as simple as visiting the Amazon site and clicking the “Sign me up” button. Less than one percent of traditional large enterprises store their files in the cloud because, using the tools provided by cloud vendors, they can’t get there from here.
One issue is that enterprises store “files,” while clouds store “objects,” a new data construct required by this scalable architecture. Somehow, there must be a translation from file to object in order for enterprises to access cloud storage. Alternatively, companies would have to rewrite many of their applications to take advantage of cloud storage, and they’re unwilling to do that because of the high cost. Cloud providers offer basic on-ramps to their clouds – software programs that perform a raw conversion from files to objects. These might serve the needs of small businesses, but they are not adequate for most use cases in a large corporation.
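To make the file-to-object translation concrete, here is a minimal sketch of how a gateway might map POSIX-style file paths onto flat object keys. The chunk size and key scheme are invented for illustration; this shows the general technique, not Panzura’s or any cloud provider’s actual implementation.

```python
from pathlib import PurePosixPath

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative 4 MiB chunk size (an assumption)


def file_to_objects(file_path: str, data: bytes) -> dict:
    """Split one file into fixed-size chunks and assign each a flat object key.

    A real gateway would also store metadata (permissions, timestamps, ACLs)
    in a separate index object; this sketch only shows the naming idea.
    """
    path = PurePosixPath(file_path)
    objects = {}
    for offset in range(0, max(len(data), 1), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        # The key encodes the logical path plus the chunk ordinal, so the
        # file can be reassembled by listing all keys under its prefix.
        key = f"files{path}/chunk-{offset // CHUNK_SIZE:08d}"
        objects[key] = chunk
    return objects


if __name__ == "__main__":
    objs = file_to_objects("/projects/design/spec.docx", b"x" * (9 * 1024 * 1024))
    for key, chunk in objs.items():
        print(key, len(chunk), "bytes")
```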
Other barriers to enterprise cloud storage adoption include security, availability, and performance. Companies are reluctant to store proprietary data in a public cloud because they worry that the data won’t be secure. Companies worry that the network or cloud provider could experience outages, denying them access to their data at a critical time. Companies are also concerned that their users won’t be able to access files in the cloud with the same speed as they get them from on-premises storage on a local network.
Eliminating the Challenges to Cloud Storage
To address these issues, a new category of product has arisen: cloud storage controllers. Cloud storage controllers are systems that sit in the corporate data center and remove these objections or concerns. They provide translation of files for storage in the cloud and make cloud file storage as simple, fast, secure and reliable as local storage. A cloud storage controller integrates memory, solid-state storage, and hard disk storage along with an on-ramp to the cloud.
There are cloud controller solutions that are designed primarily to address the needs of smaller businesses, but these take a very basic approach that isn’t robust enough for enterprise cloud storage applications. A true enterprise-class solution must deliver the scalability, reliability, and performance required to support hundreds or thousands of users working with thousands or millions of files.
Aside from translating files into objects for storage in the cloud, the most basic function of an enterprise-class cloud storage controller is to provide a standard enterprise-grade file system that is scalable into millions or billions of files spanning hundreds of terabytes or petabytes. The file system enables enterprise applications to transparently integrate with the cloud storage controller as if it were a local storage device.
On top of the file system, the cloud storage controller includes other key features (a generic sketch of the deduplication and compression steps follows this list):
- Tight integration with existing applications, and use of corporate directory structures for access control;
- Granular file deduplication, which eliminates duplicate copies of files to save storage space;
- Compression, which saves space by reducing the size of files as they are stored;
- Military-grade encryption, which encrypts files stored on disk;
- Unlimited snapshots, which are point-in-time views of all files stored so storage administrators can return to earlier versions of files;
- Efficient bandwidth management.
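To illustrate how the deduplication and compression features above typically work together, here is a generic sketch of content-hash deduplication with compression before upload; it is not any vendor’s actual code, and the in-memory structures stand in for what a real product would persist (and encrypt).

```python
import hashlib
import zlib

# In-memory stand-ins for the controller's local index and the cloud object
# store; a real product would persist both and encrypt each chunk before upload.
chunk_index = {}    # content hash -> object key
object_store = {}   # object key -> compressed bytes


def store_chunk(chunk: bytes) -> str:
    """Store a chunk only if its content hash has not been seen before."""
    digest = hashlib.sha256(chunk).hexdigest()
    if digest in chunk_index:
        return chunk_index[digest]              # duplicate data: reuse the existing object
    key = f"chunks/{digest}"
    object_store[key] = zlib.compress(chunk)    # compression before storage
    chunk_index[digest] = key
    return key


if __name__ == "__main__":
    first = store_chunk(b"quarterly report" * 1000)
    second = store_chunk(b"quarterly report" * 1000)  # identical content
    print(first == second, "objects stored:", len(object_store))  # True, 1
```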
As on-ramps to cloud storage, cloud storage controllers also enable users to determine which files are cached locally and which are stored only in the cloud. The cloud storage controller itself incorporates multiple terabytes of storage, and users can “tier” their data for various access priorities – a file can remain in memory, be stored in solid-state storage, be stored on local hard disk, or be stored in the cloud. Using the controller’s tiering and caching mechanisms, users can intelligently cache frequently accessed files or use policies to keep specific files in the first tier of storage. With these tools and a feature called pinning, users can ensure that certain files are always available at LAN speeds.
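A simplified tiering and pinning policy might look like the sketch below; the tier names, time windows, and data structures are invented for illustration and do not describe any particular product’s logic.

```python
import time
from dataclasses import dataclass
from typing import Optional

# Illustrative thresholds; real controllers tune placement per deployment.
SSD_WINDOW = 24 * 3600        # accessed within a day -> keep on flash
HDD_WINDOW = 7 * 24 * 3600    # accessed within a week -> keep on local disk


@dataclass
class FileState:
    path: str
    last_access: float
    pinned: bool = False      # "pinning" forces LAN-speed availability


def placement(f: FileState, now: Optional[float] = None) -> str:
    """Pick a tier: pinned and recently used files stay local, cold files go cloud-only."""
    now = time.time() if now is None else now
    age = now - f.last_access
    if f.pinned or age < SSD_WINDOW:
        return "local-ssd"
    if age < HDD_WINDOW:
        return "local-hdd"
    return "cloud-only"       # still visible in the file system, fetched on demand


if __name__ == "__main__":
    now = time.time()
    print(placement(FileState("/cad/model.dwg", now - 3600), now))                          # local-ssd
    print(placement(FileState("/archive/2009.zip", now - 90 * 86400), now))                 # cloud-only
    print(placement(FileState("/legal/contract.pdf", now - 90 * 86400, pinned=True), now))  # local-ssd
```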
Another aspect of the most advanced cloud storage controllers is that they work well in distributed organizations. Each office’s data center can host a cloud storage controller so the entire company has access to all of its data. Through use of a global file system, data on any cloud storage controller is available through any other cloud storage controller, and changes to files are automatically and instantly updated across the organization.
In short, cloud storage controllers eliminate the barriers to cloud storage adoption: they make cloud file storage possible, encrypt data in transit and in the cloud to address security concerns, keep data available at all times, and deliver the performance of local LAN-attached storage.
Now, companies that want to reach for cloud storage can make it happen quickly and easily.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 12:31p |
Internap to Move Out of Major Manhattan Data Hub
The exterior of 111 8th Avenue, one of the premier carrier hotels in Manhattan.
Internap Network Services will move a data center out of a prominent Manhattan data hub and relocate customers to a new facility it is building in New Jersey, the company said this week. Internap’s decision to migrate its operations out of 111 8th Avenue illustrates the shifting tides in the greater New York data center market, which has seen a flurry of new projects emerge in the wake of Superstorm Sandy and questions about Google’s plans for 111 8th Avenue.
Last October Internap announced that it would build a 100,000 square foot data center in Secaucus, a growing data center hub in northern New Jersey. At the time, the company said it expected that its two Manhattan data centers would near their capacity in the next 12 months. But in Thursday’s earnings call, Internap said it plans to migrate its operations out of a 75,000 square foot facility in 111 8th Avenue and move them into the Secaucus facility. The New Jersey site will open late this year, allowing nearly a year to move customers between facilities.
“Prior to the lease expiration at the end of 2014 for our data center at 111 8th Avenue, we will migrate our colocation and hosting infrastructure into our new data center in Secaucus,” said Eric Cooney, President and CEO of Internap, who said there were “various reasons” the company decided not to continue at 111 8th Avenue.
Speculation About Google’s Plans
Google’s stance on data center tenants at 111 8th has been the subject of much speculation since Google acquired the building in 2010 for $1.9 billion. “In the wake of that deal, there is growing concern that as these data centers’ leases expire Google will take over the space for its own use,” The New York Times noted last year. That hypothesis will likely gain support from the exit by Internap, which helps accelerate customers’ web performance, yet is checking out of one of the Internet’s leading connectivity hubs.
111 Eighth Avenue is among the world’s most wired buildings, and one of the two key Internet intersections in Manhattan, along with 60 Hudson Street. The 2.9 million square foot property occupies an entire city block in Chelsea and houses major data center operations for Digital Realty Trust, Equinix, Telx and many other providers and networks.
After initial speculation that Google acquired the building to control a strategic Internet intersection, the company said it will use the building for office space for its growing business operations in New York.
“It’s not about the ‘carrier hotel’ space,” said Jonathan Rosenberg, Google’s Senior Vice President for Product Management, in a 2011 earnings call. “We have 2,000 employees on site. It’s a big sales center, but also a big engineering center. With the pace at which we’re growing, it’s very difficult to find space in New York. There are very few buildings in New York that can accommodate our needs. This gives us a lot of control over growing into the space.”
Google declined comment on Internap’s decision to move out, and whether Google is seeking to eventually occupy third-party leased space at the building. Internap had no additional comment on its decision beyond Cooney’s comment that it was guided by “various reasons.”
Not Driven Solely By Cost
But during the earnings call Thursday, Cooney suggested that the move wasn’t driven solely by economics. Internap operates its own data centers, but also leases space from third parties. The company has been shifting its focus from leased space to company-owned data centers, which offer better profit margins. But Cooney noted that Internap’s facility at 111 8th was already a company-controlled data center.
“In terms of the cost savings from a pure colo in 111 8th to colo in the Secaucus data center, there’s probably a modest (cost) improvement, but not a massive improvement, because 111 8th isn’t a partner data center for us,” Cooney told analysts. “Certainly, rents are cheaper in Secaucus, but recognize that our 111 8th rent was derived from well over 10 years ago in terms of the market pricing in New York. So clearly, current rates are significantly higher than they were at the time we negotiated that deal. So again, we’ll still see modest costs or margin benefit, but probably not as dramatic as you might expect going from a partner facility into a company-owned facility.”
Internap’s space at 111 8th Avenue is one of two data centers the company operates in Manhattan. The other is at 75 Broad in Lower Manhattan, one of the buildings that experienced flooding problems during Superstorm Sandy last October.
Internap will open its Secaucus data center in the fourth quarter of 2013. The first phase will be 13,000 square feet, suggesting that it may build additional phases quickly as it migrates customers from the Manhattan site.
The Manhattan data center market has seen a flurry of new projects since Google’s acquisition of 111 8th Avenue, including Sabey’s Intergate.Manhattan (375 Pearl Street), new space from DataGryd at 60 Hudson Street, a new Telehouse facility at 85 10th Avenue, an expansion by 365 Main, a renovation at 325 Hudson Street, and most recently the announcement that Telx will build a new data center at 32 Avenue of the Americas. There have also been new construction projects in northern New Jersey, with new buildings underway for CoreSite and Digital Realty as well as Internap, Telx and IO. | | 2:30p |
Video: Jon Koomey on Predictive Modeling
Data center energy expert Jonathan Koomey, a research fellow and consultant, recently spoke about predictive data center analysis in a talk titled “Why Predictive Modeling is Essential for Managing a Modern Data Center Facility” at Data Center Dynamics in San Francisco. He stated that the business problem amounts to the disconnect between the way IT services are supposed to support business value and the way decisions about their deployment are made. He also spoke about today’s trend toward using computer models to manage data centers and predict data center capacity. He uses a “Tetris” metaphor: blocks of capacity don’t fit neatly together, leaving “voids” of lost capacity, so data center capacity becomes fragmented over time, stranding both capacity and CapEx. The video runs 34 minutes.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube. | | 3:00p |
Locomotive Factory to Data Factory: Interxion’s PAR7 Data Center Location Has Rich History
Interxion’s newest data center campus in France has a rich history. Located in La Courneuve, a suburb 8.3 km northeast of the center of Paris, the area played an integral role during the heart of the industrial revolution in the 19th and early 20th centuries. Now, it’s part of the digital revolution.
European data center specialist Interxion’s PAR7 is part of an ongoing regeneration of La Courneuve and its transformation from a former industrial area to a vibrant technology center. The data center occupies the former Corpet Louvet plant, which manufactured steam locomotives. The company produced 1,962 locomotives during one of history’s major turning points. Two families, Corpet and Louvet, joined forces to launch the first steam locomotive in France.
Old Industrial Footprint
“We took over a piece of land that was a very old commercial manufacturing plant. We rebuilt it completely,” said Fabrice Coquio, Managing Director of Interxion France. “The industrial revolution started with trains, then in the early 20th century came electricity. Now it’s the digital age. The site encapsulates the evolution of industry in Paris over the last 200 years. The East end is industrial.”
The location’s infrastructure makes it ripe for data center use. “Things are changing, but there are things remaining,” said Coquio. “We need electricity – a lot of data centers are establishing themselves in former industrial areas.” As many in the data center industry know, a data center site needs available power, just as bricks-and-mortar industries such as mills and manufacturing plants required power. “The data center is the electrical factory of the future,” Coquio added.
New from the Old
“We had to destroy most of the former, very old building,” said Coquio. “We still kept two elements: the first is the former office of the plant manager. The second, there was a kind of weight in front of this building to check how many kilos the locomotives were representing before shipping. We kept this as well.” While the data center is a new facility with a very modern design, the company kept these two flourishes as a reminder of the location’s rich history. They give the location a distinctive personality and show the evolution of industry.
 A view of the factory floor of the Corpet Louvet plant while it was in operation building steam locomotives.
$165 Million Facility Investment
“We’re a data center specialist. We design, build and operate data centers. We do not want to compete with our customers; we claim to be what we call neutral data centers,” said Coquio.
Interxion invested the equivalent of $165 million in PAR7. It’s a very large data center, with the company managing over 4,500 square meters of data hall space, and it offers high power, with 65 megawatts available. It’s a high-density data center, offering up to 25-30 kW per rack. “It’s good for customers around density, cloud, or digital media services,” said Coquio. The site houses three Internet exchanges, making it a major hub.
One unique feature of the facility is that internal power distribution runs at 20,000 volts. This is more costly than distributing at 400 volts, the industry standard. “The higher the voltage, the less loss,” said Coquio. “We save 0.2 of PUE, so it’s not a minor advantage. PUE is the combination of many designs and solutions.” Several other design considerations beyond power distribution contribute as well, but the high-voltage approach is unique to Interxion in France. “We are the only one in France that is able to build and maintain a power distribution like this,” said Coquio. “It requires a lot of expertise to run.”
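As a back-of-the-envelope illustration of how lower distribution losses show up in PUE, consider the sketch below; the load and loss figures are assumptions chosen to reproduce the 0.2 delta Coquio cites, not Interxion’s measured numbers.

```python
# PUE = total facility power / IT power, so cutting distribution losses
# lowers the numerator directly. All figures below are assumptions.
IT_LOAD_KW = 10_000              # assumed IT load
OTHER_OVERHEAD_KW = 4_000        # assumed cooling and other overhead


def pue(distribution_loss_fraction: float) -> float:
    losses_kw = IT_LOAD_KW * distribution_loss_fraction
    return (IT_LOAD_KW + OTHER_OVERHEAD_KW + losses_kw) / IT_LOAD_KW


low_voltage = pue(0.25)     # assumed losses with conventional 400 V distribution
high_voltage = pue(0.05)    # assumed losses when distributing at 20,000 V
print(round(low_voltage, 2), round(high_voltage, 2), round(low_voltage - high_voltage, 2))
# 1.65 1.45 0.2 -> a 0.2 PUE difference, the order of magnitude Coquio describes
```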
The site features 2N+1 redundancy on top of high density. It offers space from a single rack up to 1,000 square meters. PAR7 officially opened on the 29th of November and installed its first customer in 2012. The build-out lasted exactly 11 months; Interxion announced the facility in the summer of 2011.
The France Data Center Market
Interxion is a major player in France. It has the most data centers there and owns a sizeable chunk of the market. The company has a total of seven data centers around Paris. “It’s the most dense region in Europe,” said Coquio.
While the economic crisis is affecting all of Europe, Coquio is optimistic. “Last year wasn’t the best year, but we clearly see the market recovering,” said Coquio. “We’re still talking about a market growing 15-20 percent.”
Coquio founded Interxion France in 1999, and it has grown nicely over the years. France is the largest entity of the group at 19 percent of revenues. Second biggest is Germany, followed by Holland and the UK. The company has 27 percent of the Paris market.
“In the European data center market, the data center is always focused on the capital cities,” said Coquio. This holds true in France, but Coquio notes that the exception to this rule is Germany, which has notable data center concentration in Frankfurt, Dusseldorf and Munich.
Interxion considers itself a neutral data center provider, in that it specializes in facilities and does not want to compete with its customers when it comes to cloud. “A bit more than 30 percent of our customers are multi-site,” said Coquio. The company’s multiple locations in Europe play a large part in its growth.
“Our customer base in Paris is quite different than what we have elsewhere,” said Coquio. “It’s much more about national corporations. We tend to have some specialties in western Europe. Paris is organized around connectivity, and our customers are large systems integrators like Cap Gemini. The cloud specialists are in London or Amsterdam.” | | 3:07p |
New CEO Choksi Sees Growth Ahead for Vantage
The exterior of the first Vantage data center in Quincy, Washington. New CEO Sureel Choksi said Vantage is focusing on boosting its sales and marketing efforts. (Photo: Vantage)
Sureel Choksi says it’s no accident that Vantage Data Centers quickly established itself as a player to be reckoned with in the highly competitive Silicon Valley market.
“Our strength is the Vantage vision,” said Choksi, who was recently named the new Chief Executive Officer at Vantage. “It’s culminated in the best data centers in the industry, facilities that are customer oriented. Our growth over the last three years has been outstanding.”
Since its launch in 2011, Vantage has leased nearly 20 megawatts of space in its campus in Santa Clara, and another 6 megawatts at a new single-tenant facility in Quincy, Washington. Vantage has been scrappy in its pursuit of deals, but says its success in Silicon Valley is related to its tenant focus and the ability to customize its facilities to customer requirements.
Building on that initial success will require a sustained focus on sales and marketing. That’s where Choksi comes in. He’s a 15-year veteran of the telecom and data center industry, with a resume that includes a decade at Level 3, including stints as both Chief Financial Officer and Chief Marketing Officer. He previously worked at Elevation Data Centers before joining Silver Lake, the private equity fund backing Vantage.
“It’s about how we continue to grow,” said Choksi. “We’re going to bring in some additional sales and marketing resources.”
Choksi succeeds Jim Trout, a co-founder of Vantage who previously held key leadership roles at CoreSite and Digital Realty Trust. Trout will transition to a new role as Chief Technology Officer, with a focus on data center design and construction and corporate strategy.
“Jim is a real visionary in the industry, dating from 10 years ago when he started what would become CoreSite,” said Choksi. “When Jim started the company, it was about taking all the lessons he had learned and incorporating them into Vantage. What Jim loves to do is design and construct data centers. He’s less excited about sales and marketing, which is my passion. This makes more sense. We have a lot of growth ahead.”
Opportunity in Both Markets
Choksi sees opportunity in both of Vantage’s campuses. Santa Clara is a busy market in which nearly all of the major players in the wholesale and colocation sectors are marketing space. Recent leasing has reduced the available footprint at both Vantage and DuPont Fabros Technology, which had the largest chunk of wholesale space on the market. Vantage’s leasing focus is on the third building on its campus, known as V1.
“The market has been competitive in Santa Clara,” said Choksi. “From my standpoint, it’s good that there are no more large chunks of supply. For us, it’s about investing more heavily in sales and canvassing the market. Our data centers are very attractive. We need to get more folks through them. If we get you in there, most of the time we’ll close a deal.”
The Vantage campus in Quincy offers an opportunity for larger space requirements. With its low-cost hydro power, Quincy has been an attractive market for companies with web-scale operations, including Microsoft, Yahoo and Dell.
“Quincy has significant additional power at the site,” said Choksi. “We envision being able to support other large customers, either in a powered shell or a turnkey facility. We’ve got a lot of land and a lot of access to power. Real estate is very dear in Santa Clara. In Quincy, there’s a lot.”
Choksi said Vantage may look for opportunities to expand its model to other markets. But not just yet.
“I’ve only been in this job for four days so far,” Choksi said with a chuckle. | | 6:08p |
OnApp’s Federation PoPs Support Tomorrowland Festival 
Tomorrowland is a massive electronic music festival that occurs in Boom, Belgium. Never heard of it? It sold out days before the event and had a record attendance of 120,000 visitors over two days. So it’s like Woodstock, only with more “beeps” and “boops.”
While attendees were most likely powered by unknown substances, the festival’s website was powered by local hosting provider Stone.is and OnApp, a cloud service provider and content distribution network that supplied the capacity to handle a global audience.
“OnApp’s federated CDN is the only way we could provide enough global capacity for this size event,” said Stein Van Stichel, Founder and CEO of Stone.is. “We optimized Tomorrowland’s sites to ensure there were no issues with the registration and activation process, and worked with OnApp to ensure we had the global coverage we needed. Using OnApp CDN, we could distribute Tomorrowland traffic from cities all over the world for the launch, and scale back when ticket sales closed. With access to capacity from the OnApp federation, we can design hosting packages for high availability and high load, and guarantee uptime for global brands like Tomorrowland.”
The festival is a perfect example of a customer with extreme traffic fluctuations and a globally dispersed audience. Fans came from 214 countries, making for a more global audience than attended the Olympics. How does one handle bursts of traffic spread out over the world? The local hosting provider Stone.is was able to leverage OnApp’s federated CDN across 46 Points of Presence (PoPs).
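As a rough illustration of the routing idea behind a federated CDN (not OnApp’s actual request-routing logic; the PoP names and latency figures are made up), a request router simply sends each visitor to the lowest-latency point of presence:

```python
from typing import Dict


def pick_pop(latency_ms_by_pop: Dict[str, float]) -> str:
    """Return the PoP with the lowest measured round-trip latency for a visitor."""
    return min(latency_ms_by_pop, key=latency_ms_by_pop.get)


# Hypothetical measurements the router might hold for one visitor in Brazil.
measurements = {"sao-paulo": 18.0, "miami": 120.0, "frankfurt": 210.0}
print(pick_pop(measurements))  # sao-paulo
```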
The host was able to support one million fans from around the globe who pre-registered for 180,000 Tomorrowland tickets. Two million customers visited the day ticket sales opened, and they were forwarded to Tomorrowland’s ticket system via websites hosted by Stone.is. The sites served 4.6 million web pages, with a peak of 1.4 million page views in a single hour on ticket sales day, and record-breaking ticket sales in which tickets sold in just one second.
OnApp has been building up its federated infrastructure, creating a CDN capable of competing with the big players. Examples like Tomorrowland show how hosting providers can stand up infrastructure that was previously unavailable, or too costly, for a traditional provider.
The OnApp federation offers more than 170 points of presence and the potential to add capacity from the 2,000+ OnApp clouds deployed to date across 87 countries. It was a perfect fit for a music festival with a large global fanbase. | | 7:54p |
Schneider Electric Expands Agreement With Equinix to Germany and Italy
Global energy management specialist Schneider Electric has expanded a global agreement to help colocation provider Equinix manage its energy use. The agreement now includes Equinix locations in Germany and Italy.
Schneider Electric will provide its Energy Management Procurement Services (EMPS) and sustainability services to the data center giant. The addition of Germany and Italy to the agreement means that Schneider Electric provides its full range of EMPS services to 110 Equinix facilities around the globe.
The partnership has saved Equinix $11 million since March of 2011, according to Schneider Electric.
“Schneider Electric is the first data center provider with the ability to work with our customers on both the supply and demand side of the energy equation,” said Steve Wilhite, Senior Vice President, Professional Services, Schneider Electric. “By working with a global client like Equinix, we saw the opportunity to offer new approaches to energy procurement which would be able to provide their company significant cost savings, in this case millions of dollars.”
Schneider Electric is providing Equinix with a range of services. It’s giving Equinix a more flexible approach to energy buying, with full-service energy management and procurement capabilities, utility bill management, comprehensive greenhouse gas emissions reporting, annual budgeting and risk management services, electricity variance reporting, and access to Schneider Electric’s Resource Advisor energy management platform.
Schneider Electric raises a very interesting paradox in IT today: despite exponentially rising demand for faster, more complex data center services, there is increasingly intense public pressure to operate highly efficient and sustainable facilities. Data centers and data center needs are growing, and businesses are asking how they will sustainably and affordably power these growing needs. Schneider Electric believes the keys to solving this dilemma and achieving complete and profitable sustainability are intelligence, efficiency and integration. | | 8:52p |
Data Center Maintenance: Hot Topic for Wall Street?
Data center maintenance may not seem like a sexy topic for Wall Street. But during this earnings season, maintenance costs have been a key discussion item during the earnings calls for three of the industry’s largest data center operators.
Why the sudden focus on maintenance? The attention has been driven by a headline-making May presentation in which hedge fund Highfields Capital Management asserted that investors should short shares of Digital Realty Trust, saying the huge data center developer was understating the future investment in facilities that would be required to support its enterprise customers. Digital Realty responded publicly, saying Highfields was “mischaracterizing and drawing inaccurate conclusions” from its disclosures.
Nonetheless, Digital Realty (DLR) appears to want to put the controversy behind it. In its quarterly conference call with analysts Friday, the company said it will no longer treat maintenance costs below $10,000 as operating expenses rather than capital expenditures, as had previously been the case. Digital Realty Chief Financial Officer William Stein said the change would be more in line with GAAP accounting practices. The company also will make additional documentation of its CapEx spending available in its quarterly reporting, Stein said.
“Today, we are capitalizing what’s appropriate to capitalize down to any amount, which is really consistent with GAAP,” said Stein. “The $10,000 and lower policy was a holdover from our IPO days when we had limited resources and there was a question of our ability to track, from a capitalization standpoint, expenditures of $10,000 or less.”
Shortly after the completion of the call, shares of Digital Realty slipped more than 6 percent.
DuPont Fabros
On its earnings call Thursday, DuPont Fabros (DFT) President and CEO Hossein Fateh described maintenance expenses as a “topic of major interest in our industry” and emphasized how the company’s lease structure protects it against unexpected maintenance costs over the long run.
“Since we’re a ground-up developer, we have not acquired any properties with existing leases,” Fateh said. “Every lease that we have signed is with a lease that we have written. All our leases are triple-net. This allows us to recover all our operating expenses. Should these expenses go up, we’ll be reimbursed by our tenants.”
Under a triple net lease, tenants pay for property taxes, insurance and maintenance costs, insulating the landlord from these expenses. An example: DuPont Fabros recently had to do maintenance on batteries that support backup systems for its VA4 data center in Ashburn, Virginia. “We replaced the batteries for the building, and we’re able to pass this replacement cost through to our tenants over the useful life of the batteries, which is 12 years,” said Fateh.
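For a sense of how that pass-through works in practice, here is a back-of-the-envelope example; the replacement cost is an assumption, since the article gives no figure, while the 12-year useful life comes from Fateh’s comment.

```python
# Assumed replacement cost for illustration only; the article gives no figure.
BATTERY_REPLACEMENT_COST = 1_200_000   # dollars, hypothetical
USEFUL_LIFE_YEARS = 12                 # useful life cited by Fateh

annual_pass_through = BATTERY_REPLACEMENT_COST / USEFUL_LIFE_YEARS
monthly_pass_through = annual_pass_through / 12
print(f"${annual_pass_through:,.0f} per year, ${monthly_pass_through:,.0f} per month across tenants")
# $100,000 per year, $8,333 per month across tenants
```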
Fateh said tenants are willing to pay these maintenance expenses because “it’s in their best interest. Our customers do not want us to be financially incentivized to cut corners on maintenance. They want us to consistently maintain and renew our assets. This assures them maximum efficiency and uninterrupted service.”
Equinix
Equinix (EQIX), the largest player in the colocation industry, included an extra slide in its earnings presentation to address questions from investors, according to CFO Keith Taylor.
“It’s fair to say the economic life of our IBXes (data centers) and these critical assets will likely extend to 30 years or greater, given the level of spend in both our predictive and preventive maintenance programs,” said Taylor. “Overall, our maintenance capital was approximately 2% of our revenues, consistent with our expectation, but there should be no meaningful reinvestment requirement in our IBXes.”