Data Center Knowledge | News and analysis for the data center industry

Monday, October 21st, 2013

    11:30a
    Data Center Jobs: ViaWest

    At the Data Center Jobs Board, we have a new job listing from ViaWest, which is seeking a Data Center Director in Richardson, Texas.

    The Data Center Director is responsible for master planning, including power distribution and cooling strategies for each data center in the region; capacity forecasting and monthly reporting; development of data center standards; maintenance oversight for all regional data centers to ensure best practices are implemented; design and implementation of emergency response procedures and efficiency initiatives; and budgetary planning. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    1:00p
    Introducing Secure Solutions for Data Center Connect

    The modern data center has evolved to support numerous technology platforms, which now house more users, more devices and far more information. With the increase in cloud utilization and virtualized servers came the need to better secure workloads running within the modern data center. Organizations are now shifting to real-time transfer of data between data centers and implementing on-the-fly data encryption with key management for security. Physical layer encryption is the preferred method for securing data across the data center connect (DCC) WAN, which is deployed over optical fiber and DWDM for converged LAN and SAN traffic. Optical DWDM solutions enable the highest throughput for DCC at the lowest TCO.

    With that comes the conversation around efficient network traffic distribution and both logical and physical security at the networking layer. In this white paper from Alcatel-Lucent, centralized and distributed security models are explored to help achieve even greater levels of data integrity. To enhance performance and create a solid security platform, this paper outlines:

    • Centralized, compliant authentication and authorization
    • Network and key management
    • Data confidentiality
    • Data integrity
    • Data availability

    There’s no doubt that the trend to digitize the modern business will continue to grow. With more users consuming information housed directly within the modern data center, administrators will need switching and networking platforms with increased optical and data transmission resources. Download this white paper today to learn how these data center security challenges can best be addressed with the Alcatel-Lucent Secure Data Center Connect Solution, designed to flexibly support the full range of DCC requirements. Data will always be an integral part of any organization, and now more than ever it is important to find ways to secure this information and facilitate its safe delivery. This is why deploying intelligent devices at the data center connect layer allows for greater connectivity between data centers and enables service providers to deploy high-bandwidth, low-latency encrypted services.
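
    The white paper's specifics aside, the core idea of encrypting data in flight while preserving its integrity can be illustrated with a short, hedged sketch. The example below is not the Alcatel-Lucent solution (which performs encryption in optical transport hardware with centralized key management); it simply shows authenticated encryption of a payload with AES-256-GCM using Python's cryptography library, with the key assumed to come from an external key manager.

        # Conceptual sketch only: authenticated encryption for data in transit.
        # A real DCC deployment encrypts at the physical/transport layer in
        # hardware; this just illustrates confidentiality plus integrity.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)   # in practice, issued and rotated by a key manager
        aesgcm = AESGCM(key)

        def protect(payload: bytes, link_id: bytes):
            """Encrypt a payload; the link ID is authenticated but not encrypted."""
            nonce = os.urandom(12)                  # must be unique per message
            return nonce, aesgcm.encrypt(nonce, payload, link_id)

        def unprotect(nonce: bytes, ciphertext: bytes, link_id: bytes) -> bytes:
            """Decrypt and verify; raises InvalidTag if the data was tampered with."""
            return aesgcm.decrypt(nonce, ciphertext, link_id)

        nonce, ct = protect(b"replicated SAN block", b"dcc-link-01")
        assert unprotect(nonce, ct, b"dcc-link-01") == b"replicated SAN block"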

    1:15p
    The HealthCare.gov Experience: Why Critical Systems Fail
    Richard Cook, Royal Institute of Technology, Stockholm

    Richard Cook, from the Royal Institute of Technology, Stockholm, gave a talk at Velocity 2013 titled, “Resilience in Complex Adaptive Systems: Operating at the Edge of Failure.” (Photo by Colleen Miller.)

    NEW YORK - The amazing thing isn’t that systems like the HealthCare.gov government web site can fail in spectacular fashion, says Dr. Richard Cook. It’s that it doesn’t happen more often.

    “The systems we build are so expensive, and so important that we always seem to run at the edge of failure,” said Cook, an expert on engineering failures from the Royal Institute of Technology in Stockholm. “Every system always operates at its capacity. As soon as there is some improvement or some new technology, we stretch it.”

    Cook’s presentation on reliability in complex systems was one of the highlights of last week’s Velocity 2013 conference, which focused on web performance and how to avoid the kind of headaches being experienced by healthcare.gov, the online insurance marketplace created by the Affordable Care Act. The site has been plagued by problems, with many users unable to access the site, and others stymied by enrollment problems.

    Rush to Fix A Broken Web Site

    “The experience on HealthCare.gov has been frustrating for many Americans,” said the Department of Health & Human Services in a blog post. “Some have had trouble creating accounts and logging in to the site, while others have received confusing error messages, or had to wait for slow page loads or forms that failed to respond in a timely fashion.”

    Today President Barack Obama will announce steps to address the problems with HealthCare.gov, including additional phone support for enrollees and initiatives to fix the broken elements of the web application.

    “Our team is bringing in some of the best and brightest from both inside and outside government to scrub in with the team and help improve HealthCare.gov,” the department said.

    It’s a familiar refrain for Cook. “When you have a healthcare.gov experience, everybody says ‘I don’t care what it costs, get it back up!’” he said. “I don’t care how many people you have to put there, get it up! You don’t care (about cost) anymore, because you’ve got a big problem.”

    But Cook says even accidents and downtime rarely have a permanent effect on the tendency to push systems to “the hairy edge of failure.”

    Pushing the Operational Boundaries

    Cook brings a unique perspective to systems failure. He’s an anaesthesiologist and expert on healthcare safety who has also worked in engineering and supercomputing system design. His research has been used in improving systems ranging from semiconductor manufacturing to military software systems. He says it is the nature of complex systems to establish an operating comfort zone and then gradually push the boundaries.

    “We make an imaginary line within the accident boundary that is our margin of safety,” said Cook. “We don’t have a lot of accidents, so we don’t have a good idea of exactly where that boundary exists.

    “So we’re always flirting with the margin,” he continued. “What is surprising about this world is not that there are so many accidents. It is that there are so few. The thing that amazes you is not that your system goes down sometimes. It is that it’s up at all.”

    The Front Lines of Downtime

    Many of the attendees at Velocity are working inside that margin, seeking to coax every ounce of efficiency and performance out of web sites and applications. Some are actively engaged in defining the boundaries of failure, such as Netflix with its use of the “Chaos Monkey” and other tools that introduce random failures to test the resiliency of their systems.
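
    The mechanics behind such fault-injection tools are simple to sketch. The toy loop below is not Netflix's Chaos Monkey; it is a minimal illustration of the idea, assuming a hypothetical terminate_instance() hook that, in a real tool, would call a cloud provider's API.

        # Toy fault-injection loop: at each interval, every instance group has a
        # small chance of losing one randomly chosen member, so recovery paths
        # are exercised continuously rather than only during real outages.
        import random
        import time

        def terminate_instance(instance_id):
            # Hypothetical hook; a real tool would call the cloud provider's API.
            print("terminating", instance_id)

        def chaos_loop(groups, probability=0.1, interval_s=3600.0):
            while True:
                for name, instances in groups.items():
                    if instances and random.random() < probability:
                        victim = random.choice(instances)
                        print("[chaos] group:", name)
                        terminate_instance(victim)
                time.sleep(interval_s)

        # chaos_loop({"web": ["web-1", "web-2"], "api": ["api-1", "api-2", "api-3"]})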

    Cook says Internet infrastructure will only become more important, raising the bar for reliability testing.

    “The future of all your systems, although you do not realize it right now, is safety,” said Cook. “Your web applications systems are becoming business-critical systems. The future of your systems is to be involved intimately with some level of safety. All of your systems will become safety critical.”

    Here’s a video of Cook’s talk at Velocity, which includes a look at tools for understanding the factors in this “drift” toward the margin of safety. This video runs 19 minutes.

    1:52p
    Facebook Not Working, So Twitter Fills With Facebook Humor

    A map of the global audience for Facebook visualizes the geographic spread of its user base. (Source: Facebook)

    Yes, Facebook is experiencing performance problems, and it appears to be a widespread issue.

    Facebook users can load their page, but are unable to post status updates. It appears to be the most significant performance problem since a 2010 outage in which a configuration change created a feedback loop that overwhelmed a database cluster. UPDATE: As of 10:15 am Eastern time, the problems appear to have been resolved.

    When a service with 1.15 billion users is not working, what are users to do? The answer is to shift to Twitter and other social media sites and talk about the fact that Facebook is down. Many users have responded with humor rather than angst or outrage.

    Here’s a sampling of the reaction on Twitter (embedded tweets not reproduced here): as usual, memes were invoked, along with laments about the collateral damage.

    2:14p
    10 Data Projects Not to Leave Off the Schedule in 2014

    Jim McGann is vice president of information management company Index Engines. Connect with him on LinkedIn.

    JIM McGANN
    Index Engines

    Everyone’s talking about unstructured data lately – the cost, the risk, the massive growth – but little is being done to control it.

    Analyst group IDC estimates unstructured data growth at 40 to 60 percent per year, a statistic that is not only startling, but puts a great deal of emphasis on the need to start managing it today or at least have it on the schedule for 2014.

    With budgets tightening – often to pay for storage costs – data center managers are struggling to find the highest impact projects that will see an immediate ROI. While there’s no one project that will reclaim all of the unstructured data rotting away in the data center, there are 10 crucial data projects not to leave off the schedule in 2014.

    1. Clean up abandoned data and reclaim capacity: When employees leave the organization, their files and email languish on networks and servers. With the owner no longer available to manage and maintain the content, it remains abandoned and clogs up corporate servers. Data centers must manage this abandoned data to avoid losing any valuable content and to reclaim capacity.

    2. Migrate aged data to cheaper storage tiers: As data ages on the network it can become less valuable. Storing data that has not been accessed in three years or longer is a waste of budget; migrate this data to less expensive storage platforms. Aged data can represent 40 to 60 percent of current server capacity. (A simple scan for such files is sketched after this list.)

    3. Implement accurate charge-backs based on metadata profiles and Active Directory ownership: Chargebacks allow the data center to accurately recoup storage expenses and to work with departments to develop a more meaningful data policy, including purging of what they no longer require.

    4. Defensively remediate legacy backup tapes and recoup offsite storage expenses: Old backup tapes that have piled up in offsite storage are a big line item on your annual budget. These tapes can be scanned, without the need for the original backup software, and a metadata index of their contents generated. Using the metadata profile, relevant content can be extracted and archived, and the tapes can be defensibly remediated, eliminating offsite storage expenses.

    5. Purge redundant and outdated files and free up storage: Network servers can easily consist of 35 to 45 percent duplicate content. This content builds over time and results in wasted storage capacity. Once duplicates are identified, a policy can be implemented to purge what is no longer required, such as redundant files that have not been accessed in over three years, or those owned by ex-employees. (A hash-based approach to finding duplicates is sketched after this list.)

    6. Audit and remove personal multimedia content (i.e. music, video) from user shares: User shares become a repository not only of aged and abandoned files, but also of personal music, photo and video content that has no value to the business and may in fact be a liability. Once this data is classified, reports can be generated showing the top 50 owners of this content, total capacity and location. This information can be used to set and enforce quotas and to work with the data owners to clean up the content and reclaim capacity.

    7. Profile and move data to the cloud: Many data centers have cloud initiatives in which aged and less useful business data is migrated to more cost-effective hosted storage. Finding the data and on-ramping it to the cloud, however, is a challenge if you lack an understanding of your data: who owns it, when it was last accessed, the types of files, and so on.

    8. Archive sensitive content and support eDiscovery more cost effectively: Legal and compliance requests for user files and email can be disruptive and time consuming. Finding the relevant content and extracting it in a defensible manner is the key challenge. Streamlining access to critical data so you can respond to legal requests quicker not only lessens the time burden, but also saves time and money when locating content.

    9. Audit and secure PII to control risk: Users don’t always abide by corporate data policies. Sharing sensitive information containing client social security and credit card numbers, such as tax forms, credit reports and applications, can easily happen. Find this information, audit email and servers, and take the appropriate action to ensure client data is secure. Some content may need to be relocated to an archive, encrypted or even purged from the network. Managing PII ensures compliance with corporate policies and controls the liability associated with sensitive data. (A basic pattern scan for this kind of content is sketched after this list.)

    10. Manage and control liability hidden in PSTs: Email contains sensitive corporate data, including communications of agreements, contracts, private business discussions and more. Many firms have email archives in place to monitor and protect this data; however, users can easily create their own mini-archives, or PSTs, of content that is not managed by corporate IT. PSTs have caused great pain in litigation when email that was thought to no longer exist suddenly appears in a hidden PST.
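
    As a rough illustration of project 2, the sketch below walks a share and lists files whose last-access time is older than a three-year cutoff, as candidates for migration to a cheaper tier. It assumes access times (atime) are actually being recorded on the filesystem; the path in the example is a placeholder.

        # Sketch: find files not accessed in roughly three years.
        import os
        import time

        THREE_YEARS_S = 3 * 365 * 24 * 3600

        def aged_files(root, max_age_s=THREE_YEARS_S):
            cutoff = time.time() - max_age_s
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        st = os.stat(path)
                    except OSError:
                        continue                      # skip unreadable files
                    if st.st_atime < cutoff:
                        yield path, st.st_size        # migration candidate

        # total_bytes = sum(size for _path, size in aged_files("/mnt/corp_share"))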
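
    For project 5, duplicate detection typically groups files by a content hash and flags any group with more than one member. A minimal sketch:

        # Sketch: group files by SHA-256 content hash; groups with more than one
        # path are duplicate sets and candidates for policy-driven purging.
        import hashlib
        import os
        from collections import defaultdict

        def sha256_of(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    h.update(block)
            return h.hexdigest()

        def duplicate_groups(root):
            by_hash = defaultdict(list)
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        by_hash[sha256_of(path)].append(path)
                    except OSError:
                        continue
            return {digest: paths for digest, paths in by_hash.items() if len(paths) > 1}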
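
    And for project 9, a basic PII scan looks for text matching patterns such as US Social Security numbers or 16-digit card numbers. Real tools add validation (for example Luhn checks) and also scan email stores; this sketch only shows the idea.

        # Sketch: flag files containing SSN-like or card-number-like patterns.
        import os
        import re

        SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
        CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

        def scan_for_pii(root):
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "r", errors="ignore") as f:
                            text = f.read()
                    except OSError:
                        continue
                    if SSN_RE.search(text) or CARD_RE.search(text):
                        yield path   # candidate to encrypt, archive or purge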

    There are a number of ways companies can approach these projects, but to maximize impact in a shorter time frame, a number of file-level metadata tools, sometimes referred to as unstructured data profiling, exist.

    Through file-level metadata such as date, owner, location, file type, number of copies and last-accessed time, data center managers can classify data and put disposition policies in place.
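
    The fields mentioned above map naturally onto a simple profiling pass. The sketch below is not a product implementation; it writes one record per file with basic metadata drawn from POSIX stat information, whereas a commercial profiler would add owner resolution against Active Directory, copy counts and content classification.

        # Sketch: write a CSV metadata profile (one row per file).
        import csv
        import os
        import time

        def profile(root, out_csv):
            fields = ["path", "extension", "size_bytes", "owner_uid", "modified", "last_accessed"]
            with open(out_csv, "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=fields)
                writer.writeheader()
                for dirpath, _dirs, files in os.walk(root):
                    for name in files:
                        path = os.path.join(dirpath, name)
                        try:
                            st = os.stat(path)
                        except OSError:
                            continue
                        writer.writerow({
                            "path": path,
                            "extension": os.path.splitext(name)[1].lower(),
                            "size_bytes": st.st_size,
                            "owner_uid": st.st_uid,
                            "modified": time.strftime("%Y-%m-%d", time.localtime(st.st_mtime)),
                            "last_accessed": time.strftime("%Y-%m-%d", time.localtime(st.st_atime)),
                        })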

    The benefits of managing unstructured data include reduced risk, reclaimed capacity and budget savings. With finances already tight and data growing rapidly, don’t leave these projects off the schedule in 2014.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:30p
    Flash Goes to Wall Street: Nimble Storage Files For IPO

    Taking a company public seems to be popular once again: enterprise flash-optimized hybrid storage company Nimble Storage has filed a registration statement with the SEC for a proposed initial public offering of its common stock. The number of shares to be sold and the price range for the proposed offering have not yet been determined, but the company hopes to raise $150 million and plans to trade under the NYSE symbol NMBL.

    After reporting a run rate of over $100 million in bookings earlier this year, the company reported $53.8 million in revenue for its fiscal year and has generated $50.6 million in the first six months of 2013. Nimble Storage has seen impressive growth in its storage offerings in its five years of business, and enjoys a strong board of directors drawn from large Silicon Valley companies as well as investors such as Sequoia Capital.

    The company sums up its value proposition, how it competes with the entrenched storage giants, and its vision in a paragraph from its S-1 SEC filing:

    “Enterprises and cloud-based service providers today are overwhelmed by numerous storage challenges including increasing costs, capacity and performance tradeoffs, management complexity and data protection issues. These challenges have been exacerbated by key trends in the data center: the rapid proliferation of applications with varying performance requirements, increased use of virtualization and the exponential growth in data. Over the last several years, major technological advancements have been made in flash storage media and data analytics, but traditional storage system providers have been unable to fully capture these improvements into system performance and efficiency at reasonable cost. We believe that a fundamental change to the software architecture underlying storage systems is required to fully take advantage of these advancements.”

    Just a few months ago storage rival Violin Memory filed for a $172.5 million IPO, which followed the earlier Fusion-io IPO. Taking an alternate route to financing, Pure Storage snagged a $150 million Series E funding round with institutional investors.

    3:43p
    Smaller Data Centers and Markets Emerge as M&A Sweet Spot

    Easy access to capital is creating opportunities for both expansion and M&A, says Bill Bradley of Waller Capital.

    Smaller data centers and emerging markets represent the sweet spot for mergers and acquisitions (M&A) for the foreseeable future, according to Bill Bradley of Waller Capital. Bradley believes the combination of access to cheap capital and maturing providers bodes well for the data center industry going forward. There is no bubble or glut coming, but rather smaller, smarter acquisitions on the horizon, he said.

    Waller Capital is not a source of capital, but rather an advisory shop. Bradley joined Waller Capital in 2012 and leads efforts with data center, managed hosting and cloud computing clients. He previously worked at Credit Suisse as head of Telecommunications M&A, where he advised clients on approximately $150 billion of strategic transactions, including the $2 billion sale of Terremark to Verizon. Bradley has deep experience with data center firms.

    One trend Bradley sees is that providers are looking for underserved markets rather than sticking to the major markets like Ashburn, Dallas, and Santa Clara. The overall IT outsourcing trend is spreading to these smaller markets, creating opportunities for data center providers.

    “Smaller markets have been getting a lot of attention the last couple of years,” said Bradley. “One example is Minneapolis. ViaWest is building and Databank bought a business up there. There are others rumored to be coming to the market. It’s a nice example, however, some markets are only capable of sustaining a certain number of providers.”

    Early Movers Gain Advantage

    Who will succeed in these markets, and why?

    “It’s good to be early, and have a good market share,” said Bradley. “You look at smaller markets like Minneapolis, and the necessary attributes exist, but on a smaller scale. You may not have room for five to six competitors. That may change over time, as corporations around markets like Minneapolis begin to outsource their IT. In some markets, the rate of outsourcing is higher. In Minneapolis, the rate of outsourcing is lower.”

    Bradley sees a trend of providers purchasing individual facilities rather than portfolios. This is the result of players wanting to enter specific markets where they see opportunity.

    “Some of the properties out there are still developing and there is investment needed,” said Bradley. “This is a business where, in the recent past, barriers to entry have come down a bit. What we’re finding out there, is that people are buying smaller facilities. We haven’t seen as many portfolios trade – we haven’t seen multiple facilities, in multiple locations change hands as often as in the past. What we’ve seen is one facility in one location. They are selectively acquiring. You see them buying a facility in a specific market.”

    One example Bradley gives of a provider serving smaller markets well is the resurgence of 365 Main.

    “365 Main has access to capital from a few sources, both debt and equity,” said Bradley. “It buys facilities in smaller markets, in smaller locations, and if you talk to Chris Dolan and his team, these are facilities and properties he’s spending money on rebranding, and putting in a salesforce. This kind of highlights the diligence that was needed.”

    People are doing more diligence on these deals, according to Bradley. It’s the age of the smart, targeted acquisition rather than the portfolio acquisition.

    Enterprises Pursue Sale-Leasebacks

    There are a lot of underused enterprise data center assets. Enterprises are waking up to the fact that they are not properly leveraging these facilities, which has resulted in a few trends around the enterprise data center. The first is the sale-leaseback.

    “There are a large number of corporate and government facilities,” says Bradley. “They come to a point where they need to upgrade the facility. When a facility needs a tech refresh, a large corporate will ask itself: do I want to spend all that money, or do I want to have a third party come in and do it right?”

    In a sale-leaseback, the user sells a building and then leases it back from the new owner. This type of transaction allows these companies to switch spending from CapEx to OpEx, and to outsource the facility to a provider whose core competency is running the data center. It looks good on the books, and makes a lot of sense. “From the buyer perspective, they say ‘this gives me an anchor tenant, but I need to upgrade the physical infrastructure.’”

    So with the cost of capital being so low, is there a risk of overbuilding? Bradley doesn’t see this. “There is ability to raise money, but so far we haven’t seen that ‘if you build it they will come’ scene occur. It’s become just-in-time inventory.”

    If there’s a potential risk of excess capacity, it’s in markets that are too friendly. “You have a lot of markets where zoning may be easier, tax incentives may be in place,” said Bradley. “It goes municipality by municipality; potentially, those sorts of things can lead to an excess of supply in advance of demand. If it exists, it seems to be on a market-by-market basis at most.”

    5:45p
    TIBCO Launches Cloud Services

    At its annual TUCON user conference, TIBCO Software (TIBX) announced a series of new cloud offerings that give customers the tools to successfully deploy and manage all of their data in the cloud. These new offerings join TIBCO’s advanced location, business, and data prep analytics, as well as new open APIs, integration tools and management solutions.

    “As cloud technologies become an increasingly large part of businesses’ IT infrastructure, it’s critical that TIBCO is able to offer valuable solutions that support these technologies,” said Matt Quinn, chief technology officer, TIBCO. “Over the past year we have been committed to developing and acquiring the foremost cloud services that enable businesses to discover new ways of connecting people, deriving intelligence, and improving operational efficiency while mitigating the risks commonly associated with enhancements in innovation.”

    By the end of this year TIBCO will have new cloud services ready, with additional services scheduled for the first quarter of 2014. TIBCO GeoAnalytics is a cloud service that provides advanced location intelligence and geospatial analytics for interacting with enterprise platforms on mobile devices. TIBCO Spotfire Cloud gives enterprises a platform to analyze and collaborate on business insights, whether or not the data itself is hosted, along with comprehensive management features.

    TIBCO Clarity enables business users to discover, profile, cleanse and standardize data collated from disparate sources and load quality data to applications for accurate analysis and intelligent decision-making. Finally, TIBCO Cloud MDM provides a single platform to manage master data records in a multi-domain environment as a cloud service.

    In 2014 TIBCO will launch a Cloud API Exchange, Project Austin – a next generation data integration tool for real-time sharing, and a Cloud Metrics analytical tool.

    Big Data Architecture

    Starting the week at the TUCON event, TIBCO announced a big data architecture that sets the stage for a new wave in technology architecture, one that specifically addresses big data and the need for enterprises to process this data in the context of their business. A new version 6.0 of TIBCO Spotfire was released, featuring Spotfire Consumer, which presents up-to-date key performance indicators on a comprehensive range of mobile devices and supports capabilities such as offline KPI monitoring, contextual drill-down and social collaboration. Additionally, Spotfire 6.0 includes Spotfire Event Analytics, a new product that allows enterprises to automate the tracking and identification of new trends or outliers in business data as they are generated.

    “Historically big data has not focused on the operational outcomes that can greatly improve an organization’s competitive landscape,” said Matt Quinn, chief technology officer, TIBCO. “At TIBCO our focus is on making it as seamless and as efficient as possible for customers to access all their data – at rest and in-motion, giving them the power to quickly use that data to identify and address business problems and opportunities in the moment.”

    Expanded Alliance with PerkinElmer

    TIBCO also announced it has expanded its relationship with PerkinElmer, Inc., a global leader focused on improving the health and safety of people and the environment. The expanded alliance will allow customers to benefit from substantially larger investments in product and application development from PerkinElmer, fostering collaboration and integration between discovery research and clinical development.

    “Through this expanded strategic relationship, we are very excited to offer scientists a complete package of data generation, data management, plus data analysis and visualization, so they can better harness information for more informed insights and decisions in the clinical space,” said Mike Stapleton, general manager, informatics, PerkinElmer. “TIBCO Spotfire complements PerkinElmer’s informatics offerings by adding search and data visualization, which are among life science laboratories’ most pressing business needs.”

    6:15p
    NetSource Targets Proximity Trading In Chicago With New Space

    Some of the cabinets inside the new NetSource data center in Illinois. (Source: NetSource)

    Colocation provider NetSource is adding new space in Naperville, Illinois, outside Chicago, targeting proximity trading at the Chicago Mercantile Exchange (CME) data center in Aurora and at 350 E. Cermak, the primary data hub for downtown Chicago. The new expansion includes room for an additional 3,500 rack-mountable servers. Space is now available in increments from a single U to a full rack.

    The company has a 15,000 square foot facility with over 8,500 square feet of raised floor and 1.6 megawatts of generator power. The new space comes just in time, as the company is 95 percent full in its existing data center space in the same building and is looking forward to moving customers into the new data center space. The new space has the same N+1 redundancies as the existing data center and is covered under the company’s annual SSAE 16 Type II compliance.

    The company is looking for clients in need of 1ms and 2ms (millisecond) proximity trading to the Chicago and Aurora facilities. The space is located between 350 E. Cermak (adjacent to the Chicago Board of Trade – CBOT – and the Chicago Mercantile Exchange – CME) and the new CME data center in Aurora, for proximity financial trading. The company has a direct 10 Gbps connection to 350 E. Cermak and is physically located a half-mile from the CME facility.

    It’s ideal for proximity trading needs, specifically for those that can’t necessarily afford to pay the heavy premium of up to $10,000 per rack to be located within CME Aurora. The company says the location gives smaller financial companies an opportunity for faster ping times to these facilities.

    The Chicago and neighboring suburban data center markets continue to grow. NetSource is positioned to offer space to those that need close proximity to downtown and financial hotspots without necessarily being there, as downtown tends to be strapped for space as well as much more expensive.

    7:17p
    Level 3 Launches Cloud Connect Solutions

    Level 3 launches Cloud Connect Solutions, Latisys IaaS is selected by Collaborative Learning, Savvis UK data centers earn the Carbon Trust standard, and Windstream Hosted Solutions gains PCI certification.

    Level 3 Cloud Connect

    Level 3 Communications (LVLT) launched Cloud Connect Solutions, which offer the underlying network connectivity and services for global enterprises to more effectively integrate the cloud into their evolving IT architecture. With private network access, the solution creates a secure, reliable path for customers to realize the efficiency and flexibility of the cloud without compromising productivity or revenue. “Highly reliable, secure and performance-optimized network connectivity is becoming an increasingly important component of an enterprise IT strategy as mission-critical business processes and applications become more network dependent, especially with the widespread deployment of IT cloud environments,” stated Melanie Posey, research vice president at IDC. “Level 3 Cloud Connect Solutions provide the flexibility and performance capabilities for enterprises to dynamically interconnect with their cloud ecosystem, enabling them to run their applications efficiently and securely.”

    Latisys selected by Collaborative Learning

    Latisys announced an IaaS agreement with Collaborative Learning (CLI). Under the agreement, CLI’s IT infrastructure will be delivered from the Latisys national platform and its SOC 2 and SOC 3 audited data center in Chicago. Latisys was selected because its SAN solution was the right fit and familiar, its SAN was secure, and its storage-as-a-service solution was backed by aggressive service level agreements to ensure enterprise-grade reliability and redundancy. “Latisys understands that peace of mind is paramount for web-dependent businesses like CLI,” said Pete Stevenson, CEO of Latisys. “Managing IT infrastructure and related costs is challenging as technology rapidly evolves. Latisys is focused on right-sizing hybrid IT infrastructure solutions for today’s growth businesses, but with an eye toward ensuring we can support future requirements.”

    Savvis awarded Carbon Trust standard

    Savvis, a CenturyLink company (CTL), announced that all five of its UK data centers have met the prestigious Carbon Trust Standard for energy-efficient best practices. The award from the Carbon Trust recognizes Savvis’ ongoing efforts to reduce the carbon footprint of its UK data centers in London, Slough and Reading. Savvis is one of only a few service providers to achieve this standard across all of its UK facilities. “As the world increasingly moves online, the emissions from powering and cooling a growing number of data centres have been increasing as well,” said Darran Messem, managing director of certification at the Carbon Trust. “This is why it is significant that companies like Savvis take a robust approach to cutting carbon intensity from their operations. By achieving independent certification from the Carbon Trust, Savvis is setting a positive example to its customers, stakeholders and the rest of the industry.”

    Windstream gains PCI certification

    Windstream Hosted Solutions (WIN) announced that its cloud and hosting data centers have validated compliance with the Payment Card Industry (PCI) Data Security Standards (DSS) version 2.0 as a “Level 1” certified service provider. The Windstream PCI assessment provides assurance and compliance services to global companies. As a “Level 1-certified” service provider—the highest certification achievable—Windstream can store, process, and/or transmit an unlimited amount of transactions annually. “As businesses continue to store, process, and transmit credit card information in data centers, we must work to minimize the risk of security breaches that impact customers while also putting businesses at risk of critical data loss and monetary liability,” said Chris Nicolini, Windstream’s senior vice president of data center operations. “Because of our commitment to our customers and the security of their data, Windstream has proactively met this service provider responsibility, alleviating some of the compliance obligations and costs placed on individual businesses.”

    8:30p
    Equinix Renews Five Leases With Digital Realty

    Two of the data center industry’s largest players have renewed their vows. Interconnection and data center specialist Equinix and landlord Digital Realty Trust have entered into agreements for five data center lease renewals in key markets.

    Equinix (EQIX) will renew existing leases with Digital Realty for five data center properties located in Chicago, Dallas, Los Angeles, Miami and Washington, D.C. All five of the leases were negotiated at market rates and include 15-year initial terms from the current lease expirations, as well as two approximately 10-year renewal options at pre-negotiated rental rates. The Chicago lease includes space in one of  the city’s key connectivity hubs, 350 E Cermak.

    “We have had a long-standing and productive relationship with Digital Realty and are pleased to have negotiated mutually beneficial lease renewal agreements over effectively a 35-year period on these five assets at rates in-line with our expectations,” said Howard Horowitz, senior vice president, Global Real Estate for Equinix. “This represents an important step in managing our real estate portfolio and provides greater operational flexibility, predictability and consistency for key data center assets.”

    “These early renewals represent a win-win for both companies, as they provide long-term operational certainty to a strategic customer, while simultaneously unlocking a portion of the embedded rent growth within our portfolio,” said David Caron, senior vice president, Portfolio Management for Digital Realty. “Furthermore, the cash rental rate uplift also provides a real-time read on the health of current data center pricing.”

    Equinix is planning to convert to a real estate investment trust (REIT), a status in which property control is important.

    Equinix connects more than 4,000 companies to their customers and partners inside its network of 95 data centers in 31 markets across the Americas, EMEA and Asia-Pacific. The company manages nearly 5.8 million square feet of data center space.

    Digital Realty Trust operates 23.7 million square feet of space in 127 technology properties in 32 global markets.

