Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, March 30th, 2016

    Time Event
    12:00p
    CyrusOne Rolls Out Five-Year Plan to Double Its Value

    CyrusOne recently pulled off one of the biggest data center wins of 2016, buying CME Group’s state-of-the-art data center in a sale-leaseback transaction to expand and offer colocation services in the red-hot Chicago market.

    CyrusOne has enterprise DNA and Midwest roots, having originally been spun out of Cincinnati Bell.

    During its 2016 Investor Day presentation earlier this month, the company rolled out a bold plan to double its enterprise value from $4 billion today to $8 billion by 2020.

    Huge Addressable Market

    CyrusOne estimates the annual data center spend for the entire Fortune 1000 at just over $15.5 billion. Currently, 173 members of the Fortune 1000 are generating $270 million in revenues for the data center provider. However, this represents only 10 percent of the $2.7 billion IT budget for these firms, leaving a long runway for growing revenues from the existing customer base.
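
    As a quick sanity check on those figures, the wallet-share arithmetic works out as follows. The sketch below (Python) simply re-derives the 10 percent figure from the numbers quoted above; the variable names are ours, and the dollar figures come from the Investor Day deck.

        # Wallet-share arithmetic from the figures quoted above.
        fortune_1000_dc_spend = 15.5e9       # annual data center spend, entire Fortune 1000
        existing_customer_revenue = 270e6    # revenue from the 173 Fortune 1000 customers today
        existing_customer_it_budget = 2.7e9  # combined IT budget of those existing customers

        wallet_share = existing_customer_revenue / existing_customer_it_budget
        market_share = existing_customer_revenue / fortune_1000_dc_spend

        print(f"Share of existing customers' IT budgets: {wallet_share:.0%}")    # 10%
        print(f"Share of total Fortune 1000 DC spend: {market_share:.1%}")       # ~1.7%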

    CyrusOne can offer enterprise CIOs solutions which range from entire data centers down to a single cabinet. This helped drive record leasing of 30MW and over 200,000 square feet of data center space in Q4 2015.

    Read more: CyrusOne Reports Record 2015, Plans Big New Jersey Expansion

    The company’s chief commercial officer Tesh Durvasula emphasized that the sales cycle to bring on a new Fortune 1000 customer is typically over two years. The wooing and eventual signing of Fortune 1000 member CME Group was no exception.

    CME Group Impact

    Notably, it was CME COO Julie Holzrichter who came to New York and described the arduous selection process which resulted in the sale of her “precious data center” in Aurora, Illinois, to CyrusOne.

    Holzrichter, who has been with CME for 30 years, described the vetting process that started in 2012 as “putting ’em through the wringer,” with CyrusOne coming out the other side as the only logical choice.

    Last year, the 428,000-square foot Aurora data center processed over 3.5 billion global transactions, supporting the Chicago Board of Trade (CBOT), New York Mercantile Exchange (NYMEX), and numerous other global commodities trading platforms located throughout Europe and Latin America.

    CME customers are interested in how to monetize information in order to manage risk. By co-creating a robust ecosystem with CyrusOne, Globex trading customers will gain more real-time information to support trading decisions.

    [Slide 63 of the Investor Day deck: CME Group]

    Source: CONE – Investor Day 2016 for all slides

    By partnering with CyrusOne, CME is able to expand its service offerings to customers, including: disaster recovery, cloud access, data storage, and high-performance computing.

    As part of the $130 million sale, CME will enter into a 15-year lease for data center space and will continue to operate its electronic trading platform, CME Globex, from the data center and offer colocation services there. There are 14 acres on the campus for future expansions.

    In hindsight, the acquisition of Cervalis and the recently announced expansion into Northern New Jersey are both part of a much larger FinTech strategy for CyrusOne.

    A Five-Year Plan

    A plan to grow revenues from about $500 million to $1 billion annually in just five years was the central focus of the Investor Day presentation.

    On average, 65 percent of growth is projected to come from existing customers, with the balance achieved by new customer acquisition, growing interconnection revenues, and strategic M&A.

    [Slide 81 of the Investor Day deck: five-year growth plan, including acquisitions]

    From 2011 to 2015, M&A activity in the data center sector averaged $6.1 billion annually, including a record $10.3 billion last year. CyrusOne anticipates the bulk of any new acquisitions to occur from 2018 to 2020.

    Notably, this 20 percent annual growth is projected to be achieved while maintaining balance sheet leverage below 4.5 turns. This is quite low compared with other REITs attempting to grow at this rapid pace and should help CyrusOne achieve an investment grade rating.

    Rising Interconnection Revenues

    CyrusOne’s interconnection revenue is growing 300 percent faster than overall revenues. This fast growth is off of a relatively small base, which now represents 6 percent of revenue.

    By way of comparison, connectivity-focused Equinix and CoreSite Realty derive about 16 percent and 13 percent of revenues from interconnection, respectively.

    Connectivity is a high-margin business, and CyrusOne is in the early innings of assisting enterprise customers to shift certain applications from legacy data centers over to the cloud.

    Scale, Strength, Speed

    Since the CyrusOne IPO in 2013, the “Massively Modular” data center design has continued to be refined in conjunction with design and construction partners. New data centers are currently being constructed for $6.5 million per MW, down from $7 million at the 2013 IPO, with the goal of reaching $5.5 million by 2020.
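
    To put the per-MW trend in dollar terms, here is a rough sketch; the 30 MW figure simply reuses the record Q4 2015 leasing mentioned earlier, and applying it to a single build is purely illustrative.

        # Illustrative build-cost arithmetic using the per-MW figures above.
        mw_built = 30  # e.g., the 30 MW leased in Q4 2015
        cost_per_mw = {"2013 IPO": 7.0e6, "today": 6.5e6, "2020 goal": 5.5e6}

        for label, per_mw in cost_per_mw.items():
            print(f"{label}: ${per_mw * mw_built / 1e6:.0f}M for {mw_built} MW")
        # Going from $7.0M to $5.5M per MW would save $45M on a 30 MW build.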

    Construction schedules have been accelerated from eight to six months for new shells, and data halls are now completed 25 percent faster, in just 12 weeks. Delivering data halls at the lowest cost and shortest time while maintaining high levels of facility utilization are critical to getting mid-teen returns on invested capital.

    Managing the balance sheet in order to achieve a future investment grade rating helps lower the cost of capital to fund the five-year growth plan.

    Investor Takeaway

    CyrusOne shares are already up 16 percent in 2016 year-to-date.

    Analysts at Raymond James were impressed with what they heard, raising their target price on CONE shares to $51.00 from $44.00, noting: “CyrusOne has struck an interesting balance of strong, better than industry NOI growth that appeals to REIT investors, and is now balancing it with impressive top-line growth that appeals to TMT investors.”

    CyrusOne issued new guidance during its Investor Day presentation, raising its revenue and EBITDA outlook for 2016. However, the 2016 guidance midpoint for FFO per share remained unchanged at $2.51, a 17.5x multiple at current share prices.

    Given management’s five-year plan to double revenues by 2020, the FFO multiple still appears to be reasonable, despite shares registering another new all-time high of $43.98 per share.

    The Raymond James $51 price target implies a potential 12-month price upside of 16 percent from here, for a total return of 18.5 percent including the 3.5 percent dividend.

    3:00p
    The Expanding Role of Tape in Today’s Modern Data Center

     

    Rich Gadomski is VP of Marketing at Fujifilm Recording Media U.S.A., Inc.

    Have you ever stopped to think about how much data we are creating? And even more thought-provoking than that: how can we continue to store that data for the long term, reliably and cost-effectively? These are important questions IT executives are asking themselves today. According to a recent report from the Tape Storage Council, “Tape Reaches New Markets as Innovations Accelerate,” today’s advanced, modern data tape provides the answers.

    Demand for tape is being fueled by unrelenting data growth, significant technological advancements, its highly favorable economics, and the growing regulatory and business requirements to maintain access to data “forever.” Tape continues to play a major role in backup and disaster recovery, in addition to effectively addressing many new large-scale storage requirements, including cloud storage. Many major cloud providers are quickly realizing the value of implementing tape in their cloud infrastructure as the amount of data escalates and storing less active data exclusively on HDDs becomes increasingly cost prohibitive.

    In addition, tape storage is addressing many new applications in today’s modern data centers while offering relief from relentless IT budget pressures at the same time. Continued development and manufacturing investment in tape library, drive, media and management software has effectively addressed the constant demand for improved reliability, higher capacity and power efficiency.

    Enterprise tape has reached an unprecedented 10 TB native capacity per cartridge with native data rates reaching 360 MB/sec. Enterprise tape libraries can scale beyond one exabyte as exascale storage solutions have arrived.

    Another breakthrough has demonstrated an areal data density of 123 billion bits per square inch on data tape utilizing magnetic particle technology. This density equates to a standard-size LTO cartridge capable of storing up to 220 TB of uncompressed data, more than 36 times the storage capacity of the current LTO-7 tape. A cartridge of this capacity would be the highest-capacity storage medium ever announced.
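
    For context, a quick check of that comparison; the 6 TB figure below is LTO-7’s published native, uncompressed capacity, which the “more than 36 times” claim implicitly assumes.

        # Capacity comparison behind the "more than 36 times" claim.
        demo_cartridge_tb = 220   # demonstrated high-areal-density cartridge, uncompressed
        lto7_native_tb = 6        # LTO-7 native (uncompressed) capacity
        print(f"{demo_cartridge_tb / lto7_native_tb:.1f}x")   # ~36.7x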

    Tape’s favorable economics are fueling increased interest in active archive solutions as well. An active archive provides a persistent online view of archival data using one or more archive technologies (tape, HDDs, and cloud storage) behind a file system. Active archive data can typically be shared using NAS and standard Windows or Linux file sharing protocols (CIFS / NFS) to easily store, search and retrieve data directly from the archive. The benefits of an active archive intelligent data management framework include:

    • Scalability: Effortlessly add capacity and scale to petabytes of storage.
    • Lower Cost: Reduce TCO by matching media type to SLA requirements and optimizing storage infrastructure.
    • Ease of Use: File-level access to all of your data, all the time.
    • Compliance: Achieve regulatory retention requirements and reduce risk of non-compliance and data loss.
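
    Because an active archive is exposed through a standard NAS mount (CIFS/NFS), ordinary file tools and scripts can search and retrieve archived data directly. The sketch below illustrates the idea in Python; the mount path and search keyword are hypothetical, and the actual recall from disk, tape, or cloud happens transparently inside the archive layer.

        import os

        ARCHIVE_ROOT = "/mnt/active_archive"   # hypothetical CIFS/NFS mount of the archive

        def find_in_archive(keyword: str):
            """Walk the mounted archive and yield paths whose file names contain the keyword."""
            for dirpath, _dirnames, filenames in os.walk(ARCHIVE_ROOT):
                for name in filenames:
                    if keyword in name:
                        yield os.path.join(dirpath, name)

        if __name__ == "__main__":
            for path in find_in_archive("2015"):
                # Opening the file prompts the archive layer to fetch it from
                # whichever tier (disk cache, tape, or cloud) currently holds it.
                with open(path, "rb") as fh:
                    print(path, len(fh.read()), "bytes")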

    The innovative Tape as NAS solution has also gained traction. It provides direct file access to data tape by integrating an LTO tape library with a front-end NAS that exposes standard NAS (CIFS/NFS) mounts and uses LTFS to deliver the newest archive architecture. Data arrives at the NAS disk cache and is written to tape; files remain in the disk cache until it is full, at which point the oldest files are reduced to metadata pointers only. File searches continue to see all archived files, and only when a read request is received are files moved back from tape to the disk cache and on to the user. A tape library as a NAS lets users leverage familiar file system tools, and even drag and drop files directly to and from a tape cartridge, just like a disk-based NAS.
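
    The cache-and-stub flow described above can be modeled in a few lines. The Python sketch below is a simplified, hypothetical model of that behavior (the class and method names are invented for illustration), not the implementation of any particular Tape as NAS product.

        from collections import OrderedDict

        class TapeNasModel:
            """Toy model of the Tape as NAS flow: writes land in a disk cache and on tape;
            when the cache fills, the oldest files are reduced to metadata stubs; reading a
            stubbed file triggers a recall from tape back into the cache."""

            def __init__(self, cache_capacity: int):
                self.cache_capacity = cache_capacity
                self.cache = OrderedDict()   # filename -> data held on the disk cache
                self.stubs = {}              # filename -> metadata pointer only
                self.tape = {}               # filename -> data written to tape

            def write(self, name: str, data: bytes) -> None:
                self.cache[name] = data
                self.tape[name] = data       # every archived file also lands on tape
                self._evict_if_full()

            def read(self, name: str) -> bytes:
                if name not in self.cache:   # only a stub remains: recall from tape
                    self.cache[name] = self.tape[name]
                    self.stubs.pop(name, None)
                    self._evict_if_full()
                return self.cache[name]

            def _evict_if_full(self) -> None:
                while len(self.cache) > self.cache_capacity:
                    oldest, data = self.cache.popitem(last=False)   # oldest entry first
                    self.stubs[oldest] = {"tier": "tape", "size": len(data)}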

    IT executives and cloud service providers are addressing new applications that leverage tape for its significant operational and economic advantages. This recognition is driving continued investment in new tape technologies with extended roadmaps, innovations and exciting use cases. It is also expanding tape’s profile from its historical role in data backup to one requiring cost-effective access to enormous quantities of stored data. With the exciting trajectory for future tape technology, many data intensive industries and applications already have or will begin to leverage the significant benefits of tape’s continued progress.

    Clearly the innovation, compelling value proposition and new development activities demonstrate tape technology is not sitting still; expect the role of tape to continue to expand as more and more exabytes of data are stored on tape.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:01p
    Post-Heartbleed, Data Centers May be Better Prepared

    In retrospect, the vulnerability classified as CVE-2014-0160, which its discoverers dubbed “Heartbleed” and even gave its own logo, was not all that devastating. The true danger came as a result of the discovery itself: a portion of OpenSSL encryption code that had gone unchecked since being finalized in January 2011.
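
    For administrators, the first practical question after disclosure was simply whether a system’s OpenSSL fell in the affected range (1.0.1 through 1.0.1f; 1.0.1g shipped the fix). Below is a minimal sketch of that check in Python, with the caveat that distributions often backport fixes without changing the version letter, so a version-string match is only a first pass, not a verdict.

        import re
        import ssl

        # CVE-2014-0160 (Heartbleed) affects OpenSSL 1.0.1 through 1.0.1f.
        VULNERABLE_RANGE = re.compile(r"^OpenSSL 1\.0\.1[a-f]?(\s|$)")

        def linked_openssl_in_heartbleed_range() -> bool:
            """Check whether the OpenSSL build Python is linked against falls in the
            Heartbleed-affected range, judging by its version string alone."""
            return bool(VULNERABLE_RANGE.match(ssl.OPENSSL_VERSION))

        if __name__ == "__main__":
            print(ssl.OPENSSL_VERSION)
            print("In Heartbleed-affected range:", linked_openssl_in_heartbleed_range())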

    Now that nearly two years have passed since local TV newscasts first misrepresented Heartbleed as a “virus” and caused more mass hysteria than any other software bug, history may end up recording its existence as a net benefit for data centers.

    “Heartbleed has not only given the [OpenSSL] project a kick in the pants, and a renewed focus, it actually led to a regeneration and rebirth,” said Tim Hudson, who co-founded the OpenSSL Project (originally called SSLeay) back in 1996. “We’ve also noticed that security researchers have been much more active.” Hudson is currently CTO of security consultancy Cryptsoft.

    Higher Priorities

    One of the biggest unresolved problems faced by data centers and their tenants was the low priority that company executives placed on researching and implementing the solutions to vulnerabilities. Although OpenSSL was among the most open of open source code, both the severity and the obscurity of the open hole in its code were due, in large part, to lack of interest.

    Hudson believes giving the vulnerability an identity beyond “CVE-2014-0160” thrust it in the face of executives who would otherwise have ignored it. But Heartbleed also created a new and positive trend, he believes: Researchers now have both the incentive and the financial backing — including in the form of outright grants, said Hudson — to dive deep into the oldest and coldest code, in a concerted effort to thwart any likelihood of a sequel.

    “The renewed focus in security research is working to help improve the (vulnerability) database,” he told attendees at the RSA 2016 security conference in early March. “The more people looking at the code, the better that the code is going to get.”

    As a result, the process of perfecting the core infrastructure code of data centers is finding a rhythm, and becoming somewhat more automated than it was. What’s more, commercial developers and security interests are paying more than just attention to the integrity of infrastructure code, said Hudson. Underwriters also came to the realization that each Heartbleed sequel would have a potential for costing organizations more than its predecessor.

    “The amount of testing that’s being done on the OpenSSL code base, post-Heartbleed, is many orders of magnitude higher than pre-Heartbleed,” said Hudson. “And that is a good thing. If we’d gone out as a project team and said, ‘Hey, can somebody help us do more testing?’ Silence. Crickets.”

    Now, the possibility of discovering — and perhaps laying claim to — the sequel, increases the potential that research teams can get more funding. The motivation is greater now that organizations — their executives in particular — have a quantifiable interest in ensuring that “the next Heartbleed” doesn’t impact them.

    False Comfort

    There remains, however, a lingering threat: After outsourcing some or all of their data center resources to cloud service providers, some organizations may be comforting themselves with the belief that their IT assets are automatically secured.

    That threat was brought to the surface during the first day of the RSA conference, at a symposium of the Cloud Security Alliance. There, Raj Samani, vice president and CTO for Intel Security in the EMEA region, told CSA members he has encountered situations where municipal service providers, such as water companies, are being assured that the threats posed by factors such as unchecked software vulnerabilities subside once they’ve moved to the cloud.

    An exploit for an unpatched OpenSSL vulnerability, even to this day, could lead to what Samani calls “integrity-based attacks.”

    “Today, there are companies right now offering water treatment applications… through cloud computing,” said Samani. “Now, when I looked at the security of this particular service, they said, ‘You don’t need to worry about antivirus updates, because they’re stored in the cloud.’ That’s funny, right? Until you realize that the company that’s keeping the water clean for your local area is using that provider.”

    Granted, Heartbleed was not a virus. However, the whole Heartbleed escapade made clear that, at the executive level, too many organizations believed that security vigilance can be accomplished through periodic “cleanings” of the system — manual labor-like tasks that can be outsourced to service providers.

    There are some major service providers that would be happy to take their business. In March, the Global Services unit of British telco BT entered into a partnership with Intel Security (parent company of McAfee), in the interest of renewing an effort to share real-time indicators of possible threats to data centers — stopping exploits, including to municipal infrastructure and citizen services, before they spread.

    Brian Fite, senior cyber physical consultant with BT Global Services, believes such real-time indicators will be more valuable to organizations in the long run than continuing to trust the good intentions of open source foundations.

    “Anybody who’s actually helping the trustworthiness of shared code repositories is (doing) a good thing,” said Fite. “With OpenSSL, we all kinda trusted it because it was open source, but what were we basing that trust on? In hindsight, probably fairly flimsy indicators.”

    BT Americas’ CTO for its security consultancy practice, Konstantinos Karagiannis, added to Fite’s point that the best intentions of open source contributors don’t amount to much when a corporation makes a risk assessment for its critical IT infrastructure.

    “I love open source. But a serious problem is that some of the most important packages in Linux are being maintained by one person at a time, maybe two. That’s a serious flaw. The ‘many-eyes’ theory that you hear about, for open source? Sometimes those eyes are two.”

    HPE’s CTO for security software, Steve Dyer, repeated that warning during an RSA session.

    “With the old idea that ‘all eyes make all things shallow’ — one of the mantras of open source in the beginning — you’d think would extend to security,” said Dyer. “That hasn’t exactly proven to be true. What it may be is, ‘all eyes’ give the bad guys enough time to look at code, and really figure out what’s vulnerable about it.”

    Dyer believes that many applications are composed of mashups of open source components. “In my mind, that actually amps up the need for us to keep an eye on open source.”

    That “eye” to which Dyer refers includes automated tools, such as HPE’s own Fortify, to scan open source code in addition to original code. The results of HPE’s own open source scans, he said, are contributed back to the developer community.

    The Path Forward

    The solution BT’s Karagiannis suggested, however, is precisely one Cryptsoft’s Hudson says he’s seeing more of: investments by organizations of their developers’ time and resources to contributions to the maintenance of critical code.

    “If you’re saving millions of dollars a year because you’re using a bunch of open source packages, maybe it’s not the worst karmic thing in the universe to have a few developers spend two weeks in the summer, helping out with one of those packages that you make so much money off of.”

    When asked if he was confident that no part of the world’s data center infrastructure would need to be subject to the same level of humiliation that OpenSSL was subjected to, Hudson responded, “Absolutely, this will happen again. There are a pile of critical infrastructure projects that are under-resourced, and it’s the work and passion of one or two individuals. Effectively, they’re a Heartbleed waiting to happen.

    “What we’ve done as a project team is, the dirty laundry is all out there in the open. We want people to learn from our experiences,” he continued, “to reduce the likelihood. But it will not be possible to eliminate.”

    4:07p
    Google Rolls Out Cloud-Based Home Phone Service to Fiber Cities
    By The WHIR

    Google is becoming a fully-fledged telecommunications provider with the launch of Fiber Phone, a cloud-based home phone service that offers unlimited local and nationwide calling for just $10 per month.

    According to a blog post by Google Fiber Product Manager John Shriver-Blake, Fiber Phone will be available in a few areas to start and will eventually roll out to residential customers in all its Fiber cities: Atlanta, Austin, Charlotte, Kansas City, Nashville, Provo, Raleigh-Durham, Salt Lake City, and San Antonio.

    While wireless-only households have grown in popularity, 3.4 percent of US households still have no telephone service at all. According to data from the National Center for Health Statistics, 7.5 million adults and 2.3 million children lived in households without phones.

    Read more: Google Fiber Brings Free Internet to More Public Housing Communities

    And despite the option to bundle landlines with home Internet and TV, the average cost of a landline is still between $15 and $30; not a lot more than Google’s service, but likely with fewer features.

    Google began testing its Fiber Phone service in January by inviting a small group of users to provide feedback.

    “Adding Fiber Phone means getting access on the road, in the office, or wherever you are,” Shriver-Blake said. “Your Fiber Phone number lives in the cloud, which means that you can use it on almost any phone, tablet or laptop. It can ring your landline when you’re home, or your mobile device when you’re on-the-go.”

    The service uses a Fiber Phone box that works with any phone; a handset is not included.

    Fiber Phone uses the same rates for international calls as Google Voice, and sends users texts or emails of their voicemails.

    This article first appeared at http://www.thewhir.com/web-hosting-news/google-rolls-out-cloud-based-home-phone-service-to-fiber-cities

    5:00p
    A Roundup from Data Center World Global 2016

    More than 1,000 IT professionals across all industries converged on Las Vegas March 14-18 as Data Center World Global 2016 presented a plethora of educational sessions related to the industry and a trade show with best-in-class exhibitors and state-of-the-art technology.

    In case you weren’t fortunate enough to have attended, here’s a roundup of sessions and activities you missed:

    IoT Demands a Shift from Centralized to Stratified Data Centers

    Today, a slew of disruptive trends are placing back-breaking demands on existing networks and data center infrastructure. Data Center World speaker Chris Crosby explained what changes need to occur as the IoT gains steam.

    Emerging Trends Shaping the Data Center of the Future

    Five billion people today don’t have access to the Internet, but they will, and that alone is a big reason the data center of today must change, said Jack Pouchet during his emerging trends session.

    Keynote Steve Garvey: Living Out His Dreams

    Baseball hero and keynote speaker Steve Garvey spoke about his accomplishments on and off the field.

    Drones: Is the Airspace Above Your Data Center Secure?

    As the use of drones explodes, data center operators should include airspace protection in their security programs, according to speaker Adam Ringle, a security and emergency services expert.

    Why Data Center Managers Should Care about DevOps

    The data center needs to be aligned with business as much as development and ops do.

    Merger of Two Healthcare Giants Makes IT Transformation Inevitable

    Newsworthy Notes from the DCW Exhibit Hall

    Check out the major products or news announcements made by exhibitors during the trade show.

    Data Center World Global 2017 will be held April 3-7 in Los Angeles, CA.

