Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, May 7th, 2014

    12:00p
    Digital Realty Tweaks Strategy, Plans Property Sell-off

    Digital Realty Trust is planning to divest the lowest-performing properties in its massive global portfolio and to change its product mix, adding solutions that combine its traditional space-and-power offerings with higher-level services provided by some of its customers.

    The changes come as the company continues its search for a permanent replacement for former CEO Michael Foust, who abruptly resigned in March. CFO and interim CEO Bill Stein announced what the company’s senior leadership called “a new path forward” on a first-quarter earnings call with analysts Tuesday.

    “This is a new path forward that both I and the senior leadership team have created,” Stein said.

    Addressing concern expressed by an analyst that whoever becomes the next CEO may have ideas of their own about changing the company’s direction, Stein said the changes were too common-sense for anyone to disagree with. “I can’t see anyone disagreeing with those initiatives,” he said.

    The plan includes divesting legacy non-data center assets, as well as data centers in non-core markets and under-performing data centers. The idea is to recycle capital currently sunk into those “non-strategic” assets.

    The real estate investment trust is also going to stop building data centers without first securing tenants for them. Since Digital Realty can now deliver a data center pod in 12-16 weeks, it no longer needs to stockpile finished facilities, Stein said, and will take a “just-in-time” approach to inventory.

    With the new strategy, the team expects more than 90 percent of its portfolio to be occupied by the end of 2014.

    First Two Properties Offloaded

    Scott Peterson, whom the company appointed chief investment officer in April, said the leadership team had spent the past several weeks evaluating every property in the portfolio and assessing its long-term potential. The plan is to offload the bottom 5-10 percent of the portfolio over the next several years.

    The first property Digital Realty has offloaded is a 110,000-square-foot facility in Somerset, N.J., which it contributed to a joint venture with Prudential Real Estate Investors.

    The data center is fully leased to a financial-services company with nine years left on the lease. Digital Realty contributed the facility to the joint venture, of which it owns 20 percent, at a valuation of about $40 million.

    The second property to go was another single-tenant data center, which the company sold for about $42 million to the tenant that currently occupies it. It did not name the tenant.

    The company’s portfolio currently consists of 131 properties, 13 of which are investments in unconsolidated ventures. Together, the properties comprise about 24.5 million square feet.

    Recognizing Market Changes

    Digital Realty has also recognized that the data center market is changing, with growing demand for solutions that go beyond its traditional space-and-power offerings. To address this shift, the company’s executives said they will partner with some of its customers that provide offerings such as managed services to deliver the combined solutions customers need.

    Digital’s revenue for the first quarter was about $390 million – a 9-percent increase year over year. Its funds from operations (similar to earnings) were $1.22 per share – up 5 percent from the first quarter of 2013.

    12:00p
    GoGrid Attacks ‘Commodity Clouds’ With Open Orchestration

    GoGrid has launched and is sponsoring OpenOrchestration.org, a repository of on-demand blueprints meant to simplify technology rollouts in a single cloud, across multiple clouds, or on-premises. The new community site fights back against what GoGrid believes is the major threat going forward: commodity clouds.

    “The mission of OpenOrchestration.org is to facilitate and foster orchestration,” GoGrid CEO John Keagy said. “GoGrid is launching and sponsoring the site because orchestrated solutions are a necessary evolution of the marketplace in the face of dominance by one or two large commodity cloud services.”

    GoGrid offers what it calls one-click deployment of big data solutions that are multi-cloud or on-premise ready from the start. “We’ve actually taken our stack and purpose-built it for big data,” Kole Hicks, senior director of product, said.

    In its first release to the OpenOrchestration.org community, GoGrid will publish the blueprints behind its four orchestration solutions for Cassandra, Hadoop, MongoDB and Riak. Members of the community can modify these blueprints in any way they see fit as well as use a free orchestration service powered by GoGrid’s orchestration engine technology to test and deploy.

    GoGrid was a pure-play Infrastructure-as-a-Service provider until only a few years ago, when the team found that many customers were using the platform for big data solutions. “When we were doing the research on why many of our customers were using us for big data, we discovered that many were running polyglot applications (using multiple development languages and runtimes), and they were running multi-cloud and on-premises solutions,” Heather McKelvey, GoGrid CTO, said. “Our philosophy now is to stand up the best automation tools.”

    The company does not shy away from partnering with competing clouds and systems integrators. It has recently added Clustrix and Couchbase to a list of partners that also includes Basho, DataStax, Hortonworks and MongoDB.

    “The ability to deliver one-click solutions has created ecosystem opportunities,” Keagy said. “We can take the friction out of the adoption of partner solutions.”

    The company operates out of four data centers: two in California, one in Virginia and one in Amsterdam. While it touts the ability to take your workload anywhere, it has purpose-built its cloud platform to handle big data solutions.

    GoGrid allows up to 256 gigabytes of RAM on its cloud platform and provides mass-storage options, including up to 48 terabytes of direct-attached storage as well as solid-state drives.

    12:30p
    TDS To Acquire BendBroadband, Including Green Data Center

    Telephone and Data Systems, parent of TDS Telecom, announced a plan to acquire BendBroadband for $261 million. BendBroadband serves Bend, Oreg., and surrounding areas with a range of broadband, fiber connectivity, telephone and cable television services. Also included in the deal is BendBroadband Vault, a LEED Gold and Energy Star certified data center that provides colocation and managed services.

    The Vault tries to get as much power as it can from solar, with panels installed on its south-facing roof. These 624 solar cells are capable of generating 152 kilowatts of power, which equates to more than 16 percent of total power usage on a sunny day.

    The facility uses a passive cooling system. It effectively gets free cooling 75 percent of the time through a 12-ton wheel from KyotoCooling, called a KyotoWheel. The facility uses outside air in a closed loop to cool the heat wheel. The building also has a flywheel-based Uninterruptible Power Supply (UPS).
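
    Taken together, the figures above imply a rough overall facility load and annual free-cooling hours. Here is a minimal back-of-the-envelope sketch in Python, assuming the 152 kW output and the 16 percent share refer to the same sunny-day conditions:

        # Back-of-the-envelope check of the figures quoted above.
        solar_output_kw = 152     # peak output of the 624-cell rooftop array
        solar_share = 0.16        # "more than 16 percent of total power usage on a sunny day"
        implied_load_kw = solar_output_kw / solar_share
        print(f"Implied total facility load: ~{implied_load_kw:.0f} kW")  # roughly 950 kW

        # "Free cooling 75 percent of the time" via the KyotoWheel:
        hours_per_year = 8760
        free_cooling_hours = 0.75 * hours_per_year
        print(f"Approximate free-cooling hours per year: ~{free_cooling_hours:,.0f}")  # about 6,570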

    TDS said it will retain the BendBroadband name as well as its 280 employees. Privately held, BendBroadband said its 2013 revenue totaled $70 million.

    “TDS is a longstanding, family-controlled company that shares our commitment to our customers, our employees and our communities,” Amy Tykeson, president and CEO of BendBroadband, said. “With TDS, we will keep hundreds of good jobs in Central Oregon.”

    BendBroadband’s roots go back to 1955, when it was a cable TV provider known as Bend TV Cable. It captured TV signals that made it across the Cascades and distributed them around the Bend area. It moved into internet services in 1997.

     

    12:30p
    Data Center Security Lessons from Heartbleed and Target

    Winston Saunders has worked at Intel for nearly two decades and currently works on making the data center more secure and efficient. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter.

    Data center security is of increasing concern, with data breaches and cyber vulnerabilities appearing in the news headlines more and more often. Symantec’s recent threat report (PDF) highlighted more “zero-day” attacks in 2013 than in the two previous years combined, and Verizon’s Data Breach Investigations Report shows data breaches and cyber attacks at levels substantially above previous years.

    While this dire news can leave one feeling helpless, it’s useful to look deeper into the causes of some of the more prevalent cyber events to understand what proactive roles we can play in preventing or mitigating them, both from the standpoint of our industry and from the standpoint of business responsibilities.

    Two of the most infamous and far-reaching cyber-security events in recent memory are the point-of-sale attack on Target during the 2013 holiday season and the more recent Heartbleed vulnerability discovered in the OpenSSL library. Both affected millions of people’s information privacy, were well publicized, and were preventable by known, low-cost and common best practices.

    In the case of Target, as is still the case with many companies, the responsibility for information security was reported to have fallen to many individuals. Although not explicitly stated, it’s reasonable to guess that none of the many executives had information security as their primary job role.

    In an excellent podcast, Eric Cole highlights why this is a problem. In his example, the CIO is primarily responsible for system availability. While availability is certainly important, it is only one-third of the CIA triad of Confidentiality, Integrity and Availability. Information security mandates that these interests be balanced, and the only way to ensure that split organizational incentives do not get in the way, Cole argues, is to have the CSO and CIO work at peer levels. Could a simple organizational change have helped Target? It’s impossible to say in retrospect, but Target does appear to be heading in that direction.

    The case of Heartbleed makes understanding the root causes even more important. According to many reports, the vulnerability was introduced when a new version of the code was checked in that neglected to validate the length of the keep-alive heartbeat data. Allowing data fields to exceed their intended length enables one of the most basic kinds of attack. In fact, checking field lengths is such a common and basic coding practice that even the most basic security audit would have exposed the vulnerability.
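
    To make the failure concrete, here is a minimal sketch in Python of the kind of length check that was missing. It is a simplified illustration of the bug class, not OpenSSL’s actual code; the message layout and function name are assumptions made for the example.

        import struct

        def handle_heartbeat(record: bytes) -> bytes:
            # Assumed layout, loosely modeled on a TLS heartbeat message:
            # 1-byte message type, 2-byte big-endian claimed payload length,
            # then the payload itself.
            msg_type = record[0]  # request vs. response; unused in this sketch
            (claimed_len,) = struct.unpack("!H", record[1:3])
            payload = record[3:]

            # The missing safeguard: never trust the sender's length field.
            # Code that copies claimed_len bytes back to the sender without
            # this check can leak whatever sits in memory past the payload.
            if claimed_len > len(payload):
                raise ValueError("claimed heartbeat length exceeds actual payload")

            return payload[:claimed_len]

    A malicious request claiming a 64 KB payload while carrying only a single byte would be rejected here; the unpatched OpenSSL code instead echoed back up to 64 KB of adjacent server memory per request.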

    Why was it missed? Nobody knows for sure. But as The New York Times reported, the attention and funding given to OpenSSL were far less than what other important elements of the open source world receive. The assumption is that because the code is “open,” many eyes will quickly discover all vulnerabilities. But the result was again the same: when everyone is responsible, nobody is. Security, for whatever reason, was not given the attention it was due.

    Both Heartbleed and the Target breach share a common root cause: preventable vulnerabilities. If we adopt the frame of mind that “all vulnerabilities are preventable” we can see that shared responsibility, whether among multiple individuals or a single individual with too many responsibilities, can diminish the attention needed to do a thorough job. And as the above examples highlight, detecting vulnerabilities is serious work, for the consequences of failure can be quite severe.

    What is the lesson learned and challenge for your company? It all boils down to risk management. Who has the responsibility to identify and manage information security risk in your company and do they have adequate resources to do their job effectively? If the answer is not an easy, “yes,” it may be worth a deeper look, lest your company end up in what may be a long series of headlines.

    Note: After this post was filed, Target removed its CEO, Gregg Steinhafel, making him the first CEO to be fired over a significant data breach.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Demystifying Generator Set Ratings

    Downtime and resiliency are serious considerations for all data centers. Keeping a data center platform healthy requires having backup plans in place, and in many cases that means deploying powerful generators to maintain uptime.

    However, to ensure reliable power generation, generator sets must be capable of delivering the necessary power for an anticipated, yet potentially unknown, number of run hours per year. To meet these power and run-time criteria, manufacturers have developed standard rating definitions for their equipment.

    While the International Organization for Standardization (ISO) has developed guidelines for common rating definitions, many generator set manufacturers use differing specifications. This whitepaper from Caterpillar clarifies the confusion surrounding ratings. It outlines the standard classifications and discusses important considerations to help you make educated decisions when designing and specifying equipment.

    Remember, many generator set manufacturers have different names or variations for their ratings, so it is important to know how those ratings compare to the ISO standards. For example, Caterpillar defines five basic generator set ratings: Emergency Standby Power (ESP), Standby (no ISO equivalent), Mission Critical Standby (no ISO equivalent), Continuous and Prime. Cat generator set ratings differ in certain respects from those defined by ISO 8528-1, but will always meet the minimum criteria set forth by the standard. ISO 8528-1 identifies four ratings:

    • Continuous power
    • Prime power
    • Limited-time running power (LTP)
    • Emergency standby power

    Download this whitepaper to understand how ratings directly impact the efficiency and effectiveness of the selected generator set based on how it is going to be used. Regardless of the application, generator set ratings help ensure that customers’ power needs are met and that generating equipment is protected from premature wear. As your infrastructure becomes more critical, creating the right generator infrastructure will be essential for uptime, resiliency, and an optimally maintained data center.
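
    As a quick reference, the comparison described above can be expressed as a simple mapping. This minimal sketch in Python summarizes only what the text states; the pairings of ESP, Prime and Continuous with their ISO counterparts are inferred from the matching names, and the exact rating criteria still come from the whitepaper.

        # Caterpillar generator set ratings and their closest ISO 8528-1 counterparts
        # (None = no direct ISO equivalent), per the comparison described above.
        CAT_TO_ISO_8528_1 = {
            "Emergency Standby Power (ESP)": "Emergency standby power",
            "Standby": None,
            "Mission Critical Standby": None,
            "Prime": "Prime power",
            "Continuous": "Continuous power",
        }

        for cat_rating, iso_rating in CAT_TO_ISO_8528_1.items():
            print(f"{cat_rating:32} -> {iso_rating or 'no direct ISO equivalent'}")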

    2:00p
    EMC Upgrades Software Defined Storage Platform ViPR

    At EMC World 2014 in Las Vegas this week, EMC announced version 2.0 of its ViPR software-defined storage platform, designed to simplify management of both existing and new storage infrastructure and provide new data services to underpin next-generation applications and big data analytics.

    ViPR 2.0 builds a bridge to EMC’s 3rd platform (built for cloud, mobility, social networking and big data). It also plugs into higher-level management and orchestration tools from VMware, OpenStack and Microsoft, so storage can be a part of broader data center workflows.

    “ViPR is the cornerstone of EMC’s software-defined storage portfolio,” Amitabh Srivastava, president of EMC’s Advanced Software division, said.

    ViPR 2.0 adds support for commodity drives to the existing object and HDFS storage implementations, along with block data services based on EMC ScaleIO. Version 2.0 also adds multi-site support through new geo-scale storage capabilities that provide data access, integrity and protection.

    ViPR Object Data Services can now span multiple locations and offer geo-replication and geo-distribution capabilities. ViPR 2.0 also adds ViPR Block Services, new data services based on EMC ScaleIO server-SAN software, which deliver block storage capabilities to any ViPR-managed commodity storage array, the company said.

    ViPR now natively supports arrays from EMC, Hitachi Data Systems and NetApp, as well as commodity storage. Through the OpenStack Cinder plugin, it also supports Dell, HP and IBM arrays.

    The management interface seeks to automate and standardize policy-driven management of existing and new storage infrastructure through a single “pane of glass.”

    EMC has also tweaked its storage software suites, ViPR SRM and Service Assurance Suite, for improved integration with ViPR and VPLEX, providing new chargeback capabilities and enhanced virtual storage management through the ViPR console. Service Assurance Suite 9.3 adds integration with VMware NSX.

     

    2:00p
    EMC Buys Flash Storage Startup DSSD

    At EMC World in Las Vegas Monday, EMC went all in on rack-scale flash storage, announcing the acquisition of stealth startup DSSD, which provides flash storage for I/O-intensive in-memory databases and big data workloads.

    Menlo Park, Calif.-based DSSD was founded by ZFS creators Jeff Bonwick and Bill Moore, as well as Andy Bechtolsheim, who is also chairman and chief development officer at Arista Networks.

    The startup will operate as a standalone unit within EMC’s Emerging Technology Products division, with Bechtolsheim serving as its strategic advisor.

    “The prospects of what EMC and DSSD can achieve together are truly remarkable,” he said. “We ventured out to create a new storage tier for transactional and big data applications that have the highest-performance I/O requirements.”

    David Goulden, CEO of EMC Information Infrastructure, said the IT giant had established a relationship with DSSD more than one year ago when it led the Series A investment in the startup. “We’re now thrilled to be joining forces with Andy, Bill and the entire DSSD team,” he said.

    “Complementary to our market-leading all-flash and hybrid storage portfolio, DSSD will unlock an abundance of new possibilities for customers as they build out their infrastructures to support the emerging tier of next-generation in-memory and big data workloads.”

    The first DSSD products are expected in 2015. They will be optimized for in-memory databases such as SAP HANA and GemFire, real-time analytics, and high-performance applications used by research and government agencies.

    DSSD has been granted patents for a “storage system with incremental multi-dimensional RAID” and a “storage system with guaranteed read latency.”

    EMC said it had sold more than 17 petabytes of flash capacity in the first quarter of 2014 alone – up more than 70 percent from the first quarter of last year.
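
    For a sense of scale, that growth rate implies last year’s volume. A minimal back-of-the-envelope sketch, using only the rounded figures quoted above:

        # Rough implication of the growth figure above, using the rounded numbers quoted.
        q1_2014_flash_pb = 17        # "more than 17 petabytes" sold in Q1 2014
        yoy_growth = 0.70            # "more than 70 percent" year-over-year increase
        implied_q1_2013_pb = q1_2014_flash_pb / (1 + yoy_growth)
        print(f"Implied Q1 2013 flash capacity sold: ~{implied_q1_2013_pb:.0f} PB")  # about 10 PB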

     

    2:30p
    Data Center Jobs: Willdan Energy Solutions

    At the Data Center Jobs Board, we have a new job listing from Willdan Energy Solutions, which is seeking an Associate Project Manager in New York, New York.

    The Associate Project Manager position requires project management experience; a basic understanding of building mechanical, electrical and other energy-using systems; the ability to communicate effectively with technical and non-technical individuals; strong organizational skills; strong verbal and written communication skills; a strong professional presence; a self-starting nature with the ability to drive tasks through to completion; and comfort with Microsoft Word, Excel, and PowerPoint. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    5:05p
    HP Ramps Up Enterprise Cloud Play With $1 Billion Investment

    HP kicked off a new $1 billion cloud services initiative that will include Infrastructure-as-a-Service and Platform-as-a-Service offerings, all underpinned by OpenStack – the popular open source cloud infrastructure software.

    The company has a massive global data center infrastructure, consisting of about 80 facilities in 27 countries, and considers this footprint a built-in advantage in a market already dominated by a handful of giants, each of which also has a large global infrastructure.

    “Over time we will leverage these facilities, as well as our global service provider network, to deploy Helion cloud services around the world,” Bill Hilf, HP’s senior vice president of cloud products and services, said in a webcast announcing the initiative Wednesday.

    The first phase of roll-out will consist of deploying the cloud in 20 of the data centers over the next 18 months.

    Helion (the name HP gave to its new cloud services portfolio) will also include a developer platform based on Cloud Foundry – an open source PaaS project led by EMC spin-out Pivotal. “IaaS is important, but developers also need a fast, easy and open platform to deploy applications,” Hilf said.

    The development platform is targeted for release before the end of the year.

    Contest for Enterprise Cloud Market

    In going after the enterprise and service provider market, HP is competing against IBM, which also has recently ramped up its IaaS play, Microsoft’s Azure cloud portfolio, Amazon Web Services and, to a lesser extent, against Google’s public cloud services.

    IBM has sold hardware into the enterprise market for decades and is focusing its cloud portfolio on this space, where it already has a lot of expertise and many relationships.

    IBM started expanding its cloud services play in a big way in 2013, buying IaaS provider SoftLayer for $2 billion. Earlier this year, the company announced it would invest another $1.2 billion in expanding the data center footprint to support the SoftLayer cloud.

    Also earlier this year, IBM announced BlueMix, a Cloud Foundry-based PaaS for enterprise developers.

    Its traditional focus on the enterprise makes IBM stand out as a cloud provider among large competitors, and HP may have a similar advantage. While Microsoft has sold software into the enterprise for a long time, its initial foray into the public cloud market was similar to Google’s – focused more on individual or startup developers.

    This is a market Amazon has also built its IaaS empire on, but all these players are now going after the enterprise market with a vengeance.

    Free Trials, Hybrid Deployments and Legal Protection

    HP’s initial offering (already available) is a free version of its commercial OpenStack cloud, meant for proofs of concept, pilots and basic production workloads. The company is working to release an enhanced commercial edition “in the coming months.”

    HP is planning to integrate as many of its hardware products as possible with OpenStack to enable customers to set up hybrid infrastructure. The integrations will include the vendor’s 3Par and StoreVirtual storage platforms, as well as its Moonshot microservers.

    HP also introduced an OpenStack Technology Indemnification Program, which offers customers that use Helion OpenStack code legal protection from patent, copyright and trade-secret infringement claims.

    Finally, the vendor set up a new professional services practice to help customers plan, implement and operate Helion cloud environments.

