Data Center Knowledge | News and analysis for the data center industry
 

Monday, October 20th, 2014

    11:36a
    GlobalFoundries Gets $1.5B to Take Over IBM’s Processor Business

    IBM will pay GlobalFoundries $1.5 billion in cash over the next three years as part of the Silicon Valley-based semiconductor company’s takeover of its processor manufacturing business, which reportedly has been losing money. IBM said on Monday it will continue its processor R&D activity, including a recently announced major investment program in this area, and GlobalFoundries will have access to the results of that research.

    GlobalFoundries, headquartered in Santa Clara, California, gets thousands of patents and semiconductor manufacturing operations and facilities in East Fishkill, New York and Essex Junction, Vermont. The company will also be the exclusive supplier of server processors for IBM for the next decade.

    This is the second major divestiture IBM has done this year as it attempts to hit its projected earnings-per-share goals and reverse the trend of continuously declining revenue. It has already sold its x86 server business, and now it no longer wants to be in the processor-making business.

    In July, IBM announced a $3 billion investment in processor R&D, and GlobalFoundries will have access to the program’s results through its and IBM’s collaboration with Colleges of Nanoscale Science and Engineering at the SUNY Polytechnic Institute in Albany, New York.

    IBM has made no secret of its intent to use its Power business to eat into Intel’s share of the chip market. Today, it licenses the architecture to others through the OpenPower Foundation.

    But trying to eat Intel’s lunch is a tough job. Intel is ahead of IBM in process technology and spends $10 billion a year on R&D, according to an Intel spokesperson. Big Blue’s $3 billion investment, by contrast, will be spread over the course of five years.

    IBM has continued to release new models of its Power Systems servers, launching the latest ones, based on its Power8 processors, in April. It has positioned the chips and the servers for high-octane workloads, such as big data, high performance computing and databases.

    The company announced a big Power8 customer win last week, saying that French hosting giant OVH has decided to build a new big data cloud infrastructure service, called RunAbove, on Power Systems servers.

    IBM has been promising its shareholders $18 in earnings per share for 2014. Its 2013 EPS was $16.28, and divesting money-losing businesses appears to be one way its senior management is working toward its 2014 projections.

    In September, IBM closed the sale of its commodity x86 server business to Lenovo, making the Chinese hardware company the biggest x86 server supplier in China and third biggest in the world.

    IBM has been shopping the chip manufacturing business around since last year, according to a Bloomberg report that cited anonymous sources.

    Software was the company’s only business segment to show some revenue growth in 2013. Revenues of the other four segments were either flat year over year or shrank, and systems and technology, the hardware segment, reported by far the biggest drop in revenue: 19 percent.

    12:00p
    Hurricane Electric Expands IPv6 Network to More Equinix Data Centers in Europe and Asia

    Hurricane Electric, which operates a global IPv4 and IPv6 network and claims that its IPv6 network is the world’s largest, has added more Equinix data centers in Europe and Asia to its network.

    With recently established Points of Presence at Equinix facilities in Munich, Frankfurt and Hong Kong, Hurricane Electric’s network now spans 17 of the Redwood City, California-based colocation provider’s facilities.

    As more and more devices get connected to the Internet and the pool of IPv4 addresses rapidly dwindles, IPv6 service has become a crucial capability for a service provider to offer. IPv6 adoption is quicker in some countries than in others.
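
    As an aside, it is easy to check whether a given service actually answers over IPv6 using nothing but the Python standard library. The short sketch below is illustrative only; the hostname is a placeholder.

        # Illustrative sketch: does a hostname publish IPv6 (AAAA) addresses, and can
        # we open a TCP connection to one of them? The hostname is a placeholder.
        import socket

        def ipv6_addresses(host, port=443):
            """Return the IPv6 addresses published for a hostname, if any."""
            try:
                infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
            except socket.gaierror:
                return []
            return sorted({info[4][0] for info in infos})

        def reachable_over_ipv6(host, port=443, timeout=5):
            """Try to open a TCP connection to each of the host's IPv6 addresses."""
            for addr in ipv6_addresses(host, port):
                try:
                    with socket.create_connection((addr, port), timeout=timeout):
                        return True
                except OSError:
                    continue
            return False

        host = "example.com"  # placeholder; substitute a service you operate
        print(host, "IPv6 addresses:", ipv6_addresses(host))
        print(host, "reachable over IPv6:", reachable_over_ipv6(host))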

    Adoption in Germany is the second-highest in Europe, behind Belgium, according to a website Google has put together to track IPv6 adoption. Germany is on par with the U.S. The adoption rate in Asian countries is extremely low at this point.

    In April, the American Registry for Internet Numbers said it was down to 16 million IPv4 addresses. ARIN and other regional Internet registries, including ones in Europe and Asia, have been operating in austerity mode, evaluating with extra scrutiny every request for a new block of IPv4 addresses they receive.

    Hurricane Electric competes with Equinix to some extent – it operates two colocation data centers on Equinix’s home turf in Silicon Valley – but it has had PoPs in the provider’s data centers since 2002.

    Mike Leber, Hurricane Electric president, said the Internet was on the “true cusp” of widespread IPv6 migration, and expansion of the company’s network into more Equinix data centers was its way of preparing. “We are live in 22 countries now, many through Equinix, and look forward to realizing our goal of being present in 100 countries by continuing to proliferate our offerings through strategic colocations,” he said in a statement.

    3:30p
    Architecting the Right Cloud Stack for Your Enterprise

    Sebastian Stadil is founder and CEO of Scalr and founder of the Silicon Valley Cloud Computing Group.

    Cloud computing has been on Gartner’s list of strategic technologies for the past five years, and with good reason: the promise of self-service provisioning that accelerates application delivery and business innovation is a strong one. However, progressing toward this goal means making several critical decisions around each of the layers making up your cloud architecture.

    With cloud being a vast and rapidly evolving ecosystem, the most crucial mistake an enterprise can make is to assume that all solutions that fall in a given category are equivalent. And while that realization can quickly make the architecture process overwhelming, we will break it down for you here with a set of questions for each cloud layer.

    The three layers of technology that define the cloud stack – resource, platform, and management – each have their own options and key considerations. However, the ultimate business needs and goals should drive the decision process.

    Enterprise resource layer

    It’s highly likely that this foundation of the cloud stack, which includes components such as hardware, storage, virtualization and network infrastructure, is already in place. And while the enterprise is likely already familiar with choosing the best-fit resource layer components for its organization, when constructing cloud architecture, this is an initial decision point. Enterprises must decide whether they will work with a public cloud provider that manages this layer for them, whether they will reuse their existing resources and manage their own resource layer in a private cloud, or whether they will combine the two approaches.

    As such, the two critical business requirement questions enterprises need to ask themselves are:

    1. Are our planned cloud workloads sensitive, and do they need to run on-premises? If so, a private cloud will ensure these workloads meet regulatory, performance and/or security requirements.
    2. Are the workloads elastic, with resource requirements varying greatly over time? Public clouds are best suited for elastic workloads. Most often associated with development, test, and demo environment needs, public clouds will likely be the best, most cost-effective choice for this type of requirement.

    Cloud platform layer

    This layer of technology automates resource provisioning by presenting an API that other pieces of technology can leverage and by translating requests made to that API into lower-level commands that are sent to the resource layer. Just like the resource layer, cloud orchestration layers can be public or private.
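
    As a rough illustration of that translation step, here is a minimal, hypothetical sketch; none of the class or method names below come from any real orchestration product.

        # Hypothetical sketch of the platform layer's job: expose a simple provisioning
        # API and translate each request into lower-level commands against the resource
        # layer. All names are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class ProvisionRequest:
            name: str
            vcpus: int
            ram_gb: int
            image: str

        class ResourceLayer:
            """Stand-in for the hypervisor, storage and network layer underneath."""
            def allocate_compute(self, vcpus, ram_gb):
                print(f"allocating {vcpus} vCPUs / {ram_gb} GB RAM")
            def attach_boot_volume(self, name, image):
                print(f"cloning image {image} into a boot volume for {name}")
            def plug_network(self, name):
                print(f"attaching {name} to the tenant network")

        class PlatformLayer:
            """Accepts API requests and orchestrates the resource layer."""
            def __init__(self, resources):
                self.resources = resources
            def create_server(self, req):
                # The translation step: one API call becomes several low-level actions.
                self.resources.allocate_compute(req.vcpus, req.ram_gb)
                self.resources.attach_boot_volume(req.name, req.image)
                self.resources.plug_network(req.name)
                return {"name": req.name, "status": "ACTIVE"}

        platform = PlatformLayer(ResourceLayer())
        print(platform.create_server(ProvisionRequest("web-01", vcpus=2, ram_gb=4, image="ubuntu-14.04")))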

    Businesses assessing cloud orchestration platforms should ask themselves three important questions:

    1. Will cloud become a core competency? If so, the enterprise should consider OpenStack with its feature breadth and flexibility. Planning to make cloud a core competency does require that substantial resources are committed to achieve the goal.
    2. Does the organization value stability over development velocity? If so, CloudStack or a stable, packaged distribution of OpenStack, such as Nebula One, may be a better option, minimizing the resources committed, with the trade-off being a reduced feature set.
    3. Has the enterprise already made a commitment to working with AWS and therefore API compatibility with AWS is critical? If so, Eucalyptus (recently acquired by HP) may be the best choice as it guarantees full AWS API compatibility.

    Cloud management platform

    The final layer before the application stack, cloud management platforms are the interaction layer between developers, IT, the business and the enterprise’s cloud infrastructure.

    To determine the best cloud management platform, enterprises should assess:

    1. Is the enterprise seeking to manage a hybrid or multi-cloud environment? If the answer to this question is no, and an enterprise is managing one cloud, the cloud console that ships with its cloud orchestration platform, such as the AWS Console or OpenStack Horizon, may be the preferred approach. However, for enterprises seeking a single pane of glass from which to provision and manage multiple clouds, a full-featured cloud management platform is recommended (a simple sketch of the idea follows this list).
    2. Are business agility and cloud governance priorities? While cloud management is a crowded and ill-defined space, the range of available solutions is vast, and enterprises should do their homework to ensure that their corporate priorities are reflected in the CMP’s capabilities. It is critical not to fall for the common misconception that all cloud management platforms offer the same breadth. Before issuing an RFP or selecting a vendor, it is highly recommended that an enterprise scope its governance and other operational needs.
    3. Is the goal of the move to cloud computing to accelerate development and innovation? If so, it is recommended that enterprises seek a cloud-native cloud management platform, which supports a forward-thinking, DevOps-centric approach to cloud management, as opposed to legacy cloud management platforms that simply layer traditional IT infrastructure provisioning workflows onto the cloud.
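
    Below is the sketch mentioned in question 1: a deliberately tiny, hypothetical illustration of the “single pane of glass” idea, with one front end that applies a governance check and then dispatches to per-cloud drivers. The driver classes and the size policy are invented; real cloud management platforms add far richer governance, cost and workflow features.

        # Hypothetical single-pane-of-glass sketch: one provisioning entry point,
        # a simple governance policy, and per-cloud drivers behind it.
        class AWSDriver:
            def launch(self, size):
                return f"aws instance ({size})"

        class OpenStackDriver:
            def launch(self, size):
                return f"openstack server ({size})"

        class CloudManagementPlatform:
            def __init__(self, drivers, max_size="large"):
                self.drivers = drivers
                self.sizes = ["small", "medium", "large"]
                self.max_size = max_size

            def provision(self, cloud, size):
                # Governance check: enforce the size policy before touching any cloud.
                if self.sizes.index(size) > self.sizes.index(self.max_size):
                    raise ValueError(f"size '{size}' exceeds the policy limit '{self.max_size}'")
                return self.drivers[cloud].launch(size)

        cmp = CloudManagementPlatform({"aws": AWSDriver(), "openstack": OpenStackDriver()}, max_size="medium")
        print(cmp.provision("aws", "small"))
        print(cmp.provision("openstack", "medium"))
        # cmp.provision("aws", "large") would be rejected by the governance policy.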

    Enterprises should be careful in the architecture choices they make not to lock themselves into a solution that inhibits agility and their ability to dynamically manage cloud resources. Defining a cloud infrastructure stack is a series of critical decisions that must complement each other. With each layer of the cloud stack there are numerous options, and no two enterprises will share the exact same stack: each needs to define its requirements, identify the solutions that best match those requirements, and build the cloud stack that will translate into business success.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:14p
    Amerijet Restructures IT Team to Set Itself Up for Growth

    Organizational changes are never painless, but they are unavoidable. Rapid change in the way companies do IT has made deep organizational changes necessary in many companies.

    Today, IT is increasingly viewed as a way to enable business growth, which necessitates an IT team whose roles are radically different from those of a team that sets up office networks, manages an Exchange server, sets up employee PCs and answers support calls. Both the technology and the talent mix of the modern IT team are different. It is a service provider and a driver of the company’s strategy.

    Amerijet International, the Fort Lauderdale, Florida-based cargo airline operator, recently went through the transition from the old model of IT to the new, and Jennifer Torlone was the person the company hired to make the switch.

    Torlone, senior director of technology and information services at Amerijet, spoke about the transition in her keynote at the Data Center World conference in Orlando Monday morning.

    When she was interviewing for her position in 2011, the management mentioned they were planning to double the company’s size within the next few years. They also mentioned that IT was somewhat of a mess, something the new head of technology would have to untangle in addition to setting up the infrastructure to support the growth plan.

    The company had grown through acquisitions, but there wasn’t a unified IT platform to integrate each acquired firm’s infrastructure onto. Instead, things were “bolted on,” in Torlone’s words.

    There were multiple IT teams operating in silos, and even the basic services, like providing work computers to staff, took a long time. For her first six weeks Torlone did business using her own iPad and Blackberry and a personal Yahoo! account.

    “I needed to know that it wasn’t going to be like this every day, forever,” she said.

    And on top of having to clean house, business leaders were looking to her to help grow the company.

    Hard choices

    So Torlone went to work, at first focusing primarily on changing the way the company’s IT staff went about their business. There needed to be a cultural change, and that meant some hard decisions needed to be made.

    “It’s important for you to have the best and the brightest team that you can,” she said. “Unfortunately there is a time when you have to separate the wheat from the chaff.”

    As with any IT organization, her resources were limited, and she needed to reassess the team and make sure she was spending resources on the right talent. “Only one person from that original team remains with me today,” she said.

    Hire for the vision you have

    She was careful in selecting people to fill the positions that were freed. Hiring carefully is crucial because staff turnover is very costly.

    According to Torlone, the cost of replacing an employee can range from 100 percent to 300 percent of the salary of the person being replaced.

    Torlone established a new hiring process for the IT department. Her team built a behavioral test tool, which helped her find people with the right mindset for the kind of things she wanted to do.

    Candidates that passed the test were then handed over to the hiring managers to assess their technical skills.

    “You need to hire with your vision in mind,” Torlone said.

    And she did not focus on hiring people with deep technical expertise for the IT jobs. She was hiring a lot of business analysts and project managers, not typical IT employees.

    She said she believes in taking her employees out of their comfort zones and asking them to do things they hadn’t done before. “They have the seeds… and you can cultivate that.”

    Outsource the day-to-day

    Torlone also outsourced a lot of the day-to-day work the IT team was doing. All level one and level two support is now outsourced to Dell, for example.

    While employees may get nervous when things are getting outsourced, if the employees in place are good at what they do, the idea is not to get rid of them but to have them take on different, more demanding tasks that take advantage of more of their talents.

    7:00p
    Combining Cloud With Disaster Recovery and Business Continuity

    Emergencies happen and environments go down, but the business process must go on! Right? Over the past few years, smaller and even mid-size organizations have found it challenging to enter the DR and even the business continuity conversation.

    First things first: it’s important to understand that disaster recovery and business continuity are two different business objectives. However, they can certainly overlap. Even today there is still some confusion about what the cloud can really deliver: not so much how the cloud works, but more around its DR capabilities. Folks, whether you’re a small shop or a large enterprise, if you haven’t looked at the cloud as a DR or business continuity option, it’s time that you do.

    There are specific cloud technologies that have become driving factors for better business IT redundancy.

    • Cloud-based traffic management. This is all about global traffic management (GTM) and global server load balancing (GSLB). Our ability to logically control traffic over vast distances has allowed the cloud to become a true hub for many different kinds of DR strategies (a minimal failover sketch follows this list).
    • Software-defined everything! Let’s face it: working with today’s modern networking controllers is pretty nice. We can now create thousands of virtual connection points from one physical switch port. Plus, we’re able to control layer 4-7 traffic at a much more granular level. Networking aside, the software-defined concept also extends to the data center, storage, security and much more. The point is that you can now abstract a lot of physical resources directly into the cloud layer.
    • Virtualization. Working with agile technologies like virtualization allows the sharing, replication, and backup of various kinds of images and workloads. These images can then span global data centers depending on the needs of the organization. Image control has come a really long way. Open source technologies like OpenStack, CloudStack, and Eucalyptus allow for VM and image management spanning many different types of cloud environments.
    • New types of storage and hardware intelligence. Cloud-based storage solutions have come a long way. Whether we’re working with an open-source cloud storage platform or something more traditional, like EMC or NetApp, our ability to control storage is pretty advanced. Data deduplication, controller multi-tenancy, and fast site-to-site replication make cloud storage systems a powerful part of the DR process. Here’s the other concept to understand: it’s now much easier to build your own commodity storage/backup platform. As software-defined storage continues to gain popularity, creating your own disk and flash resource pool becomes very feasible. From there, you can point all data and storage resources to one logical storage head. This software-defined storage component will let you span cloud environments regardless of the underlying hardware.
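
    To make the traffic-management bullet above a bit more concrete, here is a toy sketch of the failover logic at the heart of GTM/GSLB: probe each site and steer traffic to the first healthy one. The site addresses are placeholders from the documentation ranges; a real GSLB service implements this with DNS answers, short TTLs and geographic steering policies.

        # Toy GTM/GSLB failover sketch: health-check sites in priority order and pick
        # the first one that answers. The addresses are placeholders and will not respond.
        import socket

        SITES = [
            ("primary-dc", "203.0.113.10", 443),
            ("cloud-dr", "198.51.100.20", 443),
        ]

        def is_healthy(address, port, timeout=2):
            """Health probe: can we open a TCP connection to the site?"""
            try:
                with socket.create_connection((address, port), timeout=timeout):
                    return True
            except OSError:
                return False

        def pick_site(sites):
            """Return the first healthy site in priority order."""
            for name, address, port in sites:
                if is_healthy(address, port):
                    return name, address
            raise RuntimeError("no healthy site available")

        print("steer traffic to:", pick_site(SITES))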

    Here’s the best part: moving toward a cloud DR strategy can be very cost effective. Organizations will need to conduct a business impact analysis (BIA) to establish effective recovery time objectives (RTOs) and the internal DR strategy. However, once that’s complete, administrators can look at various cloud models to help them deliver true infrastructure high availability. From the cloud’s perspective, the options are truly numerous (a simple selection sketch follows the list below):

    Hot site or active/active configurations

    • Requirements: Absolutely minimal downtime.
    • This can be pricey. Basically, you would need a hot site that is always operational – still, this may be a necessity.

    Warm or cold site active/passive

    • Requirements: Some downtime is allowed. But not prolonged.
    • This is less expensive and can be adopted under a “pay-as-you-use” cloud model.

    Workload-based disaster recovery or business continuity

    • Requirements: The entire infrastructure does not need to be recovered – only certain services or applications.
    • While still stored in the cloud, applications, databases, or other services can either be mirrored live or be provided in case of an emergency.

    Backup-based recovery

    • Requirements: Downtime is not a major factor – but the application or workload is still very important and needs to be brought up quickly.
    • Similar to workload-based recovery, these cloud services replicate data, applications or other services to a cold VM-based backup. Should the need arise, either specific data or an entire workload can be recovered. Depending on the contract, this process is a bit slower.
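
    As promised above, here is a simple sketch of how BIA output might map onto the recovery models just described. The recovery-time estimates are invented for illustration; real tiering decisions come out of the business impact analysis itself.

        # Map each workload's RTO (from the BIA) to the cheapest recovery model whose
        # typical recovery time still meets it. All hour figures are illustrative.
        RECOVERY_MODELS = [
            ("backup-based recovery", 48.0),              # slowest, cheapest
            ("workload-based recovery", 12.0),
            ("warm or cold site, active/passive", 4.0),
            ("hot site, active/active", 0.1),             # fastest, most expensive
        ]

        def choose_model(rto_hours):
            """Return the cheapest model whose typical recovery time fits the RTO."""
            for model, typical_recovery_hours in RECOVERY_MODELS:
                if typical_recovery_hours <= rto_hours:
                    return model
            return RECOVERY_MODELS[-1][0]  # nothing slower fits; fall back to active/active

        workloads = {"order-processing": 0.1, "reporting": 8, "archive": 72}
        for name, rto in workloads.items():
            print(f"{name}: RTO {rto}h -> {choose_model(rto)}")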

    There are numerous kinds of services being offered by cloud vendors and providers. The really interesting part here is that these products and services are being delivered to a broader range of verticals and to organizations of various sizes. With more resilience, better replication, and a lot more support, the cloud can be a pretty powerful platform for your DR strategy.

    7:48p
    Standards Organization ISO Takes on Cloud Computing Standards


    This article originally appeared at The WHIR

    Given the quality differences among cloud services and issues of compatibility, ISO, the world’s best known standards body, has issued two standards related to cloud computing.

    The first standard, ISO/IEC 17788, provides definitions of cloud computing terminology such as Software as a Service (SaaS), and the difference between “public” and “private” cloud deployments.

    The second standard, ISO/IEC 17789, deals with cloud computing reference architecture. It contains diagrams and descriptions of how the various aspects of cloud computing relate to one another, including roles, activities, and functional components and their relationships within cloud computing.

    ISO hopes this will lay down the basic terminology and architectural framework, which will, in turn, help provide assurances to companies buying cloud services and allow the cloud computing industry to continue to grow.

    The standards were developed by the joint ISO/IEC technical committee JTC 1/SC 38, in collaboration with the International Telecommunication Union, and involved experts from more than 30 countries.

    The complete text of the 17788 standard can be purchased for around $60 (CHF 58), and 17789 for around $190.

    Another organization, the IEEE Standards Association, has originated two working drafts around cloud computing covering similar areas. P2301 (Cloud Profiles) provides information for different ecosystem participants such as cloud vendors, service providers, and users. P2302 (Intercloud) is meant to define topology, functions, and governance for cloud-to-cloud interoperability and federation.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/standards-organization-iso-takes-cloud-computing-standards

    9:39p
    Startup SocketPlane Plans to Bring SDN to Clouds That Use Docker Containers


    This article originally appeared at The WHIR

    SocketPlane, a new Software Defined Networking startup, is working on a solution to address the performance, availability and scale requirements of networking in large, container-based cloud deployments.

    SocketPlane received venture funding this week from LightSpeed Ventures to develop its own open-source networking stack.

    Given the massive popularity of Docker (as well as its growing support) as a container technology, SocketPlane aims to bring enterprise-grade networking solutions to the ecosystem that are native to Docker and easy for developers and network operators to use.

    Many developers have been interested in creating SDN solutions which essentially add a software layer between the network hardware and the software controlling it. This frees many networking functions from physical networking equipment and enables virtual networking solutions that are more adaptive and responsive.

    Earlier this year, for instance, Oracle bought carrier SDN startup Corente to help it virtualize the enterprise data center LAN and WAN. And Nokia and Juniper partnered to help bring SDN to the telco market.

    Google, Facebook and Twitter have all developed their own SDN capabilities to operate their large-scale, policy-driven networks with a great deal of agility.

    SocketPlane hopes to provide similarly DevOps-friendly capabilities to any container-based environment using Docker.

    “Our approach to SDN is unique compared to most ‘SDN’ vendors,” SocketPlane product VP Dave Tucker said in a statement. “Instead of trying to reinvent the wheel, we realized early on that there is value in our networking heritage, and we are building our solution on top of these solid technologies.”

    With its first product slated for release in early 2015, SocketPlane hopes to provide engineers with robust APIs for network automation, elastic network scaling, and network virtualization that simplifies network complexity.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/startup-socketplane-plans-bring-sdn-clouds-use-docker-containers

    10:22p
    The Myth of the Data Center’s High-Density Future

    ORLANDO, Fla. - Despite predictions by vendors that data center densities would go up, that hasn’t been the case so far. This misconception has led to a lot of data center design missteps, Kevin Brown, vice president of data center strategy for Schneider Electric, said.

    The projections overstated reality, which has led to numerous data centers that were overbuilt and ended up being sold by owners who wanted to wash their hands of them. The third parties that take the facilities over lease the unused capacity to multiple tenants. So the idea that building an entire data center for higher densities is a good way to future-proof the facility is a misconception.

    But determining data center design needs is hard. In a Monday morning presentation at Data Center World in Orlando, Brown suggested that designing density at the POD level is a more efficient way to go than planning for a certain density across the entire facility.

    Density policy at the POD level should still have peak and average specified. “If you have to oversize something, make it a POD, not rack power distribution or the whole data center,” he said. “Have contingency plans in case you miss.”

    Designing for lower density is easier to manage overall, according to Brown, and high densities don’t necessarily save money.

    “The data center of the future will look a lot like the one today,” he said. “Average density of 3-5kW per rack and peak of 11.4kW [can be operated] with a high degree of confidence. The days of data centers as rack-by-rack deployments are going away. We need to start thinking about the POD. Stop thinking of kW per rack, but by POD.”

    While Brown proposes this strategy in right-sizing data centers, it’s important to note Schneider’s growing investment in the prefab data center. It acquired AST earlier this year and has been rolling out modules and modular reference designs.

    Modules can adjust as the business evolves, and prefab modules allow a business to move faster. It is an argument against building unnecessarily big facilities, but Schneider also sells components for big mechanical systems installed in large data centers. The trend, however, is towards more incremental build-outs.

    The high density misconception

    There are two basic reasons for misguided density forecasts:

    • Mixed loads in data centers bring the average down
    • Technology issues and improvements

    “Density projections are really talking about servers,” he said. “What’s happening is the focus is no longer about driving performance, but increasing performance per watt. While experts predict densities are going up, servers are going the other direction. All data is showing this.”

    Cost per watt as density increases. Schneider Electric’s Kevin Brown said the return diminishes and design complexity isn’t worth potential savings when it comes to high density (source: Schneider Electric Data Center World presentation)

    Brown believes that mobile chip technology will continue to make its way up into the data center, increasing performance per watt and offsetting any supposed cost savings from higher-density data center space. This is one major trend working against high-density predictions.

    At low densities, rack count drives savings; at higher densities, rack count savings are countered by higher-cost racks and rack power distribution. The assertion is that the design complexity isn’t worth the savings.

    “As density goes up, it means bigger, more expensive racks, greater capacity, so power distribution is more expensive. Where I save money is fewer racks, less space. That’s my trade-off. Higher costs are supposedly offset.”

    When determining density, the cost curve drops really quickly as you reach 5kW per rack, then continues to fall at an increasingly slower pace up to 15kW. “Beyond 15kW, there’s no savings to be had. This is part of the reason densities are stabilizing.”

    Design as insurance premium

    Brown believes that when designing for uncertainty, the extra capacity should be looked at as an insurance premium.

    There are three strategies to provision for density uncertainty:

    • Oversize the whole data center (a bad idea)
    • Create oversized PODs
    • Create oversized racks.

    Design strategy should embrace POD deployment, said Brown. Oversized PODs are better than oversized racks.

    Oversizing each of the three should be viewed as paying an insurance premium, with varying degrees of desirability. Oversizing a rack is comparable to paying a premium of less than 1 percent, oversizing the POD is a 5 percent premium, and oversizing the data center is comparable to a 25 percent premium. The POD is the insurance premium that Goldilocks would choose. Oversize a few PODs in the design and the capacity guessing game becomes much easier.
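
    A quick back-of-the-envelope illustration of that premium framing: applying the percentages above to a hypothetical build budget shows why POD-level oversizing sits in the middle. The $10 million budget below is invented for illustration; only the premium percentages come from the presentation.

        # Apply Brown's "insurance premium" percentages to a hypothetical build budget.
        BUILD_BUDGET = 10_000_000  # hypothetical facility cost, in dollars

        PREMIUMS = {
            "oversize racks": 0.01,            # roughly "< 1 percent"
            "oversize a few PODs": 0.05,
            "oversize the whole data center": 0.25,
        }

        for strategy, rate in PREMIUMS.items():
            print(f"{strategy:32s} ~${BUILD_BUDGET * rate:,.0f} premium")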

    11:00p
    Digital Realty Taking its Medicine

    On its earnings call toward the end of the month we will likely hear some details about the properties Digital Realty Trust (NYSE: DLR), one of the world’s largest data center real estate companies, is going to sell.

    Describing its “new path forward,” the San Francisco-based company’s senior executives announced some big strategic changes after it wrapped up the first quarter. The new path included pruning the real estate investment trust’s massive portfolio to get rid of “non-core” properties. But the company has been mum on the subject since.

    “It seems like not a lot’s happening because you haven’t heard an announcement or seen a building transact,” Jim Smith, Digital Realty’s CTO, said. “The reality is that the team that does dispositions has been really cranking at full speed, because we wanted to make sure we did it right.”

    As painful as it may have been for the company’s management to admit to investors, not all buildings in the portfolio are exactly gold mines, and the time has come to trim the fat.

    “We haven’t sold anything in the whole life of the company, and that has been a mistake,” Smith said. “We have sort of 10 years of deferred maintenance on pruning. We should be selling a little bit every year. It’s just good real estate behavior.”

    What’s for sale?

    So what constitutes a “non-core” property for Digital Realty? In the early years after the private equity fund GI Partners spun it out in an IPO, the company’s management had not drawn a hard line between types of assets it would and wouldn’t own. Some buildings in the portfolio aren’t even data centers.

    One is a training facility for semiconductor equipment in an office park, for example. It’s high tech, with lots of expensive infrastructure, but it’s not a data center, Smith said. Another example is an office building that houses a technology company’s headquarters.

    There are non-core data centers as well. Some of them are facilities with thin operational teams – buildings Digital Realty acquired in markets where the management thought they would bulk up but didn’t. Having a single data center in a market is not a bad thing on its own, but if it needs some redevelopment work or a tenant’s lease is approaching expiration, it doesn’t fit the company’s current strategy.

    If the landlord has a good lease deal going in a building, but the tenant’s credit profile changes, that too may be a good reason to sell. Another good reason to sell would be having a building that’s currently overvalued because it is in a hot market, Smith explained.

    No shortage of buyers

    How easy or hard it will be to sell and how well the company will make out on each deal will vary widely. “Some of them will be easy, some of them will be hard, some of them will make money, some of them we may take some haircuts on,” Smith said.

    At the moment, there are plenty of bidders and valuations are high, so it’s a good time to be a seller in the market. “Anything that has ‘data center’ associated with it and has some cash flow is very interesting to many buyers,” he said.

    There are developers out there looking to reposition themselves for the data center market and willing to take on some risk. There are also local entrepreneurs in some markets for whom a partially occupied data center that is specific to their local market is attractive.

    Pamela Garibaldi, vice president of global marketing at Digital Realty, said there were also institutional investors shopping around. “There are some private equity funds out there that focus on the technology sector that want to acquire data center assets,” she said.

    If a data center has a tenant with decent credit, it will be fairly easy to sell. “There’s never been a shortage of buyers looking for credit tenants,” Jim Kerrigan, who has worked on high-tech real estate deals since the early 90s, said. A tenant with double-A credit or better and a good lease term is important, however.

    Most recently Kerrigan was part of Avison Young’s data center practice and worked in a similar role at Grubb & Ellis before that. He now runs his own real estate company called North American Data Centers.

    From private buyers to other REITs, pension funds and private equity players, the pool of buyers is broad and deep, Kerrigan said.

    Taking one’s medicine is a sign of maturity

    That Digital Realty is taking a hard look at its portfolio after a decade in business is a sign of maturity, said Chris Crosby, one of the company’s early employees who now runs his own data center development business, Compass Data Centers. “It’s a very solid strategy.”

    As any company ages, it runs the risk of suffering from past investments it has neglected to part with. Any time a company takes its medicine is a good sign of mature management, Crosby said.

    And Digital Realty can afford to take the time to do it thoughtfully. “We’re not in some sort of crisis, like ‘these things have to be disposed of,’” Smith said. “We want to optimize the value and maximize the value, and it’s a process that will take some time.”

