Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, March 4th, 2014

    1:00p
    COPT: The Federal Space Drives A Growing Data Center Business
    The COPT DC-6 data center in Manassas, Virginia.

    Corporate Office Properties Trust is a real estate investment trust (REIT) that has historically focused on properties for the federal government, but has seen success beyond the federal space since moving into data centers. In Northern Virginia, it owns facilities in Ashburn and Manassas. The company discussed the market and its progress.

    COPT is a $4 billion REIT with about 2.5 million to 3 million square feet of space under management.

    “In the past, we traditionally did office space,” said Mark Gilbert, Director of Data Center Business Development for COPT. “We own property outside Fort Meade, and in the National Business Park. We have about 30 office buildings there. Some of those office buildings have raised floor, so some is dedicated to data center space. We branched out to some of our customers and started building data centers.”

    One of those data centers on public record is the Northrop Grumman data center 1 in western Virginia.

    The federal space is how COPT got into data centers. “Our good government customers liked our work on the build-out for (their) data centers, so we migrated into the business,” said Gilbert.

    The company does own several properties outside the DMV (DC, Maryland, Virginia) area. “We generally follow the federal client and the systems integrators that support them,” said Gilbert. “San Antonio and Colorado Springs are two examples that have a dense footprint for federal. This strategy is still at our core.”

    Project in Ashburn

    Last January, the company announced it was building again in Ashburn, the leading data center hub in Northern Virginia. “In Ashburn, the powered shell has been delivered to the customer there,” said Gilbert. “The second building is about to come online, and it too will go to the same undisclosed customer.”

    While COPT won’t comment, it’s been widely speculated that Amazon Web Services is the customer.

    The Ashburn project is a “powered shell” – undeveloped space with the power and fiber connectivity already in place. This allows for easy expansion for companies with the capital to build the data center infrastructure themselves.

    “Powered shell was a new development for us. (The customer) pointed at land in Ashburn and said we need you to build,” said Gilbert. “We were able to execute and deliver 200,000 square feet in eight months. The large players seem to be going to powered shells. They like that you don’t have to be a landowner.”

    A COPT affiliate acquired the 34-acre Ashburn site from St. John Properties Inc in December 2012 for $14 million.

    Progress in Manassas

    Perhaps the most interesting tidbit about COPT’s Manassas facility is that the company actually rewards customers for higher power densities, a message that has been resonating in the market.

    The Manassas facility was originally built by Power Loft and historically struggled to compete with Ashburn-based rivals, but a recent visit shows the location has been thriving of late. COPT took a majority position in the facility in 2010 before buying Power Loft outright.

    “One thing about the Manassas facility is that it took us a while to find the model that made it relevant,” said Gilbert. “We can handle very high densities, air cooled. We reward densities. Most facilities raise the price; we actually lower the price the higher the density. At the Manassas facility, I can give people better pricing depending on the density.”

    In Manassas, roughly 30 percent of the facility is still available for development as raised floor. The company is building out more space and is currently in negotiations to fill the second building.

    What changed to make the Manassas building more attractive? “We’re a REIT, and the model in the past was a wholesale model, but the market changed,” said Gilbert. “The larger wholesale opportunities became fewer and further between. The facility has DoD-class fencing and a secure environment, and we decided to buy this data center.”

    COPT’s willingness to do smaller deals, coupled with its rewarding of higher densities and its strength in the federal space, all came into play in the building’s success.

    Northern Virginia: Healthy Demand, but Buyer’s Market

    What’s business like in the Northern Virginia market? “Four months ago it was miserable, today it’s great,” said Gilbert, who noted the business is somewhat seasonal and the slow season has come and gone. Even with additional entrants into the Northern Virginia market, Gilbert believes there is enough demand to absorb the supply coming online, though the area may be somewhat overbuilt and it is currently a buyer’s market. COPT, however, has landed a golden goose in its undisclosed commercial customer, which will most likely fill up space as COPT makes it available.

    “If our customers ask us to do something, we will do it,” said Gilbert. “We’re broadening the conversation. If a large SI (systems integrator) can grow, our basic story is that we can support you from five racks to 5,000. We’ll build to whatever your needs are.”

    Gilbert also mentioned that COPT is keenly interested in sale-leasebacks if the right opportunity comes along. There are quite a number of relatively new enterprise data centers built by companies that aren’t looking to be in the data center business. These companies often struggle to fill their facilities, so COPT sees an opportunity in buying the data center, leasing space back to the enterprise and turning the rest into rentable raised floor.

    1:30p
    Enterprise Networks: More Ways to Control the Major Drivers of OpEx

    Michael Bushong is the vice president of marketing at Plexxi. This column is part two of a two-part series looking at the cost factors in your networking infrastructure and how to control them. The first column, 5 Major Drivers of OpEx in the Enterprise Network, was published previously.

    MICHAEL BUSHONG
    Plexxi

    Operational expenses in enterprise networks stem from the total cost of ownership of network devices as well as the underlying data center architecture. In my previous post, I outlined five of the major drivers of operating expenses in the enterprise network. Now that we understand the source of these expenses, controlling those cost drivers needs to be a primary objective when designing any data center. Here are five architectural decision points to consider to help keep your operational expenses in check:

    1. Start with fewer devices
    Given the role that the number of devices plays in driving long-term operational expense, the most important decision a data center architect can make is the foundational architectural approach. To control costs, architects should favor designs that require the fewest devices possible. Legacy three-tier architectures are already being replaced by more modern two-tier approaches. As technology continues to evolve, two-tier architectures are being supplanted by completely flat designs. To the extent that these flat designs can reduce the total number of devices in the network, they can dramatically improve long-term cost models.
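    As a rough, back-of-the-envelope illustration (the port counts and oversubscription below are assumptions, not figures from this column), a flat leaf-spine fabric built from higher-radix spine switches generally needs fewer boxes than a legacy three-tier design for the same number of servers:

        import math

        # Illustrative box counts only; adjust the assumed port counts to your own gear.
        def three_tier_devices(servers, access_ports=48, uplinks=8, agg_fanin=8, core=4):
            """Rough device count for a legacy access/aggregation/core design."""
            access = math.ceil(servers / (access_ports - uplinks))
            aggregation = 2 * math.ceil(access / agg_fanin)  # paired aggregation switches
            return access + aggregation + core

        def leaf_spine_devices(servers, leaf_ports=48, uplinks=8, spine_ports=64):
            """Rough device count for a flat two-tier leaf-spine fabric."""
            leaves = math.ceil(servers / (leaf_ports - uplinks))
            spines = math.ceil(leaves * uplinks / spine_ports)
            return leaves + spines

        for n in (1000, 5000):
            print(f"{n} servers: three-tier={three_tier_devices(n)}, leaf-spine={leaf_spine_devices(n)}")

    Fewer boxes means fewer things to patch, monitor and eventually replace, which is where the long-term operational savings come from.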

    2. Utilization matters
    Capacity costs can be measured in simple price-per-unit terms. If the capacity actually used is only a fraction of what is available, the effective price per unit increases, and low utilization also forces higher capacity overhead. Architects should consider how to drive higher network utilization so that they can take advantage of better economics. This is likely to be a function of solution capability, so buyers will need to augment their purchasing criteria appropriately.
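    To make that concrete, here is a small hypothetical calculation (the port price is an assumption, not a figure from this column) showing how the effective cost per unit of capacity actually carried falls as utilization rises:

        def effective_cost_per_gbps(monthly_cost, capacity_gbps, utilization):
            """Cost per Gbps of traffic actually carried, not per Gbps of raw capacity."""
            return monthly_cost / (capacity_gbps * utilization)

        # Hypothetical numbers: a 10 Gbps port costing $500 per month.
        for util in (0.25, 0.50, 0.75):
            print(f"{util:.0%} utilization -> ${effective_cost_per_gbps(500, 10, util):,.0f} per delivered Gbps")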

    3. Architect for uptime
    Architecture has a profound impact on network downtime (scheduled or otherwise). Architects need to pay careful attention to failure and maintenance domains, resilience features, and upgrade procedures. Further, customers should consider how short-term capital costs amortized over the life of the equipment compare to longer-term downtime trends. These evaluations are particularly important where fees and penalties are concerned (as with managed or cloud services). Additionally, the cost of even minimal downtime for certain applications (e.g., ecommerce, financial services, and so on) can exceed capital costs.
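    A hedged, back-of-the-envelope comparison (every figure below is hypothetical) shows how the downtime avoided by a more resilient design can outweigh its amortized capital premium:

        def annual_downtime_cost(availability, revenue_per_hour):
            """Expected yearly cost of downtime at a given availability level."""
            hours_down = (1 - availability) * 365 * 24
            return hours_down * revenue_per_hour

        # Assume an extra $300,000 of capital, amortized over 5 years, improves availability
        # from 99.9% to 99.99% for an application losing $20,000 per hour of downtime.
        extra_capex_per_year = 300_000 / 5
        avoided = annual_downtime_cost(0.999, 20_000) - annual_downtime_cost(0.9999, 20_000)
        print(f"Amortized capital premium: ${extra_capex_per_year:,.0f} per year")
        print(f"Expected downtime cost avoided: ${avoided:,.0f} per year")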

    4. SDN should provide relief
    The central control model that SDN promotes provides a single point of administration, which will drive maintenance costs down. Accordingly, buyers should consider SDN controller-based capabilities as a top-tier purchasing criterion for data centers where cost is important.

    5. DevOps is still in its formative stages
    Automation is clearly the future for most large-scale data centers. The transition to a fully automated environment will largely depend on a management discipline and a tooling ecosystem that are both still emerging. This is generally referred to as DevOps (development and operations). Put simply, DevOps provides a tailored glue layer between management models, maintained by a combination of management frameworks (Chef, Puppet, Ansible, and so on) and in-house programming staff.

    Because DevOps is in its formative stages, it is impossible to predict with any kind of precision which tools will ultimately win. It is highly likely that companies will, in fact, operate with some mix of commercial and homegrown tools designed to meet their specific requirements. We’re already seeing this happen. It has resulted in a fractured operational tooling landscape. Point tool integrations will be handled case-by-case, typically driven by significant revenue opportunities. This will cause a scattered DevOps tool support matrix that will not perfectly match most customer environments.

    Until DevOps frameworks become richer and natively support more management models, expect to see higher in-house development costs to maintain a fully DevOps-automated environment. Accordingly, data center architects will need to consider not just operational tool support but also the ongoing integrability of tools within the architecture. This should favor DevOps-friendly solutions built on underlying data service infrastructures that allow for repeated integration with new tools.
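    As a minimal sketch of that glue-layer idea, the snippet below renders a single in-house desired-state model into inputs for two different frameworks. The model fields and the rendering targets are illustrative assumptions, not taken from any specific product:

        # One in-house desired-state model, rendered for more than one framework.
        desired_state = {
            "vlan_id": 120,
            "vlan_name": "app-tier",
            "switches": ["leaf-01", "leaf-02"],
        }

        def to_ansible_style_task(state):
            """Render the model as a generic Ansible-style task dictionary."""
            return {
                "name": f"Ensure VLAN {state['vlan_id']} exists",
                "hosts": state["switches"],
                "vars": {"vlan_id": state["vlan_id"], "vlan_name": state["vlan_name"]},
            }

        def to_puppet_style_manifest(state):
            """Render the same model as a Puppet-style resource declaration."""
            return (
                f"vlan {{ '{state['vlan_id']}':\n"
                f"  ensure    => present,\n"
                f"  vlan_name => '{state['vlan_name']}',\n"
                f"}}"
            )

        print(to_ansible_style_task(desired_state))
        print(to_puppet_style_manifest(desired_state))

    The point is not the specific output format, but that the desired state lives in one place maintained by in-house staff, while each framework consumes whatever rendering it needs.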

    Restructuring any data center architecture begins with minimizing complexity. The ultimate measure of effective design, though, is whether complexity (and associated cost) remains low as applications place additional capacity and management requirements on the infrastructure. Taking these architectural points into consideration in the design phase of your data center will help keep unnecessary operational expenses to a minimum and eliminate the excess complexity that often accompanies them.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Five Technologies That Will Improve Your Cloud


    You’re a large organization, using cloud computing to your advantage. You’ve enjoyed your cloud experience and see how this model is helping you evolve. Still, you wish you could make your cloud infrastructure run a little bit better. You’d love to experience better performance and utilize your cloud resources a bit more efficiently. But how can you upgrade your cloud experience without recreating your entire architecture?

    Many shops have now migrated to some type of cloud model, for many different reasons, and the use cases that keep emerging make cloud a powerful platform.

    Still, like everything in technology, solutions right out of the box can usually be optimized further. So let’s take a look at five technologies that have helped organizations of all sizes use their cloud infrastructure more intelligently.

    Cloud Automation

    This is a pretty big one because you’re effectively creating powerful cloud intelligence. It’s not quite a “set it and forget it” system. But it can get pretty close. Technologies like CloudPlatform, OpenStack and Eucalyptus all create powerful management extensions for your cloud environment. Dynamic resource provisioning and de-provisioning, workload and application control, and a very powerful distributed cloud management portal can help organizations gain new insights into their infrastructure.

    At this logical layer, you can connect a myriad of cloud models into one automation and orchestration module to create a proactively dynamic cloud infrastructure. Remember, you can automatically control everything from storage resources to VM provisioning here. If you’re a growing organization utilizing various cloud technologies, cloud automation can make your life a lot easier.
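    As one illustration of programmatic provisioning against such a platform, here is a minimal sketch using the OpenStack SDK (openstacksdk); the cloud name and the image, flavor and network names are placeholders that would come from your own clouds.yaml, not values suggested by the article:

        # Minimal provision/de-provision sketch with openstacksdk (placeholder names).
        import openstack

        conn = openstack.connect(cloud="mycloud")  # cloud defined in clouds.yaml

        def provision(name):
            """Create a small server and wait until it is ACTIVE."""
            image = conn.compute.find_image("ubuntu")
            flavor = conn.compute.find_flavor("m1.small")
            network = conn.network.find_network("private")
            server = conn.compute.create_server(
                name=name,
                image_id=image.id,
                flavor_id=flavor.id,
                networks=[{"uuid": network.id}],
            )
            return conn.compute.wait_for_server(server)

        def deprovision(name):
            """Delete the server once the workload no longer needs it."""
            server = conn.compute.find_server(name)
            if server:
                conn.compute.delete_server(server)

        provision("demo-worker-01")
        deprovision("demo-worker-01")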

    Agnostic Cloud Control

    Private, public, hybrid, distributed, community – does it really matter? The cloud model continues to evolve, and soon it’ll just be “How can I manage all of my cloud instances agnostically?” The agnostic cloud concept arises from the fact that almost all cloud environments are pulling resources from outside of themselves. Whether it’s an application that lives as a SaaS instance or there is a connection into a public cloud provider for DR, the idea is to manage the whole thing intelligently.

    Technologies from vendors like BMC are beginning to explore the concept of agnostic cloud control. By connecting with the major control planes and interfacing with solid APIs, the cloud computing concept and everything beneath it can be better abstracted.
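    A minimal sketch of that agnostic layer, with hypothetical provider stubs standing in for real SDK or API calls:

        # A thin, provider-agnostic interface; the concrete classes are stubs, not real SDKs.
        from abc import ABC, abstractmethod

        class CloudProvider(ABC):
            @abstractmethod
            def create_instance(self, name: str) -> str: ...

            @abstractmethod
            def destroy_instance(self, instance_id: str) -> None: ...

        class PrivateCloud(CloudProvider):
            def create_instance(self, name):
                print(f"[private] creating {name}")
                return f"priv-{name}"

            def destroy_instance(self, instance_id):
                print(f"[private] destroying {instance_id}")

        class PublicCloud(CloudProvider):
            def create_instance(self, name):
                print(f"[public] creating {name}")
                return f"pub-{name}"

            def destroy_instance(self, instance_id):
                print(f"[public] destroying {instance_id}")

        def run_with_dr(primary: CloudProvider, dr: CloudProvider, name: str):
            """Run a workload on the primary cloud and keep a standby copy for DR."""
            return primary.create_instance(name), dr.create_instance(f"{name}-dr")

        run_with_dr(PrivateCloud(), PublicCloud(), "billing-app")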

    Software-Defined Technology (SDx)

    There are very real technologies behind many of these terms. Cloud computing relies on the virtual layer to run optimally, and virtualization and the logical layer have helped abstract away physical-only platforms. We now have software-defined storage, security, networking and even the software-defined data center. Each of those four examples already has technologies to back up the conversation:

    • Storage: Atlantis USX and VMware vSAN.
    • Networking: Cisco NX-OS and VMware NSX.
    • Security: Palo Alto PAN-OS and Juniper Firefly.
    • Data center: VMware SDDC and IO.OS.

    These are solid platforms which help control many new aspects of cloud computing. Furthermore, many of these SDx technologies directly integrate into the agnostic cloud model.

    2:30p
    Top 10 Data Center Stories, February 2014

    A look at the cooling units outside the SuperNAP 8 data center in Las Vegas. These 1000-ton units can switch between multiple cooling modes, and have on-board flywheels to provide extended runtime in the event of power outages. (Photo: Switch)

    Bitcoin infrastructure captured the imagination of Data Center Knowledge readers in February. The cryptocurrency was the focus of our two most popular stories for the month, followed by our look inside SuperNAP 8, the newest Switch data center in Las Vegas. Without further ado, here are the most viewed stories on Data Center Knowledge for February 2014, ranked by page views. Enjoy!

    3:00p
    Big Data News: Symantec Picks Splunk for Enterprise Security Tool

    Looker Datafold Engine empowers analysts with more meaningful insights through its in-database architecture, Symantec uses Splunk software to boost its security intelligence operations, and CommVault’s Simpana 10 software has achieved certified integration with the SAP HANA platform.

    Looker introduces enhanced Datafold Engine. Business analytics company Looker announced its enhanced Datafold Engine, with support for persistent derived tables to deliver faster, more meaningful business insights. Looker’s in-database architecture empowers data analysts and reduces their workloads, allowing them to model complex raw data quickly and in multiple ways. Extraction of data in advance of exploration often obscures the data or limits the ability to understand root cause by restricting detailed drilling. As a result, legacy approaches make it impossible to engage in real-time data discovery. The Datafold Engine uses the underlying analytics database to transform raw data at query time, enabling deep exploration of ever-growing and increasingly complex datasets. Persistent derived tables also free up precious technical talent for other business-critical projects. The Datafold Engine works in concert with LookML, Looker’s flexible modeling environment, to enable analysts to slice and dice large datasets by any combination of dimensions and measures. “The Looker Datafold Engine enables the unlocking of massive sets of data and delivers powerful value to today’s businesses,” said Frank Bien, CEO of Looker. “The result is a new kind of business—one that shares and collaborates around data and drives curiosity and intelligence throughout the organization.”

    Splunk selected by Symantec to help security intelligence operations. Splunk (SPLK) announced that Symantec (SYMC) has selected Splunk Enterprise 6 to help bolster its security intelligence operations. Symantec will centralize, monitor and analyze security-related data in Splunk Enterprise to help investigate incidents and detect advanced threats. Symantec will also use Splunk software to ensure comprehensive compliance with Sarbanes-Oxley (SOX) and the Payment Card Industry Data Security Standard (PCI DSS). “With today’s threat landscape, it’s critical that we react quickly to identify and respond to any type of threat, especially advanced threats that continue to increase in complexity,” said Julie Talbot-Hubbard, chief security officer, Symantec. “Our efforts, in combination with Splunk software, demonstrate that we are implementing best practices to not only protect our customers and partners, but also help with addressing critical customer problems.”

    CommVault Simpana integrates with SAP HANA platform. CommVault (CVLT) announced that its Simpana 10 software has achieved certified integration with the SAP HANA platform. Simpana 10 software delivers robust and comprehensive backup and recovery for environments running SAP HANA with the speed, ease of use and performance required in the fast-moving world of real-time, predictive analytics. The new SAP-certified integration with the SAP HANA platform is especially timely because of increasing demand from CommVault customers for deeper protection of enterprise database applications from SAP. “CommVault continues to enable enterprises to increase the business value of their information as a result of our investments in solutions for SAP HANA and our relationship with SAP as a member of the SAP PartnerEdge® program for Application Development,” said David West, senior vice president, worldwide marketing and business development, CommVault. “Companies can speed up their data management by using CommVault software together with solutions based on SAP HANA.”

    4:00p
    American Express Vacating Massive Minneapolis Data Center

    Colliers is marketing this site in Minneapolis, which currently houses an American Express data center.

    American Express is vacating a massive 541,000 square foot data center in Minneapolis, which will soon be up for sale, according to local media.

    The building is currently fully leased to American Express through 2014, but the Minneapolis/St. Paul Business Journal discovered Colliers International brokers are listing the property with no disclosed price so far. A sale could provide an opportunity for providers eyeing the Minneapolis market, which has been among the most active regional data center markets over the past year.

    The data center is an eight-story building located at 1001 Third Ave S. near the Minneapolis Convention Center.  It’s currently designed to support 5.4 megawatts of critical power and has a total of around 150,000 square feet of raised floor space, making it one of the largest blocks of raised floor in Minneapolis. The building is also in close proximity to the 511 Building, the city’s major interconnection hub.

    Mainframe Heritage

    The American Express site was originally a mainframe and print operations facility built in 1988 for IDS. American Express sold the data center in 2004 as part of a sale-leaseback transaction to Inland Western Retail Real Estate Trust. The lease to American Express was for a 10-year term, which is now up, and included six five-year renewal options (which apparently aren’t being exercised).

    Should a provider be interested, such a large facility turning into a multi-tenant site would create a major consolidation point for data center space in Minneapolis, given its size and close proximity to the city’s major interconnection hub.

    There’s been a wealth of multi-tenant provider activity in the region as of late. Cologix has been expanding rapidly at 511 11th Avenue. Stream Data Centers is building a 75,000 square foot data center in a southwest suburb (which just received Tier III certification from the Uptime Institute), Viawest has a 150,000 square foot build underway, and other providers, including DataBank, Compass Datacenters and Digital Realty Trust, are all making moves in the bustling Minnesota market. The data center might very well go to an enterprise once again, but that hasn’t been the trend. If a colocation provider is looking to enter the Minneapolis market in a big way, this building should be of interest.

