Data Center Knowledge | News and analysis for the data center industry
 

Monday, June 3rd, 2013

    11:30a
    Data Center Jobs: Critical Facility Manager

    At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking an Assistant Critical Facility Manager in Des Moines, Iowa.

    The Assistant Critical Facility Manager is responsible for team management: performing managerial functions including hiring, coaching and separations; directing the team to ensure successful achievement of business goals and process adherence; coaching, mentoring and developing members of the team, including conducting goal-setting worksheets and performance reviews; acting as a steward of McKinstry culture; communicating and influencing policies and procedures; developing and managing the training and staffing budget for the development team; and supporting and assisting the Critical Environments Facility Manager. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    1:16p
    Scenes From IMN’s New York Forum

    The C-suites at the offices of major data center developers must have been empty late last week, because it looks like they were all at the IMN Spring Forum in New York. Can you name these executives? Check out our photo gallery for the details. (Photo: Rich Miller)

    More than 300 data center professionals attended the IMN Spring Forum on Financing, Investment and Real Estate Development for Data Centers, held Thursday and Friday at the Conrad Hotel in Manhattan. The event featured panel discussions on the investment environment for data centers, design criteria for successful projects, and merger activity in the industry. Data Center Knowledge will provide full coverage of key trends and themes from IMN over the next several days. In the meantime, here’s a look at Scenes from the IMN Spring Data Center Forum.

    1:30p
    Network News: Rackspace Adds Brocade vRouter to Hybrid Cloud

    Brocade’s Vyatta vRouter enhances the Rackspace Hybrid Cloud offering, Mellanox adds a dense 12-port 40GbE switch, and Akamai integrates the Operator CDN offerings gained through its Verivue acquisition.

    Brocade to enhance Rackspace hybrid cloud.  Brocade (BRCD) announced the addition of the Vyatta vRouter to its portfolio of networking and security solutions for the Rackspace Hybrid Cloud. The software network appliance will add to the breadth of Rackspace’s networking solutions, which consist of physical firewalls and load balancers, as well as global traffic management, threat management, web application firewall and log management capabilities in the hybrid cloud. “We’re expanding the reach of our hybrid cloud portfolio by giving customers more choices for security and networking solutions to use across their public, private and dedicated cloud environments,” said John Engates, CTO at Rackspace. “To complement our existing physical firewall capabilities, the Brocade Vyatta vRouter now delivers virtual firewall and advanced networking solutions that enable customers to connect their data centers with the Rackspace Hybrid Cloud.” “The value of the Brocade Vyatta vRouter is both in the security and cost savings it provides,” said Jack Lopez, chief technology officer at Netcare. “The flexibility of this solution allows us to quickly create a secure virtual firewall that our customers can use to protect their cloud servers from outside threats.  This can be done in a fraction of the time it would take to configure a physical firewall solution and it is far more economical.”
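
    To make the virtual firewall idea concrete, here is a minimal sketch of a Vyatta-style rule set of the kind the vRouter supports. The rule-set name, interface and port are hypothetical, and this is an illustration rather than Rackspace’s actual configuration: drop inbound traffic by default, allow established sessions and HTTPS, then bind the rule set to an interface.

        set firewall name CLOUD-IN default-action drop
        set firewall name CLOUD-IN rule 10 action accept
        set firewall name CLOUD-IN rule 10 state established enable
        set firewall name CLOUD-IN rule 10 state related enable
        set firewall name CLOUD-IN rule 20 action accept
        set firewall name CLOUD-IN rule 20 protocol tcp
        set firewall name CLOUD-IN rule 20 destination port 443
        set interfaces ethernet eth0 firewall in name CLOUD-IN
        commit

    The appeal Netcare describes is that a rule set like this can be created and attached to a cloud server’s virtual interface in minutes, where racking and configuring a physical firewall would take far longer.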

    Mellanox launches 40Gb/s Ethernet Switch.  Mellanox (MLNX) announced its SwitchX-2 based SX1012 Ethernet switch, a cost-effective solution for small-scale high-performance computing, storage and database deployments. With 12 ports of 40GbE in a single 1U enclosure, each port can run at up to 56GbE line rate for added performance, or be split into four standalone 10GbE interfaces. The SX1012 is based on Mellanox’s Virtual Protocol Interconnect (VPI) technology and runs the full capabilities of MLNX-OS, enabling the switch to work in mixed Ethernet and InfiniBand environments for maximum usability across scenarios. “The Mellanox SX1012 Ethernet switch is a great fit for small-scale storage and database applications, providing very high throughput capacity in a compact enclosure,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “We are seeing increased market demand for small-scale 40GbE switches that can enable the creation of small high-performance clusters, storage solutions and database appliances. The new SX1012 switch delivers the right solution for these environments, removing the need for larger, more expensive switch platforms.”

    Akamai enhances Aura Network Solutions.  Akamai Technologies (AKAM) announced that Aura Lumen and Aura Spectra are now available as part of the company’s Aura Network Solutions family of Operator CDN (OCDN) technologies. Aura Network Solutions can help network service providers support new revenue generation opportunities, provide a superior user experience, manage network costs, and simplify the video delivery infrastructure for both operator-owned content and over-the-top (OTT) services. The integration of Aura Lumen (licensed CDN) and Aura Spectra (Software-as-a-Service CDN), obtained through Akamai’s acquisition of Verivue, has allowed Akamai to deliver Aura Lumen as a comprehensive solution that can be owned and run by the operator, giving operators greater control of their investment. “Many carriers are taking a very different approach to their CDN strategy than they did only a few years ago,” said Paris Burstyn, senior analyst, Ovum. “In the past, there was a strong desire for operators to build their own CDN and go it alone in the market. Now, they actively seek a partnership approach to address their needs. Working with the right partner can improve their network economics by offloading more Internet traffic, help sell enterprise and media CDN services to their customers, and leverage an open system CDN to deliver multi-screen video experiences to their subscribers.”

    2:14p
    IBM Unveils Big Data Analytics Software and Services

    IBM launches new big data analytics software and services, Hortonworks advances its Data Platform, and a Hadoop cluster on the TACC Longhorn system helps researchers mine data for science.

    IBM unveils Analytics Services.  IBM unveiled new Big Data analytics software and services to help organizations build and maintain their global workforce. Made in IBM Labs, the software allows businesses to analyze massive amounts of data shared by employees and uncover work-related trends that can be used to build and preserve more productive work environments and minimize attrition. New workforce analytics services include Survey Analytics, which uses text and visual analytics software to automatically extract and display over one million pieces of anonymous unstructured data derived from employee surveys. Retention Analytics provides a data-driven approach to understanding attrition patterns within a business: it applies predictive analytics software to enterprise HR, CRM and social data, and then identifies high-attrition “hot spots” within the company. Drawing on the offerings from IBM’s acquisition of Kenexa, the IBM Smarter Workforce initiative helps businesses capture and understand data and then use those insights to empower their talent, manage expertise and optimize people-centric processes. “Companies that invest in Big Data and analytics to nurture their workforce will keep the best talent and distinguish themselves from their competition,” said Dr. Bob Sutor, VP, Business Analytics and Mathematical Sciences, IBM Research. “Knowing what motivates people can boil down to the data you capture and how you interpret it. Using the insights identified by our new predictive and socially-driven workforce analytics tools, companies can ensure long-term success through employee engagement and meaningful work.”
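
    The “hot spot” idea is simple to see in miniature. The toy sketch below is illustrative only, not IBM’s software, and the figures are invented: it flags departments whose attrition rate runs well above the company-wide average.

        from collections import Counter

        # Hypothetical headcount and 12-month departure counts per department.
        headcount = {"sales": 120, "support": 80, "engineering": 200}
        departures = Counter({"sales": 30, "support": 6, "engineering": 14})

        overall = sum(departures.values()) / float(sum(headcount.values()))
        for dept, size in headcount.items():
            rate = departures[dept] / float(size)
            if rate > 1.5 * overall:   # flag rates 50% above the average
                print("hot spot: %s (%.0f%% vs. %.0f%% overall)"
                      % (dept, 100 * rate, 100 * overall))

    A production system would layer predictive models on top of this kind of grouping, but the output is the same in spirit: a ranked list of places where attrition is concentrated.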

    Hortonworks launches version 1.3 of its Data Platform.  Hortonworks announced the availability of version 1.3 of the Hortonworks Data Platform (HDP), the industry’s only 100-percent open source Apache Hadoop platform. The new release incorporates advancements from the open source community, as well as the first phase of the Stinger Initiative, an effort to improve both the performance and the SQL compatibility of Hive queries. “Hortonworks is dedicated to working with the community to advance 100-percent open source Apache Hadoop, and the regular cadence of updates to the Hortonworks Data Platform is designed to engage the entire IT ecosystem in its march toward a next-generation enterprise Hadoop platform,” said Bob Page, VP Products, Hortonworks. “The vast majority of Hadoop deployments rely on Apache Hive for proven and scalable SQL queries. HDP 1.3 is designed to improve the performance and SQL compatibility of Hive, empowering enterprises to execute SQL commands more effectively for faster, interactive queries and deeper analysis.”
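
    The Stinger work targets exactly this kind of interactive SQL-on-Hadoop workload. As a hedged illustration (the web_logs table, its columns and the date are hypothetical), the sketch below submits a HiveQL aggregation through the standard hive command-line client from Python:

        import subprocess

        # Top pages by view count for one day -- the sort of everyday Hive
        # aggregation that the Stinger Initiative aims to speed up.
        query = """
        SELECT page, COUNT(*) AS views
        FROM web_logs
        WHERE dt = '2013-06-03'
        GROUP BY page
        ORDER BY views DESC
        LIMIT 10;
        """

        # `hive -e <statement>` runs a single HiveQL statement from the shell.
        subprocess.check_call(["hive", "-e", query])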

    TACC Longhorn Hadoop cluster mines data for science. Initial experimentation with Hadoop, and a later technology grant to build a Hadoop-optimized cluster on the Longhorn remote visualization system, have led TACC (Texas Advanced Computing Center) to offer researchers a total of 48 eight-processor nodes on the Longhorn cluster to run Hadoop in a coordinated way with accompanying large-memory processors. “Hadoop provides researchers with the first major tool for doing groundbreaking research in the era of Big Data,” said Niall Gaffney, TACC’s director of data intensive computing. “I am very excited to see its early and fruitful adoption amongst researchers, as well as the explorations into how it can be used to take advantage of the world-class supercomputing resources TACC provides.”
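
    Hadoop Streaming is a common on-ramp for researchers on shared clusters like this one, since jobs can be written in any language that reads stdin and writes stdout. Below is a minimal, hypothetical word-count sketch, not TACC’s actual setup: a Python mapper and reducer of the kind Streaming runs.

        # mapper.py -- emit a tab-separated <word, 1> pair for every token.
        import sys

        for line in sys.stdin:
            for word in line.split():
                print(word + "\t1")

        # reducer.py -- sum counts per word; Hadoop sorts mapper output by
        # key, so all lines for a given word arrive consecutively.
        import sys

        current, count = None, 0
        for line in sys.stdin:
            word, _, n = line.rstrip("\n").partition("\t")
            if word != current:
                if current is not None:
                    print(current + "\t" + str(count))
                current, count = word, 0
            count += int(n)
        if current is not None:
            print(current + "\t" + str(count))

    A job like this is submitted with the streaming jar that ships with Hadoop, along the lines of: hadoop jar hadoop-streaming.jar -input <hdfs-in> -output <hdfs-out> -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py (paths here are illustrative).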

    3:00p
    SingleHop Expands in Amsterdam With Interxion

    A look at some of the cabinets inside the Interxion data center in Amsterdam, where Chicago-based hosting provider SingleHop has announced an expansion. (Photo: SingleHop).

    SingleHop is expanding into Europe to meet customer demand. The Chicago-based cloud hosting provider announced plans to open a new data center in Amsterdam, according to CMO and co-founder Dan Ushman, who said SingleHop has installed 30 racks in an Interxion data center.

    The new 2,000-server facility expands SingleHop’s server capacity by approximately 20 percent, and widens its global footprint to reach high-growth markets in Europe and the surrounding areas. The facility will help SingleHop meet growing demand for high quality dedicated and cloud infrastructure from value-added resellers, hosting providers and SMBs around the world.

    “This European data center solidifies our commitment to building a truly global network,” says Andy Pace, Chief Operating Officer at SingleHop. “With demand for hosted infrastructure and cloud computing growing quickly in Europe, this facility helps position us to satisfy this demand with our award-winning automated approach to infrastructure hosting that can have clients up and running in minutes.”

    Amsterdam Ideal Location for European Expansion

    SingleHop believes Amsterdam offers a well-connected location with a skilled workforce and a significant number of quality network providers and data centers in the area.

    The facility will replicate the entire SingleHop technology platform and security features of the company’s U.S. data centers, with biometric physical access controls, individually locking cabinets and 24-hour security. The majority of the company’s current services will be offered through the new data center, including dedicated servers, enterprise private clouds, and managed services. Customers can choose to deploy SingleHop’s services at its data centers in Chicago, Phoenix or Amsterdam. All services will be covered by the SingleHop “Customer Bill of Rights” SLA, which guarantees service quality, customer service response times, network stability and much more.

    The Amsterdam data center will also be powered by SingleHop’s award-winning LEAP3 platform, giving customers complete control over their infrastructure, regardless of data center location, from any smartphone or computer.

    According to Gartner, $677 billion will be spent on cloud services worldwide over the next three years, with Western Europe predicted to be the second-largest region, accounting for nearly a quarter of all spending from 2013 to 2016. Eastern Europe, Asia/Pacific, Latin America, and the Middle East and North Africa are also expected to post the highest growth rates during this period.

    SingleHop provides cloud services to more than 4,000 customers hailing from 114 countries, and houses more than 10,000 servers in three geographically dispersed U.S.-based data centers in addition to the new facility opening soon in Amsterdam. SingleHop was established in 2006 in Chicago and was ranked #25 on the Inc. 500 list of the fastest-growing companies in America in 2011.

    SingleHop is featuring an early-bird pre-sale offer for customers with predictable infrastructure needs to reserve server capacity in the new Amsterdam data center in advance of its official opening. Pre-sale buyers will receive twice the RAM and bandwidth allocations, plus 10 percent off base pricing for the first six months.

    4:35p
    Top 10 Data Center Stories, May 2013

    During the month of May, the most popular story on Data Center Knowledge detailed Sears’ efforts to retrofit its shuttered retail outlets into data centers. Other topics trending well this month included the “lights out data center of tomorrow,” Iron Mountain’s underground facilities and the new photo feature titled “The Illustrated Data Center.” Here are the most viewed stories on Data Center Knowledge for May 2013, ranked by page views. Enjoy!

    Stay current on Data Center Knowledge’s data center news by subscribing to our RSS feed and daily e-mail updates, or by following us on Twitter or Facebook. DCK is now on Google+.

    4:42p
    Data Center or Ark? How Bad Weather Causes Construction Chaos

    Chris Curtis is the co-founder and SVP of Development for Compass Datacenters. We are publishing a series of posts from Chris that will take you inside the complexity of the data center construction process. He explores the ups and downs (and mud and rain) of constructing data center facilities and the creative problem-solving required for the unexpected issues that can arise with any construction project. For more, see Chris’ earlier columns on the start of construction and on the planning process.

    CHRIS CURTIS
    Compass Datacenters

    When a customer purchases a data center, they specify the date on which they would like to take possession of it. They tend to be pretty insistent on these things. I’ve yet to read a lease or purchase agreement in which terminology like “mid-month,” “June-ish” or “whenever it’s done” has been used to designate the time by which the project must be completed.

    From a developer’s standpoint, this means that the project must have a schedule, and that schedule must allow for the fact that during a six-month construction period, you’re probably going to run into some less-than-optimal weather. These weather considerations vary with geography (you don’t build in many snow days for a data center in Phoenix, for example), but you still have to take them into account. In other words: when developing a realistic development schedule, “weather days” must be built in.

    Monsoon, Anyone?

    Because I’ve learned some hard lessons about the varieties of weather, and their associated schedule impact, I looked at the average rain volume for the project’s location and built the estimated “weather days” into the schedule. Unfortunately, my calculations did not anticipate that, over the course of the project, the area would experience a volume of precipitation that can only be described as “biblical” in nature.

    In my case, rain completely stopped work for 45 days, and impacted a total of 69 days. In other words, 38 percent of our already aggressive schedule was blown away by the weather. At various times, the customer’s future data center appeared to be surrounded by a moat. Good if you’re building an impregnable medieval fortress, but for the future home of a million or so dollars of computing gear—not so much.
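
    A quick back-of-the-envelope check of that 38 percent figure, assuming the six-month schedule spans roughly 180 calendar days (an assumption; the column does not give the exact day count):

        # 69 weather-impacted days against an assumed ~180-day schedule.
        schedule_days = 180
        impacted_days = 69
        print("{:.0%} of the schedule".format(impacted_days / float(schedule_days)))  # ~38%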

    Weather Impacts, Schedule Struggles

    As you might expect, rain and construction are not a productive combination. People are cranky. Imagine trying to do your job if you were sitting in the equivalent of a cold shower all day. I’ll bet you wouldn’t be a barrel of laughs if you had to type on your keyboard with cold, “pruny” fingers for eight hours or so. Electricians tend to be especially averse to working in the rain, what with water being a conductor of electricity and all. I guess the heightened potential of a few thousand volts coursing through your body does tend to give you a little different perspective on the term “work-related accident.”

    Things tend to progress a little slower. Concrete takes longer to dry, trucks get stuck in the mud, and the humidity wreaks havoc on the painting process. The net result of this data center version of Noah’s Ark is that all of the slack in our schedule is now gone, or more accurately, has been gone for a while.

    Project Management Needed, And Some Persuasion

    A project schedule is a funny thing. Although everyone understands that you need one, it takes strong project management to get sub-contractors to deliver as committed. I guess seeing everything you’ve committed to written down on paper turns all the phrases you used to get the business, like “piece of cake” or “no problem,” into one giant “holy crap.”

    Based on this level of initial enthusiasm, you can only imagine the level of sheer panic on the part of all involved when that same schedule has to be compressed to meet the deadline. Situations like this test the mettle of even the most battle-hardened data center developer. Resistance is fierce, excuses are made, and there is always some begging for extra time. Some will even refuse to move forward. But you still have to find a way to gain their compliance to meet the date and maintain quality. I don’t know what methods other developers use to overcome these issues, but I rely on a combination of coordination, communication, persuasion and escalation to senior executives.

    At the end of the day, however, holding their money until successful completion always proves to be the most effective technique. Okay, so I’m no Vince Lombardi, or Tony Robbins for that matter, but for all you new developers out there I say that, “When the going gets rough, pull out the biggest stick in your bag.”

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:30p
    Blue Waters Supercomputer Gets 380 Petabyte Storage System

    The NCSA Blue Waters supercomputer adds a 380-petabyte storage system and uses NVIDIA GPUs to help researchers make a breakthrough in HIV research, while a Japanese observatory brings its Cray XC30 into production.

    NCSA launches 380 Petabyte Storage System for Blue Waters.  NCSA announced that a 380 petabyte High Performance Storage System (HPSS) is now in full production as part of the Blue Waters supercomputer project. The HPSS comprises multiple automated tape libraries, dozens of high-performance data movers, a large 40 Gigabit Ethernet network, hundreds of high-performance tape drives, and about 100,000 tape cartridges. “With the world’s largest HPSS now in production, Blue Waters truly is the most data-focused, data-intensive system available to the U.S. science and engineering community,” said Blue Waters deputy project director Bill Kramer. During acceptance testing, the new HPSS at NCSA ingested 426 terabytes and retrieved 499 terabytes of data in 24 hours, averaging a total throughput of 38.5 terabytes per hour. NCSA joined forces with the HPSS Collaboration’s Department of Energy labs and IBM to develop an HPSS capability for Redundant Arrays of Independent Tapes (RAIT), a tape technology similar to RAID for disk.
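
    The RAID analogy is easiest to see in miniature. This toy sketch illustrates the parity principle only, not NCSA’s RAIT implementation: XOR data blocks from several “tapes” into a parity block, then rebuild a lost block from the survivors.

        from functools import reduce

        def xor_blocks(blocks):
            """XOR a list of equal-length byte blocks together."""
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

        stripes = [b"tape-one", b"tape-two", b"tape-thr"]   # equal-length data blocks
        parity = xor_blocks(stripes)                        # written to the parity tape

        # Lose the middle tape, then rebuild it from the rest plus parity.
        rebuilt = xor_blocks([stripes[0], stripes[2], parity])
        assert rebuilt == stripes[1]

    The payoff for an archive of this size is that a single damaged or lost tape no longer means lost data, just as a single failed disk is survivable in a RAID array.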

    Blue Waters, NVIDIA aid breakthrough in HIV research.  The University of Illinois at Urbana-Champaign (UIUC) announced that it has achieved a breakthrough in the battle against the spread of the human immunodeficiency virus (HIV) using NVIDIA Tesla GPU accelerators. In collaboration with the University of Pittsburgh School of Medicine, researchers have, for the first time, determined the precise chemical structure of the HIV “capsid,” a protein shell that protects the virus’s genetic material and is a key to its virulence. Using 3,000 NVIDIA Tesla K20X GPU accelerators, the Cray XK7 supercomputer gave researchers the computational performance to run the largest simulation ever published, involving 64 million atoms. “GPUs help researchers push the envelope of scientific discovery, enabling them to solve bigger problems and gain insight into larger and more complex systems,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at NVIDIA. “Blue Waters and the Titan supercomputer, the world’s No. 1 open science supercomputer at Oak Ridge National Labs, are just two of many GPU-equipped systems that are enabling the next wave of real-world scientific discovery.”

    Cray system in production for Japan Observatory. Cray announced that the National Astronomical Observatory of Japan (NAOJ) has put into production one of the world’s fastest supercomputers dedicated solely to astronomy. The new Cray XC30 supercomputer is used to run complex simulations that allow researchers to reproduce and observe astronomical phenomena in a virtual environment. Nicknamed “ATERUI,” the eight-cabinet Cray XC30 has a peak performance of more than 500 teraflops and is located at NAOJ’s Mizusawa VLBI Observatory in Iwate, Japan. Researchers and scientists at NAOJ, and at universities and institutes throughout Japan, are applying the supercomputing technologies in the Cray XC30 system to highly advanced numerical simulations, experiments that they hope will one day answer longstanding questions such as how galaxies formed and how the solar system originated.

    6:45p
    BYTEGRID Acquires Cleveland Site, Secures $100M Credit Facility

    A rendering of the future look of BYTEGRID Cleveland, formerly the Cleveland Technology Center. (Image: BYTEGRID)

    BYTEGRID believes there’s big opportunity in the underserved Cleveland market. The company has acquired a massive data center building in Cleveland and has secured a $100 million revolving credit facility to continue its expansion, primarily in second-tier markets.

    BYTEGRID, a wholesale data center specialist, has acquired the Cleveland Technology Center (CTC), a 333,215 square foot data center property sitting atop one of the larger fiber points of presence (POP) sites in Ohio. The property is supported by redundant 20 MVA power feeds from two separate utility substations, with access to additional capacity. Terms of the acquisition weren’t disclosed.

    The Cleveland Technology Center is currently 53 percent leased. The new owners plan to create 30,000 square feet of enterprise-class, mission-critical data center space with over 4 megawatts of power capacity.
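
    Rough arithmetic on those build-out numbers, treating the 4 MW as spread evenly across the 30,000 square feet (a gross figure that ignores how the space is actually carved up):

        # Implied gross power density of the planned build-out.
        watts = 4000000        # "over 4 megawatts"
        square_feet = 30000
        print(watts // square_feet, "W per square foot")   # ~133 W/sq ft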

    “In the facility, the tenant base has invested in its own infrastructure,” said Ken Parent, CEO of BYTEGRID. “We’re going to go in and develop turnkey space. We’ll build it out in a phased approach, but we’ll deliver 4 megawatts initially which is a healthy amount to start with based on the demand we see there.”

    Doubling BYTEGRID’s Footprint

    The acquisition doubles the company’s footprint. BYTEGRID now operates three data centers encompassing more than 625,000 square feet of wholesale data center space.

    “Cleveland is a perfect match for BYTEGRID’s national expansion strategy focused on unleashing data center capacity in underserved areas of major markets,” said Parent. “Vibrant business expansion and investment, along with market changing development projects, are taking place all around Cleveland’s CBD and midtown health-tech corridor. BYTEGRID plans to further improve and expand the CTC making it the primary provider of capacity and connectivity in Cleveland, as well as an ideal disaster recovery location for Midwest and East Coast markets such as Chicago and New York.”

    BYTEGRID looks for wholesale opportunities, seeking purpose-built data centers with existing anchor tenants that would benefit from upgrading and redevelopment. It seeks out markets that have some level of existing activity, as all three of its data centers were leased or had anchor tenants at the time of acquisition.

    “We’re looking at a huge number of markets,” said Parent. “We’re scouring the world, literally. In this particular market, we wanted to be in the Midwest, but Chicago is pretty crowded. Cleveland popped up as a great opportunity.”

    Why Cleveland?

    Cleveland is a strategic location serving the Ohio corridor, sitting between the major business centers of Chicago and New York. More than one hundred Fortune 500 companies have operations in Cleveland, and a limited supply of wholesale space, reliable low-cost commercial power, and strong connectivity all play in BYTEGRID’s favor.

    The company already has government customers in the building, and believes that enterprises and service providers are a great opportunity. The CTC is right by Cleveland’s “Healthcare Corridor,” which is currently undergoing a major building boom with nearly $4 billion of real estate investment. The company is also expecting some disaster recovery customers “because we bisect NY and Chicago we expect a lot of DR activity from financials,” said Parent. “Cleveland has a lot of technology companies, it is very business friendly. And we will provide the infrastructure to further enable these businesses.”

    Cleveland isn’t yet a large data center market, but there are other players in town, most notably colocation provider 365 Main, which is based at the Sterling Building on the Euclid corridor.

    Funding Will Aid Growth Plans

    The company also announced an initial $25 million revolving credit facility that includes an accordion feature allowing BYTEGRID to increase the size of the facility to up to $100 million. The revolving credit facility was arranged by KeyBank Real Estate Capital, which also serves as the facility’s administrative agent. “The KeyBank funding is very critical,” said Parent. “The large commercial bank supported our plan in a very big way.”

    The funding will support BYTEGRID’s national expansion. “We have four to five actionable opportunities currently,” said Parent.

    The company is planning to acquire more properties this year in addition to the one in Cleveland, and the credit facility will go a long way toward funding those plans. The money will also help BYTEGRID promote leasing activity within its current data centers and support the expansion of new, sellable space in current facilities. The company will also use it to fund technology, security and facility enhancements in each acquired facility to meet BYTEGRID standards for financial- and compliance-grade data center infrastructure.

    Late last year, BYTEGRID announced the acquisition of a 77,322 square foot, world-class, carrier-neutral facility in Alpharetta, Georgia. “In Atlanta, the goal is to put another campus up to the one that’s fully occupied,” said Parent.

    Prior to this, BYTEGRID expanded and enhanced its 214,000 square foot facility in Silver Spring, Md. “We’ve had significant leasing activity in Silver Spring,” said Parent. “It’s a financial grade asset with a world class tenant. We’ve been extremely successful there, particularly on the government side. We’re doing extremely well there. We have a fair amount of power to go, but it’s going briskly.”

    6:58p
    Fidelity Enters the Data Center Business with Centercore

    Here’s a look at Centercore, a multi-story factory-built data center design developed by Fidelity Investments. Fidelity is now commercializing Centercore. (Photo: Fidelity)

    Last month one of the nation’s largest retailers announced that it was getting into the data center market. Now one of the nation’s largest investment companies is following suit.

    Mutual fund giant Fidelity Investments is commercializing a factory-built data center product called Centercore. The company is using the design to build out its own infrastructure, including portions of its $200 million data center project in Papillion, Nebraska. Fidelity has also begun offering its solution to the data center industry, and discussed its effort at last week’s IMN Spring Data Center Forum in New York.

    Centercore’s approach differs from most existing modular data center solutions by using a multi-story design, with the initial units featuring a three-story structure. Fidelity, which is the nation’s largest provider of 401(k) retirement plans, liked the concept of modular data center deployment. But after reviewing the leading modular solutions, it found that none precisely fit Fidelity’s needs. So it built its own.

    “We ultimately developed our own solution, an off-site constructed data center built in 500kW increments,” said Eric Wells, Vice President, Data Center Services at Fidelity Investments. “We’re now in the process of commercializing that technology.”

    Core Units as “Building Blocks”

    Fidelity worked with Boston-based design and engineering firm Integrated Design Group in developing the Centercore system, which is built around Core Units – building blocks that are constructed by a fabricating firm and assembled on the data center premises. It features a steel structure and a weather-resilient exterior shell that is engineered to withstand an F3 tornado.

    Wells discussed Fidelity’s plans for Centercore during a panel at the IMN event. The existence of the design was first revealed in January at the Open Compute Summit.

    “This is a way we can deploy capital in an entirely new way, with flexibility, better efficiency and more adaptability to future technology changes,” said Joe Higgins, the VP of Engineering and Corporate Sustainability Officer at Fidelity, in a presentation at Open Compute. “The CIOs that have gone through this innovation were absolutely blown away.”

    Avoiding the “M Word”

    But in talking to those CIOs, Fidelity was careful in how it characterized the pre-fab offering, avoiding the use of the word “modular.”

    “We don’t use the ‘M word’ anymore,” said Wells. “Modular may not be the best term. We like the term off-site construction. So we branded it internally as Centercore, because when our CIOs heard modular, they thought container.”

    The Core Units use a column-free floor plan, and are designed to use a diverse range of power and cooling technologies and options, according to a marketing brochure from Fidelity, which says it can construct, deliver and assemble Centercore units in less than six months.

    Fidelity will use the design as part of its new data center in Nebraska. Initially known as “Project Photon,” the $200 million facility will feature both traditional raised-floor space and Centercore units that are built elsewhere and shipped to Nebraska.

    Modular Momentum Among Financials

    Fidelity’s move is the latest sign of momentum for factory-built designs among America’s largest financial firms. Goldman Sachs is using a modular design from IO, while AST Global has built pre-fab data centers for several large European banks.

    It closely follows the announcement that Sears Holdings is entering the data center market with Ubiquity, a real estate business focused on converting former retail stores into data centers and disaster recovery sites. The two marquee brands are the latest entries in a data center market that has expanded beyond its historic core of specialist firms.

    Notably, Centercore reflects the culture of innovation at Fidelity, which has developed an in-house cloud computing platform based on open source principles. The huge investment firm is also a contributor to the Open Compute Project, which develops standards for open hardware. Wells heads the Compliance & Interoperability Project for Open Compute.

