Data Center Knowledge | News and analysis for the data center industry
 

Monday, August 18th, 2014

    Time Event
    12:00p
    CloudSigma Changes Data Center Strategy, Moves Into Three New U.S. Locations

    Public cloud provider CloudSigma is expanding into California, Florida and Hawaii data centers. The company is tapping Equinix’s SV5 facility in Silicon Valley (San Jose) and MI3 in Boca Raton. In Hawaii it is using DRFortress as the provider. San Jose and Miami are typical deals, but the Hawaii deal is different: DRFortress is selling CloudSigma’s public cloud services.

    The company uses all-SSD storage for its cloud and touts flexibility in resource provisioning and private patching capabilities as major benefits. It recently added Live Snapshotting and expanded to an Equinix facility in Ashburn, Virginia, its second U.S. location. Its first location in the country was Switch’s Las Vegas SuperNAP, which came after the company expanded internationally beyond its original location in Equinix’s Zurich data center. The company also recently expanded with Equinix in Osaka, Japan.

    The original strategy was to have fewer cloud data center locations, concentrated around major hubs. “We moved from large hubs to a local model,” said CEO Robert Jenkins. Customers prefer having more location choices for public clouds, particularly when it comes to building hybrid infrastructure, he explained. They want public cloud in close proximity to their existing infrastructure.

    DRFortress runs a former Equinix data center in Hawaii. The deal between the two companies there is more of a joint venture, Jenkins explained. “Hawaii, to be honest, [is] not a location we originally had on our roadmap. We had conversation with DRFortress. They have been using CloudStack by Citrix and for a number of reasons they wanted to step back from operating cloud themselves. They discovered CloudSigma and started using us.”

    DRFortress will eventually migrate its cloud customers over to CloudSigma, which will let it focus on its core business as well as take the company out of competing with other potential cloud providers wishing to colocate.

    “Some of the customers in the area are using direct connect to Amazon Web Services all the way to the mainland for security reasons,” said Jenkins. “They’re happy they can now have a public cloud based in the Hawaii market.”

    CloudSigma is also targeting other markets in the Pacific.  “Places like Guam, Haiti are small in the scheme of things, but there’s absolutely no cloud services there. The local model is a good model.”

    The rationale behind Miami and San Jose is straightforward. “We chose Miami with an eye on Latin America,” said Jenkins. “We’re interested in bringing a location on the ground in that market eventually.” CloudSigma also has several customers in the southeastern U.S. that wanted lower latency. The company expanded into San Jose because customers want a west coast cloud, according to Jenkins.

    CloudSigma has also joined Equinix’s Cloud Exchange, which allows Equinix customers to make fast, direct, hybrid connections to CloudSigma, while expanding the cloud provider’s global reach.

    12:30p
    Digital Realty’s Brese Joins Former Colleague Chris Crosby’s Compass

    Compass Data Centers announced that Rebecca Brese has joined its executive management team. Brese comes from Digital Realty Trust, where she was integral in creating a framework for customer service and quality control.

    Brese will do similar work for Compass, where she will be responsible for customer experience. She will work closely with the developer’s delivery partners on continuous improvement and quality of the products and services they provide to Compass. Compass CEO and co-founder Chris Crosby, also a Digital Realty alumnus, has worked with Brese in the past.

    Crosby and developer Chris Curtis formed Compass in 2011. The Dallas-based company recently raised $100 million of investor money to help its next stage of growth.

    Compass has a standardized design for a 1.2 megawatt data center it claims it can build quickly anywhere a customer desires, targeting primarily second-tier U.S. markets. Now, with three service provider customers and a few completed data center builds under its belt, the company is looking at further expansion. As it grows, customer relations grow in importance, and Brese will help establish these processes.

    “One of our core convictions at Compass is to continually improve on our ability to deliver a high-quality product and customer experience via effective processes and systems,” Crosby said. “In her new role, Rebecca will be integral in helping to establish Compass as a company whose processes and practices directly benefit our customers’ ability to successfully deliver their own mission critical applications.”

    Brese has more than 23 years of experience in the IT customer service field, including a tenure at Digital Realty that began in 2007. She designed and managed Digital Realty’s client service organization, which includes 13 global customer service centers, and is credited with redefining customer support in the wholesale data center market.

    Brese earned her undergraduate degree from Southeastern Oklahoma State University and is a graduate of SMU Cox School of Business Management.

    “I couldn’t be happier to be part of the Compass team and working alongside so many people who are shaping the next phase of our industry’s growth,” said Brese. “Chris Crosby is a true visionary in our industry, and I had the pleasure of working alongside him several years ago when he was playing such an instrumental role in creating the wholesale data center model, which provided such an elegantly simple solution to the ‘build vs buy’ debate that had existed in our industry for eons.

    “Now Chris and the Compass team are pioneering a new phase of our industry’s evolution with their methodology for providing dedicated data center facilities anywhere—which eliminates the frustrating compromises that have existed with the multi-tenant, major market-centric model that has predominated. This model is all about customers, which is part of why it is so exciting to me.”

    Compass has great momentum. It built a data center for Iron Mountain in Northborough, Massachusetts (just outside of Boston). It also recently completed a build in Minneapolis, Minnesota, for CenturyLink, and hosting company Windstream is using its facilities near Nashville, Tennessee, and in Durham, North Carolina.

    Compass owns a piece of land near Columbus, Ohio, but has not yet announced a client for that location. Expansion plans are still focused on North America, including development on the Ohio property.

    The announcement of Brese’s appointment followed news of another departure from Digital Realty’s management team. Michael Siteman, formerly solutions director at Digital Realty, has joined M-Theory Group, a boutique Los Angeles-based private cloud solutions provider.

    1:14p
    Powering the Internet of Everything by Equipping the Workforce

    Sudarshan Krishnamurthi is a Senior Product Manager at Learning@Cisco responsible for the Education Strategy for Security and Internet of Things (IoT).

    Shelf sensors in stores create greater inventory visibility. Smart factories use machine-to-machine communication to predict maintenance needs and reduce production delays. Welcome to the Internet of Everything (IoE)—a world that brings things, data, processes, and people together into a vast web of connectivity that will transform how we live and work.

    Cisco predicts that the potential gain for the private and public sectors between 2013 and 2022 as a result of IoE could be as high as $19 trillion. These gains will come from new revenues, cost savings, increased productivity and improved citizen experiences. However, software and networking skills shortages could delay the realization of these gains.

    The expanding, interconnected world

    The network will function as the command center for IoE and will therefore have to play a more crucial role than ever, needing to be more secure, agile, context-aware, automated, dynamic and programmable. The realms of mobile, cloud, apps, and Big Data and analytics will all be interconnected, and security will be of particular concern. With so many devices all connected, the attack surface will increase significantly, and security breaches could become even more costly.

    Just as the attack surface will increase significantly, so will the quantities of data being generated and exchanged by the ever-expanding number of connected devices. The role of the data scientist will be crucial in terms of converting this data into usable information.

    Gathering and processing data is ultimately how benefit is derived from IoE. In order to optimally connect people, processes, data and things, connections must be secure. Additionally, the network must be programmable so that information gathered from data can be more intelligently applied to devices rather than having to configure and manage them manually.

    Getting prepared for IoE will require the existing workforce, especially in areas such as manufacturing, utilities, safety and security, and transportation, to understand IT networking to a greater degree. At the same time, IT networking professionals need to better understand manufacturing control systems and industrial networks as IoE causes these operational technologies to converge with IT. Lastly, it will be vital for the current generation of graduating students to have the networking skills that will enable them to address this convergence of operational technologies and IT.

    Joining forces for network training

    ManpowerGroup’s 2013 Talent Shortage survey found that IT workers and engineers were among the hardest positions to fill in the U.S. in 2013. Cisco predicts that approximately 220,000 new engineers will be needed globally every year for the next 10 years to keep up with the technological surge of IoE.

    The networker’s view is expanding to include many new technologies, and the networker’s responsibilities are expanding to include many new duties. For example, the increase in connected things requires network professionals who will maintain a strong security posture across the expanded attack surface. Also, the ability to analyze Big Data and turn it into actionable information is needed to drive business outcomes.

    There are many emerging roles in the future for IoE – Business Transformation specialists, Cloud brokers, Network Programmers and Data Scientists. Cyber Security becomes more pervasive and the networking career becomes much more specialized.

    As new roles emerge, organizations can look to their current skill sets to make a way forward. People with fundamental networking experience will lead the transition to IoE because they have the knowledge to build the bridge from network infrastructure to the application environment. Application developers who are implementing SDN technologies, as well as those at the business application layer, will need a tighter grasp of the new world they operate in.

    In addition, control systems engineers in manufacturing industries have traditionally worked on drives, motors, sensors, and programmable logic controllers (PLCs) to manage automated plant networks. Now, with the convergence of operational technologies and IT on the horizon, these engineers will need to become trained in IT and networking.

    Companies will need to work with industries throughout the world to create the pathway for IT networking skills and talent development. Continued efficiency and productivity gains will depend upon it. But this is only part of the equation.

    The other educational requirement is to prepare youth from the beginning to understand the network and its underlying connection to everything. It is incumbent on IT companies to work with educational systems to develop curricula that ensure rising talent is well prepared to understand the functioning of the network and how it makes IoE work.

    Evolving education consumption

    IoE is beginning to change all aspects of life, and how education is consumed is no different. As students move to a Bring Your Own Device, ubiquitous access model, their needs and preferences regarding where and when they will get training are changing along with what they are learning. Students no longer prefer traditional delivery modalities. Instead, they want mobile, video-based, game-based learning that not only is an evolution of traditional delivery but also helps remove barriers to education by making it easy, fun, accessible and effective.


    In preparing the workforce for the job role changes that IoE is creating, we need to consider the ways in which training is delivered and the ways that learners prefer to receive it. The good news is that the technology with connected devices and collaboration software can help make this happen, since the technology and infrastructure are there to move in this direction.


    Cooperating to fill the gaps

    People. Process. Data. Things. Yesterday, they functioned independently. Today, the Internet of Everything brings them all together in ways that are amazing and challenging at the same time. The network is the heart of IoE, which calls for a next-generation workforce equipped to deal with IoE’s vast data requirements and attendant safety concerns. Tremendous gains stand to be made across all industries and with regard to humanitarian concerns as well, but this vision of a more prosperous and efficient world cannot be realized without a properly equipped workforce. Enterprise, government and educational entities must come together to create strategies to fill current and projected network skills gaps.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:53p
    Report: UIG Building Apple Data Center in Curaçao

    A data center in Curaçao might soon house Apple infrastructure. Unique Infrastructure Group, a company that has worked extensively with Apple, is building a new data center on the Dutch island in the southern Caribbean, which, according to the Dutch media outlet Among Tech, will be used by the American company.

    The data center was near completion when the company that was originally behind it went bankrupt. UIG has taken over more than 75 percent of that company and will most likely complete the facility for Apple, according to media reports. UIG has also worked with Facebook and Google in the past.

    The future Apple data center, reportedly, was originally built as part of the Ctex project, a Curaçao government initiative to create an equivalent of Silicon Valley in the Caribbean region.

    Curaçao is a polyglot island, with most natives speaking Dutch, English and the indigenous Papiamentu. It is connected by about six major submarine cables and has strict policies regarding client data confidentiality.

    The island is located outside of the traditional hurricane belt and has long acted as a vital trade hub. It is well positioned to become a key internet infrastructure player in the Caribbean thanks to favorable data policies and a favorable tax structure.

    The country has special tax legislation for international companies that qualify for establishment in “E-Zone” areas. This includes payment of only two percent profit tax, no import duties or sales tax and expatriate entitlement for employees, including a special income tax regime.

    Apple has data centers in Prineville, Oregon; Maiden, North Carolina; Newark, California; and Reno, Nevada (a new site currently running at low capacity). UIG was the developer behind Reno Technology Park, a large property with access to power transmission that is home to Apple’s data center there.

    Apple also uses colocation providers sparingly but says that most of its workload is served out of its own facilities. It was revealed last week that the company is using China Telecom to store iCloud data. Apple continues to expand both its data centers and renewable energy production in North America, and appears to be focusing on an international push as well.

    6:42p
    Feds Green-Light IBM’s Sale of Commodity x86 Server Business to Lenovo

    The U.S. government committee that reviews acquisitions of domestic business assets by foreign companies to prevent deals that compromise national security has approved Chinese IT vendor Lenovo’s bid to buy the commodity x86 server business from IBM, the companies announced Monday.

    The companies agreed on the $2.3-billion deal in January, and the government initiated a review because of concerns that a Chinese firm was gaining access to technology used to build servers plugged into U.S. networks and particularly servers used extensively in the government’s own IT infrastructure, the Wall Street Journal (paywall) reported.

    IBM and Lenovo issued statements Monday saying the deal had received a green light from the Committee on Foreign Investment in the United States – the government body that conducts such reviews.

    In a statement, IBM said divesting the commodity-server business would allow it to focus more on other strategic initiatives. “The approval of the $2.3 billion sale to Lenovo enables IBM to focus on system and software innovations that bring new kinds of value to IBM clients in areas such as cognitive computing, Big Data and cloud, and provides clarity and confidence to current x86 customers that they will have a strong partner going forward,” the statement read.

    Lenovo issued a statement confirming the CFIUS approval and saying it was confident this and its other big acquisition currently in the works – the acquisition of Motorola Mobility from Google – would close.

    “As we have stated consistently for both the x86 and Motorola Mobility acquisitions, we continue to work through a number of regulatory and business processes to ensure an effective and timely closure on both deals,” the Chinese company’s statement read. “We remain on track to close both deals by the end of the year.”

    While the two vendors have argued that most of IBM’s x86 servers are manufactured by Chinese companies in China using many Chinese components anyway, the government was also concerned with service contracts tied to the hardware used in government data centers, Richard Gephard, a member of a government agency overseeing the U.S.-China trade and economic relationship, told the Journal.

    There was a similar concern when Lenovo bought IBM’s PC business in 2005, and the solution was to keep the service contracts with IBM and to renew them when they expired. That deal was also reviewed by CFIUS and approved.

    The Journal reported that a national-security review of the server-business acquisition appeared likely as early as January, following the announcement that the two companies had reached a deal. The newspaper interviewed a series of expert lawyers, who agreed that, given the deep entrenchment of IBM hardware in government IT infrastructure, the review was a must.

    CFIUS has blocked big cross-border deals in the past. The committee’s 2011 investigation into Huawei’s bid to acquire Silicon Valley startup 3Leaf Systems resulted in withdrawal of the offer by the Chinese hardware vendor.

    In another example, President Barack Obama blocked the acquisition of four wind-farm projects in the U.S. by Chinese company Ralls Corp. Ralls appealed, however, and the appeals court ruled against the president’s administration in July.

    7:04p
    Mesosphere, Kubernetes to Meld into Google’s Cloud

    Mesosphere is collaborating with Google to bring the startup’s server cluster management software and Kubernetes, Google’s open source Docker container management solution, to the Google Cloud Platform, the companies announced Monday.

    Mesosphere is one of the leading companies in its sphere, and Kubernetes, which Google announced in June, enjoys the support of industry heavyweights, including Microsoft, Red Hat and IBM. Mesosphere is based on the open source Apache Mesos distributed systems kernel used by customers like Twitter, Airbnb and Hubspot to power Internet-scale applications. Kubernetes helps manage the deployment of Docker workloads. The combination of the two provides a commercial-grade, highly available and production-ready compute fabric, the companies said.

    The collaboration results in a web application that enables customers to deploy Mesosphere clusters on Google’s cloud in minutes. Kubernetes is being incorporated into the offering and into Mesosphere’s ecosystem.

    The new application automatically installs and configures everything a user needs to run a Mesosphere cluster, including the Mesos kernel, ZooKeeper and Marathon, as well as OpenVPN for logging in to the cluster. The functionality will soon be incorporated into the Google Cloud Platform dashboard.
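    Once such a cluster is up, workloads are typically launched through Marathon’s REST API. The following is a minimal sketch of that step; the master address and the application definition are hypothetical placeholders, not details from the announcement.

    ```python
    # Sketch: submit a long-running app to a Marathon master on a Mesosphere
    # cluster (address is a placeholder, e.g. reachable over the OpenVPN
    # tunnel the installer sets up).
    import requests

    MARATHON = "http://10.0.0.2:8080"  # hypothetical Marathon endpoint

    app = {
        "id": "/demo/hello",                    # app ID within Marathon
        "cmd": "python3 -m http.server 8080",   # command each instance runs
        "cpus": 0.1,                            # CPU share from the Mesos pool
        "mem": 64,                              # memory in MB
        "instances": 3,                         # Marathon keeps 3 copies alive
    }

    resp = requests.post(f"{MARATHON}/v2/apps", json=app, timeout=10)
    resp.raise_for_status()
    print("Deployed:", resp.json()["id"])
    ```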

    Mesosphere co-founder and CEO Florian Leibert was formerly engineering lead at Twitter. Mesos is credited with helping Twitter scale and avoid the infamous “fail whale” failure screen. Mesosphere creates a single, highly-elastic pool of resources that all applications can draw from, creating clusters of raw compute nodes.

    “We’re collaborating with Google to bring together Mesosphere, Kubernetes and Google Cloud Platform to make it even easier for our customers to run applications and containers at scale,” Leibert wrote in a post on Google’s Cloud Platform blog. “Today, we are excited to announce that we’re bringing Mesosphere to the Google Cloud Platform with a web app that enables customers to deploy Mesosphere clusters in minutes.”

    Leibert wrote that developers can literally spin up a Mesosphere cluster on Google’s Cloud Platform with just a few clicks in standard or custom configurations. “Whether you are running massive Internet-scale workloads like many of our customers, or you are just getting started, we think the combination of Mesos, Kubernetes and Google Cloud Platform will help you build your apps faster, deploy them more efficiently and run them with less overhead.”

    9:41p
    SoftLayer’s Toronto Data Center Gives Canadians More Cloud Hosting Options, CTO Says


    This article originally appeared at The WHIR

    Toronto is North America’s fourth largest city and Canada’s largest technology hub, with a technology sector that employs more than a quarter of all tech workers in Canada. And with countless startups on the way, it’s no wonder there’s demand in the Toronto market for IT services.

    Earlier this week, SoftLayer opened a data center in Markham, Ontario, just north of Toronto, with capacity for 15,000 servers. It is based on a standardized design that, on the inside, looks identical to its Singapore data center or its Dallas data center, which means that SoftLayer’s entire product catalog is available in the Toronto region.

    SoftLayer CTO Marc Jones said that the company is going after Internet-centric companies in the startup community, as well as enterprises that can benefit not only from SoftLayer’s platform but also from IBM’s Platform-as-a-Service Bluemix and the various Software-as-a-Service products that IBM runs on the SoftLayer platform.

    “When we go to Toronto, when we go to Canada, we’re really targeting that entire customer base with the entire breadth of offering we have with IBM and SoftLayer combined,” Jones said in an interview.

    The new data center will provide resources close to Toronto users, but also be linked to SoftLayer’s global network, where users can move data to other data centers without additional transfer fees. Users also have a choice between SoftLayer’s Bare Metal servers for very demanding and specific workloads and its public virtualized cloud, which allows users to spin up new cloud servers in minutes.
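    For a sense of what that “minutes” workflow looks like in practice, here is a minimal sketch using the open source SoftLayer Python client; the hostname, domain and the “tor01” data center code are illustrative assumptions, not details from SoftLayer’s announcement.

    ```python
    # Sketch: order a small virtual server in the Toronto-area facility using
    # the SoftLayer Python client. Credentials are read from the environment
    # (SL_USERNAME / SL_API_KEY) by create_client_from_env().
    import SoftLayer

    client = SoftLayer.create_client_from_env()
    vs = SoftLayer.VSManager(client)

    instance = vs.create_instance(
        hostname="app01",           # hypothetical hostname
        domain="example.com",       # hypothetical domain
        cpus=2,
        memory=4096,                # MB
        datacenter="tor01",         # assumed code for the Markham/Toronto site
        os_code="UBUNTU_LATEST",
        hourly=True,                # hourly billing; instance is up in minutes
    )
    print("Provisioning request accepted, id:", instance["id"])
    ```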

    The location of their data is also a concern to many Canadian companies, and data sovereignty is a major appeal of the Toronto data center. Jones notes, however, that locality is something that underlies IBM’s $1.2 billion investment to expand its global data center footprint.

    Among the 15 new data center projects it is planning to open this year, data centers are now online in London, Hong Kong, Dallas, and the Washington, D.C., area. The plan also calls for data center projects in China, Japan, India, and Mexico.

    SoftLayer has also been on the ground in Toronto through its “Catalyst” program, which has been working to develop local startups through organizations like GrowLab, Communitech, Ryerson University, Innovation Factory, Extreme Startups and the Ontario Network of Excellence. Catalyst offers infrastructure as well as technical guidance and even some financial support to these early-stage tech companies.

    The Toronto data center can also be thought of as a starting point for businesses that want to expand either now or in the future, according to Jones.

    “Customers in Canada might want to start with the Toronto data center or maybe a lot of their customers are in Canada and they want their data and their compute to be closer to their end users from a performance standpoint,” he says. “But as they grow and their customer base grows, they have a lot of choice as to where they deploy, how they deploy.”

    He continues, “Leveraging our network, you’re able to have your primary site in Toronto, if you wanted to have your [Disaster Recovery] site or backup site in Dallas or D.C. you can leverage your private network with no bandwidth fees and be able to keep your environment succinct and provide more choice for how you architect or define your solutions.”

    Jones says that the global expansion of the SoftLayer platform owes a lot to IBM’s acquisition of SoftLayer in July 2013.

    A little more than a year since the acquisition was finalized, he says, “I think overall it’s been very successful. We’ve seen a lot of growth on the SoftLayer platform. And quite honestly, it’s been a lot of fun. Every day we are focused at SoftLayer on building the best Infrastructure-as-a-Service platform in the industry. That definitely drives us every day.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/softlayers-toronto-data-center-gives-canadians-cloud-hosting-options-cto-says

    11:09p
    Microsoft’s Azure Cloud Struggles with Service Outages Monday

    The Microsoft Azure team updated its uptime status around 3 p.m. Pacific to “Partial Service Interruption” after reporting “Full Service Interruption” in multiple regions of the cloud infrastructure earlier in the day.

    Issues started around 1 a.m. Monday, according to a post on the service health dashboard for Azure, which said a number of services on the cloud platform, including virtual machines, websites, backup and site recovery, were experiencing full service interruptions in multiple regions. The company did not immediately provide a reason for the outages.

    In an update posted around 3 p.m., the Azure team said a small subset of customers was still experiencing connectivity issues to some of the cloud services. By that time, services had been restored in Japan and East Asia regions.

    Around 4 p.m., services hosted by data centers serving US Central, US East, US East 2 and Europe North regions were still having issues. Affected services included Cloud Services, SQL Database, Virtual Machines, Websites, HDInsight, Mobile Services, Service Bus, Site Recovery and StorSimple.

    Both US East 2 and US Central regions are new, the former hosted by a data center in Virginia and the latter in Iowa. The company announced their addition in July.

    Microsoft is one of the world’s largest providers of public cloud infrastructure services, competing with the likes of Amazon Web Services and, increasingly, Google Compute Engine. Cloud outages and degraded performance incidents are a common occurrence for public cloud providers who operate massive global data center infrastructures.

    This is not Azure’s first full service interruption this month. The company reported full service interruption in its Japan East region on August 15 and another one across multiple regions on August 14.

    Our sister site Web Host Industry Review has a roundup of Azure’s recent uptime issues.

    11:30p
    Geodesic Dome Makes Perfect Data Center Shell in Oregon

    Used to build everything from a planetarium in post-WWI Germany to mobile yoga studios at outdoor festivals today, the geodesic dome has proven to be a lasting concept for highly stable structures of any size. Structural stability is a valued goal in data center design, but the idea of building a data center shell using a spherical skeleton that consists of great circles intersecting to form a series of triangles – the most stable shape we know of – is novel.

    That is the approach Perry Gliessman took in designing the recently completed Oregon Health and Science University data center. Gliessman, director of technology and advanced computing for OHSU’s IT Group, said structural integrity of a geodesic dome was only one of the considerations that figured in the decision. It was “driven by a number of requirements, not the least of which is airflow,” he said.

    One of the new data center’s jobs is to house high-performance computing gear, which requires an average power density of 25 kW per rack. For comparison’s sake, consider that an average enterprise or colocation data center rack takes less than 5 kW, according to a recent Uptime Institute survey.

    Needless to say, Gliessman did not have an average data center design problem on his hands. He needed to design something that would support extreme power densities, but he also wanted to have economy of space, while using as much free cooling as he could get, which meant maximizing outside-air intake and exhaust surface area. A dome structure, he realized, would tick all the boxes he needed to tick.

    No chillers, no CRAHs, no problem

    The data center he designed came online in July. The resulting $22-million facility has air-intake louvers almost all the way around the circumference. Gigantic fan walls suck outside air into what is essentially one big cold aisle, although it is really lots of interconnected aisles, rooms and corridors. Inside the dome, there are 10 IT pods. The pods are lined up in a radial array around a central core, which contains a network distribution hub sitting in its own pod. This placement ensures that air travels an equal distance through the IT gear in every pod and that cable runs to the network hub are equal and as short as possible.

    Each pod’s server air intake side faces the space filled with cold air. The exhaust side is isolated from the space surrounding it but has no ceiling, allowing hot air to escape up, into a round plenum above. Once in the plenum, it can either escape through louvers in the cupola at the very top of the dome or get recirculated back into the building.

    There are no air ducts, no chillers, no raised floors or computer-room air handlers. Cold air gets pushed through the servers partially by server fans and partially because of a slight pressure differential between the cold and hot aisles. It goes into the plenum because of the natural buoyancy of warm air.

    When outside air temperature is too warm for free cooling, the data center’s adiabatic cooling system kicks in automatically to help out. Beaverton, Oregon (where the facility is located), experienced some 100 F days recently, and the evaporative-cooling system cycled for about 10 minutes at a time at 30-minute intervals, which was more than enough to keep supply-air temperature within ASHRAE’s current limits. Gliessman said he expects the adiabatic cooling system to kick in several weeks a year.

    In the opposite situation, when outside air temperature is too cold, the system takes hot air from the plenum, mixes it with just enough cold air to bring it down to the necessary temperature and pushes it into the cold aisle.
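    The mixing step amounts to a simple energy balance between the two air streams. A rough sketch, with purely illustrative temperatures rather than OHSU figures:

    ```python
    # Sketch: fraction of recirculated plenum air needed to warm cold outside
    # air up to a target supply temperature (simple linear mixing model).
    def plenum_mix_fraction(t_outside, t_plenum, t_supply):
        """Fraction of supply air drawn from the hot plenum (0..1)."""
        return (t_supply - t_outside) / (t_plenum - t_outside)

    # Example: 5 C outside air, 35 C plenum exhaust, 18 C target supply air.
    f = plenum_mix_fraction(5.0, 35.0, 18.0)
    print(f"Recirculate {f:.0%} plenum air, {1 - f:.0%} outside air")
    ```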

    The fans that pull outside air into the facility have variable frequency drives and adjust speed automatically, based on air pressure in the room. When server workload increases, server fans start spinning faster, sucking more air out of the cold aisle and causing a slight drop in pressure, which the fan walls along the circumference are programmed to compensate for. “That gives me a very responsive system, and it means that my fans are brought online only if they’re needed,” Gliessman said.
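    A pressure-driven control loop of the kind described might look roughly like the following sketch; the setpoint, gain and limits are illustrative assumptions, not OHSU’s actual tuning.

    ```python
    # Sketch: proportional control of intake-fan VFD speed to hold a small
    # positive pressure in the cold aisle. Values are illustrative only.
    def next_fan_speed(speed, pressure_pa, setpoint_pa=5.0, gain=0.02,
                       min_speed=0.0, max_speed=1.0):
        """Return the new VFD speed (0..1) given measured cold-aisle pressure."""
        error = setpoint_pa - pressure_pa   # positive when pressure sags
        speed += gain * error               # speed fans up to restore pressure
        return max(min_speed, min(max_speed, speed))

    # Servers ramp up, pull more air, and pressure drops from 5 Pa to 3 Pa:
    print(next_fan_speed(0.5, 3.0))   # fans speed up slightly -> 0.54
    ```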

    Legacy IT and HPC gear sharing space

    That system can cool 3.8 megawatts of IT load, which is what the data center is designed to support at full capacity. There is space for additional pods and electrical gear. Each pod is 30 feet long and 4 feet deep. The pods have unusually tall racks – 52 rack units instead of the typical 42 rack units – and there is enough room to accommodate 166 racks.
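    As a quick, back-of-the-envelope check on those figures (a rough calculation, not an OHSU number), the design works out to an average of roughly 23 kW per rack if every position were filled at full load:

    ```python
    # Sketch: average design density across all rack positions.
    total_kw = 3800    # 3.8 MW design IT load
    racks = 166        # rack positions at full build-out
    print(round(total_kw / racks, 1), "kW per rack")   # ~22.9, vs <5 kW typical
    ```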

    Since OHSU does education and research while also providing healthcare services, the data center is mission-critical, supporting HPC systems as well as hospital and university IT gear. Gliessman designed it to support a variety of equipment at various power densities. “I have a lot of legacy equipment,” he said. All infrastructure components in the facility are redundant, and the only thing that puts it below Uptime Institute’s Tier IV standard is lack of multiple electricity providers, he said.

    It works in tandem with the university’s older data center in downtown Portland, and some mission-critical systems in the facility run in active-active configuration with systems in the second data center.

    Challenging the concrete-box dogma

    Because the design is so unusual, it took a lot of back-and-forth with vendors that supplied equipment for the project and contractors that built the facility. “Most people have embedded concepts about data center design and, like all of us folks, are fairly religious about those,” Gliessman said. Working with vendors was challenging, but Gliessman had done his homework (including CFD modeling) and had the numbers to convince people that his design would work.

    He has been involved in two data center projects in the past, and his professional and educational background includes everything from electronics, IT and engineering to biophysics. He does not have extensive data center experience, but, as often happens, to think outside the box you have to actually be outside of it.

    Take a virtual tour of OHSU’s “Data Dome” on the university’s website. They have also posted a time-lapse video of the data center’s construction, from start to finish.

