Data Center Knowledge | News and analysis for the data center industry

Tuesday, March 19th, 2013

    2:00p
    10gen Upgrades MongoDB, Unveils Enterprise Version

    MongoDB is hot. The open-source document database is one of the most popular NoSQL technologies on the market. That’s why giant hosting providers SoftLayer and Rackspace have both placed their chips on the technology, each in its own way: SoftLayer is working closely with MongoDB developer 10gen for its offering, while Rackspace acquired database-as-a-service provider ObjectRocket.

    10gen has now released version 2.4 of MongoDB, which brings a slew of new features, plus a new enterprise edition, to keep the MongoDB momentum going. MongoDB 2.4 enhances management, performance and developer productivity, while the commercial enterprise version adds key features around private (on-premises) monitoring and security.

    “We think the community is going to be excited about this release, as it incorporates features that our users have been requesting including hashed-based sharding, text search and faster counts,” said Eliot Horowitz, co-founder and CTO of 10gen. “We’re also supporting our enterprise customers with a new edition of MongoDB that responds to their needs related to monitoring and security of their infrastructure.”

    MongoDB 2.4 key features and enhancements include:

    Hash-based Sharding – MongoDB provides horizontal scaling by transparently sharding data across multiple physical servers. Hash-based sharding enables simple, even distribution of reads and writes to data in MongoDB. In layman’s terms, this is how you take really big data and spread it across multiple machines. Previously, MongoDB offered only range-based sharding, which required knowledge of the data’s distribution up front. Now the database hashes a value for you and provides an automatic, even distribution of writes. Hash-based sharding makes sharding easier to adopt and will be of real interest to the community.
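
    Here’s a minimal sketch of what enabling hash-based sharding looks like from Python with pymongo. The connection URI and the appdb.events namespace are hypothetical, and the commands must be issued through a mongos router of a sharded cluster:

    ```python
    from pymongo import MongoClient

    # Connect to a mongos router (hypothetical address).
    client = MongoClient("mongodb://mongos.example.com:27017")

    # Enable sharding for the database, then shard the collection on a
    # hashed _id key so writes distribute evenly across the shards.
    client.admin.command("enableSharding", "appdb")
    client.admin.command("shardCollection", "appdb.events",
                         key={"_id": "hashed"})
    ```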

    Capped Arrays – Applications that provide real-time visibility into top-ranking attributes – leaderboards, or the most frequently viewed, emailed or purchased items – previously had to maintain that ordering themselves. Capped Arrays simplify development by automatically maintaining a sorted array of fixed length within documents. In layman’s terms, one example of where this is used is a gaming application that tracks a leaderboard, or a blog that tracks its top 10 articles.
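
    As a hedged sketch (database, collection and field names are hypothetical), the fixed-length array is maintained with the $push modifiers $each, $sort and $slice added in 2.4. Note that 2.4 requires a negative $slice, so sorting scores ascending and keeping the last ten entries retains the ten highest:

    ```python
    from pymongo import MongoClient

    db = MongoClient()["gamedb"]

    # Push a new score; MongoDB re-sorts the array and trims it to length 10.
    db.leaderboards.update(
        {"_id": "level-1"},
        {"$push": {"top_scores": {
            "$each": [{"player": "ada", "score": 9120}],
            "$sort": {"score": 1},   # ascending: lowest scores fall off first
            "$slice": -10,           # keep only the last (highest) ten
        }}},
        upsert=True,
    )
    ```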

    Text Search (Beta) – Search is the primary interface for navigating data in many applications. MongoDB’s native, real-time text search simplifies development and deployment for MongoDB users, with stemming and tokenization in 15 languages. In layman’s terms, if you’re building an app and you want search to be a featured part of it, you can now do that natively; before, you had to integrate outside search functionality. The built-in search will be good enough for most applications, and there are connectors for outside search technologies.
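
    A quick sketch of the beta interface, assuming a hypothetical cms.articles collection: the feature has to be switched on at startup (mongod --setParameter textSearchEnabled=true), a text index is built on the searchable field, and queries go through the text database command rather than a query operator:

    ```python
    from pymongo import MongoClient

    db = MongoClient()["cms"]

    # Build the (beta) text index on the field to be searched.
    db.articles.ensure_index([("body", "text")])

    # Results come back scored and ordered by relevance.
    res = db.command("text", "articles", search="data center cooling", limit=5)
    for hit in res["results"]:
        print(hit["score"], hit["obj"]["title"])
    ```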

    Geospatial Enhancements – Mobile and social applications and government programs rely on location and sophisticated geospatial analysis. MongoDB 2.4 introduces GeoJSON support, a more accurate spherical model and polygon intersections.
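
    For illustration, a small sketch with hypothetical names: documents store GeoJSON points under a 2dsphere index, and $geoIntersects finds the ones that fall inside a polygon:

    ```python
    from pymongo import MongoClient, GEOSPHERE

    db = MongoClient()["geodb"]
    db.places.ensure_index([("loc", GEOSPHERE)])  # the new spherical index

    db.places.insert({"name": "hq",
                      "loc": {"type": "Point",
                              "coordinates": [-73.99, 40.73]}})

    # Which stored points intersect this GeoJSON polygon?
    area = {"type": "Polygon", "coordinates": [[
        [-74.1, 40.6], [-73.9, 40.6], [-73.9, 40.8],
        [-74.1, 40.8], [-74.1, 40.6],
    ]]}
    for place in db.places.find(
            {"loc": {"$geoIntersects": {"$geometry": area}}}):
        print(place["name"])
    ```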

    Faster Counts – Count operation performance has improved, including low-cardinality index-based counts that are 20x faster than in prior releases of MongoDB.
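
    For example (names hypothetical), a count whose predicate is covered by an index on a low-cardinality field is exactly the case 2.4 speeds up:

    ```python
    from pymongo import MongoClient, ASCENDING

    db = MongoClient()["appdb"]
    db.orders.ensure_index([("status", ASCENDING)])

    # With the index on "status", 2.4 can answer this count from the index
    # alone instead of scanning the matching documents.
    shipped = db.orders.find({"status": "shipped"}).count()
    ```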

    New enterprise subscription level includes security, private monitoring

    In addition, 10gen introduced MongoDB Enterprise as part of a new MongoDB Enterprise subscription level. MongoDB Enterprise contains new monitoring and security features including Kerberos Authentication and Role-based Privileges.

    Features of the Enterprise version include:

    On-Prem Monitoring – Monitoring, visualization and alerting on more than 100 operational metrics of a MongoDB system in real time. 10gen offers a free monitoring service as well; the big difference here is that this is a private, on-premises version. The free MongoDB monitoring service lets you install an agent and use a free cloud-based monitoring tool: every minute, it ships data to the cloud, where you can visualize it. “That service has been wildly successful,” said Kelly Stirman, Director of Product Marketing. “Since version 2.2 (about six months), we’ve had 40 percent growth with this service. The growth of the monitoring service is important because it’s an indicator of MongoDB increasingly being used in production.” The ability to run monitoring on-premises will make MongoDB more appealing in the enterprise.

    Kerberos Authentication – Version 2.4 adds support for the Kerberos protocol, so enterprise applications can now securely authenticate with MongoDB using this widely adopted standard. Kerberos is really important to banks, insurance companies and the federal government, so it’s a solid feature for the commercial version.

    Role-based Privileges – The ability to separate responsibilities for server, database and cluster administration.
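
    A brief sketch of what that separation can look like in 2.4, using pymongo’s add_user with the new roles; the user names and passwords are, of course, hypothetical:

    ```python
    from pymongo import MongoClient

    client = MongoClient()

    # A user who can write to and administer one database...
    client["appdb"].add_user("dbops", "s3cret",
                             roles=["readWrite", "dbAdmin"])

    # ...and a separate analyst who can only read the reporting database.
    client["reporting"].add_user("analyst", "s3cret", roles=["read"])
    ```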

    MongoDB passed the 4 million download mark in the last month or so. “People fall in love with MongoDB because it is ridiculously easy to use,” said Kelly Stirman, Director of Product Marketing. “You can start querying the data in a few minutes. It has super-fast performance (and) high availability.”

    10gen has more than 600 commercial customers including many of the world’s leading brands, such as Cisco, Craigslist, Disney, EA, eBay, Ericsson, Forbes, Foursquare, Intuit, LexisNexis, McAfee, MTV, Salesforce.com, Shutterfly and Telefonica. Common use cases include operational and analytical big data, content management and delivery, mobile and social infrastructure, user data management and data hub.

    “10gen has raised the bar on what to expect for quick and easy live database monitoring,” said Harun Yardımcı, Software Architect, eBay. “On-Prem Monitoring allows us to actively diagnose application issues quickly and easily to improve our MongoDB-powered application’s performance and ultimately provides a superior experience for our customers, which is our top priority.”

    2:00p
    DDoS Prevention Appliance Report – Printed Excerpts

    As more organizations and users utilize the Internet, there will be more data, more management needs and plenty of worries around security. The big push around cloud and the modern cloud-ready data center revolves around IT consumerization and newly available resources. And just like any infrastructure, the bigger and more popular it gets, the bigger a target it becomes.

    Cyber threats have been growing at an alarming rate. Not only have frequencies increased – the creativity of the intrusions and attacks is staggering as well. There is plenty of evidence supporting this:

    • Malware is reaching new all-time highs – Trend Micro, for example, had identified 145,000 malicious Android apps as of September 2012. Keeping malware at bay, already a “treading water” challenge, is only getting harder.
    • BYOD is a growing threat vector – Frost & Sullivan estimates smartphones shipped in 2012 will reach 558 million, and tablets will reach 93 million. With more users on more cloud networks, the targets will only grow larger.
    • Distributed Denial of Service (DDoS) attacks are approaching mainstream – In a 2012 survey of network operators conducted by Arbor Networks, over three-quarters of the operators experienced DDoS attacks targeting their customers.
    • Exposure footprint is expanding – According to a Frost & Sullivan 2012 global survey of security professionals, slightly more than one-third of the respondents cite cloud computing as a high priority for their organizations now, and that percentage increases to 54 percent in two years.

    With this evident change in the technological landscape, there will undoubtedly be a need to re-evaluate existing security environments. Why is that the case? Simple: many existing security platforms are just not enough to handle today’s demands around cyber security. This white paper explores new types of security platforms – specifically, Arbor’s ATLAS platform, positioned as a leader in enterprise-ready security and traffic monitoring. Between its two data sources, Arbor collects data from all assigned IP addresses: service-active IP addresses from deployed Arbor platforms and service-inactive IP addresses from darknet-hosted ATLAS sensors.

    Launched in 2007, ATLAS transparently collects, on an hourly basis, network traffic data from sensors hosted in carriers’ darknets, along with data from carrier- and enterprise-deployed Arbor monitoring platforms. Download this white paper to see how ATLAS and its platform have direct benefits for carriers and enterprises. These include:

    • More threats are proactively mitigated, resulting in a lower overall risk posture.
    • Less remediation occurs. With fewer attacks being successful, remediation efforts (e.g., purging endpoint devices of malware infections, bolstering Web infrastructure to defend against DDoS attacks, and conducting data breach notifications) will be fewer in number and smaller in scale.
    • As ATLAS researchers monitor and assess traffic data from Arbor platforms and darknet sensors, carrier and enterprise security analysts gain the benefits of this threat analysis without incurring the work effort. Their knowledge levels are enhanced.

    Remember, the cyber threat environment will only continue to grow and evolve. Whether your environment utilizes the WAN or some type of cloud environment, it’s time to evaluate your security infrastructure and see how new, advanced platforms can help.

    3:30p
    PMC Introduces 100G Optical Processor to Speed Big Data Networks

    Semiconductor company PMC (PMCS) has introduced a new single-chip Optical Transport Network (OTN) processor supporting speeds of up to 100G for OTN transport, to meet the growing demand for the high-speed networks that enable cloud computing and “Big Data” workloads. The product was launched at OFC/NFOEC (the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference) in Anaheim, California.

    The PM5440 DIGI 120G advances data center networks currently filled with 1G and 10G services by supporting OTN transport, aggregation and switching. The OTN processor supports on-demand re-sizing of ODUflex from 1G to 100G, as well as 10G, 40G and 100G speeds, with numerous configurations supported. For the OEM market, the DIGI 120G provides leverage for a common hardware and software investment to rapidly build out a portfolio of line cards.

    “Explosive packet traffic growth projected from cloud services, residential broadband, and mobile backhaul is driving China Mobile to invest in scaling up our optical transport network to support 100G,” said Bill Huang, General Manager of China Mobile Research Institute. “To maximize our 100G investment, we need OTN solutions from the 100G ecosystem that allow us to manage the optical bandwidth efficiently so we can economically deploy 100G and OTN switching on a wide scale.”

    PMC said the DIGI 120G offers the industry’s only single-chip solution, the highest number of 10G ports enabling 2x higher density 10G OTN line cards, and flexible per port client-mapping of OTN, Ethernet, Storage, IP/MPLS and SONET/SDH. The PM5440 DIGI 120G along with the lower port density PM5441 DIGI 60G are available immediately.

    “The optical transport network infrastructure needed to support Big Data requires efficient sharing and dynamic allocation of the optical network bandwidth,” said Babak Samimi, vice president of marketing and applications for PMC’s Communications Business Unit. “Our DIGI 120G brings innovations that disrupt the economics of 100G by responding to the industry’s need to move towards dynamically configurable optical transport networks for delivering cloud services.”

    4:00p
    HP Converged Infrastructure For Dummies

    Today’s modern IT infrastructure is more demanding than ever. With more devices, more users and IT consumerization, there is a greater need for agility and efficiency. Many organizations have moved towards better computing practices and strive to increase the number of users they can support – both now and in the future.

    This is where high-density, converged systems come into play. Simply put, a converged infrastructure enables organizations to accelerate time to business value. This is achieved by turning today’s rigid technology silos into adaptive pools of assets that can be shared by many applications and managed as a service. The idea isn’t hard to grasp: unify systems, simplify management and create efficiency.

    The key becomes understanding the underlying infrastructure and the engines that drive this efficiency. Download HP’s Converged Infrastructure For Dummies to gain a greater understanding of how a converged infrastructure works, where it can directly benefit your organization, and how to identify the right system for you.

    HP’s Converged Infrastructure

    [Image source: HP’s Converged Infrastructure For Dummies]

    The Converged Infrastructure For Dummies eBook covers eight chapters of information around the converged environment and illustrates its points with excellent graphs and infographics.

    • Chapter 1, The Era of Convergence
    • Chapter 2, Things to Know about Infrastructure Convergence
    • Chapter 3, How Convergence Affects You
    • Chapter 4, The Inner Workings of a Converged Infrastructure
    • Chapter 5, Finding the Right Converged Infrastructure Solution
    • Chapter 6, How HP Can Help
    • Chapter 7, Eight Reasons You Should Embrace the Era of Convergence
    • Chapter 8, Five Ways to Converge with Ease

    Download The Converged Infrastructure For Dummies eBook to learn how and where a converged infrastructure can fit into your organization. Remember, greater user density and new types of environment management systems all strive to simplify the IT process. The most important part, however, is understanding how these technologies work together to improve not only IT, but the entire business organization.

    5:59p
    CommScope Acquires iTRACS in DCIM Deal

    A screen shot displaying some of the 3-D capabilities of data center modeling software from iTRACS Corp. (Graphic courtesy iTRACS)

    Consolidation was to be expected in a crowded DCIM field, and CommScope has struck in a big way, acquiring iTRACS Corporation, a provider of enterprise-class data center infrastructure management (DCIM) technology. The acquisition expands CommScope’s solutions for supporting enterprise customers’ infrastructure management requirements.

    The transaction was completed Monday and terms are not being disclosed. CommScope funded the acquisition from cash on hand and funds available under its revolving credit facility.

    iTRACS was previously a privately held company principally located in Tempe, Ariz., with a 25-year track record in physical infrastructure management. It helps enterprises improve operational efficiency, maximize capacity utilization, reduce costs and optimize the business value of their infrastructure investments. More than one-third of its customers are in the Global 500/Fortune 500, with 1.6 million square feet managed at one customer’s site alone.

    “Together with iTRACS, we can provide business enterprises around the world with the broadest and best possible solutions for designing, managing, monitoring and planning data center infrastructure,” said Kevin St. Cyr, senior vice president of Enterprise Solutions, CommScope. “iTRACS Converged Physical Infrastructure Management (CPIM) will build upon CommScope’s imVision solution to position us with a platform that we believe will bring to our customers best-in-class network and power infrastructure documentation and asset management. This open architecture platform, coupled with a 3D visual modeling that is second to none in the DCIM market segment, provides a complete visual representation of the entire physical ecosystem, including energy consumption, thermal conditions, network connectivity, power connectivity, and can incorporate additional statistics like server performance and storage capacity.”

    DCIM is a rapidly growing field as enterprises recognize its strategic value.

    “iTRACS brings CommScope an unparalleled understanding of enterprise connectivity through its rich history in intelligent infrastructure,” said St. Cyr. “iTRACS understands the interconnectivity and interdependencies in the physical layer—and how to manage those interrelationships to optimize the performance of the logical layer—better than anyone. We look forward to combining iTRACS’ award-winning software portfolio with CommScope’s global reach, deep expertise and robust data center solutions portfolio. The benefits and synergies of two leaders coming together to provide world class solutions are a compelling story for our customers and can set a new standard in what DCIM can do and be.”

    iTRACS CPIM provides a holistic view of the entire physical ecosystem of a data center and dynamically understands how a change or failure of a single device in the ecosystem affects the performance of all other devices and conditions in the environment. The platform allows users to see, understand, manage and optimize these dynamics continuously in a real-time visualized 3D environment.

    iTRACS said it believes open-systems alliances and partnerships are vital for the future and the company intends to honor and build on those relationships.

    “iTRACS combines unparalleled interconnectivity expertise and best-of-breed visualization capabilities in an open-systems, management-based platform that transcends vendor-lock or proprietary constraints,” said Elizabeth Given, president and chief executive officer, iTRACS. “We empower infrastructure owners and operators with holistic, actionable insight that drives knowledge-based decision-making across the entire multi-vendor ecosystem. Decision-makers can proactively align their IT resources to the shifting needs of the business; leverage cloud, big data, and other technology game-changers; maximize business continuity; and optimize the business value of the entire infrastructure investment. By joining CommScope, we look forward to creating an unmatched portfolio of open-systems intelligent infrastructure management that will drive a new level of business value for enterprises around the world.”

    For more background on iTRACS and its technology, check out our series of Industry Perspectives columns from Gary Bunyan of iTRACS.

     

    6:15p
    Hardening OpenStack to Support Trust in Public Clouds

    Vin Sharma is a software strategist at Intel responsible for planning and marketing Intel contributions to open source datacenter software projects, specifically Hadoop, OpenStack, KVM, and enterprise Linux.

    VIN SHARMA
    Intel

    A public cloud is meaningless without multi-tenancy. And multi-tenancy is unworkable without trust in the infrastructure. So when the OpenStack community came together at its summit in San Diego last fall, I was particularly excited about the full day of security topics on the agenda. I wasn’t alone – the packed sessions were a clear indication that the OpenStack community is serious about building trust from the foundation up, as well as hardening against vulnerabilities. This focus on security is timely – it has been close to the hearts and minds of many at Intel.

    Security Concerns on the Rise

    We’ve all known for a while that as enterprises move business-critical workloads into a public cloud, security and privacy issues rise to preeminence. The annual survey conducted by the Open Data Center Alliance continues to reinforce this view. As with every other “enterprise requirement,” then, cloud service providers and solution vendors start with a high-contrast choice: either adapt for the cloud what has worked well in traditional data centers (knowing full well that some techniques just won’t scale) or design a whole new approach for this new IT delivery model.

    So when a new open source project like OpenStack comes along, it presents a great opportunity to implement the best new thinking on cloud security, while leapfrogging over hurdles strewn by enterprise legacies. Put simply, we have that rare opportunity to keep the baby and let out the bathwater.

    To be sure, the underlying security infrastructure of OpenStack must be hardened—with authentication, encryption, role-based access control, containment, auditing and myriad other security capabilities that are well understood in traditional enterprise operating systems. What we are advocating is a combination of security features and practices refined over decades of enterprise usage with security mechanisms built specifically for use cases that are unique to the public cloud – like trustworthy multi-tenancy.

    Trusted Compute Pools

    One significant step in that direction is the notion of “trusted compute pools” – a usage model that Intel has espoused and brought to light with the Folsom version of OpenStack. The premise is simple:  organizations moving regulation-compliant workloads to the cloud require the same assurance of security that they get from traditional IT today. To support a service provider’s ability to create a pool of resources whose integrity can be assured, Intel developed a number of components that weave through the stack – from UI changes in Horizon through the APIs and scheduler in Nova to an independent remote-attestation server down to the trusted boot pre-kernel module in the Xen/KVM hypervisors. The lynchpin of this apparatus is a new filter in the OpenStack scheduler that selects servers whose trust has been attested. We’re thrilled that the entire solution stack is ready to be packaged and supported by distribution vendors such as Canonical.
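
    To make the mechanics concrete, here is a simplified, hypothetical sketch – not Intel’s actual filter code – of how a Folsom/Grizzly-era Nova scheduler filter can admit only attested hosts. The attest() helper is a stand-in for the call to the independent remote-attestation server, keyed off the trust:trusted_host extra spec on the instance flavor:

    ```python
    from nova.scheduler import filters


    def attest(host_name):
        """Stand-in for a query to the remote-attestation server; a real
        implementation would check the host's measured launch against
        known-good values."""
        return "untrusted"


    class TrustedHostsFilter(filters.BaseHostFilter):
        """Only pass hosts whose platform trust has been attested."""

        def host_passes(self, host_state, filter_properties):
            extra_specs = filter_properties.get(
                "instance_type", {}).get("extra_specs", {})
            # Enforce attestation only when the flavor asks for a trusted host.
            if extra_specs.get("trust:trusted_host") != "trusted":
                return True
            return attest(host_state.host) == "trusted"
    ```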

    And at the Summit with developers working on Grizzly, we are proposing enhancements to identity management using platform attributes. Until now, the fundamental element of identity has been a set of user attributes: whether it’s username & password, biometrics, or something else. Intel’s Abhilasha Bhargav-Spantzel is proposing device attributes as another element of identity. Abhilasha wants you to envision device characteristics as another link in the chain of the trust forged from client to cloud, when a trusted service is delivered by a provider.

    How is this useful? It enables a number of use cases for the delivery of cloud services with varying levels of personalization and anonymity. For example, a service provider might want to deliver some services to specific device types regardless of user attributes, while delivering other, more security-sensitive services only after robust attestation of device attributes as well as user credentials. There’s a lot of exciting work happening in this area, and we are developing use cases and blueprints that advance the state of the art.

    To sum up, we’re excited by the promise and potential that OpenStack presents to build an infrastructure that balances privacy, security, and usability while delivering cloud services at scale. Abhilasha’s video from OpenStack is now online. Stay tuned for updates on our contributions to OpenStack and cloud security.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:30p
    Fusion-io Acquires Software-Defined Storage Company ID7

    Fusion-io (FIO) announced that it has acquired ID7, a specialist in software-defined storage. UK-based ID7 is an open-source leader in shared storage systems, as well as the developer of and a key contributor to SCST, a Linux storage software subsystem used by many storage vendors throughout the world.

    “Software defined storage platforms are key to delivering peak performance and efficiency in today’s data centers,” said David Flynn, Fusion-io CEO and Chairman. “ID7 has provided valuable contributions to the industry, making us excited to welcome them to our ION Data Accelerator team as we continue to grow our business with the expertise of many of the world’s most innovative engineers.”

    ID7 builds storage solutions across iSCSI, Fibre Channel, InfiniBand and other interfaces, and its engineering team is the primary developer of the SCST storage subsystem. The SCST core provides a unified, consistent interface between SCSI target drivers, the Linux kernel and the storage system by connecting target drivers with a physical or emulated storage backend. ID7 has been collaborating with Fusion-io on software development for the ION Data Accelerator, software that transforms industry-standard servers into shared storage appliances. The ID7 team, including the primary SCST developers, will join Fusion-io as a result of the acquisition.

    “We had an opportunity to work with Fusion-io on the development of the ION Data Accelerator when it became apparent that the team has been founded on a culture of architecting software innovation deep within the Linux operating system kernel to deliver significant breakthroughs in modern storage architectures,” said Mark Klarzynski, Founder and Chief Technology Officer of ID7. “We’re excited to join the Fusion-io team of world class engineers and developers to work together on open, software defined solutions to today’s most challenging data demands.” Klarzynski also wrote a blog post about joining Fusion-io, and the promise of software defined storage. 

    7:00p
    Old Gas Tower to Become Futuristic Data Center

    The exterior of the Stockholm Gasometer, which was built in 1893. Swedish ISP and hosting provider Bahnhof hopes to convert the building into a data center. (Photo: Bahnhof)

    In one of the more interesting retrofit projects we’ve seen, a Swedish ISP is planning to convert a huge former natural gas holding tank into a five-story data center. The developer is Bahnhof, which has gained notice for its unusual data center designs, including the “James Bond Villain” data center in a former nuclear bunker and a modular unit designed to look like a space station.

    This time Bahnhof plans to rehab one of Stockholm’s huge gasometers – towering buildings designed to store gas – and turn it into a five-story data center housing thousands of servers. The gasometer project is one of two new data centers planned by Bahnhof, both of which will capture waste heat from servers for use in district heating systems that provide energy for homes and offices.

    The second project, known as Nimrod, will be built on the site of one of the plants feeding Stockholm’s district heating and cooling system. The existing building is operated by Fortum, a large energy company in Stockholm.

    “Fortum let us construct a data center on top of Europe’s most powerful heat pumps for a direct transfer of heat into their system,” said Jon Karlung, the CEO of Bahnhof, whose love of futuristic design has informed the company’s facilities. “Why vent the energy out?”

    Karlung says these projects are envisioned as data centers for large IT companies, and that Bahnhof is in talks with a large US company about one of the sites. “There is really a substantial interest,” said Karlung. “The concept works for anybody that doesn’t want to ventilate out money in thin air. Our role is to build and provide the concept. We do this as part of our business. We are also a hosting provider, but this is pure design and construction.”

    The Gasometer

    The gasometer is a cylindrical building erected in 1893, constructed of red brick and enclosed by a spectacular wood-and-steel ceiling structure, which Bahnhof says contributes to the “sacral character of the space.” Here’s a look at the building’s interior:

    [Photo: The interior of the Stockholm gasometer (Bahnhof)]

    Bahnhof has commissioned two designs for the gasometer site. One is from Albert France-Lanord, the designer of Pionen White Mountain, Bahnhof’s stylized high-tech underground fortress 100 feet beneath Stockholm.

    8:00p
    Bringing Colo to the Customer: Modular Gets Local

    IO has created a modular data center for LexisNexis within a short drive of the company’s global headquarters in Dayton, Ohio. (Photo: Rich Miller)

    DAYTON, Ohio – Colo has come to the customer. In a business park just minutes from its global headquarters, LexisNexis is housing racks of IT gear inside factory-built data center modules from IO. It’s an example of a new paradigm for enterprise data centers, in which pre-fabricated designs can create resilient Tier III facilities within 120 days at any location a customer chooses.

    LexisNexis, an information service provider for the legal profession, is the prototype customer for IO’s on-site offering for enterprise customers. IO has built a data center on LexisNexis’ doorstep in Springboro, Ohio, a suburb of Dayton, where the company has already deployed two double-wide D2 modules housing 400 kilowatts of IT load.

    This new facility is filled with modular data centers, steel enclosures which are about 42 feet in length and can house up to 50 racks of IT gear. Dubbed “IO.Anywhere,” the modules are built in a factory in Phoenix and can be shipped virtually anywhere by truck, rail or plane. IO has developed modules for networking gear and power and cooling equipment, allowing customers to create all components of a modern data center.

    A New Look for Enterprise Data Centers

    IO shifted to a modular design in 2010, and until now has housed its customers in a pair of massive “modular colo” facilities the company has built in Phoenix and New Jersey.  The IO Ohio facility represents the next phase in its vision, opening up new possibilities in site selection that could gradually alter the enterprise data center landscape.

    IO and other modular providers say these designs provide a cheaper and faster way to deploy data center capacity. They also offer a predictable, repeatable design that can standardize many aspects of expanding data center capacity. That’s attractive to a growing list of enterprise customers, including LexisNexis, which is using IO’s “Data Center as a Service” program.

    “This is a much better way to do it,” said David Short, Senior Project Manager for LexisNexis, a division of Reed Elsevier. “We’ve got to get to a point where we can control our operations and control our costs.”

    In addition to housing its IT gear, the 46,000 square foot Dayton facility includes offices for LexisNexis, which will soon build a network operations center (NOC) at the site to manage its global data center footprint.

    Over time, LexisNexis will add another 400 kW of capacity, while IO will fill the remainder of the 26,000 square foot data area with modules housing its own colocation customers. This offers a “best of both worlds” relationship – LexisNexis gets a dedicated, energy-efficient infrastructure that can be built in phases, while IO gets a marquee anchor tenant to support the facility, as well as the ability to generate revenue from its own customers.

    More “Anywhere” Projects in the Works

    The Dayton facility is IO’s first public construction project at a customer-selected site, but there are already more in the works. IO is creating modular data centers for investment bank Goldman Sachs in the United States, the UK and Singapore. And last week’s announcement that IO has won a $17.5 million contract with the Securities and Exchange Commission could mean a future presence for IO in the Washington D.C. market.

    LexisNexis is a long-time IO customer, starting out in traditional raised-floor data center space in IO’s first project in Scottsdale, Arizona. The company provides managed hosting and business continuity services for  law firms, helping them safeguard their intellectual property. The Dayton facility will consolidate LexisNexis customer gear that previously was housed in a facility in Columbus, Ohio.

    Short said the switch from a traditional data center to a modular deployment wasn’t a major stumbling block for LexisNexis’ customers. “It’s an education process for our customers,” said Short. “As soon as they walk in and see it, they get a comfort level.”

    The IO.Anywhere modules are among a new generation of customized container-based designs that resemble a traditional data center. To provide visual continuity, the module design incorporates a raised floor, which isn’t essential but provides a familiar look and feel for users.

    Efficiency Through Containment, Control

    Efficiency is a major selling point for the IO modular solution, along with its flexibility and repeatable design. The enclosed environment functions like a containment system, allowing greater control over cooling airflow. The modules create a consistent environment that allows LexisNexis to customize its configuration “inside the box,” an important consideration for service providers.

    “We were able to design a power distribution system that allows us to install two to five cabinets at a time,” said Short. “That flexibility is important.”

    IO began retrofitting the space last July, and LexisNexis was able to begin moving in by early November.

    “We were able to bring IO Ohio online in less than 90 days, demonstrating the technology’s ability to deliver data center capacity where and when it’s needed to meet the growing demands of customers,” said Rick Crutchley, Senior Vice President of Global DCaaS Sales for IO. “We’re proud to continue our relationship with LexisNexis as they adopt a Data Center 2.0 strategy for their IT operation in Ohio.”

    LexisNexis is also using the IO.OS software to manage its infrastructure in Dayton. With a growing number of companies providing modular and factory-built data center products, IO sees its data center management software as a differentiator as more companies enter the market for modular enclosures.

    The industry debate about modular data centers will continue, as some continue to question the economics and breadth of use cases for modular designs. The IO Dayton facility provides a real-world example of the potential for local deployments, adding another data point to the discussion as IO continues to make its case, one customer at a time.


    David Short, Senior Project Manager for LexisNexis (at right) checks in at the company’s new modular data center with Jon Lind (left), a business development manager at IO Dayton. (Photo: Rich Miller)


