Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, January 29th, 2014

    12:30p
    CyrusOne Receives Open-IX Certification in Six Markets

    CyrusOne’s Phoenix data center is one of the locations that has been certified by the Open-IX Association. (Photo: CyrusOne)

    CyrusOne has received data center certification from the Open-IX Association for six of its largest data centers. The company is among the first batch of providers to receive Open-IX certification and the first to receive multi-site data center certification. The certified facilities include CyrusOne data centers in Dallas, Houston, Austin, Cincinnati and Phoenix.

    Open-IX began accepting North American data center and Internet exchange point certification applications earlier this month. The association started with the key Northern Virginia market in 2013, but saw demand in other regions as well.

    Open-IX, as a refresher, wants to create a new network of neutral, member-governed Internet exchange points (IXPs) that allow participants to trade traffic. The group is embracing a non-profit model that is widely used in Europe and spreads exchange operations across multiple data centers in a market. In the U.S., these exchanges are typically hosted by commercial providers, with interconnections concentrated in a single facility or campus operated by that provider.

    A More Robust Ecosystem

    “We’re very pleased to receive data center certification from OIX and excited to participate in the continued growth of the organization,” said Josh Snowhorn, vice president and general manager of Interconnection at CyrusOne. “With six of our largest data centers now certified we can bring OIX participants such as cloud, content and network providers closer to our Fortune 1000 customers. This allows for a more robust data center ecosystem that can deliver greater efficiencies, resiliency and transparency.”

    Many providers have announced their intention to get Open-IX certified. Digital Realty has initiated the certification process for its data centers in several key markets, including New York, New Jersey, Chicago, Northern Virginia, Silicon Valley, San Francisco and Los Angeles.

    There has been a surge of European Internet exchanges entering North America as well. The Amsterdam Internet Exchange (AMS-IX) has opened in New York, the London Internet Exchange (LINX) is launching the LINX NoVa exchange across three sites in Northern Virginia, and Frankfurt-based DE-CIX is setting up an exchange in the New York market.

    1:00p
    Latisys Adding 3.6 Megawatts In Suburban Chicago

    Some of the data center space inside the Latisys Chicago data center. (Photo: Latisys)

    Colo and cloud service provider Latisys is increasing its data center capacity in suburban Chicago on the back of continued strong demand. The company is building out 24,600 square feet of raised floor, adding 3.6 megawatts of critical power in Oak Brook. The new build will create CHI-06, which will open later this year and will be part of Latisys’ CHI1 data center, a 99,000 square foot Tier III facility.
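
    For context, those build-out figures imply a fairly dense design. Here is a quick back-of-the-envelope check, assuming the full 3.6 megawatts of critical power serves the 24,600 square feet of raised floor (the announcement does not state that split explicitly):

        # Rough design power density implied by the CHI-06 build-out figures.
        critical_power_w = 3_600_000   # 3.6 MW of critical power
        raised_floor_sqft = 24_600     # square feet of raised floor

        watts_per_sqft = critical_power_w / raised_floor_sqft
        print(f"~{watts_per_sqft:.0f} W per square foot of raised floor")  # ~146 W/sq ft

    A figure in that range is broadly consistent with the high-density colocation positioning Latisys describes below.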

    Latisys underwent an expansion in Chicago in May of last year, and the company is already primed for the next phase. “We’re not out of space, but the pace of adoption has been strong enough that we made the decision pretty quickly to continue the momentum and be ready,” said Pete Stevenson, CEO of Latisys. “It didn’t totally take us by surprise, but we were extremely pleased with the growth.”

    What’s driving all of this growth in Chicago? “There are many factors,” said Stevenson. “People have rid themselves of the illusion that they have to be downtown. I’d say at least half of our Chicago business is people outsourcing for the first time – Chicago has awoken to the outsourcing bug. In the past 12 to 18 months, it’s as if an outsourcing giant has been awakened.”

    Growing Demand for Managed Services

    Stevenson also notes that Chicago has historically been a stronger market for colocation than managed hosting. That has changed. “We’ve seen an acceleration of our managed business,” said Stevenson. “We ended up 65 percent year over year in managed growth, and a significant chunk was Chicago-based business.”

    Latisys is uniquely positioned in that it offers hybrid IT, spanning high-density colocation, managed services, both private and multi-tenant cloud, and everything in between.

    “A CIO wants a partner that will take complex stuff off of his hands,” said Stevenson. “We can really prescribe the ideal scenario. Colocation still remains a very important piece of our portfolio, but our end to end, unified platform is also an important part.”

    When it comes to customers, Latisys says it is seeing requests from all ends – some customers ask them to help with a cloud strategy, and some customers come looking for colocation and end up on hosting.

    “There’s been several notable wins where it was initially a colocation conversation and it became a managed deal,” said Stevenson. He attributes the company’s success in Chicago to a consultative approach and truly getting to know each customer’s unique needs. Rather than shoehorning customers into a certain setup, Latisys’ diverse portfolio of offerings allows for more custom-tailored deals.

    Another big driver in the Chicago suburbs, according to Stevenson, is disaster recovery. “There’s a tremendous amount of interest (in DR),” he said. “It’s now more cost effective than ever for a company to build a viable, flexible DR plan. People that were priced out previously, now have that viable option.”

    Chicago Suburbs On Fire

    “This is the latest in a series of Latisys investments both nationally and in the Western suburbs of Chicago,” said Glenn Ford, Senior Analyst with 451 Research. “Latisys has a range of services with the ability to offer hybrid solutions that span datacenter colocation, managed hosting and cloud. The addition of 24,000 square feet of enterprise quality datacenter space reflects both accelerating demand for outsourcing and constrained capacity in and around Chicago.”

    “Chicago’s western suburbs have arrived as an attractive alternative for downtown enterprises seeking additional capacity and data center resiliency, making this region a critical hub for our unified data center platform,” said Stevenson.

    Latisys’ Oak Brook data center campus serves as the Midwest hub of Latisys’ national platform of high-density data centers – which includes Denver as an ideal Disaster Recovery site and Southern California and Northern Virginia as gateways to Asia Pacific, Europe and South America. More than 800 enterprises and government organizations rely on Latisys.

    Oak Brook will offer the full suite of managed services and capacity to meet increasing demand for power and space, and it is in close proximity to downtown Chicago and the CME. The company also recently invested in a new monitoring platform that gives both Latisys and its customers deep insight into the infrastructure.

    CHI-06 will be tied directly to existing fiber carriers in the buildings and monitored 24x7x365 by on-site NOC personnel and systems. Latisys engages external audit firms to perform annual assessments to support customers’ diverse compliance requirements. These accredited examinations span SOC 2, SOC 3, PCI DSS and HIPAA.

    1:30p
    Equinix and AT&T Form Alliance for Enterprise Cloud Services

    Cable trays filled with fiber within an Equinix data center. (Photo: Equinix)

    Equinix (EQIX) and AT&T announced a new alliance that will deploy AT&T networking technology across select Equinix data centers globally. AT&T NetBond will be installed in those data centers, enabling customers to connect to their cloud services using their private AT&T VPN networks, which deliver highly secure connections with high reliability and performance capabilities, rather than relying on access via the public Internet. The alliance will create the opportunity for cloud providers to allow access to their services via AT&T NetBond from Equinix data centers around the world.

    “The AT&T and Equinix alliance will boost enterprise customers’ confidence in the security, reliability and performance of the cloud,” said Mike Sapien, Principal Analyst – Enterprise at Ovum. “The companies are addressing many of customers’ concerns about moving applications to the cloud and will ultimately speed up enterprise cloud adoption. This is a great example of AT&T’s open network initiatives aligning with Equinix’s cloud ecosystem.”

    Available in the first half of 2014, the AT&T NetBond service will combine the security of AT&T virtual private networking with cloud resources.

    “Business customers want rock solid network-based security protecting their traffic and applications when they use cloud services,” said Jon Summers, senior vice president growth platforms, AT&T Business Solutions. “Virtual private networking gives them better protection against Internet threats while also delivering the reliability, agility and performance they need when accessing the cloud.”

    “Equinix sits at the intersection of cloud service providers, network service providers and enterprise customers that want to use those services,” said Pete Hayes, chief sales officer for Equinix. “The alliance lets AT&T quickly expand its portfolio of AT&T NetBond-enabled cloud services by using our position as a cloud hub. Together, we’re making the network as flexible as the cloud and giving enterprises confidence they can move their most demanding applications to the cloud and still meet their security, scalability and performance requirements.”

    2:00p
    Dell Teams With Cumulus Networks on Disaggregated Networking

    Dell has teamed with hot networking startup Cumulus Networks, signing a reseller agreement that figures to boost interest in disaggregated networking.

    Dell will begin offering Cumulus Linux network OS as an option for its Dell Networking S6000 and S4810 top-of-rack switches. This combination will give customers fast, high-capacity fabrics, simplified network automation and consistent tools, and help lower operational and capital expenditures.

    “This is a great example of innovation coming from the new Dell,” said Tom Burns, vice president and general manager, Dell Networking. “Networking is an industry crying out for disruption. We’ve done this before with PCs and servers, putting us in the best position to offer a choice of network operating systems. Networks are like human minds – they work better when open.”

    “Dell is fundamentally changing the nature of the networking business, and this partnership with Cumulus Networks represents a definitive step towards disaggregating hardware and software,” said JR Rivers, co-founder and CEO of Cumulus Networks. “In this new open, multi-vendor ecosystem that’s becoming all the more prevalent, the customer finally gets to choose exactly the components they need to build the software-defined datacenter of the future without having to worry about vendor lock-in.”

    Cumulus launched last June, and now has more than 100 customers. The company said partnering with a tier-one enterprise technology supplier like Dell will expand its reach into new markets and regions.

    “This announcement is emblematic of an eventful period in data center networking,” said Brad Casemore, research director, Datacenter Networks, IDC. “Cloud-service providers and large-enterprise customers are thoroughly evaluating alternatives to their traditional data center network infrastructure. Dell has chosen to position itself as a strong proponent of disaggregation of network hardware and software, while Cumulus Networks has struck a partnership with a major vendor to gain favorable exposure in more customer accounts. Such alliances will become increasingly important as developments such as network disaggregation reconfigure industry ecosystems.”

    3:00p
    Teradata Launches Analytics for SAP


    Teradata (TDC) has launched a software and services solution for SAP that bundles business insight with analytics, combining SAP data with operational data and big data. Designed by Teradata, the new solution provides actionable insights and enables more complete, comprehensive, and accurate business decisions.

    “It makes perfect sense to move SAP system data to Teradata and integrate it with big data and business data,” said Neil Raden, chief executive officer and principal analyst, Hired Brains Research. “By doing so, SAP customers will enjoy a vastly more performant, simplified, and useful system. They will also be served by a very large professional services organization from Teradata that is focused on big data analytics, data warehousing, and business intelligence.”

    Integrated data from multiple SAP and non-SAP systems can then be exposed to over 1,000 in-database analytic functions available from Teradata and its partners. The resulting analytics equips SAP application users to meet the unrelenting demand for intelligence. Out of the box, the solution includes enterprise architecture, ELT scripts, and data models. Business users get a full, enterprise-wide view of their business with integrated data.

    “The integration of SAP-system data into the Teradata Database will enable our customers to leverage high-value, predictive analytics and see their entire organization in a new way and guide it forward,” said Scott Gnau, president, Teradata Labs. “They will be able to move from operational reporting to creating a vision for the future. Teradata Analytics for SAP can be rapidly deployed and easily maintained, and it also reduces IT complexity and cost.”

    3:00p
    Factoring Cost Into Your Cloud Services Evaluation

    It’s time to get your head into the cloud! Modern organizations are all looking at ways to optimize the way that they do business. A big piece of this has been user mobility, data delivery, and the truly interconnected data center model. The wide-scale adoption and rapid move to the cloud is being met by a similarly explosive growth in the number of providers offering services.

    Unfortunately, not all cloud services are created equal. The wrong choice can have significant consequences and could lead to a loss of revenue, productivity, reputation, and customers.

    As Sungard explains in this whitepaper, to understand how the quality of service a provider delivers can impact a business, one need only look at how companies are using cloud services. The number of IT organizations that have migrated at least half of their total applications to the cloud increased from 5 percent at the start of 2012 to more than 20 percent by year’s end. Others are turning to cloud-based infrastructure services to complement or supplant the traditional approach of delivering IT services from in-house data centers. Interest in this form of cloud service is growing rapidly.

    The challenging part – as with any technology – is downtime and outages. Unfortunately, most businesses cannot tolerate downtime associated with cloud outages. They lose business, experience lost productivity, potentially lose customers for good, and expose the organization to fines and penalties. Simply put, quite often there is a “cost of doing it wrong” when selecting a cloud service provider.

    Download this whitepaper today to learn about the six key cost considerations when evaluating a cloud service. These include:

    • The cost of downtime.
    • Financial impact related to lost reputation.
    • Lost productivity.
    • Security-related costs.
    • Regulatory-related costs.
    • The cost of IT staff time.

    As your organization makes its way into the cloud, Sungard outlines three key considerations for selecting the right provider:

    • Reduce the cost of downtime.
    • Reduce the financial impact related to lost reputation.
    • Keep worker productivity high.

    In creating the optimal cloud infrastructure, make sure to work with a provider that can meet your technological needs both today and in the future.

    3:03p
    Photos from Day One: Open Compute Summit V

    The Open Compute Summit V had a large crowd in the main ballroom when Frank Frankovsky, who chairs the Open Compute Foundation and works at Facebook, greeted the crowd and updated them on foundation news. (Photo by Colleen Miller.)

    SAN JOSE – The fifth edition of the Open Compute Summit kicked off on Tuesday at the San Jose Convention Center, and showed the growing enthusiasm for the open source hardware movement, with 3,400 participants registered for the conference, 150 official member companies and multiple new contributions in servers, networking and storage. We present photo highlights of the first day of the Open Compute Summit V.

    3:21p
    The UPS Debate: A Conversation on High Efficiency, Multi-Mode UPSs

    Brad Thrash is a product manager for GE’s Critical Power business with global responsibility for the company’s three-phase UPS product line, including its TLE Series UPS.

    BRAD THRASH
    GE Critical Power

    There are a lot of smart people in the data center power protection market debating the merits, value, risks and rewards of high efficiency uninterruptible power supplies (UPSs). Referred to as multi-mode UPS, or eco-mode, this technology uses smart control logic to switch in milliseconds, as needed, between a premium efficiency mode (multi-mode) and a premium power protection mode (rectifier/inverter double conversion). This improves energy efficiency by reducing the alternating current (AC) to direct current (DC) conversion steps required when utility power is within an acceptable tolerance. If a power anomaly threatens the load to data center servers and equipment, multi-mode UPSs quickly switch to double conversion mode.

    With energy efficiencies topping 98 to 99 percent, compared with double conversion technologies operating at 92 to 95 percent, these new multi-mode UPS architectures offer significant operating expense (OpEx) savings for data centers. Yet there is some industry discussion about the risks and rewards of energy efficiency versus power reliability and quality. Is the switching, or transfer, technology robust and fast enough to protect the load? Does a four or five percent energy efficiency improvement make a difference in life cycle operating costs? Are there enough multi-mode UPSs deployed at data centers to be statistically significant and make the return on investment (ROI) case?

    At GE’s Critical Power business, we’ve been working with energy efficient power solutions for a long time, and our eBoost UPS technology (a multi-mode system) is deployed in many data center locations globally. (See the YouTube video that explains this technology.) So we’re a bit biased. Yet the questions about high efficiency UPSs deserve further discussion, so we’ve outlined some of those issues and our thoughts below.

    Multi-Mode Transfer Speed

    The basic premise of any multi-mode UPS depends on balancing and optimizing the time spent in each mode: when stable, clean power is coming from the utility, it can be efficiently passed through to the load; when a power anomaly occurs, the UPS must shift into double conversion mode. The latter stabilizes the power, but reduces overall power efficiency.

    So what’s the optimum switching, or transfer, time? Some earlier white papers and blogs (1) suggest that anything over eight to 10 milliseconds (ms) is problematic, given that not all sensitive data center equipment (servers, etc.) has a tolerance at or above these levels. According to a Green Grid white paper (2) on multi-mode, “if, for example, a UPS has a transfer time of greater than 10 ms and is paired with information technology (IT) equipment that has ride-through capabilities of only 10 ms, the UPS may not be able to support the IT equipment.”
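
    That pairing constraint is easy to express as a rule of thumb. Below is a minimal sketch of the check the Green Grid describes; the numbers in the example are illustrative values taken from the ranges above, not measurements of any particular UPS or server power supply.

        def load_survives_transfer(ups_transfer_ms: float, it_ride_through_ms: float,
                                   margin_ms: float = 0.0) -> bool:
            """True if the IT equipment's power-supply ride-through covers the
            UPS transfer gap, with an optional safety margin."""
            return ups_transfer_ms + margin_ms <= it_ride_through_ms

        # Illustrative pairings based on the figures discussed above:
        print(load_survives_transfer(12, 10))  # False: >10 ms transfer vs. 10 ms ride-through
        print(load_survives_transfer(2, 10))   # True: a sub-2 ms transfer leaves headroom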

    That’s one of the reasons a few companies, including GE, design their multi-mode UPSs with transfer speeds of less than two milliseconds. The technologies that achieve these speeds are seamless to the load, but represent an intricate combination of power disturbance detection, analysis and control systems.

    When eBoost’s responsive monitoring technologies detect any sort of deviation on the main or bypass power path, the inverter is immediately turned on to allow quality power to flow from the double conversion premium protection mode. In the same instant, the static switch on the bypass path from the utility is turned off to block the disturbance from reaching the load. Several patented innovations enable GE’s eBoost technology to accomplish the multi-mode-to-double-conversion mode switching processes in less than two milliseconds.

    A variety of disturbance analyzers, some patented by GE, are employed in combination, such as an instantaneous adaptive voltage error detector that monitors subtle changes in amplitude and duration; a root mean square (RMS) voltage error detector that computes the root mean square of all three UPS output voltages for variances; or an output short circuit detector that, after a breaker is tripped, will automatically increase line current to rapidly clear and reset the breaker. A sophisticated transient inverter controller quickly manages the transfer of the load to inverter power and back again to the bypass path.

    All these advanced monitoring and control systems work in concert to anticipate and respond to a comprehensive set of possible power conditions, producing transfer speeds of less than two milliseconds. This speed minimizes the impact of the intermittent transfers to double conversion protection, while maintaining higher multi-mode efficiency for the majority of the time, when quality utility power is flowing.

    Does Multi-Mode Equate to Unprotected Utility Power?

    Some data center facility managers fear it’s hard to sell their risk-averse senior managers on running their data center’s critical operations on multi-mode UPSs, which, for some UPS suppliers, amounts to running on a utility bypass line. At GE, designers incorporate a bypass line reactor that electrically couples with the output filter circuit to provide power line conditioning while the UPS operates in multi-mode (eBoost). This power line conditioning protects against many low-level transients from the utility source, thus cutting down the number of possible transfers to the UPS inverter. More than 98 percent of utility power anomalies are voltage transients, so this design feature is quite helpful. The other two percent of power anomalies are brownouts and blackouts, which truly need the double conversion mode and battery backup feature of the UPS system.

    Do Percentages Matter?

    A third argument concerning multi-mode UPSs we hear in the market is “if our UPS running in double conversion already gets us to 93 percent efficiency, why take a ‘risk’ for a few percentage points in efficiency? Can that extra energy efficiency provide a significant return?”

    If we look at a UPS deployment at a typical 10 megawatt (MW) data center realizing just a one percent gain in efficiency, we can see a significant impact over 10 years (see the recent Data Center Knowledge article on UPS TCO)(3). As the chart below shows (Figure One), while capital expenses (CapEx) are fixed, a total cost of ownership (TCO) evaluation of the OpEx for running a UPS over 10 years shows an operational savings of $1.4 million when energy efficiency improves a single percentage point, from 93 to 94 percent. With newer multi-mode UPS technologies that provide up to 96.5 percent efficiency, that savings could jump to almost $3.4 million.

    Figure One: GE chart comparing 10-year UPS total cost of ownership at different efficiency levels.
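
    The shape of that calculation is easy to reproduce. The sketch below estimates the 10-year cost of UPS losses for a 10 MW IT load; the electricity price is our assumption, and GE’s exact inputs (load profile, tariff, any cooling overhead on UPS losses) are not given, so the output illustrates the method rather than reproducing the chart.

        # Rough 10-year cost of UPS losses for a 10 MW IT load at different efficiencies.
        IT_LOAD_KW = 10_000        # 10 MW critical IT load
        HOURS = 8_760 * 10         # 10 years of continuous operation
        PRICE_PER_KWH = 0.10       # USD per kWh; assumed, not from the article

        def loss_cost(efficiency: float) -> float:
            """Cost of the energy dissipated in the UPS over 10 years."""
            input_kw = IT_LOAD_KW / efficiency   # power drawn to deliver the IT load
            return (input_kw - IT_LOAD_KW) * HOURS * PRICE_PER_KWH

        baseline = loss_cost(0.93)
        print(f"93% -> 94%:   ~${baseline - loss_cost(0.94):,.0f} saved over 10 years")
        print(f"93% -> 96.5%: ~${baseline - loss_cost(0.965):,.0f} saved over 10 years")

    Depending on the assumed tariff and whether the cooling needed to reject UPS losses is counted, the absolute figures shift, but the savings land in the same order of magnitude as those cited above.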

    Given that data center managers are typically looking to reduce OpEx numbers to cut costs, and hosting companies need to find savings at every level of their operations to remain price competitive, those efficiency percentages matter.

    Finding Proof Points

    When multi-mode or eco-mode technologies are discussed in trade journals, at conferences and in industry committees, there’s general agreement that multi-mode is the right path for all UPS designs moving forward. While some people still caution that there’s not enough runtime data to make the business use case, that’s not been our experience at GE.

    GE conducted two fleet trials during a three-year period from March 2010 to March 2013, which showed positive efficiency gains in data centers in Atlanta, Ga. and Louisville, Ky. The Atlanta facility, with two GE SG Series 300 kilovolt-ampere (kVA) UPSs, operated for more than 25,000 hours with 100 percent UPS reliability, running in eBoost mode 98 percent of the time. The Louisville data center, which uses four GE SG Series 750 kVA UPSs, also operated continuously at 100 percent UPS reliability for more than 21,000 hours and ran in eBoost mode 95 percent of the time.
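
    Those time-in-mode percentages are what determine the blended efficiency of a multi-mode deployment. Here is a simple time-weighted estimate; the per-mode efficiencies are assumptions drawn from the ranges cited earlier in this article, not measured fleet data.

        def blended_efficiency(eco_share: float, eco_eff: float = 0.99,
                               double_conv_eff: float = 0.94) -> float:
            """Time-weighted efficiency across eco (multi-mode) and double
            conversion operation, using assumed per-mode efficiencies."""
            return eco_share * eco_eff + (1 - eco_share) * double_conv_eff

        print(f"Atlanta trial (98% eBoost):    ~{blended_efficiency(0.98):.1%}")
        print(f"Louisville trial (95% eBoost): ~{blended_efficiency(0.95):.1%}")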

    More recently, GE helped CoreSpace, a data center, cloud and hosting provider, convert a newly acquired data center in Dallas, Texas, to a centralized power management and UPS architecture. The company is using a GE three-phase SG Series 500 kVA UPS system running in eBoost energy-efficiency mode. Since the conversion, the system has been providing an overall energy efficiency of 99 percent, with eBoost contributing an annual energy cost savings of $24,800.

    As multi-mode technology continues to advance and more deployments yield clear and dramatic performance and ROI data, we expect the debate about this efficiency-driving power conversion approach to continue. We invite the industry to join the conversation by adding comments to articles such as this one, sharing ideas at industry forums, and bringing forward new data and approaches for data center power efficiency.

    Endnotes
    (1) Schneider Electric Blog – UPS Eco Mode Can Deliver Big Savings – Jan. 30, 2013.
    (2) The Green Grid Association – White Paper #48 – 2012.
    (3) Data Center Knowledge – Using a Total Cost of Ownership (TCO) for Your Data Center – Oct. 2013.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Storage News: Quantum, Violin Memory, Tegile Systems

    Quantum boosts storage performance with the introduction of two new StorNext appliances, Violin Memory helps SCOR Global Life dramatically accelerate in-house applications, and Tegile Systems propels an Irish airline’s internal infrastructure.

    Quantum launches new StorNext appliances. Quantum (QTM) announced it has expanded its StorNext platform with two new StorNext 5 metadata appliances. The new StorNext M445 SSD leverages flash technology to maximize the benefits of StorNext 5, significantly raising the performance bar with a 7x boost in metadata operations. A second new metadata appliance, the StorNext M660XL, offers greater capacity and scale with support for up to 5 billion files, 100 percent compatibility with Apple Xsan, and fully integrated scale-out IP connectivity for Windows and Linux workstations. Additionally, StorNext 5 is now available to existing StorNext appliance users, enabling greater efficiencies in their media workflows. “The new StorNext 5 appliances truly unleash the potential of StorNext 5 for broadcasters and post production professionals and are ready to meet their toughest workflow challenges,” said Alex Grossman, vice president, Media and Entertainment at Quantum. “With more than two years of development in modernizing every component of StorNext to take advantage of the newest system architectures and technologies — beginning with flash metadata storage — the results are stunning and deliver dramatically new levels of performance and scalability. We are excited to see what our customers will create on the StorNext 5 optimized appliance platform.”

    SCOR Global Life selects Violin Memory. Violin Memory (VMEM) announced that it has been selected by SCOR Global Life USA Reinsurance Company to accelerate its in-house Windows Server and SQL Server-based applications. A 6000 Series Memory Array enabled the company to increase server performance by an average of four times, improve staff productivity, and provide timely year-end reports to reduce financial risk and meet compliance regulations. Reports that once required six full days using hard disk drives took SCOR Global Life only one day to complete with the Violin solution. With an acquisition looming and applications experiencing high disk latency, SCOR processes needed to be accelerated to meet the compliance regulations of the accounting department. “We knew of the performance and latency benefits of flash memory and that Violin led the market with its arrays, yet the results exceeded our expectations,” said Greg Clinton, vice president of IT at SCOR Global Life. “We received exceptional and prompt customer service and were treated like a true partner. We didn’t feel like we were just a number in the system.” To expedite the deployment of the Violin flash memory array, the account team at Violin shipped the array overnight and installed it the following morning.

    Tegile selected by Ireland’s national airline. Flash-driven storage provider Tegile Systems announced that Irish airline Aer Lingus has implemented its Zebi HA2800 arrays to revamp its internal infrastructure and provide the bandwidth necessary to better support end-user needs. Unable to scale at the speed necessary to cope with projects such as SQL databases, data warehousing and VDI deployment with its existing EMC storage, Aer Lingus looked to alleviate its problems by assessing other solutions available on the market. After conducting evaluations with various storage companies such as Apache and HP, the airline chose to supplement its EMC solution with Tegile based on the high performance and cost effectiveness of its Zebi arrays. “With the legacy EMC storage, as soon as we reached our storage limit, we had to go back to the vendor to buy additional storage space,” said Brian Price, IT Architect at Aer Lingus. “This was both costly and an inefficient use of our IT team’s time. With the implementation of Tegile arrays, storage is no longer an issue at Aer Lingus. The deployment has given us more bang for our buck and allowed a lot more breathing space for our IT team. Tegile arrays offer storage that is rich with features to enhance data management.”

    5:15p
    Facebook Taps CA Technologies for Hyperscale DCIM

    Facebook data center executive Tom Furlong discusses the company’s DCIM software during a session Tuesday at the Open Compute Summit in San Jose. (Photo: Rich Miller)

    SAN JOSE, Calif. - Facebook once contemplated building its own software to manage its massive data center infrastructure. But after a lengthy review of its options, the company has opted to use software from CA Technologies to track and manage its data center capacity.

    The announcement is a significant win for CA, which beat out a dozen companies for the high-profile deal. Facebook will use CA Data Center Infrastructure Management (DCIM) software to bring together millions of energy-related data points from physical and IT resources in its global data centers to improve power efficiency.

    DCIM software is seen as a key growth sector, as companies struggle to gain control over increasingly complex data center environments. But many end users have struggled to make sense of the DCIM landscape, which is crowded with more than 70 providers selling software to manage the various aspects of data center operations.

    One System to Rule Them All

    The hope is to find “one system to rule them all,” according to Tom Furlong, VP of Infrastructure Data Centers at Facebook, who discussed the company’s DCIM selection process Tuesday at the Open Compute Summit.

    “We are on a mission to help connect the world, and our IT infrastructure is core to our success,” said Furlong. “We are continually looking at ways to optimize our data centers and bringing all of our energy-related information together in one spot was a core requirement.”

    CA Technologies is a long-time player in the data center software arena that has repositioned itself for cloud computing workloads through a series of acquisitions. CA DCIM provides a web-based centralized solution for monitoring power, cooling and environmentals across facilities and IT systems in the data center as well as managing the use of space and lifecycle of assets which make up the data center infrastructure.

    Intensive Vendor Reviews

    Facebook conducted an intensive DCIM vendor review process. CA was one of a dozen companies considered and completed a proof-of-concept, followed by a more extensive pilot in a 100,000 square foot section of Facebook’s data center in Prineville, Oregon. This type of field test is a critical step in evaluating a DCIM vendor solution, Furlong said.

    “You have to figure out a way to try it before you buy it,” he said.

    CA then worked with Facebook to create a custom solution.

    “Facebook’s IT team can now bring this energy-related data into their broader DCIM system for an even more complete view of overall system status,” said Terrence Clark, senior vice president, Infrastructure Management, CA Technologies. “They can then analyze all of the data in aggregate to make decisions to improve efficiency and reduce costs, while delivering a seamless customer experience and creating new opportunities for innovation.”

    Data Quality Matters

    Facebook’s overall data center management strategy integrates several in-house tools, including Cluster Planner (used in deploying entire clusters) and Power Path, which provides data on electric consumption at many points within the data center.

    Despite its intense focus on data collection and management, the DCIM review process presented challenges and learning experiences, Furlong said.

    “We learned how important data quality can be,” said Furlong. “You need to look at every single facility in nauseating detail. At the scale we’re at, you can miss stuff.”

    7:54p
    Andreessen Bullish on ARM in the Data Center. And Bitcoin!
    Internet pioneer Marc Andreessen, right, makes a point at the Open Compute Summit while Arista Network chairman Andy Bechtolsheim listens. (Photo: Colleen Miller)

    Internet pioneer Marc Andreessen, right, makes a point at the Open Compute Summit while Arista Networks chairman Andy Bechtolsheim listens. (Photo: Colleen Miller)

    SAN JOSE - When it comes to the future of infrastructure, Marc Andreessen is bullish on ARM and Bitcoin. In a session at the Open Compute Summit, the Internet pioneer and founder of the venture capital firm Andreessen Horowitz said he sees big changes ahead for the hardware sector.

    He sees a big opportunity for low-power processors from ARM Holdings, the UK firm whose chips power iPhones, iPads and a plethora of mobile devices. Thus far, efforts to adapt ARM processors for servers have moved slowly. But Andreessen says the long-term outlook for ARM is strong.

    “I think data centers in the next 10 to 15 years will run many of the same components that now run on smartphones,” said Andreessen. “The reason I’m so bullish on ARM in the data center is that every large Internet service is bound by the cost of the data center, and by being I/O bound.”

    Andreessen is best known for developing the modern web browser at Mosaic and Netscape, but has serious data center experience from his post-Netscape outing as founder of managed hosting service Loudcloud in 1999. Since cloud wasn’t as cool at the time, the company changed its name to Opsware and focused on data center software before being sold to HP.

    Now, through his venture capital firm, Andreessen is investing in disruptive technologies. Sharing the stage with Arista Networks chairman and Sun co-founder Andy Bechtolsheim (more tomorrow on his thoughts), Andreessen discussed his firm’s expectations for big changes in networking and storage.

    Pumped for Bitcoin

    Andreessen was most enthusiastic in discussing the potential for Bitcoin, pumping his fist at the mention of the cryptocurrency. Andreessen Horowitz recently announced a $25 million investment in Coinbase, which offers a Bitcoin wallet.

    “Bitcoin is the first thing like the Internet since the Internet,” said Andreessen. “The amount of hardware and imagination that will be put behind it will be gigantic.”

    He noted the surprising pace of development for Bitcoin mining hardware, and the resulting burst in demand for data center space to house large mining operations seeking better power efficiency.

    “Mining is the heart of Bitcoin,” said Andreessen. “There’s a ton of work going on in optimizing Bitcoin mining. We’re now seeing fundamental advances in chip design and data center optimization. We’re seeing a new wave of interesting chip designs. I think these custom chips will dominate mining. I never would have said that a year ago.”

    For more on Bitcoin mining and data center infrastructure, see our recent coverage of the topic.

