Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, April 29th, 2014

    10:00a
    365 Data Centers Targets SMBs, Ditches Colo Contracts

    The “Main” may be gone, but the data centers remain. 365 Main has rebranded as 365 Data Centers, and is sharpening its focus as a provider of colocation services to the small business sector.

    As part of that shift, the new 365 Data Centers is skipping the traditional multi-year service agreement, and offering its colocation services on a “commitment free” month-to-month basis.

    365 Main took its name from the company’s initial property, an iconic San Francisco colocation center, which was sold to Digital Realty in 2010 along with four other properties. In 2012, 365 Main co-founders Chris Dolan and James McGrath resurrected the brand by acquiring 16 data centers from Equinix, seeing an opportunity to create a “national player with a local focus.” Last year 365 Main added an Evocative data center in Emeryville, Calif., giving it 17 facilities.

    Dolan and McGrath have since moved on, and a new executive team has come on board to launch the new brand and refine the focus. The new CEO is John Scanlon, a veteran of Internap and Yipes Communications. Chief Marketing Officer Keao Caindec worked with Scanlon at both those companies.

    Their goal is to optimize their offerings for small businesses, making colocation as user-friendly as possible. Small businesses are typically seen as candidates for cloud services, especially software-as-a-service offerings. But 365 Data Centers believes there’s an opportunity among small and medium-sized businesses that have regulatory compliance requirements and may be interested in creating private clouds or hybrid clouds in colocation centers.

    Caindec says small businesses are good candidates for colo – if it can be made accessible to them. “The big name colocation and data center providers are focused on the enterprise, and it leaves SMBs and the channel out in the cold,” he said. “We think there’s a gap there, and we want to address that.”

    That’s the impetus behind 365 Quick Start, which offers pay-for-use colocation with online sign-up and commitment-free terms. That’s a change from the common practice of two-year or three-year colocation contracts. 365 Data Centers isn’t the first service provider to offer these terms; CyrusOne last year introduced an “express” offering with month-to-month terms and online ordering. But 365 is convinced that a different approach is required to boost adoption of colo space by small businesses.

    “We are reinventing the customer experience into one that fully meets their needs and reflects the way business is done in the rest of the technology realm,” said Scanlon. “365 Quick Start is our first proof point of that. SMBs want to take gradual steps into the cloud and we are dedicated to supporting them with local presence and national reach, providing services that facilitate a hybrid cloud-like model.”

    Scanlon said the contract-centric nature of the data center business is tied to a real estate model, driven by the priorities of real estate investment trusts (REITs) and the desire for predictable future revenue for investors and analysts. He asserts that stickiness in the colo business is driven by exceptional service, rather than a detailed contract.

    Scanlon says 365 Data Centers hopes to leverage its presence in the central business districts (CBDs) in many of its markets. He says the company is working closely with economic development agencies to create startup-friendly environments to support urban tech incubators.

    “Our data centers sit inside cities, which is unlike the large national providers, which have largely built outside the city in greenfields. We see the data center as a cornerstone in the revitalization of our cities.”

    Availability of 365 Quick Start
    365 Quick Start is immediately available for purchase online. To purchase or learn more about 365 Quick Start, please visit: 365datacenters.com/quickstart.

    About 365 Data Centers
    365 Data Centers provides secure and reliable colocation services that offer an easier way to scale business growth and connect to the cloud. Through its 17 U.S. data centers, 100% uptime SLA, and national network of carriers and content providers, 365 Data Centers offers flexible pricing models that let customers pay as they grow with no long-term lock-in. Services are secure and tailored for small and mid-sized businesses, telecom carriers and cloud service providers. We partner with businesses, technology leaders and community leaders to build technology centers that enable economic growth. With 365 Data Centers as a local provider, driving business growth just got easier. For more information, visit 365datacenters.com.

    11:00a
    Pepperdata Raises $5 Million to Grow Hadoop Solution

    Hadoop specialist Pepperdata announced that it has raised $5 million in a Series A financing round, led by Signia Venture Partners and Webb Investment Network. Founded by former Yahoo and Microsoft executives, Pepperdata also lists Yahoo co-founder Jerry Yang and former Motorola CEO Ed Zander as investors.

    The company will use the new funding to accelerate investments in product development and further build out the company’s sales and marketing organization. Pepperdata software is already helping a number of customers, including some of the world’s largest internet and software-as-a-service companies, optimize large-scale Hadoop clusters with over a thousand nodes. Pepperdata’s software allows customers to run multiple applications on one cluster, achieving more efficient capacity utilization, predictability, and quality of service for their Hadoop infrastructure. The company plans to more than double the size of its team over the next 12 months.

    “We operate in a world where thousand-node Hadoop clusters are rapidly becoming commonplace,” said Sean Suchter, cofounder and CEO, Pepperdata. “We want to make running multiple big data applications on a single cluster as easy as running multiple applications on your laptop. This funding will allow us to continue enhancing the solution, expand our sales team, and deliver even more value to existing and future customers.”

    “Sean and Chad have built a world-class technical team who understand the challenges with Hadoop better than anyone — they were the first ones to run a business on it,” said Ed Cluss, Partner at Signia Ventures and Board Member at Pepperdata. “Technology that solves the challenges of running Hadoop in the enterprise will be critical to the future of big data. Pepperdata is uniquely positioned to provide a much-needed solution that brings increased visibility, capacity planning and control to Hadoop.”

    11:30a
    CloudVolumes 2.0 Enables Software Defined Workload Management

    Virtualization company CloudVolumes introduced version 2.0 of its workload deployment and management software, adding enhanced support for Citrix XenApp. The company is announcing an upcoming capability called AppCloaking, which will give customers additional flexibility to deploy multiple applications inside a single stack, and expose applications per user based on policy.

    CloudVolumes 2.0 features hybrid multi-cloud enablement, which allows customers to easily provision workloads to cloud providers such as AWS and Microsoft Azure. It enables developers to test and deploy their application stacks, driving more efficient resource utilization through increased virtualization. CloudVolumes’ virtual appliance now works with XenApp, enabling support for both Citrix XenDesktop/XenApp and VMware Horizon deployments.

    “With over 15,000 students, managing applications and their deployment can be a very daunting task, especially without a major infrastructure overhaul,” said Lucien Haak, team leader workplaces, Maastricht University. “CloudVolumes enables us to easily manage our complex desktop applications, so that we can rapidly deploy them to our students and faculty. Since it requires no modifications to existing applications, it is easy to implement and also reduces our software and hardware requirements, which helps us save on costs.”

    AppCloaking, a soon-to-be-released capability, will be introduced across the CloudVolumes family of products and will give users the flexibility to deploy multiple applications inside a single volume, instantly delivering applications only to entitled users. This approach will simplify deployment for users who want a golden image and a single volume containing many applications, while avoiding the alternative of delivering and managing large base images that simply hide applications within them, which significantly limits agility.

    “As we work with customers, we uncover more use cases that show the need for Software Defined Workload management solutions that simplify deployment in a cloud and virtualized world,” said Harry Labana, SVP and chief product officer, CloudVolumes. “Our goal is to provide customers with flexibility when it comes to their applications, so they are able to easily deliver them across diverse infrastructures, increasing efficiency and saving on costs.”

    12:00p
    What Will The Data Center of 2025 Look Like?

    What will the data center look like in 2025? Enterprise data centers will be much smaller, power densities will be much higher, and the majority of IT workloads will have moved to cloud computing platforms. That’s the consensus from data center professionals surveyed by Emerson Network Power for its Data Center 2025 Project, who were tasked to imagine what facilities will look like 11 years from now.

    For perspective, consider how much has changed since 2003. That was the era of the dot-com bust, a surplus of data center capacity, rack densities in the 250 watt to 1.5 kW range, and no social media revolution.

    “We didn’t know what to expect,” said Steve Hassell, President of Data Center Solutions at Emerson. “We wanted to step all the way back and get a feel of where the industry is going. There were expected results, such as increased utilization of the cloud, and some ambitious predictions, such as largely solar-powered data centers and power densities exceeding 50kW per rack.”

    “The common denominator was that most believe we’ll undergo a massive change to parallel the change that occurred in the 11 years prior,” added Hassell. “The pace of change is only going to increase.”

    Smaller Data Centers

    Emerson tapped over 800 data center professionals to generate the findings for Data Center 2025. Sixty-seven percent of participants believe at least 60 percent of computing will be cloud-based in the year 2025. It’s also likely that enterprise data centers will shrink in size: 58 percent expect that data centers will be half the size of current facilities or smaller, while 10 percent of participants believe the enterprise data center of 2025 will be one-tenth the size of current facilities.

    “The enterprise data center will take less space,” said Hassell. “With cloud adoption at 60 percent, probably what they were thinking about is core data centers that are very large, and with enterprise data centers, we’re seeing a move to computing at the edge.”

    Survey participants don’t envision radical changes in current approaches to data center thermal management. Forty-one percent expect a combination of air and liquid to be the primary method of data center cooling. Only 20 percent see ambient air, or free cooling, emerging as the primary means of thermal management, and just 9 percent see the emergence of immersive cooling.

    [Chart: emerson-2025-density]

    Higher Densities, But Better Efficiency

    What about power? On average, experts predict power density in 2025 will climb to 52 kilowatts per rack. However, a significant majority of survey participants (64 percent) believe that it will require less energy in 2025 to produce the same level of data center computing performance available today.

    Despite long-held expectations of soaring power densities, Emerson Network Power’s Data Center Users’ Group notes that average density has remained relatively flat since peaking at 6kW nearly a decade ago. Yet the Data Center 2025 findings predict a radical change that will affect the physical environment of the data center.

    [Chart: emerson-2025-power]

    Other notable survey results and forecasts from the report:

    • Big changes in how data centers are powered: The experts believe a mix of sources will be used to provide electrical power to data centers. Solar will lead, followed by a nearly equal mix of nuclear, natural gas and wind. Sixty-five percent believe it is likely hyperscale facilities will be powered by private power generation.
    • Cloud forecasts are somewhat conservative: Industry experts predict two-thirds of data center computing will be done in the cloud in 2025. That’s actually a fairly conservative estimate. According to Cisco’s Global Cloud Index, cloud workloads represent around 46 percent of current total data center workloads, and will reach 63 percent by 2017.
    • DCIM will play a prominent role: Twenty-nine percent of experts anticipate comprehensive visibility across all systems and layers, while 43 percent expect data centers to be self-healing and self-optimizing. Taken together, that would indicate 72 percent of the experts believe some level of DCIM will be deployed in 2025—significantly higher than most current estimates of DCIM adoption.
    • Utilization rates will be higher: That increased visibility is expected to lead to more efficient performance overall, as 72 percent of industry experts expect IT resource utilization rates to be at least 60 percent in 2025. The average projection is 70 percent. That compares to estimated averages today as low as 6-12 percent, with best practices somewhere between 30-50 percent.

    The report also discusses potential opportunities to improve efficiency, such as chip-level cooling, increased server efficiency, higher data center temperatures and streamlined power delivery. But there won’t be any one-size-fits-all pattern.

    “The data center of 2025 certainly won’t be one data center. The analogy I like to use is to transport,” said Andy Lawrence, vice president of Datacenter Technologies and Eco-efficient IT at 451 Research. “On the road, we see sports cars and family cars; we see buses and we see trucks. They have different kinds of engines, different types of seating and different characteristics in terms of energy consumption and reliability. We are going to see something similar to that in the data center world. In fact that is already happening, and I expect it to continue.”

    “The future, to us, is looking increasingly like one that is automated and converged,” said Emerson’s Hassell. “The facility side and the IT side are coming together and operating as an integrated unit. It’s the only way to deal with the speed and complexity.”

    12:30p
    HostingCon Aims to “Heat Up” Miami

    HostingCon, the annual conference and trade show for hosting and cloud providers, celebrates its 10th anniversary in Miami in June. There will be lots of networking, presentations and gatherings at the event with a “Turning Up the Heat” theme.

    More than 1,900 hosting and cloud provider decision makers from more than 34 countries are expected to attend from June 16-18.

    Reader Discount

    Data Center Knowledge users can get a discount on registration by using this coupon code: DCK2014

    The code will get users $60 off a Full Conference pass, $30 off a Single Day pass and $10 off an Exhibits Only pass. These discounts are in addition to the early bird discounts, which end May 1.

    Registration information is available on the HostingCon website. (This link will pre-populate the coupon code.)

    Schedule Highlights

    A CEO panel will feature five industry CEOs on stage at one time discussing the state of the industry and its future. Participants include Art Zeile, CEO of HOSTING, Lance Crosby, CEO of SoftLayer, Emil Sayegh, President and CEO of Codero Hosting, Kenneth Ziegler, CEO of Logicworks, and Matthew Porter, CEO of Contegix. The panel will be led by Philbert Shih, Managing Director at Structure Research.

    This year’s keynote address at HostingCon will be delivered by Chip Bell, founder and senior partner of The Chip Bell Group, a consulting firm focused on helping organizations create a culture that supports long-term customer loyalty and service innovation. His talk, “Innovative Service: Strategies for Creating Growth and Bottom Line Impact,” will provide attendees with ideas to help make service an exceptional experience for their customers. Bell will draw on more than 20 years of working with top decision makers at the world’s leading brands, helping them create ingenious “value unique” strategies instead of pricey value-added schemes.

    The first evening of the conference, HostingCon is throwing a 10th Anniversary Party at the Loews Miami Beach Hotel. “Networking opportunities, including our Anniversary Party, at this year’s event will be bigger and better than ever,” says Kevin Gold, Conference Chair of HostingCon.

    A tool called HostingCon Connect enables attendees to get in touch with other attendees via email prior to the conference and schedule places and times to meet during the event.

    About HostingCon

    HostingCon is the premier industry conference and trade show for hosting and cloud providers. Now in its tenth year, HostingCon brings together the hosting and cloud providers, MSPs, ISVs and other Internet infrastructure providers who make the Internet work, giving them a place to network, learn and grow.

    12:30p
    Beware the Closed System: Just Integrate

    Lara Greden is a senior principal, strategy, at CA Technologies. Her previous post was titled, Leaders Achieve Best Practice With DCIM. You can follow her on Twitter at @laragreden.

    When done right, DCIM software integrations can provide tremendous value to your business. First, they allow you to leverage investments that have already been made, such as representing data flows and applications from a wide variety of vendors. Second, software that is architected to integrate with other systems, as opposed to having a closed architecture, enables an organization to make the best decisions for future investments. However, there is a caveat that you should be mindful of when choosing DCIM software – be careful that your selection doesn’t lock you into a limited subset of power, cooling, rack and other hardware choices just so that it will integrate with your DCIM software.

    It sounds straightforward, but there are several reasons why integration may present challenges. These challenges can be categorized in two ways: integrations to receive and/or share data, and integrations with other applications to support workflows. First we will address the category of data integrations, and in next month’s column we’ll examine considerations for application integrations.

    Consolidating Systems Drives DCIM

    Data integrations are often the number one driver for acquiring enterprise-scale DCIM software, as organizations look to eliminate the labor-intensive task of going to ten, fifty, or even a hundred different systems to get accurate power, space and cooling data. If you have a portfolio of data centers, chances are you have multiple people carrying out the same task of peering into separate interfaces to collect data. Whether for one data center or many, DCIM software represents an opportunity to improve efficiency, accuracy, and risk management for monitoring critical data points.

    Another major driver for DCIM is that while various teams, including facilities, IT operations and capacity planning, all require power, space and cooling data, their efforts are often limited today by silos. This slows down or inhibits capacity planning processes, which can lead to overcapacity and overspending, or, alternatively, results in hidden risk. As an enterprise application, DCIM software provides the technology that helps data center teams work together toward a common cause – fundamentally helping the business grow and be more profitable. It also helps these teams be more efficient with their time, more accurate in their analysis, and improve the quality of their decision making.

    However, data integrations can present a challenge because equipment vendors don’t always follow the same protocols or naming conventions. For example, inlet temperature may be called TempIn in one data source, while it is called inlet_temp in another. Not all DCIM software solutions are alike when it comes to their ability to normalize data from across various data sources. Some do it automatically, while others require custom work and expensive professional services. This can be further complicated if there are additional challenges that affect access to the data.
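    The naming-convention problem described above can be illustrated with a minimal sketch. This is not code from any DCIM product; the field names and alias table are hypothetical, chosen to mirror the TempIn/inlet_temp example.

```python
# Hypothetical sketch of DCIM-style data normalization: vendor feeds name the
# same sensor reading differently (e.g. "TempIn" vs "inlet_temp"), so an
# alias table folds them into one canonical schema. All names are illustrative.

# Map each vendor-specific field name to a canonical metric name.
FIELD_ALIASES = {
    "TempIn": "inlet_temperature_c",
    "inlet_temp": "inlet_temperature_c",
    "kW_draw": "power_kw",
    "power_kilowatts": "power_kw",
}

def normalize(record: dict) -> dict:
    """Rename known vendor fields to canonical names; pass unknown fields through."""
    return {FIELD_ALIASES.get(key, key): value for key, value in record.items()}

vendor_a = {"TempIn": 22.5, "kW_draw": 4.2}
vendor_b = {"inlet_temp": 23.1, "power_kilowatts": 3.8}

print(normalize(vendor_a))  # {'inlet_temperature_c': 22.5, 'power_kw': 4.2}
print(normalize(vendor_b))  # {'inlet_temperature_c': 23.1, 'power_kw': 3.8}
```

    Once both feeds share one schema, downstream capacity or thermal reports can treat every data source alike; in practice the hard part is building and maintaining the alias table across vendors, which is the custom work the column refers to.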

    Avoiding Vendor Lock-in

    An experienced DCIM software provider will help you address any challenge that your internal architecture may present as well as understand how the systems themselves configure and communicate the essential data. If you want to maintain the freedom and flexibility to make decisions on future hardware choices for reasons other than whether or not your DCIM software will integrate with them, be sure to ask your short list of DCIM vendors for their practical experience and approach to integration.

    Fundamentally, DCIM software that is architected to integrate with heterogeneous data sources will bring the benefits of DCIM to a larger audience. It allows service providers to proactively monitor their customers’ environment and even schedule services before availability and performance are at risk.  It allows resellers who previously focused on providing power and cooling hardware to also serve their customers’ needs to manage power, space, and cooling using DCIM software, and thereby further leverage their unique value-add of understanding their customers’ data center environments, architecture, and business objectives. Most fundamentally, for the end customer, DCIM software that provides integration across the entire data center footprint will deliver faster time to value and help future proof IT investments.

    Remember the truism “just integrate” as you lead your business forward in the DCIM maturity process.

    Check back next month as we continue the discussion on the value of DCIM integration and application and workflow integration. 

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:30p
    Zayo Acquires Neo Telecoms

    International bandwidth infrastructure company Zayo Group announced it has signed a definitive agreement to acquire Paris-based bandwidth infrastructure company Neo Telecoms. The deal nets Zayo nine colocation data centers across France, and over 350 metro route miles in Paris. The Paris and regional network throughout France will be integrated into Zayo’s existing European network connecting London, Frankfurt, Amsterdam and the U.S.

    Neo provides dark fiber, IP, Ethernet, wavelength and colocation services to high-bandwidth companies in continental Europe and serves more than 600 carrier and enterprise customers, primarily concentrated in the technology, media, telecom, and finance sectors.

    “Neo has established a strong Bandwidth Infrastructure presence in France and we believe this acquisition will enable growth through our combined fiber footprint for end-to-end solutions,” said Dan Caruso, President and CEO of Zayo Group. “While we will bring a much expanded capability to the table, we also respect the importance of local relationships and expertise. The two principal founders of Neo, Didier Soucheyre and Florian du Boys, will stay with the combined companies and lead Zayo France, a new business unit focused on that market.”

    “Neo has been focused on a very similar Bandwidth Infrastructure product set and is well-aligned with Zayo’s focus on data center connectivity,” says Florian du Boys, CEO of Neo and future leader of the Zayo France business unit. “The combined networks of Neo and Zayo will provide our customers with the same key services, and now access to Zayo’s extensive international network.”

    Zayo Selected by DE-CIX

    Zayo also recently announced that it has signed a long-term agreement with DE-CIX North America to support the launch of its U.S. interconnection business model in the New York/New Jersey market. DE-CIX, the German Internet exchange operator, announced the New York expansion earlier this year. zColo will provide the infrastructure for DE-CIX New York’s access points at the 111 8th Avenue and 60 Hudson Street colocation facilities in New York, as well as connectivity to the DE-CIX infrastructure at 165 Halsey St. in Newark, NJ. zColo enables access to 103 points of presence in these buildings and connectivity options to over 800 customers through its dense fiber riser system.

    “DE-CIX New York is delivering a peering ecosystem that provides the world’s largest ISPs, Content Delivery Networks (CDNs), service providers and web hosters with the most advanced interconnection options available,” explains Frank Orlowski, chief marketing officer for DE-CIX. “To deliver this, we require first-class interconnection facilities and an extensive fiber-optic infrastructure that allows our customers to easily connect from anywhere in the NY/NJ metro. zColo and Zayo deliver this for us, with a high level of service and availability.”

    2:00p
    Learn to Take Your Network to 40/100G Ethernet

    The world’s information is doubling every two years. In 2011, the world created a staggering 1.8 zettabytes. By 2020, the world will generate 50 times that amount of information and 75 times the number of ‘information containers,’ while the IT staff available to manage it will grow less than 1.5 times. Throughout all of this, the data center platform will continue to act as the centerpiece for all next-generation technologies, workloads, applications and data points.

    There is no sign of slowed growth in the production of, and demand for, more data – or in the demand for faster access to it. In this whitepaper from CABLExpress, we learn how high-performance cabling capable of carrying 40/100G Ethernet will be a necessary addition to data centers looking to keep up with this digital data growth.

    The exponential growth in information means processing speeds have to increase as well, so as not to slow access to data. And in fact, they are. Butters’ Law, a lesser-known parallel to Moore’s Law, states that the data throughput of a single optical fiber doubles every nine months. But as connections in the data center multiply to improve manageability, performance suffers, because each added connection contributes additional dB loss. A balance must therefore be maintained between manageability and performance.
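    The manageability-versus-loss tradeoff is simple arithmetic, sketched below. The per-connector loss, fiber attenuation, and channel budget figures are assumed round numbers for illustration; they are not taken from the CABLExpress whitepaper, and real budgets depend on the fiber grade and Ethernet standard in use.

```python
# Illustrative fiber channel loss budget: each mated connector pair adds
# insertion loss, so extra patch points (better manageability) consume
# headroom (worse performance). All constants below are assumptions.

CONNECTOR_LOSS_DB = 0.5      # assumed max loss per mated connector pair
FIBER_LOSS_DB_PER_KM = 3.5   # assumed attenuation for laser-optimized multimode
CHANNEL_BUDGET_DB = 1.9      # assumed total channel budget for a 40/100G link

def channel_loss(length_km: float, n_connectors: int) -> float:
    """Total loss: fiber attenuation plus connector insertion loss."""
    return length_km * FIBER_LOSS_DB_PER_KM + n_connectors * CONNECTOR_LOSS_DB

# A 100 m run with 2 connectors fits the assumed budget;
# add two more patch points and it no longer does.
print(round(channel_loss(0.1, 2), 2))  # 1.35 (dB) -> within budget
print(round(channel_loss(0.1, 4), 2))  # 2.35 (dB) -> exceeds budget
```

    The same 100 m of fiber passes or fails depending only on the number of connection points, which is why structured cabling designs count every patch panel against the loss budget.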

    Download this whitepaper today to see how data centers are experiencing the most significant change in cabling infrastructure since the introduction of fiber optic cabling. No longer is it a question of if, but when, data centers will migrate to 40/100G Ethernet. Installing a high-performance, fiber optic structured cabling infrastructure is essential to a successful migration.

    Remember, the timeline for migration is different for every data center, depending on technological needs, budget, size and organizational priority. However, educating yourself on 40/100G Ethernet, evaluating your current cabling infrastructure and beginning plans for implementation will ensure a smooth, trouble-free migration.

    3:00p
    New Relic Receives $100 Million Financing Round

    Software analytics company New Relic announced that it has received a $100 million financing round to support further product development and expand the company’s international presence. The round was led by BlackRock, Inc. and Passport Capital, LLC, with T. Rowe Price Associates, Inc. and Wellington Management also participating. The company has raised a total of $215 million and is valued at over $1.2 billion.

    Founded in 2008 by Lew Cirne, New Relic began with an application performance management (APM) solution and has expanded its software analytics offering to make sense of billions of data points about millions of applications in real time. It offers one powerful interface for web and native mobile applications and consolidates the performance monitoring data for any chosen technology. The company was recognized in 2013 as a leader in Gartner’s Magic Quadrant for Application Performance Management. With 70,000 active customer accounts in only five years’ time, including 500 new large enterprise customers in the past year, New Relic has acquired more customers, and tracks and optimizes more metrics and applications, than any other single APM vendor worldwide.

    Last month, the company announced New Relic Insights, a real-time analytics platform that transforms collected data into insights about customers, applications and their business. Delivered as a cloud-based software-as-a-service (SaaS) offering and using New Relic’s fast and custom-built Big Data database platform, New Relic Insights empowers application developers and business users alike to make ad hoc and iterative queries across trillions of events and metrics and get answers in seconds.

    “We monitor billions of data points in real-time for tens of thousands of active accounts,” said Lew Cirne, New Relic founder and CEO. “This funding will help us further accelerate company momentum on a global basis, build out our presence among large enterprises and develop both new and existing products, including our real-time analytics platform, to enable more organizations to make better data-driven business decisions.”

    7:04p
    Understanding International Colocation Webinar Available On-Demand

    It’s not easy extending your data center operations overseas, but more and more, it is becoming a necessity. Businesses and audiences are becoming global, so companies increasingly need to address the market far beyond their traditional borders.

    In “Understanding International Colocation: Best Practices for Extending Your U.S. Data Center Platform Overseas,” experts from across the globe discuss their respective markets in depth. The webinar covers key colocation market trends, market analysis and key selection criteria for picking a data center in a foreign market. It is now available on demand.

    What trends are each market facing? How is each market growing? What are the key decision parameters to take into account when picking a data center provider in each region?

    Each expert provides a thorough overview of the data center market in their region, making the webinar ideal for anyone who is contemplating, or just curious about, other markets.

    A truly global panel of data center experts participated:

    • Len Padilla of NTT Europe and data center specialist Gyron talks about the UK data center market
    • Nilesh Rane, AVP (Products & Service) of Data Centre services at Netmagic, gives insight into the market in India
    • Yuko Miyamoto, manager of the PR office at global information and communications technology company NTT Communications, talks about the market in Japan
    • Jim Leach, Vice President of Marketing at RagingWire, outlines the U.S. market

    Check out the webinar here.

    7:42p
    Scott Noteboom: Technology Trapped in Real Estate Prison

    LAS VEGAS - Data center technology innovation needs to be freed from the prison that has been created by the real-estate-driven thinking that prevails in the industry.

    That is according to Scott Noteboom. The industry veteran, who in the past ran data centers for Apple and Yahoo, delivered a keynote address Tuesday morning at the Data Center World conference.

    Data centers have traditionally been built with a life expectancy of 15 to 20 years, over which period the “real estate asset” depreciates, Noteboom said. The infrastructure systems inside are treated as part of that asset, on the same depreciation schedule.

    This 15-year depreciation mindset is the reason power and cooling infrastructure technology has not evolved much over the past 50 years, Noteboom said. “Is the data center truly the static thing that must remain the same for 15 years?” he said. “If it is, to the technology guys it’s a prison.”

    Faster Innovation at the Device Level

    End-user devices, such as smartphones and tablets, evolve every six months, and IT hardware in data centers gets refreshed about every three years. Innovation in hardware, however, has been inhibited by the static nature of the data center.

    “Imagine how frustrating that is for a technologist,” Noteboom said.

    This is why he is a proponent of thorough disaggregation in the data center: separating the IT and the supporting infrastructure from the building itself. The inspiration for the idea came from visiting massive manufacturing plants in China operated by Taiwanese electronics makers Foxconn and Pegatron, as well as a BMW factory in South Carolina.

    One of these companies’ most important design principles is the ability to build these plants anywhere in the world, quickly, simply and affordably. Another is that the plants must be able to support production of products 30 years in the future without knowing anything about what those products are going to be like.

    One of the ways to achieve these goals is to separate construction and management of the building from design and installation of the infrastructure. The buildings themselves are very simple, and everything that has complexity, such as cooling, electrical and network infrastructure, is managed by different teams, entirely separate from the real estate teams, Noteboom said.

    Disaggregation in the Data Center

    But he is a proponent of even further disaggregation – the type of disaggregation Intel, Facebook and Facebook’s Open Compute Project have been talking about. The idea is to make every component of a server individually replaceable and upgradeable.

    Intel’s Rack Scale Architecture, for example, is a chassis that provides shared power supply, cooling fans and interconnect to however many CPU cores, memory modules or network interface cards the user chooses to install. When the time comes to swap out a CPU, the user does not need to replace an entire server.
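    The idea can be illustrated with a short sketch. This is a loose conceptual model, not Intel’s actual design or API; the class and slot names are invented for illustration. The point is that the chassis (shared power, cooling, interconnect) stays in service while individual components are swapped on their own refresh cycles.

    ```python
    # Illustrative sketch of component-level disaggregation. All names
    # here are hypothetical; they model the concept, not a real product.

    class Chassis:
        """Shared enclosure: power supply, fans and interconnect live here."""
        def __init__(self):
            self.slots = {}  # slot id -> installed component description

        def install(self, slot, component):
            self.slots[slot] = component

        def upgrade(self, slot, component):
            # Only the one component is swapped; the chassis and every
            # other slot stay in service. Returns the retired part.
            old = self.slots[slot]
            self.slots[slot] = component
            return old

    rack = Chassis()
    rack.install("cpu-0", "CPU, gen N")
    rack.install("nic-0", "10GbE NIC")

    # Three years later: refresh just the CPU, not the whole server.
    retired = rack.upgrade("cpu-0", "CPU, gen N+1")
    ```

    Contrast this with the traditional model, where the CPU, memory, NIC and power supply form one unit and the refresh replaces all of them together.
    
    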

    Whether these are the kinds of things Noteboom is cooking up at LitBit, the company he started after leaving Apple in 2013, he would not say. He said Apple trained him not to talk about anything he was doing until he had something great to talk about, and he has decided to keep things under wraps at LitBit until the time is right.

    Noteboom does say he has been spending a lot of time in Asia lately, and the region’s data center market has dominated his interviews and speaking engagements. Massive growth in the number of internet users in Asia will drive rapid data center capacity build-out, which is where Noteboom sees an opportunity to apply the expertise of the U.S. data center industry.

    Noteboom spent two years as head of infrastructure strategy, design and development at Apple. For six years prior to that, he was vice president of data center engineering and operations at Yahoo. He got his start in the data center industry in 2000, when he ran operations for colocation provider AboveNet.

