Data Center Knowledge | News and analysis for the data center industry

Friday, June 20th, 2014

    11:00a
    Khosla: Get the Humans Out of the Data Center

    Between battling activists in California over his right to block the only access road to a picturesque public beach that happens to run through his property, and deciding which tech startup founder will get a lifeline to keep slaving away around the clock for another 12 months, India-born venture capitalist Vinod Khosla is happy to make time to remind the public that machines are much better at doing things than humans are.

    In May, the billionaire co-founder of Silicon Valley legend Sun Microsystems told an audience at a Big Data and biomedicine conference at Stanford that we will have better healthcare when we have fewer doctors and more machines doing their work. This week, speaking at the GigaOm Structure conference in San Francisco, Khosla said the most important opportunity in the business of IT is getting rid of all the IT people.

    “It’s ridiculous to have humans manage the level of complexity that we have humans manage inside the data center,” he said. The cost of hardware and software is a tiny fraction of the total cost of ownership of IT infrastructure because of the massive human cost. “People are a big cost in IT. Not equipment itself. Let’s take that out.”

    The opportunity, he said, is in building a data center operating system that automates resource management much like a computer OS does. This is what Google is doing with tools like Omega, which automates management of its global data center infrastructure, and it is what Mesos, an open source cluster manager, is ultimately aimed at.
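    To make the idea concrete, below is a minimal sketch of the kind of decision such a system automates: placing workloads onto machines without an administrator picking servers by hand. This toy first-fit scheduler is purely illustrative (the machine and task names are hypothetical); real systems like Omega and Mesos use far richer models, with priorities, constraints, preemption and two-level resource offers.

    ```python
    # Toy cluster scheduler: first-fit placement of tasks onto machines.
    # Illustrative only -- not how Omega or Mesos work internally.
    from dataclasses import dataclass

    @dataclass
    class Machine:
        name: str
        free_cpus: float
        free_mem_gb: float

    @dataclass
    class Task:
        name: str
        cpus: float
        mem_gb: float

    def schedule(tasks, machines):
        """Assign each task to the first machine with enough spare capacity."""
        placements = {}
        for task in tasks:
            for m in machines:
                if m.free_cpus >= task.cpus and m.free_mem_gb >= task.mem_gb:
                    m.free_cpus -= task.cpus      # reserve the resources
                    m.free_mem_gb -= task.mem_gb
                    placements[task.name] = m.name
                    break
            else:
                placements[task.name] = None      # no capacity: queue or scale out
        return placements

    cluster = [Machine("rack1-node1", 16, 64), Machine("rack1-node2", 8, 32)]
    jobs = [Task("web", 4, 8), Task("db", 8, 32), Task("batch", 8, 32)]
    print(schedule(jobs, cluster))
    # {'web': 'rack1-node1', 'db': 'rack1-node1', 'batch': 'rack1-node2'}
    ```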

    Using the Amazon Web Services or Microsoft Azure clouds instead of buying and running their own servers is one way companies already reduce IT staff. “All I’m saying is do it for the enterprise too,” Khosla said.

    An innovation void that had to be filled

    A big change in IT is only starting to happen, and it isn’t the hardware incumbents that are leading it. “I haven’t seen a new architectural innovation from an IBM or an HP or a Dell – and I’m going to get some nasty phone calls – in 20-30 years,” he said. “Innovation in computer architecture fundamentally stopped … and then this thing happened on the side.”

    By “this thing” he meant Google, which innovated because it had to. Speaking at the same event earlier, Urs Hölzle, senior vice president of technical infrastructure (and employee number eight) at Google, said the company was initially forced to rethink data center infrastructure because it had to cut cost as it scaled, and off-the-shelf gear did not allow for that. Eventually, technical necessity became the impetus for innovation: off-the-shelf systems just weren’t good enough for what the company was doing.

    “They’ve invented a new way of doing things,” Khosla said. “I don’t think Google talks to Cisco about getting its networking gear. I don’t think they even look at those product lines.”

    Other web companies – Facebook, Amazon – were in the same position and came up with their own ways of doing data centers. Google and Facebook share bits and pieces of that innovation with the industry. Both have open sourced a ton of software, and the latter has been open sourcing hardware it has designed through the Open Compute Project.

    In the most recent examples, Google open sourced Kubernetes, an application container manager that is a leaner version of Omega, earlier this month, and Facebook Vice President of Infrastructure Engineering Jay Parikh announced this week at Structure that the company would contribute the design of its first home-grown top-of-rack network switch to the Open Compute Project.

    Giving Google’s data centers to the masses

    Besides scale, web companies engineer for agility, because the rate of change in their world is much higher than it is in the enterprise. They build new applications and features constantly and deploy them immediately, and their infrastructure needs to be flexible enough to handle it. “And the same is happening in smaller ways inside the enterprise,” Khosla said.

    This will only become more true in the enterprise over time, which is a big opportunity for startups. “Giving everybody else the benefits of what Google and AWS (Amazon Web Services) are doing for themselves is a massive opportunity … to innovate and start new companies, and I think we’re starting to see the beginning of it,” Khosla said.

    The startup world is well aware of this. Docker, which has built a commercial offering around an open source application container technology of the same name (a technology similar to an ingredient in Google’s secret sauce), released version 1.0 of its software at the inaugural, and extremely well-attended, DockerCon in San Francisco earlier this month. On the same day, Mesosphere, a startup that is commercializing the aforementioned open source cluster manager Mesos, announced that Andreessen Horowitz (another heavyweight Silicon Valley VC firm) had injected it with a $10.5 million Series A.

    VC will flow to hardware startups in due time

    As GigaOm executive editor Tom Krazit pointed out in his conversation with Khosla on stage, there hasn’t really been a whole lot of VC investment in data center hardware startups. Khosla replied that the money would come in due time. “There’s a cycle of innovation going on, which then leads to a cycle of investment,” he said. “I’m pretty optimistic there’ll be a lot more interest in this area, as marquee offerings, like my friend Andy Bechtolsheim’s new company Arista, really start to get the ball rolling.”

    Bechtolsheim co-founded Sun together with Khosla and a third partner, Scott McNealy, in the early ’80s. He was also the first major investor in Google in the ’90s. His current company, Arista, which makes network switches for hyper-scale data centers and counts Facebook, Morgan Stanley, Netflix and Equinix among its customers, held a successful IPO on the New York Stock Exchange on June 6, its shares debuting about $12 higher than expected.

    Another example of a successful next-gen hardware startup is Nutanix, which blends storage and compute in one box, doing away with the need for separate SAN and NAS systems. It raised about $100 million in a Series D round in January, with Khosla Ventures participating. “Nutanix is doing really, really well,” he said. There are others, and once the successful ones ramp up, more venture capital will open up to companies in this space. “Once you get a couple of role models, investors tend to follow.”

    12:00p
    ASG Pitches Single Management Platform For Legacy IT and Cloud

    ASG Software Solutions, which aims to help enterprises bridge the gap between business and IT processes, has added a new tool to its arsenal: CloudFactory 8.0, which manages both legacy and modern environments.

    ASG has been around since 1986. Initially focused on mainframes, it started to look beyond that business in the early 2000s. Today, despite a fast-growing end-to-end cloud orchestration business and annual revenue of more than $300 million, it has managed to stay relatively under the radar.

    Rather than pitching a wholesale switch to cloud, ASG’s model is to enable IT organizations to leverage their existing data center resources (mainframes, distributed and virtualized workloads) alongside their public cloud resources (IaaS, PaaS and SaaS) and software investments (Microsoft, Citrix and VMware) under one management platform.

    “CloudFactory 8.0 lets businesses derive more value from their past, present and future data center investments by effectively eliminating traditional silos to enable application-centric cloud and data center management,” said Torsten Volk, vice president of product management at ASG Software Solutions. “With a single tool for both legacy and modern environments companies can now truly bridge the gap between the corporate data center and the tools their employees use every day.”

    The tool provides unified management capabilities for disparate IT resources, from cloud software services like Office 365, Dropbox or Salesforce to cloud infrastructure from the likes of Amazon Web Services to in-house virtualized environments by VMware, Citrix or Microsoft. It is compatible with open source cloud technologies, such as OpenStack, CloudStack and Cloud Foundry.

    Pascal Vitoux, ASG’s CTO, said a company’s resource consumption model, whether variable or fixed, is integral to choosing which environment is best from a cost perspective. “Our platform allows enterprises to do what is most advantageous to them. It allows them to build internally as well as consume resources in the public cloud.”

    ASG offers a two-week cloud assessment program, in which it goes deep into how a company consumes services to help determine the right approach. It then helps construct a private service catalog inside the enterprise, where employees can consume services without necessarily knowing whether a given service runs internally or in the cloud.

    “For us, it’s more a mixture of both cloud and on-prem based on what makes more sense,” said Vitoux. “We help end provisioning in a silo.”

     

    2:00p
    Rethinking Data Center Cooling

    For a long time, the data center platform stayed more or less the same. But over the years, a lot has changed.

    The current industry revolves around constant connections, growing user bases and far more data. Recently, there has been increased demand for hyper-scale and high-performance computing (HPC) running on high-density hardware platforms. This demand has driven the need for more powerful and effective cooling systems.

    Additionally, the cost of power – as well as increased sustainability awareness – has placed more focus on cooling system energy efficiency. This has motivated IT equipment and cooling system manufacturers and data center designers to research and develop alternatives to existing cooling systems.

    In fact, in some cases the form factor of IT equipment has been transformed to become cooling-system-centric.

    In this whitepaper from Intel, HP and Data Center Knowledge contributor Julius Neudorfer, we examine various developments and emerging trends, such as liquid cooling, functional deployments of other new cooling technologies, and their strategic advantages.

    There are numerous aspects to consider when looking at data center cooling parameters:

    • Basic Summary of Today’s Air-Cooled Systems
    • Air-Cooled IT Equipment
    • Power Usage Effectiveness (PUE) – Hidden Fan Energy (a worked example follows this list)
    • Developments in Facility Cooling Systems
    • Why Liquid Cooling – Why Now?
    • Close-Coupled Cooling
    • Higher Density and Heat Transfer Capacity
    • Immersion Cooling – Mineral Oil
    • And much more!
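    For readers unfamiliar with the metric, PUE is the ratio of total facility energy to the energy delivered to the IT equipment. A minimal worked example (the megawatt figures here are hypothetical, not drawn from the whitepaper):

        PUE = Total Facility Energy / IT Equipment Energy
        e.g., 1.50 MW total facility draw / 1.00 MW delivered to IT = a PUE of 1.5

    The “hidden fan energy” in the bullet above refers to the fact that server fans draw power on the IT side of the meter; their consumption sits in the denominator, which can make a facility’s PUE look better than its true cooling efficiency.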

    The efficiencies of the modern data center are rapidly changing, and a big part of that revolves around the ability to cool the facility and keep it operating at optimal levels. The rate of innovation and change in cooling technology is accelerating to meet the rising heat loads of HPC and hyper-scale computing. The work is mostly driven by the need to effectively remove the tremendous amount of heat these applications generate, most of which still ends up as waste energy. Needless to say, energy usage and efficiency are significant considerations for any computing cooling system, and even more so for large-scale projects.

    Download this white paper today to understand the long-term issues that come with any alternate technology (IT hardware or facility cooling system) that deviates from industry standards. Potential risks, and the costs of early obsolescence due to lack of technology acceptance and product market adoption, are important factors to consider for your organization.

    4:15p
    IO Gets Patent For Software-Defined Modular Data Centers

    IO has been granted a new patent for the company’s software-defined modular data center. The patent defines the technology necessary for a modular data center with intelligent management, monitoring and control mechanisms.

    The patent describes a modular data center comprising one or more data modules, a network module, and a power module containing the electronics equipment for conditioning and distributing power to the data modules and the network module.

    IO continues to innovate and accumulate intellectual property through a heavy commitment to research and development. The patent details not only the modular construct, but also an approach to managing the environment through software.

    The software-defined data center is hailed as the future: the next evolution of a more operationally efficient, insightful way of doing things. Modular data centers are built in pre-fabricated chunks, allowing a facility to expand on an as-needed basis, but while space is easy to add, managing the environment within that space remains complex.

    IO addresses that problem with IO.OS, its data center operating system. The company formed an analytics division in 2013 to advance DCIM intelligence for modular data center building, focusing on how software can better manage and control the data center.

    The patent describes IO’s way of providing the power and environmental management required to support IT equipment, in addition to providing highly scalable, rapidly deployable data center space through modules. Google holds a patent for its own approach to modular data centers, but it differs from IO’s in the software-defined component.

    IO’s modules come equipped with IO.OS, which assists in measuring utilization and energy consumption. It goes beyond the ability to build just-in-time space, adding a management element as well.

    “At IO, we believe the status quo must be challenged and IT must be reinvented to build a sustainable and secure future for our customers,” said George Slessman, chief executive officer and product architect at IO. “This modular data center patent represents an important achievement for IO’s purpose-driven people and customers.”

    IO is substantiating the full breadth of its work in a broad, global patent portfolio that puts the company in a strong position to serve the market. It remains focused on the software-defined data center and continues to innovate.

    The company says IO technology lowers the total cost of data center ownership compared to traditional data centers, enabling dynamic deployment and intelligent control based on the needs of IT equipment and applications in the data center.

    5:32p
    HP Touts Its Role in Making How to Train Your Dragon 2

    In what has become customary, HP is once again reminding everyone of the extent of its involvement in DreamWorks’ big animated feature film productions.

    How to Train Your Dragon 2, DreamWorks’ 29th animated film, opened in theaters last Friday, and this week HP announced that the movie’s producers relied on a lot of its gear to make it happen, including HP workstations, blade servers and private cloud services.

    The film used 130,000 individual computer-generated frames and 270 billion pixels. It took 90 million render hours, with servers processing an average of 500,000 render jobs per day.
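    To put those numbers in perspective: if one render hour corresponds to one machine-hour (an assumption, since studios often count per-core hours), 90 million render hours works out to roughly 10,000 machine-years of continuous computation (90,000,000 ÷ 8,760 hours per year ≈ 10,270).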

    DreamWorks’ HP hardware included Z800 and Z820 high-performance workstations, HP BladeSystem c7000 enclosures with a mix of HP ProLiant server blades, and scalable 3PAR StoreServ storage. The team also used cloud services from HP’s Utility Services portfolio, which includes a handful of Infrastructure- and Software-as-a-Service offerings.

    HP and DreamWorks have built a bit of a history of collaboration on animated films. Most recent examples include Mr. Peabody & Sherman, Turbo and Rise of the Guardians.

    The partnership extends beyond HP supplying products to the animation studio. It includes a lot of research-and-development work to improve on processes and technology in animation.

    Some of the solutions that come out of this process, according to HP, cross over into other industries.

    This joint “living laboratory” has helped create such things as the billion-color HP DreamColor display. Today, the studio is adopting the HP DreamColor Z27x for its most demanding workflows, enabling artists to view accurate color for future visual technologies, such as ultra-wide color gamut and 4K input.

    5:45p
    Friday Funny: Overhead Cabling

    The (official) arrival of summer is just around the corner and we’re in a celebratory mood. Let’s ring in the new season with a brand new Friday Funny!

    Diane Alber, the Arizona artist who created Kip and Gary, has a new cartoon for Data Center Knowledge’s cartoon caption contest. We challenge you to submit a humorous and clever caption that fits the comedic situation. Please add your entry in the comments below. Then, next week, our readers will vote for the best submission.

    Here’s what Diane had to say about this week’s cartoon: “I’m not sure about you, but I’ve walked into a few data centers that have SERIOUS overhead cabling issues, to the point where it looks like it could collapse the ladder rack!”

    Congrats to the last cartoon winner, Todd, who won with, “I told you those old monitors were still good for something . . . See? Ambient Cooling.”

    For more cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit Kip and Gary’s website.

     

    6:30p
    TierPoint Buys Philadelphia Technology Park Data Center

    TierPoint didn’t wait long after recapitalization to make a move. The company has acquired Philadelphia Technology Park, a 25,700 square foot data center located within the Philadelphia Navy Yard. The facility will be rebranded as TierPoint.

    Philadelphia Technology Park, which opened in 2010, was built to high specifications, so it won’t require as significant an investment in upgrades as an older, legacy building would. The data center originally cost $25 million to build. The purchase price was not disclosed.

    “This move not only strengthens our presence in the Mid-Atlantic region, but it is an excellent fit for us strategically, as we continue to build our national network of data centers to meet the growing demand for colocation, cloud and managed services,” said Paul Estes, CEO of TierPoint.

    This is the sixth acquisition for the company, which was itself acquired earlier this month.

    Recapitalized by private equity partners

    TierPoint management and a group of investors teamed up to buy the company in a recapitalization move meant to put it in a better position to continue expanding through acquisition.

    The company focuses on buying data center properties in underserved regional markets. It is an example of what is often referred to as a “rollup” play, where investors — usually private equity — buy up small companies that compete in the same market and gain through economies of scale. Philadelphia is not traditionally known as a major data center market, so it fits right into the firm’s modus operandi.

    Philadelphia: growing but underserved

    Philadelphia is a sizable city located between the two biggest East Coast data center markets, New York and Virginia. Data centers tend to cluster together, so Philadelphia has seen slower growth, as capital tends to be deployed in those two nearby regions.

    Philadelphia has seen success as a disaster recovery market, with SunGard having a major presence. But it also has a strong local economy built on diverse sectors like healthcare, financial services and manufacturing, and as these companies continue to embrace an outsourcing model, that works to TierPoint’s advantage. Colocation customers tend to like to be within a screwdriver’s reach of their servers.

    “Philadelphia Technology Park is a premier … data center in a fast-growing technology market,” Estes said. “Philadelphia is a top-five technology employment market with a rapidly growing e-commerce ecosystem and strong job growth.”

    The company was founded as Cequel Data Centers but took on the moniker of TierPoint following an acquisition in 2012. It has six other WAN-connected data centers in Dallas, Oklahoma City, Tulsa, Spokane, Seattle and Baltimore.

    It now owns and operates more than 141,000 square feet of raised-floor data center space.

