Data Center Knowledge | News and analysis for the data center industry

Tuesday, January 5th, 2016

    1:00p
    Helping the Government Run Data Centers Like Google

    If you come across the name Booz Allen Hamilton, it’s usually in connection with defense-agency IT services contracts worth tens of millions of dollars. The tech consulting and engineering giant, more than 100 years old, is primarily in the business of solving big technology problems for government agencies, although it also works in the private sector.

    What you don’t see is Booz Allen mentioned in the context of open source technology. But that may soon change, as the company’s recently formed group charged with driving the giant’s participation in the open source community picks up speed. Most of this group’s work is focused on data centers and cloud, said Jarid Cottrell, a Booz Allen senior associate who leads its cloud computing and open source practice.

    The reason Booz Allen now has an open source practice is the same reason companies like GE, John Deere, Walmart, and Target dedicate resources to open source. Like the manufacturing and retail giants, Booz Allen’s customers in government and in the private sector want to build and run software the same way internet giants like Google, Facebook, or Amazon do, and they want the kind of data center infrastructure – often referred to as hyper-scale infrastructure – those internet giants have devised to deliver their services. Market research firm Gartner calls this way of doing things “Mode 2.”

    “The desire [to add Mode 2 capabilities] is there,” Cottrell said. “There’s a lot of desire. Everybody wants it. We do a lot of architecting work around that.” But while the desire is there, things don’t move quickly in government. Agency tech leaders may be on board with application containers and microservices, but agencies are constrained by lists of approved technologies, layers upon layers of review and approval, and extensive compliance requirements. There’s also a lot of investment sunk in legacy solutions that aren’t yet due for replacement.

    It’s the combination of desire and good timing that marks the departments and agencies that are adopting Mode 2. The others will do it when the time is right, which for some of them will be this year. “We’ll start to see more migration where it makes sense, especially for [things] early on in the application lifecycle, like development, test, and those kind of environments,” Cottrell said.

    Government agencies and private companies have similar needs but in different orders of priority. “We all want to save money, we want to be efficient, we want to do things higher-quality,” he said. But while for private companies Mode 2 is primarily about bringing new products to market faster, its biggest appeal to the public sector is the savings it promises.

    Cottrell’s group’s job is to explore the latest technologies, invest in them, and to do its own research and development. “Go get smart; go to conferences; do training; hire really smart people; just look at what’s new, in particular around data center and cloud; that’s what our team does,” he said.

    Although Booz Allen is not a products company, it builds software tools. And because it’s not a products company, there isn’t much resistance to open sourcing those tools. Its biggest open source project so far is Project Jellyfish, a platform that allows a company to manage multiple cloud services. Built in collaboration with Red Hat – the biggest success story in open source enterprise software – the tool has project and financial management capabilities, as well as analytics for better efficiency.

    Cottrell’s team invested in building the tool and used it to help several customers fill a gap, but then open sourced it. “We’re not a products company, so we open sourced it,” he said.

    Open sourcing a software tool is only part of the equation. It’s when a project attracts a critical mass of outside contributors that it becomes a valuable open source project. Creating a community around a project is “the hard part,” as Cottrell put it. Project Jellyfish has some outside contributors, but most of the participants are from Red Hat. Some are from Microsoft, and a few are from companies that contributed from behind the scenes and didn’t want to be named.

    For Booz Allen, participating in the open source community is partially an exercise in resume building, Cottrell said. Putting projects out there helps customers better understand the breadth of capabilities the company has. The community-development aspect of open source is another reason to participate.

    Much of the innovation in IT today happens in communities around open source projects, and traditional enterprise IT users have started to pay attention. For a company like Booz Allen, which serves some of the largest of those enterprise IT users, it is crucial to participate in open source, since its customers look to it for innovation.

    4:00p
    The Next Energy Challenge of Computing

    Sumit Sadana is Executive Vice President, Chief Strategy Officer and General Manager of Enterprise Solutions for SanDisk.

    Computing always seems to be facing an energy crisis.

    In the 1940s, mainframes were powered by power-hungry (and fragile) vacuum tubes. If you tried to make a Google data center out of early supercomputers like the ENIAC, it would consume as much energy as all of Manhattan.

    Back in the ’90s and early 2000s, chip designers warned that chips could begin to emit the same amount of heat—for their size—as rocket nozzles or nuclear power plants, a trend that was stemmed with the advent of multithreaded and multicore devices.

    Virtualization, new data management strategies, and innovative cooling technologies implemented over the past decade, meanwhile, helped pave the way for hyperscale data centers. eBay, for instance, saved $2 million in data center energy costs by slightly changing its software code on some applications.

    So, have these latest advances taken us to energy efficiency nirvana? Not by any means. We’re still using far more energy than we need. The Natural Resources Defense Council estimates that data center energy consumption in the U.S. alone could be cut by 40 percent with existing technologies and more effective monitoring, saving their owners $3.8 billion a year and cutting emissions by millions of tons.

    Just as demand for data centers will continue to grow, so will energy needs. Rapid access to data is the lifeblood of the global economy. Businesses and organizations will soar or sink on their ability to leverage data to achieve new scientific breakthroughs, improve customer service, or gain market share. With data center construction growing at 21 percent a year and more countries implementing carbon policies, taking a business-as-usual approach to energy will only create headaches down the road.

    In the next wave of efficiency, expect to see a tremendous amount of focus on software-defined storage (SDS) and flash memory. Why storage? For one thing, many have already adopted virtualization for servers to raise utilization above the anemic 6 percent to 12 percent levels of the recent past. Storage is today’s low-hanging fruit.

    Second, storage is in the midst of a once-in-a-generation transformation. Flash memory, the primary storage technology for digital cameras and cellular phones, has been moving into data centers over the past few years. Flash systems can deliver data at a faster rate and with far less energy. You’ll see new data center architectures that incorporate both technologies in a way that maximizes bits, bandwidth and electrons. You can analogize the impact flash will have on data centers to the impact fiber optics had on communications: by improving performance and efficiency at the same time, you fundamentally change what’s possible.

    A hard drive-based storage system for a 50TB database, for example, might require a power budget of 8,800 watts (4,000 watts to run the storage system and 4,800 for cooling). A similar system could be built with SSDs with a power budget of 1,250 watts (568 watts for systems and 682 watts for cooling), roughly an 85 percent savings.
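
    For readers who want to sanity-check that comparison, here is a minimal back-of-the-envelope sketch in Python. The wattages are simply the example figures quoted above, not measurements, and the split between system and cooling power is taken as given.

    # Back-of-the-envelope check of the power-budget comparison above.
    # The wattages are the article's example figures, not measured values.
    hdd_system_w, hdd_cooling_w = 4000, 4800   # hard drive-based storage system
    ssd_system_w, ssd_cooling_w = 568, 682     # comparable SSD-based system

    hdd_total = hdd_system_w + hdd_cooling_w   # 8,800 W
    ssd_total = ssd_system_w + ssd_cooling_w   # 1,250 W

    savings = 1 - ssd_total / hdd_total
    print(f"HDD budget: {hdd_total} W, SSD budget: {ssd_total} W")
    print(f"Savings: {savings:.0%}")           # ~86 percent, in line with the figure cited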

    Energy savings can be increased further by leveraging the higher data throughput to reduce the number of servers needed. Companies such as Pandora and AT Internet, in fact, have managed to reduce server count by 40 to 75 percent. More is accomplished with less.

    Beyond the data center, flash will pave the way for the Internet of Things. McKinsey & Co. estimates that $5.5 trillion worth of economic value could be generated by integrating IoT technologies into heavy industry, with a substantial portion of the savings coming from efficiency. Industry consumes more than half of the energy in the world, even more than transportation. Experts estimate that industrial customers could cut their consumption by an additional 14 to 22 percent with measures like intelligent HVAC and data-driven production control. Even if only a fraction of that potential is harvested through intelligent systems, the impact would be significant.

    The impact will be even more profound in emerging nations like Nigeria, India, and China, where the spread of technology can be hampered by blackouts, power theft, and weak grid infrastructure. By consuming less energy, technology becomes more robust, economical, and versatile. It’s that simple.

    Energy concerns won’t stop the digital revolution. However, we will need to take action so that energy doesn’t slow it down.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.


    5:46p
    IPv6 Adoption Grows to 10 Percent as Internet Protocol Turns 20


    By theWHIR

    As the IPv6 protocol celebrates its 20th birthday this month, new statistics from Google indicate that IPv6 adoption among its users has surpassed 10 percent.

    IPv6 adoption has grown from less than 6 percent around December 2014 to 9.10 percent on Dec. 29, 2015.

    Google measures IPv6 deployment by having a fraction of users execute a JavaScript program that tests whether the computer can load URLs over IPv6, according to a report by Ars Technica.
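
    The same kind of connectivity check can be approximated outside the browser. The sketch below is a minimal Python example, not Google’s measurement code: it asks whether a hostname has IPv6 (AAAA) addresses and whether a TCP connection to one of them succeeds. The hostname used is just a commonly cited test target and could be swapped for any IPv6-reachable site.

    # Minimal sketch of an IPv6 reachability test, similar in spirit to the
    # client-side check described above (not Google's actual measurement code).
    import socket

    def can_reach_over_ipv6(host: str, port: int = 443, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host succeeds over IPv6."""
        try:
            # Ask only for IPv6 (AAAA) results; raises socket.gaierror if there are none.
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror:
            return False
        for family, socktype, proto, _, sockaddr in infos:
            try:
                with socket.socket(family, socktype, proto) as s:
                    s.settimeout(timeout)
                    s.connect(sockaddr)
                    return True
            except OSError:
                continue
        return False

    # ipv6.google.com is commonly used as an IPv6-only test hostname; any
    # IPv6-reachable host would do.
    print(can_reach_over_ipv6("ipv6.google.com"))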

    Belgium has the highest rate of IPv6 adoption at 44.32 percent, followed by Switzerland at 30.89 percent, while the United States is among the highest at 25.63 percent. Africa has the lowest rates of IPv6 adoption, with Ethiopia and Botswana among the African countries with zero percent adoption.

    IPv6 adoption continues to be tracked closely as the American Registry for Internet Numbers (ARIN) issued the final IPv4 addresses in its free pool in September, meaning that pool is now depleted.

    So if IPv6 has been around for 20 years, why is it taking so long for IPv6 adoption to grow?

    Ars Technica writes that “even though all our operating systems and nearly all network equipment supports IPv6 today (and has for many years in most cases), as long as there’s just one device along the way that doesn’t understand the new protocol—or its administrator hasn’t gotten around to enabling it—we have to keep using IPv4. In that light, having ten percent of users communicate with Google over IPv6 isn’t such a bad result.”

    Indeed, web hosts have been rolling out IPv6 support for years. In 2014, for example, Irish web host Blacknight launched its IPv6-enabled shared hosting platform.

    This first ran at http://www.thewhir.com/web-hosting-news/ipv6-adoption-grows-to-10-percent-as-internet-protocol-turns-20

    8:15p
    Switch Contracts for Solar Power for Its Entire Data Center Footprint

    Switch, operator of the massive SuperNap data center campus in Las Vegas, has signed its second solar power purchase agreement, which will ensure all of its Nevada data centers are fully powered by renewable energy.

    Last year the company announced an agreement to buy energy generated by a 100 MW solar farm in southern Nevada and committed to powering its data centers 100 percent with renewable energy, becoming one of the first two data center providers to join the White House-driven climate pledge for the private sector. Switch signed the second PPA, for energy from an 80 MW solar project that’s also being built in southern Nevada, in December.

    The company doesn’t disclose how much power its data centers consume. However, according to Adam Kramer, executive VP at Switch, the 180 MW in capacity it has contracted for will be enough to offset consumption of its existing Las Vegas campus as well as the new one it is building near Reno, Nevada, where the anchor tenant will be eBay.

    Investments in renewable energy by Switch and Equinix – the second data center provider that joined the American Business Act on Climate Pledge late last year – signal a rise in demand for renewable energy by data center customers.

    Equinix is the biggest retail colocation provider in the world, and you’d be hard pressed to name a major provider of cloud or other internet services that does not use its facilities. Last year, Equinix agreed to buy enough renewable energy to offset energy consumption of its entire North American portfolio.

    Switch, while smaller than Equinix, has some of the most well-known brands on its roster of 1,000 or so customers. The company lists Google, DreamWorks, HP, Boeing, Fox, Microsoft, and Cisco as its customers. Many of them, such as HP, Google, and Cisco, have also joined the climate pledge, and many who haven’t joined still have renewable energy as part of their corporate sustainability goals.

    “The interest [in renewable energy] has picked up dramatically,” Kramer said about customer demand. “We’ve seen enormous interest.”

    First Solar will build the two solar projects that will supply clean energy to Switch. Named Switch Station 1 and Switch Station 2, they are expected to come online before the end of 2016.

    Switch will be buying the energy from its southern Nevada utility NV Energy, using a special renewable energy tariff that was established last year, partially as a result of negotiations between the data center provider and the utility. The rate Switch will pay for the energy will include all of the costs associated with operating the solar farms.

    9:13p
    Apple Doubling Down on Data Center Construction in Reno

    Apple has filed for approval to build another massive data center campus adjacent to the existing Apple data center site in Reno, Nevada, local officials told the Reno Gazette Journal.

    Codenamed “Project Huckleberry,” the plans call for a new shell with multiple data center clusters and a support building. Its design is similar to the company’s existing campus at Reno Technology Park, called Project Mills.

    Mills isn’t fully built out yet, and when it is, it will consist of 14 buildings, totaling more than 400,000 square feet.

    Apple applied for a permit to build a new 50 MW electrical substation at the site last year to support its growth in Reno. The campus is currently being served by a 15 MW feed from the utility NV Energy, according to the Gazette Journal.

    The new substation is crucial, as the power supply available there today is nearing capacity, Trevor Lloyd, senior planner at Washoe County, told the local news service.

    Apple is the first company to have built a data center at Reno Technology Park, a site developed specifically to attract data centers and other energy-intensive high tech facilities. The area has seen a pickup in construction activity since Apple started building there in 2012, and economic development officials cite the Apple data center as the reason more high tech firms have decided to move in.

    Tesla is building a massive battery plant nearby, and Las Vegas data center provider Switch is building its second campus there. Neither project is within RTP territory, however.

    Managed services company Rackspace has expressed interest in building a data center at RTP.

    9:45p
    DuPont Fabros Wants to Sell New Jersey Data Center, Exit Market

    DuPont Fabros Technology wants to sell its Piscataway, New Jersey, data center and exit the New Jersey market, which it said is better suited for retail colocation providers than for companies like itself, whose business model is to lease wholesale data center capacity.

    The plan to sell the property is part of a change in strategy the company began implementing last year: moving away from retail colocation, expanding the variety of wholesale products it offers, and entering new markets.

    Like its biggest competitor Digital Realty Trust, DFT has been rethinking its business strategy, although Digital Realty has decided to expand aggressively into the retail colocation business with its acquisition of Telx, while DFT is stepping away from retail completely. Digital Realty has also been selling properties that don’t align with its new business strategy.

    The company leases large amounts of space and power capacity to the likes of Microsoft, Facebook, and Yahoo. It currently has 12 data centers in four US markets: Northern Virginia, New Jersey, Chicago, and Silicon Valley. The facilities total 3 million square feet and nearly 270 MW of critical power capacity, the bulk of which is in Northern Virginia.

    Proceeds from the sale of the New Jersey data center will partially fund DFT’s entry into new markets, including Toronto, Portland, and Phoenix. The company has already started building a massive data center in Toronto.

    “NJ1 is a first-class data center, and we have developed many valuable customer relationships during our ownership,” Chris Eldredge, DFT’s president and CEO, said in a statement. “NJ1’s location is best-suited, however, for more retail-oriented operations. Our plan to exit the New Jersey market with the sale of this property will allow redeployment of capital in target markets that match our objectives for growth and profitability.”

    As a result of implementing the plan to market the New Jersey data center for sale, the company expects to incur an impairment charge of $115 million to $135 million in the fourth quarter of 2015. The charge will lower the facility’s previously estimated value as part of DFT’s portfolio to its current fair value.

    While it will not impact funds from operations, the charge will lower the real estate investment trust’s earnings per share by $1.41 to $1.66.

    The 360,000-square-foot facility, which sports a massive rooftop solar array, has 88,000 square feet of data center space built out, 70 percent of which is leased to customers. Half of the facility’s 18 MW of power capacity is spoken for, and there’s room to develop a second phase.

