Data Center Knowledge | News and analysis for the data center industry
Friday, August 4th, 2017
12:00p |
Equinix Positioning for “Next Wave” of Cloud Data Center Deployments

As the data center expansion frenzy by the largest cloud providers in top markets becomes less of a frenzy and more of an incremental growth story, Equinix is positioning itself to capture the next wave of cloud data center growth, which, according to its executives, will be driven by the next generation of applications and opportunities in new markets around the world.
For the world’s largest data center provider, that means making some changes to the way it develops business strategy and the way it structures deals with some of its clients.
“We’ve lived with six years or seven years of a wave — we call it kind of the first wave in cloud — and we think there’s another wave coming,” Stephen Smith, Equinix CEO, said on a call with analysts Wednesday. “Obviously, the big hyper-scalers and the cloud providers have learned from their first wave of deployments,” so the next wave “is going to be different.”
To make sure its strategy is in tune with those changes, the company has created a new business group led by one of its top executives. The new Strategy, Services, and Innovation unit consists of the office of the CTO, business development, product management, and product engineering, as well as several new business teams, all charged with “evaluating and translating key market, competitive, and technology trends into actionable business requirements.”
Charles Meyers, who’d been Equinix’s COO for the past four years or so, was appointed last month to lead the SSI unit as President of Strategy, Services, and Innovation.
New Cloud Architecture Driving Data Center Decisions
As cloud providers have learned from the first several years of deploying cloud availability zones, network nodes, and access points, Smith said, they’re changing both the architecture of their cloud infrastructure and the edge of their deployments. “Edge” in this case means all the points where an individual cloud company’s network ends, handing off traffic to another service provider who takes it to the end user.
“We’re watching all that; we’re working very closely with all of them; and yes, we want to position to move quickly as they continue to deploy, and we think it’s still very, very early,” he said.
While there isn’t a comprehensive list of all the changes taking place – the new unit was formed to figure that out – at least some of the drivers are clear.
A More Distributed Cloud
The Internet of Things requires a more distributed data center topology, where many relatively low-capacity nodes are deployed closer to where many devices generate data. Some companies have already started investing in this edge computing infrastructure – AT&T and NTT Communications are two of the more prominent examples – but that investment is bound to accelerate once the anticipated 5G wireless standard comes out, enabling the kind of data transfer rates required for next-generation applications like self-driving cars and augmented and virtual reality.
Read more: What’s Behind AT&T’s Big Bet on Edge Computing
IoT edge control points “are going to be connected to the cloud and are going to require storage, networking, and server infrastructure all over the world to drive connected cars, connected tractors, sensors, cameras — all these things that are going to get connected to the cloud and to the internet,” Smith said.
How Equinix’s strategy will reflect this remains to be seen. The more likely scenario is that its site-selection thinking will be heavily influenced by the ability to either host edge computing nodes or quickly aggregate data from as many of them as possible, while its technology investment decisions will focus on enabling interconnection of the networks that carry that data.
“You’ll probably see Equinix taking a more active role in positioning ourselves for extending the interconnection footprint to where these aggregation points matter, and how they evolve … and how the edge moves,” Smith said.
That may mean expanding in secondary data center markets or entering new ones Equinix hasn’t traditionally pursued. The company has already seen demand from cloud providers pick up in places like Sydney, Melbourne, and Osaka after they built out their initial points of entry into Asia-Pacific — the top-tier markets like Singapore, Hong Kong, and Tokyo.
See also: Edge Data Centers in the Self-Driving Car Future
This next wave of cloud deployment may take Equinix into new markets: emerging metros in Southeast Asia, South Korea, India, South Africa, and additional Latin American countries (it already has data centers in Brazil and Colombia).
Preparing for Bigger Deals
The next wave may also mean making different types of deals with some of Equinix’s customers. While the company doesn’t do wholesale leases in their traditional form, it has made large-capacity deals with several clients it considers important strategically.
Because it gets higher returns on data center space where many players with smaller footprints interconnect, Equinix has been reluctant to do large single-tenant wholesale deals. While it doesn’t sound like it’s going to start doing those now, the company is going to be more flexible in the way it structures deals with cloud providers who need lots of capacity in particular markets.
What shape those deals will take is unclear at this point; figuring that out is one of the things the new SSI group has been charged with. Equinix is seeing more interest from hyper-scale cloud providers in larger-capacity deals, and it wants to be able to deliver.
“It is something that we are actively looking at within the context of the new SSI organization,” Meyers, the SSI group’s president, said on Wednesday’s call. “It’s to say, hey, are there a very select set of customers and requirements on the sort of hyper-scale side that we view as strategic, that we would like to increase our appetite for? And then, what are the ways that we can do that both from a design and engineering and deployment methodology, as well as how we would finance those transactions?”

3:00p |
Switch Donates $3.4M in Data Center Services for Reno’s New Supercomputer

The University of Nevada, Reno, is on its way to having a new supercomputer. Called Pronghorn — after the American antelope, the fastest mammal in North America — the new $1.3 million, 310 TFLOPS high-performance cluster is part of the university’s initiative to reach R1 Carnegie Research Classification. When completed, it will provide 30 times more computing power than the university’s existing HPC system.
“High-performance computing is critical for modern research and scientific discovery,” the university’s president, Marc Johnson, said in a statement. “The impact of this will be multi-dimensional; it will allow for faster analysis and exchange of large scientific datasets. It will contribute to deeper discovery across a range of research disciplines university-wide and to development of industry partnerships.”
Pronghorn will be housed in the new Tahoe Reno 1 facility operated by the Las Vegas-based data center provider Switch. The data center, whose anchor tenant is eBay, opened in February. Switch has pledged to be a benevolent landlord, donating $3.4 million in critical infrastructure support, including space, power, and security, over the next five years.
“Making Nevada the most connected state and driving economic development through technology and data analytics are critical priorities that Switch shares with the University of Nevada, Reno,” explained Adam Kramer, Switch’s executive vice president for strategy. “This collaborative project will cement the university’s commitment to strengthen its status as a top-level research university and its ability to partner with the private sector.”
According to the university’s Office of Information Technology, the system will be built by Dell EMC and DDN Storage.
“The idea is to build an infrastructure with enough capacity so we have what we need with additional ‘head room’ for future development,” the university’s vice provost and chief information officer, Steve Smith, said.
According to specs published by the university, Pronghorn will consist of 64 CPU nodes, each utilizing 32 2.1GHz Intel Xeon E5-2683 v4 processors. Counting the processors used by its 32 GPU nodes, the system will employ a total of 2,400 processors and 19TB of memory. Storage will utilize DDN GridScaler appliances running the IBM General Parallel File System, with 1 PB of capacity.
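A quick back-of-envelope check of those published figures helps put the architecture in perspective. This is a minimal Python sketch; the per-node figure for the GPU nodes is inferred from the published totals, not stated in the university's spec sheet:

```python
# Back-of-envelope check of Pronghorn's published specs.
# Assumption: "processors" is counted the same way for CPU and GPU nodes;
# the GPU-node share is inferred from the totals, not published.

CPU_NODES = 64
PROCS_PER_CPU_NODE = 32       # Xeon E5-2683 v4, 2.1 GHz (published)
GPU_NODES = 32
TOTAL_PROCS = 2_400           # total published by the university
PEAK_TFLOPS = 310
SPEEDUP_VS_OLD = 30           # "30 times more computing power"

cpu_side = CPU_NODES * PROCS_PER_CPU_NODE   # 2,048 processors on CPU nodes
gpu_side = TOTAL_PROCS - cpu_side           # 352 implied across GPU nodes

print(f"CPU-node processors: {cpu_side}")
print(f"Implied GPU-node processors: {gpu_side} "
      f"(~{gpu_side / GPU_NODES:.0f} per GPU node)")
print(f"Implied capacity of the existing system: "
      f"~{PEAK_TFLOPS / SPEEDUP_VS_OLD:.0f} TFLOPS")
```

Running the numbers this way suggests the existing system Pronghorn replaces delivers roughly 10 TFLOPS, consistent with the "30 times" claim.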
“Some big data projects require large-scale memory while others require high-speed networks,” Jeff LaCombe, the chair of the university’s faculty-based Cyberinfrastructure Committee, said. “We are looking to balance both.”
Initial hardware installation is expected to be completed in September 2017, with availability for faculty and investors scheduled for November. The system should be fully operational in January 2018. It will be used for research that will include artificial intelligence, robotics, and computational biology.
The project is being funded by the State of Nevada Knowledge Fund (facilitated by the Governor’s Office for Economic Development); a donation from a university supporter and noted researcher, Mick Hitchcock; the university’s Research & Innovation division; its Office of Information Technology and faculty investors.
Industry access to Pronghorn will be coordinated through the Nevada Center for Applied Research, a research and technology center that makes the school’s facilities, equipment, and talent available to industry through customized, fee-based contracts. Industry partners must have a tangible connection to the university, such as a research collaboration.

5:30p |
Alphabet’s Green Energy Ambitions Hit Turbulence
Mark Bergen (Bloomberg) — On May 16, Makani released a YouTube video. A camera pans across a T-shaped airplane, its 85-foot wings holding eight small turbines, a tether connecting it to a tall ground station. The plane swoops into the air. It dips and soars, looping elegantly in circles that mimic a windmill, something it was built to replace.

For more than a decade, engineers have been building this “energy kite” to harness wind power with 90 percent fewer materials than conventional wind turbines. But Makani, one of the oldest green-energy projects at Alphabet Inc.’s X research lab, is struggling to take flight. Support from its parent has waned in recent years, according to multiple people who have worked at and with the company. Several key Makani members have left. A former executive called it a “shell” of its former self. The people asked not to be identified discussing private matters.
The story of Makani is part of a broader struggle at Alphabet to develop clean energy technology with climate-saving promise. The tech giant is the largest corporate consumer of renewable power, and it continues to explore renewable ideas. But since Makani joined X in 2013, Alphabet has canceled or let languish an effort to create a smart grid system, a way to make fuel from seawater, and floating solar panels, a project that hasn’t been reported before.
See also: Here’s How Much Energy All US Data Centers Consume
For some, this is the result of Alphabet’s naïve approach to the field. “The reason they’re doing it is arrogance, not expertise,” said Saul Griffith, a co-founder of Makani who left the company in 2009. “It’s the triumph of novelty over rigor.”
Still, executives at X say they’re committed to the field. “It is hard, and many other people have given up,” said Obi Felten, a director at X. “If we don’t solve it in our generation, many other things will become irrelevant.” In July, X spun out a new drilling technology for geothermal home heating, a less-ambitious endeavor called Dandelion. Another early-stage X idea, code-named Malta, aims to store renewable energy with molten salt.
It’s also a difficult time for all renewable energy efforts. Prices for competing energy sources like oil and natural gas have slumped. Government support for fossil fuel alternatives has withered. Inside Alphabet, Chief Financial Officer Ruth Porat has instilled new cost controls and curbed some ambitious, long-term projects with uncertain payout. “The stock market does not reward Google for doing intelligent things in energy,” Griffith, the Makani co-founder, also said.
Astro Teller, the head of X, said Makani is transitioning out of a “building phase” to deployment. The team flies their kites regularly, looping them at different diameters, speeds and angles. “They have this year been very aggressively out testing,” Teller said in a recent interview. “They now do, every four to six weeks, what would have been a miracle last year.” Google co-founder and Alphabet president Sergey Brin said the company continues to be excited about airborne wind energy “and Makani has demonstrated incredible progress in that respect.”
See also: Google Will Be Powered Completely By Clean Energy in 2017
But Teller stressed that Makani’s mission — finding a new way to generate energy — is now just part of an array of efforts to combat climate change. Solar and wind energy are plentiful. Larger hurdles, and larger markets, exist in cutting the inefficiencies in using, transporting and storing energy. Making new, renewable energy is less of a moonshot (X’s raison d’être). “To the naïve observer, that looks like the world of solving climate change,” Teller said. “It’s not a rounding error, but it’s only one of the factors.”
Makani first connected with Google in 2006, when energy markets — and the internet search giant — looked very different. Like Google’s founders, Makani’s creators were unorthodox. Griffith, a MacArthur fellow, tinkered with homemade kite-powered surfing. Don Montague, his co-founder, raced them professionally. The pair hired fellow novelty sailing enthusiasts and eccentrics. Makani, which means “wind” in Hawaiian, ran out of an old naval air station in Alameda, California. Corwin Hardham, a third founder, told reporters he wind-surfed and paddle-boarded to the office.
“Makani is a really, really cool, important next-generation pursuit,” said Cathy Zoi, a former Department of Energy assistant secretary who served on Makani’s board. “It’s an entirely new way to approach wind.”
See also: Cleaning Up Data Center Power is Dirty Work
Google’s founders Brin and Larry Page met Griffith at a TED Conference and Montague while kite-surfing. The internet billionaires invested $10 million in Makani in 2006. Another $5 million came from the philanthropic arm, Google.org, in 2008. Then the financial crisis hit, putting steady funding for Makani on ice. Montague and Griffith left, leaving Hardham in charge. Tests to develop working kites continued. But tragedy struck in October 2012: Hardham died, at age 38, of a chronic heart condition.
“Those six-to-twelve months after Corwin passed were very hard for a lot of people,” said Gia Schneider, Hardham’s partner at the time. Hardham handled fundraising, and the employees worried the company might shut down in his absence. But about six months after Hardham’s death, Google swooped in and acquired Makani outright.
After the deal, Google made Makani a “moonshot” at X, squeezing it in with the self-driving cars, high-altitude internet-beaming balloons, and Google Glass. Green energy partners and associates stopped hearing from them. “Once things go inside Google, everything gets super, super secret,” said Zoi. The last time she saw Makani’s plans, in 2013, she said it had a “plausible roadmap” to commercial deployment.
Inside X, Makani received the gospel of “10x” — an edict from Page and Brin to produce tech ten times better than what’s currently out there. Still, Makani was relatively small. Its budget clocked in around five percent of the overall X budget of $500 million in 2014, according to one former executive who did not want to discuss internal issues publicly. Headcount rose and the unit was pushed to craft ambitious deals with utility companies, sometimes ahead of the technical schedule of its product, this person said.
Other X projects, like its life sciences and driverless car teams, cut partnership deals before spinning out from the lab. Makani completed its first test flight with a commercial-scale kite late last year and has done dozens more since, but it hasn’t begun supplying energy to any utilities yet. The companies Makani deals with are more conservative than those in most other industries. Teller attributed reluctance to embrace new technology in the sector to the need for “bankable” projects. New energy generation methods must pretty much be guaranteed to work to reward utility investments that typically take many years or decades to produce returns, he explained. “It’s a somewhat risk-averse industry,” he said.
Energy markets have moved against Makani, too. The cost of producing wind energy has dropped further than several early members of Makani expected. And the price of wind turbines has fallen about 35 percent since 2007, according to data compiled by Bloomberg New Energy Finance. That hurt Makani’s odds of offering a service that competes with turbines, according to a former senior executive at Alphabet who worked on energy projects including Makani. “This is a tough order for emerging tech to meet,” said Bloomberg New Energy Finance analyst David Hostert. An X spokeswoman said Makani isn’t competing directly with turbines because its system can be installed more easily in less-accessible areas, like mountainous regions.
Makani staff numbers have fallen from well over 100 to below 50 in the past two years, according to two former employees. Most of its pioneering members have left. Montague leads a firm making water vessels driven by kites. Griffith runs Otherlab, a robotics and environmental research company. The X spokeswoman wouldn’t disclose Makani’s headcount, but confirmed that it has declined as Makani moved from kite building to testing, which requires fewer people.
Other energy efforts across Google have also struggled. One is code-named Horizon, a project testing solar panels that rest on reservoirs, harnessing energy from both the sun and water, an approach others have tried with limited success. Horizon’s fate is uncertain, according to three people with knowledge of the project. X’s Felten declined to comment on Horizon. Another Google project tried to build software and equipment to manage power lines for utilities or a smart grid. That was shelved after the creation of the Alphabet holding company in 2015, according to three people familiar with the company.
Then there was Foghorn, a two-year X effort to turn seawater into a new liquid fuel. That was shut down in 2016 after failing to reach a price point that could compete with traditional fuels. Kathy Hannun, leader of Foghorn at X, went on to form Dandelion, the geothermal spinoff. She stressed that Foghorn folded due to market dynamics, not a lack of support from X for renewable energy. Many of X’s most ambitious projects are expected to hit dead-ends. “That’s just the nature of the place,” she said.
Teller dismissed any notion that the company’s clean-energy moonshots are driven by wide-eyed idealism. “We have a saying here: passionately dispassionate,” he said. “I don’t care how much you care about climate change. If your proposed solution won’t actually [be] a successful business that addresses climate change, how passionate you are about climate change is irrelevant. It won’t actually help climate change. So we should stop. We should stop immediately.”
5:51p |
Developers Plead with Boardrooms to Take IoT Security More Seriously

Brought to you by IT Pro
As the U.S. government looks to introduce new legislation that will require Internet of Things (IoT) devices to bake in more security measures, new survey data from Netsparker this week suggests the move could be coming just in time.
Web application security provider Netsparker released data on Thursday that found 52 percent of web developers think IoT and smart home technologies are the most vulnerable technologies to attacks, more vulnerable than web apps and online services (41 percent). Smart TVs, connected cars, and ATMs are also at higher risk of hacking than other technologies, developers say.
Developers said that companies’ attitude of “what they don’t know won’t hurt them” could be doing damage to their security posture. A lack of IT understanding and a lack of budget (57 percent each), an absence of concern (39 percent), and the perception that cybersecurity is too complicated to understand (30 percent) all indicate that management needs to take security more seriously.
Sixty-one percent of developers surveyed feel that government is the sector most vulnerable to hacking, followed by the financial services industry, which 50 percent of developers say is most at risk.

More developers feel that media (44 percent) and communications (32 percent) are vulnerable to hacking than healthcare (31 percent).
See also: Microsoft Says Government Should Regulate IoT Security
Interestingly, while the survey was conducted at the beginning of July through third-party research company Propeller Insights, it seems that these respondents were right about media being at risk. Just this week, hackers stole 1.5 terabytes of data from HBO, resulting in the unauthorized release of several upcoming episodes and scripts from Ballers, Insecure, and Game of Thrones. HBO executives continue to investigate whether the hackers also breached company communications.
The survey also asked developers to weigh in on the issue of election hacking and found that 57 percent of developers feel democratic governments are vulnerable to election hacking because political parties lack adequate IT and security expertise, while 54 percent point to outdated and potentially insecure polling equipment.
“Because of recent election-related events, it’s not surprising that developers and IT professionals have so little confidence in the ability of governments to prevent hacking. But the reality is that all organizations and enterprises should take precautions to prevent data breaches,” Netsparker CEO Ferruh Mavituna said in a statement.
See also: New Bill Aims to Address Gaping Holes in IoT Security
Election hacking in the U.S. got a lot of air time this year with questions around Russian interference in the presidential election, but governments around the world have been grappling with the issue for decades. Last year, in a profile of notorious political hacker Andrés Sepúlveda, Bloomberg said that he and his team rigged major political campaigns across Latin America: “For $12,000 a month, a customer hired a crew that could hack smartphones, spoof and clone Web pages, and send mass e-mails and texts. The premium package, at $20,000 a month, also included a full range of digital interception, attack, decryption, and defense.”
At Defcon last week, participants showed how easy it is to hack a voting machine, taking just an hour and a half to break into the WinVote machine, which had a number of serious flaws.

6:37p |
Friday Funny: The Office Cloud

Here’s the cartoon for this month’s Data Center Knowledge caption contest.
This is how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, most clever caption they think will be a fit. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon. Submit your caption in the comments below.
Congratulations to Mitch Schaub, who won July’s contest for the Modular Data Center cartoon. His caption was:
“Did you ask them to ship us a hard copy of that file?”
Some good submissions came in for last month’s Racking and Stacking edition; all we need now is a winner. Help us out by submitting your vote below:

Take Our Poll
7:02p |
Exec’s Departure Hints at Problems in Lenovo’s Data Center Business

Monday’s sudden departure of Lenovo’s president of North America, Emilio Ghilardi, hasn’t done the company any good from a PR perspective. Tech media folks are beginning to take note that the position Ghilardi held has become a revolving door, and are connecting the dots with other issues the China-based company is facing to paint a picture of a company navigating troubled waters.
It’s obvious that Lenovo knows where it wants, or needs, to go. Traction in the data center is crucial for the business — at least if it wants to grow — but so far its wheels seem to be spinning.
The company has much to overcome in order to get back on track:
- Ghilardi’s departure shed new light on the fact that the company is having difficulty finding stability at the top of its all-important North American operations. He’s the fifth person to hold the North America president position in five years — with two of his recent predecessors ending up working for Dell, which competes in both the PC and server markets. North America is especially important to Lenovo as it attempts to build its server business. Success in the global data center market starts in the US, making strong leadership on this side of the Pacific crucial. It also doesn’t help that no reason for Ghilardi’s departure has been offered — leaving the impression of a shake-up, perhaps born out of desperation. Silence, as they say, can be deafening, and can lead everyone from media pundits to potential customers to assume the worst.
- Business isn’t good. Unless or until Lenovo finds success selling servers into the data center space, desktops and laptops remain its core business, which is a no-growth market that has shrunk during each of the last five years. According to a report by the Triangle Business Journal, in May the company reported its smallest quarterly operating profit in six years — $74 million in income on revenue of about $9.6 billion. In April, the company was knocked out of the top spot it’d held since 2013 in IDC’s Worldwide Quarterly Personal Computing Device Tracker, losing to HP Inc.
- When Lenovo bought IBM’s server business in 2014 for $2.1 billion, it evidently didn’t negotiate to have server talent included in the transaction. CRN on Wednesday quoted an unidentified source “close to Lenovo” who said, “One of the things that went catastrophically wrong when they bought the server business from IBM is the best and brightest engineering talent never joined the company. Lenovo bought a product business with no road map and no future. They didn’t get the internal intellectual property to help them navigate to the hyper-converged market.” Lenovo evidently thought it would hit the ground running, as it had when it purchased IBM’s PC business in 2005. Marketing PCs and laptops is much different than selling servers, however, especially considering that in 2005, double-digit annual growth for PC sales was the norm. IBM’s ThinkPads already had a loyal user base; all Lenovo had to do was produce a quality product to allay fears.
- Lenovo doesn’t seem to “get” the server market. In another article, CRN talked to a slew of data center solutions provider executives who all repeated the same point — that Lenovo isn’t marketing effectively. This quote from FusionStorm’s CEO, Dan Serpico, is reportedly typical: “I don’t see them in the market. We’re seeing [rivals Supermicro and Huawei] a lot, and we’re not seeing Lenovo. We might sell a couple million dollars of Lenovo a year, but I don’t know anyone over there at all. I’ve had people from Huawei call me. I’ve had meetings with Supermicro. There’s a genuine effort by competitors to reach out. If there is at Lenovo, I’m not seeing it, and you would think we’d be someone they’d like to go after.” The acquisition of IBM’s server business instantly made Lenovo the third-largest player in the global x86 server market. By March of this year, it had dropped to number five, according to Gartner’s revenue-based rankings.
For the time being, the senior vice president and general manager of Lenovo’s worldwide Enterprise Business Segment will be walking in the shoes left empty by Ghilardi. According to Lenovo, “He is familiar with the North America market from his longtime, successful leadership of Lenovo’s Global Accounts business.”
He might not be the person to fill those shoes permanently, however, if Lenovo wants to become a star in the data center. What’s needed is somebody charismatic who knows the server business from the inside out and is comfortable getting out of the office to shake hands; someone who can motivate the team to make presentations at conferences and, as Serpico pointed out, build relationships with solutions providers.
All indications are that Lenovo has developed solid products with its ThinkSystem and ThinkAgile lines. It’s time to wear out a little shoe leather selling them.