Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 1st, 2017
1:00p |
This Company Owns the High-Density Data Center Niche in Silicon Valley 
Not everybody needs a high-density data center to host their equipment, but if you do, options are limited if you want to outsource to a colocation provider. While you can find a provider that will design a custom environment to support your load (if your deployment is big enough), that’s not exactly the same as simply leasing a rack in a colo, and certainly much more expensive.
Predictions about a decade or so ago that data center power densities would skyrocket across the board turned out to be mostly wrong, and colocation companies have designed their facilities accordingly. Densities have remained steady in most cases, but some workloads have emerged that require the kind of power-per-rack that’s usually reserved for supercomputers. The high-density data center as a service has become a niche market, and some companies have emerged to fill it.
One such company is Colovore, which launched its first data center – located in the midst of one of Silicon Valley’s biggest data center clusters in Santa Clara, at 1101 Space Park Drive – in 2014. Today, with the first 2MW phase of the facility fully occupied by clients, the company is building out Phase Two, expecting at least a portion of it to go to existing customers.
The workloads that require Colovore’s 20kW to 35kW per rack include machine learning, bioinformatics, rendering for applications used to operate autonomous cars, and private cloud, Sean Holzknecht, the company’s president and co-founder, said in an interview with Data Center Knowledge.
GPU clusters running software for machine learning occupy a large percentage of the facility, he said. Such GPU clusters are currently the best available architecture for training machine learning algorithms and require high power density from the data centers that house them. As another example, a company called Cirrascale has geared its data center outside of San Diego specifically for machine learning, and densities in the facility reach north of 30kW per rack.
Colovore’s founders started by examining the biggest headaches of IT departments working in colocation data centers. One common problem they saw was people trying to solve density issues with workarounds like spreading fewer servers across more racks. That’s what led them to bet on high density, Holzknecht recalled. So far, the bet has paid off.
One of the reasons Colovore’s model works is its location, he said. Many of the companies pushing the envelope with machine learning, cloud, and self-driving cars are based in Silicon Valley and willing to pay for high-density data center space to house their gear, because designing and operating such facilities in-house is complicated and expensive.
The company uses rear-door heat exchangers to cool its high-density racks. The first phase of the facility is designed for 20kW per rack, while the second will support 35kW per rack, using the same cooling distribution design but with rear-door heat exchangers rated for 35kW per rack instead of 20kW.
The data center has a raised floor, but it’s not used to deliver cold air to the IT equipment. Instead, underneath the raised floor is a custom system of water pipes and pumps that delivers cold water to the heat exchangers. The system had to be custom-designed because the typical off-the-shelf rear-door heat exchanger solution doesn’t provide much more than 200kW of total cooling capacity, Holzknecht said. At warehouse scale, those stock water supply units would take up more space than the server cabinets themselves.

Getting from 20kW to 35kW is simply a matter of supplying more gallons of chilled water per minute and setting a lower supply temperature on the water delivery loop.
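As a rough sanity check (my own back-of-the-envelope numbers, not Colovore’s actual design parameters), the required flow follows from the basic heat-transfer relation Q = ṁ · c_p · ΔT. The sketch below assumes a hypothetical 10°C water temperature rise across the rear door:

```python
def required_flow_gpm(heat_load_w, delta_t_c):
    """Chilled-water flow needed to absorb a rack's heat load.

    Rearranges Q = m_dot * c_p * delta_T to solve for flow rate.
    The 10 C temperature rise used below is an illustrative assumption.
    """
    CP_WATER = 4186.0   # J/(kg*K), specific heat of water
    RHO_WATER = 997.0   # kg/m^3, density of water near room temperature
    L_PER_GAL = 3.785   # liters per US gallon
    m_dot = heat_load_w / (CP_WATER * delta_t_c)   # mass flow, kg/s
    liters_per_s = m_dot / RHO_WATER * 1000.0      # volumetric flow, L/s
    return liters_per_s * 60.0 / L_PER_GAL         # gallons per minute

for kw in (20, 35):
    print(f"{kw} kW rack @ 10 C rise: {required_flow_gpm(kw * 1000, 10):.1f} GPM")
```

Under these assumptions a 20kW rack needs roughly 7.6 GPM and a 35kW rack roughly 13.3 GPM, which illustrates the article’s point: the jump to 35kW is a matter of more flow (or a colder supply loop), not a different cooling architecture.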
Another common problem Colovore set out to address is colocation staff who don’t understand IT. The data center services industry grew out of the commercial real estate and telecom industries, neither of which has a history of collaborating with IT, said Holzknecht, who has worked for various data center providers for about 17 years.
Sysadmins and engineers are used to encountering a “take it or leave it” mentality among the workers who staff colo facilities. It’s common, for example, for a security guard working for an outsourcer to run reception and be the first person to greet you when you arrive at a colo. If you’re an IT person with critical infrastructure in the facility and in a “hair on fire” situation, you need to speak with somebody who can help you as soon as possible, and chances are that security guard is not going to understand exactly what your problem is.
To be sure, Colovore’s security team does not do IT, but they’re not the first point of contact for customers, Holzknecht said. Customers are greeted by people who speak their language.

With the second 2MW phase under construction and room for two more similar phases, next on Colovore’s agenda is adding more sites. The long-term plan is to have at least three data centers in the San Francisco Bay Area and one outside for disaster recovery. “The plan all along was to grow out of the Bay Area,” Holzknecht said. | 4:00p |
ARM CEO Won’t Rule Out Major Deal After Takeover by SoftBank Giles Turner (Bloomberg) — ARM’s new owner SoftBank is encouraging the British chip designer to think in “the broadest terms possible,” said Chief Executive Officer Simon Segars. That includes major deals and global expansion.
Segars didn’t rule out the possibility of a large deal, citing backing from SoftBank Group Corp.’s Masayoshi Son. “He is constantly discussing what is going on in the world among the group’s companies and the best way to build the business even more strongly,” said Segars in an interview at the Mobile World Congress in Barcelona Tuesday. “He’s thinking decades out in the future.”
SoftBank purchased ARM in July for $32 billion. The Tokyo-based company expects ARM to be worth at least $100 billion in the next three years, according to one executive at SoftBank, who asked not to be named as they aren’t authorized to speak publicly on the matter.
ARM is continuing to hire in the U.K., a key part of the takeover agreement with local regulators. ARM had 1,600 staff in the U.K. at the time of the deal and is targeting 3,200 by 2021. It has already hired a “few hundred,” said Segars.
ARM is also hiring in the U.S., where it has 1,000 staff, and acknowledged the need to hire in China, where it is currently doubling its office space in Shanghai.
The Cambridge-based company has already made a flurry of small acquisitions, recently acquiring Swedish firm Mistbase and U.K.-based NextG-Com, two companies focused on cellular technology.
SoftBank has been busy undertaking a flurry of deals. The technology giant is merging the satellite startup it’s backing, OneWeb Ltd., with Intelsat SA. SoftBank is also near to closing its $100 billion Vision Fund with Saudi Arabia and other high-profile backers.
While Segars stressed it is business as usual, it is clear Son sees ARM as key to SoftBank’s future. Son said Monday that he expects one computer chip to have the equivalent intelligence of a human with an IQ of 10,000 within the next 30 years. The growth in computer ability was “why I acquired ARM,” said Son.
Around 16.7 billion ARM chips were shipped in 2016, around 530 per second. In ARM’s 27-year history, around 100 billion chips have been shipped, half of which over the past four years, according to the company. | 4:30p |
Five Things I’m Looking Forward to at Data Center World It’s March, and Data Center World is upon us. The team at Data Center Knowledge is gearing up for one of the year’s biggest data center conferences, which this spring is happening in downtown Los Angeles, home to some of the West Coast’s most important network interconnection and data center hubs.
While there are many things planned for the show that I’m excited about, I thought I’d share my top five with our readers:
Humans are Your Biggest Security Risk
Once on the FBI’s Most Wanted list for hacking into 40 corporations, Kevin Mitnick has since crossed over to the light side, working today to make companies and individuals more aware about the dangers of cybercrime. He’s been convicted of numerous crimes, including hacking into Pacific Bell’s voice mail computers and copying proprietary software from major cell phone and computer companies.
In his keynote at Data Center World, Mitnick will demonstrate how hackers today use a combination of social engineering (a term he popularized) and technological exploits to penetrate companies’ systems. He believes people are the weakest link in the security chain, and his presentation will show why.
Crowdsourcing Server Design for One of the Biggest Clouds
Last October, Microsoft made an unusual hardware announcement. The company released details of its next-generation cloud server design that at the time was incomplete. The point was to apply to hardware development the same process that made open source software development such a success. Released through the Open Compute Project, the design was meant for others to see, use, and contribute to as they see fit, with Microsoft hoping to benefit from crowdsourced brainpower.
At Data Center World, Kushagra Vaid, general manager for Azure Cloud Infrastructure at Microsoft, will share the company’s experience with this unorthodox approach to designing cloud infrastructure.
The Gravity of Interconnection
Without interconnection, the linking up of networks to exchange data traffic, there would be no internet, and few companies have been as pivotal as Equinix in developing the key interconnection hubs responsible for the internet as we know it today. Now the frontier in interconnection is linking up clouds and the companies that use them, and the Redwood City, California-based colocation giant is working to ensure as much of that interconnection as possible happens inside its facilities (as are its many competitors).
In a Data Center World keynote, Peter Ferris, senior VP in the office of the Equinix CEO, will talk about the ongoing digital transformation of the enterprise and about the ways the success of enterprise cloud relies on proper cloud connectivity in the data center.
The Future of Distributed Computing?
It turns out that blockchain, the underlying technology behind bitcoin, is extremely useful for many more things than digital currency. The distributed database is secure and fault-tolerant by design, making it suitable for everything from identity management to storing medical records. IBM and Northern Trust recently launched a solution for managing private equity funds built on a blockchain network; a group of 30 companies including Microsoft and JPMorgan Chase are working on a blockchain-based corporate computing system for tracking financial transactions and contracts; and Britain’s former Prime Minister David Cameron recently said blockchain-based technology can be used to fight government corruption.
So, what does it all mean for the data center? At Data Center World, Ravi Meduri, an engineering services VP at Innominds Software, will speak about the potential changes to the underlying structures of internet, networks, storage, and compute layers that are likely to accompany the proliferation of blockchain. More importantly, it’s likely that blockchain will eventually live entirely in the cloud, which has its own set of implications for data center operators.
Learning from the Data Center Hive Mind
Finally, networking. That’s one of the biggest reasons to come to a conference, and if you’re interested in meeting people who manage corporate technology infrastructure or work in data centers, Data Center World always delivers. It’s difficult to get a different perspective on your issues while staying inside the four walls of your office or server room, and while the organizations may be different, most of the problems their data center teams face are exactly the same. If you want to learn from your peers, we’ll see you in LA!
– Yevgeniy
Data Center World is taking place April 3 – 6 at the Los Angeles Convention Center. Secure your spot here. | 5:00p |
Former eBay CEO Joins ServiceNow as CEO and President  Brought to You by Talkin’ Cloud
Enterprise cloud services firm ServiceNow on Monday named John Donahoe president and CEO. The former eBay CEO and president will succeed Frank Slootman, who is stepping down from his management role on April 3 and will continue to serve as chairman of the board.
Prior to joining ServiceNow, Donahoe was CEO and president at eBay from 2008 to 2015. During his tenure there, revenues more than doubled to $18 billion. Previously, Donahoe spent more than 20 years at consulting firm Bain & Company where he eventually served as CEO.
“John is a highly capable and proven CEO, uniquely suited to lead ServiceNow in its next phase of growth and we are thrilled that the Board’s search for my successor has resulted in John joining the Company,” Slootman said in a statement. “John’s extensive track record of creating value, driving innovation and scaling a large technology organization will be critical as we continue to move ServiceNow up the enterprise value chain. Our goal going forward is to not just serve the IT executive ably, but to help CEOs solve their most pressing issues, and there is nobody better to help us get there than John.”
Slootman added that it has been a “privilege to lead ServiceNow as CEO for the past six years.”
“I am honored to lead ServiceNow,” Donahoe said. “Frank has done a tremendous job building the company into the fastest growing enterprise software company in the world. ServiceNow is extremely well positioned to expand its leadership in the years ahead. Working alongside Frank and the Board, the management team and I intend to capitalize on our opportunities to drive growth and create value for our customers, partners, shareholders and employees.”
Currently, Donahoe serves as chairman of the board at PayPal, while serving on the board of directors for Nike, Intel, and nonprofit advisor The Bridgespan Group.
Donahoe joins the company as analysts have identified ServiceNow as a potential target for acquisition this year as companies look to expand their cloud services. ServiceNow declined to comment.
In the fourth quarter of 2016, ServiceNow subscription revenues grew 41 percent year-over-year to $344.6 million.
“We finished 2016 with strong momentum and our business is firing on all cylinders,” Slootman said in January. “Total revenues in 2016 grew 38 percent making ServiceNow the fastest growing enterprise software company with more than $1 billion in revenue.”
In February, ServiceNow announced a multi-year, strategic partnership with IBM to offer ServiceNow’s cloud-based service automation platform and IBM products and services.
Last month, ServiceNow acquired DxContinuum to leverage machine-learning capabilities and data models developed by the machine learning company in its ServiceNow platform and across its products.
This article originally appeared on Talkin’ Cloud. | 5:43p |
What Blockchain Means (Hint: Not Just Bitcoin), and Why You Should Care  Brought to you by MSPmentor
If you haven’t been paying much attention to blockchain — the distributed database technology made famous by Bitcoin — it’s time to start.
Here’s an overview of what blockchain is and why it matters now.
Blockchain 101
Blockchain is a method of keeping track of transactions through a distributed database (sometimes also called a distributed ledger).
The basic idea is this: Every time a transaction happens or a new piece of information is acquired, it is recorded in a database that a network of people can store and access.
Once data is added to the distributed database, it can’t be erased.
The distributed nature of the database is blockchain’s killer feature.
Because the database is distributed across a network, it is highly transparent.
Wondering what blockchain means for the data center? This April, at Data Center World in Los Angeles, Ravi Meduri, VP of engineering services at Innominds Software, will walk you through the implications of this distributed database technology on the underlying structures of the internet, networks, storage, and compute layers. Register here.
No one can modify the database without the change being discovered by everyone else on the network.
This prevents fraudulent activity and mitigates the risk of security attacks.
There is no central database that hackers can exploit, and no centralized authority with unilateral power to decide what information is in the database.
Blockchain became famous with the launch of Bitcoin, an open source online payment system that relies on blockchain to record payment activity.
But blockchain is about more than just Bitcoin.
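The tamper-evidence described above comes from hash chaining: each record carries a cryptographic hash that folds in the previous record’s hash, so altering any earlier entry invalidates everything after it. Here is a minimal, single-node sketch of that idea (a toy illustration only; real blockchains add consensus, signatures, and replication across nodes):

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, including the previous block's hash,
    # so any change to history ripples forward through the chain.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain):
    # Recompute every hash; a single altered record breaks the links.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev_hash": prev}):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
print(verify(chain))                       # True
chain[0]["data"] = "Alice pays Bob 500"    # tamper with history
print(verify(chain))                       # False
```

On a real network, every participant runs the equivalent of `verify` independently, which is why no central authority is needed to detect fraud.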
Things to Know about Blockchain
On that note, here are the things you should know about blockchain:
- Bitcoin is only one of many examples of blockchain technology.
- Blockchain is a concept, not a specific technology or piece of software. There are many possible ways to implement a blockchain.
- So far, blockchain has been used mostly in relation to payment transactions. But its uses are not strictly limited to payment processing. A blockchain database could also be deployed to record identities, store data, and more. Projects like Onename and Storj are examples of this.
- Blockchain is increasingly popular within mainstream industries like financial services. Banks, like Goldman Sachs and Bank of America, are now investing in blockchain technology. That’s a significant change since the days when blockchain revolved mostly around Bitcoin, an alternative, unregulated currency system that established financial companies mostly eschewed.
The Future of Blockchain
To date, blockchain’s large-scale, real-world applications have been limited mostly to Bitcoin.
Other blockchain platforms are small-scale experiments or are still being developed.
Still, it’s worth starting to pay closer attention to blockchain now, if you are not already.
The blockchain world has grown rapidly in the past several years.
It has become very diverse and gained the backing of deep-pocketed organizations.
Right now, it remains unclear exactly how we’ll be using blockchain in five or ten or twenty years.
But it’s a pretty safe bet that we will be using it.
This article originally appeared on MSPmentor. | 6:18p |
IT Automation: What to Automate and When? Raju Chekuri is CEO of NetEnrich.
The job of IT managers has changed in recent years. Instead of doing everything themselves, they look to introduce tools that can do the dirty work, freeing them up for more planning and strategic work for the business.
It also doesn’t make much sense to pay high IT salaries for time-consuming and repetitive configurations, patching, and the like. Determining what should be the company’s next IT automation candidates, however, is no easy task.
The logical approach is to survey the organization’s most frequently occurring problems. These could be network slowdowns (bandwidth issues), poor CPU utilization, or any number of common issues affecting users or wasting money. Yet automating those tasks, such as by automatically de-prioritizing low-priority traffic (media downloads), may not fix the actual problem.
First, you should attempt to discover the root cause. A common root cause is capacity. To solve that, you must analyze your top consumers of bandwidth over a period of time, and then rationalize the optimal bandwidth to allocate to various traffic segments. The lesson: search for and fix the root cause before applying tools to a problem.
Take an IT automation deep dive this April at Data Center World in Los Angeles, where Joel Sprague, principal systems engineer at General Dynamics, will talk about automation functions that have proven useful in a private cloud setting over the last five years. He will also cover tips on deciding when automation makes sense, and when it doesn’t. Lastly, he will touch upon some of the advantages in agility offered by stateless computing, hybrid cloud, and cloud bursting. Register here.
Why not automate everything, you may ask, and cover all the bases? That sounds sensible in this day and age, but it’s not. If a task occurs only a couple of times a year and doesn’t require significant manual effort, consider the cost of acquiring and managing a tool to automate it.
It probably won’t be worth the investment. A good example is server provisioning, which now happens even less frequently given the move toward cloud-based infrastructure.
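One way to make that judgment concrete is a simple payback calculation: compare the labor a tool would save against what it costs to buy and run. The sketch below (my own illustrative framing, with hypothetical numbers, not figures from the article) shows why a twice-a-year task rarely justifies a tool while a weekly chore often does:

```python
def automation_payback_months(manual_hours_per_run, runs_per_year,
                              hourly_rate, tool_cost_upfront,
                              tool_cost_per_year):
    """Rough months to recoup an automation tool's upfront cost.

    Returns None when yearly labor savings never cover the tool's
    recurring cost, i.e. the tool is not worth the investment.
    """
    yearly_savings = manual_hours_per_run * runs_per_year * hourly_rate
    net_yearly = yearly_savings - tool_cost_per_year
    if net_yearly <= 0:
        return None
    return tool_cost_upfront / net_yearly * 12.0

# A 4-hour task done twice a year: the $2,000/yr tool never pays back.
print(automation_payback_months(4, 2, 75, 10000, 2000))   # None

# The same task done weekly: payback in well under a year.
print(automation_payback_months(4, 52, 75, 10000, 2000))
```

The exact rates and tool costs will vary by shop; the point is that run frequency dominates the result, which is the article’s argument against automating rare tasks.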
When evaluating new opportunities for IT automation, prioritize the areas that can reduce large amounts of manual labor, improve quality, reduce defects, and simplify complex tasks so that you don’t have to put senior engineers on the job.
Categorize Your Manual Tasks into Three Buckets
- Too involved. The process is a complex, multi-step procedure involving multiple systems and subsystems. This is a prime candidate for automation, because the complexity increases the risk of failure from human error.
- Too long and tedious. The process takes a long time to complete. For example, setting up a software-defined data center requires an individual to run manual tasks over several days, but not continually. So instead of having an employee monitor a project with long periods of dead time between steps, you automate the entire process. This can also reduce errors.
- Neither one. The task fits into neither category, which means it doesn’t need to be automated yet.
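The three buckets above amount to a small decision procedure, which could be sketched like this (the thresholds and the `steps`/`systems`/`elapsed_days` fields are hypothetical stand-ins of my own; tune them to your environment):

```python
def bucket(task):
    """Sort a manual task into one of the article's three buckets.

    `task` is a dict with `steps` (number of manual steps), `systems`
    (systems/subsystems touched), and `elapsed_days` (wall-clock
    duration) -- illustrative fields, not a standard schema.
    """
    if task["steps"] > 10 or task["systems"] > 3:
        return "too involved"          # complexity raises human-error risk
    if task["elapsed_days"] >= 2:
        return "too long and tedious"  # long dead time between steps
    return "neither"                   # leave it manual for now

print(bucket({"steps": 15, "systems": 4, "elapsed_days": 1}))  # too involved
print(bucket({"steps": 3, "systems": 1, "elapsed_days": 5}))   # too long and tedious
print(bucket({"steps": 2, "systems": 1, "elapsed_days": 0}))   # neither
```

Tasks landing in the first two buckets feed the prioritized automation list discussed next.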
Prioritizing and Tools
Now that you have a list of IT automation projects, which should you take on first? It’s useful to look at these activities within the three phases of infrastructure lifecycle management (ILM): provisioning, operations, and audit. Of those three areas, one likely has a higher impact on your business. A hosting company, for example, would prioritize efficiency in provisioning, because that drives revenue, while a company in which compliance is critical to staying in business should focus on automating auditing tasks first.
Finally, you’ll have to make some decisions around tools. A large enterprise can afford automation suites for each core infrastructure area: networking, storage, and servers. A smaller company will likely need to pick and choose. Select tools that fit the infrastructure area where your most common problems occur, or where you have the most requirements.
While there are plenty of commercial tools on the market, acquiring lots of point tools gets expensive. If your team is strong technically, go the open-source route. Your team will be able to handle the extra work involved to configure and learn the tools, and you will save a lot of money. Otherwise, choose commercial tools where ease-of-use is built in and support is always available from reputable vendors.
Today, there are many excellent reasons to focus on IT automation. It is more accurate, and usually more cost-effective. It allows for the standardization of IT processes to reliably meet business needs. Yet there’s always a balance. We cannot replace humans in the data center, and intelligent decision-making by experienced engineers will continue to be a highly valuable asset for CIOs and CTOs. The trick is to balance humans and machines, with a goal of extreme efficiency and supporting business objectives.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. |