Data Center Knowledge | News and analysis for the data center industry
Monday, February 13th, 2017
1:00p |
How to Get a Data Center Job at Google
If you haven’t worked on Google’s data center team in the past, you won’t have the exact expertise you’ll need when you join. So, although expertise is important when applying for Google data center jobs, what’s more important is a sharp and flexible mind.
No one built a network to support applications at Google’s scale before Google, so Googlers learn many of the skills required to operate and continue expanding that network on the job.
“There is no book that you can refer to,” Joe Kava, Google’s VP of data centers, says. “The technology we deploy was invented here, so you’re not going to get a person with that specific expertise.”
In the nine years that he’s been at Google, the company’s data center team has grown more than 10-fold, and it’s continuing to grow, although not as quickly as in some previous years, when he would hire more than 100 people per year. The core team today is big enough that it doesn’t have to hire at such extreme rates, but the search for good people never stops, and it has gotten harder.
The data center industry has been growing rapidly over the last couple of years. Operators of cloud platforms at global scale, companies like Google and Facebook, as well as data center providers like Equinix and Digital Realty Trust, have been expanding their infrastructure around the world, and competition for talent is heating up.
That, combined with relatively low numbers of women and young people entering the data center profession, makes it increasingly difficult to find the right people – even for Google.
Read more: The Data Center Industry Has a Problem: Too Many Men
“I think it’s getting harder because the data center sector as a whole has been pretty hot,” Kava says.
His approach to hiring is in line with the company’s overall staffing philosophy, developed by Laszlo Bock, Google’s former senior VP of people operations, who left last year to help other companies create the employee culture Google is famous for.
Here’s how Bock described the number-one element of his philosophy two years ago to The New York Times’s Thomas Friedman:
“The No. 1 thing we look for is general cognitive ability, and it’s not I.Q. It’s learning ability. It’s the ability to process on the fly. It’s the ability to pull together disparate bits of information. We assess that using structured behavioral interviews that we validate to make sure they’re predictive.”
That general cognitive ability is more important for people working in Google data center jobs today than ever. The cut-and-dried division of labor, where some people did racking and stacking, some provisioned servers and installed operating systems, some oversaw network connections, and others did service maintenance in production, has been replaced by automation.
With most operations tasks handled by software, you need especially sharp people on board to troubleshoot when systems fail or to figure out solutions in atypical scenarios, outside of the existing automation capabilities. This is equally true in IT, software, facilities operations and other areas.
“When something goes wrong and automation can’t account for it, figuring out why has become more of a specialty skill, and we need much higher quality people to do those kinds of corner cases,” Kava explains.
See also: What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy
If expertise is not the top priority, it certainly is one of the main considerations – Bock ranked it as the fifth most important factor. Not having gone through formal higher education doesn’t automatically disqualify you – although a college degree doesn’t hurt – but having a solid grasp of the technical fundamentals is important.
People in Google data center jobs are generalists, or systems-level engineers, but they typically have at least foundational knowledge of computer science, electrical, or mechanical engineering.
“Having that solid engineering or technical foundation to build from is important,” Kava says. “You can acquire that foundation in different ways.”
Where Kava’s team does rely more on formal education when sorting through resumes is data center design. Most people on the design team are licensed engineers in regions where they work.
Skills he looks for in design and operations candidates run the gamut: from traditional data center disciplines like electrical and mechanical engineering, security, and supply-chain management to building information modeling and statistical probability analysis. If you’re looking at several design options, which one is more likely to deliver the right level of availability?
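The availability question lends itself to a quick numerical check. Below is a minimal sketch in Python, using made-up component figures rather than anything from Google, of how a reviewer might compare the expected availability of a non-redundant power design against a 2N design:

```python
# Illustrative availability comparison for two hypothetical power designs.
# The component availabilities are invented for the example.

def series_availability(*components):
    """Every component must work, so availabilities multiply."""
    total = 1.0
    for a in components:
        total *= a
    return total

def parallel_availability(unit_availability, units):
    """The subsystem works if at least one redundant unit works."""
    return 1.0 - (1.0 - unit_availability) ** units

UPS = 0.999        # assumed availability of one UPS
GENERATOR = 0.995  # assumed availability of the generator plant

design_a = series_availability(UPS, GENERATOR)                            # single UPS
design_b = series_availability(parallel_availability(UPS, 2), GENERATOR)  # 2N UPS

for name, avail in (("single UPS", design_a), ("2N UPS", design_b)):
    downtime = (1.0 - avail) * 8760  # expected hours of downtime per year
    print(f"{name}: availability {avail:.5f}, roughly {downtime:.1f} hours/year down")
```

Even this toy comparison shows why the statistical framing matters: the extra UPS only pays off to the extent that the redundant element, rather than the shared generator plant, dominates the failure probability.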
See also: Data Centers Scrambling to Fill IT Skills Gap
Advanced understanding of control systems is growing in importance, because Google’s data center network is becoming ever more complex and distributed. This is where knowledge of machine learning comes in handy. As we’ve written before, Google uses deep neural networks not only to figure out which ads to display at the top of your search results but also to improve its data center efficiency.
Skills? Check. Extraordinary cognitive ability? Check. The third big box is personality. Bock wouldn’t have what is more than likely a lucrative consulting gig were it not for Google’s famous company culture, and that culture begins with the personality of the people the company hires.
The key traits there are being collaborative, being willing to take ownership of projects and issues, and being transparent. There’s no room for the knowledge-is-power attitude common in corporate environments, where people guard their influence by sharing as little information as possible.
Finally – and it’s a cliché but its importance cannot be overstated – be a pleasant person. “You want people that have a good sense of humor, people you’d like to be spending eight hours a day around,” Kava says.
4:00p |
How Fake Data Scientists Could Hurt Your Business
Celeste Fralick is Principal Engineer for Intel Security.
Big data has arrived, and so have the data scientists. The demand for immediate business intelligence and actionable analytics is encouraging many people to adopt the title of data scientist. However, there is a missing link between big data and effective data models that is too often being filled by people without sufficient background and expertise in statistics and analytics. There is an underlying belief that data modeling is just another tool set, one that can be quickly and easily learned.
While these “fake” data scientists mean well, what they don’t know could substantially hurt your business. Developing, validating, and verifying the model are critical steps in data science, and they require not only skills in statistics and analytics but also creativity and business acumen. It is possible to build a model that appears to address the foundational question but is not mathematically sound. Conversely, it is also possible to build a model that is mathematically correct but does not satisfy the core business requirements.
With past teams, we assembled a data science center of excellence in response to these concerns. One of our first tasks was the creation of an analytic lifecycle framework to provide guidance on the development, validation, implementation, and maintenance of our data science solutions. A key part of this process is an analytic consultancy and peer review board that provides external viewpoints as well as additional coverage for these complex products.
There is a range of possible types of analytics, from descriptive (what is happening) to prescriptive (what will happen and what is recommended), but all require a rigorous development methodology. Our methodology begins with an exploration of the problem to be solved, and runs through planning, development, and implementation.
The first step to developing an effective analytic model is defining the problem to be solved or the questions to be answered. Complementing this is a risk assessment, which includes identifying sources of error, boundary conditions, and limitations that could affect the outcome.
The second step is detailed input planning, which starts with a more complete definition of the requirements necessary to meet the expectations of the ultimate consumer of the output. An assessment of existing models and the current state of analytics should follow, to avoid duplicating efforts or recreating existing work. The first peer review happens during this step, to assess the plan and get comments on the concept.
The third step is development of the actual algorithm to be used. This is usually an iterative approach, beginning with an initial hypothesis, working through one or more prototypes, and refining the algorithm against various cases. When a final version is ready, it is put through two series of tests: validation, which confirms that the model meets the requirements – that the right algorithm has been developed; and verification, which confirms that the model is mathematically correct – that the algorithm has been developed right. There is another peer review during this step, which will include, or be immediately followed by, a review by the customer or end user.
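The split between validation and verification can be made concrete with a short sketch. Everything below – the data, the metric, and the acceptance thresholds – is hypothetical; the process above does not prescribe specific tests.

```python
# Minimal sketch of the validation/verification split: hypothetical data,
# metric, and thresholds chosen only to illustrate the two kinds of checks.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
predictions = model.predict(X_test)

# Validation: does the model meet the agreed requirement (the right model)?
REQUIRED_MAE = 0.5  # assumed business requirement
mae = mean_absolute_error(y_test, predictions)
print(f"validation: MAE={mae:.3f} vs requirement {REQUIRED_MAE} ->",
      "pass" if mae <= REQUIRED_MAE else "fail")

# Verification: is the model mathematically sound (the model built right)?
# Here: residuals roughly centered on zero and uncorrelated with predictions.
residuals = y_test - predictions
centered = abs(residuals.mean()) < 0.1
uncorrelated = abs(np.corrcoef(residuals, predictions)[0, 1]) < 0.1
print("verification:", "pass" if centered and uncorrelated else "fail")
```

A real peer review would go much deeper on both sides, but keeping the two checklists separate is the point: a model can pass one and still fail the other.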
Whether the intended user is an internal department or business unit, or an external customer, there are some key questions that they should be asking during this review:
- How accurate is the model, and how are the ranges and sources of error dealt with?
- How does the model satisfy the requirements?
- How does the model react to various scenarios in the environment?
- What are the test results, confidence values, and error rates?
- Is there any intellectual property incorporated into the model, and who owns it?
Finally, once these reviews have been successfully completed and questions answered to the customer’s satisfaction, it is time to implement the model. During the operating life of the model, regular reviews should be conducted to assess if any new or updated data is affecting the results, or if any improvements are required. Part of this review should be an evaluation of the analytic product and the criteria for when its output is no longer relevant or its use should be discontinued.
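One way to operationalize that ongoing review – under assumptions of my own, since the article does not prescribe a particular test – is a periodic drift check that compares recent input data against the data the model was built on:

```python
# Hypothetical periodic drift review: flags a model input whose recent
# distribution has shifted away from the training data. The test and the
# threshold are illustrative choices, not requirements from the article.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(training_values, recent_values, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test on one model input."""
    statistic, p_value = ks_2samp(training_values, recent_values)
    return {"ks_statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}

rng = np.random.default_rng(1)
training = rng.normal(loc=0.0, scale=1.0, size=5000)
recent = rng.normal(loc=0.4, scale=1.0, size=1000)  # simulated shift in the input

print(drift_report(training, recent))
```

Repeated drift flags, tied back to the retirement criteria above, give the review board an objective trigger for retraining the model or taking it out of service.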
Data science is rapidly growing as a tool to improve a wide range of decisions and business outcomes. It is important to know what questions to ask, both about the qualifications of your data scientists, and about the proposed analytic model. A good analytic process can bring the “fake” data scientists into the fold without too much heavy lifting, and your team – and output – will be stronger for it. There is a way to do analytics correctly, and doing it wrong can be worse than doing nothing.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:15p |
Pentagon Hires Hackers to Target Sensitive Internal Systems
Nafeesa Syeed (Bloomberg) — The Pentagon is paying hackers to test its key internal systems for vulnerabilities — and they are finding weaknesses faster than expected.
In a pilot project this past month, the Pentagon’s Defense Digital Service let about 80 security researchers into a simulated “file transfer mechanism” the department depends on to send sensitive e-mails, documents, and images between networks, including classified ones. The effort was important enough that staff for new Defense Secretary James Mattis were briefed on the ongoing program on his first day on the job.
Lisa Wiswell, whose title at DDS is “bureaucracy hacker,” said she told Pentagon cyber analysts to be on standby after the program started Jan. 11, but added that nothing would likely turn up for a week. Within hours, though, the first report from a hacker highlighting a risk arrived.
“That was surprising,” Wiswell said in an interview at her Pentagon office. “I was like, ‘I don’t know what else is going to come down the pike if we’ve got stuff that’s falling this quickly.”’
Wesley Wineberg, a security researcher based near Seattle who took part in the experiment, said it was his first time looking into a government system. He hadn’t expected it to be from the Defense Department.
See also: Renowned Hacker: ‘People, Not Technology, Most Vulnerable Security Link’
‘Quite Weak’
“Parts of the system appeared to have been well designed and reviewed from a security perspective, and other parts were quite weak,” Wineberg said via e-mail. “Over the years I have learned not to have any expectation that a system will be any more secure than another system just because of its importance or criticality.”
With concerns about cyber vulnerabilities rising across the U.S. government, the cyber firm Synack Inc. received a three-year, $4 million contract in September to carry out “bug bounties” across the Pentagon. The Redwood City, California-based company vetted and recruited security researchers from the U.S., Canada, Australia and the U.K., according to Mark Kuhr, Synack’s chief technology officer and a former National Security Agency analyst. The exercise ran through Feb. 7, with more expected.
See also: Snowden-Era Paranoia Fuels Data Center Networking Startup Boom
Because of security concerns, hackers didn’t get direct access to operational networks. Instead, the digital service replicated the file transfer systems in a “cyber range,” a kind of digital laboratory resembling the original environment. The company also added extra security layers to make sure adversaries didn’t compromise the hackers’ computers or enter into the range.
Pentagon Briefing
“We had to assume that their entire laptop is compromised — the Russians are sitting on the laptops — how do we prevent them from accessing the challenge,” Kuhr said. “How do we prevent them from accessing any vulnerabilities that could be taken from the challenge?”
Convincing senior leaders at the Pentagon that it was a safe endeavor took time and effort, the digital service said. Chris Lynch, director of the DDS, said he briefed Defense Secretary Mattis’s staff on their first day in office about the program. The file transfer tool is important because it securely moves some of the most important information for Defense Department missions both within the Pentagon and in the field.
“We have an absolute need to be able to relay a command, trust that it’s going to get to a destination and interpret that and then do what it says,” Lynch said in an interview. “If there’s any element when you don’t have trust in that pipeline, that undermines a lot of how the department works.”
The digital service urged hackers to try bypassing the file-transfer protections; pull data out of a network that they weren’t supposed to have accessed; and “own the box,” or take control of the system. Officials won’t specify the gaps that were discovered, but say department cyber experts are now fixing the problems.
The program grew out of earlier projects by the digital service, which is part of the White House’s U.S. Digital Service, started by the Obama administration and so far retained under President Donald Trump. Last year, the service held “Hack the Pentagon,” where outsiders hunted for bugs in the Defense Department’s public websites. The file transfer exercise marked the first attempt to pool hacking talent for internal networks.
See also: Does Cisco’s Data Center Analytics Update Truly Enable Zero-Trust?
Synack, which has done similar custom hacking programs at banks and credit card companies, paid hackers based on the severity of the problem they uncovered. The biggest reward totaled $30,000 in the recent competition.
The experiment comes as the Defense Department faces challenges in handling cybersecurity. The department bolstered spending on capabilities and expertise to build better cyber defenses, yet during tests, critical combatant command missions remain at risk from advanced nation-state actors, according to the Pentagon testing director’s annual report published in January.
Modern Warfare
“Cyber-attacks are clearly a part of modern warfare, and DOD networks are constantly under attack,” the report said. “However, DOD personnel too often treat network defense as an administration function, not a warfighting capability,” and until that approach changes, the department “will continue to struggle to adequately defend its systems and networks from advanced cyber-attacks.”
In addition, the need for “red teams” – cyber experts that test whether department networks and systems can withstand intrusions – has more than doubled in the past few years. But a significant number have left for the private sector, finding better salaries and more relaxed work settings. As a result, the remaining red teams “are unable to meet current DOD demand,” the testing director said.
The digital service says other parts of the Pentagon have expressed interest in doing similar tailored hacking projects, including around the security of ground command and control systems and internal human resources portals. Sometimes it’s the simplest cracks found in the networks that most unsettle cyber experts.
“An adversary doesn’t need to spend millions of dollars focusing on the most serious, complicated flaws,” Wiswell said. “When we do stupid basic things you bet the adversary would rather use that vector into our networks because it’s cheaper – we’ve lowered the barrier to entry.”
5:23p |
The Doyle Report: New Rules for Computing
By The VAR Guy
Depending on how you count, we are either in the third wave of technology transformation, the fourth or even the fifth.
The designations don’t matter as much as the impact. Ross Brown, senior vice president of worldwide partners and alliances at VMware, simply calls the current era the “new wave” of computing. It is having profound impacts on the channel and beyond. Take software development.
If something is readily available in the cloud, why not leverage it? Many technology buyers don’t need or want ownership of basic capabilities. They simply want to move swiftly ahead with their digital objectives.
“This shift in philosophy is foundational,” says Brown. This is because the thinking now disconnects physical infrastructure from applications delivery, which, though admittedly wonky, is a big deal to CIOs and the partners that support them. For most of their careers, these IT professionals have prioritized things such as infrastructure, security, redundancy, etc. Now? Competitive pressures have them thinking more about functionality, ease-of-use and time-to-market.
For partners who used to think in terms of resource optimization, five-nines reliability or bullet-proof security, this change in priorities is a radical shift. Speaking recently in San Jose at a tech event for channel leaders, Brown identified what he believes are the “new rules” of computing for 2017. They include the following.
- The SaaS experience has become the norm. The notion of “always on, just works and comes with 24×7 remote support” functionality has become the new corporate standard. No matter where they are developed, how they are deployed or even paid for, customers want the access and performance of business apps to work like Box or Office 365.
- The idea of a “single, integrated stack” has taken hold. Customers want one model for security, storage, network topology and more. While this has led to a fight among the leading stack developers (think the Cloud Foundation, Microsoft Azure, OpenStack, etc.), it has given hope to CIOs everywhere that they can shift more of their spending to app development and deployment from systems integration. Partners who believe that customers will value the stacks they cobble together from piece parts will fall behind.
- The abstraction of hardware has led to the separation of hardware configuration and management from software capabilities. “The move from ‘I design with hardware in mind’ to ‘I design with abstraction in mind’ is really key,” says Brown.
- Software-as-appliances are in decline. Former “software-as-appliance vendors” are now delivering intellectual property (IP) as virtual machines (VMs) in stacks, including firewalls, IPS, WAN gateways and more.
- Customers now worry about cloud lock-in on various code bases.
These new realities have left many solution providers in a quandary, especially those that sell networking and storage solutions but do not touch applications. How can they stay relevant when the bulk of spending is going to line-of-business managers who are interested in business outcomes and not systems integration? It’s a pressing question for thousands of channel companies.
So what does it all mean? Several things. For one, “your mess for less” outsourcing will suffer. IP assets sold as virtual services will grow increasingly attractive compared with IT value delivered by physical labor.
In addition, Brown predicts, single-layer solution partners and VARs, no matter how large and capable, will face headwinds. The CIOs they sell to are growing weary of serving as project management offices (PMOs) that take orders from line-of-business executives and then break down these requests into discrete tasks doled out to third party contractors and in-house staffers. “The normal model of enlisting a VAR or MSP to do a discrete task around a layer of IT is under direct attack,” Brown says.
In the meantime, ISV and application-specific IP will become a sustainable differentiator for many partners who change their business models.
Finally, appliances will go virtual and be designed by IP owners, not integrated by CIOs using the “PMO” approach.
When will all this happen? Brown says the channel can expect big changes from vendors in the next 24-36 months, which will lead to widespread customer shifts over the next five years and more.
“I’m teased internally that I’m always looking ahead three years out in a company that looks at things 90 days out,” jokes Brown. “But that’s the role of a good channel chief.”
This article originally appeared on The VAR Guy.
7:25p |
Alibaba Doubles Hong Kong Data Center Capacity and Other APAC Data Center News
Alibaba Expands Hong Kong Data Center Capacity
Alibaba has substantially expanded its Hong Kong data center capacity, more than doubling the physical footprint of its cloud in one of Asia Pacific’s key financial and connectivity hubs.
Alibaba, whose cloud customers include Nestle, Philips, and Schneider Electric, also recently opened data centers in Germany, Australia, Japan, and the United Arab Emirates. Its cloud now has a physical footprint in 14 regions. The biggest concentration is in mainland China, but there are also Alibaba cloud data centers in Silicon Valley, Northern Virginia, and Sydney.
Read more: Hong Kong, China’s Data Center Gateway to the World
STT Launches Three Singapore Data Centers
Singapore Technologies Telemedia, which last year bought a majority stake in Tata Communications’ data center business in India and Singapore, has brought online three colocation data centers in Singapore, bringing its footprint in the major network and business hub to five facilities.
Like Hong Kong, Singapore is a network gateway to the major markets in mainland China, making it a coveted data center location for foreign companies looking to serve the Chinese market or Chinese companies looking to do business internationally.
STT GDC, STT’s data center colocation business, now operates 45 data centers in Singapore, China, India, and the UK, a market it entered through a joint venture with Virtus, acquiring a 49 percent stake in the UK data center provider.
Read more: Mega-Clouds Drive Shift to Mega-Data Centers in Singapore
Australian Data Center Startup AirTrunk Raises AU$400M
AirTrunk, an Australian colocation startup building massive data centers in the Melbourne and Sydney markets, has secured AU$400 million (US$305 million) in new funding, the majority of it coming from Goldman Sachs and TPSS. The deal is a combination of equity (about 60 percent) and debt, according to The Australian Financial Review.
The company was founded by Robin Khuda, the former CFO of NextDC, one of Australia’s biggest colocation providers, which went public in 2010. Khuda is hoping to take advantage of the demand for data center capacity by hyperscale public cloud providers, such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform.