Data Center Knowledge | News and analysis for the data center industry

Tuesday, February 7th, 2017

    2:02p
    Does Cisco’s Data Center Analytics Update Truly Enable Zero-Trust?

    The latest release of Cisco Tetration, its data center analytics service for networked application performance, announced last week, completes a feature that many DevOps professionals have been requesting – a feature Cisco touted when the service was first introduced last June: application segmentation. It should enable DevOps to devise rules and policies for network traffic based solely upon the applications that generate it.

    Cisco told reporters that its implementation of app segmentation fully enables the zero-trust model, which security engineers define as a policy enforcement regime that treats all traffic as untrusted unless a rule is enforced to explicitly allow it.

    “The way one implements zero-trust is [to] assume all traffic is bad unless a policy states otherwise,” Yogesh Kaushik, Cisco’s senior director of product management, wrote in a note to Data Center Knowledge.

    But professionals in the infosec and networking industries say Cisco’s implementation of machine learning algorithms — as Kaushik described it — may not be zero-trust as it’s generally known.  At issue is whether a baseline of trust, even if that baseline is generated by an algorithm, provides the reliable default level of skepticism that DevOps expects.

    What’s the big deal about getting the definition right? Cloud service providers, enterprise data centers, and public sector services at the municipal, state, and federal levels are all taking zero-trust more seriously as a more effective methodology for locking down systems and protecting customer data.

    When the cloud services market became bogged down with an overabundance of “services-as-a-service,” the US Commerce Dept.’s NIST weighed in, publishing specifications for SaaS, PaaS, and IaaS that the world now follows.  As important as personal data protection has already become, NIST, or an agency commanding equal respect, may be called upon to decide where trust ends and skepticism begins.

    Baseline

    The telemetry that populates Tetration’s data center analytics engine is acquired from multiple sources, including APM-like agents inserted into workloads, as well as directly from Cisco’s Nexus 9000 switches, as we reported last June. At a granular level, Tetration aims to determine which applications are responsible for which packets, and to execute and enforce rules based on those determinations.

    As Cisco’s Kaushik told us, “The first problem Tetration solved was looking at the current data center and apps and showing exactly what communication occurs.  We then use machine learning to identify patterns of behavior and a baseline.  The customers at this point can either (a) take the current pattern and implement a policy, so the same behavior persists in future (if it ain’t broke, don’t touch it); or (b) a better model:  Use the baseline for what it is: a baseline for current behavior, and start pruning edges and communications to see how it impacts applications.  So with few iterations, you can tighten the policy.”

    Kaushik acknowledged that this second release of the data center analytics service is still limited to virtual machine-based workloads, as opposed to Docker or OCI containers on distributed or microservices systems. Container support remains forthcoming, he said.

    DevOps will be able to write policies for Tetration using a number of methods, he added, including through an open API that accesses its streaming analytics using the Apache Kafka model.  This way, developers or DevOps may write scripts or applications that address Tetration using languages such as Python and Scala.
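    For illustration, here is a minimal sketch of what consuming such a Kafka-style stream could look like in Python, using the open source kafka-python client. The broker address, topic name, and message fields below are hypothetical stand-ins, not Cisco’s actual schema:

    ```python
    # A minimal sketch of consuming a Kafka-style telemetry stream in Python.
    # Broker address, topic name, and event fields are hypothetical.
    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "tetration-flow-events",                       # hypothetical topic name
        bootstrap_servers=["analytics.example.com:9092"],
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # Flag traffic whose source application is not on an allow list.
        if event.get("app_id") not in {"finance-db", "payroll-api"}:
            print(f"untrusted flow: {event.get('src')} -> {event.get('dst')}")
    ```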

    The latest Tetration hardens the definition of what’s trustworthy and what’s not, enabling the individuals responsible for managing specific classes of traffic to make those determinations. For example, he suggested, an information security professional may decide that a financial database must be inaccessible to all but authorized finance applications, or alternately, that all Windows Server instances missing a specific security patch should be treated as inoperative. (Microsoft has implemented a similar feature in its own server OS since Windows Server 2008 R2.)

    “The Tetration platform takes all this input, along with current behavioral data, and merges the policy to create a unified common trust model,” Kaushik went on.  “If someone changes one of the rules, the platform re-computes the complete trust model in real-time.  That’s what we push down to servers and infrastructure.  If a new workload pops up, we push the right policy based on its attributes, such as finance data or unpatched OS, etc., rather than ephemeral characteristics like IP address, etc.”
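    As a toy illustration of that idea (not Cisco’s code), the sketch below matches workloads on attributes such as “finance” or “unpatched” rather than on IP addresses, and falls back to deny, the zero-trust default Kaushik describes. All names and rules are invented:

    ```python
    # Toy attribute-based policy selection with a zero-trust default:
    # workloads match on attributes, never on ephemeral traits like IP,
    # and anything unmatched is denied. All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        attributes: frozenset  # e.g. {"finance"}, {"windows", "unpatched"}

    # Ordered rules: (required attributes, action). First match wins.
    RULES = [
        (frozenset({"windows", "unpatched"}), "quarantine"),  # treat as inoperative
        (frozenset({"finance"}), "allow-finance-apps-only"),
    ]

    def policy_for(workload: Workload) -> str:
        for required, action in RULES:
            if required <= workload.attributes:  # all required attributes present
                return action
        return "deny"  # zero-trust default: no rule, no trust

    print(policy_for(Workload("new-vm-17", frozenset({"windows", "unpatched"}))))  # quarantine
    print(policy_for(Workload("mystery-vm", frozenset({"linux"}))))                # deny
    ```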

    Zero-Trust with Added Trust?

    “In the spirit of zero-trust (verify, but never trust),” Lori MacVittie, F5 Networks’ principal technical evangelist, wrote in a note to Data Center Knowledge, “accepting a set of exceptions generated on the basis of ‘normalcy’ — which implies frequency and not necessarily legitimacy — certainly seems to negate the purpose.  Doing so automatically pretty much guarantees violation, as there’s little ‘verify’ occurring.”

    “My understanding of the concept of zero-trust,” stated Chet Wisniewski, senior security advisor for security services maker Sophos, “is literally what the words are.  You don’t trust anything.

    “The antiquated concept that there’s bad guys on the outside and good things on the inside — and that there is such a thing as an ‘inside’ and an ‘outside’ — is a broken model,” Wisniewski continued.  “Zero-trust turns that model around, and says, ‘Just because Sally and Greg work for you, don’t trust that what they’re doing is safe.’  Because it may or may not be.”

    Rather than whitelisting baseline behaviors, Wisniewski perceives zero-trust as accepting nothing as normal or natural behavior — not even what machine learning algorithms may detect.  “No traffic, by its existence or where it came from, is necessarily good traffic.”

    That said, he acknowledges that methods of learning general traffic patterns — even algorithmic methods — may not only be useful but necessary. This way, aberrations such as “Sally” accessing a boatload of records at 3:00 a.m. can be red-flagged.
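    A minimal sketch of that kind of baseline check might look like the following: flag a user whose record-access volume deviates sharply from their own learned hourly pattern. The data and threshold here are invented:

    ```python
    # Flag a user whose access volume is far above their own learned baseline.
    # Numbers and threshold are invented for illustration.
    from statistics import mean, stdev

    def is_anomalous(history: list, current: int, threshold: float = 3.0) -> bool:
        """Flag if `current` is more than `threshold` standard deviations above baseline."""
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and (current - mu) / sigma > threshold

    # Sally normally touches a few dozen records per hour...
    baseline = [22, 31, 27, 19, 35, 28, 24, 30]
    # ...then pulls 4,000 at 3:00 a.m.
    print(is_anomalous(baseline, 4000))  # True -> red-flag for review, not auto-trust
    ```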

    “The model has to be fluid,” said Wisniewski.  “It’s not something that a human being can define, because Sally’s job may evolve over time, and she may have a role change.  There are a million different things that have to be fluid in an environment, and it’s really hard for models to evolve if they’re made by humans.  There needs to be an algorithmic angle to it.”

    That actually sounds more and more like the model Cisco’s Kaushik described.  So maybe the problem is not with how Tetration actually works — just how the data center analytics service is being marketed.

    “Zero-trust requires a legitimate reason for the exception other than, ‘It’s normal behavior,’ particularly when it comes to applications,” wrote F5’s MacVittie.  “Just because there’s regular (normal) communication between two apps or app components does not (or should not) imply it’s legitimate, especially if you’re trying to move an existing environment to a zero-trust model.”

    2:03p
    Top Five Data Center Stories: Week of February 3

    Here are the top stories that appeared on Data Center Knowledge this week:

    Google Ramped Up Data Center Spend in 2016 – The spike in capital spending is also in line with Google’s announcement last March that it would ramp up investment in data centers to support its enterprise cloud services, a business whose growth has been a top priority for the company as it races to catch up with cloud leaders Amazon and Microsoft.

    Delta Cancels 280 Flights Due to IT Outage – This was a second system-wide outage for Delta in six months due to IT problems and a second major airline outage within a week’s time. More than 200 United Airlines flights were affected by an IT outage on January 29.

    QTS Buys Large Dallas Data Center from Insurer HCSC – There is demand for large-capacity data center leases in the Dallas data center market from hyperscale cloud companies as well as pent up demand from more traditional corporations, and data center providers like QTS are racing to make inventory available to these customers, according to the commercial real estate firm Jones Lang LaSalle.

    Vertiv Uses Machine Learning to Automate Data Center Cooling – The idea with iCOM Autotuning, Vertiv’s new software feature, is to use machine learning techniques to control all of the elements automatically.

    Digital Bridge-Backed DataBank Buys Cleveland, Pittsburgh Sites from 365 Data Centers – DataBank considers the data centers in Cleveland and Pittsburgh “key interconnection assets” and plans to leverage them as “anchors” for further expansion in the two markets.

    Stay current on data center industry news by subscribing to our RSS feed and daily e-mail updates, by following us on Twitter or Facebook, or by joining our LinkedIn Group, Data Center Knowledge.

    2:03p
    Getting to a Digital State: What’s the Rush?

    David M. Gervon is the Channel Manager for Cisco/Meraki.

    The world is changing and evolving every day. Technology is at the forefront of this, moving with a velocity that we have never seen before.

    Very recently, Cisco released its latest Cloud Index Report in an ongoing effort to examine the growth and digitization of the modern industry and the data center. The report showed us that:

    • Hyperscale data center traffic will quintuple by 2020 and will account for 53 percent of all traffic within a data center.
    • Global cloud IP traffic will account for more than 92 percent of total data center traffic by 2020.
    • By 2020, enterprise workloads will account for 72 percent of total data center workloads.
    • When it comes to big data and IoE, the amount of data stored on devices will be five times higher than data stored in data centers, at 5.3ZB by 2020.

    Because of these rapid advancements, we are able to accomplish tasks faster, have meetings around the world without leaving our desks, and may soon travel hands-free. This is fantastic while trying to achieve the highest level of efficiency, but is there loss?

    What happens to our brains when we utilize all that is around us to accomplish a task faster? From handwriting to typewriters, to computers, to talk-to-text. Does elegance leave? Does thought slow, or does it speed up? Can we say it better if we do it slower, or, at the speed of light, is our first thought the best thought?

    I cannot answer these questions, but I definitely want to know.

    One example is participating in meetings around the world without leaving our desk. I agree this is great, and I do this multiple times a day/week. From 7:30 a.m. to 7:30 p.m., I can reach and educate many audiences; but what is lost, or is there loss?

    Personally, I like to have an in-person meeting sometime prior to that first remote call. This is not always practical, so many times after the first call I try to schedule that in-person meeting.

    Here’s why.

    Much is lost without interpersonal live communication, especially when people do not have their video turned on. Even if they do, it’s sometimes difficult to lead a meeting while trying to look at everyone’s video. I pride myself on being able to educate people the way that they learn best, and without being in front of them, this is a difficult feat.

    Furthermore, pleasantries and non-work discussions are sometimes dismissed on the 30-minute call or meeting that is scheduled. Many do not realize that this “non-work stuff” is an essential part of, and key to, longevity in business. Yes, we can transact over and over again, but think about what happens when you actually take the time to get to know someone. There is a point where you go from people selling to people, to people helping people. Relationships are built on trust, not on a pile of cash transactions.

    Let’s move to everyday interactions. A little less walking and texting, headphones in, our minds racing. And a little more “good morning,” “please” and “thank you,” and “excuse me.” Give this a try next time you board a plane, check into a hotel, or call customer service: “Hello, how are you?” The person on the receiving end may be disarmed and surprised, but I bet they smile and respond (you can hear smiles over the phone, too).

    Now, all of this seems negative in regard to technology, and it may be. However, I am a strong believer in technology, disruption, and this digital revolution. What people need to realize is that there is balance to everything. We must leverage technology as a tool while still building our interpersonal communication skills. Gartner estimates that by 2020, 100 percent of IT roles will require an intermediate level of proficiency in business acumen. This means that administrators, engineers, architects, and IT professionals must be able to communicate effectively to be successful in a digital world.

    The firm goes on to state that systematic communication practices will allow CIOs to clearly identify the changes in the business and how they will affect the IT strategy, establish clear roles and contributions on an employee level, and inspire actions and commitments to deliver better business results.

    “Developing strong business acumen in IT is a prerequisite to effectively shift IT focus from optimizing IT operational efficiency to driving business effectiveness, value creation and growth,” said Lily Mok, research vice president at Gartner. “At the heart of an effective IT communication strategy is the ability to clearly link the vision, strategy and action plans of IT to the business to drive desired behaviors in the workforce that contribute to improved IT performance and business outcomes.”

    The person who understands how to home in on and utilize new, powerful technologies, while also maintaining a strong connection to the analog world, is the one with the keys to unlock this new, modern communication paradigm.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    5:41p
    Data Centers Scrambling to Fill IT Skills Gap

    It’s a tough world when it comes to recruiting and retaining staff. According to a study by TEKsystems, 81 percent of IT leaders say it’s difficult to find quality candidates, and almost half don’t expect to fill an IT position within the anticipated time frame. Meanwhile, only about one-third of data center managers and CIOs believe their organizations have the skills in-house to address their needs.

    This is being made all the more acute by the changes taking place within IT. The cloud, convergence, the Internet of Things (IoT), virtualization and mobility have shifted the demands being placed upon the data center. Modern technology configurations and a reliance on external services are shifting staffing and training priorities.

    “The evolving digital world and the cloud require a change in data center strategy with different skill sets coming to prominence,” says Karsten Scherer, global analyst relations lead for TEKsystems.

    He sees a growing tension between traditional data centers and the mega data centers operated by service providers such as Google, Amazon Web Services and Microsoft Azure. The latter group is either increasing facility size or adding new data center facilities.

    “They are building out their data center presence to satisfy demand and they compete with traditional data centers for the same pool of talent,” says Scherer. “Coupled with attrition, an aging workforce and younger people wanting to work elsewhere, we have competition for a dwindling pool of labor regarding data center skills that is only getting worse.”

    While the pool of qualified professionals for the data center is stretched thin, the roles themselves are evolving. The growing prevalence of hybrid environments means that data center managers have to learn to deal with a mix of on-premises assets as well as off-premises services they source from vendors via the cloud. As a result, the building blocks of job descriptions are beginning to reflect this: Scherer notes a higher emphasis on wording like emotional intelligence, bridge-building and silo-busting, as well as terms such as creative, adaptable, strategic, innovative, alliance-builder and negotiator cropping up more and more in job requirements.

    “Traditional data center skills won’t go away, but there is a greater need for skills on how to manage cloud vendor relationships as well as aligning the cloud to the overall goals of the business,” says Scherer. “Architecture-oriented roles and business analysts are seen as critical in order to pull all of these parts of the ecosystem together and help them talk to each other.”

    TEKsystems’ annual survey indicates that programmer and developer skill sets are the hardest to place, and have remained so over the past three years. For 2015, software engineers were the second most difficult role to fill, while architects, project managers and security specialists came in third, fourth and fifth, respectively. Additionally, 51 percent of respondents expect to pay increased salaries for these skill sets due to the competitive nature of these roles.

    Staff Development

    Greg Schulz, an analyst with StorageIO Group, urged data center managers to develop broader skill sets among their staff. A combination of infrastructure skills spanning servers, storage, networking hardware, software, services, cloud, virtualization and data protection will be vital, as data center personnel will be expected to cover a much wider area. A few specialists will remain, but more and more staff will be required to possess a broad range of skills. Data center managers, therefore, are advised to begin training personnel in duties that sit beyond their currently assigned roles.

    “Data center infrastructure management (DCIM) is another area that will become more important over time,” says Schulz. “It addresses habitats for technology and facilities, including energy management, how it ties to the applications and the work they are doing.”

    Scherer echoes this. He says that given the increased focus on brokerage duties between the cloud and the internal data center, several staff should be trained to act as the interface between vendors, IT and line of business leaders. These individuals have to be good at managing contracts and Service Level Agreements (SLAs), procurement and more. These are not always talents that come naturally to technical specialists so some training is probably going to be necessary.

    “The cloud requires a broader knowledge base and set of skills, so this will impact the more focused support roles like storage admins, server admins, client services and help desk,” says Scherer. “Many organizations are re-training these IT pros to augment their skill sets toward a wider set of tasks.”

    New Talent

    However, training alone won’t cut it. Attrition is inevitable. Some personnel will retire, some will relocate, and a few could well be poached by the Googles of this world with the allure of bean bag seats, air hockey tables and an endless supply of M&Ms in the workspace. So data centers will have to up their game to find new talent.

    “As far as attracting new talent, it’s important to communicate what’s being brought to the table that’s of value to the job candidate—not just salary and benefits, but also the kinds of projects the employee will work on, the room for growth and the skills that can be acquired,” said Scherer.

    In some cases, it may be wise to engage a third party to help find qualified staff additions. TEKsystems surveys indicate that data centers are increasingly leaning on external partners to find candidates for their open roles.

    Non-traditional talent pools are also an important area to include in the search for new blood. Military veterans, local universities and STEM (science, technology, engineering and math) programs are gaining in popularity as ways to strengthen the talent pipeline.

    And AFCOM and Data Center World Global 2017, in Los Angeles April 3-6, are taking action to help address this issue and plan for the future by introducing the industry to college students and individuals interested in STEM topics. An educational session will welcome local area college students to the industry and the event. Immediately following this panel introduction, students will be given a tour of the Data Center World and HostingCon exhibit halls to talk to industry leaders and vendors. Find out more here.

    Skills Gap

    Data center managers are advised to heed this advice even if the current picture in their own data center is relatively rosy. Numbers from numerous sources all point to the same thing—challenging days lie ahead on the personnel front.

    As well as the TEKsystems survey results showcased above, the latest CompTIA IT Skills Gap survey shows the existence of a significant shortfall of data center talent. Most firms (72 percent) plan on addressing this problem with staff training. Additionally, the latest report from Foote Partners discovered that the average market value for a total of 368 IT certifications being tracked has increased for eight consecutive quarters. According to Ted Lane, an analyst at Foote Partners, this is unprecedented in the 16 years his company has been tracking and reporting compensation for IT skills and certifications. Similarly, 406 non-certified skills being tracked posted gains in market value in 2015.

    In other words, IT skills are in high demand, and this is showing up as higher pay rates. Data center managers can therefore expect it to be tougher to hire new talent; they will likely have to pay more for these individuals to have any chance of landing them; and they will face more pressure than ever to retain their current staff, many of whom will be on the shortlists of persuasive headhunters. All of this is happening while the cloud pulls more and more functions out of internal data centers, threatening their very existence.

    “Data center managers must try to ensure the overall acumen of their team grows to include capabilities to work with their cloud vendors effectively,” says Scherer. “Companies that don’t have the internal firepower to manage or guide those relationships and hold those vendors accountable regarding performance toward their business objectives will not reap the potential that the cloud model promises.”

    5:59p
    Machine Learning Gains Momentum in MSP Space

    Brought to you by MSPmentor

    Broad adoption of powerful cloud computing has unleashed innovation in artificial intelligence technologies, and 2017 is poised to be the year that AI and machine learning applications make their way into the hands of the general public.

    For those in IT and – more specifically – the managed services space, tools driven by AI are increasingly popping up in everything from customer service and security, to CRM and remote monitoring and management.

    Machine learning can have a particular impact for IT tech services firms, where increased efficiency can translate directly into more revenue falling to the bottom line.

    See also: A Cloud for the Artificial Mind: This Data Center is Designed for Deep Learning

    “There is an absolute revolution occurring in artificial intelligence,” John Ball, general manager of Salesforce Einstein, told Bloomberg when that AI product launched in September.

    Machine learning, which represents one type of artificial intelligence, is joined at the hip with big data.

    The technology can access voluminous data from applications like the Salesforce CRM, discern patterns and apply algorithms to automatically change the operation of applications and improve business outcomes.

    Einstein, for example, can help salespeople reach appropriate prospects more effectively or guide visitors in a strategically customized journey through a company’s website.

    Salesforce users don’t need to do anything to leverage the machine learning capabilities. They simply use the CRM and the features work automatically in the background.

    “This is democratizing AI so that every company can benefit from these techniques,” Ball said.

    See also: How a Tech Company from the 60s is taking on AI, IoT

    Automating customer service interactions with support bots has been among the more common early applications of AI technology.

    Microsoft Bot Framework is designed to work with email, texts, Facebook Instant Messenger, Skype and other platforms to provide automated service interactions with customers.

    SupportBots.io, a service developed specifically for MSPs and IT services providers, can be linked to a company’s PSA to enable automated creation or modification of support tickets, scheduling of appointments, or arranging of a call back from a live service representative.

    But even those applications are merely scratching the surface of AI’s potential in IT.

    In November, Symantec launched Endpoint Protection 14, a layered suite of cyber-defense tools that relies on machine learning to detect potential threats and execute a response based on analysis of more than 4 trillion threat types previously identified through log data.

    “Symantec Endpoint Protection 14 is the industry’s first solution to fuse essential endpoint technologies with advanced machine learning and memory exploit mitigation in a single agent, delivering a multi-layered solution able to stop advanced threats and respond at the endpoint regardless of how the attack is launched,” the company said in a statement announcing the launch.

    Other potential applications in the IT services space are just in their infancy.

    MSP toolset maker Atera announced this week that it was doubling the size of its research and development budget, seeking to innovate new capabilities for its all-in-one RMM, PSA and remote access platform.

    Among other applications, Atera CEO Gil Pekelman envisions incorporating machine learning to help MSPs with tasks like how to program an RMM to monitor an optimum assortment of metrics.

    “You have hundreds of things to choose from,” he said. “How do you know what are the right things to monitor?”

    In the past, such decisions were based on the MSP’s experience, intuition and maybe suggestions from peers, Pekelman said.

    “Here, machine learning is able to (evaluate) the whole list and start learning,” he said. “It’s deciding all the time what is the optimal combination of data points to monitor…while legacy systems require the human to make a best guess.”
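    As a hedged sketch of one way such a ranking could work (this is not Atera’s implementation), the snippet below scores candidate metrics by their mutual information with historical incident labels using scikit-learn, on invented data:

    ```python
    # Rank candidate monitoring metrics by mutual information with past
    # incident labels; keep the most informative. Data here is invented.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    metric_names = ["cpu_pct", "disk_queue", "ping_ms", "fan_rpm"]

    # Rows = hourly snapshots of each metric; y = whether an incident followed.
    X = rng.normal(size=(500, len(metric_names)))
    y = (X[:, 1] + 0.1 * rng.normal(size=500) > 1.0).astype(int)  # incidents track disk_queue

    scores = mutual_info_classif(X, y, random_state=0)
    ranked = sorted(zip(metric_names, scores), key=lambda kv: -kv[1])
    print(ranked)  # disk_queue should rank first -> monitor it
    ```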

    But as more and more such decisions come to be made by machines, new questions will arise about the composition and duties of MSP employees, Pekelman said.

    “What becomes the place of the human in this environment? It’s innovation,” he said. “The MSP is going to be taking care of strategy and architecture,” Pekelman continued. “That, the machine can’t do.”

    This article originally appeared on MSPmentor.

    6:08p
    The Multi-Cloud Convergence Tipping Point

    Tony Bishop works in Global Vertical Strategy & Marketing at Equinix.

    Cloud adoption has matured to an advanced stage where enterprises increasingly rely on cloud infrastructure, and the industry at large is extremely bullish when it comes to cloud futures. Cisco predicts that global cloud IP traffic will almost quadruple between 2015 and 2020, reaching 14.1 zettabytes. By then, global cloud IP traffic will account for more than 92 percent of total data center traffic. This surge in cloud adoption also represents a huge shift in cloud spending by IT organizations, directly or indirectly affecting more than $1 trillion in IT purchases dedicated to the cloud by 2020, according to Gartner.

    Forrester predicts 2017 will be the tipping point for cloud adoption and sees a convergence of multiple clouds across the enterprise as “CIOs step up to orchestrate cloud ecosystems that connect employees, customers, partners, vendors and devices to serve rising customer expectations.”

    In 2017, more than 85 percent of enterprises will commit to multi-cloud architectures that IDC describes as “encompassing a mix of public cloud services, private clouds, community clouds and hosted clouds.” We see much of the multi-cloud migration among our customers stemming from diverse organic cloud adoption by different groups within the same organization. And the majority of enterprise hybrid-cloud adoption is coming from businesses leveraging the flexibility and cost-effectiveness of public clouds while securing sensitive assets in on-premises IT or a private cloud for protection and compliance.

    Are You Ready for Cloud Convergence?

    The cloud is now a major catalyst for changing how enterprises will do business in the emerging global digital economy. Some of its greatest benefits to organizations are:

    • Faster access to infrastructure and IT resources and services
    • Greater speed-to-market and global expansion
    • Business continuity and disaster recovery
    • Higher performance and scalability

    The economies of scale of pay-per-use cloud business models are also enticing enterprises to move to the cloud; however, these models have also been a major source of confusion and frustration for companies. Today, there are multiple ways to buy cloud services (on-demand, pre-paid, reserved capacity, monthly enterprise agreements), and this trend will accelerate in 2017.

    Also, migrating applications that are not “cloud-ready” is not a slam-dunk. This has brought about the rise of cloud migration and orchestration tools, such as open source container technologies (Docker, Mesosphere) and container-based migration services from leading cloud providers such as Amazon, Google and Microsoft. These solutions are making the “lift-and-shift” application migration model more viable, and it is expected that in 2017 these tools, along with advancements in automated cloud orchestration and management, will accelerate the rate of cloud migration, given their low cost for bulk application migrations.

    The Next Steps

    Ultimately, a well-planned hybrid and multi-cloud migration strategy is necessary to facilitate comprehensive assessment, migration and optimization plans that reduce cloud migration risks and costs. Your strategy should also include cloud exchanges for fast, cost-effective, direct and secure provisioning of virtualized connections to multiple cloud vendors and services, to best leverage the flexibility and agility that converged cloud infrastructures contribute to becoming a competitive digital business.

    Ultimately, it will be a ubiquitous cloud infrastructure providing the backbone for digital business. To ensure cloud convergence success, cloud strategies cannot be siloed and fixed; rather, organizations need to take a more holistic, integrated and dynamic approach to cloud interconnection in order to best position business and IT infrastructures for digital transformation.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    7:12p
    Snowden-Era Paranoia Fuels Data Center Networking Startup Boom

    By Jordan Robertson (Bloomberg) — Of all the lasting effects of Edward Snowden’s leaks, there’s one photo that leaves a particularly strong mark. In it, U.S. federal employees in T-shirts and blue jeans are seen intercepting network equipment from Cisco Systems Inc. at a shipping facility. The feds in the photo, their faces obscured, were reprogramming the machines to spy on people’s activities.

    The image captured a deeply held paranoia within Silicon Valley’s biggest internet companies: In an era of increasingly sophisticated nation-state hacking, how can we trust that network infrastructure isn’t compromised before it’s dropped off at the company loading docks?

    This fear has created a sense of urgency for Apple Inc., Google, Facebook Inc. and other technology giants that have been devising their own alternatives to Cisco, which controls more than half of the market for network equipment. After the photo was published, Cisco filed a public complaint with the White House, arguing that spying by the National Security Agency was hurting U.S. companies. Cisco told Bloomberg it doesn’t work with governments on backdoors for its products and maintains tight checks on its processes and supply chain to assure customers of their security.

    Read more: NSA’s Hardware Tampering May Alter Global Product Flow

    While Cisco’s dominance isn’t in danger of slipping any time soon, the industry’s creeping concerns over cybersecurity have created an opening for new businesses and equipment-design skunkworks inside large companies. In the three years since the Snowden leaks, networking software and equipment startups raised $6.35 billion, a 47 percent increase over the prior three years, according to researcher CB Insights. “We’ve lost confidence in the vendors in the wake of the Snowden revelations, and that is a weakness and an opportunity,” John Kindervag said in an interview last month, while still a vice president at Forrester Research. (He has since left the market analysis firm to become an executive at Palo Alto Networks Inc.)

    One company that’s benefiting is SnapRoute Inc., which was founded by a former manager of Apple’s global data center network. The startup makes a cheaper, simpler network switch than the ones Cisco sells. And unlike most switches, it’s open-source, allowing customers to look for bugs, performance glitches or backdoors that might allow a government to peek inside.

    SnapRoute announced a $25 million round of funding Tuesday from AT&T Inc., Microsoft Corp., Lightspeed Venture Partners and Norwest Venture Partners. The startup counts Facebook among its customers.

    Facebook is also a founding member of the Open Compute Project, which develops and shares open-source data center designs. It launched the project in 2011 after revealing details about a data center it built in Prineville, Oregon, using only Facebook-designed servers, power supplies and backup systems. Alphabet Inc.’s Google, Apple, Goldman Sachs Group Inc. and Microsoft are now members. So is Cisco. It’s playing along with a potential competitor because Cisco Chief Executive Officer Chuck Robbins has said the company needs to be “part of every technology discussion that our customers want to have.”

    See also: Vendors Take Facebook Data Center Switches to Market

    The high cost of traditional networking products was the main reason for Amazon.com Inc.’s investment in creating its own equipment. “It was cost that caused us to head down our own path,” James Hamilton, vice president and distinguished hardware engineer for Amazon Web Services, said at a conference in November. “Networking gear is really expensive.”

    Besides looking to save a lot of money on premium equipment, companies are placing a higher value on transparency. Cisco guards its code and designs, making them difficult to repair when things break. A web hosting company filed for bankruptcy protection after a series of Cisco switches failed and a major customer left, while Cisco worked for months on a fix. Cisco has declined to comment on that case, saying only that it tries to fix problems quickly.

    Read more: Data Center Provider Peak Hosting Files for Bankruptcy

    By 2020, spending on open-source and self-built switches and other network technologies will account for at least 20 percent of the global data center market, up from less than 2 percent last year, according to researcher Gartner Inc. Big Switch Networks Inc., Cumulus Networks Inc., Pluribus Networks Inc. and SnapRoute are among the companies cultivating a niche that’s putting pressure on leaders Cisco and Juniper Networks Inc. and their proprietary code, said Naresh Singh, an analyst at Gartner.

    The giants are already under pressure from software-based networking alternatives like SnapRoute’s, and the adoption of open-source tools from mega users, such as Facebook and Goldman Sachs, poses an even bigger threat to their businesses, Singh said. Cisco said some companies balk at using open-source network equipment, citing maintenance “complexity and hidden costs.”

    SnapRoute founder Jason Forrester said the idea for his startup came from a key discovery he and his colleagues at Apple made when they began designing their own networking software and switches. Forrester (no relation to the market research firm) left Apple in 2015 but declined to talk in detail about his work there. “Switching wasn’t as hard as Cisco and others led customers to believe,” he said at SnapRoute’s offices in an industrial part of Palo Alto, California, located 10 miles from Apple’s campus.

    Switches from SnapRoute are $30,000 to $40,000 cheaper than comparable brand-name models, Forrester said. And whereas switches from Cisco and other big suppliers can have tens of millions of lines of code, SnapRoute’s has just 22,000, he said. This means fewer features, so SnapRoute may not be an attractive option for some companies. But the simpler code base makes it easier for customers to sift through in search of hidden backdoors.

    —With Ian King

    8:36p
    WordPress Bug Allows Hackers to Alter Website Content

    A bug in WordPress’s REST API endpoint allowed more than 67,000 websites to be hacked over the past two weeks, but the company has since rolled out a new version of the content management software with a patch to fix the problem, according to bleepingcomputer.com. The bug enabled hackers to infiltrate back-end systems and change or inject words within content.

    Although web security firm Sucuri informed WordPress back on Jan. 20 about the vulnerability affecting sites running versions 4.7 and 4.7.1, the two companies decided to wait until last week to publicly announce the bug, so that a fix could first be rolled out in WordPress 4.7.2, said Sucuri security researcher Marc-Alexandre Montpas in a blog post. If your website is one of the 27 percent of all sites that use WordPress (Data Center Knowledge being one), Sucuri highly recommends that you update to 4.7.2 as soon as possible.

    We have done so here, but not before a few headlines on Data Center Knowledge were altered to read “Hacked by (insert group name here)”. Sucuri also warned that the 4.7.2 update may not install automatically, even if auto-updates are turned on in WordPress.
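    For readers who want a quick self-check, the hypothetical sketch below fetches a site’s home page and warns if the advertised WordPress version predates the fix. It assumes the site still exposes WordPress’s default generator meta tag, which many sites disable, and the URL is a stand-in:

    ```python
    # Rough self-check: warn if a site's advertised WordPress version
    # predates the 4.7.2 fix. Assumes the default generator meta tag
    # is present, which many sites disable. URL is hypothetical.
    import re
    import requests  # pip install requests

    def wordpress_version(url):
        html = requests.get(url, timeout=10).text
        match = re.search(r'content="WordPress ([\d.]+)"', html)
        return match.group(1) if match else None

    version = wordpress_version("https://blog.example.com")  # hypothetical site
    if version and tuple(map(int, version.split("."))) < (4, 7, 2):
        print(f"WordPress {version} is vulnerable to the REST API bug; update to 4.7.2")
    ```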

    “Due to this type-juggling issue, it is then possible for an attacker to change the content of any post or page on a victim’s site,” Montpas wrote. “From there, they can add plugin-specific short codes to exploit vulnerabilities (that would otherwise be restricted to contributor roles), infect the site content with an SEO spam campaign, or inject ads, etc.”

    Although thousands of sites were compromised, and defacements until recently continued at a pace of 3,000 a day, according to bleepingcomputer.com, the damage would have been even more widespread had the public been notified of the bug right away.

    “We believe transparency is in the public’s best interest,” WordPress Core Contributor Aaron Campbell wrote in a blog post. “It is our stance that security issues should always be disclosed. In this case, we intentionally delayed disclosing this issue by one week to ensure the safety of millions of additional WordPress sites.”

    For additional information about the bug, visit The Whir.

    Also, at the upcoming Data Center World conference, April 3-6 in Los Angeles, renowned former hacker Kevin Mitnick will present tips for spotting and preventing such attacks during his keynote address.

