Data Center Knowledge | News and analysis for the data center industry
 

Friday, July 14th, 2017

    5:00p
    GCP Expands Data Center Reach to London

    The Google Cloud Platform is growing, with new data centers popping up everywhere. Yesterday it announced that a new Google data center in London is now open for business. This comes on the heels of recent cloud data center openings in Australia and Singapore.

    The company is working to catch up with Amazon Web Services, which is the reigning triple crown winner in the public cloud arena. At last count, AWS has 43 availability zones in 16 regions. With the addition of London, which adds three zones to Google’s global coverage, the search king’s cloud now weighs in with 30 zones in 10 regions.

    Does this mean Google is gaining on Amazon in physical presence? Hard to tell. GCP has plans to expand to Frankfurt, the Netherlands, and Finland in Europe, and to four other cities worldwide. That will bring Google to 50 zones in 17 regions, which would put it a couple of lengths ahead in the data center race. However, Amazon has plans to build new data centers to cover an additional 11 zones and four more regions, enough to comfortably stay in the lead.

    Map of Google Cloud Platform’s existing and planned data centers (Image: Google)

    Of course, there’s more to the public cloud business than the number of operational data centers. Plenty more. But having data centers located close to potential customers can make them…well, real customers. And there’s no shortage of potential customers in London. According to the Brookings Institution, the city has the fifth largest metropolitan economy on the planet.

    See also: Oracle’s Hurd Bullish on Cloud Business, Says Enterprise Market Largely Untapped

    “Incredible user experiences hinge on performant infrastructure,” Dave Stiver, product manager for Google Cloud Services explained in a blog announcing the opening. “GCP customers throughout the British Isles and Western Europe will see significant reductions in latency when they run their workloads in the London region. In cities like London, Dublin, Edinburgh, and Amsterdam, our performance testing shows 40%-82% reductions in round-trip time latency when serving customers from London compared with the Belgium region.”

    The Google data center in London will offer compute, big data, storage, and networking services.

    It’s notable that until yesterday the Belgium region was the closest region to the British Isles, which might’ve proved problematic for UK companies if Brexit is ever realized. According to Google, this was not a consideration, with GCP’s global president for cloud customers, Tariq Shaukat, telling Business Insider, “The decision pre-dates Brexit.”

    See also: Can Google Lure More Enterprises Inside Its Data Centers?

    If Brexit never comes to fruition, which seems to remain a possibility, GCP is already committed to compliance with all European Union regulations, including the privacy-centered General Data Protection Regulation, which is set to take effect in May 2018.

    “[W]e’ve worked diligently over the last decade to help customers directly address EU data protection requirements,” Stiver said. “Most recently, Google announced a commitment to GDPR compliance across GCP.”

    As might be expected, the response to the announcement by UK business boosters has been effusive.

    “Google’s decision to choose London for its latest Google Cloud Region is another vote of confidence in our world-leading digital economy and proof Britain is open for business,” Karen Bradley, Secretary of State for Digital, Culture, Media, and Sport said in a statement. “It’s great, but not surprising, to hear they’ve picked the UK because of the huge demand for this type of service from the nation’s firms.”

    5:20p
    Checklist for Getting a Grip on DDoS Attacks and the Botnet Army

    Heitor Faroni is Director of Solutions Marketing for Alcatel-Lucent Enterprise.

    Distributed Denial of Service (DDoS) attacks jumped into the mainstream consciousness last year after several high-profile cases. One of the largest and most widely reported was the Dyn takedown in fall 2016, an interesting example because it used poorly secured IoT devices to coordinate the attack. DDoS attacks are not a new threat, though; they have been around since the late ’90s.

    Gartner predicts there will be 20 billion connected devices by 2020 as part of the growing Internet of Things, so the need to implement the right network procedures and tools to properly secure all of these devices is only going to grow.

    The New Battleground – Rent-a-bots on the Rise

    Put simply, DDoS attacks occur when an attacker attempts to make a network resource unavailable to legitimate users by flooding the targeted network with superfluous traffic until it simply overwhelms the servers and knocks the service offline. Thousands and thousands of these attacks happen every year, and are increasing both in number and in scale. According to some reports, 2016 saw a 138 percent year-over-year increase in the total number of attacks greater than 100Gbps.

    The Dyn attack used the Mirai botnet which exploits poorly secured, IP-enabled “smart things” to swell its ranks of infected devices. It is programmed to scan for IoT devices that are still only protected by factory-set defaults or hard-coded usernames and passwords. Once infected, the device becomes a member of a botnet of tens of thousands of IoT devices, which can then bombard a selected target with malicious traffic.

    This botnet and others are available for hire online from enterprising cybercriminals; and as their functionalities and capabilities are expanded and refined, more and more connected devices will be at risk.

    So what steps can businesses take to protect themselves now and in the future?

    First: Contain the Threat

    With IoT at the heart of digital business transformation, and with its power to leverage some of the most important technological advances – such as big data, automation, machine learning and enterprise-wide visibility – new ways of managing networks and their web of connected devices are rushing to keep pace.

    A key development is IoT containment. This is a method of creating virtual isolated environments using network virtualization techniques. The idea is to group connected devices that serve a specific functional purpose, along with their respective authorized users, into a unique IoT container. You still have all users and devices in a corporation physically connected to a single converged network infrastructure, but they are logically isolated by these containers.

    Say, for example, the security team has 10 IP-surveillance cameras at a facility. By creating an IoT container for the security team’s network, IT staff can create a virtual, isolated network which cannot be accessed by unauthorized personnel – or be seen by other devices outside the virtual environment. If any part of the network outside of this environment is compromised, it will not spread to the surveillance network. This can be replicated for payroll systems, R&D or any other team within the business.
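
    To make the containment idea concrete, below is a minimal, purely illustrative Python sketch; the container names, device IDs and access check are invented for this example and are not any vendor's configuration syntax. It simply models how devices and their authorized users are grouped, and how cross-container access is refused.

        # Illustrative model of IoT containment: devices and users are grouped
        # into logically isolated "containers" on top of one shared physical network.
        from dataclasses import dataclass, field

        @dataclass
        class IoTContainer:
            name: str
            devices: set = field(default_factory=set)
            authorized_users: set = field(default_factory=set)

            def allows(self, user: str, device: str) -> bool:
                # Traffic is permitted only when both the device and the user
                # belong to this container; everything else stays isolated.
                return device in self.devices and user in self.authorized_users

        surveillance = IoTContainer(
            "surveillance",
            devices={f"ip-camera-{i}" for i in range(1, 11)},  # the 10 cameras above
            authorized_users={"security-team"},
        )
        payroll = IoTContainer("payroll", {"payroll-server"}, {"finance-team"})

        print(surveillance.allows("security-team", "ip-camera-3"))  # True
        print(surveillance.allows("finance-team", "ip-camera-3"))   # False: isolated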

    By creating a virtual IoT environment you can also ensure the right conditions for a group of devices to operate properly. Within a container, quality of service (QoS) rules can be enforced, and it is possible to reserve or limit bandwidth, prioritize mission-critical traffic and block undesired applications. For instance, the surveillance cameras that run a continuous feed may require a reserved amount of bandwidth, whereas critical-care machines in hospital units must get the highest priority. This QoS enforcement can be better accomplished by using switches enabled with deep-packet inspection, which can see both the packets traversing the network and the applications in use – so you know whether someone is accessing the CRM system, the security feeds or simply watching Netflix.
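
    As a hedged sketch of how such per-container rules might be expressed (the policy fields and application names below are made up for illustration, not a switch vendor's configuration syntax):

        # Illustrative per-container QoS policy: reserved bandwidth, traffic
        # priority and blocked applications, applied once deep-packet inspection
        # has identified a flow's application.
        QOS_POLICIES = {
            "surveillance":  {"reserved_mbps": 100, "priority": "high",    "blocked_apps": {"netflix"}},
            "critical-care": {"reserved_mbps": 50,  "priority": "highest", "blocked_apps": set()},
        }

        def handle_flow(container: str, app: str) -> str:
            """Decide how to treat a flow identified by DPI inside a given container."""
            policy = QOS_POLICIES.get(container)
            if policy is None:
                return "drop"                      # no policy for this container
            if app in policy["blocked_apps"]:
                return "block"                     # undesired application
            return "queue:" + policy["priority"]   # forward in the priority queue

        print(handle_flow("surveillance", "rtsp-feed"))  # queue:high
        print(handle_flow("surveillance", "netflix"))    # block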

    Second: Protection at the Switch 

    Businesses should ensure that switch vendors are taking the threat seriously and putting in place procedures to maximize hardware protection. A good approach can be summed up in a three-pronged strategy.

    • A second pair of eyes – make sure the switch operating system is verified by third-party security experts. Some companies may shy away from sharing source code to be verified by industry specialists, but it is important to look at manufacturers that have ongoing relationships with leading industry security experts.
    • Scrambled code means one switch can’t compromise the whole network. The use of open source code as part of operating systems is common in the industry, which does come with some risk because the code is “common knowledge”. By scrambling the object code within each switch’s memory, even if a hacker locates sections of open source code in one switch, that code is scrambled uniquely on every other switch, so the same attack would not work across multiple switches.
    • How is the switch operating system delivered? The IT industry has a global supply chain, with component manufacturing, assembly, shipping and distribution having a worldwide footprint. This introduces the risk of the switch being tampered with before it gets to the end-customer. The network installation team should always download the official operating systems to the switch directly from the vendor’s secure servers before installation.
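
    One practical way to reduce that supply-chain risk is to verify the downloaded image before installing it. Below is a minimal sketch, assuming the vendor publishes a SHA-256 checksum alongside the image; the file name and digest are placeholders.

        # Verify a downloaded switch operating system image against the checksum
        # published by the vendor before installing it. The expected digest and
        # file name below are placeholders; use the values from the vendor's
        # secure download portal.
        import hashlib

        def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        EXPECTED = "0123abcd..."       # placeholder: copy from the vendor's site
        IMAGE = "switch-os-8.4.1.img"  # placeholder file name

        if sha256_of(IMAGE) != EXPECTED:
            raise SystemExit("Checksum mismatch: do not install this image.")
        print("Image verified; safe to install.")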

    Third: Do the Simple Things to Secure Your Smart Things

    As well as establishing a more secure core network, there are precautions you can take right now to enhance device protection. It is amazing how many businesses skip these simple steps.

    • Change the default password. One very simple and often overlooked procedure is changing the default password. In the Dyn case, the virus searched for the default settings of IP devices to take control.
    • Update the software. As the battle between cybercriminals and security experts continues, staying up to the minute with the latest updates and security patches becomes more important. Pay attention to the latest updates and make applying them part of the routine.
    • Prevent remote management. Disable remote management protocols, such as telnet or HTTP, that provide control from another location. The recommended secure remote management protocols are SSH or HTTPS.
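
    A minimal sketch of how these three checks could be automated against a device inventory follows; the inventory structure and field names are hypothetical, for illustration only.

        # Audit an inventory of device settings against the three basic steps
        # above: no default password, current firmware, no insecure remote
        # management protocols.
        DEFAULT_PASSWORDS = {"admin", "password", "12345"}
        INSECURE_PROTOCOLS = {"telnet", "http"}

        devices = [
            {"name": "ip-camera-1", "password": "admin", "firmware": "1.0",
             "latest_firmware": "1.2", "remote_mgmt": {"telnet"}},
            {"name": "ip-camera-2", "password": "S3cure!x", "firmware": "1.2",
             "latest_firmware": "1.2", "remote_mgmt": {"ssh"}},
        ]

        def audit(device: dict) -> list:
            findings = []
            if device["password"] in DEFAULT_PASSWORDS:
                findings.append("default password still set")
            if device["firmware"] != device["latest_firmware"]:
                findings.append("firmware out of date")
            insecure = device["remote_mgmt"] & INSECURE_PROTOCOLS
            if insecure:
                findings.append("insecure remote management: " + ", ".join(sorted(insecure)))
            return findings

        for d in devices:
            issues = audit(d)
            print(d["name"], "->", issues if issues else "OK")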

    Evolve Your Network

    The Internet of Things has great transformative potential for businesses in all industries, from manufacturing and healthcare to transportation and education. But with any new wave of technical innovation comes new challenges. We are at the beginning of the IoT era, which is why it’s important to get the fundamental network requirements in place to support the increase in data traversing our networks, enforce QoS rules and minimize the risk from cyberattacks.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    5:45p
    They’re The Fastest Traders, So Why Aren’t They Thriving?

    Annie Massa and Charlotte Chilton (Bloomberg) — High-speed traders have hit the wall.

    Even as critics of high-frequency firms have argued their speed and technology give them an unfair advantage, the traders are facing diminishing returns. For one thing, there’s been relatively little turbulence in this bull market, a challenge for high-frequency traders and other market makers because the calm restrains volatility and trading volume, curbing their profits.

    “It’s not going to come back,” said Richard Johnson, a market structure and technology analyst at Greenwich Associates LLC in Stamford, Connecticut. “For a few years after the crisis, people were in denial and kept their business running as it had been. It’s not going to get back to that level, people have to accept that.”

    That’s created an identity crisis for the fastest traders, forcing companies like Virtu Financial Inc. to seek out new businesses and pushing others out entirely. One stat helps boil it all down: Market makers in U.S. stocks produced $1.1 billion in revenue last year, compared with $7.2 billion in 2009, according to estimates from Tabb Group LLC.

    Their struggles matter because market makers help investors buy or sell when they want to. The cutting-edge of market making now centers on high-speed data transmission — beaming information across oceans with underwater fiber-optic cables and across continents with microwaves; in a bygone era it was people jostling for position in a trading pit.

    Virtu, a New York-based firm that trades more than 12,000 securities and other financial instruments on over 235 markets around the world, is at the vanguard of automated trading, yet it’s embarked on a potentially company-altering acquisition. It’s buying rival KCG Holdings Inc. for about $1.3 billion, bolting on a company with more than five times as many employees.

    One allure of the deal is KCG’s salesforce, which Virtu plans to deploy toward persuading other traders to license its state-of-the-art technology. Virtu already has a deal that allows JPMorgan Chase & Co. to access the Treasury market with its systems. Virtu Chief Executive Officer Doug Cifu and Bob Greifeld, Nasdaq Inc.’s former CEO who will become Virtu’s chairman after the KCG deal closes, see potential for the company to build a larger business selling trading technology.

    Read More: Furious Land War Erupts Outside CME Data Center

    Profit at Virtu has come down from its peak, with net income dropping to $158 million last year from $197 million in 2015. Its shares trade below the company’s initial public offering price of $19. Virtu fell 0.3 percent to $16.80 at 10:01 a.m. New York time Thursday.

    High-frequency traders use advanced technology and a deep understanding of the electronic marketplace to carry out or cancel trades. As regulation introduced in the early 2000s drove the U.S. stock market into the electronic realm, the strategy paid off. But they watched the cost of managing their complicated networks climb, as those networks grew to include investments in microwave towers and the fastest proprietary data sold by exchanges. Some firms say they’re reaching a point of diminishing returns in the arms race for the fastest networks, while others are doubling down on speed as a tool.

    Getting Squeezed

    “The pure speed trades are squeezed on two sides, by rising cost of infrastructure and low volatility, which gives you less reward for being the fastest,” said Eric Pritchett, chief executive officer and head of risk at Boston-based electronic trading firm Potamus Trading LLC. “That doesn’t mean there’s not still a big prize if you’re the winner. So it ends up being a better opportunity for the few firms who remain in it.”

    The squeeze is also pushing trading firms to scout new revenue opportunities.

    “High-frequency traders will move into areas they weren’t in before,” said Ari Rubenstein, co-founder of Global Trading Systems LLC, one of Virtu’s competitors. If you’re an executive at a speed-trading firm in this environment, he added, “you better be coming up with things outside the box.”

    Rubenstein’s company last year bought a business on the floor of the New York Stock Exchange, where it shepherds the stock of publicly traded giants including Berkshire Hathaway Inc. and Twitter Inc. He said the purchase will give his firm the chance to offer other services directly to those companies, such as opportunities to trade foreign currencies and Treasuries, to help them hedge risk on their balance sheets.

    The degree of the market’s doldrums is striking. The CBOE Volatility Index, better known as the VIX, sank to a 23-year low on June 2, showing investors see continued calm. That calm, coupled with the rising cost of the technology firms need to do business, like fast-moving streams of data, has pushed several of them to pursue deals this year.

    Teza’s Exit

    Teza Technologies LLC shifted away from trading to focus on its hedge fund, selling some technology assets to Quantlab Financial LLC. “It is quite clear, on the whole, that making money in HFT has grown more difficult,” Teza said in a statement. “Our firm sees its future in the investment management business.”

    Interactive Brokers Group Inc., a broker that helped invent electronic trading, sold its options market-making business. The company “continues to focus on bringing the latest technological innovations to clients in other areas of our business, which are expanding rapidly,” it said in a statement.

    Chopper Trading LLC, a Chicago-based speed trader, sold assets to DRW Holdings LLC in 2015.

    Just this week, Crain’s Chicago Business reported that proprietary trader Mocho Trading LLC closed up shop after opening about a year ago. Mocho Trading didn’t immediately return requests for comment.

    Wanting Goldilocks

    High-frequency traders want “a Goldilocks market with just the right amount of volatility,” said Jim Angel, a professor at Georgetown University in Washington. “They don’t want to see a flash crash kind of day, they want just enough of things happening so that people change their minds and want to trade.”

    Fierce competition among high-frequency traders also batters their profits. Several are jockeying to position their microwave communications towers as close as possible to the data center used by CME Group Inc., owner of enormous futures exchanges, outside Chicago. In that battle, using the fastest data transmission equipment for just a few extra feet matters.

    “The firms that are going to be successful have optimized their business, they’re lean and can make profit in this environment,” said Greenwich’s Johnson.

    Even exchanges are taking notice of how record-low volatility is affecting high-speed traders, who are part of their customer base. Nasdaq CEO Adena Friedman said in April that the New York-based exchange operator is already contemplating potential effects from deals including the Virtu and KCG tie-up, which could effectively eliminate one customer for the data feeds it sells to traders.

    “Consolidation in HFT could have some impact on our, maybe, data and connectivity services,” Friedman said on a quarterly earnings call with analysts. “We don’t think there will be a significant impact on Nasdaq,” she added.

    To navigate these challenging times, HFT firms have “to be as efficient as possible, where they are trying to control their costs as best they can,” said Richard Repetto, an analyst at Sandler O’Neill & Partners LP. Low volatility is “a market condition they have to deal with,” he added.

    6:19p
    Why Docker is So Popular

    By The VAR Guy

    By now, you’ve almost certainly heard of Docker containers. You know Docker is massively popular. But do you know why? Here’s a look at the factors driving tremendous interest in Docker today.

    Before delving into a discussion of the factors behind Docker’s popularity, it’s worth noting that Docker is not the only container platform out there. Nor was it the first to come along.

    Other frameworks, like OpenVZ and LXC, were available starting in the mid-2000s. Other container-like technologies, such as FreeBSD jails, go back even further. Docker was released only in 2013, making it a very young technology compared to most of today’s mainstream enterprise technologies.

    Curiously, however, it was Docker, not a more mature container platform, that has risen to massive prominence over the past few years. That outcome is worth pondering, both to understand what makes containers in general so popular and to understand why Docker in particular has succeeded so spectacularly while alternative container frameworks have seen little adoption.

    Explaining Docker’s Popularity

    So, let’s consider those two factors.

    First, here’s why containers in general have proven so appealing to companies large and small over the past several years:

    • They start and stop much faster than virtual machines.
    • They are more portable because container host environments are very consistent, no matter which type of operating system is hosting them.
    • Containerized applications are easy to scale because containers can be added or subtracted quickly from an environment.
    • Containers make it easy to break complex monolithic applications into smaller, modular microservices.
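
    To illustrate the first and third points, here is a short, hedged sketch using the Docker SDK for Python. It assumes the docker package is installed and a local Docker daemon is running; the nginx image is just an example.

        # Containers start and stop in seconds, and identical replicas can be
        # added or removed quickly to scale a service up or down.
        import docker

        client = docker.from_env()

        # Scale "up": launch three identical web containers.
        replicas = [
            client.containers.run("nginx:alpine", detach=True, name=f"web-{i}")
            for i in range(3)
        ]
        print([c.name for c in replicas])

        # Scale "down": stop and remove them again almost as quickly.
        for c in replicas:
            c.stop()
            c.remove()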

    Then there’s the question of why Docker specifically has become so popular. That’s a harder question to answer, but I think the following factors are at play:

    • Docker was open source from the start. This helped Docker to appeal in the technology market, which by 2013 was beginning to see open source as the default mode of software production. (Had Docker emerged just five years earlier, when commercial interest in open source was less intense, I don’t think being open source would have helped Docker as much.)
    • Docker appeared at the right time. By 2013, virtual machines were finally becoming a dated technology. Organizations were looking for a leaner, meaner way of deploying applications, and Docker happened to fit the bill. When OpenVZ and LXC appeared in the mid-2000s, traditional virtualization had not yet run its course, so those container frameworks were less appealing.
    • Docker coincided with the DevOps revolution. DevOps, which became popular in the early 2010s, emphasizes agility, flexibility and scalability in software delivery. Docker containers happen to provide an excellent building block for creating software delivery pipelines and deploying applications according to DevOps prescripts.

    The Linux Comparison

    In many respects, Docker’s rather unpredictable success mirrors that of the Linux kernel in the 1990s.

    As I’ve noted previously, Linux entered the world as an obscure software project run by a Finnish student who had no funding or advanced equipment. Linux ended up becoming tremendously successful, whereas more prominent, better-funded, professionally managed kernel projects like those of GNU and BSD saw limited adoption.

    The differences between Docker and alternatives like LXC were perhaps not as pronounced as those between Linux and GNU Hurd. Still, there are parallels in how Linux and Docker each rose, over a relatively short period of time, from obscure projects to ones of huge commercial importance.

    Conclusion

    I think timing explains why Docker containers became so popular. Docker containers solve the software delivery and deployment problems that many organizations have sought to address over the past five years. While earlier container frameworks offered similar solutions, interest in them was limited because the problems they solved were not as pressing at the time of their debut.

    In general, I think timing had less to do with Linux’s success than it did with Docker’s. Linux succeeded largely because the other kernel projects (especially GNU’s) were in disarray, and because Linux adopted an innovative, decentralized development strategy early on. But timing certainly at least helped Linux to succeed, as it did Docker.

    This article originally appeared on The VAR Guy.

    7:00p
    Open Security Controller Waiting for Developer Interest

    Brought to you by IT Pro

    On June 28, the Linux Foundation announced the Open Security Controller Project for the orchestration and automation of software-defined network security functions used to protect east-west traffic within data centers. It’s not a new project; Intel has been working on it for some time and made it the subject of a presentation at February’s security-focused RSA Conference in San Francisco. What is new is that it’s now an open source project under the care and guidance of the Linux Foundation, sponsored by Huawei, McAfee, Nokia’s Nuage Networks, Palo Alto Networks, and of course, Intel.

    Right now, to quote Gertrude Stein out of context, “there is no there there.” The problem? There doesn’t seem to be much support.

    I’d keep an eye on this one, however, because it’s going to get off the ground, and likely pretty quickly. Why? Because it’s necessary, and the Linux Foundation has the funds and moxie to spur development — which is why Intel donated the project.

    “Our contribution of the Open Security Controller to the Linux Foundation will help accelerate the adoption of software-defined security, as demonstrated by the participation of the other founding members who are among leaders in the delivery of security solutions,” explained Rick Echevarria, vice president of Intel’s software and services group.

    Then there’s Red Hat, which would probably welcome a vendor-neutral and relatively easy-to-use SDS orchestration feature to customize for its hybrid cloud stack.

    The project offers DevOps a single-pane-of-glass approach to orchestrating software-defined security services in SDNs. Because it is vendor neutral, IT teams will be able to choose whatever SDN and security vendors they like. The initial code from Intel is available now, but unless I’m missing something, it only seems to support deployment on OpenStack at present.

    When it’s ready for prime time, administrators will be able to use it to orchestrate deployment of virtual network security policies and apply different policies for different workloads.
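
    As a purely hypothetical illustration of that idea (this is not the Open Security Controller API, just a sketch of mapping workload labels to different virtual security policies):

        # Hypothetical sketch: different workloads receive different virtual
        # security policies, the way a vendor-neutral controller might assign them.
        POLICIES = {
            "web-frontend": {"inspect": ["ids"], "allow_ports": [80, 443]},
            "database": {"inspect": ["ids", "firewall"], "allow_ports": [5432]},
        }

        DEFAULT_POLICY = {"inspect": ["firewall"], "allow_ports": []}  # deny by default

        def policy_for(workload_labels):
            """Return the first matching policy for a workload's labels, else the default."""
            for label in workload_labels:
                if label in POLICIES:
                    return POLICIES[label]
            return DEFAULT_POLICY

        print(policy_for(["database", "pci"]))  # database policy
        print(policy_for(["batch-job"]))        # default deny-all policy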

    “Software-defined networks are becoming a standard for businesses, and open source networking projects are a key element in helping the transition, and pushing for a more automated network,” said Arpit Joshipura, the Linux Foundation’s general manager of networking and orchestration. “Equally important to automation in the open source community is ensuring security. The Open Security Controller Project touches both of these areas.”

    At RSA, Intel’s Manish Dave and Tarun Viswanathan gave something of a proof-of-concept live demonstration of OSC protecting a VM from an east-west attack by another VM. It went well, of course, but that was the only trick this pony did. We’ll see what develops.

    This article originally appeared on IT Pro.

