Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, August 23rd, 2017

    12:00p
    Cumulus Takes Its Open Network OS into the Container Age

    Cumulus Networks, the software-defined networking firm that is helping commoditize data center network switches with its Linux network operating system, has been expanding its product line. In June the company introduced the NetQ validation tool, and today it announced the release of Host Pack, a suite of tools to aid in the deployment and operation of production-ready web-scale networks for containers and microservices.

    To understand what Host Pack does, perhaps it’s better to start with NetQ.

    “NetQ is really just an agent that runs on all of our switches,” Cumulus CEO, Josh Leslie, told Data Center Knowledge. “It streams all of this information about the network to a central database. We’ve built a number of intelligent queries in that database, and that gives our customers great power to understand if there is some problem. I can now look at the entire fabric and understand, when did the state of my network change, or where did it change, or what aspect of it changed.”

    Host Pack takes that same technology stack and pushes it out onto the host.

    Modern networks rely on containers, he explained, which are constantly being created and destroyed, with workloads often being moved to different physical machines or even being migrated to different data centers. Anywhere along the line a situation can arise that can cause anything from a major degradation of performance to a network collapse — all of which can be difficult to diagnose. Host Pack is meant to address some of these challenges by offering network operators end-to-end visibility into containerized applications, partly through integration with common container orchestration platforms like Mesosphere and Kubernetes.

    “So, we understand everything happening,” Leslie said. “It’s no longer just a black box. We’re no longer looking at that one server as the only thing that the network knows about. We now can look at each and every individual container. We can know that container’s up and it should be advertised on the network; or that container’s down and it shouldn’t be; or that container moved, so we need to make sure we can balance the traffic to that container; or that container is part of a larger group of containers that represent the service, so that service is or is not available. We have awareness of those things because we’ve extended this technology stack onto the host.”
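
    Host Pack’s orchestrator integrations are Cumulus’ own, but the kind of container awareness Leslie describes, knowing when a container comes up, goes down, or moves to another host, can be illustrated with the Kubernetes API. Below is a minimal, hypothetical Python sketch (not Cumulus code) that watches pod events and prints each pod’s IP and node, the raw signals a host-side network agent would need to stream.

        # Hypothetical illustration, not part of Host Pack: watch Kubernetes pod
        # events and report where each containerized workload lives on the network.
        from kubernetes import client, config, watch

        def watch_pod_placement():
            config.load_kube_config()  # or config.load_incluster_config() inside a cluster
            v1 = client.CoreV1Api()
            # Every ADDED/MODIFIED/DELETED event is the kind of state change a
            # host-side agent would stream to a central database.
            for event in watch.Watch().stream(v1.list_pod_for_all_namespaces):
                pod = event["object"]
                print(event["type"], pod.metadata.namespace, pod.metadata.name,
                      "ip=" + str(pod.status.pod_ip), "node=" + str(pod.spec.node_name))

        if __name__ == "__main__":
            watch_pod_placement()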

    According to Cumulus, the key capabilities and benefits of Host Pack include:

    • Granular container visibility for faster debugging: Host Pack gives operational and development teams shared visibility into application availability through popular container orchestration tools such as Mesosphere, Kubernetes, and Docker Swarm. With NetQ running on the host, network operators can easily view the health of container services, track container locations, IP addresses, and open ports, and gain deep insight into where an issue resides, allowing for faster troubleshooting.
    • Simplified network connectivity for improved performance: Using routing protocols such as FRRouting and BGP unnumbered directly on the host in a Layer 3 architecture, Cumulus’ network fabric can dynamically learn about containers and distribute their addresses throughout the network to ensure predictable performance between containers across host environments. This removes the complications of a Layer 2 overlay, provides rich and reliable multipathing, simplifies IP address management, and increases reliability. (A minimal configuration sketch follows this list.)
    • A common data center operating model, Linux, from network to containers: Cumulus Linux utilizes the same Linux networking model that is foundational to container systems. This enables the use of a common operational toolset, guarantees interoperability, and reduces complexity across the entire data center.
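
    To make the second capability concrete, here is a minimal, hypothetical Python sketch (not Cumulus tooling) that renders an FRRouting configuration for a container host using BGP unnumbered; the ASN, router ID, and interface names are illustrative assumptions, not values from Cumulus documentation.

        # Hypothetical illustration: generate a minimal frr.conf for a host that
        # peers over its uplinks with BGP unnumbered and advertises its connected
        # (container) addresses. All values below are made-up examples.
        def render_frr_conf(asn, router_id, uplinks):
            lines = [f"router bgp {asn}", f" bgp router-id {router_id}"]
            for iface in uplinks:
                # BGP unnumbered: peer over the interface itself, no neighbor IPs to manage
                lines.append(f" neighbor {iface} interface remote-as external")
            lines += [" address-family ipv4 unicast", "  redistribute connected"]
            return "\n".join(lines)

        if __name__ == "__main__":
            print(render_frr_conf(asn=65101, router_id="10.0.0.11", uplinks=["eth0", "eth1"]))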

    Want to take it for a test drive? You can try a cloud-based demo version of Host Pack for free using Cumulus in the Cloud with Mesosphere.

    12:30p
    This is Europe’s Hottest Emerging Data Center Market

    Typically, when people in the industry that collectively builds out the global network talk about the internet’s geography, they see any particular region through the prism of its few key cities, the places where most networks converge and interconnect. New York and Ashburn in the Eastern US; Miami and Dallas in the American South; Singapore, Hong Kong, and Tokyo in Asia – those are all metros where, for a number of reasons, many network operators have chosen to link their networks, forming interconnection hubs that grow more attractive to their peers and other players in the ecosystem as more of them join. It’s a snowball effect.

    In Europe, the hubs have traditionally been Frankfurt, London, Amsterdam, and Paris, or FLAP; and for decades this quartet has been sufficient to serve Europe and whatever non-European markets companies in Europe have wanted to reach. But today’s explosion of demand for digital content in Africa, the Middle East, and Asia means European interconnection hubs now play a much bigger role. That change has created an opening for a new hub to emerge, and the fastest-emerging one today is Marseille. The big port city in the south of France has fought long and hard to shed its reputation as a criminal hot spot and build its image as a welcoming tourist destination on the Mediterranean, and it has a huge geographic advantage over the other European hubs for companies that want to deliver digital services in the high-growth markets outside of Europe.

    Submarine cable consortia – the telco cartels that control most intercontinental bandwidth – have known this for many years; cables that land in Marseille take advantage of the straight shot across the Mediterranean to multiple North African countries, but also reach east, via Alexandria, through the Suez Canal, along the Gulf of Suez, and across the Indian Ocean to Mumbai. Responding to the new demand, two new cables recently came online, laid roughly along the same route but reaching further into Asia, all the way to Singapore, Vietnam, and Hong Kong. In addition to the biggest hubs, cables stretching from Marseille land along the way in places like Catania, Istanbul, Tripoli, Haifa, Djibouti City, Doha, Karachi, and Penang; you get the idea.

    (Map source: Interxion)

    Today, most of the demand is driven by digital content. An office worker on a bus to work in Karachi expects to watch a soccer game on their phone the same way a college student in New York expects to watch a Kanye West video while sitting on a lawn in Central Park.

    The demand is so urgent that Interxion, one of Europe’s largest data center providers, had to scramble to build a facility to house a network point of presence (POP) in Marseille for one of the two new cables. Operators of the AAE-1 cable wanted two POPs in the city immediately, but Interxion only had one. It had recently secured two large buildings in Marseille-Fos Port (the city’s main port) for expansion, but hadn’t yet started construction. The solution was to deploy pre-fabricated data center modules by Schneider Electric inside one of the buildings (an old port warehouse) and just enough cooling and backup power infrastructure outside to support the second POP, all within two months.

    The data center module housing the AAE-1 POP inside Interxion’s otherwise empty warehouse in Marseille-Fos Port, June 2017 (Photo: Yevgeniy Sverdlik)

    First Mover

    Amsterdam-based Interxion is enjoying a first-mover advantage in Marseille. Its 2014 acquisition of a data center there from the French telco SFR is expected to pay dividends for years to come, paving the way for the company to solidify its grip on a market that has become more important than ever as a strategic interconnection point between Europe, Africa, the Middle East, and Asia. The data center, called MRS 1, was already an aggregation point for eight cables that landed in Marseille and elsewhere on the Côte d’Azur at the time, but the two new ones — AAE-1 (landing in Marseille) and SeaMeWe-5 (landing in nearby Toulon) — would be game-changers. Not only would they bring more bandwidth, they would dramatically shrink network latency on the route.

    Now that the cables are live, roundtrip latency between Marseille and Singapore, for example, has gone from north of 200 milliseconds to about 130 milliseconds, according to Fabrice Coquio, Interxion France president. The effect of that latency drop on the market is what Interxion bet on when it bought MRS 1. “When you’ve got not only the pipe growing but also the latency dropping, then some applications – particularly from the cloud sector, digital media sector – can require to be positioned in a very specific data center, so that they can benefit from that latency effect and the capacity effect,” he said in an interview with Data Center Knowledge.
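
    Those numbers are roughly what fiber physics allows. As a back-of-the-envelope check (the route length is an assumption, not an Interxion figure), light in optical fiber travels at about 200,000 km per second, so a ~12,000 km Marseille-Singapore path has a propagation floor of roughly 120 milliseconds round trip, which is why the ~130 milliseconds now observed leaves little room for further improvement:

        # Back-of-the-envelope check; the 12,000 km route length is an assumption.
        def fiber_rtt_ms(route_km, fiber_speed_km_s=200_000):
            # Light in glass travels at roughly two-thirds the speed of light in vacuum.
            return 2 * route_km / fiber_speed_km_s * 1000

        print(f"{fiber_rtt_ms(12_000):.0f} ms")  # ~120 ms, close to the ~130 ms observed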

    Interxion France president Fabrice Coquio displays trays of network cross-connects at MRS 1 (Photo: Yevgeniy Sverdlik)

    In other words, if you control a data center that provides access to low-latency transcontinental networks, you have an asset where cloud and content giants – the likes of Google, Amazon, Facebook, and Microsoft – simply have to be. “Overnight almost, because of these two cables, Marseille moved from a telecom-transit city to a content city,” Coquio said.

    Quickly growing demand in markets south and east of Europe, combined with access to so many of the cables that connect Europe to those markets, makes Marseille a sought-after gateway and Interxion a gatekeeper. There was no other carrier-neutral data center provider in the city when the SFR facility changed hands, and whatever player may want to enter the market now will almost certainly have to go through Interxion to get to the networks.

    “First-mover advantage is very big in the colocation industry, especially when you’re talking about a secondary market like Marseille,” Jonathan Hjembo, senior analyst at the telecommunications market research firm TeleGeography, said. “The ball is rolling in their (Interxion’s) favor. They have the ecosystem that everyone needs to interconnect with right now.”

    Fastest-Growing Pipes

    In recent years, bandwidth demand in North Africa, the Middle East, and Asia (let’s call them NAMEA) has grown faster than in any other market. As a result, Marseille has become the fastest-growing market in Europe in terms of international network bandwidth, Hjembo said in an interview with Data Center Knowledge. “Marseille is there to serve those markets,” he said.

    Most traffic to and from the FLAP metros goes through Marseille to reach NAMEA countries, and since 2013, international bandwidth in the French city has grown at a compound annual rate of 60 percent by TeleGeography’s estimate. That’s total bandwidth on international cables that land in the city. “None of the other big hubs are close to that,” Hjembo said. “You combine demand from three separate sub-regions converging on one point in Europe, [and] that certainly explains a lot of the demand there.”
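
    For a sense of scale, compounding at 60 percent a year means total international bandwidth roughly multiplies by six and a half over four years (a quick illustration of the arithmetic, not a TeleGeography figure):

        # 60 percent compound annual growth from 2013 to 2017 (four years)
        growth = 1.60 ** 4
        print(f"{growth:.1f}x")  # ~6.6x the 2013 bandwidth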

    There are a couple of alternative locations for linking Europe to NAMEA, but neither has seen the kind of growth Marseille has. Hedging its bets across the region, the internet exchange DE-CIX, for example, deployed exchange points in Istanbul and Palermo (in addition to Marseille). It’s been trying to get Istanbul to grow “for ages” but with little success, Hjembo said, due in large part to Turkey’s political instability. Palermo doesn’t necessarily lose to Marseille in terms of location, and many submarine cables land there, but Sicily’s capital just hasn’t seen the kind of growth Marseille has, with international internet bandwidth in Palermo barely placing it on the list of top 25 hubs in Europe.

    3:00p
    Diminishing Returns from Legacy Technology?

    Jeff Carr is the Founder of Ultra Consultants.

    Manufacturers and distributors increasingly face the same realization: their legacy systems deliver diminishing benefits and lack go-forward capabilities. The ERP software may be a victim of end-of-life “sunsetting,” where support is discontinued. Another possible scenario involves the vendor putting the product in “maintenance mode” with limited updates. These legacy ERP software packages may no longer address the critical needs of today’s organization in areas of reporting, planning, or overall efficiency.

    Organizations depending on legacy products are also limited by a lack of mobile functionality, analytics, CRM, and other applications that are needed in today’s competitive environment. With manufacturers and distributors saddled with ERP solutions that were state-of-the-art a decade or so ago, many organizations find that the time is now for an ERP selection project that considers modern technology and systems that best fit their unique needs.

    Upgrading or transitioning to a new ERP system – or any enterprise software solution – is a potentially disruptive process that can impact business continuity. Keeping the ERP project on time and on budget is certainly a challenge. Choosing the right technology solution is never easy, but informed businesses start with an analysis of the current state and a clear definition of the desired future state. Fortunately for companies on the cusp of taking the plunge into choosing a new ERP solution, there is a proven methodology that helps companies successfully execute these projects and reduce risk.

    Companies in this position should start by asking themselves the following questions:

    Ongoing support: Is there uncertainty in obtaining adequate support, upgrades or maintenance?

    A common scenario is that the vendor of the legacy enterprise software system the organization has used for years has gone out of business or been acquired by a competitor, and the product is no longer maintained or upgraded. If that is the case, then an ERP project to evaluate an appropriate replacement should be a priority.

    User interface: Are there onboarding challenges and a risk of attrition among millennial employees held back by a “green screen” ERP interface?

    Legacy enterprise systems are often distinguished by an antiquated “look and feel” that can frustrate today’s generation of employees. An interface that is difficult to change or customize by user or role makes everyone’s job more difficult.

    Modern feature-set: Does the current ERP system support the company’s plans for growth in new distribution channels and e-commerce?

    Decades-old legacy systems often don’t include the functionality that is available with modern enterprise systems, nor do they offer features that drive business process improvement, such as e-commerce or custom reporting. Modern ERP solutions go beyond first-generation systems by enabling modern initiatives such as e-commerce, supply chain management, customer relationship management, and business intelligence. For example, companies anticipating expansion into new markets may require systems that support multi-language, multi-currency functionality. Insight into business intelligence (BI) trends helps inform strategic decision-making and is often the difference between success and failure in a competitive market. The ability to leverage mobile applications is increasingly critical, too. Most legacy systems are lacking in these areas.

    Scalability: Will the technology support future growth and competitiveness?

    Enterprise systems must accommodate growth, and legacy ERP systems are ill-equipped to handle changes as a company grows. Does the current system limit the number of data entries or crash when multiple end users access it concurrently? Does the system update in real time or correlate data across applications? If these capabilities are not in place, a company will find it difficult to scale and accommodate business expansion.

    Standalone systems: Are current processes supported by discrete, non-integrated applications, spreadsheets and work-arounds?

    It’s common for organizations to rely on standalone systems to offset the limits of a legacy ERP solution. A stand-alone purchasing system, for example, might have been implemented to help address the limits of an outdated ERP solution. Manual work-arounds are also common within organizations saddled with a legacy system. Many companies find it necessary to manually enter data because the current system is unable to share data or is based on an incompatible data structure. This wastes labor and introduces incorrect data entries, which often lead to expensive downtime. If employees need to enter data on spreadsheets or cut and paste data into reports, chances are good you should plan on evaluating and choosing a new system to maximize efficiency and free up staff for higher-value tasks.

    What’s the Next Step?

    After answering these questions, you are likely in one of two places: You have accepted the realization it’s necessary to begin the process of evaluating and choosing a new technology solution, or you are confident in your current system.

    Choosing a new enterprise software solution is an important challenge and requires careful planning and research into potential solutions. Today’s modern technology supports business process improvements and best practices that lead directly to business performance improvement.

    By approaching the evaluation process and ERP selection criteria in an informed manner, companies can ensure they make the right decisions to maintain or establish a competitive edge over other companies in the marketplace.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    5:19p
    Apple Said to Be Mulling Iowa Data Center

    The city council of Waukee, Iowa, a town with about 20,000 residents, is holding a special meeting Thursday to hear the public’s opinion about a mysterious “Project Morgan.”

    But city and state officials told The Des Moines Register that Project Morgan is code for Apple data center. The company is reportedly planning to follow Google, Facebook, and Microsoft in building a data center in Iowa.

    The agenda for a separate Thursday meeting of the Iowa Economic Development Authority includes a review of an Application for Investment by Apple in Waukee. According to the report, the Authority’s board will discuss economic incentives for the company. Incentives, usually in the form of tax breaks, are a common way state economic development officials lure data center development to their states.

    The existing Apple data center campuses are in Newark, California; Maiden, North Carolina; Prineville, Oregon; and Reno, Nevada. But like other tech giants that provide services over the internet, the company has to keep expanding data center capacity to support the growth of its user base and new, more data-intensive services.

    The Apple data center in Maiden, North Carolina. (Photo: Apple)

    No details about Project Morgan have been released publicly, and it’s not entirely clear whether Apple has made the final decision to build in Waukee. For these hyper-scale data center operators, site selection is a never-ending process, and companies often start negotiations with state and local officials in multiple locations before making the final call.

    For everything we know about Apple data centers, visit our Apple Data Center FAQ.

    6:00p
    Your Server Died and Your Backups are Gone. Here’s What to do Next.

    Brought to you by IT Pro

    Admitting that you have made a mistake is never easy. When it happens at work, it can be even more difficult to own up to because in some cases you may feel your career is on the line. But how you deal with the aftermath of a mistake as an IT pro can go a long way in showing your manager that you are dependable in a crisis and know how to use creative problem-solving under pressure.

    While not all IT screw-ups go as viral as this one on Reddit, mistakes can happen at any time and to companies of any size. For a recent example, look at Cisco. Earlier this month, its engineering team made a configuration error on its Meraki object storage, leading to a loss of user data. The company had to work over the weekend to investigate what data could be recovered and what tools it could build to help customers identify the data that had been lost.

    In this example, the issue was externally facing, so a PR strategy had to be devised. But what if the issue is internal, like a company server that has been fried with no backups?

    IT Pro asked experts to weigh in on what the best course of action is in the case that a server goes down, and the backups that were supposed to be in place are not working. What are the steps that an IT pro should take before, during, and after to recover from the situation and move on?

    Before a similar scenario happens to you, it is important to know that there are many preventative steps that can be taken. But if it does happen, know that you are hardly the first person to deal with it, and you won’t be the last.

    Understanding the Business Requirements of Backup

    ClearSky Data CTO and co-founder Laz Vekiarides recalls a situation with one of his customers who was backing up data from a legacy system, only to realize that the data they needed to retrieve had actually been lost … five years earlier. The problem was that the data backup was not tested, so no one knew it wasn’t working until it was too late.

    “They wanted to retrieve a piece of data that was backed up and they realized that their backups were fried,” he said.

    Not checking that backups are working is one of the biggest mistakes Vekiarides sees companies make. He said companies should make a discipline of checking their backup systems now and then to make sure they work and can actually bring data back.

    Vekiarides has worked in technology for around 20 years, starting in networking before transitioning to data storage in 2002, where he ran the development team at Equallogic. When it was acquired by Dell in 2007, he ran the software development organization. He founded ClearSky Data along with CEO Ellen Rubin back in 2015 to provide storage-as-a-service to customers storing hundreds of terabytes of data.

    “The best practices in general involve periodic backups and you have to make sure that you adjust your backups either snapshots or physical copies of data … you need to make sure that they are done with the correct periodicity,” he said. “Each application is different. Each application has a particular window of time which is the amount of tolerable data loss, and it really depends on the business need.”

    For example, he says, test and development data will require a different resiliency plan than a system of record for a point of sale system, which has a “very low tolerance for data loss.”
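
    The “window of tolerable data loss” Vekiarides describes is what backup planning usually calls a recovery point objective (RPO). As a minimal, hypothetical sketch (the application names and RPO values are invented for illustration), a periodic check might compare each application’s newest backup against its RPO and alert when the window has been exceeded:

        # Hypothetical sketch: flag applications whose newest backup is older than
        # their tolerable data-loss window (RPO). Names and values are invented.
        from datetime import datetime, timedelta

        RPO = {
            "test-dev": timedelta(days=7),           # loose window for test/dev data
            "point-of-sale": timedelta(minutes=15),  # very low tolerance for loss
        }

        def check_backup_age(app, newest_backup_time, now=None):
            age = (now or datetime.utcnow()) - newest_backup_time
            if age > RPO[app]:
                print(f"ALERT: {app} backup is {age} old, exceeds RPO of {RPO[app]}")
            else:
                print(f"OK: {app} backup is {age} old")

        check_backup_age("point-of-sale", datetime.utcnow() - timedelta(hours=2))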

    A mismatch between backup and business requirements can be disastrous. Vekiarides said he has seen one example where researchers building enormous data sets with hundreds of terabytes of data are “throwing them on storage that is not backed up at all.”

    “So if anything bad were to ever happen, they would lose a year’s worth of work,” he said.

    Marty Puranik is CEO of Atlantic.Net, a cloud services and web hosting company based in Orlando. He agrees that it is essential to test backups. It is advice GitLab shared earlier this year among the lessons learned from losing 300GB of customer data from its primary database server.

    “The first thing is to test your backups so you don’t end up in that situation. But, if something bad happens like this, the best course of action is to come clean and start working on the next steps. By doing this, you avoid the problems of trying to cover-up and get to a solution faster which is what management really wants,” Puranik said in an email.
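
    In practice, “test your backups” means restoring them somewhere harmless and verifying the result, not just confirming that the backup job ran. A minimal, hypothetical sketch (the paths are placeholders, and the restore step itself is tool-specific and omitted):

        # Hypothetical sketch: after restoring a sample file from backup into a
        # scratch location, verify it byte-for-byte against the live copy.
        import hashlib
        from pathlib import Path

        def sha256(path):
            return hashlib.sha256(Path(path).read_bytes()).hexdigest()

        def verify_restore(live_file, restored_file):
            ok = sha256(live_file) == sha256(restored_file)
            print("restore verified" if ok else "MISMATCH: this backup is not usable")
            return ok

        # verify_restore("/data/orders.db", "/tmp/restore-test/orders.db")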

    ‘Crowd-Sourcing Panic Mode’

    Let’s say you thought your backups were running smoothly, but something has gone wrong and now you realize you can’t retrieve the data.

    First, the experts agree: don’t panic, and definitely don’t try to ignore it.

    “Eventually someone is going to notice that this data is missing, especially if it’s critical data for the company,” John Martinez, Evident.io’s VP of Customer Solutions, said. And worse than losing your job, the company could face legal action depending on the type of data that has been lost.

    Martinez has worked at cloud security company Evident.io for the past three and a half years. Prior to that he worked in sys admin roles at Netflix and Adobe Cloud. He said he has seen scenarios in his career where backups disappeared and data was long gone, but fortunately there are ways to recover.

    The first question to ask yourself in this situation is how important that data is. If it is absolutely critical, there are strategies you can employ to help piece together some of the missing data, Martinez said. The first one is what he calls “crowd-sourcing panic mode.”

    “We start talking to engineers that have been in the organization for a long time, they might have something squared away on their laptop, they might have an offline copy somewhere,” he said. “This is where you sort of go from ‘what are security best practices of having the data and the data retention’ to where ‘we’re not going to judge you based on you having a copy of this particular data on your laptop because you’re saving the company’s bacon.’”

    “In all of the situations I’ve lived, even though it might be egg on the face for the person responsible for the backup policies, etc. it’s one where we sort of rally around and try to do what’s right for the business,” he said.

    “One piece of advice that I’d have for an up and coming systems engineer that’s in the thick of it is to not panic, don’t worry about losing your job, but worry about doing what’s right and getting the data back,” Martinez said.

    Atlantic.net’s Puranik adds that keeping a level head and acting professionally throughout is critical.

    “If you handle it as a professional, and get through the crisis it’s usually OK,” Puranik said. “In addition, you would want to follow up with a post-mortem explaining how it happened, and what you’re doing, so it doesn’t happen again. Most managers realize IT pros are human, but it’s important for IT professionals to act professionally, especially when things go wrong (as they always will be given enough time).”

    Reverse Engineering, Using Cloud Tools

    From a technical perspective, there are ways for IT pros to piece together missing data through reverse engineering, Martinez said.

    In the case of data loss around a product, look at code repositories to get back to the best state you can so you can fill in the blanks, he said.

    Cloud providers offer tools such as snapshots and multi-region redundancy that can be extremely useful for retrieving data. “I can have much greater insurance tools at my disposal with the cloud,” Martinez said.
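
    As one concrete example of such insurance tools (AWS and boto3 are used here purely as an illustrative assumption; Martinez names no specific provider), a periodic job can snapshot a volume and copy the snapshot to a second region so that a regional failure cannot take the backups down with it:

        # Hypothetical illustration using AWS EBS snapshots via boto3; the volume
        # ID and regions are placeholders. Other clouds offer equivalent tools.
        import boto3

        def snapshot_and_copy(volume_id, src_region="us-east-1", dst_region="us-west-2"):
            src = boto3.client("ec2", region_name=src_region)
            snap = src.create_snapshot(VolumeId=volume_id, Description="scheduled backup")
            src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
            # Keep an off-region copy so one regional failure can't erase both copies.
            dst = boto3.client("ec2", region_name=dst_region)
            copy = dst.copy_snapshot(SourceRegion=src_region,
                                     SourceSnapshotId=snap["SnapshotId"],
                                     Description="off-region backup copy")
            return snap["SnapshotId"], copy["SnapshotId"]

        # snapshot_and_copy("vol-0123456789abcdef0")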

    Finally, make sure that you identify what happened so you can fix that business problem moving forward. Be sure to do your research and leverage the tools that exist to help you automate a lot of the processes around data backup — just don’t forget to test them.

