Data Center Knowledge | News and analysis for the data center industry
 

Monday, September 21st, 2015

    12:00p
    How Enterprise Cloud and Virtual Networking are Changing the Telco Market

    Delivering enterprise connectivity services today is very different from even five years ago, and the telecommunications companies that have long dominated the market are having to make major adjustments to the way they do business.

    Technological concepts like Software Defined Networking and Network Function Virtualization are changing the way carriers design and manage their networks, opening up opportunities to deliver services in new ways. Meanwhile, rising enterprise demand for cloud services has created both new market opportunities and powerful new competitors for telcos.

    We caught up with Nav Chander, research manager for enterprise telecom at IDC, who recently completed a study of the top enterprise connectivity service providers, to talk about the effects the advent of SDN and NFV and the rise of enterprise cloud are having on the market.

    SDN and NFV Change Network Architecture and Telco Services

    As far as telcos are concerned, SDN is an enabling technology, not a revenue-generating product in itself. It is a new way to architect their networks that enables them to manage infrastructure and deliver services in new ways. “What’s more interesting is NFV and virtualization of services,” Chander said.

    NFV is a blueprint for defining those services: VPN, WAN, intrusion detection, firewall, and so on. It virtualizes functions that used to be performed by dedicated physical appliances. The functions become software-defined, but they remain actual revenue-generating services.

    AT&T and Japan’s NTT Communications are examples of service providers that use SDN technologies in the most advanced ways.

    AT&T has been aggressively transitioning to virtualized network management. Its Network On-Demand service, for example, enables enterprises to consume bandwidth the way customers consume cloud infrastructure: scaling up or down dynamically, in real time, based on demand.
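    To make that consumption model concrete, here is a minimal sketch of what driving a bandwidth-on-demand service programmatically might look like. The endpoint, port identifier, and payload fields are hypothetical, invented purely for illustration; this is not AT&T’s actual API.

        # Hypothetical illustration only: the endpoint, port identifier, and payload
        # fields are invented for this sketch and do not reflect any carrier's real API.
        import requests

        API = "https://api.example-carrier.com/v1/network-on-demand"
        HEADERS = {"Authorization": "Bearer <token>"}

        def set_port_bandwidth(port_id: str, mbps: int) -> dict:
            """Ask the carrier to resize a provisioned Ethernet port to `mbps`."""
            resp = requests.patch(f"{API}/ports/{port_id}",
                                  json={"bandwidth_mbps": mbps},
                                  headers=HEADERS, timeout=10)
            resp.raise_for_status()
            return resp.json()

        # Burst a branch-office link for month-end batch traffic, then scale back.
        set_port_bandwidth("port-chicago-01", 1000)   # up to 1 Gbps
        set_port_bandwidth("port-chicago-01", 100)    # back to the 100 Mbps baseline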

    NTT, which started using SDN about five years ago as a tool to manage interconnection between its own data centers, eventually turned that data center interconnection technology into an enterprise product. The SDN infrastructure supports NTT’s global private cloud services, delivered from about 130 data centers in close to 200 countries.

    Customers can quickly provision compute, storage, and networking resources anywhere on this global network. “The advantage is they no longer have to own and manage and connect that,” Chander said.

    Competing for Enterprise Cloud Dollars Won’t Be Easy

    The big difference between what NTT is offering and what the big cloud infrastructure providers like Amazon Web Services and Microsoft Azure are offering is NTT’s services are delivered over a private network rather than the public internet. This supposedly makes the services faster and more secure.

    But it doesn’t mean NTT is not competing with Microsoft and Amazon. Leading public cloud providers all have private offerings and partnerships with colocation providers and carriers that enable enterprises to consume their services over private network links.

    NTT today provides its private cloud services to some of the largest multinationals, Chander said. “It’s a great market, but it’s limited. Meanwhile, (the rest of) the enterprises still will go to cloud providers and perhaps system integrators.”

    For most customers, Amazon and Microsoft are by far the most trusted cloud providers, and addressing the market beyond the largest multinationals will not be easy for NTT and its peers. “It’s a huge uphill battle,” Chander said.

    The Unique Position of Colos

    Amazon and Microsoft’s command of the cloud market puts data center providers like Equinix in a highly advantageous position. These data center providers “have the opportunity to really capture the cloud exchange market,” he said.

    Companies like Equinix, CoreSite, Datapipe, Interxion, and Telx can give enterprises space, power, and cooling for their servers as well as direct private network links to the big cloud providers’ servers. They act as intermediaries between enterprise customers and as many cloud providers as they can get to colocate and interconnect in their facilities.

    These services are in high demand and growing. “It’s an early-stage market,” Chander said. “I think it’s going to be very high-growth.”

    IDC expects enterprises to at least double, and possibly quadruple, their use of cloud services, both Infrastructure-as-a-Service and Software-as-a-Service, over the next two years, he said. They may be using Azure or AWS today, but they will also want to connect privately to Salesforce, Oracle, or HP for more services.

    This trend hasn’t been lost on Equinix and its peers. Equinix has been growing the ecosystem of cloud providers in its data centers aggressively, starting with IaaS and more recently focusing on SaaS firms.

    Colocation providers don’t have a monopoly on this market, however. All the major network carriers, including NTT, AT&T, Level 3, Orange, and Verizon, among others, also offer private connectivity to big public cloud services.

    The way enterprises use IT is changing, and these changes are affecting the entire ecosystem of vendors and service providers that cater to the enterprise market. Everybody, from hardware suppliers to network carriers, will have to make big adjustments to the way they develop and use technology and take products and services to market.

    3:00p
    It’s Time to Remove Roadblocks to Full Enterprise Cloud Adoption

    Navneet Singh is Senior Manager of Product Management for Bracket Computing.

    Today’s enterprise needs to be nimble—prepared to react quickly to changing market conditions. Traditional data center architectures provide the security, performance and availability necessary to support predictable, ongoing business processes, but enabling more agile business models requires levels of scalability, flexibility and speed of deployment that aren’t possible using traditional infrastructure. Enter the cloud.

    Public clouds provide IT with an alternative to managing their own data center infrastructure, freeing them to focus on essential business innovation. These clouds offer powerful building blocks that make them very appealing; however, questions remain around security, control, and the ability to achieve consistent performance. These concerns must be answered for enterprises to feel comfortable running production workloads or moving sensitive data to multi-tenant environments. As a result, enterprise use of the public cloud has remained a great idea whose time has yet to come.

    Not anymore.

    Enterprises today can leverage the scalability and flexibility of the public cloud while maintaining the same robust policies and SLAs as in their existing data center. Here’s how:

    Identify where fast scaling and flexibility are most needed

    The cloud’s hyperscale capacity is virtually limitless. Amazon, Google, Microsoft, and other cloud providers can and will continue to build gigantic data centers that provide almost limitless capacity for companies around the world. Launching a new product? Provision additional servers. Holiday shopping season is over? Scale capacity back to save money. Expanding into the Asian market? Amazon has a data center in Singapore that can provide all the capacity you need.

    Economies of scale and a global footprint mean cloud vendors purchase, deploy, provision, and manage the physical infrastructure, while enterprises buy only what they need, when and where they need it, a model that makes the most sense when growth is fast or usage is highly variable. Identifying the most appropriate applications to run in the cloud enables enterprises to take advantage of dynamic market opportunities and frees up both personnel and resources to meet other needs.

    Set and meet SLAs in real time

    SLAs keep IT honest, making it responsible for maintaining acceptable performance and availability service levels for the rest of the business. Why should SLAs go out the window when workloads are migrated to the cloud? They shouldn’t. Enterprises should set service levels for key metrics such as response time and IOPS on the public cloud and manage the infrastructure programmatically in real time to meet them. This will deliver a better experience for users and developers and ensure infrastructure meets the needs of the business.
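    One way to read “manage the infrastructure programmatically in real time” is as a simple control loop. The sketch below is a minimal illustration under assumed conditions: the metric-reading and scaling functions are placeholders standing in for a monitoring system and a cloud provider’s SDK, not any real API.

        # Minimal sketch of managing cloud infrastructure to an SLA in near real time.
        # The metric sources and scaling calls below are placeholders; in practice they
        # would be your monitoring system and your provider's autoscaling/storage APIs.
        import random
        import time

        SLA_P95_LATENCY_MS = 200      # target 95th-percentile response time
        SLA_MIN_IOPS = 20_000         # target storage throughput

        def get_p95_latency_ms() -> float:
            return random.uniform(100, 300)        # placeholder for a monitoring query

        def get_current_iops() -> float:
            return random.uniform(10_000, 40_000)  # placeholder for a monitoring query

        def scale_out(instances: int) -> None:
            print(f"scaling out by {instances} instances")   # placeholder provider call

        def provision_faster_storage() -> None:
            print("moving volume to a faster storage tier")  # placeholder provider call

        def enforce_sla(poll_seconds: int = 60, iterations: int = 5) -> None:
            """Poll key metrics and react before the SLA is breached."""
            for _ in range(iterations):
                if get_p95_latency_ms() > SLA_P95_LATENCY_MS:
                    scale_out(instances=2)
                if get_current_iops() < SLA_MIN_IOPS:
                    provision_faster_storage()
                time.sleep(poll_seconds)

        if __name__ == "__main__":
            enforce_sla(poll_seconds=1, iterations=3)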

    Extend your zone of trust

    With so much at stake, enterprises are skeptical about letting cloud providers handle security and data protection for their most sensitive workloads. And who can blame them? Recent high-profile events have exposed vulnerabilities in both on-premise and cloud environments, leading enterprises to close ranks and tighten control over their data. However, as we’ve seen, physical control over data isn’t by itself sufficient. Logical control built on safeguards such as encryption and key management, authentication and network segmentation is critical—no matter where applications run.

    Fortunately, enterprises now have the ability to extend security and data management policies from their data centers to the cloud, ensuring consistency and a single operational model across all their environments. This includes encrypting data that resides across environments and maintaining ownership of key management. At the same time, data management technologies inherent in data center infrastructure—such as snapshotting, cloning, redundancy, data striping, backup and replication—can also be extended to workloads running in the cloud. This ensures that existing data center policies are applied consistently across all environments—whether the infrastructure is on-site, in a private cloud or in the public cloud.
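    As a generic illustration of encrypting everywhere while keeping the keys at home (this is a common envelope-encryption pattern, not Bracket Computing’s product), the sketch below encrypts each object with its own data key and wraps that key with a master key that never leaves the enterprise. It assumes the third-party Python cryptography package.

        # Envelope encryption sketch: ciphertext and the wrapped data key can live in
        # the cloud; the master key (KEK) stays on premises or in the enterprise's HSM.
        from cryptography.fernet import Fernet

        master_key = Fernet.generate_key()    # generated and held by the enterprise
        kek = Fernet(master_key)

        def encrypt_for_cloud(plaintext: bytes) -> tuple[bytes, bytes]:
            """Return (ciphertext, wrapped_data_key); both are safe to store off-site."""
            data_key = Fernet.generate_key()           # per-object data encryption key
            ciphertext = Fernet(data_key).encrypt(plaintext)
            wrapped_key = kek.encrypt(data_key)        # wrap the data key with the KEK
            return ciphertext, wrapped_key

        def decrypt_from_cloud(ciphertext: bytes, wrapped_key: bytes) -> bytes:
            data_key = kek.decrypt(wrapped_key)        # requires the on-premises KEK
            return Fernet(data_key).decrypt(ciphertext)

        ct, wk = encrypt_for_cloud(b"sensitive customer record")
        assert decrypt_from_cloud(ct, wk) == b"sensitive customer record"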

    Pre-configure resources for speed

    Enterprises can package these policies and service levels with the required resources to create application-specific templates that can be replicated across data centers and the cloud. Doing so allows users to quickly deploy highly reliable, highly secure workloads at the press of a button to custom environments made up of on-site infrastructure, public cloud platforms, or a combination of the two. Configurations can be tweaked automatically in real time, ensuring that applications run consistently across different environments.
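    A minimal sketch of what such an application-specific template might look like follows; the structure and field names are illustrative only and do not correspond to any particular product’s schema.

        # Template bundling resources, security policy, and service levels, stamped out
        # across environments with only capacity tweaked per environment.
        from copy import deepcopy

        WEB_TIER_TEMPLATE = {
            "compute": {"instances": 4, "vcpus": 8, "memory_gb": 32},
            "storage": {"size_gb": 500, "encrypted": True, "snapshots": "hourly"},
            "network": {"segment": "dmz", "ingress": ["443/tcp"]},
            "sla": {"p95_latency_ms": 200, "min_iops": 20_000},
        }

        def render(template: dict, environment: str, overrides=None) -> dict:
            """Render the template for one environment (on-site, private, or public cloud)."""
            spec = deepcopy(template)
            spec["environment"] = environment
            for section, values in (overrides or {}).items():
                spec[section] = {**spec.get(section, {}), **values}
            return spec

        # Same policies everywhere; only the instance count changes for the burst copy.
        onsite = render(WEB_TIER_TEMPLATE, "onsite-dc")
        burst = render(WEB_TIER_TEMPLATE, "public-cloud", {"compute": {"instances": 12}})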

    Enterprises no longer have to choose between the flexibility and scalability of the cloud and the reliability and performance of the data center. They can have their cake and eat it, too. Done right, this control over public cloud infrastructures makes it possible—and desirable—to deploy even the most sensitive production workloads in the cloud.

    Roadblocks removed.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:33p
    Amazon Data Center Outage Affects Netflix, Heroku, Others


    This article originally appeared at The WHIR

    From around 2 am to 8 am PDT Sunday morning, Amazon Web Services reported increased error rates for various services running out of its Northern Virginia data center, impacting some of the internet’s most popular sites and cloud services.

    Amazon’s NoSQL database service, DynamoDB, went down early Sunday at the company’s Northern Virginia data center, which hosts the US-East-1 region, causing other services that rely on DynamoDB to see increased error rates.

    Several organizations that depend on that region saw their services go down for several hours Sunday morning. Some of the services reportedly impacted include Netflix, Reddit, Product Hunt, Medium, SocialFlow, Buffer, GroupMe, Pocket, Viber, Amazon Echo, Nest, and IMDb. Some services that use AWS, such as Twitter and Slack, did not experience downtime. Platform-as-a-Service provider Heroku could not boot new Linux containers, known as “dynos.”

    The Amazon services impacted included Amazon CloudWatch, AppStream, CloudSearch, Cognito, EC2 Cloud, EC2 Containers, Elastic Load Balancing, Elastic MapReduce, Elastic Transcoder, ElastiCache, Glacier, Kinesis, Machine Learning, Mobile Analytics, Redshift, Amazon Relational Database Service, Simple Email Service, WorkSpaces, Auto Scaling, AWS CloudFormation, CloudHSM, CloudTrail, CodeCommit, CodeDeploy, CodePipeline, Elastic Beanstalk, Lambda, and Storage Gateway.

    Amazon’s US-East-1 outage has illustrated the importance of cloud redundancy, which many web professionals quickly pointed out on Twitter. For instance, Laurie Voss, co-founder and CTO of JavaScript package manager npmjs, noted that it pays to use multiple colocation providers to have true redundancy.
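    The redundancy point boils down to not depending on a single endpoint or region. A minimal sketch of the pattern follows; the URLs are placeholders, not real services.

        # Try the primary region first, then fall back to a secondary region or even a
        # different provider. Requires the third-party `requests` package.
        import requests

        ENDPOINTS = [
            "https://us-east.api.example.com/status",      # primary region
            "https://us-west.api.example.com/status",      # secondary region
            "https://backup-provider.example.net/status",  # different provider entirely
        ]

        def fetch_with_failover(urls=ENDPOINTS, timeout=3):
            last_error = None
            for url in urls:
                try:
                    resp = requests.get(url, timeout=timeout)
                    resp.raise_for_status()
                    return resp.json()
                except requests.RequestException as exc:
                    last_error = exc   # record the failure and try the next endpoint
            raise RuntimeError("all endpoints failed") from last_error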

    Others mentioned that there could be too much reliance on single cloud services, whether it’s Amazon, Microsoft, or IBM. Adam Thody, technology VP of Toronto development shop Normative, said this is the disadvantage of treating US-East-1 like a “personal server closet.” React.js Training co-founder Michael Jackson noted on Twitter that the AWS outage “reminds us that we all have a single point of failure now. In some ways, we used to be more resilient than that.”

    This first ran at http://www.thewhir.com/web-hosting-news/aws-north-virginia-data-center-outage-affects-netflix-heroku-others

    3:37p
    EMC Rolls Out Hyper Converged Infrastructure with Arista Switches

    Looking to simplify provisioning of compute, storage, and networking resources in a data center, EMC unveiled ScaleIO Node.

    The solution is based on software EMC gained when it acquired ScaleIO last year. It is now bringing that technology to market in the form of a hyper converged infrastructure system based on standard x86 servers. Bundled with those servers are EMC Storage Area Network products and network switches from Arista Networks.

    Aimed primarily at high-end IT organizations that are looking to mirror the way web-scale IT organizations now deploy infrastructure, the set of servers comes pre-integrated in a way that still lets customers choose what software they want to deploy on them, Jyothi Swaroop, senior director of product marketing with EMC ScaleIO, said.

    Customers can configure ScaleIO Node with whichever operating system or hypervisor they want. In the future, EMC plans to offer additional network switching options.

    The major differentiator is that ScaleIO Node is designed to scale well into the 1,000-node range, compared to rival hyper converged infrastructure systems that typically only scale to about 30 nodes, according to Swaroop.

    “In an ideal world we would just sell the software,” said Swaroop. “But customers are telling us they don’t have hundreds of engineers to configure systems.”

    In terms of performance, EMC claims that a 500-node implementation of ScaleIO can generate 100 million IOPS, which is eight times better than any traditional SAN offering currently available.
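    Taken at face value, that aggregate figure works out to an average of about 200,000 IOPS per node, as the quick check below shows (the figure itself is EMC’s claim, not independently verified).

        # Back-of-the-envelope check of the stated aggregate figure.
        total_iops = 100_000_000
        nodes = 500
        print(total_iops / nodes)   # 200000.0 IOPS per node on average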

    In general, EMC is moving well beyond its traditional storage base to compete more aggressively in the server space. Last week it revealed a global alliance with Dimension Data that will be making use of EMC servers and storage to deliver a broad range of public and private cloud services. The end goal of that effort will be to make it simpler for IT organizations to adopt modern server platforms without having to build and configure systems themselves.

    The degree to which IT organizations will adopt pre-configured hyper converged infrastructure systems such as ScaleIO or opt to rely on external service providers remains to be seen. Vendors increasingly see an opportunity to remove customers from the systems integration equation in the expectation that those resources will be reinvested in building and deploying more applications.

    4:21p
    Skype Outage Impacts Users Worldwide


    This article originally appeared at The WHIR

    Skype is reporting an issue on Monday that is blocking Skype users from making calls. It is unclear just how many users are impacted, but users in North America, Europe, and India have reported having issues on Twitter.

    The issue started Monday morning. According to an update from Skype, users signed in to Skype will not be able to change their status, and their contacts will show as offline.

    “We’re doing everything we can to fix this issue and hope to have another update for you soon,” Skype said in a status update. “Thank you for your patience as we work to get this incident resolved.”

    Skype had more than 300 million users as of 2013 and is one of the most popular and recognizable video call and chat services online, with more than 4.9 million daily active users. Earlier this year, Microsoft introduced Skype for Business, rolling out the service to Office 365 customers first.

    We are aware of an issue affecting Skype status at the moment, and are working on a quick fix: http://t.co/ymSzmrgEX0 pic.twitter.com/8LoqqL0hh7

    — Skype Support (@SkypeSupport) September 21, 2015

    In addition to disabled calling features, “a small number of messages to group chats are not being delivered, but in most cases you can still instant message your contacts.”

    Skype users may also not be able to sign in to use the service, and any changes to account information, such as credit balance or profile updates, may take a “little while to be displayed.”

    Users will also have difficulty loading Skype Community pages.

    This week Microsoft added the Object RTC API to the latest Insider Preview of Windows 10, making it possible for Skype users on the Microsoft Edge browser to make voice or video calls without a plugin.

    This first ran at http://www.thewhir.com/web-hosting-news/skype-outage-impacts-users-worldwide

    8:28p
    Data Center World: Stanley Cup Winner Bill Clement on Successful Leadership

    NATIONAL HARBOR, Md. — Being a successful leader has nothing to do with a job title and everything to do with the choices we make that lead to earning respect and trust from those around us.

    That’s the message two-time Stanley Cup winner and now retired 11-year veteran of the NHL, Bill Clement, gave during his keynote at Data Center World on Monday. He knows firsthand the importance of leadership, positive thinking, teamwork, and sacrificing for the good of the cause—on and off the ice rink.

    “It’s all about the ability to influence moods, attitudes, behaviors, and contributions to the workplace culture,” Clement said.

    Embrace Challenge

    The first choice that people must make on their journey to become a successful leader, he said, is embracing challenges, not just accepting them. The author of the book, EveryDay Leadership, used convergence between IT and facilities as an example. It used to be that two people managed the areas individually and more often than not didn’t see eye-to-eye. That disconnect continues to be a big problem in the industry, but today one person is increasingly being asked to do both jobs. There’s no room for barriers in the data center, said Clement, so it’s key that you embrace new roles and do your best to learn what they entail.

    He also suggested that people avoid living in the past and always look forward to the future. That’s particularly true for data center professionals who can ill afford to fall behind the technology curve, he said. Nowadays, that’s a recipe for disaster; not just for the data center but for the entire organization.

    Bill Clement speaks at Data Center World Fall 2015

    Don’t Be an Energy Vampire

    Another important aspect to leadership Clement focused on was energy—both positive and negative.

    “Energy sources are people that you can plug into when you’re down or just want to smile. When you’re in their company, it’s always a positive experience. They’re the first ones to help, and they leave every situation and every person better than they were,” Clement explained.

    “An energy vampire is somebody that sucks the life out of every situation they go into. They’re always sick, or tired, and you become sick and tired in their company. You get the feeling that failure is right around the corner. They leave everything worse than they found it.”

    Part of an energy vampire’s persona also involves the “90/10 principle,” said Clement. We have no control over 10 percent of the things that happen to us—the weather, global economy, the guy cutting in front of you in traffic, the government. Ninety percent of life is how we react to that 10 percent.

    It’s important to remain positive and to be a finder of solutions instead of a creator of problems, he said, especially when it might affect how other people perceive your data center or your company’s name.

    Push Versus Pull

    Finally, Clement spoke about the difference between pulling and pushing people. Pushing involves telling people what to do, where to go, what they’ll get paid, and when to leave. It’s leading from a position of authority. “Pulling is motivating to the point of inspiration and helping people feel that their contributions are important,” he said.

    Proudly sporting a #10 jersey under his jacket, Clement spoke passionately about the most influential leaders in his life, crediting them with the two Stanley Cup rings he wore as he addressed attendees. One was the captain of the Philadelphia Flyers, Bobby Clarke. Clement had missed the first two games of the seven-game Stanley Cup final series due to a ligament tear in his knee, and he feared he might not get the chance to play in one ever again. So he had his doctor remove the cast and tried to skate, but the inflexibility and pain proved too much.

    The team trainer suggested that he stop trying to skate and just hit the whirlpool before the start of game four and see how it felt. As Clement soaked his knee, Clarke came into the room and sat beside him. He said, “I need to tell you something,” Clement recalled. “I don’t think we can win the Stanley Cup without you. The minors can’t do what you can do. I don’t want you to hurt yourself, but when you’re ready to come back, we’re ready to take you back.”

    Clarke didn’t say anything negative, didn’t raise his voice, and made him feel vital to the outcome, Clement explained. Although he took the ice limping, Clement finished games four, five, and six and helped the Flyers clinch the championship against the Boston Bruins in game seven.

    “I wouldn’t have been in any of those games if it hadn’t been for Bobby,” Clement admitted.

    “Nobody is an Island”

    Another leader instrumental in his life was Flyers’ head coach Freddy Shero. After the team dropped the first game of the Stanley Cup final series with just 52 seconds left in the game, Shero gave the team the option of practicing for an hour and a half or playing nine holes of golf the next day. The team chose golf, and while the Flyers eventually won the series, Clement said he always wondered why Shero put his career on the line by allowing the team to spend that day at leisure. After all, had they lost by a landslide, he would have been criticized and possibly fired.

    When Clement finally got the opportunity to ask him that question, he said the coach replied: “The thought had crossed my mind about what might happen to my career, but I knew you knew how to skate and shoot and hit. I never coached a team that had a bond as good as you. I wanted you to laugh together and enjoy each other in an environment outside the ice rink.”

    Clement said the fact that Shero sacrificed his own needs for the good of the team and constantly gave more to the culture of the team than he took largely contributed to their success.

    “Nobody is an island, no one works in a vacuum or lives in a vacuum; at some point, we all need other people to ensure our success,” he concluded.

    10:25p
    Data Center World: Take Measures to Prevent Data Breaches, Avoid Liability Costs

    When it comes to determining whether an in-house data center, colocation, containers, the cloud, or some mix works best for your company, a broad array of factors must be considered in calculating the Total Cost of Ownership (TCO) for each scenario.

    However, as Mark Evanko of BRUNS-PAK told attendees at Data Center World on Monday, the calculation is no longer just about facilities infrastructure, energy efficiency, migration, network costs, or computer hardware and software, to name a few factors. In the face of growing data breaches and security flaws, data center managers must also put a price tag on the cost of keeping data safe and on the liability exposure should a breach occur.

    That means before doing all of the other legwork for calculating TCO, a thorough inventory and prioritization of the types of data being processed needs to happen, said Evanko. If most of your data falls into the critical category, i.e., health, top secret, financial, tax or Social Security records, research, or academic records, the cost of employing a security plan or repairing your company’s reputation after a breach could easily run into the millions of dollars.

    If your data is mostly non-critical in nature, such as social media, search engines, iTunes, surveys, maps, or non-sensitive market data, a breach might be more of an inconvenience than a blemish on your company’s reputation or a costly lawsuit.
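    As a rough illustration of how breach exposure can be folded into a TCO comparison, the sketch below adds an expected-liability term (probability of a breach times estimated impact) to facility and security costs. All figures are hypothetical, invented purely to show the shape of the calculation Evanko describes.

        # Hypothetical TCO comparison that includes expected breach liability.
        def annual_tco(facility_cost, security_cost, breach_probability, breach_impact):
            """Annual total cost of ownership including expected breach liability."""
            expected_liability = breach_probability * breach_impact
            return facility_cost + security_cost + expected_liability

        # Critical data ("crown jewels"): high impact if breached. Made-up numbers.
        in_house = annual_tco(facility_cost=2_000_000, security_cost=500_000,
                              breach_probability=0.02, breach_impact=50_000_000)
        colo     = annual_tco(facility_cost=1_200_000, security_cost=300_000,
                              breach_probability=0.05, breach_impact=50_000_000)

        print(f"in-house: ${in_house:,.0f}   colo: ${colo:,.0f}")
        # With these assumptions the cheaper facility is no longer the cheaper option
        # once expected liability is counted: in-house ~$3.5M vs. colo ~$4.0M per year.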

    “I am not against colocation or cloud,” Evanko said. “It’s right for a temporary app or non-critical data, but maybe you should keep the ‘crown jewels’ at home.”

    Liability has become a huge issue in the face of recent, highly publicized breaches at the IRS, Anthem Health, JPMorgan Chase, Target, Home Depot, and even Democratic presidential candidate Hillary Clinton.

    Target beat a lot of companies to the punch after it paid the US government $10 million to remove liability for the theft of customers’ records. Evanko estimated that Target permanently lost 5 to 10 percent of its customers as a result of the incident.

    Other companies may not get off nearly as easily. There’s legislation brewing that would make organizations far more accountable for breaches of personal information and require them to pay actual damages to individuals, something he thinks will reverse the trend toward cloud and colocation and push workloads back in-house.

    It’s an issue that is only going to become more complex as time goes on. Evanko posed some interesting questions about liability: “What is the responsibility of a board of directors to stockholders, or trustees of an academic university?”

    Those questions have yet to be answered; however, the BRUNS-PAK engineer believes that colos will eventually be slapped with responsibility for stolen data and the ensuing ramifications. The current liability of the third-party provider for damages is zero.

    “Liability will soon be extended down to the colocation provider along with everybody else that touches that data,” said Evanko. “Most colocation providers don’t automatically cover customers if their data is either stolen or corrupted.”

    Should that happen, costs to lease space could rise astronomically. Evanko gave an example of one company upping the price per square foot per month from $35 to $350 when it was told by the client that liability would fall on its shoulders should data be compromised.

    Unfortunately, the Identity Theft Resource Center predicts that it’s going to get a whole lot worse—and more expensive—before it gets better. Security breaches were up 20.5 percent last year and are expected to grow significantly over the next two years.

    This is clearly an issue that will challenge data center managers for years to come. Evanko suggests that data center managers make security a part of the TCO calculation, bring it to the attention of C-level executives, and let them make the decision to spend or accept the risk.

     

