Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, October 7th, 2015

    12:00p
    Equinix Intros Converged Infrastructure that Connects to AWS

    By offering direct, private network connections from colocation data centers or corporate IT facilities to their cloud servers, public cloud providers like Amazon and Microsoft have addressed two big hurdles to enterprise adoption of public cloud: concerns about security and performance. But setting up interconnects for cloud connectivity is a complex network engineering task, and many customers simply don’t have the necessary skills.

    Those who can afford it hire third-party professional services, and those who cannot try to do it themselves. Many in the latter category fail, said Greg Adgate, VP of global partnerships and alliances at Equinix. For Equinix, private interconnects between customer servers and cloud-provider servers inside its data centers are a rapidly growing business, but the company hasn’t had engineering resources allocated specifically to this work, instead using its solution architects and sales engineers to guide customers. Solution architects, however, “are not supposed to be out configuring routers,” he said. “They’re supposed to be designing solutions.”

    To close the gap, Equinix today announced new professional services to help customers configure their networks to work with Amazon Web Services Direct Connect through the Equinix Cloud Exchange platform and a blueprint for a Cisco-NetApp converged infrastructure solution for Direct Connect customers who don’t already have infrastructure inside Equinix data centers that can be connected to the Amazon cloud.

    Service Providers Race to Offer Direct Cloud Links

    Public cloud connectivity through private network links is promising to become a big business for data center colocation providers and network carriers. Microsoft has an equivalent offering for its Azure cloud called ExpressRoute, and Google Cloud Platform has something similar called Carrier Interconnect. The private connectivity service for IBM’s SoftLayer cloud is called Direct Link.

    While Equinix is one of the largest, AWS has numerous other major data center service providers as Direct Connect partners, including Colt, CoreSite, Datapipe, Interxion, NTT, and Telx. Major network carriers, such as Level 3, AT&T, Verizon, and Zayo, also have the potential to be strong competitors in this space, since they can provide direct connectivity from customers’ own corporate data centers to the cloud.

    Converged Infrastructure for Hybrid Cloud

    Equinix’s new professional services capabilities and the enabling technology, announced at this week’s AWS re:Invent conference in Las Vegas, come primarily from Nimbo, a certified AWS and Microsoft Azure partner Equinix acquired early this year. This is the first Nimbo-based offering Equinix has rolled out publicly, Adgate said.

    The converged infrastructure solution is similar to FlexPod, the pre-integrated hardware-and-software bundle that combines Cisco UCS servers and NetApp storage, but optimized for AWS cloud connectivity.

    The reference architecture has been tested and validated, and Amazon engineers played a big role in its creation, according to Adgate. “They participated in testing and validation and the design,” he said about Amazon.

    Amazon’s interest in making Direct Connect easier is substantial, according to him. Direct Connect enables enterprises to finally run production workloads in the cloud, and those workloads demand a lot of compute capacity.

    The bundle, sold through partners, is a quick way to create a hybrid infrastructure that combines on-prem servers and storage in an Equinix data center with public cloud connectivity services. “We have a partner community that has an opportunity to monetize this by wrapping a managed service around it,” Adgate said. “We have a number of NetApp partners and a number of Cisco partners. Some of them are delivering FlexPod today.”

    Cisco Intercloud Fabric Part of the Package

    The reference architecture includes Cisco’s Intercloud Fabric software. Intercloud, based on the open source cloud infrastructure software OpenStack, is a platform for connecting on-prem data center infrastructure to cloud providers that participate in Cisco’s cloud initiative, launched in 2014. Cisco has amassed a long list of Intercloud partners, comprising service providers and software vendors, including Equinix, QTS, Sungard, Peak 10, Basho, Citrix, Cloudera, Couchbase, Hortonworks, and Chef, among others.

    While Intercloud enables users to connect to a variety of cloud providers, the solution announced today is optimized for connecting to AWS specifically, Adgate said. Intercloud is a feature that comes with Cisco’s compute stack and is not required to connect to AWS via Direct Connect, according to him.

    Direct Connect is available at eight Equinix data centers around the world: three in the US, one in Germany, one in Singapore, two in Japan, and one in Australia. But customers can reach it via the Equinix Cloud Exchange from any of the company’s facilities in 33 markets worldwide.

    3:00p
    Open Source Needs Enterprise Developers

    Itamar Haber is Chief Developer Advocate for Redis Labs.

    Open source projects have risen in prominence over the past few years and are becoming important assets to enterprises. A recent report indicates that some 78 percent of enterprises use open source, and two-thirds build software for their customers that is based on open source software.

    If you take the case of open source DBMSes, IDC predicts adoption will grow six times faster than the overall DBMS market; as a result, 70 percent of new applications are projected to run on open source RDBMSes within the next few years. The importance of open source developers has grown as more enterprises come to rely on open source projects.

    Yet as the open source growth story spreads, the importance of enterprise developers to the maturity of open source projects is vastly underemphasized. While projects find a community among open source developers, enterprise developers are instrumental in making those projects viable for the demands of enterprise environments.

    The Benefits of Open Source

    Open source creates a community around software projects, spurring innovation, skirting issues of lock-in, and putting architectural control back into the hands of the enterprises using the technology. Open source also takes away the fear of being constantly at the mercy of vendors who often bundle unnecessary add-ons into cash-cow license purchases, creating shelfware with little real value. A strong community ensures that projects continue to grow and improve, that cutting-edge technologies remain accessible to all, and that the pace of innovation continues. In recent years, the creation of new open source foundations supported by major vendors such as Google, Facebook, VMware, IBM, Red Hat, and others reflects the movement toward letting innovation flourish collaboratively, outside organizational boundaries.

    However, while open source developers are key to the health of a community, enterprise developers play an equally important role in helping projects to mature. As an example, Salvatore Sanfilippo, creator of the staunchly open source software Redis, was surprised to note that within two years of Redis becoming available, he was being approached by developers inside very large enterprises who were using Redis to handle increasing volumes of traffic. They were using Redis under the radar because it satisfied a need their old school software platforms could not even conceive of handling. He was even more surprised when these enterprise developers started to change the conversation within the Redis community.

    A Tale of Two Developers

    While open source developers often focus on the creation of new and interesting features to the core technology, enterprise developers are more conscious of solving problems with the right enterprise controls in place. As enterprise developers become more involved in a project, they gradually change the conversation around the project to issues and implementations that are important within an enterprise.

    With Redis for example, open source developers drove toward increasingly efficient commands and new data structures to satisfy their cutting-edge needs. Enterprise developers added a bent toward scalability, availability and reliability, which led to a focus on improving high availability and making clustering available.

    A similar emphasis on the concept of high availability (HA) and clustering in the early 2000s allowed Linux to really stack up in the data center.

    Enterprise developers will also work doggedly to resolve tough issues, jumping hurdle after hurdle until a solution is found, because fixing them often solves the kinds of production problems that keep them up at night. Open source developers might not have the same motivation to track a bug for two weeks straight, because the issue may not matter as much as the problem they are trying to solve.

    While the interests of open source and enterprise developers seem to be at odds, striking a balance becomes vital, both for the adoption of the technology among the community and for its advancement as a solution for industry scenarios that demand stability and scalability.

    A Balanced Ecosystem

    In the fast-moving world of software development, you need open source developers who can innovate in very fast iterations to push functionality forward. At the same time, open source communities need enterprise adoption to push the boundaries of reliability and stability, helping a project move from its early, innovative stages to a solution that works in real-world industry situations.

    The difficulty for every project that is walking the line between innovation and becoming enterprise ready is balancing the demands of both groups. With open source, managing change can be a challenge and the community often has diverse opinions, but the greater the diversity and the more developers, the more likely it is that a project is successful.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:50p
    Switch Expands to Europe with €300M Milan Data Center Build

    Switch, the Las Vegas data center provider known for its penchant for massive-scale data center campuses, flashy futuristic interior design, and armed ex-military security personnel, is expanding into Europe. The company has kicked off construction of a 450,000-square-foot data center in Siziano, Italy, just outside of Milan.

    The project is overseen by Supernap International, an entity formed by Switch in partnership with investors ACDC Fund, Orascom TMT Investments, and Accelero Capital. The companies are investing about €300 million ($338 million) in the project.

    Khaled Bichara, CEO of Orascom and co-CEO of Accelero, has been appointed to lead Supernap International as the company’s CEO. In a statement, Bichara said Milan was only the first international location for Switch.

    “We are pleased to begin the development of our model starting from Italy,” he said. “Our project is based on a very precise mission: using redundant systems and smart design to respond to the businesses’ needs to cover multiple geographical areas, and thus to circulate considerable amounts of data between countries.”

    The 40-megawatt facility will provide about 19MW of critical power across four data halls. As it has done in its Las Vegas data centers, Switch plans to design and certify the facility to the Uptime Institute’s Tier IV reliability standard.

    Milan is not a major data center market, but Switch has not pursued top-tier markets in the US. Its first location outside of Las Vegas will be in Reno, Nevada, where it is building a $1 billion data center campus with eBay as the anchor tenant.

    In Milan, its biggest competitor will be TelecityGroup, a European data center heavyweight currently going through the process of being acquired by Equinix, the Redwood City, California-based colocation giant, for $3.6 billion. Telecity has three data centers in the Milan market.

    4:35p
    Gartner Symposium/ITxpo 2015: Failure IS an Option


    This post originally appeared at The Var Guy

    By Charlene O’Hanlon

    Successful companies are the ones that let—no, encourage—their employees to fail. It’s what Pixar does, and there’s no arguing the successes it’s had.

    Such was the tenor of the conversation between Bob Safian, editor in chief of Fast Company, and Ed Catmull, president of Pixar and Disney Animation Studios, during the Wednesday keynote discussion at Gartner Symposium/ITxpo 2015.

    “There is a real palpable danger around failure,” Catmull said. “It’s impossible for people to emotionally separate both the negative and the positive meanings of failure.”

    Positive failure enables people to view the experience with more perspective, ultimately regarding it as a lesson learned. But negative failure carries baggage—feelings of inadequacy, embarrassment, depression—that does more harm than good.

    “Failure is a necessary consequence of doing something new,” he said. “If you’re able to realize [positive failure and negative failure] are two different things, you can separate the emotion from it. They have both meanings, and if people are not aware of them they will fall back to the negative meaning. But if you allow them to fail, you turn it into a learning experience.”

    Companies, he said, too often focus on failure as a negative, which leads to an unhappy and ultimately less productive workforce.

    “You need to allow people to make mistakes,” he said.

    Catmull noted he has said every movie Pixar has made “sucked at the beginning.”

    “If things don’t suck at the beginning, you’re almost done,” which cuts the creative process short. “Most times, though, it doesn’t work.”

    Too often, he said, management judges a team by the ideas it has, which is wrong. At Pixar, “our measure of the team isn’t the output, it’s the spirit. If they are having a good time and working hard, they also know it sucks and they will make the changes.

    “The only time we have failure is when the team falls apart,” because of negative failure and negative judgment.

    Catmull recalled taking his management style to Disney, which went through a long spate of animated movie failures before Catmull joined the organization.

    “Because they had failed they were open to a different way of thinking about things. We got rid of ‘there’s a right way of doing things.’ The notion there had been ‘feed the beast.’ It doesn’t have a negative connotation, but it’s where the bulk of the costs are and the most revenue—it’s production. So we told them not to confuse the creative front end with production.”

    The result was a string of hits including “The Little Mermaid,” “Aladdin” and “The Lion King,” to name a few. Along the way, Catmull noted, a few corporate concepts ended up by the side of the road.

    “We decided it is better to fix problems than to try and prevent them all. Some corporate policies are put into place to prevent errors. But often they end up dragging things out,” he said.

    Rather, companies should encourage their employees to do what they believe is best for the company, even if it sometimes goes against corporate policy.

    “Zero errors is meaningful in some places, such as aircraft industry or medical industry or financial or manufacturing,” he continued. “It’s an easy concept there. But life is not like that. The concept of zero errors gets in the way of thinking how to get there creatively.”

    This first ran at http://thevarguy.com/information-technology-events-and-conferences/100715/gartner-symposiumitxpo-2015-failure-option


    6:46p
    AWS Finds Way to Move a Lot of Data to Cloud Faster – by Putting it on a Shipping Truck

    Two of the things Amazon has proven it is really good at as a company are shipping packages and storing and processing data. This morning at its AWS re:Invent conference in Las Vegas, the company’s cloud services arm announced a data migration service that combines both.

    Enterprise data migration to the Amazon Web Services cloud over a Wide Area Network can be a lengthy process and a costly one in terms of network bandwidth consumption. Moving 100 terabytes of data from an on-premises data center to an AWS one, for example, can take as long as 100 days, according to Andy Jassy, senior VP for AWS, who delivered the event’s opening keynote.
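    To put Jassy's figure in perspective, 100 TB in 100 days works out to a sustained throughput of only about 93 Mbps. A quick back-of-the-envelope sketch, assuming decimal terabytes and a perfectly sustained link:

    ```python
    def wan_transfer_days(terabytes, mbps):
        """Days needed to move `terabytes` (decimal TB) over a link
        sustaining `mbps` megabits per second."""
        bits = terabytes * 1e12 * 8          # total bits to move
        seconds = bits / (mbps * 1e6)        # time at the sustained rate
        return seconds / 86400

    # Rate implied by the keynote example: 100 TB in ~100 days.
    implied_mbps = (100 * 1e12 * 8) / (100 * 86400) / 1e6
    print(round(implied_mbps, 1))                   # ≈ 92.6 Mbps sustained
    print(round(wan_transfer_days(100, 1000), 1))   # ≈ 9.3 days on a full gigabit link
    ```

    Even a dedicated gigabit link, fully saturated around the clock, would still take more than a week for the same dataset, which is the gap Snowball targets.
    
    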

    Moving that much data over a long distance can be much faster with shipping trucks. The new Amazon Snowball appliance is a high-capacity data storage server in a rugged, tamper-proof, waterproof enclosure. It shows up at your data center door after you order it and gets picked up when you’re done loading data onto it, to be taken to an Amazon data center for upload into your AWS environment.

    As enterprises start to use more public cloud services, such as AWS, Microsoft Azure, or IBM SoftLayer, they identify more applications and data they host in their corporate data centers that can be moved to the cloud. The ability to do that enables some of them to reduce their on-prem data center capacity needs, so they look at cloud as an opportunity to consolidate data centers and spend fewer resources on managing physical infrastructure.

    Data migration at large scale is one of the big challenges in moving corporate applications to the cloud, and Snowball is Amazon’s answer to that challenge.

    The first model has 50TB of storage capacity. The enclosure comes with the necessary cabling and a mounted Amazon Kindle that displays the shipping label. Once the unit is full, the label changes automatically to display its next destination, and UPS is notified that the unit needs to be picked up.

    It encrypts data automatically and, if needed, converts it to the object storage format. Once the Snowball arrives at an AWS data center, the data gets uploaded into S3, Amazon’s cloud storage service, and decrypted.

    With two Snowballs, you can move 100TB of data to AWS inside of a week instead of 100 days, Jassy said. Each unit costs $200 to rent and ship two ways.
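    Given the 50TB capacity and $200 per-unit fee cited above, sizing a Snowball order is simple arithmetic; a minimal sketch:

    ```python
    import math

    SNOWBALL_TB = 50    # capacity of the first Snowball model
    UNIT_FEE = 200      # two-way rent-and-ship fee per unit, per the keynote

    def snowball_order(terabytes):
        """Return (units needed, total fee in dollars) for a dataset of `terabytes`."""
        units = math.ceil(terabytes / SNOWBALL_TB)
        return units, units * UNIT_FEE

    print(snowball_order(100))   # Jassy's example: (2, 400)
    ```

    At $400 for 100TB in about a week, the per-terabyte cost is trivial next to the bandwidth bill for the same transfer over a WAN.
    
    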

    7:11p
    Rackspace Brings Signature Fanatical Support to AWS


    This article originally appeared at The WHIR

    Rackspace announced the launch of its anticipated AWS managed services offerings Tuesday at AWS re:Invent. A post on the official Rackspace blog accompanying the announcement asked the obvious question: Why is Rackspace supporting Amazon Web Services?

    The short answer is “customer demand,” as Rackspace CEO Taylor Rhodes explained in August, when the company confirmed rumors that the partnership was in development. At the time, Rhodes cited slower-than-hoped-for growth in Rackspace’s public cloud products. Fanatical Support services, however, have remained strong.

    “We have been with Rackspace since 2004 and the company has a proven reputation of delivering the highest level of customer support,” said Jonathan Issler, Director of Operational Technology at Infosnap. “We have some workloads that run on AWS as well and we’ve researched other companies that could potentially provide support for this platform, but none were a good fit. When Rackspace approached us about its Fanatical Support for AWS offering, we signed immediately based on the strength of their managed cloud experience and expertise.”

    Competing directly with AWS, Google Cloud Platform, and Microsoft Azure makes less and less sense (and money), and Rackspace’s core business has been evolving toward managed services. It began offering managed Azure services last November.

    The initial Rackspace AWS services launched at re:Invent are Fanatical Support for AWS, Rackspace Managed Security and Compliance Assistance for AWS, and Rackspace Managed Cloud for Adobe Experience Manager.

    “Our mission is to provide best-of-breed customer service and expertise on top of the world’s leading technologies, and we see substantial demand from customers who want to leverage AWS for mission critical applications,” said Chris Cochran, Senior Vice President and General Manager at Rackspace. “Launching our new business unit and adding AWS support to our portfolio of managed service offerings enables us to give our customers more choice in their infrastructure deployments.”

    Fanatical Support for AWS is divided into “Navigator,” which provides Rackspace tools and expertise, and a fully managed offering called “Aviator.” Both are currently available to US customers and in beta for non-US customers, except for AWS China and US GovCloud regions. Both Rackspace Managed Security and Compliance Assistance for AWS are available in beta for Fanatical Support for AWS customers. Rackspace Managed Cloud for Adobe Experience Manager provides a managed automation platform on AWS to reduce the complexity associated with Experience Manager. It is now available in three AWS regions in the US.

    This first ran at http://www.thewhir.com/web-hosting-news/rackspace-brings-signature-fanatical-support-to-aws

