Data Center Knowledge | News and analysis for the data center industry

Wednesday, November 12th, 2014

    1:00p
    Datapipe, Equinix Meld Colo and AWS in Fully Managed Hybrid Environments

    Equinix and Datapipe have teamed up to provide hybrid IT solutions that combine managed Amazon Web Services cloud and data center colocation. It has been possible to create such hybrid environments out of Equinix data centers before. What's new is the fully managed services offered by Datapipe.

    “We’re working on a joint go-to-market with Equinix,” said Datapipe vice president Craig Sowell. “We’ll build and deploy infrastructure into an Equinix facility, then leverage our knowledge of AWS and Direct Connect. Our expertise is managing multiple platforms.”

    Datapipe has been managing AWS deployments for customers since 2010. The new offering will include dedicated IT environments within Equinix data centers with private connectivity to Amazon’s public cloud.

    The companies have not yet revealed which Equinix data centers the service will initially be available in, but the plan is to eventually roll it out across the company’s entire global footprint.

    “We see our abilities around managed hybrid as real differentiation right now,” said Sowell. “There are some providers that do parts, but nobody really does it globally. Partnering with Equinix makes complete sense because they have top-notch data centers.”

    Demand for hybrid IT services on the rise

    The collaboration is aimed at customers with compliance needs who also want to leverage the flexibility of pay-as-you-go public cloud services.

    “We’re seeing a lot more AWS adoption in the enterprise,” said Sowell. “We’re seeing it increase in large enterprise clients for more mission critical apps that require multi-region high-availability architectures.”

    Equinix said recently that private connectivity services to public clouds represent its fastest growing segment, and a partner that makes those services easier for customers to leverage will only accelerate that growth.

    Datapipe benefits from access to Equinix’s customer base and by being able to bundle colocation space with its offerings.

    “Our collaboration with Datapipe on a managed hybrid cloud solution for AWS removes many of the common barriers to cloud adoption,” said Chris Sharp, vice president of cloud innovation at Equinix. “It offers customers the best of both worlds by providing Equinix’s secure data center platform, including private access to AWS, along with Datapipe’s expertise in designing and managing an optimal IT architecture for enterprises.”

    Datapipe specializes in compliant managed hosting and cloud services. It recently expanded its government sector capabilities with the acquisition of Layered Tech in August; Layered Tech formed the basis of its government business services unit.

    Datapipe solves major cloud security concern

    The company also recently enhanced its managed security services for AWS around access control, which Sowell believes will be attractive for enterprise customers.

    "In many instances you have to hand over root credentials to the hosting provider," he said. "Risk-averse clients aren't comfortable with that. What we've done is we've defined a model in which we can manage our client's account without any Datapipe employee needing access to those root credentials. We see it as a tipping point for us."
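
    The article does not spell out the mechanics of that model, but a common way to administer a client's AWS account without ever holding its root credentials is a cross-account IAM role that the provider assumes on demand. A minimal sketch, assuming the boto3 SDK and a hypothetical role ARN the client would have created in its own account:

        import boto3

        # Hypothetical role created by the client in its own account and
        # granted to the provider's AWS account through a trust policy.
        CLIENT_ROLE_ARN = "arn:aws:iam::111122223333:role/ProviderManagedOps"

        def assume_client_role(role_arn, session_name="managed-ops"):
            """Fetch short-lived credentials for the client account.

            No root credentials change hands; access is scoped by the role's
            policy and can be revoked by the client at any time.
            """
            sts = boto3.client("sts")
            resp = sts.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
            creds = resp["Credentials"]
            return boto3.session.Session(
                aws_access_key_id=creds["AccessKeyId"],
                aws_secret_access_key=creds["SecretAccessKey"],
                aws_session_token=creds["SessionToken"],
            )

        if __name__ == "__main__":
            session = assume_client_role(CLIENT_ROLE_ARN)
            ec2 = session.client("ec2")
            print(len(ec2.describe_instances()["Reservations"]), "reservations visible")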

    Security remains a major concern for enterprises whose IT leaders consider using public cloud services. In a recent survey of more than 650 global enterprise IT leaders, security, reliability, and performance were listed as the top three drivers for a direct connection to the cloud.

    Datapipe has been offering direct connections to AWS from its own data centers in San Jose, Calif., Seattle, Ashburn, Va., New Jersey, London, Iceland, Singapore, Hong Kong, and China.

    4:30p
    Caution Signs on the Road to a Software-Defined Data Center

    Kent Christensen leads Datalink’s virtualization practice, directing the adoption of virtualization hardware and software technologies and services.

    If you're an IT person involved in data center operations, you've heard a few things about the software-defined data center (SDDC). This is one of the latest acronyms touted by a variety of industry vendors and open source organizations. In fact, just about every main vendor involved in servers, cloud, networking or storage (or the software to manage any of these areas) has its own vision for SDDC. Big SDDC proponents include VMware, Cisco and OpenStack, among others.

    What SDDC is all about

    Currently, the SDDC acronym is easier to spell out than it is to define. It is a concept that is as much about IT architectural theory and philosophy as it is about the high-level technical platform or ‘dashboard’ you may ultimately deploy to automate, monitor and manage your emerging service-oriented (ITaaS) or cloud architecture.

    The overall SDDC vision goes something like this: Someday, software (via a software-based control plane or overarching management console) will automate the running of just about everything in the data center, from compute and network to storage. Said software will also logically abstract (or virtualize) features of the underlying hardware so that you might, conceivably, use various commodity hardware components. Your software-based controls for all these moving parts of the data center will move up the stack to reside in a universal software platform.
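
    Stripped of any particular vendor's API, the core mechanism behind that vision is a control loop that continually reconciles declared intent against observed state. A toy illustration in Python (every name below is made up for illustration, not drawn from VMware, Cisco, or OpenStack):

        # Toy "control plane" loop: compare a declared desired state with the
        # observed state of the data center and emit the actions needed to
        # close the gap. Purely illustrative; no product API is represented.
        desired_state = {
            "web-tier": {"vm_count": 4, "network": "dmz",     "storage_gb": 200},
            "db-tier":  {"vm_count": 2, "network": "backend", "storage_gb": 1000},
        }

        observed_state = {
            "web-tier": {"vm_count": 3, "network": "dmz",     "storage_gb": 200},
            "db-tier":  {"vm_count": 2, "network": "backend", "storage_gb": 500},
        }

        def reconcile(desired, observed):
            """Return the actions needed to bring reality in line with intent."""
            actions = []
            for tier, spec in desired.items():
                current = observed.get(tier, {})
                for key, want in spec.items():
                    if current.get(key) != want:
                        actions.append(f"{tier}: set {key} from {current.get(key)} to {want}")
            return actions

        for action in reconcile(desired_state, observed_state):
            print(action)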

    For purveyors of SDDC, this vision is the ultimate Holy Grail for how a next-generation data center should operate.

    Sounds good so far. So, what’s the problem?

    The challenge with SDDC is that most of the vision is still just that: A vision, with not too many clear, real-world use cases. Early adopters tend to be hyper-scale cloud providers like Google, Amazon and Microsoft, who use their own, homegrown SDDC constructs. Other highly competitive companies or cloud service providers may also see the need to gain extra competitive edge with a faster move to embrace SDDC. But these are still a relatively small breed.

    Some of what I’ve described about SDDC might sound a lot like what your IT organization is already doing. Many are doing advanced server virtualization with advanced management of dynamic workloads. On the storage and network side, many are also doing something similar, with software policy-based functionality that helps automate and manage many virtual components of traditional hardware.

    Many have progressed from pockets of virtualization to the wider development of virtual data centers (VDC) and the use of converged infrastructures (CI), fabric architectures or unified “pods.” These unite many layers of the compute/server/storage stack together, often with deep integration by vendors that strives to automate many previously manual operations associated with resource configuration, provisioning and monitoring.

    Are all of these SDDC? They are part of it. The missing piece remains the higher layer of automation, orchestration and management that ties it all together. At this point, this piece is more vision than reality for most of today’s data centers.

    Move forward or wait?

    Some vendors would argue they have SDDC's missing pieces right now and can give you the exact blueprint of steps needed to bring it to your data center. Even if you aren't ready yet to jump feet-first into the land of SDDC, you've probably already begun the journey.

    We tend to see SDDC as part of an incremental journey, just like IT’s journey to VDCs, private cloud, and ultimately, as brokers of an ITaaS-based hybrid cloud world that offers an efficient mix of internal and external cloud services. In this evolution, we see organizations developing their hybrid strategy as part of a larger SDDC push toward data center automation and orchestration.

    Knowing SDDC is part of the journey, what advice can we offer?

    1. Study the visions of key, early SDDC proponents like VMware, Cisco and OpenStack. These have high-level ideals in common, but their execution surrounding SDDC is very different. In the case of OpenStack, you are dealing with open source software that may offer less vendor lock-in but may still be somewhat immature for enterprise deployment. On the VMware and Cisco side, study how much vendor lock-in might be involved if you go with one vision or the other and want to experiment or switch later to another SDDC management layer.
    2. There are a lot of ponies in this race. Pick yours carefully. You may find you’ve already bought into the current vision of your main hypervisor vendor or your main networking or storage vendor. Or, you might be an early fan of open source methods. You may even find you like a vendor’s emerging roadmap that gets you from where you are to that vendor’s vision of SDDC.

    Before you make large investments in the higher-level abstraction of SDDC, consider a smaller pilot or trial period. Remember: he who controls the abstraction layer will have inherent control of your data center. As your data center transforms into more of a hybrid cloud architecture, he who controls that abstraction layer will also have more inherent control over your cloud operations. This harks back to some of the early points I made in 2012 when I urged readers to own their own cloud.

    By all means, move forward toward the utopian ideal of SDDC. But, as the construction signs say: Caution. Proceed with Care.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:00p
    Data Center Jobs: ViaWest

    At the Data Center Jobs Board, we have a new job listing from ViaWest, which is seeking a Data Center Engineer – Electrical Emphasis in Englewood, Colorado.

    The Data Center Engineer – Electrical Emphasis is responsible for monitoring the building's HVAC, mechanical and electrical systems, performing preventive maintenance, site surveys and replacement of electrical and/or mechanical equipment, overseeing vendor facility maintenance, reading and interpreting blueprints, engineering specifications, project plans and other technical documents, and performing operation, installation and servicing of peripheral devices. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    6:51p
    Peak, IaaS for Cloud Resellers, Raises $9M in Equity

    Peak, a company that sells “white label” cloud services, meaning others sell its services as their own, has raised $9 million in equity funding, making for a total of $16 million raised since December 2013.

    Formerly named PeakColo, the company places a lot of focus on developing its technological capabilities. Its cloud infrastructure is based on VMware vCloud, NetApp, Cisco UCS, and Open Compute servers.

    The recent $9 million round comes from current investing partners Meritage Funds and Sweetwater Capital, as well as new investor Ares Capital.

    The company has been expanding its cloud node capacity and signing new key partnerships, including a recent agreement with Telx to make its services available to cloud resellers through the data center provider’s cloud exchange. It will use the funds to continue expanding.

    Peak raised $7.5 million in 2012 to expand into new geographic markets, including booming Chicago.

    “With the additional funding, Peak is able to expand into additional markets, serve a larger client base, and expand product development,” Luke Norris, the company’s founder and CEO, said in a statement.

    Making Cloud Connectivity Easy

    Peak has been focused on one of the biggest hurdles to adding cloud to existing services: the network. It has done work on interconnecting clouds, and last year published two patents for accomplishing cloud connections via Layer 2 connectivity, as opposed to Layer 3.

    Its Peak to Peak Direct Connect service gives a cloud reseller a way to connect a customer's own data center to Peak's cloud through a dedicated private network link.

    The Layer 2 connection means customers can keep their own IP schema, firewalls, and routing, which makes it less complex than connecting through Layer 3. Converting to Layer 3 is usually a time-consuming process, particularly for larger companies with a multitude of devices.

    The company has focused on making it easy for channel partners to connect and establish Peak clouds that play nice with existing setups.

    IT consulting and lifecycle solutions provider Komodo Cloud recently tapped the Layer 2 capabilities. “Their approach is radically different from their competitors,” said Eric Hughes, CEO of Komodo. “With their expertise and patented private Layer 2 direct connectivity, we can move our clients to the cloud extremely fast.”

    Peak hooked up with Komodo through one of its large distributors, Arrow Electronics.

    Other customers include Westcon, Comstor, and Avnet. Cloud nodes are in eight markets across the U.S. and Europe: Silicon Valley, Seattle, Denver, Chicago, New Jersey, New York, Atlanta, and the U.K.

    7:30p
    GE’s New Dual-Genset Switchgear Can Manage 64 Loads

    GE announced its latest paralleling switchgear for mission critical facilities such as data centers and hospitals; the system can manage failover for 64 separate power loads in a building.

    Such granularity provides a lot of flexibility in setting up how a building’s backup power system works. For example, the system can be programmed to switch power off in a cafeteria if a server room needs the capacity in case of a utility outage.

    Another big feature in the latest Digital Commander Paralleling Switchgear is the two-generator configuration. This makes it possible to do maintenance on a generator without affecting the data center’s ability to withstand an outage, since another generator is available.

    By combining the dual-genset feature with the ability to manage many separate loads, the system can be configured so that, during a utility outage with one generator out of service, critical loads are carried by the working generator while power to non-critical areas of the facility is switched off.
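
    GE has not published the control logic behind this behavior, but it amounts to priority-based load shedding; a rough illustrative sketch in Python, with entirely hypothetical loads and capacity figures:

        # Illustrative priority-based load shedding: keep the most critical
        # loads energized within the remaining generator capacity and shed the
        # rest. Numbers are hypothetical, not GE specifications.
        loads = [
            # (name, priority: lower is more critical, demand in kW)
            ("server room A", 1, 400),
            ("server room B", 1, 350),
            ("chillers",      2, 300),
            ("offices",       3, 150),
            ("cafeteria",     4, 100),
        ]

        def shed_loads(loads, available_kw):
            """Return (energized, shed) load names for the given capacity."""
            energized, shed = [], []
            remaining = available_kw
            for name, _priority, demand in sorted(loads, key=lambda l: l[1]):
                if demand <= remaining:
                    energized.append(name)
                    remaining -= demand
                else:
                    shed.append(name)
            return energized, shed

        # One of the two generators is out of service, leaving 800 kW.
        on, off = shed_loads(loads, available_kw=800)
        print("energized:", on)
        print("shed:", off)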

    “If a generator fails and backup power is needed, mission-critical facilities can’t afford to have power interruptions,” Travis Deutmeyer, product manager for GE’s Critical Power business, said in a statement. “Digital Commander helps to prevent these interruptions and gives facilities the flexibility to prioritize their power needs and redirect power to where it’s needed the most.”

    The new GE switchgear is called Digital Commander because of its fully digital design. It doesn't have analog meters or require manual processes; everything is done through its 18-inch touchscreen.

    The system can control up to 16 generators from the screen. It supports a range of vendors, including Caterpillar, Cummins, Kohler, and MTU Onsite Energy.

    8:14p
    IBM Wants to Load Doctor Watson Onto Your Smartphone

    IBM has invested in a genetic testing services laboratory in a bid to give Watson, its self-learning computing system that can be controlled by human voice, the ability to answer questions a user may have about their personal health.

    The investment makes Pathway Genomics Corp. one of the best capitalized healthcare startups, with $80 million raised in total, according to IBM. IBM did not disclose the size of its investment.

    The ultimate goal is to build a mobile app that will be called Pathway Panorama and draw on millions of pages of medical journals, clinical trial data, the user’s DNA, healthcare records, and lifestyle data to answer questions posed in natural language. The user will be able to ask the app things like how much exercise they should do or how much coffee they can drink on any particular day, for example.

    “The medical industry is undergoing a dramatic and systemic change, giving consumers and their physicians a powerful tool built upon cognitive learning, and Watson will make the change even more transformative,” Michael Nova, chief medical officer at Pathway Genomics and member of Watson Advisory Board, said in a statement.

    For IBM's Watson, Healthcare Is Only One Application

    IBM first introduced Watson to the public when the system appeared on the game show Jeopardy in 2011, beating two of the show's former champions. Since then, the company has been hard at work productizing the technology, which not only understands the subtleties of human speech but also learns as it goes.

    IBM already has numerous cloud services that leverage Watson and provides APIs developers can use to add Watson capabilities to their own applications through its Bluemix Platform-as-a-Service.

    Earlier this year, the company committed $1 billion to investments in the various businesses around Watson and opened a headquarters building in New York City dedicated exclusively to Watson.

    8:30p
    Hackers Use DNS TXT Records to Amplify DDoS Attacks: Akamai Report


    This article originally appeared at The WHIR

    Cybercriminals are using DNS TXT records in order to amplify DDoS attacks, according to a security bulletin (PDF) published on Tuesday by Akamai’s Prolexic Security Engineering and Research Team (PLXsert). Several campaigns observed since October 4 included large DNS TXT records crafted from White House press releases, which PLXsert says attackers can use to amplify responses and direct the resulting traffic to target sites including DNS servers.
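
    The amplification factor is simply the ratio of response size to query size. A small measurement sketch using the dnspython library (an assumption here, and querying an arbitrary domain rather than the one named in the advisory) shows how a large TXT record inflates that ratio:

        # Compare the size of a small TXT query with the size of the response
        # it triggers. Requires the dnspython package; the domain and resolver
        # below are arbitrary examples, not those from the Akamai advisory.
        import dns.message
        import dns.query

        def amplification_factor(domain, server="8.8.8.8"):
            query = dns.message.make_query(domain, "TXT")
            response = dns.query.udp(query, server, timeout=5)
            q_len = len(query.to_wire())
            r_len = len(response.to_wire())
            return q_len, r_len, r_len / q_len

        if __name__ == "__main__":
            q, r, factor = amplification_factor("example.com")
            print(f"query {q} bytes, response {r} bytes, amplification {factor:.1f}x")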

    PLXsert suspects the attacks are launched using the DNS flooder tool, which Akamai also released a threat advisory for on Tuesday.

    Attackers have used large TXT records to attack sites including isc.org and many .gov sites in the past, but TXT records crafted specifically to increase response size have only been observed recently. The crafted TXT records observed in the October campaign originated from the guessinfosys.com domain.

    The DNS reflection and amplification attack peaked at 4.3 Gbps and was targeted primarily at the entertainment industry, though high tech consulting and education companies were also targeted. The attacks spanned anywhere from about 7 to 17 hours, varying in intensity and duration throughout October.

    “DNS reflection attacks can be blunted at the network edge. An access control list (ACL) would suffice but only in cases where available bandwidth exceeds attack size,” said Bill Brenner, Akamai Senior Program Manager for Editorial, Information Security Group in a blog post. “Some DNS servers will attempt to retry the response using TCP, but when the request is sent to the target host, no transfer will occur and the attempt will fail.”

    Akamai recommends a cloud-based DDoS protection service, such as the one it offers, to defend against the amplified reflection attacks, which the bulletin says use the same tactics as similar campaigns that abuse SNMP, SSDP, or CHARGEN.

    Regular threat advisories from Prolexic and Akamai's State of the Internet reports have documented an ongoing increase in DDoS attack frequency, size and duration, as well as new strategies used by cybercriminals.

    Industry responses have included bringing more DDoS mitigation solutions to customers, as when CloudSigma began offering Black Lotus protection to its cloud hosting customers last week. Service providers can also keep informed of their options and opportunities through events like a webinar on the changing DDoS landscape presented by the WHIR on Wednesday afternoon.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/hackers-use-dns-txt-records-amplify-ddos-attacks-akamai-report

    9:00p
    AT&T Delays New Fiber Rollout Until Net Neutrality Rules Sorted Out


    This article originally appeared at The WHIR

    AT&T CEO Randall Stephenson told investors on Wednesday that the telecom will wait to roll out fiber connections to 100 cities until rules around net neutrality are solidified.

    According to a report by Reuters, Stephenson told investors that AT&T “can’t go out and invest that kind of money deploying fiber to 100 cities not knowing under what rules those investments will be governed.”

    AT&T announced its plans to expand its fiber network up to 100 cities, including 21 new major metropolitan areas, in April. The list of 21 candidate metropolitan areas included Atlanta, Chicago and Miami.

    Earlier this week, US President Barack Obama released a statement on net neutrality, urging the FCC to use “common sense” when coming up with the rules, which ideally would include a ban on paid prioritization. Obama also prompted the FCC to reclassify consumer broadband service under Title II of the Telecommunications Act.

    AT&T GigaPower currently has speeds of 300 megabits per second and plans for a 1 Gbps speed upgrade in the near future, according to a report by The Motley Fool on Sunday, which compared AT&T fiber to Google Fiber. The report noted that building the infrastructure to support fiber is not cheap.

    “Goldman Sachs estimates that if Fiber were to reach just 50 million households, less than half of all US homes, Google would have to spend up to $70 billion, or $140 billion to provide nationwide coverage,” the report said.

    “In other words, Google doesn’t have the operating cash or the support from investors to invest in Fiber at a rate to remain competitive with AT&T’s construction. In 2013, Google spent $7.35 billion on capital investments, AT&T spent $21.2 billion, and therefore it’s hard to imagine where Google can find those additional billions of dollars annually to reach 50 million households faster than AT&T.”

    It remains to be seen if AT&T’s delay in rolling out fiber due to net neutrality concerns will impact its position in the fiber internet market.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/att-delays-new-fiber-rollout-net-neutrality-rules-sorted

    11:14p
    AWS Rolls Out New Cloud Database, Slew of Enterprise and Developer Features

    Amazon Web Services executives talked about making the cloud even more developer friendly and "consumerizing" many enterprise processes and tools in Wednesday's keynote at the company's re:Invent conference in Las Vegas. The theme was rebuilding traditional enterprise solutions and processes to capture the agility and speed of cloud, illustrating that the enterprise is a huge focus for AWS.

    Amazon announced a new AWS database engine called Aurora, more services for developers, key management and encryption enhancements, and an upcoming service catalog. Google, Amazon’s rival in public cloud services, also recently rolled out a slew of new cloud services.

    Refreshingly, keynote speakers did not bring up cloud price wars, focusing instead on new services, which suggests that AWS is trying to move the conversation away from pricing and towards features. A recent Peer 1 study found that many customers were getting “confused” and “annoyed” by frequent price cuts.

    AWS announced it had recently achieved ISO 9001 certification, which makes it a more feasible option for the healthcare vertical, since ISO 9001 helps healthcare organizations meet regulatory requirements.

    The company also divulged some of its cloud usage numbers. AWS now has 1 million active customers and thousands of system integrators. The AWS Marketplace has 1,900 listings and customers run more than 70 million hours of software per month. Amazon’s deployment service, called Apollo internally, pushed 50 million deployments in the last 12 months.

    AWS has a massive ecosystem of third-party service providers that offer solutions based on its cloud services, with AWS delivering the core infrastructure and the third-party providers packaging it with other features tailored for enterprise consumption. Now that Amazon is expanding its own value-add service portfolio, some of these providers may have to compete with the giant for business. Where providers can really differentiate now is hybrid and multi-cloud deployments, since Amazon's tools are specific to Amazon.

    Aurora: Commercial-Grade Cloud Database

    Aurora is a relational AWS database built from the ground up. “Databases are still built around the mainframe mindset,” said Anurag Gupta, the product’s general manager. “We started with a blank piece of paper. It’s a database built for AWS cost structure.”

    It is built around Service Oriented Architecture and multi-tenant scale-out components and uses S3 and EC2, AWS storage and compute services, respectively.

    It’s compatible with MySQL, but performs five times better, according to AWS. Customers can move data in or out of the database with a few clicks.
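
    Because Aurora speaks the MySQL wire protocol, existing drivers connect to it unchanged; a minimal sketch using the PyMySQL driver against a hypothetical cluster endpoint:

        # Aurora is MySQL-compatible, so a standard MySQL driver works as-is.
        # The endpoint, credentials, and schema below are hypothetical.
        import pymysql

        conn = pymysql.connect(
            host="mycluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com",
            user="admin",
            password="example-password",
            database="inventory",
        )
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT VERSION()")
                print(cur.fetchone())
        finally:
            conn.close()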

    The big sell is that it is a commercial-grade database available for a fraction of the cost. The R3.large instance of Aurora is 29 cents an hour. Aurora can scale five times larger than the largest MySQL RDS instance available, the company said, and can handle 6 million inserts and 30 million selects per minute.

    Data in the new AWS database is continuously backed up to S3. It tolerates disk failures, and crash recovery takes seconds. Database cache survives a restart with no cache warming required.

    Automated Key Management

    Key rotation is hard enough that many companies avoid it despite the security benefits. To address this problem, AWS introduced a key management service, "consumerizing" a traditional enterprise process.

    “This solves a problem we’ve consistently heard from customers,” Andy Jassy, senior vice president of web services at Amazon, said.

    It lets the admin create, disable, and view keys and set usage policies. There is forced and automated key rotation, with access visibility through AWS CloudTrail, which logs and tracks all API calls to your account. CloudTrail ties into several of the new products and has been expanded.
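
    The keynote did not walk through API calls, but the operations described above (create a key, enable rotation, encrypt data) look roughly like the following with the boto3 SDK; all identifiers are hypothetical:

        # Illustrative use of the AWS Key Management Service via boto3: create
        # a key, enable automatic rotation, then encrypt and decrypt a payload.
        # Every KMS API call is logged by CloudTrail.
        import boto3

        kms = boto3.client("kms", region_name="us-east-1")

        key = kms.create_key(Description="example application data key")
        key_id = key["KeyMetadata"]["KeyId"]
        kms.enable_key_rotation(KeyId=key_id)

        ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"secret value")["CiphertextBlob"]
        plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
        assert plaintext == b"secret value"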

    Broader Configuration Management

    Another new feature called AWS Config provides full visibility into all AWS resources a user has deployed. It helps manage relationships between resources and predict how a change made in one part of the environment will affect other parts.

    AWS Config shows configuration changes by the hour and by the day. Everything is tracked in CloudTrail, which acts as a home base for several of these services.
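
    As an illustration of the kind of visibility described, the recorded configuration history of a single resource can be queried; a short sketch with boto3 and a hypothetical EC2 instance ID:

        # Pull the recorded configuration history for one (hypothetical) EC2
        # instance from AWS Config via boto3.
        import boto3

        config = boto3.client("config", region_name="us-east-1")

        history = config.get_resource_config_history(
            resourceType="AWS::EC2::Instance",
            resourceId="i-0abc1234def567890",
        )
        for item in history["configurationItems"]:
            print(item["configurationItemCaptureTime"], item["configurationStateId"])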

    Config follows a recent announcement of a tool by ScienceLogic that does similar mapping of interdependencies.

    AWS Service Catalogs Coming in 2015

    Forrester has named service catalogs “the cornerstone of service and delivery automation,” and enterprises have been asking for one from AWS for a long time, Jassy said. Now, one is coming in 2015.

    It will allow admins to create a portfolio of products and make them easily discoverable. It will give them fine-grained access control to help meet compliance needs by department. Once again, all the activity will be tracked by CloudTrail.

    CodeDeploy, CodePipeline, CodeCommit

    There were several other developer-friendly service additions. One of them, CodeDeploy, presents a standardized way to deploy apps to AWS.

    It enables simultaneous deployment of groups of instances, rolling upgrades or roll-backs, and automated deployment health tracking. It works with most programming languages and tool sets.
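
    A deployment like the one described is typically kicked off against a deployment group from a packaged revision; a minimal sketch with boto3, using hypothetical application, group, and bucket names:

        # Start a CodeDeploy deployment of a revision stored in S3.
        # Application, deployment group, and S3 names are hypothetical.
        import boto3

        codedeploy = boto3.client("codedeploy", region_name="us-east-1")

        deployment = codedeploy.create_deployment(
            applicationName="example-web-app",
            deploymentGroupName="production-fleet",
            revision={
                "revisionType": "S3",
                "s3Location": {
                    "bucket": "example-deploy-artifacts",
                    "key": "web-app-1.4.2.zip",
                    "bundleType": "zip",
                },
            },
            description="Rolling upgrade to 1.4.2",
        )
        print("started deployment", deployment["deploymentId"])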

    Another new feature, CodePipeline, is a continuous build, test, and integration service based on technology used pervasively inside Amazon. It enables repeatable, automated integration.

    Finally, CodeCommit is a managed code repository in the cloud. It allows you to host code for stage, test, and production and integrates with GitHub, Chef, and Puppet. "This set of services is designed to work together," said Jassy.

    Enterprises Delegating Crown Jewels to AWS

    Company execs touted a pick-up in AWS enterprise adoption, which has gone beyond strictly test and development. “Companies are migrating entire data centers to AWS now,” said Jassy.

    Examples of companies that have done that are publishing houses Conde Nast and Newscorp, and energy giant Hess Corp. Newscorp is changing from a 45-data-center infrastructure to one that combines six data centers and AWS.

    Intuit is one of the latest big companies to make the move. The company chose to move to AWS rather than renew an expiring data center lease.

    "Last year we did 10 acquisitions, and half were already running on AWS, making the integration that much easier," said Tayloe Stansbury, senior vice president and CTO of Intuit. This suggests that "born on" AWS companies continue to proliferate.

