Data Center Knowledge | News and analysis for the data center industry
 

Thursday, May 14th, 2015

    12:00p
    Compass Piloting Wearable Technology In The Data Center

    Compass Datacenters is piloting wearable technology in the data center. The company is outfitting data center workers with a combination of high-tech visors and companion mobile devices that steer them through maintenance and other standard operating procedures.

    The pilot project links mobile and wearable technology together. It’s currently taking place in Compass’ suburban Columbus data center in collaboration with American Electric Power (AEP), which the facility was built for, and software-systems developer ICARUS Ops.

    By the end of 2015, Compass expects the wearable tech to be deliverable and ready for live use on the data center floor. The project will come together across three phases, with Compass and AEP currently collaborating on the first phase, application development.

    During the first phase, Compass’ extensive facility documentation and best practices are being converted from traditional paper documents into actionable, interactive checklists. The operational documentation checklists will then be fed into a software app designed for display on the wearable visor tech.

    A data center worker interacts with the checklists via the wearable technology, turning that employee into a walking, talking encyclopedia of knowledge about the facility, according to Chris Crosby, CEO of Compass.

     

    Look ma, no hands: interactive checklist display (image: Compass Datacenters)

    The second phase will add a web-based management console that will allow a customer to view the status of work being conducted, manage assignments of workers and incorporate that information into other systems.

    “Properly designed digital systems incorporate continuous improvement processes that push the throttles to the stops when it comes to a learning organization,” said Joe Jones, president and founder of ICARUS Ops.

    Jones said these systems aren’t static but continuously evolving. A user makes a suggestion on a mobile device, it goes to an approving authority, and the approved change is quickly made online and pushed instantly to all devices across multiple locations.

    “Once operators know their inputs are valued the system gets smarter every day,” said Jones.

    Interactive electronic checklists have been used in commercial aviation and other high reliability industries to great success. ICARUS Ops has done extensive work with airlines, helping them to improve safety and reliability. The company believes there are a lot of parallels with the data center industry.

    “Data centers are very much like modern jets when it comes to expense, complexity, redundancy and the quality of the operators,” said Jones. “There is, however, a sharp contrast in the quality of procedures and checklists used. ICARUS Ops is bringing airline-quality digital checklists to data center operators to trap errors, save lives, improve uptime and efficiency. We are very excited to be working with Compass datacenters. They are very innovative and know how to harness best practices.”

    The interactive checklists track and time-stamp activities, creating audit-friendly records while dramatically reducing the opportunity for human error.

    “The biggest source of failure and downtime in the data center industry is human error, just like it was in the airline industry before these systems were implemented,” said Jerry Leekey CEO and founder ICARUS Ops. “By combining airline quality digital checklists with mobile and wearable technology like Google Glass, companies can have total visibility into: What was done? Who did it? When was it done? And was it done correctly?”
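
    The mechanics behind that audit trail are simple to picture. As an illustrative sketch only (not ICARUS Ops’ actual software), a time-stamped checklist step that records what was done, who did it and when might look like this:

        # Illustrative sketch only (not ICARUS Ops' software): a time-stamped
        # checklist step that records what was done, who did it, and when.
        from dataclasses import dataclass
        from datetime import datetime, timezone
        from typing import Optional

        @dataclass
        class ChecklistStep:
            step_id: str
            instruction: str
            operator: Optional[str] = None
            completed_at: Optional[datetime] = None
            result: Optional[str] = None

            def complete(self, operator: str, result: str = "pass") -> None:
                # Stamp the step the moment the operator signs it off.
                self.operator = operator
                self.completed_at = datetime.now(timezone.utc)
                self.result = result

        step = ChecklistStep("UPS-07-03", "Verify bypass breaker is open before transfer")
        step.complete(operator="j.smith")
        print(step.completed_at.isoformat(), step.operator, step.result)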

    The wearables also allow for complete interaction without having to take off personal protective equipment (PPE). That helps with OSHA (Occupational Safety and Health Administration) compliance by reducing the likelihood that maintenance procedures are skipped out of inconvenience, according to Crosby.

    “Projects like this underscore how important a role data center personnel have in helping facilities achieve the highest possible uptime,” said Crosby. “In our industry, machines and technology get a lot of the spotlight, but people provide the heartbeat of data centers and this project empowers data center workers to have an even bigger impact. This gives professionals in these mission critical facilities the tools to operate facilities in an even more efficient, reliable way. Yes, it’s cool technology, but it puts people at the center of the data center, where they can help organizations get the most out of their data center investment.”

    1:00p
    Beyond OpenSSL Vulnerabilities: The Call for Better Secure Shell Key Management

    Matthew McKenna is chief commercial officer for SSH Communications Security.

    Two-thirds of all websites rely on OpenSSL to encrypt user information as it travels across the web. Launched in 1998, the toolkit arrived just in time to solve the dilemma of how to safely and securely transfer personal data – including financial information – from end user to website. Ultimately, OpenSSL—an open project that any coder or programmer can contribute to—makes e-commerce and other types of online transactions and interactions possible.

    Though the majority of websites use OpenSSL as their encryption standard, the OpenSSL project has a tiny budget, with just one full-time employee and a handful of part-time workers and volunteers. Deploying software so widely with so little supervision creates significant security risk, as 2014’s notorious Heartbleed vulnerability made clear. Heartbleed woke everyone up to the risks that open source software can present when its management, development and design lack strong oversight and funding.

    As a defense mechanism, Google last year created BoringSSL, its own fork of OpenSSL. The company really had no other choice: it had been managing more than 70 patches to OpenSSL, and that figure was growing. The constant patching made it difficult for Google to maintain consistency across multiple code bases and raised security concerns. With BoringSSL, Google is seeking an encryption library that interfaces more securely and efficiently with its Chrome and Android products.

    BoringSSL was born from the realization that open source vulnerabilities can pose serious security risks. Another case in point is the hacker group that took up a challenge by Cloudflare and exploited Heartbleed to steal private Secure Shell security keys, which can be used to gain access to an organization’s most sensitive assets.

    Key Risks

    The theft of Secure Shell keys is a serious issue. Secure Shell works quietly behind the scenes in virtually every network worldwide, encrypting connections into the organization’s network. Associated with each key is an identity: either a person or a machine that is granted access to information assets and performs specific tasks. Because they are often used to secure remote administrator access to the network, Secure Shell keys provide access to some of the most critical information within an organization.

    As such, it’s obvious how important it is to manage these keys properly. In a recent report, IDC listed the following identity and access management (IAM) risks that mismanaged Secure Shell keys can cause (a minimal audit sketch follows the list):

    • Unused keys that still grant access to critical hosts
    • No visibility into the purpose of key pairs
    • Limited control over the creation of Secure Shell keys
    • Secure Shell key usage that circumvents IAM controls
    • Ease of copying and moving private keys
    • Limited ability to identify and remove revoked, orphaned and unauthorized keys
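
    None of these risks requires exotic tooling to surface. As a minimal illustration (not a recommendation of any specific product), a script along the following lines could sweep a host’s authorized_keys files and flag entries that are absent from an approved inventory; the inventory and file layout here are assumptions made for the example.

        # Minimal sketch: flag authorized_keys entries that are not in an approved
        # inventory. Assumes the common "key-type base64-blob comment" line format
        # and a hypothetical APPROVED_KEYS inventory; a real audit would also handle
        # key options and system accounts, not just /home.
        import glob

        APPROVED_KEYS = {
            # hypothetical inventory: public-key blob -> owner / purpose
            "AAAAB3NzaC1yc2EAAAADAQABAAABAQ...": "backup transfer, app01 -> storage02",
        }

        def audit_authorized_keys(pattern="/home/*/.ssh/authorized_keys"):
            findings = []
            for path in glob.glob(pattern):
                with open(path) as fh:
                    for lineno, raw in enumerate(fh, 1):
                        line = raw.strip()
                        if not line or line.startswith("#"):
                            continue
                        fields = line.split()
                        if len(fields) < 2:
                            continue
                        key_blob = fields[1]  # the base64 portion identifies the key pair
                        if key_blob not in APPROVED_KEYS:
                            findings.append((path, lineno, fields[-1]))
            return findings

        if __name__ == "__main__":
            for path, lineno, comment in audit_authorized_keys():
                print("unapproved key at %s:%d (%s)" % (path, lineno, comment))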

    These risks must be taken into account when creating a security plan. Given today’s boom in machine-to-machine (M2M) activity, ensuring that Secure Shell key management best practices are identified and followed is more important than ever.

    The Challenge of M2M Transfers

    IAM is a critical part of any comprehensive security plan that helps organizations control access to cloud infrastructure, applications, servers, and both structured and unstructured data. IAM solutions are good at managing the identities assigned to human users, but aren’t good at managing the identities assigned to the automated processes that drive computing in large-scale data centers. As these M2M non-human identities grow in number, it’s becoming clear that traditional IAM solutions aren’t able to manage the identities performing the bulk of operations.

    Because a secure, encrypted channel is needed for M2M data transfers, most identities that enable these processes use Secure Shell for authentication and authorization. However, gaps exist in the IAM governance processes for identities that use Secure Shell. Instead of taking the secure route of centralizing key provisioning, for example, application developers, application owners and process owners might all have privileges to create and assign identities. Taking this approach results in a lack of proper control and oversight over creation of identities and their authorizations. Without central management and visibility, enterprises cannot be sure how many identities have been created, what these identities are authorized to perform, and which are no longer needed.

    Security Questions

    Many companies have begun to re-evaluate how they use and manage open source technologies, both in their products and within their organization, as a result of the Heartbleed vulnerability. That’s a good thing. The point here is not that open source is bad. Rather, it is an opportunity for technology executives to take another look at the necessary but often-neglected infrastructure that their businesses run on, especially when it is something as ubiquitous and critical as encryption technologies like SSL or Secure Shell.

    When evaluating the security level of infrastructure, ask these fundamental questions:

    • Do we know who is creating keys?
    • Do we know who has access to what?
    • Can we tell if someone has acted maliciously?
    • Are our enterprise open source technologies properly managed?
    • Can we rapidly respond to vulnerabilities by rotating keys or updating to new versions? (A rough rotation sketch follows this list.)
    • Is our open source software properly supported, whether by a vendor or by internal resources, or are we relying solely on someone’s goodwill?
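
    The question about rotating keys is largely operational. As a rough sketch of what rotation can look like in practice (hostnames, key paths and the clean-up step are hypothetical, and the OpenSSH client tools are assumed to be installed), the sequence below generates a replacement key pair, installs and verifies the new public key on each target host, and only then retires the old one.

        # Rough key-rotation sketch; paths and hosts are illustrative only.
        import subprocess

        OLD_KEY = "/etc/backup/keys/transfer_rsa"
        NEW_KEY = "/etc/backup/keys/transfer_rsa_new"
        HOSTS = ["storage01.example.com", "storage02.example.com"]  # hypothetical targets

        # 1. Generate the replacement key pair (no passphrase, for an automated M2M identity).
        subprocess.run(["ssh-keygen", "-t", "rsa", "-b", "4096", "-N", "", "-f", NEW_KEY], check=True)

        with open(OLD_KEY + ".pub") as fh:
            old_blob = fh.read().split()[1]  # base64 portion of the key being retired

        for host in HOSTS:
            # 2. Install the new public key alongside the old one.
            subprocess.run(["ssh-copy-id", "-i", NEW_KEY + ".pub", host], check=True)
            # 3. Verify the new key works before touching anything else.
            subprocess.run(["ssh", "-i", NEW_KEY, "-o", "BatchMode=yes", host, "true"], check=True)
            # 4. Retire the old key by filtering it out of the remote authorized_keys file.
            cleanup = ("grep -vF '%s' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.new "
                       "&& mv ~/.ssh/authorized_keys.new ~/.ssh/authorized_keys" % old_blob)
            subprocess.run(["ssh", "-i", NEW_KEY, host, cleanup], check=True)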

    Best Practices for a Safer Network

    Untold numbers of individuals and organizations have used OpenSSL for more than 15 years. It provides encryption and a safe channel for sending sensitive information. Vulnerabilities exist within any software, but a vulnerability discovered in the software that encrypts your data is a call to action. That call becomes more urgent in light of hackers’ ability to steal Secure Shell keys by exploiting OpenSSL vulnerabilities like Heartbleed.

    Organizations need greater visibility into the use and authorization of keys, stronger IAM controls in light of increased M2M activity, and centralized provisioning to keep track of all the keys. Together, these practices form part of a comprehensive security plan that will help close the door on outside threats.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:04p
    Xangati Adds Support for XenServer Hypervisors

    At the Citrix Synergy 2015 conference this week, Xangati announced that it has expanded the set of hypervisors it can monitor to include XenServer.

    Xangati provides service assurance analytics for hybrid cloud and virtual infrastructures. Xangati already supports VMware and Microsoft Hyper-V, but Atchison Frazer, vice president of marketing for Xangati, says that thanks to increased use of cloud services, customers now also want to monitor the open source XenServer and Xen Project hypervisors, which have been embraced mainly by cloud service providers such as Amazon Web Services, Rackspace, 1&1 Internet, IBM SoftLayer and Korea Telecom.

    Rather than having separate IT monitoring frameworks for each hypervisor environment, Frazer says that IT organizations that are short on staff are looking to consolidate the management of IT operations across public and private clouds.

    In general, Frazer notes that the rise of the OpenStack cloud management framework is also serving to make IT organizations more cognizant of open source hypervisors.

    “We see a lot of organizations looking at XenServer as an open source alternative to KVM (Kernel-based Virtual Machine),” says Frazer. “There’s also a lot of use of XenServer in virtual desktop infrastructure (VDI) deployments.”

    Xangati analytics software itself gets installed as a guest on top of the hypervisor. In the case of Citrix XenServer, that means being able to capture data from Citrix XenDesktop and XenApp virtualization software along with the NetScaler networking appliances.

    At the conference this week Citrix also released a service pack update for XenServer 6.5 that adds enhanced graphics support via Intel GVT-d GPU pass-through for Windows and NVIDIA GPU pass-through for Linux, along with Docker container runtime management capabilities.

    The rate at which IT organizations will move to unify IT management is, of course, a subject of debate inside and out of data centers. While VMware is clearly the dominant hypervisor inside most enterprise data centers, open source hypervisors have gained traction in the cloud. Microsoft, meanwhile, continues to gain Hyper-V ground with each new deployment of Windows Server 2012.

    At the moment, it’s clear that most IT organizations have already deployed multiple hypervisors, some of which Frazer notes now come embedded inside applications. While most IT organizations manage these hypervisor deployments in a semi-autonomous fashion, Frazer says the growing number and variety of hypervisors that need to be managed inside and outside traditional data centers, combined with a generational shortage of IT administrators with virtualization expertise, is likely to force the management issue soon.

     

     

     

     

    1:49p
    CyrusOne Hosting Open House For N. Virginia Data Center

    CyrusOne is holding an open house celebration today for its new data center in Sterling, Virginia. At full build, the 14-acre site will accommodate a shell of 400,000 square feet with up to 240,000 square feet of colocation space, supported by up to 48 megawatts of critical load.

    CyrusOne broke ground on the N. Virginia data center campus in April of last year and brought the first 37,000 square feet of colocation space online in January. The initial phase of the 146,000-square-foot building has capacity for up to 12 megawatts of critical load.

    The CyrusOne executive team will be in attendance, along with executives from the Loudoun County Chamber of Commerce, Loudoun County Economic Development team and other public officials. Fear No Ice will hold an ice sculpting performance complete with chainsaws.

    Northern Virginia is home to one of the world’s largest clusters of data center real estate, and over 70 percent of the world’s internet traffic is said to pass through Loudoun County. For this and many other reasons, the market is strategically important for the national colo provider. CyrusOne said it was the location most requested by customers, and pre-leasing activity backed those claims.

    Buddy Rizer, director of economic development in Loudoun County, has helped cultivate a very healthy data center industry in the county.

    “We not only congratulate CyrusOne on opening an industry-leading data center, but we also congratulate the hundreds of key customers that will be served by CyrusOne’s Loudoun facility,” said Rizer in a press release. “Loudoun County has worked hard to promote an atmosphere that cultivates economic development and growth within the data center community. Aside from our prime location, which provides access to the entire East Coast, we deliver competitive electricity rates, and superior reliability and connectivity and one of the most highly educated and highly skilled I.T. workforces in America.”

    Many service providers are active in the crowded and competitive market, including Equinix, DuPont Fabros Technology, Digital Realty Trust, RagingWire, Latisys, COPT, AT&T, Verizon Terremark, CenturyLink Technology, EvoSwitch and many others.

    Despite a large amount of colocation space totaling over 5.2 million square feet, pricing has remained solid in the N. Virginia data center market thanks to healthy demand. The new facility both serves CyrusOne’s existing customers and opens the provider up to a new audience.

    “We’re excited that our new East Coast facility is up and running so we can continue to meet the growing demand from current and prospective customers in this important region,” said Tesh Durvasula, chief commercial officer of CyrusOne, in a release. “Our Massively Modular engineering approach to scale quickly and efficiently ensures we’ll always have space ready when our enterprise-level customers need it.”

     

    2:55p
    DE-CIX Opening Internet Exchange In Istanbul, Turkey

    Major Internet exchange operator Deutscher Commercial Internet Exchange (DE-CIX) will open a new exchange in Istanbul, Turkey, joining two others recently announced in Palermo, Italy, and Marseille, France. All three are expected to open in the third quarter this year.

    The Germany-based DE-CIX operates and manages Internet exchanges around the globe, and Istanbul completes its trifecta of new exchanges around the Mediterranean.

    DE-CIX Istanbul will serve as a neutral interconnection and peering point for Internet Service Providers (ISPs) from Turkey, Iran, the Caucasus region and the Middle East. The Internet exchange will start with a single location and expand to multiple data centers in the metro area over time.

    Istanbul is a popular landing point for traffic in the region, serving providers of connectivity to Middle Eastern markets such as MedNautilus as well as new terrestrial cable builds, including AMEER (Alternative Middle East European Route), GBI North, JADI (Jeddah-Amman-Damascus-Istanbul) and RCN (Regional Cable Network).

    The World Bank puts Turkey’s Gross Domestic Product (GDP) at more than $800 billion, and the country’s economy is growing rapidly. The country has over 470 Internet providers, and around half the population are Internet users, according to the International Telecommunication Union (ITU).

    “Istanbul is already a hub for finance, logistics and transport. Like we’ve experienced in Frankfurt, Internet infrastructure follows these developments,” said Harald Summa, chief executive officer of DE-CIX, in a press release. “The need has grown tremendously to interconnect the critical traffic streams that travel from other regions through Istanbul and bring them closer to their destinations. Content, cloud, gaming and other providers will meet the eyeball networks halfway at this new exchange.”

    Internet user numbers in the country are growing by 25 percent each year. As in many emerging Internet markets, consumption on mobile devices is leading the way.

    Other DE-CIX Internet exchanges throughout the globe include UAE-IX in Dubai, United Arab Emirates (UAE), which recently announced record growth; and DE-CIX in Frankfurt, Germany.

    DE-CIX landed in the U.S. in 2013 and continues to expand in Manhattan, most recently establishing a point-of-presence in CoreSite’s New York campus this week.

    3:00p
    How Cloud is Getting Security Right

    It really feels like we’ve crossed a new threshold in the cloud conversation. Conversations with prospective customers about cloud delivery models continue to increase, and organizations are realizing real business gains by moving part of their environment onto some kind of cloud platform.

    Right now, hybrid cloud is becoming one of the predominant models being discussed and adopted. Why? Because it’s becoming easier to do. But it hasn’t always been about the ease of moving into the cloud. Sure, there are challenges around unique applications and very specific use cases. Still, one of the dominant topics in any cloud conversation is security.

    Let’s start here – during the recent Gartner Symposium, analysts described cloud computing as a style of computing in which scalable and elastic IT-enabled capabilities are delivered “as a service” using Internet technologies.

    “Overall, there are very real trends toward cloud platforms, and also toward massively scalable processing. Virtualization, service orientation and the Internet have converged to sponsor a phenomenon that enables individuals and businesses to choose how they’ll acquire or deliver IT services, with reduced emphasis on the constraints of traditional software and hardware licensing models,” said Chris Howard, research vice president at Gartner. “Services delivered through the cloud will foster an economy based on delivery and consumption of everything from storage to computation to video to finance deduction management.”

    This means that organizations are gearing up to let the cloud become their complete service delivery model. It’s clear that cloud computing offers a real competitive advantage through next-generation resource utilization. Furthermore, cloud platforms are evolving toward even better security designs.

    According to Gartner, the increasing adoption of mobile, cloud, social and information (often interacting together) will drive use of new security technology and services through 2016.

    The report goes on to state that a significant number of security markets are being impacted by newly emerged delivery models. This is resulting in the growth of cloud-based security services, which are transforming, to different degrees, the way security is supplied and consumed by customers. While cloud-based services’ competitive pricing puts pressure on the market, the cloud is also providing new growth opportunities, as some organizations switch from deploying on-premises products to cloud-based services or cloud-managed products. More than 30% of security controls deployed to the small or midsize business (SMB) segment will be cloud-based by 2015.

    With all of this in mind, it’s clear that both cloud computing and the cloud security model are going to continue to evolve. There will be new kinds of standalone cloud-ready security platforms as well as those built into existing cloud service provider architectures. I recently wrote that, so far, cloud computing has done a good job of staying out of the spotlight when it comes to major security issues. Yes, Dropbox might accidentally delete a few of your files, or some source code becomes exposed. But the reality is that a public cloud environment has never really experienced a massive data breach. Ask yourself: what would happen if AWS lost 80 million records, as in the recent Anthem breach? The conversation around public cloud security would certainly shift quickly. But the reality is that it hasn’t happened. Maybe this gives us more hope that cloud architectures are being designed in such a way that data is properly segregated, networks are well designed, and the proper border security technologies are in place.

    So what are cloud service providers (CSPs) doing today to better secure your workloads? How are they creating security platforms that are “born in the cloud?” Let’s look at a few ways the big cloud providers are building strong security strategies.

    • Creating user, client and workload multi-tenancy (policies, rules, etc.). The most important concept to remember is that public cloud providers were built on a base architecture of multi-tenancy. They designed their systems knowing that they would have to carve out physical and virtual space for a large number of customers deploying unique workloads. Throughout all of this, they must maintain visibility into how various workloads are interacting, along with high security standards and contextual policy engines (a toy sketch of this kind of tenant-scoped policy check follows this list). From there, they must deliver powerful portals to each user – where customers truly feel like they have their own bit of cloud. Creating this kind of multi-tenant environment isn’t easy, but the big cloud providers are doing a great job of making sure your slice of the cloud is locked down and secure.
    • Designing resource automation and orchestration (locking down resources to appropriate racks, dynamically shifting user resources, etc). Cloud providers make more money by creating better efficiency within their own environment. This means it’s in their best interest to design a cloud architecture capable of dynamic resource utilization and distribution. Furthermore, cloud management systems allow server and resource policies to span racks, zones, and even data center clusters spanning a region – or the world. Cloud providers also have monitors checking in on policies, their resources, and how virtual and physical components are being utilized. This kind of visibility takes on a security stance as well. If there is an issue, cloud providers can lock down a service, VM, physical server, or even an entire rack. Remember, cloud providers have a lot of physical as well as software-based security solutions.
    • Leveraging open-source technologies. Cloud providers use many different tools to create a dynamic and secure cloud architecture. They’ll integrate key management systems, encryption, policy enforcement and a lot more. They’ll also design solutions that allow your workloads to be HIPAA, PCI, FISMA, and IEEE compliant. Open-source technologies allow cloud providers to create powerful API architectures, customized, policy-driven multi-tenancy platforms, and hypervisor extensions for the customer. These kinds of technologies give CSPs granular control over how they design their control mechanisms and how they secure their resources.
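
    To make the multi-tenancy point concrete, the toy sketch below shows the kind of contextual check a provider’s policy engine runs before an API call is allowed to touch a resource: the caller’s tenant must own the resource, and the calling identity must hold an explicit grant for the action. The data model and field names are invented for illustration and are not any particular provider’s API.

        # Toy illustration of tenant-scoped policy enforcement (not any provider's real engine).
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Request:
            tenant_id: str        # which customer is calling
            principal: str        # which identity within that tenant
            action: str           # e.g. "volume:attach"
            resource_tenant: str  # owner of the resource being touched

        ROLE_GRANTS = {
            # hypothetical per-tenant role assignments
            ("acme", "ops-bot"): {"volume:attach", "vm:reboot"},
        }

        def is_allowed(req: Request) -> bool:
            # Rule 1: a tenant may never reach into another tenant's resources.
            if req.tenant_id != req.resource_tenant:
                return False
            # Rule 2: within the tenant, the identity must hold an explicit grant.
            return req.action in ROLE_GRANTS.get((req.tenant_id, req.principal), set())

        print(is_allowed(Request("acme", "ops-bot", "vm:reboot", "acme")))    # True
        print(is_allowed(Request("acme", "ops-bot", "vm:reboot", "globex")))  # False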

    Cloud computing and the services it offers are designed around securely delivering multi-tenant workloads. Think of cloud providers like AWS or Azure as big banks. Back in the day, it was easy to rob a bank and take its resources. Now it’s much more challenging to pull that off. And, just as with cloud security, there are different kinds of banks with different levels of security. The better the bank (or cloud provider), the better your security will be. The analogy is here to remind you of a couple of things:

    1. As long as there are resources that are valuable, there will always be targets.
    2. Cloud providers will have different security designs. Some can handle compliance-based workloads, for example, while others cannot.

    I can honestly say that if you have been hesitant about looking at a cloud model because of security, it’s time to overcome that hurdle and work with a cloud partner. The direct benefits cloud computing offers in user optimization, workload delivery, and enabling a next-generation business model make this architecture more critical than ever for businesses to consider. Most of all, many cloud providers offer easy ways to get started by taking their cloud offerings for a test drive. Still, as secure as the cloud may be, you must always ensure that your workloads and data fall under data center, security, and compliance best practices. In part two of this security series, we’ll look at where security is overlooked and how problems can quickly arise.

    4:30p
    Akanda Releases Orchestration Software for OpenStack Networks

    Looking to provide a layer of network virtualization that is not tied to any Layer-2 switching environment, Akanda this week released version 1.0 of its namesake network orchestration software designed for environments running the OpenStack cloud management framework.

    At the same time, Akanda and Cumulus Networks revealed they have formed an alliance through which the two companies will advance the adoption of bare-metal switches inside the data center.

    Akanda CEO Henrik Rosendahl says the company’s orchestration software is designed to function as the centralized management layer for all OpenStack-related networking decisions, including routing, load balancing, firewalls and more.

    Among the first Akanda customers is DreamHost, a cloud service provider that has already deployed Akanda in a production environment. Akanda’s open source network virtualization project was initially developed within DreamHost in 2012 and spun off into a separate entity in October 2014.

    The core components of Akanda are Akanda Rug, an orchestration service that manages the creation and configuration of Akanda Software Routers in an OpenStack cloud and monitors their health, and Akanda Appliance, a Linux-based virtual machine that provides routing and Layer 3 and above services in a virtualized network environment. The appliance includes a REST API for management via the Akanda Rug orchestration service.

    Collectively, Akanda reduces OpenStack complexity by replacing many of the agents that the OpenStack Neutron API relies on with a single control point for all networking services.

    While a firm proponent of open networking, Rosendahl says asking IT organizations to replace network switches in order to embrace software-defined networking and network virtualization ignores economic reality. Most IT organizations already have network switch investments, and those switches are typically upgraded only once every five years.

    “Unless you can run your software on top of existing Layer-2 infrastructure you’re really only talking about deploying SDN in green field opportunities,” says Rosendahl. “There are not many of those.”

    While not many organizations are wholesale replacing proprietary networks with open networking alternatives just yet, Rosendahl says interest is running high. IT organizations have taken note of the discrepancy between what it costs them to deploy a virtual machine and what it costs a cloud service provider, he says. While they may never completely close that gap, Rosendahl says shifting to open networking technologies is one way to narrow it.

    Like many of the companies that have embraced open networking, Akanda is betting that the open-source-first mantra many IT organizations now apply to software will soon be extended to network infrastructure. The degree to which that occurs inside various data centers over the next several years will vary greatly. But as is often the case when organizations are caught between the need to scale an environment and the cost of the proprietary technologies required to do so, IT organizations appear to be increasingly voting with their feet for open source technologies.

     

    4:51p
    Cloud Application Delivery Provider Instart Logic Nets $43 Million For Expansion

    As fast as it is helping to accelerate cloud applications, Silicon Valley startup Instart Logic continues to grow, landing a $43 million expansion funding round to support that growth.

    The round follows a previous funding round almost exactly a year ago. The company continues to build a patent portfolio for the technology it calls its Software-Defined Application Delivery (SDAD) platform. Instart Logic hopes this intelligent software will disrupt the traditional content delivery network (CDN) market by showing that its approach solves performance challenges inherent in wireless connections, making traditional CDNs obsolete, according to the company.

    The Series D round, which was accompanied by some key hires, was led by new investors Four Rivers Group and Hermes Growth Partners, with participation from existing investors including Andreessen Horowitz, Kleiner Perkins Caufield & Byers, and Tenaya Capital.

    The company also announced that it has added Rafael Torres as Chief Financial Officer (CFO), Shailesh Shukla as vice president of products and strategy, and Justin Fitzhugh as vice president of technical operations.

    Beyond the new funding and key executive appointments, Instart Logic continues to win big accounts, noting that Staples, The Washington Post, TUI Group and Pizza Hut all use its service. Advisory firm Enterprise Strategy Group says that for the top 30 e-commerce websites in the Keynote Mobile Commerce Performance Index, SDAD delivers a 75 percent overall reduction in mobile website load times compared to traditional CDNs.

    “Instart Logic is developing the speed, security and scale capabilities that companies like Apple, Amazon and Google have built for themselves and are making this available to everyone,” said Brian Melton, managing director at Tenaya Capital. “With its software-defined application delivery approach, the company is disrupting the legacy content delivery network market to meet the current and future needs of global enterprises.”

    6:50p
    AWS Tackles Cloud Skills Shortage with AWS Educate


    This article originally appeared at The WHIR

    AWS has developed a program called “AWS Educate” to teach cloud skills to students, Amazon announced Thursday. The program is available for free to approved institutions, educators, and students.

    Educators and students can apply to AWS Educate for credits for a range of AWS services, including Amazon Elastic Compute Cloud, Simple Storage Service, Relational Database Service, CloudFront, DynamoDB, Elastic MapReduce, Redshift, and Glacier. Further credits will also be available to those affiliated with member institutions.

    The program also includes access to AWS Essentials courses and self-paced labs for educators, collaboration forums, and a range of educational content and AWS materials such as webinars, instructional videos, university lectures and customer case studies.

    “Based on the feedback and success of our grant recipients and the global need for cloud-skilled workers, we developed AWS Educate to help even more students learn cloud technology firsthand in the classroom,” said Teresa Carlson, Vice President, Worldwide Public Sector, AWS. “We’re pleased to offer AWS Educate to educators, students and educational institutions around the world.”

    Cloud skills, and particularly AWS skills and certification, were identified by a recent “IT Skills & Salary Report” as being increasingly in demand, and increasingly well paid. A new cloud security certification program was announced in April, and cloud adoption was seen as outpacing cloud skills growth even in 2013, when Rackspace opened its Open Cloud Academy.

    Some educators have responded by bringing cloud learning into the classroom, and AWS hopes its new program will lead to greater adoption of cloud technology in school curricula.

    Dr. Majd Sakr, computer science professor at Carnegie Mellon University, said, “Three years ago, I began incorporating AWS services into my cloud computing courses. The cloud resources AWS provided me has allowed me to really challenge my students to develop real-world solutions to problems they might face in their careers. One such project involves giving students 1.2 terabytes of Twitter data and asking them to compete against other students by building a tweet query web service that meets correctness, budget and throughput requirements.”

    The benefits for AWS include a workforce that is more skilled in and familiar with its services and may push for their adoption in the workplace, as well as brand benefits and, ultimately, a more cloud-savvy talent pool to recruit from.

    This first ran at: http://www.thewhir.com/web-hosting-news/aws-tackles-cloud-skills-shortage-with-aws-educate

