Data Center Knowledge | News and analysis for the data center industry
Thursday, October 8th, 2015
Public Cloud Drives Enterprise Data Center Consolidation
Amazon Web Services, which just a few years ago was a provider of rentable virtual servers thought to be a good fit for startups and test and development workloads only, is a different beast today, working hard – and in many cases successfully – to become the cloud enterprises migrate their corporate data centers to.
The need for new kinds of applications and the availability of cloud services are forcing companies to take a hard look at their IT infrastructure and the cost of operating it. Many are finding that it no longer makes business sense to keep managing on-prem data centers – or at least not as many of them as they manage today – and that they can get away with running fewer.
AWS isn’t the only public cloud provider fighting for the hearts and minds of enterprise IT of course. Its rivals like Microsoft and IBM are making substantial inroads in the enterprise cloud market and have built formidable alternatives to the public cloud giant.
But the giant is making a lot of money in the market and claims its revenue is growing faster than anyone else’s. In this year’s second quarter, the revenue run rate of Amazon’s cloud business was $7.3 billion – up about 80 percent year over year, according to Andy Jassy, senior VP for AWS, who delivered the opening keynote at the AWS re:Invent conference in Las Vegas Wednesday.
“We are seeing a very significant uptake in public cloud use across enterprises,” Richard Villars, VP of data center and cloud market research at IDC, said. In surveys and interviews, IDC sees a much broader commitment among enterprises to public cloud as a platform for new applications and as a driver for data center consolidation plans. The latter is a more recent trend which has become visible over the past eight months or so, he said.
Companies IDC is talking to have plans to consolidate corporate data centers and move workloads to AWS, Microsoft Azure, IBM SoftLayer, and in some cases Google Cloud Platform. Public cloud isn’t the only option of course – companies are also increasingly looking at managed services and hosting – but the primary drivers are similar: “People just want to get out of the data center business,” Villars said.
Market Realities Demand Infrastructure Rethink
C-level technology execs from General Electric and Capital One, two of the largest enterprises in their respective industries, said they are implementing data center consolidation plans that consist of shutting down most of their data centers and moving application workloads to AWS. Both execs spoke during Wednesday’s keynote.
Rob Alexander, CIO at Capital One, said the bank is going from operating eight data centers in 2014 to five in 2016, and then to three in 2018. Moving most data and applications to the cloud enables Capital One to roll out new software faster, have elastic capacity that can expand during high-demand periods, such as Cyber Monday, and contract when demand is lower, he said.
Capital One has realized it can now deploy some of its most critical production platforms on AWS, according to Alexander.
Building and deploying new applications and delivering them in new ways are a big part of the reasoning behind moving to cloud infrastructure. For banks, delivering software products on mobile devices is crucial. “Mobile is moving fast,” Alexander said. “Mobile has become the preferred channel for our customers.” Capital One’s latest mobile banking application runs on AWS.
Banking is one of the industries where the ability to build and deploy software quickly has become a primary way to stay competitive. “We have to be great at building software and data products if we’re going to win at where banking is going,” Alexander said.
Connected Devices Need Distributed Infrastructure
Enterprises are spending more and more on outward-facing applications that deliver content and services to customers on mobile devices in new geographies, and “if your data center is sitting in your headquarters, it’s probably in the wrong place,” Villars said. The distributed nature of public cloud infrastructure allows companies to deliver those services from data centers closer to the users.
GE, while a very different business from Capital One, is also finding that public cloud is a better infrastructure option than running everything internally. In some ways, its needs are similar to Capital One’s: it also has to send and receive data from end-point devices it hasn’t communicated with in the past. Except, for GE those end points are things like wind turbines, aircraft engines, or machines at manufacturing plants, Jim Fowler, the industrial giant’s CIO, said while on stage at re:Invent.
GE’s ongoing data center consolidation program is expected to take the company from 34 data centers to four, which will hold only the most secret and valuable data it has, Fowler said. The rest – thousands of workloads – will go to AWS over the next three years.
Besides supporting internal IT, GE recently became a cloud service provider itself, though its cloud services are of a different breed than those AWS provides. Its Predix cloud is a Platform-as-a-Service offering that interconnects devices in manufacturing facilities and helps optimize operations by leveraging data analytics. GE calls it the “industrial internet.”
Cloud Feature Set Grows to Speed up Transition
Public cloud is becoming more and more attractive for enterprises, and cloud providers and the myriad of companies that provide services around the big public cloud platforms continue rolling out features that make it increasingly so. A number of the new service announcements Jassy made during Wednesday’s keynote were aimed at helping enterprises move applications from on-prem facilities to the cloud or start using cloud-native services.
Amazon Kinesis Firehose, for example, is a service that allows companies to quickly load streaming data into AWS for analytics. The new AWS Database Migration service enables them to easily migrate production databases to AWS with minimal downtime. AWS Config Rules allows enterprises to set up compliance rules for resource configuration and get alerts when the rules aren’t being met. Amazon Inspector is an automated security assessment service that identifies security or compliance issues on AWS, assessing network, VMs, OS, and app configuration.
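As a rough illustration of how one of these compliance checks might be wired up, here is a minimal sketch using the boto3 Python SDK to register an AWS-managed Config rule; the rule name and scope are illustrative assumptions, not details from the announcement.

```python
import boto3

# Minimal sketch (assumptions: AWS Config is already recording resources in
# this account/region, and credentials are configured). Registers the
# AWS-managed "ENCRYPTED_VOLUMES" rule so unencrypted EBS volumes are
# flagged as non-compliant.
config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-must-be-encrypted",  # illustrative name
        "Description": "Flag EBS volumes that are not encrypted.",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        "Source": {
            "Owner": "AWS",  # AWS-managed rule rather than a custom Lambda
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
    }
)
```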
With the new Amazon Snowball service, AWS will ship any number of ruggedized, tamper-proof 50TB storage servers directly to your data center and then back to one of its cloud data centers after you’ve loaded them with your enterprise data to be uploaded into AWS. The service is meant to speed up data migration from on-prem data centers to AWS, which at that scale can take several months if done over a network.
If Cloud is the New Normal, Is Enterprise the New Tech Company? 
This article originally appeared at The WHIR
To get ahead in the cloud era, enterprises have to start thinking like technology companies and seriously consider whether it makes more sense to build vs. buy.
At the AWS re:Invent keynote on Wednesday, Andy Jassy, VP of Amazon Web Services, said that although the “cloud is the new normal,” he is often asked for best practices for enterprises moving to the cloud. There is no one way that enterprises do this. Jassy spoke on Tuesday about how partners are key to AWS’s growth strategy and to the enterprise transition to cloud.
Some have 2-5 year plans to move everything to the cloud (AOL, Hertz, Talen Energy, and others), but there are still “loads of enterprises not ready to retire data centers yet,” he said. These include Comcast, Gannett, Johnson and Johnson, Intuit, and others.
One thing that is evident from the companies AWS pulled on stage during the keynote at its annual cloud conference, which attracted 19,000 attendees this year, is that enterprises that aren’t necessarily thought of as technology companies are having to act like them in order to stay innovative and focus on core competencies.
Capital One Draws in Talent with AWS
AWS customer Capital One CIO Rob Alexander said that with mobile usage twice that of the web, “it’s clear…that we need to be great in building amazing digital experiences for our customers.”
Capital One has been investing in engineering talent and has adopted an open source-first policy. While the company started in the cloud in a more experimental mode, developers have driven usage of the platform.
“Use of AWS is a great draw for talent,” Alexander said.
Cloud usage at Capital One is also saving it money on data centers. Last year, Capital One had 8 data centers, and by 2018 it expects that number to be down to 3.
GE Talks Transformation
GE is one of the darlings of enterprise cloud, having hired 2,000 IT professionals over two years. It says it weighs buy vs. build, reserving in-house development for “things that matter.”
In other words, being really good at cloud infrastructure isn’t going to help it sell more locomotives or aircraft engines, Jim Fowler, GE CIO said.
By 2020, $15 billion of GE’s revenue will be from software, he said.
GE has been working with AWS for the past 4 years, and plans to migrate over 9,000 workloads to AWS over the next three years, going from 34 data centers to 4 data centers that will “only hold things that are our secret sauce,” Fowler said.
MLB to Spin Out Tech Company Based on AWS
Baseball may be an old school American passion, but MLB is embracing AWS to develop a platform that will help fans watch and interact with the game in new ways.
Major League Baseball Advanced Media (MLBAM) EVP and CTO Joe Inzerillo said that the ability to move fast with AWS and get things deployed is key to its success.
Last year at re:Invent, MLBAM launched Statcast, a platform running on AWS that integrates video and stats to provide more context to the game. Since then, MLBAM has looked at ways to provide a more interactive experience through 3D – bringing fans right into the game – as well as next-generation imaging, which collects 3D camera data to show a play from 360 degrees and could have “ramifications from entertainment to officiating,” Inzerillo said. It is hoping to roll that out soon, possibly as early as next season.
Using lessons from its own OTT streaming service, mlb.tv, which is over 12 years old and has more than a million subscribers, MLBAM is working with WWE, Playstation VUE, HBO Now, PGA Tour, and most recently, the NHL, to power their OTT streaming services. The company is well on its way to spinning out a technology company, Inzerillo said.
This first ran at http://www.thewhir.com/web-hosting-news/if-cloud-is-the-new-normal-is-enterprise-the-new-tech-company
Three Symptoms of Flash Fever and What You Need to Recover
Vishal Misra is founder and chief scientist of Infinio.
Does your data center have flash fever? If it doesn’t, I’m sure some of your neighbors do.
Declining prices and a rich selection of flash-based products are fueling adoption of flash storage across the enterprise. However, flash still comes at a price premium that many organizations can’t afford, leaving ample space for alternatives that offer some of the same benefits at a fraction of the cost.
Many of today’s servers are loaded with abundant, extremely fast memory and processing power that often sits idle and can be exploited to provide flash-like performance. Flash-based solutions may seem like silver bullets that can magically fix every issue in your IT system, but despite the benefits they can add, they can also introduce new complexities and costs that can easily be avoided.
If you’ve already adopted flash, don’t make the mistake of discounting its shortcomings instead of seeking out alternatives that might be a better fit for your environment. To harness the full potential of storage, consider the performance and business issues you’re looking to solve in the first place. You may find that modern architectures exist that can deliver flash’s results at a fraction of its cost.
Diagnosing Your Data Center
If you’re using flash and unsure if it’s living up to your initial expectations, you’re not alone. For most organizations, storage issues come down to cost, performance and capacity. Look out for these major symptoms of flash fever:
- You’re paying too much for what you need. Flash storage guarantees consistent performance for highly transactional, read-intensive workloads. The high I/O requirements of virtualized environments, such as server and desktop infrastructures, on the other hand, are a great fit for DRAM-based server-side caching technologies that are not dependent on flash. Many workloads only need millisecond-level response times, while flash delivers microseconds – headroom that is an unnecessary expense and can and should be avoided.
- Your applications are write-intensive. Flash devices have a finite number of program-and-erase cycles, meaning a high number of writes can make the platform economically unrealistic and impractical. In this case, the fixed lifetime of solid-state drives (SSDs) accelerates the refresh cycle and adds another recurring expense.
- You need high storage capacities, but they have to fit your budget. Design constraints prevent SSDs from reaching the same capacities as hard disk drives (HDDs). Many of today’s SSDs are below 1 terabyte (TB) per unit, compared to the 10TB provided by some leading-edge HDDs, which average about $0.03 per gigabyte (GB). SSDs will most likely never provide comparable capacities, and their price per GB can measure up to 20 times higher than typical HDDs. Capacity-optimized storage arrays combined with server-side storage acceleration technologies can deliver the most dramatic economic benefits (a rough cost comparison follows this list).
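To put the capacity economics in perspective, here is a back-of-the-envelope comparison in Python using the figures cited above; the 100TB working set and the 20x SSD premium are illustrative assumptions drawn from the article, not vendor quotes.

```python
# Back-of-the-envelope media cost comparison using the figures cited above.
# The 100 TB capacity requirement is an illustrative assumption.
CAPACITY_GB = 100 * 1000                   # 100 TB expressed in GB (decimal)
HDD_PRICE_PER_GB = 0.03                    # ~3 cents/GB for 10 TB HDDs
SSD_PRICE_PER_GB = HDD_PRICE_PER_GB * 20   # "up to 20 times higher"

hdd_cost = CAPACITY_GB * HDD_PRICE_PER_GB
ssd_cost = CAPACITY_GB * SSD_PRICE_PER_GB

print(f"HDD media cost for 100 TB: ${hdd_cost:,.0f}")   # ~$3,000
print(f"SSD media cost for 100 TB: ${ssd_cost:,.0f}")   # ~$60,000
```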
Making It Work
For some IT managers, one way to recover from flash fever is to adopt a host-accelerated model for storage performance. These systems use scale-out platforms to decouple I/O performance from storage capacity and provide the best performance at the lowest price. As a result, users can access server-side, scale-out performance that can be paired with any capacity architecture, including legacy arrays. Some users choose to realize immediate performance benefits on an existing storage area network (SAN) or network-attached storage (NAS) array.
However, the most dramatic economic benefits can be achieved when this architecture is a part of a new deployment, combining storage acceleration software on the host with a dense, capacity-optimized array for ultimate flexibility, control and efficiency.
For issues rooted in storage capacity, shingled magnetic recording (SMR) drives are emerging as the next ultra-dense storage medium. The newest 10TB SMR disks are priced at just 3 cents per gigabyte, making them ideal building blocks for inexpensive capacity. On the performance side, non-volatile dual in-line memory modules (NVDIMMs) are emerging. NVDIMMs plug directly into the memory bus via a DIMM slot on the motherboard instead of the higher-latency PCIe bus. Writes complete in fewer than 10 microseconds, an order of magnitude faster than current mainstream flash can handle.
Whichever way you choose to recover from flash fever, the potential of existing arrays and new storage deployments will grow if you simply consider the other options in today’s IT landscape. Above all, avoid wasting resources – memory, processing, budget or otherwise.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
AWS Releases Slew of Services Designed to be Cheaper, Easier than “Old Guard” Tools 
This article originally appeared at The WHIR
LAS VEGAS — With all the excitement around enterprises moving to the cloud, AWS is recognizing that a lot of that momentum is being driven by developers.
In the keynote on Wednesday morning at AWS re:Invent, Andy Jassy, VP of AWS, spoke about how the cloud has given developers and “builders” more power, and made several announcements of new services directed at these particular users.
“When you talk to developers and line-of-business managers, cloud represents freedom and controlling your own destiny,” he said. “Once builders get a taste of this there’s no way they’ll go back to doing things the old way.”
AWS itself has over 1 million active customers (which it defines as a non-Amazon entity that has used the platform within the past 30 days). Its cloud services are growing fast – from Q2 2014 to Q2 2015, Amazon EC2 usage increased 95 percent, Amazon S3 data transfer increased 120 percent, and database services usage increased 127 percent year over year.
Amazon Kinesis Firehose – Making Uploading Streaming Data Easier
On Wednesday, AWS made a number of announcements directed at developers, including Amazon Kinesis Firehose, a fully managed service, building on Amazon Kinesis Streams, for loading streaming data into Amazon S3 and Amazon Redshift, with other AWS data stores coming soon.
Amazon Kinesis Firehose can capture data from any streaming data source and load it directly into AWS in real time.
Jassy said that the service elastically scales for the customer so they don’t have to worry about adjusting streams, and the data can be encrypted.
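To give a sense of how little plumbing is involved on the producer side, here is a minimal sketch using the boto3 Python SDK to push a single record into a Firehose delivery stream; the stream name is a placeholder, and the delivery stream with its S3 or Redshift destination is assumed to already exist.

```python
import json
import boto3

# Minimal sketch: push one event into an existing Firehose delivery stream.
# "clickstream-to-s3" is a hypothetical stream name; Firehose buffers records
# and delivers them to the destination configured on the stream, so no
# destination details appear in this call.
firehose = boto3.client("firehose")

event = {"user_id": 42, "action": "page_view", "ts": "2015-10-08T12:00:00Z"}

firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",
    Record={"Data": json.dumps(event).encode("utf-8") + b"\n"},
)
```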
Amazon Snowball – A Shippable Data Transfer Solution
Among the other announcements is AWS Snowball, a physical data transfer service. At $200 a pop, Snowball is a faster and more cost-effective way to upload 50TB or more, said Bill Vass, head of AWS storage services.
Snowball costs about one-fifth as much as moving the data over the network, encrypts data end to end, features a tamper-proof, secure enclosure, and can convert legacy file systems to object storage.
Its e-ink label turns into the UI once it reaches its destination, a journey that is fully trackable. And Snowballs can be reused to transfer more data.
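The claim that shipping disks beats the network at this scale is easy to sanity-check. The sketch below estimates transfer times for 50TB over typical WAN links; the link speeds and the 50 percent effective utilization are illustrative assumptions, not AWS figures.

```python
# Rough estimate of how long it takes to move 50 TB over a WAN link,
# assuming the link sustains only 50% of its nominal rate end to end.
# Link speeds and utilization are illustrative assumptions.
DATA_BITS = 50 * 1e12 * 8          # 50 TB (decimal) expressed in bits
EFFECTIVE_UTILIZATION = 0.5

for gbps in (0.1, 1, 10):
    seconds = DATA_BITS / (gbps * 1e9 * EFFECTIVE_UTILIZATION)
    print(f"{gbps:>4} Gbps link: ~{seconds / 86400:.1f} days")
# Prints roughly 93 days at 100 Mbps, 9.3 days at 1 Gbps, 0.9 days at 10 Gbps.
```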
Amazon QuickSight – Cloud-Based Business-Intelligence for all Employees
Amazon QuickSight is available now in preview and, like other AWS services announced at re:Invent, was designed to be a cost-effective alternative to “old-guard” tools. Dr. Matt Wood, GM of product strategy for AWS, said that the cloud-powered business intelligence service can produce its first visualization in 60 seconds, and that partners can use the service to speed up queries in their own BI tools.
Designed for employees of all technical skill levels, Amazon QuickSight uses an in-memory calculation engine it calls SPICE to perform advanced calculations and quickly render visualizations.
It allows users to build live dashboards, and given permission, can look inside data sources already in an AWS account, inspect data types inside the data, and look for relationships between data types automatically. It also suggests the best visualizations for data.
Within the platform, data-driven stories can be shared with colleagues or embedded in websites, and more.
AWS Database Migration Service, Amazon RDS for MariaDB
“It’s rare that I meet with an enterprise customer that isn’t looking to flee their database provider,” Jassy said. Databases have traditionally been very expensive and proprietary, with high amounts of lock-in, he added.
AWS launched Amazon Aurora last year, which promised the same performance of commercial databases with open source pricing. Aurora is now the fastest growing AWS service, according to Jassy.
On Wednesday, AWS launched support for MariaDB on Amazon RDS as a fully managed service, allowing customers to deploy a MariaDB database with a few clicks in the AWS Management Console.
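For readers who prefer the API to the console, a roughly equivalent call through the boto3 Python SDK might look like the sketch below; the instance identifier, instance class, storage size, and credentials are placeholder assumptions.

```python
import boto3

# Sketch: launch a managed MariaDB instance on Amazon RDS.
# Identifier, instance class, storage size, and credentials are placeholders.
rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="example-mariadb",
    Engine="mariadb",
    DBInstanceClass="db.t2.medium",
    AllocatedStorage=100,                   # gigabytes
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # use a real secret in practice
)
```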
Also on the database side, AWS announced its Database Migration Service, available in preview, which lets users migrate production databases to AWS with minimal downtime by continuously replicating data from the source to the new target.
A feature of the migration service, the AWS Schema Conversion Tool, ports database schemas and stored procedures from one database platform to another, allowing customers to move applications from Oracle and SQL Server to Amazon Aurora, MySQL, MariaDB, and soon PostgreSQL.
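At the API level, kicking off such a migration amounts to creating a replication task that points a source endpoint at a target endpoint. The sketch below uses the boto3 Python SDK and assumes the replication instance and both endpoints already exist; all ARNs are placeholders.

```python
import json
import boto3

# Sketch: start a continuous migration with the Database Migration Service.
# Assumes a replication instance and source/target endpoints already exist;
# the ARNs below are placeholders.
dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="example-oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
    MigrationType="full-load-and-cdc",   # full copy, then ongoing replication
    TableMappings=json.dumps(table_mappings),
)
```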
This first ran at http://www.thewhir.com/web-hosting-news/aws-releases-slew-of-services-designed-to-be-cheaper-easier-than-old-guard-tools
New Massive-Memory AWS Cloud VMs First IaaS to Use Intel’s Latest Xeon
The new breed of enterprise applications, where massive amounts of corporate and customer data are constantly analyzed in real time, requires a lot of computing horsepower, and those workloads are best served by computing architectures that rely on parallel processing, where a lot of processes happen concurrently.
As it continues its rapid charge on the enterprise IT market, Amazon Web Services wants to make sure companies move as many workloads out of their corporate data centers into its cloud data centers as possible, and workloads that need the most computing muscle are among the ones that are hardest to move to the cloud.
This morning, on stage at the company’s re:Invent conference in Las Vegas, Amazon CTO Werner Vogels announced a new kind of cloud VM designed to make that move easier. The X1 instance, which the company expects to roll out next year, has 2TB of memory and 100 processor cores, enabling the kind of high concurrency Big Data analytics needs.
Powered by Intel’s latest Xeon E7 v3 processors, the instance is a full order of magnitude larger than Amazon’s current-generation high-memory instances, AWS chief evangelist Jeff Barr wrote in a blog post. It is designed for applications like SAP HANA, Microsoft SQL Server, Apache Spark, and Presto.
AWS will be the first Infrastructure-as-a-Service provider industry wide to offer the might of Intel’s latest Xeon as a cloud service, Diane Bryant, head of Intel’s data center group, said during Thursday’s keynote at the show.
“Cloud computing is … the benchmark for delivering efficient and accessible technology,” Bryant said. But performance is critical for real-time, end-to-end data analytics solutions that can crunch through massive data sets, which means “cloud computing needs extreme performance.”
Public cloud has become big business for Intel, which often designs custom processors for the world’s leading cloud providers, such as Amazon and Microsoft. Last month, for example, Microsoft rolled out the latest DV2 cloud VMs on its Azure cloud, powered by customized Xeon E5 v3 processors.
In addition to announcing its most powerful cloud VMs yet, AWS also announced the tiniest cloud instance it has ever offered. T2.nano, slated for availability later this year, has one virtual CPU core and 512 MB of memory.
Designed for applications like low-traffic websites, T2.nano is a “burstable” service: customers accumulate CPU credits during low-demand periods and can spend them to burst to higher capacity than the tiny instance’s baseline when necessary.
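Once T2.nano becomes available, launching one should look no different from launching any other EC2 instance type; the sketch below uses the boto3 Python SDK, and the AMI ID and key pair name are placeholders.

```python
import boto3

# Sketch: launch a single burstable t2.nano instance (once the type is
# generally available). The AMI ID and key pair name are placeholders.
ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI
    InstanceType="t2.nano",      # 1 vCPU, 512 MB RAM, CPU-credit bursting
    KeyName="my-keypair",        # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
```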
What Thailand Floods Taught AWS Hardware Team about Supply Chain Management
In the summer of 2011, when monsoon weather in Thailand caused massive floods that crippled manufacturing plants and caused a global shortage of hard drives (the world learned then just how much of the hard drive supply comes from Thailand), Amazon’s infrastructure team was just starting to design its own data center hardware, such as servers and network switches.
The volume of hard drive components the company needed at the time was small enough to avoid having to go through any extreme pain to source them during the crisis. Had Amazon Web Services been producing its own hardware then at the rate it is producing it today, the situation would have been dire.
AWS is one of the world’s largest single-user data center operators. Like its peers and competitors – Google and Microsoft are the two primary examples – it designs its own hardware to make sure the feature set matches the needs of its software and to take advantage of cost reductions that come with sourcing hardware at massive scale. Becoming their own hardware suppliers means these companies have to get supply chain management down to a science to ensure they don’t run out of capacity in the face of rising demand.
The lessons the AWS infrastructure team learned in Thailand in 2011, and the changes it made afterward based on that knowledge, turned into some of the most important processes it relies on today to ensure it can deliver the data center capacity the company’s exploding cloud business needs, Jerry Hunter, VP of infrastructure at AWS, said Wednesday while on stage at the annual AWS re:Invent conference in Las Vegas.
“Turns out it can be pretty hard to get the stuff that you need,” he said. And it wasn’t only about the hard disks. Components like the motors that spin the disks, for example, predominantly came from one supplier, whose plants were on the flood plain, they learned.
The disaster was a wake-up call for the team, which realized they had to gain a deep understanding of the component supply chain if they were to weather another incident of this scale.
Gaining that level of understanding wasn’t easy. When AWS asked local manufacturers to provide some insight into their operations, the manufacturers at first declined, and it took a lot of relationship-building to get access to the necessary knowledge, Hunter said. “We still have great relationships with those vendors today.”
The team also spent time learning about the processes used by its counterparts on the retail side of Amazon, in its Fulfillment Centers, which are famous for being able to deliver packages quickly, and used that knowledge to improve its data center hardware supply chain, turning it “from a potential liability into strategic advantage,” he said.