Data Center Knowledge | News and analysis for the data center industry
 

Thursday, April 9th, 2015

    12:00p
    Amazon Wants to Replace the Enterprise Data Center

    SAN FRANCISCO – Experts often describe hybrid IT infrastructure, a combination of in-house data center resources and cloud services hosted by outside providers, as the ultimate enterprise IT setup: one that lets a company leverage the agility of the cloud while retaining the control of on-premises infrastructure.

    Stephen Orban, head of enterprise strategy at Amazon Web Services, however, painted a very different vision of the future during his opening keynote at the company’s conference in San Francisco Wednesday. Hybrid infrastructure does have a role to play in that vision, but the role is transitory. It’s not the end goal.

    The end goal for AWS is for every function supported by enterprise data centers today to be offered by Amazon as a service delivered out of its own data centers. In that vision, the role of hybrid is simply to smooth the transition and let customers’ existing infrastructure investments reach their natural end of life. “Most large organizations have existing IT investments that haven’t yet fulfilled their life,” he said.

    Why? According to Orban, AWS can provide those functions cheaper and faster, leaving enterprise IT teams with more free time and resources to add value to the organizations they support.

    AWS Casts a Wide Net

    And the company has been busy fleshing out its portfolio of enterprise services to make sure it supports the full range of enterprise data center needs. “Nowhere else can you find an inventory of IT solutions this large,” Orban said.

    AWS now has services that support business applications under the purview of CTOs, corporate apps overseen by CIOs, and end-user IT services run by people in top IT support roles in the enterprise. Underneath them all is, of course, the Amazon cloud division’s bread-and-butter Infrastructure-as-a-Service catalog, and everything is secured and controlled by a set of information security services.

    Examples of the business apps are things like Elastic Beanstalk and Lambda, which help with infrastructure management, or SNS and Mobile Analytics, which make it easier to provide mobile applications. Examples of corporate applications are things like mail and docs, and the end user computing portfolio includes things like cloud desktops and AppStream, a service that lets you deliver Windows applications to user devices out of Amazon data centers.

    Amazon’s strategy for tackling the enterprise data center market extends beyond simply replacing functions currently handled in-house with services. That, like hybrid, is part of the transition, and it is followed by a new version of the IT organization, in which business application, corporate application, and end-user computing functions are joined by DevOps: the function of ensuring the IT infrastructure is responsive to the quickly changing needs of enterprise software developers.

    The company has also set up a professional services organization to guide enterprise IT users through key decisions about replacing existing functions with cloud services or adding new ones.

    Competing for Enterprise Data Center Spend

    AWS is not alone in its pursuit of the massive annual enterprise IT spend. And it is massive. In January, Gartner estimated worldwide IT spending in 2014 to have been about $3.7 trillion and forecasted it would reach $3.8 trillion this year.

    Companies spent $141 billion on data center systems in 2014, $317 billion on enterprise software, and $956 billion on IT services.
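
    For a sense of scale, a quick calculation of the shares these segments represent of Gartner’s $3.7 trillion 2014 total (segment figures are from the article; the percentages are computed here):

```python
# Segment spending from the Gartner figures cited above, in $ billions.
total = 3700
segments = {
    "data center systems": 141,
    "enterprise software": 317,
    "IT services": 956,
}

for name, spend in segments.items():
    print(f"{name}: ${spend}B ({spend / total:.1%} of total IT spend)")

combined = sum(segments.values())
print(f"combined: ${combined}B ({combined / total:.1%} of total IT spend)")
```

    Together these three segments account for a bit under 40 percent of total annual IT spend.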

    Growth in server and data center storage systems purchases has slowed, which Gartner attributes to longer replacement life cycles and a higher-than-expected rate of switching to cloud services.

    Competing with AWS for the portion of total spend being diverted to cloud services are other internet giants like Google and Microsoft, as well as enterprise-IT mainstays like IBM and VMware. Companies building private OpenStack clouds on their own or together with OpenStack vendors are also a growing source of competition, and the likes of IBM, HP, and Dell are happy to enable users to build their own cloud infrastructure using the open source architecture.

    Nobody of course knows exactly what the future of enterprise data centers is going to look like. Whether most companies will eventually hand the business of operating IT infrastructure over to service providers like Amazon entirely or find that the hybrid approach is best for them is hard to predict.

    But the approach Amazon has taken is a holistic one. Trying to offer a cloud-service alternative to every enterprise data center function in existence, the company wants to be ready to become the one-stop enterprise IT shop if the need arises.

    2:30p
    Riverbed Pulls Remote Offices into Data Center

    Looking to take the concept of the virtual data center to its next logical level, Riverbed launched an upgrade to its SteelFusion platform that makes it possible to remove all servers and storage from branch offices.

    Josh Dobies, senior director of product marketing and strategy for the SteelHead and SteelFusion platforms, says that rather than trying to remote-manage IT infrastructure in branch offices, it’s now feasible to provide a complete virtual instance of a branch office’s IT environment within a centrally managed data center.

    “Right now IT being delivered at the branch office is rigid, insecure, and inefficient,” says Dobies. “We’re trying to make provisioning new services for the branch office as easy as provisioning a VM.”

    Dobies says at the core of version 4.0 of Riverbed SteelFusion is a new FusionSync capability that leverages Riverbed’s wide area network (WAN) optimization and storage technologies to keep track of changes to the virtual instance of a local server environment that is presented to users in a branch office.

    With half of IT budgets allocated to supporting remote offices, Dobies says Riverbed SteelFusion creates an opportunity for IT organizations to dramatically reduce costs. Not only has WAN optimization technology evolved in terms of the I/O throughput it can sustain; the latest version of the Riverbed SteelFusion appliance also makes greater use of caching, with up to 256GB of memory, says Dobies.

    The end result, says Dobies, is an ability to deploy a “Zero IT Branch” computing strategy that might actually encourage organizations to open more branch offices because the IT infrastructure needed to support them can be deployed as a virtual entity that physically resides in a centralized data center environment.

    IT organizations have not only spent a lot of money on tools to monitor IT infrastructure deployed in remote offices; many of them also wind up having to physically visit those offices multiple times a year. Add up the travel expenses and the time required to get there, and the cost and productivity challenges of supporting remote offices quickly become substantial. Of course, organizations can opt to hire full-time IT personnel for a remote office, hire a contractor, or try to turn a member of the office staff into the local IT expert. But each of those approaches either incurs additional cost or takes someone away from their primary job function.

    In contrast, providing IT services within a virtual data center environment should not only lower the total cost of IT; it should just as importantly lead to a more consistent delivery of higher quality IT services.
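
    To put rough numbers on that arithmetic, here is a minimal back-of-the-envelope sketch; every figure below is a hypothetical placeholder, not a number from Riverbed or the article:

```python
# Back-of-the-envelope model of what physically supporting remote offices costs.
# All inputs are hypothetical placeholders for illustration only.

def annual_visit_cost(offices, visits_per_year, travel_cost, hours_lost, hourly_rate):
    """Estimate the yearly cost of physically visiting remote offices."""
    travel = offices * visits_per_year * travel_cost
    lost_productivity = offices * visits_per_year * hours_lost * hourly_rate
    return travel + lost_productivity

# Example: 25 branch offices visited 4 times a year, $800 in travel per visit,
# and 12 staff-hours lost per visit at a loaded rate of $75/hour.
print(annual_visit_cost(25, 4, 800, 12, 75))  # -> 170000
```

    Even with modest assumptions, the total lands in the six figures annually, which is the kind of recurring cost a centrally hosted branch environment is meant to eliminate.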

    3:00p
    Microsoft Intros Hyper-V Containers in Bid for Azure Developer Supremacy

    Microsoft made a couple of big container moves recently. The first is the introduction of Hyper-V Containers, a deployment option that uses the Hyper-V hypervisor to run containers with stronger isolation on Windows Server. The other is Nano Server, a stripped-down, minimal-footprint Windows Server install option made for cloud and containers.

    Both the Hyper-V Containers and Nano Server are a bid to entice developers to Microsoft’s cloud platform.

    Hyper-V Containers work in conjunction with Docker containers. The technology combines the maneuverable, agile Docker container model with the isolation of virtual machines, or “enhanced isolation,” as Microsoft dubs it. VMware touts a similar story for its hypervisor and Docker, support for which Microsoft added to Azure late last year.

    Nano Server is on the same playing field as CoreOS, a lightweight, lean flavor of Linux. CoreOS recently raised a $12 million round from Google Ventures, the investment arm of another cloud competitor. Canonical’s Ubuntu Snappy and Red Hat’s Project Atomic are also competing offerings. These lightweight operating systems and containers go hand-in-hand in making the app lifecycle nimble.

    Google, Microsoft, Amazon Web Services, and others are trying to attract application development onto their clouds. The biggest trend in application development is containers, which make apps easy to deploy anywhere by packaging them together with their dependencies. Containers eliminate a major app development headache: developers no longer have to tune an app for each infrastructure on which it will run.

    Developers have embraced containers en masse. The enterprise wants in on the action too, but there is apprehension over whether the technology is quite enterprise-ready, even though making it so is Microsoft’s goal in introducing the two new products. Docker is already supported and, like Azure, appeals to enterprises.

    “As developers look to expand the benefits of containers to a broader set of applications, new requirements are emerging,” wrote Mike Neil, general manager of Windows Server. “Virtualization has historically provided a valuable level of isolation that enables these scenarios, but there is now opportunity to blend the efficiency and density of the container model with the right level of isolation.”

    3:30p
    Tips on Managing Your Vendors

    A data center is only as good as the humans behind it. While a data center is a structure housing a complex ecosystem of IT equipment, supported by cables and networking gear, power distribution units, cooling equipment, generators or flywheels, and so much more, it only runs well when qualified people design, build and maintain the multi-dimensional environment.

    Chris Papp, VP of sales at Air Force One, an engineering firm focused on mechanical services with hundreds of alliance partners, will moderate a panel, “Best Practices in Managing Outsourced Service Providers in the DC,” at the spring Data Center World. The panel aims to help data center managers and operators run their operations smoothly and without a hitch by selecting top-notch providers and working with them well.

    The trade show and educational conference convenes in Las Vegas, NV, on April 19-23. The educational tracks will include many topical sessions covering issues and new technologies facing data center managers, service providers, owners, and operators.

    Panel on Working With Your Providers

    “Data centers have trouble managing providers,” Papp said. “It’s important that proper planning and communication take place and everyone works together and gets you what you need.”

    The panel includes an end user who will share from the perspective of a data center manager and an engineering firm representative who can speak to designing and building data centers and the associated challenges. “Often the end user doesn’t know what they want,” Papp said about the design process.

    Common issues in managing providers span the data center life cycle, said Papp: engineering design and construction; operations, including preventative equipment maintenance; and end-of-life issues, such as equipment replacement and expanding the data center to meet capacity demand.

    Papp said the panel will discuss how to measure success/failure; general planning (project scope, timeline, contingency planning); and best practices in communication with contractors.

    Many Issues Arise from Poor Selection

    Other issues often arise when the end user selects an engineering firm that doesn’t have experience with data centers, or when the contractors installing equipment are not the same people who can service it.

    “It’s a matter of ongoing quality control,” said Papp, adding that these kinds of issues are widespread. “These issues don’t discriminate. We find them among all kinds of end users. Larger organizations have so many complexities that it leads to paralysis and they can’t communicate. Or smaller businesses don’t have resources or they have selected an engineering firm that doesn’t have the reach they need.”

    Air Force One has partners across North America who are qualified to work in data centers. “We have the same level of expertise throughout the partners and we qualify selected techs across North America,” Papp said, adding that there are 500 companies in his network and the company has 20 to 30 years of referrals.

    Qualifying Service Providers

    Selecting the right outsourced provider can take time and requires close attention to detail, but careful selection will pay off later.

    “We look at the history, referrals, local requirements, national reach, services provided, background/security checks, safety training, expertise in field, response time,” Papp said.

    Air Force One also evaluates what communications are needed and the time frame from service order to solution, he said. This is an important item to understand if you are not tolerant of downtime.

    These elements and others in the outsourced-provider process will be discussed during the panel presentation at Data Center World. To learn more, attend this panel at the spring Data Center World Global Conference in Las Vegas. Register at the Data Center World website.

    4:00p
    Taking Your Data Center to Next Level With Lithium-Ion Power

    Emilie Stone is General Manager for Methode – Active Energy Solutions.

    In 1859, Western Union installed one of the first lead-acid battery rooms to provide back-up power for its telegraph services. Since then, advances in power electronics efficiency, battery packaging (sealed versus valve-regulated), and capacity have been made, but the technology is fundamentally the same. The back-up battery and uninterruptible power supply (UPS) market is ripe for innovation. Lithium-ion batteries, with their high energy density, minimal maintenance, and low capacity fade, are poised to upend our existing assumptions about what a UPS is capable of doing.

    The term “lithium-ion” is used to describe a class of batteries, typically with rechargeable/secondary cells and a lithium-based cathode. While the precise capabilities of each lithium-ion chemistry vary, they all perform better than lead-acid. For example, a common lithium nickel-manganese-cobalt (Li-NMC) cell compared to a common VRLA (valve-regulated lead-acid) cell will have 150 percent of the energy density (Wh/L) and 275 percent of the specific energy (Wh/kg), meaning it is smaller and lighter for a given capacity. For the data center, the benefit equates to a battery offering more power in a smaller footprint, freeing up valuable data center real estate.

    In addition, Li-NMC exhibits 190 percent of the cycle life at a higher depth-of-discharge (80 percent versus 50 percent for VRLA), which translates into more useable capacity for the UPS. NMC cells are typically rated to a full-power operating environment of 40-45°C versus 25°C for VRLA. NMC batteries also maintain 92 percent efficiency at a 1C discharge versus 60 percent for VRLA. This results in less waste heat in a rack.
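
    As a worked example of what those ratios mean for sizing, here is a rough sketch. The baseline VRLA density figures are illustrative assumptions only; the 150 percent and 275 percent ratios and the depth-of-discharge limits come from the comparison above:

```python
# Rough sizing comparison of VRLA vs. Li-NMC for a UPS battery string.
# Baseline VRLA densities below are illustrative assumptions; the 1.5x and
# 2.75x ratios and the usable depth-of-discharge figures are from the article.

VRLA = {"wh_per_l": 90.0, "wh_per_kg": 35.0, "usable_dod": 0.50}  # assumed baseline
LI_NMC = {
    "wh_per_l": 90.0 * 1.5,    # 150 percent of VRLA energy density
    "wh_per_kg": 35.0 * 2.75,  # 275 percent of VRLA specific energy
    "usable_dod": 0.80,
}

def pack_size(chem, usable_wh_needed):
    """Return (nameplate Wh, liters, kg) needed to deliver the usable energy."""
    nameplate = usable_wh_needed / chem["usable_dod"]
    return nameplate, nameplate / chem["wh_per_l"], nameplate / chem["wh_per_kg"]

# Example: five minutes of ride-through at 6 kW = 500 Wh of usable energy.
for name, chem in (("VRLA", VRLA), ("Li-NMC", LI_NMC)):
    wh, liters, kg = pack_size(chem, 500)
    print(f"{name}: {wh:.0f} Wh nameplate, {liters:.1f} L, {kg:.1f} kg")
```

    Under these assumptions, the Li-NMC pack needs less than half the volume and roughly a quarter of the weight of the VRLA pack for the same usable energy.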

    There are, however, two major obstacles to widespread lithium-ion use: cost and safety. Just a few years ago, lithium-ion batteries were roughly four times the cost of lead-acid batteries. However, as cell production has scaled up to support the electric vehicle industry and consumer products, costs have fallen and should continue to fall. Lithium-ion production is projected to grow by 67 percent over the next five years.

    Another cost driver for lithium-ion technologies is the use of a battery management system (BMS). While the primary function of the BMS is to maintain a safe operating environment for the cells by monitoring voltage, current, and temperature, the BMS also provides invaluable insight into the state-of-charge and health of the battery – all key for making more informed decisions about if and when to replace the battery.

    In addition to the BMS, safety devices within the lithium-ion cell itself, such as current-interrupt devices, positive-temperature-coefficient fuses, and vents, are commonly implemented. Pack-level safeties, such as fusing and thermal dissipation measures, help ensure a safe operating environment for the cells, making them a stable solution for data centers.

    In a data center environment, these benefits translate into savings in space, weight, and replacement costs that directly contribute to the bottom line. It is now possible to get 6kW of power in a rack in a 2U package weighing less than 100lbs. Rather than employing a heavily cooled and reinforced battery room, the UPS can be deployed in the rack or at the end of a row, offering flexibility and simple power runs. Higher cycle life means a lithium-ion UPS can last up to seven years without service. Combined with fast recharge times, this also means lithium-ion batteries can be used for non-traditional UPS functions, like supplementing the grid for load balancing to maintain the power budget at the rack level. All of these compelling benefits are bringing lithium-ion batteries to the forefront of stationary storage applications and forging the path for their future in the data center.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:30p
    US Army Taps IBM’s Government Cloud for Logistics Support

    The U.S. Army is using a hybrid cloud from IBM to process more than 40 million transactions a day, more than the New York Stock Exchange handles.

    IBM’s cloud connects into the Army’s on-premises environment to power broader analytics at the Army Logistics Support Activity (LOGSA). LOGSA runs one of the federal government’s biggest logistics systems, providing integrated logistics support for Army operations worldwide. The hope is that the cloud will improve the efficiency and effectiveness of logistical coordination.

    The shift to hybrid has resulted in cost savings of 50 percent, according to IBM, with the Army now focused on incorporating new analytics capabilities that take advantage of flexible cloud infrastructure. Examples include condition-based maintenance and data mining.

    LOGSA provides logistics intelligence, life cycle support, and technical advice and assistance to current and future forces, and it integrates logistics information.

    LOGSA is home to the Logistics Information Warehouse (LIW), the Army’s official storehouse for collecting, storing, organizing, and delivering logistics data. LIW provides services to more than 65,000 users and 150 partners around the world.

    “The Army not only recognized a trend in IT that could transform how they deliver services to their logistics personnel around the world, they also implemented a cloud environment quickly and are already experiencing significant benefits,” Anne Altman, general manager for IBM’s U.S. Federal business unit, said in a press release. “They’re taking advantage of the inherent benefits of hybrid cloud: security and the ability to connect it with an existing IT system. It also gives the Army the flexibility to incorporate new analytics services and mobile capabilities.”

    IDC recently named IBM a leader in U.S. Government cloud Infrastructure-as-a-Service. Two SoftLayer data centers in Ashburn, Virginia, as well as one in Dallas were opened specifically for federal government workloads. IBM’s SmartCloud for Government is FedRAMP certified.

    IBM also has a deal worth up to $1 billion with the Department of the Interior to help modernize to cloud and SaaS.

    5:00p
    VMware Data Center for Cloud Lights Up in Melbourne

    VMware has opened a vCloud Air data center in a Telstra facility in Melbourne. There are now just under ten VMware data centers supporting its public cloud infrastructure, which is aimed squarely at the enterprise IT shop. vCloud Air is pitched at companies that run vSphere environments in their own facilities and want to hook into the public cloud.

    The VMware data center will serve several surrounding cities as well as New Zealand. Serving directly from within Australia reduces latency for those customers and addresses data sovereignty concerns.

    Melbourne will also offer an updated disaster recovery service out of the data center. Disaster recovery has been a popular use case for vCloud Air.

    “Australian businesses will have the ability to seamlessly extend applications into the cloud without any additional configuration, and will have peace of mind, knowing this IT infrastructure will provide a level of reliability and business continuity comparable to in-house IT,” said Duncan Bennet, a vice president and managing director for VMware in the region. “It means businesses can quickly respond to changing business conditions and scale IT up and down as required without disruption to the overall business.”

    Australian telco and data center services giant Telstra is providing the data centers behind VMware’s public cloud in the country. VMware announced its partnership with Telstra late last year.

    VMware Cloud Services Business Unit executive vice president and general manager Bill Fathers called Australia “a critical mature market and bellwether for enterprise IT in the Asia Pacific region.”

    Melbourne enhances vCloud’s Asia Pacific coverage, joining a data center in Tokyo. A possible next location is Singapore, a major hub in the region.

    The majority of vCloud Air data centers are in the U.S. In Europe, there are two in the U.K. and a recently opened location in Frankfurt.

    Last month the company added Virtual Network Functions from several vendors, with vCloud as the underlying infrastructure.

    vSphere 6, the latest release of VMware’s flagship cloud software suite, recently entered general availability. vSphere is a software complement to vCloud infrastructure in a wider hybrid cloud strategy.

    5:33p
    Digital Realty Appoints CIO

    Digital Realty, a San Francisco-based global data center services giant, has appointed the first CIO in the company’s 10-year history, signaling that the change in strategy it kicked off early last year would not be limited to pruning of its massive real estate portfolio and partnerships with service providers.

    Its new CIO will be Michael Henry, currently senior vice president and CIO at Rovi, a cloud media technology company based in Santa Clara, California. He will start at Digital next week.

    Digital’s CEO William Stein said Henry will drive technological innovation that enables the company’s employees, such as new IT platforms and systems.

    “The exchange and analysis of data is critical to all of our clients, and it certainly is for our employees,” Stein said in a statement. “Having Michael’s outstanding talents as a transformational technologist, as well as a thought leader, will accelerate unlocking employee innovation and insight and directly benefit our clients.”

    Digital has built its global business as a provider of wholesale data center space and is known primarily as a data center landlord. Since last year, however, the company has been trying to change that perception.

    It has been pursuing more and more partnerships with companies that provide services beyond its core space-and-power competency. It has also placed more emphasis on retail colocation space in several of its properties that have especially high numbers of carriers.

    That the company has hired a CIO to give more sophisticated tools to its employees shows that, like many other big enterprises, it is looking to technology to increase its value to customers.

    Digital CTO Jim Smith will retain his role, while also overseeing the company’s property and technical operations.

