Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, March 15th, 2017

    12:00p
    Meet Microsoft, the New Face of Open Source Data Center Hardware

    If you’re running a cloud platform at Microsoft’s scale (30-plus regions around the world, with multiple data centers per region) and you want it to run on the best data center hardware, you typically design it yourself. But what if there are ideas out there — outside of your company’s bubble — that can make it even better? One way to find them is to pop your head out as much as possible, but which way do you look once you do? There’s another way to do it, and it’s proven to be quite effective in software. We’re talking about open source of course, and Microsoft — a company whose relationship with open source software has been complicated at best — has emerged as a pioneer in applying the open source software ethos to hardware design.

    Facebook, which has been a massive force in open source software, became a trailblazer in open source data center hardware in 2011 by founding the Open Compute Project and using it to contribute some of its custom hardware and data center infrastructure designs to the public domain. But there’s been a key difference between Facebook’s move and the way things work in open source software. The social network has been open sourcing completed designs, not trying to crowdsource engineering muscle to improve its infrastructure. Late last year, Microsoft, an OCP member since 2014, took things further.

    In November, the company submitted to the project a server design that at the time was only about 50 percent complete, asking the community to contribute as it takes it to 100 percent. Project Olympus (that’s what this future cloud server is called) is not an experiment. It is the next-generation hardware that will go into Microsoft data centers to power everything from Azure and Office 365 to Xbox and Bing, and it’s being designed in a way that will allow the company to easily configure it to do a variety of things: compute, data storage, or networking.


    Learn about Project Olympus directly from the man leading the charge next month at Data Center World, where Kushagra Vaid, Microsoft’s general manager for Azure Cloud Hardware Infrastructure, will be giving a keynote titled “Open Source Hardware Development at Cloud Speed.”


    There’s a link between the design’s modularity aspect and the open source approach. Microsoft wants to be able to not only configure the platform for different purposes by tweaking the usual knobs, such as storage capacity, memory, or CPU muscle, but also to have a single platform that can be powered by completely different processor architectures — x86 and ARM — and it wants to have multiple suppliers for each — Intel and AMD for x86 chips, Qualcomm and Cavium (and potentially others) for ARM. The sooner all the component vendors get involved in the platform design process, the faster a platform will emerge that’s compatible with their products, goes the thinking.

    Read more: Why Microsoft Says ARM Chips Can Replace Half of Its Data Center Muscle

    “Instead of having a team do everything in-house, we can work with these folks in the ecosystem and then bring the solutions to market faster, which is where differentiation is,” Kushagra Vaid, general manager for Azure cloud hardware infrastructure, said in an interview with Data Center Knowledge. “So, it’s been a completely different development world from that standpoint versus what we used to do.”

    Making Data Centers Cheaper to Operate

    Microsoft has already seen the benefits of going the Open Compute route with its current- and previous-generation hardware developed in collaboration with suppliers. Project Olympus widens the pool of potential engineering brainpower it can tap into. About 90 percent of the hardware it buys today is OCP-compliant, meaning it fits the basic standard parameters established by the project, such as chassis form factors, power distribution, and other specs. It hasn’t always been this way.

    Microsoft’s Project Olympus servers on display at the Open Compute Summit in Santa Clara in March 2017 (Photo: Yevgeniy Sverdlik)

    The company started transitioning to a uniform data center hardware strategy for its various product groups around the same time it joined OCP. The goal was to reduce cost by limiting the variety of hardware the groups’ infrastructure leads were allowed to use. And Microsoft decided it would make that hardware adhere to OCP standards. So far, Vaid likes the results he’s seen.

    Standardization has helped the company reduce hardware maintenance and support costs and recover from data center outages faster. A Microsoft data center technician no longer has to be acquainted with 10 to 20 different types of hardware before they can start changing hard drives or debugging systems on the data center floor. “It’s the same hardware, same motherboards, same chassis, same racks, same management system, it’s so much simpler,” Vaid said. “They spend less time to get the servicing done. It means fewer man hours, faster recovery times.”

    One Power Distribution Design, Anywhere in the World

    Project Olympus is the next step in Microsoft’s pursuit of a uniform data center infrastructure. One of the design’s key aspects is universal power distribution, which allows servers to be deployed in colocation data centers anywhere in the world, regardless of the local power specs, be they 30 Amp or 50 Amp, 380V, 415V, or 480V. The same type of rack with the same type of power distribution unit can be shipped for installation in Ashburn, London, Mumbai, or Beijing. The only thing that’s different for each region is the power adaptor.

    This helps Microsoft get its global hardware supply chain in tune with data center capacity planning, and few things have more bearing on the overall cost of running a hyper-scale cloud (and, as a function of cost, on end-user pricing) than good capacity planning. The less time that passes between the moment a server is manufactured and the moment it starts running customer workloads in a data center, the less capital is stranded. That is why the ability to deliver capacity just in time, when it’s needed, beats deploying it far in advance to avoid unexpected shortages. “Essentially we can build a rack one time, and after it’s built we can very late in the process decide which data center to send it to, without having to worry about disassembling it, putting in a new PDU,” Vaid said. “It all delays the deployment.”

    Taking Cost Control Away from Vendors

    If the Project Olympus design was 50 percent done in November, today it’s about 80 percent complete. Vaid wouldn’t say when Microsoft is expecting to start deploying the platform in production. The company still maintains a degree of secrecy around timing of deployment of its various technologies.

    He also had no comment on the report that surfaced earlier this month saying Hewlett Packard Enterprise was expecting Microsoft to cut back on the amount of data center hardware it would buy from the IT giant. The report was based on anonymous sources, and Vaid pointed to the fact that HPE was listed as one of the solution providers involved in Project Olympus.

    HPE, of course, was one of several suppliers on the list, along with Dell, ZT Systems, Taiwanese manufacturers Wiwynn and Quanta, and China’s Inspur. That dynamic is one of the biggest reasons OCP exists in the first place: the hyper-scale data center users drive the design, while the suppliers compete for their massive hardware orders. That leaves suppliers little room to differentiate other than on price and the time it takes to deliver the goods, flipping the world upside down for companies like HPE and Dell, which until not too long ago enjoyed market dominance. “We have full control of the cost,” Vaid said. “We can take what we want, we can take out what we don’t want, and the solution providers are just doing the integration and shipping.”

    3:00p
    Ascenty Secures $190M to Fund Data Center Construction in Latin America

    Data center provider Ascenty announced that it has secured $190 million in financing to help fund the building of five new data centers in one of the most under-served regions in the world—Latin America—and to refinance existing debt.

    CEO Chris Torto said in a statement that he sees securing the five-year syndicated loan from ING, Itaú BBA, and two international banking institutions as a vote of confidence.

    “We decided to increase our debt financing to allow us to accelerate the company’s expansion into new markets. In 2017, we will launch five new data centers, all of which are under construction. This new debt financing provides proof of last year’s great results as well as renewed commitment from our banking partners for Ascenty’s rapid expansion,” he said.

    Later in the day, Ascenty also reported on its website that the first unit in its 43,000-square-foot site in São Paulo, Brazil’s largest city, was successfully commissioned and has been operational since March 1. Called São Paulo 1, it is the company’s fifth Brazilian operation. The other four Brazilian data centers are in Campinas, Jundiaí, Hortolândia, and metropolitan Fortaleza.

    According to Roberto Rio Branco, Ascenty’s commercial, marketing and institutional director, the company is doing all it can to fill a data center void in the region.

    “We built this world-class data center to meet the needs of the market. Some of the country’s leading companies are headquartered in the region. These clients can now host their data on a site near their headquarters. With our fiber optic network connecting our data centers and leading telecom carriers, we deliver complete colocation, cloud, connectivity and managed services solutions to our customers.”

    Interestingly, while Brazil has a total of 24 metropolitan areas with populations exceeding 1 million, half of them lack multi-tenant data centers, according to 451 Research. To meet the demand, Ascenty is building a second facility in the city, called São Paulo 2, now in the final stage of construction and scheduled to launch in April. The news comes one day after Equinix announced the opening of its fifth data center in São Paulo.

    News of Ascenty’s funding and of the commissioning of its São Paulo data center illustrates the company’s voracious appetite for continued growth in the region. That appetite was behind its head-turning 75 percent net revenue boost last year and underpins its expectation of another 85 percent jump in revenue in 2017 from the planned launch of several new data centers.

    While Ascenty started out as a data center provider in Brazil, it is also building a new data center in Santiago, Chile, and is looking at other markets in Latin America like Mexico and Colombia.

    3:30p
    Three Key Things to Rethink in an Era of Hyperconverged Infrastructure

    Subbiah Sundaram is Head of Data Protection for Comtrade Software.

    The hyperconverged infrastructure (HCIS) market continues to grow as it unifies powerful data center resources and persists as the most efficient enabler of Infrastructure-as-a-Service (IaaS). In fact, according to a recent Gartner report, hyperconverged integrated systems will represent more than 35 percent of total integrated system market revenue by 2019. However, with the onset of hyperconverged infrastructures, IT must also transform how it tackles key areas of the business.

    From budgets to organizational structures to a product’s time-to-market, hyperconverged infrastructure is enabling the enterprise to revamp IT. But before a business jumps headfirst into an HCIS system, here are the top three things it must address in order to be successful.

    Monitoring

    Most customers who adopt hyperconverged infrastructure already have some other infrastructure in place; they bring in HCIS for new and next-generation workloads, and to give their business agility. Anytime your business depends on its infrastructure, continuous, at-a-glance monitoring of storage and compute resources is crucial to ensure applications are available and performing at a high level. If proper monitoring is in place, the infrastructure team will catch an issue and make adjustments before the business feels any disruption or negative impact.

    This is especially relevant with HCIS because as the density of applications increases, monitoring becomes even more critical. Most organizations introducing HCIS already have a centralized way of monitoring their infrastructure (if not, they should invest in one!). For instance, a company will typically run a network operations center (NOC) and invest in monitoring solutions such as System Center Operations Manager (SCOM). Businesses adopting HCIS should also invest in integrating HCIS monitoring into that centralized management infrastructure, to avoid creating unmanaged silos. The integration should be “smart” and feed only synthesized, relevant data appropriate for this type of monitoring; feeding raw information into the centralized monitoring system will overwhelm it and make the NOC operator’s job a nightmare. Monitoring solutions should attach enough context to the published information for the person in the NOC to quickly identify the issue and trigger the right course of action to resolve it.
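    As a rough illustration of the “synthesize before you forward” idea, the sketch below condenses raw per-node metrics into a handful of actionable alerts before they would be pushed to a centralized monitoring system. The thresholds, field names, and the forward_to_noc() helper are illustrative assumptions, not part of SCOM or any particular HCIS product’s API.

        # Hypothetical sketch: condense raw per-node HCIS metrics into a few
        # actionable alerts before forwarding them to a centralized monitoring
        # system. Thresholds, field names, and the forwarding target are assumptions.
        from statistics import mean

        LATENCY_MS_LIMIT = 20.0     # assumed service-level latency threshold
        CAPACITY_PCT_LIMIT = 85.0   # assumed storage-capacity threshold

        def synthesize_alerts(raw_samples):
            """Reduce raw samples ({'node', 'latency_ms', 'used_pct'}) to a short
            list of alerts a NOC operator can act on directly."""
            by_node = {}
            for sample in raw_samples:
                by_node.setdefault(sample["node"], []).append(sample)

            alerts = []
            for node, samples in by_node.items():
                avg_latency = mean(s["latency_ms"] for s in samples)
                peak_used = max(s["used_pct"] for s in samples)
                if avg_latency > LATENCY_MS_LIMIT:
                    alerts.append({"node": node, "issue": "storage latency",
                                   "value": round(avg_latency, 1),
                                   "action": "check rebuild/rebalance activity"})
                if peak_used > CAPACITY_PCT_LIMIT:
                    alerts.append({"node": node, "issue": "capacity",
                                   "value": peak_used,
                                   "action": "plan node or disk expansion"})
            return alerts

        # Only the synthesized alerts reach the central system, not raw samples:
        # forward_to_noc(synthesize_alerts(collect_raw_samples()))  # both helpers hypothetical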

    Applications

    Discovering, visualizing, configuring, backing up and monitoring modern IT applications is hard in a traditional infrastructure, and it does not get easier in the hyperconverged world. HCIS vendors do a much better job of providing good solutions for a virtual machine (VM)-centric approach to management. When organizations implement new solutions like HCIS, they can become engrossed in the Infrastructure-as-a-Service (IaaS) model and lose sight of the key reason they are implementing the new infrastructure in the first place: their business applications. The person running HCIS should keep applications as their primary focus, and everything they do should be in the context of what’s best for the business-critical applications. Unfortunately, this is easier said than done; today, in most cases, it’s very hard for the person running the infrastructure to know which applications they are monitoring. The main reason is that most solutions take an “all-or-nothing” approach and require application credentials to provide that context.

    IT operators should look for solutions that give them a holistic view of which applications are running in each VM without being too intrusive. This enables them to make smart tradeoffs between applications and to prioritize which problem to solve first. When it comes to troubleshooting performance problems, knowing which applications are running where, and what their access patterns are, is of top importance. Knowledge of the business applications is also critical for infrastructure sizing and planning.
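    One low-intrusion approach, sketched below under assumed conventions, is to infer the application in each VM from metadata the virtualization layer already exposes (VM names, tags, listening ports) rather than requiring guest credentials. The signature table, tag conventions, and inventory structure are illustrative assumptions, not a specific vendor’s discovery mechanism.

        # Hypothetical sketch: guess which application runs in each VM from
        # inventory metadata (names, tags, listening ports) without logging into
        # the guest. The naming and tagging conventions are assumptions.
        APP_SIGNATURES = {
            "sql":      {"ports": {1433},     "name_hints": ("sql", "db")},
            "exchange": {"ports": {25, 443},  "name_hints": ("exch", "mail")},
            "web":      {"ports": {80, 443},  "name_hints": ("web", "iis", "nginx")},
        }

        def classify_vm(vm):
            """vm: {'name': str, 'tags': set, 'open_ports': set} -> best-guess app label."""
            name = vm["name"].lower()
            for app, sig in APP_SIGNATURES.items():
                if app in vm["tags"]:
                    return app   # explicit operator tag wins
                if sig["ports"] & vm["open_ports"] and any(h in name for h in sig["name_hints"]):
                    return app   # port plus naming-convention match
            return "unknown"

        inventory = [
            {"name": "prod-sqldb-01",   "tags": set(),   "open_ports": {1433, 3389}},
            {"name": "web-frontend-02", "tags": {"web"}, "open_ports": {443}},
        ]
        print({vm["name"]: classify_vm(vm) for vm in inventory})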

    Backup and Recovery

    Legacy data backup and recovery solutions are not aligned with modern IaaS data protection requirements. When a business implements hyperconverged infrastructure, it has embarked on a journey to rethink the way it buys and manages its infrastructure. Data backup and recovery is an integral part of the production infrastructure, and it’s ironic that businesses sometimes do not also rethink their data protection strategy when they decide to implement HCIS. That is the equivalent of buying a new, high-performing sports car and putting your old, worn-out tires on it.

    HCIS solutions need data protection solutions that leverage built-in snapshots, clones and replicas. If you can recover data in seconds or minutes, why would you wait for the data to stream in from remote storage? Oftentimes businesses that implement HCIS solutions get carried away with the VM-centric view and assume that data backup and recovery at the VM level is sufficient. Organizations need to make sure their data protection solution delivers application-focused backup with application-specific, granular recovery.
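    A minimal sketch of that preference, under assumed interfaces, might look like the following: the restore path tries a local, array-level snapshot first and falls back to streaming from a remote backup target only when no usable snapshot exists. The snapshot_store and remote_backup objects and their methods are hypothetical, not any product’s actual API.

        # Hypothetical sketch: prefer recovery from local snapshots/clones and
        # fall back to streaming from remote backup storage only when necessary.
        # snapshot_store and remote_backup (and their methods) are assumed interfaces.
        def restore_vm(vm_id, point_in_time, snapshot_store, remote_backup):
            """Return a description of how the VM was restored."""
            snap = snapshot_store.find(vm_id, point_in_time)    # assumed lookup by time
            if snap is not None:
                snapshot_store.clone_to_production(snap)        # near-instant, array-level
                return {"vm": vm_id, "method": "local snapshot", "rto": "seconds to minutes"}
            # No suitable snapshot retained locally: stream the backup image back.
            remote_backup.stream_restore(vm_id, point_in_time)  # slower, network-bound
            return {"vm": vm_id, "method": "remote stream", "rto": "minutes to hours"}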

    Last but not least, organizations need to evaluate the simplicity of their data backup and recovery solutions. HCIS solutions make IaaS much easier, and the backup and recovery solution the business chooses should make data protection as a service (DPaaS) simpler too. Setting up the DPaaS system should take no more time than it took to set up the HCIS system; backup should be enabled in just a few steps, and automated recovery should be intuitive not just to the systems administrator but also to the applications or database administrator.

    Conclusion

    As businesses adopt hyperconverged infrastructure systems, they have a great opportunity to rethink their infrastructure monitoring, applications management and data protection. Do not force-fit your traditional IT infrastructure, management software and processes into the next-generation solution. If you do, your organization will not realize the true value of its next-generation infrastructure; it would be like asking a horse-drawn cart to deliver your e-commerce services. Good luck!

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    4:34p
    Societe Generale Taps Amazon, Microsoft Clouds as Banks Target Costs

    By Fabio Benedetti-Valentini, Nicholas Comfort and Giles Turner (Bloomberg) — Societe Generale SA is embracing the cloud.

    The Paris-based bank is working with Microsoft Corp. and Amazon.com Inc. to become one of the first large European banks to adopt cloud computing for the bulk of its operations.

    Societe Generale will start using external cloud services by June for some non-client content, such as financial research and marketing data, said Carlos Goncalves, the head of global technology services. By 2020, the bank intends to have 80 percent of its infrastructure on internal and external cloud networks.

    “We are ready to go to scale,” Goncalves said. “According to the European Central Bank, we have put in place the benchmark for the industry.”

    While it’s not unusual for small lenders to use cloud providers to cut technology bills, the French bank is among the first large financial firms on the continent getting ready to shift most of its operations to the cloud. The decision signals an accelerating evolution in how banks handle one of their most valuable and sensitive assets: information.

    See also: Google Expands Cloud Data Center Plans, Asserts Hardware, Connectivity Leadership

    Societe Generale’s developers and engineers from Microsoft Azure and Amazon Web Services have been running pilot programs for more than a year to test the security and reliability of the so-called public cloud — Internet-based computing that lets users store and process information at giant third-party data centers.

    An ECB spokeswoman declined to comment on individual institutions.

    Opening Up

    Regulatory concerns have until now led Europe’s top banks to confine their use of public clouds to less sensitive operations like product development. Now, cloud companies say pressures on the financial industry to cut costs and improve returns are pushing more banks to strike deals with the big providers such as Microsoft, Amazon, IBM Corp. and Alphabet Inc.’s Google.

    “It is more something that has opened up in the last 18 months to two years,” said Matt McNeill, the head of Google Cloud Platform for the U.K. & Ireland.

    See also: Can Google Lure More Enterprises Inside Its Data Centers

    Moving to a cloud-based model can initially help save 10 percent of a bank’s annual operations and information-technology budget, said Likhit Wagle, general manager of banking and financial services at IBM. Savings may reach 40 percent for lenders able to work out what technology systems aren’t needed anymore, he said.

    “Many banks have started using cloud,” said Sean Foley, chief technology officer for Microsoft’s financial-services business. “Initially these were largely private clouds, but now increasingly the majority of work in this area is shifting toward public clouds.”

    Bloomberg LP, the parent of Bloomberg News, also offers data-storage services to financial institutions.

    HSBC, CaixaBank

    Google this month identified HSBC Holdings Plc as a cloud client. In the U.S., Capital One Financial Corp. is moving a number of core business and customer applications to AWS.

    Spain’s CaixaBank SA, another early cloud adopter, keeps “sensitive” information on its internal cloud, but may start moving some such data to external clouds as soon as next year as rules across Europe become clear, said Mario Maawad, director of fraud prevention.

    Regulators are keen to make sure the banks remain responsible for their data, regardless of where it resides.

    “It’s the bank’s responsibility to ensure security and data protection,” said Slavka Eley, the head of the European Banking Authority’s supervisory convergence unit in London. “If they outsource to the cloud, they need to cover the risk in contracts and their activities.”

    Accounts Hacked

    Banking authorities in countries including the Netherlands and France have already spelled out guidelines or good practices for cloud use, and a continent-wide framework is underway. The EBA plans to provide its final guidance later this year, after issuing a consultation paper in the second quarter.

    There have been notable hacks of cloud providers. In 2012, more than 60 million accounts were breached at cloud-storage company Dropbox Inc.

    “Regulators are extremely keen on making sure the banks are going to be able to sign off on the requirements around security,” said IBM’s Wagle. “We are working very closely, both through our cloud business and security business, with our clients so they can meet those regulatory requirements.”

    Societe Generale started its in-house cloud three years ago and plans to continue relying mostly on that, while keeping key market activities running on physical systems, Goncalves said.  Still, the bank recently completed the legal terms of future partnerships with cloud providers in a way that satisfies requirements from banking regulators, he said, describing the cloud as “an innovation enabler.”

    9:36p
    Nielsen Data Center Outage Delays Weekend TV Ratings

    If Donald Trump was waiting this past Sunday morning for ratings of the Saturday Night Live episode that aired the night before, he probably had to wait a few hours longer than usual.

    That’s because Nielsen, the consumer-research company that has for years ruled TV ratings but has recently struggled to adapt to the digital-media environment, suffered a data center outage at its Global Technology and Innovation Center in Oldsmar, Florida.

    There was a power outage at the facility early Sunday morning, and while power was back on within several hours, the systems in the data center had to be rebooted, Deadline Hollywood reported, causing a delay in generating audience ratings for shows such as American Crime, NCIS: LA, and SNL, hosted by Scarlett Johansson and featuring Alec Baldwin once again as President Trump.

    It’s unclear what caused the power outage, and why the data center’s backup power systems did not kick in (or whether the data center had backup systems to begin with). A Nielsen spokesperson did not respond to a request for comment in time for publication.

    The company did send Deadline Hollywood a statement regarding the outage:

    “A power outage at our Oldsmar Data Center impacted the availability of some Nielsen applications and the planned delivery of some Nielsen data for Sunday, March 12th and Monday March 13th. We are actively working to resolve the issue and will continue to provide clients with updates as more information becomes available.”

    See also: No Shortage of Twitter Snark as AWS Outage Disrupts the Internet

