Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 15th, 2015
12:00p |
The Rise of Managed Third Party Cloud
The value proposition of a managed service provider has always been taking over the laborious and complicated aspects of a company’s IT so that the company can focus on its core competencies. So what happens when these businesses want to leverage the benefits of cloud? Managed service providers are increasingly doing the same thing they’ve always done, but now atop the big cloud platforms.
While still very much in its infancy, managed third party cloud could shake up the data center world. Managed services providers are often colocation customers. Who provides what value is shifting, and colocation providers need to position themselves to capture that shift. Managed services providers that get in early with third party cloud management stand to gain.
“It is early enough in the game that there are not necessarily any outright leaders,” said Philbert Shih of Structure Research. “But the managed hosters that play in the mid-tier to enterprise space have forged ahead by being early adopters, and providers like Datapipe have made significant investments – even M&A – in pursuit of this capability.”
Amazon Web Services has so far been the big beneficiary of the rise of third party managed cloud. It has an ecosystem of what it calls “consultants,” currently consisting of around 20 approved AWS Managed Service Provider Partners that have met program requirements and passed a third party audit of their AWS Managed Service capabilities. One of those providers is Datapipe, which recently extended its managed AWS services around automation, orchestration, and security.
Media Temple recently teamed up with AWS and announced its entrance into managed AWS. The company launched Cloud Architect and SysAdmin services around AWS this month (via theWHIR).
The other big public cloud platforms have recognized the opportunity presented by skilled managed service providers that can bring customers onto their clouds, and they are also courting the new breed of consultants and managed services providers. Datapipe recently added managed Microsoft Azure to its arsenal.
Effect on the Managed Services Landscape?
Shih sees it as a positive trend for managed services providers. “Service providers can get out of trying to compete with massive-scale clouds head-on and start pivoting into areas that play to their strengths: managed services is one of those areas,” he said. There is also a capital efficiency component: managing a third party cloud allows hosters to spend less on infrastructure and redeploy capital into strategic initiatives.
Media Temple said it did extensive market and customer research before adding managed AWS services. “Companies of all sizes often lack the technical expertise to fully unleash the power of AWS,” said Brendan Fortune, product director, Cloud Solutions at Media Temple. “We have been architecting and supporting hosting solutions for 16 years, so it made sense for us to put our time and energy into building and managing high-performance, cost-optimized services that deliver the most value to our customers – rather than focusing exclusively on the hosting building blocks, which AWS provides.”
The Supply Chain Impact
There’s some question as to how this shift affects the data center providers that house the managed services providers. As the mix of cloud versus physical assets shifts, so potentially does the amount of space those providers take up in a colocation facility.
“While it might mean less infrastructure being hosted in data centers, the massive-scale clouds present a ton of upside when it comes to interconnection,” said Shih. “Data center operators are also going to be very happy to see their own managed hosting tenants grow and expand. It is a far-reaching value chain, and data center operators stand to play a big part in it.”
Datapipe CTO John Landy doesn’t see a drastic shift in the supply chain. “The supply chain remains similar but the hyper-scale cloud providers are owning most of the stack below the hardware so their purchasing power is being aggregated for hardware, network, and physical datacenter acquisition,” he said. “The ability to provide colocation services will still exist for clients who need management of existing/more complex physical environments, specific targeted geographic needs (where public doesn’t exist), and/or for compliance (and security) requirements.”
There is still a need for bare metal servers and traditional IT hardware hosting given the heterogeneous infrastructure that resides in the marketplace today, Landy added. Datapipe has worked on go-to-market initiatives with Equinix; Equinix itself encourages managed hybrid hosting through its Performance Hub and Cloud Exchange offerings.
Media Temple’s Fortune said their product is intended for net new customers: “those whose specific needs for customized auto-scaling, load balancing, and DevOps aren’t met by traditional VPS products,” he said. “As a result, we anticipate no effect on colocation providers.”
How Did We Get Here?
Managed services providers traditionally undertake anything and everything in high-touch roles. With the rise of cloud computing, the first instinct of these service providers was to provide cloud platforms themselves, pitching private cloud management and building their own Infrastructure-as-a-Service offerings. Now, private cloud in combination with managed public cloud seems to be the right recipe.
“Over time you might see hosters simply drop their public cloud infrastructure offering and replace it with an AWS or Azure,” said Shih. “The economies of scale that a massive-scale platform can offer will be too difficult for service providers to match. Hence, it will make sense to just get out of that business altogether over time.”
“These are powerful, well supported public clouds with large teams behind them that continue to innovate,” said Datapipe’s Landy. “The capital expenditure would be extreme due to the footprint of the hyper scale data centers but also the breadth of software services that these providers have implemented for IaaS solutions.”
Hosting providers will still offer private cloud, and that makes sense since that is where they are getting growth, according to Shih. “They are not getting as much growth in multi-tenant public cloud.” | 3:00p |
Pennsylvania Considering Data Center Tax Breaks
Data center tax break legislation is moving through the Pennsylvania Senate and House of Representatives. Both data center owners and tenants would be exempt from sales and use tax on equipment and software purchases if they satisfy a set of criteria.
Pennsylvania is not a major data center market, but judging by the legislative effort, there’s willingness among lawmakers to make the state more attractive for data center construction. While data centers don’t create lots of jobs, they bring tax revenue from massive power purchases and attract other high-tech businesses, and the jobs they do create usually pay high salaries.
States that passed laws to create or extend data center tax breaks this year include Missouri, Oregon, and Washington. Officials also sometimes decide on data center tax incentives on a project-by-project basis. A recent example of this was a package of tax breaks for Google’s upcoming data center in suburban Atlanta.
Tax breaks have become a common tool state and local governments use to compete with each other for big data center projects in recent years. Availability of tax incentives is among the key factors companies consider during data center site selection, along with power cost and availability and network infrastructure.
Not all states that have data center tax breaks provide them to data center tenants. The bills introduced in Pennsylvania are for both owners and tenants, as long as they invest $25 million or $50 million in the data center over the course of four years, depending on the size of the county the data center is in, according to the text of the senate bill.
In addition to the investment requirement, the data center owner or tenant has to pay a minimum of $1 million in salaries per year to its employees.
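To illustrate how the bills’ criteria combine, here is a minimal sketch in Python. The dollar thresholds mirror the figures described above; the county-size test is a placeholder boolean, since the exact population cutoff is not spelled out here.

```python
# Illustrative sketch only. The thresholds mirror the figures summarized above;
# how "size of the county" is measured is left as a simple boolean placeholder.

def qualifies_for_exemption(investment_over_4_years: float,
                            annual_payroll: float,
                            county_is_large: bool) -> bool:
    """Return True if a data center owner or tenant would meet the bill's criteria."""
    required_investment = 50_000_000 if county_is_large else 25_000_000
    meets_investment = investment_over_4_years >= required_investment
    meets_payroll = annual_payroll >= 1_000_000  # at least $1M in salaries per year
    return meets_investment and meets_payroll

# Example: a $30M build in a smaller county paying $1.2M in annual salaries
print(qualifies_for_exemption(30_000_000, 1_200_000, county_is_large=False))  # True
```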
The bills would exempt all data center equipment and software purchases for new builds from state, county, and city sales and use taxes. The tax breaks would not, however, apply to telco data centers that do not provide colocation services.
Another exception is equipment that generates energy the operator sells to a utility company. | 3:30p |
Four Questions to Ask When Powering IT at the Network Edge
Brian Kennedy is the Senior Product Marketing Manager for Emerson Network Power’s AC Power group.
After years of IT centralization and data center consolidation, CIOs and IT managers are now turning their attention to the network edge and the vital role it will play in supporting big data and the Internet of Things. As data processing and application services move to the edge, IT organizations face the challenge of how best to achieve a level of resiliency and control similar to that found in the primary data center across multiple, smaller facilities that typically lack on-site technical support.
Here are four questions to ask when selecting a power system for remote facilities:
What Type of UPS Should I Deploy?
There are several different UPS topologies used in network edge facilities. Choosing the right one depends on how an organization prioritizes criticality and cost. While the traditional default topology for edge facilities has been line-interactive, which provides adequate protection and good economy, more enterprises are moving to double-conversion units as the criticality of edge facilities grows. Double-conversion UPS units protect against a broader range of power anomalies than line-interactive units, delivering a level of protection similar to large data center UPS systems. They are more expensive to deploy and operate at slightly lower efficiencies than line-interactive units, but these costs are often dwarfed by the downtime costs a double-conversion unit prevents. Most UPS providers offer both line-interactive and double-conversion topologies sized for edge facilities.
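To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (load, efficiencies, electricity price, downtime cost) is an assumption for illustration, not vendor data.

```python
# Back-of-the-envelope comparison of topologies; all numbers are assumptions.

IT_LOAD_KW = 5.0                 # IT load behind the edge UPS
HOURS_PER_YEAR = 8760
ENERGY_COST_PER_KWH = 0.10       # $/kWh, assumed

EFF_LINE_INTERACTIVE = 0.98      # assumed efficiency
EFF_DOUBLE_CONVERSION = 0.95     # assumed (slightly lower) efficiency

def annual_conversion_losses(efficiency: float) -> float:
    """Dollar cost of UPS conversion losses over a year."""
    input_kw = IT_LOAD_KW / efficiency
    losses_kw = input_kw - IT_LOAD_KW
    return losses_kw * HOURS_PER_YEAR * ENERGY_COST_PER_KWH

extra_energy_cost = (annual_conversion_losses(EFF_DOUBLE_CONVERSION)
                     - annual_conversion_losses(EFF_LINE_INTERACTIVE))

# Assumed exposure: one power anomaly per year that only double conversion rides through.
OUTAGE_HOURS_AVOIDED = 1.0
DOWNTIME_COST_PER_HOUR = 5_000   # $/hour for a small edge site, assumed

avoided_downtime_cost = OUTAGE_HOURS_AVOIDED * DOWNTIME_COST_PER_HOUR

print(f"Extra annual energy cost of double conversion: ${extra_energy_cost:,.0f}")
print(f"Downtime cost potentially avoided per year:    ${avoided_downtime_cost:,.0f}")
```

With these assumptions the efficiency penalty comes to roughly $140 a year, while a single avoided outage is worth far more, which is the sense in which downtime costs tend to dwarf the operating premium.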
How Do I Make Sure the Power System Doesn’t Limit Our Ability to Respond to Change in the Future?
Data center managers who lived through the rapid growth in data center capacity between 2003 and 2008 know the role infrastructure systems play in enabling or impeding growth. With the rapid growth in data and services many edge facilities will be expected to absorb, the ability to quickly scale the power system is more important than ever. Power deployment best practices can help organizations select and implement the right-size system while maintaining the flexibility to adapt to changing needs in run times, capacity, and electrical distribution. Key to this approach is ensuring the room can support future power requirements and that a strategy is in place to quickly deploy additional UPS capacity as needed. Rack-mount UPS units can be deployed faster than large data center UPS systems but may still require specialized installation support. Infrastructure vendors now offer service options that include deployment as well as ongoing maintenance to support fast, hassle-free system expansion.
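One simple way to reason about “right size now, room to grow later” is to project the load over a planning horizon, pick the next standard UPS rating above it, and confirm the room’s electrical feed can carry that rating. The sketch below shows the idea; the loads, growth rate, size list, and room limit are all assumptions.

```python
# Illustrative UPS sizing check; every input value here is an assumption.

current_it_load_kw = 4.0        # measured IT load today
annual_growth_rate = 0.25       # expected load growth per year
planning_horizon_years = 3
ups_power_factor = 0.9          # usable kW = kVA rating * power factor
room_feed_limit_kw = 12.0       # what the room's electrical feed can support

projected_load_kw = current_it_load_kw * (1 + annual_growth_rate) ** planning_horizon_years

# Pick the smallest rating from an assumed list of common rack-mount sizes (kVA)
# that covers the projected load.
common_sizes_kva = [3, 5, 6, 8, 10, 16, 20]
needed_kva = projected_load_kw / ups_power_factor
selected_kva = next(size for size in common_sizes_kva if size >= needed_kva)

print(f"Projected load in {planning_horizon_years} years: {projected_load_kw:.1f} kW")
print(f"Suggested UPS rating: {selected_kva} kVA")
print("Room feed sufficient:", selected_kva * ups_power_factor <= room_feed_limit_kw)
```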
What Operating Parameters Do I Need to Manage to Maximize Performance?
Remote facilities often operate without onsite IT support, and the power systems can provide a window into remote operation through monitoring of system status, energy utilization, and battery capacity. In highly critical facilities, basic power monitoring can be supplemented with monitoring of room temperature, humidity, leak detection and physical security.
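As a sketch of what that remote visibility can look like in practice, the short Python check below polls site metrics and flags threshold breaches. The metric names, thresholds, and the fetch_metrics() stub are hypothetical; a real deployment would pull these values from the UPS or environmental monitoring card over SNMP, Modbus, or a vendor API.

```python
# Hypothetical monitoring check for an unstaffed edge site.

from typing import Dict, List

THRESHOLDS = {
    "battery_capacity_pct": 50.0,   # alert below this remaining capacity
    "room_temp_c": 30.0,            # alert above this temperature
    "humidity_pct": 80.0,           # alert above this relative humidity
}

def fetch_metrics() -> Dict[str, float]:
    """Placeholder for a real poll of the UPS and environmental sensors."""
    return {"battery_capacity_pct": 42.0, "room_temp_c": 27.5, "humidity_pct": 55.0}

def check_site(metrics: Dict[str, float]) -> List[str]:
    """Compare the latest readings against the alert thresholds."""
    alerts = []
    if metrics["battery_capacity_pct"] < THRESHOLDS["battery_capacity_pct"]:
        alerts.append("Battery capacity low")
    if metrics["room_temp_c"] > THRESHOLDS["room_temp_c"]:
        alerts.append("Room temperature high")
    if metrics["humidity_pct"] > THRESHOLDS["humidity_pct"]:
        alerts.append("Humidity high")
    return alerts

if __name__ == "__main__":
    for alert in check_site(fetch_metrics()):
        print("ALERT:", alert)   # in practice, forward to an NMS or on-call rotation
```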
How Do We Optimize Costs for Remote Power Systems?
Cost is always a factor in IT, and as edge facilities proliferate, facility managers will need to ensure they are managing edge facility total costs as efficiently as possible. As with most purchase decisions, however, driving the initial cost as low as possible is rarely cost-effective in the long run. System performance in terms of downtime prevention, energy costs, product lifecycle, and support must all be factored into the economic analysis. Some UPS manufacturers now offer complete service programs for these facilities, designed to increase the predictability of lifetime service costs while minimizing the risk of downtime.
Edge facilities represent an increasingly important access point between data centers and remote users and devices. The power systems that maintain continuity and provide visibility into these facilities will have a significant impact on network resiliency, scalability, manageability and security. Taking the time to evaluate your options and standardize on a system that delivers the lowest lifetime costs and the highest uptime is a critical step in the implementation of an edge strategy.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 4:57p |
Seagate Intros Data Management Services for Hybrid Cloud
Making it clear that its ambitions extend well beyond disk drives, Seagate today unfurled a bevy of data management services and products that span hybrid cloud computing environments.
At the core of those offerings are Seagate Backup and Recovery Software and Cloud Backup and Recovery Services, which the company claims deliver up to a 400-percent improvement in backup, restore, and replication speed over the company’s previous EVault offering.
In addition, Seagate unveiled its Backup and Recovery Private Cloud, a multi-tenant instance of data protection software that IT organizations or cloud service providers can deploy themselves to provide up to two petabytes of storage, and Data Management Services, through which Seagate analyzes data usage to advise customers on which tier of storage particular data should reside in to store it most cost-effectively.
Finally, earlier this week, Seagate revealed that it has extended an existing storage alliance with HP and IBM involving its ClusterStor appliances into the realm of high-performance computing.
In general, David Flesh, vice president of marketing for Seagate, said the company is investing millions of dollars in acquiring and building on-premise storage systems that are complemented by a set of tightly integrated cloud services.
“We want to be seen as being much more than a hard disk company,” said Flesh. “We’re delivering storage solutions that can scale up to petabytes.”
For example, Flesh said, Seagate is investing in a Lustre-based parallel file system that will be able to scale storage orders of magnitude higher in 2016 than Lustre does today.
Obviously, Seagate also plans to leverage its disk-drive manufacturing business, alongside its foray into data management services, to compete more aggressively on-premise and in the cloud.
The challenge it faces is that high-performance disk drives are rapidly being replaced by solid-state drives built on flash storage, even as the total amount of secondary storage that resides on hard disk drives continues to expand.
As a result, Seagate is trying to compensate for lost profit margins on disk drives by expanding its portfolio of storage systems and cloud services.
Of course, those moves create something of a conflict with other manufacturers of those systems that might want to incorporate Seagate drives. But given that just about every manufacturer of hard disk drives is expanding its storage systems business to address the same profit margin issues, most storage vendors are simply going to have to get used to new forms of “coopetition” across the storage system market. | 5:36p |
China’s Milkyway 2 Ranked Fastest Supercomputer for Fifth Time
China’s Milkyway 2 has ranked as the fastest high-performance computing system for a fifth consecutive time on the bi-annual Top500 list of the world’s most powerful supercomputers.
Also known as Tianhe-2, it remained at 33.86 petaflops (quadrillions of calculations per second), almost double the performance of the second-place system, the US Department of Energy’s Titan supercomputer.
There has been little change among the 10 fastest supercomputers on the list in recent years.
The only change in the June 2015 edition was in the seventh spot — the Shaheen II Cray XC40 system installed at King Abdullah University of Science and Technology. The Saudi Arabian system achieved 5.536 petaflops to become the highest-ranked Middle East system in the history of the Top500 list and first from the region to break into the top 10.
The number of supercomputers on the list using accelerators, such as Intel’s Xeon Phi chips or Nvidia’s GPUs, grew to 88 systems. Phi powered both Milkyway 2 and Stampede, the Texas Advanced Computing Center’s system at the University of Texas at Austin, which came in eighth. Nvidia GPUs were used in Titan and Piz Daint, the system at the Swiss National Supercomputing Center in the sixth spot.
More than 86 percent of Top500 systems are using Intel processors.
The Linpack benchmark used to rank the Top500 list has been complemented in recent years by the High Performance Conjugate Gradients (HPCG) benchmark, which aims to provide a more relevant metric for evaluating HPC systems.
The third edition of a list of the fastest supercomputers according to the HPCG benchmark was released this week as well. It is mostly a mix of the same systems that appear on the Top500 and also has Milkyway 2 at the top. | 5:53p |
Petaflop Cray Supercomputer at TACC to Crunch HIPAA, FISMA Data
The Texas Advanced Computing Center at the University of Texas at Austin announced Lonestar 5, its first-ever Cray supercomputer and the second petaflop system at the university.
A big leap over the Lonestar 4 system in terms of performance, the new Cray XC40 system will be equipped with Intel Xeon E5-2600 v3 processors and is expected to reach a peak performance of 1.25 petaflops. Pushing for maximum core density, the new system will feature 30,048 compute cores: two 12-core Xeons on each of its 1,252 compute nodes. It will use a 1.2PB DataDirect Networks storage system as well as a Dragonfly network topology and the Cray Aries system interconnect.
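Those figures are consistent with the standard peak-performance estimate of cores × clock × double-precision FLOPs per cycle. In the quick check below, the clock speed and FLOPs-per-cycle values are assumptions typical of a Haswell-generation Xeon with AVX2 fused multiply-add, not published TACC specifications.

```python
# Rough peak-performance check for Lonestar 5; clock and FLOPs/cycle are assumed.

nodes = 1252
cores_per_node = 24            # two 12-core Xeons per node
clock_hz = 2.6e9               # assumed base clock
flops_per_cycle = 16           # AVX2 FMA: 2 ops x 8 doubles per cycle, assumed

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Estimated peak: {peak_flops / 1e15:.2f} petaflops")   # ~1.25
```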
Expected to be deployed this year, Lonestar 5 will be used as the primary high-performance computing resource in the University of Texas Research Cyberinfrastructure initiative. Researchers across all 15 of the university system’s institutions will be able to benefit from Lonestar 5.
Working with data sets for the Health Insurance Portability and Accountability Act (HIPAA) and Federal Information Security Management Act (FISMA), TACC’s director of HPC Bill Barth said, the new system “will perhaps be the primary computing system for health researchers for the Dell Medical School in 2016.” Barth added that they “will add new users across the state through our ability to support private health data.”
At 5.168 petaflops, TACC’s other supercomputer, Stampede, has ranked among the top 10 systems on the Top500 list of the most powerful supercomputers in the world six times, including the most recent June 2015 list. Stampede is based on Dell PowerEdge servers with Intel Xeon and Xeon Phi processors.
Lonestar 5 will share a work file system with TACC’s Stampede, Maverick, and Wrangler systems, all integrated to support simulation data, Big Data applications, and visualization data for users at TACC. | 8:55p |
Google Rolls Out Windows Server 2012 R2 Support on Compute Engine
Our sister site WindowsITPro recently reported that many companies are still scrambling to figure out how best to move off Windows Server 2003 to a supported platform after Microsoft ended support for the product on Tuesday this week.
Of course, the company hopes you’ll choose Windows Server 2012 R2, which is probably its best server OS in a long time. It was architected from the ground up as a hybrid cloud enabler, allowing companies to take advantage of as little or as much of Microsoft Azure’s services as needed while still supplying a solid private data center OS.
On the day support expired, Google was on tap to offer holdouts an opportunity to take advantage of its cloud compute services for migrations. After a number of months in preview, Windows Server 2012 R2 support on Compute Engine is now a general-availability release.
Read the complete article, including which server operating systems are supported by Google Compute Engine, at: http://windowsitpro.com/windows-server-2012-r2/google-rolls-out-windows-server-2012-r2-support-its-compute-platform |