Data Center Knowledge | News and analysis for the data center industry
Thursday, November 19th, 2015
1:00p | Hölzle: Google Will Be the Android in Cloud

Asked to play an oracle of sorts and share his thoughts on the future of computing, Urs Hölzle ended up defending Google’s future in the enterprise cloud market on stage at GigaOm Structure in San Francisco Tuesday.
While the company is one of the leaders in Infrastructure-as-a-Service, due to its advanced technological capabilities and scale, it has not yet earned a reputation as a serious player in the enterprise market. Interviewing Google’s VP of technical infrastructure on stage at Structure, Stacey Higginbotham, a senior editor at Fortune, asked why that was.
“Reputation lags reality, but I think reputation will catch up with reality,” Hölzle replied, promising visible change “soon.” Google has several announcements in the works that will remove any doubts about its commitment to the enterprise, he said, while also acknowledging that the company is “coming from behind” in this market.
Taking the long-term view, cloud today is where smartphones were in 2007, Hölzle said. In other words, this is just the beginning.
“It’s just so early-stage in the grand scheme of things,” he said. While Amazon and Microsoft are leading today, owners of most IT workloads have not yet picked a cloud for those workloads.
“The next five years in cloud are going to [bring] much more evolution than in the last five years. And I think we’re going to be the Android in that story.”
When Apple released the first iPhone in June 2007, Google’s now open source mobile operating system was unheard of. Today, Android runs on about 80 percent of the world’s smartphones, according to the most recent figures from the market research firm IDC.
Speaking on the same stage, well-known Silicon Valley venture capitalist Vinod Khosla said if anybody could give Amazon a run for its money in the cloud market, it would be Google or Microsoft. Google, he said, had “staying power” and “generally better technology.”
Microsoft’s strength, according to Khosla, was in its many years of experience in the enterprise market. The company understands enterprise needs and has many ongoing relationships in the space, he said.
Adrian Cockcroft, another superstar technologist and VC, noted that Google to date has not been playing the same game in the cloud as Amazon and Microsoft have been. The latter two have been expanding the scale of their cloud infrastructure, building more and more data centers around the world at breakneck pace, while Google has been focusing more on bringing down the cost of its cloud services.
Hölzle didn’t fail to mention the low cost of using Google’s IaaS cloud. “We’re by far the undisputed price leader, both in storage and in compute,” he said. “That is something that I would expect to continue.”
Gartner considers Google a “visionary” in the IaaS market that is ahead of other visionaries, such as IBM, VMware, and CenturyLink, but far behind Microsoft Azure and Amazon Web Services.
The fastest-growing need among Gartner’s clients – enterprises, mid-market businesses, and technology companies “of all sizes” – is to replicate their in-house data centers in the cloud, the firm says. In other words, they want to have the same control over IT operations they have on-premise, even if they do use third-party managed services. And that’s the lens Gartner uses to look at the IaaS market.
“Although Google has significant appeal to technology-centric businesses, it is still in the rudimentary stages of learning to engage with enterprise and midmarket customers and needs to expand its sales, solutions engineering, and support capabilities,” Gartner cautioned in its Magic Quadrant report on the IaaS market published in May.
The research and consulting firm heard from prospective Google customers who had a hard time getting a hold of the company’s sales staff or receiving guidance on appropriate solutions.
While focusing on “cloud-native” applications, Google has lagged in building a strong case for other workloads. “Google needs to earn the trust of businesses,” Gartner said. The company “lacks many capabilities important to businesses that want to migrate legacy workloads to the cloud.”
It also lacks a strong partner program. It has started building one, however, revolving around Kubernetes, its open source app-container cluster management software.
Hölzle also mentioned that, while Google doesn’t have the reputation of a long-time enterprise player, it has in fact been selling enterprise products for a long time, starting with its on-premise enterprise search, launched 13 years ago. “And we have lots of enterprise customers already today, who are happy customers,” he said.

4:00p | Why SMBs Should Consider Hybrid Cloud Backup

Stuart Scott is a Cloud Expert at CloudBerry Lab.
If you’re looking to build a backup solution that is scalable, cost-efficient, and flexible enough to support multiple disaster recovery scenarios, then switching to a hybrid cloud backup strategy is for you.
Hybrid solutions work in conjunction with your existing backup applications and policies. Local on-premise data can remain local if required, while you retain the flexibility to expand onto cloud storage should you need additional capacity. Utilizing this cost-effective storage means you can keep a full copy of all your backups in the cloud to fall back on in the event of a site disaster.
Almost any size of business can take advantage of a hybrid backup solution, from the smallest of start-ups to huge conglomerates. How it’s implemented may differ between them, but both ends of the scale can achieve an efficient, secure, highly available, and scalable solution. Consider an appropriately sized solution for your business; this could be as simple as storing one data set on a local NAS drive with a resilient copy in the cloud, or as complex as deploying a VM appliance in your data center that acts as a gateway between your private network and the cloud.
Where Should You Store Your Data?
Do you want all data stored in the cloud, with frequently accessed data cached locally on-premise for fast access? If so, this significantly reduces the amount of additional storage required at your data center. If latency is of significant importance, you might instead configure your hybrid solution to store all data locally and implement an asynchronous backup of that data to the cloud for DR. This keeps access latency low while still safeguarding you against a site failure. Whichever approach you choose, it’s a best practice to store a copy of all data in the cloud for DR.
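As a rough sketch of that second topology (not part of the original article), the Python snippet below mirrors a local backup directory to an S3 bucket using boto3; the directory path and bucket name are placeholder assumptions.

```python
# Hypothetical sketch: asynchronously mirror a local backup folder to S3 for DR.
# Assumes boto3 is installed and AWS credentials are configured; names are placeholders.
import os
from concurrent.futures import ThreadPoolExecutor

import boto3

BACKUP_DIR = "/var/backups/nightly"   # local primary copy (placeholder path)
BUCKET = "example-dr-backups"         # placeholder DR bucket

s3 = boto3.client("s3")

def upload_one(path):
    # The object key mirrors the local directory layout under the bucket.
    key = os.path.relpath(path, BACKUP_DIR)
    s3.upload_file(path, BUCKET, key)
    return key

def mirror_to_cloud():
    files = [
        os.path.join(root, name)
        for root, _, names in os.walk(BACKUP_DIR)
        for name in names
    ]
    # A small thread pool keeps the cloud copy running in the background,
    # independent of the local backup job itself.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for key in pool.map(upload_one, files):
            print("copied", key)

if __name__ == "__main__":
    mirror_to_cloud()
```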
The key point of hybrid solutions is that they allow you to store data both locally and in the cloud. You must understand your business demands and be aware of the type of data you need to back up to determine where its primary storage location will be.
Determine how long your backups will take, and define the backup window during low network traffic for increased performance. Implementing bandwidth throttling on your network allows you to dedicate most of your throughput to backups (70 to 80 percent) during evening and weekend off-peak hours, while consuming only minimal throughput (10 to 20 percent) during peak office hours. This makes it possible to perform backups multiple times a day, ultimately reducing your RPO (Recovery Point Objective).
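Backup products expose throttling in different ways; as a product-neutral sketch, the helper below picks a bandwidth ceiling from a schedule like the one described above. The link speed, office hours, and percentages are illustrative assumptions to tune for your own network.

```python
# Hypothetical helper: pick a backup bandwidth ceiling from a simple schedule.
# The 1 Gbps link, 8am-6pm office hours, and 70/10 percent shares are assumptions.
from datetime import datetime
from typing import Optional

LINK_MBPS = 1000          # assumed uplink capacity in Mbps
OFF_PEAK_SHARE = 0.70     # evenings and weekends: let backups use most of the link
PEAK_SHARE = 0.10         # office hours: keep backups in the background

def backup_throttle_mbps(now: Optional[datetime] = None) -> float:
    """Return the bandwidth (Mbps) backups may consume right now."""
    now = now or datetime.now()
    weekend = now.weekday() >= 5          # Saturday or Sunday
    office_hours = 8 <= now.hour < 18     # assumed business day
    if weekend or not office_hours:
        return LINK_MBPS * OFF_PEAK_SHARE
    return LINK_MBPS * PEAK_SHARE

if __name__ == "__main__":
    print(f"Current backup throttle: {backup_throttle_mbps():.0f} Mbps")
```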
Storing your data in the cloud has many advantages, but when you need to retrieve large amounts, consider how long it will take to get it back. Retrieval rates for the data, both via your ISP and from the cloud service itself, need to be factored in to ensure they align with any existing SLAs you have with customers regarding the restoration of data.
Lifecycle policies can be configured to move your data to storage services with the best data retrieval rate (maintaining your SLAs) while utilizing the most cost-effective storage solution. A policy may store the newest data in AWS S3, then, after a defined period, automatically move that data to the S3 Infrequent Access tier. The data would still benefit from S3’s instant retrieval, but at a reduced cost. Finally, after another defined period, the policy could move the data to the Glacier service for the highest level of cost savings, at the sacrifice of instant retrieval, which would now take 3 to 5 hours. Understand the levels of storage services available and how they may affect your SLAs, and use tools such as lifecycle policies to keep management simple.
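On AWS, a tiering rule along those lines could be expressed roughly as in the boto3 sketch below; the bucket name, prefix, transition ages, and expiration are placeholder assumptions rather than values from the article.

```python
# Hypothetical sketch of the tiering described above: S3 -> Standard-IA -> Glacier.
# Bucket name, prefix, and day counts are placeholders to adapt to your own SLAs.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-dr-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-backups-by-age",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    # Recent backups stay in standard S3 for instant restores.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Older backups move to Glacier; restores then take hours.
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # Optionally expire very old backups entirely.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```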
Adopting a hybrid approach is a fast, simple, reliable, secure, and cost-effective way to store your data for backup and disaster recovery purposes. However, its implementation needs careful review and planning to be successful.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:05p | Anyone Can Now Build Facebook’s Data Center Switch

The design of Facebook’s top-of-rack data center switch, which the social network created in-house to fit its hyperscale needs, is now open source, meaning anyone can use it to have switches built by a design manufacturer.
All details needed for any manufacturer to produce the switch are now open source, including the schematic, a detailed list of materials, and other files. The design is distributed through the Open Compute Project, the open source hardware and data center design initiative Facebook started in 2011.
Wedge 40, the 40-Gig switch announced last year, was the first switch Facebook designed to bring more flexibility to the way it builds and manages its data center networks. Its most important aspect is the disaggregation of switch hardware from the software that runs on it – a departure from the closed networking bundles traditionally sold by big vendors like Cisco, HP, and Juniper.
Because those traditional switches are “black boxes,” customization is limited, and companies like Facebook, Google, and Amazon — companies that build data centers at massive scale — customize a lot of their infrastructure for efficiency and for their specific applications.
Facebook has also created an open source Linux-based operating system for its switches, called FBOSS. A networking software company called Big Switch has an Open Network Linux image that provides everything you need to run FBOSS on Wedge, meaning a fully open source, programmable data center networking solution is now available.
Wedge and FBOSS behave more like server hardware and server software, meaning the company’s data center managers can deploy, monitor, and control switches the same way they manage servers and storage.
Facebook has now scaled Wedge in production, with several thousand of these top-of-rack switches running in its data centers. “Eventually, our goal is to use Wedge for all our top-of-rack switches throughout all of our data centers,” Facebook engineers Jasmeet Bagga and Zhiping Yao wrote in a blog post.
Wedge is a building block in Six Pack, another data center switch Facebook designed. It is a higher-capacity aggregation switch that enables the company to create a custom network fabric in its data centers. Six Pack puts 12 Wedge switches in a chassis along with fabric cards.
The company is already working on the next generation of Wedge. Wedge 100 will be a 100-Gig switch designed to handle higher speeds and more complexity, Bagga and Yao wrote.

7:57p | IBM Recruits More Government Cloud Partners

This post originally appeared at The Var Guy
IBM expanded its ecosystem serving the US federal government on Tuesday with the addition of five new cloud partners: Arrow Electronics, Avnet Government Solutions, Ingram Micro, Tech Data, and ViON.
Each of the new partners will work with IBM in an effort to drive higher hybrid cloud solution adoption among federal customers by bundling their solutions with IBM Cloud.
“Federal agencies are really beginning to view cloud as transformational, but first they need a reliable partner to help them get started,” said Anne Altman, general manager of IBM U.S. Federal and Government Industries, in a statement. “With our partners, we have a strong understanding of the IT investments agencies have made, and how to help them with their migrations to cloud in a secure and strategic way.”
All of IBM’s new partners will focus on reselling and marketing IBM Cloud Services with an emphasis on helping government customers to implement more hybrid cloud solutions, according to the announcement.
IBM’s federal market portfolio includes FedRAMP-compliant services, which are required for many government agencies dealing with sensitive personal and private information. Partners also have access to Bluemix, IBM’s cloud platform, in addition to the company’s Software-as-a-Service platform.
The federal cloud market is expected to yield lucrative dividends in the coming years, especially considering the gap between overall federal cloud enthusiasm and actual government spending, according to IDC. Thus far, only six percent of federal applications run in the cloud, but IDC found that the Administration’s proposed fiscal 2016 budget estimates a whopping $7.3 billion will be spent on cloud computing. If these figures remain accurate in the coming months, IBM and other cloud service providers could be on the cusp of forming some very large deals with federal partners.
This first ran at http://thevarguy.com/cloud-computing-channel-partner-program/ibm-adds-five-new-partners-us-federal-cloud-ecosystem

8:12p | New Features in Docker Could Help Web Hosts Provide Safe Container Hosting

This article originally appeared at The WHIR
Application containerization software platform Docker launched several security enhancements this week at DockerCon EU that could also help make multi-tenant Docker environments easier for web hosts to manage.
New features include the first hardware signing capability for authenticating container images, content auditing through image scanning and vulnerability detection, and granular access control policies with user namespaces.
Also this week at DockerCon EU, Hewlett Packard Enterprise released a new portfolio of solutions built for Docker.
Hardware Signing and Image Scanning
These new capabilities, in combination with Docker’s existing security options, ensure that the publisher of the content is verified, that the chain of trust is protected, and that containerized content is checked via image scanning.
Docker enterprise marketing VP David Messina said, “A base thing as you look at hosting applications in the cloud is the content itself – where did it come from? Who is the publisher? Do I have a chain of trust around that content? I need to know that the content that I’m getting from the publisher is actually the content that I receive.”
Container image hardware signing builds on Docker Content Trust, which leverages Notary and The Update Framework (TUF) to verify the image publisher and validate content. Docker Content Trust’s hardware signing feature uses Yubico’s YubiKey technology for touch-to-sign code signing. This enables code to be digitally signed during initial development on particular hardware and be verified through subsequent updates.
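Hardware signing itself requires a YubiKey and the new tooling, but the underlying content-trust check is enabled through an environment variable. The snippet below is a minimal sketch that enforces signature verification on a pull, assuming a Docker 1.8+ client on the PATH; the image name is a placeholder.

```python
# Minimal sketch: enforce Docker Content Trust for a pull by setting the
# DOCKER_CONTENT_TRUST environment variable before invoking the Docker CLI.
# Assumes a Docker 1.8+ client on the PATH; the image name is a placeholder.
import os
import subprocess

def pull_with_content_trust(image: str) -> None:
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    # With content trust enabled, the pull fails if the image tag is unsigned
    # or its signature cannot be verified against the publisher's keys.
    subprocess.run(["docker", "pull", image], env=env, check=True)

if __name__ == "__main__":
    pull_with_content_trust("example/web-app:latest")
```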
Docker is also offering a new security service for its dozens of Official Repos from independent software vendors that performs granular auditing of images, presenting the results to the ISVs and sharing the final output so Docker users can decide which content to use based on their security policies. If an issue is detected, the ISV can fix any vulnerabilities to upgrade the security profile of its content.
“It’s a very powerful concept – the key thing here is determining what’s inside the container,” Messina said. “In multi-tenancy, you want to know all you can know about your containers, and that’s what you’re able to do with these capabilities and that transparency, and doing so with our ISV partners.”
Hardware signing and scanning container images help address the trust and integrity of application content, essentially making it easier for web hosts to ensure they aren’t hosting compromised images. Hosts also don’t have to rely on the information each ISV publishes about the state of its content, or actively monitor the common vulnerabilities and exposures for each one.
Better Access Control and Isolation
User namespaces, which Messina noted are one of the most requested Docker features, give IT operations the ability to separate container and Docker daemon-level privileges, meaning the containers themselves don’t need root access on the host. Admins can then lock down hosts to a restricted group of sysadmins and assign privileges for each container by user group, preventing one organization from having control over another’s application services.
“User namespaces creates a differentiated model for access control, where the operations team or hosting provider can assign to, effectively, a user or a multi-tenant client a certain set of privileges. But you as the operator can maintain the maximum level of privileges related to the Docker daemon,” Messina said.
“Ultimately this is setting up a very interesting path for more and more providers hosting containers and hosting Docker on bare metal.”
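To see what that separation looks like in practice, the sketch below (an illustration, not from the article) runs a throwaway container and prints the UID map it sees; assuming the daemon has been started with user-namespace remapping enabled (the experimental --userns-remap daemon option) and an alpine image is available, container root appears mapped to an unprivileged UID on the host.

```python
# Rough illustration of user namespaces: inside a remapped container, root
# (UID 0) maps to an unprivileged UID range on the host rather than to host root.
# Assumes a Docker daemon with user-namespace remapping enabled and a local
# "alpine" image; both are assumptions for this sketch.
import subprocess

def show_uid_mapping() -> None:
    # /proc/self/uid_map shows "inside-UID  outside-UID  range"; with remapping
    # enabled the outside UID is a high, unprivileged host UID instead of 0.
    result = subprocess.run(
        ["docker", "run", "--rm", "alpine", "cat", "/proc/self/uid_map"],
        capture_output=True, text=True, check=True,
    )
    print("Container UID map:", result.stdout.strip())

if __name__ == "__main__":
    show_uid_mapping()
```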
User namespaces and hardware signing are included in the Docker 1.9 Experimental release, and image scanning and vulnerability detection are now available for all Official Repos on Docker Hub.
This first ran at http://www.thewhir.com/web-hosting-news/new-features-in-docker-could-help-web-hosts-provide-safe-container-hosting

10:37p | Amazon Buys More Wind Power for Cloud Data Centers

Amazon Web Services, the e-commerce giant’s cloud services arm, has contracted with a wind farm developer for energy from a future 100 MW wind project in Paulding County, Ohio, to offset grid energy consumption of its cloud data centers, the company announced Thursday.
Utility-scale renewable power purchase agreements are becoming increasingly common among hyperscale data center operators like Amazon, its cloud services rivals Google and Microsoft, as well as Facebook, which does not provide cloud services but has multiple massive data centers in the US and Europe to support its user base. This year Equinix also started contracting for utility-scale renewables – something commercial data center service providers, whose customer base includes the aforementioned cloud giants, have traditionally been reluctant to do.
About one year ago, AWS made a commitment to power its operations entirely with renewable energy. The cloud provider said earlier this year that about one-quarter of the energy it consumed was renewable, and that its goal was to reach 40 percent renewable by the end of 2016.
Amazon’s Ohio data centers are not online yet. In a project announced earlier this year, the company is building data centers in at least three cities in the state: Dublin, New Albany, and Hilliard. It is advertising data center jobs in all three cities plus Columbus, possibly indicating plans for a data center in the state’s capital too.
Its current East Coast cloud data center cluster is in Northern Virginia, a region with one of the world’s highest concentrations of data centers.
The Ohio wind farm will be called Amazon Wind Farm US Central. Being built by EDP Renewables, it is expected to come online in May 2017.
Earlier this year, AWS announced wind and solar power purchase agreements with developers in North Carolina, Indiana, and Virginia. It has also agreed to a pilot project in California to use new battery technology by Tesla for energy storage for data centers that host its us-west-1 cloud availability region. The pilot’s goal is to demonstrate an energy storage solution that can address the problem of intermittent energy generation by renewable sources.