Data Center Knowledge | News and analysis for the data center industry - Industry's Journal
Friday, September 12th, 2014
12:00p
Ditching Windows Server 2003 is Necessary if Not Easy
Windows Server 2003 is quickly reaching the end of its life, and a mass migration is set to occur in the world’s data centers. Microsoft estimates there are about 23.8 million instances of Windows Server 2003 running today across 11.9 million physical servers worldwide, accounting for about 39 percent of the entire Windows install base. That’s a lot of migration that needs to happen between now and July 14, 2015, when Microsoft will stop providing support for the old OS.
This migration won’t be easy. The tech world has changed drastically since 2003 and there is a lot of discussion taking place about how to make the leap.
Moving to Windows Server 2012 R2 is an opportunity for Microsoft, its customers and its worldwide partners. Customers will gain more efficiency, while partners stand to make money helping companies make the switch and providing support. There is also potential opportunity for Microsoft’s Azure cloud, as the end of support can serve as a logical point in time for making the switch from in-house data centers to the cloud.
“It’s an opportunity to move to a more modern OS or to move to an IaaS scenario, such as Azure,” said David Mayer, director of services at Insight Enterprises. “We’re seeing with organizations that we talk with that this is very much an inflection point with the architecture and design of their data center. Do we upgrade internally? Virtualize everything? It’s a multi-pronged conversation.”
Mayer is in charge of the Microsoft consulting services business at Insight, a $5 billion company that has been named the largest SPLA (Service Provider License Agreement) Reseller by Microsoft. Mayer worked at Microsoft for nine and a half years before joining the consulting giant.
“Windows Server 2003 was really stable,” he said. “But as we move to the cloud world, there are a lot of scenarios that weren’t prevalent back then. The big one is that Windows Server 2012 is cloud-ready. The ability to do private, public and hybrid scenarios is huge.”
Not upgrading isn’t really an option. As support ends, these servers present a potential security risk. “It doesn’t matter if the server itself has important data, but that 2003 server creates a potential point of intrusion,” Mayer said. “Once you allow somebody in, it becomes much easier for them.”
Consider hardware and software
Two main variables come into play. “In a lot of cases, the hardware [the OS is] running on will not support the [new] operating system,” said Mayer. “The other big impact is what applications that server is running. You might need to upgrade the application itself as well if it doesn’t have the cross-compatibility.”
There are systems management considerations as well. While 2003 was a robust operating system, Windows Server 2012 brings significant advances in systems management.
Mayer also added that users should keep in mind that SQL Server 2005, still very much in use, will reach the end of support in 2016. “There’s a kind of continuum as Microsoft upgrades its operations stack,” he said.
Perhaps the biggest consideration, however, is with hardware. While 2012 has many more capabilities, it also often means a need for more powerful hardware.
The hardware benefits include advanced virtualization and reduced overhead, allowing more workloads to run on fewer servers. Recent HP statistics for its ProLiant Gen8 machines (the company recently released Gen9 ProLiants) say that applications run six times faster and that deploying servers and updates is three times faster, while the machines provide 70 percent more compute capacity and twice the data center capacity.
Mayer said that on average, customers will need about 25 percent net-new servers. “It means more hardware, but potentially large return for organizations willing to make that move,” he said.
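Before committing to new hardware, it can help to screen an existing inventory against the new operating system's baseline requirements. The sketch below is a minimal, hypothetical Python example: it checks simple server records against the commonly cited Windows Server 2012 R2 minimums (64-bit processor at 1.4 GHz, 512 MB RAM, 32 GB disk). The field names and inventory data are assumptions for illustration, not Insight or Microsoft tooling, so verify the actual requirements against Microsoft's documentation.

```python
# Minimal, illustrative pre-check of a server inventory against the
# commonly cited Windows Server 2012 R2 minimums (verify against
# Microsoft's documentation). Inventory fields are hypothetical.
from dataclasses import dataclass

MIN_CPU_GHZ = 1.4      # 64-bit processor, 1.4 GHz
MIN_RAM_MB = 512
MIN_DISK_GB = 32

@dataclass
class Server:
    hostname: str
    cpu_64bit: bool
    cpu_ghz: float
    ram_mb: int
    disk_gb: int

def meets_minimums(s: Server) -> bool:
    """Return True if the server clears the baseline for the new OS."""
    return (s.cpu_64bit
            and s.cpu_ghz >= MIN_CPU_GHZ
            and s.ram_mb >= MIN_RAM_MB
            and s.disk_gb >= MIN_DISK_GB)

if __name__ == "__main__":
    inventory = [
        Server("app01", cpu_64bit=True, cpu_ghz=2.4, ram_mb=8192, disk_gb=300),
        Server("legacy-db", cpu_64bit=False, cpu_ghz=3.0, ram_mb=4096, disk_gb=146),
    ]
    for srv in inventory:
        status = "ok to upgrade in place" if meets_minimums(srv) else "needs new hardware"
        print(f"{srv.hostname}: {status}")
```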
Three-step process
There are three phases to the migration, according to Mayer.
- Phase 1: Holistic discovery and analysis.
“This is extremely important,” he said. “Understand the interaction of the servers in the data center environment. Moving one thing can impact something else. We tell organizations it’s extremely important to map out what is interacting with what and what users are interacting with the applications as well.” Some of the documentation over the past 11 years might not be up to snuff, and this will be tricky.
Some organizations have thousands of 2003 servers and don’t necessarily know which data center each server is in. Healthcare and the public sector are two examples of verticals that have a lot of application dependency because of niche and custom applications. These might break if the migration isn’t carefully done. (A minimal inventory sketch follows the phase list below.)
- Phase 2: Migration.
The main consideration, apart from making sure software is compatible, is making sure the hardware is compatible as well. Have the right hardware in place, Mayer warned.
- Phase 3: Value realization and support.
It’s important to do some analysis post-migration to learn the benefits. While most customers have Software Assurance (maintenance), there is a large services opportunity for the channel in selling more Assurance, according to Mayer.
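As a rough illustration of the Phase 1 discovery step described above, the sketch below groups a hypothetical server inventory by the applications that depend on each Windows Server 2003 host, so planners can see what has to move together. The CSV columns and file name are assumptions made for the example, not output from a real discovery tool.

```python
# Illustrative discovery pass: read a hypothetical inventory CSV and
# report which hosts still run Windows Server 2003 and which
# applications depend on them. Columns assumed: hostname, os,
# datacenter, dependent_app.
import csv
from collections import defaultdict

def load_inventory(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def ws2003_hosts_by_app(rows: list[dict]) -> dict[str, list[str]]:
    """Map each dependent application to the 2003-era hosts it relies on."""
    by_app: dict[str, list[str]] = defaultdict(list)
    for row in rows:
        if "2003" in row["os"]:
            by_app[row["dependent_app"]].append(
                f'{row["hostname"]} ({row["datacenter"]})')
    return by_app

if __name__ == "__main__":
    rows = load_inventory("server_inventory.csv")  # hypothetical file
    for app, hosts in sorted(ws2003_hosts_by_app(rows).items()):
        print(f"{app}: {len(hosts)} Windows Server 2003 host(s)")
        for h in hosts:
            print(f"  - {h}")
```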
12:30p
Atlassian’s Stash Data Center Brings Enterprise-Grade Git to Compute Clusters
Atlassian’s Git repository management tool Stash has shed its scaling limitations. The new enterprise offering is called Stash Data Center, and it can scale to tremendous heights using active-active clustering.
The large Australian software company is best known for its popular project management software JIRA, but it has a number of other products and services, including Stash, the enterprise-grade distribution of Git, an open source system for code revision control and management.
There’s a new culture overtaking even the largest enterprises, general manager of Atlassian’s developer tools business unit Eric Wittman said. Development is adopting a more agile methodology in general and collaborating across larger teams. It is development with real-time communications.
Stash is Atlassian’s on-premises source code management for Git. In a secure and fast setup, customers can create and manage repositories, set up fine-grained permissions and collaborate on code.
Stash Data Center is Stash that works across server clusters and can be scaled seamlessly, transparently to the developer. A lot of automation now happens around source code, which in general puts servers under heavy load, and Stash Data Center’s clustering capabilities address this problem.
“Stash Data Center is the enterprise flavor,” said Wittman. “It can run on a cluster instead of a single server, easily supporting 10,000 developers. If I want to add an additional node, I can do that without taking the whole system down. This scaling issue is a very complicated problem, and we are the first to solve it.”
While Stash was already fast and secure, Stash Data Center removes scaling limitations. This makes for a promising offering to enterprises looking to establish agile development workflows. Each node in the cluster increases capacity for concurrent users without sacrificing performance.
The product comes with enterprise-grade security and collaborative workflows. Repositories are protected by the user’s firewalls and permissions are customizable at the global, project, repository and branch levels.
It lets customers use any form of load-balancing technology, be it software or hardware, to intelligently distribute the load. Data center deployments integrate with industry-standard technologies for database clustering and shared file systems to minimize single points of failure.
It adds application resilience: users can increase application throughput to avoid performance degradation in the event of load spikes. Daily workflows don’t change for users, but they’ll see fewer slowdowns, faster compile times and less downtime.
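To make the load-balancing idea above concrete, here is a minimal, generic Python sketch of round-robin request distribution across cluster nodes with a simple health check. It is purely an illustration of the technique the article describes; the node names and health-check logic are hypothetical, and this is not Atlassian's implementation.

```python
# Generic, illustrative round-robin load balancer over cluster nodes
# with a trivial health check. Not Atlassian's implementation; all
# names are hypothetical.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, nodes: list[str]):
        self.healthy = set(nodes)
        self._ring = cycle(nodes)
        self._size = len(nodes)

    def mark_down(self, node: str) -> None:
        self.healthy.discard(node)

    def mark_up(self, node: str) -> None:
        self.healthy.add(node)

    def next_node(self) -> str:
        """Return the next healthy node, skipping nodes marked down."""
        for _ in range(self._size):
            node = next(self._ring)
            if node in self.healthy:
                return node
        raise RuntimeError("no healthy nodes available")

if __name__ == "__main__":
    lb = RoundRobinBalancer(["stash-node-1", "stash-node-2", "stash-node-3"])
    lb.mark_down("stash-node-2")  # e.g., a node taken out for maintenance
    for request_id in range(5):
        print(f"request {request_id} -> {lb.next_node()}")
```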
Atlassian started 12 years ago and spread primarily through word of mouth. It doesn’t spend a lot on marketing and a sizable portion of the company’s money goes into R&D (about 40 percent, according to Wittman). “It’s a large percentage for a large company,” said Wittman. “We basically allocated almost half the team to developing Stash Data Center.”
The company devoted a lot of R&D to removing the scaling limitation. Atlassian already touts more than 40,000 organizations using its products across a multitude of industries. And the numbers aren’t fudged: if three departments within Microsoft are using Atlassian, it’s counted as one customer.
1:00p
Schneider to Move U.S. Headquarters to Massachusetts
French electricity distribution and automation management giant Schneider Electric has officially opened the doors of its North American research and development center in Andover, Massachusetts (just north of Boston), which will also serve as the company’s new North American headquarters.
Its current U.S. address is in Palatine, Illinois, a small town northwest of Chicago.
Schneider is one of the world’s largest suppliers of mechanical and electrical infrastructure products for data centers. Its portfolio for data centers includes everything from uninterruptible power supplies and cooling systems to data center infrastructure management software.
The Boston One Campus has capacity to accommodate about 750 employees, and Laurent Vernerey, president and CEO of the company’s operations in North America, will relocate there.
This is Schneider’s first R&D center in the U.S. It joins existing centers in Bangalore, Shanghai, Grenoble (France) and Monterrey (Mexico).
The campus comprises two buildings totaling 240,000 square feet. It has a Discovery Center, where visitors can learn about the company and its products, and 53,000 square feet of engineering laboratory space, where its engineers test and validate customer solutions.
Schneider built the campus using about $8 million worth of its own products to demonstrate its energy efficiency and sustainable design capabilities. The facility is LEED Silver certified.
Among energy efficiency solutions deployed are the company’s SmartStruxure building management system, EcoAisle and EcoBreeze data center cooling systems, its data center and server UPS units, Altivar variable-speed air conditioning control and other products.
“As we imagined the design of the new campus, it was important to us to leverage our own technology and create a facility that enhances our customers’ experience while exemplifying Schneider Electric’s core objective of making the most of our energy,” Vernerey said in a statement.
2:15p
Friday Funny: Pick the Best Caption for Data Center Cleaning
Friday afternoon is upon us again and fall is in the air! Let’s roll into another weekend of fun with our Data Center Knowledge Caption Contest.
Several great submissions came in for last week’s cartoon – now all we need is a winner! Help us out by scrolling down to vote.
Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon!
Take Our Poll
For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!
5:43p
StackStorm, Mirantis to Meld DevOps-Style IT Automation With OpenStack
Operations automation provider StackStorm is integrating its products with the software and services of OpenStack systems integrator Mirantis. The partners will collaborate on engineering, marketing, sales and support.
StackStorm continues to expand its commitment to OpenStack, also becoming an official sponsor of the open source cloud software project. The company came out of stealth last May, and its software is currently in private beta.
StackStorm helps private and public cloud users more easily author, manage, share and extend operations automation and says its software improves productivity. Delivered as a service, it is designed to use management and monitoring tools data center managers already use to automate management tasks across their entire infrastructure.
Automation software continues to expand its appeal as the largest tech companies like Google, Facebook and Microsoft lead the trend. StackStorm’s mission is to extend its DevOps-oriented approach to IT automation across all data centers.
The partnership makes its software compatible with a popular OpenStack flavor. StackStorm is also one of the leading contributors to Project Mistral, the OpenStack workflow-as-a-service project.
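The event-to-action pattern behind this kind of DevOps-style automation can be sketched in a few lines: a rule matches an incoming monitoring event and fires a remediation action. The Python example below is a generic illustration of that pattern, not StackStorm's actual API; the event fields, rule names and actions are all hypothetical.

```python
# Generic sketch of event-driven IT automation: monitoring events are
# matched against rules, and matching rules fire actions. This is an
# illustration of the pattern, not StackStorm's API.
from typing import Callable

Event = dict          # e.g. {"source": "nagios", "type": "disk.usage.high", "host": "web01"}
Action = Callable[[Event], None]

class Rule:
    def __init__(self, event_type: str, action: Action):
        self.event_type = event_type
        self.action = action

    def matches(self, event: Event) -> bool:
        return event.get("type") == self.event_type

def dispatch(event: Event, rules: list[Rule]) -> None:
    """Run every action whose rule matches the incoming event."""
    for rule in rules:
        if rule.matches(event):
            rule.action(event)

def clean_temp_files(event: Event) -> None:
    print(f"cleaning temp files on {event['host']}")  # placeholder remediation

if __name__ == "__main__":
    rules = [Rule("disk.usage.high", clean_temp_files)]
    dispatch({"source": "nagios", "type": "disk.usage.high", "host": "web01"}, rules)
```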
“Mirantis is one of the leading OpenStack distributions in the industry today, and we intend to heavily invest in our partnerships so that users can leverage StackStorm alongside Mirantis software and services,” said Evan Powell, CEO of StackStorm. “OpenStack is playing an increasingly important role in the industry and enables users to achieve the power and flexibility of the cloud without the rigidity and cost of proprietary cloud services and private cloud platforms. While our software supports more than OpenStack, we depend heavily on the community and are happy to be increasing our support.”
6:25p
Seagate Gets Into Cloud Hardware Solutions Business
Seagate announced a new Cloud Systems and Solutions division that will build solutions for original equipment manufacturers.
This summer Seagate picked up assets of LSI’s Accelerated Solutions division and Flash components division from Avago. Last year it acquired EVault and hard drive test equipment maker Xyratex.
With a reported 2 million drives and 17,000 petabytes sold, Seagate says its open Intelligent Information Infrastructure program caters to OEMs and offers hard drives, SSDs and hybrid data enclosures and embedded server modules.
For cloud services, Seagate and its partners provide public cloud disaster recovery and backup solutions. Seagate says its EVault Enterprise Backup and Recovery Appliance now accommodates up to 100TB of usable capacity.
Seagate also unveiled ClusterStor 9000, a fully integrated Lustre-based scale-out solution designed for HPC and Big Data workloads.
The system received U.S. government certification for meeting Intelligence Community Directive requirements.
The company said the Hamburg, Germany, climate center DKRZ has selected the Seagate ClusterStor 9000 to deliver 45 PB of Lustre-based scale-out storage to support climate simulation and modeling.
10:30p
Google Hooks Startups With $100K Worth of Free Cloud
Google is offering early-stage startups that meet certain criteria $100,000 worth of services available on the Google Cloud Platform, which includes everything from Infrastructure- and Platform-as-a-Service to Database-as-a-Service and APIs for a handful of application services.
The company says the move is aimed at attracting more developers to its cloud. Startups with relatively complex applications that do make the move, however, are likely to stay with Google when their credit runs out if they see business momentum, since shifting applications from one type of infrastructure to another is a complicated and lengthy process.
Urs Hölzle, senior vice president of technical infrastructure at Google, announced the program Friday at the Google for Entrepreneurs Global Partner Summit.
Google is competing tooth-and-nail with Amazon Web Services and Microsoft Azure for public cloud market share. All three have been continuously reducing cloud prices and growing feature sets available on their cloud platforms, but AWS IaaS services continue to be in the lead in terms of popularity.
There is also a lot of competition for developer dollars in the PaaS space, which has more big players, such as Salesforce’s Force.com and Red Hat’s OpenShift. The PaaS market is becoming more crowded with the push behind Cloud Foundry, an open source PaaS created by EMC’s Pivotal and adopted by a number of big IT players, including HP and IBM.
Google is not the first to offer free infrastructure services to small companies to hook them. In another recent example, data center provider Digital Realty Trust announced a startup contest in July, where the winner will get a free 4kW cabinet with power and fiber connectivity in any of Digital’s data centers around the world where colocation services are available.
To qualify for Google’s program, companies must have less than $5 million in funding and less than $500,000 in annual revenue. They also have to be associated with one of the incubators, accelerators and investors Google is extending the offer through.
A partial list of partners is available online and the company is accepting nominations of incubators or venture capital firms if they’re not on its list. It currently includes Tech Stars, Y Combinator, 500 Startups, AngelPad and Google Ventures, among others.
The deal includes credit for cloud services as well as around-the-clock support.
Among startups that have already built applications on the platform are video- and photo-sharing company Snapchat and the successful online education non-profit Khan Academy. Another well-known user is the Mayday Super PAC, a grassroots political non-profit organized by Harvard Law professor and activist Lawrence Lessig to get big money out of U.S. politics.
10:38p
US Postal Service Cloud Hodge-Podge Creates Unnecessary Security Risks: Audit
This article originally appeared at The WHIR
An audit of the US Postal Service cloud found that its contracts did not comply with the agency’s security standards, mainly because there is no designated group in the agency responsible for managing cloud services.
According to a report released Sept. 4 by the US Postal Service Office of Inspector General, US Postal Service management did not appropriately monitor applications or complete the required security analysis process for three cloud services reviewed. The agency also failed to have suppliers and their employees sign non-disclosure agreements.
According to the report, without “proper knowledge of and control over applications in the cloud environment, the Postal Service cannot properly secure cloud computing technologies and is at increased risk of unauthorized access and disclosure of sensitive data.”
The information gathered from the audit will be consolidated to determine how successful the federal government is at protecting data in the cloud as it continues to implement its Cloud First policy across agencies. In April, the Department of Defense outlined its strategy for moving to the cloud, including creating security requirements prior to the migration.
The Postal Service cloud security policy requires its cloud service providers to be FedRAMP-certified, but according to the audit findings, this wasn’t always enforced. In four contracts, the Postal Service did not require CSPs to become FedRAMP certified because the personnel assigned to monitor the cloud services were not aware of all the contractual obligations or the agency’s cloud computing requirements.
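One concrete way to close the gap the auditors describe is a simple, centrally maintained inventory of cloud contracts with their compliance attributes. The sketch below is hypothetical: it assumes a list of contract records and flags any that lack FedRAMP certification or signed NDAs. The field names and example data are illustrative, not drawn from the Postal Service's systems.

```python
# Illustrative compliance check over a hypothetical inventory of cloud
# contracts: flag any contract missing FedRAMP certification or signed
# NDAs. Field names and data are assumptions for the example.
contracts = [
    {"supplier": "Cloud Vendor A", "fedramp_certified": True,  "ndas_signed": True},
    {"supplier": "Cloud Vendor B", "fedramp_certified": False, "ndas_signed": True},
    {"supplier": "Cloud Vendor C", "fedramp_certified": True,  "ndas_signed": False},
]

def compliance_gaps(contract: dict) -> list[str]:
    """List the policy requirements this contract fails to meet."""
    gaps = []
    if not contract["fedramp_certified"]:
        gaps.append("CSP not FedRAMP certified")
    if not contract["ndas_signed"]:
        gaps.append("supplier NDAs not signed")
    return gaps

for c in contracts:
    gaps = compliance_gaps(c)
    if gaps:
        print(f'{c["supplier"]}: {", ".join(gaps)}')
```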
While there are a growing number of cloud providers certified with FedRAMP, a lack of training and decentralized cloud management seem to have been behind the Postal Service’s blundered cloud service implementation.
In the UK, 83 percent of civil servants have had a poor experience with the public sector cloud, according to a report on the state of the UK government’s cloud-first policy. More than half said that their agency lacked the technical skills to implement and manage a cloud environment. It wouldn’t be too surprising if the US Postal Service reported similar frustrations.
According to the audit, the agency has not defined “cloud computing” and “hosted services” and also does not have an enterprise-wide inventory of cloud services and contracts.
“The policy provides an overview of cloud computing initiatives and lists general roles and responsibilities; therefore, management and personnel in various functional areas have different interpretations of cloud computing and its associated capabilities,” the report says. “Without effective management of cloud computing technologies, the Postal Service cannot properly govern and assess the risk associated with these technologies.”
A lack of organization and management could defeat the purpose of moving to the cloud to save money and improve ROI, especially if the agency fails to cash in on SLA credits or duplicates cloud efforts.
Government cloud adoption is expected to see slower growth, which should give agencies the time to catch their employees up and give them the proper training.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/us-postal-service-cloud-hodge-podge-creates-unnecessary-security-risks-audit