Data Center Knowledge | News and analysis for the data center industry
Friday, March 31st, 2017
12:00p
NASA Light Years Ahead on Data Center Consolidation

All of the federal agencies out there ought to take a page from the National Aeronautics and Space Administration’s data center consolidation playbook.
Despite increasingly restrictive legislation over the past seven years designed to reduce the number of inefficient and unnecessary data centers, NASA may be the only agency to comply with the Federal Information Technology Acquisition Reform Act’s 2018 deadline, reported MeriTalk.
It’s 33 down and just six to go for NASA, which began its consolidation efforts with 59 data centers. Twenty will remain operational because each of the agency’s 10 major offices and satellite locations requires at least one on-premises facility. Karen Petraska, program executive for computing services at NASA, said she and her team toured all 59 data centers before determining which to keep and which to close.
“It gave us better insight as to which should stay for the long haul and which should be offered up for closure,” she said.
The Departments of Agriculture, Treasury, and Justice were also praised for contributing significantly to cutting the originally inventoried 10,000 data centers down to 5,600. However, more cooperation and speed are needed from every government agency to meet next year’s deadline.
No one questions the challenges of consolidation. Updating IT infrastructure, adopting cloud services to migrate existing data, and simply deciding what stays and what goes are no easy tasks. And none of the above can happen overnight.
Read also: Half-baked Government Consolidation Causes Cybersecurity Headaches
“Some agencies have a lot more complex data centers and politics. It could be any number of things,” Petraska said. “Older things are hard to move. Big things are hard to move. There’s some inertia there.” For NASA, the process alone of figuring out which data centers to close, and the subsequent negotiation with managers to pare down that list, took almost a year.
The government has been working much longer on its consolidation plan. Here’s a little history.
The Federal Data Center Consolidation Initiative (FDCCI) of 2010 didn’t require specific actions by agencies but simply promoted the use of green IT by reducing the overall energy and real estate footprint of government data centers, and reducing the cost of data center hardware, software and operations.
Then came the Federal Information Technology Acquisition Reform Act that required agencies to conduct data center inventories and identify facilities they could close and consolidate, set goals for footprint reduction, and created rules for regular progress reporting.
Finally, the Data Center Optimization Initiative (DCOI) replaced the 2010 initiative last year and added even more rules. If an agency wants to build a data center or expand an existing one, it must prove beyond a shadow of a doubt to the Office of Management and Budget (OMB) that no better alternatives exist, such as using cloud services or leasing colocation space. DCOI also raised the number of data centers agencies were required to close, and put in place a number of additional requirements for energy efficiency, server virtualization, server and facility utilization, and use of data center management tools.
The bottom-line goal of all this legislation is clearly to save the government money by reducing its sprawling data center inventory and what it costs to maintain. The aim is to save nearly $1.4 billion by the 2018 deadline, but that is in danger of not being met if the remaining agencies continue to lag.
4:02p

Oracle Cloud Certifications Target Regulated Industries

Brought to You by Talkin’ Cloud
Oracle announced this week that its public cloud has achieved a series of compliance certifications and attestations intended to bring a number of its core services into regulated industries like healthcare.
The certifications and attestations include ISO 27001, HIPAA, SOC1 and SOC2, according to Oracle, and will help bring its portfolio of PaaS, IaaS, and SaaS applications into the hands of users in industries such as healthcare, where providers have been increasingly turning to cybersecurity pros to secure their clouds.
“Oracle is continuously investing time and resources to meet our customers’ strict requirements across highly regulated industries,” Erika Voss, Global Senior Director, Public Cloud Compliance, Risk and Privacy, Oracle said in a statement. “These new certifications not only validate the reliability and security features of the Oracle Cloud; they effectively make Oracle’s solutions available to thousands of new customers in the Healthcare and Public Sector industries.”
See also: Oracle’s Cloud, Built by Former AWS, Microsoft Engineers, Comes Online
The certifications come as Oracle has posted third quarter revenue that topped analysts’ expectations, marking three straight quarters of revenue gains after more than a year of declines, according to Bloomberg. Oracle executive chairman Larry Ellison said that its infrastructure offering will eventually be the software company’s biggest cloud business.
The certifications Oracle received and the corresponding services are below:
- Service Organization Control (SOC): Database Public Cloud Service, Java Public Cloud Service, Database Backup Cloud Service, Exadata Cloud Service, Big Data Cloud Service, Big Data Preparation Service, Big Data Discovery, Application Builder Cloud Service, Storage Cloud Service, Dedicated Compute Cloud Service, and Public Compute Cloud Service
- HIPAA: Oracle Fusion Suite of Software-as-a-Service (SaaS) applications: Enterprise Resource Planning (ERP), Human Capital Management (HCM), and Customer Relationship Manager (CRM) Cloud Service
- International Standards Organization (ISO) 27001 certification: Public Cloud SaaS suite of services in the core areas of Fusion ERP, HCM, CRM, Taleo Social, Taleo Business Edition, Service Cloud, Eloqua Marketing Cloud, BigMachines CPQ, and Field Service Cloud
This article originally appeared on Talkin’ Cloud.
4:10p

Got Microservices? Consider East-West Traffic Management Needs

Ranga Rajagopalan is CTO at Avi Networks.
Many network and IT operations today grapple with the same challenge: How do you build a network that can handle disaggregated applications and still deliver the best application performance possible?
This question is a hot topic for operations teams as enterprises move toward more agile development methodologies and ditch the monolithic applications of days past.
The Rise of Microservices
Microservices on containers is the new dev kid in town, offering a brilliant solution for teams that want continuous integration and deployment (CI/CD) when pushing out code.
Because microservices break apps up into multiple parts (services) that work together to make up the full application, developers can update one part of the app without touching anything else. This breakthrough allows apps to be light and manageable instead of static and immovable, which is a win for everyone.
Well, almost everyone. This advancement for development teams brings immediate pain to infrastructure and operations groups that run a traditional data center.
Typically, when monolithic applications are built on virtualized infrastructure, network performance is judged on the speed of the connection between end users and the data center over the WAN link, also known as north-south traffic. At a basic level, this process is linear between one user and one app server, with load balancers determining how to serve traffic to a pool of app servers.
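For readers who think in code, here is a minimal, hypothetical sketch of that north-south pattern: a single load balancer rotating user requests across a pool of app servers. The server names and the plain round-robin policy are assumptions for illustration only, not a description of any particular load balancer.

```python
from itertools import cycle

# Hypothetical pool of app servers sitting behind the load balancer.
APP_SERVERS = [
    "app-01.example.internal",
    "app-02.example.internal",
    "app-03.example.internal",
]

class NorthSouthLoadBalancer:
    """Round-robin balancer for user-to-application (north-south) traffic."""

    def __init__(self, servers):
        self._pool = cycle(servers)  # endless rotation over the server pool

    def route(self, request_path):
        # Each incoming user request is handed to the next server in the pool.
        server = next(self._pool)
        return f"user request {request_path} -> {server}"

if __name__ == "__main__":
    lb = NorthSouthLoadBalancer(APP_SERVERS)
    for path in ["/home", "/cart", "/checkout", "/home"]:
        print(lb.route(path))
```

The point is the shape of the traffic: one entry point, one decision, one hop south to an app server.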
The use of containers, which adds speed and agility to development, leaves ops teams with the challenge of building a fast, scalable, and elastic architecture that can manage these microservices and apps, especially when it comes to service discovery, traffic management, and security. The DevOps model was born to bridge the divide and create a tighter connection between the two groups (development and operations) to support this automation of resource management, but the pressure is still on the operations team to figure out delivery.
[Figure: Application Architecture Evolution to Microservices]
With microservices, individual containers that deliver the different services need to talk to each other (see figure above). In keeping with the directional metaphor, this process is known as east-west traffic. In contrast to the clean north-south highways of traditional end user and data center connections, the visual for east-west traffic is more like the backroads in a cluster of suburban communities where multiple paths can be taken.
As such, a service proxy is necessary to provide load balancing between the services. Add in the complicating factor that microservice applications can be distributed across servers within a data center or across multiple locations, including the cloud (which doesn’t play nicely with hardware in a separate data center), and you are in a real pickle. How do network managers and application architects ensure that traffic is sent to the right place and connects to the correct containers without overloading servers?
An Architecture for All Directions
To build a modern application architecture that answers these challenges, multiple considerations need to be taken into account around load balancers and how to handle the east-west traffic with microservices and containers.
The best architectural approach accounts for proxy and application services within a flexible network services framework. An elastic service fabric allows distributed software load balancers to be managed as one entity, with real-time information about applications, security, and end users delivered back to the controller so that administrators can easily troubleshoot issues.
Proxies can serve as a gateway to each interaction that occurs, both between containers within a server and those running across multiple servers. These proxies can resolve DNS lookup requests, map a target service name to its virtual IP address, and spread the traffic load across instances of the target microservice.
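To make that concrete, below is a minimal sketch of the lookup-and-spread behavior just described: a registry maps a service name to its instances, and a proxy picks one instance per call. The service names, addresses, and registry format are hypothetical, and a production service proxy would add health checks, retries, and security on top of this.

```python
import itertools

# Hypothetical registry: service name -> addresses of its container instances.
SERVICE_REGISTRY = {
    "inventory": ["10.0.1.11:8080", "10.0.2.7:8080"],
    "pricing": ["10.0.1.12:8080", "10.0.3.4:8080", "10.0.3.5:8080"],
}

class ServiceProxy:
    """Resolves a target service name and spreads calls across its instances."""

    def __init__(self, registry):
        # One round-robin iterator per service, built from the registry.
        self._instances = {name: itertools.cycle(addrs) for name, addrs in registry.items()}

    def call(self, service_name, endpoint):
        if service_name not in self._instances:
            raise LookupError(f"unknown service: {service_name}")
        instance = next(self._instances[service_name])  # spread load across instances
        return f"east-west request {endpoint} -> {service_name} @ {instance}"

if __name__ == "__main__":
    proxy = ServiceProxy(SERVICE_REGISTRY)
    # A checkout service calling its peer services, rather than an end user calling the app.
    print(proxy.call("inventory", "/stock/42"))
    print(proxy.call("pricing", "/quote/42"))
    print(proxy.call("pricing", "/quote/43"))
```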
Because a microservice application runs multiple instances of each service, service discovery is critical for connecting to and consuming information from specific microservices, as is load balancing for north-south and, in particular, heavy east-west traffic. Not only do these connections need to happen, but tools that capture the interactions are critical to ensure you can monitor application traffic and performance and troubleshoot problems. Visibility comes in the form of metrics such as number of connections, user types, transactions per second, and user behavior.
At the end of the day, network engineers, network architects, and load balancing administrators must equip themselves with the knowledge and tools needed to ride the coming wave of microservices. By understanding the needs of east-west traffic with container-based applications and assessing traffic management options, ops teams can build the best architecture to enable their Dev teams and provide end users with seamless and secure services, no matter what direction traffic flows.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:24p

Microsoft Opens Its First European Lab for Internet of Things

Aaron Ricadela (Bloomberg) — As Europe’s industrial companies attempt to modernize production with sensors and software, Microsoft Corp. is trying to grab a piece of the spending.
The U.S. software giant is launching a new lab in Munich, following openings in Redmond, Washington, and Shenzhen, China, for customers investing in the so-called Internet of Things, the idea that everything from refrigerators to industrial robots will be connected to the web. It follows similar moves by Cisco Systems Inc. and IBM, which have also picked Germany for technology-focused labs.
Munich is home to a number of German corporate giants, including BMW and Siemens AG, while companies including Robert Bosch GmbH, Adidas AG, and a slate of mid-sized manufacturers are outfitting more factories with internet-connected production lines and robots. The global manufacturing sector spent an estimated $178 billion on IoT last year, according to market researcher IDC.
See also: How a Tech Company from the 60s is taking on AI, IoT
Industrial companies have been keen to use technology to improve their manufacturing processes, said Peggy Johnson, Microsoft’s executive vice-president of business development. “There is a concern of being left behind if one of their competitors can shorten the process.”
Yet the market for connected machines has been slow to get going. “We know it’s significant,” said Johnson. “It just hasn’t taken off.”
An IDC survey last year of 1,872 companies in Western Europe found that while more than a third were using IoT technologies, more than half of those were only collecting or analyzing data without using it to improve production.
“It’s a work in progress; I’m not going to tell you the industry has solved this,” said Vikas Butaney, vice-president and general manager for IoT connectivity at Cisco, who is working with Microsoft on sponsorships and technology access for its three labs. “In industry these customers are very cautious.”
See also: Why is IoT Popular? Because of Open Source, Big Data, Security and SDN
One challenge is that for manufacturers, production yields and reliable deliveries are paramount, meaning they’re averse to tinkering with shop-floor networks that are already in place.
According to a survey of 209 IT and telecom companies published this month by the German trade association Bitkom, 63 percent of respondents see multiple technology standards as a barrier to IoT adoption, and 37 percent cited difficulties integrating existing machines.
For Microsoft, the Munich IoT lab — where customers can fabricate hardware prototypes in addition to writing software — complements 1,900 staffers already in Munich, part of 2,700 people across eight sites. The company last fall brought online two German data centers for its Azure cloud computing services, including IoT tools.
Cyra Richardson, a general manager for business development, is running the Munich, Redmond and Shenzhen labs.
5:00p

Seven Big Reasons to Move Backup to the Cloud

Seyi Verma is Senior Product Marketing Manager for Druva.
Today marks the seventh World Backup Day, and much has changed since its inception in 2011. Cloud-based solutions have replaced traditional on-premises solutions to become the gold standard for today’s backup, archiving, governance and disaster recovery (DR). But what exactly makes cloud-based data backup so unique?
The unique capabilities of the cloud not only save companies money, but also enable data protection and governance activities that were previously not possible. In addition to its use for backup, enterprises can use cloud storage to address other key business areas such as DR, archival search and compliance.
The public cloud, in particular, has become the application platform of choice. Unlike hosted or dedicated cloud storage solutions, the public cloud eliminates the costs of cloud-specific gateway hardware for enterprises, offers global availability and can ingest and manage large volumes of data to scale as needed.
While the functionalities listed above are appealing, it’s important to keep in mind that the cloud doesn’t exist in a vacuum, but works in tandem with other technologies and solutions. Some technologies leverage the unique capabilities of the cloud better than others, with leading-edge deduplication technology being a perfect example of this. Together, global dedupe and the cloud form a data backup powerhouse that massively reduces the enterprise storage footprint and achieves enormous cost savings.
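As a rough illustration of why global dedupe and the cloud pair so well (a toy sketch, not Druva’s implementation), the example below hashes fixed-size chunks and stores each unique chunk only once, so a second, nearly identical backup uploads only the bytes that actually changed. The chunk size and hash choice are arbitrary assumptions.

```python
import hashlib

CHUNK_SIZE = 4096  # arbitrary fixed chunk size, for illustration only

class DedupeStore:
    """Toy global-dedupe store: identical chunks are kept only once."""

    def __init__(self):
        self._chunks = {}  # sha256 digest -> chunk bytes

    def backup(self, data):
        """Split data into chunks, store only unseen ones, return (manifest, bytes uploaded)."""
        manifest, uploaded = [], 0
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self._chunks:
                self._chunks[digest] = chunk  # only new content consumes storage and bandwidth
                uploaded += len(chunk)
            manifest.append(digest)  # the backup itself is just a list of chunk references
        return manifest, uploaded

if __name__ == "__main__":
    store = DedupeStore()
    original = b"quarterly report " * 1000
    edited = original + b" minor edits"
    _, first_upload = store.backup(original)
    _, second_upload = store.backup(edited)
    print(f"first backup uploaded {first_upload} bytes; second uploaded only {second_upload}")
```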
You probably have an intuitive understanding that cloud backup is the way to go; very few people these days are clamoring to retain expensive and outdated legacy systems. Let’s take a closer look at what cloud data backup can do for your organization:
- Ensure Compliance Confidence: If an organization has a global footprint, then adherence to local data residency laws is a must. The public cloud enables organizations to store data in a specific region so they can rest assured that they are always in compliance with geographical data regulations. Furthermore, using a public cloud offers all the security and certifications required, including those for government agencies. With the ability to create an isolated cloud region designed specifically for federal, state and local government agencies, government organizations can protect valuable data and easily adhere to compliance regulations.
- Break Down Siloed Workflow: Why have separate backup/recovery, DR, archival and analytics systems if you don’t have to? Consolidating infrastructure in the cloud makes it possible to centralize data management and eliminate separate legacy systems and workflows. The cloud makes it easier to address all data across every endpoint, server and cloud application. That means greater efficiency, fewer manual errors and reduced overhead. Furthermore, a central repository for all data allows for simple eDiscovery and compliance monitoring.
- Replace Outdated Processes: Cloud-native technologies replace antiquated processes that have historically relied on on-premises solutions. For example, eDiscovery was previously a cumbersome, expensive and time-consuming process. By moving it to the cloud, organizations can streamline eDiscovery and cut the time it takes to complete the entire process by nearly 50 percent.
- Always Be Up to Date and Reliable: The cloud boasts no downtime and always-current software. Once organizations have moved to the cloud, they get automatic software updates and security patches, unlike legacy systems where they have to wait for scheduled updates. Furthermore, unlike with on-premises solutions, assuring acceptable levels of data protection and recoverability in the cloud is not as expensive, while meeting AWS’s 99.99999 percent durability guarantee and an availability commitment of 99.5 percent.
- Lower Your Total Cost of Ownership (TCO): Consolidated cloud backup and DR enables organizations to reduce hefty capital expenditures and shift to less expensive operating expenses. In addition to lacking storage efficiencies, legacy solutions typically come with complex pricing models. In contrast, cloud storage can scale up or down to meet demand, enabling vendors to offer “pay-as-you-go” pricing that’s aligned with actual, not projected, usage. Usage-based payment models, reductions in software licensing costs and the decreased need for a dedicated DR infrastructure further shrink TCO. Additionally, global deduplication efficiencies afforded by the cloud offer massive network bandwidth savings and gigabit-effective backup speeds. Ultimately, the elasticity and scale of the cloud accommodates data variability and gives organizations the ability to scale down and save more money on storage when needed.
- Save Money on Overall Storage: Public cloud architectures offer vast storage capacity. Building on this, native cloud storage solutions enable centralized data management and eliminate the need for pricey on-premises facilities, infrastructure and staff. Automatic tiering of data in the cloud substantially reduces overall costs by keeping warm data available for instant restores and automatically moving infrequently accessed data to less costly cold storage (a rough sketch of this tiering logic follows this list). Tiered backup architecture also makes it easier for the enterprise to satisfy recovery time objective (RTO) and recovery point objective (RPO) requirements.
- Standardize Availability, Costs and Service Levels: Moving secondary storage workloads to the cloud provides companies with a global approach to data backup, availability and governance via a common, geographically shared platform. By adopting a single point of management across global infrastructure, companies can also better predict costs, efficiently manage data and achieve consistency in service levels, compliance and other processes.
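The tiering behavior mentioned in the storage item above can be pictured as a simple age-based policy. This is a hedged sketch with made-up tier names and thresholds, not any vendor’s actual rules.

```python
from datetime import datetime, timedelta, timezone

# Assumed thresholds: how long since last access before data moves to a cheaper tier.
WARM_AFTER = timedelta(days=30)
COLD_AFTER = timedelta(days=180)

def choose_tier(last_accessed, now=None):
    """Classify a backup object as hot, warm, or cold based on its last access time."""
    now = now or datetime.now(timezone.utc)
    age = now - last_accessed
    if age >= COLD_AFTER:
        return "cold"  # cheapest storage, slower restores
    if age >= WARM_AFTER:
        return "warm"  # mid-cost, still fast enough for most RTOs
    return "hot"       # instant restores for recently used data

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    for days in (2, 45, 400):
        last_accessed = now - timedelta(days=days)
        print(f"last accessed {days:>3} days ago -> {choose_tier(last_accessed, now)} tier")
```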
These are just seven of the ways moving backup to the cloud can dramatically reduce your organization’s overall storage footprint and slash costs while delivering the unique advantages only a cloud-native solution can offer. If your organization is growing, processes large volumes of data, or is required to comply with regional data privacy laws, it may pay to take the pledge to move your backup to the cloud on World Backup Day this year.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
7:28p

Data Center Cooling Outage Disrupts Azure Cloud in Japan

A long list of Microsoft Azure cloud services malfunctioned for hours Friday for a subset of customers using services hosted in a Microsoft data center in Japan due to a cooling failure.
Customers had trouble connecting to resources that leverage storage infrastructure in the Japan East region, according to an update on the Azure status website.
“Engineers have identified the underlying cause as loss of cooling which caused some resources to undergo an automated shutdown to avoid overheating and ensure data integrity and resilience,” the status page read as of noon Pacific Time.
While the most consequential cloud outage of the year so far – the Amazon Web Services storage service meltdown in late February – was caused by a mistyped command, today’s incident in Japan serves as a reminder of the extremely physical nature of the Cloud.
Cooling-related data center outages are common, but not nearly as common as electrical infrastructure problems. Malfunctioning uninterruptible power supplies have consistently been reported as the most common cause of data center outages in regular surveys by the Ponemon Institute, commissioned by the company formerly known as Emerson Network Power (now known as Vertiv).
The issues started around 7 a.m. Pacific Time and continued into the afternoon as Azure engineers worked to restore the systems.
Both storage and virtual machines were impacted, along with many more cloud services, such as Web Apps, Backup, HDInsight, Key Vault, and Site Recovery.
Microsoft Azure launched its Japan East region, hosted in a data center in the Saitama Prefecture, and its Japan West region, hosted in a data center in the Osaka Prefecture, in 2014.