Data Center Knowledge | News and analysis for the data center industry
Thursday, September 3rd, 2015
12:00p
Data Center Consolidation: a Manager’s Checklist
You’ve seen all of the big statistics around cloud growth. It’s clear that demand for new kinds of data center services continues to grow. Through it all, cloud providers and data center partners are working around the clock to make their environments as efficient as possible. Why? To maximize their bottom line and to stay competitive.
The reality of today’s highly competitive data center and cloud market is that the provider who runs most efficiently and cost-effectively while still delivering premium services leads the market. To accomplish this, there are a few things to consider. First, getting ahead doesn’t always mean adding more gear. Smart data center and cloud providers learn to use what they have and make the absolute most of every resource. New data center efficiency concepts also raise new questions. Is there a new technology coming out that improves density? Does the ROI help improve long-term management costs? Does a new kind of platform allow me to achieve more while requiring less?
In many cases, creating a more efficient and competitive data center comes down to consolidating data center resources. With that in mind, here are three key areas managers should examine when planning a consolidation: hardware, software, and users.
Hardware
There are many new tools we can use to consolidate services, resources, and physical data center equipment. Solutions ranging from advanced software-defined technologies to new levels of virtualization help create a much more agile data center architecture. When it comes to hardware and consolidation, you have several options:
- Network, route, switch: We have officially virtualized the entire networking layer. If an organization chooses, it can run on an entirely commodity networking architecture and still provide enterprise capabilities. For example, Cumulus Networks has its own Linux distribution, Cumulus Linux, designed to run on top of industry-standard networking hardware. It’s a software-only solution that provides flexibility for modern data center networking designs and operations with a standard operating system: Linux. Further capabilities revolve around direct network virtualization integration with the hypervisor. When working with networking components, look for virtual services that can consolidate networking functions and reduce the need for more gear.
- Storage and data: Much like networking, you now have the ability to create and control your own storage architecture. Software-defined storage goes much further than just virtualizing the storage controller layer. This logical component allows you to aggregate siloed storage resources and control all of them under one management layer. You no longer have to worry about lost storage resources and can control all data points from an intelligent storage management platform. Furthermore, new kinds of app-level policies allow you to maximize storage resources, like flash, by pointing applications to specific repositories.
- Blades, servers, and convergence: Within the compute layer, data center architects have quite a few options. Convergence allows you to create powerful environments that couple several data center functions into a node-based architecture. Even traditional rack-mount servers now come with better resource control mechanisms and improved density. Newer blade architectures allow for direct fabric backplane integration and even more throughput. Furthermore, hardware policies allow you to dynamically re-provision resources, so new sets of users can take on entirely new hardware policies on the same blade chassis. Creating a “follow-the-sun” data center model lets you add less gear while still supporting a diverse set of users.
- Managing your rack: Cooling, power, and airflow are all critical considerations when you examine the overall data center consolidation picture. How much power are you drawing? Do you have hot spots? Are your servers running efficiently? Are you using the latest airflow management techniques? Creating an ideal data center and rack architecture can go a long way toward controlling how much gear you actually need. Remember, user density and workload performance are directly impacted by the health of your data center’s environmental variables.
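To make the rack-level questions above concrete, here is a minimal, hypothetical sketch in Python that computes PUE (total facility power divided by IT equipment power) and flags racks whose inlet temperature exceeds a chosen threshold. The rack names, readings, and threshold below are invented for illustration; in practice these values would come from your power and environmental monitoring systems.

```python
# Hypothetical back-of-envelope sketch: compute PUE and flag hot racks.
# All numbers, names, and thresholds below are made up for illustration.

RACKS = {
    "rack-a1": {"it_load_kw": 8.2, "inlet_temp_c": 23.5},
    "rack-a2": {"it_load_kw": 9.7, "inlet_temp_c": 28.1},
    "rack-b1": {"it_load_kw": 6.4, "inlet_temp_c": 22.0},
}

FACILITY_POWER_KW = 38.0      # total facility draw (IT + cooling + losses), made up
HOT_SPOT_THRESHOLD_C = 27.0   # inlet ceiling, chosen arbitrarily for this example

def pue(facility_kw, racks):
    """PUE = total facility power / IT equipment power."""
    it_kw = sum(r["it_load_kw"] for r in racks.values())
    return facility_kw / it_kw

def hot_spots(racks, threshold_c):
    """Return rack names whose inlet temperature exceeds the threshold."""
    return [name for name, r in racks.items() if r["inlet_temp_c"] > threshold_c]

if __name__ == "__main__":
    print(f"PUE: {pue(FACILITY_POWER_KW, RACKS):.2f}")
    print(f"Hot spots: {hot_spots(RACKS, HOT_SPOT_THRESHOLD_C) or 'none'}")
```

With the sample numbers above, the sketch reports a PUE of roughly 1.56 and flags rack-a2 as a hot spot, the kind of signal that tells you consolidation headroom exists before you buy more gear.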
Software
The software piece of the data center puzzle is absolutely critical. In this case, we’re talking about management and visibility. How well are you able to see all of your resources? What are you doing to optimize workload delivery? Because business is now directly tied to the capabilities of IT, it’s more important than ever to have proactive visibility into both the hardware and software layers of the modern data center.
Having good management controls spanning virtual and physical components will allow you to control resources and optimize overall performance. In working with various management tools, consider the following questions (a brief monitoring sketch follows the checklist):
- How well are you able to monitor everything ranging from chip to chiller?
- Can you see virtual workloads and how they’re distributed?
- Are you able to see hardware resource utilization?
- Can you control load-balancing dynamically?
- Is your DCIM solution integrated with your virtual systems and the cloud?
- Can you proactively make decisions around resource utilization?
Visit the Data Center Knowledge DCIM InfoCenter for guidance on DCIM products on the market, as well as help with selection, deployment, and day-to-day operation of Data Center Infrastructure Management software.
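As a sketch of what proactive visibility can look like in practice, the hypothetical Python snippet below pulls host utilization metrics from a management API and flags hosts that are candidates for consolidation. The endpoint URL, JSON field names, and thresholds are all assumptions invented for this example; substitute the API of whatever DCIM or hypervisor management platform you actually run.

```python
# Hypothetical sketch: pull host utilization from a management API and flag
# consolidation candidates. The URL, JSON fields, and thresholds are assumptions.
import requests  # third-party; pip install requests

DCIM_API = "https://dcim.example.com/api/hosts"  # placeholder endpoint
CPU_IDLE_THRESHOLD = 20.0  # percent; hosts under this are candidates
MEM_IDLE_THRESHOLD = 25.0

def consolidation_candidates():
    hosts = requests.get(DCIM_API, timeout=10).json()
    return [
        h["name"]
        for h in hosts
        if h["cpu_util_pct"] < CPU_IDLE_THRESHOLD
        and h["mem_util_pct"] < MEM_IDLE_THRESHOLD
    ]

if __name__ == "__main__":
    for name in consolidation_candidates():
        print(f"{name}: low utilization -- candidate for workload consolidation")
```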
The User
The first iPhone was released in 2007. Over the course of just eight years we’ve seen the widespread adoption of cloud, the consumerization of IT, and now the Internet of Things. Behind the scenes, the data center is churning away to support all of this new data and so many new users. These users are requesting applications, services, and a variety of other critical functions that allow them to lead their daily lives and be productive. Still, at the core of it all sits the data center, churning.
Data center consolidation must never negatively impact the user experience. Quite the opposite: a good consolidation project should actually improve overall performance and how the user connects. New technologies allow you to dynamically control and load-balance where users get their resources and data. New WAN control mechanisms allow for the delivery of rich resources from a variety of points. For the end user, the entire process is completely transparent. For the data center, leveraging cloud, convergence, and other optimization tools means fewer resource requirements.
Moving forward, careful control of data center operations will mean involving users and the business process. It also means that data center managers must look at new options to consolidate their data centers while still supporting next-gen use cases.
3:00p
Learning to Trust in the Cloud
Marc Olesen is Senior VP and General Manager for Splunk.
Fear. The fear that the cloud is less secure than an on-premises computing environment is the key reason why IT and business decision-makers have refrained from moving their organizations more aggressively into the cloud.
Just like urban legends — such as alligators in the sewers — the myth that the cloud lacks adequate security has perpetuated over the years, despite industry research that shows otherwise. For example, Gartner data shows most cloud-based services are as secure as many on-premises IT infrastructures and the data they contain.
After the prominent security breaches in retail and the public sector over the last year, it’s clear that a strong security posture is a requirement, not an option, as no one wants to be the next headline. Reviews of these breaches show that they were the result of internal policy or system failures, not the result of any weakness of a cloud service. Although most of these breaches targeted on-premises infrastructure, they contribute to false assumptions about cloud security.
These false assumptions tend to be overplayed and persist, despite an increasing amount of research supporting the security of the cloud. A study released by CDW this year showed that nearly half of the 1,204 IT professionals surveyed (47 percent) said security remains a barrier keeping their organizations from migrating to the cloud. This was followed by trust in cloud solutions, cited by 31 percent.
In a global survey from audit and advisory firm KPMG, 53 percent of the respondents cited data loss and privacy risk as the most significant challenge to doing business in the cloud, followed by the risk of intellectual property theft. In a similar KPMG 2012 survey, cost efficiency was listed as the most significant challenge, indicating that security and data privacy have become greater concerns.
In other words, companies have recognized the cost efficiency of the cloud, but have yet to overcome the security fears. Security versus efficiency doesn’t have to be a choice, and with the cloud, it’s not.
Understanding The Cloud Security Landscape
Let’s take a step back for a moment and put ourselves in the shoes of a major cloud provider. Any incident that exposes security vulnerabilities within its infrastructure could be disastrous for a cloud service provider from a business standpoint. As a cloud provider, if I know that security is a major concern among my customer base, then I know I need to deploy the highest levels of security. And that’s exactly what major cloud providers are doing.
For example, consider the steps Amazon Web Services (AWS) has taken to provide a secure environment for its public cloud services. From a physical security standpoint, AWS’s data centers use the latest electronic surveillance and multi-factor access control systems, and are staffed around the clock by trained security guards.
The company says environmental systems are designed to minimize the impact of disruptions to operations. Multiple geographic regions and availability zones enable services to remain operating in the face of most failure modes, including natural disasters or system failures.
AWS includes built-in security features such as secure access, firewalls, identity and access management tools, multi-factor authentication, private subnets, encrypted data storage, security logs, and centralized encryption key management. AWS uses third-party certifications and evaluations to assure existing and prospective customers that its environment is secure.
Security: A Shared Responsibility
In the same way you’d take precautions before entering a sewer with alligators, organizations considering or using cloud services still need to take steps to make sure they are effectively mitigating whatever risks there might be with the cloud, as well as within their own environments.
Organizations should employ best practices when selecting cloud providers and using their services, such as:
- Putting policies and processes in place to manage cloud services and setting guidelines for employees on how to use these services.
- Making sure contracts with cloud providers include service level agreements (SLAs) that address remediation in the event of a security incident such as distributed denial-of-service attacks and malware, including a description of lines of responsibility during such events.
- Establishing, for larger companies, a cloud governance committee to oversee issues such as vendor relations, performance and reliability, acceptable cloud usage, security awareness training and other areas.
- Deploying security technologies to strengthen the security of their IT infrastructure, particularly when building a hybrid cloud environment that combines public clouds, private clouds and on-premises components.
- Creating a defense-in-depth strategy that protects the perimeter as well as endpoints to guard against intrusions and protect data.
Security is a shared responsibility with your cloud provider, and companies should consider implementing tools such as next-generation and application firewalls, intrusion detection and prevention, anti-virus software, encryption, identity and access management, visibility tooling, and log and big data analytics. This helps ensure internal security standards are as high as those set by cloud providers.
Editor’s Note: This is Part 2 of a two-part series. You can find the first article, Welcome to the Hybrid World, here.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:26p
Microsoft Launches New Azure VMs for Compute, Storage Intensive Applications
This article originally appeared at The WHIR
In order to support applications that are both compute and storage intensive in the public cloud, Microsoft on Wednesday launched a new variant of its G-series VMs, called the GS-series. The GS-series offers the compute power of the G-series with the performance of Premium Storage.
According to Microsoft, GS-series VMs are ideal for applications that are both compute and storage intensive, including relational databases like SQL Server and MySQL, NoSQL databases like MongoDB, and data warehouses. The GS-series can also come in handy when scaling up the performance of enterprise applications like Microsoft Exchange and Dynamics, the company said.
The announcement comes as Microsoft, along with CSC and AWS, have won a $108 million cloud computing contract with the Federal Aviation Administration.
The GS-series is powered by Intel Xeon E5 v3 processors and offers up to 64 TB of storage, 80,000 IOPS, and 2,000 MB/s of storage throughput.
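As a quick back-of-envelope illustration of what those headline limits imply, the sketch below estimates how long a full sequential scan of the maximum 64 TB of attached storage would take at the quoted 2,000 MB/s, and what average I/O size would saturate both the 80,000 IOPS and throughput ceilings at once. This is simple arithmetic on the published figures, not a benchmark.

```python
# Back-of-envelope math on the published GS-series limits (not a benchmark).
STORAGE_TB = 64
THROUGHPUT_MB_S = 2000
MAX_IOPS = 80000

storage_mb = STORAGE_TB * 1024 * 1024             # 64 TB expressed in MB
scan_seconds = storage_mb / THROUGHPUT_MB_S       # full sequential scan time
io_size_kb = (THROUGHPUT_MB_S * 1024) / MAX_IOPS  # avg I/O size that saturates both limits

print(f"Full 64 TB scan at 2,000 MB/s: ~{scan_seconds / 3600:.1f} hours")
print(f"I/O size saturating 80,000 IOPS at 2,000 MB/s: ~{io_size_kb:.0f} KB")
```

The output works out to roughly 9.3 hours for a full scan and about a 26 KB average I/O size, which gives a feel for the scale of workload these instances target.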
Microsoft launched its G-series earlier this year to offer more memory and SSD space than other VM sizes in the public cloud. Over the past three months, G-series has seen a more than 50 percent growth in usage.
Also on Wednesday, Microsoft lowered the prices on its D-series and DS-series instances by as much as 27 percent, going into effect on Oct. 1.
The company also launched a new diagnostic capability for VMs: the ability to see serial and console output from a running virtual machine, Corey Sanders, Microsoft Director of Program Management for Azure, said in a blog post.
This first ran at http://www.thewhir.com/web-hosting-news/microsoft-launches-new-azure-vms-for-compute-storage-intensive-applications
10:09p
IBM, ARM Form Alliance to Simplify the Integration of IoT
IBM and ARM just announced the formation of an alliance that will make it much easier for developers who are building products that use ARM’s mbed operating system to link their devices to IBM’s specially designed cloud for the Internet of Things, reported our sister site, The VAR Guy.
This is especially appealing for companies that want to build connected products but don’t want to have to worry about a proprietary operating system. The mbed OS, currently available only in beta versions, is designed to run on the small microcontrollers used in sensors and embedded devices such as appliances or even medical monitors.
The new alliance will provide access to software development kits (SDKs) that make it simpler to invoke application programming interfaces (APIs) to build IoT applications, Chris O’Connor, general manager of the IoT business unit at IBM, told The VAR Guy. In the case of IBM, those APIs will connect back to IBM IoT Foundation, a cloud service managed by IBM based on the IBM Bluemix platform-as-a-service (PaaS) environment running on the IBM SoftLayer Cloud.
O’Connor said IBM is making a concerted effort to reduce the barrier to entry when it comes to building IoT solutions. By combining access to back-end services via open APIs with ubiquitous connectivity over the Internet, just about any organization can now build an IoT application, he noted.
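To give a rough sense of what connecting a device to IBM IoT Foundation involves under the hood, here is a minimal Python sketch that publishes a sensor reading over MQTT, which is the transport the platform exposes. This is not the mbed device SDK the alliance is about; the organization ID, device type, device ID, and token are placeholders, and the host, client-ID, and topic conventions follow IoT Foundation's documented MQTT scheme as best understood here, so verify them against the current documentation (or simply use the SDKs) before relying on them.

```python
# Hypothetical device-side sketch: publish a sensor reading to IBM IoT Foundation
# over MQTT. ORG, DEVICE_TYPE, DEVICE_ID, and AUTH_TOKEN are placeholders; the
# host, client-ID, and topic formats follow IoT Foundation's documented MQTT
# conventions as assumed here -- check the current docs or use the official SDKs.
import json
import time
import paho.mqtt.client as mqtt  # third-party; pip install paho-mqtt

ORG = "myorg6"               # placeholder organization ID
DEVICE_TYPE = "mbed-sensor"  # placeholder device type
DEVICE_ID = "device001"      # placeholder device ID
AUTH_TOKEN = "REPLACE-ME"    # placeholder device auth token

client = mqtt.Client(client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", AUTH_TOKEN)
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 1883, keepalive=60)
client.loop_start()

# Device events are assumed to be published to iot-2/evt/<event-id>/fmt/<format>.
payload = json.dumps({"d": {"temperature_c": 21.7, "ts": time.time()}})
client.publish("iot-2/evt/status/fmt/json", payload, qos=1)

client.loop_stop()
client.disconnect()
```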
In fact, he added, there is a growing realization among business executives regarding the potential impact of IoT. As a result, IBM is seeing a massive spike in interest in IoT solutions. For that reason, IBM today also announced it has collaborated with ARM to create IoT for Electronics, an instance of IBM IoT Foundation aimed specifically at the electronics industry.
Considering that International Data Corp. (IDC) expects the IoT industry to increase to $1.7 trillion by 2020 from $655.9 billion last year, the companies couldn’t have picked a better time to expand their presence in the IoT arena.
Devices, connectivity, and IT services will make up the bulk of the growth in the IoT market, according to the research firm, and combined will amount to more than 66 percent of the worldwide IoT market by 2020. Platforms, application software, and service models will be a large part of that revenue.
To learn more about both new IoT projects, read the complete post at: http://thevarguy.com/business-technology-solution-sales/090315/ibm-partners-arm-drive-iot-solutions.