Data Center Knowledge | News and analysis for the data center industry
Friday, May 15th, 2015
12:00p
Virtualization – A Look to the Future
Several years ago, we began using virtualization technologies as a means to test servers and use resources more effectively. When the VMware hypervisor first appeared, very few vendors actually supported a virtual infrastructure, so virtualization was largely confined to the classroom and to development environments within many organizations.
With awareness rising quickly, administrators saw that server resources were being wasted dramatically and that virtualization was a way to curtail that waste. With that realization, pressure grew on vendors to support virtual environments. From there, server virtualization made its way into almost all data center environments as more organizations adopted the technology to help meet their business needs.
Now we've entered the next frontier. We're well beyond simple server virtualization and are exploring new avenues to make virtualization an even more powerful platform. Let's take a look at some of these technologies.
- Application Delivery. If we can virtualize a server, why not apps? The popularity of products like XenApp, ThinApp, and now Cloud Volumes continues to increase. Administrators are able to stream or deliver applications to the end user without actually deploying them at the endpoint. That kind of control and manageability makes app virtualization very appealing; in fact, many Fortune 500 organizations already have some type of application virtualization deployed. The next iteration of application virtualization will revolve around secure, clientless delivery. HTML5 allows you to stream entire applications directly to a web browser, which can help revolutionize how endpoints are deployed and how organizations control resources.
- Hosted/Virtual/Cloud Desktops. People quickly realized that VDI isn't as easy as it may seem; numerous underlying components can make the technology a bit cumbersome. Today there has been a resurgence behind VDI and the delivery of complete virtual desktops. As with applications, HTML5 can also stream entire desktops directly to a browser. The other big factor is how far the data center itself has come: converged infrastructure, better resource controls, and more use cases are resulting in more VDI deployments today. The future, however, might look a bit different. The concept of a "desktop" as we know it may be going away as the focus shifts even further toward the delivery of applications and data.
- Network Virtualization (SDN and NFV). Also known as software-defined networking (SDN), network virtualization gives the administrator much greater control over the network infrastructure. Where a single physical NIC had its limitations, new technologies allow numerous virtual networking designations on a corporate network. Another big network virtualization push revolves around network functions virtualization (NFV), which lets you virtualize specific network functions and run them as individual nodes that connect with other communication and networking services. For example, you can have virtual machines or appliances running as virtual load balancers, firewalls, and even WAN optimizers (see the sketch after this list).
- Security Abstraction. There will always be room in the IT world for traditional unified threat management appliances. Hardened physical appliances aside, however, more organizations are deploying security platforms on top of a VM. The flexibility to clone security appliances, place them at various points within the organization, and assign specific functions to them makes security virtualization very appealing. Imagine a security appliance VM dedicated solely to DLP or IPS/IDS duties; that kind of deployment can be very strategic and beneficial. Furthermore, you're going to see a lot more virtual services designed specifically to protect your cloud. Inter-cloud connectivity needs good security practices, and virtual appliances that help bind security services spanning multiple cloud services are really going to help.
- User Virtualization. With IT consumerization and mobility making their presence felt, more organizations have been looking for ways to abstract the user layer from devices, applications, and endpoints. And so user virtualization was born. Solutions from vendors such as AppSense let a user carry personalized settings from application to application and from platform to platform. Basically, users are able to take their settings with them as they move between systems and applications. Furthermore, you can tie a user's compute profile to various endpoints and even cloud resources.
- Storage Virtualization. A single storage controller can be carved up logically so well that the resulting partitions appear to administrators as standalone units. Using storage more efficiently is on the front page of many project lists, and controller multi-tenancy is just one example of how storage virtualization plays a role in today's IT world. Another big example is what's happening around software-defined storage. An organization's ability to completely abstract every storage resource and point it at a virtual layer for management is absolutely a reality. Today's heterogeneous storage architectures are asking for a better way to manage siloed disks, storage arrays, and cloud resources.
- Server Virtualization. This stays on the list only because server virtualization continues to evolve and expand. With entire platforms being designed around server virtualization, more emphasis is being placed on how to make better use of a virtual environment. There continues to be a need to virtualize the server and to better incorporate virtualization efficiencies into the modern data center. A lot of the future conversation around server virtualization, however, revolves around commodity server systems. Remember, your hypervisor is a lot more powerful than ever before, and future capabilities will allow you to build even better underlying server resource management solutions to keep your data center agile.
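To make the NFV idea above a little more concrete, here is a minimal, purely illustrative Python sketch of a virtualized service chain. The appliance names, resource numbers, and the ServiceChain class are hypothetical and are not tied to any particular SDN/NFV product.

```python
from dataclasses import dataclass

@dataclass
class VirtualAppliance:
    """A network function running as a VM instead of a physical box."""
    name: str       # e.g. "edge-firewall"
    function: str   # e.g. "firewall", "load-balancer", "wan-optimizer"
    vcpus: int
    memory_gb: int

class ServiceChain:
    """Orders virtual network functions so traffic flows through each in turn."""
    def __init__(self, appliances):
        self.appliances = list(appliances)

    def describe(self):
        hops = " -> ".join(a.name for a in self.appliances)
        return f"client -> {hops} -> application"

# Hypothetical chain: firewall, then load balancer, then WAN optimizer.
chain = ServiceChain([
    VirtualAppliance("edge-firewall", "firewall", vcpus=4, memory_gb=8),
    VirtualAppliance("web-lb", "load-balancer", vcpus=2, memory_gb=4),
    VirtualAppliance("branch-wanopt", "wan-optimizer", vcpus=2, memory_gb=4),
])

print(chain.describe())
```

The point of the sketch is simply that each function becomes a schedulable node you can clone, resize, or reorder, rather than a fixed box in a rack.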
The list will most likely grow as more environments seek ways to become even more efficient. Already, virtualization technologies are helping many businesses cut costs, regain control, and grow their infrastructure. The most important point to remember is that the logical (virtual) layer will be critical in connecting your data center to your users, and to the cloud.

1:00p
Cirba Adds KVM Support to IT Analytics Software
The rise of the OpenStack cloud management framework has served to increase usage of the Kernel-based Virtual Machine (KVM) hypervisor on which the framework was first built. In recognition of that shift in the hypervisor landscape, Cirba has added support for KVM hypervisors to its IT infrastructure analytics application.
Cirba CTO Andrew Hillier says that while Cirba already supports VMware ESX, IBM PowerVM, Microsoft Hyper-V and Red Hat Enterprise Virtualization, the company is now adding support for KVM hypervisors deployed in OpenStack environments that are gaining ground in both public and private clouds.
In particular, Hillier says that Cirba is now beginning to see OpenStack adoption increase within internal IT organizations that are looking to replace commercial hypervisors with an open source platform.
The Cirba management platform consists of a Control Console that identifies ways to increase efficiency, while also helping to reduce application performance problems created by IT infrastructure capacity issues. Specifically, Cirba makes use of analytics to eliminate the need to manually determine where workloads should be placed within an IT environment. The Cirba Reservation Console then automates the entire process of selecting the optimal hosting environment for any given workload based on the available amount of compute and storage capacity.
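The core idea of analytics-driven placement can be illustrated with a small sketch. This is not Cirba's algorithm; it is a minimal, hypothetical best-fit placement function, assuming each host advertises its available compute and storage capacity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    name: str
    free_cpu: float        # available vCPUs
    free_storage_gb: float

@dataclass
class Workload:
    name: str
    cpu: float
    storage_gb: float

def place(workload: Workload, hosts: list[Host]) -> Optional[Host]:
    """Pick the host that fits the workload with the least leftover capacity."""
    candidates = [h for h in hosts
                  if h.free_cpu >= workload.cpu
                  and h.free_storage_gb >= workload.storage_gb]
    if not candidates:
        return None  # no environment has the required capacity
    return min(candidates,
               key=lambda h: (h.free_cpu - workload.cpu)
                             + (h.free_storage_gb - workload.storage_gb) / 100)

hosts = [Host("esx-01", free_cpu=8, free_storage_gb=500),
         Host("kvm-01", free_cpu=16, free_storage_gb=200)]
print(place(Workload("web-tier", cpu=4, storage_gb=150), hosts).name)
```

A production placement engine would weigh far more signals (licensing, affinity rules, historical utilization), but the shape of the decision is the same: score every candidate environment and pick the best fit automatically.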
Hillier says the rise of more standard application programming interfaces (APIs) is making it easier to apply analytics across a broad spectrum of IT infrastructure. That data in turn is then being used to automate IT operations.
“You can’t automate what you can’t see,” says Hillier. “Now we can use APIs to monitor the infrastructure.”
Of course, not every IT organization is equally comfortable with either analytics or automation. Some may appreciate the analytics, but not necessarily the level of automation. While the IT industry as a whole has reached a new level of industrialization, many IT administrators worry that IT automation could just as easily propagate errors at the same scale fixes get applied. Those errors could then have a cascading effect that winds up taking entire applications offline.
At the same time, it’s equally apparent that data centers can’t scale on the backs of the manual processes implemented by IT administrators or even the custom scripts they might write. In that context, reliance on more IT automation is almost inevitable. But before any of that automation gets embraced most IT organizations are first going to want a lot more visibility into exactly what is currently occurring across their entire IT infrastructure environment.
1:35p
Google’s Wholesale Move To Cloud and Take On Security Makes Cloud Apps Enterprise-Friendly
Google is moving all of its internal corporate applications to a cloud model, reports The Wall Street Journal. So far, 90 percent of Google's corporate applications have been migrated. With that shift to the cloud comes a shift in the way the company approaches and thinks about security. Gone is the idea of the cordoned-off enterprise network.
Called the BeyondCorp initiative, the new security model assumes that the internal network is as dangerous as the Internet. Traditionally, corporate security hinges on the idea that a trusted internal network secured by firewalls and other security measures is much safer than having to traverse the Internet for access.
The thinking not only addresses the changing nature of how we access applications, but the changing nature of attacks, as well as increasingly distributed, remote workforces.
“The perimeter security model works well enough when all employees work exclusively in buildings owned by an enterprise,” wrote Google reliability engineering manager Rory Ward and technical writer Betsy Beyer in a paper published in December. “However, with the advent of a mobile workforce, the surge in the variety of devices used by this workforce, and the growing use of cloud-based services, additional attack vectors have emerged that are stretching the traditional paradigm to the point of redundancy.”
In the initiative, Google is tuning its security practices with the assumption that everything will move to the cloud. Overall, this means that trust is moving from the network to the device level. Employees receive fine-grained access that depends on the device they are using and their user credentials. Authentication, authorization, and encryption are all employed. There are no virtual private networks, and connections are encrypted the same way whether an employee is at home or inside the office.
Because trust has shifted from the network to the device, Google uses a device inventory database that keeps track of which devices are issued to employees, as well as changes made to those devices. After device authentication, the user is identified through a user database and a group database tied to the company's human resources processes. The human resources tie-in ensures that an employee's status and access remain up to date.
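A minimal sketch of that device-plus-user trust model is shown below. It illustrates the approach described in the paper, not Google's implementation; the database stand-ins and the is_access_allowed function are hypothetical.

```python
# Hypothetical stand-ins for the device inventory, user, and group databases.
DEVICE_INVENTORY = {"laptop-4821": {"managed": True, "patched": True}}
USER_DB = {"jsmith": {"active": True, "groups": ["engineering"]}}
RESOURCE_ACLS = {"source-repo": ["engineering"]}

def is_access_allowed(device_id: str, user_id: str, resource: str) -> bool:
    """Grant access only if both the device and the user check out.

    Trust lives at the device/user level, not the network: the same
    checks apply whether the request comes from the office or from home.
    """
    device = DEVICE_INVENTORY.get(device_id)
    if not device or not (device["managed"] and device["patched"]):
        return False  # unknown or out-of-policy device

    user = USER_DB.get(user_id)
    if not user or not user["active"]:
        return False  # HR status drives access: departed users lose it

    allowed_groups = RESOURCE_ACLS.get(resource, [])
    return any(group in allowed_groups for group in user["groups"])

print(is_access_allowed("laptop-4821", "jsmith", "source-repo"))  # True
```

Note that nothing in the decision asks where the request came from; the perimeter simply does not appear in the check.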
Part of the apprehension enterprises feel about using SaaS stems from the fact that the application traverses the Internet. That hasn't stopped many from employing SaaS to varying degrees, however. The first applications to move were non-sensitive ones, but Gartner has noted that enterprises are getting comfortable with SaaS for more mission-critical applications.
Regardless of a company’s mix of on-premises applications and SaaS, the security paradigm needs to shift to include the wider Internet. Once the model better addresses the Internet “X” factor, more wholesale moves will begin to occur en masse.
The Wall Street Journal notes that Coca-Cola, Verizon, and Mazda are examples of big corporations taking a similar approach to security. As security shifts from the network to the user, it is enforced through granular access and permissions rather than through an internal network perimeter.
When Google does something, many follow. The move might change the way many organizations think about and approach security and, in the process, better align corporate policies with the use of SaaS and cloud.

2:30p
Friday Funny: Pick the Best Caption for Help Wanted
Looks like Kip and Gary may be getting some help?
Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.
Congratulations to Ben H., whose caption for the “Beached Data Center” edition of Kip and Gary won the last contest. Ben won with: “Not only does this thing help keep our servers cool, I’m using it to store my piña coladas too! Speaking of which, could you hop in there and fetch me one?”
Several great submissions came in for last week’s cartoon: “Smoking Rack” – now all we need is a winner. Help us out by submitting your vote below!
For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!

3:00p
Cost of Data Breaches and Cybercrime Will Top $2 Trillion by 2019 
This article originally appeared at The WHIR
By 2019, the cost of cybercrime and data breaches will rise to $2.1 trillion, according to a new study released by Juniper Research on Tuesday. Most of those breaches will come from existing infrastructure.
The Future of Cybercrime & Security: Financial and Corporate Threats & Mitigation report from Juniper estimates the cost of threats in 2019 will be four times higher than the cost in 2015, partially due to the ready availability of cybercrime products. This is similar to an April finding that a lack of cybersecurity professionals and the ease of buying cybercrime products on the dark web are causing an increase in breaches. “The average price for exploit kits is usually between $800 – $1,500 a month, depending on the features and add-ons,” said Carl Leonard, principal security analyst for Websense, in an April report. “The price is likely to remain low due to increased competition.”
McAfee also recently reported that cybercrime is on the rise. In the last year, security breaches have been prevalent, with several major companies experiencing hacks of varying severity carried out using a variety of methods. JP Morgan, Kmart, Dairy Queen, Home Depot, Xbox, ICANN, and Sony have all been targets of cybercrime designed to obtain sensitive data. Attackers can sell stolen credit card information on the online black market, where, according to IT security firm Hold Security, around 360 million stolen sets of personal credentials and 1.25 billion email addresses are also up for sale.
Researchers expect most of the crime to come from existing infrastructure and traditional computing devices. Although new threats are targeting mobile devices and IoT, it is unlikely these methods will become very popular due to the lack of payoff. “Currently, we aren’t seeing much dangerous mobile or IoT malware because it’s not profitable,” noted report author James Moar. “The kind of threats we will see on these devices will be either ransomware, with consumers’ devices locked down until they pay the hackers to use their devices, or as part of botnets, where processing power is harnessed as part of a more lucrative hack. With the absence of a direct payout from IoT hacks, there is little motive for criminals to develop the required tools.”
The average cost of a breach will go up as well, exceeding $150 million by 2020 as even more infrastructure becomes connected. Breaches will continue to be most prevalent in North America, which will account for over 60 percent of breaches worldwide.
This first ran at: http://www.thewhir.com/web-hosting-news/cost-of-data-breaches-and-cybercrime-will-top-2-trillion-by-2019

3:45p
NetCracker 10 Drives Convergence of IT and OSS
Looking to accelerate the convergence of traditional IT and operational support systems (OSS), NetCracker Technology, a unit of NEC, today unveiled an upgrade to its namesake management platform that includes analytics services designed to optimize the deployment of virtual server and network functions.
Sanjay Mewada, vice president of strategy at NetCracker, says that by including an orchestration layer in NetCracker 10 that is tied back to a Big Data analytics service in the cloud, the company is fundamentally changing how IT and OSS are managed.
“We’re standing at the threshold of a generational shift,” says Mewada. “We’re integrating as many as seven management domains in a single platform.”
Specifically, NetCracker contends that by making use of a Big Data analytics service integrated within NetCracker 10, organizations will reduce unnecessary traffic hops and re-routing. They can also more accurately model projected service demand while simultaneously optimizing capacity and improving application performance. To achieve that goal, NetCracker analyzes not only machine data but also external data sources such as social networks, as well as biometric data when relevant.
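As a rough illustration of what modeling projected service demand can look like, here is a minimal, hypothetical Python sketch that fits a linear trend to recent utilization samples and flags when projected demand will exceed available capacity. It is not NetCracker's analytics, just a sketch of the general idea; the utilization figures are invented.

```python
def project_demand(samples: list[float], periods_ahead: int) -> float:
    """Project future demand with a simple least-squares linear trend."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

# Hypothetical monthly utilization of a virtualized network function (percent).
utilization = [42.0, 48.0, 55.0, 61.0, 66.0, 73.0]
projected = project_demand(utilization, periods_ahead=3)
if projected > 90.0:
    print(f"Projected utilization {projected:.1f}% -- add capacity before it hits the ceiling")
else:
    print(f"Projected utilization {projected:.1f}% -- current capacity is sufficient")
```

Real OSS analytics would use far richer models and data sources, but the operational payoff is the same: the platform, not an administrator, decides when and where to add capacity.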
NetCracker customers are not obligated to deploy every module in the NetCracker 10 suite, but as OSS and IT continue to converge, the company is betting that organizations will want to embrace a more holistic approach to OSS and IT, an approach that, by definition, is more agile than existing systems managed manually.
A big driver of much of that change has been the emergence of network functions virtualization (NFV) software. NFV enables the replacement of many of the physical appliances that today clutter networking environments with standard hardware based on x86 processors or commodity silicon running NFV software. As that evolution occurs, it theoretically becomes more feasible to apply the same framework to managing both the IT and OSS environments.
Much of that convergence is also expected to be driven by the rise of the Internet of Things (IoT), which means thousands of endpoints connected to hundreds of gateways that in turn are connected to any number of distributed data centers. NetCracker is making the case for a new era of IT management that makes heavy use of analytics to automate the management of those highly distributed IoT deployments.
Of course, the rate and degree to which any of this convergence actually occurs is unknown. OSS and traditional IT departments are generally entrenched inside their organizations, with neither one enthusiastically looking to give up control over their respective domains. At the same time, the economics of converging OSS and traditional IT may prove too compelling for any organization to ignore.
4:30p
IBM Research Unveils Silicon Photonics Chip Capable of 100Gbps
Making a big step forward in silicon photonics, IBM Research said it has designed and tested a fully integrated, wavelength-multiplexed silicon photonics chip that enables the use of pulses of light instead of electrical signals over wires to move data. This step will lead to the eventual manufacturing of 100Gbps optical transceivers for commercial use.
With supercomputing and data center interconnects in mind as initial uses for the technology, IBM says it has demonstrated pushing 100Gbps over a range of up to two kilometers. Whether or not the new CMOS Integrated Nano-Photonics Technology IBM has developed can be built with existing manufacturing processes will determine exactly how quickly the product comes to market.
IBM says its chips use four distinct colors of light traveling within an optical fiber, each acting as an independent 25Gbps optical channel, for 100 Gbps bandwidth over a duplex single-mode fiber.
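The aggregate numbers here are straightforward to check. The short sketch below is a purely illustrative calculation: it multiplies the per-channel rate by the channel count and estimates how long a sample transfer would take at that rate (the 10 TB dataset size is an arbitrary example, not from IBM).

```python
CHANNELS = 4                 # four wavelengths ("colors") per fiber
PER_CHANNEL_GBPS = 25        # each wavelength carries 25Gbps

aggregate_gbps = CHANNELS * PER_CHANNEL_GBPS           # 100Gbps total
print(f"Aggregate bandwidth: {aggregate_gbps}Gbps")

# Example: moving a 10 TB dataset between data centers at line rate.
dataset_terabytes = 10
dataset_gigabits = dataset_terabytes * 1000 * 8        # TB -> Gb (decimal units)
seconds = dataset_gigabits / aggregate_gbps
print(f"10 TB at {aggregate_gbps}Gbps takes about {seconds / 60:.1f} minutes")
```

At 100Gbps, that hypothetical 10 TB transfer completes in roughly 13 minutes of wire time, which is why data center interconnects are an obvious first home for the technology.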
The chip enables the integration of different optical components side by side with electrical circuits on a single silicon chip using sub-100nm semiconductor technology. The essential parts of an optical transceiver, both electrical and optical, can be combined monolithically on one silicon chip and are designed to work with standard silicon chip manufacturing processes, according to IBM.
Arvind Krishna, senior vice president and director of IBM Research, said that “just as fiber optics revolutionized the telecommunications industry by speeding up the flow of data — bringing enormous benefits to consumers — we’re excited about the potential of replacing electric signals with pulses of light. This technology is designed to make future computing systems faster and more energy efficient, while enabling customers to capture insights from Big Data in real time.”

7:04p
Commvault Extends Data Protection to the Cloud
Moving to make data protection in the age of the cloud a simpler process to manage, Commvault this week released a bevy of data protection software offerings designed to collectively erase the divide between private and public clouds.
Starting with Commvault Cloud Gateway and Commvault Cloud Replication software, Phil Curran, director of product marketing for the Commvault Cloud Ops Business Unit, says that Commvault has reduced much of the complexity associated with deploying data protection software across a hybrid cloud computing environment.
“The cloud is clearly now part of the mainstream conversation in the data center,” says Curran. “Our approach is to integrate cloud APIs. We don’t want to require organizations to have to buy a separate cloud gateway appliance.”
As part of that effort, Commvault has also released Commvault Cloud Disaster Recovery and Commvault Cloud Development and Test software. Not content to simply back data up into the cloud, Curran says IT organizations want a simple tool through which they can redirect end users to instances of their applications and data running on a public cloud in the event of a disaster. In that scenario, there is no need to recover data until the IT organization as a whole is ready, notes Curran. Instead, users can continue to remotely access their applications running on a public cloud until the IT organization deems it appropriate to once again spin up local instances of those applications.
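The failover-now, recover-later workflow Curran describes can be summarized in a small state sketch. This is a hypothetical illustration of the general DR pattern, not Commvault's product logic; the class and method names are invented for the example.

```python
from enum import Enum, auto

class SiteState(Enum):
    ON_PREMISES = auto()       # users hit the local data center
    CLOUD_FAILOVER = auto()    # users redirected to cloud replicas
    FAILBACK = auto()          # local instances rebuilt, traffic returning

class DisasterRecoveryPlan:
    """Redirect users to cloud replicas first; restore locally when ready."""
    def __init__(self):
        self.state = SiteState.ON_PREMISES

    def declare_disaster(self):
        # Point end users at the replicated applications in the public cloud.
        self.state = SiteState.CLOUD_FAILOVER
        print("Users redirected to cloud instances; local recovery deferred.")

    def begin_failback(self):
        # Only when the IT organization is ready: spin local instances back up.
        if self.state is SiteState.CLOUD_FAILOVER:
            self.state = SiteState.FAILBACK
            print("Restoring data locally; users stay on cloud until cutover.")

plan = DisasterRecoveryPlan()
plan.declare_disaster()
plan.begin_failback()
```

The key design point is the decoupling: user access fails over immediately, while data recovery happens on the IT organization's own schedule.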
Rather than simply exposing Amazon Web Services and Microsoft Azure APIs, Curran says Commvault has created graphical applications that make it simpler for IT administrators to manage data protection workflows between their data centers and the public cloud. IT organizations can either deploy the Commvault software on a server or use a dedicated appliance, unveiled last year, that Commvault built in conjunction with NetApp.
In either case, Curran says Commvault is trying to eliminate data center and public cloud silos that wind up making deploying a data protection strategy involving public clouds a lot more complex than it should be.
While there are often many compliance issues to address when it comes to storing data in the cloud, the ability to spin up on a public cloud instances of applications that have suddenly gone offline for one reason or another is driving the emergence of disaster recovery-as-a-service (DRaaS). While the concept has been around for several years, Curran says it is only now becoming feasible for many IT organizations to automate the process in a way that does not require extensive integration work on the part of internal IT teams to actually make it work.