Data Center Knowledge | News and analysis for the data center industry
Tuesday, February 23rd, 2016
1:00p
Application Migration: Know Your Options
You’ve signed a contract with your favorite cloud provider, updated your data center with modern converged infrastructure, and optimized your entire ecosystem to support mobility and new kinds of endpoint technologies. You understand that to stay agile in today’s market, you have to design an environment that’s ready for virtualization and cloud.
Now that the task is done at the data center level, what about your applications?
Working with almost every industry vertical, it’s clear that data and applications continue to be serious sticking points. And with the current boom in mid-market spending on virtualization and cloud technologies, even more organizations are running into application migration challenges. Those challenges can be serious:
- Some apps can’t be virtualized.
- Some apps simply can’t be moved into a cloud ecosystem.
- Some apps have critical legacy dependencies.
- Some apps are challenged by compliance and security measures.
So now what? You’ve spent all this money on infrastructure and cloud only to be bound by a critical app or two. This forces progressive architects to hold on to legacy gear or an older part of their environment, which is not an ideal scenario.
Here’s the good news: new technologies are enabling a very granular understanding of applications and their unique requirements. So, if you’re facing a serious application migration challenge, know that you can work through these issues to get to a better state.
Before we move on, there is one important point to remember: it is entirely possible to end up with an immobile application. The blocker could be the code, the hardware requirements, or integration with other systems. If that happens, deploying a new app or service might be the only way forward, and the initial investment might scare you. But if you’re hoping for one last chance to hang on to that legacy app, consider the cost. You will have to hold on to legacy gear. You’ll have to find people to spend time supporting the app. You may have to hold back otherwise optimal systems just to keep it running. Most of all, you might be slowing down the business by clinging to an app only because “it still works.” Even scarier are the security implications of an old application. Know your options, know when and how to migrate, and make your decisions based on the long-term interests of the business.
With that said, let’s look at some ways you can work with challenging applications and some application migration options.
Understanding the Application DNA
Some vendors call them RAG reports: Red, Amber, Green. These reports help you understand the makeup of an application. If the report comes back ‘red,’ you know you have serious application issues; the blocker can be anything from code to drivers, or even the type of OS the app can run on. If the report comes back ‘amber,’ you know you can work with the app: you might need to make some minor policy changes, update a registry setting, or isolate a VM, but you can migrate or at least work with the application. If your report comes back ‘green,’ you know the app is agile and can move between cloud and virtualization ecosystems.
These reports and analyses allow you to understand the DNA of your application at a granular level. They help with migration planning, highlight clear challenges, and show where you can optimize. These assessments must be done to truly understand the entire makeup of your application environment.
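As a concrete illustration, here is a minimal sketch of how such an assessment might be scored. The attribute names and rules below are hypothetical placeholders, not taken from any particular vendor’s RAG tooling.

```python
# Hypothetical RAG (Red/Amber/Green) scoring for application migration readiness.
# Attribute names and rules are illustrative, not from any specific vendor tool.

def rag_rating(app):
    """Return 'red', 'amber', or 'green' for a dict describing an application."""
    # Hard blockers: physical hardware requirements, unsupported OS, or compliance holds.
    if app.get("requires_physical_hardware") or app.get("os_unsupported"):
        return "red"
    if app.get("compliance_hold"):
        return "red"
    # Items that need remediation but do not block migration outright.
    if app.get("legacy_dependencies") or app.get("registry_tweaks_needed"):
        return "amber"
    return "green"

portfolio = [
    {"name": "ERP",     "legacy_dependencies": True},
    {"name": "Payroll", "requires_physical_hardware": True},
    {"name": "Intranet"},
]

for app in portfolio:
    print(f"{app['name']}: {rag_rating(app)}")
```

Even a simple scorecard like this makes it clear which apps need remediation work before a migration plan is drawn up.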
What to Do With Mainframes?
I hear this question time and time again. The architecture is certainly aging, as are many of the people supporting the environment. Still, mainframe systems are absolutely critical to many businesses. Banking, financial services, manufacturing, pharmaceuticals, and a number of other industries use mainframe systems as core parts of their businesses.
The good news is that there is still a lot of development around mainframe architecture. Middleware can make mainframe data more agile, and cloud providers can actually be mainframe-friendly. For example, there are services that will take your IBM AS/400 and let it live in the cloud. They’ll handle the software upgrades and patches, and you essentially get a mainframe-as-a-service environment. These providers can even handle EDI transactions and data integration. One of the bigger challenges is moving from an older codebase to one that’s better supported.
The reality is that you might have to make that decision sooner or later. The best advice here is don’t let it surprise you. Plan for this type of migration and be ready to support your organization.
Working Around Dependencies
This is a challenging one to get around. There are so many applications out there, and each can be very unique. Some make calls to legacy Java environments, while others access an aging database. What do you do if it’s a custom app? This calls for a multi-layered approach. If the app accesses a legacy repository or database, try to create a parallel deployment of that data on a new system. From there, point the application at the new architecture and modify the app as needed. Once you break the application apart, you’ll be able to see what can be easily migrated into a cloud or virtual ecosystem and what can’t.
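As a hedged illustration of the parallel-deployment idea, the sketch below shows an application reading its database target from configuration instead of hard-coding the legacy host, so the eventual cutover becomes a configuration change rather than a code change. The environment variable name and connection strings are hypothetical.

```python
# Illustrative only: read the database target from configuration so the app can be
# repointed from the legacy system to its parallel replacement without code changes.
# Variable names and connection strings below are hypothetical.
import os

LEGACY_DSN = "postgresql://legacy-db.internal:5432/orders"
NEW_DSN = "postgresql://orders-db.cloud.example.com:5432/orders"

def database_dsn():
    """Pick the data source at runtime; default to the legacy system until cutover."""
    target = os.environ.get("ORDERS_DB_TARGET", "legacy")
    return NEW_DSN if target == "new" else LEGACY_DSN

print(f"Application will connect to: {database_dsn()}")
```

Running both data stores side by side behind a switch like this lets you validate the new system before you decommission the old one.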
Remember, dependencies can also be around networking gear or even distributed connections. In those cases, it’s important to understand the resources being utilized and where you can abstract the application. In some cases, the only way to move an app will be to decouple it from all existing and legacy resources. This can be a challenging but necessary part of the app migration process.
Getting Creative with Challenging Apps
Applications can be packaged, virtualized, installed directly into a virtual desktop, and even streamed. In some cases, a simple physical-to-virtual process will do the trick. Other times, you might have to do a clean installation on a VM. You may also have to install the application as a package with certain registry settings to make it work.
The point is you have options. First, understand the application and what will allow it to be most effective. Then, understand your cloud and virtualization delivery options and deploy accordingly.
Ripping Off the Band-Aid
Imagine the following. Your organization has an aging CRM tool. You’ve built add-ons into it, you’ve put in customizations, it helps run your business, and it’s the bulkiest and most fragmented piece of software you have. In fact, a new version of that on-premises software might even break some of the critical customizations you made yourself. Now you’re stuck with a CRM platform that’s customized for your business but cannot be upgraded and isn’t agile. Now what?
Many organizations are using new solutions, such as Salesforce, to move these massive operations into the cloud. What about those customizations? There are partners who will work with you to understand your code and move it into a cloud-ready CRM architecture. Then they help you test in parallel and build user readiness programs. These parallel migrations are the least disruptive and make the entire move a lot easier.
The pace of application development is staggering these days. We’re seeing cloud-based systems take over functions that only large on-premises environments could once handle. CRM systems, call centers, and even big data processing can all run within a cloud ecosystem. By working with challenging applications, you’re not only optimizing your data center, you’re also helping your business.
Don’t get discouraged. If you’re stuck with an app, work with cloud, data center, and virtualization partners who can help. There are truly fantastic ways to conduct parallel migrations, create user adoption, and even create new strategies for the business when it comes to working with new types of apps. Don’t get stuck with an old app; get creative!
4:00p
Disruptive Forces Shaping the Next Generation of Data Centers
Jack Pouchet is Vice President of Market Development for Emerson Network Power.
Traditionally, the data center has evolved in response to technology innovation, mostly server-based, and the pace and direction have been somewhat predictable. Disruptive forces such as cloud computing, sustainability, cybersecurity, and the Internet of Things are driving profound IT changes across all industries, creating opportunities and challenges in the process.
The data center, an enabler of disruption in many instances, is not immune. These forces are causing new archetypes to emerge that will change the data center landscape and improve productivity, drive down costs and increase agility. Four of these archetypes, in particular, will have a profound effect on the data center.
The Data Fortress
Cyber attacks have disrupted some of the world’s leading companies as our increasingly connected world creates more and more openings for hackers. The cost and frequency of data breaches continue to rise, despite the billions spent annually on digital security. A Ponemon study of data center downtime commissioned by Emerson Network Power found that the share of downtime incidents caused by security breaches rose from 2 percent in 2010 to 22 percent in 2015.
As a result, organizations are beginning to take a security-first approach to data center design, deploying out-of-network data pods for highly sensitive information—in some cases with separate, dedicated power and thermal management equipment. The next wave is the purpose-built, cold-storage facility with massive storage arrays protected by heavy investments in security systems and protected from access by all but authorized networks.
The Cloud of Many Drops
The reality of cloud computing today is that many enterprises are buying cloud capacity to bring applications online faster and cheaper, even while their in-house computing resources sit underutilized. Despite virtualization-driven improvements, too many servers remain underutilized; some studies indicate servers use just 5 to 15 percent of their computing capacity and that 30 percent of all servers are “comatose.”
We see a future where organizations explore shared service models, such as those being applied to everything from personal taxi service to legal counsel, to tap into this unused capacity on-demand – and even sell the unused capacity on the open market. This shared services approach could even result in increased enterprise server utilization, extended life for existing data centers that move toward a self-support model, and the ability for enterprises to build new data centers based on average rather than peak demand.
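To make the average-versus-peak point concrete, here is a back-of-the-envelope sketch; every figure in it is invented for illustration and is not drawn from the studies cited above.

```python
# Back-of-the-envelope comparison of sizing a data center to peak vs. average demand,
# assuming bursts above average are served by shared or cloud capacity.
# All numbers below are illustrative, not from the article.

peak_demand_kw = 1200        # hypothetical peak IT load
average_demand_kw = 450      # hypothetical average IT load
cost_per_kw_built = 10_000   # hypothetical build-out cost per kW of capacity

build_for_peak = peak_demand_kw * cost_per_kw_built
build_for_average = average_demand_kw * cost_per_kw_built

print(f"Capital to build for peak:    ${build_for_peak:,.0f}")
print(f"Capital to build for average: ${build_for_average:,.0f}")
print(f"Potential avoided build-out:  ${build_for_peak - build_for_average:,.0f}")
```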
Fog Computing
Distributed architectures are becoming commonplace as computing at the edge of the network becomes more critical. Introduced by Cisco, fog computing is a distributed computing architecture that connects multiple small networks into a single large network. Application services are distributed across smart devices and edge computing systems to improve efficiency and move data processing closer to the devices and networks that generate the data.
This provides a more efficient and effective method of dealing with the immense amount of data being generated by the sensors that comprise the Internet of Things (IoT). It also allows data to be aggregated and filtered locally to preserve bandwidth for actionable data.
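Here is a minimal sketch of the local filter-and-aggregate pattern that fog computing implies, assuming a hypothetical temperature sensor feed; the readings and alert threshold are invented for illustration.

```python
# Sketch of fog-style local processing: aggregate sensor readings at the edge and
# forward only a compact, actionable summary upstream, preserving bandwidth.
# Readings and the alert threshold are invented for illustration.
from statistics import mean

def summarize(readings, alert_threshold):
    """Reduce raw readings to a small summary; flag only out-of-range values."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "alerts": [r for r in readings if r > alert_threshold],
    }

raw_temperatures = [21.3, 21.5, 21.4, 35.2, 21.6]   # one anomalous spike
upstream_payload = summarize(raw_temperatures, alert_threshold=30.0)
print(upstream_payload)  # only this summary would leave the edge site
```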
The Corporate Social Responsibility Compliant Data Center
Energy efficiency continues to be important for an industry with seemingly limitless consumption needs, but other drivers—most notably an increased focus on reducing carbon footprint among some organizations—are pushing the focus toward sustainability and corporate responsibility.
Many organizations, including colocation and cloud service providers, will take a more aggressive approach to data center efficiency, adopting, for example, cooling with maximum economization and UPS systems that use active inverter eco-mode to deliver high efficiency. They will also push for increased use of alternative energy, such as wind and solar, to power data center operations and achieve carbon neutrality.
Speed, cost, security, sustainability, application availability, and productivity must all be factored into future data center archetypes as data center operators deal with disruption from inside and outside their organizations. Software-defined management will increasingly provide the flexibility organizations need to move away from single-instance data centers, in which all data and applications are treated with the same level of resiliency and security, while adopting the technologies and practices being pioneered in these archetypal data centers. The result is a data center ecosystem capable of accommodating all of these disruptive trends through a multimodal model, in which the environment is tailored to the specific needs of the data, applications, and users it supports.
By working with application owners to meet their specific needs, data center operators will have the opportunity to build on their role as service providers to become trusted advisors.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:30p
What Senior Execs Must Know about Enterprise Data Center Strategy
There’s no question that the data center is now a core part of the business. Entire business capabilities are built around the power of IT solutions inside the data center. This is why it’s more important than ever for any company to select the correct data center strategy, considering a number of crucial factors:
- TCO vs risk
- Role of cloud or a private colocation
- Build vs buy
- Impacts on network and overall infrastructure security
- How to reduce risk and liability
Remember, data centers can help organizations stay agile and adapt to fluid market demands. At Data Center World next month, Mark Evanko, principal engineer at BRUNS-PAK, will discuss the major considerations board members and senior management should take into account when developing enterprise data center strategy.
Evanko has an extensive background in data center architecture and strategy. He will walk the audience through sixteen elements to consider when developing short-term and long-term enterprise data center solutions. He will also talk about enterprise data centers and unresolved liability: who is responsible when something goes wrong?
“Network security and downtime, and the corresponding liability, is dominating the news,” Evanko said. “If the data center function is provided by third-party providers (cloud/colo) that have in general no liability for data breaches, hacking, or downtime … what does a board do? We’ll cover major considerations for board and senior management when it comes to developing real-world enterprise data center solutions.”
Sign up for Data Center World and attend his session, titled “Executive Considerations for Data Center Enterprise Solutions in 2016 and Beyond” to learn more about:
- Real-life feedback and considerations from boards of directors, trustees, and senior committees regarding their decision matrix for selecting an enterprise data center solution in the 2015/2016-and-beyond marketplace
- Total cost of ownership vs. risk around buying and acquiring data centers (a simplified, illustrative sketch follows this list)
- How data center, cloud, colocation, network, disaster recovery, opex vs. capex, risk, and liability all contribute to the “enterprise data center solution”
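As a purely hypothetical illustration of the TCO-versus-risk framing, the sketch below compares a build option and a colocation option over a planning horizon; every figure is invented, and a real decision matrix would also weigh risk, liability, and the opex-versus-capex treatment the session describes.

```python
# Hypothetical build-vs-colocation TCO comparison over a planning horizon.
# Every figure is invented for illustration; a real model would also price risk
# and liability, not just direct costs.

years = 10
build_capex = 12_000_000          # up-front construction cost (capex)
build_annual_opex = 900_000       # power, staff, maintenance per year
colo_annual_cost = 2_100_000      # recurring colocation fees per year (opex)

build_tco = build_capex + build_annual_opex * years
colo_tco = colo_annual_cost * years

print(f"Build TCO over {years} years: ${build_tco:,.0f}")
print(f"Colo TCO over {years} years:  ${colo_tco:,.0f}")
```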
Want to learn more? Join BRUNS-PAK’s Mark Evanko and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency, and agility for data center leadership in the digital enterprise. More details on the Data Center World website.
5:56p
CenturyLink Launches FedRAMP-Compliant Government Cloud
Hosting provider CenturyLink has added a new Infrastructure-as-a-Service offering aimed at federal government agencies, giving them a way to migrate and extend their data center workloads to the cloud while complying with federal security standards.
According to the announcement, the new service known as “CenturyLink Government Cloud” will be available under CenturyLink’s General Services Administration (GSA) Networx contract and is planned to be available on other GSA contracts. CenturyLink Government Cloud meets the government’s rigorous security standards required by the Federal Risk and Authorization Management Program, known as “FedRAMP”.
CenturyLink’s offering is based on the FedRAMP-approved VMware vCloud Government Services platform. “CenturyLink Government Cloud combines the power of the VMware cloud with our carrier-class network and secure data centers, positioning us as a leading provider of hybrid IT solutions to the government,” according to CenturyLink Senior VP and GM Tim Meehan, who leads the company’s federal government team.
Hosting provider Carpathia also uses VMware as the basis for its government cloud services.
Managed by GSA, FedRAMP is a risk management framework that helps government agencies choose cloud-based products and services. Major cloud providers such as Amazon Web Services and IBM place a huge value on FedRAMP as a way to access lucrative government contracts. Recently, FedRAMP Fast Forward, an industry advocacy group for the hosting industry, has been pushing for improvements to the process of obtaining FedRAMP approval to allow more service providers to compete for government contracts.
This first ran at http://www.thewhir.com/web-hosting-news/centurylink-launches-new-fedramp-compliant-iaas-cloud-for-government-agencies
6:02p
IT Innovators: Eying IT’s Influence in Education
Virtual reality may seem like the stuff of science fiction, but it is increasingly a reality, and it is being used in new ways, including in education. That means institutional IT departments need to begin thinking about how to integrate it, and other new technologies, into their operations, says Emory Craig, director of e-learning and instructional technology at the College of New Rochelle in New York.
Craig’s role at New Rochelle puts him in charge of both making sure online course systems are operational and integrating emerging technologies into classrooms. That means managing the learning management system and working with staff and faculty to make sure things are running smoothly. This process could involve setting up lecture capture, for example, and arranging for the digital storage of those lectures. Or, it could mean working with faculty to establish split classrooms, where lectures are placed online and in-class time is used for workshops, questions and special projects.
“They’re looking for things that are user friendly, very simple to use, and that have a great deal of support,” Craig says of the faculty at New Rochelle. That’s why New Rochelle’s IT department recently moved to a cloud-native learning management system (i.e., one built specifically for cloud platforms): to help give faculty the tools they need to support technology-forward teaching.
But Craig’s role also means working with different players at the college to introduce new technologies for online and electronic learning. Virtual reality increasingly plays a role here. Many people think of virtual reality as a way to make video games more entertaining, or perhaps something used only by the wealthiest or most technologically connected people. But it is increasingly a tool that all kinds of people can access, Craig says, with materials as simple as a $15 viewer made of cardboard.
“This is the future of learning, media and entertainment,” Craig says of developments in virtual reality. “It is going to transform everything we do.” And it shouldn’t be all that surprising given that virtual reality has been named a key trend for anyone involved in IT Infrastructure these days. While it’s clearly an end user computing trend, it will fall to IT professionals to ensure the proper infrastructure is in place to support it.
According to Craig, there are three key ways this IT trend can be integrated into the education industry. The first is to provide realistic, deeply immersive experiences. “It opens up the opportunity to not just read a text about something or watch a film about something, but to step into an environment,” he says. It could be used, for example, to insert students into historical situations.
Additionally, virtual reality can be used to provide training based on more specific scenarios such as a particular natural setting or medical procedure. “I already have nursing faculty talking about how they could use something like this to help train their nursing students,” Craig says.
Finally, virtual reality can add new powers to documentary filmmaking, another powerful teaching tool, Craig says. “It’s one thing to watch something on a screen. It’s another thing to step inside a screen and be inside an experience,” he says.
Craig is well aware of the cost constraints many universities are working under, and how they might affect the willingness of institutions to experiment with new technology like virtual reality or split classrooms. But as costs come down and the technology advances, the tools for small-scale virtual reality become increasingly accessible. Craig has his students use cardboard viewers for a course he is currently teaching on new media and society, for example. It provides them, and the college, with an easy way to see what is possible with this technology, and where it might be worthwhile to invest in the future.
“For a lot of institutions, I think it’s hard to put the financial commitment into it until you see exactly what you’re going to do with it,” Craig says. “It becomes kind of a quick win.” The same approach could be applied to any IT department. And making larger-scale choices, like choosing a cloud-based learning management system that allows for the integration of emerging technology, can help an institution or IT department take a big leap forward.
“That’s a really exciting thing, and something that I’d love to see being rolled out more,” Craig says of the moves an IT department can take to make classrooms more engaging, interactive and collaborative. “We’re in a modern era where it’s increasingly difficult to say ‘I’m just going to stand in front of a room and I’m going to tell you what I know.’”
Terri Coles is a freelance writer based in St. John’s, NL. Her work covers topics as diverse as food, health and business. If you have a story you would like profiled, contact her at coles.terri@gmail.com.
The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.
This first ran at http://windowsitpro.com/it-innovators/it-innovators-eying-it-s-influence-education
7:34p
Verizon to Buy XO Communications’ Fiber Business for $1.8B
Verizon has added a fiber-optic network business to its portfolio.
The telecommunications giant has announced plans to acquire the fiber-optic network business of XO Communications for approximately $1.8 billion.
In addition, Verizon will simultaneously lease available XO wireless spectrum, with an option to buy that spectrum by year-end 2018.
So what does the acquisition mean for Verizon and XO?
The acquisition of XO’s fiber-optic network business provides Verizon with access to XO’s fiber-based IP and Ethernet networks and fiber facilities to help the company “continue to densify its cell network,” according to XO.
Verizon said it expects several financial benefits from the transaction, including a step-up in the basis of the assets as well as operating and capital expense savings. The net present value of the operational synergies between Verizon and XO is expected to exceed $1.5 billion.
Read more: Who May Buy Verizon’s Data Centers?
Also, the transaction may help XO strengthen its business.
“This transaction will create a stronger provider of business broadband services for the customers of XO Communications,” Chris Ancell, XO’s CEO, said in a prepared statement.
Verizon noted the transaction is subject to customary regulatory approvals and is expected to close in the first half of 2017.
This first ran at http://talkincloud.com/telco-hub/verizon-acquire-xo-communications-fiber-business-18-billion