Data Center Knowledge | News and analysis for the data center industry
Thursday, February 2nd, 2017
1:00p
New VMware NSX Promises Easy Microsegmentation for Data Center Networks
VMware is promising that the latest version of NSX, its data center network virtualization platform announced Thursday, will help enterprises accelerate microsegmentation of their network traffic, making traffic patterns more manageable and transparent within a matter of weeks.
In the old way of doing things, a data center’s active VMs would be scanned using an application discovery manager (a class of product VMware discontinued in 2013), and its traffic logs would be recorded and scanned by vRealize Network Insight, Milin Desai, VMware’s VP for NSX, explained in an interview with Data Center Knowledge. Assessments of the data center’s traffic flow would be fed into a console, from which an operator may refer to those assessments while manually writing a script.
Although exporting network logs is always essential for auditing purposes, he said, NSX 6.3 (the latest release) effectively replaces what was called “Activity Monitoring” with Endpoint Monitoring. This new feature can analyze network traffic for a period between 24 and 48 hours. From there, the new Application Rule Manager will automatically set up the conditions for microsegmentation rules for that traffic, which may then be deployed at will.
“It will help streamline deployment factors, and also for smaller organizations, it will help the ‘uber-admin’ to take an application and microsegment it faster, without making mistakes, missing a port, or adding a protocol that was not supposed to be there,” said Desai.
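To illustrate what that kind of automation involves, here is a minimal sketch of the underlying idea: collapse a monitoring window of observed flows into one candidate allow rule per unique source, destination, port, and protocol, plus a default deny. The flow fields and rule format below are hypothetical and are not NSX's actual Endpoint Monitoring schema or API.

```python
from collections import OrderedDict

# Hypothetical flow records from a 24-48 hour monitoring window.
# Field names are illustrative only, not the NSX schema.
observed_flows = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "proto": "TCP"},
    {"src": "app-tier", "dst": "db-tier",  "port": 3306, "proto": "TCP"},
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "proto": "TCP"},  # repeat flow
]

def propose_rules(flows):
    """Collapse observed flows into one allow rule per unique
    (src, dst, port, proto) tuple, then close with a default deny."""
    unique = OrderedDict()
    for f in flows:
        unique.setdefault((f["src"], f["dst"], f["port"], f["proto"]), f)
    rules = [{"action": "allow", **f} for f in unique.values()]
    rules.append({"action": "deny", "src": "any", "dst": "any"})
    return rules

for rule in propose_rules(observed_flows):
    print(rule)
```

In this reading, the tool builds the rule set in the background and an operator reviews it before it is, as Desai put it, deployed at will.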
Bit by Bit
When VMware premiered its NSX network virtualization platform in 2014, it introduced DevOps professionals to its version of the concept of microsegmentation — isolating traffic by job function rather than the identity of the application.
Virtualizing a network enables an administrative console, or a network orchestrator, to symbolically subdivide giant networks into small streams according to the traffic they facilitate. This way, security rules can be applied to those small streams individually, making them much more effective than perimeter firewall rules — especially in situations like Web traffic where multiple functions may use the same numbered IP port.
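As a deliberately simplified, non-NSX illustration of the difference: a perimeter firewall can only reason about addresses and ports, while a microsegmentation policy attaches rules to logical groups of workloads, so two services sharing port 443 can be treated differently. The tags and rule fields below are hypothetical.

```python
# Illustrative only; these are not NSX objects or API calls.

# A traditional perimeter rule: every workload behind port 443 gets the same treatment.
perimeter_rule = {"action": "allow", "direction": "inbound", "proto": "TCP", "port": 443}

# Microsegmentation rules keyed on workload function (expressed here as tags),
# so traffic on the same port is allowed or denied per logical segment.
segment_policy = [
    {"action": "allow", "src": "tag:storefront-web", "dst": "tag:payments-api", "proto": "TCP", "port": 443},
    {"action": "deny",  "src": "tag:storefront-web", "dst": "tag:hr-portal",    "proto": "TCP", "port": 443},
]
```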
While microsegmentation has been an articulated ideal since the beginning of the decade, NSX was perhaps the first component to bring the ideal out of the clouds and put it on the debating table. But two-and-a-half years into the debate, many enterprises still don’t really know what it is or how it works.
Indeed, a number of NSX customers today may not have even tried it yet. So the addition of an Application Rule Manager, which builds the bases for microsegmented rules in the background, could give these customers a boost.
“If a customer today says, ‘You know what, I want to start a trial,’” said Desai, “within a week we can be in the environment deployed, and starting to monitor it. And within two weeks, we can be putting [in place] the first set of rules for their first applications.” He suggested that new customers adopt microsegmentation not all at once — not for entire networks — but one application or one use case (such as business process management or virtual desktops) at a time.
Customers adopting this approach, Desai projects, may be able to move microsegmentation from development to production in the same quarter.
Dismissing Agents
VMware’s move comes on the same week that Cisco announced updates to its Tetration Analytics network monitoring suite — effectively completing the package by filling in features that were not quite ready for prime time last July. Application segmentation is one of those features Cisco added.
Although NSX is not, technically speaking, an application performance management (APM) platform, the functions that an admin performs with NSX and vSphere, and with Tetration Analytics, do belong to the same category. They are two means of attaining the same objective: monitoring the throughput of functions over a network, and applying real-time remediation to expedite them.
What NSX does away with, he points out, are the agents that APM tools use (and that Tetration uses) to establish remote endpoints. Desai told us that NSX was designed to provide the same functionality for which APM tools would deploy surrogates. An outside monitoring tool needs to make contact with something close to the application, he said, whereas NSX is already close enough.
“Because we have the hypervisor, and the application is hosted on the hypervisor,” he said, “we’re able to get information about flows and processes and endpoints fairly easily. The way you distribute and manage agents, that’s a whole complexity in itself. This has been done in the antivirus world, and you know how hard that is, from a lifecycle standpoint.”

4:00p
Microsoft Takes Hands-Off Stance on LinkedIn Data Centers, for Now
Microsoft’s near-term strategy for LinkedIn is to let the social network continue growing its user base and improving user experience without making any potentially disruptive changes, including changes around infrastructure, Microsoft CFO Amy Hood said on the company’s earnings call last week.
“Over time of course I’m sure they’ll want to take advantage – as we build new services together – of some of our infrastructure assets,” she said. “But in the short term, the most important thing is [that] they continue to add value and usage and great experience for their members. And so I have really no intention of messing with that in terms of capital expenditures in the short [term] and next six months for sure.”
The statement appears to answer the question about Microsoft’s plans for LinkedIn data centers, at least in the immediate future. Before Microsoft’s blockbuster $26.2 billion acquisition of the professional social network last year, LinkedIn had started executing on an ambitious plan to build out a hyper-scale global data center platform.
Read more: LinkedIn Adopting the Hyperscale Data Center Way
That plan includes designing its own hardware and proposing its own hardware standard, Open19, positioning it as an alternative to Open Compute Project. Microsoft is one of OCP’s biggest members and has standardized its data center hardware on OCP-based designs.
Hood’s statement signals that at least for the time being, Microsoft has no intention to push its new subsidiary to abandon its infrastructure strategy to bring it more in line with Microsoft’s.
See also: LinkedIn Deal Means More Microsoft in Digital Realty Data Centers
Companies consolidating data center infrastructure post-acquisition is common. Facebook, for example, decided it would be worthwhile to go through the extremely complex exercise of moving Instagram’s backend from Amazon Web Services into its own data centers after it bought the photo sharing platform in 2012. But that migration did not happen until 2014.

4:36p
How a Tech Company from the 60s is Taking on AI, IoT
Brought to You by Talkin’ Cloud
When we talk about the next big thing in tech, it’s easy to overlook the fact that much of it relies on ideas that have existed for decades.
Take for example artificial intelligence (AI), which was formally founded in 1956 at a conference at Dartmouth College in Hanover, NH; or the Internet of Things (IoT), which was born in the 1990s. These technologies have spurred billions of dollars of investment and the creation of hundreds of companies. But one of the companies poised for significant growth in these areas is itself more than 50 years old.
Intel, the microprocessor company founded in 1968, is seeing tremendous opportunity in these emerging technology areas as demand for PCs and chips for traditional data centers softens. Intel’s IoT business was its fastest-growing business segment of 2016, with 15 percent growth year over year, while its fourth-quarter sales of server chips to cloud service providers jumped 30 percent year over year.
Partnerships with cloud service providers, the biggest public cloud firms in the world, are helping drive demand for Intel products, says Raejeanne Skillern, VP of Intel’s Data Center Group and general manager of its Cloud Service Provider Group.
 Raejeanne Skillern, VP of Intel’s Data Center Group (Photo: Intel)
“They can consume technology as fast as we can deliver it,” Skillern says about Intel’s cloud service provider customers, which include Amazon Web Services (AWS) and Microsoft. It’s a different pace from 8 or 9 years ago when she first joined the data center group. At that time, “Amazon was just coming on board, Google was growing in search, and Intel’s position was very small, we had about 35 percent market share because AMD was largely winning in that space,” she says.
Intel calls its largest cloud service provider customers ‘Super 7’ – AWS, Google, Microsoft, and Facebook in the U.S., and in China, Tencent, Alibaba, and Baidu.
“Our strategy is fundamentally based on having deep and first-hand knowledge and collaboration with our largest customers,” Skillern says. “They’re kind of unique in the way they grow. We’ve been fortunate enough that they’ve allowed us to go in and have these very direct engagements; I think that’s kind of surprising to some because we’re a silicon ingredient supplier of an infrastructure system – why would Intel as one component have these direct relationships? But it’s because of the deep customization that we do.”
The results of this customization include Amazon’s C5 instances, which are set to launch in early 2017 and are based on Intel’s next-generation Xeon cloud processors, code-named Skylake. Amazon says the C5 instances are “ideal for compute-intensive workloads like ad serving, scientific modeling, 3D rendering, cluster computing, machine learning inference, and distributed analytics.”
“For us as we bring the Skylake platform to market obviously getting our best technology into the service provider segment is going to be critical throughout the year,” Skillern says.
Part of this focus is the early ship program where cloud providers can get a hold of Intel technology a couple months ahead of the market. “This is because they’re not as constrained by the OEM ecosystem and the validation schedules that apply to enterprise,” she says.
Over the next 12 months Skillern and her team will focus on driving faster adoption in the cloud service provider segment and helping these customers grow through different services that use Intel technology, including high performance compute as a service, AI as a service, big data analytics as a service and security as a service.
“A lot of people know us as a compute provider, but a lot of these challenges are happening at their platform level and software level. When they create solutions they think of the entire stack, not a bunch of little, individual ingredient parts,” she says.
“A lot of what we do is we match our engineering teams with theirs to meet at the level that they’re at, whether it’s platform, hardware, or OS,” she says. “As they design their next-generation systems we help co-design a lot of the boards and systems with them.”
Aside from Skylake, Intel also plans to launch the next processor in the Xeon Phi family, code-named Knights Mill, to power artificial intelligence.
“Artificial intelligence has been around for decades,” Skillern says. “We’re seeing it becoming very pervasive; talk-to-text, photo recognition and tools, ad matching, all of those analytics capabilities and tools come together to provide better services and answers to important questions.”
Intel laid a lot of the groundwork for its AI strategy in 2016, making several key acquisitions, including a company called Nervana, which has a software and hardware stack for deep learning. In November Intel said it plans to integrate Nervana’s technology into a new product code-named Knights Crest, and announced the formation of the Intel Nervana AI board to further AI research and strategy.
Its partnership with Google will also help drive its AI efforts, including a focus on optimizing the open source TensorFlow library to give software developers a machine learning library that can drive the next wave of AI innovation, Skillern wrote in a blog post last year.
“Artificial intelligence can bring amazing outcomes in medical science, retail, and business operational efficiency. I also get excited working with it because you’re bringing together such amazing minds. The company we acquired, Nervana, is a talented group of about 50 people who are some of the smartest you’ll meet in the industry,” Skillern says.
“It’s just really fun when there’s something new, and you get to learn it and see the possibilities and impact,” she says.
This article originally appeared on Talkin’ Cloud.

5:00p
Companies Anticipate Big Software Deals, With Help From Trump
By Brian Womack (Bloomberg) — The software industry went on a shopping spree in 2016, and this year could be even busier, bolstered by the new president’s policies and what one analyst suggests could be a major deal by Google.
The value of software deals in 2016 topped $115 billion for acquisitions closed or pending, according to data gathered by Bloomberg. That’s up about 19 percent from 2015 and easily outpaced the growth of deals in the overall technology market, which was slightly down. And this year is already off to a strong start with Cisco Systems Inc. agreeing to acquire AppDynamics Inc. for $3.7 billion right as the company was planning to go public.
Oracle Corp. and Salesforce.com Inc. were among the bigger buyers in 2016, but this year companies that have been relatively quiet may step up, analysts say. They could include other important names in enterprise technology — and potentially some of the biggest players in the broader industry: Google and Amazon.com Inc. The new Trump administration has talked about rewriting tax provisions that could return profits stored away in other countries, which would fuel the size and frequency of deals. And while blockbuster deals can be difficult to pull off, some see the potential for Google pursuing Salesforce.
“There’s no question that it’s going to be an active year,” said Crawford Del Prete, an analyst at IDC. “The stakes are just so high for these software companies to make sure that they stay relevant.”
Arete Research Services LLP, an influential research firm, issued a report arguing why Google parent Alphabet Inc. should make a big merger splash and buy Salesforce. The note says Google could nab it for around $73 billion, immediately giving its cloud business surer footing in enterprise sales. Such a deal would also have a better chance of passing regulatory muster than one in ads or search, according to the Arete analysts.
It’s a radical proposal, but not implausible. Google is pouring nearly everything it has into the cloud. Since recruiting enterprise veteran Diane Greene in late 2015, Google has invested internal resources and acquisition coffers in her division, which includes its customized workplace apps and cloud storage service. Google and Salesforce both declined to comment.
But not everyone thinks it’s a certainty. “I wouldn’t think this is particularly likely,” said Jonathan Atkin, managing director of RBC Capital Markets. “But this is a way to jump-start their momentum they have in their cloud product. It’s an acquisition of a customer base, effectively.”
Amazon is also trying to grow its cloud business. It’s the largest player in the so-called public cloud, which lets customers easily rent computing and storage power in its data centers. Amazon declined to comment on potential acquisitions.
“I think you’re going to start seeing them being more aggressive in 2017,” said Sean Jacobsohn, a venture investor at Norwest Venture Partners.
A potential change in tax laws could free up more spending money for a lot of players. U.S. companies have earned profits overseas for years, but have tucked them away in foreign countries to escape a U.S. tax rate of 35 percent. As long as the profits remain overseas, they remain untouched by the government.
Yet Trump’s nominee to lead the U.S. Treasury Department, Steve Mnuchin, may give companies a new incentive. He said in November he wanted to bring “a lot of cash back to the U.S.,” proposing a one-time 10 percent repatriation tax to bring back corporate profits held abroad, another key promise by Trump.
“I expect that ’17 will be another big year in M&A, particularly if we get repatriation or a tax cut,” said Joel Fishbein, an analyst at BTIG. “I think that will spur more M&A as some of those offshore dollars can be used for domestic acquisitions.”
There is a lot of corporate money overseas that could come back inside the country if Trump acts on taxes. Oracle and Microsoft have more than 80 percent of their cash, near-term cash and short-term investments in foreign subsidiaries, according to recent filings. The cash overseas held by large tech companies runs into the hundreds of billions of dollars, according to Ross MacMillan, an analyst with RBC, and that’s money companies may use to go shopping. Oracle and Microsoft declined to comment.
Overall in 2016, the value of merger-and-acquisition business software deals totaled $117.6 billion and included a wide swath of software companies in technology sectors, along with some in health care and entertainment. That doesn’t include the blockbuster tech deal of the year: Microsoft paying $26 billion for LinkedIn. LinkedIn does not fit neatly into the category of business software because its professional networking tools are used by workers outside of business hours.
Even without repatriation, companies are sitting on a lot of cash that can be used to finance deals. Cisco — which has about $70 billion in cash, equivalents, and investments overall — saw an opportunity to expand its business-software capabilities with the deal for AppDynamics, which had previously been valued at more than $1 billion, making it a so-called unicorn. It’s the company’s biggest software acquisition since 2012.
The purchase is also significant because it derailed the first potential IPO of the year for a U.S. tech company. Now it’s seen by some advisers as the first in a long-expected wave of unicorn buyouts. With more than 150 companies valued at more than $1 billion and a temperamental IPO market, being acquired offers startups an exit from the funding roller coaster and can also shield them from the regulatory and investor scrutiny that comes with a public listing.
“IPO investors still aren’t paying premiums, so M&A looks like an attractive option to private companies,” according to Dan Scholnick, a general partner at Trinity Ventures. Unless IPO investors are willing “to pay up, the M&A trend will continue.”
International Business Machines Corp., which recently reported its 19th consecutive quarter of sales decline, could gobble up smaller companies to reinvigorate growth as well. IBM declined to comment. SAP SE, a rival to Oracle and Salesforce, might also be on the hunt for cloud acquisition targets, analysts say. Its last big multibillion-dollar deal came in 2014, when it spent more than $7 billion on Concur Technologies, which helps businesses manage travel expenses. Still, SAP, which declined to comment on specific future targets, signaled it will look at acquisitions under $1 billion.
Still, there are several factors, analysts say, that could lead to a disappointing 2017 for deals. The Trump administration could fail to pass tax reform. And because 2016 was such a busy year, it may be difficult to keep up the pace; some of the acquirers will need time to integrate their purchases. Some say that many of the interesting companies got picked up last year.
“You did see a lot of good assets come off the table,” said Rodney Nelson, an analyst at Morningstar.
Not everything, though. Analysts say two big potential targets for companies looking to expand their cloud services are Workday Inc., a provider of human resources software, and ServiceNow, which helps companies manage technology and human resources tools. Both are relatively large, each with a valuation of about $15 billion. Both companies declined to comment.
Other companies that might look like good buys include FireEye Inc., a security company; Hubspot Inc., a cloud-based marketing service; and Zendesk Inc., which serves call centers, according to Brad Reback, an analyst at Stifel Nicolaus & Co.

5:40p
Three Solar Farms, Costing $45M, to Power Facebook Data Center in New Mexico
Affordable Solar, an Albuquerque-based solar-farm developer, will build three solar farms that will supply energy to the future Facebook data center in New Mexico. The projects will cost $45 million total, the office of New Mexico Governor Susana Martinez announced this week.
Facebook builds and operates some of the world’s largest data centers. Like many other companies with infrastructure of comparable scale, the social networking giant has committed to powering its data centers with 100 percent renewable energy, and the solar projects will bring it closer to that goal.
It’s notable that the projects are in the same state as the previously announced Facebook data center site. Investment in renewable energy generation on the same section of the utility grid as the data center is a more effective way of ensuring the data center is carbon-neutral than the common practice of investing in renewable energy generation far away from the point of consumption and applying renewable energy credits remotely.
Read more: Switch Gets All A’s, Four Providers Fail Green Data Center Test by Greenpeace
The Facebook data center in New Mexico, whose first phase will span about 500,000 square feet, is expected to come online next year, and the first solar farm is slated for completion by January 1, with the remaining two following closely behind, according to a statement from the governor’s office.
The three solar farms will supply only a portion of the future facility’s load; the data center will also use wind energy, according to the announcement.
See also: How Renewable Energy is Changing the Data Center Market

6:10p
Five Steps to Make Mobile Compatibility Testing More Agile and Future-Ready
Pavel Novik is Mobile Apps Testing Manager for A1QA.
With the number of devices and operating systems appearing on the market, fragmentation poses a particular challenge to software testing and quality assurance specialists. It turns out that many companies are not ready to face the ever-growing number of OSs, platforms, and devices.
The Android operating system seems to be the most complicated issue. Why so? Compare the following data: in 2012 there were about 4,000 Android device models on sale. By the end of 2015, this figure had increased sixfold to a total of 24,000 distinct Android devices.
Obviously, companies eager to take the lead in the industry should not only focus on constant innovation, but also renew their set of mobile devices on a regular basis.
Today, the most crucial point for mobile and smart device application testers is to ensure the tested product works smoothly on all types of devices used by end users. To meet this demand, they need to take into account the various network conditions under which the apps are used and the experience users derive from them. This is commonly referred to as the compatibility of the application, and the quality assurance process to make sure it’s achieved is usually named “compatibility testing.”
There are three factors that regularly increase the complexity of compatibility testing:
- Frequent launch of new device models that incorporate new mobile technologies. To make things worse, sometimes new models appear on the market when testing has come to its logical end, which poses new challenges to product owners and managers. Another challenge that may arise while the product is being tested for compatibility is changes to UI, font size, CSS style, and color, which make the testing procedure more difficult.
- Testing can’t be limited to browser layer checks and must cover additional layers, such as operating systems and technology features.
- Application functionality depends on the device’s hardware features, which requires understanding how those features impact the functionality.
To make this point clear, let’s consider the following case concerning the popular smartphone app Pokémon GO, which combines augmented reality and GPS tracking.
When testing the application (of course, it’s impossible to provide full testing coverage without access to the app’s backend), it was found that on some devices the application wasn’t matched with the camera drivers and therefore switched off the augmented reality mode, an important factor in the app’s incredible success.
With all this in mind, the main question for mobile testers should be how to run compatibility testing and:
- Cover the maximum number of devices and come close to 100 percent of my end users’ device base?
- Test for those functionalities that are more likely to fail in the event of a technology upgrade or a new device appearing on the market?
- Trace the detected issue to the corresponding layer – OS, browser, device features or skins?
The following five-step approach to make compatibility testing more agile and scalable will help answer these questions. At first glance, it may look time-consuming, but it will pay off in the end.
Step One – Create the Device Compatibility Library: Take every device or model available in the market and structure the following information: platform details, technology features supported by the device (audio/video formats, image and document formats, etc.), hardware features included in the device, and network and other technology features supported. Most of this data will be easy to find on the manufacturer’s website or in product release notes. This list will be helpful on numerous projects.
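A minimal sketch of what such a library entry might look like; the device names and field names below are hypothetical, and the point is the structure rather than any particular schema.

```python
# Hypothetical device compatibility library; models and fields are illustrative.
device_library = {
    "acme-phone-x1": {
        "platform": {"os": "Android", "version": "7.0"},
        "media_formats": ["mp4", "h264", "aac", "jpeg", "pdf"],
        "hardware": ["gps", "rear_camera", "fingerprint"],
        "network": ["lte", "wifi", "bluetooth_le"],
    },
    "acme-tab-2": {
        "platform": {"os": "Android", "version": "6.0.1"},
        "media_formats": ["mp4", "h264", "jpeg"],
        "hardware": ["front_camera"],
        "network": ["wifi", "bluetooth_le"],
    },
}
```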
Step Two – For every unique project, shortlist devices from the library based on the peculiarities of the target region or country, to cover the maximum number of end users there: Consider the available poll results and market analysis. You can also use sites like DeviceAtlas, StatCounter or Google Analytics to define the most popular devices in the region.
Step Three – Divide all devices into two lists: fully compatible vs. partially compatible devices: Fully compatible devices support all technology features required to make all the application functionalities work seamlessly, while partially compatible devices may not support one or a few features and therefore cause error messages.
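Continuing the hypothetical device_library sketch above, the fully/partially compatible split can be computed from the set of features the application actually requires rather than maintained by hand.

```python
def split_by_compatibility(library, required_features):
    """Return (fully_compatible, partially_compatible) lists of device models."""
    fully, partially = [], []
    for model, spec in library.items():
        supported = set(spec["media_formats"]) | set(spec["hardware"]) | set(spec["network"])
        missing = set(required_features) - supported
        (fully if not missing else partially).append(model)
    return fully, partially

# Like the Pokémon GO example: the app needs GPS and a rear camera for AR mode.
fully, partially = split_by_compatibility(device_library, {"gps", "rear_camera"})
# fully -> ["acme-phone-x1"]; partially -> ["acme-tab-2"]
```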
The question that forms the bulk of the debate is whether it’s worthwhile to run tests on emulators. Real devices are always a better option, as they give a true feeling of the app.
But if the real device is rare or highly expensive, “half a loaf is better than no bread.” Android and iOS emulators are mainly designed for native applications, but their default browsers accurately reproduce how the app will look on a real device.
Step Four – Run tests on fully compatible devices: When prioritizing testing, check 100 percent of the app’s functionality on select devices from this list. If you don’t have the opportunity to run tests on all the devices on the list, focus on at least one from each manufacturer.
Step Five – Run tests on partially compatible devices to the extent possible: Try to perform testing on the latest and most widely used set of devices. Place initial focus on the functionality that might be influenced by unsupported features.
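Steps Four and Five then reduce to a prioritization pass over the two lists. The sketch below assumes manufacturer and popularity lookups that would come from the device library and the Step Two market research; it is an illustration, not a prescribed tool.

```python
def build_test_queue(fully, partially, manufacturer_of, popularity):
    """Order devices for testing: one fully compatible device per manufacturer
    first, then partially compatible devices by market popularity."""
    queue, seen_makers = [], set()
    for model in fully:
        maker = manufacturer_of.get(model, "unknown")
        if maker not in seen_makers:
            seen_makers.add(maker)
            queue.append(model)
    queue.extend(sorted(partially, key=lambda m: popularity.get(m, 0), reverse=True))
    return queue
```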
This simple approach has proven very efficient in handling multifaceted compatibility testing.
After going through all of these steps, it should be easier to understand that mobile compatibility testing is important and, in the end, may be less time-consuming, more agile, and future-ready if you follow this plan of action. Remember, starting testing early always pays off! In the case of mobile compatibility, it should be performed as soon as the build is stable enough to support testing.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

8:43p
Virtualization Fuels Converged Infrastructure Deployments
Sponsored by: Dell and Intel
New IT service delivery methodologies are revolutionizing how IT departments function and how users access the applications that make businesses successful. Demands on IT have necessitated a change to on-demand services, self-service models, and increasing focus on time-to-value for IT projects.
Research firm Gartner agrees that the use of cloud computing is growing, and says that in 2016 it will come to represent the bulk of new IT spend.
“Overall, there are very real trends toward cloud platforms, and also toward massively scalable processing. Virtualization, service orientation and the internet have converged to sponsor a phenomenon that enables individuals and businesses to choose how they’ll acquire or deliver IT services, with reduced emphasis on the constraints of traditional software and hardware licensing models,” said Chris Howard, research vice president at Gartner. “Services delivered through the cloud will foster an economy based on delivery and consumption of everything from storage to computation to video to finance deduction management.”
With all of this in mind, it’s no wonder that new types of data center models have directly impacted growth in virtualization technologies – specifically application delivery and Virtual Desktop Infrastructure (VDI). In fact, we’ve been seeing a steady increase in spending when it comes to converged systems. More industries and verticals are redesigning their data centers to better support efficiency, scalability, and improved user productivity. According to IDC, during the fourth quarter of 2015, the Integrated Systems (CI) market generated more than $2.0 billion in revenue, a year-over-year increase of 6.7%. These systems are pre-integrated and vendor-certified, containing server hardware, disk storage systems, networking equipment, and basic element/systems management software.
A recent Dell EMC | Intel Data Center Trends Survey points out that one in five respondents have deployed a converged infrastructure (CI) solution. An additional 25% plan on doing so over the next 12 months. Their top two business drivers are:
- Improve density for virtualization – 58% of respondents
- Enable a virtualization initiative (VDI, applications) – 53% of respondents.
In this context, converged infrastructure can be a perfect use-case for virtualization, application delivery, and VDI:
- CI Creates Next-Generation Density. Intelligent converged infrastructure solutions allow you to create greater levels of density while deploying fewer pieces of hardware. Not only are you creating better multi-tenant ecosystems with converged solutions, you’re also doing this very cost effectively. Remember, convergence and consolidation also help save on data center real estate, cooling and power consumption, and even management costs. Most of all, consolidation can also help with density. Converged systems help you remove legacy data center components while still catering to a digital user. This means more apps, more virtual desktops, and more capabilities around mobility. There are numerous benefits to the user, the business, and the data center when converged infrastructure helps consolidate critical technologies.
- Optimizing User Experiences and Controlling Critical Resources. There’s been a major resurgence behind a number of virtualization technologies, including VDI, application delivery, and user management. Because of this, resource management has been a critical initiative for a number of organizations. The challenge revolves around resources that are isolated, hard to get to, or not properly utilized. This is where converged infrastructure comes in. Remember, this spans logical and physical deployments of convergence. Converged infrastructure solutions act as central points for resource control; it’s as simple as that. Data center and cloud administrators have fewer management points and greater levels of control over their critical (and expensive) resources. Moving forward, there will be more virtual technologies and even more integration with cloud.
- Changing Business and Data Center Economics. You’re not just deploying a new piece of technology into your data center. You’re deploying a business tool designed to help the organization grow and introduce more efficiency. Converged infrastructure is a means to replace old hardware, consolidate resources, and reduce the entire data center footprint. Many pushes into the cloud require better economics to support more users and applications. Older server, network, and compute technologies – sitting fragmented – could never achieve the level of scale that converged infrastructure can provide. So, by reducing data center footprints with convergence, we’re not only unifying critical resources, we’re also potentially reducing operating costs.
Moving forward, organizations will continue to create greater levels of efficiency for their critical data center systems. This means deploying architecture that can support new business initiatives, while still empowering users to be extremely productive. Many organizations (across many verticals and industries) are looking to CI as a way to revolutionize their data center architecture. If you’re working with virtualization today, look to converged infrastructure systems to help you align with your business and future market demands.