Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 20th, 2016
1:00p
Top Five Data Center and Cloud Considerations for 2016

Welcome to 2016 – the year of our digital [r]evolution.
The next few years will be defining moments for the modern data center and the entire cloud ecosystem. We’re beginning to see more markets, industries, and verticals adopt next-generation technologies. All of this impacts the way we design data centers and all of the resources supporting our diverse applications and users.
We’ve reached a point where almost every person has a digital footprint. We can create a digital identifier with critical pieces of information for babies who don’t even have a heartbeat yet. Our data is in the cloud before we’re even born. That’s something we must all become accustomed to.
Today, new market disruptors are pushing organizations to rethink their entire business strategies and find ways to intelligently align their IT environments. With all of this in mind, let’s take a look at the top five trends that will be impacting your data center and cloud environments in 2016.
- The Internet of Things. An explosion of interconnected devices will hit the industry, ranging from cars and appliances to consumer electronics. We’ll see homes, buildings, and even entire cities light up as part of the cloud and internet ecosystem. Healthcare is using more “smart” devices, which help monitor patients and provide real-time health service options. Cisco recently reported that, globally, the data created by Internet of Everything (Cisco’s name for the Internet of Things) devices will reach 507.5 ZB per year (42.3 ZB per month) by 2019, up from 134.5 ZB per year (11.2 ZB per month) in 2014. Data created by IoE devices will be 269 times higher than the amount of data being transmitted to data centers from end-user devices and 49 times higher than total data center traffic by 2019. Regardless of the vertical you’re in, IoE may very well be impacting your business in the very near future.
- SDN will be huge. Organizations have realized the very real benefits of server virtualization, and the network is next. One of the biggest challenges in today’s networking ecosystem is the complexity and distribution of resources. Organizations are having trouble managing policies, controlling administrative privileges, and allocating resources. This challenge continues to grow as more devices and more data pass through the data center. Gartner recently pointed out that by the end of 2016, more than 10,000 enterprises worldwide will have deployed SDN in their networks, a tenfold increase from 2014. According to a new forecast from IDC, the worldwide SDN market for the enterprise and cloud service provider segments will grow from $960 million in 2014 to over $8 billion by 2018, representing a robust CAGR of 89.4 percent.
“SDN is taking center stage among innovative approaches to some of the networking challenges brought about by the rise of the 3rd Platform, particularly virtualization and cloud computing,” said Rohit Mehra, VP, network infrastructure, at IDC. “With SDN’s growing traction in the data center for cloud deployments, enterprise IT is beginning to see the value in potentially extending SDN to the WAN and into the campus to meet the demand for more agile approaches to network architecture, provisioning, and operations.”
Working with SDN today is easier than it has ever been. You have options around commodity technologies, hypervisor-integrated SDN, and even SDN at the hardware layer. Specifically, you can now tie SDN and network functions virtualization (NFV) directly into the services that need them; just make sure you understand the differences between the two. Know that data centers are already evaluating and adopting SDN and NFV technologies: in the latest AFCOM State of the Data Center survey, 83 percent of respondents said that between now and 2016 they will implement, or have already deployed, SDN or some form of NFV. Be ready for a much more agile and virtualized network.
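To make “programmable networking” concrete, here is a minimal sketch of what provisioning through an SDN controller’s northbound REST API can look like. The controller URL, endpoint paths, and payload fields are hypothetical placeholders, not any specific vendor’s API; the point is that a network segment and its policy become two API calls instead of box-by-box switch configuration.

```python
# Minimal sketch: provisioning a network segment through a hypothetical
# SDN controller's northbound REST API. Endpoints, fields, and the token
# are illustrative placeholders, not a real vendor API.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def create_segment(name: str, vlan_id: int) -> str:
    """Ask the controller to create an isolated layer-2 segment."""
    resp = requests.post(f"{CONTROLLER}/segments",
                         json={"name": name, "vlan": vlan_id},
                         headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]

def attach_policy(segment_id: str, policy: dict) -> None:
    """Bind a QoS/security policy to the segment in one call,
    instead of configuring each switch by hand."""
    resp = requests.post(f"{CONTROLLER}/segments/{segment_id}/policies",
                         json=policy, headers=HEADERS, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    seg = create_segment("web-tier", vlan_id=210)
    attach_policy(seg, {"max_mbps": 500, "allow_ports": [80, 443]})
    print(f"Segment {seg} provisioned with policy attached")
```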
- Next-generation advanced persistent threats (APTs) will continue to evolve. The hacking community has become economized and industrialized in a very real way. There are now nation states, lone individuals, and entire teams working to access your valuable information. Going forward, there will be no silver bullet for security. With next-gen APTs, you’re dealing with physical, logical, and even human threats to your data center. Hackers take aim at very specific weaknesses in the services they try to exploit. Juniper Research recently pointed out that the rapid digitization of consumers’ lives and enterprise records will increase the cost of data breaches to $2.1 trillion globally by 2019, almost four times the estimated cost of breaches in 2015. There have even been instances where big network and security vendors had to deal with holes in their own systems. Moving forward, organizations will need to examine the entire attack continuum and ensure that they have intelligent security services spanning their entire data center and cloud.
New types of cloud-ready security technologies help encrypt and secure new traffic points between the data center and cloud ecosystems. Know that your data may very well be a target. With that in mind, create a security architecture that can lock down specific data points and integrate with the various technologies within your data center. Most of all, your network will become a critical part of the security equation.
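As a small illustration of the “lock down specific data points” idea, the sketch below encrypts a record before it leaves the data center using the open-source Python cryptography library’s Fernet recipe (AES-based authenticated encryption). The record is invented, and a real deployment would layer TLS and a managed key service on top; this only shows the basic mechanic.

```python
# Minimal sketch: encrypting a sensitive record before it travels to the
# cloud, using the "cryptography" library's Fernet recipe (AES-based
# authenticated encryption). The record is a made-up example.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key management service
f = Fernet(key)

record = b"patient-id=1234;status=stable"   # hypothetical sensitive data point
token = f.encrypt(record)                   # ciphertext, safe to send onward
print(token)

restored = f.decrypt(token)                 # only key holders can read it
assert restored == record
```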
- Automation and orchestration will be adopted even more. The next-generation cloud environment is very diverse, and many automation tools now build governance and advanced policy control directly into the product. There are technologies that let cloud admins control security aspects of their cloud and gain quite a bit of visibility. Aside from controlling costs around resource utilization, this type of platform creates a very dynamic, automated cloud. Scaling, orchestration, and even multi-cloud controls are all built in. Here’s the cool part: your cloud automation platform becomes proactive as well. By combining automation with analytics, you’re able to visualize and forecast requirements for your cloud infrastructure. Business, workflow, and data center orchestration and automation technologies will help organizations control resources and even optimize the overall user experience. Remember, you’re creating a proactive environment capable of adapting to very dynamic market shifts, and you’re controlling the economics of resource and data center management. Automation and orchestration allow administrators to focus on growing the business rather than persistently putting out fires. When evaluating orchestration and automation technologies, you can now look to the cloud itself: completely cloud-born tools let you integrate various cloud and data center resource points. All of this enables the business to be a lot more agile with its critical IT ecosystem.
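To show what “proactive” can mean in practice, here is a toy sketch, not any particular product, of forecast-driven scaling: it fits a simple linear trend to recent utilization samples and decides ahead of time whether capacity should be added. The ceiling, window, and sample values are all invented for illustration.

```python
# Toy sketch of "proactive" orchestration: fit a linear trend to recent
# CPU-utilization samples and decide to scale out *before* a ceiling is
# crossed. Ceiling, window size, and sample values are invented.
from statistics import mean

def forecast_utilization(samples: list[float], steps_ahead: int) -> float:
    """Least-squares linear fit over the window, projected forward."""
    xs = range(len(samples))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = y_bar - slope * x_bar
    return intercept + slope * (len(samples) - 1 + steps_ahead)

def plan_capacity(samples: list[float], ceiling: float = 80.0) -> str:
    projected = forecast_utilization(samples, steps_ahead=6)  # six intervals out
    if projected >= ceiling:
        return f"scale out now (projected {projected:.1f}% vs {ceiling}% ceiling)"
    return f"hold steady (projected {projected:.1f}%)"

if __name__ == "__main__":
    recent_cpu = [52, 55, 59, 63, 68, 71]  # hypothetical 5-minute samples
    print(plan_capacity(recent_cpu))
```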
- Architecture convergence will change data center economics. The days of resources locked in silos are quickly coming to an end; siloed resources mean waste, management headaches, and less control over the user experience. This is where converged infrastructure comes into play. You’ll see this conversation really take off this year as we define more use cases and more technologies push converged infrastructure forward. Some vendors will sell physical converged platforms, while others will work with hyperconvergence and offer virtual appliances capable of aggregating all resources. Regardless, converged infrastructure is helping data center managers create greater levels of multi-tenancy and resource control. According to a recent Gartner report, “hyperconverged integrated systems will represent over 35 percent of total integrated system market revenue by 2019.” This makes it one of the fastest-growing and most valuable technology segments in the industry today. Whether you need a physical node or two – or complete hypervisor integration – converged architectures give administrators powerful control mechanisms for their business and data center ecosystem. Furthermore, integrating cloud, virtual applications and desktops, and even distributed resources becomes easier with a converged platform. With this in mind, before you go through your next hardware refresh, make sure you understand your use cases and know where converged infrastructure could help optimize your business.
Between now and 2020, we’re going to see an explosion in data center and cloud utilization. As more devices connect, we’ll have to manage, secure, and deliver all of that information. New solutions around automation and orchestration will help with resource control and data center interconnectivity. Furthermore, new security strategies will aim to enable the business while still securing corporate and user data. We’re going to see a big boom in the SMB and the mid-market space as more of these organizations realize the direct competitive advantages of using next-generation data center technologies while incorporating new kinds of cloud services.
Some truly advanced technologies have matured considerably. Organizations that want to create a competitive edge will need to look at IT solutions that help them stay agile and adjust to a very fluid market.

4:00p
Five Things Your Storage Array May Not Be Telling You

Brett Schechter is Sr. Product Marketing Manager for Nimble Storage.
For today’s data-storage administrators, one of the greatest challenges is retrieving information from a storage array. A myriad of questions come to mind: Where are there bottlenecks? How should I budget for capacity or performance expansion? Is my mix of flash and hard-disk storage optimized? Have I chosen the proper RAID structure?
Almost every modern storage array offers a management interface, and some will even answer the questions posed above. However, very few arrays are capable of using the cloud to compare those answers to thousands of others who have faced similar issues.
Wouldn’t it be nice if you could get an answer to your specific question and also leverage the historic data of thousands of your peers? How about having the array make these historic comparisons for you, and actually offer a predictive model that guides future purchases and helps you foresee and preempt problems? Going for a full-on wish list, how about an array that opens support tickets on its own, researches how others have solved the same issues, and closes the tickets, all with no human intervention required?
With these questions in mind, let’s take a look at five things the storage management portal of the future should be able to deliver, so that your well-deserved three-week vacation in Rome is not interrupted by an unexpected event back at company headquarters. These five scenarios highlight the capabilities every storage administrator should look for when implementing storage management solutions:
- Your legacy storage array provides a nice, current view of capacity on all of your LUNs and volumes. But since you are headed on vacation, can you model and predict whether alerts will be triggered while you are sampling gelato? (See the sketch after this list for a back-of-the-envelope version.)
- Your legacy storage array has a single virtual machine (VM) that is gobbling up the performance of all its neighbors. Can you isolate the VM, view its performance and its neighbors, and correct it from your palazzo? Even better, can the array do it for you?
- Your best engineers call you at 11 p.m. one night, saying that new applications have been added and they are unsure how to configure volumes to support them. Can your legacy storage solution rely upon thousands of others running this application, and quickly discover the best practices via cloud-based analytics?
- Your CIO and CFO are hosting an all-day budget planning meeting in New York City. Your phone rings at 2 a.m. – they need a simple, executive summary that includes SAN storage, operational efficiency, data-protection costs and upgrade needs, and they want it stat. Do you have a cloud-based management portal to give you these answers and save the day? Can you simply send them a link?
- As your vacation is winding down, you decide to catch up on emails. You open the summary emails your trusty, smart array has been sending. While you have been gone, your array has automatically opened over 90 percent of its support tickets. More incredibly, the array has closed and fixed over 85 percent of those, without any of your team needing to intervene.
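As a back-of-the-envelope version of the first scenario above, the sketch below projects when a volume will cross its capacity-alert threshold based on recent daily growth. The readings, capacity, and 90 percent threshold are hypothetical; a real management portal would run this kind of projection continuously against live telemetry, and across the historic data of thousands of peer arrays.

```python
# Back-of-the-envelope sketch for scenario one: given recent daily
# capacity readings for a volume, estimate whether the alert threshold
# will be crossed during a vacation window. All figures are invented.

def days_until_alert(readings_gb: list[float],
                     capacity_gb: float,
                     alert_pct: float = 90.0) -> float | None:
    """Project forward using average daily growth over the window.
    Returns estimated days until the alert fires, or None if usage is flat."""
    daily_growth = (readings_gb[-1] - readings_gb[0]) / (len(readings_gb) - 1)
    if daily_growth <= 0:
        return None
    headroom_gb = capacity_gb * alert_pct / 100 - readings_gb[-1]
    return max(headroom_gb, 0) / daily_growth

if __name__ == "__main__":
    last_week = [7100, 7180, 7230, 7310, 7405, 7490, 7560]  # GB used, daily
    eta = days_until_alert(last_week, capacity_gb=9000)
    if eta is not None and eta < 21:  # three weeks in Rome
        print(f"Alert expected in ~{eta:.0f} days - rebalance before you fly")
    else:
        print("No capacity alert expected during the trip")
```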
Managing large amounts of storage has been a guessing game for the past 25 years. While the information has been available, administrators may not know where to find it or how to mine it. And where these capabilities were available at all, adding them previously cost as much as the hardware itself.
Today, a select few smart solutions have taken much of the guesswork out of storage management. They save you money and time by helping your applications and data work at peak efficiency and the lowest TCO. If you are looking to deploy high-end storage for your business, metrics like IOPS and PBs are important… but only if the array is smart enough to help you use them.

5:22p
IBM, CSC Partner in Hybrid Cloud Expansion 
By Talkin’ Cloud
IBM has expanded its partnership with CSC, a Virginia-based IT services provider, with the goal of developing joint applications to support mobile, analytics, and cognitive intelligence across hybrid cloud platforms.
The partnership centers around the integration of the IBM Cloud, which includes services such as analytics, mobile, networking and storage, with the CSC Agility Platform, which allows customers to use hybrid clouds across multiple cloud providers as well as their traditional IT environments.
By combining the IBM Cloud with the CSC Agility Platform, CSC customers can now use IBM cloud services in a hybrid, multi-cloud environment, according to the announcement. CSC has agreed to incorporate the IBM Cloud as a key component of its IT strategy as both companies work to help clients meet their compliance requirements.
The integration is also expected to enable customers in key verticals such as healthcare, finance and insurance to distribute applications via the hybrid cloud, according to the press release.
“The addition of IBM’s cloud into the portfolio of clouds managed under CSC Agility Platform further validates our strategy of creating client value through the orchestration of multiple cloud platforms,” said Mike Lawrie, CSC’s chairman, president and CEO, in a statement. “Our expanded partnership with IBM will help us accelerate the delivery of next generation applications that give our joint clients greater flexibility, safety and control of their data as they design and adopt hybrid cloud strategies.”
CSC said it is already training its developers to design and deploy applications based on IBM’s suite of more than 120 APIs and cloud services currently available on the Bluemix cloud platform.
Enterprise users continue to transition toward hybrid cloud services as they look to speed up the delivery of applications. Naturally, both IBM and CSC want to take advantage of growing demand to provide customers with the content their end users need in as little time as possible, while also turning a profit themselves.
“Cloud has become the key driver of helping companies digitally transform their business operations for greater value,” said Robert LeBlanc, senior vice president of IBM Cloud. “Working with CSC will provide greater value to our joint clients, helping them leverage the IBM Cloud to speed application development, data analysis, and faster value from their products and services across hybrid cloud deployments.”
This first ran at http://talkincloud.com/hybrid-cloud-computing/ibm-csc-team-hybrid-cloud-expansion

6:00p
Tech Companies Clamor for Cloud Patents 
By The WHIR
Some of the biggest names in cloud computing are among the companies that received the most US patents in 2015. With 7,355 patents last year, IBM was the top US patent recipient for the year – a title it has held for 23 consecutive years.
Rounding out the list are Samsung, Canon, Qualcomm, Google, Toshiba, Sony, LG, Intel, and Microsoft. Samsung, for example, published a patent for managing cloud content delivered through mobile devices. Qualcomm had patents relating to cloud-enhanced web browsing and to enabling Internet of Things devices to find cloud services.
It’s perhaps most striking that Amazon Web Services, which currently has the largest percentage of the cloud market, isn’t among the top patent filers. But it might come down to business model.
As Charles Babcock noted in an InformationWeek article, IBM monetizes its R&D investments, which generate around $500 million to $2 billion per year in IP revenue from a combination of licensing fees and patent sales. That should be proof that the research going into these patents is being put to use, and that there are healthy incentives to come up with new cloud technologies. There are plenty of patented technologies to pay for.
More than 2,000 of IBM’s patents relate to cloud and cognitive computing, a field it pioneered with its Watson services. These include patents for helping machines understand emotion for more natural conversation, and for detecting whether they’re dealing with a human or a machine in order to root out online fraud.
On the cloud side, IBM also has patents for minimizing network latency – between resources, and between end users and resources – by determining the shortest network routes between them, and for drawing on other cloud services’ resources to manage intensive workloads.
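IBM’s patented techniques aren’t spelled out in the announcement, but the shortest-route idea the description evokes is classic graph search. Here is a minimal Dijkstra sketch over an invented latency map; it illustrates the general concept, not IBM’s method.

```python
# Minimal illustration of the shortest-route idea behind latency
# minimization: classic Dijkstra over a hypothetical latency graph.
# Edge weights are invented round-trip latencies in milliseconds.
import heapq

def lowest_latency(graph: dict[str, dict[str, float]],
                   source: str, target: str) -> tuple[float, list[str]]:
    """Return (total latency, route) for the cheapest path source -> target."""
    queue = [(0.0, source, [source])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, latency in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + latency, neighbor, path + [neighbor]))
    return float("inf"), []

if __name__ == "__main__":
    latency_ms = {                      # hypothetical inter-site latencies
        "user-east": {"dc-nyc": 9, "dc-chi": 22},
        "dc-nyc": {"dc-chi": 18, "cloud-region": 31},
        "dc-chi": {"cloud-region": 12},
        "cloud-region": {},
    }
    total, route = lowest_latency(latency_ms, "user-east", "cloud-region")
    print(f"{' -> '.join(route)}  ({total} ms)")
```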
“IBM’s investments in R&D continue to shape the future of computing through cognitive computing and the cloud platform that will help our clients drive transformation across multiple industries,” IBM president and CEO Ginni Rometty said in a statement. “IBM’s patent leadership demonstrates our unparalleled commitment to the fundamental R&D necessary to drive progress in business and society.”
By the middle of 2015, IBM had more than 400 new cloud patents for the year, addressing issues such as application deployment speed, cloud data center security, and cloud management, storage, and maintenance. These included patents for maintaining high availability of cloud servers using virtual machine snapshots and for monitoring the availability of VMs in a networked computing environment.
It also had developed methods for secure cloud deployments involving sensitive data, providing granular access control for cloud data, and managing and deploying unified cloud computing infrastructure across physical and virtual environments.
This first ran at http://www.thewhir.com/web-hosting-news/tech-companies-clamor-for-cloud-patents

6:30p
Report: OpenStack Hampered by Skills Shortage 
By The VAR Guy
More than eighty percent of enterprises plan to adopt OpenStack as a cloud computing solution or already have. Yet half of the organizations that have tried to implement it have failed, hampered by a lack of open source cloud computing skills. That’s according to a survey out this week from Linux vendor SUSE, which sheds vital light on current OpenStack adoption trends.
The survey results suggest strong enthusiasm for open source cloud computing, with ninety-six percent of respondents reporting they “believe there are business advantages to implementing an open source private cloud,” according to SUSE.
Strong interest in private clouds of the type OpenStack enables is also clear. Ninety percent of businesses surveyed have already implemented at least one private cloud, SUSE reported.
Yet for all that enterprises appear to want to build OpenStack private clouds, sixty-five percent of those that have already done so reported that it was difficult. And, again, half reported having failed in their OpenStack endeavors.
Why this mismatch between will and way? In its report on the survey findings, SUSE chalks the OpenStack difficulty up to three main factors:
- Forty-four percent of companies say they plan to install OpenStack themselves. That could lead to failure, SUSE says, if those businesses lack in-house OpenStack skills. SUSE implies that OpenStack deployment would be less risky if companies adopted commercial distributions of the platform, like the one SUSE offers.
- Almost all respondents report vendor lock-in concerns, which may make them reluctant to implement private clouds despite their desire to do so.
- Eighty-six percent of respondents said there are not enough engineers with private cloud skills in the labor market to let them adopt OpenStack or other private clouds with full confidence.
So that’s the state of OpenStack at the beginning of 2016: Everyone wants it, but a skills shortage and a perceived lack of vendor-neutral distributions is stalling adoption.
The numbers reflect a survey of 813 IT professionals in the United States and Western Europe that SUSE commissioned.
This first ran at http://thevarguy.com/open-source-application-software-companies/suse-openstack-cloud-demand-high-hampered-skills-shortage

7:00p
IIX Console to Automate Cross-Connects in vXchnge Data Centers

vXchnge, a data center provider targeting secondary US markets, has partnered with IIX to make IIX’s automated network connectivity provisioning platform available to vXchnge customers, the companies announced this week.
The platform, called Console, addresses the complexity of establishing direct network links between data center users and their partners, customers, or service providers, whether their servers are in the same data center or in a facility elsewhere. Even within the same data center, provisioning these cross-connects requires heavy-duty network engineering skills, which most data center customers don’t have in-house. Console’s software automates the process, making it a lot easier.
Demand for direct network links between a variety of data center users is growing. Enterprises use private links to connect to public cloud providers, for example, while service providers may need to connect to other service providers as they devise combined offerings.
In November, IIX raised $26 million from a group of Silicon Valley’s venture capital heavyweights to scale its business. The company says about 150 data centers around the world are now accessible via its Software-as-a-Service platform, which means any user with servers inside any of those 150 facilities can almost instantly provision a private network connection to any other user in the network.
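To picture what “almost instantly provision” might look like under the hood, here is a hypothetical sketch of a software-defined cross-connect order. The data model, facility names, and validation rules are invented for illustration and are not Console’s actual API; the point is that the portal, not a network engineer, catches bad orders and returns a circuit.

```python
# Hypothetical sketch of a software-defined cross-connect order, in the
# spirit of what a platform like Console automates. The data model and
# validation rules are invented; this is not the real API.
from dataclasses import dataclass

REACHABLE_FACILITIES = {"vxchnge-austin", "vxchnge-philly", "equinix-ny4"}  # toy list

@dataclass
class CrossConnect:
    a_side: str          # buyer's cage/port, e.g. "vxchnge-austin:cab12:port3"
    z_side: str          # provider's port in the same or a remote facility
    bandwidth_mbps: int
    vlan: int

def validate(order: CrossConnect) -> None:
    """The portal's job: reject impossible orders before anyone looks at them."""
    for side in (order.a_side, order.z_side):
        facility = side.split(":")[0]
        if facility not in REACHABLE_FACILITIES:
            raise ValueError(f"{facility} is not on the platform")
    if not 1 <= order.vlan <= 4094:
        raise ValueError("VLAN id must be 1-4094")

def provision(order: CrossConnect) -> str:
    validate(order)
    # A real platform would now push configuration to the fabric; here we
    # return a fake circuit id to show the zero-touch workflow shape.
    return f"ckt-{abs(hash((order.a_side, order.z_side, order.vlan))) % 10**6:06d}"

if __name__ == "__main__":
    order = CrossConnect("vxchnge-austin:cab12:port3",
                         "equinix-ny4:cab7:port1", bandwidth_mbps=1000, vlan=210)
    print("Provisioned", provision(order))
```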
The partnership between IIX and vXchnge covers the data center provider’s facilities in Austin, Nashville, Philadelphia, Portland, Silicon Valley, and New Jersey but may expand to more locations in the future, vXchnge announced this week.
In recent years, vXchnge has been positioning itself as an edge data center provider, pursuing markets that aren’t known for a high concentration of data centers.
As more media content moves online and as companies use more and more cloud services, there’s rising demand for data center capacity in those secondary markets for the purpose of caching content for delivery to local end users. Edge data centers save content providers money in network transport fees they otherwise have to pay to carry content from the big internet hubs, such as New York or Los Angeles, to end users over long distances every time the users request it.
Read more: How Edge Data Center Providers are Changing the Internet’s Geography
In a big expansion push last year, vXchnge launched a new data center in Philadelphia and acquired eight data centers in secondary US markets from Sungard Availability Services.

8:23p
TierPoint Buys Midwest Data Center Provider Cosentry

TierPoint has acquired Cosentry in a deal that will add nine data centers to the data center provider’s rapidly growing portfolio.
TierPoint, which targets secondary US data center markets, has been expanding primarily through acquisition over the last two years. Its biggest move last year was the $575 million acquisition of the entire data center services business of the telecom Windstream.
The deal is also an example of a strategic change in TierPoint’s acquisitions. Its early acquisitions focused on pure-play colocation companies, Philbert Shih, managing director at Structure Research, said. Cosentry, after acquiring the hosting business of Xiolink in 2014, has a mix of services that includes managed cloud, which is “an area that TierPoint has not yet entered but is likely to be working on.”
Omaha-based Cosentry’s data centers are in several markets throughout the Midwest, including Omaha; St. Louis; Kansas City, Missouri; Kansas City, Kansas; Sioux Falls; and Milwaukee. Terms of the deal were not disclosed.
Read more: Windstream to Sell Data Center Business for $575M
Once the acquisition closes, Cosentry will operate as a subsidiary under the TierPoint brand, TierPoint said in a statement. Cosentry’s current owner, private equity firm TA Associates, will become a major investor in TierPoint.
Jerry Kent, TierPoint chairman and CEO, said the move would create economies of scale and give the company access to “additional financial firepower” of TA.
TierPoint’s current phase of expansion started after the company was acquired by a group of investors, including its top management, in 2014. It was founded in 2010 as Cequel Data Centers, using assets of data center provider Colo4, which it acquired. It then acquired another data center provider called TierPoint and took its name.
Besides the Windstream deal, last year, TierPoint acquired Chicago data center provider AlteredScale and Florida provider CxP.
This article has been updated with comments from Philbert Shih of Structure Research.