Data Center Knowledge | News and analysis for the data center industry
Wednesday, December 16th, 2015
1:00p
Five Best Practices to Optimize Virtualization and Cloud

At this point, almost every modern data center has worked with some type of virtualization technology. A recent Cisco report noted that cloud workloads are expected to more than triple (grow 3.3-fold) from 2014 to 2019, while traditional data center workloads are expected to see a global decline for the first time, at a negative 1 percent CAGR over the same period.
Traditionally, one server carried one workload. However, with increasing server computing capacity and virtualization, multiple workloads per physical server are common in cloud architectures. Cloud economics, including server cost, resiliency, scalability, and product lifespan, along with enhancements in cloud security, are promoting migration of workloads across servers, both inside the data center and across data centers (even data centers in different geographic areas).
With this in mind, it’s important to note that the modern hypervisor and cloud ecosystem have come a long way. VMware, Microsoft, Citrix, and others are paving the way with enterprise-ready technologies capable of consolidating an infrastructure and helping it grow harmoniously with other tools. Today, many systems are designed for virtualization and cloud readiness. In fact, best practices have been written around virtualizing heavy workloads such as SQL, Oracle, Exchange, and so on. Taking advantage of these cloud-ready platforms will make your data center more agile and more capable of meeting market demands.
As cloud and virtualization continue to grow and impact more organizations, let’s pause and examine some key considerations and best practices around these technologies.
- Use virtualization and cloud for business resiliency. Remember, from a DR and efficiency perspective, it’s always easier to provision a new VM than it is to rebuild a physical piece of hardware. You can create snapshots, backups, and even replicate entire virtual workloads between data center and cloud ecosystems. Configured properly, virtualization and cloud computing can form the backbone of a disaster recovery and business continuity (DRBC) strategy; the sketch after this list shows one way to keep an eye on snapshot hygiene.
- Virtualization and cloud help you shift data center economics. Many environments have hardware that is approaching its end of life (EOL). In these situations it’s very important to take a good look at how virtualization technologies, in conjunction with cloud and unified architecture, can help an environment consolidate and expand. Remember, better hardware with more intelligence built in means more VMs per host and greater density, so more users can be handled with less hardware and fewer physical resources. New server and data center ecosystems allow you to dynamically provision resources and allocate users, and a good hardware platform creates good cloud economics. All of this translates into cost savings in power, HVAC usage, space requirements, and hardware utilization within the data center.
- Cloud and virtualization give you powerful controls around resources, VMs, and users. Automation, workflow creation, and control over entire cloud instances are now part of the management toolset within cloud and virtualization environments. For example, by using virtual images, administrators can move workloads between distributed sites to ensure the resiliency of their data. Creating highly replicated hot sites becomes easier with mature technologies built directly into the hypervisor. Integration with storage systems, both onsite and remote, is now normal practice, with data deduplication and backup coming standard in a given feature set. Furthermore, integrating with intelligent virtualization and cloud management controls makes working with a virtualized data center much easier and more efficient.
- Always plan around capacity, growth, and business alignment. Although virtualization is being widely adopted, some areas still need careful attention. First of all, sizing and scaling an environment will always be very important. Initial planning stages are crucial to making the right hardware and resource decisions; not having enough resources to support your user count is much more costly to fix after a system has gone live. Remember, as with any physical resource, the capabilities of your data center are finite. This means administrators must carefully watch how their virtual workloads are operating and where their resources are going. Too often, administrators over-provision a VM only to see most of the resources go unused (the sketch after this list also flags that kind of over-allocation).
- With such a fluid architecture, cloud and virtualization require regular testing. For any cloud and virtualization ecosystem supporting critical applications, testing and maintenance will be very important. Regularly review logs, VM health, and accessibility, and perform off-hours DR testing to ensure production systems stay live. Creating runbooks and documenting changes helps resolve issues quickly and gives administrators a faster understanding of their environments. The density and segmentation capabilities of cloud and virtualization allow administrators to carve out pieces of their environments for testing and development, letting you test “production” workloads in a safe ecosystem. Take advantage of this, understand your workloads, and continuously optimize how you deploy your content. Testing applications, virtualization, and your entire cloud environment will create a much more proactive data center and cloud platform.
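To make the resiliency and capacity points above concrete, here is a minimal sketch of the kind of check an administrator might script against a vSphere environment with pyVmomi: it reports snapshots older than a week and VMs whose configured memory far exceeds observed usage. The vCenter host, credentials, and thresholds are placeholders, and the checks themselves are illustrative assumptions rather than a prescribed best practice.

```python
# Minimal sketch: report stale VM snapshots and likely over-provisioned VMs.
# Assumes a reachable vCenter and the pyVmomi library; the host, credentials,
# and thresholds below are placeholders, not recommendations.
import ssl
from datetime import datetime, timedelta, timezone

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"                          # placeholder
USER, PASSWORD = "readonly@vsphere.local", "changeme"    # placeholders
MAX_SNAPSHOT_AGE = timedelta(days=7)
MEM_HEADROOM = 4.0  # flag VMs configured with >4x their observed memory usage


def walk_snapshots(snapshots):
    """Yield every snapshot in a snapshot tree, depth first."""
    for snap in snapshots:
        yield snap
        yield from walk_snapshots(snap.childSnapshotList)


def main():
    ctx = ssl._create_unverified_context()  # lab convenience; validate certs in production
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        now = datetime.now(timezone.utc)
        for vm in view.view:
            # Stale snapshots quietly eat datastore space and weaken the DR story.
            if vm.snapshot:
                for snap in walk_snapshots(vm.snapshot.rootSnapshotList):
                    if now - snap.createTime > MAX_SNAPSHOT_AGE:
                        print(f"{vm.name}: snapshot '{snap.name}' is older than "
                              f"{MAX_SNAPSHOT_AGE.days} days")
            # Rough over-provisioning signal: configured memory vs. observed usage.
            cfg, stats = vm.summary.config, vm.summary.quickStats
            if stats.guestMemoryUsage and cfg.memorySizeMB > MEM_HEADROOM * stats.guestMemoryUsage:
                print(f"{vm.name}: {cfg.memorySizeMB} MB configured, "
                      f"only ~{stats.guestMemoryUsage} MB in use")
    finally:
        Disconnect(si)


if __name__ == "__main__":
    main()
```

A report like this run on a schedule is one simple way to keep snapshot hygiene and right-sizing visible without waiting for a DR test to expose the gaps.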
Just like any tool, cloud and virtualization must be properly maintained and optimized. The fluid nature of the modern user and the data they access requires administrators to know how their data center is performing. New tools allow you to granularly see how resources are allocated between the local data center and distributed cloud locations. With greater visibility come better levels of support and management. The overall goal should be continuous optimization that aligns with your IT teams, the users, and the business.
4:00p
Ten Data Center Trends to Watch in 2016

Yossi Ben Harosh is President and CEO of RiT Technologies.
All signs indicate that 2016 will be a year of many challenges. Disruptive technologies will be introduced, computing power will continue to increase exponentially, and businesses will demand prompt responses to quickly changing requirements, all while the pressure to be highly resource efficient remains.
As a result of these challenges, we predict the following changes will emerge in 2016:
Data Centers Harness IoT Technologies
By incorporating smart technologies into the data center, facility managers will be able to track the real-time status of components and environmental measurements to keep operations flowing smoothly. Sensors that measure temperature, humidity, and electricity will be combined with network equipment monitoring to help data centers maintain a high level of uptime and reduce capital and operational expenditures. Data centers will have more platforms available to them, including IoT platforms that integrate data from many different sources to keep computing facilities functioning at optimum capacity.
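As a hedged illustration of how sensor feeds might be folded into monitoring, the sketch below subscribes to temperature and humidity readings published over MQTT and flags values outside a threshold band. The broker address, topic layout, payload shape, and thresholds are assumptions made for the example, not a reference to any particular DCIM product.

```python
# Illustrative sketch: subscribe to temperature and humidity readings published over
# MQTT and flag out-of-range values. The broker address, the "dc/<rack>/<metric>"
# topic layout, the JSON payload shape, and the thresholds are assumptions for this
# example. Uses the paho-mqtt 1.x callback API (pip install "paho-mqtt<2").
import json

import paho.mqtt.client as mqtt

BROKER = "sensors.example.internal"            # placeholder broker hostname
THRESHOLDS = {"temperature": (18.0, 27.0),     # degrees C, ASHRAE-style band
              "humidity": (20.0, 80.0)}        # percent relative humidity


def on_connect(client, userdata, flags, rc):
    # Subscribe to every rack's temperature and humidity feeds.
    client.subscribe("dc/+/temperature")
    client.subscribe("dc/+/humidity")


def on_message(client, userdata, msg):
    metric = msg.topic.rsplit("/", 1)[-1]
    reading = json.loads(msg.payload)          # e.g. {"rack": "A12", "value": 24.5}
    low, high = THRESHOLDS[metric]
    if not low <= reading["value"] <= high:
        # In a real deployment this would feed a DCIM or alerting pipeline.
        print(f"ALERT {msg.topic}: {metric}={reading['value']} outside [{low}, {high}]")


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```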
Hyperconvergence
Hyperconverged systems that bring together key IT system components into one box, or system, managed through a software layer will take on a larger role within data center infrastructures. Business requirements are driving simplification of infrastructure and faster time to value within IT departments. Virtualization will become a driving force for hyperconvergence, especially as virtualization exposes the inefficiencies of SAN storage and the need to virtualize the storage and network layers. As convergence grows, there will be a need for converged infrastructure management platforms that can provide an integrated, unified view to bridge the gap between virtual networks and physical infrastructure.
Software-Driven Infrastructure
As infrastructure becomes more software-defined (for example, software-defined networks), operations will be automated, eliminating manual configuration and reconfiguration at the hardware-component level. This will allow for greater agility, fewer errors, and lower operational costs. Data will be presented in a way that makes it fully actionable, enabling quicker responses for maximum availability. There will be a focus on software platforms that can provide a unified view of components and connectivity to increase provisioning and management efficiency. If you have a real-time, accurate database of all components, you can recover from system failures more quickly, not only when there are problems with virtual services but also when there are faulty components in the physical infrastructure.
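A small sketch of the “accurate component database” idea follows. It uses an in-memory SQLite table as a stand-in for a real-time infrastructure inventory and walks the connectivity graph to answer what a failed switch touches; the schema and sample data are invented for illustration, and a production system would sit on a DCIM or SDN controller’s API instead.

```python
# Sketch of the "accurate, real-time component database" idea. SQLite, the schema,
# and the sample rows are invented stand-ins; a real deployment would query a DCIM
# system or an SDN controller's inventory API instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE components (id TEXT PRIMARY KEY, kind TEXT, location TEXT, status TEXT);
CREATE TABLE links (a TEXT, b TEXT);  -- physical or virtual connectivity

INSERT INTO components VALUES
  ('sw-12',   'switch', 'row3-rack07', 'failed'),
  ('srv-201', 'server', 'row3-rack07', 'ok'),
  ('srv-202', 'server', 'row3-rack08', 'ok'),
  ('vm-web1', 'vm',     'srv-201',     'ok');

INSERT INTO links VALUES ('sw-12', 'srv-201'), ('sw-12', 'srv-202'), ('srv-201', 'vm-web1');
""")


def impacted_by(component_id):
    """Walk the connectivity graph and return everything reachable from a failed component."""
    seen, frontier = set(), [component_id]
    while frontier:
        current = frontier.pop()
        neighbors = conn.execute(
            "SELECT b FROM links WHERE a = ? UNION SELECT a FROM links WHERE b = ?",
            (current, current)).fetchall()
        for (neighbor,) in neighbors:
            if neighbor != component_id and neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return sorted(seen)


# A switch failure is reported; an up-to-date inventory answers "what does this touch?"
impacted = impacted_by("sw-12")
placeholders = ",".join("?" * len(impacted))
for row in conn.execute(
        f"SELECT id, kind, location FROM components WHERE id IN ({placeholders})", impacted):
    print("impacted:", row)
```

The value is less in the query itself than in the data being current: if the inventory lags reality, the blast-radius answer is wrong.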
Building Block Scalability
Enterprises will want to mimic what large cloud giants like Facebook, Google and Amazon have architected: a highly responsive IT environment that can easily expand and contract as dictated by business requirements. However, they will not be willing to part with the resiliency they’ve grown accustomed to with traditional architectures and technology. Infrastructures that enable building-block scale, with the same level of redundancy and resiliency using software-defined approaches, will become more popular with those that have scale aspirations.
Automation for Labor Efficiency
Automation of data center management activities will become the norm to reduce workload and human error and to speed up responses to equipment failures. There will be a shift in how infrastructure and operations teams administer the IT environment with the introduction of foolproof procedures, such as using IP discovery for automatic validation. In this new environment, administrators will need to embrace new skills and get comfortable shedding mundane, repetitive tasks such as provisioning storage. Organizations will hire for an entirely new set of skills within the data center, moving away from individualized domain experts in each layer of the infrastructure stack toward skill sets focused on automation, API integrations between technologies, user experience outcomes, and integrating the new with the old.
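One possible reading of “IP discovery for automatic validation” is sketched below: it pings every address in a management subnet and compares the hosts that respond against an expected inventory, flagging both missing and undocumented devices. The subnet, the inventory, and the use of plain ICMP ping are placeholders chosen for simplicity.

```python
# Sketch: validate a management subnet by comparing live hosts (ICMP ping) against an
# expected inventory. The subnet, the expected host list, and plain ping as the
# discovery mechanism (Linux "ping -c 1 -W 1") are assumptions for the example;
# real tooling would use ARP/LLDP/SNMP or an IPAM API instead.
import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUBNET = ipaddress.ip_network("10.20.30.0/28")           # placeholder management subnet
EXPECTED = {"10.20.30.1", "10.20.30.5", "10.20.30.6"}    # placeholder inventory


def is_alive(ip):
    """Return True if the address answers a single ping within one second."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", str(ip)],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0


hosts = list(SUBNET.hosts())
with ThreadPoolExecutor(max_workers=16) as pool:
    alive = {str(ip) for ip, up in zip(hosts, pool.map(is_alive, hosts)) if up}

missing = EXPECTED - alive   # documented but not responding: possible failure
unknown = alive - EXPECTED   # responding but undocumented: stale record or rogue device
print("missing:", sorted(missing) or "none")
print("unknown:", sorted(unknown) or "none")
```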
This promises to be an interesting year for the data center, filled with new virtual capabilities and new processing demands. The data center managers who come out ahead in 2016 will be those who keep their finger on the pulse of operations, using software platforms to ensure high availability and resource utilization while forging ahead with new technologies.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
7:31p
Switch CEO: Michigan Data Center Build is a Go

Now that the state legislature has approved tax breaks for data center owners and users in Michigan, the project to convert the pyramid-shaped office building outside of Grand Rapids is a go. The future of the project by Las Vegas-based data center provider Switch hinged on the bill’s passage, and lawmakers rushed it through the legislative process to get it approved before the holidays.
The bill, passed by the state House Tuesday, now heads to Governor Rick Snyder’s desk for signing. In a phone interview, Switch CEO Rob Roy said the company has decided to go ahead with its Michigan data center construction plans “100 percent.”
Those plans call for 2 million square feet of building space, including the Steelcase Pyramid and several additional buildings Switch plans to erect around it. The full build-out could take up to 10 years and include six buildings, the company’s spokesman Adam Kramer told us earlier.
The pyramid takes its name from its former occupant, the big office furniture supplier Steelcase.
Michigan was up against numerous other states in competing for the big construction project. Without the tax incentives, “Switch couldn’t build in Michigan,” Roy said. “None of the clients would ever come.”
They wouldn’t come because more than 20 other states have data center tax breaks that cut taxes on expensive IT equipment purchases entirely or by half, he explained. “The states competing with each other have created that.”
State lawmakers made several amendments to the bill to break a legislative stalemate over it, according to MLive. One of them, made Tuesday, was a sunset on the sales and use tax breaks if the data center industry in the state doesn’t create at least 400 new jobs by 2022 and 1,000 new jobs by 2026.
The final version of the bill also did not include personal property tax breaks that were in a previous draft. Sales tax exemptions are important for data center tenants who spend millions of dollars on the IT gear they house in those facilities, but personal property tax on that same equipment can add up to a lot of money, and some states, such as North Carolina, offer that exemption as well.
Local governments in Michigan still have the option to exempt individual data center users from personal property tax.
Switch is likely to bring major technology companies as its tenants to the state. The company’s client list in Las Vegas includes eBay, Google, Amazon, Intel, HP, Intuit, and Boeing, among others, and data center providers’ decisions to expand are often driven by existing major customers. eBay, for example, will also be the anchor tenant in the new data center Switch is building in Reno, Nevada – a project the online auction company said this week it would invest $230 million in.
Switch also announced earlier this year its first project overseas, a 450,000-square-foot data center build outside of Milan.
8:01p
Carbonite Buys EVault from Seagate for $14M
Article courtesy of The Var Guy
Carbonite has acquired EVault, the business continuity and disaster recovery division of Seagate Technology, for $14 million in cash, the company announced on Wednesday.
The acquisition will allow Carbonite to add several of EVault’s solutions to its own portfolio as the company looks to expand its lineup of data protection and recovery solutions for small and medium business customers, according to the announcement. The purchase is also expected to boost Carbonite’s status in the lucrative SMB market, which the company estimates to be worth $13 billion in the U.S. and more than $40 billion worldwide.
“With this acquisition, Carbonite is taking a big step forward in meeting the data protection and business continuity needs of the entire SMB market from home offices to medium-sized businesses,” said Mohamad Ali, president and CEO of Carbonite, in a statement. “EVault’s proven technology… enables us to round out our portfolio and immediately provide the features and functionality larger businesses require to support their complex environments.”
Straight from the press release, Carbonite said it plans to add the following EVault products to its portfolio:
• EVault Cloud Backup and Recovery – a software-only solution for server backup
• EVault Backup and Recovery Appliance – provides all the benefits of EVault Cloud Backup and Recovery with an appliance form factor for local backup and restore
• EVault Cloud Resiliency Services – a disaster-recovery-as-a-service solution providing failover in the cloud
Carbonite expects to complete the acquisition of EVault’s North American assets by January, with the acquisition of its European Union assets scheduled to close during Q1 2016.
Carbonite recently appointed Jessica Couto as the company’s new vice president of Channel Sales and Marketing, replacing former channel chief David Maffei, who stepped down from the role in July to become chief revenue officer at Akumina.
This first ran at http://thevarguy.com/information-technology-merger-and-acquistion-news/carbonite-purchases-evault-seagate-14-million
10:09p
C7 Plugs Utah Data Centers into CenturyLink’s Network

C7 Data Centers has connected two data centers on its campus in Bluffdale, Utah, to CenturyLink’s massive fiber network that spans the US, which also gives it access to the telco’s international transport network, the companies said in a statement.
The deal gives C7 customers the option to interconnect with about 300 data centers on CenturyLink’s network, including the telco’s own 60 facilities, Eric Barrett, director of network product management at CenturyLink, said. They can also access CenturyLink’s cloud services over the network.
While C7’s campus in Bluffdale is large, the town is better known for being the site of the massive NSA data center the agency reportedly uses to collect, store, and process electronic communications data acquired via programs disclosed by the former NSA contractor Edward Snowden.
CenturyLink has been struggling to grow its own data center colocation business, whose nexus was the $2.5 billion acquisition of Savvis in 2011. The company is currently considering alternative options for ownership of its data center assets, including potentially selling some or all of the assets.