Data Center Knowledge | News and analysis for the data center industry
Monday, December 29th, 2014
Big Switch Ramps up Sales Efforts as Enterprise SDN Adoption Grows
As more enterprises get comfortable with adopting the hyper-scale data center operators’ approach to IT, startups like software-defined networking vendor Big Switch are in accelerated growth mode.
Gregg Holzrichter joined Big Switch this month as vice president and chief marketing officer to help the company scale and grow its marketing and sales organization to capture the opportunity. Big Switch recently tripled the size of its sales team and is now in the process of adding marketing support, he said.
Big Switch has a Linux-based network operating system for bare metal and virtual switches called Switch Light, an SDN monitoring solution called Big Tap, and an SDN fabric that creates a virtual network on bare metal switches. It launched the fabric product, called Big Cloud Fabric, in July.
Now that the product line is fleshed out, it’s time to ramp up sales and marketing. Holzrichter considers helping startups scale a personal specialty. He has done this at the software defined storage company Atlantis Computing, where he led marketing before joining Big Switch. He also helped storage virtualization company Virsto Software establish a marketing organization and ultimately sell to VMware in 2013.
Big Switch saw a major uptick in adoption of its technology in the second half of the year, including two $1 million-plus deals, Holzrichter said. One of the two big customers was a financial services company, and the other a software vendor. The company also recently closed its first deal with a higher education institution.
Enterprise SDN Made Easy
Big Switch is making SDN on bare metal switches palatable for traditional enterprises that don’t necessarily have the internal engineering resources web-scale data center operators like Facebook, Google, or Twitter do. Enterprises are interested in this approach because open bare metal switches are cheaper than the proprietary hardware-and-software bundles vendors like HP, Cisco, and Juniper sell.
Companies like Big Switch take all the custom software development work the web-scale approach requires out of the enterprise SDN equation. Big Switch users don’t have to do any development in Puppet or Chef or manage Linux, Holzrichter said.
More Bare Metal Switches on the Market
The business case and the interest are there. The enterprise SDN opportunity is also growing because more and more bare metal network hardware is becoming available. Even some of the “incumbent” vendors have introduced open switches.
Dell now has a line of switches shipping with the Switch Light OS by Big Switch or with Cumulus OS – another Linux-based alternative. Juniper announced plans to ship an open commodity switch in 2015 that will support non-Juniper operating systems.
Dell is a reseller for both Big Tap and Big Cloud Fabric. Big Switch has not done any joint development work with Juniper yet, “but it’s something that we’re working on,” Holzrichter said.
Dell’s open switches cost 15 to 20 percent more than “white box” switches, he said. But they’re still cheaper than the traditional full-package solutions enterprise IT shops are used to buying.
Holzrichter expects more incumbent networking vendors to add similar product lines. “We believe Juniper’s not the last of the traditional networking companies that’s going to announce this in 2015,” he said.
Data Migration Service Level Agreements for Data Centers
Valeh Nazemoff is an international bestselling author and SVP of Acolyst, with a focus on data migration solutions for the Federal Data Center Consolidation Initiative and Act of 2013.
Recently, I met with the Deputy CIO of an agency within the Department of Defense. The agency had just received an unpleasant report card from the Office of Management and Budget (OMB), which requires agencies and departments to routinely report to Congress on realized savings. As a result, the Deputy CIO was more convinced than ever of the urgent need to migrate data and consolidate data centers for a variety of reasons.
In September 2014, the United States Government Accountability Office (GAO) issued report GAO-14-713, titled Data Center Consolidation – Reporting Can Be Improved to Reflect Substantial Planned Savings, covering data consolidation inventories from 24 departments and agencies. Only two agencies were reported to have achieved success with “realized savings and efficiencies from the migration to enterprise data centers.” Further, only one agency was reported to have successfully “instituted a culture of continuous process improvement to seek new, cost effective methods, tools, and solutions for data center migration.” Only one.
Transforming Mindsets, Creating Change
Federal agencies and departments must transform their mindsets in order to migrate data in ways that meet GAO and OMB’s shared initiative of data center consolidation. Many benefits will result – data that is more reliable, scalable, and high performing. Plus, data consistency, latency, and efficiency will be maintained and even improved. But, where should they start?
In November 2014, a white paper issued by Acolyst – FalconStor Federal titled Consolidating Multi-Petabyte Data Centers: Breaking through the Data Migration Barrier suggested that the major reason for rework and delays in data consolidation comes from lack of proper planning. The white paper asserts that “proper assessment and documentation establishes the appropriate framework and effective lines of communications, and confirms the direction that the organization is heading.”
Another major issue expressed by the aforementioned Deputy CIO was the agency’s difficulty identifying which applications housed the data that was most critical to be migrated. Internal lines of business within the organization were not effectively communicating with IT.
This is why there is a need for both internal and external data migration service level agreements (SLAs). Most federal agencies and departments vehemently refuse to do internal SLAs. Why is that? Many have the mentality that SLAs are only for punitive and negative uses.
What if there was a mind shift when it comes to SLAs? They can serve as an effective means to communicate and document a project’s framework, garner buy-in from all parties, and redirect teams toward a common goal. SLAs can be used to uncover inconsistencies in definitions and expectations, determine root cause, assess impacts, mitigate risk, and drive actionable activities.
Further, SLAs uncover which applications and data are most important to the client when migrating and consolidating data centers. Formalizing the SLAs assures all business units that their strategic and tactical objectives will be met. The objective of the exercise of writing and documenting an internal SLA is to get into the right mindset and be aware of the various questions and information that must be collaboratively gathered and evaluated.
A Clear View of Current Conditions
A clear picture of the current “as-is” state of the impacted data centers is crucial. Organizations often discover new sources of data that must be migrated during this discovery process. Continual questions must be asked and their answers documented in the SLAs, such as:
- What data must be collected, migrated, and consolidated? Why?
- Who uses the data?
- When is the data accessed?
- What agencies and bureaus need this data?
- What data is shared by multiple departments or agencies?
- What changes must be made to access the data?
- What systems are tied to the data?
- How is the data backed up and archived?
- What people, processes and technology will be impacted by data migration?
- What are the dependencies?
Additionally, questions about data center applications should be addressed in the SLA (one way to record the answers to both sets of questions is sketched after this list), including:
- Who are current application owners?
- What are the planned upgrades or changes?
- What is the maximum allowable downtime threshold?
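As a minimal sketch of how those answers might be captured in a structured, reviewable form, the following Python record types mirror the two question lists above. The class and field names are hypothetical and not drawn from any agency template; the point is simply that each data set and application ends up with a documented owner, consumers, dependencies, and downtime threshold.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record types for capturing "as-is" discovery answers in an internal SLA.
# Names and fields are illustrative only.

@dataclass
class DataAsset:
    name: str
    reason_for_migration: str                                    # What data must be migrated, and why?
    users: List[str] = field(default_factory=list)               # Who uses the data, and when is it accessed?
    consuming_agencies: List[str] = field(default_factory=list)  # Which agencies and bureaus need it?
    dependent_systems: List[str] = field(default_factory=list)   # What systems are tied to the data?
    backup_method: str = "unknown"                               # How is the data backed up and archived?

@dataclass
class ApplicationEntry:
    name: str
    owner: str                                                   # Current application owner
    planned_changes: List[str] = field(default_factory=list)     # Planned upgrades or changes
    max_downtime_hours: float = 0.0                              # Maximum allowable downtime threshold

@dataclass
class MigrationSLA:
    data_assets: List[DataAsset] = field(default_factory=list)
    applications: List[ApplicationEntry] = field(default_factory=list)

    def open_questions(self) -> List[str]:
        """List assets whose backup/archival answer is still undocumented."""
        return [a.name for a in self.data_assets if a.backup_method == "unknown"]
```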
Implementing Consolidation Strategies
The focused goals of the Federal Data Center Consolidation Initiative (FDCCI), as outlined by then-Federal CIO Vivek Kundra in February 2010, were and still are:
- Endorse the use of Green IT to cut energy consumption
- Reduce cost
- Increase IT Security
- Obtain efficient computing platforms and technologies
Expectations and deadlines were also set about the then voluntary changes to data centers. Given that most agencies and departments did not meet these milestones, the initiative became law when the Senate passed the Federal Data Center Consolidation Act of 2013 on September 18, 2014, requiring agencies to perform inventories and implement consolidation strategies by firm dates.
All affected agencies share the same target (their desired “to-be” state), which is to achieve the mandates laid out in the FDCCI. They must create individual strategy maps to meet their goals, while creating joint definitions of migration success and streamlined communication along the way. Key considerations include:
- Must all data be migrated?
- Does the data need to be cleaned up?
- What problems could occur when migrating the data?
- What applications are chosen for migration?
- How much will it cost?
- What are the impacts (people, process, data, technology, infrastructure, etc.)?
- How can rework be avoided?
- What resources are available?
- What are the necessary user roles to help with migration?
- Who is FedRAMP authorized?
- How are cybersecurity concerns handled?
Monitor and Evaluate Progress
The PortfolioStat Integrated Data Collection (IDC) Consolidated Cost Savings and Avoidance report established by OMB is now considered the official method for agencies to report on their data center consolidation, spending, and energy consumption optimization. Use of this report will help agencies monitor and evaluate the progress of their data migration efforts and give insight into what changes are needed to make their processes and procedures more efficient.
The secret to meeting the FDCCI’s objectives lies in a well-prepared and constantly evolving SLA that exposes the gaps between the current (as-is) state and the target (to-be) strategy, enabling agencies to understand where they are, determine where they need to be, and develop plans to get there. By asking the right questions and taking strategic action, agencies can effectively report on inventories and implement consolidation strategies.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Windstream Upgrades to 100G to Boost Cloud Connectivity
Little Rock, Arkansas-based Windstream Communications is updating its regional and metro networks to 100G using Cyan’s Z-Series packet-optical hardware.
While the new network enables the usual service provider enhancements, like better IP services for SMBs and faster consumer broadband, it also improves cloud connectivity for the company’s data center services business and wireless backhaul.
Cloud and data center services have been a major focus for many telecommunications companies, as they seek to leverage their existing infrastructure investment to diversify their revenue streams. The trend has been true for both national and regional telcos.
Verizon has focused on cloud connectivity of late, acquiring Terremark several years ago to form the foundation of its cloud and data center businesses. CenturyLink has successfully transformed itself into a cloud and data center provider in addition to a telco since its acquisition of Savvis in 2011.
It has also been expanding its data center footprint and its Windstream Hosted Solutions division. It moved into Sabey’s Intergate.Manhattan building last year. Most recently, the company opened a Chicago data center.
Initial deployment of the new network infrastructure is underway, with a packet-based 100GbE network rolling out across major Windstream markets.
Cyan’s Z-Series Packet Optical Platforms have been a part of the architecture for many years, initially serving wireless backhaul and consumer broadband applications. The Z-Series will now support other high-performance applications across the footprint.
The 100G technology in the system is provided in a single slot, increasing density and reducing space and power requirements, said Cyan in the release.
“Windstream makes technology investments that are directly in line with supporting both the company’s long-term growth and the critical use cases of our customers,” said Randy Nicklas, CTO and executive vice president of engineering at Windstream. “Upgrading from 10G to 100G capacity across our regional and metro networks through Cyan’s Z-Series platforms will allow Windstream to utilize network resources efficiently and provide best-in-class services to our customers.”
No More TL;DR for IBM Cloud Contracts
IBM has tossed out its long, complex contracts for cloud services in favor of two-page agreements. The company is trying to streamline the process and said the new cloud contracts are shorter and less involved than competitors’.
Contracts are a boring but necessary evil, so shortening them as much as possible is a benefit for everybody involved. It means the sales cycle and negotiations are shorter, which in turn means getting revenue sooner for the provider and quicker service deployment for the customer.
The shorter cloud contracts are the result of two months of work by a small team at IBM. The contracts are now deployed globally for all of its offerings.
Cloud computing has greatly streamlined acquiring raw IT resources like compute and storage, making it possible to fire up a virtual machine instantly with a credit card. However, the same utility-style approach is complicated in an enterprise setting.
IBM is trying to simplify enterprise IT, which has been a common theme in the industry, championed by the likes of SAP in its cloud ambitions.
“It’s ironic that cloud computing represents a faster and more innovative approach to doing business, yet lengthy and complex cloud business contracts from most vendors remain an obstacle,” Neil Abrams, IBM vice president and assistant general counsel, said in a statement. “By dramatically simplifying and accelerating how clients contract for cloud services, IBM is making it easier and faster for companies to reap the benefits of cloud.”
Another aspect of acquiring cloud services that remains complex is cloud pricing. It is often difficult for users with complex cloud deployments to forecast how much they will end up paying over time.
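To illustrate the forecasting problem, here is a minimal sketch in Python with entirely hypothetical rates and usage figures (none of them come from IBM or any real price list). Even a simple linear usage model produces a wide range of possible annual spend once instance-hours and egress vary month to month, and real price lists add tiers, discounts, and minimum commitments on top of that.

```python
# Hypothetical illustration of why usage-based cloud bills are hard to forecast.
# All rates and usage numbers below are made up for the example.

HOURLY_RATE = 0.10    # $ per instance-hour (assumed)
STORAGE_RATE = 0.03   # $ per GB-month (assumed)
EGRESS_RATE = 0.09    # $ per GB transferred out (assumed)

def monthly_cost(instance_hours: float, storage_gb: float, egress_gb: float) -> float:
    """Simple linear cost model; real pricing adds tiers, discounts, and minimums."""
    return (instance_hours * HOURLY_RATE
            + storage_gb * STORAGE_RATE
            + egress_gb * EGRESS_RATE)

# The same deployment in a quiet month versus a busy month:
low = monthly_cost(instance_hours=5_000, storage_gb=2_000, egress_gb=500)
high = monthly_cost(instance_hours=9_000, storage_gb=2_500, egress_gb=3_000)
print(f"Projected annual spend: ${low * 12:,.0f} to ${high * 12:,.0f}")
# Roughly $7,260 to $14,940 a year for the same workload profile.
```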
Compass Gets Tax Breaks for CenturyLink Data Center in Minnesota
Dallas-based developer Compass Datacenters has qualified for state tax breaks for its data center in Shakopee, Minnesota, a Minneapolis suburb. Compass built the data center and leases the facility to CenturyLink Technology Solutions.
Tax incentives are an important instrument state and local economic development agencies use to attract data center development projects – one of the more capital-intensive types of construction.
Minnesota is one of the states with more aggressive data center tax breaks. A company that builds a data center or a network operations center in the state that’s 25,000 square feet or bigger and commits to investing at least $30 million in the first four years is exempt from sales tax on IT gear, cooling and power equipment, energy use, and software for 20 years.
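As a rough sketch of the qualification test described above, the check below (Python; thresholds taken from the article, function name is our own invention) simply verifies the square-footage and four-year investment minimums. Actual certification runs through an application process with the state, so this is illustrative only.

```python
# Rough eligibility check based on the thresholds described above:
# at least 25,000 sq ft and at least $30 million invested in the first four years.
MIN_SQUARE_FEET = 25_000
MIN_INVESTMENT_USD = 30_000_000

def qualifies_for_mn_exemption(square_feet: int, four_year_investment_usd: float) -> bool:
    """True if the facility meets both stated minimums (illustrative only)."""
    return (square_feet >= MIN_SQUARE_FEET
            and four_year_investment_usd >= MIN_INVESTMENT_USD)

# Example: a 40,000 sq ft facility with a $45M four-year commitment meets both tests.
print(qualifies_for_mn_exemption(40_000, 45_000_000))  # True
```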
Minnesota stepped up its data center tax incentive program recently to lower the threshold for qualification, Madeline Koch, director of communications for the Department of Employment and Economic Development, wrote in an email. “The program has been successful since the changes were implemented, with three data centers that have been certified, and seven additional centers completing the application process.”
The state also does not tax personal property, utilities, or Internet access, among other things.
While the Shakopee site currently has only one of Compass’ standard 21,000 square foot, 1.2 megawatt data centers, it has the capacity to support three more.
Is Converged Infrastructure the Future of Cloud Solutions?
The idea is to build a cloud platform as efficiently as possible. That means having hardware components in place capable of handling high user demand, high levels of multi-tenancy, and advanced resource controls. So, is this why so many companies are starting to look more at converged infrastructure for their cloud and virtualization platforms?
There are some new players on the market, with numerous vendors creating powerful node-based server platforms capable of massive scale.
Nutanix, for example, says its platform has already been adopted by the likes of Google, Facebook, and Amazon. It does make sense, though. Your organization can quickly build a “cloud-in-a-box” platform with everything you need. Plus, you’re able to deploy a truly virtualization-ready environment that can scale on demand. It’s not a bad concept, and it comes with quite a few benefits:
- Lots of HA
- Pretty solid performance per node
- Elastic scaling
- Availability of intelligent storage with deduplication and compression
- Virtualization-ready management
- Built-in optimizations like storage awareness and flash caching
- Very rapid deployment time
So, outside of a few really awesome deployments, why are some organizations still hesitating around this “hyper” converged infrastructure? When deploying this type of platform, there is a bit of re-thinking that has to happen. First of all, you’re creating an environment that is now capable of storage plus compute. In some cases, this means completely getting rid of a SAN. Well, many organizations and data centers aren’t quite ready to do that yet.
Take a look at the Cisco UCS “unified” infrastructure. You have a platform that is also capable of storage and compute. However, you’re also able to throw in the networking components. Because of this, the end-user has options as far as which infrastructure to work with, which is great for the consumer. Another big aspect is how platforms like UCS can integrate with other technologies.
- UCS can act as the network and fabric backplane.
- UCS can then create an automation policy to distribute an entire workload.
- That workload can be processed by one of the UCS blades or by a Nutanix converged platform.
The amazing piece here is that processes can be distributed based on requirements, resource utilization, and user needs. In the above scenario, an organization can offload all VDI processes to a Nutanix system while the UCS chassis handles database operations and application delivery.
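A sketch of that kind of placement logic might look like the following (Python). The platform names come from the scenario above, but the rules, function, and utilization threshold are hypothetical; neither UCS nor Nutanix exposes this exact interface.

```python
# Hypothetical placement policy for the scenario above; not an actual UCS or Nutanix API.
PLACEMENT_RULES = {
    "vdi": "nutanix-converged-cluster",  # offload desktop workloads to the converged nodes
    "database": "ucs-blade-pool",        # keep database operations on the UCS blades
    "app-delivery": "ucs-blade-pool",    # application delivery also stays on UCS
}

def place_workload(workload_type: str, target_pool_utilization: float) -> str:
    """Pick a platform from the static rules, spilling over if the preferred pool is hot."""
    target = PLACEMENT_RULES.get(workload_type, "ucs-blade-pool")
    if target_pool_utilization > 0.85:  # assumed spill-over threshold
        target = ("ucs-blade-pool" if target == "nutanix-converged-cluster"
                  else "nutanix-converged-cluster")
    return target

print(place_workload("vdi", target_pool_utilization=0.40))       # nutanix-converged-cluster
print(place_workload("database", target_pool_utilization=0.92))  # spills to the converged cluster
```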
As organizations grow and evolve, there are a number of growing use-cases where converged systems are the right platform to deploy.
- Branch offices
- Micro-clouds
- Virtual applications/desktops
Some organizations can even use a converged platform for big data analytics on Hadoop. So, adoption is happening, but we’re not seeing this platform replace existing components within the data center or cloud. Rather, we’re seeing an interesting trend happen.
- These converged systems are acting as direct complements to current platforms.
- Converged systems are capable of providing specific services that the rest of the data center does not have to be burdened by.
Doesn’t it make sense though? You’ve got all your resources under one roof and you can dynamically provision and de-provision workloads. Consider some of the trends from the latest Cisco Global Cloud Index Report: data center virtualization and cloud computing growth are definitely in the forecast (a quick check of the growth arithmetic follows the list).
- By 2018, more than three quarters (78 percent) of workloads will be processed by cloud data centers; 22 percent will be processed by traditional data centers.
- Overall data center workloads will nearly double (1.9-fold) from 2013 to 2018; however, cloud workloads will nearly triple (2.9-fold) over the same period.
- The workload density (that is, workloads per physical server) for cloud data centers was 5.2 in 2013 and will grow to 7.5 by 2018. Comparatively, for traditional data centers, workload density was 2.2 in 2013 and will grow to 2.5 by 2018.
- Global data center IP traffic will nearly triple (2.8-fold) over the next 5 years. Overall, data center IP traffic will grow at a compound annual growth rate (CAGR) of 23 percent from 2013 to 2018.
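Those growth figures are easy to sanity-check. For example, a 2.8-fold increase over five years corresponds to roughly the 23 percent compound annual growth rate the report cites (a quick Python check, nothing more):

```python
# Quick check of the Cisco Global Cloud Index figure cited above:
# a 2.8x increase in data center IP traffic over 5 years implies roughly a 23% CAGR.
fold_increase = 2.8
years = 5
cagr = fold_increase ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~22.9%, i.e. roughly 23 percent
```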
As the data center continues to become more optimized, I won’t be surprised if we see more of these hyper-converged platforms emerging within micro-cloud and data center deployment models.
Ground Broken on State Farm’s Huge Dallas Data Center Build
Construction crews recently broke ground at the site of a massive future State Farm Insurance data center in Richardson, a Dallas suburb.
Developer KDC and Holder Construction are building the facility, The Dallas Morning News reported. State officials have been referring to it as “Project Black Flag.”
The Dallas data center will be about 130,000 square feet in size, located on a 15-acre property. Dallas firm Corgan Architects designed the one-story building.
State Farm is also building a 1.5 million square foot office campus in the area. The first of the four planned buildings on the campus is close to completion, the Morning News reported.
Richardson and the surrounding area are a hub of the Dallas data center market.
Digital Realty Trust has a massive data center park in Richardson, where Rackspace is one of the major tenants, and DataBank also operates a data center in town.
CyrusOne has data centers in nearby Carrollton and Lewisville. Cisco has data centers in Richardson and about 15 miles north, in Allen.