Data Center Knowledge | News and analysis for the data center industry
Monday, November 23rd, 2015
1:00p
With its 100-Gig Switch, Facebook Sees Disaggregation in Action
After announcing it had cooked up its own fast data center switch, called Wedge, in 2014, Facebook said last week that its engineers have designed an even faster, next-generation switch. The first Wedge was a 40 Gigabit Ethernet switch, but the second one pushes 100 Gigs, an extremely high bandwidth for a switch of its type.
Bandwidth, however, is not what sets it apart; you can get a 100-Gig switch from the likes of Cisco or Juniper. One of Wedge’s distinctive features is that it is software-agnostic. It is the first time Facebook has applied its philosophy of disaggregation, used in its custom server designs, to networking: Wedge disaggregates networking hardware from networking software, something incumbent data center network vendors have started doing only recently.
The point of disaggregation is being able to advance individual system components independently of the system. The first step toward disaggregating the network, separating hardware from software, was meant to “spur the development of more choices for each,” Facebook engineers wrote in a blog post last year, when the company announced the first Wedge switch and FBOSS, its Linux-based network operating system that allows its data center operators to manage network switches using tools that are similar to the tools they use to manage compute and storage.
Now, as the company designed its second Wedge, as well as the Wedge-based high-capacity aggregation switch, called Six Pack, it is seeing the benefits of that disaggregation come to life. It is designing more powerful hardware independently of software.
The same FBOSS software runs on Wedge 40, Six Pack, and Wedge 100, Jay Parikh, Facebook’s VP of engineering, said at the Structure conference in San Francisco last week. Switch software development at Facebook is now on a separate cycle from switch hardware development, he said.
The company developed FBOSS to handle its weekly rate of feature roll-outs and bug fixes, Facebook engineers Zhiping Yao and Jasmeet Bagga wrote in a blog post last week. One of the key capabilities supporting that cadence is the ability to update thousands of switches without the kind of traffic loss that could lead to outages.
The custom-built tool that deploys weekly FBOSS software updates across the network is called fbossdeploy and is modeled after the deployment process for the company’s software load balancer, Proxygen. It deploys code to a small set of switches first, and the team monitors them for problems before deploying at full scale.
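Facebook has not published fbossdeploy’s internals, but the canary pattern its engineers describe, pushing to a small set of switches, monitoring, and only then going fleet-wide, can be sketched roughly as follows. All names, the state model, and the health check here are invented for illustration:

```python
import random

def push_update(switch, state):
    state[switch] = "updated"

def roll_back(switches, state):
    for s in switches:
        state[s] = "stable"

def staged_deploy(switches, is_healthy, canary_size=3):
    """Canary rollout: update a small random sample of switches,
    monitor them, and only then deploy to the rest of the fleet."""
    state = {s: "stable" for s in switches}
    canary = random.sample(switches, min(canary_size, len(switches)))
    for s in canary:
        push_update(s, state)
    # Monitor the canary set; abort before the wider fleet is touched.
    if not all(is_healthy(s) for s in canary):
        roll_back(canary, state)
        return state
    for s in switches:
        if state[s] != "updated":
            push_update(s, state)
    return state

fleet = [f"rack-sw-{i}" for i in range(100)]
ok = staged_deploy(fleet, is_healthy=lambda s: True)
print(sum(v == "updated" for v in ok.values()))   # 100: healthy canary, full rollout
bad = staged_deploy(fleet, is_healthy=lambda s: False)
print(sum(v == "updated" for v in bad.values()))  # 0: unhealthy canary, rolled back
```

The key property of this pattern is that a bad build can only ever affect the canary set, which is why the team can ship weekly without risking fleet-wide traffic loss.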
The software stack running on a Wedge switch is fairly similar to the one running on compute servers in Facebook data centers. Both have the Linux kernel, system tools and libraries, monitoring daemons, and configuration management. In addition to those components, the network stack has a routing daemon and an FBOSS agent.
There are now thousands of Wedge 40 switches running in production in Facebook data centers, and the goal is for Wedge switches to replace every top-of-rack switch in the company’s infrastructure. In an emailed statement, the company also said it has “hit the limits of what 40 Gbps switches can handle,” so the next step is to complete and start deploying the 100-Gig, 32-port Wedge.
4:00p
2016: The Year of the Data Center
John M. Hawkins is VP of Marketing and Communications at vXchnge.
With 2016 quickly approaching, executives are starting to hash out their plans and budgets for the coming year. While all aspects of a business are important, no other division is more critical to supporting business growth than the IT department. After all, most businesses today have data they need to keep secure and functioning in order to keep things running smoothly.
That’s why it’s so pivotal that IT executives keep their data center operations top of mind for the coming year, as these choices will impact their business growth not only in 2016 but for years to come.
Keeping this in mind, I spoke with 15 data center site managers and with the office of the CTO to determine what the top trends for data centers in 2016 will be.
Cabinets are Staying Cool
With high-density compute, storage, and networking making their way into data centers, now’s the time to update your cabinets. This surge of data and applications will require increasing amounts of power to keep data centers at an optimal temperature. According to IDC, the digital universe will grow by a factor of 10 from 2013 to 2020, from 4.4 trillion gigabytes to 44 trillion, roughly doubling every two years. While data center cooling techniques are often a popular topic of conversation, one technique we’ll see becoming even more popular is cold aisle containment pod deployment, which will soon become a standard in data center design.
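The IDC figures can be sanity-checked with a little arithmetic: a 10x increase over the seven years from 2013 to 2020 works out to roughly 39 percent compound annual growth, or just under 2x every two years.

```python
start, end = 4.4e12, 44e12           # gigabytes in 2013 and 2020, per IDC
growth = end / start                 # 10x over the period
years = 2020 - 2013                  # 7
annual = growth ** (1 / years)       # compound annual growth factor
print(round(annual, 2))              # 1.39 -> ~39% growth per year
print(round(annual ** 2, 2))         # 1.93 -> close to doubling every 2 years
```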
Keeping our Heads in the Clouds
A few years back, there was talk of the cloud having the potential to “kill” the data center. However, over time we’ve seen that cloud and data centers are not in competition; rather, they complement one another and need to work together in order to function properly.
We’ll see this trend carry over into 2016. Cloud-based businesses increasingly rely on colocation providers to support their large data storage needs. Data center management teams need to focus part of their efforts on supporting increased usage from cloud-based companies and remaining leading contenders in the data center space.
By 2020, IDC found that 40 percent of data in the digital universe will be “touched” by the cloud, meaning either stored, perhaps temporarily, or processed in some way. And with the digital universe experiencing unprecedented growth, we’ll see cloud capabilities being a must in data centers for most customers going forward in 2016 and beyond.
Positioning Matters
You’ve heard this before – location matters! While this is true in most industries, it is also now starting to hold true for data centers.
Many IT decision makers are relying on strategically located data centers rather than relying solely on a central hub. For example, instead of storing massive amounts of data in a few select data centers, application providers are moving their applications to “the edge” (locations where they can serve customers locally and reach more businesses and consumers in more markets) in order to serve their consumers more closely and reduce service disruptions.
Another item to consider when thinking about location is the cost associated with that particular area. Are there tax incentives for businesses in that region? What are the utility costs? These are all elements that IT executives need to weigh when selecting a location.
Having a data center partner with reliable and resilient infrastructure strategically situated in multiple areas around the country will help keep data safe and functioning properly.
CTOs will Sleep Better at Night
Utilizing a provider’s data center as your own, or data center-as-a-service, will be a new line of thinking for tech executives in 2016.
Businesses are increasingly finding value in data center environments supported by an accomplished 24x7x365 on-site technical team, which expedites processes and saves customers time and money. In addition, having a data center that is local and can be physically accessed also helps ease operations.
Building and maintaining a data center can be a pricey undertaking for small to mid-sized companies. In a report, Gartner found that data center systems spending is projected to reach $143 billion in 2015, a 1.8 percent increase from 2014. So now, more than ever, is the time to use a data center provider focused on growing and protecting your business, so that you can focus on running it.
Due to this high price tag, CTOs should consider working with a data center partner that has done the research and handles it all: power, cooling, security, etc. Data center-as-a-service provides peace of mind that the brand and its customers’ data are protected.
High Density will Dominate
More and more, we are getting requests from customers for high-density and super high-density cabinets. This desire is on par with the times: the Internet is becoming more accessible, via smartphones, tablets, etc., and more data is present. As a result, servers must be present to handle all these functions and requests, which increases the demand for power, cooling, and network capacity. High-density deployments allow customers to quickly scale their business and maximize profits.
Cabinet refurbishing, the cloud, location, and content provider applications pushed to “the edge” are just a few of the data center trends for 2016. Knowing the amount of data that is already out there, and understanding the surge of data that is expected in the years to come, it’s key that companies start considering partnering with a secure and reliable data center if they want to survive.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:59p
Texas Colo with Efficient Data Center Cooling System Launched
Aligned Data Centers, a data center provider formed earlier this year, has launched its first colocation facility in Plano, Texas, featuring a water-efficient data center cooling system designed by its sister company Inertech, whose systems are also deployed at eBay, Lenovo, and Telus data centers, among others.
In addition to using very little water, the design of the cooling system is the primary enabler for Aligned’s unusual colocation business model. Offering a compromise between physical data center capacity and elastic cloud infrastructure, the company offers what it calls on-demand colocation services, meaning customers only pay for the capacity they actually use, with the ability to scale their deployment up or down as they go along.
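Aligned has not published its billing mechanics; as a hypothetical sketch (the rates, capacity figures, and function names here are all invented), the difference between pay-for-use colocation and a traditional reserved-capacity model comes down to what the monthly bill is computed from:

```python
def on_demand_bill(monthly_kw_draws, rate_per_kw=150.0):
    """Hypothetical usage-based colo bill: pay only for the power
    actually drawn each month, not for full reserved capacity."""
    return [kw * rate_per_kw for kw in monthly_kw_draws]

def reserved_bill(months, reserved_kw=100, rate_per_kw=150.0):
    """Traditional model: pay for the reserved capacity every month,
    regardless of actual draw."""
    return [reserved_kw * rate_per_kw] * months

usage = [40, 55, 70, 100, 80, 60]        # kW actually drawn over six months
print(sum(on_demand_bill(usage)))        # 60750.0
print(sum(reserved_bill(len(usage))))    # 90000.0
```

Under this toy model, a customer whose draw fluctuates well below its 100 kW peak pays noticeably less than it would under a flat reservation, which is the appeal of the on-demand pitch.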
Aligned COO Thomas Doherty told us in an earlier interview that the cooling system’s modular design was what gave the provider the flexibility to change the amount of capacity provisioned for any single client on-demand.
Find more details on Aligned’s innovative data center cooling system and business model in a recent Data Center Knowledge feature.
In addition to the Plano facility, Aligned is building a larger data center in Phoenix and scouting for additional locations in California, Illinois, Virginia, and New Jersey, the company said in a statement.
Aligned, Inertech, as well as Energy Metrics and Karbon Engineering are subsidiaries of a holding company called Aligned Energy. The parent company is backed by the hedge fund BlueMountain Capital Management.
The first phase of the Plano data center provides about 100,000 square feet of space and 12.5 MW of power. The site can accommodate a total of 300,000 square feet and 30 MW.
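The Plano figures imply the following power densities, a common way to compare facilities:

```python
def watts_per_sqft(megawatts, sqft):
    """Convert a facility's MW capacity and floor area to W per square foot."""
    return megawatts * 1_000_000 / sqft

phase_one = watts_per_sqft(12.5, 100_000)   # first phase of the Plano site
full_build = watts_per_sqft(30, 300_000)    # full build-out
print(phase_one)    # 125.0 W per square foot
print(full_build)   # 100.0 W per square foot
```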
In Phoenix, Aligned is building a 550,000-square-foot, 70-MW facility.
7:37p
Equinix Adds Direct AWS Cloud Connectivity in London, Dallas
While enterprises want to deploy more critical applications in the cloud, they don’t want to rely on the public internet to do it. This has given rise to services that provide direct cloud connectivity, selling private network links from companies’ servers to the servers of their cloud providers, located in the same data center.
Equinix, one of the biggest players in this ecosystem, announced Monday the addition of two more data centers where companies will be able to connect directly to the servers of Amazon Web Services, the world’s largest cloud infrastructure provider. The two new locations are London and Dallas, bringing to 10 the number of Equinix data centers that offer AWS Direct Connect, Amazon’s private cloud connectivity service.
Equinix customers in London could use AWS Direct Connect before, but their connection would link from London to an Equinix data center in Dublin, Ireland, where the actual AWS servers were. Now, if a company has a server at Equinix’s LD5 data center, it can buy a direct private interconnect to AWS servers in the same facility.
Direct AWS cloud connectivity is also available in an Equinix data center in Frankfurt.
The new Direct Connect site in Dallas is called DA2. It is the fourth Equinix data center in North America that provides the service.
All Equinix data centers in a metro area are usually linked by a high-capacity optical network, which means that a customer with servers in one facility can get direct, high-speed network access to their own, a partner’s, or a service provider’s servers in another facility in the same metro. Each metro essentially acts as a single virtual data center, so Direct Connect in London, for example, is accessible to customers in facilities other than the LD5 site.
Direct cloud connectivity services are a quickly growing business segment for Equinix, its rivals, such as CoreSite, Digital Realty Trust-owned Telx, and Interxion, as well as the major network carriers, such as Level 3, AT&T, and Verizon. Similar services are available for other major cloud providers, such as Microsoft Azure, Google Cloud Platform, and IBM SoftLayer.
DigitalOcean, a cloud infrastructure service provider that has been successful in attracting developers to its cloud, doesn’t offer direct links to its servers but has heard from some of its biggest customers that they would like to have the capability.
“There’s definitely demand for it,” Luca Salvatore, network engineering manager at DigitalOcean, told us in a recent interview, saying the company may build such a service in the future. “We do see that need.”
7:50p
Verizon Teams with VMTurbo to Help Clients Move Apps to Cloud 
This article originally appeared at The WHIR
Verizon Enterprise Solutions and VMTurbo announced on Monday the launch of Verizon Intelligent Cloud Control to help Verizon customers migrate workloads to the public cloud that best suits their needs.
According to Verizon, Intelligent Cloud Control enables customers to use software to drive automated workload placement based on price, performance, and compliance, as opposed to existing cloud brokerage solutions, which typically recommend workload placement with public cloud service providers manually.
“In listening to our customers, we realized they require a better way to manage their risk when moving to the public cloud,” said Victoria Lonker, director of enterprise networking for Verizon. “With Verizon’s Intelligent Cloud Control, we are removing the complexities and myriad trade-offs between price, performance and compliance in various public cloud services and enabling them to focus instead on the applications and services that their end-users demand all within a secure environment.”
Intelligent Cloud Control allows users to control public cloud workloads through a single Verizon interface, budget and control public cloud spend, and assure performance and compliance of workloads.
“Building our joint vision of Intelligent Cloud Control with Verizon has enabled us to take VMTurbo’s core capabilities to its logical extreme – enabling automatable, real-time decisions on where to provision workloads between competing cloud providers to ensure a high quality, low cost service,” Endre Sara, vice president, advanced solutions of VMTurbo said in a statement. “Intelligent Cloud Control is different from cloud broker or manager programs on the market today in that it factors in application performance in real-time along with CSP price to determine the best cloud resources for every workload.”
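VMTurbo’s actual engine is proprietary and far more sophisticated, but the idea described in the quote, factoring compliance and performance constraints in alongside price, can be modeled minimally as constraint filtering followed by cost minimization. All provider and workload data below is invented for illustration:

```python
def place_workload(workload, providers):
    """Hypothetical placement sketch: keep only providers that satisfy the
    workload's region and latency constraints, then pick the cheapest."""
    eligible = [
        p for p in providers
        if workload["region"] in p["regions"]
        and p["latency_ms"] <= workload["max_latency_ms"]
    ]
    if not eligible:
        return None   # no provider meets the constraints
    return min(eligible, key=lambda p: p["price_per_hour"])

providers = [
    {"name": "AWS",       "regions": {"us", "eu"}, "latency_ms": 20, "price_per_hour": 0.12},
    {"name": "SoftLayer", "regions": {"us"},       "latency_ms": 15, "price_per_hour": 0.10},
    {"name": "Azure",     "regions": {"eu"},       "latency_ms": 25, "price_per_hour": 0.09},
]
workload = {"region": "us", "max_latency_ms": 30}
print(place_workload(workload, providers)["name"])  # SoftLayer: cheapest eligible
```

A real broker would also weigh compliance certifications, real-time application metrics, and instance sizing, but the structure, hard constraints first, then optimize on cost, is the same.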
Verizon said that its Intelligent Cloud Control powered by VMTurbo will launch during Q1 2016. The service will initially include connections to AWS, IBM SoftLayer, and Microsoft Azure.
Verizon recently formed a new enterprise cloud unit with EMC that operates under the Virtustream brand.
This first ran at http://www.thewhir.com/web-hosting-news/verizon-enterprise-solutions-teams-with-vmturbo-for-intelligent-cloud-control