Data Center Knowledge | News and analysis for the data center industry
Thursday, December 27th, 2012
1:12p | Most Popular Data Center Articles for 2012

Cold aisle or hot aisle? Interxion team members were ready to catch some Zzzs in the sleeping pods added to their London data center ahead of the 2012 Summer Olympics.

It was an extraordinary year for the data center sector, as reflected in the stories that our readers found the most compelling in 2012. That included Data Center Knowledge's coverage of major events (the Olympics), major disasters (Sandy), major outages, and major trends in design. Even physics genius Dr. Stephen Hawking made a cameo. Check out The Top 10 Data Center Articles of 2012, ranked by page views.
1:30p | Getting Control of Application Development

Application development has a profound impact on data center efficiency and costs. New application deployments, releases of new versions, and maintenance updates all require more hardware, more facilities space and more change management. As the owners of the processing power, it feels like we are never able to give developers the amount of flexibility or new capacity that they want. The frustrating part is that we are on the receiving end of the demand driven by the work of application developers, so it feels like we are always in respond mode. We control the hardware and facilities, but feel like we have no control over the applications using that computing power.
But that is not true. Control can be direct or indirect.
For a number of years I ran an application development team. It was a small but very focused group of third-party developers who coded an application monitoring tool. Lots of code was created, tested, and rewritten through the development cycle. Once the final product was put into production we immediately started to write more code to enhance and expand its capabilities. Several of us were awarded patents on the project because it was such an innovative approach. I still remember it as one of the top achievements of my career.
While we were taking that application from whiteboard to full production, I kept watching the number of lines of code increase at a phenomenal rate. Unexpected issues needed to be addressed and additional features became desirable. The solution was always just a few lines of code away. It was all about risk at that point: how much delay or additional cost would that next element mean, and would it be offset by the value?
In the back of my mind a little voice kept asking, “What is being taken out?” We were adding code to an inventory of applications that was already swollen and growing year on year. I started to think about application retirement strategies and where the problems were.
The Classic Strategy is Faulty
Make it a Tax
Application and data cleaning should be an ongoing process. As a result, they need to be funded as part of the operating model. My solution is to implement an application retirement tax. There are two aspects to this tax.
First, every new application development project should be burdened with a cleaning tax. A portion of the total project cost should be used to fund a separate team of "undevelopers". The undevelopers are a software patrol dedicated to removing old applications and data, streamlining current code, and keeping applications in sync with the current architecture. This team measures and codifies what it has done to keep its efforts out in the open.
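As a rough illustration of how that first tax might be budgeted, here is a minimal sketch in Python; the 5 percent rate, the project names, and the dollar figures are hypothetical assumptions, not numbers from this column.

# Hypothetical sketch: fund an "undeveloper" team by taxing each new project.
# The 5% rate and the project budgets below are illustrative assumptions.
CLEANING_TAX_RATE = 0.05  # portion of each project's budget routed to cleanup work

projects = {
    "customer-portal-v3": 400_000,  # assumed total project cost, in dollars
    "billing-rewrite": 250_000,
    "mobile-api": 120_000,
}

def cleaning_tax(budget: float, rate: float = CLEANING_TAX_RATE) -> float:
    """Portion of a project budget earmarked for application retirement work."""
    return budget * rate

undeveloper_fund = sum(cleaning_tax(cost) for cost in projects.values())
print(f"Undeveloper fund for this cycle: ${undeveloper_fund:,.0f}")

The point is less the arithmetic than the accounting: because the tax is collected per project, the undeveloper team's funding grows in step with new development.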
The second form of taxation focuses on old infrastructure. There may be servers and hardware in place that are old and difficult to maintain. In many cases, it is the application running on that infrastructure that is impeding the upgrade. I suggest implementing a progressive tax that starts at the end of the depreciation cycle and increases each year as that specific infrastructure ages. Yes, to some degree it is artificial, but we are talking about incentivizing the business to keep current.
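A progressive schedule like the one described could be modeled along the lines below; the three-year depreciation period, the 2 percent base rate, and the 2 percent annual escalation are assumed figures used only to show the shape of the curve.

# Hypothetical sketch of a progressive infrastructure tax that starts at the
# end of the depreciation cycle and rises each year the hardware stays in service.
# Depreciation period, base rate, and escalation are assumptions, not policy.
DEPRECIATION_YEARS = 3
BASE_RATE = 0.02            # charged in the first year past depreciation
ESCALATION_PER_YEAR = 0.02  # added for each further year of age

def aging_tax_rate(age_years: int) -> float:
    """Tax rate applied against a server's original cost, based on its age in years."""
    years_past = age_years - DEPRECIATION_YEARS
    if years_past <= 0:
        return 0.0  # still within the depreciation cycle: no tax
    return BASE_RATE + ESCALATION_PER_YEAR * (years_past - 1)

for age in range(1, 8):
    print(f"year {age}: tax rate {aging_tax_rate(age):.0%}")

The escalation is what makes the incentive work: the longer a business unit clings to an application that pins it to aging hardware, the more it pays.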
We are addressing risk. As data center managers, we know better than anyone how difficult it is to recover old infrastructure after an event. Likewise, the number and complexity of applications have a direct relation to availability. That risk finds its way into our cost of maintaining the infrastructure.
We continue to install millions of lines of code on an ongoing basis. It is just as important for us to ensure that code is being managed so the business sees consistent performance and our data centers run as efficiently as possible.
To get more useful enterprise-class data center management strategies and insight from Nemertes Research, download the Q3 Data Center Knowledge Guide to Enterprise Data Center Strategies, compliments of Vantage Data Centers.
2:14p | Lightower, Sidera to Merge in $2 Billion Fiber Deal

Lightower Fiber Networks and Sidera Networks will merge as part of a $2 billion transaction in which both companies will be acquired by private equity firm Berkshire Partners, the companies said today. The deal will bring together two leading fiber providers serving the Northeast, creating a combined network with over 20,000 route miles and access to more than 6,000 on-net locations between Massachusetts and Virginia, including data centers, financial exchanges, content hubs, commercial buildings and other critical interconnection facilities.
Pamlico Capital and ABRY Partners, which are significant investors in Lightower and Sidera, respectively, will remain as investors in the new company. Berkshire and ABRY are already joint owners of colocation and interconnection specialist Telx.
The combined company will be led by current Lightower CEO Rob Shanahan, who has led the company through acquisitions of Lexent Metro and Veroxity, among others. The merger is pending regulatory approval and is expected to close in the second quarter of 2013.
“Lightower and Sidera together will offer customers an industry-leading, fiber-based network with a deeply experienced team supporting it,” said Shanahan. “Both companies have a shared vision of network excellence, customized solutions and superior customer support. Once merged, we will offer customers more services, more routes and more access options with the same high levels of performance, diversity, reliability and support that our customers have come to expect from us.”
Companies Offer Similar Solutions
The combined network will have both a larger footprint and greater density in its core markets in the Northeast, Midwest and Mid-Atlantic regions. Both Lightower and Sidera currently offer fiber-based networking solutions comprising Ethernet, dark fiber, wavelengths, Internet access, private networks and colocation services, as well as specialized services such as ultra-low latency connections for financial services firms, video transport for media companies, and wireless backhaul for wireless operators.
“We have invested in the telecommunications infrastructure space for nearly 20 years and believe that the combined company, with its incredibly robust network, is well positioned for continued growth serving customers with an ever increasing need for high-performance bandwidth,” said Randy Peeler, Managing Director of Berkshire.
Additional terms of the deal were not disclosed. Current Lightower Fiber Networks investors include M/C Partners, Pamlico Capital and Ridgemont Equity Partners. Current Sidera Networks investors include ABRY Partners and Spectrum Equity Investors.
3:43p | GitHub: Outage Caused by Failover Snafu

Automation is an incredibly important tool in the data center. But it's difficult to anticipate every set of events and conditions, and complex systems with lots of automated infrastructure can sometimes produce unexpected results. That's been the case in some of the recent outages at Amazon Web Services, in which equipment failures have triggered ripples of server and network activity. A similar scenario emerged Saturday at the open source code repository GitHub, as failover sequences triggered high levels of activity for network switches and file servers, resulting in an extended outage for the site that lasted more than five hours.
“We had a significant outage and we want to take the time to explain what happened,” wrote GitHub’s Mark Imbriaco in an incident report. “This was one of the worst outages in the history of GitHub, and it’s not at all acceptable to us.”
The details of the incident are complicated, and explained in some detail in GitHub's update. A summary: GitHub performed a software upgrade on network switches during a scheduled maintenance window, and things went badly. When the network vendor sought to troubleshoot the issues, an automated failover sequence didn't synchronize properly – it did what it was supposed to do, but "unlucky timing" created huge churn on the GitHub network that blocked traffic between access switches for about 90 seconds. This triggered failover measures for the file servers, which didn't complete correctly because of the network issues.
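GitHub's report describes its own stack, but the underlying failure mode, automation reacting instantly to a transient condition, is general. The sketch below is a generic illustration of one common mitigation, requiring a failure to persist before acting on it; it is not GitHub's implementation, and the thresholds and probe function are assumptions.

import time

# Generic illustration (not GitHub's code): before promoting a standby,
# require the primary to stay unreachable for longer than a known transient
# window, so a ~90-second network blip does not trigger a cascading failover.
CONFIRMATION_WINDOW = 120.0  # seconds the failure must persist (assumed value)
POLL_INTERVAL = 5.0

def should_fail_over(probe, clock=time.monotonic, sleep=time.sleep) -> bool:
    """Return True only if probe() keeps reporting failure for the full window."""
    deadline = clock() + CONFIRMATION_WINDOW
    while clock() < deadline:
        if probe():          # primary answered: treat the blip as transient
            return False
        sleep(POLL_INTERVAL)
    return True              # failure persisted: a real system would promote here

A guard like this trades a slower failover for protection against exactly the kind of churn GitHub describes, where the automation fired correctly but on a condition that cleared moments later.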
The value of a detailed incident report is that it identifies vulnerabilities and workarounds that may prove useful to other users with similar infrastructure. As more services attempt to “automate all the things,” understanding complex failover sequences becomes more important, and GitHub’s outage report may prove interesting reading for the devops crowd.
Imbriaco got props from Hacker News readers for the thoroughness of the incident report, and shared some advice on the topic:
“The worst thing both during and after an outage is poor communication, so I do my best to explain as much as I can what is going on during an incident and what’s happened after one is resolved. There’s a very simple formula that I follow when writing a public post-mortem:
1. Apologize. You’d be surprised how many people don’t do this, to their detriment. If you’ve harmed someone else because of downtime, the least you can do is apologize to them.
2. Demonstrate understanding of the events that took place.
3. Explain the remediation steps that you’re going to take to help prevent further problems of the same type.
Just following those three very simple rules results in an incredibly effective public explanation.”
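Imbriaco's three rules map naturally onto a reusable outline. The template below is only an illustrative sketch of that structure, not anything GitHub publishes.

# Illustrative post-mortem skeleton following the three rules above:
# apologize, demonstrate understanding, explain remediation.
POSTMORTEM_TEMPLATE = """\
{title}

Apology:
We are sorry for the disruption this caused. {apology}

What happened:
{timeline}

What we are doing about it:
{remediation}
"""

def render_postmortem(title: str, apology: str, timeline: str, remediation: str) -> str:
    """Fill in the skeleton; each argument is free-form prose written by the operator."""
    return POSTMORTEM_TEMPLATE.format(
        title=title, apology=apology, timeline=timeline, remediation=remediation
    )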
4:10p | Best of Industry Perspectives 2012: Cloud Computing
This was the year that “cloud” went mainstream. In 2012, consumers became aware of Apple’s iCloud. Microsoft TV ads included cloud references. Google’s online apps such as Gmail continued to grow. Outages in the Amazon cloud brought down hugely popular websites such as Netflix and Pinterest.
IT professionals have been thinking about cloud computing for a few years, but this year really marked a shift into action: operationalizing cloud services, securing them, and moving more and more applications into cloud environments. Our Industry Perspectives guest columnists offered good counsel on many aspects of the cloud, such as its impact on teams, business implications and best practices. Here are our top 12 picks of cloud columns for 2012. Enjoy!
- CIO Roles Shifting in Emerging Cloud Landscape - Bryan Doerr of Savvis/CenturyLink, January 4, 2012
- Bandwidth Management, Cloud & the 405 Freeway - Bob Deutsche of Intel, January 6, 2012
- 7 Things Your CEO Should Know About the Cloud – Jason Cowie of Embotics, January 24, 2012
- 5 Reasons to Own Your Cloud - Kent Christensen of Datalink, February 2, 2012
- Cloud Ecosystems, Profitability, Common Sense & the New Order - Bob Deutsche of Intel, February 8, 2012
- Applying Cloud Principles to the Data Center – Ronny Front of Glasshouse Technologies, February 17, 2012
- It’s Onward and Upward with Cloud Security - James Greene of Intel, March 14, 2012
- Platform as a Service Ushers In True Private Cloud - Sinclair Schuller of Apprenda, April 10, 2012
- Moving to the Cloud, One App at a Time - Josh Crowe of Internap, April 27, 2012
- Protecting the Cloud: Data Erasure as a Best Practice - Markku Willgren of Blancco, May 22, 2012
- Understanding the Cloud’s Effect on Facilities Teams - Bill Kleyman of MTM Technologies, June 21, 2012
- Which Cloud Model is Right for Your Company? – Jim Thompson of ViaWest, September 25, 2012
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
4:35p | 2013 Predictions: Cloud Impacts on Staffing

Alan Priestley is a multi-year Intel veteran and currently holds the role of Strategic Marketing Director for Europe, Middle East and Africa.

ALAN PRIESTLEY, Intel
The debate over whether or not the rise of cloud computing will have a positive impact on employment has been going on for some time. A few recent developments have indicated that the cloud may indeed be living up to the hopes of those who expect it to generate a wave of new jobs.
Future Jobs Expected
Researchers and analysts from Gartner have predicted that cloud-related technology will generate 1.9 million IT jobs in the US by 2015. On the other side of the pond, in late September the EU announced that it had launched a cloud computing strategy to boost European business, claiming that it will generate 2.5 million new jobs and increase the region’s total GDP by €160 billion by 2020. Alongside this optimism for the future, we’ve seen concrete steps being taken already, with UK telecoms giant BT announcing this week that it is running a recruitment drive for cloud experts to work in its new Converged Infrastructure Practice.
Of course, there are still many in the IT industry who remain cautious. Recent research from SAP found that as many as 46 percent of those conversing online about the cloud believe that it destroys jobs.
The Upside
I fall into the optimists’ camp and believe, along with Gartner, that we need to step back and take a wider view. Cloud computing itself is just the beginning; when you consider all the other innovations that it underpins – mobile, social computing and big data services, for example – the horizons expand massively.
These new technologies, which would not exist without the cloud to support them, have enabled us to create entire business models and industries that in many cases weren’t viable a few years ago. They also mean that whereas cloud computing has traditionally been viewed as the preserve of the IT department, it now has relevance for other parts of the business. Marketing teams, for example, now need social media experts to engage with online audiences and people trained in leveraging big data to analyze customers’ behavioral trends. These are specialized skills, underpinning full-time roles, and so creating cloud-related jobs across the business.
Continued Innovation Leads to New Roles
There’s no reason why the development of new cloud-enabled platforms should cease, and each of these new platforms brings hundreds of new business and job opportunities. By this time next year, I wouldn’t be surprised to see a variety of new roles emerging that we haven’t even thought of yet.