Data Center Knowledge | News and analysis for the data center industry
Monday, March 24th, 2014
12:17p
Data Center Jobs: Talent Lattice
At the Data Center Jobs Board, we have a new job listing from Talent Lattice, which is seeking a Construction Project Manager in Santa Clara, California.
The Construction Project Manager is responsible for providing overall leadership and driving the success of new and ongoing construction projects, and for supporting current and prospective customers by understanding their business needs and recommending continuous improvement and innovation plans that will maintain and grow sales. The role also requires an understanding of mechanical engineering, as you will provide technical engineering information and assistance to ensure that new data center spaces comply with project requirements, all engineering standards, applicable codes, and specifications. To view full details and apply, see the job listing.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
12:24p
Root Cause Analysis: An Alternative to Blamestorming
David Mavashev is CEO of Nastel Technologies, a provider of application performance management (APM) solutions.
To my surprise, the term “blamestorming” is actually in the dictionary, or at least on dictionary.com, which defines it as “an intense discussion or meeting for the purpose of placing blame or assigning responsibility for a failure.” How is this relevant to IT operations or healthcare IT?
After an extraordinarily rocky start, the federal healthcare exchange, the online marketplace consumers use to purchase health insurance under the Affordable Care Act (a.k.a. “Obamacare”), seems to be working more smoothly. But now problems are cropping up with the state healthcare exchanges. Media reports highlight state-level exchange system issues seemingly every week. This shouldn’t be a surprise, as we are dealing with a highly complex system.
When you alleviate a bottleneck in one location of a complex system, the result is often a newly visible series of bottlenecks in other locations. Transactions now flow past the prior bottleneck only to hit another logjam in a different area of the system. The rule of thumb in performance analysis is to analyze before you fix, and to focus on the most significant bottleneck first. The state of affairs will change once that issue is relieved, and you then focus on the next most significant issue. However, this well-worn IT approach is not always followed.
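The "analyze before you fix" rule can be sketched in a few lines: measure each stage of a transaction, rank the stages by their contribution to total latency, fix only the top offender, then re-measure, since the next bottleneck only becomes visible afterward. The stage names and timings below are purely illustrative, not from any real exchange system.

```python
# Hypothetical sketch: rank pipeline stages by measured latency so the
# worst bottleneck is addressed first. Stage names and timings are
# illustrative only.

from statistics import mean

def worst_bottleneck(samples):
    """Return (stage, mean latency) for the slowest stage.

    samples maps stage name -> list of observed latencies in seconds.
    """
    averages = {stage: mean(times) for stage, times in samples.items()}
    return max(averages.items(), key=lambda kv: kv[1])

# Illustrative measurements for one registration transaction.
samples = {
    "web_front_end":  [0.12, 0.15, 0.11],
    "identity_check": [2.40, 2.90, 2.60],   # the current logjam
    "database_write": [0.30, 0.28, 0.35],
}

stage, latency = worst_bottleneck(samples)
print(f"Fix '{stage}' first (avg {latency:.2f}s), then re-measure.")
```

After the top stage is relieved, the same measurement is run again; only then does the next logjam become the priority.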
Some states have singled out vendor software as the culprit. Others blame a lack of comprehensive testing or inter-operability. Still others cite inconsistent project leadership and failures to address known issues in time to achieve a smooth rollout. Some or all of these glitches may sound familiar to CIOs and IT executives who have spearheaded the launch and maintenance of a complex system.
The Rush to Point Fingers
CIOs may also recognize a familiar tone from people quoted in the news reports – the rush to affix blame. When a complex system doesn’t work, groups that handle components of the larger system tend to focus on deflecting responsibility from their unit. It’s important to find out what went wrong, but a more fruitful discussion would focus on identifying root causes like scalability and infrastructure monitoring capacity.
There are a number of possible explanations for a troubled system rollout. Clearly, the system lacked the capacity to handle demand. Was the anticipated demand known? In this case, the answer is decidedly “yes.” Or worse, there were no requirements for testing loads that simulated anticipated demand. Was there a clear set of “user stories” illustrating what the system must do to be effective? User stories, as part of an agile development environment, often include performance expectations and should also cover the range of users expected to use the application.
A friend of mine told me about her endless troubles in registering for healthcare. She is a private instructor with irregular hours who fit the profile of the type of user this program was supposed to address. Previously, she was not able to get affordable healthcare, and she had hoped this program would address her needs. It might actually do that, if she could get registered. When she tried to register, the application told her that the income she entered did not match what the state had on file. It turned out the application wanted projected income through the current year end. But since she is not an employee with a regular salary, there was no way to provide that with certainty. The application made assumptions that didn’t fit the target audience. Apparently, the user stories created were not appropriate or complete. At this point she still has not made it through the application process.
Alternatively, there may have been flaws in the architecture, or perhaps coding bugs are responsible. Maybe there’s a database access issue. Any and all of these explanations may play a role, but here’s the fundamental problem: the technology professionals charged with resolving the issue typically work in silos, and the person in charge may feel overwhelmed by the sheer volume of analysis and speculation. This is especially true when past experience tells them that all this painstaking work produced little to no result.
12:28p
Iowa Data Center Boom Continues with $255 Million Project Alluvion
Another mystery data center project may land in Iowa, with the city of West Des Moines set to review an application for Project Alluvion, a $255 million data center project from an unnamed company.
The Des Moines Register reports that city documents indicate the mystery company plans an investment that would add at least $255 million to the city tax base, and create 84 jobs. Once given the green light by the city council, the application goes to the Iowa Economic Development Authority asking for state assistance for the proposed development.
The term “alluvion” refers to a gradual increase in the area of land over time. The data center will be built in four phases, with the city contributing $18 million in tax increment financing to help pay for infrastructure improvements and development costs. Project Alluvion would carry a minimum taxable valuation of $255 million, which does not indicate the final cost of the project.
By comparison, Microsoft’s data center in West Des Moines was given a 2013 taxable valuation of just under $146 million.
“If the project comes to fruition and if we’re able to get it done, I think we’re very excited about it,” City Councilman John Mickelson said Friday. “It will add to our property tax base in West Des Moines and it will add some new jobs and it will open up infrastructure in a new part of town.”
Microsoft announced plans last summer under the code name Project Mountain for a $679.1 million expansion of its West Des Moines data center, bringing its total Iowa investment to $864 million.
That expansion was part of more than $1.4 billion in data center projects for Iowa, which also witnessed another expansion of the Google data center in Council Bluffs and a $300 million project for Facebook in Altoona (Project Catapult). Later in 2013, another code-named effort, Project Oasis, turned out to be Travelers Insurance, which landed in a suburb of Omaha.
1:30p
Midokura Partners with Cumulus Networks on Open Networking
Network virtualization company Midokura announced a partnership with Cumulus Networks to offer a joint technology solution that will enable customers to manage workloads on virtualized and non-virtualized infrastructure. The partnership further extends the open networking ecosystem, in which businesses have the flexibility to choose among industry-standard networking hardware, network operating systems and applications. In January, Dell announced that it will offer the Cumulus Linux network OS as an option for its Dell Networking S6000 and S4810 top-of-rack switches.
“The partnership between Midokura and Cumulus Networks represents a significant step forward in the modern networking movement,” said Dan Mihai Dumitriu, Midokura CEO and co-founder. “By enabling virtual networks to span across both virtual and physical workloads, we will give customers the power of choice and the ability to experience networking without constraints.”
Empowering Open Networking
Midokura’s MidoNet is a software-based, highly distributed network virtualization system that allows service providers and enterprises to build, run, and manage virtual networks with increased control and flexibility. Together, Midokura and Cumulus Networks will provide a solution that enables MidoNet to connect to physical switches running Cumulus Linux, allowing network traffic to flow from virtual machines to machines running on physical infrastructure through the VXLAN Tunnel Endpoint (VTEP) gateway. It will enable service providers and enterprise customers to provision networks for physical workloads in a matter of minutes. The joint solution will enable customers to deploy virtual networks, retain bare metal servers, and deliver the desired performance for production-class network traffic at wire rate.
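The encapsulation a VTEP gateway uses to stitch virtual and physical workloads together is VXLAN (RFC 7348): each Layer 2 frame is wrapped in a UDP packet carrying an 8-byte VXLAN header whose 24-bit VXLAN Network Identifier (VNI) selects the virtual network. As a minimal illustration (real VTEPs build this in the switch datapath, not in Python), the header can be constructed like so:

```python
# Minimal sketch of the 8-byte VXLAN header from RFC 7348, the
# encapsulation a VTEP gateway uses to carry L2 frames between
# virtual and physical workloads. Illustrative only.

import struct

def vxlan_header(vni: int) -> bytes:
    """Build the VXLAN header for a 24-bit VXLAN Network Identifier."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08000000          # 'I' flag set: the VNI field is valid
    # Header layout: 32 bits of flags/reserved, then VNI in the top
    # 24 bits of the second word (low 8 bits reserved).
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # 8 bytes: flags word, then VNI 5001 shifted into place
```

Every frame tagged with the same VNI lands on the same virtual network, whether its endpoint is a hypervisor or a physical top-of-rack switch acting as the VTEP.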
“Enterprise and cloud service providers are going through a transformation to SDN-architected networks to improve automation in their data centers via increased network programmability,” stated Cliff Grossner, Directing Analyst, Infonetics Research. “In our view, Midokura and Cumulus Networks are the first in the industry to bring together network virtualization and bare metal networking with the aim of providing an open network that will enable innovation in automating operations for virtual and non-virtual workloads. We believe that enterprises and cloud service providers will want to look at how the Midokura-Cumulus solution will improve application deployment timeframes and reduce operational costs.”
2:35p
Latisys Expands to UK, Reveals Healthy Financials
High-touch hybrid hosting provider Latisys is expanding its platform to the UK and disclosing some financials for the first time. The company has hit an annual revenue run rate of $100 million and is well positioned financially going forward.
The expansion involves deploying its Cloud Enabled Systems Infrastructure (CESI) platform over in London, in a Tier III data center in the Docklands.
“Expanding from our national platform in the US to new international markets represents a natural evolution for Latisys, and we look forward to demonstrating the business benefits of our unified platform to European and US multinationals alike,” said Pete Stevenson, CEO of Latisys. “When we started Latisys in 2007, our goal was to build the most flexible, high performance, hybrid IT infrastructure platform in the industry—supported by exceptional customer care. By integrating best-in-breed technology with process automation, operational best practices and a company-wide culture to serve as an extension of each customer’s IT team, we’ve always had a model that can scale—for customers and investors—and now internationally.”
While the initial expansion amounts to a modest amount of capacity on its platform, it’s worth noting Stevenson’s history with Globix, which had a very sizeable UK business. Stevenson still has all the proper connections on the UK side of the pond.
“Customers are taking us over there,” said Stevenson. “We’re also putting in some excess capacity. We run the company conservatively with a little bit of aggression.”
Cash-Flow Positive
The company also revealed that it has been cash-flow positive since 2012 and is healthy on the financial front. Its growth rates are well above industry averages, in an industry that grows faster than most. Enterprise managed hosting and cloud sales were up 65 percent from 2012 to 2013. Multi-site deployments account for nearly 40 percent of total revenue and grew 35 percent from 2012 to 2013.
“What’s driving these multi-site deployments is the way customers are deploying their technology,” said Stevenson. “Risk mitigation is getting addressed at the executive and board level. Disaster recovery is really an important topic. Another driver is the type of customers we’re seeing, (which includes) a lot of SaaS companies.”
Latisys has built an Infrastructure as a Service platform that is repeatable in new markets, making expansion into future markets simpler.
The company has its data centers networked together and offered through a unified platform.
“Ours is a high touch model,” said Jonathan Sharp, VP of Marketing for Latisys. “We don’t sell to really small business. We made the call years ago that we were not going to be all things to all people. These (customers) have complex workloads, so they’re looking for that consultative support. We’ve designed an infrastructure solution that is flexible and scalable. CIOs know they can’t go it alone and that’s where we come in.”
Emphasizing a Unified Platform
The company has shifted its messaging to hybrid infrastructure delivered on a unified platform, and based on the growth numbers, that message appears to be resonating with customers.
Since its inception in 2007, Latisys has raised $300 million in total capital from its private equity partners and some of the world’s leading financial institutions. The company’s free cash flow positive position provides significant flexibility and control for exploiting the growth opportunities on the horizon.
What’s driving all of this growth? “It’s a little bit of ‘thank you, Amazon,’” said Stevenson. “Amazon is kind of developing the market a little bit. Companies are figuring out they don’t have to develop resources internally. While Amazon is a certain fix for certain types of things, we’ve been very focused on outsourcing critical workloads for customers. The CIO has pressure to do a lot more with a lot less these days, given all the security and redundancy that they need. We built an enterprise platform.”
End users are outsourcing more. Latisys is focusing strictly on mission critical workloads, and offering high touch services, an area a cloud giant like AWS can’t really address. “It’s all about reducing Total Cost of Ownership (TCO) in the end,” said Stevenson.
Why Share Financials Now?
“We’re winning among the midsize enterprise with mission critical workloads,” said Stevenson. “They want to know what kind of company and what size you are. It’s a way to let them know it’s a sizeable company and growing well, and a way to attract people to work for us. Anytime you’re trying to get an engineer, it’s a great way to attract them.”
A lot of Latisys’ closest competitors have been rolled up into telecoms through acquisitions, including Terremark (Verizon) and Savvis (CenturyLink). By offering a peek behind the curtain, the company can show customers that it is going strong and has the capital to continue on its trajectory.
Latisys now has over 1,100 customers in the U.S. and is expanding internationally. Its books are healthy, its growth is healthy, and it is stepping up its game among high-touch, hybrid providers.
3:36p
Cisco Joins Cloud Wars, Pledges $1 Billion for Data Centers
Networking giant Cisco Systems is the latest tech titan to enter the cloud in a big way, pledging to spend $1 billion over the next two years. The company will use that money to build up the data center infrastructure that will run the new Cisco Cloud Services, according to the Wall Street Journal.
Cisco wants to capitalize on customers’ growing desire to rent computing services rather than buying and maintaining their own machines, the basic impetus behind cloud computing. The service will be delivered with and through global partners, including Aussie telecom Telstra, tech distributor Ingram Micro and Indian IT company Wipro.
Cisco has built a massive business topping $49 billion in annual revenue. Most of that is selling networking equipment, and cloud offers an opportunity to diversify that revenue a bit. There’s no telling whether the company’s cloud intentions will fly or falter, as several hardware companies have entered the cloud computing fray to varying degrees.
Amazon Web Services is still the market leader in cloud infrastructure. While Amazon has been touting customer wins across a range of company types, AWS is still predominantly viewed as the place where startups get their infrastructure legs. Cisco’s entry into cloud will focus on the opposite end of the spectrum, targeting enterprise customers looking to shift to the cloud in order to reduce total cost of ownership or increase their agility.
Benefits for Partners
Cisco’s partners stand to benefit greatly, as a tech giant is investing a sum with a lot of zeros in building out a cloud infrastructure they can leverage to compete with the big boys.
The move could also be the result of investor pressure for the company to shore up some kind of cloud strategy. Cisco’s revenue declined about 3.1 percent in the six months ended January 25, and a steeper sales drop is predicted for this quarter. Vendors selling equipment are struggling to keep up with cloud providers, which are the darling of the tech investment world, so many are forced into cloud to both diversify revenue and appease investors who view cloud revenue favorably. This strategy has worked to varying degrees with public companies, and arguably played a role in Dell’s decision to go private.
Cisco enters the cloud in the middle of a price war. AWS has been cutting prices, Microsoft has been cutting prices aggressively, and so has Rackspace (along with doing a major tech refresh on its cloud). Smaller competitors like ProfitBricks are slashing prices as well. Cisco is a late entrant to the cloud wars, but one with the firepower to potentially be a main competitor.
It’s not easy transitioning a traditional technology company into the new cloud paradigm. However, a $1 billion commitment is huge. It also means that Cisco will be building out a lot of data centers, which is good for the data center industry in general.