Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 23rd, 2013
1:50p
Cloud Growth Propels Turnaround at Codero
A look at the cabinets inside a data center operated by Codero, a growing hosting provider. (Photo: Codero)
A year ago, Emil Sayegh took the reins as CEO at Codero, a dedicated hosting provider whose growth was flat at the time. After a year of adding talent and expanding the product portfolio, the company has returned to year-over-year growth of better than 20 percent and is set to expand its infrastructure.
Sayegh says that success was tied to growing the company’s cloud portfolio to enable hybrid infrastructures and a sharper focus on customer service. The company has evolved from a solid dedicated hosting provider with flat growth to a hybrid hosting provider with growth above industry averages.
The company was founded in 1992 as an IT equipment value-added reseller (VAR). Four years later, the company repositioned as Aplus.net, a provider of Internet access, domain name registration, and hosting services. In 2006 it was purchased by Catalyst Investors. The company divested its shared hosting and domain name businesses in a sale to Hostopia (Deluxe Corp) in 2009, rebranding as Codero and shifting its focus to dedicated and managed hosting.
A sequence of three CEOs came on board after the rebranding. Customer satisfaction was consistently high, and Codero developed an automated platform that was unique for its time. But the company didn’t really begin to hit its stride until last year, when it started to see notable growth.
Staff Changes
“We’ve added some really good people,” said Sayegh. “We just hired some of the top networking experts in the industry, bolstering the networking team. It’s the reason we’ve had great uptime. You need to have a good networking architect. From an uptime perspective, we’ve differentiated ourselves by constantly holding ourselves to the highest standards. 100% uptime guaranteed, plus we mean it.”
100 percent uptime? Website monitoring company Pingoscope tracks hosting provider performance throughout the year. Codero clocked in at 99.99517 percent uptime, with a single 7-minute outage, and topped the list.
The company continues to improve its automation and cloud offerings. Sayegh says hiring Chandler Vaughn as senior VP of products was instrumental. “He’s an industry expert who knows cloud like no other,” Sayegh said. Vaughn was a major part of hosting strategies at Rackspace and Sungard, and headed the cloud computing institute at the University of Texas before joining Codero in September. “We’ve made tremendous progress with cloud and automation since,” said Sayegh.
The company also opened an office in Austin, Texas in 2012. “Opening an office in Austin enabled us to get a completely different pool of talent,” said Sayegh. “Before we were limited to Kansas City and Phoenix. Austin is becoming a good place for hosting talent.”
Focus on Cloud Hosting
The company has three lines of service: dedicated hosting, cloud hosting and managed hosting, and over the past year it has pushed deeper into both cloud and managed services.
“We’ve added several new products: smart servers, a virtual dedicated offering,” said Sayegh. “We added private cloud that we branded micro-cloud.” Adding these pieces as a complement to its bread-and-butter dedicated hosting has been a boon for the company.

2:01p
Telx Buys Clifton Building as it Readies NJ Campus
The raised-floor area at the NJR2 Telx data center in Clifton, NJ. (Photo: Rich Miller)
Colocation provider Telx continues to strengthen its footprint in the New York region. The company has purchased the building in Clifton, New Jersey that houses its NJR2 data center, and achieved Tier III certification right next door for its NJR3 facility.
Both milestones bring Telx closer to its goal of creating a data center campus in Clifton, which is about 10 miles west of Manhattan. With full control of both facilities and the surrounding perimeter, Telx has begun to invest in property upgrades that will allow these facilities to comply with the most stringent security and operational requirements.
The property acquisition gives Telx two adjacent facilities, a unique setup that allows a client to establish physical redundancy across two completely separate infrastructures, both managed by the same data center operator within the same secure perimeter.
Shepcaro: “A World Class Property”
“We intend to develop both assets into a world-class interconnected property with both buildings adhering to some of the most stringent compliance requirements,” said Eric Shepcaro, CEO of Telx. “We are also excited to be able to fully control this entire location with new operating procedures, a secure perimeter and the development of a new Telx technology center for future R&D and product development. In fact, we have already begun evaluating wireless and other technologies between this location and our Manhattan data centers.”
Telx completed its purchase of 100 Delawanna Avenue from Mountain Development Corporation. There are benefits to being a landlord/owner. The purchase of the property allows Telx to develop an advanced data center campus as well as a future technology center for the evaluation and development of new data center products and services for clients. NJR2 allows for 60,000 square feet of white space.
NJR3 achieved Tier III status from the Uptime Institute for its Concurrent Maintainability design. The facility offers 215,000 gross square feet of high-power-density, energy-efficient space, including more than 100,000 square feet of customizable colocation space.
Security upgrades will include a perimeter fence with anti-climb and intrusion detection technology, guarded perimeter property gates, full campus surveillance using advanced imaging technologies and other undisclosed new security enhancements.

3:39p
Cisco To Boost Mobile Networking With Intucell Acquisition
Cisco (CSCO) today announced its intent to acquire mobile networking company Intucell for $475 million in a deal that will position the networking giant for huge growth in mobile data traffic.
Intucell provides advanced self-optimizing network (SON) software, which enables mobile carriers to manage and optimize cellular networks automatically, according to real-time changing network demands. The addition of Intucell will help Cisco target global service providers, adding a network intelligence layer to manage and optimize spectrum, coverage and capacity, and ultimately the quality of the mobile experience.
“The mobile network of the future must be able to scale intelligently to address growing and often unpredictable traffic patterns, while also enabling carriers to generate incremental revenue streams,” said Kelly Ahuja, senior vice president and general manager, Cisco Service Provider Mobility Group. “Through the addition of Intucell’s industry-leading SON technology, Cisco’s service provider mobility portfolio provides operators with unparalleled network intelligence and the unique ability to not only accommodate exploding network traffic, but to profit from it.”
The Cisco Visual Networking Index predicts that mobile data traffic will grow 18-fold from 2011 to 2016, a compound annual growth rate of 78 percent. The report also predicts that global mobile data traffic will reach an annual run rate of 130 Exabytes in 2016.
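The headline figures are easy to sanity-check: an 18-fold increase over the five years from 2011 to 2016 works out to roughly a 78 percent compound annual growth rate. A quick sketch of the arithmetic:

```python
# Sanity check: an 18-fold increase over five years (2011-2016)
# corresponds to a ~78% compound annual growth rate (CAGR).
years = 2016 - 2011                      # five compounding periods
growth_multiple = 18                     # 18-fold traffic growth

# CAGR = multiple^(1/years) - 1
cagr = growth_multiple ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")       # ~78.3%

# And in reverse: 78% annual growth compounds to roughly 18x in 5 years
print(f"Multiple at 78% CAGR: {1.78 ** years:.1f}x")  # ~17.9x
```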
The Intucell acquisition enhances Cisco’s ability to deliver next-generation solutions with a SON software platform that supports multi-application, multi-vendor and multi-technology capabilities and enables service providers to manage operational costs and make better use of infrastructure investments. Upon the close of the acquisition, Intucell employees will be integrated into Cisco’s Service Provider Mobility Group, reporting to Shailesh Shukla, vice president and general manager, Software and Applications Group.
3:43p
Top 10 IT Predictions for 2013
Paul Cormier is President of Products and Technologies, Red Hat, Inc.
In 2012, CIOs and IT teams in enterprises worldwide leveraged cloud computing architectures while designing their data centers to best manage the explosive growth of structured and unstructured data. In the next year, Red Hat envisions even more significant movement in the data center — in cloud, middleware, storage and virtualization technologies. Here are our top predictions for enterprise IT.
1. The hybrid approach will be the prevalent cloud deployment model for enterprises worldwide. Hybrid cloud enables resources to be made available to users as easily as if they were accessing a public cloud, while keeping the process under centralized, policy-based IT management. In 2013, enterprises will take advantage of hybrid cloud architectures as a way to build a more dynamic computing architecture over time.
2. OpenStack will continue to demonstrate the power of community innovation. Openness is one of the most important enablers of hybrid IT because it helps users avoid lock-in to vendors and specific ecosystems. OpenStack enjoys a broad community with more than 180 contributing companies, including Red Hat, and 400 contributing developers. We will see all that developer involvement lead to commercial products in 2013, the same way the open source development model has led to innovative products in operating systems, middleware, and other areas.
3. Private (and hybrid) Platform as a Service (PaaS) will go mainstream. As has been the case with Infrastructure as a Service, PaaS will be seen not just as a public cloud capability but also as a private and hybrid capability.
4. Open source software will decouple proprietary storage hardware and software stacks. The rapid pace of innovation at the software layer is likely to far outpace innovation in hardware. Today, monolithic proprietary storage hardware and its proprietary software layer cannot be decoupled. That will all change in 2013, with rapid commoditization and standardization at the hardware level combined with increased intelligence at the software layer.
5. Enterprise storage will transform from a ‘data destination’ into a ‘data platform.’ As a platform for big data and not just a destination for data storage, enterprise storage solutions will need to deliver cost-effective scale and capacity; eliminate data migration and incorporate the ability to grow without bound; bridge legacy storage silos; ensure global accessibility of data; and protect and maintain the availability of data.
6. More organizations will adopt an integrated DevOps approach. Increasing communication, collaboration and integration between developers and operations teams will eliminate issues stemming from incomplete hand-offs, misinformation or insufficient skills.
7. Buying patterns and the perception of mobile technology will change dramatically. Devices will be selected based on content and services first and technology second. These applications and services will be tied to enterprise middleware technologies such as CEP, business intelligence and BPM to create more cohesive and accessible executive and business dashboard tools.
8. The multi-hypervisor data center will continue to grow. Just as many enterprises have found that single-vendor strategies for operating systems and hardware do not make sense in an agile world, a single vendor for virtualization doesn’t either. Multiple virtualization vendors will continue to proliferate in enterprise data centers in 2013.
9. The operating system will remain the foundation of the IT infrastructure. The operating system has served as the cornerstone of traditional IT for decades. As organizations continue to move to the cloud in 2013, the operating system will continue to provide a critical foundation.
10. New business models will emerge. Gone are the days when enterprise customers just want a channel partner to fulfill a product need. They want trusted advisers: partners who can guide them through technology industry changes and help provide competitive differentiation in their market or industry.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

4:00p
Creating Data Center Strategies with Global Scale
Before you can begin to analyze or decide where to locate your organization’s data centers, you need to step back and understand your immediate and long-term goals for the business and how it is best served by its computing architecture. Decisions made in isolation, with the typical separation between senior management and the facilities and IT departments, represent a significant impediment to a global project. A cooperative alliance is imperative to its success.
There is a rapidly growing trend toward outsourcing the data center, as many organizations re-evaluate the need to build, own and operate their own facilities. Many are outsourcing, in whole or in part, in an effort to concentrate their resources on their core business. This was discussed in detail in “Build vs Buy,” Part I of this Executive Guide series.
Once you have made a business decision to extend the reach of your organization’s computing systems into a new global territory, it would be the ideal time to examine your entire existing IT architecture. Do an honest self-evaluation, using your internal business and IT teams, and possibly also using outside consultants. Evaluate if it has been meeting its goals and has been performing as expected or if it needs to be updated and optimized.
The time to upgrade your IT systems is before you start replicating them all over the world, rather than having to upgrade an entire new global network.
If you are confident that your IT systems are fully extensible, then potential locations for the new satellite data centers, as well as their projected size and scope, need to be closely evaluated for both immediate and long-term requirements. If you are expanding into an area where your organization does not have a large established base, it may be wiser to initially consider using colocation (colo) or hosted service providers in some countries. Virtualization and data replication technologies may allow you to cost-effectively extend your reach. This is especially true if traffic and computing loads are expected to remain low for the first several years in that region.
This would allow you to begin servicing the local market immediately and to gather computing usage trends and peak bandwidth profiles, so you can better project your long-term requirements before building your own data center for that area. Consider shorter contract commitments, but try to negotiate fixed-price extensions so that you will not be exposed to large price hikes once your initial term expires. This gives you the flexibility to leave a colo or hosting provider if you are ready to expand into your own data center, but the option to stay in place in the event your growth projections for that area are not met. Be aware of contractual issues such as Service Level Agreements (SLAs), early termination penalties, the costs for any additional services, and, of course, the history and reputation of the service providers.
If you have never used a colo before, note that each provider has its own formula for power provisioning. Some charge per circuit (i.e. a fixed price for a 120V/20A or a 208V/20A outlet, regardless of actual energy used) for each cabinet, while other sites meter and charge for actual usage. How many circuits you need and how heavily you load them will greatly impact your monthly costs.
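To see how the two billing models diverge, here is a rough comparison using hypothetical prices; the circuit fee and per-kWh rate below are illustrative only, and actual rates vary widely by provider and market:

```python
# Hypothetical colo power pricing -- illustrative numbers only.
CIRCUIT_PRICE = 300.0      # $/month flat fee per 208V/20A circuit
METERED_RATE = 0.18        # $/kWh under metered billing
HOURS_PER_MONTH = 730      # average hours in a month

def per_circuit_cost(num_circuits):
    """Flat fee per circuit, regardless of actual draw."""
    return num_circuits * CIRCUIT_PRICE

def metered_cost(avg_kw_draw):
    """Pay only for the energy actually consumed."""
    return avg_kw_draw * HOURS_PER_MONTH * METERED_RATE

# One circuit, lightly loaded vs. loaded near its usable capacity
for avg_kw in (1.0, 3.3):
    print(f"{avg_kw} kW average draw: flat ${per_circuit_cost(1):,.0f}/mo "
          f"vs. metered ${metered_cost(avg_kw):,.0f}/mo")
```

Under these assumed rates, a lightly loaded circuit overpays on the flat plan, while a circuit loaded near capacity comes out ahead of metering, which is why both circuit count and load profile matter when comparing quotes.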
When planning your communications networks, also note that some colo sites are “carrier neutral” and allow you to connect directly to any communication provider, while others only allow you to connect to carriers they have an in-house relationship with. The cost of bandwidth, the choice of providers and carrier diversity should all be considered carefully. Also be aware of who you are contracting with for your services: the carrier directly, or the colo operator. This can affect your future options should you decide to move to another colo or into your own data center.
Also review their overall security and access procedures, as well as any “hands-on” support options they offer.
Each week, we will launch the following articles in this executive education series on building a global data center strategy:
- Transitioning the data center
- Communications and network design considerations
- Ten considerations in building a global data center strategy
This article is the fourth in a six-part series on data center strategies for executives. The other articles in the series can be found here:
You can download a complete PDF of Creating Data Center Strategies with Global Scale by clicking here. This series is brought to you courtesy of Digital Realty.

4:30p
Emerson’s 45kW Rack to House Hyperscale Workloads
Emerson Network Power is focused on the hyperscale computing market, which calls for extreme density and innovations in rack design. At last week’s Open Compute Summit, the Emerson team displayed a rack solution that can support up to 45 kW per rack, integrating power distribution and back-up into the Open Rack specification (created with off-the-shelf components). David Gerhart and Eric Wilcox from the new Emerson Hyperscale Solutions team provided an overview of the rack to Data Center Knowledge editor Rich Miller. The rack, which uses the 19-inch footprint, includes a single AC feed (DC goes to the servers, but is only used for back-up) with an onboard centralized inverter and 48V batteries. There is just one minute of battery back-up in the rack, during which the servers’ workload is shifted or shut down “gracefully.” The video runs 5 minutes 30 seconds.
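One minute of ride-through at full load is still a meaningful amount of stored energy. As a back-of-envelope sketch (not Emerson’s published specs; it ignores inverter losses and battery derating, which would push the real requirement higher):

```python
# Back-of-envelope sizing for one minute of battery ride-through
# at the rack's full 45 kW load on a 48V bus. Ignores inverter
# losses and discharge-rate derating, so real strings need more.
rack_load_kw = 45.0
backup_minutes = 1.0
bus_voltage = 48.0

energy_kwh = rack_load_kw * (backup_minutes / 60)     # stored energy needed
current_a = rack_load_kw * 1000 / bus_voltage         # discharge current
capacity_ah = current_a * (backup_minutes / 60)       # amp-hours at 48V

print(f"Energy: {energy_kwh:.2f} kWh, "
      f"discharge current: {current_a:.0f} A, "
      f"minimum capacity: {capacity_ah:.1f} Ah")
```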
The company recently started a new business unit — Emerson Hyperscale Solutions — to address the needs of hyperscale data center operators. (See DCK Coverage – Emerson Adapts Open Compute, Eyes Hyperscale Market.)
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

4:36p
CapRate NYC Summit and Expo
Data center real estate and technology infrastructure executives will gather in New York for CRE’s Second Annual Greater New York Data Center Summit & Expo on February 27. This executive-level summit will be held at the New York City Bar Association and will bring together the most active and innovative firms for important discussion, debate and networking. More than 40 speakers will participate in multiple panel discussions aimed at providing executives with the market intelligence and networking opportunities needed for new business development in today’s environment.
Speakers include:
- Brian Doricko, Director – Eastern Region, Digital Realty Trust
- Michael Levy, Analyst – Datacenters, 451 Research
- Michael Bucheit, Chief Executive Officer, FiberMedia Group
- Chris Crosby, CEO, Compass Datacenters
- John Sabey, President, Sabey Data Centers
Venue
The New York City Bar Association
42 West 44th Street
New York, NY 10036
(212) 382-6600
For more information and registration, visit the CRE website.

9:28p
Google Pours $1 Billion Into Data Centers in 3 Months 
Google poured $1 billion into its data center operations in the fourth quarter of 2012, marking its highest quarterly investment ever in Internet infrastructure. The only time the company has spent more on capital expenditures was the fourth quarter of 2010, when it spent $2 billion to purchase 111 8th Avenue, primarily for its office space.
The fourth quarter capital spending, announced in Thursday’s earnings report, boosted Google’s full-year investment in its servers and mission-critical facilities to more than $3.27 billion, illustrating the strategic importance of data centers in Google’s business. That’s a ton of money, but it’s actually down slightly from $3.43 billion in 2011.
Google’s capital expenditures fluctuate from quarter to quarter. The latest increase in spending reflects Google’s focus on expanding its existing data center campuses. The company is pumping $600 million into an expansion of its campus in Berkeley County, South Carolina and another $200 million into its facilities in Council Bluffs, Iowa. In the third quarter, Google unveiled its first data center project in South America, which will be located in Quilicura, Chile.
Here’s a look at Google’s quarter-by-quarter spending on capital expenditures.
- 1Q 2007: $597 million
- 2Q 2007: $575 million
- 3Q 2007: $553 million
- 4Q 2007: $678 million
- 1Q 2008: $842 million
- 2Q 2008: $698 million
- 3Q 2008: $452 million
- 4Q 2008: $368 million
- 1Q 2009: $263 million
- 2Q 2009: $139 million
- 3Q 2009: $186 million
- 4Q 2009: $221 million
- 1Q 2010: $239 million
- 2Q 2010: $476 million
- 3Q 2010: $757 million
- 4Q 2010: $2.55 billion
- 1Q 2011: $890 million
- 2Q 2011: $917 million
- 3Q 2011: $680 million
- 4Q 2011: $951 million
- 1Q 2012: $607 million
- 2Q 2012: $774 million
- 3Q 2012: $872 million
- 4Q 2012: $1.02 billion
A capital expenditure is an investment in a long-term asset, typically physical assets such as buildings or machinery. Google says the majority of its capital investments are for IT infrastructure, including data centers, servers, and networking equipment. In the past, the company’s CapEx spending has closely tracked its data center construction projects, each of which requires between $200 million and $600 million in investment.
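The quarterly figures above can be cross-checked against the annual totals quoted in the article. A quick sketch, with figures in millions of dollars and the final 2012 entry read as the fourth quarter:

```python
# Quarterly capex figures from the article, in millions of dollars.
capex = {
    2011: [890, 917, 680, 951],
    2012: [607, 774, 872, 1020],   # final entry: the $1.02B fourth quarter
}

for year, quarters in sorted(capex.items()):
    total = sum(quarters)
    print(f"{year}: ${total:,}M for the year")
```

Both totals line up with the figures in the article: roughly $3.43 billion for 2011 and $3.27 billion for 2012.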