Data Center Knowledge | News and analysis for the data center industry
Thursday, April 11th, 2013
1:22p
Countering the Threat of Cloud: IT Ops With a Service-Oriented Approach
Vic Nyman is the co-founder and COO of BlueStripe Software. Vic has more than 20 years of experience in systems management and APM, and has held leadership positions at Wily Technology, IBM Tivoli, and Relicore/Symantec.
Cloud computing is perceived as a significant threat by some data center organizations. By changing the focus away from managing server resources and adopting a Service-Oriented approach to IT Operations, IT organizations can turn that threat into an opportunity while helping to deliver business innovations to their enterprises.
The corporate data center faces unprecedented competition for its internal customers. The threat of wholesale IT departmental outsourcing has been with us for quite a while, but for the individual employee, an outsourcing contract has often meant keeping the same job, just on the payroll of XYZ Systems instead of Venerable Bank and Trust.
Cloud has the potential to be different. It means your company’s applications will run in somebody else’s data center, using somebody else’s employees to manage somebody else’s servers. It means that business-based application groups can bypass the operations process entirely. And while it is considered unlikely that large, critical legacy applications will be moved to cloud in the immediate future, over time Cloud can mean data center consolidations and staff reductions.
At my company, we deal with customers who are facing this issue on a regular basis. One Director of IT Infrastructure recently told a story about trying to impose some discipline on their server management process, and being told flat out, “We can get the server we want from Amazon in 15 minutes. What’s wrong with you?” Clearly this is a whole new world.
Better Service Is the Answer
The key for data center teams is to deliver better service than the cloud providers do. Part of the demand for cloud services stems from the promise of hassle-free, efficient delivery on service levels – that business applications will deploy and run as requested, with minimal friction and downtime.
In many companies, the data center teams use a server resource-based approach to managing application performance. The focus is often on machine resource metrics – CPU and memory utilization, disk IO, network performance – metrics that are only loosely correlated with actual application performance. A better approach is to concentrate on application and transaction response times.
We’ve seen customers who take this approach make significant improvements in their key performance metrics. Availability for mission-critical applications has far exceeded SLA levels, and IT Operations teams have been freed up to deliver new capabilities.
Here’s how they’ve changed the way they manage the delivery of applications:
- First, recognize that transaction response times are more important than resource utilization. In a large, interconnected application with multiple tiers and extensive virtualization, chances are good that some servers will show high CPU utilization. Chances are also good that those servers will not have anything to do with an application slowdown. Focusing on the individual transaction response time will yield the source of the problem, and will help avoid “red herring” activities that don’t contribute to the solution.
- Second, recognize that every component affects transaction response times. Highly distributed, inter-connected services typically involve ten or more servers – sometimes hundreds of servers. Don’t just look at the application server – the team doing the triage needs to consider the web tier, authentication, middleware, database, and even third-party services.
- Last, recognize that within the problem server, every infrastructure layer affects component response times. Every dependency of the problem server is the potential culprit during a slowdown – rather than just looking at CPU and memory utilization, the data center team needs to look at the application component, other applications on the server, the operating system, virtualization, storage, networking, shared services like DNS, and even server management tools.
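The triage approach above can be sketched in code. This is a minimal illustration of response-time-based triage, not any vendor's actual tooling; the tier names, server names, and latencies are all hypothetical:

```python
# Hypothetical sketch: triage by per-tier transaction response time rather
# than by server resource metrics. All values are illustrative.
from dataclasses import dataclass


@dataclass
class Hop:
    tier: str        # e.g. "web", "auth", "middleware", "database"
    server: str
    latency_ms: float


def slowest_hop(hops):
    """Return the hop contributing the most to end-to-end response time."""
    return max(hops, key=lambda h: h.latency_ms)


# One transaction traced across a multi-tier application.
transaction = [
    Hop("web", "web01", 12.0),
    Hop("auth", "auth02", 8.5),
    Hop("middleware", "mq03", 41.0),
    Hop("database", "db01", 310.0),   # the real bottleneck
]

total_ms = sum(h.latency_ms for h in transaction)
culprit = slowest_hop(transaction)
print(f"end-to-end: {total_ms:.1f} ms; start triage at "
      f"{culprit.tier} ({culprit.server}, {culprit.latency_ms:.1f} ms)")
```

Note that a server with high CPU utilization (say, the web tier) might never appear in this trace as the bottleneck – which is exactly the "red herring" the transaction-first view avoids.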
By applying the Service-Oriented approach, data center teams can greatly improve their results in dealing with application management – making themselves competitive with cloud offerings.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 1:38p |
Oracle Expands Big Data Appliance Family
To help jump-start customer projects for big data, Oracle (ORCL) announced new additions to its Big Data Appliance product family, promising faster performance for all Oracle applications. The announcements were made in Denver this week at Collaborate 13, the technology and applications forum for the Oracle community.
Big Data Appliance Enhancements
Oracle announced the availability of the Oracle Big Data Appliance X3-2 Starter Rack and X3-2 In-Rack Expansion, letting customers select an optimally sized appliance and then cost-effectively scale as their data grows. The Starter Rack contains six Oracle Sun servers within a full-sized rack with redundant InfiniBand switches and power distribution units, while the In-Rack Expansion adds a pack of six additional servers to grow that configuration to 12 nodes and then to a full rack of 18 nodes. Both systems include Cloudera’s Distribution Including Apache Hadoop (CDH), Cloudera Manager and Oracle NoSQL Database.
The Oracle Big Data Appliance X3-2 in a full rack configuration is now available through Oracle Infrastructure as a Service (IaaS), deployed on-premise behind the customer’s firewall for a monthly fee. The X3-2 product family comprises Engineered Systems that simplify the implementation and management of Big Data by integrating hardware and software to acquire, organize and analyze Big Data.
Oracle In-Memory Applications
Oracle announced new In-Memory applications for Oracle Engineered Systems, leveraging DRAM, flash memory and the near-zero-latency InfiniBand network fabric. Many applications within the PeopleSoft, JD Edwards, Oracle Supply Chain Management and Siebel product families are now available as Oracle In-Memory applications. Existing Oracle Applications run as much as 16X faster on Oracle Engineered Systems, delivering tangible business benefits for customers through extreme performance, energy efficiency, lower total cost of ownership, reliability and scalability.
“Oracle continues to demonstrate its commitment to innovation that produces business results and value for Oracle Applications running on Oracle Engineered Systems,” said Steve Miranda, Oracle Executive Vice President of Application Development. “The release of Oracle In-Memory Applications will help organizations not only complete load runs faster, but also discover new insights for efficiencies that would have been previously overlooked.”
2:20p
CoreSite Leases Entire Data Center in Santa Clara
The server hall of a data center operated by CoreSite, which recently leased an entire build-to-suit building in Santa Clara.
It’s not every day you see a company lease an entire data center building in one shot. But that’s what CoreSite Realty has accomplished with its newest project in Silicon Valley, where it is nearing completion on a 101,250 square foot build-to-suit project that has been pre-leased by a single large customer.
The new building is part of the growing CoreSite campus in Santa Clara, the Valley’s leading data center hub. The lease was discussed by CoreSite CEO Thomas Ray on the company’s earnings call last month.
“On our Santa Clara campus, we expect to commence and complete SV5, the 100,000-square foot powered-shell build-to-suit,” said Ray. “This pre-leased development enables us to serve a strategic customer and accelerate the monetization of a portion of the land we own on the campus.”
In a powered shell property, the developer builds the structure and mechanical and electrical infrastructure, but the tenant builds out the data center environment. That differs from the wholesale model, in which the landlord builds the complete plug-and-play data center environment, including raised floor space. CoreSite said its construction costs on the project were $19 million, with projected annual rent of $3.2 million.
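Those two figures imply a healthy return on the powered-shell model. A quick back-of-envelope calculation (our inference, not a figure from the article, and it ignores land, operating costs, and lease escalations):

```python
# Back-of-envelope yield implied by CoreSite's reported SV5 figures.
# "Yield" here is simply annual rent over construction cost.
construction_cost = 19_000_000   # dollars, reported construction cost
annual_rent = 3_200_000          # dollars per year, projected rent

unlevered_yield = annual_rent / construction_cost
payback_years = construction_cost / annual_rent

print(f"yield on cost: {unlevered_yield:.1%}")       # ~16.8%
print(f"simple payback: {payback_years:.1f} years")  # ~5.9 years
```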
Key Customer May Boost Campus
“We are helping a customer that’s very, very important to us across North America and just furthering and deepening a good relationship with that customer,” said Ray. “What we believe that customer will be doing on our campus will make the campus even more attractive to other networks and cloud service providers and enterprises.”
Santa Clara is one of the nation’s most competitive markets, with nearly all leading data center and colocation companies maintaining facilities. The CoreSite campus has space for two additional buildings, one currently approved for 210,000 square feet of development, and a second site that will support between 100,000 and 300,000 square feet. The company also leased a previous building on the campus to a single large tenant.
The deal continues the market momentum for CoreSite (COR), a publicly-held real estate investment trust (REIT) which has been one of the industry’s strongest performers on Wall Street, where its shares soared 55 percent in 2012 and added another 26 percent gain in the first quarter of 2013.
Silicon Valley isn’t the only major market where CoreSite is building, as the company has new projects underway in both northern Virginia and New Jersey.
- In Reston, Virginia, CoreSite is building a 200,000 square foot greenfield data center. The company says it will invest $60 million in the facility, commencing construction in the first half of 2013 and delivering finished customer space in early 2014.
- In Secaucus, New Jersey, the company has purchased a 280,000 square foot building for a new data center, and expects to invest $65 million to buy the facility and redevelop an initial phase of 65,000 square feet of data center space. The facility, known as NY2, will offer 4.5 megawatts of capacity in the fourth quarter of this year.
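The Secaucus numbers also imply a rough all-in cost per megawatt. This is our inference from the figures above, not a number CoreSite disclosed, and the $65 million covers both the building purchase and the first-phase build-out:

```python
# Rough cost-per-megawatt implied by the NY2 first phase.
investment = 65_000_000   # dollars: purchase plus initial redevelopment
capacity_mw = 4.5         # first-phase critical load, megawatts

cost_per_mw = investment / capacity_mw
print(f"all-in cost: ${cost_per_mw / 1e6:.1f}M per MW")  # ~$14.4M per MW
```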
The Secaucus facility will be the company’s first data center in New Jersey. CoreSite has a site in New York City at 32 Avenue of the Americas, and the Secaucus facility will mark an important expansion into the suburban New Jersey market, which offers larger footprints for wholesale data center providers like CoreSite, as well as better economics than Manhattan.
Secaucus vs. Central New Jersey
“We have high expectations for our NY2 expansion as we enter what we believe is one of the fastest-growing and most profitable submarkets in the U.S. for our targeted applications and customers,” said Ray.
Ray said CoreSite opted to build in Secaucus in northern New Jersey rather than central New Jersey, where three of its competitors in the wholesale market have built their data centers.
“Two of the leading colocation and IT services companies have experienced consistent, robust and highly profitable growth in Secaucus in the Meadowlands,” said Ray. “Additionally, the Secaucus submarket is the leading location in the region for financial services firms, and provides robust, diverse, low-latency network access to Manhattan. These factors differentiate the Secaucus area from the outer submarkets of Somerset and Middlesex counties, which offer the same cost of power but significantly longer and less diverse fiber routes to Manhattan and subsea cable landing stations, access to which is often a key requirement for performance-sensitive colocation applications. We see strong opportunity in Secaucus, and look forward to bringing our NY2 facility online at the end of this year.”
4:00p
Improve Connectivity with HP Virtual Application Networks
With more devices and more connectivity into new technologies like cloud computing, many organizations are looking for better ways to optimize their networks. To maintain a solid infrastructure, organizations must remain agile as reliance on the modern data center increases. In this white paper by ESG, commissioned by HP, you can learn about key trends which closely align with business agility and data delivery. These trends include:
- Aggressive data center consolidation.
- Increasing use of server virtualization technologies.
- Wide and growing deployment of web-based applications.
- Consumerization of IT and BYOD.
Because of these growing trends, companies must deploy highly agile network environments capable of scaling to meet both business and end-user demands. In this white paper, ESG goes on to point out that existing legacy networks will not be able to sustain future growth. Therefore a new “cloud-friendly” architecture is required. This new architecture will support rapid growth, handle dynamic environments based on business and IT policies, and provide sufficient levels of automation and orchestration for rapid software-based deployment of end-to-end services. More specifically, the network must be:
- A foundation for connectivity and performance.
- Virtualized and abstracted.
- Tightly integrated with adjacent domains and orchestration programs.
To meet network demands and continue to deliver powerful cloud solutions, HP introduced its Virtual Application Networks, or VANs. VANs are logical, purpose-built virtual networks that leverage HP’s existing FlexNetwork architecture, and are designed to connect users to applications and services, resulting in a scalable, agile, and secure network that streamlines operations.

[Image source: Enterprise Strategy Group, 2012]
Download this white paper to learn about HP’s VAN methodology. It covers a comprehensive approach to designing compute, network, and storage environments, with an overarching theme of converged infrastructure. HP’s vision for cloud computing has set the wheels in motion for many organizations to move off legacy networks. Using HP’s VAN solution, organizations can alleviate short-term challenges while gaining a flexible network for future business and IT initiatives.
7:02p
Cisco, Microsoft Team to Target the Data Center
Cisco (CSCO) announced a range of new joint technologies and integrated solutions targeting data centers that use Microsoft’s Cloud OS technologies. With those customers in mind, the two companies are combining the Cisco Unified Data Center architecture with Microsoft Fast Track architecture solutions.
Microsoft Fast Track 3.0 solutions simplify the management of combined Cisco and Microsoft data centers by giving Microsoft customers programmatic access to the Cisco Unified Computing System (Cisco UCS). Both the Cisco and EMC and the Cisco and NetApp reference architectures have been validated for the Microsoft Fast Track 3.0 program. The Cisco Nexus 1000V series virtual and cloud networking platform, combined with the Windows Server 2012 Hyper-V Extensible Switch and System Center Virtual Machine Manager, allows customers to virtualize portions of their data center networks alongside physical networking and cloud computing infrastructure. The Cisco UCS UI Extension Add-in for System Center 2012 SP1 Virtual Machine Manager provides centralized management of Cisco UCS by exposing Cisco UCS management controls from within System Center Virtual Machine Manager.
“Microsoft’s Cloud OS approach, based on Windows Server 2012 and System Center 2012 SP1, gives customers a comprehensive platform for implementing their infrastructure on premises, with a hosting service provider, and in the cloud,” said Brad Anderson, corporate vice president, Windows Server and System Center at Microsoft. ”Combining the proven Microsoft Fast Track architecture with Cisco’s innovative Unified Data Center architecture provides partners and customers with a first-class option for navigating their way through today’s new age of data center deployment and management.”
Cisco and Microsoft intend to roll out targeted channel initiatives in select countries to enable alignment in selling Microsoft Fast Track 3.0 architecture solutions.
“Cisco and Microsoft are focused on helping customers realize a new business vision for the cloud era, and Scalability Experts has become a key leader in advancing this vision by leveraging scalable and high-performance platforms,” said Raj Gill, founder and chief technology officer, Scalability Experts. “Increasing the performance of database operations, lowering IT costs and improving business decision-making capabilities is what we focus on delivering to each of our clients. Microsoft solutions and Cisco’s UCS compute family are key strategic platforms for enabling us to consistently exceed our customers’ expectations.”
7:06p
Network News: Hurricane Picks Zayo for 100G Backbone
Here’s a roundup of some of this week’s headlines from the network industry:
Zayo selected by Hurricane Electric for 100G backbone. Zayo Group announced that Hurricane Electric Internet Services has purchased 100G wavelength services on Zayo’s newly installed 100G routes. The upgraded routes include New York to Washington, D.C. and Chicago to Memphis. Hurricane Electric is connected to 60 major exchange points and exchanges traffic directly with more than 2,700 different networks, and the 100G routes will help create greater per Gigabit cost efficiencies relative to prior service deployments. “Installation of our newly implemented 100G Wave system along major U.S. routes allows Zayo to leverage the latest generation technology to meet the capacity demands of customers like Hurricane Electric,” says Zach Nebergall, Vice President of Waves Product Group at Zayo. “This service delivers increased capacity and cost efficiency, as well as improved latency through the latest technology.”
Ciena selected by XO for video transport solution. Ciena (CIEN) announced that XO Communications will use Ciena’s Digital Video Transport solution to deliver native digital video transport services to media and entertainment customers across the XO nationwide network. With XO Communications’ coast-to-coast 100G network, the Ciena solution will allow XO to cost-effectively transport high-definition video content for its customers without affecting original quality. The solution includes the 565 Advanced Services Platform, a compact and cost-optimized metro WDM platform that enables a variety of data, storage and video services to be cost-efficiently aggregated onto an optical wavelength-based network or service. The Digital Video Transport solution will be deployed in conjunction with other Ciena platforms in the XO network. “In the media and entertainment industry, uptime, video quality, and secure, high-speed service delivery are critical,” said Francois Locoh-Donou, senior vice president, Global Products group at Ciena. “Ciena’s highly reliable, scalable and flexible optical platforms, part of our OPN programmable network architecture, address the specific requirements of native digital video transport. We’re pleased to help XO Communications deliver a secure and reliable solution so that they can quickly and cost-effectively send bandwidth-intensive, high quality video between geographically dispersed teams.”
Alcatel-Lucent and Shaw trial 400G. Alcatel-Lucent (ALU) and Canadian operator Shaw Communications have achieved a milestone with the successful first field trial in North America of 400 Gigabit-per-second (Gbps) data transmission over an existing optical link carrying live network traffic. The trial ran over a 400 kilometre route between Calgary and Edmonton in Alberta, Canada, using Shaw’s current high-capacity transport network, designed for speeds up to 100 Gbps. Using Alcatel-Lucent’s 400 Gbps technology, the trial demonstrated that an existing optical network can carry up to 17.6 Terabits per second (Tbps). “With the growing appetite for services and the proliferation over many different devices, including tablets and connected devices, we were looking for next-generation technologies to help us build a world-leading infrastructure capable of keeping up with broadband demands,” said Peter Bissonnette, President, Shaw Communications Inc. “Alcatel-Lucent’s 400 Gbps technology enables us to continue our leadership within the telecommunications industry and reinforce our commitment to maintaining leading-edge high-speed Internet capability.”
7:26p
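A quick sanity check connects the trial's two headline figures. The wavelength count below is our inference, not a number from the announcement:

```python
# If each wavelength carries 400 Gbps and the fiber's total capacity is
# 17.6 Tbps, the implied number of wavelengths falls out directly.
total_capacity_gbps = 17_600    # 17.6 Tbps
per_wavelength_gbps = 400       # one 400G wavelength

wavelengths = total_capacity_gbps / per_wavelength_gbps
print(f"implied wavelengths per fiber: {wavelengths:.0f}")  # 44
```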
Stream Data Centers Building Again in San Antonio
The new Stream Data Centers private data center property in Richardson, Texas. The company will build a similar facility in San Antonio.
Stream Data Centers is expanding again, building a greenfield data center on land it has acquired in San Antonio. The company announced today it has acquired 9.6 acres of land in Westover Hills Business Park. This will be the company’s second data center in San Antonio, and ninth in Texas. Stream Data Centers will break ground on its San Antonio Private Data Center in May 2013 and the facility will be fully commissioned and ready for occupancy in February 2014.
The data center will be a 75,840 square foot purpose-built facility that will initially deliver 2.25 MW of critical load power, with the ability to easily expand the critical load to 6.75 MW with all necessary conduit and pads in place. It is being divided into three private data center suites, each containing 10,000 square feet of raised floor space.
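Those figures translate into power densities typical of enterprise raised-floor space. A rough check, computed over the raised-floor area only (density over the full 75,840 square foot building would be lower):

```python
# Power density implied by the Stream San Antonio figures above.
raised_floor_sqft = 3 * 10_000   # three suites of 10,000 sq ft each
initial_load_w = 2.25e6          # 2.25 MW initial critical load
expanded_load_w = 6.75e6         # 6.75 MW after full expansion

print(f"initial:  {initial_load_w / raised_floor_sqft:.0f} W/sq ft")   # 75
print(f"expanded: {expanded_load_w / raised_floor_sqft:.0f} W/sq ft")  # 225
```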
“We are excited to build upon our previous success in San Antonio and start our latest project in the area,” said Paul Moser, Co-Managing Partner of Stream Data Centers. ”San Antonio’s diverse mix of Fortune 1000 companies and their growing IT requirements is driving the need for more data center space in the city. San Antonio is also attractive to out-of-region enterprise data center users due to its central US location, reliable infrastructure, and stable cost of electricity.” The company has witnessed the strength of the San Antonio market in the past, selling a previous project to a Fortune 100 company.
Stream will utilize its standard 2N electrical / N+1 mechanical configuration and the project will include dual feed power from two separate substations. Additionally, the carrier-neutral facility will include redundant telecommunication rooms serving each PDC Suite with access to the multiple fiber providers serving the site.
It is being constructed to Miami-Dade County Building Code standards, providing the ability to withstand 146-mph straight-line winds and uplift. Stream is using the accredited construction and design practices required to achieve LEED Gold certification.
Stream strategically selected the site for this development in Westover Hills Business Park because it boasts high security and a robust fiber and power infrastructure. The site is in close proximity to other primary enterprise data centers occupied by Microsoft, Chevron, Lowe’s Corporation, Valero, Frost Bank, Christus Health and others.
San Antonio is one of the fastest growing oil & gas markets as a result of its proximity to Houston and the large companies creating operational hubs in the city. It is also home to a large concentration of financial services, healthcare, and government-related organizations. Microsoft can at least partially be credited with kicking off a strong San Antonio data center market back in 2008, when it decided to build a mammoth data center there.
“We look forward once again to working with Stream Data Centers in San Antonio to identify and recruit enterprise data center users to the area,” said Mario Hernandez, president of San Antonio Economic Development Foundation.
Stream has other data center developments in Dallas, Houston, Denver and Minneapolis. Stream Data Centers has a fourteen-year track record of providing space for enterprise data center users including Apple, AT&T, The Home Depot, Chevron, Catholic Health Initiatives, Nokia and others. During that time, Stream has acquired, developed and operated more than 1.5 million square feet of data center space in Texas, Colorado, Minnesota, and California, representing more than 125 megawatts of power.