Data Center Knowledge | News and analysis for the data center industry
Monday, September 16th, 2013
11:30a | Customer Wins For Interxion, Latisys and Savvis

Some of the data center space inside the Latisys Chicago data center.
Interxion adds Digital Planet and Deluxe Entertainment cloud build-outs, Latisys expands a partnership with IDS, and Savvis connects with the proposed Aquis pan-European equities trading exchange.
Interxion adds second hub for Digital Planet
Interxion announced that enterprise cloud solution company Digital Planet has launched its second Cloud Platform in Interxion’s data center in Parkwest, Dublin. The first hub for Digital Planet, launched with Interxion in 2010, was expected to meet requirements until the second quarter of 2014, but faster-than-expected growth prompted the expansion ahead of schedule. “We are delighted that Digital Planet has chosen to further expand its Cloud Hub presence at our Dublin campus,” said Tanya Duncan, Managing Director of Interxion Ireland. “With enterprises migrating more and more of their workloads to centrally managed cloud computing platforms, the dependency on the underlying data centre, compute and network infrastructure increases. Interxion continues to enable Service Providers, such as Digital Planet, to deliver Enterprise based solutions to the Irish and UK markets, by providing reliable, scalable and well-connected data centre services. This expansion is a major vote of confidence in Interxion, which highlights our position as a best-in-class supplier of data centre services to the rapidly expanding Enterprise Cloud service provider segment.”
Interxion Aids Deluxe Entertainment Cloud
Interxion (INXN) announced that Deluxe Entertainment Services Group has deployed its cloud-based broadcast services platform, Deluxe LeapCloud, in Interxion’s London and Amsterdam data centers. Deluxe LeapCloud leverages orchestration, asset management and content delivery tools, combined with a playout platform, all housed in geographically diverse Tier III data centers, to help broadcasters set up and manage new television channels. “When we did our research, Interxion came out head and shoulders above everyone else,” said Alec Stichbury, CTO at Deluxe LeapCloud. “Their award-winning, accredited data centers give us plenty of capacity to scale LeapCloud installations or expand to other European locations, and also support disaster recovery across multiple sites – traditionally considered an expensive luxury in the broadcast market. Plus, Interxion’s data centers are home to all the main fiber-based media contribution networks, a growing community of satellite service providers and over 450 carriers, providing our customers with the best possible options to acquire live broadcast streams and ingest media files. Supporting all of these capabilities, Interxion’s dedicated digital media team understands what we want to achieve and is ready and able to support that vision.”
Latisys expands agreement with Integrated Data Storage
Latisys announced a significant expansion of its hosting services agreement with Integrated Data Storage (IDS), a data center technology integrator and cloud services provider based in Chicago. With a footprint in Latisys’ Chicago and Denver data centers, IDS will leverage hosting services to underpin its rapidly expanding cloud solutions platform. “The Cloud has been the fastest growing part of our business and one of the most strategic decisions we had to make was choosing our Data Center partner,” said Justin Mescher, Chief Technology Officer at IDS. “Latisys is one of the few service providers that offer Enterprise-class hosting facilities, while still being nimble enough to support a rapidly-growing business like IDS. That agility is what makes Latisys the perfect partner for IDS, because like us they understand how critical it is to provide solutions that are tailored to customer requirements and flexible enough to change as requirements evolve.”
Savvis adds Aquis Exchange
Savvis, a CenturyLink (CTL) company, announced plans to expand connectivity to Aquis Exchange, the proposed pan-European equities trading exchange and software developer, via Savvis Markets Infrastructure. Savvis Markets Infrastructure provides enhanced network and latency management tools, plus access to more than 200 exchange and liquidity venue feeds, to financial service organizations across the globe. “In the process of creating the Aquis Exchange infrastructure, we have tried to make the barriers to joining as low as possible for our potential members,” said Alasdair Haynes, chief executive officer of Aquis Exchange. “By having a company with the scale and caliber of Savvis as an authorized extranet supplier, we hope to give users a superior and broad range of connectivity options. We are very pleased to be working with Savvis.”
12:00p | Servers on Demand: Custom Water-Cooled Servers in One Hour

A wall of servers inside the new OVH facility near Montreal, Quebec. The former aluminum factory doubles as a data center and server manufacturing facility. (Photo: OVH)
In 2006, word began to emerge that Google was building its own servers to make its infrastructure faster and cheaper. The search giant wasn’t alone, though. By that time, French hosting company OVH had been building its own servers for a number of years, developing a design that uses water cooling.
“We’ve been doing our own servers from the beginnings of OVH in 1999,” said Germain Masse, the Chief Operating Officer of OVH. “The 1U pizza box servers available at the time were real expensive. We didn’t think the way the components and drives were positioned was smart. We found it would be better if we could reduce the case to a simple sheet of metal. We could make them faster and more efficient.”
The concept for these early OVH designs – vanity-free hardware, minimalist server trays, a limited component set – may sound familiar. These are the guiding principles behind the Open Compute Project and other initiatives to customize hardware for cloud server farms. OVH was an unusually early adopter of the build-your-own hardware model, embracing the economics of hyperscale environments in its fast-growing hosting business.
OVH now has 150,000 servers running in eight data centers. The company’s newest campus in Quebec is a unique facility, combining a factory to assemble servers with the data center that houses them. It is the engine for OVH’s ambitions in North America, where it hopes to sell hundreds of thousands of servers. The facility in Beauharnois is a former Rio Tinto aluminum factory located just 100 yards from a hydroelectric power dam and substations, providing access to more than 100 megawatts of power capacity.
 OVH employees assemble the company’s custom servers inside the Beauharnois data center facility. (Photo: OVH)
Inside the factory, a team of 25 workers man an assembly line where they build servers from boxes of components. Masse says OVH sources components from five different vendors, including Intel, whose low-power Atom chips are found in most of the company’s servers. Each server board takes about 15 minutes to build, and after being tested can be quickly deployed to racks housed in another part of the building.
OVH says that when a hosting customer places an order through its web site, the server can be built and installed in Quebec in less than an hour.
“We’ve been doing this for 10 years now, so we know how many server orders we typically get,” said Masse, who noted that orders sometimes experience bottlenecks when the company introduces new server products or opens new data center facilities.
Masse said building its own servers gives OVH the freedom to innovate.
“We are free to choose the right manufacturer for our motherboard and drives, and free to change it when the market changes,” he said. “It allows us to choose the right equipment at the right time.”
Next: The Shift to Custom Water-Cooled Servers
12:30p | The Road to Acquiring DCIM: A Q&A Primer

Lara Greden is a senior principal, strategy, at CA Technologies. Her previous post, DCIM Integration: Are IT Management Tools Enough?, appeared in August 2013.
LARA GREDEN, CA Technologies
What strategic business drivers make DCIM technology essential? Business cases for DCIM investments are often a combination of hard and soft costs: the ability to free up OPEX, avoid CAPEX, replace other systems and improve productivity. But the most successful business cases recognize that DCIM is essential to the organization’s business goals. Here are responses to questions often asked by forward-looking organizations on the road to acquiring and successfully deploying DCIM technology.
• How do we meet new service demand quickly and efficiently?
When it comes to meeting new service demand quickly and efficiently, the traditional approach is to throw more capacity at the problem, be it physical capacity or labor capacity. Many organizations are recognizing the problems with this approach: namely, that it is expensive and/or not fast enough.
Many organizations align DCIM to the strategic imperative of agility in IT operations. The software-defined data center is helping data center operators in both facilities and IT improve agility at the data center level, and DCIM is a fundamental component of that approach. Due in part to its data federation capabilities, DCIM helps organizations efficiently manage the power, space and cooling capacity of the data center, and efficiently and confidently provision and decommission devices. Because increased automation means that applications are often moving and configurations can be changed easily, data center operators on both the facilities and IT side are finding that the visibility provided by DCIM is a necessary link in the chain. For longer term needs, DCIM provides the analytics necessary for capacity planning.
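To make the capacity side of this concrete, here is a minimal sketch, in Python, of the kind of headroom check a DCIM tool performs before provisioning a new device. The rack names, figures and safety margin are hypothetical; a real DCIM product federates these values from live power, space and cooling feeds rather than a hard-coded table.

```python
# Hypothetical sketch of a DCIM-style provisioning check: can this rack
# absorb a new device without breaching power, space or cooling limits?

RACKS = {
    "CHI-ROW2-R07": {"power_kw": 8.2, "power_cap_kw": 12.0,
                     "used_u": 34, "total_u": 42,
                     "cooling_kw": 9.1, "cooling_cap_kw": 14.0},
    "CHI-ROW2-R08": {"power_kw": 11.4, "power_cap_kw": 12.0,
                     "used_u": 40, "total_u": 42,
                     "cooling_kw": 12.8, "cooling_cap_kw": 14.0},
}

def can_host(rack: dict, device_kw: float, device_u: int,
             safety_margin: float = 0.9) -> bool:
    """True if the rack can take the device while staying under a safety
    margin on power and cooling and within its physical U-space."""
    power_ok = rack["power_kw"] + device_kw <= safety_margin * rack["power_cap_kw"]
    space_ok = rack["used_u"] + device_u <= rack["total_u"]
    cooling_ok = rack["cooling_kw"] + device_kw <= safety_margin * rack["cooling_cap_kw"]
    return power_ok and space_ok and cooling_ok

# Find candidate racks for a hypothetical 2U, 0.6 kW server.
candidates = [name for name, rack in RACKS.items() if can_host(rack, 0.6, 2)]
print(candidates)  # ['CHI-ROW2-R07'] with the sample data above
```

The same arithmetic, run forward over forecasted demand, is essentially what the longer-term capacity planning analytics boil down to.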
• Should we build or colocate?
Capacity constraints are a major driver for investment in DCIM technology. When organizations are looking to expand or consolidate their data center infrastructure, DCIM technology is an essential tool. First, in situations where capacity constraints on power, space and cooling are projected to limit the organization, DCIM helps identify and prioritize the opportunities to free up capacity. Likewise, when consolidating data center resources, DCIM helps organizations carry out the consolidation efficiently and accurately. This starts with using DCIM analytics to identify the best locations, equipment and devices to maintain going forward.
Perhaps more importantly, DCIM technology helps organizations uncover situations where capacity constraints may compromise uptime and availability. Knowing when power circuits are reaching higher-than-desired demand levels, or where hotspots are occurring, is essential to maintaining the health of the data center. Data center operators have known this since the beginning, but it’s often easier said than done, unless you have the visualization, normalization and integration capabilities of DCIM software.
Organizations will increasingly have a portfolio of data center resources, including owned and colo space. DCIM technology helps provide remote insight across a variety of indicators, including efficiency, available capacity and operational status. It helps organizations drive best practices across their data center portfolio by providing insight and transparency.
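As a rough illustration of the circuit-level and thermal monitoring described above (a hypothetical sketch, not any vendor's DCIM API), the snippet below flags circuits running above a chosen demand threshold and inlet temperatures outside the commonly recommended 18-27°C envelope across a mixed owned-and-colo portfolio.

```python
# Hypothetical sketch: flag overloaded circuits and potential hotspots
# across owned and colo sites from normalized sensor readings.

READINGS = [
    # (site, sensor id, metric, value)
    ("owned-denver",  "PDU-3/CB-12",    "circuit_load_pct", 87.0),
    ("owned-denver",  "rack-R14-inlet", "inlet_temp_c",     26.1),
    ("colo-chicago",  "PDU-1/CB-04",    "circuit_load_pct", 62.5),
    ("colo-chicago",  "rack-R02-inlet", "inlet_temp_c",     29.4),
]

CIRCUIT_ALERT_PCT = 80.0       # alert above 80% of the breaker rating
INLET_RANGE_C = (18.0, 27.0)   # recommended inlet temperature envelope

def alerts(readings):
    """Yield a human-readable alert for each reading outside its limits."""
    for site, sensor, metric, value in readings:
        if metric == "circuit_load_pct" and value > CIRCUIT_ALERT_PCT:
            yield f"{site}: {sensor} at {value:.0f}% of rated load"
        elif metric == "inlet_temp_c" and not (INLET_RANGE_C[0] <= value <= INLET_RANGE_C[1]):
            yield f"{site}: {sensor} inlet at {value:.1f} C (possible hotspot)"

for alert in alerts(READINGS):
    print(alert)
```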
• How can we create competitive advantage?
For colos and managed service providers, DCIM technology goes beyond helping them run efficient, modern data center operations: it helps them create competitive advantage by offering transparency to their customers and thus differentiating themselves in the market.
In a case study on the ROI achieved by deploying DCIM software in one of its Tier III+ data centers, Logicalis identified top-line revenue as one of the major benefits, thanks to the ability to differentiate its managed services offerings and help customers achieve energy and sustainability goals.
Transparency on SLA elements such as power consumption and the thermal environment can even lead to further revenue-generating opportunities. These include consulting services and remote operations that help customers reduce their overall costs.
The End Result
Meeting service demand quickly and efficiently, confidently supporting decisions to build or colocate, and creating competitive advantage for colos and service providers all demand strategic-level attention to data center infrastructure. Because DCIM technology provides the means to execute in those areas with confidence, more and more forward-looking organizations are turning to it. We expect organizations to continue to place importance on DCIM as they innovate and evolve their data center strategies.
An upcoming post will discuss essential elements of a roadmap for DCIM strategy in the data center.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
12:30p | Five Years Later: How The Wall Street Meltdown Hit the Data Center Market

An aerial view of the DuPont Fabros data center in Santa Clara, Calif.
It’s been five years since the collapse of Lehman Brothers triggered a global financial crisis. The anniversary has prompted U.S. media to revisit the crisis and its fallout.
The financial crisis left its mark on nearly all industries. That was certainly true for the data center market, a capital-intensive business that relies upon credit for construction financing.
There were several specific impacts, however. The meltdown scuttled the acquisition of one data center provider, and provided a stern test for another – as well as huge opportunity for brave investors.
Terremark
When the financial crisis hit in September 2008, enterprise cloud pioneer Terremark was in the late stages of negotiations to be acquired. The potential sale of the Miami-based company was derailed by the credit crisis, as the Terremark board cited problems in the credit markets in its decision to halt efforts to sell the company.
The Miami-based colocation and managed hosting specialist received an unsolicited takeover offer in April 2008, and worked on a potential deal through mid-September, when Lehman collapsed. At the time, the board was conducting a detailed “market check” to determine whether the proposed price was fair to shareholders.
At the time, shares of Terremark were trading between $6 and $7. The story had a happy ending, as Terremark was acquired by Verizon for $19 a share (about $1.4 billion) in 2011.
DuPont Fabros Technology
In September 2008, data center developer DuPont Fabros Technology was in the process of arranging a credit line to fund future construction projects. The company hoped to line up a secured loan of $300 to $400 million. But “several banks we were in discussion with have merged or disappeared,” said CFO Mark Wetzel.
In October the company was forced to postpone plans to build a new data center in Santa Clara, California. In an earnings call in November, the company expressed confidence in its ability to fund its expansion, but securities analysts asked pointed questions about the company’s strategic options if it couldn’t secure new loans.
Amid the uncertainty, shares of DuPont Fabros plunged to new lows. Little more than a year after going public at $21, shares of DFT fell to less than $2 per share. If you were among those who bought shares at those prices, you probably did pretty well: DuPont Fabros opened today at $24.66. The Santa Clara SC1 data center was eventually funded, and the company just completed leasing the first phase.
1:00p | Your Amazon Cloud Just Went Down. Now What?
With more organizations leveraging the power of the cloud, it’s very important to understand the infrastructure that supports it all. With more users, greater consumerization and ever-growing volumes of data, the infrastructure of tomorrow must be as resilient as possible.

Organizations are seeing the beauty of working with a cloud provider like Amazon. The benefits are numerous – data distribution, multiple access points, bringing data closer to users, deploying a fog layer, lower hardware expense and far more workload flexibility.

However, there is something important to understand about the cloud: nothing in IT is ever perfect, and that includes the cloud. Think your cloud won’t go down? Think you’ve got resiliency built in? Recent, well-publicized outages at even the largest providers suggest otherwise.
When deploying a cloud environment, an Amazon cloud in this case, you must plan around disaster recovery, business continuity and infrastructure failures. Remember, whether a networking component fails or there is a complete power failure, a cloud outage is a cloud outage. Regardless of the circumstances, you will lose access to data, workloads and core resources. As a result, your users are negatively affected and your business is financially impacted.

So what do you do when your cloud infrastructure goes down? Here’s a look at key pieces of your failover and recovery plan:
- Develop a contingency plan. How will you know what to recover if you don’t know what’s running in your cloud? I don’t mean just VMs or applications – I’m referring to the entire infrastructure. You can have the best environment in place, but if you have little to no visibility into dependencies, data access, advanced networking and data control, you’ll have a very hard time recovering your cloud should an event occur. When your cloud infrastructure is a critical part of your business, you must develop a Business Impact Analysis and/or a Disaster Recovery/Business Continuity document. This plan not only outlines your core systems, it also details infrastructure dependencies, core components, and what actually needs to be recovered. Remember, not everything in your cloud is critical. To save recovery time and effort, it’s crucial to know which systems hold priority. Without this document and a recovery plan, figuring out what to recover, and in what order, can really slow down the process.
- Understand WAN traffic and optimization. As cloud platforms become more distributed, there is a direct need for data to retain its integrity and arrive at its destination quickly and with minimal latency. Organizations have spent millions of dollars on physical hardware to help with the optimization task. Now they can create entire virtual network architectures where traffic is handled at the virtual routing layer. WAN traffic control and WAN optimization (WANOP) play a big role in cloud DR. Appliances from vendors like Silver Peak and Riverbed are designed to create highly efficient network paths across vast distances, and they can now be deployed at the virtual layer. Controlling cloud-based WAN traffic not only helps with optimization, it also helps redirect users should there be an ISP or other networking-related outage.
- Utilize dynamic load balancing (at the cloud layer). Load balancing has come a long way from simply directing traffic to the most appropriate resource. With network virtualization, new types of load balancers help not only with traffic control, but with cloud-based disaster recovery (DR) and high availability (HA) as well. Features like NetScaler’s Global Server Load Balancing (GSLB) and F5’s Global Traffic Manager (GTM) not only route users to the appropriate data center based on their location and IP address, they can also assist with a disaster recovery plan. By setting up a globally load-balanced environment, users can be pushed to a recovery data center completely transparently: a virtual cross-WAN heartbeat checks for the availability of a data center and pushes users to the next available resource, and policies can be set around latency, network outages and even network load. How can this help? Not only are you able to route users across the Internet to the most available resource – you can recover your cloud from a completely different data center if needed. A minimal sketch of this health-check-driven failover follows below.
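To make the health-check idea above concrete, here is a minimal sketch in Python, rather than actual NetScaler or F5 configuration, of a failover selector that probes each data center in priority order and returns the first one that answers its health check. The site names and URLs are hypothetical; a real GSLB or GTM deployment performs this continuously at the DNS or traffic-management layer and folds in the latency and load policies mentioned above.

```python
# Hypothetical sketch of health-check-driven failover between sites.
# A real global load balancer does this continuously; the endpoints
# below are made up for illustration.

import urllib.error
import urllib.request

SITES = [
    ("primary-dc",  "https://app.primary.example.com/healthz"),
    ("recovery-dc", "https://app.recovery.example.com/healthz"),
]

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Treat any HTTP 200 returned within the timeout as a passing check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def pick_site(sites):
    """Return the first site whose health check passes, else None."""
    for name, health_url in sites:
        if is_healthy(health_url):
            return name
    return None

target = pick_site(SITES)
print(f"Routing users to: {target or 'no healthy site, escalate'}")
```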
1:00p | Every Business Is a Web Business Now

The speed of business depends on the Internet today. Seconds, even milliseconds, can cost organizations millions of dollars.
Experts say that the average web page has doubled in size since 2010, while users’ skyrocketing expectations have shrunk their patience. Almost half of visitors will abandon a site if they have to wait more than three seconds. Nobody can afford that. You lose too much when your site doesn’t function at its absolute best: money, traffic, trust.
O’Reilly’s Velocity conference brings its web operations and performance event to New York City for the first time, Monday, October 14 through Wednesday, October 16, 2013, at the New York Hilton Midtown.
After six years in Silicon Valley, Europe, and China, chairs Steve Souders and John Allspaw invite New York’s technical and business leaders to bring their most challenging problems to Velocity. At Velocity New York, experts from finance, media, advertising, entertainment, tech, and other online businesses share their collective brilliance to work on building the fastest and strongest web yet.
Speakers from firms such as Union Square Ventures, Qualcomm, Major League Baseball, Facebook, Netflix, Salesforce, Etsy, Google, GitHub, and Intuit discuss both their success stories and cautionary tales. A rich schedule of networking events also provides time for more informal conversations.
Velocity is about sharing ideas on operations and performance in high-stakes environments. Being the U.S. financial and media capital, New York has many high-stakes situations, for sure.
For further information and registration, visit Velocity Conference’s website.
1:30p | With Quark Processor, Intel Targets Internet of Things and Wearables

At IDF San Francisco 2013, Intel CEO Brian Krzanich unveils the Quark processor family, which will bring higher levels of integration, lower power and lower cost for the next wave of intelligent connected devices. (Photo: Intel)
After a one-two punch with the recent launches of the low-power Atom processor and the brawny new Xeon processor, Intel CEO Brian Krzanich introduced the new Quark processor family to push further into the lower-power segments of the computing market, including the Internet of Things and wearable computing.
In his first keynote address, Krzanich told the Intel Developer Forum audience last week that the company plans to leave no segment untapped.
“Innovation and industry transformation are happening more rapidly than ever before, which play to Intel’s strengths,” said Krzanich. “We have the manufacturing technology leadership and architectural tools in place to push further into lower power regimes. We plan to shape and lead in all areas of computing.”
Innovating in new markets
Intel announced the Quark processor family for applications where lower power and size take priority over higher performance. Intel will sample form-factor reference boards based on the first product in this family during the fourth quarter of this year to help partners accelerate development of solutions for the industrial, energy and transportation segments.
Intel President Renée James highlighted how developments in semiconductor technology will further advance machine-to-machine data management in smart cities. Intel is partnering with the cities of Dublin and London to build a reference solution that could revolutionize urban management, providing citizens with better cities and improved municipal services at lower cost.
“It’s one thing to install computing power in billions of smart objects,” said James. “What we’re doing is harder – making powerful computing solutions that turn data to wisdom and search for answers to the world’s most complex problems like cancer care. What we’ve seen so far is just a glimpse of how Intel technology could be used to help heal, educate, empower and sustain the planet.”
Mobile
After taking heat for being late to the smartphone market, Intel continues its mobile computing strategy and will use its manufacturing prowess to drive it forward. The chipmaker announced a next-generation LTE product, the Intel XMM 7260 modem, now under development.
Expected to ship in 2014, the Intel XMM 7260 modem will deliver LTE-Advanced features, such as carrier aggregation, timed with future advanced 4G network deployments. Krzanich showed the carrier aggregation feature of the Intel XMM 7260 modem successfully doubling throughput speeds during his keynote presentation.
A smartphone platform was also demonstrated, featuring both the Intel XMM 7160 LTE solution and Intel’s next-generation Intel Atom SoC for 2014 smartphones and tablets codenamed “Merrifield.” Based on the Silvermont microarchitecture, “Merrifield” will deliver increased performance, power-efficiency and battery life over Intel’s current-generation offering. Intel also confirmed that it intends to bring its Intel Atom processor and other products based on the next-generation “Airmont” microarchitecture to market on Intel’s leading-edge 14nm process technology beginning next year.
2:00p | Mystery Company Behind Project Oasis is Travelers Insurance

Development companies may have to come up with some new code names. One of the last secretive projects in the Midwest, “Project Oasis,” has come out of stealth as a $200 million data center for global insurance company Travelers Insurance. The Omaha World-Herald reports that Travelers is the company behind the project and has selected the Omaha suburb of Springfield, Nebraska, as the next site for a data center to pair with its Georgia facility.
“Travelers is excited about the opportunity to build a new data center in Greater Omaha, and we look forward to next week’s Sarpy County Board meeting,” company spokeswoman Delker Herbert Vardilos said in a statement to The World-Herald.
A Tuesday vote will determine whether the development agreement between Travelers and Sarpy County is approved. Travelers could be eligible for a round of new state tax incentives for companies that invest at least $200 million in a data center and employ at least 30 people. The Sarpy County Economic Development Corp., which brokered the deal, has an option on the land. The property has already been rezoned to light industrial, and the Omaha Public Power District plans to build a substation to power the data center.
Numerous data center developments in recent years have come down to Iowa and Nebraska for final locations – with Google, Microsoft, and most recently Facebook deciding on Iowa, and Yahoo, PayPal and Fidelity Investments landing in Nebraska.