Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 23rd, 2014
12:00p |
Photo Tour: CenturyLink and IO Light Up Phoenix Data Center
CenturyLink Technology Solutions and IO held a grand opening for the former’s new data center location within IO’s massive facility in Phoenix.
The data center’s opening marks completion of the initial phase in a partnership between the two companies that is unique in the industry. CenturyLink is essentially taking over the role of the data center provider at the site.
Any new customers in Phoenix will be CenturyLink customers, said Meredith, a senior vice president and general manager at CenturyLink. “We’re running the colocation now for any new customers coming into those facilities,” he said.
CenturyLink is selling both modular space and traditional raised-floor space in the Phoenix data center, but the deal extends beyond Phoenix. IO’s sales team will be selling into the Phoenix data center, but also into CenturyLink’s other locations, including Toronto, Washington, D.C., New Jersey and southern California.
The phase recently brought online in Phoenix includes about 9.6 megawatts of capacity to support IO.Anywhere modules and about 4 megawatts of raised floor. While there are some customers that prefer the traditional data center space, most of them want to be in the modules, Meredith said.
CenturyLink is also looking into the possibility of adding capacity to host the modules in some of its other buildings.
In addition to CenturyLink, one other data center operator houses IO.Anywhere modules in its facilities and offers space inside them as a service: Fortrust.
IO has always had a heavy focus on technology and design, which shows in photos from the grand opening event in Phoenix earlier this month, courtesy of IO and CenturyLink:
 CenturyLink-branded IO.Anywhere data center modules in the Phoenix data center
 Another view of the CenturyLink-branded IO.Anywhere data center modules in the Phoenix data center
 IO.Anywhere employs physical and logical security to protect mission-critical IT assets. A compartmentalized architecture provides separation between IT and support infrastructure, while a steel frame, hardened shell and customer-specified access controls provide multiple layers of protection
 A look down a module aisle in the Phoenix data center
 Equipment inside a rack in the Phoenix data center
 CenturyLink Technology Solutions President Jeff Von Deylen talks about the CenturyLink-IO partnership
 Michael Levy, senior analyst, data centers, at 451 Research, shares his insights with guests at the July 27, 2014, event to celebrate CenturyLink entering the Phoenix market
 A ribbon is cut to celebrate the CenturyLink-IO partnership and CenturyLink’s new data center presence in the Phoenix market. Pictured, from left, are Jennifer Mellon, vice president of program development for the Greater Phoenix Chamber of Commerce; Jeff Von Deylen, president of CenturyLink Technology Solutions; Peter McNamara, senior vice president of global enterprise sales at IO; Ken McMahon, vice president-general manager, Phoenix, for CenturyLink; and Greg Stanton, mayor of Phoenix
| 12:30p |
SSD Enhancements: Extending Enterprise Drive Life
Doug Rollins is a principal SSD systems marketing engineer at Micron Technology, Inc., who holds 13 U.S. patents and is an active member of the Storage Networking Industry Association (SNIA).
In this article we examine two enterprise solid state drive (SSD) lifespan extension techniques that are used by top-tier SSD manufacturers: dynamic read tuning and NAND-level redundancy. These techniques are critical to getting the most from your investment in SSDs. When choosing an SSD, it is important to ensure that the supplier can explain how techniques like these are implemented in their products and the net benefits of each.
Note: Both part 1 and part 2 of these articles refer to NAND-based SSDs and commands executed inside them (as opposed to commands issued by the host).
Dynamic read tuning
Optimal methods for reading data from an SSD are not static. As the NAND in an SSD ages, specific characteristics of the command used to read data should be dynamically tuned by the SSD controller and firmware. Such tuning has a direct impact on data reliability. Proper tuning improves READ command performance in terms of both immediate data access and long-term data reliability, which are key requirements of enterprise applications.
Figure 1a shows the default settings used to read data from the media on the SSD. These are factory presets and are optimal for new NAND devices. Figure 1b shows how the optimal read settings can change over time as the drive is used (shown in green). The amount of data written to and read from the drive, for instance, can impact the optimal settings. Adaptive read management dynamically tunes these settings to ensure best performance and data integrity for the SSD.
 Figure 1a: Default SSD Read Settings
 Figure 1b: Optimal Read Settings Change with NAND Use
Dynamic read adjustment can operate in both background and foreground modes. In background mode, the SSD controller and firmware read data from the NAND before the host requests it. Unlike a caching prefetch design, this is a proactive method of pre-tuning the NAND so that when the host reads data from the NAND device, the read settings have already been optimized by the background process. In foreground mode, when a read error occurs, the SSD controller and firmware retune the NAND read settings on the fly and retry the read; this process can be applied iteratively, as determined by the SSD design.
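To make the foreground retune-and-retry path concrete, here is a minimal Python sketch of the flow described above. The names (nand_read, retune_read_settings) and the retry limit are illustrative assumptions standing in for vendor-specific controller firmware, not an actual SSD API.

```python
import random

MAX_RETUNE_ATTEMPTS = 4  # assumed retry limit; the real value is design-specific

class UncorrectableReadError(Exception):
    """Raised when retuning alone cannot recover the data."""

def nand_read(page_addr, read_offset):
    """Stub NAND page read returning (data, ecc_ok).
    Here ecc_ok is simulated; in a real drive it comes from the ECC engine."""
    ecc_ok = random.random() < 0.7 + 0.1 * read_offset  # toy model only
    return b"\x00" * 4096, ecc_ok

def retune_read_settings(read_offset, attempt):
    """Stub adjustment of the read reference settings between attempts."""
    return read_offset + 1  # toy adjustment; real tuning is far more involved

def read_with_dynamic_tuning(page_addr, read_offset=0):
    """Foreground path: read, and on an ECC failure retune and retry."""
    for attempt in range(MAX_RETUNE_ATTEMPTS):
        data, ecc_ok = nand_read(page_addr, read_offset)
        if ecc_ok:
            return data, read_offset  # success; keep the tuned settings
        read_offset = retune_read_settings(read_offset, attempt)
    # Retuning failed; the drive would now fall back to NAND-level parity.
    raise UncorrectableReadError(hex(page_addr))

if __name__ == "__main__":
    data, final_offset = read_with_dynamic_tuning(page_addr=0x1234)
    print(f"read succeeded with read-offset setting {final_offset}")
```

The background mode described above would run the same tuning logic ahead of host reads, so most foreground reads succeed on the first attempt.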
NAND-level redundancy
For cases where dynamically tuning the NAND read settings does not enable a successful read, many enterprise-grade SSDs employ parity protection as a secondary, fallback protection system. This additional protection mechanism operates in real time and uses well-proven parity techniques to generate parity data and embed it with the user data. The details of each implementation are design-specific, but the SSD supplier should be able to clearly articulate the core elements:
- Data-to-Parity Ratio: Expressed as X data + Y parity (or X:Y), this ratio is optimized for the intended drive workload, performance, media type and several other factors. It is also referred to as the stripe size.
- Parity Storage Location: The parity may be stored in a fixed, relative, or rotating location.
- Protection Level: NAND-level parity can protect user data from catastrophic media failures.
- Hardware Acceleration: SSD suppliers can choose to manage parity in the firmware or accelerate it via hardware.
The figure below shows a data-to-parity ratio of 7:1, with seven elements of user data and one element of parity data. This NAND-level redundancy (often called RAIN, for Redundant Array of Independent NAND) is not limited to 7:1; the ratio can be designed specifically to balance data protection, drive design, intended workload, and cost.
 Figure 2: 7:1 Data-to-Parity Ratio
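As a concrete illustration of the stripe in Figure 2, the short Python sketch below builds XOR parity over seven data elements and rebuilds one lost element. It is a simplified model of the concept only; a real drive computes parity in firmware or dedicated hardware over NAND pages, not Python byte strings.

```python
from functools import reduce

STRIPE_DATA_ELEMENTS = 7  # the 7:1 ratio shown in Figure 2

def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def build_parity(data_elements):
    """Return the single parity element for a stripe of user-data elements."""
    assert len(data_elements) == STRIPE_DATA_ELEMENTS
    return reduce(xor_blocks, data_elements)

def rebuild_element(surviving_elements, parity):
    """Recover one lost data element from the survivors plus parity."""
    return reduce(xor_blocks, surviving_elements, parity)

if __name__ == "__main__":
    stripe = [bytes([i]) * 8 for i in range(STRIPE_DATA_ELEMENTS)]  # toy 8-byte elements
    parity = build_parity(stripe)
    lost = stripe.pop(3)  # simulate a catastrophic failure of one element
    print("recovered element matches:", rebuild_element(stripe, parity) == lost)
```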
The ability to dynamically tune the NAND (both proactively and reactively) is a key feature offered in many enterprise-class SSDs that helps ensure more reliable operation and greater SSD lifespan. As with most enterprise-class storage designs, a single protection mechanism is not enough. When dynamically tuning the NAND for the best read operation is not sufficient, many enterprise SSDs also adopt a fallback protection system of parity generation and storage, which enables protection and recovery from even catastrophic media failures.
Part 2 of this article will discuss techniques that enterprise-grade SSDs can use to protect user data as it moves inside the SSD, as well as ways to manage background operations to improve SSD responsiveness.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 12:30p |
Microsoft: Cloud Growth Not Sucking Revenue Away From Server Products
While its Nokia acquisition did result in a loss on earnings per share, Microsoft’s fourth fiscal quarter revenue beat analyst expectations. The company’s top execs sounded pleased with growth of its cloud business, which includes both Office 365 and Azure, on Tuesday’s earnings call.
After they reported 147-percent growth in commercial cloud revenue, an analyst on the call asked whether that meant companies weren’t buying as many licenses for their on-premise server software (products like Windows Server, System Center or SQL Server). The execs said that was not the case and pointed to 16-percent growth in this product category.
“We don’t see it as a zero-sum,” Microsoft CEO Satya Nadella said of the relationship between server products and the company’s as-a-service offerings. He cited the continuously growing server virtualization rate as one of the factors driving growth in the category.
Microsoft sells server OS licenses for Azure cloud deployments as well, but CFO Amy Hood said there was also a lot of growth in on-premise server software sales this past fiscal quarter, which concluded the company’s fiscal 2014. SQL Server and System Center revenue growth was in the double digits.
Annualized revenue run rate for commercial cloud now exceeds $4.4 billion, Hood said.
The executives offered little detail on the plan to cut 18,000 jobs the company announced earlier this month and reiterated that Microsoft will continue to focus on cloud and mobility. That focus, Nadella said, will continue informing the company’s investment decisions.
Data centers that support Azure have been and will continue to be one of the biggest areas of investment. In the past fiscal year, Microsoft added data center capacity in Australia, Brazil, Japan and China and doubled capacity in previously existing data center locations.
Nadella expects the company to continue this expansion in fiscal 2015: “We will expand our Azure data center footprint and increase capacity in existing regions.” Hood added that the company expected its capital expenditures in the first quarter to be higher than fourth-quarter expenditures because of further expansion of cloud infrastructure.
Microsoft reported $23.4 billion in revenue for the fourth quarter – up 18 percent year over year. Operating income was $6.5 billion (up 7 percent), and earnings per share were down 7 percent at $0.55. | 2:00p |
How to Better Align Your Data Center’s Physical Infrastructure
The modern business is truly evolving. As more users, applications and workloads move to the cloud, data center platforms will sit squarely in the middle of all technological advancements. New demands require different types of server deployments, new ways to control change, and a much better management methodology.
Your data center is one of your organization’s biggest investments, costing millions of dollars and offering fixed amounts of space, cooling, networking and power capacity. It houses tens of thousands of assets (a third of which need to be updated annually) and consumes a third or more of your company’s total power.
Recognizing the essential nature of IT, companies have sought to incorporate their data center infrastructures into a larger management context – not just for IT but for all business services. For many organizations that larger context is the IT Infrastructure Library, or ITIL, the framework of best practices that forms the foundation of IT Service Management (ITSM).
In this whitepaper from Nlyte, we learn how alignment between Data Center Infrastructure Management (DCIM), the discipline at the core of Nlyte Software’s products and services, and the larger framework of ITSM enhances data center operations.
Download this whitepaper today to see how Nlyte takes a truly holistic approach to aligning a data center’s physical infrastructure. This approach includes:
- Intelligent server placement
- Workflow
- Pre-Built ITSM Connectors
- New Server Deployments
- Incident & Change management
- Service Catalog Management
- Availability Management
- Service Asset and Configuration Management
- And much more
Throughout the entire process Nlyte takes DCIM and your physical assets into consideration. The idea is to be able to incorporate physical layer considerations (e.g., power, cooling and space) directly into your service design. These types of holistic control systems allow you to generate powerful metrics for measuring and optimizing physical assets over their lifecycles, which, in turn, helps support continual service improvement and promotes a proactively healthy data center infrastructure. | 4:27p |
Docker Acquihires Orchard in Bid to Commercialize Docker Containers
Docker has acquired Orchard Labs, a two-man operation based in London that provides users with hosted Docker in the cloud and Fig, an open source tool for container orchestration.
Docker’s “containers” ease deployment of an application across a variety of data centers, devices or clouds. Docker just recently released version 1.0 and has already garnered tremendous interest and support for its container solution. The acquisition of Orchard brings in tools and expertise that will help the company extend capabilities and commercialize the open source container technology. Terms of the acquisition were not disclosed.
Commercialization efforts are focused on paid support and developer tools, as well as potentially a hosted version.
Orchard brings a Docker orchestration tool with capabilities to manage and monitor containers. The deal also gives Docker a new European office in London, and the company will tap talent there as well as around its headquarters in San Francisco.
Orchard co-founders Aanand Prasad and Ben Firshman, the startup’s CEO, will take over the developer environment (dubbed DX) initiatives at Docker. “The goal of DX is to make Docker awesome to use for developers,” writes Solomon Hykes, Docker founder and CTO. “This means anything, from fixing UI details, improving Mac and Windows support, providing more tutorials, integrating with other popular developer tools, or simply using Docker a lot and reporting problems.”
Orchard’s hosted Docker service has been discontinued as a result of the acquisition. “Both Orchard and Fig are small pieces of the puzzle,” writes Orchard in its acquisition announcement. “While Fig has caught on, Orchard hasn’t to the same extent, so we’ll be closing it down on October 23rd.”
While aspects of Fig will be incorporated into Docker, the Fig tool will remain open source. “Fig is by far the easiest way to orchestrate the deployment of multi-container applications, and has been called ‘the perfect Docker companion for developers’,” writes Hykes.
“With Fig, Ben and Aanand got closer to an answer than anybody else in the ecosystem. They have a natural instinct for building awesome developer tools, with just the right blend of simplicity and flexibility.”
Fig makes it possible to build complex applications using multiple Docker containers. It does this using a YAML file to describe relationships between interconnected containers in a multi-container application. Orchard will help incorporate orchestration interfaces into Docker in addition to maintaining Fig.
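For readers unfamiliar with Fig, the snippet below is a minimal example of the kind of YAML file described above, based on Fig’s publicly documented format at the time; the service names and images are illustrative, not taken from the announcement.

```yaml
# fig.yml - an illustrative two-container application definition
web:
  build: .          # build the application image from the local Dockerfile
  ports:
    - "8000:8000"   # expose the app on port 8000
  links:
    - db            # declare a dependency on the db container
db:
  image: postgres   # use a stock PostgreSQL image for the database
```

Running fig up with a file like this starts both containers and wires them together, which is the kind of orchestration behavior Docker wants to fold into its own tooling.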
Fig is similar to a Red Hat tool called geard, a command-line client and agent for integrating and linking Docker containers into systemd across multiple hosts.
Docker is evolving into an infrastructure for distributed services rather than just a way to organize applications. This is Docker’s first acquisition under the Docker name. Docker was previously known as DotCloud and acquired a Platform-as-a-Service startup called Duostack in 2011. | 6:51p |
Oracle Becomes Data-as-a-Service Provider
Oracle has launched two cloud-based services that give advertisers and marketers access to what the company describes as vast troves of external user data available online, with security and privacy compliance built in.
Pitched under the umbrella brand “Oracle Data Cloud,” the two services are DaaS for Marketing and DaaS for Social, DaaS standing for Data-as-a-Service. The data cloud platform used to deliver the services is based on Oracle’s recently acquired BlueKai Audience Data Marketplace and the Redwood City, California-based giant’s own data products.
Oracle bought BlueKai, a data services and technology firm, in February. The estimated purchase price was between $350 million and $400 million, according to Ad Exchanger.
DaaS for Marketing provides access to user data offline and online, including mobile. Oracle said it gathers the data from “trusted and validated sources,” which ensures privacy and security compliance.
The database contains a massive amount of user profiles. Oracle claims there are more than 1 billion profiles of people around the world, which salespeople can use to identify prospects at massive scale and marketers can use to target ads and content.
The service is also a channel between the customer and hundreds of Oracle partners in the online, mobile, search and social marketing industry.
DaaS for Social (currently in limited availability) enriches and categorizes unstructured data collected from social networks. In Oracle’s words, it provides intelligence on customers, competitors and market trends.
The solution uses a text processing technology, which in combination with other structured data can provide business intelligence, according to Oracle.
Data available through the services can be combined with enterprises’ own internal data. They can also be “plugged” into Oracle applications and its other cloud services.
Omar Tawakol, general manager and group vice president for Oracle Data Cloud, said, “Unbundling data from SaaS applications has enhanced a business user’s ability to activate insights gleaned from external data sources, leading to more engaging and personalized customer experiences.” | 7:21p |
Dropbox Expands Feature Set to Lure (Paid) Business Accounts
In a bid to attract more business (and paid) accounts, Dropbox has added several enterprise-friendly features, including better security, better sharing and better search.
Dropbox touts over 300 million end users, but the online collaboration and storage company needs more paid accounts to offset the plethora of free accounts and the operating expenses that come with running a massive infrastructure to support the service. Free accounts are a necessary evil that exposes first-time users to companies like Dropbox and competitor Box, both of which compete with giants like Google, which provides a lot of free cloud storage space to anybody.
Online storage is an expensive business to be in. From Box’s pre-IPO filing, we recently learned about the difficult balance companies in this business have to maintain: ensuring there is enough data center capacity to absorb demand without overspending on infrastructure and stranding capital.
Dropbox must find compelling reasons for users to opt for the paid version of its online storage for business, and it is choosing to do this with premium features. Historically, online storage tiers were based on storage space, but actual storage itself is a commodity.
Dropbox is adding features at a fast clip as well as building complementary applications. Examples include Carousel, a photo-viewing and sharing app, and Mailbox, for friendly email on mobile devices.
About 80,000 businesses use Dropbox for Business today, according to the company’s announcement. The business-oriented offering came out of beta last April. The highest tier of paid accounts costs $15 per user per month with a five-user minimum.
New additions include view-only permissions for shared folders, which means a file creator has better control over who edits that file. Dropbox is also adding password protection as well as links that can expire, a feature that was the major hook for storage provider Drop.io prior to its acquisition by Facebook.
Business accounts will also eventually get the company’s full-text search functionality. “Over the next few months, we’ll also be making full-text search and Project Harmony available to teams through the early access program,” wrote Ilya Fushman, head of product, business and mobile at Dropbox.
The full-text search feature was built in-house, expanding capabilities beyond searching for file names only.
Project Harmony, mentioned above, is another major initiative. It’s intended to improve display and editing of Microsoft Office documents within Dropbox.
Office 365, the hosted version of Office, is growing at a fast clip. In the latest earnings call, Microsoft said it added a million subscribers during the quarter. Project Harmony and its Office editing and viewing capabilities grow in importance as the cloud-based Office user base grows.
Project Harmony is also adding Android compatibility.
The company added two new APIs to assist developers in hooking into Dropbox with their own apps. The APIs tie into the new features, letting a developer’s app link to shared folders or document previews rather than basic file sharing integration.
Dropbox also announced it was opening a new international headquarters in London to support international growth.
Compliance is a logical next step in the pursuit of more paid business usage, as it will allow the company to target more compliance-conscious verticals, such as healthcare with HIPAA. | 10:46p |
Google: From 112 Servers to a $5B-Plus Quarterly Data Center Bill
It was only 15 years ago that Google was running on slightly more than 100 servers, stacked in racks Sergey Brin and Larry Page put together themselves using cheap parts – including insulating corkboard pads – to cut down the cost of their search engine infrastructure.
That do-it-yourself ethos has stayed with the company to this day, although it is now applied at a much bigger scale.
Google data centers cost the company more than $5 billion in the second quarter of 2014, according to its most recent quarterly earnings, reported earlier this month. The company spent $2.65 billion on data center construction, real estate purchases and production equipment, and somewhat south of $2.82 billion on running its massive infrastructure.
The $2.82 billion figure is the size of the “other cost of revenue” bucket the company includes its data center operational expenses in. The bucket includes other things, such as hardware inventory costs and amortization of assets Google inherits with acquisitions. The size of this bucket in the second quarter represented 18 percent of the company’s revenue for that quarter. It was 17 percent of revenue in the second quarter of last year.
Google’s largest server order ever
While the amount of money Google spends on infrastructure is astronomically higher than the amount it spent 15 years ago, the company makes much more money per server today than it did back then.
“In retrospect, the design of the [“corkboard”] racks wasn’t optimized for reliability and serviceability, but given that we only had two weeks to design them, and not much money to spend, things worked out fine,” Urs Hölzle, Google’s vice president for technical infrastructure, wrote in a Google Plus post today.
 One of Google’s early “corkboard” racks is now on display at the National Museum of American History in Washington, D.C.
In the post, Hölzle reminisced about the time Google placed its largest server order ever: 1,680 servers. This was in 1999, when the search engine was running on 112 machines.
Google agreed to pay about $110,000 for every 80 servers and offered the vendor a $2,000 bonus for each of the 80-node units delivered after the first 10 but before the deadline of August 20th. The order was dated July 23, giving the vendor less than one month to put together 800 computers (with racks and shared cooling and power) to Google’s specs before it could start winning some bonus cash.
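A quick back-of-the-envelope check on those figures, using only the numbers quoted above (the totals are this article's arithmetic, not figures from the order document):

```python
# Back-of-the-envelope math on the 1999 order, using the figures quoted above.
servers_ordered = 1680
servers_per_unit = 80
price_per_unit = 110_000   # dollars per 80-server unit
bonus_per_unit = 2_000     # for units delivered after the first 10, before Aug. 20

units = servers_ordered // servers_per_unit            # 21 cabinets, matching the order
base_price = units * price_per_unit                    # $2,310,000
max_bonus = (units - 10) * bonus_per_unit              # up to $22,000 in bonus
price_per_server = price_per_unit / servers_per_unit   # $1,375 per server

print(units, base_price, max_bonus, price_per_server)
```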
Hölzle included a copy of the order for King Star Computer in Santa Clara, California. The order describes 21 cabinets, each containing 60 fans and two power supplies to power those fans.
There would be four servers per shelf, and those four servers would share:
- 400-watt ball bearing power supply
- Power supply connector connecting the individual computers to the shared power supply
- Two mounting brackets
- One plastic board for hard disks
- Power cable
Each server would consist of:
- Supermicro motherboard
- 256 megabytes of memory
- Intel Pentium II 400 CPU with Intel fan
- Two IBM Deskstar 22-gigabyte hard disks
- Intel 10/100 network card
- Reset switch
- Hard disk LED
- Two IDE cables connecting the motherboard to the hard disk
- 7-foot Cat. 5 Ethernet cable
Google’s founders figured out from the company’s early days that the best way to scale cost-effectively would be to specify a simple server design themselves instead of buying off-the-shelf all-included gear. It designs its hardware on its own to this day. Other Internet giants that operate data centers at Google’s scale have followed suit.