Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 6th, 2013
12:30p
Collaboration: The Key to Cashing in on the Cloud
Russell Griffin is the Director of Channel Programs at Hostway, with more than 15 years of experience in channel sales and management at industry-leading hosting providers and major technology companies, including Dell and Microsoft.
RUSSELL GRIFFIN, Hostway
Cloud services are one of the hottest trends in the IT solutions sector, with industry analyst firm Gartner predicting that demand could surge from $2.7 billion to well over $10 billion in just a couple of years. Spotting a new opportunity, many IT solution providers, including value-added resellers, integrators, managed services providers, developers and independent software vendors, are looking to add cloud services to their portfolios.
Adding cloud services may be a smart move: Many customers are looking for a one-stop shop, and rounding out an offering with cloud services can enhance your company’s overall appeal. But while cloud is attractive precisely because it has so few infrastructure requirements for customers, the same is not true on the IT vendor side. Even though cloud services are accessed in a virtual environment, it takes significant investment and resources to build and manage a solid cloud infrastructure and keep it up to date.
Partnering with Hosting Providers
So how do you cash in on the cloud trend without making that investment? Partnering with an established hosting provider can be an ideal solution, allowing you to share an existing infrastructure instead of building your own. With the right hosting partner, offering cloud services can broaden your company’s appeal by expanding your portfolio of services so that customers have a single IT solution provider.
To make this work, you’ll need to evaluate potential hosting partners’ goals and strategies to make sure they dovetail with your business operations and objectives. Take a look at the growth opportunities such a strategy offers, focusing on sustainability and scalability as well as deployment and development support.
Cloud partnerships generally fall into five categories:
- Resellers: With this arrangement, you control the customer relationship. The cloud provider bills you, providing services at a discounted rate. You price the cloud services you offer customers and bill them accordingly.
- Referral Partners: In a referral partner scenario, you hand the business off to the cloud provider in exchange for a commission. Terms can vary and often include residual payments over the lifetime of the relationship with the customer.
- White Label Resellers: Also known as a “private reseller” arrangement, this agreement allows you to offer a branded cloud platform to your users. In most cases, white label resellers also provide first- and second-line support, backed up by the hosting partner.
- Affiliate Programs: In these website referral arrangements, you place partner ads or links on your site and receive a commission for the new business those links generate.
- Third-Party Agents: This type of agreement generally involves an onsite resource, often based offshore, to generate new business and provide service to customers in targeted regions.
Depending on your business objectives, one of these types of hosting partnerships can provide a solution for your customers who need cloud services while opening lucrative new revenue streams for your company. The rapid proliferation of cloud services – a multitude of “(fill in the blank)-as-a-Service” offerings – is a testament to the growing demand, so this can be a great way to expand your offerings without taking on a high level of risk.
Focus on Customer Needs
While it’s tempting to think of cashing in by building your own cloud service infrastructure, it may make more sense to find out what your customers need and seek an established partner who can help you offer cloud services without the upfront investment. Collaboration may be the solution that supports your objectives for your business today – and makes sense as technology service demands evolve over time.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:58p
Cloud Enablers: IBM Launches New Flex Systems
New IBM Flex System products support larger clouds, even in smaller facilities. (Photo: IBM)
IBM launched major additions to its Flex System portfolio, seeking to enable businesses to build larger clouds in smaller data centers. Addressing IT consolidation efforts, the offerings combine the latest server technology with new virtualization, networking and management tools.
The IBM Flex System portfolio provides a platform for both x86 and POWER market segments and supplies the building-block elements for IBM PureSystems. Flex Systems are built-to-order offerings for clients who want to custom-build and tune configurations to their requirements, selecting the specific compute, systems networking, and optional storage and management components needed to support their workloads.
Three new POWER7+ solutions address entry-level, midsize and large workloads, while a new x222 double-density system built on x86 architecture supports up to 2,800 Windows 7 user images in a single chassis. On the management layer, Flex System Manager provides a single point of control, accessible from any location via iOS, Android or BlackBerry devices. It adds a utilization “fuel gauge” to help clients monitor the status and availability of their infrastructure, and has been expanded to manage the HANA-optimized System x3950. New switches and fabrics have also been added to help enable software-defined networking (SDN) and improve connectivity and performance by increasing infrastructure bandwidth up to 40Gb.
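The 2,800-desktop figure works out to roughly 100 user images per compute node, assuming the standard 14-bay Flex System chassis with two x222 nodes per bay; both packaging assumptions are inferred rather than quoted in the announcement.

```python
# Rough density math behind the 2,800-desktop claim. The 14-bay chassis and
# two-nodes-per-bay packaging are assumptions, not figures from the launch.
bays_per_chassis = 14
nodes_per_bay = 2                            # "double density" x222
nodes = bays_per_chassis * nodes_per_bay     # 28 compute nodes per chassis
print(2800 / nodes)                          # ~100 Windows 7 images per node
```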
The Flex System portfolio is backed by a business partner network that has grown to more than 4,500 strong. The portfolio additions give partners a variety of choices and even allow them to select Flex System bundles that are specifically configured and priced for their local markets.
2:24p
AFCOM Australian Symposium Features Range of Trends, Technology Presentations
Glenn Ashe, former Australian government CIO for the Attorney-General’s Office, will present the keynote speech for AFCOM’s third annual symposium, held at Rydges South Bank in Brisbane, Australia, on September 9-11. The presentation, titled “Future Demands and Expectations for Data Centre Capability in the Asia Pacific Region,” will help data centre professionals in Australia and the Asia Pacific understand what lies ahead in technology.
GLENN ASHE, AFCOM Symposium Speaker
“The AFCOM Australian Symposium is one of the most important events on the ICT calendar,” Ashe said. “It brings together data centre professionals to discuss, share and gain exposure to trends and over-the-horizon elements of data management.”
AFCOM, an association of data center professionals, convenes The AFCOM Symposium: Brisbane to gather Australia’s leading data centre professionals for three intense days of learning, networking and discussion, offering specific guidance on building data centre strategies and achieving optimum effectiveness in daily operations.
Mr. Ashe will highlight the emerging importance of data centres in the Asia Pacific region, the demands created by data centre consolidation initiatives from all levels of government, and the increasing service demands and expectations from the growing digital economy. Ashe is an ICT and security professional with extensive experience in the military, private sector and in the Australian federal government.
Other speakers for the three-day symposium include:
- Tom Townsend, Data Centre & Networks Manager, University of Canberra – Session: How Many Tiers Do You Need?
- Glenn Allan, Service Performance Manager Data Centre, National Australia Bank – Session: Colocate or NOT “The Multi-Million Dollar Question”
- Mathew Smorhun, Assistant Secretary Strategic Reform, Department of Defence – Session: Large Scale ICT Procurement in Government: Driving Business Outcomes from Large Technology Contracts
- Nigel Phair, Director of Internet Security at Canberra University – Session: Cyber Security for Organisation Survival
For more information or to register to attend, go to AFCOM’s website.
3:07p
A Storm of Servers: How the Leap Second Led Facebook to Build DCIM Tools
Cabinets filled with servers in a Facebook data center. The company is developing its own DCIM software, based on insights gained during last year’s Leap Second bug. (Photo: Facebook)
ASHBURN, Va. - For data centers filled with thousands of servers, it’s a nightmare scenario: a huge, sudden power spike as CPU usage soars on every server.
Last July 1, that scenario became real as the “Leap Second” bug caused many Linux servers to get stuck in a loop, endlessly checking the date and time. At the Internet’s busiest data centers, power usage almost instantly spiked by megawatts, stress-testing facilities’ power infrastructure and their users’ capacity planning.
The experience is yielding insights into the operations of Facebook’s data centers, and may result in new tools to help hyper-scale companies manage workloads. The Leap Second “server storm” has prompted the company to develop new software for data center infrastructure management (DCIM) to provide a complete view of its infrastructure, spanning everything from the servers to the generators.
For Facebook, the incident also offered insights into the value of flexible power design in its data centers, which kept the status updates flowing as the company nearly maxed out its power capacity.
The Leap Second: What Happened
The leap second bug is a time-handling problem that is a distant relative of the Y2K date issue. A leap second is a one-second adjustment occasionally applied to Coordinated Universal Time (UTC) to account for variations in the speed of the earth’s rotation; the extra second is inserted as 23:59:60, just before midnight UTC. The 2012 Leap Second was observed at midnight on July 1.
A number of web sites immediately experienced problems, including Reddit, Gawker, Stumbleupon and LinkedIn. More significantly, air travel in Australia was disrupted as the IT systems for Qantas and Virgin Australia experienced difficulty handling the time adjustment.
What was happening? The additional second caused particular problems for Linux systems that use the Network Time Protocol (NTP) to synchronize with atomic clocks. The leap second caused these systems to believe that time had “expired,” triggering a loop condition in which the system endlessly sought to check the date, spiking CPU usage and power draw.
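As a rough illustration, and not the actual kernel behavior or anything Facebook ran, the sketch below mimics that failure mode: a timed sleep that returns immediately after the clock adjustment, so the calling thread spins, re-checking the clock at full speed and pegging a CPU core.

```python
# Illustrative sketch only (hypothetical, not kernel or Facebook code): after
# the leap adjustment, timed sleeps woke up immediately, so threads spun in a
# tight loop re-checking the clock, driving CPU usage -- and power draw -- up.
import time

def broken_sleep_until(deadline):
    # Stand-in for a timer that fires early after the clock adjustment:
    # it returns at once instead of blocking until the deadline.
    return

def wait_for(deadline):
    spins = 0
    while time.time() < deadline:      # the endless date/time check
        broken_sleep_until(deadline)   # "sleep" returns immediately...
        spins += 1                     # ...so the loop runs flat out
    return spins

if __name__ == "__main__":
    # Even half a second of spinning runs a huge number of iterations;
    # multiplied across tens of thousands of servers, a power spike follows.
    print("iterations while spinning:", wait_for(time.time() + 0.5))
```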
As midnight arrived in the Eastern time zone, power usage spiked dramatically in Facebook’s data centers in Virginia, as tens of thousands of servers spun up, frantically trying to sort out the time and date. The Facebook web site stayed online, but the bug created some challenges for the data center team.
“We did lose some cabinets when row level breakers tripped due to high load,” said Tom Furlong, the VP of Site Operations for Facebook. “The number of cabinets brought down was not significant enough for it to impact our users.”
Facebook wasn’t the only one seeing a huge power surge. German web host Hetzner AG said its power usage spiked by 1 megawatt – the equivalent of the power usage of about 1,000 households. The huge French web host OVH, which was running more than 140,000 servers at the time, also reported a huge power spike.
Electric power is the most precious commodity in a server farm. Power capacity and cost guide most data center customers’ decisions. The Leap Second bug raised a key question: what happens if a huge company unexpectedly maxes out its available power?
The Power Perspective
It’s the type of question that brings wonky debates about power system design into sharp relief. Fortunately, it wasn’t an academic question for the team at DuPont Fabros Technology (DFT).
DuPont Fabros builds and manages data centers that house many of the Internet’s premier brands, including Apple, Microsoft, Yahoo and Facebook. As the leap second bug triggered, the power usage surged within one of DFT’s huge data centers in Ashburn, Virginia.
Hossein Fateh, the President and CEO of DuPont Fabros, said one of the tenants in the building suddenly saw its power load surge from 10 megawatts to 13 megawatts, and stay there for about 5 hours.
“The building can take it, and that tenant knew it,” said Fateh. “We encourage tenants to go to 99 and 100 percent (of their power capacity). Some have gone over.”
Fateh’s confidence is rooted in DFT’s approach to power infrastructure. It builds its data centers using what’s known as an ISO-parallel design, a flexible approach that offers both the redundancy of parallel designs and the ability to isolate problems.
Multi-tenant data centers like DFT’s contain multiple data halls, which provide customers with dedicated space for their IT gear. The ISO-parallel design employs a common bus to distribute electricity through the building, along with a choke that can isolate a data hall experiencing electrical faults, protecting other users from power problems. The system can also “borrow” spare capacity from other data halls.
“The power infrastructure available to the customers in the ISO parallel design allows for some over-subscription inside the room,” said Furlong. “In other words, if every row went to max power, you could exceed the contracted capacity available for the room. The rooms have double the distribution board capacity, which means this over-subscription doesn’t trip the room level breaker. Because the rooms are fed by a ring bus, and some customers may never exceed their contracted capacity, in theory there is capacity available if you exceed the capacity of your room.”
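A back-of-the-envelope sketch of that logic, using the 10 MW to 13 MW surge Fateh described; the distribution-board rating and ring-bus headroom below are illustrative assumptions, not DFT’s actual specifications.

```python
# Hypothetical numbers illustrating Furlong's point; only the 10 MW contract
# and the 13 MW surge come from the article, the rest are assumptions.
contracted_room_mw = 10.0                        # tenant's contracted room capacity
distribution_board_mw = 2 * contracted_room_mw   # "double the distribution board capacity"
surge_mw = 13.0                                  # load during the leap second event

# Over-subscription inside the room does not trip the room-level breaker,
# because the surge is still well under the distribution board rating.
print("room breaker trips:", surge_mw > distribution_board_mw)          # False

# The extra 3 MW has to come from somewhere: on the shared ring bus it can be
# "borrowed" from rooms whose tenants are running below their contracts.
other_rooms_headroom_mw = 5.0                    # assumed unused capacity elsewhere
print("surge covered by ring-bus headroom:",
      surge_mw - contracted_room_mw <= other_rooms_headroom_mw)         # True
```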
Like any good landlord, Fateh doesn’t identify the tenant in the leap second incident. Like any cautious data center manager, Furlong doesn’t get into details about his company’s power agreements. But it doesn’t take a lot of imagination to conclude that Facebook benefited from the ISO-parallel design during the server storm last July 1.
Next: Facebook’s New Focus: DCIM
5:00p
Dimension Data Adds Three-Tier Cloud Backup
Dimension Data will introduce three tiers of cloud backup as part of its Compute as a Service (CaaS) offering in September. It joins a three-tier cloud storage service the company recently introduced.
“Cloud users often manage their backup in-house, which can be complicated and costly,” said Steve Nola, Cloud Business Unit CEO at Dimension Data. “By offering the integrated Cloud Backup service, Dimension Data enables businesses to back up whole applications while providing assurance that historical data is secure and easy to access.”
The tiers range from simple file and folder backup and restore, to system state and full application backup and restore. The service will be integrated into the company’s public, private and hosted private CaaS cloud offerings. Cloud Backup will appear as an option for CaaS users to select as they provision their environment through the user interface or API, or it can be added later.
The three tiers are:
- Cloud Backup Essentials – file and folder backup and restore
- Cloud Backup Advanced – file, folder, and system state backup and restore
- Cloud Backup Enterprise – file, folder, system state, and application backup and restore
Dimension Data acquired OpSource in 2011, and it became part of the company’s new Cloud Solutions Business Unit. Japanese telecom giant NTT had acquired Dimension Data the year prior. With their combined resources, the goal is to help enterprises migrate to the cloud.
The company has data centers located in San Jose and Ashburn, giving it a presence on both coasts. The service will initially be available out of the United States and Australia, with plans to deploy in Asia, Europe, and the Middle East & Africa by the end of the year. Pricing starts at $10 per cloud server, plus $0.10 per GB of data storage per month.
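As a quick illustration of how that list price adds up, reading the quoted storage rate as $0.10 per GB per month; the server count and data volume below are made-up example inputs.

```python
# Hypothetical cost sketch based on the published starting price; inputs are
# example values, not Dimension Data figures.
PER_SERVER_USD = 10.00   # per cloud server, per month
PER_GB_USD = 0.10        # per GB of backup storage, per month

def monthly_backup_cost(servers: int, stored_gb: float) -> float:
    return servers * PER_SERVER_USD + stored_gb * PER_GB_USD

# e.g. five cloud servers protecting 500 GB of data:
print(f"${monthly_backup_cost(5, 500):.2f} per month")   # $100.00 per month
```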
6:31p
Crossbar Emerges From Stealth, Packing 1TB Into a Single Chip
Startup Crossbar has come out of stealth with a chip representing a new class of 3D RRAM that can be incorporated into a standard manufacturing fab.
Start-up company Crossbar came out of stealth mode Monday with an announcement that it has created a new category of memory that is capable of storing 1TB of data on a single chip. Crossbar’s Resistive RAM (RRAM) technology is a new generation of non-volatile memory that is capable of storing up to one terabyte on a 200mm² chip, the size of a postage stamp. The company also announced it has developed a working Crossbar memory array at a commercial fab.
3D Stacking
Due to its simple three-layer structure, Crossbar technology can be stacked in 3D, delivering multiple terabytes of storage on a single chip. Its simplicity, stackability and CMOS compatibility enable logic and memory to be easily integrated onto a single chip at the latest technology node. Compared to NAND flash memory, Crossbar’s technology delivers 20 times faster write performance, 20 times lower power consumption and 10 times the endurance. With these performance levels, the company believes it can enable a new wave of electronics innovation for consumer, enterprise, mobile, industrial and connected device applications.
“Non-volatile memory is ubiquitous today, as the storage technology at the heart of the over a trillion dollar electronics market – from tablets and USB sticks to enterprise storage systems,” said George Minassian, chief executive officer, Crossbar, Inc. “And yet today’s non-volatile memory technologies are running out of steam, hitting significant barriers as they scale to smaller manufacturing processes. With our working Crossbar array, we have achieved all the major technical milestones that prove our RRAM technology is easy to manufacture and ready for commercialization. It’s a watershed moment for the non-volatile memory industry.”
How It Works
The Crossbar memory cell is based on three simple layers: a non-metallic bottom electrode, an amorphous silicon switching medium and a metallic top electrode. The resistance-switching mechanism is based on the formation of a filament in the switching material when a voltage is applied between the two electrodes. This simple and very scalable memory cell structure enables an entirely new class of RRAM, which can be easily incorporated into the back end of line of any standard CMOS manufacturing fab. Crossbar plans to bring to market standalone chip solutions, as well as license its technology to system-on-a-chip (SOC) developers for integration into next-generation SOCs.
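As a rough behavioral illustration, and not Crossbar’s actual device physics or parameters, a resistive cell can be modeled as a switch whose filament is formed by a sufficiently positive voltage and dissolved by a sufficiently negative one, with small read voltages leaving the stored state untouched.

```python
# Toy behavioral model of a generic resistive memory cell. Threshold voltages
# and the two-state model are illustrative assumptions, not Crossbar's specs.
class RRAMCell:
    SET_V = 2.0      # forming voltage: builds the conductive filament
    RESET_V = -2.0   # reverse voltage: dissolves the filament

    def __init__(self):
        self.low_resistance = False        # no filament -> high resistance -> "0"

    def apply(self, volts: float) -> None:
        if volts >= self.SET_V:
            self.low_resistance = True     # filament formed -> "1"
        elif volts <= self.RESET_V:
            self.low_resistance = False    # filament dissolved -> "0"
        # small read voltages leave the state unchanged (non-volatile)

    def read(self) -> int:
        return 1 if self.low_resistance else 0

cell = RRAMCell()
cell.apply(2.5)      # write a 1
print(cell.read())   # -> 1, and the state persists without power
cell.apply(-2.5)     # erase back to 0
print(cell.read())   # -> 0
```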
“RRAM is widely considered the obvious leader in the battle for a next generation memory and Crossbar is the company most advanced, showing a working demo that proves the manufacturability of RRAM,” said Sherry Garber, Founding Partner at Convergent Semiconductors. “This is a significant development in the industry, as it provides a clear path to commercialization of a new storage technology, capable of changing the future landscape of electronics innovation.”