Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, January 22nd, 2013

    1:19p
    Silicon Photonics: The Data Center at Light Speed

    An Open Compute Project-compliant rack (prototype shown) that will use silicon photonics. This ultra-low latency technology will increase the speed at which components in the rack can speak to each other and allow components that previously needed to be bound to the same motherboard to be spread out within a rack. (Photo: Colleen Miller)

    Last June, Open Compute Project Chairman Frank Frankovsky outlined an ambitious vision to separate the technology refresh cycle for CPUs from the surrounding equipment in a rack. Frankovsky, also a hardware executive at Facebook, said the ability to easily swap out processors could transform the way chips are procured at scale, perhaps shifting to a subscription model.

    But what would be required to make this work? One answer emerged at last week’s Open Compute Summit: silicon photonics. Intel has been working on the technology for a decade, and is now collaborating with Facebook on a prototype for a new rack architecture that can “disaggregate” the server, taking components that previously needed to be bound to the same motherboard and spreading them out within a rack.

    Silicon photonics uses light (photons) to move huge amounts of data at very high speeds over a thin optical fiber rather than using electrical signals over a copper cable. Intel’s prototype can move data at up to 100 gigabits per second (Gbps), a speed that allows components to work together even when they’re not in close proximity to one another.
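    To put that 100 Gbps figure in perspective, here is a rough back-of-the-envelope calculation in Python (the fiber length and payload size are illustrative assumptions, not Intel's numbers) showing why in-rack distance stops mattering at these speeds:

        # Back-of-the-envelope: moving data across a rack over a 100 Gbps optical link.
        # The fiber length and payload size are illustrative assumptions.
        LINK_GBPS = 100              # prototype line rate, per the article
        FIBER_METERS = 2.0           # assumed worst-case in-rack fiber run
        LIGHT_SPEED_FIBER = 2.0e8    # roughly 2/3 the speed of light, in glass, m/s

        propagation_ns = FIBER_METERS / LIGHT_SPEED_FIBER * 1e9
        transfer_ms = 8e9 / (LINK_GBPS * 1e9) * 1e3   # time to move 1 GB (8e9 bits)

        print(f"Propagation delay across the rack: {propagation_ns:.0f} ns")   # ~10 ns
        print(f"Time to move 1 GB: {transfer_ms:.0f} ms")                      # ~80 ms

    The roughly 10 nanoseconds of flight time is negligible next to the 80 milliseconds of serialization time, which is why CPU, memory and storage trays no longer need to share a motherboard.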

    New Options in Server & Rack Design

    This creates intriguing possibilities in server and rack design. At the Open Compute Summit, Intel showed off a photonic rack built by Quanta, which separated components into their own server trays – one tray for Xeon CPUs, another for Intel’s latest Atom CPUs, and another for storage. When a new generation of CPUs is available, users can swap out the CPU tray rather than waiting for an entire new server and motherboard design.

    This approach “enables independent upgrading of compute, network and storage subsystems that will define the future of mega-datacenter designs for the next decade,” said Justin Rattner, Intel’s chief technology officer, who said the photonic rack “enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today’s copper based interconnects.”

    “We’re excited about the flexibility that these technologies can bring to hardware and how silicon photonics will enable us to interconnect these resources with less concern about their physical placement,” said Frank Frankovsky, chairman of the Open Compute Foundation and vice president of hardware design and supply chain at Facebook.

    At Open Compute, Intel’s Jeff Demain provided DCK with an overview of silicon photonics and its potential to transform data center design and operations. This video runs about 7 minutes.

    1:58p
    Fourth Key to Brokering IT Services Internally: Advertise the Ts and Cs

    Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.

    DICK BENTON
    GlassHouse

    In my last post, I outlined the third of seven key tips IT departments should follow if they want to begin building a better service strategy for their internal users: create your own menu of services. That means identifying what services you will offer and building a service catalog of offerings. Expanding on that idea, today’s post will focus on the fourth step: advertise the Ts and Cs. IT must develop a simple and easy-to-read list of the terms and conditions (Ts and Cs) under which services are supplied, and include these in the service catalog and in the service level agreement (SLA) that should be provided with each service delivery.

    The Ts and Cs spelled out in each SLA should also appear in the introduction to your service catalog. Having said that, it is critical that you provide a separate SLA document that both IT and the individual ordering the service formally sign off on (albeit electronically). This formalizes the Ts and Cs under which the services are supplied and, equally importantly, the Ts and Cs under which the services are received.

    What Terms and Conditions Should Cover

    The Ts and Cs (in the SLA) should consist of three key components: 1) the policies governing delivery of the services; 2) the procedures involved in the service provisioning and delivery; and 3) the roles and responsibilities of both IT and the consumer during the life cycle of the service. The prudent CIO will socialize the Ts and Cs and seek consensus with all stakeholders before including these in the service catalog, and subsequently, in each service order’s SLA. This becomes the business foundation for the order, delivery and consumption of the Internal Cloud Provider’s (ICP’s) services.

    Policies are always a delicate and often contentious subject. They lock management into a position from which a retreat is nearly impossible. Management is keenly aware of this restriction on their decision-making flexibility, and will often resist policies unless they are clearly the responsibility of others or renamed as “guidelines.” In addition, consumers see policies as bureaucratic at best and “weasel” conditions at worst. It behooves the prudent CIO to minimize policies and ensure that all assumptions have been made explicit and the attributes against which service delivery is measured are called out.

    The major policies required will include but not be limited to the following:

    • Formalization of all delivery metrics. You don’t want to wait for an event to have a discussion on what is meant by acceptable response time; you don’t want to wait on a delivery complaint to discuss what is a reasonable time to provision; and you don’t want to wait on a disaster to discuss recovery time objectives (RTO) and tolerable data loss under a recovery point objective (RPO). (A hypothetical sketch of formalized metrics follows this list.)
    • Formalization of roles and responsibilities
    • Calling out the manner in which exception conditions will be identified and communicated
    • Formalization of how the cost of services deployed will be reported and funded
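    As a concrete illustration of what formalizing delivery metrics can look like, the Python sketch below (field names and thresholds are hypothetical, not from GlassHouse) encodes SLA targets so that measured service levels can be checked mechanically rather than argued after an incident:

        # Hypothetical sketch: encoding SLA delivery metrics so breaches are
        # detected against agreed numbers rather than negotiated after the fact.
        from dataclasses import dataclass

        @dataclass
        class DeliveryMetrics:
            response_time_ms: float      # acceptable service response time
            provision_time_hours: float  # reasonable time to provision
            rto_hours: float             # recovery time objective
            rpo_minutes: float           # tolerable data loss window

            def breaches(self, measured: "DeliveryMetrics") -> list[str]:
                """Return the names of metrics where measured values exceed targets."""
                return [name for name in vars(self)
                        if getattr(measured, name) > getattr(self, name)]

        # Targets agreed in the SLA up front...
        target = DeliveryMetrics(response_time_ms=200, provision_time_hours=48,
                                 rto_hours=4, rpo_minutes=15)
        # ...compared against what was actually delivered.
        actual = DeliveryMetrics(response_time_ms=180, provision_time_hours=72,
                                 rto_hours=4, rpo_minutes=15)
        print(target.breaches(actual))   # ['provision_time_hours']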

    Policies can blend into procedures when we discuss the steps needed to order a service, and the necessary authorization and clearances required by the organization’s Risk Management group. Procedures do not need to be detailed in the SLA, but they should call out, at a minimum, the key steps and gateways the process must work through, along with the roles and responsibilities involved in each step or gateway. The major functions requiring a procedure outline include but are not limited to the following (a minimal lifecycle sketch follows the list):

    • The manner in which a service is requested (possibly Web-based selection)
    • The process for providing a quotation and draft SLA in response to a selection of service (may be entirely automated or a manual process)
    • The process for quote acceptance or revision of a service order
    • The manner in which the service selection will be authorized (signoffs or e-notifications)
    • The procedure to actually provision the requested service (possibly automated and workflow-enabled)
    • The manner and frequency in which usage and metrics will be reported
    • The procedure, both default and by request, for service end-of-life
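    To make the steps and gateways concrete, here is a minimal lifecycle sketch in Python (the stage names are hypothetical, not a GlassHouse process); each order must pass through every gateway in sequence:

        # Hypothetical sketch of a service order lifecycle, mirroring the
        # procedure outline above; each gateway must be passed in sequence.
        ORDER_STAGES = [
            "requested",    # web-based service selection
            "quoted",       # quotation and draft SLA issued
            "accepted",     # quote accepted, or revised and re-quoted
            "authorized",   # signoffs / e-notifications complete
            "provisioned",  # automated, workflow-enabled provisioning done
            "in_service",   # usage and metrics reported on a set cadence
            "retired",      # end-of-life, by default policy or by request
        ]

        def advance(current: str) -> str:
            """Move an order to the next gateway; fail on unknown or final stages."""
            i = ORDER_STAGES.index(current)
            if i == len(ORDER_STAGES) - 1:
                raise ValueError("order already retired")
            return ORDER_STAGES[i + 1]

        stage = "requested"
        while stage != "in_service":
            stage = advance(stage)
        print(stage)   # in_service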

    Building Good Relationships

    The Ts and Cs in the service catalog, replicated in each SLA, become the business contract between IT and the consumer, governing the manner in which services are selected, provisioned, delivered and funded. The Ts and Cs’ hidden but most significant impact is that they provide IT with an invaluable reputational enhancement tool by formalizing the delivery of consumer satisfaction against empirical criteria (instead of emotional reaction). Ts and Cs and SLAs are mission-critical components of the ICP deployment model.

    In our next post, we’ll look at building the service order process.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:39p
    Flash Memory at Scale: Fusion-io Goes Large

    With growing demand for flash memory from hyperscale and cloud companies, Fusion-io last week unveiled ioScale, a new product that provides up to 3.2 terabytes of ioMemory capacity per PCI Express slot. Flash was a hot topic at the Open Compute Summit last week, where Fusion-io SVP of Products Gary Orenstein spoke with Data Center Knowledge editor Rich Miller. Orenstein provides an overview of the ioScale product, which beefs up server processing with ultra-low latency ioMemory. ioScale is available to order in a minimum of one hundred units, with pricing starting as low as $3.89 per gigabyte and increasing discounts based on volume. ioScale was developed from lessons learned working with hyperscale companies, and Fusion-io says the market is now broadening because the product can be deployed in other enterprises as well. The product is currently available. This video runs 6 minutes, 30 seconds.
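    At the quoted entry price, the list cost of a fully populated card is easy to estimate (a rough calculation from the article's figures; actual volume pricing will differ):

        # Rough list-price estimate from the figures above; volume discounts
        # would reduce the effective per-gigabyte rate.
        PRICE_PER_GB = 3.89     # entry price, USD per gigabyte
        CAPACITY_GB = 3200      # 3.2 TB top ioScale capacity, decimal gigabytes
        MIN_ORDER_UNITS = 100   # minimum order quantity

        card_cost = PRICE_PER_GB * CAPACITY_GB
        print(f"One 3.2 TB card: ~${card_cost:,.0f}")                    # ~$12,448
        print(f"Minimum order:   ~${card_cost * MIN_ORDER_UNITS:,.0f}")  # ~$1,244,800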

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    5:20p
    SolidFire Adds Management Features Through Flexiant

    SolidFire’s all-SSD storage for cloud providers just got a little more user friendly. The company has partnered and integrated with Flexiant and its Cloud Orchestrator console. Both companies target service providers for their offerings and their combined forces make for a user-friendly solution out of the gate.

    SolidFire is unique in the sense that it’s all SSD, and it enables the provisioning of storage performance (IOPS) in addition to space, so SolidFire customers can guarantee quality of service to their customers. Flexiant provides granular metering, billing and reseller management. The partnership and integration mean that customers can provision storage performance (IOPS) and capacity across a SolidFire cluster on a per-volume basis from within Flexiant’s Cloud Orchestrator console, an interesting offering for service providers to leverage.

    “A key aspect to any of our partnerships is helping customers accelerate time to value,” said Dave Wright, CEO of SolidFire. “Integrating with key cloud building blocks like Flexiant Cloud Orchestrator is a critical piece of this equation.”

    Both companies claim similar heritages; they were formed to get rid of the headaches encountered when using legacy tools to deliver cloud services. Tony Lucas developed Flexiant Cloud Orchestrator to solve the cloud management issues faced as he delivered cloud services in prior roles. Dave Wright founded SolidFire after witnessing first-hand the unique storage challenges in a multi-tenant cloud infrastructure.

    This is a full integration of Flexiant’s cloud orchestration software with SolidFire storage. “Integration with SolidFire was quick and very straightforward through the use of their REST-based APIs,” said Tony Lucas, Product Champion and Founder at Flexiant. “It has allowed us to offer cloud service providers a level of storage provisioning, automation and scale that simply cannot be achieved by any other solution on the market.”
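    Neither company's actual API is shown here, but per-volume QoS provisioning over a REST-style interface might look roughly like the hypothetical Python sketch below (the endpoint, field names and values are illustrative assumptions, not documented SolidFire or Flexiant calls):

        # Hypothetical illustration only: the endpoint and field names are
        # assumptions, not SolidFire's or Flexiant's documented API.
        import requests

        def provision_volume(base_url, token, name, size_gb, min_iops, max_iops):
            """Create a volume with both a capacity and a guaranteed IOPS band."""
            resp = requests.post(
                f"{base_url}/volumes",
                headers={"Authorization": f"Bearer {token}"},
                json={
                    "name": name,
                    "sizeGB": size_gb,
                    "qos": {"minIOPS": min_iops, "maxIOPS": max_iops},
                },
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()

        # e.g. a tenant volume with 500 IOPS guaranteed and a 2,000 IOPS cap:
        # provision_volume("https://storage.example.com/api", "TOKEN",
        #                  "tenant42-db", size_gb=500, min_iops=500, max_iops=2000)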

    SolidFire fully launched its all-SSD storage system for cloud providers last November. The big differentiator for SolidFire is that service providers can guarantee quality of service, allowing customers to provision IOPS. It leverages an all-flash scale-out storage architecture with patented volume-level quality-of-service (QoS) controls. This means that providers can guarantee storage performance within shared infrastructures using real-time data reduction techniques and system-wide automation.

    Flexiant is a provider of cloud orchestration software for on-demand, fully automated provisioning of cloud services. Flexiant Cloud Orchestrator is a service-provider-ready software suite, enabling everything from cloud service provisioning through to granular metering, billing and reseller management. Flexiant customers include Cartika, FP7 Consortium, IS Group, ITEX, and NetGroup.

     

    9:26p
    Amazon Web Services Unveils High Memory Instances

    Amazon Web Services continues to expand its applicability in the high-end computing world. The newest enhancement is a workhorse of an instance designed for real-time, high-memory needs: the High Memory Cluster Eight Extra Large instance type.

    AWS continues to release instances that fit a broad range of needs, expanding its potential use cases. These high memory instances join the high storage instance family added to EC2 back in December and the high I/O instances added back in July. In addition to moving in on big applications and big data, the company also continues to enhance functionality across its products, two recent features being EBS Snapshot Copy and Static Hosting.

    High Memory Instances

    The High Memory Cluster Eight Extra Large instance type is designed for memory-intensive applications that need a lot of memory on one instance or take advantage of distributed memory architectures. It’s designed to host applications that have a voracious need for compute power, memory and network bandwidth, such as in-memory databases, graph databases, and memory-intensive high performance computing (HPC).

    These instances are available in the US-East (Northern Virginia) Region only, with plans to make them available in other AWS Regions in the future. Pricing starts at $3.50 per hour for Linux instances and $3.831 per hour for Windows instances. Given their size and nature, the company said these are the most cost-effective instances it offers.
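    At those rates, a rough monthly figure for an always-on instance works out as follows (on-demand pricing, assuming a 720-hour month and no reserved-instance discounts):

        # Rough monthly cost for one always-on instance at on-demand rates,
        # assuming a 720-hour month and no reserved-instance discounts.
        HOURS_PER_MONTH = 720
        print(f"Linux:   ${3.50 * HOURS_PER_MONTH:,.0f}/month")    # $2,520
        print(f"Windows: ${3.831 * HOURS_PER_MONTH:,.0f}/month")   # $2,758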

    Here are the full specs:

    • Two Intel E5-2670 processors running at 2.6 GHz with Intel Turbo Boost and NUMA support.
    • 244 GiB of RAM.
    • Two 120 GB SSDs for instance storage.
    • 10 Gigabit networking with support for Cluster Placement Groups.
    • HVM virtualization only.
    • Support for EBS-backed AMIs only.

    “We expect this instance type to be a great fit for in-memory analytics systems like SAP HANA and memory-hungry scientific problems such as genome assembly,” said the company in a blog post.

    The instance provides a total of 88 ECUs (EC2 Compute Units), and applications that need serious memory can take advantage of 32 Hyper-Threaded cores (16 per processor). There’s also an interesting Turbo Boost feature. When the operating system requests the maximum possible processing power, the CPU increases the clock frequency while monitoring the number of active cores, the total power consumption and the processor temperature. The processor runs as fast as possible while staying within its documented temperature envelope.
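    For readers who want to experiment, launching one of these instances is a single API call. The sketch below uses the modern boto3 SDK rather than the boto library of the era, and assumes cr1.8xlarge as the instance type's API name; the AMI ID and placement group name are placeholders:

        # Sketch using the boto3 SDK (the 2013-era equivalent was boto).
        # The AMI ID and placement group are placeholders; cr1.8xlarge is
        # assumed as the API name for High Memory Cluster Eight Extra Large.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")  # US-East only at launch

        resp = ec2.run_instances(
            ImageId="ami-xxxxxxxx",                 # must be an HVM, EBS-backed AMI
            InstanceType="cr1.8xlarge",
            MinCount=1,
            MaxCount=1,
            Placement={"GroupName": "my-cluster"},  # optional cluster placement group
        )
        print(resp["Instances"][0]["InstanceId"])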

    EC2 High Storage Instance Family

    Back in December, the company released a high storage instance family for data-intensive applications that require high storage density and high sequential I/O performance. Examples of these applications include data warehousing and log processing, and the company cited seismic analysis as a very specific use case. In short, it made EC2 applicable to workloads that generate a tremendous amount of data.

    Each instance includes 117 GiB of RAM, 16 virtual cores (providing 35 ECU of compute performance), and 48 TB of instance storage across 24 hard disk drives capable of delivering up to 2.4 GB per second of I/O performance.
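    Divided out, those specs imply sensible commodity-hardware numbers per drive (a quick sanity check, assuming capacity and throughput are spread evenly):

        # Quick sanity check on the per-drive figures implied by the specs,
        # assuming capacity and throughput spread evenly across 24 drives.
        DRIVES = 24
        print(48 / DRIVES, "TB per drive")        # 2.0 TB
        print(2400 / DRIVES, "MB/s per drive")    # 100 MB/s sequential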

    High I/O Instances

    Going further back to July, the company revealed High I/O Instances for Amazon EC2, an instance type that provides very high, low-latency disk I/O performance using SSD-based local instance storage. High I/O instances are suitable for high performance clustered databases, and are especially well suited for NoSQL databases like Cassandra and MongoDB. Use cases for this type of instance include media streaming, gaming, mobile, and social networking applications with demanding storage I/O needs.

    Customers whose applications require low latency access to tens of thousands of random IOPS can take advantage of the capabilities of this new Amazon EC2 instance.

    This is the third set of instances designed for high performance applications the company has released in the last half year or so, as it looks to capitalize on real-time big data needs, following the high storage and high I/O instances described above.

    Expanding Functionality and Applicability

    The company isn’t solely focused on expanding its use cases at the upper end of the market. It has also been adding functionality to enhance its existing products. Two added features of note are EBS Snapshot Copy and Static Hosting.

    EBS Snapshot Copy

    Back in December, the company introduced EBS Snapshot Copy, making it easier for customers to build AWS applications that span regions. It simplifies copying EBS snapshots between EC2 Regions. Use cases include geographic expansion (launching an application in a new region), migration (moving an application from one region to another) and disaster recovery, such as backing up data and log files across different geographical locations.
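    A cross-region copy is a single API call. Here is a minimal sketch with the modern boto3 SDK (the snapshot ID is a placeholder; note that the call is issued in the destination region and names the source region):

        # Minimal EBS Snapshot Copy sketch with boto3 (the 2013-era SDK was
        # boto). The client is created in the *destination* region, and the
        # source region and snapshot ID (a placeholder) are named in the call.
        import boto3

        ec2_west = boto3.client("ec2", region_name="us-west-2")
        resp = ec2_west.copy_snapshot(
            SourceRegion="us-east-1",
            SourceSnapshotId="snap-xxxxxxxx",
            Description="DR copy of production data volume",
        )
        print("New snapshot in us-west-2:", resp["SnapshotId"])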

    Static Hosting

    The company hasn’t focused solely on the high end of things – AWS also released Root Domain Website Hosting for Amazon S3. While customers have been able to host static websites on Amazon S3 for a while, the company added two options to give even more control: the ability to host a website at the root of your domain (e.g. http://mysite.com), and the ability to use redirection rules to redirect website traffic to another domain.
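    Both options are expressed through the bucket's website configuration. The boto3 sketch below is a hedged illustration (bucket and host names are placeholders, and DNS for the root domain must still point at the S3 website endpoint):

        # Sketch of the two options via the bucket website configuration
        # (boto3 syntax; bucket and domain names are placeholders).
        import boto3

        s3 = boto3.client("s3")
        s3.put_bucket_website(
            Bucket="mysite.com",   # bucket named after the root domain
            WebsiteConfiguration={
                "IndexDocument": {"Suffix": "index.html"},
                "RoutingRules": [{
                    # redirect one section of the site to another domain
                    "Condition": {"KeyPrefixEquals": "blog/"},
                    "Redirect": {"HostName": "blog.mysite.com",
                                 "ReplaceKeyPrefixWith": ""},
                }],
            },
        )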

    10:24p
    CyrusOne Marks IPO at NASDAQ Market Site

    CyrusOne Chairman Jack Cassidy and CEO Gary Wojtaszek, at center, congratulate one another after ringing the opening bell at the NASDAQ MarketSite in New York to mark the company’s initial public offering. (Photo: CyrusOne)

    Executives of CyrusOne Inc. rang the NASDAQ Stock Market Opening Bell this morning to celebrate the company’s initial public offering last Friday. CEO Gary Wojtaszek and board chairman Jack Cassidy led the observance, which was also attended by CFO Kimberly Sheehy, Chief Technology Officer Kevin Timmons, Chief Commercial Officer Tesh Durvasula, Chief Operating Officer Mike Duckett and Chief Marketing Officer Scott Brueggeman, among others.

    Shares of CyrusOne (CONE) continued to trade above their $19 IPO price, gaining 92 cents to $22.12 in today’s session, a gain of 4.3 percent.

