Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, June 18th, 2013

    11:45a
    DuPont Fabros Boosts Credit Line, Eyes Another Project

    Here’s an aerial view of the enormous ACC5 and ACC6 data centers developed by DuPont Fabros. (Photo: DuPont Fabros Technology)

    Data center developer DuPont Fabros Technology (DFT) has announced only one new project. But the company is clearly thinking about another.

    DuPont Fabros has begun construction on ACC7, a huge new data center at its Ashburn Corporate Campus in northern Virginia. In its quarterly earnings call last month, company officials said they were examining their options for additional new data centers, but were not ready to make any commitments.

    Last week DFT made changes to its revolving credit facility that provide additional funding to support another major construction project. The company exercised an “accordion” feature that boosted its borrowing power from $225 million to $400 million. It also amended the facility, adding the option to further expand the loan to as much as $600 million. There is currently $60 million of borrowings under this facility.

    “A Second Development”

    “This expanded facility provides us with additional capacity at a low cost of capital to fully fund our current ACC7 development in Ashburn, Virginia and a second development, as we grow the company,” said Mark Wetzel, Chief Financial Officer and Treasurer.

    What might that “second development” be? There are several possibilities. DuPont Fabros has discussed an expansion of its suburban Chicago campus, where it has nearly finished leasing its CH1 project in Elk Grove Village, Illinois.

    “To capture future demand of this market, we need to secure land and commence development of CH2,” DFT President and CEO Hossein Fateh said in last month’s earnings call. “Land in Elk Grove is readily available. We’ve been looking at several sites over the last few months. Our plan is to secure a parcel of land this year. We will discuss the timing of CH2 after we have secured the site.”

    The company has also discussed adding a second phase to its development in Santa Clara. The other possibility is that DFT may enter a new geographic market. The company currently has projects in northern Virginia, Chicago, Silicon Valley and New Jersey.

    “We will continue to look at new markets,” said Fateh. “However, we would not enter any new market without the significant pre-lease in hand.”

    DFT’s development plans bear watching, as the company’s projects have been known to shift market dynamics in some of the industry’s hotspots. The new ACC7 facility will be the largest project yet for the data center developer, with a whopping 41.6 megawatts of power.

    12:30p
    Modernizing vs. Revolutionizing

    For more than 15 years, Matt Henderson has been a database and systems architect specializing in Sybase and SQL Server platforms, with extensive experience in high volume transactional systems, large data warehouses and user applications in the telecommunications and insurance industries. Matt is currently an engineer at Violin Memory.

    MATT HENDERSON
    Violin Memory

    IT departments and data centers are on a constant treadmill to deliver continuous optimizations for faster and cheaper delivery, yet always at more scale. The requirements never end: Faster. Larger. Cheaper. Quicker. Features versus speed. Cost versus performance. Ease versus flexibility. Solution architecting can be the difference between long-term success and failure.

    Feeding the need for continuous optimization is a constant stream of new technologies. New technologies can be broken down into two categories: those that modernize and those that revolutionize. Modernizing is basically doing the same thing as before, just a little bit faster or a little bit easier. Revolutionizing is either eliminating or vastly changing how something is accomplished. Typically, it takes a whole new technology to be fully revolutionary.

    Taking Storage to a New Level

    In the case of storage, a new technology has somewhat recently come to market: NAND flash. Instead of storing 1’s and 0’s in magnetic sectors on spinning platters with a swing-arm read head, it stores data on silicon wafers that are dynamically addressable. While allowing for RAS (Random Access Storage) in the persistent tier, it does not come without complexity and cost.
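
    To make the random-access difference concrete, below is a minimal back-of-envelope sketch in Python. The latency figures are illustrative assumptions, not measurements of any particular product: each random read on a spinning disk pays a seek and rotational penalty before any data moves, while a flash read does not.

        # Back-of-envelope comparison of random-read latency: spinning disk vs. NAND flash.
        # All figures are illustrative assumptions, not measurements of a specific product.

        HDD_SEEK_MS = 4.0        # average arm seek (assumed)
        HDD_ROTATION_MS = 2.0    # average rotational delay at 15k RPM (half of a 4 ms revolution)
        HDD_TRANSFER_MS = 0.05   # reading a 4 KB sector once the head is positioned
        FLASH_READ_MS = 0.1      # assumed NAND page-read latency (~100 microseconds)

        def random_read_time_s(io_count: int, per_io_ms: float) -> float:
            """Total time in seconds for io_count independent random 4 KB reads."""
            return io_count * per_io_ms / 1000.0

        ios = 10_000
        hdd_s = random_read_time_s(ios, HDD_SEEK_MS + HDD_ROTATION_MS + HDD_TRANSFER_MS)
        flash_s = random_read_time_s(ios, FLASH_READ_MS)
        print(f"HDD:   {hdd_s:.1f} s for {ios} random reads")
        print(f"Flash: {flash_s:.1f} s for {ios} random reads")

    With those assumed numbers, 10,000 scattered reads take roughly a minute on disk and about a second on flash; that gap is what the rest of this piece builds on.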

    In order to sell this new technology, the industry went with the option that was fastest and easiest to bring to market and most likely to sell. Enter the SSD (Solid State Drive). SSDs are specifically designed to be a bootable hard disk drive (HDD) refresh. The designers took the 2.5” form factor of hard disk drives, removed the spinning platters and replaced them with NAND flash silicon chips.

    SSDs have the same physical size with the same physical connectors to make it easy to plug right into existing infrastructures. This allows vendors to sell to anyone in the enterprise or consumer market immediately without having to spend and risk a lot of capital building a custom flash-optimized storage device. SSDs were designed to make flash easy to sell. They were not designed to be the optimum deployment of flash. SSDs are a modernization, not a revolution.

    SSDs, while faster than HDDs, qualify as a modernization because they leave the infrastructure, architecture and management of storage entirely in place while just making the individual storage components faster. Aggregation and Segregation (A&S) has long been the standard model for deploying enterprise data over many atomic parts. Left standing are all the typical issues and challenges of this type of A&S architecture (a toy sketch after the list below illustrates the planning burden):

    • Must define each workload and its I/O profile
    • Must determine how many units to allocate to each workload
    • Must determine RAID factor for each workload
    • Must choose which unit type to deploy in each LUN group (SATA, 10k, 15k, SSD, etc.)
    • Must assess the consequences when the I/O profile changes over time
    • Must accept that segregating units strands performance
    • Must consider that legacy chassis controllers are not designed for NAND flash specific issues (wear leveling, error correction, write cliff mitigation, etc.)
    • Must consider legacy chassis or shelf engines and controllers (designed for hard disk drive speeds)
    • Must consider administrator time spent managing data locality issues
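
    As a rough illustration of that planning burden, here is a toy Python model; all workload names, drive counts, RAID choices and IOPS figures are hypothetical. Every workload needs its own LUN-group sizing decision up front, some groups end up with stranded performance while another becomes a hot spot, and a single flat pool would have covered the total demand.

        # Toy model of aggregation-and-segregation (LUN-group) planning versus a flat pool.
        # All names and numbers are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class LunGroup:
            drive_type: str   # SATA, 10k, 15k, SSD ...
            drives: int
            raid: str
            iops_per_drive: int

            @property
            def iops_budget(self) -> int:
                return self.drives * self.iops_per_drive

        # Segregated design: every workload gets its own up-front sizing decision.
        segregated = {
            "oltp":      LunGroup("15k",  24, "RAID10", 180),
            "warehouse": LunGroup("SATA", 36, "RAID6",   80),
            "logs":      LunGroup("10k",   8, "RAID5",  140),
        }
        demand = {"oltp": 5200, "warehouse": 2000, "logs": 600}  # today's IOPS; may change tomorrow

        for name, group in segregated.items():
            spare = group.iops_budget - demand[name]
            status = "HOT SPOT" if spare < 0 else f"stranded={spare}"
            print(f"{name:10s} budget={group.iops_budget:5d} demand={demand[name]:5d} {status}")

        # Flat pool: one shared budget, no per-LUN stranding and no per-workload guesswork.
        print(f"flat pool  budget={sum(g.iops_budget for g in segregated.values()):5d} "
              f"demand={sum(demand.values()):5d}")

    In this toy example the OLTP group is under-provisioned even though the aggregate hardware could serve every workload; that is the stranded performance the list above refers to.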

    If SSDs are just a modernization, what then would be a revolution? An All Flash Array (AFA) is a distributed-block, flash-as-a-chassis persistent storage appliance that deploys and integrates terabytes of flash into one device.

    Flash deployed as one integrated device allows the technology-specific issues (wear leveling, error correction and write cliff mitigation) to be handled across larger quantities of chips, and allows every I/O to go at the maximum speed of the whole storage appliance. Workloads no longer need to be segregated, a practice that strands speed in underutilized groupings while creating hot spots to manage. It also allows solutions architects to work with much larger blocks of storage. Imagine having 40-100 TB of usable storage that all works at the same speed with no tuning or advanced planning.

    The benefits:

    • Random Access Storage (RAS). A memory-like architecture where every storage address is equally accessible, at the same speed, all the time. Any workload using any data will work the same at any time. When sequential and random become the same, then any number of workloads can be active at the same time allowing for scale (parallelization) without performance degradation.
    • Distributed Block Architecture (DBA). With every I/O hitting every component every time, parallelization of flash is at its maximum, delivering the best possible speeds to every I/O, every time. Segregating units into LUNs decreases parallelization instead of maximizing it.
    • Parallelization. As CPU manufacturers have migrated from delivering faster cores to delivering more cores, the model for application processing has turned into one of massively parallel workloads. This has driven the need for storage that can be dynamically accessed with a high rate of random requests. While SSDs deliver random storage access, they do it over a small footprint. The more flash wafers engaged per I/O, the more parallel the processing of each request. Only terabyte-sized, chassis-aware appliances can deploy enough flash wafers to sustain the execution of hundreds of thousands to millions of I/Os per second with bit-level striping. (A toy striping sketch follows this list.)
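
    The striping idea behind "every I/O hitting every component" can be shown in a few lines. This is a deliberately simplified Python sketch that spreads one block across a handful of dies; a real all-flash array layers ECC, wear leveling and RAID-like protection on top, none of which is modeled here.

        # Toy byte-level striping across flash dies: every I/O touches every die,
        # so each request sees the full parallelism of the appliance.

        def stripe(block: bytes, num_dies: int) -> list[bytes]:
            """Round-robin the bytes of one block across num_dies dies."""
            lanes = [bytearray() for _ in range(num_dies)]
            for i, b in enumerate(block):
                lanes[i % num_dies].append(b)
            return [bytes(lane) for lane in lanes]

        def unstripe(lanes: list[bytes]) -> bytes:
            """Reassemble the original block from the per-die fragments."""
            out = bytearray(sum(len(lane) for lane in lanes))
            for die, lane in enumerate(lanes):
                for j, b in enumerate(lane):
                    out[j * len(lanes) + die] = b
            return bytes(out)

        block = bytes(range(64))
        lanes = stripe(block, num_dies=8)
        assert unstripe(lanes) == block
        print(f"{len(block)}-byte block spread over {len(lanes)} dies, {len(lanes[0])} bytes each")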

    Only the invention of a new technology can allow something to quickly become cheaper, faster and simpler. AFAs are that new technology, deployed in its proper form. The future of storage is simple-to-deploy, large-footprint, ultra-low-latency, chassis-aware storage appliances that use NAND flash to deliver a distributed block architecture, letting applications take advantage of random access storage.

    Storage purchases will usually have a production life of 3 to 5 years. What do you want your data residing on 3 years from now: something modern or something revolutionary?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Coprocessors Gain Momentum in HPC and Big Data

    At the International Supercomputing Conference in Leipzig, Germany, on Monday, the Intel Xeon Phi product family had a home run day: it powers the new 33.86 petaflop/s champion Milkyway-2 supercomputer, and Intel announced new Xeon Phi products. An IDC study shows why coprocessors are gaining ground in the HPC market so fast, and Cray, Supermicro and Cirrascale announced support for the new Intel Xeon Phi products.

    IDC market study shows coprocessors gaining. A 2013 study by International Data Corporation (IDC) of worldwide high performance computing (HPC) sites shows not only that they are now performing big data analysis, but that they are employing coprocessors and accelerators more than twice as much as two years ago. The study of 905 HPC sites concluded that the proportion of sites employing co-processors or accelerators in their HPC systems jumped from 28.2 percent in the 2011 version of the study to 76.9 percent in 2013. The 2013 end-user study also confirmed the IDC supply-side research finding that storage is the fastest-growing technology area at HPC sites. “The most surprising finding of the 2013 study is the substantially increased penetration of co-processors and accelerators at HPC sites around the world, along with the large proportion of sites that are applying Big Data technologies and methods to their problems,” said Earl Joseph, Program Vice President for Technical Computing at IDC.

    Cray CS300 features Intel Xeon Phi coprocessors. Cray announced that the Cray CS300 line of cluster supercomputers is now available with the new Intel Xeon Phi coprocessor x100 family. Available with air or liquid-cooled architectures, the Cray CS300 systems provide superior price/performance, energy efficiency and configuration flexibility. “Intel Xeon Phi coprocessors use the same familiar programming model as the widely used Intel Xeon processors, allowing higher aggregate performance for the most demanding applications in the most efficient and time-effective way,” said Raj Hazra, vice president and general manager of the Technical Computing Group at Intel. “Cray CS300 systems containing Intel Xeon processors and Intel Xeon Phi coprocessors deliver extreme performance for the highly parallel applications of the scientific community while maintaining the productivity benefits of familiar Intel CPU programming.”

    Supermicro supports new Intel Phi x100 products. Supermicro announced a wide range of server solutions supporting Intel’s new Xeon Phi coprocessors. Supermicro’s HPC solutions unify the latest Intel Xeon processors with the Xeon Phi coprocessors to dramatically accelerate development and performance of engineering, scientific and research applications. Supermicro solutions are available in 0.7U SuperBlade, 1U, 2U, 3U SuperServer and high-density 7U 20x MIC SuperBlade or 4U 12x MIC FatTwin designed to support the highest performance 300W Intel Xeon Phi coprocessors. “Supermicro has a proven record to adopt new technologies that accelerate the deployment of systems assisting in key engineering and research applications,” said Rajesh Hazra, Vice President and General Manager of Intel’s Technical Computing Group. “With our five new Intel Xeon Phi coprocessor product offerings, Supermicro continues to demonstrate a capability to deliver innovative, time to market solutions to the high performance computing community.”

    Cirrascale supports new Intel Phi coprocessors. Cirrascale announced its upgraded VB5400 blade server featuring expanded support to handle up to eight of the newly released Intel Xeon Phi 5120D, 3120A, 3120P, 7120P, and 7120X coprocessors. The company has upgraded the design of its VB5400 blade server containing dual proprietary 80-lane PCIe switches to facilitate increased cooling and power efficiency when placed in its BladeRack 2 Series platforms. Each Intel Xeon Phi 5120D, 3120A, 3120P, 7120P, or 7120X coprocessor provides efficient vectorization, threading, and parallel execution that drive higher performance numbers for a wide range of applications. “The latest additions to the Intel Xeon Phi coprocessor family allows us to continue to offer advanced performance for highly parallel workloads,” said David Driggers, CEO, Cirrascale Corporation. “Our customers and partners can rest easy knowing that these latest Intel Xeon Phi coprocessors work synergistically with Intel Xeon Processors and therefore allow our updated VB5400 blade server system to remain flexible, while providing robust energy efficiency and reliability.”

    2:00p
    OnApp’s Secret Weapon: Strength In Numbers

    In a world in which Amazon Web Services (AWS) and other cloud titans threaten to dominate the traditional hosting industry, OnApp believes there’s strength in numbers.

    OnApp is essentially a union organizer when it comes to cloud. The company has been helping hosting providers big and small stand up clouds, as well as offering content delivery (CDN) services through federated infrastructure. The company added distributed storage, and will be adding federated compute as well.

    “Our customer base continues to expand rapidly, using OnApp primarily to deliver public cloud capabilities,” said Kosten Metreweli, OnApp’s Chief Commercial Officer. “What we’ve seen in the past is that hosting providers come to us and say ‘I have to deliver a cloud capability’. Now we’re hearing it in the telco space more frequently. Regional telcos are beginning to look at building their own cloud services.”

    Some of the larger hosting providers that use OnApp include UK2, Peer1 Hosting, and Exabytes. A lot of these hosting providers develop functionality internally to make the platform their own. The company also helps smaller hosting providers stay competitive against the cloud giants.

    “Right now, if you’re a service provider and you’re not offering cloud, you’re in trouble,” said Metreweli. The company helps hosters launch clouds quickly and easily. Through federated infrastructure spread across many locations – including CDN, storage, and soon compute – OnApp helps these companies reach global scale quickly and easily. “Across our customer base, an OnApp service provider grows 80% through their cloud service,” said Metreweli.

    Telcos and Emerging Markets Increasingly Leveraging OnApp

    The company’s appeal is expanding to telcos, particularly in emerging markets. Metreweli cited recent wins in Eastern Europe as prime examples of this.

    “In Eastern Europe – the southeast Baltic states, Poland, Russia – we’re seeing more interest as well,” he said. “Bulgaria and so on, we’re seeing pick up. Even in Africa. They haven’t had any decent hosting companies there, but everyone’s jumping to the post-PC era. Rather than the classic lifecycle they’re jumping straight to cloud.”

    Because mobile devices are the primary way people access the internet in many of these countries, processing increasingly happens off the device and in the cloud. OnApp is an easy way to bring cloud services to countries that are catching up by going straight to cloud.

    The core of OnApp’s business is the cloud management platform, user management, billing, and user interfaces. Things are moving forward, with the company now touting over 700 service provider customers in 87 countries.

    “That customer base spans a pretty wide range from relatively small hosting providers, to large hosting providers that span wide. Telcos are coming on board a lot recently,” said Metreweli.

    Part of the reason for this is that the company has been adding features to make OnApp more enterprise friendly. Distributed storage, the CDN and recent VMware capabilities all speak to this.

    New Features Target the Enterprise

    The company continues to update the platform as well. Additional capabilities include managed Xen and KVM, and now VMware, plus an Anycast DNS service – all bundled in for free for service providers. The OnApp Cloud platform also now includes distributed storage, which helps solve performance problems for providers and increases I/O performance, according to Metreweli. Next up is OnApp’s federated compute play.

    “The OnApp platform is just one piece of the puzzle,” said Metreweli. “There’s a bunch of services you can bundle in. The CDN service allows service providers to put spare infrastructure in the marketplace.”

    The marketplace is a key feature. It gives customers the ability to buy compute in other locations or sell spare compute, making money on servers that would otherwise sit unused. The federated model means that as OnApp adds partners, the global cloud becomes stronger. It lets the little guy compete, providing capabilities that cut down the cost of getting to market.
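
    OnApp did not describe the marketplace mechanics in detail, so the following is a deliberately generic toy model in Python of the federation idea, with hypothetical provider names, regions and prices: providers list spare capacity, and a buyer's request is filled from the cheapest matching offers.

        # Generic toy model of a federated compute marketplace: providers list spare
        # capacity by region, a buyer takes the cheapest matching offers.
        # Purely illustrative; not OnApp's actual marketplace or API.
        from dataclasses import dataclass

        @dataclass
        class Offer:
            provider: str
            region: str
            spare_cores: int
            price_per_core_hour: float

        offers = [
            Offer("host-a", "eu-east", 64, 0.031),
            Offer("host-b", "eu-east", 16, 0.024),
            Offer("host-c", "africa-west", 32, 0.045),
        ]

        def buy(offers: list[Offer], region: str, cores: int) -> list[tuple[str, int, float]]:
            """Fill a request from the cheapest offers in the requested region."""
            plan = []
            for offer in sorted((o for o in offers if o.region == region),
                                key=lambda o: o.price_per_core_hour):
                if cores == 0:
                    break
                take = min(cores, offer.spare_cores)
                offer.spare_cores -= take
                cores -= take
                plan.append((offer.provider, take, offer.price_per_core_hour))
            if cores:
                raise RuntimeError(f"{cores} cores unfilled in {region}")
            return plan

        # host-b is cheaper, so it is drained first; the remainder comes from host-a.
        print(buy(offers, "eu-east", 40))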

    Tapping Unused Infrastructure

    In 2011, the company began offering customizable, cloud-based CDN for global e-commerce, fueled by its service provider customers lending unused infrastructure to the cause. The CDN now has over 170 points of presence, all provided by service providers, making the combined footprint the second- or third-largest CDN. Akamai and Limelight own 75 percent of the market, and federation helps customers compete by giving them CDN capabilities without having to build their own. It’s expensive to build a CDN; through federation, customers can pick and choose where they want POPs.

    “Capex to get started is high – we turned the model on its head,” said Metreweli. “Because of the federated model, we leverage the power of many.”

    “Cloud services are getting more and more commoditized,” said Metreweli. “If you use OnApp, how do you achieve those same revenue streams? You get to sell CDN. You get to put your spare infrastructure in there.”

    Commoditization isn’t necessarily bad; hosting providers have plenty of room to differentiate in terms of customer service, and through providers like OnApp, additional features.

    “If you do not have the time or the skillset to build a cloud, OnApp is a way to offer a full range of services to compete with Amazon,” said Metreweli.

    2:30p
    Optimal Solutions for Data Center Connect (DCC)

    The modern data center is part of the “on-demand” generation, where data, applications, and workloads need to be constantly available. Driving this are cloud computing technologies, growing demand for data center capacity and, of course, virtualization. This new generation of virtual computing and storage resources dictates that service providers (SPs) need more than a pair of (active and backup) data centers; they typically require many data centers to support their enterprise customer base. Low-latency transport enables multi-data center architectures, where all the data centers are connected with a scalable and redundant mesh of high-speed links. The data center mesh then delivers processing, storage, and networking to end-user locations for optimal application performance, even as the underlying data moves dynamically between the data centers attached to the mesh.
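
    As a rough sketch of what a redundant, low-latency mesh buys a service provider, the toy Python model below picks the lowest-latency route between two data centers and shows an alternate path taking over when a link is cut. Site names and millisecond figures are hypothetical.

        # Toy data center mesh: nodes are sites, edge weights are one-way latencies in
        # milliseconds (hypothetical). Dijkstra finds the lowest-latency route; removing
        # a link shows the redundant mesh rerouting.
        import heapq

        def lowest_latency(mesh: dict[str, dict[str, float]], src: str, dst: str):
            dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
            while heap:
                d, node = heapq.heappop(heap)
                if node == dst:
                    break
                if d > dist.get(node, float("inf")):
                    continue  # stale heap entry
                for nbr, ms in mesh[node].items():
                    nd = d + ms
                    if nd < dist.get(nbr, float("inf")):
                        dist[nbr], prev[nbr] = nd, node
                        heapq.heappush(heap, (nd, nbr))
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return round(dist[dst], 3), list(reversed(path))

        mesh = {
            "DC1": {"DC2": 1.2, "DC3": 2.8},
            "DC2": {"DC1": 1.2, "DC3": 1.5, "DC4": 3.0},
            "DC3": {"DC1": 2.8, "DC2": 1.5, "DC4": 1.1},
            "DC4": {"DC2": 3.0, "DC3": 1.1},
        }

        print(lowest_latency(mesh, "DC1", "DC4"))       # best route runs DC1 -> DC2 -> DC3 -> DC4
        mesh["DC2"].pop("DC3"); mesh["DC3"].pop("DC2")  # cut the DC2-DC3 link
        print(lowest_latency(mesh, "DC1", "DC4"))       # the mesh reroutes via DC3 directly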


    [Image source: Alcatel-Lucent]

    In this white paper, you will learn how Dense Wavelength Division Multiplexing (DWDM) transport is the leading technology to meet Data Center Connect (DCC) requirements. DWDM is the only solution that enables full network flexibility and adaptability at speeds of 100G and beyond, quick service turn-up to meet dynamic bandwidth requirements, ultra-low latency connectivity, and transport-grade reliability. Ultimately, DWDM solutions enable the highest throughput for DCC at the lowest total cost of ownership (TCO) for SPs.

    In working with intelligent, highly scalable technologies, it’s important to understand where best-in-class capabilities lie within a product. For example, the Alcatel-Lucent 1830 PSS is a best-in-class DWDM platform, including 100G coherent optics, T-ROADM, photonic OA&M, and metro to long haul reach. Scaling higher than 2 TB in a single chassis and down to a single-slot version, the 1830 PSS also supports interchangeable line cards. Download this white paper today to learn how the current data center evolution can be addressed through DWDM infrastructure: an integrated T-ROADM for flexibility, and metro and long haul 100G transport to meet scale and reach requirements. The result is an infrastructure that delivers fixed, predictable latency without traffic loss, high reliability, and inherent protocol transparency.

    6:29p
    Stop Shaving That Yak, and Make Your App Faster

    It was a packed room at Velocity 2013 for a morning session on operations from Mandi Walls of Opscode. (Photo: Colleen Miller)

    SANTA CLARA, Calif. - You should really stop shaving that yak. Your applications will thank you, and so will the folks who must keep them running in the future.

    That was one of the messages from the kickoff sessions at the Velocity 2013 conference at the Santa Clara Convention Center. Velocity is an O’Reilly event focused on the fast-growing DevOps movement, and the challenges of making really big sites run really fast.

    All three of the morning sessions were jam-packed, with standing-room-only crowds in each room. It was a working crowd, as nearly every member of the audience had a laptop open in front of them. So did the speakers, like Twitter’s Marcel Duran, who shared live code demonstrations of how Twitter monitors and manages performance. Duran said the company will open source an internal performance tracking tool called Peregrine (because Twitter uses bird names for projects, and peregrine falcons are the fastest-flying birds).

    Ren & Stimpy: Enemies of Productivity

    But the discussion focused on culture as well as code. Take “yak shaving,” a geek-speak practice highlighted in a presentation by Mandi Walls, a senior consultant at Opscode. Yak shaving is a term that originated in an episode of “Ren & Stimpy” and was later popularized in writings by MIT researchers and author Seth Godin. It involves layering complexity and activity atop a process – such as when a programming task requires that you first complete a series of precursor tasks that may have little relevance to the end goal.

    In IT operations, yak shaving is the end result of customization and organizational silos, but also a legacy of problematic coding behaviors. Walls said programmers often take pride in creating processes that may be impressive, but difficult for others to replicate. Sometimes “it seems easier to do it yourself” and customize based upon tribal knowledge: critical expertise that is not widely shared.

    That’s one of the challenges being addressed by DevOps, which combines many of the roles of systems administrators and developers. The movement was popularized at large cloud builders with dynamic server environments that required regular updating – which in turn placed a premium on standards and repeatable processes, so that applications could be supported by a team without breaking every time they’re updated.

    Walls emphasized the need for work to be well documented, repeatable, reliable and “easy to do right.” Opscode is one of the leaders in this effort through its backing of Chef, an open source framework using repeatable code – organized as “recipes” and “cookbooks” – to automate the configuration and management process for virtual servers.
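
    Chef recipes are written in a Ruby DSL; the snippet below is not Chef, just a minimal Python sketch of the underlying idea Walls was describing: declare the desired state and converge to it idempotently, so the same run is safe to repeat on one server or a hundred. The paths and settings are hypothetical.

        # Not Chef: a toy illustration of idempotent, declarative configuration.
        # Each "resource" checks current state and only acts when it differs from the
        # desired state, so re-running the recipe never breaks anything.
        import os

        def ensure_directory(path: str, mode: int = 0o755) -> None:
            """Create the directory if missing, fix its mode if wrong, else do nothing."""
            if not os.path.isdir(path):
                os.makedirs(path, mode=mode)
                print(f"created {path}")
            elif (os.stat(path).st_mode & 0o777) != mode:
                os.chmod(path, mode)
                print(f"fixed mode on {path}")
            else:
                print(f"{path} already correct")

        def ensure_file_content(path: str, content: str) -> None:
            """Rewrite the file only if its content differs from the desired content."""
            current = open(path).read() if os.path.exists(path) else None
            if current != content:
                with open(path, "w") as f:
                    f.write(content)
                print(f"updated {path}")
            else:
                print(f"{path} already correct")

        # A tiny "recipe": the same calls produce the same end state every time they run.
        ensure_directory("/tmp/demo-app/conf")
        ensure_file_content("/tmp/demo-app/conf/app.cfg", "workers=4\nlog_level=info\n")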

    This morning’s program kicks off three days of sessions at Velocity, which will feature keynotes and sessions from Google, Akamai, Etsy, Yahoo, Paypal and Twitter, among others. You can follow the conference on Twitter with the hashtag #velocityconf or watch the live feed of keynote sessions Wednesday and Thursday.

