Data Center Knowledge | News and analysis for the data center industry
 

Thursday, May 28th, 2015

    12:00p
    Digital Realty is Not New to Its New CFO

    Andrew Power may be Digital Realty’s new chief financial officer, but Digital certainly isn’t new to him. Power has been deeply involved with Digital’s finances throughout the San Francisco-based real estate investment trust’s existence, starting with its IPO in 2004.

    He was on the team of lead underwriters that advised the data center giant a little over a decade ago and has since been involved in nearly every capital raise the company has done, Power said. In April, Digital's leadership asked him to join the company to replace William Stein, its long-time CFO, who recently became its chief executive.

    Power met Stein in 2004, when Stein took the CFO seat as the company went public, and it was Stein who asked him late last year to join the contest to succeed him in the role. A long-time investment banker whose most recent roles were with Bank of America Merrill Lynch and, prior to that, Citigroup, Power said he "had no real drive to leave banking" when Stein approached him.

    But what he knew from working closely with Digital for all these years made the opportunity attractive. He would be joining a team he had a lot of familiarity with, led by a new CEO with a confident vision of the future.

    Power's arrival is part of a new chapter in the REIT's history. Following the departure of founding CEO Mike Foust in March 2014, the company has assembled a new leadership team and charted a new course: diversifying its model by striking partnerships with cloud and connectivity service providers to deliver joint solutions, more actively promoting its retail colocation business, and pruning its massive real estate portfolio to shed non-core properties.

    Other leadership changes included the creation of two entirely new C-level roles for the company: chief operating officer and chief information officer. The COO role was filled by Jarrett Appleby, previously a long-time COO at competitor CoreSite Realty, and Michael Henry, former CIO at Rovi, joined Digital as its new CIO.

    One of the biggest aspects of Digital’s business that gave Power confidence about joining the company was its global scale. In addition to facilities across the U.S., it has data centers in Europe and Asia Pacific. Many data center customers scale their businesses globally, and “this is one of the few areas where [Digital] has tremendous competitive advantage,” he said.

    Power indicated that more international expansion is in the works. “We’re just at the tip of the iceberg in terms of our global reach today,” he said, adding that Asia Pacific would be a particular focus. Earlier this month the company announced a $150 million Singapore data center, its second in the island country.

    Having spent much of his investment-banking career on real estate, Power also found data centers attractive as a unique real estate asset. This is one of the few real estate businesses, he explained, that can layer high-value services on top of its footprint and continuously grow the value of the underlying assets.

    Between selling off non-core properties and taking on partners that can provide services to customers leasing space and buying power at Digital's facilities, maximizing the value of the existing portfolio certainly appears to be key to the company's current strategic direction.

    Now, with its new leadership team complete, Digital is ready to execute.

    3:30p
    Trust and Innovation in Communication Platforms

    Olivier Thierry is the Chief Marketing Officer for Zimbra.

    Recent mega-breaches are forcing IT leaders to rethink their implementations of communication and collaboration solutions. With each new incident, it becomes clearer that the basic communication channels we trust every day, such as email and online chat, contain vulnerabilities that can expose sensitive information to malicious actors who wreak havoc on businesses and individuals.

    In a business climate reeling from these developments, IT leaders recognize that it’s no longer acceptable to blindly trust software. IT must first verify that a communication platform is trustworthy, then validate that any innovation in technology meets established guidelines for security and privacy.

    Under more pressure than ever to scrutinize technology for potential security weaknesses, IT leaders are looking to open source as a solution. Transparency gives open source communication platforms the potential for the flexibility, trust, and innovation necessary to achieve previously out-of-reach security standards. Here are three reasons communication platforms need open source to thrive:

    Open Source Allows an Innovation Ecosystem to Flourish

    When a company’s communication system is rooted in open source software, it can tap into a vast development pool provided by the software’s community members. People can create custom widgets that add incremental value to the platforms employees use. Employees can also help shape the design and future of these tools as community members who are encouraged to contribute and vet one another’s work. This practice allows new features to take root in applications that are grounded in business needs.

    Customization Brings More Value to Each Business

    Given the tough regulatory compliance landscape, businesses can leverage open source to create custom workflows. They can also use existing platform extensions created by the open source community in order to reduce the complexities of audits and compliance (e.g., email archiving and electronic discovery). Additional workflows can tie the communication platform into business applications for CRM or ERP. For example, tying in chat with an email platform is ideal in product development, support and management systems in which real-time communication is preferred.
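
    To make that chat-plus-email idea concrete, here is a minimal sketch, using only Python's standard library, of a compliance-style bridge that copies chat messages into an email archive. The webhook payload shape, the archive address, and the local mail relay are hypothetical illustrations, not any particular platform's API:

        import smtplib
        from email.message import EmailMessage

        ARCHIVE_ADDR = "compliance-archive@example.com"  # hypothetical archive mailbox

        def archive_chat_message(payload: dict) -> None:
            """Forward one chat message (hypothetical webhook payload) to the email archive."""
            msg = EmailMessage()
            msg["From"] = payload["sender"]
            msg["To"] = ARCHIVE_ADDR
            msg["Subject"] = f"[chat-archive] {payload['room']}"
            msg.set_content(payload["text"])
            with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
                smtp.send_message(msg)

        archive_chat_message({"sender": "alice@example.com",
                              "room": "product-dev",
                              "text": "Build 42 is green; shipping tonight."})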

    Establishment of Trust

    Whether it is an open standard, open API or entirely open-sourced software package, the reliance on community participation and transparency builds more secure systems. All software has bugs and vulnerabilities; however, open source offers a larger qualified group that reviews the code. Whether the vulnerabilities are intentional (backdoors and skeleton keys) or incidental, having visibility into the remediation steps will help re-establish trust in the software.

    Trust in the application you use to communicate professionally and personally is paramount. And, as technologists look to increase the usability of encryption, end-to-end encryption will become more attainable and elevate the protection of data. This is but one facet that we, as vendors, must tackle to elevate trust across the digital world.
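
    As a glimpse of what end-to-end encryption looks like in code, here is a minimal sketch using the open source PyNaCl library (Python bindings for libsodium); the message content is invented for illustration:

        from nacl.public import PrivateKey, Box

        # Each party generates a keypair; only public keys are ever exchanged.
        alice_key = PrivateKey.generate()
        bob_key = PrivateKey.generate()

        # Alice encrypts for Bob: nothing in between, including the server, can read it.
        sending_box = Box(alice_key, bob_key.public_key)
        ciphertext = sending_box.encrypt(b"Q3 numbers attached; keep internal.")

        # Bob decrypts with his private key and Alice's public key.
        receiving_box = Box(bob_key, alice_key.public_key)
        assert receiving_box.decrypt(ciphertext) == b"Q3 numbers attached; keep internal."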

    The creativity of the open source community can help software quickly adapt to industry and technology changes, amplifying the pace of innovation and shortening time to resolution. For example, the community can improve remediation times with a temporary fix. Openness also improves software flexibility, allowing for the customization and extensibility needed to create unique solutions that meet business needs while maintaining security and privacy. Trust and innovation will play leading roles in the next phase of communication, placing open source software at the forefront.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:13p
    Gartner: AWS Pulls Further Ahead of Others in IaaS Cloud Market

    Gartner has released its annual Infrastructure-as-a-Service Magic Quadrant, reporting that Amazon Web Services has once again built on the seemingly insurmountable lead it held last year, while many vendors trended down and to the left. In Gartner MQ parlance, however, no quadrant is bad.

    This is the fifth consecutive year AWS has taken the leader spot in the evaluation of the public infrastructure cloud services market. Now that the company has finally broken out its cloud revenue, there are hard numbers backing this up.

    Gartner increased its estimate of Amazon's share of the IaaS cloud market: AWS now has ten times more cloud capacity than the other 14 providers in the report combined, up from five times last year, according to Gartner. And despite increased requirements in Gartner's evaluation, AWS improved its position along both axes of the Magic Quadrant.

    All vendors except Microsoft and Amazon fell below the midpoint on ability to execute. Microsoft showed healthy progress, and VMware moved from "Niche" to "Visionary" while increasing its ability to execute.

    EMC's recent acquisition of Virtustream will likely affect its position in next year's report.


    The Gartner Magic Quadrant for IaaS 2015 shows last year’s cluster is breaking up, and AWS still in the clear lead (image: Gartner)

    The biggest drops were for IBM, CSC, and Verizon, while HP left the report entirely. Rumors swirled earlier this year that HP was abandoning the public cloud; it is more accurate to say the company has de-emphasized public cloud as a standalone part of its strategy, shifting focus to private cloud and to cloud as a complement to a wider portfolio. The same rumors circled Rackspace last year, and it, too, denied it was getting out of the IaaS market.

    The reason for these rumors, and for many vendors' drops in the latest report, is the report's narrow scope: it examines only public cloud compute, not storage, managed services, or any other services built atop cloud. In that regard, AWS is a clear leader, and Microsoft is doing well considering it was a later entrant.

    Google's performance and position in this segment of the cloud market were disappointing. Google improved its ability to execute, but the company that often fills out the "Big Three" is not yet considered a leader by Gartner.

    CenturyLink believes it is one vendor better positioned this year than last. While it didn't move much, the company believes it is keeping a positive relative pace. "The cluster last year started to tease apart," said Richard Seroter, vice president of product at CenturyLink. "Two clear camps are forming in public cloud."

    According to him, there are two types of IaaS providers: those that treat it as a centerpiece, such as AWS, Google, and Microsoft – companies with scale – and those that treat IaaS as a component of an IT portfolio strategy.

    6:27p
    SRC Claims Unprecedented Server Performance With New Architecture

    As Moore’s law celebrates its 50th anniversary, SRC Computers is looking to deliver an orders-of-magnitude increase in server performance over x86 architectures with the introduction of its Saturn 1 server built using a radically different computing architecture.

    An optional cartridge for HP's Moonshot chassis, Saturn 1 is a dynamically reconfigurable server that SRC claims delivers faster performance than traditional microprocessor designs. Packing 42 server cartridges per 4U chassis, it pairs a user FPGA (Field-Programmable Gate Array) with a system FPGA to offer a new way of overcoming the computational limitations of current architectures. Instead of using only a fraction of the chip's resources, as a microprocessor does, the FPGA can use 100 percent of them, with direct access to memory, SRC says.

    SRC was founded in 1996 by Seymour Cray, the late engineer who was a fundamental figure in the development of supercomputers, and Intel board member D. James Guzy. The company has been quietly working on defense and intelligence solutions for government agencies.

    “Existing microprocessor architectures cannot keep up with the demands of hyperscale and cloud computing,” said Jon Huppenthal, SRC president and CEO. “Software developers use every trick in the book to squeeze performance out of hyperscale applications, but they cannot overcome the limitations of multi-purpose processor designs. The Saturn 1 Server changes the limits of traditional microprocessor architecture and lets programmers use the code they already have on a radically different architecture.”

    SRC is not the first vendor to design a reconfigurable server around FPGAs, arrays of programmable logic that can be configured to maximize server performance for specific workloads.

    Intel has used FPGAs to tailor processors for Amazon and for Oracle, among others. Microsoft has designed servers for its cloud services that support FPGAs, and Juniper sells super-fast network switches that also use the technology.

    As SRC puts it, the application becomes the processor. By allowing custom programming of the FPGA with languages that programmers already know and use, the Saturn 1 makes possible a software-defined processor. Each server contains an efficient Intel processor for general-purpose software and other tasks.

    SRC claims that hyperscale web services companies have ported their applications to Saturn 1 servers in three days. With the reconfigurability of the server, companies could instruct it to behave one way during the day and switch at night to an entirely different configuration.

    Besides the performance increase from the new computing architecture, SRC notes that the Saturn 1 saves space and power, drawing about 2kW per 4U chassis; at nine chassis per rack, that comes to roughly 20kW.
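
    A quick back-of-the-envelope check of those density figures, using only the numbers quoted above (the 20kW rack figure appears to be a round-up of 9 x 2kW):

        CARTRIDGES_PER_CHASSIS = 42   # Saturn 1 cartridges per 4U Moonshot chassis
        KW_PER_CHASSIS = 2.0          # approximate draw per chassis, per SRC
        CHASSIS_PER_RACK = 9          # per SRC's rack layout

        servers_per_rack = CARTRIDGES_PER_CHASSIS * CHASSIS_PER_RACK  # 378 cartridges
        kw_per_rack = KW_PER_CHASSIS * CHASSIS_PER_RACK               # 18.0 (~20kW quoted)
        watts_per_server = kw_per_rack * 1000 / servers_per_rack      # ~47.6W per cartridge

        print(servers_per_rack, kw_per_rack, round(watts_per_server, 1))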

    The Saturn 1 server will be available for HP Moonshot at $19,995 through resellers, including Parallel Computing Solutions.

    7:15p
    QCT Launches Converged Storage Solutions for Data Centers

    QCT (Quanta Cloud Technology) introduced a line of new storage products, including solutions for scale-out cloud storage and high-performance enterprise applications. The Taiwanese hardware supplier launched three new converged storage platforms at the Computex conference in Taipei this week, each engineered for a specific data center workload.

    All three solutions feature Intel Xeon E5 v3 processors and various OCP (Open Compute Project) networking options. QCT is very active with Open Compute (the Facebook-led open source hardware design effort) and has many products in its portfolio that are Open Compute certified. It also works with a number of other providers – most recently partnering with VCE to use Quanta servers for VxRacks.

    For software-defined storage (SDS), OpenStack, or enterprise applications, QCT introduced the QuantaGrid D51PH-1ULH, a 1U storage server that holds up to 12 hard drives plus four SSDs. QCT calls it a hybrid platform whose nodes can serve as building blocks for scaling out both capacity and compute for SDS needs. The platform also comes with a flexible OCP LAN mezzanine card to support a variety of networking options.

    Going up a notch, QCT describes the QuantaPlex T21P-4U as a high-density, high-compute storage server that supports up to 78 hard drives in a 4U form factor. Targeted at the needs of cloud services, archiving, and backup, this platform features hot-swap disks and multiple OCP LAN mezzanine cards.

    Rounding out the line, QCT launched the QuantaPlex T21SR-2U, a new-generation 2U two-node cluster-in-a-box server. The model connects two clustered nodes via a PCIe or 10GbE interconnect and shares up to 24 disk drives in the 2U chassis, according to the company. QCT says it also features a backup battery unit for data vaulting, preserving data in case of a power failure.

    7:25p
    Lenovo Enters the SAN Fray With Homegrown Models

    By Charlene O'Hanlon

    I’ll say it once, and I’ll say it again: Storage ain’t sexy. But data’s got to be stored somewhere, and these days even SMBs need enterprise-level storage.

    Enter Lenovo, which has introduced two homegrown SAN units designed to offer a lot of value at their price point, including a jumping-off point to SSD storage, said Denny Lane, director of Product Marketing, Enterprise Business Group at Lenovo.

    “The rate of growth of storage and increasing demands of compliance and regulatory issues … we’re seeing that grow exponentially,” he said. With that in mind, most SMBs need more than the internal storage or network-attached storage (NAS) they can afford.

    “Our latest offerings we believe are at an aggressive price point that’s simple to deploy and provides enterprise-level feature sets that work well from the branch office to the data center,” Lane said.

    The Lenovo Storage S2200 and S3200 arrays offer single or dual controllers in 2U, 12- or 24-drive configurations. The S3200 in particular supports a hybrid configuration, enabling a transition to an all-flash storage environment over time.

    "We're seeing a lot of hype about all-flash arrays, so we think this will be a stepping stone for customers moving to flash technology for the first time," Lane said.

    Both units include the Lenovo SAN Manager, which offers features including:

    • Data tiering based on data importance/use
    • Thin provisioning
    • SSD read caching
    • Rapid RAID rebuild
    • Data snapshots
    • Data pooling

    “This is a very high performance, general purpose-type of array,” Lane said. “We see a huge amount of potential for partners with storage—right now about 45 percent of sales in the entry space is led by server sales. We think it’s ideal that we can team [servers and storage] at an aggressive price point, especially with more customers replacing their storage and servers as they move beyond fibre.”

    Lane noted that the Lenovo Storage S2200 and S3200, which made their debut this week at Lenovo Tech World in Beijing, are built on homegrown technology, with partners contributing to some of the development. "It's not part of the IBM [x86 server] acquisitions," he said. "We continue to do business with IBM, and we think this is a nice complement to what we do with EMC and IBM."

    Partners that see the value in the natural pairing of servers and storage will be happy to have another option for their customers, especially at the lower end of the customer spectrum. Lenovo is good at providing sensible, if not sexy, options for its partners and their customers to keep their data safe and sound.


    11:01p
    Panduit’s SmartZone

    Panduit's SmartZone DCIM solutions encompass a comprehensive suite of instrumented hardware, modularized DCIM software, and turnkey services. Based in Tinley Park, IL, the company focuses its solutions on improving operational and performance management through automated device discovery, analytics, visualization, and actionable intelligence that enables capacity, change, and event management.

    Built around the four pillars of power, space, cooling and connectivity, Panduit solutions offer integrated, accurate and real-time data that leads to greater DCIM adoption across IT infrastructure, data center operations, and facilities management.

    With more than 60 years in business, Panduit has watched the market's decade-long evolution from physical hardware into DCIM software. Since shipping its original DCIM solution in 2008, the company has grown its focused DCIM effort from a team of about 20 people into a division of more than 150.

    "Our DCIM focus on legacy brownfield as well as new data center builds for the mid-market, large enterprise, and colocation providers, via a suite of hardware, software, and services that includes the operational building blocks of power, space, cooling, and connectivity management, distinguishes Panduit from other DCIM players," said Dave Dunnigan, integrated marketing communication manager for Panduit's data center business. "Not to mention that Panduit can also fully provide the physical infrastructure and instrumentation to optimize customers' data center needs. Panduit is not simply selling DCIM as a product but as an advanced way of looking at data center power utilization and efficiency."

    Panduit’s SmartZone platform offers additional DCIM capabilities for identifying, managing, and forecasting data center capacity to ensure maximum utilization of resources. For example, a Stranded Capacity function identifies the data center capacity that can’t be used by IT loads due to a lack of resources related to floor and rack space, power, and cooling. It then creates work orders to reclaim the stranded capacity. Similarly, a Forecasting function provides the ability to identify and forecast remaining capacity across all physical resources.
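
    To illustrate the stranded-capacity idea (a toy model, not Panduit's actual algorithm): the deployable load in a rack is bounded by its scarcest resource, and whatever the other resources have left over is stranded. A minimal sketch with invented per-rack numbers:

        def stranded_capacity(space_u, power_kw, cooling_kw,
                              u_per_server=1, kw_per_server=0.5):
            """Return (deployable servers, stranded rack units) for one rack; toy model."""
            by_space = space_u // u_per_server
            by_power = int(power_kw / kw_per_server)
            by_cooling = int(cooling_kw / kw_per_server)
            deployable = min(by_space, by_power, by_cooling)  # scarcest resource wins
            # Rack space left once the limiting resource is exhausted is stranded.
            stranded_u = space_u - deployable * u_per_server
            return deployable, stranded_u

        # 42U of space, but only 10kW of power and 12kW of cooling available:
        servers, stranded_u = stranded_capacity(space_u=42, power_kw=10, cooling_kw=12)
        print(servers, stranded_u)  # 20 servers fit; 22U of rack space is stranded
        # A DCIM work order might then target adding power to reclaim those 22U.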

    Since that first release in March 2008, Panduit's DCIM platform has come to serve a wide variety of organizations spanning several verticals. Some of the clients in its Reference Customer Program include:

    • Yahoo
    • RagingWire Data Centers
    • CyrusOne
    • Equinix Singapore
    • Hertz

    11:15p
    Emerson Network Power – The Trellis Platform

    Based in Huntsville, AL, Emerson Network Power has been a long-term player in this market, with more than 20 years of experience designing and implementing DCIM software for global clients.

    Today its flagship DCIM product, the Trellis™ Platform, is in its third version. Built from the ground up rather than assembled from existing products, the platform is scalable and modular, letting organizations begin their DCIM journeys at whatever stage is appropriate for their business and scale up as necessary. Furthermore, Emerson Network Power uses its own design and manufacturing teams and has the resources and services to further develop the DCIM product for clients anywhere in the world.

    DCIM is an evolving technology, and Emerson Network Power has been developing this kind of platform for more than 20 years, including work at Aperture and Avocent before their acquisition by Emerson. The Trellis Platform, developed and exclusively owned by Emerson Network Power, has been available since May 2010.

    Emerson focuses on the direct alignment between data center solutions and business needs. The company leverages existing infrastructure deployments to create a DCIM platform that integrates direct data center intelligence. As a result, Emerson currently works with a variety of global clients spanning several industries. They include:

    • AT&T
    • Time Warner Cable
    • Harris County 911
    • Merck
    • UNUM
    • Vanderbilt University
    • Cambridge University
    • Fujitsu
    • Kingfisher

    “There are a few fundamental drivers for virtually any decision related to the data center: availability, performance and efficiency, cost control, and speed,” said Steve Hassell, president of data center solutions at Emerson Network Power. “Whether considering a new data center, new equipment or simply revised operational practices, questions related to those drivers must be asked and answered. There is one technology that touches on all of those drivers, and that’s data center infrastructure management. DCIM can optimize data center performance and maximize resources and assets in ways that no other single solution can match. As an infrastructure provider with decades of experience in the data center, Emerson Network Power understands this better than anyone, and developed a solution—the Trellis Platform—that reflects that deep understanding.”


    11:42p
    Emerson Makes DCIM Integration Easier With RESTful APIs

    Emerson Network Power has made it easier to integrate Trellis, its data center infrastructure management (DCIM) software suite, with other applications by adding RESTful APIs (application programming interfaces). REST APIs are popular with developers who build web services because they are lightweight and simple to consume compared with older alternatives.

    Using the APIs, Trellis can be integrated with systems such as IT management and accounting software or mobile platforms. They can also be used to build custom workflows that include DCIM.
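
    As a sketch of what such an integration might look like, the snippet below polls a DCIM REST endpoint with the widely used Python requests library and files a ticket when a device nears its power limit. The URLs, paths, and field names are hypothetical placeholders, not Emerson's published Trellis API:

        import requests

        BASE_URL = "https://trellis.example.com/api/v1"  # hypothetical endpoint
        session = requests.Session()
        session.headers.update({"Authorization": "Bearer <token>",
                                "Accept": "application/json"})

        # Pull current power readings; push any device over threshold to a ticketing system.
        resp = session.get(f"{BASE_URL}/devices/power-readings", timeout=10)
        resp.raise_for_status()
        for device in resp.json():  # hypothetical payload shape
            if device["watts"] > device["rated_watts"] * 0.9:
                session.post("https://tickets.example.com/api/issues",  # hypothetical
                             json={"title": f"{device['name']} near power limit",
                                   "detail": f"{device['watts']}W of {device['rated_watts']}W"})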

    Integration with as many different systems used by data center operators as possible is crucial to using DCIM software effectively, but not all DCIM suppliers provide RESTful APIs for their tools.

    One traditional approach among vendors has been to simply partner with one another and integrate specific solutions in specific ways. While that provides the assurance of knowing the vendors behind the products have integrated and tested them together, it does not leave much room for customization by the user's own developers.

    Schneider Electric, Emerson's biggest competitor in the space, has only recently started to transition web services for its StruxureWare Data Center Operation software to a RESTful API from the heavier-weight alternative, SOAP. Another major player, iTRACS (owned by CommScope), has devised an Open Exchange Framework for piping data between its DCIM software and other systems instead of offering a common API.

    Raritan and Device42 are examples of DCIM vendors that do provide RESTful APIs. Others that offer open, but not necessarily RESTful, APIs include Geist and Baselayer (formerly IO).

    "The value of DCIM depends on its ability to aggregate large amounts of real-time data from all areas of the data center," Jennifer Koppy, research director at IDC, said in a statement. "APIs that enable the interoperation of functions between many different sources of data and management solutions are essential to a successful DCIM implementation."

    Emerson has made other enhancements to Trellis as well. In the change-planning area of functionality, users can now see their data center capacity at different points in time, which can help them understand the rate at which their resource utilization grows and make more informed capacity planning decisions.
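
    The planning value of those point-in-time capacity views comes from simple trend math: a handful of historical readings yields a growth rate and a projection of when a resource runs out. A toy sketch of that reasoning, with invented figures:

        # Monthly rack-power readings in kW for one room (hypothetical data).
        readings = [120.0, 126.0, 133.0, 139.0, 146.0]
        capacity_kw = 200.0

        # Average month-over-month growth from consecutive snapshots.
        growth = sum(b - a for a, b in zip(readings, readings[1:])) / (len(readings) - 1)

        months_left = (capacity_kw - readings[-1]) / growth
        print(f"~{growth:.1f} kW/month growth; capacity exhausted in ~{months_left:.0f} months")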

    Finally, the company has added more canned reports, including reports for connections, status, and power consumption.

