Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, February 18th, 2015

    1:00p
    MapR Extends Real-Time Capabilities to Distributed Data

    Enabling real-time analytics, or shortening the time from data ingestion to action in the enterprise, is a big push in the world of databases and distributed data systems like Hadoop. The latest updates to MapR’s distribution of Hadoop focus on real-time capabilities, specifically extending them across geographically dispersed server clusters.

    Enhancements in version 4.1 include MapR-DB table replication for multiple-cluster support and real-time disaster recovery, a C API that lets developers build Hadoop applications, and a new POSIX client that boosts performance and security in real-time data applications through compression and parallel access.

    According to MapR, the enhancements let data architectures support “as-it-happens” operations through automated processes.

    MapR-DB table replication enables multiple distributed data clusters across geographically dispersed data centers. The active-active cross-data-center capability also makes it easier to deploy globally. Support for clusters replicated in multiple geographies means operational data can be stored and processed close to users or devices, while all live data is immediately replicated to a central analytics cluster in real time.

    “Businesses continue to push the boundaries of real-time analytics in Hadoop but can be challenged by a geographically-dispersed environment,” said Nik Rouda, senior analyst, Enterprise Strategy Group. “With the new product release from MapR, data is no longer tied to one site and can instead have global relevance. Live data updates across multiple clusters can be shared and analyzed immediately with the speed and reliability needed for enterprise operations.”

    1:00p
    VMware Vets Unveil Software Defined Storage Startup, Raise $34M

    Springpath, a startup founded by VMware veterans that has been building a software defined storage platform not tied to any specific hardware, came out of stealth today, announcing $34 million in funding from New Enterprise Associates, Redpoint Ventures, and Sequoia Capital. The Sunnyvale, California-based company operated in stealth mode as Storvisor.

    Springpath says its storage and data management software provides reliable and scalable storage services. The basic premise of its approach is to break the reliance on the capital-intensive hardware that this type of software and functionality is often coupled with.

    The software-only platform runs on standard servers and decouples data from the application layer in what the company said was a simple-to-implement way that is platform-, hardware-, and application-agnostic. It is sold on a subscription basis.

    The company dubbed its vision “Independent Infrastructure.” At the core of the platform is what it calls HALO architecture, which stands for Hardware Agnostic Log-structured Objects. This architecture is constructed from the ground up, using different pieces of intellectual property.

    Storage is still using technology from the 90s, and enterprises are riddled with expensive, inflexible silos as a result, Ravi Parthasarathy, vice president of product management at Springpath, said. He believes storage arrays will be history, replaced by software defined storage with commodity servers underneath.

    “Hardware silos create inefficient hardware usage,” he said. “On the storage side, each vendor offers their own way to manage data. We want to provide a single data platform that’s 100 percent software. Now customers have the freedom to buy whatever servers they want, add those servers as they want and the data platform scales with it. We will provide all the enterprise-grade features like snapshots and clones, creating disaster recovery policies.”

    Springpath CEO and co-founder Mallik Mahalingam, along with fellow co-founder Krishna Yadappanavar, are VMware alumni. Both were instrumental in pioneering technologies like Virtual Extensible LAN (VXLAN), a popular network virtualization system, and Virtual Machine File System (VMFS), the most widely deployed file system in VMware environments.

    Mahalingam spent about 10 years as principal engineer at VMware, where he worked on a multitude of projects, including vSphere Networking and storage IO. He spent four years as a researcher at HP Labs before joining VMware. Yadappanavar’s tenure at VMware lasted about eight years. Both left the virtualization giant in 2012, according to their LinkedIn profiles, around the same time Storvisor was founded.

    “Modern data centers require a versatile and elastic data platform software that runs on a common hardware infrastructure based on standard servers and supports the data management needs of virtualized, containerized, big data, and other emerging environments,” Mahalingam said.

    The company has key technology partnerships in place with VMware, OpenStack, and Docker. It’s working with vendors like Cisco, Dell, HP, and Lenovo. Springpath also announced a distribution agreement with Tech Data, which will give solution providers access to servers pre-loaded with its software.

    The company ran a beta with a few dozen customers across a variety of verticals, for use cases such as Virtual Desktop Infrastructure, test and development, infrastructure consolidation, databases, and remote and branch offices.

    “The data center’s move to a server-based ecosystem is well on its way, and Springpath’s software platform enables enterprises to support their diverse application environments utilizing standard servers and appears to fit well into this macro trend,” Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, said in a statement. “We were impressed with their solution’s ability to deliver the value of enterprise features and performance on a variety of top-brand servers, offering enterprises choices without compromising on their expectations.”

    4:30p
    Lean Data Center Operations: Calculating the Resource Allocation Index

    Rajat Ghosh is the Chief Executive Officer for AdeptDC, an early-stage startup focusing on optimal cooling allocation for data centers.

    Data center managers often cringe at their data centers’ electricity bills and would appreciate a better understanding of the relationship between their data centers’ demand and supply profiles. A new metric—the resource allocation index (RAI)—was proposed in line with the philosophy of designing data centers as an integrated value chain rather than a collection of compute and infrastructure systems.

    RAI is defined as the ratio of normalized resource supply to normalized resource demand. Normalized resource supply is the normalized electricity used by the data center, i.e., the ratio of electricity used to the maximum electricity available.

    Normalized resource demand, on the other hand, is the normalized user count, i.e., the ratio of the current number of users to the maximum number of users the data center can handle.
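
    For readers who want to see the arithmetic spelled out, below is a minimal Python sketch of the basic RAI calculation exactly as defined above. The function name and the sample figures are illustrative only, not drawn from any real facility.

        def resource_allocation_index(power_used_kw, power_available_kw,
                                      current_users, max_users):
            # Normalized resource supply: electricity used / maximum electricity available.
            normalized_supply = power_used_kw / power_available_kw
            # Normalized resource demand: current users / maximum users the data center can handle.
            normalized_demand = current_users / max_users
            # RAI is the ratio of normalized supply to normalized demand.
            return normalized_supply / normalized_demand

        # Illustrative numbers: a facility drawing 600 kW of a possible 1,000 kW
        # while serving 3,000 of a maximum 10,000 users.
        print(resource_allocation_index(600.0, 1000.0, 3000.0, 10000.0))  # 0.6 / 0.3 = 2.0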

    Critical Remarks on RAI

    Although the metric has received critical acclaim, there are a few concerns regarding RAI calculation. The most prominent are:

    • Since RAI is numerically a ratio of ratios, it is hard to make a business case out of it.
    • Most data centers use power usage effectiveness (PUE) as a performance metric. An additional metric means more metering effort.
    • The computation of RAI is not straightforward.

    Calculating RAI in a Typical Data Center

    To address these concerns raised by readers, the following modifications to the RAI calculation are suggested:

    [Equation image: RAI regrouped so that the design-determined ratio of maximum user count to maximum available electricity appears as a separate factor.]

    The ratio within the second parenthesis is effectively a constant, because it is defined by the data center designers in terms of IT hardware/software configurations and power source ratings. At this point, it is useful to map the user count onto a related engineering variable such as the data center’s computing power, since user count is often an impractical variable to assess.

    In contrast, the maximum computing power and real-time computational utilization are relatively straightforward parameters to monitor by tapping into the IT management software applications in data centers. The modified RAI is defined as:

    [Equation image: the modified RAI, with user count replaced by computing utilization measured against maximum computing power.]

    The computing utilization or IT utilization is directly related to electricity consumption by IT devices. IT power is equal to the sum of static and dynamic powers. The dynamic power component is directly proportional to the computing utilization.

    Therefore, IT power can be modeled as:

    [Equation image: IT power modeled as static power plus a dynamic component proportional to computing utilization.]

    Substituting this back into the RAI definition:

    [Equation image: the modified RAI expressed in terms of PUE, static IT power, and computing utilization.]

    This suggests RAI is directly related to PUE; for a given facility, the only dynamic factor is the power consumed by computing operations.
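
    To make the chain of substitutions concrete, here is a small Python sketch of one possible reading of the modified RAI, assuming the IT power model described above (static power plus a dynamic component proportional to computing utilization) and the standard definition of PUE as total facility power over IT power. The symbols and numbers are illustrative assumptions, not measurements from any particular data center.

        def modified_rai(pue, static_it_kw, dynamic_it_max_kw,
                         utilization, facility_max_kw):
            # IT power model: static component plus a utilization-proportional dynamic component.
            it_power = static_it_kw + dynamic_it_max_kw * utilization
            # Recover total facility power from PUE (PUE = facility power / IT power).
            facility_power = pue * it_power
            # Modified RAI: normalized supply (facility power / capacity)
            # over normalized demand (computing utilization).
            return (facility_power / facility_max_kw) / utilization

        # Illustrative values: PUE of 1.6, 200 kW static IT load, up to 400 kW of
        # utilization-driven IT load, 40 percent utilization, 1,200 kW facility capacity.
        print(round(modified_rai(1.6, 200.0, 400.0, 0.40, 1200.0), 2))  # prints 1.2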

    RAI Metering Using Infrastructure for PUE Metering

    Figure 1: Schematic for RAI metering. Most of the metering infrastructure is similar to PUE computation. That means RAI computation does not require additional capital investment except for the highly resolved IT power monitoring systems.

    The symbols used in Figure 1 are defined in the following table:

    [Table image: definitions of the symbols used in Figure 1.]

    Figure 1 shows the metering architecture for RAI monitoring. Most data centers already have this architecture in place for PUE computation.

    [Equation image: PUE computed as total facility power divided by IT power.]

    Clearly, PUE computation does not require resolution of static and dynamic components of the IT power. However, those data are critical for RAI computation.

    [Equation image: RAI computed from the metered quantities, which requires resolving the static and dynamic components of IT power.]

    RAI computation evidently requires measurement infrastructure for IT power. Table 1 (below) lists the various IT power measurement strategies prevalent in data centers. As shown, these infrastructure elements already exist in a typical data center, which means RAI computation does not warrant additional capital expenditure.

    Table 1: IT power measurement strategies prevalent in data centers


    Key Takeaways

    • RAI conveys significant business sense by showing how a data center, as a system, responds to incoming computing demand. This is a considerably improved way of metering a data center as a value chain, unlike PUE, which focuses only on the supply side of a data center.
    • RAI can be calculated with the infrastructure existing in a typical data center.

    Acknowledgement

    The author would like to acknowledge the contributions of Mr. Mark Monroe, Chief Technology Officer and VP at DLB Associates.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:30p
    The Practical Science of Data Center Capacity Planning

    Your data center is growing, you have more users connecting, and your business continues to evolve. As the modern organization places ever more demands on the data center model, administrators must be aware of their resources and their utilization. A big part of that is planning out capacity for both today and the future.

    As the need to balance current and future IT requirements against resource consumption becomes more urgent, the data center industry increasingly views capacity planning as a critical component of planning a new build or retrofit. Data center capacity planning can be a complex undertaking with far-reaching strategic and operational implications. DCD Intelligence has compiled this white paper to share industry insights and lessons on the practical steps needed to develop a successful power and capacity planning strategy.

    These insights are based on a series of 15 in-depth interviews conducted with market-leading data center owners and operators, including Cisco WebEx, Defense.net, Edmunds, Gigamon, IBM Global Technology Services, ING Bank, and Scale Matrix. Between them, the organizations interviewed account for just under 1 million staff worldwide and annual revenues of approximately $350 billion. They own or operate approximately 400 data centers and labs worldwide.

    As the paper outlines, there are several common factors that may impact capacity planning. These include:

    • Industry-specific rack density and optimization requirements
    • Power
    • Cooling
    • Business continuity and disaster recovery planning
    • Sustainability and ‘green’ performance
    • Budgeting

    Applying the practical science of capacity planning ties all of this together into a robust data center model. Key emerging industry trends toward Data Center Infrastructure Management (DCIM) and Software Defined Data Centers (SDDC) demonstrate a continuing need to balance IT and communications against facilities management. Still, it’s clear that the activities and perspectives of IT and facilities management professionals continue to differ. However, the provision of intelligent and remotely managed PDUs, as well as the development of DCIM and SDDC tools, is building important bridges between the two silos.

    Capacity planning brings together all the key resource and output factors that constitute a data center’s reason for being commissioned and its means of fulfilling that purpose. As critical resources become more expensive or scarce, the ability to plan for future capacity requirements becomes more important.

    From a research perspective, capacity planning is a vital process for any company with its own lab or data center, a view reinforced by the experience and insight of the executives interviewed.

    Download this white paper today to learn how, based on current trends, the power draw of IT and communications equipment will continue to rise, creating an exponential demand for the power needed to run and cool it, while the cost of power increases and its ready availability is threatened in some locations. Find out what it takes to develop a capacity plan which directly aligns with your data center needs and organizational goals.

    5:59p
    Microsoft to Spend $200M on Wyoming Data Center Expansion

    Microsoft is expanding its data center in Cheyenne, Wyoming, investing more than $200 million, with some estimates going as high as $250 million. The expansion was announced in a news conference by Governor Matt Mead. The project is expected to take up to two years.

    The expansion in Laramie County will boost Microsoft’s cloud computing capacity, according to local business development officials. It will bring Microsoft’s total investment in the site to $750 million, will double permanent jobs to 50, and employ several hundred people during construction work.

    Wyoming officials are excited because of what the Microsoft data center expansion means beyond direct jobs. Big data center projects attract and spur technology ecosystems.

    The Wyoming Business Council and state officials approved a $5 million grant for Microsoft. The grant will pay for supporting infrastructure like water towers and sewer lines.

    Besides the big production data center, Cheyenne is home to some of Microsoft’s most forward-thinking data center activity. The company launched an experimental zero-carbon, biogas-powered data center there in 2014, which uses fuel cells to convert gas that is a byproduct of waste treatment into energy. It serves as a research center for biogas and fuel cell technology, two alternatives to drawing from the power grid.

    Microsoft has made significant data center investments in the state, with another $274 million expansion announced in April of last year. The company’s investment in Wyoming has put the state on the data center map.

    Mead expressed interest at the conference in attracting more technology business to Wyoming.

    The Microsoft data center expansion is expected to add 120 acres to the North Range Business Park. The new data center will sit on the west edge, adjacent to Microsoft’s existing infrastructure.

    Microsoft and other technology giants have been making big investments in alternative energy sources.

    The most recent examples of renewable energy deals include Apple’s 130-megawatt solar deal in California and Amazon Web Services’ 150-megawatt wind power purchase agreement in Indiana.

    Some data center providers have also made steps to clean up their energy mix. Interxion recently discussed the colocation provider’s role in renewable energy with Data Center Knowledge. Wholesale provider Digital Realty announced it would buy renewable energy credits for new customers anywhere around the world to make their energy consumption 100 percent carbon-neutral for one year free of charge.

    A recent survey by Green House Data, also based in Wyoming, found, however, that while respondents generally agreed that the operational cost savings of a green data center make business sense, most IT departments don’t really look at energy efficiency or sustainability when comparing data center service providers.

    6:09p
    DigitalOcean Expands to Silicon Valley Data Center by Telx

    Infrastructure-as-a-Service provider DigitalOcean is expanding its infrastructure through Telx data centers. The company has deployed servers in Telx’s SCL2 data center in Santa Clara, California.

    That location will complement a Telx San Francisco deployment, with DigitalOcean housing non-network infrastructure in SCL2. The new location was chosen for its proximity to DigitalOcean’s peering and network hubs within the San Francisco Bay Area. Telx also houses DigitalOcean in New York and New Jersey facilities.

    New York-based DigitalOcean targets the developer cloud, emphasizing simplicity and speed of deploying virtual servers, which it calls “droplets.” The message has resonated, and the company has been growing exponentially.

    Netcraft called DigitalOcean the third-largest cloud based on web-facing growth it observed last year. DigitalOcean raised $37.2 million in an Andreessen Horowitz-led Series A funding round in March 2014.

    DigitalOcean’s other area location, Telx’s San Francisco data center, SFR1, is an important network exchange location for the Bay Area. SCL2 will tether to the peering and network hubs located in San Francisco.

    “DigitalOcean has a proximity and latency advantage with Telx’s NYC2 and SFR1 data centers as well as scale advantages in the NJR3 and SCL2 facilities,” Mitch Wainer, chief marketing officer and founder of DigitalOcean said in a statement. “None of this is possible without national solutions and facilities focused on capability sets, and this is why we decided to expand our relationship with Telx.”

    The expansion also benefits Telx’s Cloud Xchange portfolio, the company’s cloud ecosystem play. Telx has worked to boost both private cloud connectivity and hybrid cloud connectivity in a bid to attract enterprise customers.

    Internationally, DigitalOcean opened a UK data center with Equinix in July last year and also opened in Singapore.

    Developer-focused IaaS provider and DigitalOcean competitor Linode recently announced international expansion. The success of both DigitalOcean and Linode has proven that giants like Amazon Web Services haven’t completely cornered the IaaS market.

    6:28p
    SUSE Launches Ceph-Powered Software Defined Storage

    After launching a beta version late last year at SUSECon 2014, SUSE has released SUSE Enterprise Storage (SES), which it describes as a self-managing, self-healing, distributed software-based storage solution for enterprise customers. It leverages commodity off-the-shelf servers and disk drives and is powered by the Ceph open source storage platform.

    SUSE is best known for its popular enterprise distribution of Linux. It also has a distribution of the popular open source cloud architecture OpenStack. SES will be available as an option with SUSE OpenStack Cloud or as a stand-alone storage solution.

    Billed as petabyte-scale storage, Ceph is an extremely popular open source scale-out block, object, and file storage system for big data workloads. Almost a year ago, Inktank, the company behind the open source Ceph storage system, was acquired by Red Hat for $175 million.

    SUSE notes that its SES offering is best suited for object, archival and bulk storage, with features including cache tiering, thin provisioning, copy-on-write cloning, and erasure coding.
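
    Because the product is powered by Ceph, applications would typically talk to it through standard Ceph client interfaces such as the librados bindings. The short Python sketch below stores and retrieves one object via python-rados; the config path, pool name, and object name are illustrative assumptions, and the cluster and pool must already exist and be reachable.

        import rados  # python-rados bindings shipped with Ceph

        # Connect using the cluster configuration file and the keyring it references.
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()

        # Write and read back a small object in an existing pool (names are illustrative).
        ioctx = cluster.open_ioctx('archive-pool')
        ioctx.write_full('backup-2015-02-18', b'example payload')
        print(ioctx.read('backup-2015-02-18'))

        ioctx.close()
        cluster.shutdown()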

    SES is software defined storage aimed at public and private cloud environments with large and demanding big data needs.

    SUSE is also taking on entrenched proprietary storage systems, claiming that its offering saves 50 percent over the average capacity-optimized mid-range storage array. The company says its software defined storage alternative will cost $0.01 per gigabyte per month.

    Software defined storage running on commodity servers is a hot space. Also today, a software defined storage startup called Springpath came out of stealth with a $34 million funding round. The company was founded by VMware veterans.

    IBM has also pushed the software defined storage angle recently, releasing its Spectrum Accelerate offering based on its XIV storage system, but also available to run on SoftLayer Infrastructure-as-a-Service.

    “The emerging combination of reliable open source software and commodity hardware has rocked the storage market, to the benefit of storage customers,” Laura DuBois, program vice president at IDC, said in a statement. “Lower costs, deployment flexibility, and the all-important availability of data are benefits delivered by solutions like SUSE Enterprise Storage.”

    7:00p
    Microsoft Adopts International Standard for Cloud Privacy


    This article originally appeared at The WHIR

    Microsoft has adopted ISO/IEC 27018, an international standard for cloud privacy. The standard is meant to assure customers by restricting the processing and handling of personally identifiable information, and establishing transparent data transfer and deletion policies.

    The company announced in a blog post on Monday that several services’ compliance with the standard had been independently verified by the British Standards Institute (BSI). The services the standard has been applied to are Azure, Office 365, and Dynamics CRM Online. Microsoft Intune was also verified as compliant by Bureau Veritas.

    The standard was created by the International Organization for Standardization (ISO) in 2014 to apply to all cloud vendors. Microsoft says it is the first major provider to adopt ISO/IEC 27018, which is the world’s first international standard of its kind.

    The ISO released a set of standards related to cloud computing definitions, along with reference architecture in October.

    The Microsoft blog post points out that, in addition to increasing customer control and making data center storage practices more transparent to consumers, the standard should assure them that their data will not be used for advertising, and that they will be informed of government access to personal information unless disclosure is illegal.

    “All of these commitments are even more important in the current legal environment, in which enterprise customers increasingly have their own privacy compliance obligations,” wrote Microsoft General Counsel and Executive VP, Legal and Corporate Affairs Brad Smith. “We’re optimistic that ISO 27018 can serve as a template for regulators and customers alike as they seek to ensure strong privacy protection across geographies and vertical industry sectors.”

    The privacy of personally identifiable information is a concern for all companies that host it. It is of particular concern for cloud providers as new, more stringent standards, like the EU regulations Microsoft announced it was meeting last April, come into effect.

    High-profile data breaches have spotlighted poor security and privacy practices, allowing companies to leverage privacy fears by being publicly proactive in complying with regulations and standards.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-adopts-international-standard-cloud-privacy

    8:47p
    Lenovo Developing 64-Bit ARM Server Powered by Cavium

    Chinese hardware giant Lenovo is developing an ARM server powered by Cavium’s 64-bit Thunder System on Chip. The project is part of a computing energy efficiency research effort funded by a U.K. government organization.

    ARM Holdings, a U.K. processor company, licenses its chip architecture to chip makers. ARM processors power most of the world’s smartphones, but because they consume relatively little energy, numerous companies have been working on adapting the architecture for servers.

    Cavium is one of the more recent entrants to the ARM server scene, where other major players include Applied Micro, AMD, and Texas Instruments. There was also an Austin-based company called Calxeda, but it went bankrupt in 2013, and its intellectual property was gobbled up by a Taiwanese gaming company late last year.

    Lenovo has been expanding its server play by leaps and bounds. Last year, the company bought IBM’s entire commodity x86 server business, instantly becoming the largest server vendor in China and one of the largest worldwide.

    The prototype ARM server the company is developing will be part of its NeXtScale line, which consists of products for high-performance computing. A single NeXtScale enclosure can hold up to 12 ARM servers or pack 1,152 cores in the space of six standard rack units.

    Efficiency of high performance computing systems is the focus of Lenovo’s joint research project with the Hartree Centre, an organization formed by the U.K. government’s Science and Technology Facilities Council. The project’s aim is to explore performance of scale-out and scale-up computing systems given a defined power budget.

    “This is a fantastic opportunity to meet the challenge of developing a computationally powerful and energy-efficient platform based on the 64-bit ARM v8 microprocessor,” Neil Morgan, program manager for energy efficient computing at the Hartree Centre, said in a statement. “The Hartree Centre will be actively developing a robust software ecosystem encompassing compilers, linkers, numerical libraries and tools – all of which are fundamental to the adoption of these types of technologies.”

