Data Center Knowledge | News and analysis for the data center industry
Wednesday, April 2nd, 2014
11:30a |
Network News: New Technology from Extreme Networks, Broadcom. Extreme Networks launches new hardware and software networking solutions, Broadcom announces an expansion of its multi-core communications processor family, and Numergy selects the Nuage Networks SDN solution to expand its cloud infrastructure.
Extreme Networks launches new networking solutions. At Interop 2014 in Las Vegas this week, Extreme Networks (EXTR) launched new hardware and software products for simple, fast and smart networking. NetSight 6.0 provides a single tool with a single database for centralized management and rapid problem identification and resolution across the network. Purview application analytics gathers application intelligence at Layer 7 directly from the network. A new SDN 2.0 provides an open, standards-based architecture that combines automated network and application provisioning and orchestration through open integration with northbound and southbound services. A new BlackDiamond X8 100GbE blade delivers 100G wire speed with 20Tbps of non-blocking throughput on a four-port blade. Finally, new IdentiFi 3800 series 802.11ac access points provide 1.75Gbps total throughput with automatic configuration and RF optimization. “The network is a critical component to enable increasingly mobile and social enterprises,” said Bob Laliberte, Senior Analyst, Enterprise Strategy Group. “Accordingly, IT should strive to ensure an optimal experience for all its users all of the time. With its new architecture, Extreme Networks has taken a big step in helping organizations meet that goal by combining network visibility, analytics and policy over high performance wired and wireless infrastructures. These network solutions are designed to deliver the requisite intelligence, performance and operational simplicity to handle demanding enterprise environments.”
Broadcom announces multi-core communications processor. At Interop 2014 Broadcom (BRCM) announced the expansion of its XLP II multi-core communications processor family with the XLP500 Series. Featuring 32 NXCPUs and 80 Gbps performance, the XLP500 Series delivers up to 4X the per-core performance of competing processors. That performance is achieved with an innovative quad-issue, quad-threaded superscalar architecture with out-of-order execution. Support for Broadcom’s Open NFV platform and seamless interoperability with Broadcom’s StrataXGS Switch Series streamline the development process, optimize power requirements, reduce hardware costs and improve time-to-market. “For service providers and data center operators looking to manage dynamically changing workloads and massive data requirements, the XLP500 Series provides the processing performance and flexibility required to deploy new services and cost-effectively scale the network,” said Chris O’Reilly, Senior Director of Product Marketing, Processors & Wireless Infrastructure, Broadcom. “With the addition of the XLP500 Series, Broadcom now offers the industry’s broadest end-to-end portfolio of 28 nanometer multi-core communications processors, spanning 4 NXCPUs to 640 NXCPUs.”
Numergy selects Nuage Networks SDN for cloud infrastructure. Alcatel-Lucent’s (ALU) Nuage Networks announced that its SDN platform and Alcatel-Lucent 7750 service routers were selected by Numergy for deployment within and across its data centers. Numergy opted to implement the Nuage Networks Virtualized Services Platform (VSP) and 7850 Virtualized Services Gateway (VSG), as well as the Alcatel-Lucent 7750 Service Router, to manage and automate its data center networks and make them more programmable and operationally efficient. The Nuage Networks SDN solution provides data center network virtualization and automation that transforms data centers into flexible environments that instantaneously establish the network services required in a cloud infrastructure. “We are pleased to implement the Nuage Networks product suite in our cloud infrastructure. The Nuage Networks SDN technology allows us to address key performance and compatibility requirements for an open environment. This will allow us to virtualize our infrastructure and to offer our customers cloud services in a more dynamic way,” said Erik Beauvalot, Chief Operating Officer of Numergy. | 12:00p |
Software Defined Power in Virtualized Application Environments Clemens Pfeiffer is the CTO of Power Assure and is a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies.
Server virtualization has many benefits in terms of reliability, manageability, scalability and efficiency, which has prompted most organizations to virtualize their applications. While the virtualization trend was still ongoing, organizations quickly learned that the abstraction of hardware through virtualization allows much more flexibility when performing routine maintenance tasks such as hardware upgrades and reconfigurations for applications that are now expected to run 24x7x365.
Virtualization also allowed applications to be migrated from one set of hardware to another without any downtime, and it allowed operators to add (and remove) capacity dynamically, making it easy to adjust capacity based on ever-changing demand. As a result, application outages caused by IT hardware failures dropped significantly.
Complete Data Center Failures
However, people responsible for application service levels and guarantees are still nervous about the prospect of a complete data center failure. Therefore almost all mission critical application environments require some sort of backup/business continuity location. Such a location is usually geographically dispersed, set up as a fully redundant data center, and configured for either hot or cold backup and failover. As a result, virtualization was extended to network and storage components to allow dynamic configuration changes of all IT components required by an application.
VMware calls this the Software Defined Data Center (SDDC), considered by many to be the final step in the evolution of this virtualization trend, in which compute, storage and network resources are all virtualized, abstracted away and delivered as a service that any application can use. However, it is primarily an abstraction within a single data center, so automated failover and recovery are still required and, unfortunately, are very often plagued by problems.
Additionally, as the latest application outage statistics confirm, one of the most common causes of application outages is now power, whether internal data center power delivery issues or problems on the utility grid. While everyone believes that a 2N or even 2N+2 Tier 4 data center has enough redundancy to avoid power-related outages, any mistake, operator error or power distribution failure that interrupts power to IT equipment for only a second can lead to long application recovery times. So while the data center itself might never technically exceed five minutes of downtime per year (assuming 99.999% availability), the same might not be true for the applications themselves.
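The five-minute figure follows directly from the availability target: multiply the fraction of unavailability by the minutes in a year. A minimal Python sketch of that arithmetic (the figures are generic, not specific to any facility):

```python
# Downtime budget implied by an availability target (illustrative arithmetic).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year for a given availability level."""
    return MINUTES_PER_YEAR * (1.0 - availability)

if __name__ == "__main__":
    for availability in (0.999, 0.9999, 0.99999):
        budget = downtime_budget_minutes(availability)
        print(f"{availability:.3%} availability -> {budget:.2f} minutes/year")
```

At 99.999% the budget works out to roughly 5.26 minutes per year; the author's point is that an application can easily blow through that budget during recovery even if the facility never does.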
What is Software Defined Power?
This is where the need for Software Defined Power (SDP) comes from. It requires creating a layer of abstraction that isolates the application from local power dependencies and maximizes application uptime by leveraging existing failover, virtualization and load-balancing capabilities to shift application capacity across data centers in ways that always utilize the power with the highest availability, dependability and quality.
SDP is implemented as an extension to existing virtualization and software-defined environments. It collects real-time power and IT usage data to understand demand, capacity and the variability of each. Creating automated runbooks for each application to adjust capacity up or down allows data center operators to achieve maximum energy and cost savings by running only the capacity the applications need to support ongoing demand.
Shutting down idle/backup servers and freeing them up as a pool of spare resources allows them to be shared by any application. As primary and backup applications are virtualized, the hardware allocation for the primary application can be adjusted based on real-time demand, while the hardware allocation for the backup site can be kept to the minimum needed to support system management agents and automated patch management. This usually leaves between 40 and 50 percent of hardware available for dynamic allocation, assuming a primary and fully populated backup site configuration. Obviously, any runbook would leverage DRS and other VMware or hypervisor-specific functionality to optimize hardware use and minimize power and operating cost. Last but not least, current customer implementations show that such adjustments, even when made across data centers, can be completed in less than five minutes, fully automated.
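To make the runbook idea concrete, here is a minimal, purely illustrative Python sketch of the kind of control loop such automation might run. All of the data sources and actions below (get_site_power_quality, get_app_demand, set_active_capacity, migrate_load) are hypothetical stubs for this example, not Power Assure's product API.

```python
# Illustrative Software Defined Power runbook cycle (hypothetical stubs only).
SITES = ["primary-dc", "backup-dc"]

def get_site_power_quality(site: str) -> float:
    """Return a 0..1 score for power availability/quality at a site (stub)."""
    return 1.0  # in practice: utility feed, UPS and PDU telemetry

def get_app_demand(app: str) -> int:
    """Return the number of servers the application currently needs (stub)."""
    return 10

def set_active_capacity(site: str, app: str, servers: int) -> None:
    """Power servers on/off and resize the hypervisor resource pool (stub)."""
    print(f"{site}: setting {app} capacity to {servers} servers")

def migrate_load(app: str, src: str, dst: str) -> None:
    """Shift the application's workload between data centers (stub)."""
    print(f"moving {app} from {src} to {dst}")

def runbook_cycle(app: str) -> None:
    demand = get_app_demand(app)
    best = max(SITES, key=get_site_power_quality)
    for site in SITES:
        # Run full capacity where power is best; keep a minimal footprint elsewhere.
        set_active_capacity(site, app, demand if site == best else 1)
    if best != SITES[0]:
        migrate_load(app, SITES[0], best)

if __name__ == "__main__":
    runbook_cycle("web-tier")  # a real controller would repeat this on a schedule
```

In a real deployment the stubs would be backed by DCIM power telemetry and hypervisor functionality such as vSphere/DRS, which is exactly the existing tooling the author says a runbook would leverage.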
Software Defined Power integrates with all common virtualized environments and leverages their capabilities to distribute application load on the basis of application QoS requirements, power cost and availability.
Software Defined Power is an emerging solution for adjusting hardware capacity, migrating applications and maintaining pools of spare capacity that can be allocated dynamically to applications as demand changes. It enhances investments in IT, facilities and application monitoring, DCIM, virtualization and system management by making it possible to avoid power-related issues by proactively moving applications to other environments or locations as a matter of routine, achieving the ultimate application reliability.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 12:10p |
Is Your Data Center Certified for an Extinction Level Event? Is your data center ready for an ELE? For those of you who don’t remember the 1998 film “Deep Impact,” that’s the acronym for an “Extinction Level Event.” In a blog post, wholesale data center provider Digital Realty announced that it was pursuing Extinction Level Event Certification, which would ensure customer uptime through a meteor strike.
Participants would receive real-time satellite data on threats from “near earth objects.” If an imminent meteor strike is detected, Digital Realty will deploy a proprietary Kinetic Deflection System to harmlessly deflect the object away from the data center.
“After the ELE impact, ELE Certified data centers will effectively go into ‘hibernation,’” Digital Realty said in a blog post. “The process entails sealing all exterior access points and protecting the facility from elemental hazards such as 100-year nuclear winters, global tsunamis and any potential zombie apocalypse if presented with a Night of the Comet scenario. The ELE Certification demonstrates Digital Realty’s commitment to our clients and realizing their potential and will be available starting on April Fool’s Day 2014.”
April Fool’s Day indeed. Digital Realty wasn’t the only data center company to have some April Fool’s Day fun. Mindful of the proliferation of “aaS” offerings, Green House Data announced the launch of Babysitting as a Service.
 Green House Data has a new service offering. No word if there’s any Red Bull in those sippy cups.
The company said the announcement comes alongside the creation of a new full-time Network Operations Center position: Red Bull Stockperson.
“The Red Bull thing, that was partly in response to our new BBYSaaS,” said Shawn Mills, CEO, in a blog post. “For one thing, we’re going to need to stay awake with all these kids around. For another, we need to keep the kids out of the Red Bull. They go crazy on that stuff.” | 1:00p |
Intel Commits $100 Million to Developing Smart Devices in China At the Intel Developer Forum in China Tuesday, Intel CEO Brian Krzanich outlined a Smart Device Innovation Center in Shenzhen and a $100 million investment fund to accelerate innovation in smart devices, including tablets, smartphones, PCs, 2-in-1s, wearables, the Internet of Things and other related technologies in China.
During his keynote, Krzanich discussed how Intel and the Shenzhen technology ecosystem can re-ignite growth, both locally and globally, and deliver differentiated computing products and experiences spanning multiple market segments, operating systems and price points. Also launched at the event were the Intel Gateway Solutions for the Internet of Things (IoT), based on Intel Quark and Atom processors, and Intel demonstrated for the first time SoFIA, its first integrated mobile SoC platform for entry and value smartphones and tablets.
“The China technology ecosystem will be instrumental in the transformation of computing,” said Krzanich. “To help drive global innovation, Intel will stay focused on delivering leadership products and technologies that not only allow our partners to rapidly innovate, but also deliver on the promise that ‘if it computes, it does it best with Intel’ – from the edge device to the cloud, and everything in between.”
Intel will establish the Intel Smart Device Innovation Center in Shenzhen to accelerate the delivery of Intel technology-based devices to the China market and beyond. Further accelerating this effort, the Intel Capital China Smart Device Innovation Fund will focus on tablets, smartphones, wearables, IoT and other related technologies in China.
Intel’s 2014 LTE platform, the Intel XMM 7260, meets China Mobile’s five-mode requirement today, including support for the TD-LTE and TD-SCDMA protocols required in China. Krzanich demonstrated the 7260 by conducting the first public, live call using China Mobile’s TD-LTE network, and spoke to strong ecosystem demand for a competitive LTE alternative. Intel is also developing its SoFIA family of integrated mobile SoCs for entry and value smartphones and tablets.
Growing the Internet of Things
At the Developer Forum Krzanich also announced availability of the Intel Gateway Solutions for IoT, an integrated solution based on Intel Quark and Atom processors, in addition to an Intel Galileo-based development platform. The first platforms integrate Wind River and McAfee software to help accelerate time to market and will be available from the ecosystem this quarter.
With the Intel Edison set for release later this summer, Krzanich noted that Intel is expanding Edison into a family of development boards that will address a broader range of market segments and customer needs. The first Intel Edison board will now use Intel’s 22nm Silvermont microarchitecture in a dual-core Intel Atom SoC, and will add increased I/O capabilities and software support as well as a new, simplified industrial design. | 1:30p |
Data Center Optimization: Intelligent Hot and Cold Air Containment Your data center now distributes your platform all over the world and provides robust connectivity options. At this point, in a mature platform scenario, you have solid controls, great visibility into your infrastructure and are even planning an expansion.
But what are you doing around optimization and efficiency? Are you creating optimal environment control mechanisms to ensure that your data center continues to run well? Basically, are you deploying intelligence around hot and cold air containment within your data center?
Data center containment strategies can greatly improve the predictability and efficiency of traditional data center cooling systems. In fact, The Green Grid views an air management strategy as “the starting point when implementing a data center energy savings program.” The reality, however, is a bit different: most existing data centers are constrained to certain types of containment strategies. This white paper from Schneider Electric and APC examines the intelligent containment methods available today, investigates constraints and user preferences, provides guidelines for determining the appropriate containment approach, and emphasizes the importance of ongoing air management maintenance.
Remember, hot air and cold air containment are the two high-level methods for an air management strategy, and both provide significant energy savings over traditional, uncontained configurations. So, with regard to your existing data center: why do we need to decide between hot air and cold air containment? Why not just contain both and run the rest of the room on building air? Containing both air streams provides no significant benefit except in cases where IT cabinets are located in a harsh environment (e.g. a manufacturing floor); containing a single air stream is enough to prevent hot and cold air mixing. So, which type of air containment is a better choice for existing data centers? This question has generated a lot of discussion among manufacturers, consultants and end users. In reality, the best containment type will largely depend on the constraints of the facility.
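As a purely hypothetical illustration of how facility constraints might drive that decision (the rules and categories below are assumptions for demonstration, not guidance taken from the white paper), a simple selection helper could look like this:

```python
# Hypothetical containment-selection helper; thresholds and rules are
# illustrative assumptions, not taken from the Schneider Electric / APC paper.
from dataclasses import dataclass

@dataclass
class Facility:
    has_raised_floor: bool    # cold air delivered through a floor plenum
    has_drop_ceiling: bool    # return plenum available above the racks
    row_based_cooling: bool   # in-row cooling units deployed
    harsh_environment: bool   # e.g. IT cabinets on a manufacturing floor

def suggest_containment(f: Facility) -> str:
    if f.harsh_environment:
        return "Enclosed rack-based containment (RACS), isolating both air streams"
    if f.row_based_cooling:
        return "Row-cooled hot aisle containment (HACS)"
    if f.has_drop_ceiling:
        return "Ducted hot aisle containment (HACS) returning through the ceiling plenum"
    if f.has_raised_floor:
        return "Cold aisle containment (CACS) over the existing raised-floor supply"
    return "Assess air distribution further before choosing a containment type"

if __name__ == "__main__":
    print(suggest_containment(Facility(True, False, False, False)))
```

The white paper walks through the real decision criteria in far more detail, which is the point of the assessment topics listed below.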
Download this white paper today to learn about intelligent hot and cold air containment methods. Critical consideration topics include:
- Complete facility assessment (ceiling height, plenum depth, air distribution, and more)
- A look at all possible containment solutions (CACS, Ducted HACS, Row-cooled HACS, RACS, and others)
- The pros and cons of the six air containment methods outlined
- Selecting containment according to common air distribution methods
- Selecting the optimal containment hardware
Remember, data center containment strategies can provide great benefits for data centers. Hot air and cold air containment are the two approaches to containment deployment. The best approach for a specific deployment should be determined by assessing the facility constraints, reviewing all potential solutions, and selecting the right containment hardware. As your data center platform continues to grow, it will be even more critical to utilize intelligent environmental control methods, including optimal hot and cold aisle containment deployments for your existing data center platform. | 3:30p |
Interop News: Marvell Launches New Packet Processors At Interop 2014 this week in Las Vegas Marvell launches a new series of Prestera DX packet processors, NETGEAR builds Broadcom StrataConnect SoC into its ProSAFE gigabit smart switch solutions, and Alcatel-Lucent enhances its Unified Access wired and wireless portfolio.
Marvell launches new packet processors. Marvell (MRVL) announced a new series of Prestera DX packet processors that enable secure and power efficient solutions for a new generation of access networks. With 10GbE-enabled servers driving adoption, the DX3300 and DX3200 families are designed to simplify and secure these converged access deployments. The DX8216 packet processor is a highly optimized solution for 10GbE server connectivity and features the Alaska-X family of 10GBase-T PHYs, while advanced eBridging technology enables a host of virtualization capabilities as well as OpenFlow 1.4 support. Armed with dual-core on-chip ARM CPUs, the DX3300 and DX3200 families are capable of meeting the demanding access needs of carrier, industrial and campus networks. “The proliferation of Bring Your Own Device (BYOD) and the drive towards common security and policy management of the converged network is pushing new network architecture models. Marvell’s new 28nm packet processor product suite further supports our continued commitment to designing and delivering higher service density per watt. Building on the company’s long heritage in networking, this release brings to market platforms and innovative solutions for 10 Gigabit and Gigabit networking,” said Ramesh Sivakolundu, vice president, CSIBU at Marvell. “The Prestera DX portfolio of devices, coupled with a total solutions approach, continues to offer a variety of switching solutions to enable secure, feature-rich and cost-sensitive access in campus and SMB networks across the globe.”
NETGEAR selects Broadcom for Gigabit smart switches. Broadcom (BRCM) announced that NETGEAR has selected Broadcom’s StrataConnect system-on-chip (SoC) to power the new generation of NETGEAR ProSAFE Gigabit Smart Switch solutions. The StrataConnect SoCs feature an ARM-based CPU, Layer 2 and Layer 3 switching, 10G SerDes and 16 Gigabit PHYs, scalable table sizes and advanced security features. “Cloud services and access to advanced IP-based applications are straining the SMB network in terms of bandwidth, data security and connectivity,” said Ram Velaga, Broadcom Senior Vice President and General Manager, Network Switch. “Our highly integrated StrataConnect SoCs are designed to address these challenges, combining high performance components and state of the art security features to enable leading equipment providers such as NETGEAR to deliver the advanced solutions required for a more agile and secure SMB network.” Broadcom also announced a 5G WiFi (802.11ac) system-on-chip that delivers pinpoint indoor positioning technology. The BCM43462 SoC, featuring Broadcom’s new AccuLocate technology, provides sub-meter accuracy on physical locations, enabling retailers and public venue operators to deliver more personalized experiences to consumers.
Alcatel-Lucent enhances wired and wireless portfolio. At Interop, Alcatel-Lucent (ALU) unveiled enhancements to its Unified Access wired and wireless portfolio and network management system. The new OmniSwitch 6860 (OS6860) family, with embedded analytics and programmability, features an innovative ASIC and coprocessor providing wire-rate deep packet inspection (DPI) and policy enforcement right at the edge of the network. For wireless, new OmniAccess access points better address the growing adoption of Gigabit wireless (802.11ac) and diverse deployment needs with outdoor APs, low-cost APs and Instant APs. Alcatel-Lucent redesigned the OmniVista 2500 Network Management System to better support network analytics and provide the foundation for future consumption models as more enterprises move toward cloud-based services. It features multi-tenancy and a distributed architecture, open northbound/southbound APIs, a new web-based user interface, a unified policy engine, and centralized network analytics. “As enterprise IT evangelizes on getting more operational efficiency from their campus network infrastructure deployments, a unified view of wired and wireless network access provides an interesting value proposition, especially when it comes to application discovery, visibility and analytics. Aligned with one of IDC’s predictions for 2014, Alcatel-Lucent Enterprise’s approach of enabling wired and wireless access via SDN and increased programmability will lead to an improved application and user experience that IT will want to leverage.” | 3:31p |
Top 10 Data Center Stories – March 2014 The data center industry keeps expanding and growing, and this month’s top posts reflect that. Equinix is expanding massively in NoVA, Switch goes global and Cisco puts out a $1 billion plan to build cloud data centers. Here’s our monthly wrap up of the top ten stories, ordered by page views. Enjoy!
Equinix Plans 1 Million Square Foot Data Center Campus In Ashburn – March 17 – Equinix plans to build a massive data center campus in Ashburn, Virginia, not far from its existing interconnection hub in the heart of northern Virginia’s “Data Center Alley.” The company has submitted plans to build a gargantuan development with more than 1.16 million square feet of space.
Large Crack Found in Dam Supporting Quincy Data Center Cluster – March 1 – A large crack has been detected in a dam on the Columbia River that is the largest source of hydro-electric power for a major cluster of data centers in Quincy, Washington.
Cisco Joins Cloud Wars, Pledges $1 Billion for Data Centers – March 24 – Networking giant Cisco Systems is the latest tech titan to enter cloud in a big way, pledging to spend $1 billion over the next two years. The company will use that money to build up its data center infrastructure.
American Express Vacating Massive Minneapolis Data Center – March 4 – American Express is vacating a massive 541,000 square foot data center in Minneapolis, which will soon be up for sale, according to local media.
U.S. Navy Shifting Public Data to Amazon Cloud – March 14 – The U.S. Navy is shifting large amounts of data to the Amazon Web Services cloud, and expects the move to produce huge savings.
Facebook Adopts IKEA-Style Pre-Fab Design for Expansion in Sweden – March 7 – Facebook has begun building a second huge server farm in Lulea, Sweden, and has totally reworked its data center design for the project. The new building will span 25,000 square meters (270,000 square feet) and will combine factory-built components with lean construction techniques.
Foust Resigns as Digital Realty CEO, Stein Takes the Helm – March 17 – Digital Realty Trust said today that its long-time CEO Michael Foust has stepped down, effective immediately. The company’s board has appointed Chief Financial Officer William Stein to take over as Interim Chief Executive Officer.
The SUPERNAP Goes Global, as Switch Adds International Partners – March 11 – The SUPERNAP is going global. Colocation pioneer Switch has formed a joint venture to launch an international expansion, teaming with Accelero Capital Holdings and Orascom TMT Investments (OTMTI) to build SUPERNAP data centers around the world.
Immersion-2: Liquid Cooling Designed for 100kW Racks – March 3 – How do you create a 500kW data center that can run inside a high-rise office building? That’s what Allied Control recently did in Hong Kong, where it created a data center that housed custom Bitcoin mining rigs in a rack-mounted immersion cooling system.
Yahoo to Sublease 24 Megawatts of Virginia Space – March 10 – Some of the world’s largest technology companies are consolidating their Internet infrastructure in company-built data centers, and moving out of leased data center space. Will this shift disrupt the market for “wholesale” third-party data center space?
| 3:48p |
TELEHOUSE America Names New CEO Data center and managed IT provider Telehouse has named a new CEO for Telehouse America and KDDI America. Current President and CEO Masaaki Nakanishi will step down to retire, and Satoru Manabe is stepping in as the new President and CEO.
Nakanishi first joined Telehouse America and KDDI America in October of 2011, moving from his prior position as Vice President and Chief Executive Officer of Global Business for KDDI Corporation in Japan.
Telehouse has a globally diverse infrastructure. The company has 46 TELEHOUSE branded data centers in 23 cities throughout Asia, Africa, North America and EMEA. In the United States, it has 3 facilities and offers peering exchanges in New York (NYIIX) and Los Angeles (LAIIX).
“Throughout Masaaki Nakanishi’s tenure here in the United States, KDDI and TELEHOUSE America have seen significant growth with US-based companies expanding Data Center and Systems Integration services both here and abroad,” said Hiroyasu Morishita, EVP of Sales and Marketing. “We are honored to welcome Satoru Manabe as our new CEO; his excellent credentials will contribute to our accelerated growth in emerging markets, worldwide. Mr. Manabe’s success in business operations in key markets such as China will enhance our companies’ reputation as a leading global colocation and network provider.”
Manabe first joined KDDI in 1984 after graduating with a Bachelor’s Degree in Law from Kyoto University in Japan. He later received a Master of Business Administration in Marketing from City University Business School in London. Manabe has been engaged in China business development and established three subsidiary companies, in Beijing, Shanghai and Guangzhou. He served as President of KDDI China from 2005 to 2007, and started Telehouse China in Beijing, followed by Telehouse China in Shanghai. From 2007 to 2010 he oversaw a KDDI and China Telecom joint venture company, before heading the global account management team for Toyota Motor Corporation and its group companies from 2010 to 2014.
Telehouse in the USA
Telehouse Center in NY is the company’s headquarters and first facility, opening for business in 1989. The flagship data center is also its largest, at 162,000 square feet. Located in The Teleport, a 100-acre office park on Staten Island, it is 17 miles from Manhattan and 12 miles from Newark Airport.
The 13,000 square foot Los Angeles facility is a gateway for connectivity to Asia. Home to LAIIX, the company looks to capitalize on the explosive growth of the Chinese and Indian economies, positioning this location as a key exchange point for companies looking to increase their reach in the Pacific Rim. It has direct dark fiber connectivity to the major carrier hotel nearby, One Wilshire, and 665kVA of commercial power capacity.
The newest U.S. facility, in Manhattan’s Chelsea and Meatpacking districts, opened in early 2011. It has 60,000 square feet of colocation space and connection to NYIIX. NYIIX offers dual-redundant fiber connections to its extensions at other carrier hotels, including 25 Broadway, 111 Eighth Avenue, 60 Hudson Street and 7 Teleport Drive. | 4:00p |
Cloudera Releases New Version of Enterprise Hadoop Platform Cloudera announced the general availability of Cloudera 5, the latest release of its enterprise Hadoop platform, with 96 partner products having certified integration with the new release. Cloudera 5 delivers a secure, managed, governed and open data management platform, backed by a deep technology partner ecosystem.
This major release continues a big year for the big data company, with a large $160 million financing round followed by strong financial and product support from Intel (INTC). The Intel investment in Cloudera, noted as the largest data center investment Intel had made to date, turned out to be $740 million, giving Intel an 18 percent stake in Cloudera. That brings the total financing round to $900 million.
Cloudera 5 Enhances the Enterprise Data Hub
Cloudera 5 delivers tight integration with existing enterprise data management systems, including the key attributes needed to deliver the robust security, governance, data protection and management that enterprises require. Data is encrypted at every level, from the cluster to the individual byte, and wrapped in a common security infrastructure. Cloudera Manager and Cloudera Navigator provide centralized security to verify authentication, along with third-party extensions that plug in security. With the inclusion of YARN, Cloudera 5 makes a Hadoop-based enterprise data hub easy for customers to use, navigate and manage. Controls for governance and compliance are built in, and can furnish reports on user access to data and on where data has come from and been moved to.
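The article doesn't show how administrators drive Cloudera Manager programmatically, but as a rough illustration, Cloudera Manager exposes a REST API with a Python client (cm_api). A minimal sketch, assuming a manager host named cm-host and default admin credentials (both placeholders), might enumerate clusters and service health like this:

```python
# Minimal sketch using the Cloudera Manager Python client (cm_api).
# Host name and credentials are placeholders for illustration only.
from __future__ import print_function
from cm_api.api_client import ApiResource

api = ApiResource("cm-host", username="admin", password="admin")

for cluster in api.get_all_clusters():
    print(cluster.name, cluster.version)
    for service in cluster.get_all_services():
        # serviceState/healthSummary show whether each service is running and healthy.
        print("  %s %s %s" % (service.name, service.serviceState, service.healthSummary))
```

This is only a read-only status walk; the same API is the kind of entry point monitoring and governance tooling would build on.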
“At Cloudera we fully realize what it means to be the foundation of an enterprise data hub,” said Charles Zedlewski, vice president, Products, Cloudera. “Our next generation platform has all of the key attributes necessary for customers to make data the true focal point of any business – it delivers on security, management and governance and is open where it matters.”
“Analytic platforms are assuming multiple personalities,” said Tony Baer, principal analyst, Ovum. “Cloudera 5 is clearly in line with this trend as it supports a wide range of analytic styles venturing well beyond MapReduce to interactive SQL, search, and in-memory computing. Cloudera 5 is also extending Hadoop’s footprint with regard to security, data management, and governance. For instance, the new data lineage capabilities are the first building blocks that could eventually lead to a fully audited experience.”
Strong Ecosystem Support
Cloudera also announced that through its Cloudera Connect program, 96 partner products have certified integration with Cloudera 5. The Cloudera Connect partner program is a living and growing ecosystem that continually adds new partner products, ensuring that customers always have the broadest choice of solutions that best suit their business needs. The first 96 certifications are just the beginning of a vibrant and robust certification program. The certified products give assurance to joint customers that leverage the combination of third-party products alongside CDH. Categories of certified products include tools and applications, extensions for management and security, services, and hardware and platform services.
“We are impressed by the volume and depth of partners that have chosen to certify their products in advance of Cloudera 5 becoming GA,” said Tim Stevens, vice president, Business and Corporate Development, Cloudera. “The Cloudera platform was built to allow customers to store any amount and kind of data and to deploy that data against an analytic infrastructure or application when, how and where needed to achieve the most economical value to their business. We are investing deeply in building our partner ecosystem and certifying their products on Cloudera 5, ensuring our customers have the broadest choice of big data tools and technologies for their enterprise data hubs.” | 4:02p |
Microsoft Cuts Azure Cloud Prices, Introduces ‘Basic’ Instances This article originally appeared at TheWHIR.
Microsoft is cutting prices for users of its Azure cloud, dropping the price of its compute by up to 35 percent and storage by up to 65 percent.
The company will also be moving to region-specific pricing to help customers save money when their workloads do not require specific placement.
In the blog post announcing the change, Microsoft Azure general manager Steven Martin states, “We recognize that economics are a primary driver for some customers adopting cloud, and stand by our commitment to match prices and be best-in-class on price performance.”
According to Martin, Microsoft sees price as just one (albeit a very compelling) factor in the decision to use Azure, which it is trying to build as a platform with the best combination of innovation, price, and quality.
Price competition, however, is heating up in the cloud space, and is largely led by Amazon and Google, who many would describe as Microsoft’s most fierce cloud competitors.
In February, Amazon Web Services lowered its prices in all regions with Amazon Simple Storage Service costing up to 22 percent less and Amazon Elastic Block Store up to 50 percent.
Last week, Google announced new pricing for its cloud computing services including a drop of its Compute Engine service by 32 percent, Cloud Storage by roughly 68 percent for most customers, and Google BigQuery by 85 percent.
New Services Announced, Too
Microsoft’s announcement this week, however, wasn’t just about pricing, but also announcing some new services.
A new tier of General Purpose Instances called “Basic,” ranging from extra small (A0) to extra large (A4), will become available on April 3. These are similar to the current Standard tier but cost 27 percent less; they do not include load balancing or auto-scaling.
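As a quick illustration of what that 27 percent saving means over a month of continuous use (the hourly rate below is a placeholder, not an actual Azure price), a back-of-the-envelope comparison in Python:

```python
# Hypothetical Standard vs. Basic tier monthly cost comparison.
# The hourly rate is a placeholder, not a published Azure price.
HOURS_PER_MONTH = 730
BASIC_DISCOUNT = 0.27  # Basic is 27 percent cheaper than Standard

def monthly_cost(hourly_rate, discount=0.0):
    """Monthly cost for an instance running continuously at the given rate."""
    return hourly_rate * (1.0 - discount) * HOURS_PER_MONTH

standard_rate = 0.09  # placeholder $/hour for a Standard instance
print("Standard: $%.2f/month" % monthly_cost(standard_rate))
print("Basic:    $%.2f/month" % monthly_cost(standard_rate, BASIC_DISCOUNT))
```

The trade-off is the missing load balancing and auto-scaling, which likely makes Basic instances a better fit for workloads that do not need those features, such as batch or dev/test.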
In the coming months, the company plans to add Zone Redundant Storage (ZRS), a new redundancy service for Block Blob storage that keeps the equivalent of three copies of your data across multiple facilities, either within the same region or across two regions.
This article was first published here: http://www.thewhir.com/web-hosting-news/microsoft-cuts-azure-cloud-prices-introduces-basic-instances |