Data Center Knowledge | News and analysis for the data center industry
 

Friday, June 21st, 2013

    Time Event
    11:30a
    Cisco To Acquire Composite Software

    With yet another software purchase, Cisco (CSCO) has announced its intent to acquire privately held Composite Software, a market leader in data virtualization software and services, for $180 million.

    Composite pioneered data virtualization, which connects many types of data from across the network and makes them appear as if they reside in one place. By expanding Cisco’s portfolio of Smart Services, Composite will extend the services platform by connecting data and infrastructure. Just as physical servers gave way to server virtualization and physical networks to network virtualization, Cisco and Composite together aim to accelerate the shift from physical data integration to data virtualization for customers and partners.
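    As a rough illustration of the data virtualization pattern (this is not Composite’s product or API, just a minimal Python sketch with invented sources): two independent stores, here an in-memory SQLite table and a hard-coded list standing in for an API feed, are exposed behind a single virtual view, so the consumer queries one logical dataset without knowing where the rows physically live.

        import sqlite3

        # Source 1: a relational store (an in-memory SQLite table standing in
        # for a real warehouse table).
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
        db.executemany("INSERT INTO orders VALUES (?, ?)",
                       [("acme", 1200.0), ("globex", 450.0)])

        # Source 2: a non-relational source (standing in for a REST API or file feed).
        crm_api = [{"customer": "acme", "region": "US"},
                   {"customer": "globex", "region": "EU"}]

        def virtual_customer_view():
            """Join the two sources on the fly and present one logical dataset.

            The caller never touches the underlying systems directly; the
            virtual view resolves and combines them at query time.
            """
            regions = {row["customer"]: row["region"] for row in crm_api}
            for customer, amount in db.execute("SELECT customer, amount FROM orders"):
                yield {"customer": customer, "amount": amount,
                       "region": regions.get(customer, "unknown")}

        for row in virtual_customer_view():
            print(row)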

    “Cisco’s strategy is to create a next generation IT model that provides highly differentiated solutions to help solve our customers’ most challenging business problems,” said Gary Moore, Cisco president and chief operating officer. “By combining our network expertise with the performance of Cisco’s Unified Computing System and Composite’s software, we will provide customers with instant access to data analysis for greater business intelligence.”

    Last March, Cisco acquired SolveDirect, an Austrian company that provides cloud-delivered services management integration software and services. Together, SolveDirect and Composite will help provide cross-domain data and workflow integration capabilities to enable real-time business insights and operations. Upon completion of the acquisition, Composite employees will join the Cisco services team under the leadership of Mala Anand, senior vice president of the Cisco Services Platforms Group, and Mike Flannagan, senior director and general manager of the Integration Brokerage Technology Group.

    12:00p
    GE Launches Cloud Platform for the Industrial Internet


    The tag line for global giant GE is that it “works on things that matter.” That philosophy now extends to the Internet of Things.

    GE has launched a first-of-its-kind, industrial-strength cloud platform for big data and analytics to connect machines and business operations. Built to support the Industrial Internet, the platform, GE says, is robust enough to manage the data produced by large-scale industrial machines in the cloud.

    The platform will help convert big data into real-time insight, benefiting global industries including aviation, healthcare, energy production and distribution, transportation and manufacturing. In April, GE invested $105 million in Pivotal and its Platform as a Service (PaaS) offering. Combined with the GE Predictivity services and technologies available today, the platform lets airlines, railroads, hospitals and utilities manage and operate critical machines such as jet engines and gas turbines in the cloud. This will give industrial companies a common architecture combining intelligent machines, sensors and advanced analytics.

    To support its industrial platform, GE announced new Hadoop-based historian data management software called Proficy Historian HD. It delivers real-time data management, analytics, and machine-to-operations connectivity in a secure, closed-loop architecture so industries can move from a reactive to a predictive industrial operating model. The Proficy Monitoring and Analysis suite comprises six integrated products, including: Proficy Historian, GE’s flagship data collection software; Proficy Historian Analysis for data mining and visualization; Proficy SmartSignal for condition-based predictive analytics; and Proficy CSense to troubleshoot process problems, monitor process health and enable closed-loop process optimization.

    “GE’s industrial strength platform is the first viable step to not only the next era of industrial productivity, but the next era of computing,” said Bill Ruh, VP of the Global Software Center at GE. “The ability to bring machines to life with powerful software and sensors is a big advancement – but it is only in the ability to quickly analyze, understand, and put machine-based data to work in real-time that points us to a society that benefits from the promise of big data. This is what the Industrial Internet is about and we are building an ecosystem with partners to save money for our customers and unlock new value for society.”

    Expanded Partnerships

    GE also announced expanded partnerships with Accenture, Pivotal and Amazon Web Services. Amazon will be the first cloud provider on which GE deploys its Industrial Internet platform. The Pivotal partnership will be expanded to jointly develop and deploy Industrial Internet solutions leveraging Pivotal’s Cloud Foundry, in-memory and Hadoop-based technology, supporting GE’s strategy of bringing consumer-grade capabilities to the enterprise. A global strategic alliance with Accenture will help develop technology and analytics applications that let companies across industries take advantage of the massive amounts of industrial-strength big data generated through their business operations.

    “Decades of GE-led innovation have helped shape history, and we are excited to work with the GE team to help shape the future of Industrial Big Data,” said Werner Vogels, Amazon Chief Technology Officer. “GE’s domain knowledge and R&D capabilities combined with the strength of AWS’s global infrastructure, breadth of services and big data expertise will help enable customers to solve problems in ways we haven’t even imagined yet, such as improved accuracy in healthcare treatments or extreme levels of energy efficiency.”

    “Our research found that an industrial strength cloud environment needs to meet the challenges of integrating large volumes of machine data with data from other sources while executing near real-time analytics,” said Jeff Kelly, Big Data Analyst, The Wikibon Project. “GE is well positioned – it has both the Industrial Internet technology and the deep expertise across healthcare, energy, transportation and aviation – to develop and deliver software and services capable of scaling and delivering meaningful insight and action from complex industrial data.”

    12:30p
    Rackspace Sharpens Its Focus on Hybrid Cloud

    SAN FRANCISCO – For many cloud-watchers, Rackspace Hosting has been viewed as a rival to Amazon Web Services in the public cloud arena. But this week the company is touting the power of hybrid clouds, and showcasing a marquee brand implementing a private cloud built on Rackspace’s OpenStack technology.

    In a session at the GigaOm Structure conference Wednesday, Fidelity Investments said it is a new Rackspace customer, is using OpenStack for its private cloud, and will eventually transition to a hybrid cloud environment. “The OpenStack relationship is important to us because of the community behind it and the broad support,” said Fidelity Technology Group VP Keith Shinn, who added that Fidelity would work within the OpenStack community to develop enterprise features for private clouds.

    Rackspace CTO John Engates said customers have outgrown the public-only cloud.

    “For a few years, we drank the Kool Aid and believed everyone was going to the public cloud,” said Engates. “The public cloud was intoxicating but we’re starting to see an alternative and that hybrid is the end state.”

    ‘Hybrid Cloud is the Future’

    Engates later expanded upon that theme in a blog post.

    “The hybrid cloud changes the game,” he wrote. “It eliminates the tradeoffs and empowers customers to leverage an infrastructure that makes the most sense for their applications while also building toward a future where a single cloud – whether public or private – or dedicated hardware alone isn’t the perfect fit. It helps them build toward a future where their workloads live in the environment that makes the most sense at the time and can move when the needs grow and change.

    “The open hybrid cloud is the future, and we’re here to help you get there,” Engates concluded.

    Rackspace Playing to Its Strength

    Some may see Rackspace’s new emphasis on hybrid cloud as a strategic pivot in a changing market. Over the past two quarters, the company has seen the growth rate for its public cloud services begin to moderate. Investors have sold off shares of Rackspace due to these concerns.

    But judging Rackspace solely on the basis of its public cloud revenue has always provided only a partial view of its value. The San Antonio company has long been one of the world’s largest managed hosting companies, with a base of enterprises and SMBs turning to Rackspace for its combination of hosted hardware and Fanatical Support. In its most recent quarter, managed hosting accounted for 75 percent of Rackspace’s revenue, or $271 million, versus $90 million from public cloud.

    For many of the company’s 200,000 customers, a private cloud is an easier entry point to cloud services than public cloud.

    1:00p
    The Need for Speed: Velocity 2013

    Arvind Jain, senior engineering director at Google, tells the packed auditorium about improvements in the speed of the Web in recent years. Since Chrome launched in 2008, JavaScript performance has become 20 times faster, and in the last year JavaScript has become 57 percent faster in mobile apps. (Photo: Colleen Miller.)

    Velocity 2013, an O’Reilly conference that focuses on making the Web stronger and faster, convened this week in the Santa Clara Convention Center in Santa Clara, CA.

    The multi-day event featured speakers with experience in speeding up web sites and mobile apps, while the exhibit hall was filled with vendors who provide tools and hardware to support speedy and reliable infrastructure.

    For scenes from the action, see Highlights from Velocity 2013.

    2:02p
    An Overlooked Problem: Dynamic Power Variations

    Patrick Donovan is a Senior Research Analyst with Schneider Electric’s Data Center Science Center. He has over 18 years of experience developing and supporting critical power and cooling systems for Schneider Electric’s IT Business unit, including several award-winning power protection, efficiency and availability solutions.

    PATRICK DONOVAN
    Schneider Electric

    Historically, the total electrical power consumed by IT equipment in data centers and network rooms has varied only slightly with computational load or mode of operation. However, once notebook processors were redesigned to extend battery life – allowing laptop processor power consumption to drop by up to 90 percent when lightly loaded – server processor design soon followed suit. As a result, newly developed servers with energy management capabilities can experience dramatic fluctuations in power consumption as workload levels change over time, causing a variety of new problems for the design and management of data centers and network rooms.
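    These reductions come largely from scaling processor voltage and clock frequency, techniques discussed further below. As a rough illustration of why that yields such large swings, dynamic CMOS switching power scales approximately as P ≈ C·V²·f; the figures in this sketch are invented for illustration and are not taken from the article.

        def dynamic_power(capacitance, voltage, frequency):
            """Approximate dynamic CMOS power: P ~ C * V^2 * f (activity factor folded into C)."""
            return capacitance * voltage ** 2 * frequency

        # Hypothetical operating points (not from the article): nominal operation
        # versus a low-power state with voltage and clock both scaled back.
        p_full = dynamic_power(capacitance=1.0, voltage=1.2, frequency=3.0e9)
        p_low = dynamic_power(capacitance=1.0, voltage=0.85, frequency=2.0e9)

        # Roughly a two-thirds reduction from modest voltage and frequency cuts,
        # which is why server power draw can swing so widely with load.
        print(f"low-power state draws {p_low / p_full:.0%} of full power")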

    Once negligible (historically on the order of five percent), total power variation for a small business or enterprise server is now much greater. These fluctuations in power consumption can lead to unplanned and undesirable consequences in the data center and network room environment. Such problems include tripped circuit breakers, overheating and loss of redundancy, creating entirely new challenges for the design and operation of data centers and network rooms.

    Additionally, the growing popularity of cloud computing and virtualization has greatly increased the ability to utilize and scale compute power, while in turn heightening the risk of physical infrastructure issues. In a virtualized environment, the sudden creation and movement of virtual machines requires careful management and policies that account for physical infrastructure status and capacity down to the individual rack level. Failure to do so could undermine software fault tolerance.
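    A minimal sketch of the kind of placement policy described above, checking a proposed virtual machine move against per-rack power headroom; the rack capacities, loads and VM power estimate are hypothetical, invented purely for illustration.

        # Hypothetical per-rack power budgets and current loads, in watts.
        rack_capacity_w = {"rack-A": 5000, "rack-B": 5000}
        rack_load_w = {"rack-A": 4600, "rack-B": 3100}

        def can_place_vm(rack, vm_power_estimate_w, headroom=0.9):
            """Allow a VM creation or move only if the target rack stays within
            a safety margin of its power (and therefore cooling) capacity."""
            return rack_load_w[rack] + vm_power_estimate_w <= rack_capacity_w[rack] * headroom

        print(can_place_vm("rack-A", 300))  # False: 4,900 W exceeds the 4,500 W margin
        print(can_place_vm("rack-B", 300))  # True: 3,400 W fits within the margin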

    Data Center Virtualization and Magnitude of Dynamic Power Variation
    Two decades ago, server power variation was largely independent of the computational load placed on processors and memory subsystems; significant fluctuations were most often caused only by disk drive spin-up and fans, and typical power variation was approximately five percent. Modern processing equipment, however, employs new techniques to reach low-power states, such as scaling clock frequencies, migrating virtual loads and adjusting the voltages applied to processors to better match the workload in the non-idle state. Depending on the server platform, power variation can now be on the order of 45 to 106 percent – a significant increase from just twenty years ago. This type of dynamic power variation gives rise to the following five types of problems.

    1. Branch Circuit Overload
    Typically, servers operate at light computational loads, with actual power draw well below the server’s potential maximum. Because many data center and network managers are unaware of this discrepancy, they often plug more servers into a single branch circuit than its rating can support if every server draws its maximum power. The servers will operate successfully at lower loads, but when they are simultaneously subjected to heavy loading, overloads occur. The most significant result of a branch circuit overload is a tripped breaker, which shuts off power to the computing equipment. Such incidents are always undesirable, and because they occur during periods of high workload, they can be extremely detrimental to business continuity.
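    A minimal sketch of the sizing arithmetic, using a hypothetical 208 V, 30 A branch circuit derated to 80 percent for continuous load and invented server figures; none of these numbers come from the article.

        # Hypothetical branch circuit and server figures, for illustration only.
        circuit_voltage = 208
        circuit_breaker_amps = 30
        usable_amps = circuit_breaker_amps * 0.8          # common practice: derate continuous loads to 80%
        circuit_capacity_w = circuit_voltage * usable_amps  # ~4,992 W usable

        server_typical_w = 250   # draw at light computational load
        server_peak_w = 500      # draw when heavily loaded

        servers = int(circuit_capacity_w // server_typical_w)   # 19 servers fit by typical draw
        typical_total_w = servers * server_typical_w            # 4,750 W: comfortably under capacity
        peak_total_w = servers * server_peak_w                  # 9,500 W: far beyond the circuit

        print(f"installed: {servers} servers, typical {typical_total_w:,} W, peak {peak_total_w:,} W")
        print("overload when all servers peak:", peak_total_w > circuit_capacity_w)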

    2. Overheating
    In the data center or network room, most of the electrical power consumed by computing equipment is released as heat. When power consumption varies with load, the heat output varies with it, so sudden increases in power consumption can cause dangerous increases in heat production and create localized hot spots. While data center cooling systems are put in place to regulate overall temperature, they may not be designed to handle specific, localized hot spots caused by spikes in power consumption. As temperature rises, equipment is likely to shut down or behave abnormally. Even if the equipment keeps functioning, heat spikes may degrade it over time or void its warranties.
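    Because essentially all of that electrical power ends up as heat, the swing in a rack’s heat load tracks its swing in power draw. A minimal sketch of the conversion, using 1 W ≈ 3.412 BTU/hr and hypothetical rack figures.

        WATTS_TO_BTU_PER_HR = 3.412   # 1 watt of electrical power dissipated as heat

        # Hypothetical rack: ten servers swinging between light and heavy load.
        rack_idle_w = 10 * 250    # 2,500 W at light load
        rack_peak_w = 10 * 500    # 5,000 W at heavy load

        print(f"idle heat load: {rack_idle_w * WATTS_TO_BTU_PER_HR:,.0f} BTU/hr")   # ~8,530 BTU/hr
        print(f"peak heat load: {rack_peak_w * WATTS_TO_BTU_PER_HR:,.0f} BTU/hr")   # ~17,060 BTU/hr
        # Cooling sized for the idle figure can be overwhelmed locally when the
        # whole rack peaks at once, producing the hot spots described above.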

    Hot spots can also occur in a virtualized environment, where servers are more often installed and grouped in ways that create localized high-density areas. This may seem surprising, given virtualization’s ability to dramatically reduce overall power consumption, but grouping or clustering high-density virtualized servers can still result in localized cooling problems.

    3. Loss of Redundancy
    To protect against potential power failure, many servers, data centers and network rooms use dual redundant power inputs designed to share the load equally between two paths. When one path fails, the load once carried by the failed feed transfers to the surviving feed, doubling that feed’s load so it can fully support the server. To ensure that the remaining feed has the capacity to take over the complete load when necessary, the main AC branch circuits feeding the equipment must always be loaded to less than 50 percent of their ampacity. This can be difficult when loads vary in power consumption – equipment measured at less than 50 percent during installation can, over time, begin to operate at much higher loads.

    Should the inputs begin operating at greater than 50 percent of their rating, the system’s redundancy and protection capabilities are eliminated. In that case, if one feed fails, the other will overload, its breaker will trip and power will be lost, causing data loss or corruption.
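    A minimal sketch of that 50 percent rule, using invented feed loads and a hypothetical 30 A branch rating: redundancy holds only if either feed can absorb the other’s entire load without exceeding the rating.

        def redundancy_preserved(feed_a_amps, feed_b_amps, branch_rating_amps):
            """Dual-feed check: if either feed fails, the survivor carries both loads.

            Redundancy holds only if the combined load still fits within a single
            branch circuit's rating, i.e. each feed stays under 50% when shared.
            """
            return feed_a_amps + feed_b_amps <= branch_rating_amps

        # Hypothetical 30 A feeds: loads measured at installation versus after
        # workloads (and therefore power draw) have crept upward over time.
        print(redundancy_preserved(12, 13, 30))   # True  -> survivor would carry 25 A
        print(redundancy_preserved(17, 18, 30))   # False -> survivor would need 35 A; breaker trips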

    2:30p
    Secure Solutions for Data Center Connect

    Today’s data center environment faces new types of persistent attacks and ever-evolving threat vectors. Emerging technologies have made organizations more reliant on the data center than ever: with the growth of cloud computing, big data and IT consumerization, the data center has become home to all modern-day platforms, and this is where the concern around security has really begun to grow. Although data center operators and service providers recognize the importance of security, their first priority in recent years has been adding servers, storage and software to cope with new anywhere/anytime business requirements. These application and computing resources have been clustered across distributed geographic locations for more efficient delivery of IT services.

    Remember, data center security is not just about technical countermeasures such as antivirus and firewalls; it requires a much more systematic and holistic approach to enterprise-wide security. In this white paper, you will learn how enterprises must establish comprehensive IT security programs that include information security management systems (ISMSs) to achieve corporate or regulatory compliance.

    This is where security technologies designed for the data center can really help. Working with Alcatel-Lucent’s security technologies, administrators are able to deliver three important data security services:

    1. Data confidentiality
    2. Data integrity
    3. Data availability


    [Image source: Alcatel-Lucent]

    In this white paper, you will learn how organizations are shifting to real-time transfer of data between data centers and implementing on-the-fly data encryption with key management for security. Download the white paper to discover why physical layer encryption is the preferred method for securing data across the data center connect (DCC) WAN – especially when deployed across optical fiber and DWDM for converged LAN and SAN traffic. In today’s “always-on” business world, advanced connectivity and security solutions are a must, which is why it’s important to consider optical DWDM solutions that deliver the highest throughput for DCC at the lowest TCO.

    4:22p
    Microsoft Plans Major Expansion of Iowa Data Center

    A look at some of Microsoft’s data center modules at the company’s facility in Quincy, Washington. Microsoft is planning a $677 million expansion in West Des Moines, Iowa. (Photo: Microsoft)

    Microsoft plans to expand its cloud infrastructure in Iowa with a new $677.6 million data center in West Des Moines, the company and state officials announced today. The new project, known locally by the codename “Project Mountain,” builds upon Microsoft’s existing data center campus in West Des Moines.

    The announcement caps a spectacular year for Iowa, which has seen Facebook, Google and now Microsoft announce more than $1.4 billion in new investment in data centers in the state. On April 23, Facebook confirmed plans to build a $300 million server farm in Altoona, and Google announced a $400 million expansion of its facility in Council Bluffs.

    The Iowa Economic Development Authority Board approved incentives in a meeting earlier today. The newest phase of the Microsoft data center will create 29 new jobs, as well as 200 new jobs during construction.

    “Microsoft has enjoyed a strong working relationship with the state of Iowa and West Des Moines and we are excited about our latest expansion project,” said Christian Belady, General Manager of Data Center Services at Microsoft. “The expansion of the West Des Moines data center is a win-win, bringing both new jobs to Iowa while supporting the growing demand for Microsoft’s cloud services. The new facility is designed to provide fast and reliable services to customers in the region and features our latest efficient data center thinking.”

    “We are proud to watch tech leaders like Microsoft choose to make significant investments in Iowa,” said Governor Terry Branstad. “Microsoft’s continued expansion at its West Des Moines location is a vote of confidence that our state is providing the kind of business-friendly environment that can help global companies succeed.”

    The company will be eligible to receive up to $20 million in tax credits, including a $15 million sales and use tax refund paid during construction and a $5 million investment tax credit for completing the project in West Des Moines. Site work for the expansion is expected to begin in the summer, with construction anticipated in late 2013 and completion scheduled by the end of 2015.

    The West Des Moines project was announced in August 2008, but was delayed when Microsoft slowed its data center investment, citing the economic slowdown and the need to cut expenses. In June 2010 the project was back on, and Microsoft began construction.

     

