Data Center Knowledge | News and analysis for the data center industry
Thursday, March 13th, 2014
11:30a
MRV Communications Advances Provisioning Platform
MRV launches Pro-Vision 3.0 service delivery and provisioning software, Ciena introduces new WaveLogic Photonics capabilities, and Vello Systems launches Precision Application Networking software.
MRV launches Pro-Vision 3.0 service delivery and provisioning software. MRV Communications (MRVC) launched Pro-Vision 3.0 service delivery and provisioning software. Designed to make the user experience more intuitive and comprehensive, the new Pro-Vision platform is based on an HTML5 infrastructure that simplifies and automates the design, deployment and assurance of next-generation, multi-layer virtualized networks. The software gives communications service providers (CSPs) substantial OPEX savings in large and complex nationwide networks. As CSPs of all sizes look for a future path towards Software-Defined Networking (SDN) and Network Functions Virtualization (NFV), MRV is making it easier for them to accelerate service creation with a more intelligent network. “We have seen incredible global demand from CSPs for a highly scalable and open service delivery solution. MRV is transforming service delivery infrastructure in order to help our customers better meet the demands of today’s marketplace,” said John Golub, vice president of global product line management for MRV Communications. “The Pro-Vision platform continues to evolve and offers the most innovative tools available as our customers migrate from best-effort performance to SLA-assured, application-driven services.” MRV also announced it has joined the Open Networking Foundation (ONF) and is participating in the European Telecommunications Standards Institute’s (ETSI) Network Functions Virtualization (NFV) Industry Specification Group. The company’s participation reflects its strategic roadmap towards network virtualization and software-driven services that will help carriers, data centers and content delivery providers achieve a more intelligent network, reduce OPEX and deliver high-value, revenue-generating services more easily and faster.
Ciena introduces WaveLogic Photonics capabilities. Ciena (CIEN) unveiled WaveLogic Photonics, new line intelligence capabilities that significantly improve visibility, control and programmability of packet optical networks. With Ciena coherent optics, flexible optical line elements and embedded and discrete software tools, WaveLogic Photonics gives service providers the ability to leverage software programmability at greater scale to transform their network from a fixed asset that provides commodity connectivity to one that is automated for faster service responsiveness and differentiation. As the next step in Ciena’s OPN architecture evolution, WaveLogic Photonics builds upon the intelligence already found in Ciena’s WaveLogic coherent optics across the entire optical line system. Product enhancements include expanded control plane capabilities, smart Raman, and PinPoint fiber analytic software. “WaveLogic Photonics is another step in realizing our OPN network architecture vision, which is designed to make networks more responsive to business needs,” said Steve Alexander, Chief Technology Officer at Ciena. “As networks become strategic application-responsive assets, an intelligent, agile optical layer is vital to achieving the necessary flexibility to support changing network demands. WaveLogic Photonics enables network operators to reliably shift their operating model from just ‘configuring’ to actually ‘programming’ the network in real time, increasing service velocity while eliminating the potential for manual errors and the cumbersome tracking processes that plague networking today. It also allows operators to easily view their network assets and program them to automatically respond to service requests and intelligently route packet services across the network.”
Vello launches Precision Application Networking software. Vello Systems announced Precision Application Networking (PAN) software powering a new class of open, standards-based optical networking switches that will fundamentally change how network architectures are designed and deployed. The new standards-based PAN-powered solutions are a dramatic departure from today’s conventional large, expensive and power-hungry proprietary chassis-based optical systems. The PAN-powered hardware is currently being designed and developed by OEMs, and the first systems are expected to be commercially available in the second quarter of 2014. “Our vision is fundamentally to bring the application to the network and gradually replace significant network functions, such as routing, security and application workload management in ways that are mapped to business value,” said Karl May, CEO of Vello Systems. “Our Precision Application Networking software unleashes a new class of data center optical networking device that reduces network complexity, enables networks to securely and intuitively adapt to user preferences in real-time and makes existing hardware more efficient and versatile. These are attributes that our software is already delivering for leading organizations worldwide, including cloud service innovator Pacnet and top-tier financial services companies. Optical router bypass based on the targeted needs of users and applications is now a reality.”
12:00p
Microsoft Lines Up Incentives for $250 Million San Antonio Expansion
An aerial view of the 470,000 square foot Microsoft data center in San Antonio.
Microsoft has proposed building a new $250 million data center in San Antonio, and has received approval from local officials for a package of tax breaks to support the expansion.
Local media in San Antonio reported last week that Microsoft recently began construction on the new facility, which would be its second major data center in San Antonio.
The company declined to address the reports. “The San Antonio datacenter is an important part of our portfolio but we have no additional details to share at this time,” said a Microsoft spokesperson.
Microsoft has an existing 470,000 square foot data center in San Antonio, which the company opened in 2008. Last summer, it leased 8 megawatts of wholesale data center space in the San Antonio market, according to industry sources. The new Microsoft facility is expected to be sizeable, given the $250 million investment.
Data Center Cluster Continues to Grow
San Antonio has attracted several enterprise data centers, particularly in the energy sector, as well as multi-tenant providers such as CyrusOne and Stream Data Centers. Both CyrusOne and Stream Data Centers continue to build up their San Antonio footprint.
Microsoft will not comment on the project, but San Antonio City Council meeting minutes from November 21, 2013 reveal details. According to the minutes, San Antonio International and Economic Development Director Rene Dominguez made a presentation saying Microsoft was proposing a capital investment of $250 million in San Antonio beginning in 2016.
The proposal requested authorization to execute a 15-year Chapter 380 Tax Reimbursement at a minimum of 40 percent of City property taxes on the real and personal property investment of $250 million. The project will result in the creation of 20 high-paying jobs starting in 2016, with a minimum annual wage of $53,000. The estimated net fiscal benefit of the project to the city is $56.3 million. The motion passed by an 8 to 1 vote.
As of yet, there are no details about the proposed data center. Will it be a raised floor facility or will it use the IT-PACs that Microsoft has increasingly embraced? At one time there was discussion about installing on-site solar power at Microsoft San Antonio. For now, the minutes reveal that Microsoft plans on continuing to invest in its San Antonio infrastructure, but not much else.
12:30p
An Inconvenient (Data Center) Truth
Rick Stevenson is CEO of Opengear, a company that builds remote infrastructure management solutions for enterprises.
Another month, another announcement of an “unusual” data center locale. This past summer, Facebook announced the opening of its data center in Lulea, Sweden, right on the edge of the Arctic Circle, where it is welcomed to the Scandinavian neighborhood by Google’s nearby data center in Finland. These facilities join their brethren in bunkers, caves, cathedrals and other unique locations the world over. The spate of data center installations in these far-flung locations underscores the fact that the environment of a data center matters even beyond its climate-controlled confines.
While the magnitude is debatable, data centers have at least some impact on the environment: they throw off a lot of heat, and cooling them requires a lot of power. Data centers are probably not the leading cause of human-induced global warming or climate change, but they are certainly not mitigating those effects either. So what is the relationship between data centers and the environment? These unusual data center locations illustrate some of the challenges of building and maintaining data centers in response to both the current environmental landscape and an uncertain environmental future.
It’s Cold, Cold, Cold
Data centers get built where they are for a variety of reasons, from taking advantage of convenient, empty and available locations to responding to the particularities of tax laws that make one state more advantageous, business-wise, than another. However, many data centers are trying to manage their heat problem by locating in cold places. If you can take advantage of naturally cool environments (near the Arctic Circle, underground in caves, or nestled in old Cold War bunkers), then you can cut down on your cooling costs. Some data centers have even taken to using the heat they throw off to keep residences warm during cold seasons.
Using low ambient temperatures to manage the heat thrown off by data centers not only cuts costs, because less energy is used to cool those data centers, but also reduces emissions for precisely the same reason. Again, these data centers aren’t exactly preventing global warming, but they are lower impact than they otherwise might have been.
But if cool weather is good for data centers, what happens to data centers under global warming and climate change?
For the most part, predictions for average temperature changes under global warming amount to only a few degrees: the EPA estimates that low-emissions scenarios will result in a 2 to 5 degree Fahrenheit increase, while high-emissions scenarios predict anywhere from 4 to more than 9 degrees Fahrenheit of warming. In any case, cooler places will stay cooler, and they will certainly remain cooler than the interior of an operational data center.
En (Storm) Garde
What may be more of a threat to data centers are the storm systems that crop up and cause a great ruckus each time. In addition to seeking out cold, it is also advantageous to house certain kinds of data centers near the populations they serve, so many data centers (particularly those in the United States) sit on the coasts. To be fair, the jury is still out on whether climate change will result in worse storms. Some studies suggest that storms will be more frequent and more intense. Other research suggests that storms will be more frequent but not necessarily more intense. And all of this research has been criticized as exaggerated or alarmist, in part because it may rely on inaccurate or imprecise historical data.
Regardless of the connection between global warming and storm systems, extreme weather has always been a threat to data centers. If storms are to occur less often or less severely, the threats engendered by storms are only mitigated, not avoided outright. On the other hand, since it is just as likely that future storms will be as intense and as frequent – if not more intense and more frequent – the impact of the environment on data centers is just as important a consideration as data centers’ impact on the environment.
Storm-proofing a data center is mostly a matter of storm-proofing the structure it sits in. However, storm-proofing itself is not the end goal. Instead, storm-proofing is one of the means by which a data center’s reliability and continuity of service are maintained. Yes, it is wonderful that the storm-proofing prevented major damage to the data center. It is even better if those preventive steps mitigate downtime due to a storm.
Of course, even the best storm-proofing of a data center cannot maintain continuity of service if the utilities the data center relies on are shut down as well. Even worse, you may not be able to find out if there is any damage, much less assess the extent of the damage when utilities are compromised, delaying any repairs.
Ensuring continuity and mitigating downtime in these situations may be an exercise in developing contingency plan after contingency plan. Relying on multiple forms of contact — for instance, adding a cellular connection in addition to a network connection — and putting back-up power systems in place increases the likelihood that you can communicate with your data center in the absence of utilities. In addition, having these back-up systems can give you time to properly shut down the system if power goes out unexpectedly for whatever reason, storm-related or otherwise.
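To make the contingency idea concrete, here is a minimal sketch of a reachability check that tries the primary in-band management path first and falls back to a cellular out-of-band console path. The hostnames, ports and status labels are hypothetical placeholders, not any vendor's actual product or API.

```python
#!/usr/bin/env python3
"""Sketch: check a remote site over the primary network and fall back to an
out-of-band cellular console path if the primary is unreachable.
All hostnames and ports below are hypothetical."""

import socket

PRIMARY = ("dc-router.example.net", 22)                  # in-band management path
OUT_OF_BAND = ("dc-console.cellular.example.net", 22)    # cellular console server

def reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_site():
    if reachable(*PRIMARY):
        return "primary"
    if reachable(*OUT_OF_BAND):
        return "out-of-band"   # in-band link is down; cellular path still answers
    return "unreachable"       # escalate: neither path responds

if __name__ == "__main__":
    print("site status:", check_site())
```

A monitoring host would run a check like this on a schedule and alert operators when the status changes, which is the point at which those contingency plans come into play.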
Controlling for Remoteness
Data centers are often remotely located, a fact that allows them to be built in whatever environment is most advantageous. However, their remoteness leaves them exposed to the elements, not necessarily because their physical structures are more exposed, but because their reliance on other infrastructure creates more potential points of failure. Those failures could be weather- or environment-related, or they may be due to another reason entirely. Thinking about the functionality of these facilities in the future requires forethought not only into how data centers might change the environment but also into how changes in the environment may affect data centers. The time to plan ahead is now.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:00p
SolidFire Introduces New Features To Target Enterprise
A “five stack” of SolidFire’s all-SSD storage units.
All-flash storage provider SolidFire has introduced new features aimed at enterprise storage customers. The company announced Version 6 of its Element OS, named Carbon, which includes a new set of enterprise-class features.
The new features include native real-time replication, integrated backup and restore, simultaneous Fibre Channel and iSCSI support, and mixed-cluster support, all of which increase the platform’s appeal to enterprises and add promising functionality for cloud service providers.
“The datacenter is changing with software defined everything, and cloud orchestration,” said Jay Prassl, VP of Marketing at SolidFire. “That change has been felt early on in the service provider space, but reaching more and more into the enterprise space. This is purpose-built for the enterprise.”
Looking Beyond Cloud Providers
The company began with a focus on cloud service providers but now is targeting the next-generation data center, which it describes as a tectonic shift in the expectation of IT service, delivery and consumption.
“Storage is at the core of the next generation data center,” said Dave Wright, SolidFire Founder and CEO. “Neither traditional disk systems nor today’s basic all-flash arrays are supporting this transformation in resource allocation and management. Our customers expect great performance from us, but they also expect us to support their broader business objectives to deliver internal storage services that are more agile, scalable, automated, and predictable than ever before.”
The new features are:
Native Real-Time Replication: Real-time replication is ideal for disaster recovery, enabling quick and cost-effective creation of additional remote copies of data. Native to the SolidFire design, it delivers essential disaster recovery capabilities without the need for third-party hardware or software. Each cluster can be paired with up to four other clusters, and data can be replicated in either direction, allowing for easy failover and failback.
Integrated Backup and Restore: Native snapshot-based backup and restore compatible with any object store or device that has an S3- or Swift-compatible API (see the sketch after this list). This functionality eliminates the cost and complexity of third-party backup and recovery products and accelerates backup performance. CSP and enterprise customers can effortlessly scale backups for thousands of hosts and applications.
Fibre Channel & iSCSI simultaneously: Adding to their 10Gb iSCSI connectivity, the company is introducing 16Gb active/active Fibre Channel connectivity to the full line of flash arrays. This allows enterprise customers to easily transition current FC workloads and take advantage of SolidFire’s guaranteed storage performance, system automation and scale-out architecture
Agile scaling to any size or speed with mixed clusters: SolidFire systems now support combining storage nodes of different capacities, performance levels and protocols within a single cluster. This means that as CSPs and enterprises scale out their storage infrastructure, they can simply add the most current SolidFire platform instantly, and they can decommission or add systems easily, regardless of “generation.”
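As a rough illustration of the S3-compatible side of the backup feature, the sketch below pushes a snapshot image to an S3-compatible object store using boto3. The endpoint, bucket, credentials and file name are hypothetical, and this is not SolidFire's implementation, only an example of the kind of API target the feature can write to.

```python
"""Sketch: upload a volume snapshot image to an S3-compatible object store.
Endpoint, bucket, credentials and key are made-up placeholders."""

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.net",   # any S3-compatible endpoint
    aws_access_key_id="BACKUP_KEY",
    aws_secret_access_key="BACKUP_SECRET",
)

# Upload a snapshot image as a single object; for large snapshots boto3's
# upload_file() transparently switches to multipart upload.
s3.upload_file(
    Filename="vol42-snap-20140313.img",
    Bucket="dc-backups",
    Key="san-antonio/vol42/snap-20140313.img",
)
print("snapshot uploaded")
```

A Swift-compatible target would work the same way conceptually, with the object store rather than backup software handling durability of the uploaded snapshot.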
While many competitors have jumped into providing flash storage, SolidFire argues that flash alone isn’t enough. The company provides control and performance management for better predictability in addition to the all-flash hardware.
The company has been doing very well with cloud service providers (CSPs) both big and small, including COLT, ViaWest, Internap, ServInt, SunGard, and more. What’s not as well known is that the company has also been selling effectively to Internet-scale enterprises looking to set up internal clouds. The new features are focused on providing CSPs with more functionality as well as addressing some key enterprise needs.
1:30p
Eaton Updates Energy Management Software
Eaton has announced an update to its power and energy monitoring software, Power Xpert Insight, providing enhanced alarm notification, advanced building integration capabilities, and simplified setup and commissioning. The power and energy monitoring software provides a dashboard view into real-time energy usage, efficiency and power quality.
“Power Xpert Insight software makes it even easier for building managers to obtain the daily information needed to make important operating decisions,” said Marty Aaron, product line manager at Eaton. “The new features will help commercial and industrial customers easily identify energy-saving opportunities, reduce wasteful practices and keep a closer eye on power system status.”
With real-time information down to the device level, Power Xpert Insight allows customers to view energy usage and demand data, compare and trend data, and view a one-line representation of their electrical system. The software provides real-time and historical data to identify, track and improve wasteful energy practices. The newest version builds on these capabilities with the Power Xpert Notify system, an integrated program that provides email notification of pre-configured alarms to help users address potential power system issues before they result in unplanned downtime. An enhanced Modbus protocol adapter has been engineered into the latest version of the software to strengthen communication capabilities with equipment from a variety of manufacturers. By providing a more unified view into building status, the upgraded protocol adapter can ease integration into comprehensive building management systems.
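For readers unfamiliar with what a Modbus integration looks like at the wire level, here is a minimal sketch of a standard Modbus/TCP "read holding registers" request (function code 3) against an energy meter. The meter address, register number and scaling are hypothetical; this is not Eaton's adapter, just the generic protocol exchange such an adapter builds on.

```python
"""Sketch: read one energy value from a Modbus/TCP meter.
Meter IP, unit ID, register and kWh scaling are hypothetical."""

import socket
import struct

METER = ("192.0.2.10", 502)   # hypothetical meter IP, standard Modbus/TCP port
UNIT_ID = 1                   # Modbus unit (slave) identifier
REGISTER = 100                # hypothetical holding register holding kWh * 10
COUNT = 1

def read_holding_registers(host_port, unit, register, count):
    # MBAP header: transaction id, protocol id (0), remaining length, unit id,
    # then the PDU: function code 3, starting address, quantity of registers.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 3, register, count)
    with socket.create_connection(host_port, timeout=5.0) as sock:
        sock.sendall(request)
        response = sock.recv(256)
    # Response: 7-byte MBAP header, function code, byte count, register data.
    byte_count = response[8]
    return struct.unpack(">" + "H" * (byte_count // 2), response[9:9 + byte_count])

if __name__ == "__main__":
    (raw,) = read_holding_registers(METER, UNIT_ID, REGISTER, COUNT)
    print("energy reading: %.1f kWh" % (raw / 10.0))
```

In practice a monitoring package polls many such registers across many devices and maps them onto dashboards, which is the layer products like Power Xpert Insight provide.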
Eaton Introduces ESS Plus with Harmonic Reduction System
Eaton announced the launch of its Energy Saver System (ESS) Plus technology in the Americas, featuring a new Harmonic Reduction System engineered to detect and automatically correct harmonic currents before they affect data center hardware and business continuity. As part of its Energy Advantage Architecture system, the ESS Plus enables large, three-phase uninterruptible power systems operating in eco or energy-saving modes to mitigate harmonics, correct power factor and balance loads for increased reliability and efficiency.
“As evident in our research through The Green Grid, one of the industry’s leading organizations committed to data center resource advancement, operating a UPS in eco or energy saver mode is becoming much more prevalent as the need for data center efficiency increases,” said David Loucks, manager, Power Solutions and Advanced Systems, Eaton. “ESS Plus allows information technology and facilities managers to maximize the value and efficiency of ESS in applications where load harmonics, poor power factor, or unbalanced loads exist.”
The Eaton ESS Plus mitigates harmonics in energy saver mode, allowing data center operators to improve UPS efficiency. The feature is designed to eliminate the need for additional equipment, such as a harmonic filter or harmonic mitigating transformer, which reduces capital costs and equipment footprint. It enables the UPS to continually monitor currents drawn by the critical load for harmonic distortion. If it detects harmonics in excess of pre-determined, adjustable limits, the system injects an additional current into the line that is equal in magnitude to, but 180 degrees out of phase with, the harmonic current coming from the load, effectively canceling out the electrical disturbance. ESS Plus is available as an option for the Eaton 9395 UPS.
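The cancellation principle is easy to verify numerically: a harmonic current plus an equal-magnitude current shifted 180 degrees sums to zero at every instant. The sketch below uses made-up waveform values (a 100 A fundamental with a 20 A fifth harmonic) to illustrate the textbook idea; it is not Eaton's control algorithm.

```python
"""Sketch: show that injecting an equal, 180-degree-shifted harmonic current
cancels the load's harmonic. All magnitudes are illustrative only."""

import math

FUNDAMENTAL_HZ = 60.0
SAMPLES = 1200
I_FUND = 100.0    # amps, fundamental component of the load current
I_FIFTH = 20.0    # amps, 5th-harmonic distortion from the load

def thd(fund_amp, harmonic_amps):
    """Total harmonic distortion as a fraction of the fundamental."""
    return math.sqrt(sum(a * a for a in harmonic_amps)) / fund_amp

before = thd(I_FUND, [I_FIFTH])

# sin(x + pi) == -sin(x), so the injected current cancels the load harmonic
# sample by sample over one fundamental cycle.
residual = max(
    abs(I_FIFTH * math.sin(5 * 2 * math.pi * FUNDAMENTAL_HZ * t)
        + I_FIFTH * math.sin(5 * 2 * math.pi * FUNDAMENTAL_HZ * t + math.pi))
    for t in (n / (SAMPLES * FUNDAMENTAL_HZ) for n in range(SAMPLES))
)

print("THD before cancellation: %.1f%%" % (100 * before))              # 20.0%
print("worst residual 5th-harmonic current: %.6f A" % residual)        # ~0
```

The hard part in a real UPS is measuring the harmonic content and synthesizing the opposing current fast enough, which is what the detection limits and injection hardware described above are for.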
“Energy saver UPSs are engineered to save money and enhance environmental sustainability by reducing data center energy waste up to 10 percent under typical load conditions,” said John Collins, product line manager, Large Data Center Solutions, Critical Power Solutions Division, Eaton. “ESS Plus allows data centers to reap the benefits of energy efficiency in addition to advanced power protection, all together in a complete package with a value-driven price structure for both new and existing customers.”
3:00p
Nlyte Software Debuts Nlyte 7.5
Nlyte releases version 7.5 of its DCIM software with an advanced workflow engine, and RF Code launches a new wireless sensor for APC by Schneider Electric 8000 Series PDUs.
Nlyte releases version 7.5. Nlyte Software introduced version 7.5 of its DCIM software, featuring an advanced workflow engine that couples asset lifecycle management with project resource management. New capabilities in 7.5 further enable customers to realize the fiscal benefits of proactive lifecycle management within the data center, improving IT efficiency and productivity. The Nlyte 7.5 Workflow Manager improves operational efficiency through intelligent business process and resource management and enhanced task management. Progress in each project can be tracked as it is made and compared to the original plan, resources can be allocated dynamically based on their availability, and work items can be pushed to individual users with prioritization.
“Nlyte 7.5 delivers on our continued vision to provide the industry’s leading DCIM platform, and it further delivers on our vision of helping customers achieve an efficient data center that is planned, built and documented with the same level of detail as the general ledger in the accounting department,” said Robert Neave, co-founder and CTO of Nlyte Software. “This newest iteration allows any process to be captured or designed, modified, assigned, measured and reported, regardless of how complicated it is, while each asset project can be viewed in the context of the data center as a complex system. Our customers tell us over and over that the biggest impact Nlyte has on their data center is the ability to manage the lifecycles of their assets for the purpose of capacity planning, efficiency and overall operational excellence. Nlyte 7.5 takes efficiency to the next level and provides our customers with the most advanced workflow and policy engine available in a DCIM solution.”
RF Code launches new APC PDU sensor. RF Code introduced the latest in its suite of PDU tags for management of power within data centers. The wire-free R170 PDU Sensor for APC by Schneider Electric 8000 Series PDUs (R170 PDU Sensor for APC) provides the capture, aggregation and calculation of power usage and distribution data to allow data center operators to better manage capacity, improve power distribution efficiency and reduce the costs associated with “stranded” power. The RF Code R170 PDU Sensor for APC eliminates networking costs, using wire-free sensor technology to deliver outlet-level power usage data. Once the RF Code sensor is plugged into an 8000 Series Rack PDU, power usage data is delivered via the sensor network to supported platforms, including RF Code’s Zone Manager middleware and Asset Manager platform.