Data Center Knowledge | News and analysis for the data center industry
Friday, August 23rd, 2013
1:24p
Avaya Launches Software-Defined Data Center Framework

Leveraging OpenStack and its own Fabric Connect technology, Avaya has launched its software-defined data center framework and roadmap. Avaya’s Software-Defined Data Center (SDDC) framework uses the OpenStack cloud computing platform and includes an orchestration process that combines, customizes and commissions compute, storage and network components.
Avaya Fabric Connect further enhances the OpenStack environment by removing the restrictions of traditional Ethernet VLAN / Spanning Tree networks, enabling a dynamic, flexible and scalable network services model. With Fabric Connect serving as a virtual backbone, an open API allows easy integration, customization and interoperability with other software-defined networking architectures. Benefits of the Avaya SDDC framework include reduced time-to-service, simplified virtual machine mobility, multi-vendor orchestration, scale-out connectivity and improved network flexibility.
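Avaya’s announcement stays at the framework level, but orchestration against an OpenStack endpoint generally boils down to API calls like the ones below. This is a minimal sketch using python-novaclient; the credentials, endpoint, image, flavor and network ID are placeholder assumptions, not details from the announcement.

```python
from novaclient import client  # pip install python-novaclient

# Hypothetical credentials and endpoint; a real SDDC deployment supplies its own.
nova = client.Client("2", "admin", "secret", "demo-project",
                     "http://controller.example.com:5000/v2.0")

flavor = nova.flavors.find(name="m1.small")
image = nova.images.find(name="ubuntu-12.04")

# Commission a compute instance attached to a virtual network -- in Avaya's
# framework, a network that Fabric Connect would carry across the fabric.
server = nova.servers.create(
    name="app-server-01",
    image=image,
    flavor=flavor,
    nics=[{"net-id": "NET-UUID-PLACEHOLDER"}],
)
print(server.id, server.status)
```

The point of the SDDC framework is that the network side of such a request is provisioned automatically over the fabric rather than configured by hand.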
“In many ways this is a logical evolution of Avaya’s Data Center networking portfolio,” said Rohit Mehra, vice president, Network Infrastructure at IDC. “Having executed on its vision of using Fabric Connect (based on enhanced Shortest Path Bridging) as an end-to-end architecture, it only makes sense to wrap it with an orchestration and automation enabler like OpenStack. This is a natural and powerful extension for Avaya, and their present and future customers will surely embrace it.”
Avaya’s Software-Defined Data Center framework is the first phase of Avaya’s Software-Defined Networking roadmap. The Avaya Horizon-based Management Platform and open APIs will be generally available next year. Future initiatives include the extension of Fabric Connect and Orchestration to deliver end-to-end service creation and delivery from data center to desktop.
“Avaya continues to innovate the way that networks are designed, built and operated, leveraging the unique capabilities of our Fabric Connect technology,” said Marc Randall, Senior Vice President and General Manager, Avaya Networking. “This announcement demonstrates that enterprises can immediately realize the operational benefits of real-time orchestration and automation. While some remain hung up on definitions of SDN and what it might deliver in the future, Avaya is delivering tangible business benefits today.”
1:45p
Splunk Boosts Visibility for VMware Environments

Splunk updates its application for VMware to version 3.0, NaviSite launches an IaaS platform built on VMware vCloud Director, FalconStor enhances its storage solutions for VMware environments, and HotLink launches a VMware disaster recovery offering that uses Amazon Web Services.
Splunk App for VMware 3.0. Real-time operational intelligence company Splunk (SPLK) announced the general availability of the latest version of the Splunk App for VMware to provide accelerated operational visibility into virtualized environments. Version 3.0 installs quickly and comes with a resilient, auto-load-balanced configuration allowing for uninterrupted visibility into VMware environments. It includes several patent-pending technologies, including visualizations that help analyze the health of virtual environments. The app provides real-time, granular visibility into VMware environments across hosts, virtual machines and virtual centers based on pre-defined thresholds and pre-packaged log analyses.

“The Splunk App for VMware showcases Splunk’s leadership in providing deep levels of analytics across the entire infrastructure. With more than 25 out-of-the-box reports, the Splunk App for VMware shines a light on the entire virtual infrastructure and enables administrators to quickly resolve issues and get deeper analytics about the health, security and capacity of their environments,” said Leena Joshi, senior director of solutions marketing, Splunk. “With Splunk software, customers can achieve a broad, central view of key performance indicators across the entire data center, not just the virtualization layer. This helps ensure improved user satisfaction and effective resource planning as well as the ability to track changes, control costs and eliminate vulnerabilities.”
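The app itself is driven through Splunk’s UI, but the same data is reachable from Splunk’s REST API. Below is a hedged sketch of pulling a per-host CPU average with the splunk-sdk for Python; the host, credentials, sourcetype and field names are assumptions for illustration, not the app’s documented schema.

```python
import splunklib.client as splunk_client  # pip install splunk-sdk
import splunklib.results as results

# Hypothetical connection details for a Splunk instance running the VMware app.
service = splunk_client.connect(host="splunk.example.com", port=8089,
                                username="admin", password="changeme")

# One-shot search: average CPU utilization per ESXi host over 15 minutes.
query = ("search sourcetype=vmware:perf earliest=-15m "
         "| stats avg(cpu_used_pct) AS avg_cpu BY host")

for item in results.ResultsReader(service.jobs.oneshot(query)):
    if isinstance(item, dict):  # skip diagnostic messages in the stream
        print(item["host"], item["avg_cpu"])
```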
NaviSite launches NaviCloud Director IaaS platform. NaviSite, a Time Warner Cable company, announced NaviCloud Director, a VMware vCloud-powered Infrastructure as a Service platform that uses VMware vCloud Director 5.1 and is designed to give businesses a more flexible, customizable, enterprise-class, production-oriented hybrid cloud environment. NaviCloud Director will give customers access to VMware’s APIs and developer community. In addition, customers will be able to customize their networking architecture to map specifically to their technology philosophy. Advanced users, such as those with in-house vCloud Director experience, will be able to access the native vCloud Director user interface for further flexibility. “NaviSite has always designed its solutions to meet customer needs, and introducing NaviCloud in 2010 was the first step to taking our customers to the cloud,” said David Grimes, Chief Technology Officer at NaviSite. “Today we recognize that customers are more savvy about cloud computing, so we’re excited to offer them a solution that will give them a truly user-defined cloud environment, but with the support of NaviSite’s Army of Experts should they need it.”
FalconStor offers storage solution enhancements. FalconStor Software (FALC) announced that the latest versions of its Continuous Data Protector (CDP) and Network Storage Server (NSS) offer enhanced features for intelligently moving, storing, protecting and analyzing data without burdening production resources. The RecoverTrac tool now has more flexibility to automate failover and failback with greater accuracy in any physical, virtual or hybrid environment, including VMware and Hyper-V deployments. “FalconStor’s enhanced data protection and DR technologies will provide our customers with improved performance and high-availability storage environments,” said Christopher Peyton, operations manager, hosted services, at Open Storage Solutions, a business partner of FalconStor Software. “The automated stretch cluster configurator will be vital in mobilizing customers’ data to remove the risk of data center failure and to improve business continuity operations. In addition, as companies continue to implement private, public or hybrid cloud solutions, the secure data migration feature of the FalconStor NSS solution will be a key component of these clouds.”
HotLink launches HotLink DR Express. HotLink announced HotLink DR Express, the industry’s first disaster recovery and business continuity solution that leverages Amazon Web Services to economically protect all types of VMware vSphere virtual machines. It is a plug-in for VMware vCenter that integrates robust data protection with day-to-day operational management of VMware Windows and Linux workloads. Built on the patented HotLink Transformation Engine, HotLink DR Express enables VMware users to benefit from intuitive DR/BC in Amazon EC2 at a practical cost. “HotLink DR Express is incredibly easy to deploy. Within only 30 minutes the product was installed, and we had our first three virtual machines protected in Amazon. The two most impressive features are the simplicity and tight integration with VMware vCenter. Protecting virtual machines is as easy as checking a box,” said Michael Warchut, senior network engineer at Monsoon Commerce. “The fact that it’s fully integrated into VMware vCenter means we can manage our DR site in the same way we manage our production servers. We definitely see HotLink DR Express as an easy, economical and powerful solution for protecting all types of workloads in our environment.”
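HotLink doesn’t detail its internals here, but once workloads are replicated into EC2, auditing them from a script is straightforward. A small sketch with boto, the era’s AWS SDK for Python; the region and the tag used to mark DR replicas are assumptions for illustration:

```python
import boto.ec2  # pip install boto

# Hypothetical: list the state of instances tagged as DR replicas.
conn = boto.ec2.connect_to_region("us-east-1")
reservations = conn.get_all_instances(filters={"tag:Role": "dr-replica"})
for reservation in reservations:
    for instance in reservation.instances:
        print(instance.id, instance.state)
```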
2:00p
IBM Adds Third Data Center in Emerging Market of Peru

IBM is opening a new $8 million data center in Lima, Peru, citing increasing demand for information technology services in the emerging market, particularly around cloud and big data. This is the third IBM facility in the country, complementing an existing data center at the Technology Campus of La Molina and another in the district of San Isidro in Lima.
“Our extensive experience in data centers offers private and public institutions in Peru new ways to innovate, build smarter systems and gain competitive advantages that contribute to their business growth and sustainability,” said Ricardo Fernández, IBM Peru General Manager.
The opening of the new data center is part of IBM’s commitment to Peru, where IBM has continuously operated for more than 81 years. In the past 10 years, the company has invested US$38 million in Peru – including this new data center.
The data center will allow local companies to transform their front office business operations and benefit from significant efficiencies through cloud computing and Big Data analytics.
Latin America has immense opportunity as a data center and cloud market, and IBM has long been investing there for the future. IBM has made multi-million-dollar investments in Latin America since 2009, opening nine IT Services Centers in Brazil, Mexico, Costa Rica, Chile, Colombia, Peru and Uruguay, which provide 24/7 services. Globally, IBM has more than 400 data centers.
2:00p
Friday Funny: How Remote Do You Go?

Congratulations! You’ve made it to Friday, which means it’s the end of the work week and time for the weekend — a brief respite from the daily grind. But before you take off, give us your ideas for a caption for our new Data Center Knowledge cartoon.
First, let’s congratulate reader DDayy2K for submitting “I was told you needed a wireless mouse” for the last cartoon, A Mouse in the Data Center.
For this week, Diane Alber, our favorite cartoonist, writes, “So there has been a huge trend the past couple of years of data centers being developed in the most remote locations, or ‘cow country’ ... only this time it looks like Kip and Gary have some unwanted guests!”

New to the caption contest? Here’s how it works: we provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for the funniest suggestion. The winner receives a hard-copy print with his or her caption included in the cartoon!
For the previous cartoons on DCK, see our Humor Channel. Please visit Diane’s website Kip and Gary for more of her data center humor.
2:03p
Welcome to Fog Computing: Extending the Cloud to the Edge
It’s been far too long since we’ve had another hot tech buzz term. But new conversations are beginning to emerge around Fog Computing. Closely resembling the concepts of cloud computing, the Fog aims to take services, workloads, applications and large amounts of data and deliver it all to the edge of the network. The goal is to provide core data, compute, storage, and application services on a truly distributed level.
Fog takes the data and workload technology to a new level. We’re now talking about edge computing – the home of the Fog.
Data is now being delivered in large quantities to many more users. To get the most out of the cloud, organizations need a way to deliver content to end users through a more geographically distributed platform. The idea of fog computing is to distribute data, moving it closer to the end-user to cut latency and eliminate unnecessary hops, and to support mobile computing and data streaming. Already, we’re seeing “everything-as-a-service” models. Users are asking for more data access from any device, at any time, from anywhere. This means the future of the cloud must support the idea of the “Internet of Everything” (IoE). That’s where fog computing comes in.
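Stripped to its essence, “moving data closer to the user” starts with routing each request to the nearest edge site. A toy Python sketch of that decision; the node list and coordinates are invented for illustration:

```python
import math

# Hypothetical edge (fog) sites keyed by name, with (lat, lon) coordinates.
EDGE_NODES = {
    "us-east": (40.7, -74.0),
    "eu-west": (51.5, -0.1),
    "ap-south": (1.35, 103.8),
}

def haversine_km(p, q):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, p)
    lat2, lon2 = map(math.radians, q)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(user_location):
    """Route a user to the geographically closest edge node."""
    return min(EDGE_NODES, key=lambda n: haversine_km(user_location, EDGE_NODES[n]))

print(nearest_edge((48.9, 2.4)))  # a user near Paris lands on "eu-west"
```

Production systems use anycast routing and DNS-based traffic steering rather than raw coordinates, but the goal is the same: fewer hops between the user and the data.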
Applications and use-cases
The term “fog computing” has been embraced by Cisco Systems as a new paradigm supporting wireless data transfer to distributed devices in the “Internet of Things.” A number of distributed computing and storage startups are also adopting the phrase. It builds upon earlier concepts in distributed computing, such as content delivery networks, but allows the delivery of more complex services using cloud technologies.
Before you get confused by yet another technology term, it’s important to understand where fog computing plays a role. Although the term is new, the technology already has a place within the world of the modern data center and the cloud.
- Bringing data close to the user. The volume of data being delivered via the cloud creates a direct need to cache data and other services as close as possible to the end-user, reducing latency and improving data access. Instead of housing information at data center sites far from the end-point, the Fog aims to place the data close to the end-user.
- Creating dense geographical distribution. Fog computing extends direct cloud services by creating an edge network that sits at numerous points. This dense, geographically dispersed infrastructure helps in numerous ways. First, big data and analytics can be processed faster, with better results. Second, administrators can support location-based mobility demands without traversing the entire WAN. Finally, these edge (fog) systems would be built so that real-time data analytics become a reality on a truly massive scale.
- True support for mobility and the IoE. As mentioned earlier, the number of devices and the amount of data we use keep increasing. Administrators are able to leverage the Fog to control where users come in and how they access information. Not only does this improve user performance, it also helps with security and privacy. By controlling data at various edge points, fog computing integrates core cloud services with a truly distributed data center platform. As more services are created to benefit the end-user, edge and Fog networks will become more prevalent.
- Numerous verticals are ready to adopt. Many organizations are already adopting the concept of the Fog, and many different types of services aim to deliver rich content to the end-user. This spans IT shops, vendors and entertainment companies alike. Take Netflix, for example: with so many users all over the world, centralizing all of the content within one or two data centers would make the delivery process a nightmare. To deliver large amounts of streamed content, fog computing places the data at the edge, close to the end-user.
- Seamless integration with the cloud and other services. The idea isn’t to replace the cloud. With Fog services, we’re able to enhance the cloud experience by isolating the user data that needs to live on the edge. From there, administrators are able to tie analytics, security or other services directly into their cloud model. This infrastructure still maintains the concept of the cloud while incorporating the power of fog computing at the edge (a minimal edge-cache sketch follows this list).
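The caching pattern behind several of these points is simple: serve from the edge when a fresh copy exists, and fall back to the core cloud otherwise. A minimal Python sketch, assuming a hypothetical origin URL and an in-memory cache standing in for edge storage:

```python
import time
import urllib.request

ORIGIN = "http://origin.example.com"  # hypothetical central cloud origin
CACHE_TTL = 300                       # seconds an object stays fresh at the edge

_cache = {}  # path -> (expires_at, body)

def fetch(path):
    """Serve from the edge cache when fresh; otherwise pull from the origin."""
    entry = _cache.get(path)
    if entry and entry[0] > time.time():
        return entry[1]  # edge hit: no WAN round trip back to the core
    body = urllib.request.urlopen(ORIGIN + path).read()  # miss: go to the cloud
    _cache[path] = (time.time() + CACHE_TTL, body)
    return body
```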
As more services, data and applications are pushed to the end-user, technologists will need to find ways to optimize the delivery process. This means bringing information closer to the end-user, reducing latency and preparing for the Internet of Everything. There is no doubt that IT consumerization and BYOD will only keep growing. More users rely on mobile devices to conduct their business and personal lives. Rich content and vast numbers of data points are pushing cloud computing platforms, quite literally, to the edge – where users’ requirements continue to grow.
With the increase in data and cloud services utilization, fog computing will play a key role in reducing latency and improving the user experience. We are now truly distributing the data plane and pushing advanced services to the edge. By doing so, administrators are able to bring rich content to the user faster, more efficiently, and – very importantly – more economically. This, ultimately, will mean better data access, improved corporate analytics capabilities, and an overall improvement in the end-user computing experience.
2:27p
Schneider Electric’s DCIM Tool Leverages Intel for Remote KVM

A Schneider Electric team member demonstrates the company’s DCIM software at its booth at a recent Data Center World conference. (Photo: Colleen Miller)
Energy management specialist Schneider Electric has updated its Data Center Infrastructure Management (DCIM) software to provide server access without the need for additional hardware. The company built the new StruxureWare module on Intel’s recently released Virtual Gateway technology to provide full server lifecycle access and power cycling for remote management.
“By partnering with Intel to provide an integrated software KVM and DCIM approach for managing the data center, we’re continuing to bridge the gap between IT and facilities,” says Soeren Jensen, vice president of Enterprise Management and Software for Schneider Electric. “As the first DCIM vendor to offer software-only server access capabilities, we view Server Access as an important component to improving energy efficiency in data centers and facilities.”
IT managers, data center operators and facility managers can now launch, manage, troubleshoot and control servers directly from the Server Access module, which combines DCIM technology with Intel’s software KVM (keyboard video mouse) technology. Not only does Server Access help further bridge the gap between IT and facilities, but by using software KVM technology and eliminating the need for hardware, Server Access can reduce technology costs by up to 50 percent.
“Intel and Schneider Electric are bridging facilities and IT by offering vKVM (virtual keyboard video mouse) and DCIM in one integrated product suite,” says Jennifer Koppy, research manager for IDC’s Datacenter Trends & Strategies team. “Virtualization and cloud computing disaggregate IT from physical systems and make adding new workloads as easy as deploying a virtual machine. The connection between facilities and IT – enabled by StruxureWare for Data Centers – is critical because these new workloads affect power, cooling and connectivity, and have an overall impact on efficiency and capacity.”
StruxureWare offers a view from the facility down to the server level, including a physical model of server locations, which enables identification of potential issues such as power or cooling impact.
Intel released Virtual Gateway last July; it is a virtual, rather than hardware-based, approach to managing and troubleshooting. “With our recently launched Intel Virtual Gateway plug-in, we’re introducing an evolution from the legacy hardware-based KVM solution to a virtual solution that offers a simpler way to access and manage simultaneous IT devices via a remote console,” said Intel’s Jeff Klaus. “We’re excited to partner with Schneider Electric as one of the first providers to integrate this new capability. Now users can more easily manage multiple, disparate servers and appliances, from conducting diagnostics and troubleshooting to analyzing server logs and making configuration changes.”
Data Center Operation: Server Access provides:
- Console access: Remotely control and manage IT devices through software KVM for lights out data center management.
- One-to-many device control: View, configure and control multiple vendors’ IT devices through one console for secure and easy server management.
- Power cycling: Access servers remotely, whether they are turned on or off, for instant control and reboot.
- Physical location: Provides visibility to exactly where servers are placed within the data center for an accurate inventory and overview.
- Software KVM: Reduce costs by eliminating the need for physical KVM switches in the data center.
- In- and out-of-band management: Reach affected devices by accessing the server operating system through a primary network, or use a secondary business-critical network accessed through the baseboard management controller.
- Multi-vendor device support: Provides support for multiple types of IT assets and hardware platforms.
- OS access: Connect to the operating system via Remote Desktop Protocol (RDP), Secure Shell (SSH) and Virtual Network Computing (VNC). (A rough sketch of the in-band and out-of-band paths follows this list.)
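To ground the list above: in-band access reaches the OS over the normal network, while out-of-band access goes through the BMC and works even when the OS is down. A hedged sketch of both paths from a script, using paramiko and the ipmitool CLI; host names and credentials are placeholders, and this illustrates the general technique rather than Schneider’s product internals:

```python
import subprocess

import paramiko  # pip install paramiko

HOST = "server-42.example.com"  # hypothetical managed server

# In-band: reach the server OS over SSH on the primary network.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username="admin", password="secret")
_, stdout, _ = ssh.exec_command("uptime")
print(stdout.read().decode().strip())
ssh.close()

# Out-of-band: query power state through the baseboard management
# controller, which stays reachable even when the host OS is down.
subprocess.check_call([
    "ipmitool", "-I", "lanplus",
    "-H", "bmc-" + HOST,        # hypothetical BMC address on a secondary network
    "-U", "admin", "-P", "secret",
    "chassis", "power", "status",
])
```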
9:15p
Steve Ballmer to Retire as Microsoft CEO

Microsoft Corp. today announced that Chief Executive Officer Steve Ballmer has decided to retire as CEO within the next 12 months, after a special committee of the company’s board names a new CEO. In the meantime, Ballmer will continue as CEO.
“There is never a perfect time for this type of transition, but now is the right time,” Ballmer said. “We have embarked on a new strategy with a new organization and we have an amazing Senior Leadership Team. My original thoughts on timing would have had my retirement happen in the middle of our company’s transformation to a devices and services company. We need a CEO who will be here longer term for this new direction.”
The committee to choose a successor will include Microsoft chairman and co-founder Bill Gates.
“As a member of the succession planning committee, I’ll work closely with the other members of the board to identify a great new CEO,” said Gates. “We’re fortunate to have Steve in his role until the new CEO assumes these duties.”
In early trading, shares of Microsoft (MSFT) moved higher on the news, rising $2.30 per share to $34.68, up about 7 percent.
We’ll have more to come on this story.