Data Center Knowledge | News and analysis for the data center industry
Friday, January 17th, 2014
12:30p |
CiRBA Adds Hyper-V Support in Latest Software Update Data center software provider CiRBA has released version 8, which adds Microsoft Hyper-V support to its Control Console, enabling organizations to optimize infrastructure for high density. The CiRBA platform, which focuses on automated capacity management, already supports VMware ESX, IBM PowerVM, and Red Hat Enterprise Virtualization.
“The shift toward cloud is placing less emphasis on the specific hypervisor technology, and more on the capabilities it provides,” said Andrew Hillier, CTO and co-founder of CiRBA. “Having a scientific way to make hosting decisions across all hypervisors and hosting platforms really opens up the playing field, and allows organizations to focus on the bigger picture of enterprise-level supply and demand.”
Concerns about vendor lock-in and hypervisor costs are driving more and more organizations toward multi-hypervisor adoption. In fact, according to research by Torsten Volk of EMA, 82 percent of organizations plan to adopt more than one hypervisor.
“We are getting demand for Hyper-V,” said Hillier. “We want to make sure that we cover all the major technologies that businesses use, and let them choose new options. The beauty of the control console is we can see the same thing for a multitude of environments – the components are widely different, but the view is uniform. It allows companies to use one paradigm for all of their infrastructure.”
Version 8 also includes the new Reservation Console announced last fall. The Reservation Console automates the entire process of selecting the optimal hosting environment for new workloads and reserving compute and storage capacity. In combination with CiRBA’s cross-platform support, it allows customers to automate “fit for purpose” placements for new workloads across multi-hypervisor, multi-SLA, and multi-site virtual and cloud environments.
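CiRBA hasn’t published the internals of its placement engine, but the general shape of automated “fit for purpose” placement can be sketched as constraint filtering plus ranking: discard environments that fail hard requirements (SLA tier, free capacity), then rank the survivors, for instance by tightness of fit. The environment names, attributes and scoring below are invented purely for illustration:

```python
# Illustrative sketch only: a toy "fit for purpose" placement scorer.
# The environment names, attributes, and ranking rule are hypothetical;
# CiRBA's actual scoring model is not public.

def score(workload, env):
    """Return a fit score, or None if the environment cannot host the workload."""
    if workload["sla"] not in env["slas"]:
        return None  # hard constraint: required SLA tier unavailable
    cpu_head = env["cpu_free"] - workload["cpu"]
    mem_head = env["mem_free"] - workload["mem"]
    if cpu_head < 0 or mem_head < 0:
        return None  # hard constraint: insufficient free capacity
    # Prefer the tightest fit that still satisfies demand (bin-packing style).
    return -(cpu_head + mem_head)

def place(workload, environments):
    """Pick the best-fitting environment across hypervisors and sites."""
    candidates = [(score(workload, e), e["name"]) for e in environments]
    viable = [(s, n) for s, n in candidates if s is not None]
    return max(viable)[1] if viable else None

envs = [
    {"name": "esx-east",    "slas": {"gold", "silver"}, "cpu_free": 16, "mem_free": 64},
    {"name": "hyperv-west", "slas": {"silver"},         "cpu_free": 32, "mem_free": 128},
]
print(place({"cpu": 8, "mem": 32, "sla": "gold"}, envs))  # esx-east
```

The hard constraints do the “multi-SLA” filtering, while a uniform scoring function is what lets one paradigm span otherwise very different hypervisor platforms.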
Multi-hypervisor adoption, particularly within private clouds, can present a significant management challenge for organizations in determining which workloads should be hosted on each respective platform. In order to help combat the complexity, organizations need to change how they make workload placement decisions. | 1:20p |
2014 is the Year Servers Get ‘Smart,’ Hybrid Cloud Grows Up Robert Miggins is the senior vice president of business development for Peer 1 Hosting. He has worked for more than 14 years in IT infrastructure, including sales, marketing, product development and operations.
ROBERT MIGGINS Peer 1 Hosting
According to Gartner analysts, more than half of IT budgets will be spent on cloud computing in the next few years. There’s no need to speculate on whether cloud adoption will continue to rise in 2014 and beyond – it will. However, exactly how cloud infrastructure is deployed and managed is up for grabs.
While both public and private cloud deployments are on the rise, it’s hybrid cloud environments that currently provide organizations with cloud computing’s best benefits – a trend that will shift in 2014 as the next generation of dedicated servers, or “smart servers,” evolves to offer all of hybrid cloud’s core qualities while remaining manageable as one environment.
Today’s hybrid cloud models are complex environments that combine a single-tenant private architecture with a multi-tenant cloud, in order to achieve private cloud’s security and performance benefits along with public cloud’s high availability and scalability. Many service providers and systems integrators have emerged to address the challenges of deploying a hybrid cloud, yet there are obstacles that these providers can’t address or eliminate because of hybrid’s inherent complications. Smart servers employ a thin layer of virtualization which gives them scale and flexibility, offering a simpler and more cost-efficient alternative, while still retaining the basic reliability of bare metal infrastructure.
Smart servers are essential to the next generation of hybrid environments, due in part to their simplicity and ease of deployment. First, they can be deployed in a matter of minutes, rather than the hours or days it takes to ramp up standard dedicated servers. They also cut down on the time it takes to scale up resources – for example, a standard server needs to be shut down, reconfigured and manually rebooted to add RAM, while a smart server can simply scale up available RAM through its thin hypervisor layer, eliminating downtime and optimizing performance to create a better user experience.
As smart servers become more standard, we’ll see cloud hosting truly become an on-demand service, providing CPU, RAM and storage where and when organizations need those resources. This is the utility-style computing that cloud technology has always promised, and it’s going to become much more real in 2014 as smart infrastructure provides all of hybrid cloud’s capabilities – performance, security, availability and flexibility – from a single box.
While cloud computing is still relatively new, it’s evolved rapidly in a short period of time. It still has its limitations, with ease of use and seamless manageability a long way off, but cloud’s enterprise-readiness is a proven fact. As infrastructure becomes more intelligent and fluid, for example by incorporating smart servers, it will give enterprises the ability to customize their infrastructure to their exact resource needs – a far cry from the pre-configured cloud environments of yesterday. Here’s to a much smarter hybrid cloud in 2014.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 2:00p |
Extreme Networks Named NFL Wi-Fi Analytics Provider  MetLife Stadium in East Rutherford, N.J. will be the site of the Super Bowl on Feb. 2. The NFL has teamed with Extreme Networks to improve Wi-Fi connectivity at MetLife and several other stadiums. (Photo: Rich Miller)
Ever tried to get Wi-Fi in a football stadium? In a step toward improving connectivity in its venues, the National Football League has named Extreme Networks (EXTR) as the official Wi-Fi Analytics provider for the League and Super Bowl XLVIII. The sponsorship underscores the NFL’s commitment to improving the in-stadium experience for fans and the critical role big data and analytics play in delivering on that goal for today’s highly connected fans.
“Technology is transforming fan experiences,” said Crawford Del Prete, chief research officer at IDC. “The NFL is at the forefront of this change. High-density Wi-Fi and analytics provide a true differentiator for connecting teams and fans – offering them a one-of-a-kind game experience. Extreme Networks has proven its expertise in this area with the New England Patriots and Philadelphia Eagles, and is expanding that across the NFL. Innovations around Wi-Fi and network visibility have valuable applicability across a number of industries, including education, manufacturing and healthcare.”
Working at Super Bowl Venue
Extreme Networks’ intelligent Wi-Fi analytics technology is currently deployed for the Lions, Eagles, Patriots and New York Giants/New York Jets. The decision to deploy Extreme Networks’ Wi-Fi analytics technology is part of the NFL’s overall plan to improve the stadium experience through guidelines established by the League for Wi-Fi connectivity. This provides unprecedented near real-time visibility into what fans expect from their in-stadium experience, enabling each team to more easily deploy new applications and services to better the experience.
“The introduction of mobile and social technology has dramatically changed the fan experience and access to high-performance Wi-Fi has emerged as a necessary asset,” said Chuck Berger, president and CEO, Extreme Networks. “Combined with our analytics technology, we are providing the NFL with the insights needed to bring a rich and digitally immersive game-day event to all fans. It’s not just about delivering connectivity, it’s about helping deliver an experience and that’s the key.”
“Installing high-density Wi-Fi in our stadium in 2012 allowed the New England Patriots to deliver an advanced in-stadium experience to all of our fans and guests,” said Fred Kirsch, vice president of content, New England Patriots. “With the network in place, we can now offer compelling content seen only at the game. Plus, with deep-dive analytics, we have the knowledge and tools to provide fans the connectivity they’ve come to expect. Deploying pervasive Wi-Fi was the critical first step, and Extreme’s solution was exemplary. The use of analytics will provide a deeper layer of visibility into how fans are experiencing games and will help guide future improvements and next-gen applications.” | 2:32p |
VMware and Capgemini Expand Partnership for Cloud Orchestration Capgemini and VMware (VMW) have expanded a strategic partnership to jointly develop new solutions that extend Capgemini’s service integration, aggregation and orchestration platform by leveraging VMware’s cloud management offerings. The new solutions will combine Capgemini’s service integration and orchestration solutions with VMware vCloud Automation Center, vCenter Operations Management Suite and VMware IT Business Management Suite. The companies said these joint offerings will help enterprises worldwide simplify cloud management complexity, maximize operational efficiency and increase IT and business agility, while improving quality of service.
“In order to support a globally connected workforce, cloud solutions that are efficient, effective and bring real-time business insights are essential,” comments Raf Howery, Senior Vice President and Head of Infra Strategy and Ecosystem for Capgemini. “Expanding our VMware partnership with the introduction of our new business cloud solution is a key part of our cloud orchestration strategy, allowing enterprises to better manage the complexity of their IT transformation journey. It will enable them to transition to cloud with greater flexibility and simplicity and to obtain resources across legacy, public, private or hybrid environments. They will see an immediate business impact.”
The joint solutions will help improve financial and service level management of cloud services and providers via a real-time dashboard that can provide CFOs with visibility into LOB usage and IT spend. Similarly, CIOs will gain transparency into overall application usage, allowing for greater control over the support of business processes and enabling true collaboration across the enterprise ecosystem. The platform will also allow businesses to consume services in a rapid and efficient manner.
“VMware Cloud Management solutions have been designed to meet the demands of IT-as-a-Service – self-service, scale, velocity of change, shared infrastructures – as well as the modern applications they support. This industry leading cloud management solution, along with Capgemini’s power and expertise to implement large-scale successful enterprise solutions significantly expands our partnership and joint value for our customers,” said Ramin Sayar, Senior Vice President and General Manager, Cloud Management Business Unit, VMware. “I am excited about our joint investment and strategy, which will span across multiple business and technology practices, and will provide unique and differentiated value to the industry and our enterprise customers.” | 3:00p |
Juniper Introduces Firefly Security Suite Juniper Networks (JNPR) unveiled Firefly Suite, a virtualized security portfolio that provides granular, dynamic and secure connectivity for the private and public cloud. The suite introduces Firefly Perimeter, a virtual version of the Juniper Networks SRX Series Services Gateway, as well as Junos Space Virtual Director, an application that automates the management and deployment of Firefly Perimeter.
“IT leaders have been seeking innovative solutions to secure both the infrastructure and data that can keep up with the rigorous demands of their business and leverage the cloud in a smart and evolutionary way,” said Michael Callahan, vice president of global product marketing, Security Business Unit at Juniper Networks. “Juniper’s Firefly Suite allows companies to attach, create and manage security policy across physical and virtual firewalls with a high level of flexibility, supporting error-free, fast scale-out deployment for the most demanding environments. We provide complete protection for the cloud and from the cloud.”
Firefly Suite can be easily embedded throughout the virtual environment, including in the hypervisor itself or as VMs connected to the various virtual networks, allowing admins to provide tailored security, automation and control. This approach accelerates service rollout and increases application agility with granular protection that is highly scalable. The three main components of Firefly include: Firefly Perimeter – a software-based version of the Juniper SRX Series Services Gateway; Junos Space Virtual Director – a new application with full lifecycle management of Firefly Perimeter VMs; and Firefly Host – a purpose-built firewall for virtualization designed to protect intra-VM traffic.
“With Juniper’s Firefly Security Suite, we can rapidly onboard new tenants by provisioning virtual firewalls instead of the typical 30-60 day onboarding cycle for dedicated physical hardware,” said Brian Gubish, Product Development Architect, Expedient. “We can now move from the design limitations of a monolithic architecture to diversified firewall implementation despite the shared public cloud environment. We are able to offer rapid disaster recovery, personalized log information and more importantly a portal to our customers for self-service management.” | 4:00p |
Pacnet Opens Data Center in Singapore  Pacnet SGCS2 opening ceremony included colorful festivities. Pictured from left: Jim Fagan, President for Managed Services at Pacnet; Marcus Cheng, CEO at Acclivis; Carl Grivner, CEO at Pacnet; Jacqueline Poh, Managing Director of Infocomm Development Authority of Singapore; Giles Proctor, Vice President for Data Center Construction and Operations at Pacnet.
Asia-Pacific network provider Pacnet has opened CloudSpace II (SGCS2), which has been recognized with Uptime Institute’s Tier III Certification of Design Documents. The $90 million facility is one of Pacnet’s major data centers built to help meet the rapidly growing demand for interconnected, advanced data and managed services in the Asia-Pacific region.
“With Singapore firmly entrenched as a dynamic financial hub, the Infocomm Development Authority’s recent push to develop into a big data hub translates into a rising need for data center services,” said Giles Proctor, Vice President of Data Center Construction and Operations at Pacnet.
The eight-story, 155,000-square-foot facility is carrier neutral and is directly connected to Pacnet’s submarine cable network. It is strategically located in Paya Lebar. The facility is also designed in accordance with the BCA Green Mark scheme – the equivalent of the Leadership in Energy and Environmental Design (LEED) certification by the U.S. Green Building Council.
SGCS2 is designed to comply with the Monetary Authority of Singapore (MAS) technology risk management guidelines which address a range of potential security threats. The SGCS2 facility has undergone a threat vulnerability risk assessment (TVRA) to ensure it meets the security requirements of the financial services industry.
“We congratulate Pacnet on achieving the first milestone in the Tier Certification process with its Tier III Design Certification. This Design Certification proves that Tier III concepts may be applied in Singapore—an established data center epicenter with unique characteristics,” said Thomas Baehr, Senior Director of APAC Development of Uptime Institute. | 4:02p |
Internap Showcases New Secaucus Data Center  Internap CEO Eric Cooney speaks Thursday at the company’s grand opening reception for its new data center in Secaucus, New Jersey. (Photo: Rich Miller)
SECAUCUS, N.J. – Internap Network Services unveiled its newest data center in Secaucus last night, offering a look inside a facility that will be a key driver of the company’s ambitions in the greater New York market. Internap executives said the 101,000-square-foot data center will offer higher density than competing facilities, as well as slightly higher elevation – a key consideration for an area of New Jersey that was hit hard by Superstorm Sandy.
“In the last two years, we’ve seen a big increase (in density),” said Mike Higgins, Senior Vice President of Data Center Services for Internap. “This design enables us to support 18 kW a rack. This is a major future-proofing advantage.”
Higgins’ presentation also included a detailed look at maps of Secaucus highlighting the 100-year and 500-year flood plains, a reflection of the post-Sandy world in which customers place additional emphasis on researching worst-case scenarios. The Internap facility is outside of the flood plains in Secaucus, a low-lying area alongside the Hackensack River and New Jersey’s famous Meadowlands wetlands. None of the town’s data centers were flooded during Sandy, but access to some facilities was impeded by flooded streets.
Cross-Hudson Migration in 2014
The operators of many new data centers aren’t certain where they will find their future customers. The equation is simpler for Internap, as many of the customers for the Secaucus site will migrate from the company’s data center at 111 8th Avenue in Manhattan, where Internap’s lease with building owner Google will expire at the end of the year. Internap says it will be migrating customers out of 111 8th Avenue and into Secaucus throughout 2014.
Migration places a high priority on execution. But it’s become old hat for some New York area end users, as noted by Shai Peretz, the Senior VP of Operations at content specialist Outbrain, an Internap customer at 111 8th.
“Over the past two years, we’ve moved several times,” said Peretz. “We just moved into (111 8th) a year ago, during Sandy. Our previous provider went down for a week, so we had to move.”
Peretz said Outbrain will be transporting about 1,000 servers across the Hudson to Secaucus. “This is a big move for us, and we’ll need to plan it carefully,” he said.
22 Megawatts of Readiness
Internap already has its first 13,500-square-foot data hall ready and waiting. The company has 5 megawatts of power in the building, with the ability to expand up to 22 megawatts and 60,000 square feet of technical space. The property was built by Prologis in 2012, and Internap moved in last July and began building out its data center infrastructure. Each data hall features three-foot raised floors, with power and cooling equipment housed in separate equipment galleries, allowing Internap to make the most of every square foot of white space, as well as providing additional security by ensuring that vendors don’t need to enter the main data hall.
Internap now has more than 1 million square feet of data center space, spread across nine facilities in North America, where it houses infrastructure for customers including Costco, HBO, JP Morgan Chase, Microsoft, Nokia, Amgen, Southwest Airlines, Delta and McGraw-Hill.
That footprint supports a suite of services that spans colocation, managed hosting and cloud computing. Operating an advanced, distributed infrastructure can require all these components, according to Raj Dutt, Senior Vice President of Technology at Internap.
Colo AND Cloud
A panel at Thursday night’s event showcased three Internap customers – Outbrain, digital image specialist Shutterstock and eXelate, which provides analytics for marketers. All three run advanced infrastructure across multiple data centers, incorporating elements of DevOps-style agility and configuration management. All are using colocation and, in several cases, bare metal servers.
Dutt says this reflects the complex nature of infrastructure solutions.
“There’s a lot of religion and hype about cloud,” said Dutt, citing arguments that “scale-out” cloud architectures have left “scale-up” enterprise models in the dust. “At Internap, we fundamentally believe the opportunity resides right in the middle – helping customers scale out, but with the reliability and control of a colo environment. On a long-term workload, colocation is going to cost you less.
“Don’t follow the cloud hype,” he said. “Build an infrastructure that combines colo and cloud, and makes sense for your apps.”
 A look inside one of the UPS rooms at the new Internap data center in Secaucus, New Jersey. (Photo: Rich Miller).
| 5:55p |
Friday Funny: What’s Up With That Phone? Happy Friday! The end of the work week is nigh, and it’s certainly high time for some humor. So we present our data center cartoon that needs a caption. Scroll down and add your suggestion.
Diane Alber, creator of Kip and Gary, writes, “So there has been a lot of talk with BYOD, and in a data center environment, BYOD can be critical since cell phone service is usually a hit or miss.”
Also, a big congratulations to Steven Swanberg for the Robots in the Data Center cartoon caption, “So that explains the new oil and lube machine in the coffee break room.”
The caption contest works like this: We provide the cartoon and you, our readers, submit the captions. We then choose finalists and the readers vote for their favorite funniest suggestion. The winner will receive his or her caption in a signed print by our artist Diane Alber.
For the previous cartoons on DCK, see our Humor Channel. | 9:18p |
IBM Commits $1.2 Billion To Cloud, Adding 15 Global Data Centers  IBM SoftLayer CEO Lance Crosby examines servers at the IBM SoftLayer data center in Dallas. IBM is committing more than $1.2 billion to significantly expand its global network of cloud data centers. (Photo: IBM)
IBM will commit over $1.2 billion to significantly expand its global cloud footprint, the company said today. IBM plans to deliver cloud services from 40 data centers worldwide in 15 countries, adding 15 new data centers to its existing 13 SoftLayer facilities and 12 IBM data centers.
Data centers will be added in China, Washington, D.C., Hong Kong, London, Japan, India, Canada, Mexico City and Dallas. With this investment, IBM plans to have data centers in all major geographies and financial centers. The company plans to expand in the Middle East and Africa in 2015.
“IBM is continuing to invest in high growth areas,” said Erich Clementi, senior vice president of IBM Global Technology Services. “Last year, IBM made a big investment adding the $2 billion acquisition of SoftLayer to its existing high value cloud portfolio. Today’s announcement is another major step in driving a global expansion of IBM’s cloud footprint and helping clients drive transformation.”
Since 2007, IBM has invested more than $7 billion in 15 acquisitions to accelerate its cloud initiatives and build a high value cloud portfolio. The company says it holds 1,560 cloud patents and processes more than 5.5 million client transactions daily through IBM’s public cloud. IBM hopes to achieve $7 billion in annual cloud revenue by 2015.
By acquiring SoftLayer last year, IBM gained a cornerstone for its cloud offerings. This type of investment is one reason that SoftLayer viewed IBM favorably as a suitor, as Big Blue’s deep pockets can grow its infrastructure faster than SoftLayer could have as a stand-alone operation.
The Global Nature of Data Movement
Part of what’s driving this expansion is the need to store cloud infrastructure globally. The combination of distributed local data centers and a global network allows clients to place data where it’s required and to consolidate it as needed. One of our cloud predictions for 2014 was that the lines between cloud and CDN would continue to blur; a global footprint like this allows customers to optimize application performance and responsiveness.
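As a toy illustration of that data-placement idea (the site names and coordinates below are invented for the example; real traffic steering would also weigh measured latency, capacity and data-residency rules, not just distance), routing each client to the geographically nearest facility might look like:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical subset of a global footprint: (lat, lon) per site.
SITES = {
    "dallas":    (32.78, -96.80),
    "london":    (51.51,  -0.13),
    "hong-kong": (22.32, 114.17),
}

def nearest_site(client):
    """Serve the client from the geographically closest data center."""
    return min(SITES, key=lambda s: haversine_km(client, SITES[s]))

print(nearest_site((40.71, -74.01)))  # New York -> dallas
```

The more locations in the footprint, the shorter the worst-case distance to any client, which is the essence of pushing application data toward the network edge.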
Customer Cloudant, a provider of distributed database as a service (DBaaS), offers one example of this trend in action.
“Cloudant’s global expansion rate is fueled by the always-on commitment we make to our customers,” said Cloudant CEO Derek Schoettle. “Our mission is to be the standard data layer for Web and mobile applications. That mission requires us to push application data to the network edge, in as many locations as possible. Expanding beyond IBM SoftLayer’s current footprint presents significant value to our business. The investment IBM is making to expand their global footprint will not only help fuel our growth, but the growth of thousands of Cloudant users worldwide as well.”
Mobile and social data proliferation is necessitating edge computing. A truly global footprint enables providers to serve up data close to the customer. A study by the IBM Center for Applied Insights estimates that the global cloud market will grow to $200 billion by 2020. Cloud adoption continues to be driven by businesses and government agencies deploying cloud services to market, transforming their business practices for the cloud paradigm.