Data Center Knowledge | News and analysis for the data center industry
Thursday, August 1st, 2013
1:44p
Riverbed Updates Steelhead Hardware and Software
Riverbed (RVBD) announced that it has extended its Steelhead wide area network (WAN) optimization product family with a new hardware appliance and upgrades to its Steelhead software. Steelhead WAN optimization solutions can now accelerate a broader range of enterprise infrastructures while expanding IT control for smaller branch offices, cloud infrastructures, SaaS applications, and locations served by hybrid networks.
“It is critical for us to rapidly adopt new technologies that help us to reduce costs, boost productivity, and enhance collaboration,” said Ray Sirois, director of IT at Wright-Pierce, a leading provider of water, wastewater, and infrastructure engineering services. “By delivering accelerated performance cost-efficiently across our entire infrastructure, Riverbed WAN optimization is at the heart of our ability to improve both the way and pace that our employees work.”
Riverbed updated its Riverbed Optimization System (RiOS) software to version 8.5, adding path selection, a new feature that simplifies the management of hybrid MPLS and Internet networks while delivering maximum performance. The release also includes integration with Riverbed Cascade Profiler software, a leading application-aware network performance management solution. Path selection combined with application identification via deep packet inspection (DPI) enables IT and business units to understand usage and performance down to the specific application type and to dynamically reroute application flows in case of performance degradation.
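To illustrate the general idea behind DPI-informed path selection – not Riverbed’s implementation, which is proprietary – here is a minimal Python sketch. The application signatures, latency targets, and path names are invented for illustration.

```python
# Illustrative sketch of DPI-informed path selection. This is NOT
# Riverbed's implementation; signatures, thresholds, and paths are invented.

LATENCY_SLA_MS = {          # hypothetical per-application latency targets
    "voip": 50,
    "sharepoint": 200,
    "bulk-backup": 1000,
}

PATHS = ["mpls", "internet"]  # hybrid WAN: private MPLS plus public Internet


def classify(payload: bytes) -> str:
    """Toy stand-in for deep packet inspection: map a flow to an app type."""
    if payload.startswith(b"SIP/"):
        return "voip"
    if b"/sites/" in payload:
        return "sharepoint"
    return "bulk-backup"


def select_path(app: str, measured_latency_ms: dict) -> str:
    """Prefer MPLS, but reroute to the Internet path when the preferred
    path no longer meets the application's latency target."""
    sla = LATENCY_SLA_MS[app]
    for path in PATHS:
        if measured_latency_ms.get(path, float("inf")) <= sla:
            return path
    return "mpls"  # no path meets the SLA; fall back to the default


if __name__ == "__main__":
    probes = {"mpls": 180, "internet": 40}  # simulated path measurements
    app = classify(b"SIP/2.0 INVITE")
    print(app, "->", select_path(app, probes))  # voip -> internet
```

In a real appliance the classification and path probing happen in the data path; the point is simply that DPI identifies the application, and per-application policy decides which WAN link carries the flow.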
“As we are delivering a growing mix of centralized and cloud applications to our distributed employees, our WAN is becoming increasingly strained,” added Sirois. “Riverbed WAN optimization will allow us to successfully move to a hybrid architecture and take advantage of lower cost Internet networks. In the end, we will be able to deliver the performance our end users need, reduce the complexity and burden on the WAN and still maintain control of our IT infrastructure.”
RiOS 8.5 also strengthens strategic technology partnerships with Microsoft and NetApp for the benefit of mutual customers. It adds new optimizations for business-critical Microsoft applications and environments, including SharePoint 2013, Exchange 2013, and Office 365. With intelligent storage optimization for NetApp SnapMirror, IT organizations can now efficiently prioritize and deliver data replication to ensure business continuity and rapid disaster recovery of critical data. Customers gain detailed insight into each storage volume and can prioritize replication for the most important data sets while avoiding spending resources on processing data that would see little benefit.
RiOS 8.5 also integrates with Cascade Profiler 10.0.7, extending application-aware network performance management by incorporating DPI results into the CascadeFlow data exported by Steelhead WAN optimization deployments. This offers accurate, deep application and network visibility to identify and characterize traffic flows across the network and among branch offices.
Riverbed also launched the new Steelhead CX 255 series appliance, with up to six megabits per second of bandwidth capacity. The new appliance gives organizations a small-branch-office solution offering the same benefits found in larger Steelhead WAN optimization solutions, including reducing bandwidth utilization by up to 98 percent and accelerating applications by up to 100 times.
2:33p
Kontiki Selects Equinix to Power European Expansion; Interxion Selected by Aquis Exchange
Equinix and Interxion announce customer wins from Kontiki and the new Aquis Exchange, while Telx and Digital Edge partner to provide cloud solutions in Dallas.
Equinix selected by Kontiki. Equinix (EQIX) announced that cloud-based enterprise video platform Kontiki has expanded its use of Platform Equinix to offer robust, reliable, low-latency data delivery to its expanding European customer base. Expanding into Equinix’s London LD5 data center will bring Kontiki closer to its European customers and give the company access to a global value chain of more than 4,000 potential partners, customers and suppliers of digital services through the Equinix Marketplace. Kontiki has a long-standing relationship with Equinix, having equipment housed in three Silicon Valley data centers. “Kontiki is a global company with ambitions to significantly expand its presence in Europe,” said Craig Gordon, VP of Worldwide Sales at Kontiki. “Equinix is a perfect strategic match for this expansion. Setting up in a UK data center was vital to our commitment to supporting our growing European customer base with a data management system located in the EU. Expanding our use of Platform Equinix will significantly reduce latency and enhance performance for these customers.”
Digital Edge and Telx partner. Telx announced the availability of Digital Edge’s public and private cloud solutions at its Dallas (DAL1) data center. The service will also be available at DAL2 through Telx’s Metro Connect services. The Digital Edge service leverages its Dallas presence to rapidly provide customized cloud-service options throughout the greater Dallas metro area. Digital Edge will operate out of Telx’s DAL1 data center with its own managed hosting platform, providing hardware devices, advanced system monitoring, data storage, data center connectivity, bandwidth, power and infrastructure space, and security. “Partnering with flexible, intelligent companies like Digital Edge is a prime example of Telx’s role as an enabler of cloud solutions. We continue to see demand for increased data center space from multi-faceted cloud service providers in Dallas and throughout the country,” said John Freimuth, Telx’s General Manager of Cloud and Enterprise Solutions. “The distinguishing advantage we provide is the flexibility of our network, which allows us to leverage a client’s virtual architecture to meet their specific needs within all our facilities.”
Interxion selected by Aquis Exchange. Interxion (INXN) announced that proposed pan-European equities trading exchange Aquis Exchange Limited is to take its second data center colocation facility at Interxion’s City of London data center campus. The new exchange has set out to increase competition in the market by introducing tiered, subscription-based pricing and innovative order types on its high-performance trading platform. Aquis Exchange plans to offer equities from several markets for trading, including France, Germany, Italy, the Netherlands, Sweden, and the UK. Being located on the Interxion London campus will give Aquis access to over 100 capital markets participants, including investment firms, high-frequency trading firms, hedge funds, brokers and bankers. “We are pleased to announce that our secondary data centre is at Interxion’s London campus, which has a long and established heritage of providing colocation services to the financial industry,” said Alasdair Haynes, Aquis Exchange CEO. “We are looking forward to the prospect of being able to deliver our services to their ever-expanding financial community.”
2:47p
Using Virtualization Technologies to Deliver Mobility Solutions
Bill Kleyman is a virtualization and cloud solutions architect at MTM Technologies, where he works extensively on data center, cloud, networking, and storage design projects. You can find him on LinkedIn.
Virtualization has created a truly scalable environment capable of handling bigger workloads and more users, a shift built around efficiency and user density. One of the biggest trends in the current market is IT consumerization, also known as Bring Your Own Device, or BYOD. With an increasingly mobile workforce, administrators are being asked to support more devices as more users begin to use their own machines or devices to access corporate data.
In many cases, allowing users to bring in their own devices will not only create happier users – it will make for a more productive workforce. In working with BYOD, IT administrators and managers must still control the information being delivered as well as the experience. So the question becomes clear: do we manage the endpoint or just the workload?
With virtual environments, managers are actually able to centralize the application, data and even desktop delivery process. As a note, no BYOD initiative should ever be a “free-for-all” open environment. There’s still a need to put policies and controls in place to ensure that only supported devices are allowed on site. Once that’s established, there are three solid ways (outside of server virtualization) to “virtualize” and efficiently deliver information to the end-user.
- Application virtualization. One great way to deliver applications to a BYOD endpoint is to centralize an application, virtualize it, and deliver it via a portal or a local endpoint agent. This way the information and data are all stored in the central data center, and engineers only have to worry about the delivery. Users can suspend their sessions, change over to a new device, and pick up work exactly where they left off. Solutions such as Microsoft App-V, VMware ThinApp, and Citrix XenApp make the application virtualization and delivery process much easier. These apps can be delivered securely, with granular control over who sees them and how they are assigned.
- Desktop virtualization. A newer trend, desktop virtualization is another great way to deliver workloads to the end user. Much like application virtualization, the desktop is stored and virtualized at the data center level and then delivered to the appropriate endpoint. When working with VDI, it’s important to identify the endpoint and understand the resource needs both at the data center level and everywhere in between; improper resource allocation can lead to poor performance and latency. In a solid deployment, however, VDI can have very powerful benefits for users both onsite and remote. Applications can be installed directly into the master image or combined with an application virtualization solution; in that scenario, the desktop image remains very light while pulling virtual applications as needed from the data center. Desktops can be delivered, controlled and fully managed directly from the data center. And, just as with application virtualization, users can suspend their desktops, relocate to a new device or location – and pick up exactly where they left off.
- User virtualization. An often overlooked part of the application and desktop delivery process is user management. This is where user virtualization can really help. Products such as AppSense strive to abstract the user layer completely from applications, devices, and location. This means that user settings follow the user regardless of what physical device they are using or what resource they are trying to access, allowing settings such as folder redirection, printers, and personalization to always stay with the user. There are no more bloated or corrupt profiles to deal with, since the information is stored in a SQL database (a minimal schema sketch follows this list). Furthermore, administrators can assign application-specific settings for users. For example, an administrator can gray out entire menu items within an application based on which user is accessing the app, from which location, and on which device. This type of granular security not only creates a more resilient environment, it also improves the end-user computing experience.
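As a concrete, purely hypothetical sketch of that user-virtualization idea, the snippet below stores per-user settings in a database and resolves them at logon. The schema and names are invented and do not reflect AppSense’s actual design.

```python
# Hypothetical sketch of user virtualization: per-user settings live in a
# database, not in the local profile. NOT AppSense's actual schema; table
# and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_settings (
        username TEXT,
        app      TEXT,   -- NULL means a global setting such as printers
        setting  TEXT,
        value    TEXT
    )""")
conn.executemany(
    "INSERT INTO user_settings VALUES (?, ?, ?, ?)",
    [
        ("asmith", None,      "default_printer", "HQ-Floor2"),
        ("asmith", "erp_app", "menu.admin",      "hidden"),
    ],
)

def resolve(username: str, app: str) -> dict:
    """Fetch the settings that should follow this user onto any device."""
    rows = conn.execute(
        "SELECT setting, value FROM user_settings "
        "WHERE username = ? AND (app IS NULL OR app = ?)",
        (username, app),
    )
    return dict(rows.fetchall())

print(resolve("asmith", "erp_app"))
# {'default_printer': 'HQ-Floor2', 'menu.admin': 'hidden'}
```

Because the settings are keyed by user rather than by device, the same lookup works whether the user logs on from a corporate laptop, a home PC, or a virtual desktop.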
Managing the environment at the data center level makes for a very agile infrastructure. Administrators don’t have to worry about the endpoint, beyond the client software, to deliver an excellent end-user experience. It also creates a secure infrastructure: if a personal device is lost, the information is still retained within the data center. A good mobility and BYOD initiative, coupled with virtualization, can create a flexible environment capable of growing with the needs of the user – and the organization.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:11p
Cloud-Enabled Robots Scamper From Amazon to SoftLayer
In the DARPA Virtual Robotics Challenge task, teams had to guide the robot over a series of terrain types, including mud, uneven ground and a debris-littered path. SoftLayer provided the hosting for the challenge. (Photo: DARPA)
SoftLayer Technologies, an IBM company, has poached an interesting customer from rival Amazon Web Services. The Open Source Robotics Foundation (OSRF), which supports open source software for use in robotics research, education and product development, tapped SoftLayer to host the DARPA Virtual Robotics Challenge (VRC).
The goal for competitors in the VRC was to develop software that makes a simulated robot execute tasks that might be required of emergency personnel in a disaster response situation. The winners advance and get to see the software applied to actual, physical robots in a live event.
After initially using Amazon Web Services, OSRF ran into trouble with the speed of server communication. The competition demanded that machines exchange data with one another at hyper-fast speeds. OSRF found that SoftLayer, which provided bare metal cloud and high-performance GPU servers, was the only dedicated platform that could shorten the communication loop between machines to 1k/second, thereby offering the power and speed necessary for complex robotics simulation.
“The first Virtual Robotics Competition is an exclamation point in the evolution of the cloud, testing its performance limits and highlighting the need for bare metal servers and virtual environments to work in tandem,” said Nathan Day, Chief Scientist of SoftLayer, which was recently acquired by IBM. “SoftLayer’s platform can be uniquely tailored to meet requirements across the full spectrum of server needs and we are thrilled to work with Open Source Robotics Foundation in this premier event.”
OSRF was funded by DARPA to support the simulator-based event, which is part of the broader DARPA Robotics Challenge. Over 100 teams from around the world participated, with SoftLayer’s global platform allowing them to compete from remote locations.
In preparation for the Virtual Robotics Challenge, OSRF configured SoftLayer’s platform into a highly specialized format so that each team could control its own server constellation apart from other teams. Through SoftLayer’s API, each team was provided with five connected servers, including two high-end dual Intel Sandy Bridge servers with NVIDIA GPUs, isolated from any others in the competition. Teams were able to reload their own servers as needed, and OSRF could reset constellations to their virgin state once each team finished its simulation.
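The snippet below is a toy sketch of that constellation model, showing the provision/reload/reset lifecycle just described. The class names and server roles are invented for illustration; this is not the SoftLayer API.

```python
# Toy sketch of the per-team "server constellation" model described above.
# Classes and roles are invented; this is NOT the SoftLayer API.
from dataclasses import dataclass, field


@dataclass
class Server:
    role: str                 # e.g. "field-computer" or "simulator-gpu"
    image: str = "baseline"   # OS image currently loaded

    def reload(self, image: str = "baseline") -> None:
        self.image = image    # teams could reload their servers as needed


@dataclass
class Constellation:
    team: str
    vlan: int                 # isolates the team from every other team
    servers: list = field(default_factory=list)

    def reset(self) -> None:
        """Return the constellation to its virgin state after a run."""
        for s in self.servers:
            s.reload("baseline")


def provision(team: str, vlan: int) -> Constellation:
    """Five connected servers per team, two of them GPU simulator nodes."""
    servers = [Server("field-computer") for _ in range(3)]
    servers += [Server("simulator-gpu") for _ in range(2)]
    return Constellation(team, vlan, servers)


if __name__ == "__main__":
    c = provision("team-42", vlan=1042)
    c.servers[0].reload("team-custom-image")   # team customizes a node
    c.reset()                                  # operator wipes after the run
    print(len(c.servers), all(s.image == "baseline" for s in c.servers))
```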
“SoftLayer was a true partner in hosting the VRC Simulator and worked with our team closely to pre-test machines for the competition,” said Brian Gerkey, CEO of OSRF. “The VRC was an unprecedented initiative and required technology partners that were willing to go the extra mile to assure that this event made its mark in history.”
6:30p
Cloud Exchanges Will Drive Commoditization. Why This Is a Good Thing 
Imagine you’re thinking about getting a backup data center sometime in the future. Imagine you’re able to buy an option for one. This is just one of the many capabilities that a neutral cloud exchange will open up, according to Rudi Baumann.
Baumann is the CEO of Zimory, which teamed up with financial giant Deutsche Börse this month to announce a cloud exchange – a trading venue for outsourced storage and computing capacity. Zimory also raised a $20 million funding round led by Deutsche Börse, part of the joint effort to make buying and selling cloud capacity on a neutral marketplace as easy as clicking a button.
The partners aim to create a neutral marketplace that is secure, not tied to a service provider, and run by someone with a lot of experience. “Deutsche Börse is the second-largest stock exchange,” said Baumann. “They are expanding their diversification policy, using their know-how of creating and managing marketplaces. They definitely have the cash and the power to meet financial market demands. They have experience running trading systems in a highly secure way.”
The cloud exchange is expected to launch early next year in Germany, with a New York launch a few weeks later, according to Baumann. After that, hopefully, Singapore.
New Ways to Manage Cloud Costs
How might a cloud exchange change the world of cloud computing? A market with genuine liquidity might lead to entirely new ways of using exchanges to hedge capacity needs and control costs, Baumann predicts. Participants will be able to hedge risk, buy futures on prices, and aggregate compute volumes for sale on the exchange.
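As a toy example of the kind of hedge Baumann describes (all prices invented), locking in a price with a capacity future caps what a buyer pays even if the spot market later rises:

```python
# Toy example of hedging future capacity needs on a cloud exchange.
# All prices are invented for illustration.

need_hours = 10_000          # compute hours needed next quarter
future_price = 0.08          # $/hour locked in today via a futures contract
spot_price_later = 0.12      # $/hour the spot market actually reaches

hedged_cost = need_hours * future_price
unhedged_cost = need_hours * spot_price_later

print(f"hedged:   ${hedged_cost:,.2f}")    # $800.00
print(f"unhedged: ${unhedged_cost:,.2f}")  # $1,200.00
print(f"saved:    ${unhedged_cost - hedged_cost:,.2f}")
```

If spot prices instead fall, the hedged buyer pays more than the market rate; the point of the hedge is predictability, not a guaranteed saving.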
“Contract negotiations will go from months to minutes,” said Baumann. “This takes a lot of time out of the procedure. It will end up in the standardization of products: compute, storage, I/O speed. There will be certifications, so buyers can easily migrate into one standardized product category.”
While other cloud marketplaces have been attempted, Baumann said the partnership between Zimory and Deutsche Börse is a different beast. “Deutsche Börse has all kinds of marketing experience and market-making experience,” said Baumann. “This is unbeatable. This is a huge advantage. Deutsche Börse is definitely not selling you hardware or software, and we’ll never try to lock you into a marketplace.”
“If you want to put your IT workloads and IT futures around a cloud exchange, you want to make sure they are around and independent in five years,” said Baumann.
Standardization Will Drive Quality
Baumann thinks commoditization is not only inevitable, it’s a good thing. “IT is differentiation on the software and service level. The hardware is commodity,” said Baumann. In terms of pricing, anyone fearing that prices will collapse only has to look at their cell phone bill. “We see it in the mobile market. Driving down the prices doesn’t necessarily happen.”
Standardization will drive quality. “In order to become a certified service provider, you need to run through an admission process,” said Baumann. “You need to cover certain parameters around bandwidth, security – it’s a quality market. It’s a well-balanced contract term between buyer and seller.”
Unlike many other potential exchanges, this one has no ties to a single service provider, making it neutral ground for providers of every ilk to sell their capacity. In terms of service-level agreements, there will initially be three groups, but Baumann says this will most likely expand. Buyers can easily migrate into one standardized product category.
Potential for Speculation
Like any futures market, there is potential for speculation. However, Baumann doesn’t see this as a danger. “A contract ends and you’re on the hook to consume it,” he said. “This will limit speculation. It’s the same concept as the electricity market, and like the electricity market, there will be market-makers.”
Zimory is powering the exchange. “There needs to be a physical delivery of the service,” said Baumann. “This is where Zimory comes in.”
The company was founded in 2007. “Our target is to be the independent management layer,” said Baumann. “To enable one view and one management console.” Providers are able to administer their own private cloud, managing it across multiple data centers.
“From the very beginning, we focused on being heterogeneously open – just like data centers, you find all of these devices from a wealth of suppliers and they need to work together,” said Baumann.
This led to a completely different architecture, as the company realized there are two roles involved in cloud: one is providing and offering, while the other is consuming. The suite consists of Zimory Connect, which is installed in each data center as a local controller, and Zimory Manage, which sits on the servers.
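Here is a minimal sketch of that two-role split, with invented names rather than Zimory’s actual code: a central manager is the only component end users talk to, and it relays work to per-data-center controllers.

```python
# Minimal sketch of the two-tier pattern described above: users talk only
# to a central manager, which relays work to per-data-center controllers.
# Class and method names are invented; this is NOT Zimory's actual code.

class LocalController:
    """Plays the role of a 'Connect'-style agent inside one data center."""

    def __init__(self, datacenter: str):
        self.datacenter = datacenter

    def deploy(self, vm_spec: dict) -> str:
        # In reality this would drive the local hypervisor APIs.
        return f"{vm_spec['name']} running in {self.datacenter}"


class Manager:
    """Plays the role of a 'Manage'-style front end: the only component
    end users ever reach, so they never talk to a data center directly."""

    def __init__(self):
        self._controllers = {}

    def register(self, controller: LocalController) -> None:
        self._controllers[controller.datacenter] = controller

    def deploy(self, datacenter: str, vm_spec: dict) -> str:
        return self._controllers[datacenter].deploy(vm_spec)


manager = Manager()
manager.register(LocalController("fra-01"))
manager.register(LocalController("nyc-01"))
print(manager.deploy("fra-01", {"name": "web-1"}))  # web-1 running in fra-01
```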
“By using this architecture, the end user always connects to Zimory Manage, not Connect,” said Baumann. “This secures the data center because the user can never directly talk to the data center.”
7:06p
Uptime Institute Founder Ken Brill Passes Away 
Kenneth Brill, one of the data center industry’s thought leaders for decades, passed away on Tuesday. Brill was the founder of The Uptime Institute, and developed the Tier System that continues as one of the primary measures of data center reliability. He was also one of the founding members of the 7×24 Exchange and the founder of Upsite Technologies.
“Ken Brill was a rare and special man,” said Martin McCarthy, Chairman and Chief Executive Officer of The 451 Group, the parent company of the Uptime Institute. “Part visionary thinker, part ruthless pragmatist, he was an iconoclast and innovator, and a man of great integrity and passion. He is legitimately known around the world as ‘The Father of the Data Center Industry.’ On behalf of our firm, the clients that Ken devoted his life to serving, and the overall global IT industry, we collectively mourn his passing, and express our condolences to his family and intimates. We will all deeply miss this brilliant and bristly individual.”
Brill’s thought leadership in the data center space spanned 30 years and led to increased efficiency and higher-performing data centers. He consistently challenged the industry to develop better solutions for data center operations and efficiency. That focus led Brill to develop the Uptime Tier System, a widely adopted framework for evaluating and classifying data center availability levels.
“Ken Brill was inspired in his thinking, and resolute in his principles,” said Julian Kudritzki, COO of Uptime Institute. “As a personality and an innovator, he left an indelible imprint on the IT and data center industry. His innovations are so fundamental to the progression of the data center industry over time that it is difficult to believe they can be traced back to the energy and passion of a single man. With Ken’s passing, the industry mourns a ferocious critic and committed agent of change.”
In 2001 Brill founded Upsite Technologies, a leader in data center airflow management solutions.
“Ken was a highly respected and influential leader in the data center industry,” said John Thornell, President of Upsite Technologies. “Not only did Ken create a product line to help data center operators manage airflow more efficiently, his passion and practicality for improving the industry led to his founding of a number of organizations where thought leaders could come together in a collaborative environment to further the cause of data center efficiency. Everyone at Upsite Technologies extends their heartfelt condolences to Ken’s family during this difficult time.”
Brill received Lifetime Achievement Awards from 7×24 Exchange International and Datacenter Dynamics. He held an undergraduate degree in Electrical Engineering from the University of Redlands and an MBA from the Harvard Business School.
I saw Ken often at industry conferences, and we always had lively conversations about the data center industry – past, present and future. He was a rich source of perspective on the industry’s evolution and the ongoing effort to operate data centers more efficiently. He will be missed.
Here’s a video conversation we had in 2009, in which Ken discusses the need for executive leadership in the C-suite to acknowledge the importance of the data center, a topic which remains extraordinarily relevant today.