Data Center Knowledge | News and analysis for the data center industry
Friday, July 26th, 2013
7 Software Defined Networking Considerations
Patrick Hubbard is Senior Technical Product Marketing Manager and Head Geek at SolarWinds.
The move toward software defined networks (SDN) in the data center is no longer a question of “if” or “when,” but “how.” Density and physical proximity provide a fertile environment for standardization and automation, and many vendors are proposing solutions – several even have early technologies available now. However, IT administrators should carefully consider what they’re being pitched. SDN may not be the most cost-effective solution in every case; other technology investments may pay larger or more immediate dividends.
Moreover, IT administrators should consider that vendors are not universally investing in solving all of the challenges that SDN could potentially correct – so while SDN has the potential to address a broad range of networking problems, vendor solutions for many of them may not be available for some time. Vendors are currently evaluating the industry landscape for the most lucrative opportunities, which will have a significant impact on the SDN solutions made available to IT administrators.
The good news is that data center networks will receive more attention and product development than other parts of the network (such as WANs and campus LANs). Network administrators should expect varying levels of functionality, compatibility, price and performance – particularly depending on how their operations are aligned with the thinking of the major vendors. Given the complexity of the SDN vendor landscape and its status as an emerging technology, there are seven main factors IT administrators should consider when determining whether SDN is right for their network:
1) The industry in which the organization is operating
Certain industries maintain higher ratios of data center investment per employee than others, and the size of the data center is the largest factor in considering SDN. Cloud-scale operators such as Amazon, Google, Facebook and Microsoft Azure rolled out their own proprietary SDN solutions years ago, and likely could not operate today without them. For these large providers, vendor-agnostic standardization is the goal, and their next target is the top-of-rack switch. An organization that maintains dozens of racks or more should pay attention to what these big cloud players are doing. What we’re seeing from vendors: We can expect upstart vendors to produce commercial versions of open reference architectures, and hopefully, more standardization from established vendors as SDN matures. The potential exists for significant savings as SDN allows data centers to move away from single-source hardware to the commodity-based pricing we see with servers.
2) The size of an organization’s network
A data center with near or above 1,000 active IP addresses is also a candidate for SDN. This is especially true if the majority of those addresses are assigned to virtual machines. What we’re seeing from vendors: In this arena, we can expect hypervisor/virtual machine (VM) vendors to rush to be the SDN controller, and push hardware vendors to expose rich Southbound SDN APIs. Controllers will be a new line item in the IT budget, and it’s expected that VM vendors would want to control the vSwitch, rack, and core as a single unit with their software.
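To make the Northbound/Southbound split concrete, here is a minimal sketch of how an application might ask a controller, over a hypothetical Northbound REST API, to install a traffic-steering rule that the controller would then translate into Southbound changes on the switches. The controller address, endpoint and payload schema are invented for illustration and do not correspond to any specific vendor's product.

```python
# Hypothetical sketch: a script calls a controller's Northbound REST API,
# and the controller pushes matching Southbound rules to the switches.
# The URL, endpoint and JSON schema below are illustrative assumptions only.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"

def prioritize_vm_traffic(vm_ip: str, priority: int = 100) -> None:
    """Ask the controller to move traffic destined for a VM onto a priority queue."""
    flow_request = {
        "match": {"ipv4_dst": vm_ip},                        # traffic headed to the VM
        "actions": [{"type": "set_queue", "queue_id": 1}],   # high-priority queue
        "priority": priority,
    }
    resp = requests.post(f"{CONTROLLER}/api/flows", json=flow_request, timeout=10)
    resp.raise_for_status()
    print(f"Flow request accepted for {vm_ip}: {resp.status_code}")

if __name__ == "__main__":
    prioritize_vm_traffic("10.20.30.40")
```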
3) The dynamic nature of an organization’s applications and workloads
An organization with fairly stable IT operations where systems agility is not a competitive differentiator might hold off on investing in SDN. While it may be cool to configure traffic with a mouse, the existing, skilled team with PuTTY (a Telnet/SSH client) may remain more cost-effective for some time.
4) The number of virtual machines (VMs) within an organization’s network
Besides the number of IPs assigned to VMs in your primary data center, the total number of VMs across an organization’s network is also a consideration for whether to implement SDN. Imagine an SDN controller with both Northbound and Southbound authority managing a data service VM. Based on network traffic analysis, it might determine that the VM would run more efficiently in a remote data center closer to its clients. The controller would send a command to vMotion the server, while coordinating network changes across the enterprise to reconfigure for the new location.
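A rough sketch of that workflow might look like the following; every function here (the traffic analysis, the migration trigger, the fabric update) is a hypothetical placeholder standing in for real hypervisor-manager and controller APIs, not an actual product interface.

```python
# Illustrative sketch only: how a controller with Northbound and Southbound
# authority might coordinate a VM migration with network reconfiguration.
# All functions are hypothetical placeholders, not real vendor APIs.

def analyze_client_latency(vm_id: str) -> dict:
    """Stand-in for flow telemetry: report where the VM's clients are closest."""
    return {"current_site": "primary-dc", "best_site": "remote-dc-east"}

def trigger_vmotion(vm_id: str, destination_site: str) -> None:
    """Placeholder for the call into the hypervisor manager that migrates the VM."""
    print(f"Migrating {vm_id} to {destination_site} ...")

def update_fabric(vm_id: str, destination_site: str) -> None:
    """Placeholder for Southbound changes (VLANs, routes, ACLs) at the new site."""
    print(f"Reprogramming the network so {vm_id} is reachable at {destination_site}")

def rebalance(vm_id: str) -> None:
    placement = analyze_client_latency(vm_id)
    if placement["best_site"] != placement["current_site"]:
        trigger_vmotion(vm_id, placement["best_site"])
        update_fabric(vm_id, placement["best_site"])

rebalance("data-service-vm-01")
```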
5) The organization’s need for agility, flexibility and scalability within the network
Especially relevant for hybrid cloud/local data center customers, SDN may provide greatly improved flexibility by removing the traditional demarcation points between networks. Remote, local and virtual networks are normalized into a common entity, differentiated by properties rather than physical interfaces. What we’re seeing from vendors: Watch for cloud providers to continue extending their hybrid management tools in this area. Microsoft in particular has focused on hybrid support, though only on its platforms.
6) The organization’s need to simplify security measures and control access to applications
The application of best practices, policy-based management and routine auditing as they relate to network security and access is a challenge within traditional networks. With SDN, the network itself can define and attach policy to any context: an application, user, flow, endpoint or device. If an organization has complex or critical security requirements, SDN intelligence services will be critical. What we’re seeing from vendors: Expect existing network management and security vendors to expand their products’ abilities to analyze the virtual networks under an SDN controller’s authority.
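As a small illustration of what attaching policy to a context (rather than to a physical port) could look like, here is an invented example; real SDN controllers express policy through their own models, so the schema below is purely an assumption.

```python
# Hedged sketch: policies keyed on context (application, user, device role, ...)
# instead of physical interfaces. The schema is invented for illustration.

policies = [
    {"context": {"application": "payments"},           # follows the app wherever it runs
     "rules": {"isolate_to": "pci-segment", "encrypt": True}},
    {"context": {"device_role": "guest-endpoint"},     # any guest device, any port
     "rules": {"allow": ["dns", "http", "https"], "deny": ["internal-subnets"]}},
]

def policy_for(context: dict) -> dict:
    """Return the rules of the first policy whose context keys all match."""
    for policy in policies:
        if all(context.get(k) == v for k, v in policy["context"].items()):
            return policy["rules"]
    return {"allow": []}  # default deny when nothing matches

print(policy_for({"application": "payments", "endpoint": "vm-042"}))
```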
7) The organization’s access to personnel and capital resources
As is always the case with IT, only SDN projects with clear ROI will be approved. An IT organization with a limited budget or only a handful of specialists may not be a viable candidate for SDN. As with server virtualization, IT shops of every size will eventually benefit from SDN, but it won’t happen overnight. What we’re seeing from vendors: Even with the server virtualization market mature for some time, Microsoft only recently put viable “free” VM hosting in the hands of SMBs with Hyper-V, and likewise, there’s no vendor rush where SDN budgets are small.
Keep Grains of Salt Handy
Finally, IT administrators should consider whether vendors are promoting solutions in the spirit of Stanford’s original research in 2005, which served as the genesis of SDN and sought to solve common networking challenges. The dream of SDN – separating the networking software layer from the hardware layer – is the dream of network administrators: rule-based automation that reduces or even eliminates manual workloads, resulting in fewer configuration errors and less downtime.
However, vendors protecting installed bases may discourage solutions that release IT administrators from single-vendor infrastructures. For IT administrators that aren’t implementing SDN in the near term, there are plenty of great solutions to ease the pain of network management. A variety of freeware and commercial applications are available today that can monitor and manage network configuration, performance, access and security. Expect current products with multi-vendor support to extend monitoring and management into SDN as the technology matures, supporting SDN and non-SDN assets in a single solution.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Calient Gets $27 Million To Take SDN At Light Speed
Optical circuit switching company CALIENT Technologies announced it has raised a $27 million round of venture financing and named Jag Setlur, a 20-year finance professional, as its Chief Financial Officer (CFO).
CALIENT makes adaptive photonic switching systems that enable dynamic optical-layer optimization. Because its 3D MEMS (micro-electromechanical systems) switches are all-optical, a data center can move from 10Gbps to 40Gbps and then 100Gbps without upgrading the switch. In a hybrid packet-optical circuit switch network, the optical circuit switches (or photonic switches) can be directed by SDN scripts to redirect data flows from one network to another.
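As a rough illustration of how such an SDN script might behave, the sketch below watches for large ("elephant") flows and asks the photonic switch to set up a direct light path for them. The telemetry source, threshold and switch-programming call are all hypothetical placeholders and are not CALIENT's actual interfaces.

```python
# Illustrative sketch: steer sustained, high-bandwidth flows onto an optical
# circuit switch while ordinary traffic stays on the packet network.
# Telemetry, thresholds and the OCS call are invented for illustration.

ELEPHANT_FLOW_THRESHOLD_GBPS = 5.0

def monitor_flows():
    """Placeholder for flow telemetry gathered from the packet network."""
    return [{"src_rack": "A3", "dst_rack": "B7", "gbps": 8.2}]

def create_optical_circuit(src_rack, dst_rack):
    """Placeholder: program the photonic switch to connect the two rack uplinks."""
    print(f"Optical circuit established: {src_rack} <-> {dst_rack}")

def offload_elephant_flows():
    for flow in monitor_flows():
        if flow["gbps"] >= ELEPHANT_FLOW_THRESHOLD_GBPS:
            create_optical_circuit(flow["src_rack"], flow["dst_rack"])

offload_elephant_flows()
```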
Funding from new and existing investors will help the company advance its portfolio of 3D MEMS Optical Circuit Switching systems, extend its IP portfolio and provide working capital for its rapid production growth driven by new applications in software defined datacenter networking.
“This round of financing will support the exciting opportunities ahead at CALIENT as we work to meet rapidly growing demands for Optical Circuit Switching in software-defined data centers and metro networks,” said CALIENT CEO Atiq Raza. “With this funding and our recent growth, I believe now is the time for an experienced CFO like Jag to join the senior management team. I am pleased to have him on board and look forward to leveraging his expertise as we continue to develop and optimize the CALIENT product portfolio and expand our market footprint.”
Jag Setlur joins CALIENT from July Systems where he served as chief operating officer and CFO. Prior to that, Setlur served as CFO for Cotendo, where he closed two rounds of funding of nearly $30 million and managed the strategic positioning and sale of the company to Akamai. | | 12:45p |
Technology and Hollywood Meet At SIGGRAPH 2013
Over 17,000 people from 77 countries took on Anaheim, California this week as SIGGRAPH, a premier conference on computer graphics and interactive techniques, celebrated its 40th anniversary. Numerous announcements, celebrations, exhibits and eye-popping demonstrations filled the event floor throughout the week. As part of the accompanying Computer Animation Festival, SIGGRAPH 2013 hosted Production Sessions, where elite computer graphics experts and creative geniuses explained their process and techniques for creating compelling content.
Some highlights from the week included:
Pixar celebrated the 25th anniversary of its RenderMan software, which has been used in 19 of the last 21 Academy Award-winners for Visual Effects. fxguide has an exclusive on Pixar’s RenderMan and the software’s co-founder Ed Catmull. This year, Pixar’s “Monsters University” and “The Blue Umbrella” are showcasing new levels of photorealism, including major advancements in lighting that are directly attributable to technological breakthroughs in RenderMan’s system for creating physically-based global illumination. Last month Pixar announced the release of RenderMan Pro Server 18, a major version upgrade that presents core enhancements in lighting workflows.
NVIDIA (NVDA) unveiled its new flagship GPU – the Quadro K6000. Delivering five times higher compute performance and nearly double the graphics capability of its predecessor, the K6000 enables leading organizations such as Pixar, Nissan, Apache Corporation and WSI (the professional division of The Weather Company and innovation engine of The Weather Channel) to tackle visualization and analysis workloads of unprecedented size and scope. The K6000 features 12GB of ultra-fast GDDR5 graphics memory, 2,880 streaming multiprocessor cores and supports four simultaneous displays at up to 4K resolution with DisplayPort 1.2.
“The Kepler features are key to our next generation of real-time lighting and geometry handling. We were thrilled to get an early look at the K6000,” said Guido Quaroni, Pixar vice president of Software R&D. “The added memory and other features allow our artists to see much more of the final scene in a real-time, interactive form, and allow many more artistic iterations.”
Fusion-io (FIO) showcased a complete studio solution for visual effects acceleration. Its conference demo illustrated a 12GB/s Fusion-powered pipeline, with ioControl Hybrid storage, HP Z820 ioTurbine Cache, and HP Z820 Artist workstations with ioFX 1.6TB. “Fusion-io products were a fundamental, core component of our pipeline for ‘Star Trek Into Darkness,’” said Adam Watkins, Pixomondo Digital Effects Supervisor. “If you have a facility equipped with Fusion-io with the ioFX for workstations or ioDrive in the server, you get an overall productivity gain that increases efficiency of the equipment you already have. Those cost savings and increased bandwidth help you take on more work to be more competitive in the visual effects market.”
AMD showcased visual computing experiences it powered for Adobe, Autodesk, Christie, Dell and Optis. The AMD FirePro professional graphics demonstrations featured unique animation, display, simulation and other creative hardware and software collaborations for attendees from around the world. “At the AMD booth, artists will have the chance to check out Autodesk Maya and 3ds Max 3D animation and visual effects software running on workstations powered by AMD FirePro graphics,” said Rob Hoffmann, senior product marketing manager, Autodesk Media & Entertainment. “The powerful combination of AMD hardware and Autodesk software can help increase productivity and creativity to complete tasks faster and give artists more time to try new ideas.”
Intel (INTC) was at SIGGRAPH highlighting high-fidelity ray-tracing from the upcoming 2.0 release of the Embree open source project, as well as giving a demonstration of Autodesk Opticore Professional Studio running on Xeon Phi co-processors.
Hybrid Cloud Means Slower Sales Growth for Equinix
Colocation facilities like this Equinix data center are key components of many hybrid cloud infrastructures. But customers are taking their time in analyzing the best way to deploy hybrid clouds, Equinix said this week. (Photo: Equinix)
Enterprises are taking their time as they contemplate hybrid cloud infrastructures, and that’s translating into a longer process for leasing colocation space. That’s the word from executives at Equinix, the largest player in the colocation market, who said they are seeing longer sales cycles as customers try to sort out the best approach to hybrid clouds.
“We’re seeing an uptick in the hybrid cloud deployment as the public cloud players continue to deploy across Equinix,” said CEO Steve Smith in the company’s quarterly earnings call Wednesday. “As enterprises mature, they’re getting more and more sophisticated about understanding the hybrid model, and how they can take advantage of connecting to the networks and the public cloud nodes in our data centers, and that’s taking time.”
Equinix cited lengthening enterprise sales cycles as one of the reasons that it has adjusted its revenue guidance slightly lower for the balance of 2013. “We continue to expect strong operating performance and an acceleration of growth in the second half,” Smith said. “However, based on current visibility, we’re now moderating our guidance for the full year.”
Colo A Key Component of Cloud
Colocation centers are key components of many hybrid architectures, providing third-party space to securely host corporate infrastructure, as well as ultra-fast connections to public clouds like Amazon Web Services. So it’s not that cloud computing is causing Equinix to lose deals – to the contrary, the connectivity at Equinix has made the company a major beneficiary of the growth of both public and private clouds.
The issue is that hybrid architectures are more complex, prompting customers to take more time to evaluate all elements of their infrastructure before committing to colocation space.
“We’re seeing deals slip as decision cycles are protracted,” said Charles Myers, President of the Americas for Equinix. “It’s less losing deals than it is deals slipping into a subsequent quarter. We’re seeing a pretty irreversible trend towards hybrid cloud architectures, but it’s just taking some time for that to shake out.
“These customers are taking time to sort through what exactly hybrid architectures mean to them,” Myers continued. “They are trying to figure out how to get their infrastructure out of their own basements and figuring out what portion of that goes colo, what portion of that is well-positioned to move into a public cloud setting, perhaps due to the need for reversible workloads, and what it means to implement a hybrid cloud. And oftentimes, as they come to grips with that, they realize that the network density that Equinix provides is a very compelling reason to center their hybrid cloud infrastructure around Equinix.
“But that is a nontrivial sort of assessment on the part of an enterprise CIO. Those sales cycles can be protracted, and involve smaller deals, and that’s really the dynamic that we’re experiencing.”
Different Picture at Internap
The experience doesn’t seem to be universal among colocation providers. Internap, which also offers colocation and cloud services, says the current hybrid cloud discussion is about future requirements rather than immediate needs.
“All I can say is, from our perspective, I would not suggest we’ve seen an increase in the sales cycle and our average deal size has been trending up,” said Eric Cooney, the CEO of Internap, in the company’s earnings call yesterday. “So I’ll leave it to Equinix to provide further clarity on their remarks, but it’s not something we’ve seen from Internap’s standpoint.”
“The majority of customer requirements and application workloads that are being deployed today are not utilizing a hybrid IT infrastructure,” said Cooney. “That being said, an increasing quantity of customer-buying decisions are being heavily influenced by the service provider’s ability to offer a hybrid IT infrastructure solution, because customers expect that in the future, they will want to leverage that hybrid infrastructure to get the best combination of cost and performance benefit. So today, it’s a basis for competitive differentiation in many of the discussions we’re in, and I think a significant reason why customers choose Internap.”
We’ll continue to track discussion of cloud deployments as colocation and data center providers report their earnings.
Friday Funny: Everybody in the Pool!
It’s Friday and that means it’s time for some laughs in the data center. So let’s get to our Data Center Knowledge caption contest, with a new cartoon drawn by Diane Alber, our favorite data center cartoonist!
This week, we present “Everybody in the Pool” from Diane. She writes: “So because temperatures outside are reaching 113 (Yes, I live in Arizona) I couldn’t help but do a comic about ways to cool off in the data center…especially if you are working in the hot aisle!???”
Congratulations to our reader “Slick Rick” for his winning caption, “The Alienware sales rep always knows how to make an entrance,” for the Close Encounters of the Data Center Kind cartoon.
Enter your caption suggestion below. Please visit Diane’s website Kip and Gary for more of her data center humor. For the previous cartoons on DCK, see our Humor Channel.
ARM Brings the Internet of Things To Life On Its Campus
Chipmaker ARM is using its headquarters campus to demonstrate how its low-power chip technology can slash energy costs for the automated office environments of the future. ARM is installing sensors to automate the infrastructure that manages its meeting rooms, parking lots and HVAC systems. The company will deploy network technology and more than 600 connected sensors across its UK campus in Cambridge, all controlled by smart ARM-based chips.
ARM Holdings’ office complex in Cambridge will be outfitted with sensors to monitor and automate much of its infrastructure. (Photo: cmglee via Wikipedia)
The project will provide information and control of the site and its 75 car park lights, 40 meeting rooms, heating and water management systems. ARM believes the project will help it save money by reducing energy usage. The systems will be based on open internet standards and designed to help application developers mobilize connected assets of all kinds. The API specification will be made public to achieve broader deployment and to benefit businesses and individuals around the world.
Three ARM partners will work together to create an environment for reducing inefficiencies and energy consumption – AlertMe, EnLight and IntelliSense.io. The collaboration between some of the UK’s most advanced technology companies will also provide the technology industry with key lessons on how a new generation of intelligent, connected products and services can be fully implemented, with the potential for worldwide adoption.
- AlertMe is providing smart solutions to detect occupancy of ARM meeting rooms. This technology will show employees when rooms are in use via an online booking system, enabling more efficient use of office space. AlertMe is also providing 75 kits for ARM employees that will enable them to monitor their own homes’ temperatures, energy consumption patterns and occupancy levels.
- EnLight is upgrading ARM’s outdoor lighting in car parks and around building exteriors with a lighting management solution that will reduce energy consumption and enable ARM to intelligently control the lighting. The technology will enable ARM to remotely monitor data such as operating temperature, lamp status and energy consumption, plus lamps can be controlled and light levels adjusted during periods of low use to make additional savings.
- IntelliSense.io is providing solutions to measure pressure and flow in ARM’s heating, ventilation and air conditioning (HVAC) systems. This technology will enable building temperatures to be read zone-by-zone, and will also enable the tracking of real-time maintenance needs, plus rates of water consumption and ARM’s overall carbon performance.
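The article does not describe the forthcoming API, but as a purely hypothetical illustration, an application consuming one of these sensors over an open HTTP interface might look something like this; the host name, path and JSON fields are invented for the example.

```python
# Hypothetical sketch of an app reading a meeting-room occupancy sensor
# through an open, HTTP-based API of the kind described above.
# The base URL, route and response fields are illustrative assumptions.
import requests

BASE_URL = "https://sensors.example-campus.local/api/v1"

def room_is_free(room_id: str) -> bool:
    """Check an occupancy sensor before offering a meeting room in the booking system."""
    resp = requests.get(f"{BASE_URL}/rooms/{room_id}/occupancy", timeout=5)
    resp.raise_for_status()
    return not resp.json()["occupied"]

print("Room 40 free:", room_is_free("room-40"))
```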
“ARM is delighted that, together with our partners, we have won the backing of the Technology Strategy Board to deploy IoT devices to improve the efficiency of our buildings in Cambridge,” said Graham Budd, chief operating officer at ARM. “To create a smart efficient building you need a great many sensors distributed throughout the system, all sharing information to improve decision-making. ARM technology and its ecosystem create the bedrock of this smart intelligence that will enable others to build unique services and applications to help the IoT flourish.” |