Data Center Knowledge | News and analysis for the data center industry
Wednesday, December 12th, 2012
IBM Lights Up Silicon Nanophotonics for Big Data
IBM Silicon Nanophotonics technology is capable of integrating optical and electrical circuits side-by-side on the same chip. In this image, blue optical waveguides transmit high-speed optical signals and yellow copper wires carry high-speed electrical signals. (Photo from IBM Research via Flickr)
IBM announced a major advance in the ability to use light instead of electrical signals to transmit information for future computing. Referred to as Silicon Nanophotonics, the technology allows the integration of different optical components side by side with electrical circuits on a single silicon chip, using sub-100 nanometer semiconductor technology.
Big, Fast Data – Without an Interconnect
The technology uses pulses of light for communication and creates a “super highway” for large volumes of data to be exchanged at high speeds between computer chips in servers. This alleviates the cost and bottlenecks presented by traditional interconnect technology. The research has potential ramifications for the cost and speed of future data center networks, and potential implications for design as well.
Silicon Nanophotonics could provide answers to big data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart, and moving terabytes of data via pulses of light through optical fibers.
“This technology breakthrough is a result of more than a decade of pioneering research at IBM,” said Dr. John Kelly, Senior Vice President and Director of IBM Research. “This allows us to move silicon nanophotonics technology into a real-world manufacturing environment that will have impact across a range of applications.”
The challenge of manufacturing these chips was addressed by adding a few processing modules to a high-performance 90nm CMOS fabrication line. A variety of silicon nanophotonics components, such as wavelength division multiplexers (WDM), modulators, and detectors, are integrated side-by-side with CMOS electrical circuitry. As a result, single-chip optical communications transceivers can be manufactured in a conventional semiconductor foundry, providing significant cost reduction over traditional approaches.
IBM’s CMOS nanophotonics technology demonstrates transceivers exceeding a data rate of 25Gbps per channel. In addition, the technology can feed a number of parallel optical data streams into a single fiber by using compact on-chip wavelength-division multiplexing devices. The ability to multiplex large data streams at high data rates will allow future scaling of optical communications capable of delivering terabytes of data between distant parts of computer systems.
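To put those figures in perspective, the quick calculation below multiplies the 25Gbps per-channel rate by a few wavelength counts; the channel counts are illustrative assumptions, not figures IBM has published.

# Back-of-the-envelope WDM throughput estimate.
# The 25 Gbps per-channel rate comes from the article; the channel
# counts below are hypothetical, chosen only to illustrate scaling.

CHANNEL_RATE_GBPS = 25  # per-wavelength data rate cited by IBM

def aggregate_throughput_gbps(num_wavelengths: int) -> int:
    """Total throughput of one fiber carrying num_wavelengths WDM channels."""
    return num_wavelengths * CHANNEL_RATE_GBPS

for channels in (4, 16, 64):
    total = aggregate_throughput_gbps(channels)
    print(f"{channels:3d} wavelengths x {CHANNEL_RATE_GBPS} Gbps = "
          f"{total:5d} Gbps (~{total / 8:.1f} GB/s)")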
Cross-sectional view of an IBM Silicon Nanophotonics chip combining optical and electrical circuits. IBM’s 90nm Silicon Integrated Nanophotonics technology can integrate a photodetector (red feature on the left side of the cube) and a modulator (blue feature on the right side) fabricated side-by-side with silicon transistors. Silicon Nanophotonics circuits and silicon transistors are interconnected with nine levels of yellow metal wires. (Image: IBM Research via Flickr)
CloudVelocity Debuts Its Cloud Cloning Software 
Hybrid deployments that combine public clouds with on-premises data centers are emerging as the sweet spot in enterprise cloud computing. CloudVelocity, which launched today, is offering tools to help automate these hybrid clouds and set them in motion.
The Santa Clara, Calif.-based company has developed software to “clone” cloud applications, making it easier to replicate a deployment in a new cloud environment or create a failover solution that keeps your app online when your public cloud crashes. By removing barriers to deploying complex applications across private and public clouds, CloudVelocity could make it easier for enterprise users to adopt cloud technologies and boost business for secure public clouds.
CloudVelocity, which was known as Denali Systems during its stealth phase, asserts that cloud migration is harder than you think, and that its software will advance the current state of the art, making it easier to seamlessly move complex cloud deployments between data centers and multiple public clouds. The company is backed by a $5 million Series A round of financing from Mayfield Fund, which was announced today.
“We believe that CloudVelocity will have the same impact on public cloud adoption as VMware did on the adoption of server virtualization by making public clouds look like internal data centers,” said Navin Chaddha, Mayfield Fund Managing Director.
CloudVelocity’s software is in beta, and is available for public download at the company’s web site. The Developer Edition allows users to clone multi-tier app clusters and services, without modification, into the Amazon Web Services EC2 cloud. The Enterprise Edition adds migration and failover capabilities for EC2. CloudVelocity says it is in active discussion with other public cloud providers, and expects to offer support for multiple providers in 2013.
“Our goal is to enable enterprises to operate hybrid clouds as seamless extensions of the data center,” said Rajeev Chawla, chief executive officer of CloudVelocity. “Cloud cloning, migration and failover are our first steps in that direction.”
CloudVelocity is seeking a patent on its One Hybrid Cloud (OHC) platform. One early adopter said it has accelerated its use of the software.
“The CloudVelocity Enterprise Edition software trial worked so well that we’ve chosen to use it ahead of schedule in our production environment for cloud failover. This will help ensure that our online business stays available within the AWS cloud,” said early beta-user Nitin Shingate, VP Engineering for Lealta Media. “Because we cannot afford downtime, CloudVelocity has helped us to increase availability while also substantially reducing our expenses.”
Telx Opens Oregon Site, Continuing Northwest Expansion
Colocation provider Telx has launched a new Cloud Connection Center in Portland, Oregon, continuing the company’s expansion into the Pacific Northwest; it opened a facility in Seattle just last week. The new Telx site will be known as PRT1, and is the company’s 19th facility across 13 markets in the U.S.
Telx is leasing space in a single-story structure operated by Digital Realty Trust in Hillsboro, Oregon, a technology hub about 15 miles from downtown Portland. The facility currently offers 4.5 megawatts of capacity for IT load and provides 18,000 square feet of white space within a total current building footprint of 52,000 square feet. It is expandable by another 20,000 square feet.
Telx employed a full 2N+1 design, and the new facility can support client loads of up to 325 watts per square foot. The company says the new retail facility will advance the baseline for “high density” retail colocation. The facility has a 1.3 Power Usage Effectiveness (PUE) rating, achieved in part through an indoor air containment system that directs cold air to the servers and then exhausts the resulting hot air outside the building.
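For readers unfamiliar with the metric, PUE is total facility power divided by the power delivered to IT equipment, so lower is better and 1.0 is the theoretical floor. The sketch below works backward from the 1.3 figure and the 4.5 megawatt IT capacity reported above; the implied overhead is our own estimate, not a number Telx has published.

# Minimal PUE illustration. The 1.3 PUE and 4.5 MW IT load figures come
# from the article; the derived facility power is an estimate, not a
# number Telx has published.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_load_kw

it_load_kw = 4500.0                   # 4.5 MW of IT capacity
total_facility_kw = it_load_kw * 1.3  # implied total draw at PUE 1.3

print(f"PUE: {pue(total_facility_kw, it_load_kw):.2f}")
print(f"Cooling/power overhead: {total_facility_kw - it_load_kw:.0f} kW")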
More Data Center Business for Portland
This facility highlights the continuing emergence of Portland/Hillsboro as a data center hub, and Telx’s continued expansion in the Pacific Northwest.
“With nine major carriers within reach of the facility and three major cable landing stations within one mile, PRT1 provides an impressive expansion of our West Coast capabilities,” said Eric Wick, vice president of sales for Telx. “Tethering the PRT1 facility with Telx’s expanding portfolio of interconnection products such as Datacenter Connect and Metro Connect back to important Portland carrier hotels, quickly make it a core regional asset. Together with our Bay Area facilities and our newly announced SEA1 facility, PRT1 provides both the ideal location for clients to reach Asia as well as being able to address the power consuming needs of gaming, media and other SaaS applications and services.”
There’s been plenty of data center activity in Oregon this year. ViaWest, NetApp and T5 have all announced new facilities in Hillsboro or Portland. Rackspace bought land for an Oregon data center, and of course giants like Google, Facebook and Apple are already there. The Silicon Forest is growing, and the area is a hot data center market right now.
Telx is a provider of interconnection and data center services in strategic, high-demand North American markets. It has 19 C3 Cloud Connection Centers and provides direct connections to a community of the industry’s highest performance networks and access to 1,000+ customers, including leading telecommunications carriers, ISPs, cloud providers, content providers and enterprises.
Maintec Offers Up Colo Optimized for Mainframes
Maintec specializes in colocation services for mainframes, like this IBM Z9 unit. (Photo: IBM)
Maintec has opened a new 8,000 square foot data center in the Research Triangle in Raleigh, North Carolina for mainframe colocation. Wait, what? Mainframes? Yes, mainframes – the company has found itself an underserved niche in the market.
How does a mainframe colocation facility differ from a standard colo facility? “The knowledgebase,” says Sonny Gupta, President and CEO of Maintec. “The other colocation providers have no comprehension of the mainframes. They measure per square feet, and mainframes don’t fit in that model.”
Instead, mainframe colocation is often measured and charged in MIPS. “Companies with from 20 to 1,000 MIPS find us extremely cost effective,” says Gupta. There is a market here for Maintec to tap, one whose customers are a little more hidden from the world than typical colocation clients.
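To illustrate why per-square-foot billing fits mainframes poorly, the sketch below contrasts it with a per-MIPS model; every rate and size in it is a hypothetical number chosen for illustration, not Maintec’s actual pricing.

# Hypothetical comparison of colocation billing models.
# All rates and sizes below are made-up illustrative numbers,
# not Maintec's (or anyone's) actual pricing.

def monthly_cost_per_sqft(square_feet: float, rate_per_sqft: float) -> float:
    """Conventional colo billing: charge by floor space occupied."""
    return square_feet * rate_per_sqft

def monthly_cost_per_mips(mips: float, rate_per_mips: float) -> float:
    """Mainframe-style billing: charge by processing capacity used."""
    return mips * rate_per_mips

# A small mainframe shop: modest MIPS, but a physically large footprint.
print(monthly_cost_per_sqft(square_feet=400, rate_per_sqft=30.0))  # 12000.0
print(monthly_cost_per_mips(mips=200, rate_per_mips=45.0))         # 9000.0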
Mainframes Not Going Away
“Some of the larger installations are heavily invested in mainframe and these mainframes aren’t going any time soon,” Gupta said. “Maintec has the complete infrastructure and expert staff for 24/7/365 mainframe support. Depending on your needs, we provide mainframe colocation, mainframe raised floor space and/or mainframe managed services.”
Mainframes are large, powerful computing systems that dominated business computing for decades. In recent years, most companies have shifted to rackmount servers, but many organizations still rely on mainframes for core workloads.
Who typically requires mainframe colocation? “Operations services – financial institutions, insurance sector,” said Gupta, listing a few examples. “We’re finding all sorts of other clients, clients who perhaps haven’t upgraded in 5 to 10 years. Their hardware and software are old. We provide them with a migration path, and usually there’s some type of managed service attached as well.”
The company says there are many businesses attached to mainframe technology that want to outsource but simply can’t find the right solution, so they often remain quiet.
“The problem is that those people are hidden – very few people know about them, and very few people are tending to them,” says Gupta. “The plan long-term is to help these people evolve, but they’re not about migrating to a non-mainframe platform. We want to continue to be there for these folks.”
“One client had an almost 20 year old mainframe environment, and they’re afraid to move,” said Gupta. “With any of the hardware, if they move, it’s a toss-up. So we told them we’d provide them with a duplicate environment and implement a migration path in 90 days.”
The new facility is located on the second floor of a brick building at 250 Dominion Drive in Morrisville, N.C., and primarily targets companies in the Raleigh, Chapel Hill, Morrisville and Cary areas. Maintec says it is equipped to handle workloads from one rack to 4,000 square feet. The company has been in Raleigh, N.C. since 1998.
Video: Violin Memory’s Flash Memory Arrays
At the Gartner Data Center Conference, there was much discussion of the use of Solid State Drives, whether in servers or storage. As the era of big data analytics continues, more and more enterprises need to store and analyze larger and larger databases. Violin Memory offers flash memory arrays that save space in the data center and improve the performance of accessing and analyzing data sets. In this video from the conference, two members of the Violin Memory team, director of marketing Ashish Gupta and VP of channels Ken Hoppe, describe what the company offers and how the flash memory arrays can be used to save space in the data center, improve performance and even reduce licensing costs. The video runs 5 minutes.
For additional video on data centers, check out our DCK video archive and the Data Center Videos channel on YouTube.
Big Data News: ClearStory Data, Mellanox, Teradata, ExtraHop
Here’s a roundup of some of this week’s headlines from the Big Data sector:
ClearStory Data Closes $9 million Series A. Silicon Valley big data startup ClearStory Data announced it has closed a $9 million Series A financing round with Kleiner Perkins Caufield & Byers, joined by previous seed round investors Andreessen Horowitz and Google Ventures. ClearStory Data’s solution offers a new way for business users to easily discover, analyze and consume data at scale from corporate, web and premium data sources for combined and up-to-date insights. Data sources include relational databases, Hadoop, web and social application interfaces, and third-party data providers. “We’ve seen incredibly strong demand for ClearStory Data’s solution from a wide range of industries as data-driven organizations look for new and better ways to access and combine data from corporate and third-party sources,” said Sharmila Shahani-Mulligan, CEO and Founder of ClearStory Data. “With the astounding growth in external sources of data, data marketplaces, and corporate data housed in new big data platforms, it’s time to make it a lot easier for business users to interactively explore and analyze information no matter where it comes from.”
Teradata selects Mellanox to accelerate big data appliance. Mellanox (MLNX) announced that Teradata has chosen its InfiniBand interconnect solution to accelerate the Teradata Aster Big Analytics Appliance. The appliance is designed for demanding analytics that require high computational power and the fastest data movement. Assisted by Mellanox InfiniBand, the Teradata appliance offers up to 19 times better data throughput and performs analytics up to 35 times faster than typical off-the-shelf commodity bundles. Mellanox’s interconnect technologies deployed in the Teradata Aster Big Analytics Appliance include ConnectX-2 InfiniBand adapters and Mellanox InfiniBand switches, running at 40Gb/s InfiniBand speeds. “Teradata Aster Big Analytics Appliance is part of a truly unified, high-performance big data analytics architecture for the enterprise and will help customers achieve business value,” said Carson Schmidt, vice president of platform engineering at Teradata. “The Mellanox InfiniBand interconnect is the best choice to enable the performance that is needed to analyze big data at a speed that no other analytic platform can deliver. This performance maximizes the return on investment and accelerates time to value.”
ExtraHop adds Sybase IQ for Big Data Management. ExtraHop Networks announced the launch of its SAP Sybase IQ Module, designed to give IT organizations operational intelligence into Big Data analytics and data warehousing environments. With a recent Sybase IQ release including support for open-source MapReduce and Hadoop, the pairing with ExtraHop allows IT teams to monitor the performance of all Sybase IQ queries in real time without profilers or other host-based instrumentation. “Big Data applications such as data warehousing present significant challenges of scalability and visibility for agent-based APM offerings, which impose significant performance overhead on systems,” said Jesse Rothstein, ExtraHop CEO. “With a network-based deployment that requires no agents, the ExtraHop Sybase IQ Module offers unprecedented visibility into Sybase IQ transactions, delivering real-time transaction analysis at sustained 10Gbps speeds. Our customers that run Sybase IQ have been thrilled with the real-time visibility that this new offering provides, and we’re happy to see how quickly it has had a positive impact for their IT organizations.”
Load Balancer Misbehavior Cited in Google Outage
What happened during Monday’s Gmail outage? At the time, we observed that these widespread Google outages “usually involve software updates or networking issues. Or in some cases, a software update causing a networking issue.” According to an incident report, the cause was indeed a software update causing a networking issue, specifically in Google’s load balancers.
“Between 8:45 AM PT and 9:13 AM PT, a routine update to Google’s load balancing software was rolled out to production,” the report says. “A bug in the software update caused it to incorrectly interpret a portion of Google data centers as being unavailable. The Google load balancers have a failsafe mechanism to prevent this type of failure from causing Googlewide service degradation, and they continued to route user traffic. As a result, most Google services, such as Google Search, Maps, and AdWords, were unaffected. However, some services, including Gmail, that require specific data center information to efficiently route users’ requests, experienced a partial outage.”
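The failsafe described in the report resembles a common load-balancing pattern sometimes called “failing open”: when health data suddenly marks an implausibly large share of backends as down, the balancer distrusts the signal and keeps routing. The sketch below is a generic illustration of that idea under an assumed threshold, not Google’s actual implementation.

# Generic "fail open" failsafe, illustrating the behavior described in the
# incident report. This is a hypothetical sketch, not Google's implementation.

PANIC_THRESHOLD = 0.5  # if >50% of backends look down, distrust the signal

def pick_backends(backends: dict[str, bool]) -> list[str]:
    """Return backends to route to, ignoring health data if it looks implausible."""
    healthy = [name for name, ok in backends.items() if ok]
    unhealthy_fraction = 1 - len(healthy) / len(backends)
    if unhealthy_fraction > PANIC_THRESHOLD:
        # Failsafe: a bad config/update is more likely than mass failure,
        # so keep routing to every backend rather than overload the rest.
        return list(backends)
    return healthy

# A buggy update marks most data centers as unavailable:
status = {"dc-a": False, "dc-b": False, "dc-c": True, "dc-d": False}
print(pick_backends(status))  # failsafe kicks in -> all four backends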
There was an interesting wrinkle to this outage that extended the impact more broadly. Wired Enterprise noted that the load balancer problems affected Google’s Sync web service, which allows Google users to share their Chrome browser settings across multiple devices. “It’s due to a backend service that sync servers depend on becoming overwhelmed, and sync servers responding to that by telling all clients to throttle all data types,” Google engineer Tim Steele said.
As a result, at the same time users were having trouble accessing Gmail, many Chrome users were experiencing mysterious browser crashes.
Google says it has fixed the load balancer bug and is changing its release process for load balancer software updates. Google’s incident report said it is “reviewing a multistep release process to push load balancer changes in one location before proceeding with a general rollout. The unique nature of load balancing systems makes this more difficult than with other software components.”
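A multistep release of the kind described would typically push a change to a single location, verify its health, and only then continue to the rest of the fleet. Here is a minimal sketch of that staging logic, with hypothetical location names and a placeholder health check; it is not Google’s release tooling.

# Hypothetical staged-rollout loop: push to one location first, verify,
# then proceed. Not Google's actual release tooling.

LOCATIONS = ["us-east", "us-west", "europe", "asia"]

def healthy_after_update(location: str) -> bool:
    """Placeholder health check; a real system would watch error rates, etc."""
    return True

def staged_rollout(locations: list[str]) -> None:
    canary, rest = locations[0], locations[1:]
    print(f"Pushing update to canary location: {canary}")
    if not healthy_after_update(canary):
        print("Canary unhealthy; rolling back and stopping.")
        return
    for loc in rest:
        print(f"Canary healthy; pushing update to {loc}")

staged_rollout(LOCATIONS)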
Sterling Bay Buys Chicago Property for Potential Data Center
There’s an early stage data center project in the works in Chicago. Sterling Bay Cos. paid a little over $23.5 million for a four-acre West Loop property which it will use to build a data center. Primarily known for its work with office properties, Sterling Bay Cos. is believed to be in talks with a national data center developer to build a data center on the Desplaines property, according to Chicago Real Estate Daily.
The Chicago developer bought the site at 717-727 S. Desplaines St. It’s located along the east side of the Dan Ryan Expressway. “That kind of acreage is hard to come by at that location,” Sterling Bay Principal Scott Goodman said to the Chicago Real Estate Daily. It was purchased from a group of investors including the chairmen of two Chicago-based real estate firms, Development Resources Inc.’s James DeRose and HSA Commercial Real Estate’s Jack Shaffer, Cook County records show.
Chicago Moves
Digital Realty bought a campus in suburban Chicago last June, acquiring a 575,000 square foot redevelopment property in Franklin Park, Ill. for $22.3 million. Latisys added 10,000 square feet to an Oakbrook, Ill. facility in November, and in April, data center developer Ascent Corporation closed on $107 million in debt financing to fund the expansion of its CH2 project in Chicago and to enter new markets.
Chicago is a strong market without a lot of room, so whenever a potential property like this one springs up, Data Center Knowledge tends to take notice. More and more, the real estate world is entering the data center market. This is a very early stage project, and we’ll report back once more is confirmed.