Data Center Knowledge | News and analysis for the data center industry
Monday, November 4th, 2013
12:30p | Data Center Jobs: Jones Lang LaSalle
At the Data Center Jobs Board, we have a new job listing from Jones Lang LaSalle, which is seeking a Critical Facilities Engineer in Roanoke, Texas.
The Critical Facilities Engineer is responsible for the day-to-day operation, maintenance and repair of systems and equipment that support a high-availability data center. These systems include, but are not limited to, uninterruptible power supplies, backup electrical generators, fire suppression, EPO, leak detection, centrifugal chillers, cooling towers, pumping systems, automated electrical distribution systems, raised floor environments and monitoring systems. The candidate should be in good physical condition and a self-starter who is also capable of working as a team member. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
1:30p | Managing Server Performance? Stop Using Server Management Tools
Vic Nyman is the co-founder and COO of BlueStripe Software. Vic has more than 20 years of experience in systems management and APM, and has held leadership positions at Wily Technology, IBM Tivoli, and Relicore/Symantec.
If you’ve ever watched a home improvement show, there’s one truism that always comes out: “Use the right tool for the right job.” With that context in mind, it seems logical for a server administrator to use a server management tool when trying to manage their servers.
But servers don’t operate in vacuums, and they don’t operate simply for the sake of running. Infrastructure systems are designed and deployed to run business services through complex sets of applications and transactions. In these complex systems, a server management tool is no longer the “right tool” for managing server performance and availability. Server management requires a broader view that places the server being managed in the context of the applications and business services that are being provided.
So what to use? The best tool for managing server performance and availability in production distributed applications is a transaction monitoring tool that bridges across the servers, the distributed applications and the transactions that those applications deliver. Server administrators need a tool that will show them which distributed applications run on their servers, which applications rely on their server, what the server’s back-end dependencies are, where the server has failed or poorly performing connections and what rogue applications and processes are running.
Know Which Applications are Running on the Server
Systems administrators need to know which applications are deployed on the managed server and what connections they’re serving or making. Measuring the performance of these connections is impossible if you only look at operating system resources. This information provides a reality check into how the server is actually behaving.
Of course, the list shouldn’t just be created and shown in a vacuum. Tracking how each application on a managed server performs, and with which servers its individual processes communicate, is critical to understanding whether a server is configured properly. It also helps system administrators determine if there’s a performance issue and whether other administrators should be alerted.
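As a concrete illustration of that reality check, here is a minimal sketch, not BlueStripe's product, of how an administrator could enumerate which processes on a host hold network connections and which remote endpoints they talk to, using the open-source psutil library:

```python
# A minimal sketch (not BlueStripe's product): list each local process and the
# remote endpoints it is connected to, using the open-source psutil library.
import collections
import psutil

def connection_map():
    """Return a {process name: set of 'ip:port' remote endpoints} mapping."""
    conns = collections.defaultdict(set)
    for proc in psutil.process_iter(['name']):
        try:
            name = proc.info['name'] or 'unknown'
            for c in proc.connections(kind='inet'):
                if c.raddr:  # skip listening sockets that have no remote address
                    conns[name].add(f"{c.raddr.ip}:{c.raddr.port}")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # some processes can't be inspected without elevated privileges
    return conns

if __name__ == "__main__":
    for name, endpoints in sorted(connection_map().items()):
        print(f"{name}: {', '.join(sorted(endpoints))}")
```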
Know What Servers and Ports Rely On the Server
From a server perspective, front-end connections are the specific servers and ports sending requests to the applications running on the server. To understand a server’s impact on the broader service, it’s critical to know which services rely on the server in the first place.
Once front-end connections are known, seeing the performance of each request to the server is the key component of understanding a server’s impact on business services. Resource-centric server monitoring tools can’t even see the broader context, let alone identify where a request was initiated, or which system initiated a given request.
Having this knowledge empowers server administrators and IT Operations teams:
- Server owners can instantly tell if any of their servers are involved in a given application.
- IT Operations teams use visibility into front-end connections to know which servers (and applications) would be impacted if a given server were to go off-line.
- These features allow any IT team member to quickly know whether or not they should be involved in a bridge call to fix an outage.
Know the Back End Dependencies
Back-end dependencies for a server are exactly what they sound like – the processes, servers, and systems that the managed server calls in the course of executing its functions. These systems can be anything from a secure Lightweight Directory Access Protocol (LDAP) system to a large database or a mainframe.
The server administrator/owner will want to know specifically which servers and processes any given server is talking to, when they’re called, and the performance of those calls:
- Knowing which other systems any given server is dependent on (application by application) provides a checklist of where to look for problems when an application slows down at the server.
- Having response time data linked to specific technical protocols (such as SQL or MQ Series) provides additional data needed to solve problems; a rough timing sketch follows the list.
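The timing sketch below is illustrative only, not any vendor's implementation: it tags each back-end call with its protocol and target so slow dependencies stand out. The `record` sink and `timed_call` helper are hypothetical names.

```python
# Illustrative only: a tiny timing wrapper that tags each back-end call with its
# protocol and target so slow dependencies stand out. record() and timed_call()
# are hypothetical names, not part of any monitoring product.
import time
from contextlib import contextmanager

def record(protocol, target, elapsed_ms):
    # Stand-in sink; a real deployment would ship this to a metrics backend.
    print(f"{protocol} call to {target}: {elapsed_ms:.1f} ms")

@contextmanager
def timed_call(protocol, target):
    start = time.perf_counter()
    try:
        yield
    finally:
        record(protocol, target, (time.perf_counter() - start) * 1000)

# Usage sketch: wrap each dependency call so per-protocol latency is captured.
# with timed_call("SQL", "orders-db.example.internal"):   # placeholder target
#     cursor.execute("SELECT 1")                          # hypothetical database call
```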
See Problem Connections and Failed Connections
About this time, you’re thinking: “Failed connections? I’ve got that covered with my network monitoring system.” If all problem connections were simply that a hub or router stopped working, then a network tool would be sufficient. Unfortunately, failed connections occur all the time on working network channels.
More than just seeing network connections, you have to understand the application flow and track each process-to-process connection for proper monitoring. When application connections fail but the network connections are working, a network tool cannot know that a problem even exists, let alone solve it.
Two specific causes can be quickly isolated with connection monitoring:
- DNS Issues – DNS server changes can take a while to propagate. Every request can end up waiting on a fresh DNS lookup, which often creates hidden latencies that add up to big slow-downs (a rough timing sketch follows the list).
- Load Changes – understanding differences in request volume between any two infrastructure systems allows IT Operations to cut through the haze and know whether something is even wrong.
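For the DNS point, here is a rough sketch of the kind of measurement that surfaces hidden lookup latency; the hostname is only a placeholder.

```python
# A rough sketch of the DNS point above: time each name resolution so hidden
# lookup latency becomes visible. example.com is only a placeholder hostname.
import socket
import time

def dns_lookup_ms(hostname):
    """Resolve a hostname and return the elapsed time in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    host = "example.com"  # substitute the back-end name your application resolves
    for _ in range(3):
        print(f"DNS lookup for {host}: {dns_lookup_ms(host):.1f} ms")
```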
Handling Rogue Applications and Processes
When anti-virus software is running in the data center, signature file updates are usually configured to occur during off-peak hours (usually in the middle of the night). You’d like to think the attitude could be “set it and forget it,” but too many times, for one reason or another, all the servers start updating their profiles at 2:45 p.m., right before the peak login rush.
If IT Operations isn’t able to detect when a rogue application starts depleting critical resources, well, that’s the definition of a “spin your wheels” problem. The biggest risk, naturally, isn’t being hit by an application process that can be planned for (like ensuring virus updates occur at the right time). No, the problem with rogue processes is that you don’t actually know that they are depleting your resources.
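To make the rogue-process idea concrete, here is a minimal sketch, again using the open-source psutil library rather than any particular monitoring product, that samples CPU usage and flags processes above an arbitrary threshold:

```python
# A minimal sketch of catching a "rogue" process: sample CPU usage across all
# processes with psutil and flag anything above a threshold. The 25 percent
# threshold and one-second sample window are arbitrary illustration values.
import time
import psutil

def rogue_processes(cpu_threshold=25.0, sample_seconds=1.0):
    procs = list(psutil.process_iter(['name']))
    for p in procs:
        try:
            p.cpu_percent(None)          # prime the per-process CPU counter
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    time.sleep(sample_seconds)           # measure over a short interval
    hogs = []
    for p in procs:
        try:
            usage = p.cpu_percent(None)  # percent of one CPU since the priming call
            if usage >= cpu_threshold:
                hogs.append((p.info['name'] or 'unknown', usage))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return sorted(hogs, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for name, usage in rogue_processes():
        print(f"Possible rogue process: {name} at {usage:.0f}% CPU")
```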
Transaction Monitoring – Tying Together Infrastructure, Applications and Transactions
IT Operations and server owners need to manage their servers from a service perspective. The best place to find these features is an end-to-end transaction monitoring solution. That way, we can manage our servers’ performance from the point-of-view of our users.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:30p | Box Commits to Adopting Renewable Energy for its Data Centers
Cloud file-sharing giant Box has inked an agreement with the environmental group Greenpeace to eventually run its infrastructure entirely with renewable energy.
Andy Broer, Box’s senior manager of data center operations, signed the document on behalf of the company on Sept. 6, he said in an interview with Data Center Knowledge. A blog post on the agreement will be posted on Box’s website, he said.
Apple, Facebook, Google and Salesforce.com have all committed to depending more on renewable energy.
Every year, Box will disclose the number of megawatt-hours of power its colocation sites consume, and the amount of power it uses that comes from renewable energy sources, Broer said. Box will also publicly report how its sites perform on two industry energy efficiency metrics: Power Usage Effectiveness (PUE) and Carbon Usage Effectiveness (CUE).
“We have smart power strips for all of our deployments,” said Broer. “Not every provider meters down to an individual cage or cabinet. Our (team) will look at the colocation providers – what power providers were you using over this period of time – and we have to research the power profile (of the facilities). … We’ll do that math ourselves, and we’ll figure out what our carbon footprint is for that.”
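For readers unfamiliar with the two metrics Box plans to report, the back-of-envelope math looks roughly like this; every number below is hypothetical, not Box's actual data.

```python
# Back-of-envelope illustration of the two metrics Box plans to report.
# Every number below is hypothetical, not Box's actual data.
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def cue(total_co2_kg, it_equipment_kwh):
    """Carbon Usage Effectiveness: kg of CO2 emitted per kWh of IT energy."""
    return total_co2_kg / it_equipment_kwh

it_kwh = 10_000_000            # hypothetical annual IT load
facility_kwh = 10_100_000      # hypothetical total facility energy (PUE of 1.01)
kg_co2_per_grid_kwh = 0.9      # rough assumed factor for a coal-heavy grid

print(f"PUE: {pue(facility_kwh, it_kwh):.2f}")
print(f"CUE: {cue(facility_kwh * kg_co2_per_grid_kwh, it_kwh):.2f} kg CO2 per IT kWh")
# A near-perfect PUE can still carry a high CUE if the grid behind it is
# carbon-heavy, which is the point Broer makes below.
```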
Box Will Work With its Colo Providers
While PUE has become a popular metric among data center managers, Broer doesn’t think it tells the whole story.
“The whole idea is, no matter how efficient your PUE is, you can have a fantastic PUE of 1.01, but you’re still consuming 5 megawatts, and you’re still doing it in a place that’s (depending on) 100 percent coal,” he said.
To achieve its goal of using only renewable energy, Broer said Box will work closely with its colocation providers. Thus far Box has worked closely with Equinix for its colocation needs.
For colocation providers, the benefits of increasing renewables in their energy mix could go beyond keeping Box on their customer lists. “They start to realize if they can get more renewable energy, then that’s probably a selling point for them in the future,” Broer said.
As Box gets bigger, it could be in a position to look far and wide for ideal locations for entire data halls on a wholesale basis. And perhaps someday it will be able to build its own data centers and pursue renewable energy projects like Apple or Facebook. For now, though, it’s consuming more and more space in retail arrangements with colocation providers, Broer said.
“I needed a new cage in the spring, I needed a second cage in the summer, and I’m asking for another one in the fall,” he said, speaking of the company’s expanding infrastructure footprint.
2:00p | Microsoft Begins Bulk Purchases of Wind Energy
Microsoft Utility Architect Brian Janous, shown during his keynote at Data Center World Spring, says the company has made its first purchase of utility-scale renewable energy. (Photo: Josh Ater)
Microsoft has begun purchasing renewable energy to support its data center infrastructure, signing a 20-year power purchase agreement for wind energy in Texas that will be funded in part by proceeds from Microsoft’s carbon fee.
This is Microsoft’s first “utility scale” purchase of renewable energy, a practice that has been used by Google to offset the carbon footprint of its data centers.
“We’ve purchased renewable energy credits, but this is the first time we’ve entered into a long-term purchase agreement,” said Microsoft Utility Architect Brian Janous.
The project consists of two 55-megawatt wind farms totaling 110 megawatts of wind power, located 70 miles west of Fort Worth, Texas. At peak power, that’s enough for 55,000 homes.
Last year, the EPA recognized Microsoft for purchasing 1.9 billion kilowatt hours of renewable energy credits, making it the second largest purchaser overall.
Funded by Internal Carbon Fee
The carbon fee is the cornerstone of Microsoft’s commitment to renewable energy and becoming carbon neutral. The company instituted an internal carbon fee that’s designed to increase the company’s costs for using carbon-based forms of energy, to help curb the usage and move the company towards greener pastures.
By placing a dollar value on a metric ton of carbon, Microsoft is building environmental sustainability into its long term business planning and creating a blueprint for more renewable energy purchases going forward.
“We’re doing everything we can to lower energy cost, using outside air and reducing the amount of electricity we use per workload, but our demand is still going up,” Janous says, adding that data center energy efficiency is just one part of a larger puzzle.
“When we look at the way energy is consumed, it’s a small part of it,” he said. “A tremendous amount of energy is lost before that electron gets to my meter. The efficiency of power from the electric grid is in the mid 30s. There’s room for improvement in server design as well. The best way is to secure renewable energy.”
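Janous's "mid 30s" figure works out roughly as follows, assuming an illustrative 35 percent end-to-end efficiency from primary fuel to the data center meter:

```python
# Rough arithmetic behind the "mid 30s" remark, assuming 35 percent end-to-end
# efficiency from primary fuel to the data center meter (an illustrative figure).
grid_efficiency = 0.35
primary_kwh_per_delivered_kwh = 1 / grid_efficiency
print(f"~{primary_kwh_per_delivered_kwh:.1f} kWh of primary energy per kWh at the meter")
print(f"~{(1 - grid_efficiency) * 100:.0f}% of the energy is lost before it reaches the data center")
```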
Greening The Grid
This deal will not power Microsoft datacenters directly. Indirectly, the company’s San Antonio data center draws power from the Texas power grid, and once operational the wind farm will contribute wind-generated electricity to this same grid, reducing the overall amount of emissions associated with operating these facilities.
“This is a first step among many on this journey to reduce our carbon footprint,” said Janous. In addition to looking to renewable energy, the company is vigorously trying to lower the amount of electricity used per unit of compute and to reduce the overall energy consumption of the cloud.
2:00p | RagingWire Charts its Next Phase of Growth
RagingWire is expanding on both coasts, a process that will continue and perhaps accelerate under the new ownership from NTT Communications. Here’s a look at the exterior of the company’s Ashburn data center. (Photo: Rich Miller)
Acquisitions usually bring change – changes in personnel, culture and direction. For RagingWire Data Centers, it will mean more of the same, although perhaps on a larger scale.
Last week RagingWire announced that Japan’s NTT Communications is acquiring 80 percent of the company. But rather than being absorbed by the global telecom titan, RagingWire will retain its brand and management team and operate as an autonomous unit within NTT.
It’s not the first time we’ve seen this approach recently. IT infrastructure provider SoftLayer is maintaining its brand and operating as an autonomous unit within IBM, which acquired the company earlier this year. Why are these companies taking this route?
“We looked at whether we wanted to do an initial public offering or take on more debt,” said Doug Adams, Senior Vice President and Chief Revenue Officer of RagingWire. “We decided that the right strategy was to find a partner.”
Growth in Capacity and Staff
In NTT, RagingWire found a partner in need of data center capacity and feet on the ground in the United States. The Sacramento-based colocation provider has been in expansion mode on both fronts, having added more than 100 staff over the past year as it ramped up to bring another 20 megawatts of data center capacity online. So it made a deal in which NTT would acquire 80 percent of the company, with existing management retaining a 20 percent stake and an ongoing role.
“They believe in the strength of our brand and our employees,” Adams said of NTT. “This is not a roll-up play. They want to see us grow, and they want to keep us autonomous.”
RagingWire has annual revenues of approximately $85 million. It recently began construction of a new 150,000 square foot data center in Sacramento and will soon break ground on a 78-acre parcel of land in Ashburn, Virginia, where it plans to build up to 1.5 million square feet of data center space.
Doubling NTT’s North American Footprint
The deal will more than double NTT Com’s data center footprint in the U.S., where it currently has data centers in northern Virginia and Silicon Valley. NTT says the additional 650,000 square feet of space operated by RagingWire will allow it to meet strong demand for data center services in North America. It also positions NTT for future growth, as RagingWire has an expansion underway in Sacramento and has acquired property for a large campus in the key data center hub of Ashburn, Virginia.
NTT America provides managed hosting services out of facilities in Ashburn as well as California facilities in San Jose and Santa Clara in Silicon Valley. In buying control of RagingWire, NTT is making a major push into the market for colocation and wholesale data center space, rather than seek additional space for managed services.
“We’re a colocation company and that’s what we do,” said Adams. “We’re going to remain carrier neutral and not offer services. We’re not going to take on a services focus.”
The deal brings some immediate strategic benefits for both companies. NTT now has data center capacity to support the expansion needs of clients seeking space in the U.S., while RagingWire customers can now access space in NTT’s global network of 150 data centers.
Geographic Expansion Likely
NTT’s financial resources will also make it easier for RagingWire to expand its geographic footprint.
“The most immediate benefit is that it helps us with our growth needs,” said Adams. “We’re looking to expand in strategic Tier 1 markets.”
RagingWire already has capacity in two of the six “Tier 1” data center markets – northern California and northern Virginia – meaning the potential expansion markets under consideration would likely include Dallas, Chicago, greater New York or Los Angeles.
Why use RagingWire as a vehicle for North American expansion? A key factor is that RagingWire has an in-house construction unit, and experience with expansion projects in busy data center markets, according to Chief Operating Officer Jason Weckworth.
“We have a very competitive cost to build,” said Weckworth. “We remain about 15 to 20 percent below the average (cost per megawatt). We’re expanding fast, and this gives us the ability to accelerate that growth.”
2:33p | StumbleUpon’s New Home Is RagingWire’s Sacramento Data Center
An overhead view of the chiller plant at RagingWire Enterprise Solution in Sacramento, Calif.
StumbleUpon has selected RagingWire’s Sacramento data center campus as its new home. The fast-growing social media company is currently in the process of moving from its current data center in Silicon Valley into RagingWire, which will serve as its main production data center.
Founded in 2001, StumbleUpon touts 30 million users and 100,000 advertisers. It has fine-tuned, and continues to fine-tune, the art of the recommendation, something many companies are scrambling to accomplish these days.
This is a meaningful customer win for RagingWire, both in terms of size and visibility of the client. The StumbleUpon HBase datastore is currently running across 150 servers and contains more than 12 billion rows and 50 terabytes of data. HBase is just one of many technologies utilized in the data center environment which also houses an additional 750 servers. The datastore serves as the foundation of StumbleUpon’s discovery platform, where instead of searching the web, users “stumble” to encounter recommended content from peers, page ratings, interest mapping, and social networking.
Big-Time Database
Many StumbleUpon users have a long history of stumbles, creating a ton of information. “We have 42 billion rows just in one database, which we’re currently migrating to Aerospike. Aerospike is incredibly fast and very scalable,” said Paul Hands, Operations Director at StumbleUpon.
For those who don’t know StumbleUpon and happen to be at work right now, don’t check it out until you have some free time, because it’s a website that alters the space-time continuum so that several hours pass by in a matter of minutes. At least it feels that way. It is one of the most addictive things on the web.
Users define their interests and the service finds websites and content that it thinks they’ll enjoy. By pressing “Stumble” another page is served up. A user can stumble away for hours looking at content across a range of their interests, while giving pages the thumbs up or thumbs down. The system learns what types of sites each user likes. The longer StumbleUpon is used, the more accurately it’s able to match you up with something you’ll find interesting, but it’s pretty good at this out of the gate.
“We recommend good content, original, not-seen-before content,” said Hands. “We build models behind the scenes with very sophisticated recommendation technology. Because of the sheer number of sites on the internet, and the number of users, we end up with a large amount of data. We have every stumble ever recorded; that’s a huge database.”
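As a purely hypothetical sketch, and emphatically not StumbleUpon's actual schema, per-user stumble history could be modeled in HBase along these lines, using the open-source happybase client; the table, column family and host names are made up.

```python
# Purely hypothetical sketch of a per-user stumble-history table in HBase,
# written with the open-source happybase client. This is NOT StumbleUpon's
# actual schema; the table, column family and Thrift host names are made up.
import happybase

connection = happybase.Connection('hbase-thrift.example.internal')  # placeholder host
table = connection.table('stumbles')

def record_stumble(user_id, ts, url, rating):
    """Store one stumble. Row key = user id + reversed timestamp, so a user's
    most recent stumbles sort first and can be read with a prefix scan."""
    row_key = f"{user_id}:{2**63 - ts:020d}".encode()
    table.put(row_key, {
        b'activity:url': url.encode(),
        b'activity:rating': str(rating).encode(),  # +1 thumbs up, -1 thumbs down
    })

def recent_stumbles(user_id, limit=10):
    """Return the most recent stumbles for a user via a row-prefix scan."""
    return list(table.scan(row_prefix=f"{user_id}:".encode(), limit=limit))
```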
Why RagingWire?
Hands knows infrastructure from his work as a data center manager at Google. When he emphatically mentions that he was impressed with RagingWire’s Sacramento facility, it’s noteworthy.
“We spent three to four months looking at candidates,” said Hands. “The cost structure in the existing infrastructure wasn’t right, and it was inefficient infrastructure, not power dense enough for our needs.
“We spent a lot of time negotiating back and forth. We discovered RagingWire was the only realistic choice – the pricing structure was excellent, the connectivity, state-of-the-art design, I was blown away. RagingWire was easy to deal with, very transparent.”
StumbleUpon presented RagingWire with a variety of “what if” scenarios, and the company’s responses helped to seal the deal. “I’d ask stuff like ‘what happens when this generator goes out, how do you react?’ You can sense when someone’s being evasive or economical with the truth,” said Hands. “RagingWire was upfront about everything.”
Hands says StumbleUpon wasn’t comfortable having its primary facility in an earthquake zone, which provided an additional reason to seek out an alternative.
Growing Mobile Usage
StumbleUpon is growing and profitable. Mobile usage represents a growing opportunity for the company, as it’s easy to casually stumble through webpages to kill a few minutes from time to time. “Forty percent of all our stumbles are mobile, up 25 percent from last year,” said Mike Mayzel, Director of Communications for StumbleUpon.
The company just made its first acquisition in September, a video discovery company called 5by. “It’s a very new company doing interesting things. They’re going to remain working on their app,” said Mayzel. 5by does a very similar thing to StumbleUpon in terms of recommendations, but with video. The company believes this is where advertising is moving, and where the content opportunity lies next.
It serves 30 million users but has only 70 employees. “We’re now hiring. We’re looking for people who want to live and breathe big data,” said Mayzel.
3:40p | Enabling the Move to Data Center Switching Fabric
As networks expand and growing demand challenges resource delivery, data center environments must support the next generation of networking technologies. Why? The growth of the modern cloud, and of the workloads, data, and applications it carries, will without a doubt continue.
For the data center to evolve, the network must be modernized. This modern data center network needs to respond to the requirements of new technology such as server and desktop virtualization. It also needs to deliver a high-quality user experience, especially for real-time applications such as video, collaboration and video surveillance, which executives now consider essential to their organizations’ responsiveness, creativity and security. As the number of mobile devices begins to overtake desk-bound and office-bound equipment, the network must also support the plethora of smartphones, tablets and other mobile devices infiltrating the organization, which are often no longer under the control of the IT team and thus cannot be tuned for application delivery.
Virtualization, new applications and new devices require moving away from the old multi-tier network architecture in the data center to a true switching fabric that provides low-latency, any-to-any connectivity. In this white paper from Alcatel-Lucent, you will learn why the traditional data center network infrastructure is under extreme pressure, what changes need to happen to modernize the data center network infrastructure, and how an application fluent approach can help ensure a successful step-by-step transition toward the next-generation enterprise data center switching fabric.
Download this whitepaper today to learn about:
- Enabling multi-site data center models
- Creating private and hybrid cloud environments
- Incorporating virtual network profiles
- How to bring application fluency into the data center switching model
- How to modernize your network to create direct data center optimizations
The data center infrastructure will continue to undergo a rapid transformation to drive down costs and improve the end-user experience in the face of rapidly evolving technology trends. The Alcatel-Lucent Enterprise Application Fluent Network helps enterprises ensure a high-quality end-user experience and more simplified operations. As dependence on the modern data center continues to increase, improving how applications and resources are delivered to the end user will become a key part of creating a truly efficient environment.
4:00p | Dell Powers Reel FX Animated Film “Free Birds”
The Thanksgiving-themed film, titled “Free Birds,” benefited from the computing power of Dell.
Boring servers whirring away in a data center never seem exciting until they are used to power an animated film and show off that technology in full glorious color and clarity. Reel FX has done just that, and used Dell servers to develop, produce and animate its first full-length feature film, “Free Birds.” The movie features the voices of Owen Wilson, Woody Harrelson and Amy Poehler, and is about two turkeys who travel back in time to try to take themselves off the Thanksgiving menu.
After 20 years of making animated content for commercials and interactive projects, doing a full-length movie was a massive undertaking for Reel FX, and the studio relied on a complete Dell solution comprising Dell Precision workstations, Dell Latitude laptops, Dell PowerEdge servers, Dell Networking, Dell Storage, and Dell UltraSharp monitors to meet the intensive demands of the project under a condensed timeline. For Reel FX, “Free Birds” is a testament to the fact that “with incredible machines in the hands of brilliant, creative artists, anything is possible,” said Chuck Peil, the vice president of business development at the studio.
“In a typical animated film, as you near the end of production you’re producing 35-40, maybe 50 shots a week,” said Kyle Clark, chief operating officer at Reel FX. “In ‘Free Birds,’ because the schedule was compressed, we were doing upwards of 200 shots a week. So the technology had to be on point and very consistent. Any downtime minutes were critical at that point.”
With its Dell-powered render farm, Reel FX increased processing power to 12 cores per machine and moved from 24GB to 48GB systems with an upgrade to Dell PowerEdge C6220 rack servers, allowing them to reduce render time per frame by 30 percent. Last year, Reel FX switched from its previous vendor entirely to Dell Networking for the low latency, high speeds and full breadth of product range. With Dell Networking S55 and S4810 high-performance switches, Reel FX was able to increase network throughput, taking the studio from a 10Gb backbone to a 20Gb backbone.
“Free Birds” contains a large number of feathered turkeys, dense forest scenes, and many complex effects – all elements that require a lot of processing power to generate and as many iterations as possible in order to define and refine the look of the finished frames. To meet these high demands, artists and animators at Reel FX used Dell Precision T5600 workstations, which offer the high-end graphics and power they need to run compute-intensive professional applications including Side Effects Houdini, The Foundry NUKE, Autodesk Maya and a home-grown plug-in for Maya to create the characters’ feathers. To manage its dynamic software environment in a limited amount of time and with a limited amount of man hours, Reel FX used the Dell KACE K1000 Systems Management Appliance.
“We’ve been partners with Dell for a very long time. They’ve provided great customer service and are always there when we get into a bind at the end of production. There’s someone we can call, they respond in a timely manner, and we feel like they understand our business,” said Ross Moshell, director of business technology for Reel FX. “And then of course from the pure technology perspective, we benchmark our machines, processors and infrastructure on a very regular basis and Dell continues to win in those benchmarks.”
4:11p | Google Data Center Investment in Finland Tops $1 Billion
The Google data center in Hamina, Finland, which was formerly a newsprint plant. Google announced major data center expansions this week.
Google’s data center spending continues to soar. The Internet giant announced a €450 million (about $608 million) expansion at its Hamina data center in Finland. This comes in addition to an already announced €350 million (about $473 million) investment.
Worldwide, the company recorded a whopping $2.29 billion in capital expenditures in the third quarter of 2013 alone, driven primarily by massive expansion projects, and there is no sign of that spending subsiding.
“As demand grows for our products, from YouTube to Gmail, we’re investing hundreds of millions of euros in expanding our European data centres,” says Anni Rokainen, Google Finland Country Manager. “This investment underlines our commitment to working to help Finland take advantage of all the economic benefits from the Internet.”
Finland Project Is Very Efficient and Green
Google purchased a 60-year-old paper mill, the Summa Mill, in March of 2009 from Finnish paper company Stora Enso. The first phase of the project to convert it into a modern data center became operational in September 2011, and now serves Google users across Europe and around the world. The data center is one of Google’s most advanced and efficient worldwide, employing seawater from the Bay of Finland in its high-tech cooling system.
Starting in 2015, the data center will be primarily powered by wind energy via a new onshore wind park, which was announced last June. The company will sign additional agreements as it grows, with the goal of powering the data center with 100 percent renewable energy.
 View of the data center and a wind turbine. (Photo: Google.)
Economic Impacts to Finnish Community
The initial construction work turning the paper mill’s first machine hall into a data center lasted just over 18 months. At peak, the new construction will provide work for approximately 800 engineering and construction workers. The facility currently supports 125 full-time and contractor roles, a number set to expand alongside the facility.
This investment comes at an advantageous time for Finland, as the paper industry has been hit hard. Locating one of its most advanced data centers in the Kotka-Hamina Region helps boost the tech industry at large.
“Finland needs more foreign direct investments in order to enhance our economy, growth and employment. The government accepted the national investment promotion strategy last December,” said Jyrki Katainen, the Prime Minister of Finland. “In the strategy, the ICT sector, including data centers, has been emphasized as one of the priority sectors. Therefore, Google’s investment decision is important for us and we welcome it warmly.”
Dieter Kern, Google’s Data Center Manager, noted, “We’ve received a wonderful welcome in Finland and are delighted by the country’s strong infrastructure and business-friendly environment. That’s why we’re happy to build out our capacity to deliver the lightning fast, easy-to-use services that people expect from Google.”
The increased investment comes in addition to a strong community outreach program in the Hamina region. In the spring of 2013, Google announced a new partnership with Aalto University and the regional development agency, Cursor. The partnership supports programs to improve the use of the Internet by local small- and medium-sized businesses.
“Our ambition is nothing less than to jump-start Internet innovation in Eastern Finland,” says Will Cardwell, Aalto University Senior Advisor, Global Alliances. “The Google data center in Hamina offers Eastern Finland a tremendous opportunity to jump from the industrial to digital age.”
4:20p | Partners Building on OpenStack Release Innovations
The OpenStack event in Hong Kong this week has generated news around networking and switches.
Enabled by the recent eighth edition of OpenStack (Havana) and heading into OpenStack Summit Hong Kong this week, vendors Brocade, Midokura, Cloudian and Mellanox have OpenStack-related announcements, highlighting storage, networking and switch innovations.
Brocade’s DNRM Blueprint
At the OpenStack event, Brocade (BRCD) will discuss and demonstrate its Neutron blueprint proposal to extend the advanced networking capabilities of the OpenStack Networking framework. The Dynamic Network Resource Manager (DNRM) blueprint is intended to simplify the deployment and management of physical and virtual networking resources within cloud infrastructures. With a native, heterogeneous approach delivered by DNRM, OpenStack cloud environments can address the needs of specific applications or services. Features include policy-based management of physical and virtual network resources from multiple vendors; Supervisor, Interceptor, Plugins and Appliance Container components; support for NFV; and continued work with partners in the OpenStack community to deliver DNRM capabilities as part of the Spring 2014 OpenStack release.
“Public and private clouds are evolving from basic, homogenous entities to rich, service-oriented clouds that combine best-of-breed solutions, including both physical and virtual resources,” said Ken Cheng, CTO and Vice President, Corporate Development and Emerging Business at Brocade. “To ensure these next-generation cloud architectures properly serve customers, Brocade is continuing to expand its investment and participation in open projects such as OpenStack and OpenDaylight. It is our belief that contributions such as the Dynamic Network Resource Manager equip customers to build the next-generation services cloud to deliver the flexibility and agility it needs.”
Midokura Releases Updated MidoNet
Midokura announced the latest release of its MidoNet technology, with Layer 2 Gateway support, advanced Layer 3 Gateway functionality and other new features. MidoNet offers integration and full support for OpenStack Grizzly and Havana releases, providing an efficient, simple and seamless network virtualization integration for OpenStack customers. The new release boasts advanced L3 gateway functionality, new management tools, and a command line interface.
“With a deep understanding of OpenStack, our team has been tirelessly driving enhancements to our game-changing MidoNet network virtualization technology,” said Dan Mihai Dumitriu, co-founder and CEO of Midokura. “With our new release, Midokura solidifies its disruptive position in the market and MidoNet as the natural network virtualization platform for the OpenStack community. We will continue to innovate alongside current OpenStack advances, and help advance the market.”
Mellanox Equipment Accepted to Havana Distribution
Mellanox (MLNX) announced that several key InfiniBand and Ethernet adapter and switch components for Nova, Cinder and Neutron have been accepted into OpenStack’s newly released Havana distribution. Leveraging Mellanox 10/40GbE or FDR 56Gb/s adapters and switches and the OpenStack Cinder block storage and Neutron plug-ins, cloud vendors can significantly improve storage access performance and run virtual machine traffic with bare-metal performance, while enjoying hardened security and QoS, all delivered in a simple and tightly integrated package.
“We are pleased to have Mellanox’s early support for the Red Hat Certified Solution Marketplace and collaborate with them on OpenStack’s enterprise advancement,” said Mike Werner, senior director, Global Ecosystems, Red Hat. “We look forward to OpenStack users seeing the intersection of Red Hat and Mellanox’s technologies during their deployments of the Havana distribution.”
Cloudian Brings Object Storage Product to Market
Cloudian announced a coordinated effort with partners Penguin Computing and Intel to bring its production-hardened object-storage product to market as a bundled turnkey solution for OpenStack users.
Built by Penguin Computing, the new scale-out, multi-datacenter capable solution boasts ultra-low power consumption and high drive density, enabled by the Intel Atom C2000 processor’s 64-bit architecture. The solution will be provided to enterprise users through Penguin’s global network of VAR partners. Cloudian is also available as software-only for customers to install on hardware of their choice.
“We are excited to work with Penguin Computing and Intel to deliver our carrier-grade storage on OpenStack, which precisely addresses the demands of our customers,” said Michael Tso, Founder and CEO, Cloudian.