Data Center Knowledge | News and analysis for the data center industry

Thursday, March 6th, 2014

    12:30p
    Netronome Network Cards Accelerate SDN and NFV Designs

    Networking solutions provider Netronome has launched a new platform architecture that augments virtual switch implementations in standard servers with hardware-acceleration NICs for software-defined networking (SDN) and network functions virtualization (NFV) designs. The new products include a suite of FlowNIC PCIe Gen3 cards that scale up to 200 Gbps, along with a new FlowEnvironment software package that provides standards-based APIs and configuration protocols for virtual switch offload and acceleration.

    “Netronome solves the scalability problem of virtual switch implementations in the intelligent network locations where the highest throughput and I/O densities are required, while maintaining the rapid evolution of a software-based edge,” said Niel Viljoen, founder and CEO of Netronome. “With broad applicability to any virtualized server, the new products are optimized for use in servers running network and security applications, such as SDN middleboxes, SDN gateways and NFV appliances.”

    With up to 4 PCIe Gen3 x8 interfaces, the FlowNIC family packs high performance and port density into a PCIe Gen3 adapter, including 2×40, 4×40, 1×100 and 2×100 gigabit Ethernet options. The cards feature 216 programmable cores to keep pace with the rapid change in SDN protocols and standards. Additional hardware accelerators are provided for cryptography, nanosecond-accuracy time-stamping, SR-IOV, VMDq, and RoCE. Massive on-chip and on-board memories deliver 24M flow table entries and 128K wildcard rules.
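
    As a rough back-of-envelope check (these numbers are standard PCIe arithmetic, not vendor data, and PCIe protocol overhead is ignored), four Gen3 x8 host interfaces provide enough bandwidth to feed 200 Gbps of Ethernet ports:

        # Back-of-envelope PCIe math; assumes 8 GT/s per lane and 128b/130b
        # encoding for Gen3, and ignores TLP/DLLP protocol overhead.
        GT_PER_LANE = 8.0            # PCIe Gen3 raw rate per lane (GT/s)
        ENCODING = 128.0 / 130.0     # Gen3 line-encoding efficiency
        LANES = 8                    # x8 interface
        INTERFACES = 4

        per_iface_gbps = GT_PER_LANE * ENCODING * LANES   # ~63 Gbps each way
        total_gbps = per_iface_gbps * INTERFACES          # ~252 Gbps aggregate
        print(f"{per_iface_gbps:.0f} Gbps per x8 interface, {total_gbps:.0f} Gbps total")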

    “NTT Communications has been at the forefront of advancing SDN technologies with commercial use for our cloud computing, datacenter and network services for many years,” said Mr. Yukio Ito, director, member of the board, and senior vice president of service infrastructure at NTT Communications. “We’re evaluating Netronome’s solution of processors, software and NICs and expect to use them to extend our SDN offering into the Cloud-VPN Gateway with close collaboration with NTT Innovation Institute, Inc.”

    Broad support for networking standards

    FlowEnvironment software from Netronome delivers a more than 20-fold increase in virtual switching performance and significantly increases the number of virtual machine instances available per server. The FlowEnvironment includes standards-compliant support for Open vSwitch (OVS) 2.0, OpenFlow 1.4, Intel DPDK, and network virtualization protocols such as NV-GRE and VXLAN. The production-ready software provides standard APIs and is fully supported across Netronome’s FlowProcessors.
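
    Netronome positions FlowEnvironment behind these standard interfaces rather than a proprietary API. As an illustration of the kind of flow programming such an offload accelerates, the sketch below drives a stock Open vSwitch bridge with the standard ovs-vsctl and ovs-ofctl tools; it does not use any Netronome software, and the bridge and port names are invented for the example.

        # Illustrative only: program a flow on a plain Open vSwitch bridge using
        # the standard OVS command-line tools. No Netronome-specific API is used;
        # bridge and port names are made up for the example.
        import subprocess

        def sh(cmd):
            """Run a shell command and raise if it fails."""
            subprocess.run(cmd, shell=True, check=True)

        sh("ovs-vsctl --may-exist add-br br0")          # create the virtual switch
        sh("ovs-vsctl --may-exist add-port br0 eth1")   # attach a physical NIC port
        sh("ovs-vsctl --may-exist add-port br0 vnet0")  # attach a VM's virtual NIC

        # Forward traffic arriving on OpenFlow port 1 (eth1) to port 2 (vnet0).
        sh("ovs-ofctl add-flow br0 'in_port=1,actions=output:2'")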

    “Operational SDN and NFV environments need hardware to be effective – it’s not just about software. Netronome is innovating in the data plane by integrating key SDN, OpenFlow, and NFV functionality into the design of their new processors,” said Michael Howard, principal analyst at Infonetics Research. “Service providers will want to examine the possibilities of using equipment with this silicon for their SDN and NFV deployments.”

    “Procera’s Network Application Visibility Library (NAVL) uses sophisticated techniques to accurately identify dynamic applications and protocols,” said Shawn Sweeney, director of product management, Procera. “Maintaining a small x86 processing and memory footprint is imperative for high performance DPI applications. With the standard APIs available in Netronome’s FlowEnvironment software, the NAVL engine and Netronome’s FlowNICs work in tandem to provide unparalleled classification accuracy with high performance in standard servers.”

    1:06p
    New Owners for Philadelphia Carrier Hotel at 401 North Broad


    Amerimar Enterprises and Abrams Capital have acquired 401 North Broad Street, the major data center and carrier hotel in Philadelphia, and will partner with telecom industry veteran Hunter Newby to own and operate the property. The new owners will invest in and reposition the 1.3 million square foot, fiber-rich building.

    “Over the past 30 years, 401 North Broad was strategically repositioned to become the preeminent telecom carrier hotel in the Philadelphia region under the stewardship of The Stillman Group,” said Amerimar CEO Jerry Marshall. “Amerimar looks forward to continuing with the repositioning of the property through our $70 million capital improvement program. 401 North Broad, with over 80 networks, is a logical addition to our portfolio of carrier hotels, and we are very excited to partner with Hunter Newby on yet another telecom property.”

    The building is about 70 percent leased and has good cash flow, according to Marshall. “The most important thing about the building is the connectivity,” he said. “Any business coming to this city looks at this place. It should win any telecom, network deal until it’s full. It’s been in the hands of an owner who was not really set up to put a lot of capital in the property, and it will benefit tremendously.”

    New Meet-Me-Room to be Added

    Perhaps the most noteworthy investment will be the creation of a 25,000 square foot Meet-Me-Room where tenants will be able to make physical connections between their networks. The new owners will also update the facades and put in new windows, as well as invest in the interior by improving existing shaftways and adding new ones.

    Amerimar specializes in redeveloping and repositioning real estate assets, with its projects including the nearby data center hub at 1500 Spring Garden in Philadelphia.

    Amerimar and Newby have teamed on two other deals, both important carrier hotels: 325 Hudson in New York and the Kansas City data hub at 1102 Grand, which has seen its fair share of upgrades post-acquisition. “I’d say we have an NFL city strategy is the best way to put it,” said Marshall. “We want to buy the most important, or one of the most important buildings in the city where we can take it to the next level, improving the infrastructure for the tenants and implementing our own Meet-Me-Rooms. There’s no monthly recurring costs for cross connects, ever. By owning the real estate and owning the business, we have a lower cost structure so it benefits the customer.”

    Newby is well connected in fiber and interconnection circles from his experience building the Telx Group’s business at the 60 Hudson Street carrier hotel in Manhattan.

    “401 North Broad Street is an extraordinary carrier hotel,” said Newby.  “Not only is it a major junction point for the north-south domestic fiber routes on the east coast, but it is also home to several transatlantic submarine cable systems, making it a renowned international gateway.”

    Philadelphia’s Key Data Center Hub

    401 North Broad was originally developed in 1931. The property was redeveloped into the leading carrier-neutral facility between New York and Virginia and serves as a major traffic hub, as well as an interconnection point between carriers, service providers and enterprise customers. The building is also a gateway to long-haul fiber in the region and offers direct-access fiber routes to Europe.

    The big tenant at 401 North Broad is Sungard Availability, which is also the dominant colocation provider in the region. The key driver for business in the region, apart from local demand, is disaster recovery, and Sungard’s strong disaster recovery business is the reason for its dominance in the market. Philadelphia is located less than 100 miles from New York City and less than 160 miles from the busy northern Virginia market, placing it between two of the top data center markets in the United States. The Philadelphia data center market has seen slower growth as capital tends to be deployed in those nearby markets. However, this also means there is a lot of opportunity, given that Philadelphia is widely believed to be an underserved market.

    The Philadelphia economy is based on diverse sectors that include health care and medical services, manufacturing, and financial services.

    “We’re very proud of what we’ve accomplished at 401 North Broad over the years, turning a somewhat bedraggled and relatively empty, but historically significant, building into this extraordinary asset,” said Abbott Stillman, Chairman of The Stillman Group. “I am truly delighted to have passed it on to Jerry Marshall and his partners and team, who collectively will, I’m sure, take this property to the next level. They are very worthy and honorable successors, and we know they will meet with the great success they deserve.”

    Long History for Partnership

    The acquisition is the culmination of a longstanding partnership between Newby and Marshall, a rich collaboration that has so far rejuvenated all the properties it touches.

    “Back around 1990-91, we (Amerimar) were developing a neighboring property to 401 North Broad,” said Marshall. “We bought it to make it a telecom hotel. We got a couple of leases going, but it wasn’t turning into a carrier hotel. I found out Hunter was doing the Carrier Neutral Meet Me Room at 60 Hudson for Telx, and reached out to Hunter and asked for his thoughts. He said ‘What you have here is an amazing data center, but the telecom hotel in Philly is 401 Broad. All of the carriers are there, and there’s nothing you can do to get them to leave.’ I always wanted to own 401 North Broad after I found out we couldn’t make the neighboring property a carrier hotel.

    “I’d send Hunter opportunities to get his opinion,” Marshall continued. “When we bought 325 Hudson, Hunter said ‘I know this building, it’s very important and we can make magic with this building.’ So we bought it completely full, and the magic is happening. We’re getting all the names that we would hope for. Over time we’ll continue to expand that Meet Me Room. Then I said, ‘Hunter, what else?’ He said ‘I have a couple of friends at 1102 Grand that have a lot of value but are probably not inclined to take it to the next step.’ We bought that, and we’ve been thrilled with the customer activity. We’re expanding, adding power and new customers are coming to the property. Once a week, we have a call with the business developer there, and it is the highlight of my week.”

    Debt financing for the acquisition was provided by Starwood Property Trust. Ropes & Gray LLP, with a team led by partner Walter R. McCabe III, represented Amerimar and Abrams Capital on the acquisition.

    1:30p
    Software-Defined Availability: Redefining Uptime for the Cloud

    Dave LeClair is senior director of strategy at Stratus Technologies, a provider of infrastructure-based solutions that keep applications running continuously in today’s always-on world.

    DAVE LeCLAIR
    Stratus Technologies

    Software-defined architectures have been redefining nearly every aspect of our digital existence — from virtualized data centers to the systems that regulate the air temperature in your car. So, where does the concept of software-defined functionality go next? I believe one of the most significant implementations of the concept is just around the bend — and it’s happening in the cloud.

    But, the two biggest obstacles to adopting cloud models are security and availability. And, availability is rapidly becoming the single biggest risk businesses face today as they move to the cloud. If your business relies on public or private clouds and those services go down, without planning for availability, you’re out of business. What’s clear is that traditional hardware-based availability strategies can’t provide the whole solution in the software-defined world of the cloud. Since most cloud infrastructures use commodity hardware and are designed for scale, failure is an ever-present reality. So, many businesses need to rethink application and infrastructure availability if they are to migrate to the cloud quickly and cost-effectively while still providing the required availability.

    Of course, you can build availability intelligence into the applications themselves. Indeed, native cloud applications are often built from the ground up with failure awareness and are designed to automatically restart workloads on new compute nodes to keep running.

    But, what about building availability into legacy applications? The cost of re-engineering these would be prohibitive — and would pose the risk of destabilizing the environment. Even for applications built with availability, a single point of failure in the control plane of your cloud can be just as devastating as a server going down.

    Defining a New Approach

    Enter software-defined availability. With this approach, failure prevention and recovery decisions are moved outside of the application layer to the underlying software layer. And unlike traditional hardware-based availability solutions, uptime is not dependent on a specific set of hardened servers. Availability is, in effect, abstracted from the application and from the hardware.

    This abstraction will enable businesses to do some really useful things to overcome the risks of unplanned downtime. For starters, they can dynamically create highly available systems by linking systems together — either via physical network connection or even using software-defined network connections — to create paired systems in real time with redundancy for high-availability or fault-tolerant levels of protection, as required.

    By abstracting availability, businesses – including line of business executives and IT acting as brokers of cloud services – can change the level of availability based on their current application requirements. This is extremely useful for applications that are mission-critical part of the time but not all of the time. For example, consider how useful this would be for a finance team, who could dynamically raise the availability level to mission critical for a financial application running during the last few days of the quarter when wrapping up the fiscal calendar, but who also could lower the availability at other times. Imagine applying the necessary fault-tolerance resources to ensure availability during critical times, while freeing up those resources the rest of the time. This dramatically reduces cost, complexity and risk, without compromising availability.

    Put simply, software-defined availability gives the line of business the control and flexibility to provide the right level of availability at the right time, on a per-workload basis, according to policies that business group defines. This is a game-changer and quite a departure from past “software availability” approaches that only provided “good enough” levels of availability based on static clustering capabilities. Taking this new approach one step further, imagine the possibilities when IT can provide a service catalog to the business? In effect, by “dialing-up” availability as needed based on a policy engine – all abstracted from the applications – IT can manage availability for the entire cloud environment holistically, and that’s a significant disruption that will unlock innovation for that organization.

    Tapping the Flexibility of the Cloud

    This kind of intelligent, dynamic software-defined availability for existing applications is only possible because businesses can take advantage of the elasticity and orchestration capabilities offered by the cloud. It also helps them leverage the inherent flexibility of open-source software for building clouds, such as OpenStack.

    With this policy engine that defines availability parameters for individual applications, IT can map availability requirements to specific business commitments, such as SLAs. IT also can specify compliance attributes for individual applications — such as credit card processing applications that must run without downtime in a PCI environment. The policy engine dynamically manages workloads so applications have the availability resources they require, when they require them.
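
    As a purely hypothetical sketch (not Stratus software; every name and level below is invented), a policy engine of the kind described might attach a baseline availability level, optional critical time windows, and a compliance tag to each application, and resolve the effective level on demand:

        # Hypothetical sketch of a per-application availability policy -- not
        # Stratus code. Application names, level names and the scheduling rule
        # are all invented for illustration.
        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class AvailabilityPolicy:
            app: str
            baseline: str                 # e.g. "high_availability"
            critical_windows: list = field(default_factory=list)  # (start, end, level)
            compliance: str = ""          # e.g. "PCI-DSS"

            def level_for(self, today):
                """Return the availability level in force on a given date."""
                for start, end, level in self.critical_windows:
                    if start <= today <= end:
                        return level
                return self.baseline

        # Finance app: fault tolerant only during the last days of the quarter.
        policy = AvailabilityPolicy(
            app="quarter-close",
            baseline="high_availability",
            critical_windows=[(date(2014, 3, 28), date(2014, 3, 31), "fault_tolerant")],
        )
        print(policy.level_for(date(2014, 3, 30)))   # fault_tolerant
        print(policy.level_for(date(2014, 4, 15)))   # high_availability

    In practice the engine, rather than the application, would hand these decisions to the orchestration layer, which is exactly the abstraction the article argues for.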

    This approach to software-defined availability also offers important advantages when developing new cloud applications. First, it dramatically simplifies development up-front, greatly reducing time-to-market for rolling out new or updated applications and getting content and functionality to users. But, just as important, it provides the flexibility to rapidly change availability requirements as the organization’s needs change, without having to “lift the hood” on the application code. IT can simply modify the policies – easy and cost-effective!

    This new software-defined availability approach also helps reduce complexity, eliminating the need to firewall mission-critical applications. This is a huge benefit that doesn’t require IT to build and maintain multiple environments with different availability requirements, which is messy. With software-defined availability, IT has one environment that can be tailored with availability to specific applications as needed.

    Bridging the Availability Gap

    Redefining availability, therefore, has clear advantages for companies building their own private or hybrid clouds. Companies can use this approach to fill the gaps in availability guarantees offered by many public cloud service providers. On the flip side, software-defined availability may also prove to be a viable solution for public cloud providers looking to meet their customers’ demands for mission-critical availability. That could be a real game-changer, helping to make public clouds “ready for prime time” for tier one business applications.

    Despite all the hype, the cloud is still in its early days. But, it’s already changing everything — how applications are written, deployed and managed. Applying legacy approaches to availability in the cloud doesn’t make sense. Software-defined availability represents the next-generation approach — one that uses the inherent elasticity of the cloud to meet the unique availability requirements of individual applications, at specific times and under specific circumstances.

    In an always-on world, availability is more critical than ever. And, software-defined availability is how we’ll meet that challenge in the brave new world being built out in the cloud era.


    2:00p
    Five Reasons to Consider Hybrid Cloud Computing

    There are many reasons to consider a hybrid cloud model. Bill Kleyman highlights five of them.

    Cloud computing has become a massive engine helping the corporate and end-user world interconnect at new, massive scales. There is a very real reason that the data center business continues to grow: the surge of cloud traffic. Here are two very interesting statistics from the latest Cisco Global Cloud Index report (a quick arithmetic check follows the list):

    • Annual global data center IP traffic will reach 7.7 zettabytes by the end of 2017. By 2017, global data center IP traffic will reach 644 exabytes per month (up from 214 exabytes per month in 2012).
    • Global data center IP traffic will nearly triple over the next 5 years. Overall, data center IP traffic will grow at a compound annual growth rate (CAGR) of 25 percent from 2012 to 2017.
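
    Those two figures are mutually consistent; a quick check using only the numbers quoted above:

        # Quick consistency check on the Cisco figures quoted above.
        monthly_2012, monthly_2017 = 214.0, 644.0   # exabytes per month
        growth = monthly_2017 / monthly_2012        # ~3.0x -> "nearly triple"
        cagr = growth ** (1.0 / 5) - 1              # five years, 2012-2017
        annual_2017_zb = monthly_2017 * 12 / 1000   # ~7.7 zettabytes per year
        print(f"{growth:.2f}x growth, {cagr:.1%} CAGR, {annual_2017_zb:.1f} ZB/year")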

    There is clear growth in the amount of traffic being pushed through the modern data center. Why? The user, the business and the technology have all evolved. The current generation revolves around a new “on-demand” lifestyle where workloads and information must be available anytime, anywhere and on any device. Mobility has become the new normal and cloud computing is the engine to deliver all of this next-generation content.

    So why move to the cloud? Or, why move to this specific type of cloud model? The future of the cloud landscape will revolve around a much more agnostic cloud infrastructure. At some point in the near future, almost all resources will be interspersed and shared between private and public instances. With that in mind, many still have questions around one of the most popular cloud models – the hybrid cloud environment. So let’s look at five very real use-cases where organizations (and users) leverage the power of the hybrid infrastructure.

    Business Strategy

    Every year your organization sits down to analyze the future strategic initiatives of your company. If you haven’t looked at some type of cloud model already, you may be falling behind your competition. One great reason to look at a hybrid cloud model is to make your enterprise more dynamically adaptive. First of all, it’s no longer that challenging to expand your existing data center infrastructure into a cloud model. Secondly, many organizations are already leveraging secondary data centers, application platforms, and data repositories which completely reside in the cloud. Remember, creating a hybrid cloud model doesn’t mean you have to buy space at Amazon Web Services. A diversified cloud solution can be as simple as a SaaS application with a direct connection into your data center or as large as a hot-hot disaster recovery platform running within a cloud-based data center provider.

    Content Delivery and Expanding the Edge

    Do you have a growing business? Maybe you already have a distributed data center platform but are having issues distributing rich content. Content delivery networks (CDNs) and edge computing are big components of a hybrid cloud. Basically, you can leverage remote data center sites which help you place rich or valuable information closer to the user. Here’s another use-case – what if you’re a compliance- or regulation-driven organization? What if you can’t have your information leave a certain region or area? Utilizing the hybrid cloud model and the data delivery mechanisms behind it, you’re able to intelligently control data flow throughout your cloud, as the sketch below illustrates.
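
    As a hypothetical sketch of that region-pinning idea (the sites, tags and function below are invented, not any particular CDN’s or cloud provider’s API), placement logic can simply refuse to replicate regulated data outside its home region:

        # Hypothetical illustration of compliance-aware placement -- not a real
        # CDN or cloud API. Site names and region tags are invented.
        EDGE_SITES = {
            "eu-frankfurt": "EU",
            "eu-amsterdam": "EU",
            "us-dallas":    "US",
            "us-ashburn":   "US",
        }

        def eligible_sites(required_region=None):
            """Edge sites a dataset may be replicated to.

            required_region: region the data must stay in, or None if the
            data is unrestricted and can go to any edge site.
            """
            if required_region is None:
                return sorted(EDGE_SITES)
            return sorted(s for s, r in EDGE_SITES.items() if r == required_region)

        print(eligible_sites("EU"))   # ['eu-amsterdam', 'eu-frankfurt']
        print(eligible_sites())       # all four sites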

    New-Era Business Growth Dynamics

    The modern business needs to scale very dynamically. One of the best and most cost-effective ways to do this is through a hybrid cloud model. Whether it’s an acquisition or an influx of users based on new demand, a hybrid cloud platform allows you to provision resources very quickly without having to invest in your own data center hardware. This can be a temporary use-case for a sudden growth in users, or a permanent model allowing your organization to create regionalized data centers for major offices. Unlike a few years ago, the speed at which organizations grow has changed dramatically. To keep up with competition and growing market demands, companies must adapt much faster and be ready to grow very dynamically. Utilizing a hybrid cloud computing model enables administrators and organizations to scale quickly, efficiently, and with a quantified cost.

    2:33p
    Zayo Acquires Dallas Provider CoreXchange

    zColo’s acquisition of CoreXchange gives the company two data suites at the Infomart carrier hotel in Dallas. (Photo: Rich Miller)

    zColo, the colocation division of international bandwidth provider Zayo Group, has acquired CoreXchange, a data center and managed services provider in Dallas, Texas.

    The purchase provides zColo with one new stand-alone data center, located at 8600 Harry Hines Boulevard, as well as an additional 12,000 square feet in CoreXchange’s suite in the Dallas Infomart at 1950 N. Stemmons Freeway. The transaction was funded with cash on hand.

    This increases the zColo footprint by 18,000 square feet in Dallas, to more than 34,000 square feet. Zayo currently operates colocation facilities at 2323 Bryan St. and the Dallas Infomart, where zColo will gain an additional suite with this purchase.

    While CoreXchange’s space is located entirely in Dallas, Texas, the company recently said that only 20 percent of its customers are actually within the region, thanks in part to its online ordering portal, ColoUnlimited. As part of the acquisition, Zayo will also assume ownership of ColoUnlimited. zColo will continue operating ColoUnlimited in the Dallas market before integrating it into Zayo’s recently announced Internet portal, Tranzact, in the second quarter of 2014. Tranzact will enable transactional ordering capabilities across zColo’s national data center footprint.

    Adding the CoreXchange facilities increases Zayo’s footprint and options for its customers in the Dallas metro area. Zayo has a strong fiber network in the Dallas area, spanning more than 500 route miles and reaching more than 250 on-net buildings.

    “Dallas is one of the world’s leading corporate headquarter hubs and has seen tremendous growth in the high tech and energy fields,” said Chris Morley, president at Zayo Group. “Expanding the zColo footprint not only provides more alternatives for colocation in the Dallas market, but provides easier access for customers to tap into Zayo’s international Bandwidth Infrastructure footprint.”

    CoreXchange was founded seven years ago by industry veterans from The Planet, Rackspace, NTT/Verio and Exodus Communications. One of those veterans was Peter Pathos from The Planet, which was acquired by GI Partners and merged with SoftLayer. The Planet, which was known for dedicated hosting by the time of its acquisition, was actually started as a colocation company. CoreXchange rose out of those foundations as The Planet moved away from colocation.

    Zayo has continuously built its zColo colocation business through acquisitions. The company acquired CoreNAP in Austin, Texas, in 2013, and also recently expanded in Miami, Florida.

     

    3:00p
    Open Networking Summit: Brocade Adds OpenFlow Support

    At the Open Networking Summit in Santa Clara, California, this week, Pluribus Networks announced inNetwork Analytics, Procera Networks’ NAVL engine was selected by GFI Software, and Brocade launched support for OpenFlow 1.3 across its IP portfolio of products.

    Brocade adds OpenFlow 1.3 support across portfolio. Brocade (BRCD) announced support for OpenFlow 1.3 across its IP portfolio of routing and switching products, extending the company’s leadership and evolutionary approach to Software-Defined Networking (SDN). OpenFlow 1.3 delivers a richer feature set required for commercial and enterprise networks to address complex network behavior and optimize performance for dynamic SDN applications. These features include Quality of Service (QoS), Q-in-Q, Group Tables, Active-Standby Controller, IPv6 and more. Support for OpenFlow 1.3 on the Brocade MLXe, CER and CES product families is planned for June 2014. “Brocade’s commitment to SDN is clear in the significant contributions to the technical leadership of the Open Networking Foundation and OpenFlow,” said Curt Beckmann, chair of the Forwarding Abstractions Working Group (FAWG) at ONF and Principal Architect at Brocade. “The real payoff of Brocade’s standards work comes as we provide deployable and compelling SDN solutions. Our latest example is the Flow-Aware Real-Time SDN Analytics OpenFlow application, for which Brocade has been named a finalist as part of the Open Networking Summit’s SDN Idol competition.”
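
    To make the feature list concrete, here is what an OpenFlow 1.3 group-table entry looks like, shown as a sketch against a stock Open vSwitch bridge rather than Brocade hardware (whose own CLI is not reproduced here); a “select” group spreads traffic across its buckets.

        # Illustrative only: an OpenFlow 1.3 "select" group exercised against a
        # stock Open vSwitch bridge, not Brocade gear. Bridge and port numbers
        # are invented for the example.
        import subprocess

        def sh(cmd):
            subprocess.run(cmd, shell=True, check=True)

        # Load-balance across ports 2 and 3, then steer port 1 traffic to the group.
        sh("ovs-ofctl -O OpenFlow13 add-group br0 "
           "'group_id=1,type=select,bucket=output:2,bucket=output:3'")
        sh("ovs-ofctl -O OpenFlow13 add-flow br0 'in_port=1,actions=group:1'")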

    Pluribus Networks launches inNetwork Analytics. Pluribus Networks announced its analytics solution, inNetwork Analytics, which empowers both NetOps and DevOps with faster time-to-problem resolution and seamless visibility into both overlay and underlay networks. Based on the company’s Server-Switch product line and Netvisor OS, the Freedom Analytics solution integrates into the user interfaces to show real-time and historical data using time machine technology. Uses include network and fabric assurance, application performance monitoring, network security and compliance reporting. The Pluribus Freedom Server-Switch brings advanced fabric-wide monitoring and analytics services directly into the network at a scale and performance unmatched by traditional network switches. By leveraging the Netvisor OS and Fusion-io drive technology, the Freedom Server-Switch can drive hundreds of thousands of IOPS to power the Freedom analytics engine. “Innovative solutions such as Pluribus inNetwork Analytics simplify compute, networking and virtualization in a unified platform,” said Will Hall, Fusion-io vice president of OEM sales. “Fusion-io is proud to provide industry-leading acceleration and reliability with the ioScale for the Pluribus inNetwork Analytics solution.”

    Procera NAVL engine selected by GFI Software.  Procera Networks (PKT) announced that its Network Application Visibility Library (NAVL) has been selected by GFI Software to enhance GFI WebMonitor, a web security solution. The NAVL software engine provides granular traffic visibility, intelligence and control for GFI’s solution so that IT managers are able to enhance Internet security, optimize bandwidth investments and usage, and monitor network activity for abuse and potential threats. “SMBs receive a rapid return on investment with GFI WebMonitor, immediately noticing improved productivity, less administrative time spent cleaning infected machines, and better use of bandwidth resources,” said Sergio Galindo, GM, Infrastructure Business Unit at GFI. “Using Procera’s NAVL solution enhances our product capabilities, enabling us to focus on our core competencies and more strategic activities that drive our bottom line so we can address the evolving concerns and demands of our SMB customers.”

    3:33p
    SGI Delivers Shared Memory System for Earth Simulator
    One of the compute blades for the SGI UV 2000 (Photo: SGI)

    SGI announced that the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has selected an SGI UV 2000 with the Intel Xeon processor E5-4600 v2 series for pre- and post-processing of offline grid simulations for its Earth Simulator supercomputer. Built in collaboration with NEC, the new system combines Intel’s latest high-performance processors with SGI’s scalable open shared memory platform. The supercomputer broke the previous SGI record for SPEC OMP2012, recording a 13.6 percent improvement with a SPECompG_base2012 score of 61.9.

    “SGI UV 2000 powered by Intel Xeon processor E5-4600 v2 series propels us to the next level of large scale, fine-grained simulations that were otherwise unthinkable due to memory constraints,” said Dr. Toshiyuki Asano, group leader, Simulation Technology Application Research Group, Earth Simulation Center of JAMSTEC. “Furthermore, due to the significant and specialized capability of the supercomputer, we expect to see a substantial increase in industrial use through our strategic partnership initiatives. Through the Strategic Program of Earth Simulator for advancement of industrial use, we are already seeing research projects needing entire memory capacity of UV 2000.”

    The newly installed supercomputer is one of the largest shared memory systems, powered by 2,560 cores of the Intel Xeon processor E5-4600 v2 series and providing 49.152 Tflops of computing capacity with 32TB of shared memory in a single system instance. Alongside the UV 2000, an SGI UV 20 powered by 40 cores of the Intel Xeon processor E5-4600 v2 series enables visualization of the simulation results, and SGI InfiniteStorage 17000 provides a RAID storage solution with 240TB of capacity under a Network File System (NFS) environment.
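
    Those headline numbers hang together: 49.152 Tflops over 2,560 cores works out to 19.2 Gflops per core, consistent with an Ivy Bridge-class Xeon doing 8 double-precision operations per cycle (AVX) at 2.4 GHz. The flops-per-cycle and implied clock figures are assumptions made for this check, not part of the announcement.

        # Sanity check on the quoted peak; the 8 flops/cycle (AVX) and the
        # implied 2.4 GHz clock are assumptions, not SGI-supplied figures.
        cores = 2560
        peak_tflops = 49.152

        gflops_per_core = peak_tflops * 1000 / cores     # 19.2 Gflops/core
        implied_clock_ghz = gflops_per_core / 8          # assuming 8 flops/cycle
        print(f"{gflops_per_core:.1f} Gflops/core, implied clock {implied_clock_ghz:.1f} GHz")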

    The SGI solution provides JAMSTEC with super fine-grained simulation results and dramatically increases compute efficiency. The solution will accelerate a number of research projects involving earth sciences, especially those related to global warming projections. In addition, the solution will be widely utilized by automotive manufacturers and both the pharmaceutical and chemical industries through a ‘Strategic Use Acceleration Program of Earth Simulator,’ a program designed to increase industrial usage through strategic partnerships.

    “Innovative and scalable solutions like SGI UV 2000 help scientists, academics, and industries to gain greater insight to their data-driven computing problems through the increased memory capacity provided by the Intel Xeon E5-4600 v2 processor in 4 socket configurations,” said Charles Wuischpard, vice president Technical Computing Group at Intel Corporation. “The new Intel Xeon E5-4600 v2 processor also helps these systems meet the needs of demanding workloads with up to 50% more performance, while conserving power with 45% greater energy efficiency when compared to previous generation products.”

    5:44p
    Cloud Host DigitalOcean Raises $37.2M in Funding for Scaling, Hiring

    Brought to you by The WHIR.

    DigitalOcean, a web hosting startup geared towards making hosting simple for developers, has closed a $37.2 million Series A led by Andreessen Horowitz and including IA Ventures and CrunchFund.

    The new funding, according to a DigitalOcean blog post, will be used to increase DigitalOcean’s scale worldwide through buying “a boat-load of servers and networking gear.” The company will also hire more engineers who will work remotely, and sponsor more conferences and community meetups in different cities.

    The DigitalOcean post said Andreessen Horowitz understands and supports the company’s values. “We met with Peter Levine of Andreessen Horowitz and he immediately impressed us not only with his technical knowledge but also with his views on open-source and how to build successful companies around that focus.”

    DigitalOcean’s Infrastructure-as-a-Service offering has grown significantly since its launch in 2011. Between December 2012 and June 2013, its number of web-facing servers grew 50 times over – faster than at any company other than Amazon, Alibaba and Hetzner. In August 2013, DigitalOcean raised $3.2 million in seed funding.

    Of course, it hasn’t all been smooth sailing. DigitalOcean had to update its code in late 2013 after a German researcher pointed out that it didn’t automatically wipe data from its solid-state drives. But this openness and ability to adapt to problems is likely to help it gain support among developers.

    To date, DigitalOcean has launched more than 1,000,000 virtual servers, and opened data center locations in San Francisco, Singapore and Amsterdam for more than 100,000 customers.

    This post originally appeared at: http://www.thewhir.com/web-hosting-news/cloud-host-digitalocean-raises-37-2m-funding-scaling-hiring

