Data Center Knowledge | News and analysis for the data center industry
Wednesday, January 7th, 2015
1:00p
Amazon Data Center Project in Virginia Stumbles Over Power Line Opposition

Amazon wants to build a 500,000 square foot data center in Haymarket, a town in Prince William County in Northern Virginia. To provide enough power for the future facility, code-named “Midwood Center,” utility Dominion Power needs to build a new substation and a transmission line. The utility submitted plans over Thanksgiving for the double-circuit 230kV transmission line, but the plan has run into opposition from locals, who fear the new infrastructure will bring down their property values.
According to planning documents, the company that’s planning to build the data center at 15505 John Marshall Highway is Vadata, a wholly owned Amazon subsidiary that does big data center construction projects on the e-commerce giant’s behalf.
“The transmission infrastructure is to address forecasted increases in energy demand that exceed the capabilities of our current distribution system beginning in 2017,” Dominion said in a project description posted on its website.
Some Haymarket residents are concerned the electrical infrastructure project will lead to loss of the area’s natural beauty and potential loss of property values because of transmission lines and substation towers. The opposition argues that power lines will be near homes and schools and will cross the Rural Crescent, which has historical landmarks.
Some parallels can be drawn between the situation in Haymarket and the one in Newark, Delaware, last year, which ultimately led to the failure of what was going to be a $1 billion data center construction project. As in Haymarket, it was the power component of the Newark project, not the data center itself, that created the controversy. The Newark project, pitched by a company called The Data Centers LLC, included a cogeneration plant. Some residents organized a group that opposed the power plant and its potential impact on the area.
The residents in Haymarket are using an approach similar to the one the opposition in Newark used: they have started a website and a grassroots protest. We have contacted the group, called Protect Prince William County, for comment but have not received a response.
Residents, Politicians Urge Amazon to Change Plans
There is now uncertainty about the project, with both residents and politicians pushing Dominion and Amazon to change plans. The opposition suggests Dominion build a different route from the Gainesville substation up I-66, with no above-ground lines near residential areas. Virginia State Delegate Bob Marshall and Senator Richard Black have written a letter to Amazon CEO Jeff Bezos asking him to consider two alternatives to the current plan.
“In our judgment, the root problem is the Haymarket location for the Amazon data center,” they say in the letter. “This is because it is neither located in an industrial area, nor is it located within existing power line corridors. We hope to preclude future impacts when the 2015 General Assembly convenes in mid-January, and we urge you to seriously consider two alternatives to address the present controversy.”
One of the proposed alternatives is an industrially zoned property a few miles from Haymarket called Innovation Park, where George Mason University is located and where some data centers already operate. The currently planned “location is not normally zoned for data centers, and this has caused significant, unified community opposition from an unusual alliance of environmentalists, Civil War Hallowed Ground advocates, homeowners, and major business developers,” the legislators wrote.
The second alternative is building the 230 kV power lines underground.
Prince William County an Alternative to Booming Loudoun?
The bulk of the Northern Virginia data center market (one of the biggest in the country) is in Loudoun County, but nearby Prince William also has about 2 million square feet of data center space. Loudoun is a key connectivity hub with an estimated 70 percent of the world’s Internet traffic crossing through the area. Prince William touted its data center appeal last October.
A 500,000 square foot Amazon data center would be a boon to the county, but the area is in need of more infrastructure to really compete with Ashburn, Loudoun’s data center hub. “Don’t expect other cloud folks to follow Amazon out to Prince William,” said an executive for a competing cloud provider who did not wish to be named. “Power costs the same [as in Ashburn]. Land is not much cheaper. Fiber costs are going to be nuts. Ashburn fiber is cheap, and you’ve got to get back to Ashburn, wherever you start from.”
So far, Loudoun and Dominion have been able to meet the needs of every data center looking to locate in the county, said Buddy Rizer, director of economic development in Loudoun. “Loudoun has taken a very proactive position on making sure the infrastructure is in place for our data center clients,” he said. “We have regular meetings with Dominion Power and plan out sites, timelines, and priorities in advance. The data center cluster is very important to Loudoun’s economy, so we always try to stay out in front of the demand.”
What About Building it Underground?
In response to the situation in Prince William, Dominion is now reviewing several potential alternative routes and has published a draft of the alternatives. The utility is walking a fine line between serving the high-revenue data center sector and attending to the needs of residents.
Dominion has also cited several reasons underground lines might not be feasible. In addition to reduced operability and reliability, building the lines underground would cost six to 10 times more than building them above ground. The construction process is also more environmentally invasive and takes longer. The line’s life expectancy would be cut in half, its capacity would be limited, and it would be subject to voltage fluctuations.
Amazon has a sizable data center footprint that keeps growing. Ohio Governor John Kasich recently confirmed a massive Amazon data center project planned in his state. In a bid to attract the project, Ohio offered the company tax breaks, and one town even offered free land.

4:00p
Analysts: Internet of Things M&A Topped $14B in 2014

The total value of mergers and acquisitions in the Internet of Things space was about $14 billion in 2014, according to a new 451 Research report.
About 60 IoT-related companies were acquired during the year, representing a forty-fold increase in acquirer spending and a two-fold jump in the number of deals compared with 2013. The analysts expect 2015 to be even busier.
Just yesterday, Facebook said it acquired Wit.ai, a platform that makes it easy for developers to build apps powering IoT, with a focus on building machines that understand human languages. Over 6,000 developers are on the platform.
The IoT buzzword broadly encompasses devices with IP addresses that exchange data over a network, along with the systems that connect those devices and collect and use their data. There are numerous potential consumer and business applications.
Google’s Nest thermostat is a prominent example on the consumer side. It studies your habits and adjusts temperature in your home to your taste automatically.
The M&A deals in 2014 were evenly split between IoT-enabling horizontal infrastructure and vertical applications, according to 451. The infrastructure segment includes sensors, semiconductors, software platforms, security infrastructure, and connectivity technologies. The transport and logistics segment was the busiest vertical with 11 transactions, followed by fitness and healthcare segment with 10 transactions.
“Acquirers don’t want to cede anything to a growing list of competitors as demand for IoT services in both consumer and industrial markets builds,” Brian Partridge, vice president of 451’s mobility team, said in a statement. “The expected growth in this segment will drive enterprise spending across a myriad of building-block categories from embedded computing systems to communication infrastructure, IP networking, cloud, and data center technologies that will form the foundation of the next generation of connected machines and services.”
IoT adoption has data center providers excited as well. “I think this year will be the year of inflection for the Internet of Things,” said Ernie Sampera, chief marketing officer at Vxchnge. Sampera believes that IoT means more of what Cisco dubs “traffic generating units.” The result is more data in data centers, and a need for more distributed data center infrastructure.
A recent report from IDC said that half of IT networks will soon feel the stranglehold of IoT devices.
More companies are investing in wireless devices; more consumers are using multiple wireless devices, and businesses are now beginning to understand the analytics benefits of connected devices.
IBM has added IoT capabilities to its Bluemix Platform-as-a-Service. Microsoft Azure recently unveiled real-time tools for IoT developers. The Open Interconnect Consortium, focused on IoT connectivity requirements, has seen growing membership.
“We expect to see even more activity in 2015 as the cost and risk hurdles to IoT adoption are overcome and the competition to serve these markets increases,” said Partridge. “Any firm with the strategic intention of being an IT infrastructure and services leader over the next 10 years does not have the option to ignore this market.”

4:30p
Certainties in 2015: Death, Taxes and the New World of IoT

Robert Haynes has been in the IT industry for over 20 years and is a member of the F5 Marketing Architecture team, where he spends his time turning technology products into business solutions. Follow Robert Haynes on Twitter @TekBob.
It’s safe to say that there are now three certainties in life—death, taxes, and the prevalence of smart, Internet-connected devices. A world of change is upon us in which your fridge can calculate the freshness of your food and a computer smaller than a grain of rice can be injected into the human body to diagnose illness.
And there’s no sign of things slowing down. Each year will see exponentially more devices connected to the Internet than the last. In fact, Gartner predicts there will be 25 billion connected “things” by 2020.
These “things” will come in all shapes and sizes, from three-ton automobiles to entertainment systems and wearable blood sugar monitors. While the world of smart devices talking to each other—and to us—is well underway and here to stay, reaping the rewards will depend on our ability to design and build infrastructures to service this new Internet, the Internet of Things (IoT). How we choose to respond to the explosion in connected devices, applications, and data will determine who benefits most in a market that IDC forecasts will grow to $7.1 trillion in the next five years.
Harnessing the Powerful Force
Whether or not your organization is currently tapping into the IoT, there will be no escaping the effects of this growing phenomenon. Much like “bring your own device” transformed the workplace and enterprise mobility, IoT is a force that will impact all industries and pretty much every aspect of our daily lives.
The challenge in harnessing this powerful force isn’t limited to managing the sheer volume of data; it also lies in making sense of that data and in streamlining the applications and the application architecture itself.
While there has been extensive discussion of the IoT market’s size and the growing number of new devices, the wider implications for the underlying network infrastructure used to manage, monitor, and monetize these devices are less obvious and require considerable attention. The consequences of these devices and their supporting ecosystem failing could range from a simple annoyance – no one wants to wait to kill zombies while their gaming console downloads updates – to something significantly worse, like a security breach in a healthcare delivery system. Vulnerabilities in devices will exist, and patches will need to be pushed out. How will your infrastructure cope?
The Underlying Infrastructure
As IT professionals, we’re tasked with designing and building the infrastructure that’s ready for the challenges that lie ahead with IoT – from DNS and new protocols to security and scalability. DNS is the most likely channel for connected devices to locate needed services, and it’s potentially the means by which we will locate the devices themselves. There might be better schemas, but those would require adoption of a new technology standard, which would be costly, slow and, to be honest, a rare event.
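To make that concrete, here is a minimal, hypothetical sketch of DNS-based service discovery in Python, using the dnspython package (version 2.x assumed); the SRV name _mqtt._tcp.example.com is a placeholder, not a real deployment.

    # Minimal sketch: a device discovering a backend service via DNS SRV records.
    # Assumes the dnspython package (2.x); the SRV name below is a placeholder.
    import dns.resolver

    def discover_service(srv_name="_mqtt._tcp.example.com"):
        answers = dns.resolver.resolve(srv_name, "SRV")
        # Pick the record with the lowest priority value (lower means preferred).
        record = min(answers, key=lambda r: r.priority)
        return str(record.target).rstrip("."), record.port

    if __name__ == "__main__":
        host, port = discover_service()
        print(f"Service endpoint: {host}:{port}")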
In the same vein, the explosion in embedded devices may well be the event that drives more mainstream IPv6 adoption. There are several advantages to this: a huge namespace, small IPv6 stacks, address self-configuration, and the potential to remove NAT problems. The data center will require some prep work to embrace this shift. Components such as firewalls, routers, and application delivery controllers will need to be IPv6-ready, capable of understanding the protocols and data that devices will use to communicate.
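As a small illustration of the kind of readiness check this implies, the following sketch (my own assumption, not something prescribed by the author) uses only the Python standard library to confirm that the local stack can bind an IPv6 socket.

    # Minimal sketch: verify the local stack can bind an IPv6 socket,
    # a quick sanity check before exposing IPv6-only device endpoints.
    import socket

    def ipv6_ready(port=0):
        if not socket.has_ipv6:
            return False
        try:
            with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
                s.bind(("::1", port))  # loopback bind; port 0 lets the OS pick
                return True
        except OSError:
            return False

    print("IPv6 ready:", ipv6_ready())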
To ensure security, intelligent routing, and analytics, your networking layers will need to be fluent in the language your devices use. Decoding protocols such as MQTT, CoAP, or HTML5 WebSockets within the network will allow traffic to be secured, prioritized, and routed accordingly. Understanding and prioritizing these messages will enable better scale and manageability in the face of the onslaught of device traffic and data.
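For example, a device-side publisher speaking MQTT might look like the hypothetical sketch below, which assumes the paho-mqtt library; the broker hostname and topic are placeholders.

    # Minimal sketch: a sensor publishing one reading over MQTT with paho-mqtt.
    # The broker hostname and topic are placeholders, not values from the article.
    import json
    import paho.mqtt.publish as publish

    reading = {"sensor": "temp-livingroom", "celsius": 21.5}
    publish.single(
        topic="home/livingroom/temperature",
        payload=json.dumps(reading),
        qos=1,                      # ask the broker to acknowledge delivery
        hostname="broker.example.com",
        port=1883,
    )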
Clearly, our DNS infrastructures must scale to accommodate the extra demand, but what’s even more concerning is the potential for DNS hijacks and outages that could wreak havoc with our connected world. Sensor data could be sent to the wrong location, “updates” might be intercepted by malicious servers, and let’s not forget the ever-popular DDoS attack. Unless we remain proactive, the ubiquity of connected devices presents a field day for attackers. Outpacing attackers in our current threat landscape will require more resources in order to minimize risk. We will need to continue to harden our own infrastructures and look to third party services like DoS mitigation to lessen the effects of attacks.
Forging the Path Ahead
We might not have all the answers, but one of the great certainties in life—connected devices that are here to stay—will force us to move forward into this brave new world. While there’s much to consider, proactively addressing these challenges and adopting new approaches for enabling an IoT-ready network will help us chart a clearer course toward success.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:20p
CoreSite Brings Second Virginia Data Center Online

CoreSite Realty has opened the second facility on its Reston, Virginia, data center campus with the first phase fully leased. The building, close to 200,000 square feet in size, has a 50,000 square foot Phase I, which was leased last quarter to a single tenant.
CoreSite is doubling down on expansion in top U.S. data center markets where it already has presence. The company is one of the largest providers of data center services in the country, offering both wholesale deals (big customers taking lots of space) and retail (by the rack).
It is currently constructing Phase II of similar size in the building. The company expects to complete that phase in the second quarter.
On its most recent earnings call, CoreSite executives were optimistic about the Northern Virginia data center market, noting that the company anticipated larger leases in Q4 and beyond. This would be one of those leases.
The execs did note one concern about the market. A tremendous amount of data center space vacated by Yahoo came on the market, potentially disrupting supply and demand.
DuPont Fabros Technology, the market’s biggest wholesale player, revealed in October that an existing customer had subleased the 13 megawatts of capacity vacated by Yahoo at its ACC4 data center in Ashburn in one fell swoop. The temporary vacancy apparently had little effect on market supply and demand, and CoreSite has now opened the first phase of its new building completely leased.
Other providers with recent builds in the Northern Virginia data center market include CyrusOne, which opened the initial 30,000 square feet of space within a new 125,000 square foot facility in December. A third of that phase was pre-leased to an undisclosed Fortune 50 company.
Also in the pipeline is NTT-owned RagingWire’s new two-story data center, which will open with 14 megawatts and 140,000 square feet. RagingWire also reports strong leasing activity in the market.
The newly formed Infomart Data Centers (a merger between Fortune Data Centers and Dallas Infomart) is currently refurbishing an AOL data center, its fourth in the market. It is expected to come online this year.
Other recent or upcoming expansions for CoreSite include Boston, Chicago, and New York.
VA2 is physically connected to VA1, the campus’s network- and cloud-dense first building. The CoreSite campus has access to nearly 60 network, cloud, and IT service providers and offers direct network links to Amazon Web Services. The company also operates cloud and peering exchanges.
Rich connectivity (access to a large number of carriers and other service providers) is what makes Northern Virginia such a desirable market. The region also has relatively low power rates and offers tax breaks for data center operators.
“We believe that the nearly 60 network, cloud, and IT service providers already deployed at our Reston campus combine with our market-leading network ecosystem at DC1 and the scalability offered at VA2 to enable CoreSite to offer one of the most comprehensive solutions in the region for performance-sensitive customer requirements,” Tom Ray, CEO of CoreSite, said in a statement. DC1 is CoreSite’s data center in Washington, D.C.

7:41p
Amazon Web Services Kicks Off New Year With New Cloud Features

Amazon Web Services kicked off 2015 by announcing a number of new cloud features, namely cross-account access in the AWS Management Console, EC2 Spot Instance termination notices, and some enhancements to GovCloud. The company finished 2014 as one of the most reliable cloud providers, according to CloudSquare, a cloud service status tracker by CloudHarmony.
Despite increasing competition from Google and Microsoft, as well as smaller cloud players like DigitalOcean, AWS continues to maintain its position as a market leader, thanks to its reputation, variety of cloud features, infrastructure, and solid reliability record.
Early Warning for Spot Instance Termination
EC2 Spot Instances and the EC2 Spot Market enable users to place competitive bids on spare EC2 capacity, specifying the maximum hourly price they are willing to pay for an instance. It’s an economical, brokerage-like option well suited to temporary big jobs. If a customer’s bid exceeds the current spot price, the instances run; when the spot price rises above the bid, the spot instances are reclaimed and given to another customer.
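As an illustration of the bidding mechanics, here is a hedged sketch using boto3’s request_spot_instances call; the AMI ID, instance type, and maximum price are placeholders rather than recommendations.

    # Minimal sketch: bidding for a Spot Instance with boto3.
    # The AMI ID, instance type, and max price are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.request_spot_instances(
        SpotPrice="0.05",          # the most you are willing to pay per hour
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",
            "InstanceType": "m3.medium",
        },
    )
    print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])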
AWS has improved the reclamation process with the addition of a two-minute warning before the instance shuts down, known as a Spot Instance Termination Notice.
“Your application can use this time to save its state, upload final log files, or remove itself from an Elastic Load Balancer,” AWS Chief Evangelist Jeff Barr wrote on the company blog. “This change will allow more types of applications to benefit from the scale and low price of Spot Instances.”
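One way an application might watch for that warning is to poll the instance metadata service for the spot/termination-time key, which AWS documents as appearing once the notice is issued. The sketch below is a minimal illustration, not production code.

    # Minimal sketch: poll EC2 instance metadata for the two-minute termination notice.
    # Only meaningful when run on a Spot Instance; 169.254.169.254 is the metadata service.
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"

    def termination_time():
        try:
            with urllib.request.urlopen(URL, timeout=2) as resp:
                return resp.read().decode()   # a timestamp once the notice is issued
        except urllib.error.URLError:
            return None                       # 404 (or no metadata service) means no notice yet

    while True:
        notice = termination_time()
        if notice:
            print("Instance will be reclaimed at", notice)
            # Save state, upload final logs, deregister from the load balancer, etc.
            break
        time.sleep(5)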
GovCloud Updates
The GovCloud region of AWS, built specifically for government agencies, now has access to Amazon Glacier, AWS CloudTrail, and Virtual Machine Import.
Glacier is a data archiving, or cold storage, service that can be used for long-term records retention. CloudTrail records calls made to AWS APIs in log files, which are used for compliance. VM Import lets users import VM images, for example a VMware ESX image, into EC2.
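As a hypothetical example of the Glacier piece, the sketch below archives a file into an existing vault with boto3; the vault name, file path, and GovCloud region are placeholders.

    # Minimal sketch: push a records archive into an existing Glacier vault.
    # The vault name, file path, and region are placeholders.
    import boto3

    glacier = boto3.client("glacier", region_name="us-gov-west-1")
    with open("records-2014.tar.gz", "rb") as archive:
        result = glacier.upload_archive(
            vaultName="compliance-records",
            archiveDescription="2014 records export",
            body=archive,
        )
    print("Archive ID:", result["archiveId"])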
Single Sign On for Multiple AWS Accounts
Cross-Account Access is a small tweak that improves multi-account and multi-role usage on AWS by making it easier to switch roles within the AWS Management Console. Users can sign in as an IAM user or via federated single sign-on, then switch the console to manage another account without entering another user name and password – no more juggling different IDs and passwords.
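The console role switch itself is point-and-click, but its programmatic counterpart is the STS AssumeRole call, shown in the hedged sketch below; the role ARN and session name are placeholders.

    # Minimal sketch: the programmatic analogue of the console's role switch.
    # The role ARN below is a placeholder for a role in the second account.
    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/AdminFromMainAccount",
        RoleSessionName="console-style-switch",
    )["Credentials"]

    # Use the temporary credentials to act inside the other account.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print([r["RegionName"] for r in ec2.describe_regions()["Regions"]])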
The Management Console now also supports Auto Scaling and Service Limits. Auto Scaling helps systems respond to changes in demand automatically, and Service Limits shows usage limits and enables quick requests for limit increases.
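For comparison, the same limits the console surfaces can be read through the Auto Scaling API; the following is a minimal sketch, assuming default credentials are already configured.

    # Minimal sketch: read current Auto Scaling service limits via the API,
    # the programmatic counterpart to the console's Service Limits view.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    limits = autoscaling.describe_account_limits()
    print("Max Auto Scaling groups:", limits["MaxNumberOfAutoScalingGroups"])
    print("Max launch configurations:", limits["MaxNumberOfLaunchConfigurations"])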
AWS Most Reliable Among Big Cloud Providers
Over the last 365 days, Amazon’s EC2 (cloud compute) and S3 (cloud storage) services each saw about 2.5 hours of downtime. Its CloudFront and Route 53 services had 100 percent uptime, according to CloudHarmony.
Google Compute Engine (its Infrastructure-as-a-Service) had 4.5 hours of downtime. Google’s Platform-as-a-Service, App Engine, and its Cloud Storage service had about 20 minutes of downtime between them.
Microsoft Azure saw 11 hours of downtime for Object Storage during the year, and close to 40 hours of downtime for virtual machines. One of Microsoft’s major cloud outages happened in August, and another big one came in November. Azure CDN had a perfect uptime record last year.
Rackspace had 7.5 hours of downtime for Cloud Servers and a perfect track record across CDN and DNS. DigitalOcean had about 16 hours of downtime.
Verizon started 2015 by notifying its customers of an update that will require downtime across its entire Verizon Cloud infrastructure. The company said customers should prepare for up to 48 hours of downtime, starting early this Saturday morning.

9:30p
DigitalOcean Co-Founder and CEO Ben Uretsky: Finding a Place in the Cloud Hosting Marketplace 
This article originally appeared at The WHIR
Since its launch in 2011, DigitalOcean has grown considerably, with its Infrastructure-as-a-Service offering centered around its virtual servers or “Droplets” quickly becoming a favorite of developers.
Before the close of 2014, DigitalOcean became the world’s third largest web host. The New York City-based company (which has U.S. data centers in New York and San Francisco) has also launched data center locations in Singapore, London, and Amsterdam.
Many observers wondered how a startup could gain so much market share and grow so quickly in a web hosting market that seems to require enormous capital expenditure. It also competes for customers with some extremely competitive players such as Microsoft, Rackspace, SoftLayer, Amazon, and Google.
In an interview with the WHIR, DigitalOcean co-founder and CEO Ben Uretsky explains that the cloud hosting market can be disrupted.
The Economics of Cloud Hosting Might Be Different Than You Think
Even with all the talk about the centrality of cloud to these businesses, many of the companies providing cloud offerings aren’t pure-play cloud providers. For instance, Amazon had originally built its enormous server farms to underpin what’s become the world’s biggest ecommerce business, and Amazon Web Services, which offers public cloud services, can be seen as an offshoot – a way to monetize unused IT resources. Rackspace is another example, having been a dedicated server and managed services provider before entering the cloud space.
In a way, these companies are at a disadvantage compared to providers who are more focused.
“What’s great about DigitalOcean is we are unique in that we only provide cloud infrastructure, and absolutely nothing else,” Uretsky says. “In terms of efficiency, I think we actually have the highest level of efficiency because everything that the company does is focused on cloud infrastructure.”
DigitalOcean’s offering is based around a single product: the Droplet, a virtual cloud instance available in different sizes and at a monthly fee lower than many of its competitors charge.
While Amazon might have low prices in its retail division, the same is not true of its cloud hosting. And while businesses get access to greater scale and capacity by using AWS, it can often cost more than building the same capacity inside a traditional colocation environment, and it still requires staff who can navigate AWS’s many services. “Today, AWS is still a fantastic option for enterprises and large-scale businesses that have the operational team – the people that can wrangle the cloud resources, and ultimately the systems and the software,” Uretsky says.
Organizations often find that the AWS pricing structure, which varies based on factors such as service and region, makes it difficult to predict their eventual cloud costs and often necessitates outside help to manage their AWS hosting.
“Where we saw an opportunity for DigitalOcean was to be the exact opposite of what AWS has become. As an individual developer, you don’t have the same resources to figure out the complex ecosystem that AWS provides…not to mention, Amazon wants to lock you into their ecosystem as much as possible, so they brand everything under their own terminology. So, all-in-all, you walk away extremely confused and frustrated. And that’s where we saw the opportunity.”
Also, while it might seem that larger companies can take advantage of their buying power by getting better prices on IT equipment such as servers, Uretsky says that it’s often negligible – perhaps 10 to 15 percent cheaper.
“The economies of scale in Internet infrastructure are actually not that advantageous… everyone pays roughly the same amount for silicon, memory and storage. And also, we’re all buying the same commodity components, so nobody has a significant price advantage over someone else, nor a performance advantage.”
And, as opposed to building its own facilities from the ground up, DigitalOcean has been expanding through established data centers. This allows it to enter new markets in highly connected facilities more quickly and cheaply, and without having to deal with air conditioning, security, and other issues with operating a data center on its own.
Looking “Down the Stack” for Improvements
Companies such as Flywheel have built hosting services directly on top of DigitalOcean aimed at less technically savvy users. But while there are clearly opportunities to cater to the SMB market, DigitalOcean is unlikely to deviate from its IaaS service in the immediate future.
“Today, our ambitions really lie with the low-level server infrastructure first, and we think we have a lot of ground to cover in that space,” Uretsky says. He notes that he sees storage and network improvements that could be made by moving towards a more software-defined environment.
A recent $50 million credit facility from Fortress Investment Group will help DigitalOcean fund the R&D and engineering time needed to refine its networking, with a focus on SDN and VPN capabilities as well as firewalls and security groups.
In terms of storage, DigitalOcean only provides local storage at the moment. Still, using SSD storage exclusively addresses many of the potential performance issues, and DigitalOcean can also constrain and migrate obtrusive server workloads so they don’t affect other users.
Also, with its 350,000 customers, there’s a “randomized spread across hypervisors” and a “long-tail of users with substantially smaller needs” that reduces the likelihood that a high-usage customer would have a huge effect on performance.
Even still, DigitalOcean is undergoing major hardware and network upgrades that will come into effect in the first half of 2015.
While there’s a clear focus on low-level architecture, Uretsky says DigitalOcean will likely work on upstack offerings, but in its own unique way. “We do have ambitions of going up the stack, and building some platform-like features.”
For instance, WordPress and other common hosted services can be easily installed using a “one-click install” utility.
But, he notes, the company doesn’t see these capabilities as developing into new services that will cost more. “The platform will be a value-add but it won’t necessarily cost a customer any more.”
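As a rough illustration of how such a one-click image might be consumed programmatically, the sketch below creates a Droplet through DigitalOcean’s v2 API; the API token, image slug, region, and size slug are placeholders from roughly that era, not verified values.

    # Minimal sketch: create a Droplet from a one-click application image
    # using DigitalOcean's v2 API. Token, image slug, region, and size are placeholders.
    import json
    import urllib.request

    API_TOKEN = "your-api-token-here"
    payload = {
        "name": "blog-1",
        "region": "nyc3",
        "size": "512mb",          # smallest Droplet size slug of the era
        "image": "wordpress",     # hypothetical one-click image slug
    }
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/droplets",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        droplet = json.loads(resp.read())["droplet"]
    print("Created droplet", droplet["id"])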
Cyberattacks and Malicious Users are Huge Pain Points for Hosts
It should come as no surprise that Distributed Denial of Service attacks have had a hugely deleterious effect on web hosts. For instance, last month Rackspace’s DNS servers in two regions were hit with a flood of DDoS traffic. Additionally, the size of DDoS attacks has risen dramatically in the past few years to the point where Akamai recorded 17 attacks exceeding 100 Gigabits per second in Q3 2014, including one peaking at 321 Gbps.
Remaining secure from DDoS threats has been a gradual learning process for DigitalOcean.
“On the denial of service attacks, we’ve suffered a tremendous number of incidents,” Uretsky says. “When we deployed Singapore, someone really had it out for us. We were getting multi-hundred Gbps attacks week after week, and that really pushed us to progress our security story on the network side. And luckily we emerged out of that, but it was really touch and go in the first three or four months, but it really allowed us to gain the knowledge that we were able to spread to the rest of our data centers and secure our perimeter.”
The company deployed a number of systems layered on top of one another to protect customers, and also rolled out a highly secure configuration of its edge routers.
But beyond outside threats, users may have malicious uses in mind when they create hosting accounts. This is a huge issue for web hosts.
“With individual accounts, we go through multi-step verification process, so we’ll have a bunch of flags and filters and if you get caught in that web.” The approach has been to layer verification tools and technologies on top of each other and add some in-house filters.
Users wrongly flagged can verify their identity by having it cross-checked against their profile on sites like GitHub, LinkedIn, Facebook, and Twitter.
Uretsky says, “In the past we’ve had a third of our sign ups on a daily basis attributed towards malicious users, so we’ve really progressed substantially.”
The Year Ahead
Uretsky says 2013 was about keeping up with customer growth and infrastructure expansion. 2014 was about catching up in terms of hiring and organizational structure. 2015 is about building a powerful engineering team as well as developing the leadership and the processes to fuel the company’s continued growth.
And this means making its Droplet product even better, and even moving beyond the Droplet. DigitalOcean is focusing on product development, and delivering on customer requests.
“We only delivered a single product to date in a meaningful way, and that’s the Droplet,” he says. “There’s so much more that our customers are asking for, and I can’t wait to deliver that.”
This article originally appeared at: http://www.thewhir.com/web-hosting-news/digitalocean-co-founder-ceo-ben-uretsky-finding-place-cloud-hosting-marketplace

9:50p
U.S. Weather Service to Boost Supercomputer Power Tenfold

The U.S. government agency that forecasts weather said it will increase the capacity of its two supercomputers tenfold by October.
Each system will go up to 2.5 petaflops, which will enable the agency to forecast severe weather, water, and climate events with better precision and look further out into the future.
The National Oceanic and Atmospheric Administration’s National Weather Service has been upgrading its supercomputers since July 2013.
IBM is making the upgrades under a $44.5 million contract with NOAA. Of that, $25 million came from the Disaster Relief Appropriations Act of 2013, enacted after Hurricane Sandy wreaked havoc in Jamaica, Cuba, and the Bahamas before making its disastrous landfall in the Northeastern U.S. in October 2012.
IBM is a major government IT contractor. It is also one of the biggest high performance computing contractors for the feds. IBM’s biggest recent government supercomputer deal that was announced publicly was a $300 million purchase of two systems by the Department of Energy for two of its national labs.
This week, NOAA announced the kick-off of the next phase of its upgrade project. As early as this month, the administration expects the two systems to more than triple their capacity.
The capacity added this month will enable the weather service’s Global Forecast System to increase resolution from 27 km to 13 km for forecasting 10 days into the future, and from 55 km to 33 km for forecasting from 11 to 16 days.
Another system, called the Global Ensemble Forecast System, which uses a model made up of 21 separate forecasts, will increase resolution from 55 km to 27 km out to eight days and 70 km to 33 km for nine to 16 days into the future.
Cray will be supplying the systems for NOAA’s upgrade as an IBM subcontractor.
“This investment to increase their supercomputing capacity will allow the National Weather Service to both augment current capabilities and run more advanced models,” Peter Ungaro, president and CEO of Cray, said in a statement.