Data Center Knowledge | News and analysis for the data center industry
Monday, January 13th, 2014
Amazon Adds Edge Locations in Taiwan, Brazil
Amazon expands edge locations to Taipei and Rio de Janeiro, Dell helps customers use Salesforce.com, ScaleOut Software extends its in-memory data grid, and PlanSource raises $12 million to continue to innovate its insurance exchange and benefits administration cloud offering.
Amazon adds Taipei and Rio de Janeiro edge locations. Amazon Web Services announced that it is launching its first edge location in Taiwan, in Taipei, and a second edge location in Brazil, in Rio de Janeiro. The new locations will improve performance and availability for end users of applications served by Amazon CloudFront and Amazon Route 53, and they bring the total number of AWS edge locations to 51 worldwide. The new sites will also support all Amazon Route 53 functionality, including health checks, DNS failover, and latency-based routing.
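As an illustration of how latency-based routing with health checks might be wired up, here is a minimal sketch assuming Python and boto3; the hosted zone ID, domain, IP addresses and health check IDs are placeholders, not values from the announcement.

    # Hypothetical sketch: create latency-based A records in Route 53 so queries
    # are answered by the lower-latency endpoint. The zone ID, domain, IPs and
    # health check IDs below are placeholders.
    import boto3

    route53 = boto3.client("route53")

    def latency_record(region, ip, set_identifier, health_check_id):
        """Build one latency-based record set for a given AWS region."""
        return {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": set_identifier,   # unique name for this record in the set
                "Region": region,                  # the latency-based routing key
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
                "HealthCheckId": health_check_id,  # enables DNS failover
            },
        }

    route53.change_resource_record_sets(
        HostedZoneId="Z3EXAMPLE",
        ChangeBatch={
            "Comment": "Latency-based routing with health-checked failover",
            "Changes": [
                latency_record("sa-east-1", "203.0.113.10", "sao-paulo", "hc-1111"),
                latency_record("ap-northeast-1", "203.0.113.20", "tokyo", "hc-2222"),
            ],
        },
    )

With records like these in place, Route 53 answers each DNS query from whichever endpoint offers the lower network latency to the requester, and the attached health checks let it fail over if one endpoint becomes unavailable.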
Dell provides application services on Salesforce platform. Dell announced that it will provide its customers a complete portfolio of services to enable development and migration of applications to the Salesforce Platform. For customers looking to develop applications on the Salesforce Platform, Dell offers expert advisory and application migration services and also acts as a single point of contact for design, delivery and ongoing application management. “This collaboration helps Dell customers develop new applications using the cloud, on the platform that works best for their business. Customers will benefit not only from our strong Salesforce advisory services but also from Dell’s own successful implementation of the Salesforce Platform. Offering customers this type of choice and flexibility is at the heart of Dell’s overarching Cloud strategy,” said Raman Sapra, Executive Director and global head, Strategy and Business Innovation Services, Dell Services.
ScaleOut Software extends in-memory data grid. ScaleOut Software announced the availability of ScaleOut StateServer Version 5.1. The latest version includes C++ APIs, additional solutions on Amazon Web Services, a Windows version of ScaleOut hServer, performance improvements in data management, and SSL support for enhanced security. The addition of C++ APIs enables C++ applications to take advantage of the product's full feature set, and ScaleOut GeoServer and ScaleOut hServer will now be available in the AWS Marketplace. Version 5.1 also delivers major enhancements to the underlying IMDG architecture, including a 5X faster transport for load-balancing data between servers and a new implementation of adaptive heartbeating for greater resiliency in virtual server environments. “With this release, C++ developers now can easily integrate ScaleOut’s IMDG into their applications to provide scalable performance, as well as parallel query and integrated real-time analytics for applications written in C++,” said Bill Bain, ScaleOut’s CEO. “The added capabilities in Version 5.1 also introduce significant enhancements to ScaleOut StateServer’s features and performance and broaden its availability in the cloud.”
PlanSource raises $12 million Series B Round. Cloud-based benefits administration provider PlanSource announced that it has completed a $12 million Series B Investment Round with its existing investors Lemhi Ventures and Timucuan Asset Management. “As a leading provider of healthcare technology and employee benefits administration, our business is rapidly evolving and expanding in the areas of insurance exchanges and SaaS platform solutions for carriers/payers,” said Dayne Williams, CEO of PlanSource. “Hundreds of brokers and carriers, including some of the largest health plans in the U.S., use PlanSource for our proven ability to increase distribution and product participation with our sophisticated functionality. This investment allows us to stay on the leading edge of innovation and product development as we launch several exciting new products over the next year.”
Considerations for Data Center Owners When Partnering With Modular Builders
This is the second part of a three-part series by Stephen Madaffari, Principal of Data Centers Delivered, on how various sectors of the business serving the data center industry can effectively partner with modular data center construction companies to achieve success. The first part was previously published, titled Four Things Colos Should Consider When Evaluating Modular Construction Solutions.
 STEPHEN MADAFFARI
Data Centers Delivered
As modular construction of data centers gains more mainstream acceptance, it is important to keep in mind how data center owners engage and work with different modular data center builders. Providers in the industry have vastly diverse offerings and often use different meanings of the word “modularity.” This article is intended to guide owners through the process of partnering with a modular construction provider so they achieve their desired product outcome and return on investment.
Levels of Modularity
Modular data center companies come in all forms, offering modularity at the rack level, the building level and the infrastructure level. A data center owner first needs to decide at what level to engage in modular deployment. If the path chosen is a fully modular data center, it becomes significantly more important to engage a potential partner at the very early stages of concept and design.
Just as there are multiple levels of modularity, there are also numerous “product-based” modular designs. As is typical in factory building, the goal is to lean out the construction process and identify as many efficiencies as possible. Often, this is achieved by standardizing a design or product and producing it at scale. However, when it comes to building data centers, it is very difficult to design a “one size fits all” type of product. Owners have different views and strong opinions about how their data center will look from the perspective of system architecture. One of the great benefits of partnering with modular data center builders is their ability to take an owner’s specific requirements and create customized solutions on a case-by-case basis…and efficiently replicate the process in the future when additional capacity is required.
Full Modular Design
Designing a fully modular data center encompasses several aspects of a traditional build model; however, the scope of the project is commonly vetted against the modular data center builder’s ability to handle it holistically. There are essentially three components included in the design: white space, electrical infrastructure and mechanical infrastructure. A true custom modular data center allows owners, along with their consulting team, to define the system architecture and preferred space layout, specify desired equipment requirements, and in some cases stipulate exterior architectural needs. Modular builders then develop a customized design to accomplish the owner’s goals and facilitate construction within a factory environment. For this process to be ultimately successful, though, owners must take care not to get too far down the road of designing a data center in the traditional field-build sense and then try to apply it to the factory-build method. Factory and field build processes are very different. Therefore, owners must educate themselves on the modular data center design and construction options available to them and evaluate whether their project objectives will or will not be accomplished by using a factory-built modular solution.
Components of Modular Design
Owners also need to consider the primary goal for their data center. Is it an asset? Is it a product? Is it temporary? Does it need to be scalable? All of these questions can be answered in the early stages of engaging a modular data center partner. In some cases it is determined that there is diminished value in pursuing a fully modular data center; perhaps modularizing the infrastructure is the primary need. Mechanical and electrical infrastructure make up over 66 percent of the cost of a total data center build, so if an owner can capture cost efficiencies, mitigate risk, and achieve the scalability and predictability of modular design by applying it only to mechanical and electrical infrastructure, they stand to gain many of the benefits. Each project is unique, and each presents different challenges for design teams and modular builders. However, modular infrastructure builds can be designed for outdoor or indoor scale requirements, offering the owner ultimate flexibility to deploy “what they want – when they need it.”
In summary, education will rule the day. Owners should always get what they want out of a data center design partner. Engaging a modular build partner early in the concept or design stage will minimize the time needed to vet the multiple solutions available to each owner. More importantly, it will deliver those solutions with predictability…giving owners what they want, when they want it, while reaping the benefits of the factory-build process at their desired level of modularity.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Microsoft Details Patent Application for Performance-Based Cloud Pricing
This article originally appeared on TheWHIR.
Microsoft researchers have announced a patent application for performance-based pricing models for cloud computing. The inventors suggest that performance-based pricing will allow clients to purchase the resources they require without paying for excess, while retaining cost certainty.
A pair of Microsoft employees applied for the patent in July 2012, and revealed information about it to VerticalNews for online publication last week. The inventors contrast the new pricing scheme with the common methods of pricing.
On-demand pricing, they say, is expensive and does not guarantee that resource requirements will be met in cases of large dynamic demand. Spot pricing has the drawback of cost uncertainty, as well as the possibility of job interruptions when the dynamic market spot price surpasses the client’s bid price. Reservation/subscription pricing is less efficient when the client’s long-term usage fluctuates or usage predictions are inaccurate, as the client ends up paying for excess capacity or running out of the capacity they have allotted.
The four drawings in the patent application describe a system of job price evaluation based on performance parameters, which are then used to set the job price through either a bid by the client, or a quote from Microsoft.
“For a job comprising a batch-type application, example parameters include a work volume parameter and time data comprising a completion time,” Microsoft told VerticalNews. “For a job set comprising an interactive-type application, example performance-related parameters may include an average load parameter (e.g., number of requests or transactions), a peak load parameter, an acceptance rate parameter, a capacity parameter corresponding to a statistical metric, and/or a time window parameter over which load is specified.”
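To make those parameters concrete, here is a minimal sketch in Python of how a performance-based quote could be computed; the class names, rates and pricing formulas are invented for illustration and are not taken from the patent application.

    # Hypothetical sketch of performance-based job pricing. The parameters mirror
    # those named in the application (work volume, completion time, average and
    # peak load, acceptance rate, time window); the formulas and rates are
    # illustrative only.
    from dataclasses import dataclass

    @dataclass
    class BatchJobSpec:
        work_volume: float            # normalized units of work to complete
        completion_time_hours: float  # deadline the provider must meet

    @dataclass
    class InteractiveJobSpec:
        avg_load_rps: float       # average requests or transactions per second
        peak_load_rps: float      # peak load the provider must absorb
        acceptance_rate: float    # fraction of requests that must be served
        time_window_hours: float  # window over which the load profile applies

    def quote_batch(job, rate_per_unit=0.02, urgency_factor=24.0):
        """Price rises as the requested completion time shrinks."""
        return job.work_volume * rate_per_unit * (1 + urgency_factor / job.completion_time_hours)

    def quote_interactive(job, rate_per_rps_hour=0.001):
        """Price reflects capacity reserved for peak load and the strictness of the SLA."""
        reserved = job.peak_load_rps * job.time_window_hours
        return reserved * rate_per_rps_hour * (1 + job.acceptance_rate)

    print(quote_batch(BatchJobSpec(work_volume=5000, completion_time_hours=12)))
    print(quote_interactive(InteractiveJobSpec(500, 2000, 0.999, 24)))

The point of such a scheme is that the client pays for the performance envelope it actually needs – a deadline or a load profile – rather than for raw instance hours, while the price is still fixed up front.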
High profile tech companies have been loading up on patents recently, with new patents being announced by Sony and GoDaddy last week. The protection of intellectual property is also a focus of the recently leaked Trans-Pacific Partnership documents.
The Microsoft performance-based pricing patent application is of particular interest to all cloud service vendors, as its success could block competitors from offering similar price schemes. If that is the case, and the benefits of the patent-protected price scheme are as Microsoft claims, then Microsoft will have a legally guaranteed price advantage to offer to large clients who require the scalability advantages of cloud hosting.
Microsoft moved to improve support for clients running Parallels Hypervisor on Windows Server, and also expanded its partner program in late 2013.
Original article published at: http://www.thewhir.com/web-hosting-news/microsoft-details-patent-application-performance-based-cloud-pricing
FieldView: The Taxonomy of DCIM Is Crystallizing
With more than 70 vendors in the space, the Data Center Infrastructure Management (DCIM) marketplace is moving ahead, and some might say there is more clarity about DCIM benefits.
“We’re past the frenzied-hype part, and people are looking at DCIM in a much more sober, pragmatic fashion,” said Sev Onyshkevych of FieldView Solutions. The DCIM vendor had a banner year in 2013 and is very optimistic about both its prospects and the state of the DCIM industry overall. FieldView’s DCIM product is now used on six continents, monitoring a total of 2.5 gigawatts of data center capacity. The company launched version 6.0 last year with a focus on integration.
The initial confusion surrounding DCIM sprang from a lack of clarity about definitions, about what DCIM as a whole could provide an organization, and about which DCIM providers did what. The issue stemmed from messaging that DCIM could fix everything, and from several vendors claiming to be the “be-all, end-all.” It left customers unclear about what DCIM could realistically provide, and what providers could realistically deliver.
Those days are over, according to Onyshkevych. “As an industry matures and the players mature, you want to rule the universe, you’re chasing revenues,” said Onyshkevych. “Now it’s an increase in core competency.”
Core Areas of DCIM
Now, players are firmly placing their stake in the ground in one of five core areas. “Pure plays are attenuating, related to probably a better understanding of the segmentation of DCIM,” said Onyshkevych. “It used to mean anything that anyone wanted to, there was lots of confusion in the market. Now it’s pretty clear that the basic part of DCIM is about five things: monitoring, IT asset management, thermal/dynamic control of temperatures, cable management, and computational fluid dynamics (CFD) simulation. Within those areas, the leaders are pretty much crystallizing.” FieldView has staked its position in monitoring.
Vendors are now focusing on their core competency and addressing other important areas of DCIM through integrations. Players that were once competitors are now playing nice with one another, and the leaders within the categories of DCIM are emerging — Nlyte is strong in asset management, CFD is dominated by Future Facilities, Vigilent is strong in dynamic cooling and iTRACS is strong in cable management. Vendors are being clear about where their strengths lie, and customers are beginning to understand who does what. While each vendor tends to offer complementary capabilities across all areas of DCIM, they are becoming much more open to integrating for “best of breed” solutions.
“The difference is that at one time, many of these vendors tried to be everything to everybody,” said Onyshkevych. “They’d say you just need us – now they’re increasingly focused on being the best in one core area and integrating with the complementary pieces. We have a few cases where we’re working with one of the companies strong in the asset management space, where the customer has asked us to interface the two products together. We’re seeing this more often. The alternative to a ‘Swiss Army knife’ is integrating best of breed. That’s one reason of our 6.0 release, a major part of that deliverable was the ability to integrate historical and real-time data.”
Out Come the APIs
The focus has been on application programming interfaces (APIs) as the means of integrating best-of-breed tools. “We have an API that tells you where to look for temperature data; if you want capacity planning and historical data, we tell you where to look,” Onyshkevych said. “For us, the number of requests has increased to the point where we’re not doing custom interfaces, but investing in an API to become plug and play. A lot of the players are approaching it in a similar way.”
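To give a rough sense of what that plug-and-play integration looks like in practice, here is a minimal sketch assuming a generic REST monitoring API; the host, endpoint paths and JSON fields are hypothetical and are not FieldView’s actual interface.

    # Hypothetical sketch of pulling real-time and historical readings from a
    # DCIM monitoring API so an asset-management tool can consume them. The
    # host, endpoints and field names are invented for illustration.
    import requests

    BASE_URL = "https://dcim.example.com/api/v1"   # placeholder monitoring endpoint
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

    def current_rack_temperatures(room_id):
        """Fetch the latest temperature reading for each rack in a room."""
        resp = requests.get(f"{BASE_URL}/rooms/{room_id}/temperatures", headers=HEADERS)
        resp.raise_for_status()
        return {r["rack_id"]: r["temp_c"] for r in resp.json()["readings"]}

    def power_history(rack_id, start, end):
        """Fetch historical power draw for a rack, e.g. for capacity planning."""
        resp = requests.get(
            f"{BASE_URL}/racks/{rack_id}/power",
            params={"from": start, "to": end, "resolution": "1h"},
            headers=HEADERS,
        )
        resp.raise_for_status()
        return resp.json()["samples"]

    # An asset-management product would join these readings with its own
    # inventory records, keyed by rack_id, rather than re-implement monitoring.
    if __name__ == "__main__":
        print(current_rack_temperatures("room-101"))

An integration of this sort lets a monitoring specialist and an asset-management specialist each stay in their core area while the customer still gets a combined view.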
Success in the Multi-tenant Data Center Space
Colocation providers are a big area of focus for the DCIM industry, as the multi-tenant data center industry is growing at a quicker clip than enterprise data centers. More businesses are looking to colocation, and more colocation facilities are being built. For this reason, DCIM providers are looking to multi-tenant data centers as a big opportunity going forward.
“We were the first to identify them as a market and to start developing the features to address that space,” said Onyshkevych. “Our first colo customer was in 2009; we’ve had colo functionality for a while. The 6.0 release gives us a lot more colo capabilities. We’ve had over a dozen large colo companies as customers, which means they’ve put us through the wringer.” CoreSite and ByteGryd are public examples. “Functionality we already had is being developed more thoroughly to address the colo market,” said Onyshkevych.
Colocation facilities are a different beast. “It’s a many-to-many relationship – tenants often aren’t at just one location. The tenants may have data centers that are not part of just one provider, or rented from other colos. There are weird relationships where the colo provider is primarily responsible for the facility, and tenants are responsible for the IT assets. Being a monitoring tool, the fit for FieldView is strong. We handle the colo needs. The tenants need to understand the temp and power chain in many cases, but what they need is asset management. So colo operators are a much closer fit,” Onyshkevych said.
While colocation is not the biggest portion of FieldView’s business, it is a growing one. “It’s growing faster than finance, but the banking segment isn’t growing rapidly. Those companies are moving to colo. Colo is where most of the growth is, in the US. They tend to be bigger on average, and bigger is a good fit for FieldView. Our sweet spot is 10MW and up. The colo and cloud facilities tend to be a higher order of magnitude. The two largest cloud providers in the world are our customers. Our growth is because we’re positioned well with colos.”
Onyshkevych mentions a famous Wayne Gretzky quote: “A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be.” He believes the puck is going to megascale cloud and colo facilities.
Looking to 2014: Steady as She Goes
“We’ve always focused on the monitoring,” said Onyshkevych. “We’re seeing that. It used to be that every couple of weeks there’s some new entrant. Nobody’s coming in anymore saying we raised $50m from VC.”
“FieldView will continue to focus on the large data centers, colos, and continuing our international growth, especially in Asia,” said Onyshkevych. “There are a lot more greenfield builds internationally, which presents a great opportunity for us. Our strengths are in colo, banking and finance, and technology.”
The company expanded its sales, operations and development teams in 2013, and grew its network of global channel partners to nineteen organizations. The company has reseller programs on six continents as well as new partnerships with IBM, CoreSite Realty Corp and Compass Datacenters. So far, 12 of the top 25 real estate/Multi-Tenant Data Center (MTDC) providers, six of the top 10 banks and three of the top five IT/Internet firms use FieldView’s DCIM solution.
Georgia Tech, Emory Team to Create TARDIS Supercomputer
There’s no sign yet that supercomputers have solved time travel. But Emory University and Georgia Tech are teaming to create a new high-performance computing cluster named TARDIS, a nod to the time machine spacecraft from the popular science-fiction TV series Doctor Who.
Why the name? The schools note that the TARDIS’ notable characteristic is that it’s “bigger on the inside” – its interior is larger than its exterior. That’s in line with the space-saving features of the new computer cluster, which packs its computing power into one cabinet instead of the 20 cabinets that previously were required.
“The performance advantages will be significant, and the power savings are tremendous,” says Dieter Jaeger, PhD, professor of biology and chair of the executive committee for high performance computing at Emory. “In addition, the old cluster was reaching the end of its expected life span and service contracts were running out. Having new hardware, all under warranty, bolsters reliability.”
“On the new server, we can now process 20 exomes per hour, a 60-fold increase in speed,” says Michael Zwick, PhD, associate professor of human genetics and scientific director of the Emory Integrated Genomics Core. “This is a dramatic improvement and will allow members of the Emory community to perform larger experiments faster and for less money. We will be a significant user of the new cluster and our computational services will be taking advantage of this exciting new capability.”
The new cluster is configured as 12 nodes with 768 cores. Eight gigabytes of RAM (four times more than previously available) are allocated to each core, and larger amounts of RAM can be scheduled for a single core if necessary. More storage is available than before: a total of 40 terabytes of storage space is available for all Emory projects, all fully backed up. The high-density Advanced Micro Devices “Abu Dhabi” processors are designed to consume less power and generate less heat.
The TARDIS cluster will be located at Georgia Tech in the Rich Computer Center, but the physical separation is expected to be negligible because of the 10 gigabit per second connection.
Hat tip to InsideHPC for the link.
Modular Data Centers: Adoption, Competition Heat Up in 2014
Last week’s Schneider-AST deal highlights the modular data center market, where both adoption and competition are on the rise.
Will 2014 finally be the breakout year for pre-fabricated data centers? The year is young, but the modular market has already seen its first major M&A deal, and may soon have its first IPO.
With marquee customers in the hyperscale market, and slow but steady progress with enterprise customers, modular designs continue to gain traction. New players and new designs are emerging, further advancing the potential for pre-fab deployments.
But barriers remain. The ISO container casts a long shadow over the modular data center market. Executives in the sector say it will take additional education, as well as more public customer success stories, before the new breed of modular designs can overcome customer resistance dating to the early days of the “data center in a box.”
M&A and IPOs
On Friday, Schneider Electric announced that it had acquired AST Modular, a Barcelona-based modular specialist that has built a global business. The deal reflected the growing importance of pre-fabricated designs and Schneider’s ambitions in the modular sector.
The market for modular data centers is also becoming more competitive, with U.K. specialist Bladeroom entering the U.S. market and investment firm Fidelity launching its Centercore design as a product. Late in 2013, IDC Architects announced that it is commercializing a modular design it has deployed for global banking customers, while newcomer NextFort opened a “modular colo” facility near Phoenix.
Meanwhile, IO is hoping to become the first modular specialist to go public. The company has announced plans for an initial public offering, but hasn’t yet indicated the date for its IPO. The Phoenix-based provider counts Goldman Sachs among its roster of clients, and is bullish on the outlook for modules as the delivery model for the “software-defined data center.”
“The data center market has spoken, and the consensus is that modular has won,” said Troy Rutman, the spokesman for IO.
Progress, But Also Resistance
Other executives in the modular sector see pre-fabricated designs making their way into the mainstream more gradually, but say that resistance persists.
“You’re deploying a new technology into a mature market that is questioning its delivery,” said Rich Hering, Technical Director Mission Critical Facilities at M+W Group. “Most folks don’t like change.”
“A lot of people believe modular is just for scale-out and low reliability,” said Dave Rotheroe, Distinguished Technologist and Strategist for HP. “It’s not true. Modular designs can and do apply in the enterprise.”
“Customers are just beginning to understand what modular allows them to do,” said Ty Schmitt, an executive director and fellow at Dell Data Center Solutions. “As the customer base matures and the supply chain matures, we’ll see exponential growth.”
Early Adopters
Hyperscale cloud builders Google, Microsoft and eBay were among the earliest users of modular designs. AOL has deployed “micro-modular” data centers both indoors and outdoors. On the enterprise front, Goldman Sachs and Fidelity have been the marquee names embracing pre-fabricated data centers.
Modular designs aren’t for everyone, but Schmitt says the concept is being proven with a nucleus of forward-thinking customers seeking cheaper and faster ways to deploy their IT infrastructure.
“It’s customers who’ve transformed their business,” said Schmitt. “They’re the early adopters. As more and more customers take advantage of software resiliency, we’ll see more adoption. It’s going to be a series of small hurdles.”