Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, March 25th, 2014

    11:30a
    Data Center Jobs: Viawest

    At the Data Center Jobs Board, we have a new job listing from Viawest, which is seeking a Data Center Engineer in Hillsboro, Oregon.

    The Data Center Engineer is responsible for monitoring the building’s HVAC, mechanical and electrical systems; performing preventive maintenance, site surveys, and replacement of electrical and mechanical equipment; reading and interpreting blueprints, engineering specifications, project plans and other technical documents; operating, installing and servicing peripheral devices; assisting with equipment start-ups, repairs and overhauls; preparing reports on facility performance; overseeing vendor facility maintenance; performing emergency equipment repair; and sharing 24×7 facility on-call responsibility. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    12:30p
    RAI: A Metric to Measure Whether Your Data Center is Operating Lean

    Rajat Ghosh is a Postdoctoral Fellow at the CEETHERM Data Center Laboratory, G.W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology.
    His website is RajatGhosh.wordpress.com and he is available on LinkedIn.

    Making a data center operation “lean” is increasingly becoming a critical business and regulatory requirement. A lean data center operation can satisfy its customers in the most cost-effective manner.

    One potential solution for data center operational expenditure (OPEX) optimization lies in adopting a usage-based pricing model for resource consumption. To realize that goal, a data center’s resource supply and demand sides should match as closely as possible. Following the time-tested management adage, “You can’t manage what you can’t measure,” this article proposes a metric, the resource allocation index (RAI), to measure how closely the demand and supply sides of data center resources match.

    Data Center Value Chain

    Figure 1: Resource Utilization Landscape for an Internet Data Center (IDC) Value Chain

    Figure 1 shows the resource utilization landscape for an internet data center (IDC) value chain, which acts as the engine room for prevalent client-server-based commodity computing.

    The demand side of an IDC is driven by its users and can be defined as the number of login requests received. Depending on the nature of the IDC, the login requests could be transactional (e.g. the Bank of America website), computational (e.g. analysis websites such as WolframAlpha), archival (e.g. Facebook), merchandise (e.g. Amazon), or content query (e.g. Google). This incoming network traffic poses demands in the form of electronic operations in IT equipment (ITE) such as volume servers, network switches, and storage disks. Depending on the type of application, an IDC might utilize different combinations of its IT-enabled capabilities. However, the common denominator of these IT operations is the consumption of electricity from renewable or non-renewable sources. Therefore, electricity should be considered the most fundamental resource for a data center.

    Metric to Measure Resource (Electricity) Utilization: PUE vs. RAI

    Although a data center’s electricity should be consumed primarily by its ITE, a few exhaustive surveys indicate that on average 35-45 percent of data center electricity is consumed by its cooling hardware, such as server fans, computer room air conditioning (CRAC) units, building chillers, and cooling towers. The ratio of a data center’s total electricity consumption to the electricity consumed by its ITE is its power usage effectiveness (PUE); the fraction of total electricity that reaches the ITE is the reciprocal of PUE.
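
    As a rough, hypothetical illustration of how the cooling share cited above relates to PUE (the figures below are assumptions chosen for the example, not survey data):

        # Hypothetical illustration: relating PUE to the share of electricity
        # that goes to cooling and other non-IT loads.

        def pue(it_kw: float, total_kw: float) -> float:
            """Power usage effectiveness: total facility power / IT equipment power."""
            return total_kw / it_kw

        # Assume a 1,000 kW facility where 40 percent of electricity feeds cooling
        # and other overhead (the middle of the 35-45 percent range cited above).
        total_kw = 1000.0
        it_kw = total_kw * (1.0 - 0.40)

        print(f"PUE = {pue(it_kw, total_kw):.2f}")               # ~1.67
        print(f"IT fraction (1/PUE) = {it_kw / total_kw:.2f}")   # 0.60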

    Despite being a useful and prevalent metric, PUE focuses only on the supply side of an IDC’s value chain. In fact, the state-of-the-art trend of designing the data center as a warehouse-scale computer calls for a single, holistic metric that encompasses the entire resource utilization value chain. To that end, this article proposes a metric, the resource allocation index (RAI), defined as follows:

    RAI = Normalized Resource Supply / Normalized Resource Demand

    RAI measures how much electricity is required by a data center in order to serve one request. Taking the ratio of two end-points of the value chain (as shown in Figure 1), RAI is an end-to-end metric and useful for the holistic assessment of a data center’s resource allocation. The normalization is carried out with respect to peak demand and supply values.
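
    To make the definition concrete, here is a minimal sketch of the calculation; the function name and sample figures are assumptions chosen for illustration, not values from the article:

        # Minimal sketch of the RAI calculation described above.
        # The sample figures are hypothetical, chosen for illustration.

        def rai(supply_kw: float, peak_supply_kw: float,
                demand_requests: float, peak_demand_requests: float) -> float:
            """Resource allocation index = normalized supply / normalized demand."""
            normalized_supply = supply_kw / peak_supply_kw
            normalized_demand = demand_requests / peak_demand_requests
            return normalized_supply / normalized_demand

        # Example: a facility drawing 650 kW against a 1,000 kW peak while serving
        # 7,100 requests per second against a peak of 10,000 requests per second.
        print(round(rai(650, 1000, 7100, 10000), 2))  # ~0.92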

    RAI as a Quantitative Standard for Resource Provisioning

    Table 1: Comparison of Resource Allocation Performances of Two Data Centers

    RAI can be used to compare the resource allocation performances of multiple data centers. Table 1 shows RAI values of two hypothetical data centers. While the RAI value for Data Center 1 is equal to 0.91, that value for Data Center 2 is 1.26. With a lower RAI value, Data Center 1 performs better in utilizing its resources.

    A cursory glance at the RAI definition suggests that a lower value is desirable, because the data center is spending less resource to satisfy its demand. Nevertheless, a lower RAI value does not necessarily indicate better resource allocation performance. In fact, an RAI value that is too low suggests the given data center is not drawing the electricity required to support its demand. Such resource under-provisioning can degrade service performance (e.g. the slow response of amazon.com on Black Friday) or even cause downtime (e.g. the outages seen by users of healthcare.gov).

    On the other hand, an RAI value that is too high indicates resource over-provisioning, which leads to significant waste of electricity. A data center’s RAI value therefore indicates whether its operating resources are over-provisioned, under-provisioned, or optimally-provisioned. Depending on the tier status and operating constraints of a given data center, an upper limit (UL) and a lower limit (LL) for the RAI value can be defined; within these limits, the data center can be considered optimally-provisioned. If we suppose UL=1.25 and LL=0.75, then Data Center 1 (RAI=0.91) is optimally-provisioned and Data Center 2 (RAI=1.26) is over-provisioned. Figure 2 illustrates the concept schematically.
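
    Continuing the illustrative sketch above, the provisioning mode can be read off by comparing an RAI value against the chosen limits (the defaults below simply mirror the hypothetical UL=1.25 and LL=0.75 from the example):

        # Classify a data center's provisioning mode from its RAI value,
        # using the illustrative limits from the example above.

        def provisioning_mode(rai_value: float,
                              lower_limit: float = 0.75,
                              upper_limit: float = 1.25) -> str:
            if rai_value < lower_limit:
                return "under-provisioned"
            if rai_value > upper_limit:
                return "over-provisioned"
            return "optimally-provisioned"

        print(provisioning_mode(0.91))  # Data Center 1 -> optimally-provisioned
        print(provisioning_mode(1.26))  # Data Center 2 -> over-provisioned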

    Figure 2: Assessment of Resource Allocation Performance based on RAI Values


    1:00p
    CenturyLink to Add 20 Megawatts of Data Center Space in 2014

    CenturyLink is planning a big year of growth for its data center network. Ever since its acquisition of Savvis, the telco has been seeking to build on existing strengths and synergies with Savvis.

    The result is several planned expansions and three new data center sites for 2014. The new builds will add more than 180,000 square feet of space and 20 megawatts of additional capacity to CenturyLink’s global footprint, supporting its CenturyLink Cloud network of public cloud data centers.

    “We’re doing more expansion in 2014 than 2013,” said Drew Leonard, vice president of colocation product management for CenturyLink. “We’ve increased our amount of CapEx spending to grow faster.”

    “IT leaders tell us the road to cloud often starts in an outsourced data center,” said David Meredith, senior vice president and general manager for CenturyLink Technology Solutions. “With this global rollout, we’re on an ambitious course to deliver carrier diversity, interconnectivity and managed hybrid services at pace with the market’s appetite for access to flexible and secure IT infrastructure.”

    Focusing on Local Markets

    The CenturyLink expansions add capacity in a number of second-tier markets that have an existing CenturyLink presence but limited options for data center space. “Colocation, regardless of how big an enterprise is, is still a relatively local purchase,” said Meredith. “These second-tier markets are absolutely ready for Tier III providers like us to come in.”

    Two new data center sites in Phoenix: Previously announced in January, the Phoenix sites are a great example of the synergies between CenturyLink and Savvis. “We have 1,700 CenturyLink employees, a strong presence, and we’re the local phone company,” said Meredith. “It made total sense to add a data center there.” The company partnered with IO in Phoenix, and the addition of data center modules to the CenturyLink portfolio extends the company’s capabilities for deploying space in new markets. “An idea behind that partnership was to diversify our product on the physical layer,” said Meredith. “There are some customers around the world interested in modules. We’re already bidding on some deals that are coming to us.”

    A new data center opening in Minneapolis in May 2014: Minnesota is another market where CenturyLink has a strong “feet in the street” presence, creating a data center opportunity. The data center is already Tier III certified by the Uptime Institute, and the company says it is already seeing a good sales funnel there. The facility was built in partnership with Compass Data Centers. It will initially be built out for 1.2 megawatts of critical capacity, with the option to expand up to 5 megawatts. “It’s another market that is a legacy CenturyLink market,” said Meredith. “We decided to go into that market because of the strong existing presence of CenturyLink.”

    New data center site in Toronto, opening in May: CenturyLink will add a second facility in Toronto, where it has seen strong demand at its first data center. The new space comes online late in the summer and will deliver two megawatts of critical load, with expansion capacity up to 5 megawatts. “The demand already for that site has been extremely high,” said Meredith. “We’ve been working with three pre-sale opportunities. The traction has been really good.” Toronto gives the company yet another opportunity to work with modular technology, since the second facility is a slab-floor building. The company says this facility, much like its others, will be carrier diverse. The facility is Tier III certified and will also act as another CenturyLink cloud service installation. The new facility will be in Markham, Ontario, a key tech corridor in the Greater Toronto Area; the company’s first facility is in Mississauga.

    Expansions in Five More Markets

    The new facilities are just part of the story. CenturyLink also plans expansions of existing operations in five key markets:

    • Weehawken, N.J., completed in March: The company has a data center campus in Weehawken, with all the data center buildings interconnected. CenturyLink serves a lot of global financial industry companies here, given the campus’s proximity to trading markets. “We are pleased to see CenturyLink expanding capacity in Weehawken, which serves as an excellent venue for the global financial industry with its proximity to trading markets,” said Curt Schumacher, chief technology officer at CBOE. “We added a point of presence in this key data center, which features a vibrant ecosystem and secure network solutions, to supply subscribing firms with competitive markets data over low-latency trading connections.”
    • Reading, England expansion expected in Q3 2014: “This is traditionally a very large managed hosting facility for us,” said Meredith. “We’ve been expanding to increase the overall capacity.  We’re adding 800kW total at the end of the day for colo in that facility as well. We continue to see more demand in Slough as well as the Docklands. We’ve essentially been full in Reading. We’re expecting to see some uptick in that market as well.”
    • Sterling, Virginia, expected completion in May 2014: Northern Virginia is a key data center market, so the company is adding more capacity there on a legacy Savvis campus in Sterling.
    • Chicago: CenturyLink will have four facilities in the Chicago market when this expansion is completed in Q3 2014.
    • Irvine, California: This sub-market of Los Angeles is targeted for an expansion completing in June 2014.

    1:30p
    Actifio Secures $100 Million, Funding Values Company at $1.1 Billion

    With a 182 percent annual growth rate and more than 1 exabyte of data under management, copy data virtualization company Actifio announced it has secured $100 million in a funding round led by Tiger Global Management, with participation from current investors North Bridge, Greylock IL, Advanced Technology Ventures, Andreessen Horowitz, and Technology Crossover Ventures.

    The round places a $1.1 billion valuation on Actifio, which says the funds will help accelerate its market coverage, global brand development, and product feature enhancements.

    “From the start we have focused on building the next great technology brand with a singular focus on delighting our customers with revolutionary technology, enterprise-class service, and transformative business results,” said Ash Ashutosh, Founder and CEO at Actifio. “Having shone a light on the $46 Billion global copy data problem, we will use this funding round to expand our copy data virtualization solution across the Global 2000; enable our cloud service provider partners to build thriving businesses powered by Actifio; and extend the reach of our technology down into an even broader base of the mid-market.”

    Transformational Data Management Model

    Boston-based Actifio has grown tremendously since its launch in 2009, with more than 300 enterprise users worldwide and customers in 31 countries. Its patented Virtual Data Pipeline technology enables virtualized data management, decoupling application data from physical infrastructure. Last month IBM announced that its SmartCloud Data Virtualization uses Actifio’s Virtual Data Pipeline technology to provide a model for managing critical data without the expense of managing excess copies.

    “Over the last four years Actifio has transformed data from a liability to an asset in hundreds of major enterprises worldwide,” said Jamie Goldstein of North Bridge, Actifio’s original investor and board member. “Helping customers monetize, analyze, and share their data rather than just store redundant copies of it is the biggest positive disruption in the data center & cloud since server virtualization. Ash and the team have both thought big and executed with focus and customer dedication since our investment at inception. This round is just a reflection of the value they’ve built and the huge opportunity ahead.”

    “Almost whatever you do, the data you produce doing it has huge value,” said Peter Levine, partner of Andreessen Horowitz. “Companies across every industry are struggling not only to protect that data, but to put it to work; to provide deeper insight, improve service, increase sales, and enhance profitability. Actifio’s copy data virtualization makes data available when and where you need it, just as server virtualization did for compute in the current generation of data centers. In the data center of the future, that’s going to be a very big deal.”

    2:00p
    10 Steps to Holistic Data Center Design

    Whether it’s the cloud, big data, or web content delivery, the data center plays a critical role for any organization. So it’s no wonder that there has been such a boom in data center demand and growth.

    Faced with increased demand, how do you determine when it’s the right time to move from a data center to a cloud or other model? Most of all, do you know how to make the move? Administrators and IT directors must continually look at their data center model to understand current as well as future demands. Whether it’s time to grow, move or build something new: are you ready to design your next-generation data center?

    Designing a data center can be a daunting and complicated process. There are many considerations and decisions that ultimately impact the cost to build, operate and scale the data center. But that doesn’t mean that there isn’t a good process to follow. In this white paper from Belden, we examine the 10 key steps in creating a truly holistic data center design.

    In the past, IT departments provided an estimate of the equipment and power needs required for their various systems. Many of these estimates were either inaccurate speculations based on current needs with no forethought given to the IT roadmap and business growth, or they were overinflated based on worst case scenarios. Facilities then used these estimates to either build exactly what was requested or to once again overinflate the design in attempts to protect themselves. This outdated process resulted in large inefficient data centers that were costly to operate and virtually impossible to upgrade.


    Fast forward to today. Now, we have truly agile systems capable of direct global distribution. But you have to get there first. Download this white paper today to learn about the 10 steps to a holistic data center design. In creating your next-generation data center design, these steps can save you a lot of time:

    • Step 1: Including All Players
    • Step 2: Setting Ground Rules
    • Step 3: Determining Availability & Redundancy
    • Step 4: Gathering Requirements
    • Step 5: Balancing CAPEX and OPEX
    • Step 6: Selecting the Right Equipment
    • Step 7: Designing Equipment Areas
    • Step 8: Designing the Overall Space
    • Step 9: Constructing and Commissioning
    • Step 10: Ongoing Post-Construction Review

    Remember, successful holistic data center design requires that administrators and managers take into account all business requirements, technology innovations, and energy and operational savings, while identifying and eliminating ineffective decisions and operational waste. By building a sound, thorough methodology behind your holistic data center design, your platform will be better able to support organizational needs both today and in the future.

    2:30p
    Recovering From A Data Center Fire in 16 Hours

    Fire in the data center. They’re scary words, and they should be. Fires and smoke events have led to some serious data center outages.

    But a fire doesn’t have to be a showstopper, according to Robert Von Wolffradt, the Chief Information Officer for the state of Iowa. “Here’s what happened and how we responded when an electrical fire took down our primary data center last month,” Von Wolffradt writes at Government Technology, presenting a step-by-step walkthrough of how state IT COO Matt Behrens and his team assessed the damage, determined their best options and brought the data center back online just 16 hours after the incident.

    “Shortly after 5 p.m., I was escorted into the data center with our top-notch general services staff by the fire department,” Von Wolffradt writes. “General services quickly identified the source of the fire – a wall-mounted electrical suppression unit. The smell from the FM-200 fire suppression discharge was incredibly pungent. Since all power was off, the first issues were restoring power (and bypassing the failure point) and venting the data center. This took some engineering because the air conditioning chillers were on the same emergency power shut off as all of the other equipment in the center.”

    Read his entire account at GovTech.com.

    2:30p
    Titanfall Taps Windows Azure Cloud for Low-Latency Gaming

    Cloud infrastructure has been well tested for enterprise workloads. But can it handle the demands of gamers, who are ultra-sensitive to network lag? The question is being tested with the release of the new Titanfall game by Respawn Entertainment for the Xbox One, which leverages the Microsoft Windows Azure cloud.

    Before going live recently, Titanfall had extensive beta testing last year, with Azure providing the back-end hosting of dedicated servers for game hosting and CPU power. Before recently taking over as Microsoft CEO, Satya Nadella was EVP of Microsoft’s Cloud and Enterprise group, which would typically reserve cloud capacity for business applications.

    With the new Xbox One debut, the company has opened up Azure to let designers create gaming experiences. When the Xbox One launched last year, the company also updated Xbox Live, with 300,000 servers backing the service. While other online games have stumbled in their online endurance tests, Microsoft has been building out its server infrastructure.

    Cloud-Powered Gaming “A Real Thing”

    Respawn engineer Jon Shiring said that since the beta ended, some skeptical devs have already changed their minds about the feasibility of using Azure for the parts of a game traditionally handled by a user’s console or PC.

    “Back when we started talking to Microsoft about it, everyone thought it was kind of crazy and a lot of other publishers were terrified of even doing it,” Shiring says. “I’ve heard that since our beta ended, they’ve been pounding down the doors at Microsoft because they’re realizing that it really is a real thing right now.”

    Last summer Shiring blogged about the Xbox Live cloud and why dedicated cloud servers were selected for the game’s multiplayer design. Typically, games have users’ computers host multiplayer matches, because it is cost-prohibitive to scale to hundreds of thousands of cloud servers. When small gaming company Respawn approached Microsoft with the challenge of making dedicated cloud instances affordable, the two realized that player-hosted servers were holding back online gaming and that this was something they could help solve.

    Shiring notes that the Xbox group came back to Respawn with a way to run all of the Titanfall dedicated servers, which lets the studio push games with more server CPU and higher bandwidth, and in turn build a bigger world with more physics, lots of AI, and potentially even more.

    Shifting Processing Power to Unleash Imaginations

    Engadget notes that many gamers look at Titanfall as the first true next-generation game, offering an experience we haven’t seen on last-generation hardware (think: the PlayStation 3 and Xbox 360). From what Shiring says, the fact that Respawn wasn’t held back by a console’s local processing power was key to the studio’s achievement.

    “There are other games like Battlefield that have dedicated servers, but they haven’t gone the same direction that we have with them. We have all of this AI and things flying around in the world; that has obviously let us build a different game than we would have if we’d have gone with player-hosted,” Shiring says. “Really, the biggest thing with that is that it has uncapped our designers and let them do things that were previously impossible to do.”

    Regional data centers allow Respawn to keep everyone playing even if their closest server farm is overloaded. During the beta, the studio ran Titanfall on an intentionally limited number of servers to discover where the infrastructure’s weak points were when running at a full load. Some 2 million people participated in the game’s test run (across both PC and Xbox One) and at one point, a portion of Europe’s data centers were running at full player capacity and couldn’t accept more users.

    Titanfall went live exclusively on Xbox One on March 13 in North America, and on March 14 in the UK.

    “At Xbox we have a long history of bringing blockbuster multiplayer games to our fans that have redefined what it means to play games with friends and others around the world,” said Yusuf Mehdi, Chief Marketing and Strategy Officer, Devices and Studios at Microsoft. “Leveraging the power of Xbox One and in close collaboration with our partners at Respawn Entertainment and Electronic Arts, Titanfall is poised to be one of those breakthrough games that ignites the potential of this generation.”

    6:10p
    Hortonworks Nets $100 Million to Accelerate Enterprise Hadoop

    Leading enterprise Hadoop provider Hortonworks announced an oversubscribed $100 million funding round led by funds managed by BlackRock and Passport Capital, joined by all existing investors. The funds will help the firm continue to grow its big data ecosystem and global operations.

    The company’s Hortonworks Data Platform (HDP) has seen tremendous growth since inception, and its versatility has expanded from support for multiple data processing engines to deep and broad integration across a variety of infrastructures and applications.

    Strategic reseller partners that include Microsoft, SAP, Teradata, HP and others, combined with hundreds of technology partners, enable enterprises to build on the Hadoop platform, using a modern data architecture that integrates with the technologies that they already know and rely upon.

    “Through our unrelenting focus on innovating completely in the open, and deeply integrating with existing data center systems, we have seen phenomenal growth in our business over the past 24 months,” said Rob Bearden, CEO of Hortonworks. “We will continue to deliver on the promise of a completely open platform and further cement Hortonworks as the unquestioned Hadoop leader in the IT ecosystem.”

    In a blog post, Bearden talks about building business momentum and the things that this funding round makes possible. After adding over 250 customers in the past year, the company will continue to innovate its Hadoop-powered enterprise data platform, and extend and enable its ecosystem and increasing roster of HDP certified applications.

    6:15p
    Dell Adds to Software Portfolio With StatSoft Acquisition

    Adding to its information management software lineup, Dell announced the acquisition of StatSoft, a provider of advanced analytics solutions that deliver a wide range of data mining, predictive analytics and data visualization capabilities.

    StatSoft combines comprehensive statistical analysis with advanced analytics to help organizations better understand their businesses, predict change, increase agility and control critical systems. Its software capabilities include database management and optimization, application and data integration, and big data analytics, which will be underpinned by Dell’s myriad software, storage, server and services offerings and industry relationships.

    Its solutions are widely used across a broad set of industry sectors including pharmaceuticals, financial services, technology and manufacturing. Harnessing big data, its solutions contain sophisticated algorithms for predictive analytics, machine learning and statistical analysis, enabling companies to find meaningful patterns in data and thrive in today’s data-driven economy. The company’s offerings span from desktop data modeling and analytics to high-performance, enterprise deployments. StatSoft’s software can be deployed on premises, in the cloud, or as software-as-a-service.

    “With the rapid explosion of data comes the opportunity for organizations to gain deep insight into their business processes in order to facilitate better decision making and maximize profitability,” said Matt Wolken, vice president and general manager, information management at Dell Software. “The acquisition of StatSoft gives Dell’s customers access to a proven advanced analytics solution that delivers the predictive and prescriptive analysis capabilities businesses need in order to make faster, more accurate decisions.”

    “We’re excited to join the Dell family and add our technology and expertise to Dell’s rapidly growing set of information management capabilities,” said Dr. Paul Lewicki, founder and chief executive officer, StatSoft. “Together with Dell, we can create new opportunities for customers to better leverage the growing volumes of data that are quickly becoming the lifeblood of organizations of all sizes, and further advance StatSoft’s mission of making the world more productive.”

    6:42p
    Google Slashes Cloud Pricing, Adds Slew of Features

    The 800-pound gorilla of cloud computing has awoken. Google today demonstrated some cool new developer-friendly features and announced major price drops for its Google Cloud Platform services.

    At the Google Cloud Platform Live event today, Google senior vice president Urs Hölzle demonstrated Live Migration, a feature which allows customers to seamlessly move virtual machines between data centers without interrupting service, and which could be enormously useful in allowing users to route around data center outages.

    The company also introduced CloudDNS and added a command-line tool and Managed VMs, which blur the line between Platform as a Service and Infrastructure as a Service, allowing developers to leverage the best of both worlds.

    Google also added support for VMs running the Windows operating system, something customers have long demanded. Preview support for Windows Server 2008 R2 is available starting today. The company also announced general availability of Linux operating systems from SUSE and Red Hat on its cloud.

    Cloud Pricing Meets Moore’s Law

    “Pricing hasn’t followed Moore’s Law: over the past five years, hardware costs improved by 20-30 percent annually but public cloud prices fell at just 8 percent per year,” Hölzle wrote in a blog post summarizing the announcements. “We think cloud pricing should track Moore’s Law, so we’re simplifying and reducing prices for our various on-demand, pay-as-you-go services by 30-85%.”
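
    For a sense of the gap Hölzle describes, here is a quick back-of-the-envelope calculation (using 25 percent, simply the midpoint of the 20-30 percent range he cites) comparing five years of compounding at the two rates:

        # Back-of-the-envelope comparison of the compounding rates Hölzle cites:
        # hardware costs improving ~25 percent per year vs. cloud prices falling
        # 8 percent per year.
        years = 5
        hardware_factor = (1 - 0.25) ** years   # ~0.24: hardware ~76% cheaper
        cloud_factor = (1 - 0.08) ** years      # ~0.66: cloud ~34% cheaper
        print(f"Hardware cost after {years} years: {hardware_factor:.0%} of original")
        print(f"Cloud price after {years} years:   {cloud_factor:.0%} of original")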

    Google will offer on-demand price reductions as well as sustained-use discounts, starting with a 32 percent price drop today. Storage pricing dropped a staggering 68 percent, to $0.026 per GB, or $0.02 per GB for Durable Reduced Availability (DRA) storage. BigQuery prices dropped 85 percent.

    “The Google cloud platform is a central part of our infrastructure development, and we’re investing heavily in it to make it as great a platform to external users as it has been to internal users at Google,” said Hölzle, who said Google has laid the groundwork for years and years of future improvements. The aim is to make developers more productive.

    New Pricing Takes Guesswork Out

    With Google’s new pricing, the on-demand price for VMs is now lower than the three-year reserved price from most providers. Hölzle said there is too much complexity in optimizing cost and performance, and that current clouds force too many trade-offs.

    “Pricing is still way too complex,” said Hölzle. “It seems like you need a PhD to figure out the best option.”

    That was reinforced by an analysis from cloud integrator RightScale that compared Google’s new pricing to that of cloud market leader Amazon Web Services, which allows users to manage future capacity through the purchase of reserved instances.

    “The new Google sustained-use pricing avoids the complexity, lock-in, and upfront costs of AWS reserved instance purchases,” RightScale notes. “Google users will automatically receive the best price for their level of usage, with no planning required on their part. However, tying the sustained-use discounts to Google’s monthly billing cycle could create an incentive to make decisions such as switching instance sizes on the monthly billing boundaries.”
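
    To illustrate how a sustained-use discount tied to a monthly billing cycle might work, here is a minimal sketch. The tier schedule below (full price for the first quarter of the month, then 80, 60 and 40 percent of the base rate for each subsequent quarter, which nets out to a 30 percent discount for a full month of use) and the base rate are assumptions for illustration, not Google’s exact billing rules:

        # Hypothetical sketch of a sustained-use discount tied to a monthly
        # billing cycle. The tier schedule and base rate are assumptions for
        # illustration, not Google's actual billing logic.

        BASE_HOURLY_RATE = 0.07  # assumed base price per VM-hour, in dollars
        HOURS_IN_MONTH = 730

        # (fraction of the month covered by the tier, multiplier on the base rate)
        TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

        def monthly_cost(hours_used: float) -> float:
            """Charge each successive block of usage at a progressively lower rate."""
            cost = 0.0
            remaining = hours_used
            for fraction, multiplier in TIERS:
                block = min(remaining, fraction * HOURS_IN_MONTH)
                cost += block * BASE_HOURLY_RATE * multiplier
                remaining -= block
                if remaining <= 0:
                    break
            return cost

        full_month = monthly_cost(HOURS_IN_MONTH)
        list_price = HOURS_IN_MONTH * BASE_HOURLY_RATE
        print(f"Full-month cost: ${full_month:.2f} "
              f"({1 - full_month / list_price:.0%} below list)")  # ~30% below list

    Under a scheme like this, usage accrued only for part of a billing month earns a smaller discount, which is the incentive effect RightScale alludes to above.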

