Data Center Knowledge | News and analysis for the data center industry
 

Monday, December 8th, 2014

    1:00p
    Phoenix Utility to Pilot Data Center Without Generator

    There are only so many ways to build a data center without a generator or some other kind of backup power source and still deliver the level of reliability demanded by companies whose applications must stay up around the clock.

    One way is to feed the data center power directly from more reliable parts of the grid. BaseLayer, the company in the process of being spun out of the Phoenix-based data center provider IO, has partnered with Phoenix-area public utility Salt River Project to attempt just that.

    SRP is going to deploy a BaseLayer data center container near one of its generation plants. The data center will receive power directly from a “bulk transmission” line, which is more reliable than distribution infrastructure, since, as the name implies, it is designed to transmit massive amounts of electricity over long distances.

    To date, the most prominent example of a production data center that does not rely on generators for power backup is eBay’s latest facility in Salt Lake City, Utah. The data center is powered by fuel cells that convert natural gas to electricity and uses the local utility grid as backup.

    Researchers at Microsoft have been exploring other ways to get rid of the generator. One idea is to put a data center module at the site of a waste treatment plant and use fuel cells to convert biogas the plant generates into electricity for the data center. Microsoft has a proof of concept for this idea running in Wyoming.

    Pilot to Prove Reliability

    SRP and BaseLayer’s deployment will also be a pilot project.

    “It’s currently a proof of concept until we have the data to be able to show that it’s commercially viable,” William Slessman, BaseLayer CEO, said. “We believe that we can remove … need for generation or UPS capabilities.”

    The key to the concept is being able to place the data center where it can tap into bulk transmission lines. “Some bulk transmission lines have never experienced an outage,” Clint Poole, manager of SRP Telecom, said.

    There isn’t an established system to validate the reliability of the solution, such as the Uptime Institute’s Tier rating, so it is important to run the pilot and have a third party evaluate it, Poole explained.

    Official availability of the bulk transmission line where the first “DataStation” will be installed is 99.99999 percent, but that number is conservative, according to Poole. “Our engineers allow me to say seven nines,” he said.

    Since the data center will receive two redundant power feeds from the grid and since there will be no transfer switch in the power chain (fewer components usually means better reliability), he expects that availability number to be even closer to 100 percent than the official estimate.
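
    A rough way to see why redundancy and fewer series components push availability toward 100 percent is to treat the two feeds as independent and multiply their unavailabilities. The short Python sketch below is illustrative arithmetic only, using an assumed transfer-switch figure, not SRP's engineering model:

        # Illustrative availability arithmetic, not SRP's actual engineering numbers.
        # Two independent feeds in parallel: the site loses power only if both fail.
        feed_availability = 0.9999999            # "seven nines" for one bulk transmission feed
        unavailability = 1 - feed_availability   # probability a single feed is down

        # Parallel redundancy: both feeds must be down at the same time.
        combined_availability = 1 - unavailability ** 2

        # A series component such as a transfer switch multiplies in its own availability,
        # so removing it from the power chain can only raise the total.
        assumed_switch_availability = 0.99999    # hypothetical figure for illustration
        with_switch = combined_availability * assumed_switch_availability

        print(f"single feed:                 {feed_availability:.9f}")
        print(f"two feeds, no switch:        {combined_availability:.15f}")
        print(f"two feeds + transfer switch: {with_switch:.9f}")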

    Addressing Internet’s Hunger for Power

    SRP will operate the DataStation, but the model can be different if proven out and adopted by others. A company may buy a module (or several), deploy it on a utility’s property, and outsource management to a third party. In another possible model, a data center service provider may own and operate the modules and lease the space within.

    For the utility there’s the immediate commercial interest of being able to sell power to customers with extremely high but steady load. Utilities value customers like that, since predictability of demand enables them to operate more efficiently.

    But there is also potential long-term benefit for electrical utilities. Since more and more data centers are being built and data center power demand continues to grow, being able to put them next to transmission lines means utilities or independent grid operators will not have to make big investments in expanding transmission infrastructure to meet the demand, Poole explained.

    For SRP, there is also the interest in selling connectivity services on its fiber network, which spans its entire bulk transmission system and parts of its distribution infrastructure and connects to major meet-me rooms throughout Phoenix. It is a “full-on commercial network,” Poole said.

    BaseLayer’s New Edge Module

    The container SRP is going to use is BaseLayer’s next-generation Edge module. It is designed specifically for outdoor deployment.

    A single module will contain all necessary cooling and power distribution equipment. SRP’s particular model will support 500 kW of IT load across 20 cabinets, but the design will be able to support higher densities too, Slessman said.
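
    As a quick back-of-the-envelope check on the density those figures imply (this is simple arithmetic, not a BaseLayer specification):

        # Average per-cabinet density implied by the stated module figures.
        module_it_load_kw = 500   # total supported IT load for SRP's module
        cabinets = 20             # cabinets in the module
        print(module_it_load_kw / cabinets, "kW per cabinet on average")  # 25.0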

    BaseLayer is a new company. It is one of the two companies that resulted from the split of data center service provider IO announced earlier this month. The other company will continue to be called IO.

    4:30p
    Establishing DevOps-Friendly Infrastructure Orchestration

    Alex Henthorn-Iwane is the Vice President of Marketing at QualiSystems.

    Orchestration and DevOps are buzzwords often heard these days, yet they have serious meaning for IT and data center professionals. The intersection between orchestration and DevOps indicates a major shift in how data center infrastructure is expected to be deployed and utilized to achieve line-of-business goals. IT organizations face increasing pressure to move faster to support business initiatives. How can they leverage infrastructure orchestration to support DevOps initiatives in a sustainable fashion?

    Infrastructure That Supports DevOps

    DevOps is a culture of collaboration between developers and IT operations. Because of its lean manufacturing roots, the progression of software from development to production deployment is often described as analogous to the journey from raw materials to finished goods on a factory floor. Other teams must also work on the software ‘goods’ before they hit production, including QA/testing, information security, and compliance teams.

    Think of infrastructure as each team’s workbench, which can either enable or hinder productivity, quality and velocity. If infrastructure provisioning to developers and other teams lags, then this is obviously a problem for velocity and productivity. According to Enterprise Management Associates’ (EMA) 2014 study on the software defined data center, 47 percent of enterprise IT teams take from one week to over a month to provide infrastructure resources to developers and testers.

    Infrastructure is supposed to support DevOps by becoming an Infrastructure-as-a-Service (IaaS) “cloud,” defined as follows:

    An IaaS cloud allows efficient self-service access to infrastructure resources that can be assembled into production-like working environments and automatically provisioned.

    Orchestration is the software-based process to create IaaS clouds. If collaboration is the goal of DevOps, then infrastructure orchestration must enable the IaaS with the characteristics below.
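
    As a minimal sketch of what “assembled into production-like working environments and automatically provisioned” can look like in practice, here is a hypothetical environment blueprint expressed in Python; the blueprint format and the provision() helper are illustrative assumptions, not any specific orchestration product’s API:

        # Hypothetical environment blueprint: a production-like topology declared as data,
        # then handed to an orchestrator to stand up on demand. All names are illustrative.
        blueprint = {
            "name": "webapp-staging",
            "resources": [
                {"type": "vm", "role": "web", "count": 2, "image": "ubuntu-14.04"},
                {"type": "vm", "role": "db", "count": 1, "image": "postgres-9.3"},
                {"type": "load_balancer", "role": "frontend", "count": 1},
            ],
            "network": {"subnet": "10.0.42.0/24", "isolated": True},
        }

        def provision(blueprint):
            """Stand-in for the orchestrator call that turns a blueprint into a live environment."""
            environment = []
            for resource in blueprint["resources"]:
                for i in range(resource["count"]):
                    environment.append(f'{blueprint["name"]}-{resource["role"]}-{i}')
            return environment

        print(provision(blueprint))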

    Flexible Self-Service Model

    Each team has different roles and needs in the pipeline from Dev to Ops. Developers are innovators who create new code and model an accompanying infrastructure environment that is best suited for running that code. For today’s complex, distributed applications, that means creating an infrastructure topology of multiple VMs and other infrastructure resources in a dynamic sandbox. Once developers finish their work, QA testers run functional and performance tests against the permutations of infrastructure that exist in production environments. Information security personnel need to ensure sound interactions with other infrastructure components such as firewalls; compliance teams must run their own tests based on applicable regulatory frameworks such as HIPAA or PCI.

    DevOps-friendly orchestration must offer infrastructure self-service that is looser and more sandbox-like at the development stage, where innovation happens, and progressively tighter, more standardized, catalog-driven self-service for teams working in the later stages. Empowered by this orchestration capability, the IT group can collaborate with the developers innovating new infrastructure topologies to adapt them into standardized environments that other teams operate on in later stages.
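
    One way to picture the “looser at development, tighter later” idea is as a per-stage policy the orchestrator enforces. The stages, quotas, and field names below are illustrative assumptions, not a product feature:

        # Illustrative per-stage self-service policy: sandbox-like freedom for developers,
        # progressively more catalog-driven constraints for downstream teams.
        stage_policy = {
            "dev":        {"catalog_only": False, "max_vms": 20, "lease_hours": 72},
            "qa":         {"catalog_only": True,  "max_vms": 40, "lease_hours": 24},
            "security":   {"catalog_only": True,  "max_vms": 10, "lease_hours": 8},
            "compliance": {"catalog_only": True,  "max_vms": 10, "lease_hours": 8},
        }

        def request_environment(stage, vm_count, from_catalog):
            policy = stage_policy[stage]
            if policy["catalog_only"] and not from_catalog:
                return "denied: this stage only allows standardized catalog environments"
            if vm_count > policy["max_vms"]:
                return f"denied: stage quota is {policy['max_vms']} VMs"
            return f"approved for {policy['lease_hours']} hours"

        print(request_environment("dev", 12, from_catalog=False))  # sandbox request allowed
        print(request_environment("qa", 12, from_catalog=False))   # must come from the catalog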

    GUI and API-Enablement

    Unlike Web companies, most enterprise IT infrastructure teams are staffed primarily with vendor-trained domain experts, most of whom can do some scripting, but aren’t professional software engineers. Ultimately the goal is to create continuous automated processes. This means that orchestration should support both GUI and API-driven ways to create infrastructure environments and the underlying automation workflows.
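
    To make the API-driven half concrete, here is a minimal Python sketch that requests an environment from a hypothetical REST endpoint; the URL, payload fields, and response shape are invented for illustration and do not correspond to any particular product’s API:

        import requests  # third-party HTTP library (pip install requests)

        # Hypothetical orchestration endpoint and token; substitute your platform's real API.
        ORCHESTRATOR_URL = "https://orchestrator.example.com/api/v1/environments"
        API_TOKEN = "replace-with-a-real-token"

        payload = {
            "blueprint": "webapp-staging",  # a standardized environment from the catalog
            "owner": "qa-team",
            "lease_hours": 24,
        }

        response = requests.post(
            ORCHESTRATOR_URL,
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()
        print("environment id:", response.json().get("id"))

    The same request could be triggered from a GUI form or from a continuous integration job, which is the point: both entry points drive the same underlying automation workflow.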

    Handling Physical and Virtual Infrastructure

    DevOps was first practiced by hyperscale and Web companies with the luxury of totally modern, virtualized or public cloud infrastructure, making it relatively easy to adopt a code-centric approach to infrastructure. Many enterprises, however, have a mix of legacy, physical, and virtual infrastructure, meaning that the concept of infrastructure clouds can’t assume that everything is already virtualized.

    The EMA study referenced above revealed 89 percent of enterprise IT teams still run applications on dedicated, non-virtualized servers. Networking is also primarily non-virtualized today, and despite the SDN hype, this will be the reality for a while. Many of these non-virtualized systems and networks have remarkable staying power, are mission critical and touch many applications. It’s reasonable to start with the easiest infrastructure and less critical applications when trying to practice DevOps, but if the plan ignores physical realities, the initiative may stall because investment plans weren’t sufficiently thought out.

    One banking IT infrastructure team invested in an orchestration product assuming that virtualization was enough, then discovered that users needed access to physical resources. The orchestration investment fell apart after a year of consulting services spent trying to Band-Aid over the gap, and the team had to admit defeat. It’s beneficial to determine where your infrastructure cloud roadmap is going to lead you, and in what timeframe. If you believe that your non-virtualized resources will be upgraded and migrated before needing servicing, then your orchestration job is easier. Otherwise, you need orchestration that bridges the physical and virtual resource gap.

    Pre-Deploy vs Deploy Use Cases

    When developers, testers, InfoSec and compliance teams are working, the infrastructure resources use case is fundamentally different from when applications or services are deployed. DevOps-friendly orchestration needs to support both use cases. Orchestration for the deployment stage focuses on ensuring infrastructure resources are available to support application uptime and performance. Production orchestration typically provides pools of resources dedicated to particular applications or services, or shared between different applications with a policy for prioritizing allocation when conflict occurs.

    In the pre-deployment stages, the focus is team productivity: enabling many users to contend for infrastructure resources from a shared pool. Developers and testers need to use resources for relatively short periods and release them after they’ve completed a particular task. Orchestration therefore must support rapid assembly and disassembly of environments from a shared infrastructure resource pool, based on ad-hoc requests by users generally considered equal in priority. The orchestration goal is to manage resources, allocation conflicts and utilization to enable maximum user productivity.
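
    A toy sketch of that pre-deployment pattern, where equal-priority users take short leases on a shared pool and release them when a task finishes (all names and numbers are illustrative):

        # Toy shared resource pool for pre-deployment use: ad-hoc, short-lived leases,
        # first-come first-served among users of equal priority.
        class ResourcePool:
            def __init__(self, total_vms):
                self.free = total_vms
                self.leases = {}  # lease_id -> vm_count

            def lease(self, lease_id, vm_count):
                if vm_count > self.free:
                    return False  # contention: the caller waits or queues
                self.free -= vm_count
                self.leases[lease_id] = vm_count
                return True

            def release(self, lease_id):
                self.free += self.leases.pop(lease_id)

        pool = ResourcePool(total_vms=50)
        print(pool.lease("qa-run-17", 30))      # True: 30 of 50 VMs leased
        print(pool.lease("dev-sandbox-4", 30))  # False: only 20 VMs free
        pool.release("qa-run-17")
        print(pool.lease("dev-sandbox-4", 30))  # True once the QA run releases its lease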

    Properly done, infrastructure orchestration can be the tool and process that IT uses to provide a collaborative platform for developers, testers, InfoSec, compliance and Ops teams to drive high performance, high quality software outcomes. Consideration of varying requirements around self-service models, GUI vs. API-enablement, physical vs. virtual infrastructure, and pre-deploy vs. deploy use cases can ensure that orchestration will help IT and data center professionals play a constructive role in creating DevOps culture.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:30p
    Meet Apollo: Reinventing HPC and the Supercomputer

    Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges revolving around advanced computations for science, business, education, pharmaceuticals and beyond.

    The challenge, however, is that many data centers are reaching peak levels of resource consumption, making it difficult for these individuals to support such high-demand applications. How can they continue to create ground-breaking research while still utilizing optimized infrastructure? How can a platform scale to the new needs and demands of these types of users and applications? This is where HP Apollo Systems help reinvent the modern data center and accelerate your business.

    New applications are being deployed that require greater amounts of resources. These applications and data sets are critical to helping an organization run and stay ahead of the competition. Most of all, these applications are gathering data, quantifying information and producing critical results.

    One of the biggest challenges surrounding modern, high-intensity applications revolves around resource consumption and economies of scale. Remember, we’re not just talking about server platforms here. It’s critical to understand where power, cooling, and resource utilization all come into play. This is why modern organizations requiring high levels of processing power must look at new hardware systems capable of providing more power while using less space and being more economical. It’s time to look at powerful, modular solutions that can break all the norms around high performance computing and data gathering.

    This white paper explores HP Apollo Systems, where they can be deployed, and how they directly address modern high-performance application requirements.

    The demand for more compute performance for applications used in engineering design automation (EDA), risk modeling, or life sciences is relentless. If you work with workloads like these, your success depends on optimizing performance with maximum efficiency and cost-effectiveness, along with easy management for large-scale deployments. Deploying powerful ProLiant XL230a servers inside the Apollo a6000 chassis allows complex multi-threaded applications to truly run optimally.

    The clock is always ticking to find the answer, find the cure, predict the next earthquake, and create the next new innovation. That is why high-performance computing (HPC) is always striving to answer engineering, scientific, and data analysis problems faster and at scale.

    For example, the HP Apollo 8000 System offers the world’s first 100 percent liquid-cooled supercomputer with built-in technology that protects the hardware. Remember, one of the most important ingredients behind modern HPC requirements is scalability. Apollo Systems use a rack design supporting up to 144 servers per rack. This translates to about four times the teraflops per rack compared to air-cooled designs, and the energy-efficient design helps organizations eliminate up to 3,800 tons of carbon dioxide waste from their data center per year.

    Download this white paper today to learn how the Intel-Powered HP Apollo 8000 is the new type of HPC and supercomputing architecture.

    5:43p
    CenturyLink Acquires Cloud DR Provider DataGardens

    CenturyLink has acquired Disaster Recovery-as-a-Service provider DataGardens. Building on an existing relationship, the acquisition gives CenturyLink a proven, fast cloud DR service it can offer its customers and technology that enhances data migration capabilities.

    DataGardens allows a private data center to fail over to a public cloud in just a few minutes, creating an exact replica, DataGardens CEO Geoff Hayward said. The company offers multiple deployment options and “protection groups” that ensure that a pool of servers stays consistent.
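
    In the abstract, a protection group means a set of interdependent servers that is replicated and failed over as a single unit, so the members stay consistent with one another. The Python sketch below is a conceptual illustration only, not DataGardens’ implementation:

        # Conceptual sketch of a protection group: replicate and fail over a set of
        # interdependent servers together, never individually.
        protection_group = {
            "name": "erp-stack",
            "members": ["erp-app-01", "erp-db-01", "erp-web-01"],
            "replication_target": "public-cloud-recovery-site",
        }

        def fail_over(group):
            """Bring up cloud replicas for every member of the group at once."""
            return [f'{member} -> replica running at {group["replication_target"]}'
                    for member in group["members"]]

        for line in fail_over(protection_group):
            print(line)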

    Acquiring forward-looking technology firms has been an important part of CenturyLink’s strategy. Its acquisitions have turned the telecom into a top cloud competitor.

    The cloud journey began with the acquisition of Savvis and was later complemented by acquisitions of Platform-as-a-Service provider AppFog and Infrastructure-as-a-Service provider Tier 3, forming the foundation of its cloud. Tier 3 has enhanced the company’s self-service cloud offerings, and DataGardens enhances them further with DR and, eventually, more cloud migration capabilities.

    The number of organizations turning to DR in the cloud is growing, according to Forrester Research. Cloud DR is advancing in capabilities and suitability, while traditional disaster recovery can be complicated to set up, expensive, and outside the expertise of many companies.

    DataGardens’ technology and ease of use attracted CenturyLink. “We saw our customers using the DataGardens tool and replicating workloads from client data center to our cloud platform for DR-as-a-Service,” said Jonathan King, vice president of cloud strategy and business development at CenturyLink. “We also saw cloud-to-cloud recovery scenarios. This is a core platform attribute that we feel needs to be built in.”

    DataGardens has already integrated not only with CenturyLink’s cloud, but also with the legacy cloud environments acquired through Savvis. “There’s already a good base of integration, but we are going to be spending the year making the user experience that much tighter,” said King.

    The disaster recovery capabilities made the deal attractive, but CenturyLink is also getting talented technologists and software capabilities that extend beyond DR, including cloud-to-cloud migration.

    DataGardens will be attractive to small and mid-size companies, as well as to enterprise departments and application-specific workloads in need of a DR solution. It enables customers to share DR across a large base in a multi-tenant setting, which addresses the cost issue. The price point is comparable to cloud backup, said Hayward.


    One potential cloud DR deployment model is where both physical and virtual resources are mirrored to a recovery group in the CenturyLink cloud. (Source: CenturyLink)

    “We see a lot of demand in the market for this offering because things are becoming more distributed and critical as systems grow,” said King.

    “It takes things forward,” said Hayward. Traditional DR is “something that rarely gets implemented, or they use something like tape with awful recovery times.”

    In addition to acquiring and developing technological capabilities, CenturyLink has been building out its infrastructure in support of its cloud services, announcing several new data centers this year. The company recently committed to getting Uptime Institute’s Management and Operations Stamp of Approval across its entire footprint (close to 60 data centers).

    It has achieved Uptime’s Tier III certifications for design and constructed facility in Minnesota and Toronto.

    9:00p
    Hacker Group Lizard Squad Takes Down PlayStation and Xbox Live; Threatens Christmas


    This article originally appeared at The WHIR

    Online, cloud-based games on Sony PlayStation began showing an error message reading “Unable to connect to PSN” on Sunday evening. The hacker group Lizard Squad is claiming credit for the attack via Twitter. At 4:29 EST the group posted “PSN Login #offline #LizardSquad.” It is unknown at this time whether the problem is due to a DDoS attack, but the group is well known for that method.

    This attack comes while Sony is still reeling from a hack that resulted in massive amounts of private company data being posted to the internet.

    The group took PlayStation and Sony Entertainment offline in August and has taken credit for an Xbox Live outage on Friday evening that lasted several hours. It also claimed responsibility for the Xbox outage on December 1, 2014.

    Xbox Live has had a run of bad luck with outages; an Azure disruption took it offline in mid-November.

    According to a Lizard Squad tweet, these disruptions are “just a small dose of what’s to come on Christmas.” The group also tweeted “Who’s next?” prior to Sunday’s outage, indicating that it may have already been planning the PS4 and PS3 attacks.

    Crave Online said, “The attack is very real, too. You can see a visual representation of the DDOS attack live on websites such as Digital Attack Map. Traffic appears to originate from proxies in China which are slamming the North American PSN servers. There also appears to be an attack hitting Peru. It is presently unclear what this is affecting.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/hacker-group-lizard-squad-takes-ps4-ps3-xbox-live-threatens-christmas

    9:30p
    UK Mass Surveillance Doesn’t Violate Human Rights, Tribunal Says


    This article originally appeared at The WHIR

    An investigatory powers tribunal has ruled that the current system that governs mass surveillance in the UK is not in violation of human rights.

    According to a report by the BBC on Friday, Amnesty, Privacy International and other human rights groups brought the case forward earlier this year, and plan to appeal the decision to the European Court of Human Rights in Strasbourg.

    The hearing took place in July, where the IPT was to determine whether the UK’s mass surveillance program, called Tempora, violated citizens’ right to privacy and freedom. The Tempora program is allegedly used by GCHQ to intercept communications through fiber optic cables entering and leaving the UK.

    The IPT found that the surveillance and information sharing with US agencies is legal, but acknowledged that “questions remained about GCHQ’s previous activities,” the BBC said.

    The group of human rights organizations said that the documents released by Edward Snowden uncovered ways in which GCHQ violated the European Convention on Human Rights, specifically article 8, the right to privacy, and article 10, which protects the freedom of expression.

    According to the BBC, “[T]he judges at the Investigatory Powers Tribunal (IPT) said the disclosures made during this case, which included the legal footing of the intelligence system’s activities, had contributed to their decision that the intelligence agencies were not in breach of human rights.”

    “The proceedings forced the Government to disclose secret policies governing how foreign intelligence agencies, including the NSA, share information with GCHQ,” Legal Director at Privacy International Carly Nyst said in a statement following the decision. “Privacy International believes that the fact that these secret policies are only now public because we have forced their disclosure in court means that such rules could never make the actions of GCHQ in accordance with the law. The IPT must find that secret law is not law, and should at the very least rule that all UK access to PRISM was unlawful prior to today.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/uk-mass-surveillance-doesnt-violate-human-rights-tribunal-says

    9:45p
    Gartner, IDC: Hyperscale Data Centers Drive Server Sales

    Both IDC and Gartner think growth in the server market is being driven primarily by investment in hyperscale data centers. The top four customers in the server market are cloud service providers, accounting for a fifth of all servers shipped, according to IDC.

    The two research firms released their third-quarter worldwide server revenue figures this month, and while both reports show modest growth in the market, big data analytics, mobile, cloud, and social offer a bright spot.

    “The third quarter of 2014 produced modest growth on a global level, highlighting positive but constrained demand,” Errol Rasit, research director at Gartner, said in a statement. “Only North America and Asia Pacific exhibited shipments growth, largely driven by demand from hyperscale organizations located there. These results support the continued bifurcation of enterprise and consumer services server demand.”

    North America is the biggest cloud services market, while Asia Pacific is the fastest growing market. Among the most recent cloud providers to expand capacity in Asia Pacific are Salesforce, which plans to launch a new data center in Japan, LeaseWeb, which has added a Hong Kong site within a Pacnet facility, and Microsoft, which is working on adding an Azure data center in Australia.

    There were variations in shipment and revenue figures between the two research firms, but both outlined similar general trends.

    Overall, Gartner saw shipments grow 1 percent in the third quarter year over year, and IDC saw them improve 5.7 percent. Gartner saw 1 percent revenue growth, while IDC said revenue had grown 4.8 percent. This is the second consecutive quarter of year-over-year improvement, according to IDC.

    Overall shipments and revenue are growing, but different regions and platforms skewed figures one way or the other. Parts of the market are dragging down the overall number, hiding the impressive growth in the hyperscale world and in certain regions, such as China.

    The U.S. and Asia Pacific are experiencing the fastest growth, while Latin America and Eastern Europe have declined. IDC breaks the numbers down into smaller regions in its statement, noting that the top four Chinese manufacturers are seeing very strong revenue growth of more than 35 percent year over year.

    HP continues to be the leading server vendor, despite a small decline in revenue, which according to Gartner was 1 percent, and according to IDC was 0.5 percent.

    x86 server shipments and revenue are growing, while RISC/Itanium Unix servers are dropping, said Gartner. IDC noted HP’s x86-based ProLiant servers are seeing increased market demand despite the company’s overall slight server revenue decline. The decline is due to weakness from Itanium-based Integrity server revenue.

    Non-x86 servers continue to see a tough time according to both firms. IDC noted a thirteenth consecutive quarter of revenue decline in non-x86 servers.

    IDC also broke out blade and density-optimized server figures. Blade servers, which are highly leveraged in virtualized and converged enterprise environments, increased almost 2 percent year over year, reaching $2.3 billion. Density optimized servers, utilized by large homogeneous data centers, declined year over year because of several large deployments that occurred in the third quarter of 2013.

    IBM leads the non-x86 segment, according to IDC, but saw a sharp 24 percent year-over-year revenue decline in that segment. IBM saw decreased demand across all of its lines ahead of October’s divestiture of its x86 business and amid the company’s natural technology cycle.

    Cisco experienced the highest growth in the third quarter. Both research firms said its server sales grew more than 30 percent year over year, but from a much smaller base, equivalent to roughly 3 percent (Gartner) to 5 percent (IDC) of the market. Both see Cisco gaining market share.

    10:00p
    Schneider, HP Link Application and Power Management in DCIM Integration

    Schneider Electric and HP have integrated Schneider’s data center infrastructure management software StruxureWare with HP’s OneView infrastructure management platform. The combination will provide data center managers with further insight into servers, storage, and networking.

    Bridging the silos between facility and data center managers continues to be a top concern in the DCIM space. The two companies announced in January that they were working on a converged data center and IT management platform. The OneView integration builds upon an earlier integration of StruxureWare with HP’s Universal Configuration Management Database (uCMDB).

    StruxureWare provides the ability to monitor, operate, analyze, and optimize data center power, cooling, security, and energy, and OneView gives it the IT monitoring and management element.

    “It’s a good move for Schneider, as DCIM is moving up the IT stack,” said Rhonda Ascierto, research manager at 451 Research. “The big drivers are greater visibility into the cost for workloads and applications. Data Service Optimization integration up the stack is important. As more IT services are being outsourced, people need to understand the true cost of running a service.”

    Ascierto said that giving application data views of physical infrastructure is an early, big trend in the space, and that this partnership speaks to that trend. DCIM tracks the physical resources of the data center. When you integrate it with services you can track virtual resources. It’s about associating applications and services with the underlying elements.

    The integration makes it possible to share information and data across the two platforms easily, giving a more complete view of facilities and IT. It shows application dependencies on physical infrastructure and helps determine true cost of ownership by associating workloads with watts, bridging the often very separate worlds of IT operations and the facility.
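
    The “workloads with watts” idea can be pictured as a simple roll-up from application to the servers it runs on and their measured power draw. The names, wattages, and rate below are invented purely for illustration:

        # Illustrative cost-per-workload roll-up: map each application to its servers'
        # measured power draw and price the energy. All figures are made up.
        server_watts = {"srv-101": 320, "srv-102": 280, "srv-201": 450}
        app_to_servers = {"billing": ["srv-101", "srv-102"], "analytics": ["srv-201"]}
        price_per_kwh = 0.10  # assumed utility rate in dollars

        for app, servers in app_to_servers.items():
            watts = sum(server_watts[s] for s in servers)
            monthly_kwh = watts / 1000 * 24 * 30  # rough month of continuous draw
            print(f"{app}: {watts} W, about ${monthly_kwh * price_per_kwh:.2f} per month in energy")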

    “IT and facilities partnerships are the core of converged management,” said Rick Einhorn, HP vice president of data center consulting, in a release. “It’s an important element for businesses looking to benefit from the new style of IT across the data center and facilities lifecycle.”

    HP has also joined Schneider’s EcoStruxure Technology Partner Program. The program focuses on collaborative solutions that improve interoperability and reduce integration costs for easier deployment.

    There are a number of other DCIM providers that integrate with OneView, including Panduit, iTracs and Nlyte.

    Schneider has been boosting StruxureWare through partnerships and added functionality. The company added colocation-friendly features to the DCIM suite this year and made it available on Microsoft Azure, bringing energy management tools to the cloud.

    Major data center vendors known for power chain offerings are looking to software in general to better tie facilities and IT. Eaton recently noted it was looking to software to unify IT and facilities management. Emerson has also been doing the same with its Trellis DCIM.

    Power management is an integral part of the infrastructure management picture but is often an island. The Schneider and HP integration work speaks to the facilities and IT worlds moving closer together, as the building and what runs on the servers directly affect one another.

