Data Center Knowledge | News and analysis for the data center industry

Wednesday, October 8th, 2014

    12:00p
    Every Last Raised Floor Tile

    The real estate portion of the overall data center cost calculation in the U.S. is so small compared to the cost of energy and electrical and mechanical equipment that it is rarely talked about. It is a much bigger consideration in China, however, where available physical space is in short supply and almost always comes at a premium.

    The Green Grid’s Chinese team has used a concept similar to Power Usage Effectiveness to come up with a metric to measure how effectively available physical space in a data center is utilized. Called Space Usage Effectiveness, or SpUE, it is a way to compare the number of server racks in a data center with the number of server racks that data center is built to accommodate.

    David Wang, data center architect at Teradata and a China liaison for The Green Grid, introduced the new metric at the industry group’s annual forum in Burlingame, California (just outside of San Francisco), Tuesday.

    SpUE is meant to be used for space the same way PUE is used for energy: to assess current status and to track improvements.

    “We really need a metric for the end user to look at what the data center’s current status is in terms of space utilization,” Wang said. “It gives you clear direction where you can go.”

    A 3,200-square-foot data hall, for example, may have 29 racks set up in a suboptimal configuration with a number of hot spots. Its design power and cooling capacity may be multiple times the actual load, but the ability to increase the load may be limited because of the way the racks are laid out.

    SpUE is a way to assess how big or small that disconnect between design and actual load is.

    Unlike the single-number PUE score, a SpUE score has two numbers: actual and potential. Each of them is a ratio that illustrates the relationship between kilowatts per rack and square feet per rack (rack power density and space density).

    That 3,200-square-foot facility used as an example can potentially accommodate 72 racks instead of the 29 it has. A densely packed data hall will have space density of 20 to 30 square feet per rack, Wang said.
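
    The Green Grid has not published the full SpUE formula here, but the worked example is easy to reproduce. The short Python sketch below recomputes the actual and potential space densities for that 3,200-square-foot hall; the simple utilization ratio at the end is an illustrative stand-in, not the official SpUE definition.

    ```python
    # Space-utilization math for the 3,200 sq ft example above. The utilization
    # ratio is an illustrative stand-in, not The Green Grid's official SpUE formula.

    HALL_AREA_SQFT = 3_200
    ACTUAL_RACKS = 29        # racks installed today
    POTENTIAL_RACKS = 72     # racks the hall is built to accommodate

    actual_density = HALL_AREA_SQFT / ACTUAL_RACKS        # ~110 sq ft per rack
    potential_density = HALL_AREA_SQFT / POTENTIAL_RACKS  # ~44 sq ft per rack
    utilization = ACTUAL_RACKS / POTENTIAL_RACKS          # ~40% of designed rack capacity

    print(f"Actual space density:    {actual_density:.0f} sq ft/rack")
    print(f"Potential space density: {potential_density:.0f} sq ft/rack")
    print(f"Rack capacity in use:    {utilization:.0%}")
    ```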

    Whether SpUE will be adopted by The Green Grid is unknown at this point. The organization’s PUE metric has enjoyed widespread usage because of its elegance and accessibility, but the Water Usage Effectiveness and Carbon Usage Effectiveness metrics that came out subsequently have not seen similar levels of adoption.

    The metric may also be more useful in China than in North America and Europe, where availability of physical space is not a big concern compared to availability of power, and where companies have gotten very sophisticated in the way they build out data center capacity, bringing new space online incrementally as demand arises.

    Chinese government grades data centers for greenness

    There are about 450,000 data centers in China, said Ben Tao, a senior engineer at Intel and vice-chair of The Green Grid’s technical work group in China. Together, they are responsible for about 1.5 percent of the nation’s total power consumption, he said.

    The issue of data center power consumption is a rising concern for the nation’s government. To address it, a government body has come up with a way to rate data center energy efficiency, which Wang also talked about at the forum.

    Called Green Grade Assessment, or GGA, it is meant to assess efficiency beyond PUE, although PUE is part of the calculation.

    It was created by the China Cloud Computing Promotion and Policy Forum, which is affiliated with the Ministry of Industry and Information Technology of China.

    The rating consists of multiple categories, each weighted differently. The maximum energy efficiency score, for example, is 55 points, and there are also components for energy-saving technologies (35 points) and green management (10 points), plus bonus points.
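
    As a rough illustration of how a weighted rubric like this adds up, the sketch below tallies a set of category scores against the caps cited above. Only the 55/35/10 maximums come from the talk; the sub-scores and bonus in the example are hypothetical.

    ```python
    # Illustrative tally for a GGA-style weighted rubric. Only the category
    # maximums (55 / 35 / 10 points plus bonus) come from the talk; the example
    # scores below are hypothetical.

    CATEGORY_MAX = {
        "energy efficiency": 55,
        "energy-saving technologies": 35,
        "green management": 10,
    }

    scores = {  # hypothetical assessment results for one facility
        "energy efficiency": 41,
        "energy-saving technologies": 28,
        "green management": 8,
    }
    bonus_points = 3

    total = sum(min(scores[c], cap) for c, cap in CATEGORY_MAX.items()) + bonus_points
    print(f"GGA-style total: {total} points (out of a 100-point base plus bonus)")
    ```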

    GGA was conceived in 2011, and in 2012 and 2013 the organization rated 17 data centers.

    3:30p
    Storage Technology: Breaking Free From Ownership

    Andres Rodriguez is the CEO and Co-Founder of Nasuni, a unified storage company that serves the needs of distributed enterprises. Previously he was a CTO at Hitachi Data Systems and CTO of the New York Times.

    There is a lot of pressure on today’s CIO. Enterprises are asking them to think more strategically, trim budgets, and make decisions that will drive the business forward. Juggling those responsibilities is a delicate act, and the fear of dropping the ball, especially when it comes to budgets, keeps many CIOs up at night.

    Technology vendors have worked diligently in recent years to build scalable systems with flexible payment options that better fit business needs. Software-as-a-Service (SaaS) found its start with CRM, led by Salesforce.com, but the idea of moving enterprise software to the cloud quickly caught on in a wide variety of enterprise technologies. Infrastructure-as-a-Service (IaaS) followed suit, with companies like Amazon offering previously unimaginable compute and storage power in the cloud.

    Yet the big hardware storage vendors have been slow to pick up on this transformation. And it’s no secret why – their margins and mindset won’t allow it. That’s why no CIO should ever have to buy storage hardware again.

    A new breed of storage technology

    That last statement may sound far-fetched. To those who have worked with storage hardware for their entire career, it may even sound like a terrifying proposition. But no matter how one feels about owning storage hardware, a growing number of enterprise IT organizations are already making the decision to cut the ties of storage hardware ownership in favor of enterprise Storage-as-a-Service. Those willing to swear off buying hardware are reaping enormous cost savings and management efficiencies from this new breed of storage technology.

    Even the most ardent cloud skeptic would have to admit that owning storage hardware is an enormous headache. Planning for capacity is a major issue, one that is made difficult by the rapid increase in enterprise data. On average, the amount of enterprise data is doubling every year or two, which makes the traditional three-year purchase cycles almost impossible to navigate; the CIO must either pay up front to increase capacity by far more than is currently needed, or make an emergency purchase a year or two down the line. Either way, IT balloons its capital expenditures while creating the impression that the CIO did not plan ahead.
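
    The mismatch between that growth rate and a three-year purchase cycle is easy to quantify. The sketch below uses the doubling rate cited above and a hypothetical starting footprint; the figures are purely illustrative.

    ```python
    # Illustrative capacity-planning math: data that doubles every 18 months
    # outgrows a purchase sized at the start of a traditional three-year cycle.

    starting_capacity_tb = 100       # hypothetical current footprint
    doubling_period_months = 18      # "doubling every year or two"
    cycle_months = 36                # traditional three-year refresh cycle

    needed_at_end_tb = starting_capacity_tb * 2 ** (cycle_months / doubling_period_months)
    print(f"Capacity needed by end of cycle: ~{needed_at_end_tb:.0f} TB")
    # ~400 TB: buy 4x capacity up front, or make an emergency purchase mid-cycle.
    ```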

    Improved capacity planning, lower spending

    From a financial standpoint, adopting the enterprise Storage-as-a-Service model means that IT only has to pay for usable storage, with the ability to scale up or down whenever it wants. But the advantages of no longer owning storage hardware amount to more than improved capacity planning and lower IT spending.

    When an enterprise owns its hardware, all enterprise data lives in those boxes. When it comes time to synchronize data, manage and provision storage, or back up data, those boxes are no help and require IT to spend further on additional systems. Those problems are compounded as the amount of hardware and the number of offices grows, adding an additional layer of complexity that should not exist.

    For example, enterprises using on-premises hardware often spend boatloads of money on complicated WAN acceleration schemes to synchronize data among different offices, which would not be necessary in a service-oriented storage model.

    In an enterprise Storage-as-a-Service model, hardware (usually in the form of a cloud gateway that acts as a storage controller) is rented at minimal cost. Customers can receive upgrades and refreshes on demand, rather than being locked into a fixed three or five year purchase cycle. The backend of this system is not iron, but rather commodity cloud storage from leading providers like Amazon S3 and Microsoft Azure, with additional intelligence from the storage services vendor.

    In this model, cloud storage is used in much the same way that hard drives are in traditional storage. The cloud is an enabling technology that can power a host of services including centralized storage management, backup and disaster recovery, file locking, and scalability. Best of all, using the cloud as a backend eliminates any single point of failure in the storage system.
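
    A minimal sketch of that gateway idea, assuming Amazon S3 as the backend via boto3: writes land on a local cache for fast access and are then replicated to object storage. The bucket name and paths are placeholders, and a production gateway adds caching policy, snapshots, encryption, and global file locking on top of this basic flow.

    ```python
    # Minimal write-through "cloud gateway" sketch: cache locally, replicate to
    # commodity object storage (here Amazon S3 via boto3). Bucket and paths are
    # placeholders; real gateways add caching policy, snapshots, encryption, and
    # global file locking on top of this basic flow.
    import shutil
    from pathlib import Path

    import boto3

    CACHE_DIR = Path("/var/cache/gateway")   # local cache on the on-premises appliance
    BUCKET = "example-enterprise-files"      # hypothetical bucket name

    s3 = boto3.client("s3")

    def write_file(source: Path, key: str) -> None:
        """Keep a fast local copy, then push a durable copy to the cloud backend."""
        cached = CACHE_DIR / key
        cached.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, cached)              # fast local copy for nearby users
        s3.upload_file(str(cached), BUCKET, key)  # durable copy in object storage
    ```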

    Breaking free from the shackles

    For CIOs to gain the same functionality with a traditional storage setup, they need to pay extra for a management layer, for backup/DR software, and, of course, for more storage to handle the backups. It’s a system designed to benefit the storage hardware vendors, not the businesses consuming the storage. It’s time for CIOs to look to vendors who are building systems that meet the needs of today’s enterprises, rather than vendors serving their own need to sell more legacy, on-premises storage hardware.

    The transformation that has happened in IT over the past few years is a services revolution. Enterprise IT has freed itself from the shackles of ownership for a variety of technologies, and it’s time it did the same with storage hardware. Simply put, it never needs to own it again.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:00p
    Report: Google Pumps Up Taiwan Data Center Spend

    Google is planning to invest $100 million to $200 million in expanding its Taiwan data center, according to local press reports that cited a story by Taiwan’s Economic Daily News.

    This will be the third phase of construction at the Google data center in the Changhua Coastal Industrial Park. It finished Phase I last year, and Phase II is on its way to completion by the end of this year. The latest phase is expected to come online in 2015.

    Google has invested about $600 million in the facility outside of the third phase.

    Set on 15 hectares of land, the data center came online in 2013. It was one of three facilities the giant announced in September 2011 to boost its infrastructure in the Asia Pacific.

    Google continues to invest record amounts in its data center infrastructure. The company says most of its capital investment goes to IT infrastructure, including data centers, servers, and networking equipment.

    Google’s data centers are the backbone for everything the company offers, and the amount of money it spends on these facilities has been growing continuously.

    In 2011 Google spent $890 million on data centers. That number jumped to $2.5 billion in 2012.

    In 2013, the company invested $1.2 billion in its server farms in the first three months of the year alone, followed by $1.6 billion in the second quarter.

    Google also recently disclosed it was planning a data center in the Netherlands.

    Taiwan was where Google used a thermal storage system for the first time. These systems reduce costs by allowing companies to run air conditioning systems at night, when power rates are cheaper, and rely on stored cooling capacity during the day, when the rates go up.

    Thermal storage systems typically use ice or liquid coolant that can be chilled and then used in heat exchange systems.
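
    As a back-of-the-envelope illustration of why shifting chiller load to off-peak hours pays off, the sketch below compares an all-daytime cooling bill with a night-charged thermal storage scheme. All rates, loads, and the round-trip loss figure are hypothetical, not Google’s numbers.

    ```python
    # Back-of-the-envelope savings from shifting cooling load to off-peak hours.
    # All rates, loads, and the storage round-trip loss below are hypothetical.

    cooling_load_kwh_per_day = 24_000   # electricity the chillers would draw daily
    peak_rate = 0.15                    # $/kWh during the day
    off_peak_rate = 0.07                # $/kWh at night
    round_trip_loss = 0.10              # extra energy lost charging/discharging storage

    daytime_cost = cooling_load_kwh_per_day * peak_rate
    shifted_cost = cooling_load_kwh_per_day * (1 + round_trip_loss) * off_peak_rate

    print(f"Daytime chilling:      ${daytime_cost:,.0f}/day")
    print(f"Night-charged storage: ${shifted_cost:,.0f}/day")
    print(f"Approximate saving:    ${daytime_cost - shifted_cost:,.0f}/day")
    ```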

    5:30p
    Tyan First Out With Commercial OpenPOWER Reference System

    Tyan, the server brand of Taiwanese electronics manufacturing giant Mitac, rolled out the first OpenPOWER customer reference system, available at the end of the month.

    Led by IBM, OpenPOWER is a potentially disruptive development alliance. It allows other companies to license the intellectual property for IBM’s POWER platform, making POWER hardware and software available to open development. The hope is to enable unprecedented customization and new styles of server hardware for a variety of computing workloads. Tyan, considered an original design manufacturer, was one of IBM’s initial partners in the consortium and today became the first out of the gate with a commercial customer reference system based on the architecture.

    Positioned as an alternative to the server chips produced by vendors in the ARM ecosystem, OpenPOWER now has more than 60 members and continues to gain momentum.

    Tyan’s new reference system, called the TYAN GN70-BP010, is based on IBM’s POWER8 architecture. It allows end users to deploy software based on the OpenPOWER architecture tailored to their requirements.

    The 2U Palmetto system contains:

    • 1 IBM POWER8 “Turismo” SCM processor
    • 4 240-pin R-DDR3 1600/1333 MHz ECC DIMM slots
    • 8 2.5”/3.5” hot-swap HDD bays, plus support for multiple PCI-E Gen3 slots
    • 4 SATA III 6.0 Gb/s ports
    • L10 system configuration: 1 CPU with heatsink, 4 4GB DDR3 DIMMs, and 1 500GB 3.5” HDD

    “Open resources, management flexibility, and hardware customization are becoming more important to IT experts across various industries,” Tyan vice president Albert Mu said in a statement. “As the first commercialized customer reference system provided from an official member from the OpenPOWER ecosystem, the Tyan GN70-BP010 is based on the POWER 8 Architecture and follows the OpenPOWER Foundation’s design concept.”

    IBM recently positioned a new system incorporating OpenPOWER to better address Big Data challenges.

    6:00p
    ViaWest Launches PCI-Compliant Cloud for Credit Card Data

    ViaWest has launched a purpose-built, audit-ready PCI-compliant cloud service. PCI compliance means the cloud can be used for accepting, storing, processing or transmitting credit card data.

    ViaWest exemplifies the trend of data center providers adding cloud services to colocation and managed services but tailoring them to specific verticals and use cases instead of competing with so-called “commodity clouds,” offered by the likes of Amazon Web Services, Microsoft Azure or Google Cloud Platform.

    In the case of ViaWest’s KINECTed PCI cloud, the company is catering to the security- and compliance-minded. The provider also has a HIPAA-audit-ready version of KINECTed cloud service for the healthcare sector.

    The PCI-compliant offering is a virtual private cloud that uses a dedicated, secure network infrastructure with security baked in by default. It comes with a 99.9 percent availability Service Level Agreement (SLA) on compute resources.
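
    For context, a 99.9 percent availability SLA translates into a concrete downtime allowance; the quick calculation below works it out (the SLA percentage comes from the announcement, the rest is plain arithmetic).

    ```python
    # Downtime allowed under a 99.9 percent availability SLA.
    availability = 0.999
    hours_per_year = 365 * 24

    allowed_downtime_hours = (1 - availability) * hours_per_year   # ~8.8 hours/year
    allowed_minutes_per_month = allowed_downtime_hours * 60 / 12   # ~44 minutes/month

    print(f"Allowed downtime: ~{allowed_downtime_hours:.1f} hours/year "
          f"(~{allowed_minutes_per_month:.0f} minutes/month)")
    ```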

    The company said the PCI cloud extends beyond the needs of the financial and e-commerce sectors; it is designed for businesses of all sizes that deal with credit card data.

    Amazon Web Services is PCI-DSS Level 1 compliant but it requires more in-house configuration. Other providers of PCI-compliant cloud hosting services include Rackspace, FireHost and Online Tech, to name a few.

    Data breaches continue to be an issue. A 2013 Ponemon Institute study found that 35 percent of data breaches identified were the result of company negligence.

    “We’ve designed our PCI Compliant Cloud solution from the ground up to satisfy the needs of customers who want to protect themselves against PCI DSS non-compliance,” ViaWest CTO Jason Carolan said.

    Another trend exemplified by ViaWest is that of large telcos buying data center providers. Canada’s Shaw Communications acquired ViaWest for $1.2 billion in July. It wasn’t the first Canadian telco to make such a deal: Cogeco acquired Peer 1 Hosting in 2012.

    6:30p
    IBM Brings Watson APIs to Bluemix PaaS

    IBM has made Watson services available on its Bluemix Platform-as-a-Service. The company also announced the opening of the Watson World HQ at 51 Astor Place in Manhattan, the new headquarters for the business it is building around its cognitive computing system.

    The tie-in with IBM Bluemix is significant, as it allows any developer to tap into cognitive computing APIs and content to build apps that can learn natural language. It’s a chance to turn applications into intelligent mini Watsons. The company launched a freemium version of Watson services in September.
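
    IBM exposes the Bluemix Watson services as REST APIs behind per-service credentials. The sketch below only illustrates the shape of such a call; the endpoint URL, credentials, and payload fields are placeholders rather than the actual Watson API.

    ```python
    # Shape-of-the-call illustration for a REST-style cognitive service on a PaaS.
    # The endpoint URL, credentials, and payload fields are placeholders, not the
    # actual Watson API.
    import requests

    SERVICE_URL = "https://example-watson-service.example.com/api/v1/ask"  # placeholder
    USERNAME = "service-username"   # placeholder credential
    PASSWORD = "service-password"   # placeholder credential

    response = requests.post(
        SERVICE_URL,
        auth=(USERNAME, PASSWORD),
        json={"question": "Which data center metric tracks space utilization?"},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())
    ```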

    In January, IBM announced a $1 billion investment in the Watson business.

    The nickname “Watson” comes from IBM founder Thomas J. Watson. The technology is widely known outside of the tech world for its 2011 Jeopardy appearance, where it mopped the floor with human contestants. The appearance gave Watson valuable exposure prior to a wider commercial roll-out. IBM has done other publicity stunts for its cognitive computing system, including a Watson-powered food cart at the SXSW conference and at one of its own conferences in Las Vegas.

    Location of the new headquarters is significant too. New York City has a large high-tech talent pool and an active venture capital industry.

    IBM said it was also opening five more “Watson Client Experience Centers,” bringing the total to six worldwide. The new centers will work with IBM Research and Design teams to help clients adopt Watson.

    “Watson is bringing forward a new era of computing, enabling organizations around the globe to launch new businesses, redefine markets, and transform industries,” said Mike Rhodin, senior vice president of the IBM Watson Group. “Watson is fueling a new market and ecosystem of clients, partners, developers, venture capitalists, universities, and students. The next great innovations will come from people who are able to make connections that others don’t see and Watson is making possible.”

    7:30p
    Hackers Exploit Shellshock Vulnerability to Gain Access to Yahoo Servers


    This article originally appeared at The WHIR

    Romanian hackers have exploited the Shellshock vulnerability to gain access to Yahoo servers, according to Jonathan Hall of security consulting company Future South Technologies. Hall announced the hack of Yahoo, as well as Lycos and WinZip, on the Future South blog after informing the companies and the FBI.
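
    For background, Shellshock (CVE-2014-6271) is a flaw in how GNU Bash parses function definitions passed through environment variables, which lets an attacker append commands that Bash runs on startup. The widely circulated local check is reproduced below, wrapped in Python so it can be scripted; it only probes the bash binary on the machine running it.

    ```python
    # Canonical local check for Shellshock (CVE-2014-6271): a vulnerable bash runs
    # the command smuggled in after the function definition in the environment
    # variable, so "VULNERABLE" shows up in the output.
    import os
    import subprocess

    env = dict(os.environ, x="() { :;}; echo VULNERABLE")
    result = subprocess.run(
        ["bash", "-c", "echo shellshock probe"],
        env=env,
        capture_output=True,
        text=True,
    )
    if "VULNERABLE" in result.stdout:
        print("bash is vulnerable to CVE-2014-6271")
    else:
        print("bash appears patched (or the test did not trigger)")
    ```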

    According to a series of blog posts, Hall discovered the vulnerabilities on Saturday night, and watched overnight as the exploit expanded. Hall claims he began attempting to alert Yahoo before 5 am CST, but that it, like the other two companies, was slow to respond.

    WinZip confirmed to Hall that it had been hacked, while Lycos initially denied that it had been breached and subsequently admitted the need for further testing. Yahoo confirmed that it had been breached midday on Sunday, and on Monday Yahoo CISO Alex Stamos posted a response to the incident on Hacker News.

    “Earlier today, we reported that we isolated a handful of servers that were detected to have been impacted by a security flaw. After investigating the situation fully, it turns out that the servers were in fact not affected by Shellshock,” Stamos said. “Regardless of the cause our course of action remained the same: to isolate the servers at risk and protect our users’ data. The affected API servers are used to provide live game streaming data to our Sports front-end and do not store user data. At this time we have found no evidence that the attackers compromised any other machines or that any user data was affected. This flaw was specific to a small number of machines and has been fixed, and we have added this pattern to our CI/CD code scanners to catch future issues.”

    Stamos also responded to allegations by Hall that Yahoo had been slow to react to the breach, saying that the affected systems had been isolated and the investigation begun within an hour of the email Hall addressed to CEO Marissa Mayer.

    Hall in turn responded to Stamos, at first accusing him of giving misleading information, and then trashing Stamos’ explanation for how the breach really occurred.

    “I’m not saying for a fact that more than what they are saying was compromised was,” said Hall. “But what I am saying for a fact is that there’s no way in hell they can be certain when they can’t even honestly provide a technical explanation of how the breach occurred in the first place.”

    The Independent notes Yahoo’s reputation for underappreciating bug bounty hunters. Yahoo gave a $25 voucher to an ethical hacker who disclosed three bugs in Yahoo servers last year.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/hackers-exploit-shellshock-vulnerability-gain-access-yahoo-servers

    7:30p
    VMware Brings Latest vCloud Director 5.6 Release to Market

    VMware vCloud Director 5.6 has entered general availability. It is the first release designed exclusively for VMware vCloud Air Network service providers, giving them access to new enhancements used in vCloud Air.

    vCloud Air was formerly known as vCloud Hybrid Service. The rebranding came along with new object storage and enhanced mobile application development capabilities, and the vCloud Director update adds these new capabilities.

    vCloud Director orchestrates the provisioning of software-defined data center services. The software-defined data center is the major initiative at VMware, a company positioning itself as a provider of virtualized everything.

    The new features are targeted at service provider customers. vCloud Director enables vCloud Air Network service providers and their customers to move on-premises workloads to the cloud.

    The new features span storage tiering, security for software-defined networking, enhanced monitoring, updated software development kits, and a user interface tied into an independent software vendor ecosystem.

    New in vCloud Director 5.6:

    VM Disk Level Storage Profiles allow a single VM to access different tiers of storage, such as storage area network, network-attached storage, and local storage, to help balance storage cost against storage performance. VMware vCloud Director 5.6 also supports VMware Virtual SAN.

    VMware NSX support has been added in addition to current VMware vCloud networking and security (vCNS) support. This provides customers with easy to use security for software-defined networking using the NSX network virtualization platform.

    The VM Monitoring Service provides visibility into current and historical VM performance metrics at an individual tenant level. Tenants can use this new capability to troubleshoot application performance issues, auto-scale their applications and perform capacity planning.

    Updated SDKs for the vCloud API include a new set of Java, PHP, and .NET SDKs with documentation and samples.

    The Independent Software Vendor Ecosystem User Interface gives ISVs a platform that provides access to underlying vCloud Director capabilities through APIs. These APIs enable ISVs to build multiple, flexible UI-based cloud services, such as VM monitoring, catalogs, and vApps, on top of vCloud Director 5.6 that service providers can then leverage. All new functionality within vCloud Director 5.6 is available only through APIs.

    “With VMware vCloud Director 5.6, we are providing the tools to VMware vCloud Air Network service providers to continue to offer customers flexibility and choice of cloud platforms on a local basis,” said Geoff Waters, vice president of service provider channel, Cloud Services Business Unit, VMware.

    8:00p
    Phoenix NAP Partners with Veeam for Cloud Backup Service


    This article originally appeared at The WHIR

    Veeam Software and Phoenix NAP have teamed up to release Phoenix NAP Cloud Backup for Veeam, a cloud backup service built on the upcoming Veeam Availability Suite v8, the companies announced Tuesday. The product allows customers to avoid the cost of deploying their own off-premises infrastructure.

    Phoenix NAP was selected by Veeam to support Veeam Cloud Connect, a new capability of Veeam Availability Suite v8, which will be released to general availability later this quarter. Phoenix NAP will leverage Veeam Cloud Connect to offer customers an integrated, secure, and efficient way to move Veeam backups to a Phoenix NAP-managed site.

    “Veeam has a reputation for innovation, and Veeam Cloud Connect is a good example of how that reputation is applied to meet market demands,” said William Bell, VP of product development for Phoenix NAP. “In today’s climate, it’s imperative for businesses to have a back-up strategy and service in place. Through this new solution and our partnership with Veeam, our customers can safely and securely back-up their mission critical data to our off-site cloud environments, and do so quickly. We are proud to be included as one of Veeam’s launch partners, and look forward to offering this new service to our customers.”

    Phoenix NAP Cloud Backup for Veeam is available out of Phoenix NAP’s Phoenix, Ashburn, or Amsterdam data centers, and the company says it can be provisioned in four hours or less.

    Veeam launched its cloud backup management suite in June 2013, and a month later Phoenix NAP expanded its partnerships with the launch of a new Channel Division.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/phoenix-nap-partners-veeam-cloud-backup-service

    10:51p
    Technology Development First Priority for AMD’s New CEO Lisa Su

    Investing in differentiation through intellectual property and product design capabilities will be the highest priority for AMD’s newly appointed CEO Lisa Su.

    The company announced Su’s promotion from chief operating officer Wednesday afternoon. She is replacing Rory Read, who has been in the chipmaker’s CEO role since 2011. Read will remain in an advisory role through the transition process until the end of 2014, the company said.

    The stock market did not take the news well. AMD shares were down about 6.7 percent in after-hours trading following the announcement.

    Read reassured analysts on a conference call Wednesday that the change in leadership had been planned in advance by him and the board of directors (he is giving up his board seat as well). “We hired Lisa, we developed Lisa, and we saw her grow to be positioned and ready to take it to the next level,” he said.

    Su, 44, joined the company in 2012 to lead its product strategy, product definition and business plans. She was promoted to COO this past June and tasked with implementing a major restructuring, consolidating AMD’s various units into two business groups: Computing and Graphics Group and Enterprise, Embedded and Semi-Custom Business Group.

    She and Read pitched this week’s leadership change as the next step in AMD’s transformation, which will be focused on developing more technology to differentiate. “It really is about delivering the next step of leadership IP,” Read said, pointing to Su’s extensive experience as a technologist and her qualifications to deliver the next generation of IP for AMD processors.

    “She’s a semiconductor professional,” he said. “She knows this space.”

    Su did not provide any specific plans for the future technology roadmap, but said that much of the development will be focused on leveraging the company’s existing IP. The number of interconnected devices in the world is growing rapidly, and they will continue to be powered by x86 processors, ARM processors, and GPUs. AMD has extensive IP in the x86 and GPU space and has been hard at work on ARM technology over the past several years.

    Su said it was critical for the company to grow in enterprise, embedded and semicustom processor markets. “It is very much about choosing the right product investments,” she said.

    ARM has been one of the heaviest investment areas for AMD. The company was one of the first to market with 64-bit ARM Server-on-Chip devices.

    Su does not expect the low-power chip architecture, licensed from UK’s ARM Holdings, to displace x86, however.

    “I do believe … x86 and ARM will coexist,” she said. “ARM products are quite important to us, as well as the x86 roadmap.”

    While AMD has been one of the leaders in ARM SoCs for servers, it has serious competition in the space. Earlier this month HP announced availability of its first server powered by 64-bit ARM chips, choosing X-Gene SoCs from AMD competitor Applied Micro for the system.

    The man who for a long time was the public face for AMD processors based on the ARM architecture, Andrew Feldman, left the company earlier this year, around the same time Su was appointed as COO.

    Feldman came on board after AMD acquired his microserver startup SeaMicro in 2012. AMD has not discontinued the SeaMicro server line and has been shipping the systems with its own processors as well as competitor Intel’s.

    Su plans to forge ahead with the SeaMicro microserver business, although she does not expect it to bring significant results until three to five years from now. “This is a new market, and it is a market that we believe will be very important,” she said.

