Data Center Knowledge | News and analysis for the data center industry
 

Friday, September 13th, 2013

    11:30a
    Data Center Jobs: CBRE

    At the Data Center Jobs Board, we have a new job listing from CBRE, which is seeking a Project Manager – Critical Environment Experience Required in Redmond, Washington.

    The Project Manager is responsible for overseeing all phases of project management, including design, construction, occupancy, quality control, staffing, and budget management. The role also involves interfacing with clients to define construction project requirements, establishing project work plans and deadlines, creating persuasive presentations that meet the project’s objectives, tracking progress against goals, objectives, timelines, and budgets, and generating status reports. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    12:29p
    CyrusOne Lands Fortune 100 Retailer in Dallas

    Here’s an overhead view of the raised floor area inside the 47,000 square foot first phase of the CyrusOne data center in Carrollton, Texas. (Photo: Rich Miller)

    A Fortune 100 global retailer is the newest customer at the massive new CyrusOne data center in the Dallas market, the company announced this week. The colocation provider said it had also lined up tax incentives with local officials that may make the facility more attractive to enterprise customers.

    The 680,000 square foot facility in Carrollton, Texas, is more than 1,428 feet long and 480 feet across, and could eventually support up to 60 megawatts of critical IT load. The new customer said it picked CyrusOne for its ability to rapidly deploy data center space and its reputation for customer service. The retailer will have access to the CyrusOne National Internet Exchange (National IX), which links Carrollton with a dozen CyrusOne facilities in five other metropolitan markets.

    “Our Carrollton data center, which opened in August 2012, continues to be very attractive to customers needing colocation space in the Dallas/Fort Worth metro area,” said Gary Wojtaszek, president and chief executive officer of CyrusOne. “With our innovative Massive Modular design engineering approach, we’re able to offer deployment times that data-center-in-a-box solutions simply can’t match. The efficiency and speed with which we can commission large data facilities enables our customers to deploy quickly and not worry about future capacity constraints.”

    CyrusOne Customers to Benefit from Tax Incentives

    CyrusOne also disclosed that recently approved data center legislation is expected to benefit customers deploying new space in Carrollton. The tax incentive eliminated the state sales and use tax of 6.25 percent on data center utilities, infrastructure, and network purchases.

    The Carrollton-specific incentive rebates up to another 0.5 percent in sales and use tax on the same customer purchases, or up to 50 percent of the taxes the city would normally receive. To be eligible for the incentive, customers must create at least 20 qualifying jobs in Carrollton, occupy at least 100,000 square feet of space to process, store, and distribute data, and agree to a significant investment in CyrusOne’s Carrollton facility over a five-year period.

    “This tax incentive will translate into as much as $31 million in savings over 10 years for a large data center deployment,” said Wojtaszek. “These are savings that enable the provider to hire more highly skilled technology positions in the Carrollton data center. This all further enhances the ecosystem of technology companies that have already found this to be a highly desirable location to do business.”

    12:30p
    Microsoft Uses SOC 2 To Demonstrate CSA CCM Compliance

    Chris Schellman is the President and Founder of BrightLine, which is accredited as a CPA Firm, PCI QSA Company and ISO 27001 Registrar. He is a licensed CPA, CISSP, PCI QSA and ISO Lead Auditor, and has contributed to nearly 1,000 SOC examinations.

    CHRIS SCHELLMAN
    BrightLine

    SOC 2 reporting is still in its infancy. However, since its introduction in 2011, BrightLine has been engaged to perform hundreds of SOC 2 projects. In fact, it is very possible that BrightLine is not only a major pioneer in this arena, but also the world’s leading provider.

    As such, we have a deep interest in the developmental path of SOC 2 reporting. A major milestone in that development occurred recently when Microsoft claimed to be the first cloud provider to complete an SOC 2 examination for Windows Azure that integrated the Cloud Security Alliance’s (CSA) Cloud Controls Matrix (CCM). (See Microsoft’s blog post.)

    Tips for Cloud Service Providers

    It goes without saying that other cloud service providers (CSPs) are now considering following Microsoft’s lead. In anticipation of this, I would suggest that CSPs consider the following points before making any decisions:

    1. Unless Microsoft takes the unlikely step of publicly posting its SOC 2 report, relatively few people will ever see the report. Professional guidance gave Microsoft considerable leeway in defining the scope of the examination, including the additional CCM criteria. So without actually reviewing the report, it is impossible to know how Microsoft defined CCM compliance for itself. It should not be assumed that such claims mean full compliance with the CCM; it could be the case that Microsoft only included those CCM criteria it deemed applicable to its services. In other words, it is simply unknown to anyone who is not privy to the actual report. CSPs should take note not only of this issue, but also of the fact that they would be afforded the same leeway if they choose to undergo an SOC 2 examination that integrates the CCM.

    2. Any cloud service provider that wants an SOC 2 examination should become acquainted with the AICPA Trust Services Principles and select the combination of the five principles it would like to be assessed against (i.e., Security, Availability, Processing Integrity, Confidentiality, and Privacy). It is not possible to obtain an SOC 2 examination that integrates the CCM without including at least one of these five principles in the scope of the examination. The criteria for compliance with any given principle are straightforward. Obviously, it would be a waste of both time and money to engage an auditor to attest to compliance with Trust Services criteria that the CSP could have self-assessed as non-compliant prior to specifying the scope.

    3. The Trust Services Principles are highly redundant, somewhat convoluted, and showing their age. For this reason, the AICPA has convened a committee to revamp the guidance. The exposure draft for the new version was released on July 30, 2013, with responses due by the end of September. Preliminary analysis of the exposure draft indicates significant improvements. As such, CSPs may want to consider delaying any new SOC 2 examinations until the next version of the Trust Services Principles is effective.

    4. In situations where CSPs are solely concerned with third party attestation regarding CCM compliance, an AT 101 report should be considered. There is very little difference between the two reports and it would save considerable time and effort over performing an SOC 2 with CCM integrated into the assessment. In fact, all SOC reports are AT 101 reports, with each type of SOC report simply having a distinct purpose. When none of the branded SOC reports fit the bill, service organizations often prefer the more “generic” AT 101 examination.

    Finally, while the Microsoft announcement may be a positive development for both CSPs and SOC 2 providers, it is also perhaps the clearest demonstration yet of the inadequacy of SOC 2 for technology providers. In other words, SOC 2 was unable to meet the reporting needs of Microsoft’s customers and prospects without massive supplementation (i.e., the CCM). Suffice it to say that there is a major “doughnut hole” in the SOC reporting structure that deserves serious consideration by the powers that be.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    12:30p
    SAP Advances Big Data With KXEN Acquisition

    SAP AG announced plans to acquire KXEN, a leading provider of predictive analytics technology for line-of-business users and analysts. The combination of KXEN and enterprise business intelligence capabilities from SAP, along with the SAP HANA platform, is intended to help companies harness big data, engage users across the enterprise, and execute ahead of their competitors to gain advantages they never thought possible.

    “With increased demand for actionable insights from ever-growing volumes of data, broader access to predictive analytics is key,” said Henry Morris, senior vice president for Worldwide Software and Services Research, IDC. “KXEN supports this objective by moving predictive analytics into the cloud and inside of the enterprise applications most popular with end users.”

    Complementing advanced analytics from SAP, KXEN’s predictive technology can be used to enhance the value of core SAP applications for managing operations, customer relationships, supply chains, risk and fraud. Additionally, the company plans to incorporate KXEN technology into cloud and on-premise SAP applications built on SAP HANA. The acquisition is expected to allow SAP to introduce new predictive capabilities to its portfolio of solutions for more than 25 industries, particularly data-intensive vertical industries such as telecommunications, retail, consumer products, manufacturing and financial services.

    “The compelling combination of KXEN with market-leading analytics and business intelligence solutions from SAP intends to deliver the capability organizations need to innovate for growth in high-volume data environments,” said Michael Reh, executive vice president, Business Information Technology, Products and Innovation, SAP. “Just as SAP is revolutionizing visualization, KXEN will allow us to bring predictive analytics to more business users, enabling easier-to-adopt solutions and the delivery of greater value via SAP solutions.”

    “We are excited to integrate our leading predictive analytics capabilities with market-leading enterprise BI and agile visualization platforms from SAP, which we expect will result in unmatched analytics breadth and depth in the marketplace,” said John Ball, CEO, KXEN. “Our customers were increasingly asking us to focus on predictive applications that could be easily consumed by the business, both in the cloud and on premise. The real-time big data capabilities of SAP HANA make it the ideal platform to deliver on this vision.”

    SAP to Resell Hadoop Solutions

    SAP also announced the expansion of its big data offerings around three major pillars: the SAP HANA platform, applications enabled for big data, and data science. SAP has entered reseller agreements with Intel and Hortonworks to resell and support the Intel Distribution for Apache Hadoop and the Hortonworks Data Platform with SAP HANA, expanding existing partnerships with both vendors.

    “The SAP HANA platform and its integration with Hadoop have married together real-time insights with extreme storage, solving one of the biggest problems with current big data solutions, a fragmented landscape of solutions that are difficult to connect together,” said Steve Lucas, president, Platform Solutions, SAP. “Our expanded big data strategy provides customers a single, integrated approach to combine enterprise data and additional information to employees and consumers, as well as improve business processes such as customer engagement, preventative maintenance and responsive supply chain.”

    Integration with Apache Hadoop is part of SAP’s overall strategy to provide valuable insights across a continuum of data, from the efficient storage of massive amounts of cold data, to petabyte-level storage of warm data, to real-time and streaming data analysis.

    “By reselling the Hortonworks Data Platform, SAP can assure their customers they are deploying an SAP HANA and Hadoop architecture fully supported by SAP, while giving them the benefits of a fully open source Hadoop distribution,” said Shaun Connolly, vice president, Corporate Strategy, Hortonworks. “Hortonworks is committed to contributing all of its innovations back to the Apache project, ensuring that 100 percent of their Hadoop distribution is open source.”

    3:12p
    Extreme Networks Acquires Enterasys For $180 Million

    Extreme Networks (EXTR) announced that it has entered into a definitive agreement to acquire all outstanding stock of Enterasys Networks in an all-cash transaction valued at $180 million. The combined company will be committed to continuing to support the product roadmaps of both companies going forward, to protect the investments of current customers and avoid any disruption to their businesses. Within two years, the Extreme Networks network operating system, ExtremeXOS, will incorporate additional features that are available in the Enterasys network operating systems and will fully support both hardware platforms.

    “The combination of Extreme Networks and Enterasys is significant in that it brings together two companies with distinct strengths addressing the key areas of the network, from unified wired and wireless edge, to the enterprise core, to the data center and cloud,” said Zeus Kerravala, principal analyst and president of ZK Research.  “With an open software approach, the companies can drive product innovations and customers will benefit from their increased resources and larger scale.”

    Privately held Enterasys provides wired and wireless network infrastructure and security solutions, and holds a rich portfolio of patents. Recently the Philadelphia Eagles partnered with Enterasys to equip their stadium with the latest high-density Wi-Fi technology, the Enterasys IdentiFi solution. Combined with the OneFabric Control Center management system, it provides centralized visibility and control over the network, giving the Eagles valuable intelligence to more easily roll out new applications and services to improve the overall in-game experience.

    “Since its first release in 2004, ExtremeXOS® has been developed with a Linux abstraction layer that makes it relatively easy to extend ExtremeXOS to support other vendors’ switching hardware,” said Chuck Berger, President and CEO of Extreme Networks. “Combining Enterasys technologies and products, including their Coreflow modular switches, IdentiFi wireless and the NetSight system management application, will extend and complement our product offering, which we expect will provide significant added value to the current customers of both Extreme and Enterasys.”

    3:30p
    Cisco Launches Next Generation Network Processor


    Cisco (CSCO) has launched an advanced network processor, the nPower, purpose-built for software-defined networking (SDN) and for powering the Internet of Things (or Internet of Everything, as Cisco calls it). In its first generation, the nPower X1 is capable of scaling to multi-terabit performance levels while handling trillions of transactions. Incorporating over 50 patents, the new chip enables on-the-fly reprogramming for new levels of service agility and simplified network operation.

    The nPower X1 features true 400 Gigabits-per-second throughput from a single chip, to enable multi-terabit network performance. All packet processing, traffic management and input/output functions are integrated on a single nPower X1 and operate at high performance and scale. Its programmable control is designed to seamlessly handle hundreds of millions of unique transactions per second. The X1 houses 4 billion transistors and enables solutions with eight times the throughput and one quarter the power per bit compared to previous Cisco network processors.

    In a blog post about the new network processor, Cisco Vice President of Engineering Nikhil Jayaram says Cisco’s market success was “driven in large part by our ability to offer industry-leading solutions with the best combination of price, performance, and capabilities. This in turn was fueled by Cisco’s use of internally developed network silicon using advanced ASIC development models ahead of competitors who continued to rely on general purpose CPUs or FPGAs to power their products.”

    Cisco will introduce networking innovations that will feature the nPower X1 on September 24th.

    5:15p
    New Moves in Multi-Player Online Game Hosting

    Brought to you by The WHIR.

    The social aspect of networked games started with LAN parties, where friends would gather in person with their computers or consoles to create a local area network and play games together. For years, this was the best way to get the low network latency needed to play demanding games, before growth in Internet capabilities made the same experience available anywhere and anytime on the web.

    “You didn’t need that physical proximity,” says Wu-chang Feng, a gamer and associate professor in Portland State University’s department of computer science. Yet Feng and his students (some of whom work in the gaming industry), along with many others, are interested in creating a more immersive end-user experience, one in which the networks and servers fade into the background and only the game exists.

    And there are, indeed, exciting developments happening in the industry addressing this challenge from a hosting perspective.

    Proximity to servers is paramount; it always has been

    When given the choice of a game server, players are likely to choose a fast server that’s also very close to their location. In multi-player games, latency is the key to a smooth gameplay experience.

    Research from Mark Claypool at Worcester Polytechnic Institute found that latency requirements depend on the type of game being provided (PDF). Generally, latency matters more to the user experience in so-called first-person avatar games (e.g., FPS and racing games) than in third-person avatar games (e.g., role-playing, sports) and omnipresent games (e.g., real-time strategy). The research found that players of first-person avatar games find latency significantly disruptive once it reaches about 100 milliseconds, while third-person avatar games can tolerate as much as 500 ms, and omnipresent games around the 1 second mark.
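
    To make those rough thresholds concrete, here is a minimal Python sketch. Only the approximate millisecond figures come from the research described above; the genre labels, lookup table, and helper function are illustrative assumptions, not part of Claypool's work.

        # Rough latency tolerances per game genre, from the figures cited above.
        # The table and helper are illustrative, not taken from the paper.
        LATENCY_TOLERANCE_MS = {
            "first_person": 100,    # FPS, racing
            "third_person": 500,    # role-playing, sports
            "omnipresent": 1000,    # real-time strategy
        }

        def latency_acceptable(genre, measured_ms):
            """Return True if the measured latency is within the genre's rough tolerance."""
            return measured_ms <= LATENCY_TOLERANCE_MS[genre]

        print(latency_acceptable("first_person", 80))    # True
        print(latency_acceptable("first_person", 250))   # False
        print(latency_acceptable("omnipresent", 250))    # True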

    Claypool writes, “While there are other measures of performance that may affect online gameplay, such as packet loss and available bandwidth, player performance is typically dominated by network latency (also called ‘lag’ by game players).”

    Although latency does not change the rules of the game, it affects gameplay in real, noticeable ways, effectively skewing the odds in favor of those with lower latency. A player who is further away, for instance, may be at a disadvantage because their latency acts as a handicap.

    However, cloud computing can help shorten the distance between the client and the server, Feng says.

    “When you decide to host a game and you choose something like Amazon Web Services, that’s basically a CDN for computing – they have clusters all over the world and they can push out the content towards where it’s being used,” he says.
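
    One simple way a host might act on that proximity is sketched below: probe a few candidate regions with a TCP connect and place the player on whichever answers fastest. This is an illustrative Python sketch with hypothetical hostnames, not any provider's actual placement logic.

        import socket
        import time

        # Hypothetical per-region endpoints; a real deployment would use its own hosts.
        CANDIDATES = {
            "us-east": ("us-east.example-game.net", 443),
            "us-west": ("us-west.example-game.net", 443),
            "eu-west": ("eu-west.example-game.net", 443),
        }

        def tcp_rtt_ms(host, port, timeout=2.0):
            """Measure one TCP connect round trip as a rough latency probe."""
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=timeout):
                pass
            return (time.monotonic() - start) * 1000.0

        def pick_closest(candidates):
            """Return the region with the lowest measured connect latency."""
            best_region, best_rtt = None, float("inf")
            for region, (host, port) in candidates.items():
                try:
                    rtt = tcp_rtt_ms(host, port)
                except OSError:
                    continue  # region unreachable; skip it
                if rtt < best_rtt:
                    best_region, best_rtt = region, rtt
            return best_region

        # print(pick_closest(CANDIDATES))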

    Game programmers have also used certain client-side methods to help trick gamers into thinking latency is lower than it is. According to North Carolina State University’s R. Michael Young (PDF), one of these tactics is called “dead reckoning.” Basically, dead reckoning means moving objects in the client’s field of view are tracked and their new positions are guessed based on their velocity, acceleration and location data from the last packet from the server. Another practice is client and server time-stamping that creates a simulation of the state of the game world, allowing client latency to be factored into gameplay. For instance, if a player sees a target in her crosshairs and shoots, it will register as a hit even if the latency means the target has moved. Developers that include practices like these in their games help deal with the reality of latency.
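
    As a rough illustration of the dead reckoning idea described above, the Python sketch below extrapolates a remote object's position from the velocity, acceleration, and position in its last server update. The class and field names are hypothetical, not taken from any particular engine.

        from dataclasses import dataclass

        @dataclass
        class EntityState:
            """Last state received from the server for one remote entity."""
            x: float
            y: float
            vx: float
            vy: float
            ax: float
            ay: float
            timestamp: float  # server time of the last update, in seconds

        def dead_reckon(state, now):
            """Extrapolate the entity's position, assuming constant acceleration
            since the last packet arrived."""
            dt = now - state.timestamp
            x = state.x + state.vx * dt + 0.5 * state.ax * dt * dt
            y = state.y + state.vy * dt + 0.5 * state.ay * dt * dt
            return x, y

        # Example: 80 ms have passed since the last update for a falling, rightward-moving object.
        last = EntityState(x=10.0, y=5.0, vx=3.0, vy=0.0, ax=0.0, ay=-9.8, timestamp=100.00)
        print(dead_reckon(last, now=100.08))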

    Game hosts aren’t neutral; they should provide added value

    Many players who pay to be server members expect hosts to provide new content such as mods and levels or maps to the game. But they also often expect more engagement and a community feel, where they are being heard, and they’re able to contribute to the community and even the game world by creating new content within the game itself.

    “The user generated stuff has really taken off, and I think stuff like Counter Strike, and then Second Life, and now with Disney Infinity,” Feng says.

    Disney’s recent effort, Disney Infinity, has been called by Mashable “Minecraft meets Skylanders meets Disney and Pixar characters.” Disney is hosting the online portion on its own private cloud. In terms of its online strategy, however, it is particularly interesting to note that Disney is giving up some control of the game environment to players, many of them children, who can make custom worlds and chat online with other players. But Disney’s policy is all about maintaining a safe environment, which includes parental controls, filters for offensive language and live staff monitoring game participants.

    Rather than allow its games to be hosted by anyone, a game developer is likely to prefer full control over their game hosting in order to provide a consistent game experience and follow certain community guidelines.

    Game worlds at global scale will come

    Traditionally, games that featured online worlds would often have to exist in multiple shards: areas or zones with a manageable number of users that could be handled by a particular server or server cluster. Another approach is to create multiple identical universes, each populated by a number of users that can be handled simultaneously.
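
    A toy Python sketch of that sharding idea follows: hash each player to a home shard and spill over to the emptiest shard when the home shard is full. The shard count, capacity, and names are made-up values for illustration only.

        SHARD_CAPACITY = 2000  # illustrative per-shard player cap

        class ShardRouter:
            def __init__(self, num_shards):
                self.populations = [0] * num_shards

            def assign(self, player_id):
                """Place the player on their hashed home shard if it has room,
                otherwise on the least populated shard."""
                home = hash(player_id) % len(self.populations)
                if self.populations[home] >= SHARD_CAPACITY:
                    home = min(range(len(self.populations)),
                               key=self.populations.__getitem__)
                self.populations[home] += 1
                return home

        router = ShardRouter(num_shards=8)
        print(router.assign("player-123"))  # index of the shard this player lands on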

    However, Bungie’s upcoming game, Destiny, is developing a server architecture designed to seamlessly provide a massively multiplayer experience for all end-users simultaneously, says Feng. “They’re building the server backend for this, and it’s one of these scaling things that hasn’t been done before.”

    On-demand games open up new opportunities

    The old model of multi-player online gaming involved powerful computers and plenty of local data crunching and graphics rendering. But on-demand gaming platforms promise to offload this onerous work from the device, providing rich gaming experiences on low-end computers, consoles and handheld devices. These platforms also eliminate the need to download specific game software or cache data.

    Essentially, these on-demand services run the game on a remote server, taking input from the player and returning a quick stream of rendered frames to the player’s device.
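
    The loop below is a purely illustrative Python sketch of that flow: consume input on the server, advance the game, render a frame, and stream it back, once per frame budget. Every object it calls (connection, game, renderer, encoder) is a hypothetical placeholder, not a real streaming API.

        import time

        TARGET_FPS = 30
        FRAME_BUDGET = 1.0 / TARGET_FPS

        def run_session(connection, game, renderer, encoder):
            """One client's session on a hypothetical cloud-gaming server."""
            while connection.is_open():
                start = time.monotonic()
                for event in connection.poll_input():      # controller/keyboard input from the client
                    game.apply_input(event)
                game.step(FRAME_BUDGET)                    # advance the simulation server-side
                frame = renderer.render(game.state)        # render on the server's GPU
                connection.send(encoder.encode(frame))     # ship the compressed frame to the device
                # sleep off whatever remains of the frame budget
                time.sleep(max(0.0, FRAME_BUDGET - (time.monotonic() - start)))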

    While on-demand game providers such as Gaikai and OnLive are pursuing this strategy of providing a variety of well-known games, there are definite challenges in meeting the end-users’ latency requirements needed for immersive experiences. A solution proposed in a recent paper on cloud latency from Canadian and French researchers (PDF) is to use content delivery networks to essentially get closer to end-users. But these CDN edge servers need processing power and GPUs.

    On-demand gaming, however, could also be a way to deliver games in emerging markets. Cloud Union, which provides gaming through China’s Telecom/Unicom IPTV network, is using NVIDIA GRID technology to deliver high-density game streams to subscribers.

    Cloud Union CEO Danny Deng says in a statement, “In China we don’t have game consoles. And therefore we see a large opportunity for cloud gaming in China.” The idea behind Cloud Union and other on-demand providers is to make games as easily accessible as movies and music.

    The big recent change in online gaming culture, according to Feng, has been the market’s shift towards handheld devices and the ability to provide experiences across different devices.

    Leave your home console game mid-action, then pick it back up on your tablet while on the bus. This could soon be a typical scenario.

    Developments in hosting are providing some solutions to the challenges of delivering quality experiences across different devices and localities. Feng notes with all sincerity: “Most of the action in online games is at the backend server architecture.”

    Original article published at: http://www.thewhir.com/web-hosting-news/new-moves-in-multi-player-online-game-hosting

    5:52p
    Network Issues Cause Amazon Cloud Outage

    Amazon is reporting problems with one availability zone.

    It must be Friday the 13th, because the Amazon Web Services cloud computing platform is having trouble in its US-EAST-1 Region … again. Between 7:04 a.m. and 8:54 a.m. Pacific there were connectivity issues affecting a portion of the instances in a single availability zone.

    The company also experienced increased error rates and latencies for the APIs for Elastic Block Storage and increased error rates for EBS-backed instance launches. Impacted instances were unreachable via public IP addresses, but were able to communicate with other instances in the same availability zone using private IP addresses.

    The connectivity issues once again impacted major AWS customers, including Heroku and parts of GitHub and Flickr, to name but a few. It’s the usual group of sites that are widely used, heavily dependent on AWS US-EAST-1, and prone to suffer downtime during an Amazon cloud outage.

    The US East region is the company’s most popular but also its oldest, housed in some of its largest and oldest facilities in northern Virginia. Amazon has undergone infrastructure renovations of late. There have been a number of uptime challenges related to US-EAST over the past several years, ranging from Elastic Block Store problems to a generator failure, a Christmas Eve outage that took down Netflix, and the massive general outage of summer 2012.

    It should serve as a reminder, first and foremost, that if you rely heavily on AWS for your infrastructure, you need a failover plan. Customers who leverage AWS with multi-zone deployments, load balancing, and other stopgaps in place usually come out of these outages unscathed, though it’s not as simple as it sounds.
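
    As one hedged illustration of what such a failover plan might involve, the Python sketch below probes a health endpoint in each availability zone and reports which zones are still answering. The endpoints are hypothetical placeholders, and wiring the result into DNS or a load balancer is deliberately left out.

        import urllib.request
        import urllib.error

        # Hypothetical per-zone health-check URLs for one application.
        ZONE_ENDPOINTS = {
            "us-east-1a": "http://app-1a.example.com/health",
            "us-east-1b": "http://app-1b.example.com/health",
            "us-east-1c": "http://app-1c.example.com/health",
        }

        def healthy_zones(endpoints, timeout=3.0):
            """Return the zones whose health endpoint answers with HTTP 200."""
            up = []
            for zone, url in endpoints.items():
                try:
                    with urllib.request.urlopen(url, timeout=timeout) as resp:
                        if resp.status == 200:
                            up.append(zone)
                except (urllib.error.URLError, OSError):
                    pass  # any error counts as the zone being down
            return up

        # A DNS update or load-balancer rule would then route traffic only to healthy_zones(ZONE_ENDPOINTS).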

    6:32p
    Friday Funny: What is the Best Caption?

    It’s Friday! Let’s end this week with a little humor. Our Data Center Knowledge cartoon contest is a great way for readers to have a chuckle on this beautiful fall Friday. But before you take off, vote on our readers’ suggested captions for our new Data Center Knowledge cartoon – Extending More Power.

    New to the caption contest? Here’s how it works: we provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for the suggestion they find funniest. The winner receives a hard copy print, with his or her caption included in the cartoon!

    For the previous cartoons on DCK, see our Humor Channel. Please visit Diane’s website Kip and Gary for more of her data center humor.


