Data Center Knowledge | News and analysis for the data center industry

Tuesday, December 3rd, 2013

    12:40p
    Netflix Signs on With AMS-IX in New York

    Streaming video juggernaut Netflix will be the first customer for AMS-IX New York, making it the first major Internet player to ink a deal with a U.S. Internet exchange created through the Open-IX initiative. The agreement follows Netflix’s connection to AMS-IX in Amsterdam earlier this year.

    The announcement is not a surprise, as Netflix executives have been involved in the formation of Open-IX, a new network of neutral, member-governed exchanges that allow participants to trade traffic. The group is embracing a non-profit model that is widely used in Europe and spreads exchange operations across multiple data centers in a market.

    AMS-IX USA Inc. has struck deals with Digital Realty, DuPont Fabros Technology, Sabey Data Centers and 325 Hudson in the New York/New Jersey area to build a distributed Internet Exchange, named AMS-IX New York. AMS-IX also plans to open exchanges in the Chicago and Silicon Valley markets, which are currently planned to go live in the first half of 2014.

    “Netflix has been one of the key drivers of the Open-IX initiative,” said David Temkin, Director of Network Architecture and Strategy at Netflix and Chair of the Board of Directors of the Open-IX Association. “With our connection to AMS-IX New York we can finally get benefit from the European Internet Exchange model in the US. Moreover, our experience with AMS-IX in Amsterdam made it an easy choice to connect to AMS-IX New York as well.”

    1:30p
    The Snowden Effect and IT Automation’s Role

    Gabby Nizri is the Founder & CEO of Ayehu Software Technologies Ltd., publisher of eyeShare, an enterprise-class, lightweight IT process automation tool.

    GABBY NIZRI

    Ayehu Software

    These days, everyone is hyper-aware of privacy and Internet security – especially given the paradigm shift toward cloud computing. Across industries, organizations are cracking down to prevent leaks of confidential and sensitive information. We’re all aware of the so-called “Snowden Effect,” which highlights what can happen when private information is released. So how does one continue to compete in an increasingly virtual climate without sacrificing the need to keep information secure?

    Balancing Security and Transparency

    IT automation may be the key that solves this problem, and it’s starting in the most unlikely of places: the U.S. government. The reason behind this change, however, is what’s being called into question.

    Recently, the National Security Agency/Central Security Service (NSA/CSS) announced that it would begin the process of automating nearly 90 percent of its system administration duties in an attempt to eliminate waste and free up valuable resources. The NSA/CSS is a U.S. defense agency that is responsible for providing timely information to key government officials and military leaders. The agency is also tasked with the broad responsibility of protecting sensitive or classified national security information from foreign adversaries.

    Perhaps no other agency or government body has as much official responsibility for the privacy of information as the NSA/CSS. Yet many critics have called its automation plan into question, citing the security risks of removing the human element from the picture and introducing technology as its replacement. Keith Alexander, the agency’s director, has defended the decision, boldly stating that:

    “[Until now] we’ve put people in the loop of transferring data, securing networks and doing things that machines are probably better at doing.” He further went on to point out how automation would “make those networks more defensible … [and] more secure.”

     

    Contrary to the popular belief that software and computers are inherently risky in terms of security breaches, the NSA believes that leveraging such technology will actually improve its ability to maintain the confidentiality of information. This stance is due, in great part, to the infamous Snowden Effect, in which a former CIA employee and NSA contractor, Edward Snowden, leaked details of several top-secret United States and British government mass surveillance programs to the press.

    The devastating results have rocked the cloud computing industry across the globe, striking fear into individuals and businesses alike and creating an environment of uncertainty.

    Reducing Risk or Increasing Efficiency?

    The idea of rolling out a massive automation project within one of our own government agencies seems, to some, to be about much more than a way to improve efficiency. Rather, many feel it is about removing what is now viewed as the biggest risk to our national security and critical, confidential information – human beings. Yet even if the real reason behind the shift toward automation is, indeed, to boost efficiency and cut costs, framing it that way means the real benefit of automation becomes diluted or lost completely.

    How the government will actually leverage IT automation remains to be seen, as do the long-term effects of doing so. In the meantime, the real reason this technology can and should become an integral part of business culture – regardless of industry – lies not in eliminating people, and the risk they pose, from the business process, but in providing innovation that frees up talented, highly skilled people to focus on more important matters, like driving the future growth and success of their organization.

    Up until now, organizations have primarily justified IT process automation by its ability to eliminate the manual, labor-intensive tasks that kept expensive technicians busy. We could now be seeing an entirely new justification take root, based on IT process automation’s ability to prevent the far greater expense and damage of information security breaches. Ultimately, the Snowden Effect’s greatest legacy could be raising awareness about the importance of securing computer systems efficiently and cost-effectively.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    The Clifton Campus: Inside the New Telx NJ Facility

    The Telx NJR2 facility in Clifton, NJ

    Earlier this year we brought you the first look inside the new Telx NJR2 data center on the company’s campus in Clifton, New Jersey. The project marked the first greenfield build for Telx, which has been scaling up its data center capacity in the New York metro region, adding space in prominent Manhattan carrier hotels as well as on its New Jersey campus. In this video, Telx provides an overview of the Clifton campus and NJR2, whose first phase currently offers 30,000 square feet of raised-floor space. In addition to gaining interconnection access to domestic and international carriers, Telx colocation customers can connect directly to financial exchange networks such as SFTI, BATS and ARCA. This video runs about 4 minutes.

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    2:04p
    Strong Uptime for Online Retailers as Holiday Shopping Kicks Off

    The wave of holiday online retail traffic didn’t produce any major casualties, although some mobile sites had delays.

    There were no major outages during the kickoff of the holiday shopping season, but several trends emerged: people are shopping earlier, and they are increasingly using mobile devices for their purchases.

    The major problems with some retail sites were a lack of page optimization across the three screens – desktop, tablet and smartphone – and slowdowns at particular steps of the transaction process, according to monitoring firms Keynote and Compuware APM.

    Some Hiccups on Cyber Monday

    There were some performance hits on Thanksgiving night, but retail sites were, for the most part, prepared for the rush from Black Friday into Cyber Monday. According to Keynote, it was on Cyber Monday that some retail sites began to see hiccups, often on specific devices – a sign that they weren’t optimized for all three screens: smartphone, tablet and desktop.

    • HP’s desktop site began slowing significantly at 3:00 a.m. Pacific, according to Keynote, with transactions slowing 30 percent compared with 24 hours earlier. The problems occurred during the Search and Add to Cart steps, two integral points in a transaction. Keynote noted the same problems with Search and Add to Cart on HP’s smartphone site.
    • Sony Style on the desktop also had a major performance slowdown from about 10:00 a.m. to 2:00 p.m. Pacific. In this instance it was the Product Details page that suffered, taking upwards of 80-90 seconds to load in some cases. The Sony Style tablet measurement also began showing a higher error rate on Cyber Monday, primarily attributed to timeouts at the 300-second mark: the Sony Style pages weren’t optimized for tablets, so tablet users were served the full desktop page.
    • Overstock on the desktop had a big performance drop around 9:15 a.m. Pacific. Any user trying to use the Category and Product Details pages would have suffered from the slow response times.
    • The performance of Office Depot’s smartphone site took a hit at roughly 8:00 a.m. Pacific. There was no clear pattern identifying a particular page or app call as responsible, according to Keynote.
    • Best Buy tablet pages delivered a very high error rate from the morning into the afternoon due to timeouts. Again, a desktop-optimized site was trying to load over a mobile network – a failure mode sketched in the example after this list. Interestingly, the Best Buy smartphone site did not see these timeout issues.
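    Serving the right page per device was typically handled server-side in this era, by classifying the User-Agent header before choosing a template. The sketch below is a minimal, hypothetical illustration of that technique in Python – the regexes and template paths are invented, and production sites generally relied on maintained device databases such as WURFL rather than hand-rolled patterns.

```python
import re

# Rough User-Agent classification into the "three screens."
# Patterns are illustrative only; real sites used maintained
# device databases rather than hand-rolled regexes.
TABLET_RE = re.compile(r"iPad|Android(?!.*Mobile)|Tablet", re.I)
PHONE_RE = re.compile(r"iPhone|iPod|Android.*Mobile|Windows Phone", re.I)

def classify_device(user_agent: str) -> str:
    """Return 'tablet', 'phone', or 'desktop' for a User-Agent string."""
    if TABLET_RE.search(user_agent):
        return "tablet"
    if PHONE_RE.search(user_agent):
        return "phone"
    return "desktop"

def page_for(user_agent: str) -> str:
    """Choose a device-appropriate template instead of always serving desktop."""
    return {
        "tablet": "templates/tablet/product.html",
        "phone": "templates/phone/product.html",
        "desktop": "templates/desktop/product.html",
    }[classify_device(user_agent)]

if __name__ == "__main__":
    ua = ("Mozilla/5.0 (iPad; CPU OS 7_0 like Mac OS X) "
          "AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Safari/9537.53")
    print(classify_device(ua), "->", page_for(ua))  # tablet -> templates/tablet/product.html
```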

    Mobile Device Optimization Needed

    The biggest performance hit, according to Compuware APM, was on mobile devices. While retailers’ web pages were generally optimized, many retailers weren’t as prepared to deal with spikes in mobile traffic.

    “With traffic spiking early on Black Friday and again around 8:00 p.m., average page response times remained around 8 seconds,” said Steven Dykstra of Compuware APM’s Benchmarks Division. “Our data shows that an average page response time over 6 seconds increases the page abandonment rate from 12 percent to over 20 percent, which will significantly impact many retailers’ revenue this holiday season.”
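    Dykstra’s abandonment figures imply a concrete revenue effect. The back-of-envelope sketch below uses his 12 percent and 20 percent abandonment rates; the visitor count, conversion rate and average order value are hypothetical, chosen purely for illustration.

```python
# Back-of-envelope revenue impact of slow pages, using Compuware APM's
# abandonment figures (12% under ~6 s, 20%+ above 6 s). Visitor count,
# conversion rate and order value are hypothetical placeholders.
visitors = 1_000_000
conversion_rate = 0.03      # assumed: 3% of non-abandoning visitors buy
avg_order_value = 80.00     # assumed dollars per order

def revenue(abandon_rate: float) -> float:
    """Revenue from visitors who don't abandon and then convert."""
    return visitors * (1 - abandon_rate) * conversion_rate * avg_order_value

fast = revenue(0.12)   # pages responding in about 6 seconds or less
slow = revenue(0.20)   # pages responding in more than 6 seconds
print(f"fast pages: ${fast:,.0f}   slow pages: ${slow:,.0f}")
print(f"revenue at risk: ${fast - slow:,.0f} ({(fast - slow) / fast:.1%})")
```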

    Black Friday

    Retailers were, for the most part, ready for Black Friday traffic. The only notable outage was Sears, according to Keynote: the Sears site returned a message stating it was too busy, on both desktop and iPad. The outage started around 1:15 p.m. Eastern.

    Site performance actually improved compared with the week leading up to the holiday. According to Keynote:

    • Average desktop performance for Black Friday through 2 p.m. Eastern was 15.822 seconds, an improvement on the weekly average of 16.56 seconds leading up to Black Friday.
    • Smartphone performance averaged 28.48 seconds, also slightly better than the weekly average.

    There were no major outages, apart from Sears. There were some mild performance impacts:

    • There was an overall 10 percent performance slowdown between 6 and 7 a.m. Eastern, and again around 10:30 a.m. Eastern, according to Keynote.
    • The Sony Style site experienced a 10-second (roughly 30 percent) slowdown between 2:30 and 3:30 a.m. Eastern, with performance returning to normal by 5:30 a.m. Eastern. The performance issues were related to searching.

    Compuware APM noted huge increases in mobile device (tablet and smartphone) traffic beginning on Thanksgiving and continuing into Cyber Monday. During Black Friday, iPad traffic was up as much as 90 percent compared with last year (during 6:00-12:00 p.m. ET). Over the same window, iPhone traffic was up 117 percent from Thanksgiving, and iPad traffic outpaced iPhone traffic 72 percent to 28 percent. From Keynote’s and Compuware APM’s data, we can surmise that tablet traffic, and a lack of tablet-specific optimization, caused a good portion of the performance hits.

    Cyber Monday – Losing Relevance?

    Given the increasing comfort with shopping on mobile devices and a busy shopping season that starts earlier and earlier, is Cyber Monday becoming less of a big deal? “One theory is that it’s becoming a bit more irrelevant,” said Dykstra. “People used to hop on their work PCs on Monday – now that’s not necessarily the case. There’s bigger acceptance, as people have had experiences with mobile shopping. That anxiety about giving information over the web is dissipating.”

    While online shopping is becoming less confined to Cyber Monday, and less confined to the desktop, Cyber Monday isn’t disappearing. The traffic reports suggest that online shopping is spreading out over the course of the holiday: counting Thursday through Monday, online shopping traffic is growing year over year. The actual sales figures may not follow suit, as some are reporting a drop-off. From a traffic perspective, major retailers need better optimization across devices, and traffic spikes are arriving earlier than in years prior, beginning on Thanksgiving itself.

    2:53p
    Data Center Summit Series – Los Angeles

    CapRate’s National Data Center Summit Series is bringing its Second Annual Los Angeles and Southwest Data Center Summit to Los Angeles on December 13.

    Event organizers expect 325+ Los Angeles and Southwest data center executives at the Los Angeles Athletic Club, and the topics under discussion include market trends, challenges and opportunities, such as:

    • Analysis of regional supply and demand in the Los Angeles and El Segundo markets
    • The impact of emerging western markets on L.A.: How have Las Vegas and Phoenix changed the Los Angeles-area data center landscape?
    • The stagnant Los Angeles-area wholesale market: Why is retail hot, and wholesale cold as the industry enters 2014?
    • Connectivity opportunities and challenges to Asia and other markets
    • Data center end user trends
    • The impact of the cloud on new development
    • Data center financing
    • Data center development debate: Performance sensitive vs. undifferentiated site selection decision-making
    • Energy-efficient data center construction & development
    • Data center disaster scenarios: Strategies for “The Day after Tomorrow,” including fire, redundancy & UPS

    Venue
    The Los Angeles Athletic Club:
    431 West 7th Street
    Los Angeles, CA 90014

    For further information and registration visit CapRate’s website. For more events, please return to the Data Center Knowledge Events Calendar.

    3:00p
    DCK Webinar: The Software-Defined Data Center as the Foundation for the Cloud

    Organizations are looking to the cloud as a way to reduce cost and complexity, improve time-to-market, and improve the overall availability and security of their applications. An agile, reliable, and secure data center is an absolute necessity in order to support cloud deployments. Without this strong foundation, the promise of cloud quickly evaporates.

    Join Data Center Knowledge on Tuesday, December 17 for a special webinar in which Jason Ferrara (Group Leader, Global Marketing at IO) will discuss the software-defined data center.

    IO will discuss the prerequisites for a successful cloud deployment and how the software-defined data center serves as the foundation for the cloud. In this webinar, you’ll learn about:

    • Trends that will impact the modern data center
    • Factors that affect the cost, complexity, and reliability of your data center
    • Prerequisites for a successful cloud deployment
    • The software-defined data center

    Webinar Details:

    Title: The Software-Defined Data Center: The Foundation for the Cloud
    Date: Tuesday, December 17, 2013
    Time: 2 pm Eastern/ 11 am Pacific (Duration 60 minutes, including time for Q&A)
    Register: Sign up for the webinar.

    Following the presentation, there will be a Q&A session with Jason and your peers. Sign up today and you will receive further instructions via e-mail about the webinar.

    3:45p
    Video: A Closer Look at Audit Buddy

    At the recent Data Center World Fall 2013 conference in Orlando, we had a chance to visit with Indra Purkayastha, CEO and Founder of Purkay Labs, which makes an environmental monitoring tool called Audit Buddy. It’s a simple system that gathers environmental data in the white space. It’s portable, easy to use and, by design, has no links to existing infrastructure. In this video, Purkayastha provides an overview of Audit Buddy and its capabilities.
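    To make the idea of a portable spot audit concrete, here is a minimal, purely hypothetical Python sketch that checks a handful of inlet readings against ASHRAE’s recommended temperature envelope. It is not Purkay Labs’ implementation; the sample data and the simplified humidity bounds are invented for illustration.

```python
# Illustrative spot check in the spirit of a portable audit tool. Flags
# readings outside ASHRAE's recommended inlet-temperature envelope
# (18-27 C); the humidity bounds are simplified and the data is invented.
READINGS = [  # (rack, inlet_temp_C, relative_humidity_pct)
    ("A01", 22.5, 45.0),
    ("A02", 28.3, 41.0),   # too warm: likely a hot-aisle recirculation spot
    ("A03", 19.1, 62.0),   # too humid under our simplified bounds
]

TEMP_RANGE = (18.0, 27.0)  # ASHRAE recommended inlet temperatures, deg C
RH_RANGE = (20.0, 60.0)    # simplified illustrative humidity bounds

for rack, temp, rh in READINGS:
    problems = []
    if not TEMP_RANGE[0] <= temp <= TEMP_RANGE[1]:
        problems.append(f"temp {temp} C outside {TEMP_RANGE}")
    if not RH_RANGE[0] <= rh <= RH_RANGE[1]:
        problems.append(f"RH {rh}% outside {RH_RANGE}")
    print(f"{rack}: {'; '.join(problems) if problems else 'OK'}")
```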

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    6:00p
    Cloud Startup DigitalOcean Expands in Europe

    Brought to you by The WHIR.

    Cloud infrastructure company DigitalOcean has opened its new Amsterdam data center, located in a TelecityGroup facility.

    The AMS2 data center will offer DigitalOcean expanded server capacity in Europe and adds shared private networking, a feature that until today was available only in its NYC2 data center.

    With shared private networking, users can set up file storage, database replication and other services across a private network. Traffic on the private network does not count toward the bandwidth charges on a user’s account. DigitalOcean plans to soon add graphs for monitoring private network usage.
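    As an illustration of how such a feature is switched on, the sketch below creates a droplet with private networking enabled through DigitalOcean’s REST API. It is a hypothetical example: the token, name, region, size and image slugs are placeholders, and the endpoint and private_networking flag follow the later v2 API rather than the v1 API that was current in late 2013.

```python
# Hypothetical sketch: create a droplet with private networking enabled
# via DigitalOcean's (later) v2 REST API. Token and slugs are placeholders;
# in late 2013 the equivalent call went through the v1 API.
import json
import urllib.request

API_TOKEN = "your-api-token-here"   # placeholder

payload = {
    "name": "db-replica-ams2",
    "region": "ams2",                # the new Amsterdam data center
    "size": "2gb",
    "image": "ubuntu-14-04-x64",
    "private_networking": True,      # this traffic doesn't count toward bandwidth
}

req = urllib.request.Request(
    "https://api.digitalocean.com/v2/droplets",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
)
with urllib.request.urlopen(req) as resp:
    droplet = json.load(resp)["droplet"]
    print("created droplet", droplet["id"])
```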

    DigitalOcean said it is looking at expanding to locations around the world, including the UK, and will focus on adding more capacity throughout Europe, as the Amsterdam expansion shows. Currently, DigitalOcean has data centers in New York, San Francisco and Amsterdam.

    DigitalOcean has focused on serving developers by providing easy-to-use cloud infrastructure, and has seen tremendous growth. Between December 2012 and June 2013, its number of web-facing servers grew 50 times over – faster than any company other than Amazon, Alibaba and Hetzner – and in June 2013 it became the world’s 72nd-largest hosting provider based on web-facing servers.

    By expanding in Europe, and offering a private networking feature, DigitalOcean will be in a better position to compete with other international cloud providers such as AWS.

    Recently, DigitalOcean named former MakerBot VP of finance Larry White to lead its finance operations.

    Original article published at: http://www.thewhir.com/web-hosting-news/cloud-startup-digitalocean-expands-europe-amsterdam-data-center

     

    6:30p
    Google Compute Engine Reaches General Availability

    Inside the cold aisle of a Google data center. Customers can now run applications on Google’s infrastructure using Google Compute Engine.

    A year after launching its cloud platform, Google has announced the general availability of Google Compute Engine, along with several new features and lower prices for persistent disks and popular compute instances. After a year in preview, Google Compute Engine is now backed by Google’s vast infrastructure and data center footprint and features 24/7 support and a service-level agreement (SLA) promising 99.95 percent uptime. Google is also introducing what it calls transparent maintenance, which combines software and data center innovations with live migration technology to perform proactive maintenance while virtual machines keep running.

    Three new 16-core instance types are launching in preview, with up to 104 GB of RAM, in the familiar standard, high-memory and high-CPU shapes. After launching with support for just two Linux distributions, Compute Engine now runs any out-of-the-box Linux distribution, and in limited preview it will also support SUSE, Red Hat Enterprise Linux and FreeBSD.
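    The new instance shapes can be provisioned programmatically through the Compute Engine API. Below is a minimal, hypothetical sketch using the google-api-python-client library; the project, zone, instance name and boot image are placeholders, and the call assumes default application credentials are configured.

```python
# Hypothetical sketch: launch one of the new 16-core, 104 GB high-memory
# instances (n1-highmem-16) via the Compute Engine v1 API. Project, zone,
# name and image are placeholders; assumes google-api-python-client and
# default application credentials.
from googleapiclient import discovery

PROJECT, ZONE = "my-project", "us-central1-a"   # placeholders

compute = discovery.build("compute", "v1")
body = {
    "name": "bigmem-1",
    "machineType": f"zones/{ZONE}/machineTypes/n1-highmem-16",
    "disks": [{
        "boot": True,
        "initializeParams": {
            # placeholder image; any supported distribution works
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
        },
    }],
    "networkInterfaces": [{
        "network": "global/networks/default",
        "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
    }],
}
operation = compute.instances().insert(project=PROJECT, zone=ZONE, body=body).execute()
print("started operation:", operation["name"])
```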

    Enhancing storage for the cloud platform, Google has lowered the price of Persistent Disk by 60 percent per gigabyte and dropped I/O charges, yielding a predictable, low price for a block storage device. The I/O available to a volume scales linearly with its size, and the largest Persistent Disk volumes have up to 700 percent higher peak I/O capability. Google also said that it is lowering prices on popular standard Compute Engine instances by 10 percent in all regions.
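    The linear scaling means a volume’s peak I/O is simply a per-gigabyte rate multiplied by its size, up to a cap at the largest volumes. The tiny sketch below illustrates the relationship; the rate and cap constants are hypothetical placeholders, not Google’s published figures – only the linear relationship comes from the announcement.

```python
# Peak I/O under linear scaling: (rate per GB) x (volume size), capped.
# The constants are hypothetical placeholders, not Google's 2013 numbers.
IOPS_PER_GB = 0.75     # assumed illustrative rate
IOPS_CAP = 3_000       # assumed cap at the largest volume sizes

def peak_iops(size_gb: int) -> float:
    """Peak IOPS for a volume of the given size, under linear scaling."""
    return min(size_gb * IOPS_PER_GB, IOPS_CAP)

for size in (100, 500, 1000, 4000):
    print(f"{size:>5} GB -> {peak_iops(size):>7,.0f} IOPS")
```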

    “We find that Compute Engine scales quickly, allowing us to easily meet the flow of new sequencing requests,” said David Schlesinger, CEO of Mendelics. “Compute Engine has helped us scale with our demands and has been a key component to helping our physicians diagnose and cure genetic diseases in Brazil and around the world.”

    “Google Cloud Platform provides the most consistent performance we’ve ever seen,” said Sebastian Stadil, CEO of Scalr. “Every VM, every disk, performs exactly as we expect it to and gave us the ability to build fast, low-latency applications.”

    8:00p
    Study: Data Center Downtime Costs $7,900 Per Minute

    The cost of data center downtime is rising, according to a new study.

    Unplanned data center outages are expensive, and the cost of downtime is rising, according to a new study. The average cost per minute of unplanned downtime is now $7,900, up a staggering 41 percent from $5,600 per minute in 2010, according to a survey from the Ponemon Institute, which was sponsored by Emerson Network Power. The two organizations first partnered in 2010 to calculate costs associated with downtime.

    Downtime is getting more expensive because data centers are becoming more valuable to their operators: the survey attributes the increase to the growing value of the business operations that data centers support.

    “Given the fact that today’s data centers support more critical, interdependent devices and IT systems than ever before, most would expect a rise in the cost of an unplanned data center outage compared to 2010,” said Larry Ponemon, Ph.D., chairman and founder of the Ponemon Institute. “However, the 41 percent increase was higher than expected. This increase in cost underscores the importance for organizations to make it a priority to minimize the risk of downtime that can potentially cost thousands of dollars per minute.”

    Highlights of the study include:

    • The average cost of data center downtime across industries was approximately $7,900 per minute, a 41 percent increase from the $5,600 per minute recorded in 2010.
    • The average reported incident lasted 86 minutes, for an average cost per incident of approximately $690,200. (In 2010 it was 97 minutes at approximately $505,500.) A quick arithmetic check on these figures follows this list.
    • For a total data center outage, which had an average recovery time of 119 minutes, average costs were approximately $901,500. (In 2010, it was 134 minutes at about $680,700.)
    • For a partial data center outage, which averaged 56 minutes in length, average costs were approximately $350,400. (In 2010, it was 59 minutes at approximately $258,000.)
    • The majority of survey respondents reported having experienced an unplanned data center outage in the past 24 months (91 percent). This is a slight decrease from the 95 percent of respondents in the 2010 study who reported unplanned outages.
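    The headline numbers hang together arithmetically, though not exactly: multiplying the all-outage per-minute average by a category’s average duration only approximates that category’s reported cost, since per-minute costs differ by outage type. A quick check, using only the figures quoted above:

```python
# Quick consistency check on the Ponemon/Emerson figures quoted above.
# The per-minute figure is an average across all outages, so multiplying
# it by a category's average duration only approximates that category's
# reported average cost per incident.
COST_PER_MIN_2013 = 7_900
COST_PER_MIN_2010 = 5_600

print(f"per-minute increase: {COST_PER_MIN_2013 / COST_PER_MIN_2010 - 1:.0%}")  # ~41%

for label, minutes, reported in [
    ("average incident", 86, 690_200),
    ("total outage", 119, 901_500),
    ("partial outage", 56, 350_400),
]:
    naive = minutes * COST_PER_MIN_2013
    print(f"{label}: {minutes} min x $7,900/min = ${naive:,} "
          f"(study reports ${reported:,})")
```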

    The study examined 67 data centers, each with a minimum size of 2,500 square feet, across varying industry segments, performing a comprehensive analysis of the direct, indirect and opportunity costs of data center outages. It measured damage to mission-critical data, the impact of downtime on organizational productivity, damage to equipment, legal and regulatory repercussions, and lost confidence and trust among key stakeholders.

    The study reveals that organizations whose revenue models depend on the data center’s ability to deliver IT and networking services to customers incur even more significant costs; the highest cost of a single event in the study was more than $1.7 million. Those data-center-dependent industries saw a slight decrease compared with 2010 costs, while organizations that have traditionally been less dependent saw significant increases.

    The industries with the largest increases were:

    • Hospitality sector (129 percent)
    • Public sector (116 percent)
    • Transportation (108 percent)
    • Media organizations (104 percent)

    “As data centers continue to evolve to support businesses and organizations that are becoming more social, mobile and cloud-based, there is an increasing need for a growing number of companies and organizations to make it a priority to minimize the risk of downtime and commit the necessary investment in infrastructure technology and resources,” said Peter Panfil, vice president, global power, Emerson Network Power. “This report gives these organizations the data they need to support more informed business decisions regarding the cost associated with eliminating vulnerabilities compared to the costs associated with not taking action.”

    The study is available at the Emerson Network Power website, along with a handy infographic.

    8:16p
    Open Compute Summit, 2014

    The fifth annual Open Compute Summit will be held January 28-29 at the San Jose Convention Center in San Jose, California. The Open Compute Project, which was initiated by Facebook and now has its own foundation, is two years old, and its meetings are growing. By releasing Open Compute Project technologies as open hardware, the group seeks to develop servers and data centers following the model traditionally associated with open source software projects.

    The event is open to the public, but (free!) registration is required.

    Venue

    San Jose Convention Center
    150 West San Carlos Street
    San Jose, CA 95113

    For more events, please return to the Data Center Knowledge Events Calendar.
