Data Center Knowledge | News and analysis for the data center industry
 

Monday, January 25th, 2016

    1:00p
    Startup Envisions Data Centers for Cities of the Future

    This month, we focus on data center design. We’ll look into design best practices, examine in depth some of the most interesting recent design trends, explore new ideas, and talk with leading data center design experts.


    rhizome (rī′zōm′) – a horizontal, usually underground stem that often sends out roots and shoots from its nodes.

    Sometimes people outside a particular field, unencumbered by knowledge of what has and hasn’t worked in the past or by preconceptions about the “right” ways of doing things, come up with ideas for that field better than any insider could. Of course, the same lack of expertise makes them capable of coming up with some of the worst ideas too.

    Founders of one European data center design startup aren’t sure at this point where on that continuum their ideas fall, and they don’t pretend to be. What they’re trying to do is envision people’s relationship with computing in the near future and the physical form that relationship will take.

    The people behind Tallinn, Estonia-based Project Rhizome don’t all have a background in data centers. Two of the three founders have backgrounds in design and architecture, and the third comes from the world of IT. But they believe their architectural sensibility brings a useful perspective to data center design, a perspective that will presumably grow in importance as more and more data storage and processing capacity moves into densely populated areas.

    “There’s a level of infrastructure that seems to be missing right now,” Ivan Sergejev, one of Project Rhizome’s founders, says. That’s computing infrastructure that occupies the same spaces people in cities occupy, as opposed to gigantic warehouse-type facilities hidden away in suburbs or rural areas. While the need for those massive data centers is unlikely to ever go away, web content, cloud services, and the explosion of devices connected to the internet – the Internet of Things – may create a need for more data storage and processing capacity directly where the end users are.

    Read more: How Edge Data Center Providers are Changing the Internet’s Geography

    The Project Rhizome team is thinking of ways to design small data centers so they fit into urban environments functionally, economically, and aesthetically. Imagine exhaust heat from server racks helping keep water warm in an all-season community swimming pool; or a four-story data center where one side of the building is a bouldering wall; or a community playground with rows of IT racks under a skateboarding ramp. Those are some of their ideas.

    One Project Rhizome concept is a community playground integrated with a small data center (Image/Concept: Project Rhizome)
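
    The swimming-pool idea lends itself to a quick back-of-the-envelope check, since nearly all of the electricity a rack draws leaves it as low-grade heat. The sketch below (Python) puts rough numbers on it; the rack count and per-rack power are illustrative assumptions, not Project Rhizome specifications.

        # Back-of-the-envelope waste-heat estimate for a small urban data center.
        # The rack count and per-rack IT load below are assumptions for
        # illustration only, not figures from Project Rhizome.
        racks = 20
        kw_per_rack = 8.0                    # assumed average IT load per rack
        heat_kw = racks * kw_per_rack        # nearly all IT power ends up as heat

        heat_kwh_per_day = heat_kw * 24
        print(f"Continuous heat output: {heat_kw:.0f} kW")
        print(f"Recoverable thermal energy: {heat_kwh_per_day:.0f} kWh/day "
              "(before heat-exchanger and distribution losses)")
        # 20 racks at 8 kW give roughly 160 kW of steady low-grade heat, the kind
        # of year-round output a community pool or heating loop could absorb.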

    The running themes here are multi-use and physical beauty. If a data center facility has a function beyond storing and processing data, and if it doesn’t look ugly or boring, it will be easier to find room for it in the dense and busy urban environment. “When technology comes within close proximity to people, it finds itself in need of a human interface,” the company says on its website.

    A “Crazy Architecture Thesis”

    Sergejev, a Russian-born Estonian whose time is now divided between Estonia and Holland, originally came up with the idea for Project Rhizome in his master’s thesis at Virginia Tech. He studied architecture in the US on a Fulbright scholarship after getting a degree from the Estonian Academy of Arts in Tallinn. It was, as he describes it, a “crazy architecture thesis,” in which he proposed everything from robotics to liquid cooling. But a few years later, when things like edge computing started showing up in the press, some of those ideas began to look less and less crazy.

    The company is still at an early stage, working on defining exactly what it is going to do and who its clients are going to be. Sergejev and his co-founders are still “probing, poking around,” he says. We “haven’t proven the fact that we’re all that necessary just yet. The idea itself is pretty strong, so we need to find a way to implement it.”

    A multi-story data center could double as a bouldering facility (Image/Concept: Project Rhizome)

    Some of the potential clients they plan to reach out to are colocation providers, who may need more data center capacity in cities than they have today. While there are colocation facilities in all major cities, they are extremely expensive to operate, and it is very difficult to secure real estate, power, and permits to build new ones. Design can solve many of those problems, Sergejev says.

    7:05p
    Facebook Data Center Coming to Ireland

    A village in Ireland will be the location of the second Facebook data center in Europe, the company officially confirmed on Sunday.

    The social network has been doing site selection work in Ireland and seeking planning approvals since at least nine months ago, when it was reportedly working with County Meath officials. The next Facebook data center will be built in Clonee, which is part of Meath.

    There were 1.01 billion daily active Facebook users on average in September 2015, the latest month for which the data is available. Of the 1.5 billion registered Facebook users, about 310 million were in Europe as of November 2015, according to Internet World Stats. It is unclear how many of them were active users, however.

    The first Facebook data center in Europe came online in 2013 in Luleå, Sweden. “Ireland has been our international headquarters since 2009, and our new Clonee data center will continue Facebook’s significant investment in the country and in Europe,” Tom Furlong, Facebook’s VP for site operations, wrote in a blog post.

    Safe Harbor and Data Centers

    Besides Europe being one of the top global markets for internet services, a major impetus to build data centers there came last year from the European Court of Justice, which annulled Safe Harbor, the 15-year-old blanket legal framework internet companies had used to transfer European citizens’ personal data across the Atlantic for storage in US data centers. The ECJ’s decision to strike down Safe Harbor created a lot of confusion for companies with global data center infrastructure and sent a signal that European authorities would prefer service providers to store Europeans’ data in Europe.

    Read more: Safe Harbor Ruling Leaves Data Center Operators in Ambiguity

    Custom Hardware Throughout

    The facility will support the latest and greatest of Facebook’s custom hardware, much of which the company has open-sourced through the Open Compute Project, its open source hardware and data center design initiative. All server and storage hardware will be OCP gear, the company said. The facility will also employ the company’s custom network fabric, as well as its Wedge and 6-Pack network switches.

    Read more: With its 100-Gig Switch, Facebook Sees Disaggregation in Action

    It will be one of the first two Facebook data centers to use its super-fast 100G switches. The other one will be the facility that’s currently under construction in the Dallas-Fort Worth region.

    Facebook Data Center in Clonee, Ireland, by the Numbers:

    • Target completion date: late 2017 to early 2018
    • Site: 227 acres
    • Phase I data center: 340,000 square feet
    • Phase I admin area: 70,000 square feet

    Powered by Wind

    Energy consumed by the future Facebook data center in Ireland will be carbon-neutral, the company said. Like its future Dallas data center, as well as its existing Altoona, Iowa, site, it will rely completely on wind power. The Luleå data center is powered by hydro.

    In addition to Altoona and Luleå, the company has built data centers in Prineville, Oregon, and Forest City, North Carolina. It also leases capacity from wholesale data center providers in Ashburn, Virginia, and Singapore.

    Indirect Free Cooling

    Because of high air salinity in Clonee – the village is close to the Irish Sea – Facebook will be using a different cooling system design than usual. It will rely on indirect economization rather than its typical airside economization system, which uses filtered outside air to cool the hardware inside.

    Indirect economization systems still use outside air for cooling, only instead of pushing outside air onto the computer floor, they use it to cool a heat exchanger, which extracts heat from the air that circulates through the hardware. This is a form of free cooling used in areas where air contains a lot of salt or other particulate matter.
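
    A simple way to reason about such a system is the standard sensible heat-exchanger relationship: the supply air delivered to the IT equipment approaches the outdoor temperature as the exchanger’s effectiveness approaches 1.0, without any outside air actually entering the data hall. The sketch below illustrates that relationship; the temperatures and effectiveness value are assumptions for illustration, not Facebook’s design figures.

        def indirect_supply_temp_c(return_air_c, outdoor_air_c, effectiveness):
            """Sensible air-to-air heat exchanger: the supply temperature moves
            toward the outdoor temperature as effectiveness approaches 1.0,
            while the outside air never mixes with the IT airstream."""
            return return_air_c - effectiveness * (return_air_c - outdoor_air_c)

        # Illustrative numbers only (assumed, not Facebook's design values):
        # 35 C return air off the racks, 10 C Irish outdoor air, 0.75 effectiveness.
        print(indirect_supply_temp_c(35.0, 10.0, 0.75))  # -> 16.25 C supply air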

    7:15p
    Three Trends Driving the 100G Ethernet Market

    Omar Hassen is Associate Vice President of Connectivity Business at AppliedMicro.

    The Ethernet market has seen tremendous growth in the past few years. Changes to transmission speeds and expansions in data center capacities are helping fuel this trend. IHS Infonetics reports that by 2019, 100-gigabit-per-second (100G) Ethernet will make up more than 50 percent of data center optical transceiver transmissions. This industry’s revenue has already grown 21 percent since 2014, topping out at $1.4 billion. As 100G silicon heads into production, the hype for 100G Ethernet is ramping up. But is it all just hype, or will 100G Ethernet revolutionize the transceiver marketplace?

    Changes to Data Center Architecture and Traffic

    Currently, the industry relies on 10-gigabit-per-second (10G) and 40-gigabit-per-second (40G) Ethernet, which have worked well for some time. These technologies are efficient, and most people don’t have issues with them. To most users, 40G is more than enough. The issue becomes apparent only when looking at it through a data center lens: the data that internet content providers and cloud-hosted enterprises store and move has grown, and will continue to grow, in both size and traffic.

    Cisco Systems, Inc. predicts that global data center Internet protocol (IP) traffic will grow by 31 percent annually over the next five years. Changes to the way people use the Internet are responsible for this growth: cloud computing has become huge, and mobile devices access video and social media content around the globe.
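
    Compounded over the full forecast window, that rate adds up quickly. The short calculation below simply compounds the 31 percent annual figure cited above and shows traffic roughly quadrupling in five years.

        # Compound the projected 31% annual growth in data center IP traffic
        # over the five-year forecast window cited above.
        annual_growth = 0.31
        years = 5
        multiple = (1 + annual_growth) ** years
        print(f"Traffic multiple after {years} years: {multiple:.2f}x")  # ~3.86x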

    Data centers have had to do more with less, which demands better data management. This influx of traffic has led to three-tier networks (systems that combine user interfaces, data processors, and database management systems) and other changes to the way traffic moves through the data center. Newer technology allows parallel processing that can transfer more volume. The Internet is becoming more complex, and websites require more interconnectedness. Data center architecture is changing, with more focus on integrated nodes and higher bandwidths. The expectation, then, is that 100G will become the new standard for higher bandwidth and more intelligent data center architecture.

    10G Can’t Keep Up With Growing Enterprise Networks

    Some large-scale data centers are already switching. The Howard Hughes Medical Institute recently moved to 100G, delivered through Brocade MLXE routers; the deployment includes 56 ports, all running 100G, and is the largest installation of 100G in any research facility. Efficiency was ranked as the top priority for the switch. Traditionally, the data center would have relied on multiple bundles of 10G, requiring link aggregation and resulting in suboptimal, inefficient load balancing.

    That’s where 100G comes in. It frees up space and minimizes aggregation, significantly improving overall efficiency. As companies continue to grow in scale and their data needs become more complex, 100G will offer the bandwidth speeds and efficiency they desperately need. Businesses that have more than four or five 10G ports are witnessing growth in their databases and may find a switch to 100G a more affordable and scalable option. Of course, this is driven by cost and a company’s resources.
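
    The load-balancing point is easy to see in miniature: in a 10G bundle, each flow is typically pinned by a header hash to one member link, so no single flow can exceed 10G and hash collisions leave some links hot while others sit idle. The toy simulation below illustrates that effect under an assumed traffic mix; it is not a model of any particular vendor’s hashing.

        import random

        random.seed(1)  # reproducible toy example

        def lag_utilization(flow_gbps, num_links=10):
            """Pin each flow to one member of a 10 x 10G bundle, the way
            LAG/ECMP header hashing does, and return the per-link load."""
            loads = [0.0] * num_links
            for flow in flow_gbps:
                loads[random.randrange(num_links)] += flow  # stand-in for the hash
            return loads

        # Assumed traffic mix (illustrative only): 40 flows of 0.5-4 Gbps each.
        flows = [random.uniform(0.5, 4.0) for _ in range(40)]
        loads = lag_utilization(flows)
        print("per-link Gbps:", [round(l, 1) for l in loads])
        print("busiest member: %.1f Gbps of its 10 Gbps" % max(loads))
        # The same flows on a single 100G port share one pipe: no hot members,
        # and no individual flow capped at 10G.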

    Evolving CMOS Technology Will Make 100G Mainstream

    As with the evolution of 10G technology, it will take some time before 100G goes mainstream. Transceiver technology, at its start, was expensive and required a lot of power; over time, advances in silicon made it more affordable and energy-efficient. The 100G market is at that early stage today, but complementary metal-oxide-semiconductor (CMOS) technologies are set to make it an industry standard. CMOS-based architecture will become faster while using less power.

    Once mature, 100G systems should offer speeds up to 10 times faster while using 50 percent less power. Currently, Cisco and Brocade Communications Systems Inc. sell 100G switches and routers at the enterprise level, but at an average cost of $2,500 per port, a 100G network is quite an expense for non-enterprise companies. As CMOS technology evolves, however, building these systems will become easier and more affordable, reducing the cost, size, and power needs of 100G data centers and making mainstream adoption a reality.
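
    A rough per-gigabit comparison shows why the math already starts to favor 100G at heavily aggregated sites. The sketch below uses the $2,500-per-port figure quoted above; the 10G port price is a hypothetical placeholder, not a number from this article.

        # Per-gigabit port cost comparison. The $2,500/port 100G figure is quoted
        # in the article; the 10G port price is an assumed placeholder.
        COST_100G_PORT = 2500.0
        COST_10G_PORT_ASSUMED = 400.0

        print(f"100G: ${COST_100G_PORT / 100:.0f} per Gbps")
        print(f"10G (assumed price): ${COST_10G_PORT_ASSUMED / 10:.0f} per Gbps")

        # Ports needed for a 400 Gbps aggregate uplink (ignoring LAG overhead):
        target_gbps = 400
        print("10G ports needed:", target_gbps // 10, "vs 100G ports:", target_gbps // 100)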

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    9:50p
    Amazon, IBM, HPE: Government Cloud Security Process Broken

    An industry group representing Amazon, IBM, HPE, and several other companies, as well as some federal agencies and lawmakers, is calling on the government to fix its process for certifying cloud service providers as fit to serve the federal government.

    The certification process, called FedRAMP, or Federal Risk and Authorization Management Program, was created to make it easier for government agencies to use cloud services. By choosing from a list of FedRAMP-certified providers, agency IT heads are guaranteed that the services they choose meet federal cloud security standards.

    The FedRAMP certification process, however, is “fundamentally broken,” according to an industry advocacy group whose affiliates include Amazon Web Services, HPE, IBM, CGI, General Dynamics, and CenturyLink, among others. The group, called FedRAMP Fast Forward, today published a six-step plan for reforming the process.

    There are problems of transparency, accountability, and cost, the group claims.

    “The real promise of FedRAMP — embodied in the ‘certify once, use many times’ framework — has been jeopardized by what has become a costly and time-consuming process that lacks transparency and accountability,” reads the report that outlines the suggested reform plan.

    Government cloud adoption promises to generate billions in IT savings. Much of the current $80 billion government IT budget goes to maintaining sprawling legacy data center infrastructure, and the thinking is that cloud computing will enable the government to shut down old and expensive data centers faster than it has to date.

    Read more: Ten Key Figures from Latest Progress Report on US Government IT Reform

    A broken FedRAMP certification process, however, is a big impediment to government cloud adoption, according to the group. Cloud service providers don’t have visibility into their status in the approval process or guidance about the steps necessary to move the process along, the group said in a statement. Agencies, meanwhile, don’t have insight into where the cloud services that have been authorized operate.

    The time and cost necessary for a cloud service provider to get certified have grown from nine months and $250,000 two years ago to two years and $4 million to $5 million today, according to an annual report by the Cloud Computing Caucus, a congressional member organization consisting of 11 Democrats and Republicans that is advised by technology companies and industry groups.

    Here is the six-step FedRAMP reform plan FedRAMP Fast Forward is proposing:

    1. Normalize the certification process. CSPs can take several routes to an ATO, and not all are seen as equal, which fundamentally undermines the value proposition of the FedRAMP program (DCK: ATO stands for Authority to Operate. Individual agencies issue ATOs to FedRAMP-compliant cloud service providers whose services they want to use)
    2. Increase transparency about the approval process, what it takes to gain approval, and the time and cost involved
    3. Harmonize security standards, so that CSPs can meet some FedRAMP requirements through compliance with existing international and privacy standards
    4. Reduce the cost of continuous monitoring for CSPs that have achieved an ATO
    5. Enable CSPs to upgrade their cloud environments while remaining compliant with FedRAMP requirements
    6. Help CSPs map their FedRAMP compliance to Department of Defense security requirements, rather than forcing them to start over again to obtain the ability to provide cloud services to DoD

    10:24p
    Latest Linux Security Vulnerability: Hype Versus Reality


    By The Var Guy

    In the latest bout of alarmist frenzy to sweep the security world, researchers disclosed a vulnerability in the Linux kernel’s open source code last week. It turns out the vulnerability poses little real threat.

    The flaw, which has existed in Linux since 2012 but remained unknown, was reported by the Israeli security company Perception Point. It allows attackers to gain root access to computers running affected versions of the kernel. With root access, they can do anything they want to the system.

    Perception Point ominously warned that the vulnerability affects “tens of millions” of Linux PCs and servers, as well as some Android devices (since Android is based on a version of the Linux kernel). The company urged administrators and users to upgrade their systems as soon as possible in order to apply the fix that the Linux kernel developers created after Perception Point notified them of the flaw.

    Theoretically, the vulnerability does threaten tens of millions of machines. And there is no reason not to apply the patch as soon as possible. Yet in this case, the frenzied warnings about computers being compromised in droves seem over the top for a couple of reasons.

    First, Perception Point itself admits that there is no evidence of “any exploit targeting this vulnerability in the wild.” In fact, the only known exploit for this is the “proof of concept” attack that Perception Point itself created in order to show that the flaw actually existed. So, for now, there is no reason to believe that any machines are under attack from this error.

    Second and more important, the time and conditions required to execute the exploit mean that, in reality, only a minority of PCs and servers — and probably no Android devices — can be attacked through this flaw. As Steven Vaughan-Nichols and others have noted, the attack takes many hours to complete, even on high-end hardware. It also requires gobs of memory — apparently more than 8 gigabytes in some cases. That excludes my trusty laptop, with its 4 gigabytes of RAM, from a successful attack, along with plenty of other PCs and almost certainly every Android phone or tablet in existence.

    To be sure, servers are likely to have more memory and therefore be vulnerable. But there are still plenty of servers that lack lots of RAM.

    Beyond all this, kernels with certain security hardening features enabled also seem not to be vulnerable to the attack.
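
    For administrators who want a quick read on exposure before patching, the memory requirement alone rules out many machines. The sketch below is a minimal triage helper based only on the more-than-8-gigabyte figure cited in this article; it is not a vulnerability test, and the threshold is that article figure rather than anything authoritative.

        # Rough exposure triage using only the criteria discussed in the article.
        # This is not a vulnerability test; apply the kernel fix regardless.
        import platform

        MIN_RAM_GB = 8.0  # the "more than 8 gigabytes" figure cited above

        def total_ram_gb(meminfo="/proc/meminfo"):
            """Read total physical memory from /proc/meminfo (Linux only)."""
            with open(meminfo) as f:
                for line in f:
                    if line.startswith("MemTotal:"):
                        return int(line.split()[1]) / 1024 / 1024  # kB -> GB
            raise RuntimeError("MemTotal not found")

        if __name__ == "__main__":
            ram = total_ram_gb()
            print(f"kernel {platform.release()}, {ram:.1f} GB RAM")
            if ram < MIN_RAM_GB:
                print("Below the memory the published proof of concept reportedly "
                      "needs; an attack looks impractical here, but patch anyway.")
            else:
                print("Enough memory for a proof-of-concept-style attempt; "
                      "prioritize the kernel update.")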

    The open source community has seen its share of truly worrisome security threats in the past couple of years, chief among them Heartbleed. And Linus Torvalds’s unorthodox attitudes toward security in the Linux kernel may not sit well with everyone (though they are arguably healthier than the illusory norm of pretending that perfect security is a real possibility). But in this case, the hype suggesting the imminent demise of millions of Linux computers falls far short of matching reality.

    This first ran at http://thevarguy.com/open-source-application-software-companies/linuxs-latest-security-vulnerability-hype-vs-reality

    10:35p
    Biggest Cloud Providers Ride Through Winter Storm Jonas Without Downtime


    By The WHIR

    Cities on the north-eastern US coast are recovering from a massive weekend snowfall brought by Winter Storm Jonas, but the cloud infrastructure in the region powering websites and services appears to have been largely unaffected.

    The service status pages for major cloud services, including Microsoft Azure, Google Cloud Platform, and Amazon Web Services, didn’t report any disruptions to facilities on the East Coast.

    Hurricane Sandy in 2012 caused several outages, including flooding and generator fuel shortages at PEER 1’s facility and downtime at Internap’s Manhattan facility. In anticipation of Winter Storm Jonas, AWS noted that a repeat was unlikely.

    Read more: How East Coast Data Centers are Preparing for the Storm

    “In the days leading up to a known event such as a hurricane, we make preparations such as increasing fuel supplies, updating staffing plans, and adding provisions like food and water to ensure the safety of the support teams,” wrote AWS’s Jeff Barr in a blog post. “Once it is clear that a storm will impact a specific region, the response plan is executed and we post updates to the Service Health Dashboard throughout the event.”

    It’s not just cloud providers that have had to reassure customers, but also companies that rely on on-premise data centers and upstream service providers to deliver services to their customers. This means having contingency plans in case something fails.

    For many companies that have local data cached on-premise, critical data loss is a possibility. Companies like Panzura provide hybrid strategies that leverage cloud services from providers like AWS, Oracle, and Google to ensure data isn’t lost during a storm.

    etherFAX, a provider of cloud-based fax services, explained to customers in a blog post that its primary data center, located in an Equinix IBX facility in New York, remained up throughout Hurricane Sandy in 2012. If the Equinix facility fails, etherFAX fails over to its redundant site in Toronto within 15 minutes. The company reported no downtime because of the storm.

    This first ran at http://www.thewhir.com/web-hosting-news/major-cloud-service-providers-stay-online-throughout-winter-storm-jonas

