Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, August 27th, 2014

    11:57a
    Weekend Earthquake A Disaster Recovery Reminder

    A 6.1-magnitude earthquake struck 6 miles southwest of Napa, California on Sunday. No casualties were reported, though there was extensive damage to property, including devastating damage to wine country. Data centers rode out the quake safely, but it was a reminder that they must always be prepared for natural disasters.

    This was the first earthquake of serious magnitude in the area since 1989, when the 6.9-magnitude Loma Prieta earthquake caused the collapse of part of the Bay Bridge. A 7.2-magnitude earthquake hit Baja California in 2010.

    “Natural disasters are true of any geography,” said RagingWire CTO Bill Dougherty. Dougherty is a resident of the area and RagingWire has a data center in Sacramento, nearby but outside of the quake zone. “It’s a reminder you can’t ever stop planning for a disaster. The area was lulled in a false sense of security. We haven’t had a bad earthquake in Napa in a long time.”

    There are several data centers within the affected area, and all remained online and operational – a success story for the industry. Still, the quake was a wake-up call to take disaster recovery seriously. Data center customers in the area should make sure they have a second deployment outside of the fault zone. It could have been a lot worse.

    Data center providers themselves are seasoned pros at protecting infrastructure. Racks need to be bolted down and fitted with seismic restraints, and the facility must have multiple layers of redundancy. But while the facility itself may ride out an earthquake, it is the outside infrastructure that poses the biggest threat.

    “Remember, even if the data center building survives a major quake, the surrounding infrastructure is not resilient,” said Dougherty. “Bridges, roads, power grids, fiber paths, and fuel suppliers are all vulnerable and have a direct impact on your operations and service availability. And there’s no question, another quake will hit the Bay Area. I’m thankful and blessed, it could have been so much worse.”

    Many people in the area lost power. Other, less evident factors come into play as well.

    One factor is the loss of water pressure. “Any water that’s flowing is needed for fires,” said Dougherty.

    Water is used in many critical facilities for HVAC systems that use evaporative cooling. If the community source goes down, an additional water supply is needed or the facility may become inoperable.

    Data centers rely on fuel contracts to keep backup generators running, but if roads are damaged, suppliers might not be able to reach the facility. Even preferred customers with contracts in place can find that damaged routes prevent both fuel and staff from getting through.

    Have contingency plans in place not only for infrastructure but also for staff. Make sure protocols are established in the event of a disaster.

     

    11:59a
    Verizon Makes $40 Million Investment In Solar Energy

    Telecom giant Verizon is boosting its solar capacity, building out 10.2 megawatts of generation from more than 44,000 solar panels manufactured by SunPower. The clean-energy investment will total $40 million and follows a previous $100 million investment in solar panels and fuel cells. The company will have around 25 megawatts of clean energy under management.

    Verizon is cutting its carbon emissions footprint, a trend among the largest telecom and Internet players. Rooftop and ground-mounted solar-power systems will be located in California, Maryland, Massachusetts, New Jersey and New York.

    The price of solar panels has dropped considerably since Verizon’s $100 million investment last year. That investment added around 5 megawatts of solar and around 10 megawatts from a few dozen ClearEdge fuel cells, which power the wireless network at six locations in the western U.S.

    “Our investment in on-site green energy is improving the quality of life in the communities we serve by reducing CO2 levels and reducing strain on commercial power grids, while increasing our energy efficiency,” said James Gowen, Verizon’s chief sustainability officer. “By almost doubling the amount of renewable, solar energy we’re using, we are making further progress toward Verizon’s goal of cutting our carbon intensity in half by 2020, in part, by leveraging the proven business case for clean-energy alternatives to the commercial power grid.”

    In addition to various solar and fuel cell installations at Verizon data centers, the company has also implemented better cooling efficiency and energy-consumption reduction measures in its data centers.

    There is healthy competition among Internet and telecom players to build out renewable and clean energy. Apple, Google, eBay and Microsoft all have clean-power initiatives in play, whether that means installing solar panels or buying clean energy from the utility.

    Apple recently got the green light to build out its third solar farm in Maiden, North Carolina, a 100-acre 17.5 megawatt plant. The company is big on solar, also planning a 20 megawatt solar farm in Reno, Nevada. Apple’s data center energy supply is 100 percent renewable. Also in North Carolina, it uses fuel cells that make electricity out of biogas from nearby landfills.

    Facebook uses solar at its Prineville facility. The social network also used its leverage in Iowa, convincing utility MidAmerican Energy to power its data center with wind energy. MidAmerican’s resulting investment totals $1.9 billion in wind power generation, the largest order of onshore wind turbines to date, placed in support of Facebook’s requirements.

    eBay provided a look into its solar array last year. The array features 72,000 square feet of solar panels and covers nearly the entire roof of the data center.

    Dupont Fabros, Cisco and Emerson Network Power are just a few more that have installed solar panels in support of their operations.

    Foxconn recently announced plans to design containerized data centers that come with renewable energy generation capabilities, such as photovoltaic panels. The containers are targeted at “hyperscale” internet giants.

    1:00p
    CenturyLink Opens Second Toronto Data Center

    CenturyLink has opened its second data center in the Greater Toronto area, consisting of 100,000 square feet of raised floor space and capable of supporting up to 5 megawatts of IT load.

    Located in Markham, the facility has received Uptime Institute’s Tier III Certification of Constructed Facility and Design Documents. CenturyLink is pursuing Uptime Tier III for both design and construction on all of its new builds.

    CenturyLink has long had a hosting presence in Toronto and the company reports that pre-sales for the new data center have gone well. The new TR3 facility brings the company’s total Toronto data center footprint to more than 11 megawatts of IT load capacity and approximately 185,000 square feet of raised floor.

    The company’s entire product portfolio, including cloud and colocation, will be available from TR3. The data center adds much-needed capacity to meet demand and also holds strategic importance in the market.

    “Part of the importance of this facility is that it is far enough away from our initial Toronto facility that it fills a lot of disaster recovery requirements for customers,” said Drew Leonard, VP of colocation product management. “We’ve also built out a metro ring from Toronto 1 to the new TR3 facility, to 151 Front Street.” The new data center opens its doors with four carriers on-site.

    According to statistics collected and shared by the Toronto Financial Services Alliance, the Toronto region ranks among the top financial centers and technology hubs in the world.

    “CenturyLink’s Toronto data centers not only support local businesses but also the company’s growing base of international clients that require in-Canada hosting delivery,” said Mark Schrutt, director, services and enterprise applications, at global analyst firm IDC.

    CenturyLink now operates 57 data centers worldwide, including a Canadian footprint in the three major markets of Toronto, Montreal and Vancouver.

    It has been a very active 2014 for CenturyLink in terms of data center expansion. The initial plan for 2014 was to add more than 180,000 square feet and 20 megawatts to its global presence. “We’re always doing more expansion and recently kicked off another slate of projects,” said Leonard. “At any given time we have 8-10 expansion projects.”

    The company recently opened a new facility in Minnesota and launched cloud services in Toronto. CenturyLink also recently launched a new private cloud service, offering to set up elastic private IT infrastructure for customers in any of its data centers around the world. The company has had a private cloud offering before, which came along with its acquisition of Savvis in 2011. Known as Symphony Dedicated in the Savvis portfolio, it was renamed Dedicated Cloud – a product that still exists and that the company still supports. CenturyLink’s colocation roots remain core to its strategy.

    1:07p
    Time Warner Suffers Widespread Outage

    Many customers are waking up to no Internet service this morning after Time Warner Cable suffered a major outage early today. During routine network maintenance at 4:30 a.m., an undisclosed issue with its Internet backbone disrupted the entirety of Time Warner Cable’s network, including Internet and on-demand services.

    The widespread outage began while most were sleeping, but its scope is immense, hitting both coasts. Time Warner Cable has 11.4 million subscribers in the United States. Internet outages are more damaging today as more of our lives move online and the remote workforce grows; many people are waking up today to find they can’t do their jobs.

    Time Warner hasn’t revealed many details on the outage beyond a tweet and a statement saying it was a problem with the Internet backbone:

    “At 430am ET this morning during our routine network maintenance, an issue with our Internet backbone created disruption with our Internet and On Demand services. As of 6am ET services were largely restored as updates continue to bring all customers back online.”

    All services were down, including the company’s website, making it hard for customers to get information early on. Services are still in the process of being restored several hours later.

    2:00p
    Is the Commodity Data Center Around the Corner?

    The data center is changing. We have new methods of cooling and optimizing the data center, and even the use of green energy through next-generation geothermal technologies. The insides of the data center, and what goes into the rack, have been changing as well. New platforms around consolidation, server technology and cloud computing are all affecting how we process and utilize resources.

    The conversation around custom-built servers, networking components and now storage has been heating up. The concept of a commodity data center is no longer reserved for mega-data centers or large organizations. Look at Google: here is an organization that builds its own server platform by the thousands. In fact, Google has developed a motherboard using POWER8 server technology from IBM and recently showed it off at the IBM Impact 2014 conference in Las Vegas. DCK’s Rich Miller recently outlined how “POWER could represent an alternative to chips from Intel, which is believed to provide the motherboards for Google’s servers.”

    But can this translate to the modern organization? Can SMBs and even larger IT shops adopt the concept of a commodity data center? Let’s look at some realities behind what is driving the conversation around a commodity data center, and where there are still some challenges.

    • The emergence of software-defined technologies. Network, storage, compute, management and the data center itself can now have software-defined technologies abstracting the entire environment. The idea behind all of this is to allow the virtual layer to manage and control physical resources. These technologies are now absolute realities, as a number of vendors let you simply point network, storage, compute or other resources at a virtual layer. This can span cloud computing and beyond. Basically, there’s nothing stopping you from buying your own servers, loading them with a flash array and handing those resources to a software-defined storage controller (a minimal sketch of the idea follows this list). Congratulations! You now have commodity storage with a powerful logical management layer.
    • How hardware is changing. In the past, we were absolutely dependent on the hardware platform. Now, data is so agile that hardware is there simply for its resources and performance. Redundancy and data replication allow organizations to hop between storage shelves and even entire blade chassis environments. The point is that the virtual layer is much more in charge than ever before. More organizations are able to purchase hardware and simply have their hypervisor manage it all. If your information is agile and you have a solid N+[insert your number here] methodology, why does it matter whether your hardware is proprietary or commodity, as long as your data stays safe?
    • More automation and orchestration. Open-source tools allow you to scale from your private data center and beyond. These automation tools simply ask you to point resources at them and then let automation policies run their course. Virtual hardware and software profiles are allowing administrators to re-provision entire stacks of hardware dynamically. This level of control was never available before.
    • How robotics help create commodity platforms. OK, so we’re not quite there yet. But we will be. There has been a lot of debate around the introduction of robotics into the data center. Many argue that future data center models will have a lot more standardization, allowing robotics to play a bigger role in a now-commoditized data center. Robotics and automation technologies can range from extremely complex designs to simple data center optimization techniques. For example, a recent TechWeekEurope article discusses how IBM is plotting temperature patterns in data centers to improve their energy efficiency, using robots built on an iRobot Roomba base.
    • Modular and custom-built data centers. You can basically order your data center to go these days. Modular and custom-built data center designs are allowing organizations to create the ideal compute model for their business. These designs can house proprietary systems or commodity platforms. The point is that more organizations are looking at smaller, modular platforms for better density and economics. Traditional data center models still have their place – but the interesting part is the growing diversity in modular and custom-built designs.
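
    To make the software-defined point above concrete, here is a minimal, purely illustrative Python sketch of a logical storage controller that pools capacity from commodity nodes and places volume replicas for N+x redundancy. The class and node names are hypothetical assumptions; real software-defined storage products expose far richer APIs.

        from dataclasses import dataclass

        @dataclass
        class CommodityNode:
            name: str
            capacity_gb: int
            used_gb: int = 0

            @property
            def free_gb(self) -> int:
                return self.capacity_gb - self.used_gb

        class StorageController:
            """Toy logical layer: callers ask for volumes, not for specific hardware."""

            def __init__(self, nodes, replicas=2):
                self.nodes = list(nodes)
                self.replicas = replicas      # the "N+x" redundancy knob
                self.volumes = {}             # volume name -> list of node names

            def provision(self, name, size_gb):
                # Place one replica on each of the least-utilized nodes with room.
                candidates = sorted(
                    (n for n in self.nodes if n.free_gb >= size_gb),
                    key=lambda n: n.used_gb,
                )
                if len(candidates) < self.replicas:
                    raise RuntimeError("not enough commodity capacity for the requested redundancy")
                chosen = candidates[:self.replicas]
                for node in chosen:
                    node.used_gb += size_gb
                self.volumes[name] = [n.name for n in chosen]
                return self.volumes[name]

        pool = StorageController(
            [CommodityNode("node-a", 2000), CommodityNode("node-b", 2000), CommodityNode("node-c", 2000)],
            replicas=2,
        )
        print(pool.provision("vm-images", 500))   # e.g. ['node-a', 'node-b']

    The callers never name a disk or a shelf; the logical layer decides placement, which is the essence of treating the hardware underneath as a commodity.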

    New technologies are creating a powerful new data center model. Both logical and physical changes are allowing a lot more diversity within the modern data center. This means organizations can deploy more logical controllers and allow the abstraction of vast hardware resources. But it’s not all simple. There are still some challenges around moving towards a commodity data center.

    • Asset management. What if a drive fails? What if your board goes bad? How do you replace an entire chassis? Who is in charge of that process? A lot of new questions arise when you own the hardware. In some cases, commodity and custom-built systems also mean that your organization is responsible for all maintenance and hardware issue resolution. Unless you have a good control system in place (see the sketch after this list), managing dozens or even hundreds of server platforms might not make a lot of sense.
    • Challenges around open-source technologies. Are you using an open-source hypervisor? Maybe an open-source cloud management tool? Are you ready to create your own scripts and policies? Unless you have a solid partner to work with or strong internal resources, managing and configuring open-source technologies isn’t always easy. Which brings us to the next point…
    • Lack of human resources. Commodity data centers require a new level of management and control. You have to have administrators ready to help support a much more virtualized environment. Policies, control mechanisms, and physical resources are all managed from a logical layer. How ready is your staff to take on that kind of challenge?
    • When technology capabilities surpass the business. You might be ready to adopt a commodity data center, but is your business? There needs to be complete alignment between the organization and IT entities within a business model. How are your users accessing applications? How are they receiving data? It’s critical to understand how a technological leap to a commodity data center can impact your business.
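
    As a companion to the asset management bullet above, here is a small, hypothetical sketch of the kind of lightweight control system it refers to: a registry that tracks commodity hardware, records component failures and reports what still needs attention. All names and fields are illustrative assumptions, not a real product’s API.

        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class Asset:
            asset_id: str
            model: str
            location: str                          # e.g. "row 3, rack 12, U20"
            failures: list = field(default_factory=list)

        class AssetRegistry:
            """Minimal inventory so someone clearly owns the replacement process."""

            def __init__(self):
                self.assets = {}

            def register(self, asset):
                self.assets[asset.asset_id] = asset

            def report_failure(self, asset_id, component):
                # Log the failed component with a timestamp for the repair queue.
                self.assets[asset_id].failures.append((date.today().isoformat(), component))

            def pending_repairs(self):
                return {a.asset_id: a.failures for a in self.assets.values() if a.failures}

        registry = AssetRegistry()
        registry.register(Asset("srv-0042", "whitebox-2U", "row 3, rack 12, U20"))
        registry.report_failure("srv-0042", "drive bay 7")
        print(registry.pending_repairs())          # {'srv-0042': [('<today>', 'drive bay 7')]}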

    Not everyone can build their own servers, let alone their own data center platforms. Commodity systems are gaining traction, but slowly. Still, as the data center continues to become the epicenter of modern technology, organizations will look for ways to optimize the delivery of content and resources. In some cases you’ll see pure commodity systems. More likely, for organizations outside of Google, Facebook and Amazon, a new trend will emerge: hybrid commodity data centers will become a lot more popular as pieces of the architecture become custom-built. The interesting piece will be the virtual services capable of interconnecting both commodity and proprietary systems. Ultimately, this means more options for administrators, the data center and, of course, your organization.

    2:29p
    Disaster Recovery and Business Continuity: Putting Your Plan in Place

    Ray Emirzian is Vice President of Operations & Product Management at docSTAR, a B2B software firm specializing in cloud document management software and business process automation.

    East Coasters will remember where they were when the lights went out in 2003. Fifty million people lost power during the great blackout, which began with tree contact on power lines in Ohio and a chain of events, including “deficiencies in corporate policies, lack of adherence to industry policies, and inadequate management,”[1] that ultimately resulted in the outage. Looking across the Hudson River from New Jersey to New York City, there was nothing but darkness.

    Figure 1: Northeast Blackout 2003

    So the most important question to ask is, where would your business be without electricity?

    For those who think it won’t happen to them, think again. According to a recent Forrester survey – The State of IT Resiliency and Preparedness – 1 in 3 companies have declared a disaster in the past five years. In 2010, the statistic was 1 in 5.

    Don’t get caught in the dark: preparing for the worst

    To prepare for the worst, there are two areas you need to address. The first is disaster recovery (DR) – the processes put in place to prepare for everything from power outages and natural disasters to terrorist attacks. Systems to manage data backups, data recovery processes, and access to information stored offsite, including online and physical storage, are all integral components of a DR plan.

    The second is business continuity (BC) planning. Implementing policies and procedures that keep the most important pieces of your business running, ensuring your employees know where to go in the event of an emergency, and making sure they know how to access their documents and applications are critical components of a full business continuity plan.

    Many organizations are increasingly looking to third-party solutions to perform a business impact analysis (BIA) and handle DR and BC/BCM initiatives. For small to medium-sized businesses, however, many large-scale DR systems can be cost-prohibitive.

    Before you throw a whole lot of cash at the problem, there are four key areas you need to consider:

    PEOPLE & PROCESSES What is the chain of command and workflow in the event of a disaster? Each department should have a plan in place for its team, and executive management needs a detailed blueprint for communicating with one another and with departments. Make a list of contacts at vendor sites who may need to be reached in an emergency. Communication is key. You should also talk with your technology vendors about their disaster recovery plans to ensure your information is protected and accessible when needed.

    DATA & DOCUMENTS Where are our documents and important data stored now, and how do we access those files? Ensuring continuity of your business processes is critical during a disaster. As a first step, more and more businesses are turning to enterprise content management systems to store documents and data in a central location that can be accessed anytime, anywhere.

    With centralized storage, you don’t need to worry about the destruction of, or access to, physical files on or off-site. Employees can reach their documents from multiple devices via a web browser. With a hosted service, small and medium-sized businesses can enjoy the same security benefits as larger corporations, with very low overhead, minimal startup investment and the security of a redundant, disaster-proof data center. There is no specialized hardware to manage and no up-front capital investment.

    COMPLIANCE & CONTROL What are the industry regulations regarding document and data security? Every industry has its own rules and regulations about handling sensitive data – particularly legal, education, healthcare, manufacturing and state/federal government agencies. Ensuring you are storing documents and data in compliance with industry standards is an important step in protecting your business in the event of an emergency.

    With enterprise content management systems, organizations can save time and money by eliminating on- and off-site physical storage. Not only can all critical files be stored electronically and accessed immediately, but business rules can be implemented to ensure you control who accesses your data.

    TRAINING & TESTING Do we have the staff and budget in place to make training and testing of our DR and BC plans a reality? Many firms don’t understand the full costs of DR when managed internally, and are unable to manage it successfully. Full scale DR vendors exist, but many small to medium size businesses just can’t afford the cost. Most organizations also struggle to understand their cost of downtime.

    In the Forrester study cited above, 57 percent of businesses surveyed said their organizations had not calculated this cost. Those who did know their hourly cost of downtime gave answers in the range of $10,000 to $3.5 million.

    Take the time to calculate how much you could lose with just one to two days of downtime, and then compare this to the cost of aggressively managing your DR plan. Take the time to discuss what your current plan is, train your employees – and then test it out. If it works, you will sleep well knowing that your data is protected!
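
    A quick, back-of-the-envelope way to frame that comparison is sketched below in Python. All of the dollar figures are placeholder assumptions for illustration, not numbers from the Forrester study.

        # Illustrative arithmetic only; plug in your own organization's figures.
        HOURLY_DOWNTIME_COST = 10_000     # assumed: low end of the range reported above ($/hour)
        OUTAGE_HOURS = 36                 # assumed: roughly one to two days of downtime
        ANNUAL_DR_SPEND = 75_000          # assumed: yearly cost of aggressively managing a DR plan

        outage_loss = HOURLY_DOWNTIME_COST * OUTAGE_HOURS
        breakeven_hours = ANNUAL_DR_SPEND / HOURLY_DOWNTIME_COST

        print(f"Estimated loss from a single outage: ${outage_loss:,}")        # $360,000
        print(f"Annual DR spend: ${ANNUAL_DR_SPEND:,}")                        # $75,000
        print(f"DR pays for itself by averting {breakeven_hours:.1f} hours of downtime per year")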

    If you are like many businesses, there may be more work to do. Start by protecting your data and documents today.

    [1] U.S.-Canada Power System Outage Task Force, August 14th Blackout: Causes and Recommendations, April 2004.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:06p
    Intel and HyTrust Partner For Chip-to-Cloud Security for Virtualized Workloads

    Cloud security automation company HyTrust has partnered with Intel to help secure applications and data in virtualized data centers with a new feature called HyTrust Boundary Controls.

    The new controls leverage Intel’s Trusted Execution Technology (TXT) to provide processor-level attestation of the hardware, BIOS and hypervisor to help keep workloads safe from malware and rootkits. Intel Capital joined VMware, Fortinet and In-Q-Tel last year in an $18.5 million financing round for HyTrust.

    Trusted Geolocation in the cloud

    Workloads have become increasingly portable across virtualized computing infrastructures in the enterprise, leaving security and compliance teams scrambling to track and secure resources and enforce policies.

    HyTrust says its new Boundary Controls allow organizations to set policies that require virtualized applications and data to run on a proven, trusted host physically located within defined parameters. Automated mechanisms then ensure that workloads can only be accessed via a specific, designated or trusted server in a trusted location. The company says this will help reduce the potential for theft or misuse of sensitive data, or any violation of regulatory compliance.

    The geo-fencing capabilities work just as the name suggests, HyTrust says: Boundary Controls policies set when and where virtual workloads are able to run. With these controls in place, “if the virtual machine is copied or removed from its defined location, it will not run at all, and the data will not be decrypted on untrusted hosts.”

    Besides policy control by country, state, county or province, HyTrust Label-Based access controls can segment data and data centers based on risk classification or level of confidentiality. An availability control allows IT to classify and validate that hardware in place meets the appropriate availability requirements for a given workload.
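
    To make the mechanics more tangible, here is a hypothetical Python sketch of a boundary-control-style check: a workload starts (and its data key is released) only if the host has passed hardware attestation, sits inside an approved geography and carries the required labels. This is illustrative pseudocode, not HyTrust’s or Intel’s actual API.

        from dataclasses import dataclass

        @dataclass
        class Host:
            name: str
            attested: bool                 # e.g. TXT-style measured launch succeeded
            country: str
            labels: frozenset              # e.g. {"pci", "high-confidentiality"}

        @dataclass
        class Policy:
            allowed_countries: frozenset
            required_labels: frozenset

        def may_run(host, policy):
            """True only if the host is trusted, in-boundary and correctly labeled."""
            return (
                host.attested
                and host.country in policy.allowed_countries
                and policy.required_labels <= host.labels
            )

        policy = Policy(allowed_countries=frozenset({"US", "CA"}),
                        required_labels=frozenset({"pci"}))
        host = Host("esx-17", attested=True, country="US", labels=frozenset({"pci", "prod"}))

        if may_run(host, policy):
            print("start VM and release decryption key")
        else:
            print("refuse to run; keep data encrypted")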

    Ravi Varanasi, general manager of cloud security at Intel, said: “Customers need an assured root-of-trust and attested parameters like location information that can be relied upon to allow seamless movement of VMs in various cloud deployments. As enterprises become increasingly reliant on software-defined networks within virtualized and cloud infrastructures, HyTrust Boundary Controls are exactly the kind of policy driven control with an assured source of such policy information needed to enhance security and ensure compliance.”

    Intel added hardware-assisted security such as TXT and Intel AES New Instructions (Intel AES-NI) to the Xeon E5 architecture years ago. Intel’s TXT stemmed from the Trusted Platform Module (TPM), an initiative of the Trusted Computing Group (TCG) aimed at defending against software-based attacks that attempt to change a platform’s configuration. The TPM specification has since been published as the international standard ISO/IEC 11889, parts 1-4. Intel’s TXT is used in a variety of hardware and software platforms from vendors such as Dell, Hitachi, IBM, Quanta, Red Hat, Supermicro, VCE and VMware, among others.

    5:30p
    Data Center Jobs: ViaWest

    At the Data Center Jobs Board, we have a new job listing from ViaWest, which is seeking a Data Center Engineer in Aurora, Colorado.

    The Data Center Engineer is responsible for monitoring the building’s HVAC, mechanical and electrical systems, performing preventive maintenance, site surveys, replacement of electrical and mechanical equipment, reading and interpreting blueprints, engineering specifications, project plans and other technical documents, performing operation, installation and servicing of peripheral devices, and assisting with equipment start-ups, repairs and overhauls. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

     

    8:00p
    The Home Data Center: Man Cave for the Internet Age

    In the ultimate manifestation of the “server hugger” who wants to be close to their equipment, a number of hobbyists and IT professionals have set up data centers in their homes, creating server rooms in garages, basements and home offices.

    The home data center is a novel extension of the central role that data centers now play in modern life. These enthusiasts are driven by a passion for IT and use their gear for test-driving new equipment, lightweight web hosting or just as the ultimate technology man cave.

    Whatever the motivation, this level of connectivity at home requires some adaptations, including upgrading power and network connections and running cable throughout the house.

    Here’s a look at a few examples of these projects:

    The enterprise is in the house

    Canadian IT professional Alain Boudreault keeps enterprise-class equipment from Dell, HP, Sun, Juniper and IBM in the home data center in his basement, including a high-density IBM BladeCenter. His website provides a detailed overview of the setup, including a diagram of all the components. It includes an OpenStack MAAS (Metal as a Service) cloud and multiple storage systems (iSCSI and Fibre Channel).

    “My first step was to install an electrical box to provide 240 volts at 40 amps, which gives a maximum of 9.6 kW when needed,” writes Boudreault, who teaches application development and uses the facility for testing. “The servers are rarely running all at once, so average consumption is 1-2 kW.” Electricity costs about 7 cents per kilowatt-hour in Quebec, he says.
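
    The arithmetic checks out: 240 volts at 40 amps is 9.6 kW of peak capacity, and at the stated average draw and rate the monthly bill is modest. A small illustrative calculation follows; the 1.5 kW figure is an assumed midpoint of the quoted range.

        volts, amps = 240, 40
        peak_kw = volts * amps / 1000          # 240 V * 40 A = 9.6 kW peak draw
        avg_kw = 1.5                           # assumed midpoint of the 1-2 kW average
        rate_per_kwh = 0.07                    # ~7 cents per kWh in Quebec, per the article
        hours_per_month = 24 * 30

        monthly_kwh = avg_kw * hours_per_month
        print(f"Peak capacity: {peak_kw} kW")
        print(f"Monthly energy: {monthly_kwh:.0f} kWh")                 # ~1,080 kWh
        print(f"Monthly cost: ${monthly_kwh * rate_per_kwh:.2f}")       # ~$75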

    Nonetheless, Boudreault writes that this type of home data center is “not for the faint of heart.”

    The data center as YouTube star

    Some home data center builders post videos to YouTube. The most popular of these is the Home Data Center Project, another Canadian effort that began in 2005 as two computers in a closet and had grown to more than 60 servers as of 2013. The project has been documented in a series of videos that have racked up more than 500,000 views on YouTube. The videos and website document the extensive cabling, cooling and network infrastructure upgrades.


    Racks of servers in a garage are part of the Home Data Center Project, which has been documented in videos with more than 500,000 views on YouTube. (Photo: Home Data Center Project).

    “This project was not designed to make a profit,” writes developer Etienne Vailleux of Hyperweb Technologies. “This setup was simply there as a hobby. But after some time, it quickly became a passion.”

    In 2013, the project migrated from one house to another and downscaled a bit. “A part of the basement was specially designed to house servers and air conditioners,” Vailleux shared in an update. “The project is currently hosting 15 servers. The capacity of the connection is 60 Mbit/s.”

