Data Center Knowledge | News and analysis for the data center industry
 

Monday, December 23rd, 2013

    1:00p
    Cloud 2014: Top 10 Trends to Watch in The Year Ahead
    Today we look at the Top 10 cloud trends for 2014.

    Cloud computing is more than just a buzzword. It’s a mega-trend that has become an umbrella term for the ongoing shift from in-house data centers to third-party facilities. Within that mega-trend are many smaller trends, which have important implications for the industry and your IT operations. Here’s a look ahead at the trends that cloud thought leaders believe will be the most significant in 2014.

    Cloud and CDN Continue to Blur, and the Network Plays a Bigger Role

    As perhaps the last bastion to be “cloudified,” the network will continue to be a focal point in 2014. There has also been a lot of interest in “edge” computing and in using the cloud to deliver content more efficiently.

    “The world of cloud and CDN is going to continue to blur,” said Robert Miggins, senior VP of business development at Peer 1. “Clouds that run in multiple geographies will be important. In order to be bold, in order to be a real player in the cloud, you have to be in multiple locations. Those locations need to be able to operate between one another very seamlessly. You get at some distributed content, a la CDN. We’re seeing customers come to us because we have lots of data center choice and they’re connected to one another.”

    “Let’s get that data closer to the edge,” said Tyler Hannan, director of marketing at Basho. “The ability to ensure a continuous experience regardless of where the user is located will be critical.”

    “Cloud computing will become even more fundamental to business strategy, and IT leaders will need to rethink the role of the network to deliver on the promise of cloud,” predicts Juniper Networks. “Enterprise data centers typically operate in silos and the lack of integration between compute, storage and networking has prevented businesses from achieving the goals of agility and operational efficiency. As enterprises begin to realize the benefits of cloud to achieve these goals, and the deployment and orchestration systems like OpenStack and CloudStack mature, managing data centers will be about managing workflows. As a result, there will be full integration of the network into the cloud workflow, making it more operationally efficient and easier to quickly meet changing demands.”
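
    To make that orchestration point concrete, here is a minimal sketch, assuming a 2013-era OpenStack cloud and the python-novaclient library; the credentials, endpoint, image and flavor names are placeholders, not details from any vendor quoted above.

        from novaclient import client

        # Connect to a hypothetical OpenStack endpoint; all credentials
        # here are placeholders.
        nova = client.Client("2", "demo-user", "demo-password",
                             "demo-project", "http://controller:5000/v2.0")

        # Pick an instance size and OS image by name, then boot a server.
        flavor = nova.flavors.find(name="m1.small")
        image = nova.images.find(name="ubuntu-12.04")
        server = nova.servers.create("web-1", image, flavor)

        print("Requested %s (status: %s)" % (server.name, server.status))

    The point is less the specific API than the workflow: compute, storage and network choices become parameters in a script rather than tickets routed to separate silos.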

    Open Source: From Alternative to Prime Time

    “Open source is something that’s been out there for a long time,” said Rackspace CTO John Engates. “A lot of times it was the alternative. The CIO would choose enterprise technology, but there was always a sort of rebellious developer thing. Open source has flipped to where it’s leading the way. Rackspace never believed we’d be able to run our entire technology on open source.”

    “You won’t be looking to a specific vendor for your roadmap,” Engates continued. “This won’t happen overnight for enterprises. I think it’s DevOps that really drives this change. The developer didn’t have as much clout in the old days. They are now much larger influencers of the technology decision. Sometimes it’s just the developer doing his thing with open source software – without an RFP. The developer prototypes the whole thing.

    “The scale of things has driven people to open source. License fees for many, many users don’t make sense. Nobody wants to pay a license fee every time you spin up a cloud server. With web, mobile and social, people will make proprietary the fallback. The emergence of software as a service adds to this – you’re renting, reducing your dependence on some of the historic software vendors. In the old days, nobody could imagine not having Microsoft, Oracle, SAP, but every company is starting to see the opportunity. It’s losing its stranglehold on IT. A lot of SaaS companies see that as the way to build their infrastructure.”

    From Public or Private, to Public AND Private

    Both public and private cloud models will take root; they are complementary, not competitive. “We also see the private-or-public cloud debate of yesteryear shifting to the private-AND-public cloud debate, where enterprises are benefiting from the security and reliability of the private cloud while harnessing the scalability and flexibility of the public cloud,” said Equinix. “In fact, Gartner recently advised enterprises to design private cloud services with a hybrid future in mind and make sure future integration and interoperability are possible. We couldn’t agree more.”

    “We feel so confident about the demand for hybrid computing, and specifically within the world of hybrid, that calling it a prediction is taking the easy way out,” said Peer 1’s Miggins. “We have heard demand for hybrid computing, and the requests have intensified over the last 12 months.”

    Cloud Containers

    Container technology is an easy way to spin applications up and down more efficiently. Rackspace recently acquired the company behind ZeroVM, a container technology; Docker is another player.

    Rackspace CTO John Engates said that if he had to hang his hat on one prediction, it would be this one: containers will take off and be used heavily in production. “We’re getting a lot of interest. Every customer is talking about it, playing with it,” said Engates.

    Containers simplify the deployment and management of cloud applications. The next big thing is containerizing and virtualizing the application, not just the machine. That makes things smaller, lighter and faster. It isolates each individual user in a separate container, and it makes the development experience better.
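
    As a rough illustration of that workflow, here is a minimal sketch in Python that shells out to the Docker command-line client to start and stop an isolated application; the container and image names are hypothetical.

        import subprocess

        def start_container(name, image, host_port, container_port):
            # Start a detached container, mapping a host port to the
            # application's port inside the container.
            subprocess.check_call([
                "docker", "run", "-d", "--name", name,
                "-p", "%d:%d" % (host_port, container_port),
                image,
            ])

        def stop_container(name):
            # Stop and remove the container; the image stays cached for reuse.
            subprocess.check_call(["docker", "stop", name])
            subprocess.check_call(["docker", "rm", name])

        # Example: spin up one isolated web server per user or tenant.
        start_container("web-user-42", "nginx", 8042, 80)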

    Cloud: The Birthplace of Value-Added Services

    “Expect more MSPs and VARs to add cloud Backup-as-a-Service and DR-as-a-Service to their lineup of offerings as a way to bring more value to their customers,” said Janae Stow Lee, senior vice president, Filesystem and Archive at Quantum. “Major storage companies will play a key supporting role in providing the underlying technologies as part of a broader effort to compete with cloud leaders such as Amazon.”
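
    To see why backup is such a natural first value-added service, consider a minimal sketch, assuming the boto library and an existing Amazon S3 bucket; the bucket name and directory path are placeholders.

        import os
        import boto

        def backup_directory(local_dir, bucket_name):
            # AWS credentials are read from the environment or boto config.
            conn = boto.connect_s3()
            bucket = conn.get_bucket(bucket_name)
            for root, _, files in os.walk(local_dir):
                for fname in files:
                    path = os.path.join(root, fname)
                    # Mirror the directory layout in the S3 key names.
                    key = bucket.new_key(os.path.relpath(path, local_dir))
                    key.set_contents_from_filename(path)

        backup_directory("/var/backups/nightly", "example-dr-bucket")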

    1:30p
    Smart Equation for Emergency Power at Data Centers: Code + Common Sense

    Bhavesh Patel is director of marketing and customer support at ASCO Power Technologies, Florham Park, NJ, a business of Emerson Network Power.


    The devastating impact of hurricanes, tornadoes and other severe weather over the past several years has placed an unprecedented burden on commercial facilities, including data centers that need uninterrupted power to maintain business operations.

    As has been evident in the detritus of extreme weather in the U.S. and elsewhere, including this past November’s typhoon in the Philippines and last year’s Hurricane Sandy, codes and standards cannot cover every eventuality. For example, during Hurricane Sandy some generators and fuel sources set up to run emergency backup power in critical facilities – and located within a facility or on a campus in accordance with prevailing code – were unable to operate due to flooding or water damage even though the installation was to code.

    For data centers, which rely on a steady and reliable stream of electricity, power interruption can be extremely costly and, in some cases, damaging to a facility’s reputation.

    Indeed, the price of lost business productivity stemming from interrupted power can soar quickly. According to a Ponemon Institute study completed in 2011, the average cost of data center downtime, based on cost estimates provided by survey respondents, was about $5,600 per minute. The average length of a reported incident was 90 minutes, which works out to an average cost per incident of over half a million dollars.
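
    The arithmetic behind that figure is simple enough to check, as in this short calculation using the study’s averages:

        cost_per_minute = 5600       # Ponemon 2011 average, in USD
        avg_incident_minutes = 90    # average reported incident length

        cost_per_incident = cost_per_minute * avg_incident_minutes
        print(cost_per_incident)     # 504000 -- just over half a million dollars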

    When building new data centers, owners and decision makers involved in the design, construction and operation of the facilities should look beyond code to decide where to place electrical components to ensure reliability of emergency backup power in the face of an extreme weather event.

    Go Beyond The Code

    For example, NFPA 110: Standard for Emergency and Standby Power Systems, 2013 edition, Annex A, Explanatory Material, paragraph A.7.2.4, does say: “EPSS [emergency power supply system] equipment should be located above known previous flooding elevations where possible.” It goes on to state, in paragraph A.7.2.5, “For natural conditions, EPSS design should consider the ‘100-year storm’ flooding level or the flooding level predicted by the Sea, Lake, and Overland Surges from Hurricanes (SLOSH) models for a Class 4 hurricane.”

    But these are suggestions, not requirements.

    And NFPA 110 is not particularly specific with respect to flooding. Chapter 7: Installation and Environmental Considerations, paragraph 7.2.4 says, “The rooms, enclosures, or separate buildings housing Level 1 or Level 2 EPSS equipment shall be designed and located to minimize damage from flooding, including that caused by the following:

    1. Flooding resulting from fire fighting
    2. Sewer water backup
    3. Other disasters or occurrences”

    These references appear in section 7.2.2: Outdoor EPS [emergency power supply] Installations, but are not referred to in the previous section, 7.2.1: Indoor EPS Installations. So unless someone is looking in 7.2.2, they might not even see the references.

    And here’s another example. NFPA 70: National Electrical Code (NEC), 2011 edition, Article 517 explicitly defines the division of a facility’s electrical systems: which loads are essential and which are not. With respect to the essential electrical system, the code defines the equipment system, critical branch and life safety branch. However, as with NFPA 110 and NFPA 99, the NEC does not address flooding with specificity.

    NEC Article 517.35 (C) says, “Careful consideration shall be given to the location of the spaces housing the components of the essential electrical system to minimize interruptions caused by natural forces common to the area (e.g., storms, floods, earthquakes, or hazards created by adjoining structures or activities).” But, again, there are no specific mandates.

    Another Issue: Separation of Wiring

    Another issue not standardized by code is the separation of wiring to enable adequate emergency coverage. NEC Article 517.30 (C) (1): Separation from Other Circuits states: “The life safety branch and critical branch of the emergency system shall be kept entirely independent of all other wiring and equipment and shall not enter the same raceways, boxes, or cabinets with each other or other wiring.” Though that section of code dictates not placing life safety branch and critical branch wiring in the same raceway to prevent simultaneous failure, it does not give an actual distance or even detailed guidelines. That is another area where good judgment based on the specifics of the installation comes into play.

    Code does not specifically warn against placing electrical equipment for emergency power in the basement of a building. For instance, there is no language that precludes locating the generator, switchgear and paralleling gear in the basement, where flooding could occur during times of rising water and where components are more likely to be damaged by water. If there is a major problem in the basement, the switchgear and paralleling gear could both potentially be compromised.

    NFPA 110, Annex A, paragraph A.7.2.4 says, “EPSS equipment should be located above known previous flooding elevations where possible.” That is good advice even if not mandated. Following that train of thought, in flood zones, breaker boxes, building connections and other critical electrical equipment should not only be kept out of the basement but actually placed above the ground floor as well.

    Placement of components for emergency power systems in data centers should take into consideration both code and common-sense conclusions. What’s good in one location for generator placement and fuel storage – e.g., at a facility nowhere near a river or a shoreline – is not necessarily good for another. Taking stock with both code and common sense in mind will make for the best decisions.

    Until code catches up with weather realities, it is up to every stakeholder to make sure emergency power components get the best chance possible when called upon to perform as intended.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:57p
    Microsoft Expands in Quincy, Acquiring 200 Acres

    Microsoft’s Quincy, Washington data center campus is about to get much bigger. Microsoft intends to purchase 200 acres of industrial property from the Port of Quincy for $11 million. This is three times the size of Microsoft’s sizable existing property. Construction on the new site will begin in the spring, with the first phase expected to be completed in early 2015.

    “The land deal is one of the largest in port history,” port Commissioner Curt Morris said. The sale is expected to close in January following a public hearing to announce plans to sell the property.

    Quincy has attracted six other server farms to the area over the years, which sit among 200,000 acres of farmland. Quincy’s motto is “Where Agriculture Meets Technology.” It’s an attractive data center location thanks to cheap hydroelectric power from the nearby Columbia River and land prices that were dirt cheap, at least initially, when Microsoft first established a data center there. Additionally, 20 years ago the Quincy mayor had the foresight to invest heavily in dark fiber, so the connectivity is also there. Along with Microsoft, Yahoo!, Dell, Sabey, Vantage and Intuit have all built sizable data centers in Quincy.

    When completed, the new data center will create 100 full-time jobs.

    This new development will be the largest server farm in Quincy; the site is more than three times the size of the current property Microsoft owns there, which is the size of 10 football fields. The company is clearly building out its infrastructure in support of its cloud computing initiatives.

    Microsoft will pay $3,985,500 for 60 acres the port already owns. It will also pay the port $7,058,700 for adjacent acreage that the port is first buying from private landowners for just over $6.6 million before selling the land to Microsoft. The city of Quincy on Tuesday annexed the private property into the city limits, making way for the transaction to move forward.

    “The company meets all of the zoning requirements of the city,” Morris said. “Our mission is to continue to advance the industrial development of Quincy.”

    The Economic Benefits of a Data Center Cluster

    The combination of cheap power, cheap land and dark fiber set up the perfect storm. Now, thanks to Microsoft and others like Yahoo locating there, property values are rising, and new houses and stores are everywhere. There is always new construction in town, which has grown from 5,400 residents in 2007 to more than 6,200 today.

    The arrival of data centers has meant much more than just jobs. The town has benefited in a variety of ways, from a surge in construction to being able to get 100 Mbps internet connections for 20 bucks a month.

    After receiving $700,000 in sales taxes in 2005, Quincy’s tax revenue grew to $1.5 million in 2006 and nearly tripled to $4.3 million in 2007 due to data center construction by Microsoft and Yahoo. Those two Internet giants were followed by new data center projects from Intuit, Sabey Corp., Dell and Vantage Data Centers. Now, Microsoft is building again.

