Data Center Knowledge | News and analysis for the data center industry

Wednesday, March 27th, 2013

    12:30p
    Timelines and Potential Pitfalls of a Custom Data Center

    This is the fourth article in a series on the DCK Executive Guide to Custom Data Centers.

    The advantages of a custom design can be genuinely attractive: it may accommodate non-standard IT hardware, deliver very high energy efficiency, or support very high cooling density. However, there are potential pitfalls for those who have not had a great deal of experience with custom designs. If you change the design radically to meet the requirements of specialized, custom-built hardware, you may lose the ability to adapt easily to different equipment after a hardware technology refresh cycle. This is especially true if your custom hardware requires non-standard cabinets, since the majority of typical computer hardware is designed for standards-based cabinets.

    That is not to say that you should forgo a custom data center design to support specialized hardware and simply limit yourself to standard designs. If your IT systems can gain a substantial performance benefit, you may choose to allocate just one section of the data center to the specialized hardware requirements.

    Timeframe
    If, after carefully weighing the facts, you decide to proceed with a custom design, bear in mind the impact on the timeline. A standard data center can be designed and built in 12-18 months by an experienced builder, once the basic size and capacity have been defined. With a custom design, you need to anticipate extended timelines. The first addition is the preliminary technical requirements discussions, along with the business and cost justification, within your own organization. Once your internal requirements have been defined, more time will be needed to meet with designers and builders to explore the feasibility and cost projections of your custom requirements. These extra steps can add 6-12 months to the timeline. Once the custom design has been finalized, the build-out should take 12-18 months if standard power and cooling equipment is used; if custom equipment needs to be specially fabricated, additional time may be required for those items.
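    For a quick sanity check, here is a minimal Python sketch (ours, not from the guide) that restates the schedule arithmetic above; the phase labels are informal.

    ```python
    # A minimal sketch restating the article's schedule arithmetic: the extra
    # custom-design steps add 6-12 months on top of a standard 12-18 month build.
    phases = {
        "requirements, justification, and feasibility": (6, 12),
        "build-out (standard power and cooling)": (12, 18),
        # Specially fabricated equipment would add further, unspecified time.
    }

    low = sum(lo for lo, _ in phases.values())
    high = sum(hi for _, hi in phases.values())
    print(f"Estimated total: {low}-{high} months")  # -> Estimated total: 18-30 months
    ```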

    You can download a complete PDF of this article series on DCK Executive Guide to Custom Data Centers courtesy of Digital Realty.

    2:30p
    10 Essential Domains for Moving to Private Cloud

    With the expansion of cloud computing and various cloud services, many organizations are now considering some type of cloud model. In many instances, companies that want to keep their data out of a provider's hands are looking to move to a private cloud platform. While this type of environment can certainly bring a lot of benefits, the deployment and planning process has to be conducted very carefully. One of the first concepts to understand is that there isn’t just one magical cloud product. Rather, cloud computing revolves around the functionality of many different data center and infrastructure components.

    To move to a cloud model, there has to be a solid understanding of those underlying components and how they all work together. Starting with infrastructure and virtualization and moving through process and governance, Cisco’s video discusses the Cisco Domain Ten — the ten essential domains you need to know to begin the cloud journey.


    [Image source: Cisco.com | Cisco Domain Ten - Simplifying Data Center Transformation]

    Based upon its many cloud deployments — private and public, enterprise, public sector and service provider — Cisco formulated this comprehensive framework to help you transform your data center and guide new initiatives. In many cases, these new projects may include cloud, virtual desktop, application migration, and data center consolidation.

    Click here to view Cisco’s video on the Cisco Domain Ten framework. The important takeaway is an understanding of the ten key framework areas, or domains, that are critical to consider, plan for, and address as part of your data center and cloud transformation process.

    6:04p
    Global News: Oracle Opens Data Center in Japan

    Here’s our review of some of this week’s noteworthy links for the global data center industry:

    Oracle opens Japan data center. Oracle (ORCL) announced that it is opening a data center in Japan to support its RightNow Cloud Service. With a data center in the region, Oracle can manage service level objectives and data governance for local customers. Oracle RightNow Cloud Service combines Web, Social and Contact Center experiences for a unified, cross-channel service solution in the Cloud, enabling organizations to increase sales and adoption, build trust and strengthen relationships, and reduce costs and effort. “We are dedicated to simplifying IT so customers can focus on driving business innovation,” said Oracle President Mark Hurd. “Oracle’s latest data center is a signal of our commitment to the success of Japanese companies. It will free them from the burden of software management, allowing them to have more resources to engage with their customers.”

    Akamai and Korea’s KT expand partnership. Akamai (AKAM) and Korean telecommunications company KT announced that the companies will expand their strategic partnership. Using Akamai’s Aura Managed CDN, KT will have dedicated CDN capacity available for its own content applications or third-party CDN services. “The era has come where culture and digital content are leading the market. Our main priority is to ensure access to content from any device, at any time, anywhere,” said Heekyung Song, Senior Vice President, Enterprise IT BU, KT. “Through this partnership with Akamai, KT will provide a CDN platform specializing in media delivery, web performance and security so companies can focus on developing quality content and web applications without concerns about delivery.”

    ITG expands with Interxion in London. Interxion (INXN) announced that ITG, an agency brokerage and financial markets technology firm, has expanded its hosting relationship with Interxion by adding space in a new London data center. ITG currently maintains data center space with Interxion in Stockholm, and selected Interxion London for its central city location and access points to leading trading venues. “We are committed to continued investment in technology where performance benefits can be passed to our clients,” said Rob Boardman, CEO of ITG Europe. “Whether you’re looking for sophisticated automated tools, the highest quality dark block liquidity, or personalised high-touch trading, you can rely on our continuous history of innovation to improve your performance while reducing costs.”

    6:18p
    Designing For Dependability In The Cloud

    David Bills is Microsoft’s chief reliability strategist and is responsible for the broad evangelism of the company’s online service reliability programs.

    David Bills, Microsoft

    This article kicks off a three-part series on designing for dependability. Today I will provide context for the series and outline the challenges facing all cloud service providers as they strive to provide highly available services. In the second article of the series, David Gauthier, director of data center architecture at Microsoft, will discuss the journey that Microsoft is on in our own data centers, and how software resiliency has become more and more critical in the move to cloud-scale data centers. Finally, in the last piece, I will discuss the cultural shift and evolving engineering principles that Microsoft is pursuing to help improve the dependability of the services we offer.

    Matching the Reliability to the Demand

    As the adoption of cloud computing continues to grow, expectations for utility-grade service availability remain high. Consumers demand access to their digital lives 24 hours a day, seven days a week, and outages can have a significant negative impact on a company’s financial health or brand equity. But the complex nature of cloud computing means that cloud service providers, regardless of whether they sell infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS), need to be mindful that things will go wrong; it’s not a case of “if things will go wrong,” it’s strictly a matter of “when.” This means, as cloud service providers, we need to design our services to maximize reliability and minimize the impact to customers when things do go wrong. Providers need to move beyond the traditional premise of relying on complex physical infrastructure to build redundancy into their cloud services, and instead use a combination of less complex physical infrastructure and more intelligent software that builds resiliency into their cloud services and delivers high availability to customers.

    The reliability-related challenges that we face today are not dramatically different from those that we’ve faced in years past, such as unexpected hardware failures, power outages, software bugs, failed deployments, people making mistakes, and so on. Indeed, outages continue to occur across the board, reflecting not only on the company involved, but also on the industry as a whole.

    In effect, the industry is dealing with fragile (sometimes referred to as brittle) software. Software continues to be designed, built, and operated on what we believe is a fundamentally flawed assumption: that failure can be avoided by rigorously applying well-known architectural principles as the system is being designed, testing the system extensively while it is being built, and relying on layers of redundant infrastructure and replicated copies of the system’s data. Mounting evidence further invalidates this assumption: articles regularly appear describing failures of heavily relied-on online services, and service providers routinely explain what went wrong, why it went wrong, and what steps they have taken to avoid repeat occurrences. The media continues to report failures, despite the tremendous investment that cloud service providers continue to make in the very practices I’ve noted above.

    Resiliency and Reliability

    If we assume that all cloud service providers are striving to deliver a reliable experience for their customers, then we need to step back and look at what really constitutes a reliable cloud service. It’s essentially a service that functions as the designer intended, functions when it’s expected to, and works from wherever the customer is connecting. That’s not to say that every component making up the service needs to operate flawlessly 100 percent of the time, though. This last point brings us to the difference between reliability and resiliency.

    Reliability is the outcome that cloud service providers strive for. Resiliency is the ability of a cloud-based service to withstand certain types of failure and yet remain fully functional from the customers’ perspective. A service could be characterized as reliable simply because no part of it (for example, the infrastructure or the software that supports the service) has ever failed, and yet not be regarded as resilient, because that record ignores the notion of a “Black Swan” event: something rare and unpredictable that significantly affects the functionality or availability of one or more of the company’s online services. A resilient service assumes that failures will happen, and for that reason it has been designed and built to detect failures when they occur, isolate them, and then recover from them in a way that minimizes impact on customers. To put the relationship between these terms differently: a resilient service will, over time, come to be viewed as reliable because of how it copes with known failure points and failure modes.
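    To make the detect/isolate/recover idea concrete, here is a minimal Python sketch of one common resiliency pattern, a circuit breaker with a degraded fallback. This is our illustration of the general technique, not Microsoft’s implementation; the names and thresholds are invented.

    ```python
    import time

    class CircuitBreaker:
        """Wrap calls to a flaky dependency: detect failures, isolate the
        dependency once it looks unhealthy, recover via a degraded fallback."""

        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures  # consecutive failures before opening
            self.reset_after = reset_after    # seconds to wait before retrying
            self.failures = 0
            self.opened_at = None

        def call(self, primary, fallback):
            # Isolate: while the circuit is open, skip the failing dependency.
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_after:
                    return fallback()
                self.opened_at = None  # probation: give the dependency another try
                self.failures = 0
            try:
                result = primary()
                self.failures = 0      # a healthy call resets the failure count
                return result
            except Exception:
                # Detect: count consecutive failures; open the circuit at the threshold.
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                # Recover: serve a degraded but functional response instead of an error.
                return fallback()
    ```

    A caller might wrap a flaky dependency as breaker.call(fetch_live_data, serve_cached_data) (both names hypothetical): customers keep getting answers, served from cache, while the failing dependency is isolated and given time to recover.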

    Changing Our Approach

    As an industry, we have traditionally relied heavily on hardware redundancy and data replication to improve the resiliency of cloud-based services. While cloud service providers have had success applying these design principles, and hardware manufacturers have contributed significant advancements in these areas as well, we cannot become overly reliant on these solutions as the path to a reliable cloud-based service.

    It takes more than just hardware-level redundancy and multiple copies of data sets to deliver reliable cloud-based services — we need to factor resiliency in at all levels and across all components of the service.
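    As a back-of-the-envelope illustration of why redundancy alone falls short (our numbers, purely illustrative): the textbook availability math for redundant replicas assumes failures are independent, which is precisely the assumption that a correlated, Black Swan-style failure violates.

    ```python
    def parallel_availability(a: float, n: int) -> float:
        """Availability of n redundant replicas, assuming independent failures."""
        return 1 - (1 - a) ** n

    single = 0.99  # one replica at 99% is roughly 3.7 days of downtime a year
    print(parallel_availability(single, 2))  # 0.9999   -- if failures are independent
    print(parallel_availability(single, 3))  # 0.999999 -- ditto

    # A correlated fault -- say, the same software bug deployed to every replica --
    # takes down all n at once, so effective availability collapses back toward
    # the single-replica figure no matter how much hardware redundancy is added.
    ```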

    That’s why we’re changing the way we build and deploy services that are intended to operate at cloud-scale at Microsoft. We’re moving toward less complex physical infrastructure and more intelligent software to build resiliency into cloud-based services and deliver highly-available experiences to our customers. We are focused on creating an operating environment that is more resilient and enables individuals and organizations to better protect information.

    In the next article of this series, David Gauthier, director of data center architecture at Microsoft, discusses the journey that Microsoft is making with our own data centers. This shift underscores how important software-based resiliency has become in the move to cloud-scale data centers.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:45p
    Major Expansion for Equinix in Singapore

    A look at the interior of an Equinix facility in Singapore, where the company has announced plans to build a data center. (Photo: Equinix)

    In a period of strong data center growth in Asia, Singapore is the hottest market in the hottest region. Today Equinix announced the latest in a series of major projects in Singapore, teaming with real estate investment trust Mapletree Industrial to build a 385,000 square foot data center that will add capacity for another 5,000 cabinets of servers and IT gear.

    Equinix has two existing data centers in Singapore, where the colocation and interconnection provider reports strong demand from financial and cloud companies. The new facility will be adjacent to Equinix’s existing SG1 data center, and will be interconnected with it through a dedicated fiber network, allowing customers in SG1 to expand their business within the Equinix platform. The build-to-suit project is scheduled to be completed in the second half of 2014.

    “Singapore is growing in importance as a cloud hub for the region,” said Clement Goh, managing director of Equinix South Asia. “Global and multinational companies have seen Singapore as a gateway for diverse connectivity to the other Asia Pacific markets. Equinix is committed to catering to the increasing demand for data center and interconnection services in Singapore. Our new data center will help our financial and cloud customers drive new business and collaboration opportunities, improve performance and reduce operational cost with low latency and high proximity.”

    The new Equinix IBX data center will be a seven-story building situated within one-north, a 200-hectare development by JTC Corporation designed to host a cluster of research facilities and business park space in the biomedical sciences, IT, media, physical sciences and engineering industries. Located in the southwestern region of Singapore, the property is easily accessible via major expressways and the public transportation network.

    “We are pleased to work with Equinix as it achieves another milestone with the development of its third IBX data center in Singapore,” said Tham Kuo Wei, chief executive officer of Mapletree Industrial Trust (MIT). “The commitment from Equinix demonstrates its confidence in MIT’s development capabilities in customizing high specification industrial facilities. This collaboration allows Equinix to focus on its core business while MIT manages the capital expenditure and development process.”

    Industry bellwethers Equinix and Digital Realty Trust have both actively expanded their space in Singapore. Other companies that have deployed data center space in Singapore over the last two years include Google, Amazon Web Services, IO, IBM, Salesforce.com, SoftLayer and NTT.

    7:45p
    Apple Ready to Roll in Reno … With a Coop?

    AppleInsider has been working hard to unearth information about the Apple data center project in Reno, Nevada. After a drive-by shed little light on the project, the site has obtained some aerial photos and one close-up of a new structure on the Apple site.

    “As the project site was still being finalized, the company asked for permission to begin work on an initial, approximately 20,000-square-foot structure to get a head start on the construction project,” AppleInsider reports.

    This is similar to the approach the company has taken in Prineville, Oregon, where a small structure was deployed quickly, with two larger buildings to follow.

    What does this Apple facility look like? Hop over to AppleInsider for a quick look at the photos, and then come back. It’s a long, rectangular structure with a sloped roof that peaks in the center. Let’s see, where have we seen something like that before …

    An aerial view of the Yahoo data center in Lockport, N.Y.

    That’s an image of the Yahoo “Computing Coop” data center in Lockport, New York. The similarity shouldn’t be too surprising, since Scott Noteboom, one of the Yahoo executives who worked on the Lockport project, is now a leader of the data center team at Apple.

    There are some similarities between the two designs, but some differences as well. Both retain the “coop” structure, adopted from chicken coops that channel hot air into the upper area of the building. Both have large louvers and fans, effectively turning the building into a huge air handler to circulate air around the IT equipment.

    But the Apple facility in Reno appears to be missing the “cupola” that runs along the crest of the roof on the Yahoo data centers, which allows rising server waste heat to be evacuated from the highest point of the roof. This suggests that Apple is taking a different approach to removing hot air. The AppleInsider photos don’t present a full view of the positioning of the louvers on the side of the building, but there are large fans and louvers at the end of the facility, which could mean an airflow pattern in which fresh air enters the side of the building, flows through the servers, and then passes through the hot aisle into a plenum that carries it out through the end of the building. Or not; only Apple knows for sure.

    It’s interesting to note that the design in Reno is different from the smaller “tactical data center” that sits alongside the 500,000 square foot main building at Apple’s campus in Maiden, North Carolina. Here’s a photo from Apple:

    [Photo: The “tactical data center” at Apple’s campus in Maiden, North Carolina. Source: Apple]

    As is typical with Apple, many of the technical details remain undisclosed. But what’s clear is that Apple is using a combination of small and large facilities, and mixing traditional big-box brick-and-mortar structures with pre-fabricated modular components to speed its time to market. It’s a flexible approach that matches facility design to capacity planning, as well as to the possibility that workloads are being matched to different types of facilities (as Facebook has done with its cold storage data center).

    8:15p
    Cloud Providers Aggressively Slashing Prices

    [Chart: Cloud provider price reductions tracked by RightScale. Source: RightScale]

    The cloud pricing wars are on. Cloud management specialist RightScale says it is seeing aggressive price-cutting on the part of major cloud providers.

    The company counted 29 price reductions over the course of 14 months from AWS, Google Compute Engine, Windows Azure, and Rackspace Cloud. Amazon led the pack with eight price reductions on core cloud services, while Rackspace had four, and Google and Azure cut prices three times in eight months. RightScale tracks pricing with PlanForCloud, a tool that attempts to track cloud pricing across the major providers; it includes over 12,000 different prices across six cloud providers.

    Pricing remains volatile, partly due to competition. Just this February, Rackspace introduced new tiered pricing for storage that resulted in price reductions of as much as 25 percent, and AWS, Azure, and Google keep one-upping each other with price cuts.

    PlanForCloud is useful because cloud pricing is in no way uniform; comparing providers can often be like comparing apples to oranges. There are several subsets of services, and these price cuts have varying degrees of impact.
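    As a toy illustration of the apples-to-oranges problem (hypothetical providers and prices, not PlanForCloud data), even a crude normalization such as dollars per vCPU-hour shows how differently shaped the offerings are:

    ```python
    # (provider, instance): (vCPUs, RAM in GB, $/hour) -- every number is made up
    offers = {
        ("ProviderA", "medium"):     (2, 4, 0.12),
        ("ProviderB", "standard-2"): (2, 8, 0.14),
        ("ProviderC", "m-small"):    (1, 2, 0.07),
    }

    for (provider, name), (vcpus, ram_gb, price) in offers.items():
        print(f"{provider}/{name}: ${price / vcpus:.3f} per vCPU-hour, "
              f"{ram_gb / vcpus:.0f} GB RAM per vCPU")
    ```

    Even after normalizing, the RAM-per-vCPU ratios differ, so the “cheapest” provider depends entirely on the workload’s shape.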

    RightScale, as a cloud management platform, wants the underlying infrastructure to be as transparent as possible. It offers the following recommendations:

    1. Develop competency in cloud forecasting – Price cuts are a positive trend when justifying cloud, but they don’t eliminate the need to forecast as accurately as possible (a toy sketch follows this list). Of course, the pitch for the tool is that it assists in this regard.
    2. Consider all your options for price, performance, features and support – Each cloud provider has a different mix of features, performance, and support, so pricing is only one consideration. Pricing often gives a cloud provider the edge in one arena, but you can bet it makes up that cost somewhere else.
    3. Efficiently use the cloud resources you have – Over-provisioning and running unnecessary resources are costly. Cloud doesn’t make sense if you’re not using it properly and efficiently.
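    Here is the toy forecasting sketch referenced in recommendation 1. The workload and rates are made up; the point is the mechanics of projecting a bill and seeing how a cut of the size mentioned above moves it.

    ```python
    HOURS_PER_MONTH = 730

    forecast = {
        # resource: (units, $ per unit-hour) -- all numbers are made up
        "web instances": (12, 0.12),
        "db instances":  (4, 0.45),
        "storage (TB)":  (20, 95.0 / HOURS_PER_MONTH),  # ~$95 per TB-month
    }

    def monthly_cost(items, discount=0.0):
        return sum(units * rate * HOURS_PER_MONTH * (1 - discount)
                   for units, rate in items.values())

    print(f"Baseline forecast: ${monthly_cost(forecast):,.2f}/month")
    print(f"After a 25% cut:   ${monthly_cost(forecast, discount=0.25):,.2f}/month")
    ```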

    Keep RightScale’s position in mind here, as this information supports the use of its own platform. Still, it’s all sound advice regardless of vendor. RightScale has been going the extra mile to provide a transparent look into clouds lately, recently releasing an overview of major outages it tracked across public cloud, private cloud, and hosting providers. It all makes for a useful compendium when shopping around.

