Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, March 26th, 2014

    11:30a
    Data Center Jobs: Another Source

    At the Data Center Jobs Board, we have a new job listing from Another Source, which is seeking a Critical Facility Engineer in San Antonio, Texas.

    The Critical Facility Engineer is responsible for performing routine maintenance tasks in accordance with McKinstry Safety Policy and Procedures, inspecting buildings, grounds and equipment for unsafe or malfunctioning conditions, troubleshooting, evaluating and recommending system upgrades, ordering parts and supplies for maintenance and repairs, soliciting proposals for outsourced work, working with vendors and contractors to ensure their work meets McKinstry and Client standards, and performing all maintenance to ensure the highest level of efficiency without disruption to the business. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    12:00p
    IBM SoftLayer Extends its Cloud to Hong Kong

    The first of IBM’s new data centers opens for online orders this week in Hong Kong. The SoftLayer facility is the first of 15 data centers Big Blue plans to open in 2014 as part of its $1.2 billion commitment to data center expansion in support of its cloud.

    “As part of IBM’s expanding global cloud footprint, we are launching a new data center in Hong Kong,” said Lance Crosby, CEO of IBM SoftLayer. “This effort is designed to enable clients to benefit from the power and performance of secure cloud services built on open standards, providing the elasticity needed by the fast-growing, entrepreneurial businesses that Hong Kong is known for.”

    The Hong Kong data center will have capacity for more than 15,000 physical servers. It adds to an existing Asia footprint that includes a data center in Singapore and points of presence in Hong Kong, Singapore, and Tokyo. Network connectivity is provided by multiple Tier-1 carriers. SoftLayer’s strategy is to be within 40 milliseconds of latency of any customer in the world, a goal the funding IBM is pumping into data center infrastructure is helping it realize.

    SoftLayer already had a solid Asia-Pacific customer base, and the new data center only strengthens its position in the region. It counts Distil Networks, Tiket.com, Simpli.fi, 6waves and Beijing Elex as customers in Asia.

    SoftLayer builds its data centers around the world in a consistent, standardized format. IBM touted $4.4 billion in cloud revenue in 2013 and has been pumping in more investment as it takes on incumbent Amazon Web Services, along with increasing pressure from the likes of Rackspace and, most recently, Cisco, which just committed to investing $1 billion in cloud data centers and service delivery.

    SoftLayer combines bare metal, virtual servers, and networking in a single platform. Everything is deployed on demand with full remote access and control. Customers can create public, private and hybrid clouds for their applications and workloads.

    1:00p
    The DCK Guide to the As-a-Service Revolution

    Something interesting is happening in the cloud and data center space. We’re seeing a lot of abstraction around existing hardware environments and new types of delivery models emerging. Recently, we presented the DCK Guide to Software-Defined Technologies, which looked at the power of intelligent resource control and how software can make hardware resources span the globe.

    Now a new type of revolution is happening, this time around services and delivery models. The proliferation of cloud has created new service models aimed at very specific problems. We’re not talking about email services here, or even online backup, although those are service models too. The industry has come much further than that.

    When cloud computing started, there were some powerful foundational service platforms which helped fuel the as-a-Service deployment model. Let’s take a look at the “original” three and where they are now:

    • Infrastructure-as-a-Service (IaaS) – Probably the most basic of the cloud service offerings, IaaS has come a long way. Traditional data centers now offer complete IaaS portfolios that scale from the physical to the logical. New efficiencies around resource pooling and intelligent workload routing have boosted the IaaS model into a powerful on-demand cloud platform. IaaS offerings can now include everything from security to virtualization, and everything in between.
    • Platform-as-a-Service (PaaS) – With so much new development around mobile apps, software delivery, and content optimization, the PaaS model has really evolved to become a solid mechanism for logical workload delivery. Now, developers can utilize PaaS offerings to create intelligent software layers which then help optimize the rest of their organization. Platforms like Azure come built with dynamic scale capabilities – adjusting to your application and platform needs on the fly.
    • Software-as-a-Service (SaaS) – This service offering probably deserves an article of its own. The software model has evolved considerably since the inception of SaaS. Mobility, IT consumerization, and cloud computing have all had direct effects on the SaaS model. Now, federated services, HTML5, and other technologies make SaaS delivery even more powerful. A software revolution is under way in which applications are becoming genuinely hardware-agnostic, and this service model will likely continue to evolve.

    With all of that in mind, let’s take a look at some newer service platforms, what they’re doing and how they can help!

    Logging as a Service (LaaS) – Compliance, security, governance and regulations are still very real concerns in today’s business world. Companies looking to expand their systems into the cloud have to abide by the rules governing their specific industry. This is why LaaS is becoming more popular. Numerous managed services providers are actively creating log, source, and even data-point aggregation services. The idea is to centrally store logs and create cloud-ready audit trails for the organizations that need them. In a real-world scenario, your major data center location becomes the central hub for all log file processing; you could even have a distributed data center platform that still points to a central log aggregation model. This type of service is becoming popular with organizations that have a lot of information, unstructured data, or big data and business intelligence solutions.
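    To make the “central hub” idea concrete, here is a minimal sketch of how a remote site might ship structured log records to a single aggregation endpoint. It is only an illustration: the aggregator URL, the record fields, and the ship_log helper are hypothetical and not part of any particular LaaS offering.

        # Minimal sketch: an edge site forwarding structured log records to a
        # central aggregation service. Endpoint and field names are hypothetical.
        import json
        import socket
        import time
        import urllib.request

        AGGREGATOR_URL = "https://logs.example.com/ingest"  # hypothetical central hub

        def ship_log(source: str, level: str, message: str) -> None:
            """Send one log record to the central aggregation service."""
            record = {
                "timestamp": time.time(),
                "host": socket.gethostname(),
                "source": source,    # e.g. "firewall", "hypervisor", "badge-reader"
                "level": level,
                "message": message,
            }
            request = urllib.request.Request(
                AGGREGATOR_URL,
                data=json.dumps(record).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(request, timeout=5)  # the hub stores the audit trail

        # Example: a remote data center reporting an access event to the hub.
        # ship_log("badge-reader", "INFO", "door 4 opened by badge 1123")

    In practice the transport, schema, and retention policy come from the provider; the point is simply that every site writes to one place, so the audit trail stays complete.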

    Recovery as a Service (RaaS or DRaaS) – First of all, this is NOT cloud-based backup. The big difference is that RaaS protects data and provides standby computing capacity on demand to facilitate a more rapid recovery. The idea behind RaaS or DRaaS is that the cloud is used for dynamic recovery of applications, data points, or an entire infrastructure. The great thing about this model is that organizations only pay for the recovery capacity they need, which makes it much more efficient than traditional DR solutions where a hot site runs continuously. As cloud becomes more prevalent in the modern business model, more organizations will look to make their platforms more resilient. Gartner agrees, predicting that by 2014, 30 percent of midsize companies will have adopted recovery-in-the-cloud, also known as disaster recovery-as-a-service (DRaaS), to support IT operations recovery. Existing service models are already helping out large organizations. For example, Bluelock Recovery-as-a-Service (RaaS) solutions enable organizations to recover their IT resources efficiently and effectively when an adverse situation strikes, protecting them from loss of revenue, data, or reputation. The model integrates directly with VMware vCloud to create a powerful, multi-tenant recovery solution. Couple that with automation, failover testing, intelligent replication, and next-generation cloud security, and you’ve got a powerful RaaS solution.
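    To see why pay-for-what-you-use recovery tends to beat an always-on hot site, consider the rough, back-of-the-envelope comparison below. Every rate and duration in it is invented for illustration only and does not reflect any vendor’s actual pricing.

        # Hypothetical cost comparison: always-on hot site vs. pay-per-use RaaS.
        # All rates and durations are made up for illustration.
        HOURS_PER_YEAR = 24 * 365

        # Traditional hot site: full standby capacity billed around the clock.
        standby_rate = 5.00                    # $/hour for full standby capacity
        hot_site_cost = standby_rate * HOURS_PER_YEAR

        # RaaS/DRaaS: inexpensive replication all year, full capacity billed only
        # while actually failed over, plus a couple of scheduled failover tests.
        replication_rate = 0.50                # $/hour to keep data replicated
        failover_hours = 48 + 2 * 8            # one 48-hour outage + two 8-hour tests
        raas_cost = replication_rate * HOURS_PER_YEAR + standby_rate * failover_hours

        print(f"Hot site: ${hot_site_cost:,.0f} per year")
        print(f"RaaS:     ${raas_cost:,.0f} per year")

    With these invented numbers the cloud model costs roughly a tenth of the hot site; the exact ratio obviously depends on real rates and on how often you actually fail over.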

    1:30p
    Cologix Opens the Doors at its Second Vancouver Data Center

    Cologix has opened for business at its second data center in Vancouver, adding 15,000 square feet of inventory at 1050 West Pender Street. The new colocation facility and Internet exchange aligns with Cologix’s business strategy of providing network neutral internet access in growing second-tier cities. This is Cologix’s fourth new build in the last 13 months.

    “Vancouver is a growing market within the global network traffic map. Similar to the other markets that Cologix serves, it’s an emerging technology region that requires a high level of connectivity,” said Grant van Rooyen, president and chief executive officer, Cologix. “We’re making a significant investment in Vancouver to address the void left by underinvestment in downtown data centre capacity and the consolidation of network neutral providers by carriers. A strong pre-sales period has validated the market need from both local and global firms.”

    The facility is green and designed for efficiency. It leverages air-side economizers and a hot air containment system that takes advantage of the cool Vancouver climate. The data center is supported by N+1 cooling, redundant UPS and N+1 generators. Six fibre networks in the building can be accessed through Cologix’s Meet-Me-Room, and there are dark fibre ring connectivity options to the local carrier hotel six blocks away at 555 West Hastings.

    The first phase of the build-out has already been successfully commissioned, including space for 200 cabinets. As of the March 25 opening date, 25 percent of phase one has already been sold.

    The facility is Cologix’s eighteenth data centre in North America and eleventh in Canada. The expansion was initially announced last July.

    Vancouver is Canada’s third largest market, home to more than two million residents and a plethora of technology companies. Vancouver has also been a hot spot for film production, subbing for more expensive city locations, so there’s a booming creative industry as well.

    Vancouver is an important market for Cologix. The company believes the market is underserved in terms of network neutral colocation options downtown.

    2:00p
    Data Center Interconnectivity: A Keystone to Business Strategy

    The modern data center is no longer just a single node housing all of your information. Rather, through the power of data center interconnectivity, the entire environment has been transformed into a massive world of shared resources, intelligent connectivity and vast capabilities to scale. Today, there’s a dynamic cloud and data center environment that is the basis for global business.

    Business is now more virtual, geographically dispersed and mobile than ever, with companies building new data centers and clustering servers in separate locations. The challenge is that companies vary hugely in scale, scope and direction. Many are doing things not even imagined two decades ago, yet all of them rely on the ability to connect, manage and distribute large stores of data.

    The next wave of innovation relies on the ability to connect, manage and distribute your data dynamically.

    When determining which companies to connect with, where to make those connections, and for what purposes, your business strategy is at the core of your data center strategy. In this white paper from Equinix, you quickly learn about the importance of creating an interconnected data center model spanning cloud, SaaS, network, and even IT service providers.


    So how can intelligent interconnectivity help? Here are a few ways:

    • Interconnectivity means more data center and business opportunities.
    • Interconnectivity brings your cloud and organization closer to the Internet of Everything (IoE).
    • Interconnectivity lets you create your own globally dispersed, interconnected cloud platform.

    Download this white paper today to learn about the power of a cloud and data center model that can help your organization stay globally connected. In creating your own infrastructure, the power of interconnectivity will allow you to span more regions, stay more resilient, and control your critical resources far more efficiently. As Brian Lillie, CTO of Equinix, points out, “If you don’t make the right data center choice, your ability to collaborate with partners, customers, and suppliers worldwide is severely compromised [or is not what it could be]. Which means your ability to innovate and compete suffers. So does your business growth.”

    2:05p
    Who Owns Containment?

    Cary Frame is President and founder of Polargy, a provider of hot- and cold-aisle containment solutions.

    Who owns containment? No one. This is the problem.

    Hot- and cold-aisle containment is a data center best practice that is experiencing hyper-growth in adoption because of its large impact on energy efficiency and operating cost savings. Interestingly, there is still no clear ownership of containment within the enterprise, among industry trades or between manufacturers.

    We work on the leading edge of growth in data center containment by focusing on product innovation and enabling fast and precise implementation. We offer this perspective on containment ownership based on our observations from over five years in the containment market.

    In our experience, what drives ambiguity around containment ownership is that it exists along the boundaries of job scope for multiple traditional players within data center white space. It also represents a more customized solution set than much of the industry is accustomed to.

    Different Perspectives on Containment

    On the user side, containment physically touches data center server racks, which are the responsibility of IT or IT Operations management within the enterprise, but it significantly impacts air conditioning performance, which is typically under the purview of facilities management. In addition, some enterprises have corporate energy managers who want or need to participate in the discussion. On the supply side, no single manufacturer type has claimed the category and no trade (mechanical, electrical, etc.) has taken a lead role. Because no one has stepped into full ownership of containment, up to five separate groups inside and outside the data center currently get involved.

    Within the enterprise, we see retrofit projects managed by data center operations as often as by facilities. However, we rarely see IT responsible for driving decisions, and though we find energy managers at the table, they almost never drive a project, but rather consult on ROI. When it comes to commissioning containment, all three constituents have strong stakes in the upgraded operating environment.

    What’s the Cold Aisle Containment Strategy?

    As part of the standard engagement process we request that all three groups participate in setting outcome targets and in commissioning planning. The key question these groups must agree on is what the new cold aisle temperature will be. Typically, IT people seek cold aisle temperatures in the mid-60s, data center operations people tend to favor temperatures in the low-70s, and facilities people prefer to run near the ASHRAE recommended limit of 80.6°F. Beyond these three operational groups, trades and manufacturers also suffer from containment ownership ambiguity.

    As a lead containment contractor, we routinely train and subcontract a variety of firms from other trades to install containment. Our solutions have been installed by low voltage, flooring, interior, mechanical, and electrical contractors. Scholes Electrical and Mechanical in New Jersey has both electrical and low voltage groups, and Polargy has done projects with both groups for the same client. No particular contractor type has emerged as the one best suited to initiate and own containment projects.

    “At CRB, we’ve seen a growing number of owners procure containment from containment companies like Polargy, but also from rack makers like Chatsworth,” reports Daniel Bodenski, Director of Mission Critical Services at CRB. “Likewise, in our mission critical project work, we’ve seen a variety of subcontractors install containment, including electricians, flooring contractors, and again the containment vendors themselves. No single group appears to be claiming full ownership yet.”


    2:18p
    NVIDIA Targets Need for Speed With Ultra-Fast GPU Interconnect

    At its annual GPU Technology Conference (GTC) in San Jose this week, NVIDIA (NVDA) laid the foundation for its Pascal GPU architecture with the NVLink high-speed interconnect, launched a GPU rendering appliance, and introduced a new Tegra K1-powered development kit for the embedded market. The conference conversation can be followed on Twitter under the hashtag #GTC14.

    NVIDIA announced plans to integrate a high-speed interconnect into its future GPUs. NVIDIA NVLink will enable GPUs and CPUs to share data five to 12 times faster than they can today. This will eliminate a longstanding bottleneck and help pave the way for a new generation of exascale supercomputers that are 50 to 100 times faster than today’s most powerful systems.

    NVLink will be part of the 2016 Pascal GPU architecture and is being co-developed with IBM, which will incorporate it in future versions of its POWER CPUs. NVLink joins IBM POWER CPUs with NVIDIA Tesla GPUs to fully leverage GPU acceleration for a diverse set of applications, such as high performance computing, data analytics and machine learning.

    Overcoming the PCIe Bottleneck

    “NVLink enables fast data exchange between CPU and GPU, thereby improving data throughput through the computing system and overcoming a key bottleneck for accelerated computing today,” said Bradley McCredie, vice president and IBM Fellow at IBM. “NVLink makes it easier for developers to modify high-performance and data analytics applications to take advantage of accelerated CPU-GPU systems. We think this technology represents another significant contribution to our OpenPOWER ecosystem.”

    The NVLink interface addresses the bottleneck with PCI Express, which limits the GPU’s ability to access the CPU memory system. PCIe is an even greater bottleneck between the GPU and IBM POWER CPUs, which have more bandwidth than x86 CPUs. NVLink will match the bandwidth of typical CPU memory systems, and it will enable GPUs to access CPU memory at its full bandwidth. GPUs have fast but small memories, and CPUs have large but slow memories. Accelerated computing applications typically move data from the network or disk storage to CPU memory, and then copy the data to GPU memory before it can be crunched by the GPU. With NVLink, the data moves between the CPU memory and GPU memory at much faster speeds.

    The Unified Memory feature will simplify GPU accelerator programming by allowing the programmer to treat the CPU and GPU memories as one block of memory. NVIDIA GPUs will continue to support PCIe, but NVLink is substantially more energy efficient per bit transferred than PCIe. NVIDIA has designed a module to house GPUs based on the Pascal architecture with NVLink. This new GPU module is one-third the size of the standard PCIe boards used for GPUs today. Connectors at the bottom of the Pascal module enable it to be plugged into the motherboard, improving system design and signal integrity.
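    The copy step described above is easy to see in code. The short sketch below uses CuPy, a Python GPU library chosen here purely for illustration (it is not part of NVIDIA’s announcement), to show the explicit host-to-device and device-to-host transfers that cross PCI Express today and that NVLink is designed to speed up.

        # Sketch of the CPU-memory -> GPU-memory movement pattern described above.
        # CuPy is our choice for illustration; any GPU runtime shows the same steps.
        import numpy as np
        import cupy as cp

        # 1. Data arrives in CPU (host) memory, e.g. after a read from disk or network.
        host_data = np.random.rand(1 << 20).astype(np.float32)

        # 2. Explicit copy from CPU memory into the GPU's small, fast memory.
        #    Today this transfer crosses PCIe -- the bottleneck NVLink widens.
        device_data = cp.asarray(host_data)

        # 3. The GPU crunches the data entirely out of its own memory.
        device_result = cp.sqrt(device_data) * 2.0

        # 4. Copy the result back to CPU memory for the rest of the application.
        host_result = cp.asnumpy(device_result)

    With the Unified Memory feature described above, the explicit copies in steps 2 and 4 disappear from the programmer’s view, although the data still moves between the two memories underneath.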

    GPU Rendering Appliance

    NVIDIA also launched a GPU rendering appliance that dramatically accelerates ray tracing, enabling professional designers to largely replace the lengthy, costly process of building physical prototypes. The new Iray Visual Computing Appliance (VCA) combines hardware and software to greatly accelerate the work of NVIDIA Iray, a photorealistic renderer integrated into leading design tools like Dassault Systèmes’ CATIA and Autodesk’s 3ds Max. Multiple Iray appliances can be linked, speeding up the simulation of light bouncing off real-world surfaces by hundreds of times or more. As a result, automobiles and other complex designs can be viewed seamlessly at high visual fidelity from all angles. This enables the viewer to move around a model while it’s still in the digital domain, as if it were a 3D physical prototype.

    “Iray VCA lets designers do what they’ve always wanted to – interact with their ideas as if they were already real,” said Jeff Brown, vice president and general manager of Professional Visualization and Design at NVIDIA. “It removes the time-consuming step of building prototypes or rendering out movies, enabling designs to be explored, tweaked and confirmed in real time. Months, even years – and enormous cost – can be saved in bringing products to market.”

    3:00p
    Best of the Data Center Blogs for March 26

    Here are some of the notable items we came across in this week’s surfing of the data center blogs. It’s been a while since our last roundup, so we’ve got seven items in this installment:

    True Detective and Data Center Trade Shows - Compass CEO Chris Crosby reflects on HBO’s “True Detective” and the state of data center conferences: “Interesting and compelling content makes for must see TV. As we enter trade show season, shouldn’t this same axiom work for them? Instead, the most interesting aspect of the typical data center trade show is determining who offered the most interesting tchotchke.”

    Power Quality Issues: Silent Efficiency Killer - From the Schneider Electric Blog: “Power quality issues cost the U.S. economy about $15 billion each year. Because approximately 80 percent of all power-quality problems originate from the customer’s side of the meter, facility owners, managers, designers and other high-tech equipment users need to understand, manage and avoid power quality issues.”

    All Herald The ‘Cloudy Mainframe’ - From Ben Kepes via Forbes: “Cloudwashing or truly revolutionary? ASG Software Solutions is leading with that most attractive of concepts and promising “cloud based automation and service orchestration” for mainframes. Do we see an honest merging of technology paradigms of yesterday (of, 30 years ago) and today?”

    Data Centers and the Internet of Things to Come – At Wired, Digital Realty CTO Jim Smith ponders the role of the data center in an instrumented world: “While it is still too early to tell exactly how the Internet of Things will unfold over the next few years, one thing is clear: data centers will play a critical role.”

    Cisco UCS Five Years On: Can Cisco UCS really be five years old? Todd Brannon looks back in a post at the Cisco Data Center blog: “Thinking back five years to March of 2009, when Cisco introduced UCS, the economy was still spiraling into the worst recession of our lifetime. IT budgets were being slashed. Many wondered if it was the right time for Cisco to enter a new market with deeply entrenched competitors. As it turns out, it was the perfect time. Because change occurs fastest when times are hard.”

    Why Maps Matter to Business Continuity and Risk Professionals – SunGard Availability’s Seema Sheth-Voss looks at maps and DR preparation: “Do maps matter to business continuity and risk professionals? Absolutely. Crisis and incident management personnel in the public and private sector routinely subscribe to all sorts of alerts with respect to their location. But taking that flood of information, putting it all together, and finding the insights contained can be a challenge! It’s time for IT business continuity and risk professionals to leverage the map analytics already available to other professionals.”

    Equinix Performance Hub and the Needs of the Enterprise - At the Equinix Interconnections blog, Jay Lindsey lays out several use cases that spell out different ways enterprises can benefit from installing a Performance Hub, which extends a company’s existing network into an Equinix data center.

    8:37p
    Amazon Unveils AWS Price Cuts, Launches Desktop WorkSpaces

    A day after a shot across the bow from Google, it was Amazon’s turn to unveil new features and pricing in the cloud wars. At the AWS Summit in San Francisco, Amazon announced new instance families and the general availability of its virtual desktop offering, Amazon WorkSpaces. And, as many had been waiting for, AWS responded to Google’s price cuts by slashing its own prices.

    The biggest difference between Amazon and Google’s cloud presentations was the intended audience. Google targeted the tech savvy, while Amazon chose to cover, well, basically everyone. The company threw out so many customer examples that you could write a Russian novel-length rundown of who’s using AWS.

    AWS Slashes Prices

    Amazon has lowered pricing 41 times since its inception, and the 42nd reduction was a big one. “We take our large scale and pass that to customers,” said keynote speaker Andy Jassy, Senior Vice President at Amazon. S3 pricing is dropping by 51 percent on average, while Elastic Compute Cloud (EC2) prices are falling by about 10 to 38 percent, depending on the instance family. Amazon also announced price drops of 28 percent for its Relational Database Service and 27 to 61 percent for its Elastic MapReduce service.

    “Lowering prices is not new for us,” said Jassy. “It’s something we do on a regular basis. Whenever we can remove cost in our cost structure, we pass it on. You can expect us to continuously do this.”

    Pricing charts aren’t sexy, but the main takeaway is that the cloud providers with the most scale are aggressively cutting prices, as we saw with Google yesterday. The raw building blocks of computing are becoming commoditized. The key will be adding value atop these compute resources. Amazon does this through the sheer number of services it offers, as well as through the richest ecosystem of applications and developers in the cloud world.

    Commoditization is a much-maligned thing: the materials for building a house are all commodities, but that doesn’t mean I’ll buy the raw materials and build the house myself to save money. Price cuts shouldn’t be the main consideration, which is why neither Google nor Amazon led with pricing.

    Amazon’s price cuts are every bit as aggressive as Google’s. Having lived through the hosting price wars of the 2000s, we’re seeing the same thing happen now with cloud pricing. However, the race isn’t only to the bottom on price; it’s also to the top on features. And Amazon has a big head start, plus the biggest ecosystem out there with which to build those features.

    WorkSpaces Enters General Availability

    WorkSpaces is Amazon’s virtual desktop offering in the cloud. Managing desktops on premises is difficult, which led to the creation of the VDI space. Most VDI solutions, however, are expensive and put the management burden on the end user.

    “This was the request we got most frequently from companies,” said Jassy. “That’s why we launched Amazon WorkSpaces. It gives central management and security, without worrying about the hardware, infrastructure management and is half the price of other solutions.”

    WorkSpaces launched in preview in November, and 10,000 customers have signed up since then. It is now generally available to everyone.

