Data Center Knowledge | News and analysis for the data center industry

Wednesday, March 5th, 2014

    12:30p
    Puppet 3.2 Adds Content Modules, Speeds Deployment

    IT automation provider Puppet Labs has released Puppet Enterprise 3.2, which includes new capabilities to help customers provision and scale faster. The company has seen the concept of DevOps spread wider within organizations and to larger enterprises. Version 3.2 continues Puppet’s shift toward being application-centric rather than node-centric.

    “As organizations struggle with rapidly increasing node counts and the time pressures of meeting customer demands, reducing cycle-time is becoming more critical,” said Luke Kanies, founder and CEO of Puppet Labs. “Puppet Enterprise 3.2 addresses these challenges by significantly reducing the time it takes our customers to deliver new value – quickly and with confidence.”

    Version 3.2 includes the first set of supported modules to help teams get up and running. The new modules were selected from the more than 2,000 modules on Puppet Forge, and include modules that help synchronize time across nodes, set up database services, manage web servers and control Windows components.

    “Eighty to ninety percent of the work you do is the same as everyone else,” said Nigel Kersten, CIO of Puppet Labs. “Let us commoditize that and let you concentrate on the 20 percent of the rest.”

    The company also simplified deploying and upgrading agents by leveraging native OS packaging systems. “Along similar lines, we’ve streamlined the deployment of enterprise agents,” said Kersten. “An enterprise now saves 2, 3, 4 minutes with every agent. It helps expand and break down silos, particularly in cloud environments where provisioning is critical. The general approach of 3.2 is streamlining everything.”
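
    To put those per-agent minutes in context, here is a back-of-the-envelope calculation; the 1,000-node fleet size is an illustrative assumption, not a figure from Puppet Labs, and the 2-4 minute range is simply Kersten’s quote above.

        # Rough estimate of fleet-wide time saved by faster agent deployment.
        # The 1,000-node fleet is an assumed example; 2-4 minutes per agent
        # is the range Kersten cites above.
        nodes = 1_000
        for minutes_saved in (2, 3, 4):
            total_hours = nodes * minutes_saved / 60
            print(f"{minutes_saved} min/agent x {nodes} nodes = {total_hours:.0f} hours saved")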

    Version 3.2 comes with Razor, available as a tech preview. Razor is a next-generation physical and virtual hardware provisioning solution. With Puppet Enterprise’s rules-based approach, IT teams can improve the speed of provisioning bare metal and scale infrastructure to meet increasing demand. Razor can automatically discover bare-metal hardware, dynamically configure operating systems and/or hypervisors, and hand off to Puppet Enterprise for workload configuration through policy-based automation.

    The tech preview gives customers with racked and stacked machines, who want to save time turning bare metal into hypervisors and servers through automation, early access to upcoming provisioning innovations and the ability to provide feedback and influence future capabilities.
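
    For a sense of what a rules-based provisioning flow looks like, here is a minimal, hypothetical sketch of the pattern the article describes: a discovered bare-metal node is matched against a list of policies and assigned an image before being handed off for configuration. The policy fields, image names and helper function are invented for illustration and are not Razor’s actual syntax or API.

        # Hypothetical sketch of rules-based bare-metal provisioning, in the
        # spirit of what the article describes for Razor. Policy fields and
        # image names are invented for illustration, not Razor's real format.

        # Each policy pairs a match rule with the OS or hypervisor image to install.
        POLICIES = [
            {"name": "hypervisor-hosts", "min_ram_gb": 128, "image": "esxi-5.5"},
            {"name": "general-servers",  "min_ram_gb": 16,  "image": "rhel-6.5"},
        ]

        def match_policy(node):
            """Return the first policy whose rule matches the discovered node."""
            for policy in POLICIES:
                if node["ram_gb"] >= policy["min_ram_gb"]:
                    return policy
            return None

        # A node "discovered" on first boot, reporting its hardware inventory.
        discovered = {"mac": "00:1a:2b:3c:4d:5e", "ram_gb": 192}

        policy = match_policy(discovered)
        if policy:
            print(f"Install {policy['image']} on {discovered['mac']}, "
                  f"then hand off to Puppet Enterprise for workload configuration.")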

    “What we did in (previous version) Puppet 3.1 was set the stage that we were application centric rather than node centric,” said Kersten. “That foundation in 3.1 plus the building blocks of new content, means people with no automation at all can get going pretty quickly.”

    Other new features include expanded platform support, with this version adding Oracle Solaris 11. Solaris joins Microsoft Windows, Red Hat Enterprise Linux, IBM AIX, Debian, Ubuntu and others. Puppet users can run 3.2 agents as non-root, enabling users who don’t have root access to realize productivity gains through automation and extending the capabilities of Puppet Enterprise beyond infrastructure teams to groups such as app developers, DBAs and teams using outsourced infrastructure. The release also includes more than 300 fixes and improvements for better overall stability and performance.

    1:00p
    Scott Noteboom to Keynote Data Center World Global Conference

    Scott Noteboom, the founder and CEO of LitBit, will be the keynote speaker at the Data Center World Global Conference in Las Vegas, held April 28 to May 2. (Photo: Rich Miller)

    Scott Noteboom is watching several horizons. The veteran executive, who has worked at Yahoo and Apple, is constantly scouting new horizons in data center technology. He also is focused on the new geographic horizons for the Internet.

    Noteboom will bring these two themes together in the keynote for the Data Center World Global Conference 2014, which is being held April 28 to May 2 in Las Vegas. Noteboom will present “The Impending Needs and Dynamics of Emerging Market Data Centers – Now to 2025,” which will address the future of the global data center market.

    “With massive emerging market growth occurring between now and 2025, the timing is right to reflect, disrupt and reinvent current gaps that make development in these parts of the world difficult,” said Noteboom.

    A New Global Deployment Model

    Noteboom is immersed in this topic in his new startup, called LitBit, which is developing a new hardware/software platform to serve as the heart of a “data factory” deployment model.

    LitBit is still in stealth mode, much as Noteboom had been since late 2011, when he moved from Yahoo to a leadership position on the Apple data center team. Previously a fixture on the data center conference circuit, Noteboom assumed a low profile during his tenure at Apple, which is famously secretive about its data center operations.

    “It’s good to be back,” said Noteboom. “I went silent for a while, which is our charter at Apple.”

    While he’s not likely to offer many specifics on Apple or LitBit, Noteboom has lots to say on the coming transformation of the global data center market.

    “The next 15 years will see a shift in growth for data center markets,” he said. “This has really been a U.S.-driven industry. Building something optimal relies on a small community of U.S.-centric industry expertise.”

    The Three Ps

    That’s about to change, driven by three factors: population, performance and privacy.

    China and India, with populations of 1.3 billion and 1.2 billion, respectively, have become the fastest-growing regions for Internet usage. Internet infrastructure in these countries lags the standards seen in the U.S., resulting in slower performance. As applications become more interactive, it will become more difficult to serve these users from facilities in the U.S.

    Noteboom says the average data delivery time within the U.S. is 58 milliseconds, compared to 108 milliseconds for users in China.

    “The speed of light becomes an inhibitor,” said Noteboom. “The only way to solve that is to locate the core data centers in China.”
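
    A short calculation shows why. The numbers below assume a roughly 10,000-kilometer trans-Pacific fiber path and the typical two-thirds-of-light-speed signal velocity in glass; both are illustrative assumptions, not figures from Noteboom.

        # Back-of-the-envelope propagation delay for serving Chinese users
        # from a U.S. data center. Path length and fiber velocity are assumed.
        SPEED_OF_LIGHT_KM_S = 299_792                # km/s in a vacuum
        FIBER_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3     # typical signal speed in fiber
        path_km = 10_000                             # rough U.S.-to-China distance

        one_way_ms = path_km / FIBER_KM_S * 1000
        print(f"one way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")

    That works out to roughly 50 milliseconds each way before any routing, queuing or server time, which is in line with the roughly 50-millisecond gap between the U.S. and China figures cited above.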

    Closing the Expertise Gap

    The gap between data center construction techniques in the U.S. and China presents a challenge.

    “We believe Chinese data centers are 40 percent less efficient than the leading edge data centers,” said Noteboom, who said a major problem is building facilities in the right location. “There’s honestly about 25 people that are really good at site selection and they’re all in the U.S.”

    LitBit’s platform will be optimized for emerging markets, while allowing data centers of the future to be continuously reconfigurable and upgradeable. Noteboom says the solution will be found in a lean and flexible construction methodology that separates real estate and technology.

    “The coming years will drive the increased simplification of the design/construction process that emerging markets need, while disaggregating that traditional process from the core technologies that need to advance further and faster for the future,” said Noteboom.

    Scott Noteboom has worked on many of the data center industry’s most iconic projects. He served as senior director for AboveNet until 2005, when he moved to the data center team at Yahoo (2005-2011) and then Apple.

    Noteboom is a student of history and industry, and his presentation at Data Center World Global Conference 2014 will review the history of modern data center development over the years, and how designs have evolved – and in some cases, not evolved very much.

    Data Center World

    “In today’s data center world environment, knowledge is king,” says Tom Roberts, chairman of the Data Center World Conference and president of AFCOM. “AFCOM provides its members important and global educational opportunities and Data Center World is an excellent place to continue your professional development.”

    The spring conference is scheduled for April 28-May 2, 2014 at the Mirage Resort in Las Vegas, with an agenda designed for data center executives and decision makers. Some of this year’s conference panels include:

    “All Data Centers are Local: A panel discussion on global colocation strategy”
    Moderator: Jason dePreaux, Associate Director, IHS
    Participants: RagingWire Data Centers, CBRE and Data Foundry

    “Ensuring a Successful DCIM Project: What You Need to Know”
    Moderator: Jennifer Koppy of IDC
    Participants: Emerson, Nlyte, iTRACS and Cormant

    For additional details, see the web site for the Data Center World Global Conference 2014.

    1:00p
    The Ergonomic Data Center: Save Us from Ourselves

    Chris Crosby is CEO of Compass Datacenters.

    I recently saw a study on data center failures that found that the vast majority of outages are due to human error. Although it’s nice to have some numbers to back up the assumption, was anybody really surprised? After all, we are only human and, as a result, we make mistakes. Usually not on purpose, mind you, but sometimes even our best efforts can result in some degree of mayhem. No one wants to be the one to bring an entire facility to its knees, but the most common explanation in a service interruption post-mortem is usually a “who” and not a “why.” Since we know that we are our own worst data center enemies, doesn’t it seem like someone would start designing these things to help reduce the margin for “human error”?

    While no design may be foolproof, there are a few things that you should be looking for to help reduce the likelihood of those “oh s**t” moments that can ruin everyone’s day in the blink of an eye. Providing front access to equipment to make it easier to maintain is something to insist on. Having to service a CRAH should not require the average technician to possess the flexibility of a member of the U.S. women’s gymnastics team. Those girls are all about 12 years old and four feet tall; the average data center professional is…well, a little older, bigger and vaguely remembers the day he could touch his toes. Thus front access should be a standard feature in your data center, for reliability certainly, but also out of pure human compassion.

    Data centers by definition are complex environments. Finding and correcting problems within a jungle of conduit that would have forced Stanley to leave Dr. Livingstone to fend for himself is not the best way to ensure efficient maintenance and the quick resolution of issues when they arise. All of your conduit should be color-coded and labeled. Not only does this make navigating the facility a lot easier, it also looks pretty cool. I think we can all agree that anytime you can marry ergonomics and visual appeal, you’ve got a winning combination.

    It might also be a nice touch for your data center provider to give you detailed written Operating Procedures and a Sequence of Operations and settings before they turn the facility over. Although you’d think this would be a given, most data center customers get the equivalent of a couple of paper-clipped pages documenting their new facility along with the keys to the joint upon turnover. On-the-job training and trial and error are both effective tools for learning in the right environment; unfortunately, your new data center isn’t one of them. Let’s face it, when your new car comes with more documentation than your multi-million-dollar data center, you’ve got a problem.

    I guess the fundamental question here is why aren’t data centers designed with their users in mind? If human error is the biggest obstacle to data center reliability, then we should build facilities that minimize that potential. In the near future, more customer-oriented, ergonomic features that reduce the possibility for human error will undoubtedly become standard requirements, if not merely for the sake of reliability then to help save us from ourselves.

    Don’t Put That There

    If you’ve ever watched some of the house hunting shows on TV you know that a lot of homes don’t subscribe to what most of us would feel are the standard rules of design. For example, I’ve seen houses that required their owners to reach bedrooms by passing through others, hallways that can only be navigated by moving sideways, and patios populated by all manner of appliances. Data centers tend to be like some of these homes.

    The reasons for this user unfriendliness are pretty straightforward. In multi-tenant environments, the goal of providers is to maximize the area of rentable raised floor. All other considerations thus become tangential, so necessary but non-revenue-producing elements are located wherever they can be accommodated. For those of you familiar with this rationale, this should help answer the question of “why do I need a map to find the POP room?” The rationale for the architectural idiosyncrasies of pre-fabricated solutions (“it’s a 12-foot by 40-foot box”) provides little solace when you’re carting boxes of new servers through the facility to “your data center.”

    Since data centers are far from static environments, and activities like performing moves/adds/changes and unboxing and staging new hardware are regular events, you should insist that your provider use tighter guidelines than “within the same zip code” in locating site features like the loading dock and storage and staging facilities. Unfortunately, “user friendly” doesn’t seem to have made it past the white board stage for most data center designs.

    While attributes like a low PUE and the type of fire suppression system used are certainly important customer considerations, ergonomics is going to become more important in addressing the increasingly dynamic data center environment. Data centers that facilitate the ease of customer operations are the next logical step within the industry, albeit to the detriment of providers whose architectures reflect their own requirements and not their customers’.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Software-Defined Storage: What Does It Really Mean?

    We’ve covered the concept of software-defined technologies (SDx) and have shown how this very real technology can help your data center truly expand. If you haven’t seen our recent SDx Guide, make sure to take a look because there are powerful solutions that directly impact how your organization controls and distributes data.

    With that in mind, software-defined storage has begun to make an interesting impact in the data center and cloud world. Already, SDx helps bring data centers closer together, but what can it really do for the storage component?

    Let’s take a look at the big picture for a second. The model of the traditional data center, dating back a few years, heavily revolved around the physical infrastructure. We didn’t have virtualization or the concept of the cloud as we know it today. With that, we began to have hardware sprawl issues around servers, racks, and other equipment. Virtualization helped sort that out.

    Still, growth around resource demands continued. This expanded to other pieces inside of the data center – specifically storage. Just how many more disks could you buy? How many more physical controllers would you really need to handle an influx of cloud, virtualization and users? At some point, the logical layer would have to be introduced to help the storage component better operate.

    And so software-defined storage began to emerge. The idea here isn’t to take away from the storage controller; rather, it’s to help direct data traffic much more efficiently at the virtual layer. The power really kicks in because software-defined storage creates a much more agnostic platform to work with. So what does the technology look like?

    [Diagram: software-defined storage, with a logical abstraction layer sitting between data requests and heterogeneous physical storage]

    Got the visual? Now let’s break it down.

    Logical Storage Abstraction: Basically, you’re placing a powerful virtual layer between data requests and the physical storage component. This layer allows you to manipulate how and where data is distributed. The great part here is that you’re able to keep a heterogeneous storage infrastructure while still controlling the entire process from a virtual instance. You can present as many storage repositories as you like to the software-defined storage layer and allow that instance to control data flow.

    Intelligent Storage Optimization: Just because you have a logical storage control layer doesn’t mean you can’t still utilize the efficiencies of your existing storage. The software-defined storage component helps you push information to a specific type of repository. You’re able to control performance and capacity pools and deliver information to the appropriate storage type (a minimal code sketch of this routing idea follows this list). However, your actual controllers can still help with thin provisioning, deduplication and more. The power is in the flexibility of this solution. You can present an entire array to the software-defined layer, or just a shelf or two.

    Creating a More Powerful Storage Platform: This hybrid storage model allows you to leverage the power of your physical infrastructure as well as your virtual one. You’re able to create one logical control layer that helps you manage all of the physical storage points in your data center. This helps with storage diversification and helps prevent vendor lock-in. Logical storage abstraction also helps with migrating and moving data between storage arrays and between various underlying resources.
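
    To make the abstraction concrete, here is a minimal sketch of the routing idea described above: a logical layer presents heterogeneous backends as performance and capacity pools and decides where each write lands. The class names, pool labels and placement policy are invented for illustration and are not any particular vendor’s API.

        # Minimal sketch of a logical storage abstraction layer: heterogeneous
        # backends are grouped into pools, and a simple policy routes each write
        # to a performance pool or a capacity pool. All names are illustrative.

        class Backend:
            """One physical array, shelf or LUN presented to the logical layer."""
            def __init__(self, name, tier):
                self.name = name
                self.tier = tier   # "performance" (e.g. SSD) or "capacity" (e.g. SATA)

            def write(self, key, data):
                print(f"writing {key} ({len(data)} bytes) to {self.name} [{self.tier}]")

        class LogicalStorageLayer:
            """The software-defined layer: one control point over many backends."""
            def __init__(self, backends):
                self.pools = {"performance": [], "capacity": []}
                for backend in backends:
                    self.pools[backend.tier].append(backend)

            def write(self, key, data, hot=False):
                # Policy: latency-sensitive ("hot") data goes to the performance
                # pool, everything else to capacity. Spread writes within the pool.
                pool = self.pools["performance" if hot else "capacity"]
                target = pool[hash(key) % len(pool)]
                target.write(key, data)

        # Mixed, heterogeneous repositories presented to one logical control layer.
        layer = LogicalStorageLayer([
            Backend("ssd-array-1", "performance"),
            Backend("sata-shelf-1", "capacity"),
            Backend("sata-shelf-2", "capacity"),
        ])
        layer.write("vm-boot-volume", b"hot data", hot=True)
        layer.write("backup-2014-03-05", b"cold data")

    The physical controllers behind each backend can still handle thin provisioning and deduplication, as noted above; the logical layer only decides placement.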

    2:30p
    VYCON Lands 8 Megawatt Flywheel Deal

    A data center in Texas has chosen VYCON’s flywheel systems to provide eight megawatts of battery-free power protection. Flywheels will protect the facility against costly and damaging power outages.

    Since the facility is lights-out, it was seeking a solution with low maintenance requirements and minimal human intervention. The data center will now be protected by multiple 750 kVA double-conversion uninterruptible power supply (UPS) modules paired with a total of eight megawatts of VYCON’s VDC-XE kinetic energy storage systems.
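
    For a rough sense of scale, the arithmetic below estimates how many 750 kVA modules an eight-megawatt load implies; the 0.9 power factor and the N+1 redundancy example are illustrative assumptions, not details from VYCON or the customer.

        # Back-of-the-envelope count of 750 kVA UPS modules behind an 8 MW load.
        # The 0.9 power factor and N+1 example are assumptions, not article figures.
        import math

        load_kw = 8_000
        module_kva = 750
        power_factor = 0.9                       # converts kVA rating to usable kW

        module_kw = module_kva * power_factor    # roughly 675 kW per module
        modules = math.ceil(load_kw / module_kw)
        print(f"{modules} modules to carry {load_kw} kW, {modules + 1} with N+1 redundancy")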

    “We are seeing more and more data centers vastly increase their uptime and reduce costs by eliminating high maintenance batteries,” said Frank DeLattre, president of VYCON. “As a lights-out facility, this customer needed a redundant and scalable power platform that was low in maintenance and extremely reliable. We’ve designed our kinetic energy storage systems to meet the scalable power requirements these types of high-availability data centers demand.  With the elimination of mechanical bearings, our kinetic energy storage systems significantly improve system uptime and reduce on-going maintenance cost over a 20-year operating period.”

    VYCON’s systems are tested and compatible with all major brands of three-phase UPS systems and are available from channel partners including ABB, Eaton (Powerware), Emerson Network Power (Liebert), Schneider Electric (APC/MGE) and more. The VDC-XE protects power-dependent applications ranging from 40 kW to multi-megawatt installations.

    The company says its systems provide a lower total cost of ownership (TCO) than traditional battery-reliant UPS systems. Data centers, hospitals, broadcast facilities and other mission-critical operations around the globe depend on VYCON’s kinetic energy storage systems to provide reliable and environmentally friendly on-demand power.

    3:00p
    Salesforce.com Boosts Strategic Investments in Europe

    Salesforce.com announces plans to boost its European strategy in response to customer and revenue growth in the region, VMware and Carpathia team to offer a vCloud Government Service offering, and Red Hat brings Enterprise Linux to the Amazon GovCloud (US).

    Salesforce.com grows in Europe. Building on significant customer momentum in the region, Salesforce.com announced that it is further increasing its commitment to Europe with new strategic investments. The plans are in response to the recently reported record revenue growth of 41 percent. Salesforce.com plans to add more than 500 new jobs across Europe in fiscal year 2015 and open new data centers in the U.K., France and Germany. “Cloud computing is at the heart of growth and innovation in Europe, which is why salesforce.com delivered full fiscal year 2014 revenue growth of 41 percent in Europe,” said Miguel Milano, president, EMEA, salesforce.com. “Our tremendous growth and customer momentum is why salesforce.com is significantly increasing its investment in Europe by adding 500 new jobs and opening three new data centres across Europe, in the U.K., France and Germany.” “The U.K. has a growing reputation as the leader in the European digital economy and we welcome this new investment,” said Stephen Kelly, Chief Operating Officer for the U.K. Government. “Within the U.K. Government we are driving a policy of ‘Cloud First’ to improve the way the public sector manages crucial functions, engages with citizens and delivers value for taxpayers.”

    VMware vCloud Government Service from Carpathia. VMware (VMW) announced its bid to secure FedRAMP Authority to Operate for a new enterprise hybrid cloud service offering, VMware vCloud Government Service provided by Carpathia. VMware is collaborating with Carpathia, a trusted cloud operator and leading provider of cloud services and managed hosting for government agencies, to bring this service to market. Emulating the VMware vCloud Hybrid Service, the vCloud Government Service will provide the security and compliance assurance of FedRAMP authorization by drawing on Carpathia’s extensive experience delivering compliant solutions for government agencies. “VMware has helped Federal agencies optimize and manage their data center infrastructure for more than 14 years, and helping them to now use that infrastructure in the cloud is a natural extension of our core business,” said Lynn Martin, vice president, Public Sector, VMware. “More importantly, because so many Federal agencies are already using VMware solutions, it’s a natural evolution of what they are already doing. VMware is committed to empowering these agencies to use the cloud to meet their line of business and mission requirements rapidly, easily and confidently.”

    Red Hat Enterprise Linux now on AWS GovCloud. Red Hat (RHT) announced that Red Hat Enterprise Linux is now available in the AWS GovCloud (US) region of Amazon Web Services (AWS). AWS GovCloud enables U.S. government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. The new option enables customers to deploy sensitive workloads on the AWS cloud and benefit from the use of identical technology as Red Hat Enterprise Linux deployments in their on-premises datacenters. As a result, AWS GovCloud (US) customers will realize the added efficiencies of a standard operating environment across multiple deployment scenarios. “Government agencies need the ability to quickly access computing resources that clouds like AWS GovCloud (US) provide, while still deploying applications on Red Hat’s secure platform, Red Hat Enterprise Linux,” said Paul Smith, vice president and general manager, Public Sector at Red Hat. “On AWS GovCloud (US), agencies can use Red Hat Enterprise Linux on demand, paying for only what they use, when they need it. As agencies determine their cloud strategies, the ability to use Red Hat Enterprise Linux for both on-premises deployments and in the cloud is game-changing.”
