Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, November 26th, 2013

    1:00p
    Government Clouds: What is FedRAMP?
    What does it take to run a secure government cloud? We take a look at the requirements and who has met them.

    What do Akamai, Lockheed Martin, Microsoft, AWS and the U.S. Department of Agriculture all have in common? They are all running government clouds – to be exact, they are FedRAMP-compliant cloud service providers (CSPs). These organizations took a few extra steps to join a very small group of providers meeting a very specific set of requirements. In some cases, these providers are delivering Infrastructure as a Service (IaaS) capabilities, while others are providing Platform as a Service (PaaS) offerings.

    What is FedRAMP?

    Let’s begin here: What is the Federal Risk and Authorization Management Program (FedRAMP)? Its website tells us it is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. Now for some background and history.

    FedRAMP reached initial operational capability in 2012 and began providing guidance to government and corporate organizations. The core objectives are:

    • Reduce duplicative efforts
    • Increase efficiencies and remove security inconsistencies
    • Reduce cost inefficiencies associated with the current security authorization process

    During the creation process, the FedRAMP program collaborated closely with a number of cloud security and industry experts. Notably, this collaboration spanned the public, private and government sectors. It included the government organizations known by their acronyms – GSA, NIST, DHS, DOD, NSA, OMB – as well as the Federal CIO Council and numerous other key cloud and infrastructure professionals.

    With that in mind, let’s dive into the program a bit. FedRAMP helps provide a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. There are three ways to be associated with the FedRAMP program:

    • You can be a Federal Agency which utilizes FedRAMP
    • You can be a Cloud Service Provider which becomes FedRAMP Security Authorized
    • You can become a Third-Party Assessment Organization (3PAO) for the FedRAMP Accredited Assessor Program.

    Examples and Requirements Process

    In understanding this program – it’s important to look at a couple of examples and understand the requirements process.

    Example 1: You would like to become a 3PAO FedRAMP provider.

    According to GSA.gov, to become a FedRAMP Independent Third-Party Assessment Organization (3PAO), organizations must undergo a rigorous conformity assessment process before being accredited by FedRAMP. This conformity assessment process qualifies 3PAOs according to the following requirements:

    • Independence and quality management in accordance with ISO/IEC 17020:1998 standards
    • Information assurance competence that includes experience with FISMA and testing security controls
    • Competence in the security assessment of cloud-based information systems

    The FedRAMP program goes on to explain that Third-Party Assessment Organizations (3PAO) will perform initial and periodic assessment of Cloud Service Provider (CSP) systems per FedRAMP requirements, provide evidence of compliance, and play an on-going role in ensuring CSPs meet requirements. Once engaged with a CSP, 3PAOs develop Security Assessment Plans, perform testing of cloud security controls, and develop Security Assessment Reports. FedRAMP provisional authorizations must include an assessment by an accredited 3PAO to ensure a consistent assessment process.

    Example 2: You would like to become a FedRAMP Authorized Cloud Service Provider

    According to the FedRAMP documentation, cloud service providers wishing to provide cloud services to Federal agencies must:

    • Use the baseline controls and accompanying FedRAMP requirements
    • Directly apply or work with a sponsoring agency to submit an offering for FedRAMP authorization
    • Hire a Third-Party Assessment Organization to perform an independent system assessment
    • Create and submit authorization packages
    • Provide continuous monitoring reports and updates to FedRAMP

    Here’s the good news – the guidelines for becoming a FedRAMP CSP are very straightforward and include a thorough preparation checklist. Here are some of the core components of the FedRAMP Preparation Checklist:

    • You have the ability to process electronic discovery and litigation holds
    • You have the ability to clearly define and describe your system boundaries
    • You can identify customer responsibilities and what they must do to implement controls
    • System provides identification and 2-factor authentication for network access to privileged accounts
    • System provides identification and 2-factor authentication for network access to non-privileged accounts
    • System provides identification and 2-factor authentication for local access to privileged accounts
    • You can perform code analysis scans for code written in-house (non-COTS products)
    • You have boundary protections with logical and physical isolation of assets
    • You have the ability to remediate high risk issues within 30 days, medium risk within 90 days
    • You can provide an inventory and configuration build standards for all devices
    • System has safeguards to prevent unauthorized information transfer via shared resources
    • Cryptographic safeguards preserve confidentiality and integrity of data during transmission
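
    One of the checklist items above calls for cryptographic safeguards that protect data in transit. As a minimal, hypothetical sketch of that kind of check (it is not part of the FedRAMP documentation, and the host name is a placeholder), Python's standard ssl module can confirm that an endpoint negotiates a modern TLS version before any sensitive data is exchanged:

```python
# Minimal sketch: confirm an endpoint encrypts data in transit with a
# modern TLS version. "cloud.example.gov" is a placeholder host name.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()       # verifies certificates by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()               # e.g. "TLSv1.2" or "TLSv1.3"
            cipher_name, _, bits = tls.cipher()   # negotiated cipher suite
            print(f"{host}: {version}, {cipher_name} ({bits}-bit)")
            if version in ("SSLv3", "TLSv1", "TLSv1.1"):
                raise RuntimeError(f"{host} negotiated a deprecated protocol: {version}")

if __name__ == "__main__":
    check_tls("cloud.example.gov")
```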

    What FedRAMP Means to You

    Cloud computing isn’t going anywhere. More than ever, data center and cloud providers are seeing the direct impact they can make on private, public and government verticals. The FedRAMP program is actually a very comprehensive outline of what it takes to be a secure provider. In fact, with only a dozen listed providers, the evaluation process is clearly in-depth. Let’s look at a few examples as outlined by the CSPs and the FedRAMP program.

    • Amazon AWS GovCloud. This IaaS platform helps deliver a government community cloud infrastructure. According to FedRAMP, AWS GovCloud (US) is an AWS Region designed to allow US government agencies and customers supporting the US government to move more sensitive workloads into the cloud. In addition to complying with FedRAMP requirements, the AWS GovCloud (US) framework adheres to the U.S. International Traffic in Arms Regulations (ITAR).
    • Windows Azure public cloud solution. As both an IaaS and PaaS solution, Microsoft has created a dynamic offering aimed directly at supporting government IT projects. As the FedRAMP site points out, Microsoft Windows Azure is an open and flexible platform that enables customers to build, deploy, and manage applications across a global network of Microsoft-managed datacenters. Windows Azure encompasses IaaS, PaaS and data cloud services that enable customers to use scalable, on-demand cloud computing services that adhere to federal security compliance regulations in support of government computing initiatives.
    • IBM SmartCloud for Government (SCG). Here we see an IaaS model capable of supporting a variety of government initiatives. According to the IBM FedRAMP site, SmartCloud for Government (SCG) is a secure multi-tenant Infrastructure as a Service (IaaS) cloud computing environment for U.S. Federal customers. SCG services include provisioning of compute, memory, network, OS, and storage resources to meet client production and development/test computing needs. SCG IaaS services can be bundled with enterprise-class, fully managed cloud hosting services, including OS Provisioning and Administration, Enterprise System Management, Security Operation Center (SOC), Storage Management, and Backup.

    Why Cloud?

    Organizations of all sizes are jumping on the cloud bandwagon. More and more we are seeing new types of services being delivered from a variety of new systems. As always, security plays a big role in the entire process. Ultimately, the question is this: why sign up for FedRAMP? Well, the GSA site actually lists a number of useful reasons:

    • Increases re-use of existing security assessments across agencies
    • Saves significant cost, time and resources – “do once, use many times”
    • Improves real-time security visibility
    • Provides a uniform approach to risk-based management
    • Enhances transparency between government and cloud service providers (CSPs)
    • Improves the trustworthiness, reliability, consistency, and quality of the Federal security authorization process

    As your organization continues on its cloud journey – remember that new service delivery models are always right around the corner. Conversations around data center automation and next-generation technologies drive the interest in cloud computing.

    In deploying the right model for your business or organization, remember that the cloud can have a great impact on your environment. However, as with any technology – there are key considerations around infrastructure and security that must never be overlooked. Deploy your environment with security and deployment best practices in mind – and you’ll be able to build a cloud platform which can help push you to the next IT level.

    1:30p
    When the Software-Defined Data Center Meets the Reality-Defined Facility

    Richard Ungar is global head of R&D for ABB Decathlon for DCIM, where he leads both the business growth and product development of ABB’s data center infrastructure management system in North America.

    RICH UNGAR
    ABB Decathlon

    “Software-defined” is a new buzzword around the data center industry. It started with software-defined networking, branched out to software-defined storage, merged with server virtualization and voilà—the data center has been virtualized. People are embracing a concept that promises to streamline application deployment and automatically provision and re-provision based on fluctuations in IT load.

    But hold on a moment. I’ve been in a lot of data centers and not one of them has given any hint that it might be software-defined. I’ve seen a lot of servers humming away, performing their invisible tasks, and perhaps living a virtual, software-defined existence. The same goes for networks and storage: how they segment and organize network packets or disk sectors to support more flexible resource allocation is a largely software-defined problem, ripe for a software-defined solution.

    But what about the physical infrastructure? For instance, the chillers and CRAHs that keep those servers cooled, or the switchgear, transformers, UPS and PDUs that keep everything powered. I have yet to see a software-defined air economizer and I don’t hold out much hope of ever encountering one.

    The interface between the virtual, software-defined world and the real, physical, error-prone world is something you need to think about whenever you plan to deploy any level of data center management. In particular, any system that promises to streamline and automate the time-consuming and error-prone processes IT departments deal with today is ripe for the law of unintended consequences.

    Creating a software-defined data center that deals with the ‘brains’ of the data center (i.e., the IT infrastructure) without incorporating the underlying physical systems reminds me of creating an operating system without any notion of the computer it is running on. It’s been done successfully many times, but it starts by abstracting the underlying hardware into a well-defined Hardware Abstraction Layer (HAL). The HAL provides all the mechanisms that the operating system needs to run, and if properly implemented, run extremely well.

    What Does a Software-Defined Data Center Need?

    What software-defined data centers (SDDC) need to be successful is a well-defined data center “facilities HAL” or data center infrastructure abstraction (DCIA). The DCIA would offer a set of services that inform the SDDC about the status of the data center physical infrastructure, plus provide mechanisms to alter it. This DCIA fits squarely into the scope of a data center infrastructure management (DCIM) system, such as ABB Decathlon®.

    Without a DCIM system, the SDDC is required to make assumptions about the current condition of the underlying infrastructure – specifically, that everything is operating smoothly and that changes will not have any adverse effect. This can be loosely translated to ‘assume I am running in a Tier 4 data center with perfect 72-degree cooling throughout and unlimited access to low-cost energy’.

    The real world differs somewhat from this, at least for most. Temperatures are not always even or consistent. Humans sometimes make mistakes and plug things into the wrong socket. Equipment fails or requires maintenance.

    Let’s look at some examples of how the DCIM can help the SDDC make better choices about where to provision workload in a typical data center.

    • The SDDC decides to automatically provision additional compute for a web-based application for which usage is spiking. What it doesn’t know is the servers it has chosen to use are in the middle of a data center hot zone, made hotter by current high outdoor temperatures. A DCIM system can provide real-time summary information about the environmental conditions in any area of the data center. Using this information, the SDDC can choose different servers to provision.
    • Similarly, the DCIM system can provide the SDDC information about power loading on the circuits or if, for example, the system is presently running on UPS or backup power.

    With a facilities abstraction layer provided by the DCIM system, the SDDC now improves the reliability of the IT infrastructure by actively managing the IT load. Therefore, based on an overall ‘degree of reliability’ metric calculated by the DCIM system, the SDDC can decide to automatically re-provision applications away from at-risk servers, or at-risk data centers, in the event of a major equipment failure or even an approaching weather-related event.

    It would even be possible to use this approach to drive cost-optimization protocols, where the DCIM system will calculate a ‘cost of operations’ metric, in real-time, based on the local set of servers, real-time energy pricing, and real-time cooling pricing (which can vary based on environmental and other physical conditions). The SDDC can then make informed decisions on when and where to allocate compute to maximize cost savings.
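
    To make the idea concrete, here is a minimal sketch of the kind of decision an SDDC could make once a DCIM layer exposes facility data. The class and scoring logic below are purely illustrative assumptions – they are not ABB Decathlon's API or any vendor's actual interface:

```python
# Hypothetical sketch of an SDDC consulting a DCIM "facilities abstraction
# layer" before placing a workload. All names and numbers are illustrative;
# they are not ABB Decathlon's API.
from dataclasses import dataclass

@dataclass
class RackStatus:
    rack_id: str
    inlet_temp_c: float      # reported by DCIM environmental sensors
    on_utility_power: bool   # False while running on UPS/backup power
    energy_cost_kwh: float   # real-time energy price for this location

def placement_score(rack: RackStatus, max_inlet_c: float = 27.0) -> float:
    """Lower is better. Penalize hot racks and racks on backup power."""
    if not rack.on_utility_power:
        return float("inf")                      # never place new load on backup power
    thermal_penalty = max(0.0, rack.inlet_temp_c - max_inlet_c) * 10.0
    return rack.energy_cost_kwh + thermal_penalty

def choose_rack(racks: list[RackStatus]) -> RackStatus:
    return min(racks, key=placement_score)

# Example: the SDDC asks the DCIM layer for rack status and picks a target.
racks = [
    RackStatus("A-01", inlet_temp_c=31.0, on_utility_power=True,  energy_cost_kwh=0.08),
    RackStatus("B-07", inlet_temp_c=24.5, on_utility_power=True,  energy_cost_kwh=0.09),
    RackStatus("C-03", inlet_temp_c=23.0, on_utility_power=False, energy_cost_kwh=0.07),
]
print(choose_rack(racks).rack_id)   # -> "B-07": cool enough and on utility power
```

    In practice such a metric would fold in many more inputs (redundancy state, maintenance windows, cooling capacity), but the pattern is the same: the SDDC asks the facilities abstraction for a score and places load accordingly.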

    The SDDC and DCIM are two trends in computing that seem destined to intertwine. Innovative companies looking to deploy SDDC would do well to examine how their software-defined world and their real-world facilities intersect. A good marriage between the two will yield benefits beyond ease of deployment. Higher reliability and lower operating costs are also achievable.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:15p
    Internap Opens Beta for Hybrid Cloud Platform Built on OpenStack

    Post originally appeared on the WHIR.

    Internap recently announced that a beta of its public cloud service, AgileCLOUD, is now available. Based on a native OpenStack API, AgileCLOUD exposes virtualized and bare-metal compute instances.

    With AgileCLOUD, customers can provision and manage thousands of virtual or bare-metal cloud AgileSERVER instances in order to best fit the needs of their application, and can “more easily hybridize between on-premise and third-party OpenStack clouds,” Internap said.
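
    Because the service exposes a native OpenStack API, standard OpenStack client tooling should be able to drive it. As a generic illustration only (this uses the openstacksdk library, and the cloud name, image, flavor and network IDs are placeholders rather than Internap-specific values), provisioning an instance might look like this:

```python
# Generic OpenStack provisioning sketch using the openstacksdk library.
# Cloud name, image, flavor and network IDs are placeholders; this is not
# Internap-specific code, just what a native OpenStack API makes possible.
import openstack

# Credentials for the "agilecloud" entry would come from clouds.yaml (placeholder name).
conn = openstack.connect(cloud="agilecloud")

server = conn.compute.create_server(
    name="app-node-01",
    image_id="IMAGE_UUID_PLACEHOLDER",
    flavor_id="FLAVOR_ID_PLACEHOLDER",
    networks=[{"uuid": "NETWORK_UUID_PLACEHOLDER"}],
)
server = conn.compute.wait_for_server(server)   # block until the server is ACTIVE
print(server.name, server.status)
```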

    “AgileCLOUD is now 100 percent OpenStack ‘under the hood,’ which provides an open, interoperable framework that helps us deliver a dramatically more scalable platform,” Raj Dutt, senior vice president of technology at Internap, said. “Not only can we offer our customers all the benefits of the OpenStack platform, but we’ve implemented existing features that our hybridized customers find valuable – such as bare-metal cloud instances, static IP addresses, Layer 2 VLANs and compatibility with our existing hosting API (hAPI).”

    While AgileCLOUD will eventually be available across Internap’s cloud footprint, including locations in New York, Dallas, Silicon Valley, Amsterdam and Singapore, it is currently being offered out of its Santa Clara data center to new and existing customers who register for the beta program.

    Improvements to AgileCLOUD include more configuration options to match cloud resources to application requirements, which can be managed through a single pane-of-glass customer portal and API.

    AgileCLOUD offers 100 percent SSD for both local and networked block storage, which is ideal for latency-sensitive and I/O-intensive applications, Internap said.

    Participants in the beta will not be charged for virtualized infrastructure and will receive a $1,000 credit toward the service once general availability hits.

    Recently, Internap and Aerospike partnered to offer a NoSQL database on Internap’s AgileSERVER cloud platform.

    3:00p
    Data Centers Require New Sensor Design Considerations

    Today’s data center platform includes high-density computing, new power requirements and a growing amount of multi-tenancy. Cloud computing and virtualization have pushed the next-generation data center model to adopt new ways to help control the overall infrastructure. There was a time, not long ago, when data center managers relied on the room thermostat to indicate the ambient temperature of a data center. They would set the temperature in the mid-60s˚F to ensure adequate cooling. Now, most data center managers know that such an ambient temperature is unnecessarily cold and wasteful of energy.

    A room thermostat only indicates the temperature at the thermostat’s location, typically an interior wall. It is far more useful to know the temperature at the cool air inlets of IT devices. Being able to see data on plots of temperatures from multiple sensors can identify hot spots and areas of overcooling.
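
    As a simple, hypothetical illustration of that point (the readings below are made up and this is not Raritan's interface), comparing a handful of rack inlet temperatures against the ASHRAE-recommended inlet range of roughly 18–27°C is enough to separate hot spots from overcooled areas:

```python
# Hypothetical inlet-temperature readings from rack-level sensors.
# Compares each reading against the ASHRAE-recommended inlet range
# (roughly 18-27 C) to flag hot spots and overcooled areas.
ASHRAE_LOW_C, ASHRAE_HIGH_C = 18.0, 27.0

readings = {            # rack -> inlet temperature in Celsius (illustrative values)
    "Row1-Rack03": 29.5,
    "Row1-Rack07": 22.0,
    "Row2-Rack02": 16.5,
    "Row2-Rack11": 25.8,
}

for rack, temp_c in sorted(readings.items()):
    if temp_c > ASHRAE_HIGH_C:
        status = "HOT SPOT - check airflow and containment"
    elif temp_c < ASHRAE_LOW_C:
        status = "overcooled - likely wasting energy"
    else:
        status = "within recommended range"
    print(f"{rack}: {temp_c:.1f} C -> {status}")
```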

    This white paper from Raritan outlines the need to truly have direct visibility into the environmental variables within the modern data center. It’s no longer about just temperature or humidity – although those are still important. New demands around the modern data center have created new requirements around infrastructure monitoring.

    Download this whitepaper today to learn about the key types of analog and digital sensors that can give you direct insight into your data center environment. In the analysis, Raritan discusses sensor types including:

    • Advanced temperature and humidity sensors
    • Airflow sensors
    • Differential air pressure sensors
    • Water and leak sensors
    • Contact closure sensors
    • Rack and room webcams
    • DCIM and next-generation management solutions

    There are direct benefits around knowing and understanding how your data center is performing. As more organizations place their entire IT stack into the data center platform, improved infrastructure visibility will be critical.

    A data center, whether a room or an entire building, is all about what is happening at the rack. The right environmental monitoring and metering at the rack can lead to some nifty data center improvements. This type of visibility allows you to right-size the data center and create just-in-time expansions, saving on capital expenses while improving energy efficiency, IT productivity and utilization. Plus, it allows you to integrate with next-generation technologies like virtualization, IT consumerization, and cloud computing.

    3:30p
    5 Considerations Around Leasing vs Buying a Data Center

    Build vs. Buy? Many administrators and data center operators are still asking this question. The reality is that this decision is always going to be a bit of a challenge. The modern business continues to ask more from its IT department while spending less. So what is happening around the data center environment? Why are we seeing this boom in data center demand?

    Well, consider these statistics:

    • 15 petabytes of new data created every day
    • 90 percent of today’s digital data created in past 2 years (IBM)
    • “By 2015, the gigabyte equivalent of all movies ever made will cross global IP networks every 5 minutes.” (Cisco)
    • People send over 145bn emails/day
    • Over 100 hours of new video is uploaded to YouTube every minute
    • 75 percent of data today is generated by individuals, but enterprises will have some liability for 80 percent of it at some point
    • 20 typical households generate more Internet traffic than the entire Internet in 2008
    • Walmart’s transaction databases see 2.5 Petabytes of data/day

    Impressive, right? The truth is that these trends won’t be subsiding any time soon. As more web content is delivered to a more mobile user – the data center will have to evolve to meet these new types of demands.

    In this on-demand webinar sponsored by Iron Mountain, join 451 Research and Iron Mountain as they discuss the 5 key considerations around buying or leasing a data center environment.

    Topics include:

    Type of environment

    • Internal vs. colocation vs. private cloud vs. hosting
    • What works for your business requirements?

    Risk of downtime

    • Different historical downtime risk for differing types of environments
    • How much downtime can you reasonably accept?

    Regulatory requirements and compliance

    • HIPAA, FISMA, PCI-DSS, SSAE-16

    Capacity planning – getting it wrong

    • Stranded capacity vs inadequate capacity vs equipment limitations

    Capex vs. opex and other financial factors

    • Pre-payment as an option in CapEx situations
    • Partial builds for wholesale data centers
    • Servers – buy or lease or rent

    Download this on-demand webinar today to learn about the demands being placed on the modern data center and where certain delivery options make sense. In planning current and future data center infrastructure, managers must understand the TCO model for the various ownership options. The cost of capital can make some alternatives look very different. To create the best business and IT plan – make sure to follow these considerations around your future data center deployment.
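
    As a back-of-the-envelope illustration of how the cost of capital changes the picture (all figures below are invented for the example; they do not come from 451 Research or Iron Mountain), discounting a build's up-front capex and a lease's recurring opex to present value puts the two options on a comparable footing:

```python
# Toy lease-vs-build comparison: discount each option's cash flows to
# present value so the cost of capital is reflected. All figures are
# illustrative only; they do not come from the webinar.
def npv(cash_flows_by_year, discount_rate):
    """Present value of a list of annual cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows_by_year))

YEARS = 10
RATE = 0.08                          # assumed cost of capital

# Build: heavy capex up front, lower annual operating cost afterwards.
build = [12_000_000] + [1_500_000] * YEARS

# Lease: no capex, higher annual payment that covers the provider's margin.
lease = [0] + [3_000_000] * YEARS

print(f"Build NPV: ${npv(build, RATE):,.0f}")
print(f"Lease NPV: ${npv(lease, RATE):,.0f}")
# At an 8% cost of capital the lease's recurring payments are discounted
# more heavily, which can narrow or even reverse the gap a simple sum suggests.
```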
