Data Center Knowledge | News and analysis for the data center industry

Wednesday, June 11th, 2014

    12:00p
    Equinix Holds Grand Opening for Sixth Dallas Data Center

    Equinix has completed its sixth data center in Dallas – a key interconnection point for southern-U.S. fiber routes and for links to Latin America. The company is hosting a grand opening today for the facility, which launched earlier this year.

    Among interconnection options at the new facility will be ExpressRoute, which will provide private network connections to Microsoft’s Azure cloud services.

    The network aspect alone makes Dallas-Fort Worth an important point on the global data center map, but the market is also thriving because of a massive representation of the oil-and-gas industry and a favorable business-tax environment, Ryan Mallory, Equinix senior director and global solutions architect, said.

    The new data center is an expansion of the company’s footprint in the Dallas Infomart, a highly interconnected building and a key carrier hotel in the Dallas-Fort Worth region. Equinix has had a presence in the Infomart since 2000 and has continued to build out contiguous space there as it becomes available, Mallory said.

    The company has brought online and commissioned Phase I of the new data center, called DA6, which has capacity for 450 cabinets and provides 1.3 megawatts of power. DA6 can accommodate three more phases of similar capacity.
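
    As a rough back-of-the-envelope check on what Phase I implies per cabinet (assuming the 1.3 megawatts is spread evenly across all 450 cabinets, which real deployments rarely are), the arithmetic works out to just under 3kW apiece:

        # Average power per cabinet in DA6 Phase I, assuming an even spread
        # across all cabinets (a simplification; real allocations vary).
        phase_power_kw = 1300        # 1.3 MW
        cabinets = 450
        print(f"{phase_power_kw / cabinets:.2f} kW per cabinet")  # ~2.89 kW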

    From Dallas into cloud

    ExpressRoute infrastructure at the facility is the next step in execution on a deal Equinix made with Microsoft last year. Microsoft will use Equinix facilities around the world to bring direct links to its cloud services to 16 markets. The cloud provider has similar deals with a handful of other data center providers.

    Big cloud infrastructure service providers usually build their own data centers or lease wholesale space to support their services. But they use colocation partners to expand the reach of their networks geographically. Private links to public clouds are especially attractive for enterprise users because they provide better performance and security than the public Internet.

    Dallas is one of the first six Equinix locations that will host ExpressRoute. The others are Silicon Valley, northern Virginia, Chicago, London and Singapore.

    “This will serve as the access point in that market for cloud-based services,” Mallory said. “It’ll be accessed through the Equinix Cloud Exchange.”

    Microsoft is not the only cloud provider using Equinix to expand its reach. Amazon Web Services and IBM SoftLayer, among a number of other smaller providers, have also partnered with the data center services giant. SoftLayer colocates in some of the older Equinix space at Infomart, Mallory said.

    Sustained high demand

    Equinix has had a lot of success in the Dallas market and the latest build-out is a testament to that. “There’s definitely a demand there that allows us to continue to invest capital in that marketplace,” Mallory said.

    Enterprise demand comes from the oil-and-gas industry as well as manufacturing companies. However, there is also growing demand from digital media companies, who use Dallas as a springboard to Latin America.

    12:00p
    HP Cloud Chief: OpenStack and Cloud Foundry a Match Made in Heaven

    Two open source technologies form the foundation of HP’s newly minted cloud strategy, which it calls Helion: OpenStack and Cloud Foundry.

    Like many of its competitors, HP is going after the enterprise developer market, working hard to convince traditional enterprises that they need to build up their software development chops if they want to stay relevant.

    Platform-as-a-Service products are widely viewed as an easy way enterprises can give developers the tools they need to build, test and quickly deploy applications. The goal is not only to give them the tools they need, but tools they like and are happy using.

    Cloud Foundry is an open source PaaS that has recently become a foundation on which several vendors, including HP and its rival IBM, have built their commercial PaaS offerings.

    But, as Manav Mishra, director of HP Helion, put it, “Any PaaS layer still needs an infrastructure.” Speaking at the Cloud Foundry Summit in San Francisco Tuesday, Mishra said that OpenStack, the popular open source cloud architecture, was the perfect way to build that infrastructure layer.

    Binding Cloud Foundry to OpenStack

    HP announced that it was working on a Cloud Foundry-based PaaS, called Helion Development Platform, in May. The PaaS is part of the vendor’s big $1 billion Helion cloud initiative announced that month, which also included a commercial distribution of OpenStack. “Helion is everything cloud at HP,” Mishra said.

    HP’s Helion team is working to optimize binding between Cloud Foundry and OpenStack. “HP Helion Development Platform is where Cloud Foundry meets OpenStack,” he said. “We don’t want these to be agnostic of each other, because then what’s the point?”

    The two already work well together. So well that Mishra described them as the “two platforms that are the future of the cloud computing industry.”
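
    For readers less familiar with the developer-facing side of Cloud Foundry, here is a minimal sketch of the standard push workflow, driven from Python through the stock cf command-line client; the API endpoint, credentials, org, space and application name are placeholder assumptions, not HP Helion specifics.

        # Minimal sketch of the standard Cloud Foundry push workflow via the
        # stock 'cf' CLI. Endpoint, credentials, org, space and app name are
        # placeholders, not HP Helion specifics.
        import subprocess

        def cf(*args):
            """Run a cf CLI command, raising if it exits non-zero."""
            subprocess.run(["cf", *args], check=True)

        cf("api", "https://api.paas.example.local", "--skip-ssl-validation")
        cf("auth", "dev-user", "s3cret")              # non-interactive login
        cf("target", "-o", "demo-org", "-s", "dev")   # pick an org and space
        # Push the app in the current directory: 2 instances, 256 MB each.
        cf("push", "hello-app", "-i", "2", "-m", "256M")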

    Not the only game in town

    But Helion Development Platform is still in the works; IBM’s Cloud Foundry-based BlueMix PaaS only became generally available this year, and Pivotal, the EMC- and VMware-owned company behind Cloud Foundry, launched its Pivotal CF PaaS in November of last year. These companies are newcomers to a market that has been busy for years.

    Red Hat, the open source enterprise software giant, has had a successful enterprise PaaS called OpenShift for three years. Google’s App Engine PaaS has been around since 2008, and Microsoft’s Azure PaaS has been commercially available since 2010. Amazon Web Services launched its Elastic Beanstalk PaaS in 2011, and Heroku, the hugely successful PaaS owned by Salesforce since 2010, has now been around for about seven years.

    While PaaS offerings by the likes of Google and Microsoft have not traditionally enjoyed widespread use among enterprise developers, the giants have recently been sharpening focus on the enterprise market across their cloud service portfolios.

    Open and multi-cloud

    The big difference between the incumbent PaaS players and Cloud Foundry, however, is that the open source PaaS (and applications built on it) can run on different infrastructure clouds – not just OpenStack.

    Cloud Foundry is both open and open source, which means users are also not stuck with the features their vendors have included in their offerings. In this respect, Red Hat is a closer competitor, since its OpenShift PaaS is also open source.

    Mishra said he expects Cloud Foundry’s openness to make it a winning proposition for enterprises. “Enterprises see open platforms and open source platforms as a solution to some of the challenges that they face on a regular basis,” he said.

    Giving IT managers an out

    The main challenge he was talking about was the pressure on IT managers to be innovative and forward-looking while at the same time reducing the total cost of ownership of the IT infrastructure. Enterprise IT shops are now seen as both catalysts for moving companies forward and massive cost centers.

    Open platforms are popular with these people because they enable development of applications. “Applications are great because that’s where the touch-point with the broader organization happens,” Mishra said. When applications are easy to build and deploy, IT shops can deliver the innovation that is expected from them.

    There are many parallels that can be drawn between OpenStack and Cloud Foundry, but the main one is in the way both open source technologies have given big IT vendors, such as HP and IBM, an on-ramp into the cloud services business.

    OpenStack enabled these companies to compete with the likes of Google and Amazon in the Infrastructure-as-a-Service space (IBM is a major OpenStack backer), and Cloud Foundry has given them an instant foundation for building modern PaaS offerings.
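
    To make the IaaS side of that on-ramp concrete, the sketch below boots a server against an OpenStack cloud using python-novaclient, the client library of this era; the credentials, Keystone endpoint, image and flavor names are placeholder assumptions rather than details of any HP or IBM deployment.

        # Sketch: boot a VM on an OpenStack cloud with python-novaclient.
        # Credentials, auth URL, image and flavor names are placeholders.
        from novaclient import client

        nova = client.Client("2",                     # compute API version
                             "demo-user", "s3cret",   # username, password
                             "demo-project",          # project (tenant)
                             "http://keystone.example.local:5000/v2.0")

        flavor = nova.flavors.find(name="m1.small")
        image = nova.images.find(name="ubuntu-14.04")
        server = nova.servers.create(name="web-01", image=image, flavor=flavor)
        print(server.id, server.status)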

    12:30p
    ViaWest Opens Huge Denver Data Center

    ViaWest has opened its fifth Denver-area data center. The Compark facility – the first greenfield construction project the provider has ever undertaken – is a 210,000 square foot building with 140,000 square feet of raised floor.

    The news comes about two months after ViaWest launched a data center in Chaska, Minnesota, to serve the Minneapolis market.

    ViaWest announced it was building the latest Denver facility in October of last year, along with its intention to pursue Uptime Institute certification, a process that is now underway. The company was the first colocation provider to achieve Tier IV Design Certification from Uptime for its Lone Mountain facility in Las Vegas.

    Compark, located in Englewood, Colorado, was built using the Lone Mountain template, but at twice the scale. Lone Mountain is a 9 megawatt data center, while Compark has 18 megawatts.
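
    One way to put those figures in context is design density: dividing the quoted 18 megawatts by the 140,000 square feet of raised floor gives a rough watts-per-square-foot number (a simplification that assumes the full power figure applies across the full floor and ignores build-out phasing):

        # Rough design-density estimate for Compark, assuming the quoted 18 MW
        # applies across the full 140,000 sq ft of raised floor.
        power_watts = 18_000_000
        raised_floor_sqft = 140_000
        print(f"~{power_watts / raised_floor_sqft:.0f} W per sq ft")  # ~129 W/sq ft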

    “The biggest differentiator across the entire data center fleet is that it’s our first purpose-built data center,” said Todd Gale, vice president of data center architecture and innovation at ViaWest. “It was our first greenfield project. It was an excellent experience and will be our model going forward in the near term. We have two expansions in planning stages in existing markets that will be purpose-built as well.”

    The company decided to build the facility from scratch because it could not find a building in the area that would fit its needs.

    ViaWest will offer cloud computing, wholesale and retail colocation and managed services in the new facility, which resides just east of Centennial Airport.

    Compark construction project facts:

    • Total project investment for ViaWest at full build-out and full capacity will be more than $100 million.
    • ViaWest customer investment in IT equipment (i.e. servers, racks, storage devices, etc.) will represent between $500 million and $1 billion.
    • More than 30 contractors were involved across several disciplines, including design, general construction, electrical and mechanical. This represents a combined workforce of more than 600 people.
    • More than 43,000 cubic yards of earth were moved to prepare the site for the structure.
    • The construction effort alone required more than 15,000 hours of labor.

    Colorado data center activity

    Colorado, ViaWest’s home state, has seen a lot of activity in recent years and is a growing hub for enterprise and technology companies. Its low-risk geography is good for both production and disaster recovery needs. It also offers a talented tech workforce.

    The data center provider has experienced a lot of demand in the Denver area, which is why it decided to make such a large investment in capacity there. “We’re at capacity at the other four [local] facilities and we’re seeing strong demand in Colorado,” Gale said. There are also many companies from outside of the state that are looking at Denver as a disaster recovery location with low utility rates and small chances of natural disaster.

    There’s been a slew of activity in Colorado. Other providers in the market include Fortrust, which recently added more capacity in Denver, taking a modular approach to growth using IO.Anywhere modules; CoreSite Realty, which operates the Any2 Denver peering exchange; and Latisys, which built a second Denver facility in 2011 in nearby Englewood.

    Also in Englewood, OneNeck IT is building a $20 million greenfield data center to serve the Denver area.

    There’s also a lot of action just an hour’s drive down I-25 S, in Colorado Springs. Atlanta-based T5 is planning a massive $800 million data center campus situated on 64 acres of land with up to 100 megawatts of available power.

    Colorado Springs has been competing in the state through tax incentives, cheap power rates and a friendly business atmosphere. Verizon Wireless, HP, FedEx, T. Rowe Price, Progressive, Intel and Wal-Mart have big facilities there.

    12:30p
    Bringing Hyper Scale-Out to the Masses: The Power of Open Optimized Storage

    Mario Blandini is the senior director of product marketing, storage systems, at HGST, a Western Digital Company.

    Data growth estimates by percentage may differ from one prognosticator to the next, but everyone agrees that more data will be stored in 2014 than was stored in 2013. So how will infrastructure scale to meet the insatiable consumption of unstructured data in the coming years? In short: optimization.

    As we enter the age of analytics, data is only valuable if you can get to the information and knowledge locked inside of it. Storing data is part of the challenge, and readily accessing massive amounts of unstructured data from archives for analytics or compliance is more challenging. Stakeholders therefore need a new data storage system architecture with high-density “peta-scale” capacities, accessible to the applications that must leverage it and approachable for organizations of any size.

    Web-scale in the enterprise

    As outlined by Gartner, the biggest names in “Web-Scale IT” and Web 2.0 have already achieved new storage efficiencies by designing standard hardware that is highly optimized for their very specific software workloads. Few data centers have equivalent human resources to do the same, though the emergence of open software-defined storage options makes optimized architectures for scale-out much more approachable. Optimized data storage is more than just hardware – storage software is as much a part of the opportunity for optimization.

    These new technologies will enable enterprise data centers to gain the same CapEx and OpEx benefits enjoyed by “Web-Scale IT” players – an approach Gartner identified as a Top 10 Strategic Technology Trend for 2014. These players are re-inventing the way IT services can be delivered, and their capabilities extend beyond scale in terms of sheer size to include scale as it pertains to speed and agility. The suggestion is that IT organizations should align with and emulate the processes, architectures and practices of leading cloud providers.

    Thanks to commercially supported open-source initiatives such as Red Hat Storage Server with GlusterFS, Inktank Ceph Enterprise and SwiftStack for OpenStack Object Storage, we can expect to see software-defined storage systems cross from cloud into more mainstream enterprise data centers across multiple deployment options. Several new startup-developed software-defined storage offerings will likely emerge from stealth mode in the coming 18 months. With commercial support for open storage software, traditional IT can use the same approaches once limited to the biggest operators. Even today you’ll find service providers, enterprises and early-stage companies presenting their case studies at conferences.
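
    As a concrete illustration of how an application consumes one of these open storage systems, the sketch below writes and reads an object through the OpenStack Swift API using python-swiftclient; the auth URL, credentials, container and object names are placeholder assumptions, and a Ceph RADOS Gateway or SwiftStack cluster would expose a comparable endpoint.

        # Illustrative sketch: store and retrieve an object via the OpenStack
        # Swift API. Auth URL, credentials, container and object names are
        # placeholders; Ceph RGW or SwiftStack expose comparable endpoints.
        from swiftclient import client as swift

        conn = swift.Connection(authurl="http://swift.example.local:8080/auth/v1.0",
                                user="demo:demo", key="s3cret")

        conn.put_container("archive")
        conn.put_object("archive", "sensor-2014-06-11.csv",
                        contents=b"timestamp,reading\n2014-06-11T12:00,42\n",
                        content_type="text/csv")

        headers, body = conn.get_object("archive", "sensor-2014-06-11.csv")
        print(body.decode())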

    Innovation in storage hardware

    On the hardware side of the equation, standard storage building blocks for the software-defined data center are also getting optimized. Higher-capacity drives consuming less power are improving storage clusters, enabling more resources in the same footprint. New technologies like hermetically sealed, helium-filled drives allow for denser data storage in the standard 3.5” form factor. Drives that are lighter and lower power also enable vendors of standard server hardware to increase the density of their enclosures to support software-defined storage. Where 12 to 36 drives was once a typical system density, systems with 60 to 80 or more drives are now much more feasible.
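
    A rough sense of what that jump in enclosure density means for raw capacity, using an assumed period-typical 6TB helium drive (an illustrative figure, not a vendor specification):

        # Raw-capacity comparison of a 12-drive enclosure versus a dense
        # 60-drive enclosure. The 6 TB drive size is an assumed figure.
        tb_per_drive = 6
        for drives in (12, 60):
            print(f"{drives}-drive enclosure: {drives * tb_per_drive} TB raw")
        # 12-drive enclosure: 72 TB raw
        # 60-drive enclosure: 360 TB raw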

    On the path to optimized hardware, Ethernet drives bring new abilities to distribute software services for scale-out storage. This architecture optimizes the data path so that application services can run closer to the location where data resides at rest. Developers can take advantage of those drive-resident resources in open architectures without needing to modify their applications. By virtue of Ethernet, operators supporting those developers get seamless connectivity to existing data center fabrics, and use of existing automation and management frameworks. An open Ethernet drive architecture can also enable the intermixing of new technology with server-based deployments of popular software-defined storage solutions.

    Historically, servers and networking have been the stars of the data center. With the volume, velocity, value and longevity of data, however, we’re entering an era when enterprise data storage is taking over the spotlight as a key enabler for advancements in the data center. It’s not that processing or moving the data is easy; it’s that data has become the currency of business insight and needs to be stored and readily accessible for companies to fully realize its value. For data center architects and storage developers looking to keep pace with next generation big data processing, analytics, research queries and other applications that require long-term retention of active data, it’s imperative they understand how open software-defined storage will impact (and benefit) the new ecosystem of storage architectures.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Hybrid IT: The Best of All Worlds

    The IT world is quickly changing and now, more than ever, the challenge revolves around our ability to adapt to these changes and ever-evolving user demands. So what’s at stake?

    Most companies cannot keep up with business agility demands using the old, unwieldy IT model of building and operating data centers. With cloud computing, IT consumerization, and the modern requirements around infrastructures and data centers, organizations have had to rethink how they utilize data center and IT services.

    Rising customer expectations add pressure to continually enhance the customer experience. And now that IT systems underpin virtually all customer and supply chain interactions, IT solutions must be 100 percent available – even as they meet that demand for extreme agility.

    All of this means IT organizations must deliver new business capabilities faster, even as their budgets are being squeezed.

    “In this fiercely competitive landscape, many companies don’t even have time to put together detailed business cases before they respond to competitive changes. If you don’t take advantage of the right technologies to move quickly, you will get left behind,” says Elizabeth Shumacker, Vice President, Global Products & Solutions Marketing at CenturyLink Technology Solutions, a global provider of managed services on virtual, dedicated and colocation platforms.

    In this white paper from CenturyLink, we learn how the drive to maximize IT agility in support of new business needs (e.g., digital customer experience; big data analysis) while holding down cost is leading many CIOs to re-imagine their core IT infrastructure. They’re outsourcing a growing proportion, utilizing an array of different “hybrid” approaches. This re-imagining enables them to focus more of their resources on delivering business applications instead of managing data centers and day-to-day IT infrastructure updates. Now we’re looking at IT from a hybrid perspective.

    Download this whitepaper today to learn why major organizations and CIOs are switching to a hybrid IT services model. Reasons include:

    • Ability to provide end-to-end IT solutions from assessment and planning to delivery and ongoing management and optimization.
    • Scalability and expertise at every stage of the outsourcing and application lifecycles.
    • Flexibility and agility to move IT workloads to the optimal platform throughout their lifecycles.
    • 100 percent availability, backed by clearly defined SLAs.
    • Strong multi-level security technology, policies and procedures.
    • Scalability and global reach to accelerate business expansion into new markets.
    • Proven expertise solving diverse business problems.

    Remember, your platform will only continue to change and evolve. Through it all, the direct correlation between your IT capabilities and your business demands is what will allow you to out-compute and out-compete. It’s in these cases that working with the right data center and colocation provider can make all the difference.

    4:48p
    Colovore Brings Liquid-Cooled Colo to Silicon Valley

    In the midst of Silicon Valley’s busiest data center neighborhood, the air is fine. But the water is even better.

    That’s the approach being taken by Colovore, a colocation startup that is the newest arrival on the scene in Santa Clara, the data center capital of Silicon Valley. Seeking to find the best combination of high density and efficiency, Colovore is offering water-cooled cabinets that can support power loads of up to 20 kilowatts (kW). Each cabinet is equipped with a rear-door heat exchanger which uses cool water to remove heat as waste air flows through the back of the enclosure.

    This cooling strategy allows Colovore to differentiate itself in a crowded regional market in which virtually every major player in the industry is offering data center space. It also provides the best bang for the buck in the company’s facility at 1101 Space Park Drive, where it has a modest footprint (24,000 square feet) but 9 megawatts of power.

    Colovore launched last August and opened its doors in December. The company says it is seeing traction with two types of clients: companies focused on high-performance computing (HPC) and customers that are refreshing their IT hardware and looking to pack more servers into fewer cabinets.

    Hardware refreshes drive consolidation

    Colovore President and co-founder Sean Holzknecht says more companies in Silicon Valley are finding that as servers get more powerful, new equipment is pushing the boundaries of their existing data center. He says that’s true for in-house data centers, but also for some colocation facilities using traditional air cooling.

    “Many of these companies can move into half as many racks in Colovore,” said Holzknecht. “Everyone is scrambling to build high-density islands in low-density data centers. We’re seeing interest from companies that are consolidating, but don’t need a footprint that’s gigantic.”

    Holzknecht knows the Bay Area market from his previous post as vice president of operations at Evocative, the Emeryville provider that was acquired by 365 Main last year. CTO Peter Harrison arrives from Google, where he was a senior technical program manager on the data center team. CFO Ben Coughlin brings experience in the private equity sector, where he worked with Camden Partners and Spectrum Equity.

    The Liebert rear-door heat exchangers can support up to 12kW per cabinet, according to Mehrdad Alipour, mechanical engineer at Therma Corporation in San Jose, which supported the Colovore project. The rear-door unit uses water that is “tempered” rather than mechanically chilled, typically circulating at temperatures between 62 and 76 degrees Fahrenheit. The cooling capacity can be boosted to 20kW per cabinet when supplemented by Liebert CRV in-row cooling units, and the company is developing a design to support densities of up to 40kW per rack.
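
    For a sense of the water volumes involved, here is a simple sensible-heat estimate for a fully loaded 20kW cabinet, assuming a 10-degree Fahrenheit rise in water temperature across the coil (an illustrative assumption, not a Colovore or Liebert specification):

        # Water flow needed to carry away a 20 kW cabinet's heat, assuming a
        # 10 F temperature rise across the rear-door coil (illustrative only).
        heat_load_w = 20_000
        delta_t_k = 10.0 * 5 / 9       # 10 F rise expressed in kelvins
        cp_water = 4186                # J/(kg*K)

        flow_kg_s = heat_load_w / (cp_water * delta_t_k)   # ~0.86 kg/s (~0.86 L/s)
        print(f"~{flow_kg_s:.2f} L/s (~{flow_kg_s * 15.85:.1f} US GPM)")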

    “Our expectation was that in 2014 we would see 10kW per cabinet and then we would expect that number would rise over the next five years,” Holzknecht said. But that trend line has been accelerated by several HPC projects that are already running as high as 17kW per cabinet.

    High-efficiency UPS system

    Dave Smith, a principal at ECOM Engineering, noted that the Colovore facility is designed to allow customers with different loads to be hosted comfortably within the same data hall. “The way the mechanical piping is configured, it doesn’t change between a 5kW or 10kW or 15kW cabinet,” said Smith. “The heat output is the same for the 20kW rack as it is for the 10kW rack.”

    Colovore is also using a high-efficiency UPS system that operates in “eco mode”, forgoing the traditional double-conversion UPS approach with a configuration that offers better efficiency but slightly less redundancy. Smith said Colovore provides a good example of an environment in which eco-mode is appropriate, since it is adjacent to a substation for the local utility.

    “You have to look at where you are and the reliability of the grid,” said Smith. “We’re right next to Silicon Valley Power. Our reliability is really high, so the need to go to double-conversion really isn’t there. Of course, it’s supported if that’s what the customer needs.

    “The days of fear and overbuilding are gone. Now it’s about optimizing for efficiency. We’re not layering belts and suspenders on top of belts and suspenders.”
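
    A rough idea of what eco mode is worth at the facility’s initial load, assuming illustrative efficiencies of 94 percent for double conversion and 99 percent for eco mode and a 10-cent-per-kWh utility rate (assumed figures, not Colovore’s actual numbers):

        # Approximate annual savings of eco mode over double conversion at a
        # 2 MW IT load. Efficiencies and the power price are assumed figures.
        it_load_kw = 2_000
        eff_double, eff_eco = 0.94, 0.99
        price_per_kwh = 0.10

        loss_double = it_load_kw / eff_double - it_load_kw   # ~128 kW of losses
        loss_eco = it_load_kw / eff_eco - it_load_kw         # ~20 kW of losses
        saved_kwh = (loss_double - loss_eco) * 8760          # hours in a year
        print(f"~{saved_kwh:,.0f} kWh/year, roughly ${saved_kwh * price_per_kwh:,.0f}")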

    Colovore is starting out with 2 megawatts of power and will add capacity in increments of 1.2 megawatts to 1.8 megawatts, installing skids of electrical and mechanical gear as needed to support additional IT space.

    The Colovore team acknowledges the competitive nature of the market in Santa Clara, where DuPont Fabros just commissioned 9 megawatts of new capacity. Holzknecht says there’s plenty of demand to support many providers and different approaches to the market.


    A row of cabinets inside the Colovore data center in Santa Clara, Calif. Each cabinet is equipped with a rear-door heat exchanger using tempered water.

    5:07p
    HP Intros Latest SDN-enabled Switches

    HP introduced new open standards-based software-defined networking solutions this week at its annual Discover conference in Las Vegas. The vendor hopes to pair a new Virtual Cloud Networking SDN application and new FlexFabric 7900 data center switches with its recently announced HP Helion OpenStack offering.

    “Our customers are looking to the cloud for business agility and efficiency that will ultimately drive revenue, but this is hindered by the complexity and disjointed architecture plaguing data center networks today,” said Antonio Neri, senior vice president and general manager of networking at HP. “In helping modernize our customers’ networks and transition to a cloud environment, HP is focused on software for the flexibility it brings. Cloud as it’s meant to be can’t be accomplished without SDN.”

    As part of the HP Helion OpenStack solution, the new Virtual Cloud Networking SDN application helps meet the scalability demands of private and hybrid clouds, enabling on-demand application deployment with secure, isolated virtual networks. To help prepare a cloud- and SDN-ready infrastructure, the new 7900 switch series combines the virtual and the physical into a unified, resilient fabric for network visibility, availability and support of changing workloads.

    The application targets multitenant cloud providers and network-as-a-service offerings and is designed to extend the Neutron networking layer of OpenStack.

    With the 7900 series switch HP looks to keep pace with similar offerings from Cisco, Juniper and Arista Networks by offering many of the same features, but with a modular, compact chassis and a lower price point. The HP Virtual Cloud Networking SDN application will be available in August as part of HP Helion OpenStack, and the 7900 switches are available immediately.

    The 7900 FlexFabric switches are built on open standards such as OpenFlow, support VXLAN and NVGRE tunneling technologies, and in a 2U form factor can scale up to 48 40G ports. The switch supports full Layer 2 and Layer 3 features, a nonblocking Layer 2/3 Clos architecture, and delivers up to 3.84Tbps of switching capacity across four interface module slots.
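
    The headline capacity figure follows directly from the port count: 48 ports at 40Gbps is 1.92Tbps in each direction, or 3.84Tbps counted full duplex, as the quick check below shows.

        # Quick check that 3.84 Tbps lines up with 48 x 40G ports:
        # 1.92 Tbps per direction, doubled for full-duplex operation.
        ports, gbps_per_port = 48, 40
        one_way_tbps = ports * gbps_per_port / 1000
        print(f"{one_way_tbps} Tbps per direction, {one_way_tbps * 2} Tbps full duplex")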

    5:27p
    Latest Nimble Storage Array Combines Flash Performance With HDD Capacity

    Flash-optimized storage provider Nimble Storage launched a new Adaptive Flash platform, which aims to combine the performance of flash-only arrays and the capacity of hybrid arrays, dynamically and intelligently allocating storage resources to meet diverse and stringent application demands on a single platform. The new platform also introduces the CS700 Series arrays and All-Flash Shelf to deliver up to 500,000 IOPS, 64 terabytes of flash and a petabyte of capacity.

    Built on Nimble’s patented Cache-Accelerated Sequential Layout architecture, the Adaptive Flash platform is engineered for performance and has integrated data protection and predictive support to seamlessly scale storage infrastructures. Also built into the platform is InfoSight, the company’s automated cloud-based management and support system, which will recommend the exact amount of resources required as application demands change within an enterprise.

    The rising adoption of all-flash storage has been driven by a price-performance argument that shifts the economic discussion from raw capacity to dollars per IOP. Modern IT requirements, and the need to both reduce latency and increase performance, have all-flash arrays balancing a changing mix of I/O profiles and service levels.

    San Jose, California-based Nimble went public late last year among a flurry of other storage-related funding announcements and IPOs from rivals Violin Memory, Fusion-io and Pure Storage. Outside of these smaller vendors, incumbents like EMC have purchased rack-scale flash startup DSSD, and HP recently pushed down the cost of its all-flash 3PAR arrays. Nimble has done quite well since its IPO, with more than 1,200 customers deploying its scale-out storage architecture and more than 200 enterprise customers implementing its SmartStack converged infrastructure solution.

    “The battle between all-flash and hybrid flash array vendors will continue to rage for at least the next several years,” said Eric Burgener, research director at IDC. “Application workloads that require lots of performance and relatively little capacity will migrate more towards all-flash array architectures, and those applications that require lots of capacity and relatively less performance will probably find hybrid array architectures more cost-effective. Solutions like Nimble’s new CS700 and All-Flash Shelf give customers significant leeway in establishing the ratio between SSDs and HDDs to offer the flexibility necessary to accommodate a wide range of mixed data center workloads.”

    Nimble says the new CS700 Series array can handle a variety of performance-intensive enterprise workloads, such as large-scale VDI deployments and high transaction-volume databases, as well as other performance-intensive server virtualization workloads, such as Microsoft Exchange. The new All-Flash Shelf provides the flexibility to scale flash gradually up to 16 TB per node or 64 TB in a 4-node scale-out cluster.
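
    Taken at face value, the quoted maximums imply that flash makes up only a few percent of total capacity, which is the crux of the hybrid argument; a quick calculation (treating a petabyte as 1,000TB for simplicity):

        # Flash-to-capacity ratio implied by the quoted maximums, treating
        # 1 PB as 1,000 TB for simplicity.
        flash_tb, capacity_tb = 64, 1_000
        print(f"Flash is ~{100 * flash_tb / capacity_tb:.1f}% of total capacity")  # ~6.4%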

    7:12p
    IBM SoftLayer to Launch Dallas, Ashburn Data Centers for Federal Clients

    IBM SoftLayer is planning to launch data centers in Dallas and Ashburn, Virginia, dedicated to providing Infrastructure-as-a-Service to agencies of the U.S. federal government.

    Both facilities will be compliant with FedRAMP, a standard set of security requirements all cloud service providers that serve federal agencies must meet as of this month.

    Since 2011 federal agencies have been under pressure to replace data center infrastructure they own and operate themselves with cloud services to the maximum extent possible.

    The government’s take-up of cloud services has been slow, however. FedRAMP was devised to speed it up, and the deadline for all cloud services used by the feds to be certified as FedRAMP-compliant was on June 5. Many of them were expected to blow it.

    IBM is one of 11 providers that have the certification. Other certified IaaS providers are Akamai, AT&T, CGI, HP, Lockheed Martin, Microsoft and Amazon (there is also a handful of certified Platform-as-a-Service and Software-as-a-Service offerings).

    Built for government needs

    IBM plans to bring the Dallas data center online in June, and the Ashburn facility is slated to launch later this year. The facilities will host 30,000 servers in the beginning and share an isolated private 2,000 Gbps network, IBM said.

    The company is building a dedicated security operations center for the two facilities. The center will provide clients additional security, availability and incident response capabilities.

    Anne Altman, general manager for IBM’s U.S. Federal division, said the data centers were custom designed for government clients. “We’ve designed these centers with government clients’ needs in mind, investing in added security features and redundancies to provide a high level of availability,” she said.

    SoftLayer empire marches on

    IBM has been on a data center construction kick since announcing in January that it would invest $1.2 billion in expanding the physical footprint for its SoftLayer cloud services. The goal is to have 40 data centers on five continents, up from 25.

    The latest addition to the portfolio was a SoftLayer data center in Hong Kong, announced earlier this month.

    The company is also expanding its cloud reach through partnerships with colocation providers. Yesterday it announced the launch of Direct Link, a service companies in colocation data centers can use to connect to the SoftLayer cloud privately, bypassing the public Internet.

