Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, March 8th, 2016

    1:00p
    Modular Data Center Startup Keystone NAP Raises $15M

    It has not been easy to secure financing for a data center services startup in recent years, and few have managed to pull it off. Institutional investors are more familiar with the data center provider market than they were in the past, and there is a lot of interest in investing in providers, but the preference usually goes to established companies that already have tenants with good credit ratings and positive cash flow.

    Keystone NAP, operator of a modular data center on the site of a former steel mill in Pennsylvania, is one of the few new data center providers that have managed to raise capital. The company is expected to become profitable this year, its president, John Parker, said.

    The data center provider announced its latest funding round this week: a $15 million debt facility with White Oak Global Advisors, which brings its total raised to $27.5 million in a combination of debt and private equity.

    “The capital market is favorable,” Parker said. “We’re a startup, so that presents additional challenges, but I think the market is favorable.”

    The data center industry has a lot of attributes lenders like, and Keystone has shown its ability to land enterprise-grade customers for multi-year leases. Customers like major financial institutions and healthcare organizations are “the types of buyers in this market that I think carry a lot of weight with lenders,” he said. “That’s really the key element.”

    Because of confidentiality agreements, Keystone cannot name its marquee financial services client, saying only that it’s a major firm based in New Jersey. Its flagship healthcare customer is Aria Health, which recently merged with Thomas Jefferson University. Aria is northeast Philadelphia’s largest healthcare provider, operating three hospitals and a network of outpatient centers and primary care physicians.

    Modular Data Center with Virtually Limitless Power

    Keystone’s facility is a repurposed building in Keystone Industrial Port Complex, a 3,500-acre business park in Fairless Hills, where the company is using the modular data center approach to expand capacity gradually. Schneider Electric manufactures the modules, which it designed together with the data center provider.

    Each module can hold up to 22 IT racks, and modules can be stacked on top of each other. Keystone’s data center design provides for six-module blocks, each of them consisting of two adjacent three-module stacks.

    Keystone NAP uses a 50-ton crane in the building to move the modules and customer equipment. Upper-level modules are accessible by stairs. (Photo: Keystone NAP)

    The first module arrived at the site in late 2015. Six are up and running today. The time it takes to get a module manufactured, shipped, and commissioned varies, but it is safe to expect a 90-day delivery from commitment to live date, Parker said.

    Keystone’s site has capacity to house and support 30 times its current load, or up to 180 modules total, across the current building and an expansion. Because it sits on the site of a former steel mill, it has access to a virtually limitless amount of power as far as data centers are concerned. “We really don’t have a capacity limit,” Parker said. “It’s just a matter of continuing to add transformers.”

    Flexible Infrastructure, Managed Services

    The company offers flexibility in power density and level of infrastructure redundancy – capabilities all data center providers will need in the future. Fewer and fewer customers are content with colocation deals that lock them into a single power density, while more and more are assessing their application uptime needs intelligently and seeking lower-cost, lower-redundancy data center infrastructure for non-critical apps.

    Another trend Keystone is taking advantage of is the growing number of enterprise customers looking for full-service data center providers that will help them set up things like network connectivity, disaster recovery, security, and cloud. To cater to those needs, Keystone acts as a managed service provider in addition to providing data center capacity.

    Aria Health, for example, uses the provider’s colocation services, managed virtual disaster recovery, cloud services delivered by one of Keystone’s partners, and IP network and bandwidth services for its three main hospitals.

    No Relation to Steel Orca

    The Keystone NAP data center is in the same business park where another data center provider, called Steel Orca, attempted and failed to establish a data center. Steel Orca was going to build a greenfield data center there, but the project never got off the ground, and the company filed for bankruptcy last year.

    Parker, who provided some outside consulting services to Steel Orca, said there was no connection between the two companies and their respective data center projects.

    4:00p
    Can You Achieve Zero-Downtime Cutover with Data Migrations?

    Wayne Lam is CEO of Cirrus Data.

    When migrating data for enterprises, the most important requirement is not disrupting the 24×7 operation in any way. These systems are typically allowed only short, annually planned downtimes for maintenance. Data migration involves moving data from an old storage system to a new one, then switching all the application servers from the old storage to the new.

    In most cases the two storage systems are different, both in make and model, and certainly in firmware versions. With advanced migration appliances, the appliance can be inserted into the live environment transparently, without any disruption, and data is then migrated from the old storage to the new. The migration process can be set to have minimal impact on ongoing I/O by intelligently yielding to production traffic. Up until the application-server cutover, zero disruption can be achieved.
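
    To make that yielding behavior concrete, here is a minimal Python sketch of a background copy loop that backs off whenever production traffic is busy. The load probe, threshold, and function names are hypothetical stand-ins for this sketch, not any vendor's actual mechanism:

    import time

    def production_io_load() -> float:
        # Hypothetical probe of current production I/O utilization (0.0-1.0).
        # A real appliance would measure in-flight production requests; this
        # stub reports an idle system so the sketch runs as-is.
        return 0.0

    def migrate_lun(src, dst, block_size=1 << 20, busy_threshold=0.5):
        # Copy a source LUN to a destination block by block, pausing the
        # background copy whenever production traffic exceeds the threshold.
        offset = 0
        while True:
            while production_io_load() > busy_threshold:
                time.sleep(0.05)  # yield to production traffic
            src.seek(offset)
            chunk = src.read(block_size)
            if not chunk:  # reached the end of the source LUN
                break
            dst.seek(offset)
            dst.write(chunk)
            offset += len(chunk)
        return offset  # total bytes copied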

    Cutting over to the new storage is another story. Here’s a review of exactly what is involved in a typical server cutover.

    When a host is switched from one storage system to another, even if the LUNs hold the exact same data, the OS sees a whole new set of LUNs. There is no mechanism to simply “switch” from one LUN to another. For SANs, the closest possibility is to use multipathing to fool the system by pretending the new storage is just another set of paths to the original LUNs. By impersonating the identity of the original LUNs, the new LUNs can be treated as new paths to them, so the multipath driver can switch to the new set of paths dynamically, replacing the original LUNs with the new ones.
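
    The trick works because multipath drivers identify a LUN by the identifier it reports, not by the physical array behind it. As an illustration, Linux dm-multipath groups paths by WWID (reported via SCSI Inquiry VPD page 0x83), which the following Python sketch mimics; the device names and WWID are made up:

    from collections import defaultdict

    # Each path advertises the WWID its LUN reports. If the new storage
    # impersonates the old LUN's WWID, its paths join the existing group.
    paths = [
        {"dev": "/dev/sdb", "wwid": "360014051234abcd", "array": "old"},
        {"dev": "/dev/sdc", "wwid": "360014051234abcd", "array": "old"},
        {"dev": "/dev/sdd", "wwid": "360014051234abcd", "array": "new"},
    ]

    groups = defaultdict(list)
    for p in paths:
        groups[p["wwid"]].append(p["dev"])

    for wwid, devs in groups.items():
        # All three devices collapse into one multipath device, so I/O can
        # move to the new array's path without the host seeing a new LUN.
        print(f"LUN {wwid}: paths {devs}")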

    In theory, this is zero-downtime cutover. Mission accomplished! As someone once said: “In theory, there is no difference between practice and theory. In practice, there is.”

    To do what is described above, the new storage would have to add so much functionality that it is doubtful any storage vendor would have the stomach to even contemplate it. To impersonate another storage system, the inquiry string, vendor-critical data pages, and many mode pages need to be dynamically changeable. In addition, the ALUA information has to be aggregated and emulated perfectly, and the path configuration must preserve and support SCSI reservations for cluster operation. Then there are all the different operating systems across all the hosts. Furthermore, all of these operations would have to be conducted by third-party migration appliances via standard vendor-provided APIs.
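
    To give a sense of the scope, here is an illustrative, non-vendor-specific inventory of the per-LUN state a perfect impersonation would have to carry over; the field names are this sketch's own:

    from dataclasses import dataclass, field

    @dataclass
    class LunIdentity:
        # Per-LUN state a perfect impersonation would have to replicate.
        inquiry_string: bytes = b""                       # INQUIRY data: vendor, product, revision
        vpd_pages: dict = field(default_factory=dict)     # vital product data pages, incl. 0x83 IDs
        mode_pages: dict = field(default_factory=dict)    # caching, control, vendor mode pages
        alua_states: dict = field(default_factory=dict)   # target port group access states
        reservations: list = field(default_factory=list)  # SCSI persistent reservations for clusters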

    It is probably easier to get the United Nations to agree on a world peace plan than to get all the vendors to support such an initiative. One also has to remember that this dynamic transfer of state must be done for every LUN, in the middle of live I/O. If any I/O is not handled properly, data corruption will be the result.

    People tend to be unaware that zero-downtime cutover is generally not a permanent requirement; even the most critical operation has scheduled downtime for maintenance. More often than not, people want to migrate and cut over because they need to remove the old storage, whether due to a problem, the end of a lease, or a requirement for the better performance of the new storage.

    An approach that is both theoretically sound and practically feasible is to enhance the migration appliance so it can sustain the operation using the new storage it migrated to, allowing the old storage to be removed. This way, the user controls the timing of the cutover to the new storage while waiting for the annual scheduled maintenance window to remove the appliances.
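
    One way to picture that lifecycle is as a small state machine; the phase names below are hypothetical, sketched for illustration only:

    from enum import Enum, auto

    class MigrationPhase(Enum):
        # The host keeps running against the appliance while the
        # backing storage changes underneath it.
        INSERTED = auto()     # appliance transparently inserted into the live data path
        MIGRATING = auto()    # background copy to new storage, yielding to production I/O
        NEW_PRIMARY = auto()  # appliance serves I/O from new storage; old storage removable
        REMOVED = auto()      # appliance taken out during a scheduled maintenance window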

    Of course, all the technical details mentioned above still need to be handled, but it is the migration appliance that implements all the necessary functions, rather than requiring every new storage system to handle them. No cooperation from anyone else is required, as long as the storage meets the overall standard specifications, which it must, otherwise it would not work in the first place. This is a lot more realistic: all the burden now falls on a single party, the people who build the migration appliances.

    This is still a huge technical challenge. Zero-downtime data migration, from installation to cutover, is what you should expect to see in next-generation data migration appliances.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:26p
    IBM Launches Cloud Data Center in South Africa

    IBM has expanded its cloud data center infrastructure to South Africa. The company announced the launch of a new IBM Cloud site in Johannesburg Tuesday.

    The company has been investing in expanding its global cloud data center footprint in recent years as it competes with cloud services giants like Amazon and Microsoft. IBM’s cloud data center network now consists of close to 50 sites around the world.

    IBM has partnered with African mobile and fixed-line operator Vodacom and IT services firm Gijima Group on the launch. Using Vodacom’s data center infrastructure, the companies will resell IBM’s cloud managed services in South Africa and other countries on the continent.

    The move extends the reach of SAP’s enterprise software services delivered on IBM’s cloud infrastructure to South Africa. IBM became a cloud infrastructure provider for SAP HANA Enterprise Cloud in 2014.

    Gijima has been providing services around SAP products in the region already and now will resell IBM’s cloud managed services to the existing SAP customer base there. Vodacom will resell the services as well.

    8:19p
    Going to the Cloud: Stories from the Frontlines

    You hear a lot from vendors and leading industry experts at data center shows, and they have a lot of good, insightful information to share. But some of the most interesting sessions are often people talking about projects they’ve done in their own data centers or IT departments, sharing stories of their successes and failures. These are rare glimpses of theory applied in practice – stories that most of the time don’t travel beyond the circle of those immediately involved.

    A number of real-world stories about enterprise IT shops making the transition to cloud – public and private, IaaS and SaaS – are lined up for next week’s Data Center World Global conference in Las Vegas. Here are some of the highlights from the list of data center frontline stories you’ll have a chance to hear:

    Juniper IT Goes from 18 Data Centers to 50 Racks

    Bob Worrall, senior VP and CIO of the networking technology giant Juniper, will share his experience transforming the company’s IT infrastructure by modernizing its application platform and migrating to cloud. The company set a target for a zero data center footprint for corporate IT in order to innovate faster and improve how corporate applications functioned. Between 2011 and January 2016, Juniper went from 18 data centers to just one with only 50 racks, resulting in significant cost savings and increased efficiency.

    School District Transitioning to Cloud

    Andrew Moore, CIO of the Boulder Valley School District in Colorado, will talk about his experience transitioning an organization that serves more than 34,000 end users to the cloud, from moving calendars, email, and collaboration systems to the more complex transition of the ERP and other mission-critical applications. Topics will include security, access from anywhere, organizational considerations and ways to manage them effectively, change management needs, the transition from a capital to an expense budget model, and reducing the physical space needed to house traditional on-premises data centers.

    Healthcare Provider Switches to as-a-Service Model

    St. Joseph Health System data center manager Shawn Arcus and the system’s VP of infrastructure and operations Robert Rice will talk about the key cost considerations they’ve learned to weigh before changing the service model. Before changing any service model, it is wise to know your current internal cost and which portions of that cost, including fluctuating and hidden costs, may change under the new model.

    Medical Center Implements SaaS

    Joe Furmanski, director of data center facilities and technology at the University of Pittsburgh Medical Center, will share his organization’s experience with implementing Software-as-a-Service: Where is SaaS a good fit? When may it be better to host on-premises? Who manages the application, and how can it impact your enterprise? Where does your data reside, and how should you approach disaster recovery planning and the management of software upgrades and features? What are your direct costs beyond the SaaS fee? And what is the impact on your network and security?

    Mapping Software Company Explores Public Cloud

    John Parker, disaster recovery and global data center operations manager at ESRI, a mapping software services company, will talk about understanding the cost of public cloud, including cloud VMs, networking, storage, data transfer, and other expenses. In a separate session, he will talk about identifying the applications that are the best fit for public cloud, his company’s experience transitioning to the cloud, and the unexpected issues that arose as a result.

    Join these IT leaders and 1,300 more of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.

    11:47p
    HPE Rolls Out Storage Server for Cloud Data Centers
    By The WHIR

    Hewlett Packard Enterprise has completed its open infrastructure portfolio with an open, cloud-optimized storage server, the company announced Tuesday. HPE Cloudline servers and HPE Altoline network switches are also being updated to meet the demands explosive data growth places on service providers.

    The HPE Cloudline CL5200 is a high-density, multi-node storage server built on open design principles for the adaptability and IT integration necessary in multi-vendor management environments. It supports up to 80 large-form-factor hard drives, for up to 640 TB of storage in a 4U chassis (80 drives at 8 TB each).

    HPE cites a 451 Research report suggesting that implementation of managed services and alternative technologies like OpenStack are differentiation opportunities for service providers in the quickly growing and evolving hosting and IaaS markets. That market is set to double by 2019 from $60 billion in 2015, while organizations shift workloads to increasingly specialized infrastructure and application environments. HPE also expanded its partnership with Scality in January, and Scality RING includes certified support for the CL5200.

    “Service providers are looking for the operational agility and faster data center integration that comes with deploying open infrastructures, while others value the integrated management and proven deployment capabilities that come with industry-standard offerings,” Reaz Rasul, vice president and general manager, Global Hyperscale Business, HPE said in a statement. “HPE is the only vendor that offers a choice of standard or open infrastructure solutions to accelerate business growth and provide service providers with the flexibility required to scale rapidly and cost-effectively.”

    The HPE Altoline 6900 family is being joined by four new switch models, allowing customers to tailor their networking infrastructure to specific application and workload requirements. Service providers will also be able to scale quickly at low cost with any product and avoid vendor lock-in, HPE said. The fastest of the new switches supports 25/100 Gbps networking.

    Cumulus and Pica8 can be used as open source network operating systems with HPE Altoline switches, and OpenSwitch support is promised later this year.

    Read more: HP Partners With Arista on Data Center Switches

    New reference architectures for hyperscale, multi-data center operations running on HPE Helion OpenStack with Swift represent the company’s commitment to scaling open storage, the company says. HPE touts the solution’s ProLiant Gen 9 servers and its performance-to-footprint ratio as part of a broader pitch to service provider partners.

    That pitch also includes a set of programs and services specifically for service providers, such as HPE Datacenter Care Services for Hyperscale, HP Service Provider Ready Solutions, and HPE Partner Ready Program.

    The company’s avowed commitment to open source and open-design service provider solutions gives HPE a target market to replace the one it lost when it closed its public cloud offering in January. Since that strategic move was announced, HPE has also released a Docker solutions portfolio and begun selling Microsoft Azure. HPE capped 2015 by presenting its vision of the future of enterprise computing.

    This first ran at http://www.thewhir.com/web-hosting-news/hewlett-packard-enterprise-adds-to-service-provider-offerings-with-new-cloudline-servers

