Data Center Knowledge | News and analysis for the data center industry

Wednesday, July 29th, 2015

    12:00p
    The Floating Data Center Idea Isn’t Dead

    Take all the mechanical and electrical infrastructure necessary to support 8 MW of critical load, add the IT load itself, and put everything on a floating barge. What could go wrong?

    According to two San Francisco Bay Area entrepreneurs, not much. In fact, Arnold Magcale and Daniel Kekai believe a floating data center can be safer and, importantly, cheaper to own and operate. They believe it so much that they are building one at a dock on a US Navy shipyard about 20 miles northeast of San Francisco.

    Unlike Google’s tide-powered floating data center idea – which as far as we know is just a patent – or the Google barge that used to be docked at Treasure Island on the San Francisco Bay, which turned out to have nothing to do with data centers, this floating data center project is real and is currently under construction.

    Like an Equinix or a Digital Realty, the startup, called Nautilus Data Technologies, will provide data center colocation services. It plans to build multiple floating data centers in different places and offer services at competitive rates, positioning the energy efficiency of its technology, its security, and the low cost of aquatic real estate as its competitive advantages.

    Magcale, Nautilus co-founder and CEO, argued that a waterborne data center is safer from fires and earthquakes than a data center on land, and that it can easily be moved from one place to another in case of emergency.

    Over the past six years the startup has been designing the facility, including its own patent-pending cooling system, which uses the water the data center is floating on. It is also building its own cloud orchestration platform and data center infrastructure management software, which the founders say will take advantage of machine learning to optimize its floating facilities for efficiency.

    Navy Shipyard Proves Ideal

    After a successful proof-of-concept, Nautilus is building out four 2 MW data halls on a 250-foot barge docked at Mare Island, a peninsula that’s part of the City of Vallejo and home to the Mare Island Naval Shipyard. Being at a military base adds to the security element of the project, and the founders hope to build the other data centers in military locations as well. Mare Island also had lots of excess power capacity and a shipbuilder that fit the startup’s needs.

    The current plan is to build a total of five of these data vessels in various locations. Each floating data center will be docked when in operation, getting power and network connectivity from land.

    The company is funded by a family that invests in the green energy sector, but Magcale declined to name the investors. The first vessel is costing Nautilus about $3 million per 1 MW of capacity, he said.

    Magcale has worked in data center operations and engineering roles since the late ‘90s. His co-founder Kekai, the company’s data center and cloud infrastructure architect, has had an internet-infrastructure career of similar length. Both founders worked at Exodus Communications, one of Silicon Valley’s first colocation giants, which went bankrupt during the dot-com bust. The two have also been colleagues at Microsoft, Motorola, and Quantum Capital Fund.

    Efficiency, Fast Deployment Key to Strategy

    They hope they can convince retail colocation and wholesale data center customers that the highly unorthodox solution can be more advantageous than traditional colos on land. According to the founders, one of the big advantages besides disaster avoidance is efficiency of the cooling system.

    A closed cooling loop runs through each data center rack, collecting heat from the IT gear. The loop then transfers that heat through a heat exchanger to cold ocean water drawn directly from underneath the barge, which is then expelled back into the ocean. The system is extremely efficient because it requires no cooling towers, chillers, or air handlers.
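
    To get a feel for the scale involved, here is a rough back-of-the-envelope sketch in Python. The 8 MW load comes from the article; the 10°C temperature rise across the heat exchanger and the seawater properties are assumptions for illustration only, not Nautilus figures.

        # Rough estimate of the seawater flow needed to reject a given IT heat load.
        # Assumed values for illustration only -- these are not Nautilus figures.

        IT_LOAD_W = 8_000_000      # 8 MW of critical load (from the article)
        DELTA_T_C = 10.0           # assumed temperature rise of the seawater, deg C
        CP_SEAWATER = 3_993.0      # approximate specific heat of seawater, J/(kg*K)
        RHO_SEAWATER = 1_025.0     # approximate density of seawater, kg/m^3

        # Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
        mass_flow_kg_s = IT_LOAD_W / (CP_SEAWATER * DELTA_T_C)
        volume_flow_l_s = mass_flow_kg_s / RHO_SEAWATER * 1000

        print(f"Mass flow:   {mass_flow_kg_s:,.0f} kg/s")
        print(f"Volume flow: {volume_flow_l_s:,.0f} L/s")

    Under those assumptions, the full 8 MW build-out needs on the order of 200 liters of water per second, which helps explain why drawing water from directly beneath the barge is central to the design.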

    Another advantage is quick deployment. The Nautilus team expects to complete the 8 MW barge in six months. The founders claim they can replicate the timeline anywhere in the world.

    Of course, every location will have its unique complications. In the US, for example, the company had to ensure the project would be permitted by the US Coast Guard and comply with environmental regulations. The thinking is that if Nautilus can pull it off in California, one of the more tightly regulated states, it can do it anywhere, Magcale said.

    The barge at Mare Island is only the first iteration of the concept. Kekai is already thinking about future-generation barges that will use fuel cells as a primary source of power, relying on the electrical grid for backup, he said. Together with the cloud-service provisioning system and the advanced DCIM solution Nautilus is developing, the company promises to have a fairly sophisticated, modern data center solution.

    But, whether they’ll be able to make the case that a data center on water can be better than a data center on land remains to be seen. Given the notoriously conservative nature of data center users, Nautilus is facing a steep climb.

    3:00p
    Avoiding Private Cloud Pitfalls

    Alex Henthorn-Iwane is responsible for marketing and public relations for QualiSystems.

    There’s a decent amount of private cloud bashing going on these days, and it appears the bashers have grounds for their critique. According to Gartner, between 2011 and 2014 the total number of virtual machines (VMs) tripled, yet the percentage of VMs living in private clouds didn’t budge from 3 percent. Public cloud VMs, on the other hand, rocketed from 3 percent in 2011 to 20 percent in 2014.

    In fact, prominent Gartner analyst Thomas Bittman recently released a report titled “Internal Private Cloud Is Not for Most Mainstream Enterprises.” That’s a pretty damning statement in and of itself. So, is the best way to avoid private cloud pitfalls to avoid private clouds altogether?

    I don’t believe that’s what Bittman intended. I think his point is that most mainstream enterprises should focus their “agility” initiatives on public cloud applications and teams running in full DevOps mode, while traditional IT moves toward something less ambitious that isn’t a private cloud in the true sense.

    Regardless of what name you apply to it, there’s no doubt that IT needs to move faster across the board. If you’re dealing with applications and infrastructure that aren’t ready to move to the private cloud, then how do you move forward? Here are a few guidelines to help increase your chances of success:

    Start With a Single Solution Cloud

    You should first focus on a single team, which allows you to put all your efforts toward satisfying a clearly identifiable client organization. In fact, Bittman writes, “Private cloud projects will usually fail if they are scoped too large, move too fast or are developed without an evolving, comprehensive road map for services and processes.” Even if you have big plans, restrict the scope so that you don’t generalize so much in trying to meet everyone’s requirements that you end up meeting no one’s in particular.

    Understand Deploy Differences

    Make sure you know the difference between supporting developers and testers and supporting production application deployments. It’s very common to start private cloud initiatives with development/test use cases. However, the way infrastructure is allocated differs between the two.

    First, production application deployment patterns are like one-way trips. Infrastructure resources are allocated to application components and remain deployed until they are decommissioned, which could be years later.

    Development/test use patterns, on the other hand, are like round-trips. Users need to be able to access infrastructure environments (anything from a single VM to a hybrid physical/virtual environment with networking switches, etc.) for a relatively short period, from hours to weeks. Once they’re done, the resources need to come back into the shared pool and be re-baselined so they’re ready to be deployed by the next user. A sketch of this round-trip pattern follows below.

    Having a clear handle on these differences is a good way to ensure that you don’t miss the requirements of your starter team and use case. Keep double-clicking on the requirements, particularly if the use case is new to you. IT and developers often haven’t spent a lot of time communicating, so if you’re trying to serve developers and testers, you’ll need to offer a complete infrastructure solution. If the team you’re trying to serve works across legacy/physical and virtual infrastructure, there’s no choice but to address all of those infrastructure elements. The alternative is to miss the use case.
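
    To make the round-trip pattern concrete, here is a minimal sketch of a shared environment pool with time-boxed check-out, check-in, and re-baselining. The class and method names are invented for illustration and do not refer to any particular cloud management product.

        from dataclasses import dataclass
        from datetime import datetime, timedelta
        from typing import Optional

        @dataclass
        class Environment:
            """Anything from a single VM to a hybrid physical/virtual lab."""
            name: str
            leased_to: Optional[str] = None
            lease_expires: Optional[datetime] = None

        class SharedPool:
            """Round-trip allocation: check out, use for hours to weeks, then return."""

            def __init__(self, environments):
                self._free = list(environments)
                self._in_use = {}

            def check_out(self, user: str, duration: timedelta) -> Environment:
                if not self._free:
                    raise RuntimeError("No environments available")
                env = self._free.pop()
                env.leased_to = user
                env.lease_expires = datetime.utcnow() + duration
                self._in_use[env.name] = env
                return env

            def check_in(self, env: Environment) -> None:
                self._rebaseline(env)          # next user gets a known-good state
                env.leased_to = None
                env.lease_expires = None
                self._in_use.pop(env.name, None)
                self._free.append(env)

            def _rebaseline(self, env: Environment) -> None:
                # Placeholder: revert snapshots, wipe test data, reapply baseline config.
                pass

        pool = SharedPool([Environment("dev-lab-1"), Environment("dev-lab-2")])
        lab = pool.check_out("tester-a", timedelta(days=5))   # short-lived lease
        pool.check_in(lab)                                    # back to the shared pool

    A production deployment, by contrast, would be a check-out with no planned check-in, which is why the two use cases place very different demands on the platform.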

    Procure and Allocate Automation Skills

    Cloud management platform products can help provide the scaffolding and substance needed to deliver a functional infrastructure-as-a-service offering for your starter use case. Unless you are fortunate enough to run a single-vendor infrastructure, you’re very likely to have to do some automation planning and integration. You simply won’t be able to achieve your goals without at least one person who is good at automation scripting and has a grasp of data models. That’s still a far cry from building everything from scratch with a big team of professional coders. The point, however, is that building infrastructure-as-a-service (IaaS) is automation, not automagic.
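
    As a sense of what that automation scripting looks like, here is a minimal sketch that stitches two internal systems into one self-service request. The endpoints, payloads, and environment variables are hypothetical placeholders, not the API of any real cloud management platform; the point is the glue code and the simple data model it returns.

        import os
        import requests  # third-party HTTP client (pip install requests)

        # Hypothetical internal endpoints -- stand-ins for whatever virtualization
        # and IP address management systems your environment actually exposes.
        VIRT_API = "https://virt.example.internal/api/v1"
        IPAM_API = "https://ipam.example.internal/api/v1"
        HEADERS = {"Authorization": f"Bearer {os.environ['AUTOMATION_TOKEN']}"}

        def provision_dev_vm(name: str, template: str, network: str) -> dict:
            """Reserve an IP, clone a VM from a template, and return one record tying them together."""
            # 1. Reserve an address from the IPAM system.
            ip = requests.post(f"{IPAM_API}/reservations",
                               json={"network": network, "hostname": name},
                               headers=HEADERS, timeout=30).json()["address"]

            # 2. Clone a VM from the template and attach the reserved address.
            vm = requests.post(f"{VIRT_API}/vms",
                               json={"name": name, "template": template, "ip": ip},
                               headers=HEADERS, timeout=30).json()

            # The returned record is the data model: one object that ties the VM, its
            # network identity, and its owner together for later reporting and teardown.
            return {"vm_id": vm["id"], "ip": ip, "owner": os.environ.get("USER", "unknown")}

        if __name__ == "__main__":
            print(provision_dev_vm("dev-web-01", "ubuntu-14.04-base", "dev-segment"))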

    The term “private cloud” is controversial at the moment. Terminology aside, you can still pursue the modernization of your on-premises infrastructure to increase efficiency and velocity. Your development, test, security, compliance, and other teams can work a lot faster and more productively via infrastructure as a service. By taking careful, well-planned, properly scoped steps, it’s possible to make IaaS happen.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:30p
    HP Buys ActiveState’s Cloud Foundry PaaS Stackato

    HP is acquiring ActiveState’s Stackato Platform-as-a-Service business in a bid to capture the developer market. Both the Stackato product and team members are joining HP. ActiveState will continue as a company, providing non-Stackato offerings and refocusing on its developer tools and programming language products.

    Stackato gives HP a developer-productivity weapon for its Helion portfolio, though it’s not an entirely new addition: HP and ActiveState have been partners since 2012, with the Stackato PaaS already running and tied to Helion.

    HP’s Helion cloud strategy is founded on OpenStack and Cloud Foundry, the open source PaaS born at VMware. Stackato will be incorporated into the HP Helion Development Platform, the vendor’s version of the Cloud Foundry PaaS launched in 2014. Stackato is also built on Cloud Foundry, but it adds much-desired Docker support. Linux containers are in high demand among developers, whom HP is targeting with this acquisition.

    The Helion portfolio as a whole is targeting the IT infrastructure modernization currently underway on a wide scale. Many applications are being tuned or retuned for cloud, with enterprises employing a hybrid of deployment models. HP is focusing on these hybrid cloud needs rather than competing on the public cloud front, having committed a $1 billion investment to hybrid cloud last year. In the background, HP itself is splitting into two companies, one focused on consumers and the other on the enterprise.

    PaaS plays an important role in hybrid cloud, acting as a middle layer between developers and infrastructure and also providing developer tools. ActiveState studies claim Stackato helps enterprise customers deliver applications 30 times faster than the traditional development cycle and with a 90-percent cost reduction.

    “HP’s acquisition of Stackato further demonstrates our commitment to Cloud Foundry technology and broadens our hybrid cloud capabilities,” HP said in a statement. “Expanding our presence in the Cloud Foundry community is critical to our strategy of helping enterprises transition from traditional IT systems to a hybrid infrastructure.”

    In 2014, Cloud Foundry saw close to a 35 percent increase in community contributions. The foundation behind the open source project is positioning the technology as a global standard for open PaaS, though there are opposing factions, such as Red Hat’s OpenShift.

    ActiveState teamed with Piston Cloud Computing on private PaaS in 2014, prior to Piston’s acquisition by Cisco this year. Piston founder Joshua McKenty recently left Piston to join Pivotal, an EMC company, as CTO.

    Pivotal took over Cloud Foundry development after being spun out of VMware and EMC, and later created the Cloud Foundry Foundation to give the project a more vendor-neutral governance structure.

    4:53p
    Intel and Micron Change How Non-Volatile Memory Works

    Intel and Micron announced the start of production on a new class of non-volatile memory, promising speeds up to 1,000 times faster than NAND flash. Intel says the new 3D XPoint (Cross Point) technology was developed using unique material compounds and a cross point architecture to enable not only the dramatic speed increase, but also 1,000 times greater endurance than NAND and 10 times greater density than conventional memory.

    The new chips will be produced at the IM Flash Technologies (IMFT) facility in Utah, a joint venture the companies formed back in 2006. After more than a decade of research and development, Intel said, the new 3D XPoint technology ushers in a class of non-volatile memory that significantly reduces latencies, allowing much more data to be stored close to the processor and accessed at speeds previously impossible for non-volatile storage. IMFT has introduced several new products, including 3D NAND flash memory last spring.

    The innovation lies in both the three-dimensional cross point architecture and the use of a bulk material property change, rather than an electrical charge, to store data; because the entire memory cell stores the state, the cells can be made smaller. This transistor-less approach differs from 3D NAND methods that rely on a floating gate.

    Intel called 3D XPoint the first new memory category since NAND flash was introduced in 1989 and said it is a fundamentally different class of memory. Rob Crooke, senior VP and general manager of Intel’s Non-Volatile Memory Solutions Group, said in a statement that the advancement will “enable us to scale computing even further.” Intel places 3D XPoint between memory and storage. “When we scale processing performance, memory and data can scale with it,” Crooke added.

    “One of the most significant hurdles in modern computing is the time it takes the processor to reach data on long-term storage,” said Mark Adams, president of Micron, in a statement. “This new class of non-volatile memory is a revolutionary technology that allows for quick access to enormous data sets and enables entirely new applications.”

    The potential applications for this new technology are vast, and both Intel and Micron say they are developing individual products based on it.

    5:13p
    SoftLayer Founder Crosby Shares Lessons of Success


    Story by TC Doyle for The WHIR

    Chairman and CEO of “To Be Announced.”

    A lot of business practitioners would have a hard time listing that as their current position in their LinkedIn profile. But not Lance Crosby.

    Crosby, of course, is the founder and former CEO of SoftLayer. In 2013, he sold the company to IBM for $2.1 billion. In February of 2015, he resigned his executive post as the General Manager of Big Blue’s cloud business. Since then, the industry has awaited his next move.

    Crosby has been tight-lipped for the most part about his plans, but he has made it clear through social media and other forums that he has some ideas under consideration. Before tipping his hand, the entrepreneur reflected on what made him successful and what lessons he can pass along to others in the technology hosting industry. On July 27, Crosby shared his ideas in a keynote address at the HostingCon Global 2015 conference underway this week in San Diego. His presentation to a packed crowd of several hundred hosting and technology professionals was a mix of Sun Tzu, Dale Carnegie, and Stephen Covey.

    For those who do not know his story, it’s worth recapping how SoftLayer came into being 10 years ago this week. Before launching the company, Crosby served as COO of one of the largest hosting companies in the business. Dissatisfied with the direction the company was taking, Crosby invited 11 colleagues to breakfast one morning in May 2005 to discuss their future. In the subject line of the email invite, he typed, “Time to Go.”

    At breakfast the following morning, he presented an idea for a new kind of technology company. The organization, he explained, would pre-rack and stack all of the technologies in the data center that customers needed to power up key services in under an hour. Though commonplace today, the idea was a revolution a decade ago. After breakfast, nine of the 11 invitees resigned. They did so rather cheekily, handing in a single joint resignation letter from the living room of Crosby’s home.

    For several weeks, Crosby and his team shared ideas and visions for their new cloud company. Try as they might, they could never get their vision codified in any way that would lead to a commercially viable business. Frustrated, Crosby sent the team home one Friday and asked that they all return with a solid plan the following Monday. None of them made any progress, however. Exasperated, he demanded that his team pitch him despite their reservations. The more he pushed, the more outlandish their ideas became. Initially, the plans were too expansive. Then they were too technically complex. But Crosby persisted in his demand that they refine their thinking and push through the status quo and pre-conceived ideas. And then the team began collaborating on an idea that was the inspiration behind SoftLayer.

    The team devised a cloud service that brought together three disparate networking services in a scalable, reliable way. Although the idea was prohibitively expensive on paper, as much as three times the cost of other designs, Crosby was convinced that the team was onto something.

    Go Big

    The idea gave birth to SoftLayer, which launched officially on July 28, 2005. From the experience, Crosby says he learned an invaluable lesson: when envisioning the future, don’t allow everyday limits to restrict your thinking. If his team had thought more conservatively, he told his audience, SoftLayer would never have been the disruptive force that it eventually became.

    To get to that point, SoftLayer had to survive some difficult times. The team’s former employer, for example, did not take well to the mass resignation that Crosby inspired. Hoping to slow down if not punish Crosby, it filed a lawsuit listing 179 causes of action against him and the new company. While such a burden would have derailed many entrepreneurs, Crosby dug in deep. Doing so taught him another invaluable business lesson: never let anyone push you around. What is more, he discovered, the more aggressive your adversary, the more fearful of disruptive innovation they may be.

    Brashness aside, SoftLayer ran into difficult financial times. Without a finished product to sell, the company’s financial liabilities started to pile up. Fearing for the company’s future, Crosby asked everyone at SoftLayer to put money into the company. He himself put $1 million of his own money into it. Others took out second mortgages on their homes, while still more hit up relatives for seed money. Even the company’s youngest employee, a 22-year-old engineer at the time, was pressured to invest. Lacking any cash, the young man was persuaded by Crosby to max out his credit card to the tune of $5,000.

    While some might think the move was overly aggressive if not predatory, Crosby said the gesture paid huge dividends—figuratively and literally. When everyone has “skin in the game,” he told his audience at HostingCon, the team becomes more committed. As for the $5,000 investment the young worker was “forced” to put into the business, it turned into a $16 million windfall for the individual when IBM bought SoftLayer.

    Try as it might to get by without a product, the company eventually ran into more money problems. So Crosby insisted that it go live with its product. His engineers objected because their work was not yet finished, but Crosby held firm. Truth be told, he had little choice: SoftLayer needed revenue, and it wasn’t getting any more money without sales. “We created continuous deployments on that day,” he told the crowd in San Diego.

    And live the company went. Though its service offered only four or so basic capabilities, it was enough to attract some early customers. “Good enough technology,” he learned, will suffice when perfect is not an option.

    With a product at the ready, SoftLayer generated $4 million in sales in its first year. Fueled with cash flow, Crosby sought out investors. Though turned down 72 times, he persisted. Finally, he found some angel investors willing to put money into the company. But he needed more money still. So he took out loans that charged 30 percent interest. Although the decision violated his sense of financial propriety, the entrepreneur in him knew it was the right thing to do.

    “People get hung up on rules or ideals when striving to build a business,” he said. “But I didn’t. I knew I could make money with that rate, so I took the terms.” He advises others do the same. When faced with an unprecedented opportunity, don’t let conventional thinking limit your actions.

    With momentum rolling, SoftLayer began attracting a great deal of attention. Sales grew to $4 million, then $16 million, then $40 million, and finally $80 million.

    The one thing he couldn’t shake, however, was the lawsuit that dogged him from the beginning. So Crosby did the unthinkable: he decided to cut a deal with the company that was suing him. Despite the animosity that existed between the two organizations, he believed a merger might be the best way forward. Initially he was rebuffed. But two weeks later, representatives from The Planet contacted him and said, “we’re interested.”

    Crosby cut the deal, and SoftLayer and The Planet became one. In doing so, the two organizations created the largest privately held hosting and cloud services provider in the world. For the next six months, Crosby painstakingly integrated the two companies. The work was particularly difficult because he had to work with people he had previously alienated. But then a wonderful thing happened: the companies meshed and growth resumed anew. The experience taught Crosby to look beyond personal feelings and do right by the company. It also taught him to be aggressive and propose the unthinkable when the moment calls for it.

    SoftLayer taught him additional lessons. One of the most important was timing. In the spring of 2012, he began to believe that the company needed greater scale to compete with the likes of Amazon and others. So he approached investors and told them it was time to sell. Most were stunned. We’re growing at 30-40 percent a year, they noted. But Crosby insisted. When IBM paid $2.1 billion in cash for the company, he was relieved, realizing another lesson, one played out in business a million times over: timing is everything.

    “Entrepreneurs start, run and then sell companies,” he said. “That’s what we do.” Though he said he learned a great deal at IBM, he realized that working for a big, established global power wasn’t for him. So he walked away.

    Looking back, Crosby said his experience at SoftLayer taught him volumes about himself and business.

    Among the lessons he learned: be bold and aggressive, and don’t back down from bullies. SoftLayer also taught him to act boldly and do the unthinkable in many instances. It reinforced how important timing is, and what it means to put skin in the game. It also taught him how important culture is to a company. Though young and hungry, he always made sure his employees had a world class healthcare plan and that their office environs were the best that he could afford. As a result, few bailed on him when times were tough.

    Finally, SoftLayer taught him to listen to his heart. An entrepreneur through and through, Crosby told his audience at HostingCon that a world of great ideas awaits. You can translate that to mean that he will fill in the blank on his LinkedIn profile where it says “Chairman and CEO of ‘To Be Announced’” someday soon.

    When he does, an entire industry will take note.

    This first ran at https://www.thewhir.com/web-hosting-news/softlayer-founder-shares-lessons-of-success-at-hostingcon

    7:09p
    Alibaba to Put Cloud Data Centers in Europe, Middle East

    Aliyun, the cloud computing business of the Chinese internet giant Alibaba, is planning to establish data centers in Europe and the Middle East as part of a $1 billion global expansion initiative announced today. Other new cloud data center locations will be in Singapore and Japan, the company said in a statement.

    Aliyun currently has five data centers in China and Hong Kong. It launched its first US data center in Silicon Valley earlier this year. A company executive said at a press conference in Beijing earlier this month that another Aliyun cloud data center would come online in the US within several months.

    As is common among cloud service providers, Aliyun’s cloud strategy, including its data center strategy, relies to a great extent on partnerships with other providers. The Chinese cloud company has used data center service providers to expand its footprint outside of mainland China.

    It used the Hong Kong utility and telco Towngas’ data center services to expand to Hong Kong last year, which was its first foray outside of mainland China. Aliyun hasn’t said much about its US data center strategy, but industry insiders have told us it is leasing its data center space in Silicon Valley from one of the biggest providers there.

    Aliyun has partnerships with Singtel, a Singapore telco which also happens to be the biggest data center provider on the island. It has a technology-services joint venture with Meraas, a government-owned Dubai company that’s building a tech-oriented master-planned community in Dubai that will include a data center where the joint venture will be an anchor tenant.

    The Chinese company has partnerships with Intel, the global data center provider Equinix, French managed services provider Linkbynet, and PCCW, a Hong Kong ICT company.

    Also today, Aliyun announced a partnership with Yonyou Software, the biggest enterprise software vendor in China. The collaboration will focus on cloud computing, Big Data, digital marketing, and e-commerce.

    8:10p
    NTT Acquires Indonesian Data Center Provider PT Cyber CSF

    NTT Communications has acquired Indonesian data center provider PT Cyber CSF as part of ongoing global expansion. Financial terms of the deal were not disclosed. Founded in 2012, Cyber CSF is headquartered in Jakarta. It operates an eight-story, 75,000-square-foot data center with 24 MVA capacity.

    Indonesia neighbors Singapore, the region’s biggest data center market and major commercial hub. Structure Research recently said that neighboring markets such as Indonesia and Malaysia were beginning to compete as well, although Singapore remains far ahead of both.

    “Indonesia is emerging as a viable data center market, and NTT has essentially decided to buy a platform and market presence in an up-and-coming market and gain some level of first-mover advantage,” said Jabez Tan, senior analyst at Structure. “Acquiring a quality asset in Jakarta positions NTT to further extend its unified infrastructure product set throughout a largely untapped Indonesian market.”

    Tan called the move to acquire the Indonesian data center provider a good representation of NTT’s data center strategy. “NTT is not trying to consolidate the market but rather is looking to build out its platform in strategically placed markets on a global basis,” he said.

    NTT’s global expansion is often fueled through acquisition. Its most recent prior acquisition was German data center provider e-shelter in a €742-million deal in March. It acquired majority ownership in RagingWire to strengthen its North American presence last year. In 2012, it acquired Gyron in the UK and NetMagic in India.

    A recent Asia Cloud Computing Association (ACCA) report noted that Indonesia, along with India, has a particularly healthy appetite for cloud services. Indonesia and China have the most SMEs.

    NTT said in a press release that the Indonesian Information and Communication Technology market is expected to average about 10 percent annual growth through 2017, exceeding growth in most other Southeast Asian countries.

    For that growth to materialize, however, both infrastructure and cloud education will need to mature. Cloud services are currently too expensive and infrastructure is not yet stable enough, according to ACCA. But continued investment from the likes of NTT is likely to change that situation.

    Beyond infrastructure, the big barrier to cloud adoption in emerging markets like Indonesia is a matter of education. The ACCA study found only three percent of SMEs in Indonesia knew the basics of cloud computing.

    Synergy lists NTT as the third-largest global data center provider and notes the company is very active in this year’s consolidation bonanza.

    8:19p
    iomart Receives £60 Million to Fund Further Acquisitions


    This article originally appeared at The WHIR

    Scottish cloud and managed hosting company iomart has received £60 million (approximately $93.5 million USD) from the Bank of Scotland to fund further acquisitions, the company announced Monday. iomart has made 12 acquisitions in the last five years, and is seeking to extend a period of rapid growth.

    Public cloud consultancy SystemsUp became the latest iomart acquisition in June in an estimated £12.5 million deal. Backup Technology, ServerSpace, and EQSN are among the other companies picked up by iomart since 2010.

    “Our ambition is to be the market leader in the UK for cloud and managed hosting services including the growing opportunity to provide hybrid cloud,” iomart CEO Angus MacSween said. “We will continue to look for well positioned businesses that can give us the specialist skills and knowledge we need to achieve that, knowing we have the firepower to move quickly when the right opportunity arises.”

    iomart reported an 18 percent increase in annual revenue to £65.8 million and a 14 percent increase in profit to £16.6 million for the year ending March 31, 2015. The company turned down a £300 million acquisition offer from German web host Host Europe Group a year ago, and picked up an international revenue boost earlier this month with a deal to provide backup services for IIJ Europe, leveraging its earlier Backup Technology acquisition.

    Among the few clues as to what company might be targeted by iomart next are MacSween’s references to “hybrid cloud” and “specialist skills,” both of which could be used to describe the services of recent iomart acquisitions. Hybrid cloud adoption could triple in the next three years, according to an April Peer 1 report.

    This first ran at https://www.thewhir.com/web-hosting-news/iomart-receives-60-million-to-fund-further-acquisitions

