Data Center Knowledge | News and analysis for the data center industry
Thursday, August 20th, 2015
12:00p
Exclusive: RagingWire Takes Its Massive-Scale, Luxury-Amenities Data Center Model to Texas
Everything is bigger in Texas, as the saying goes, and Reno, Nevada-based RagingWire should fit right in. The data center provider, majority-owned by Japan’s NTT Communications, is planning its first Texas data center, bringing to Dallas-Fort Worth its penchant for massive campuses with the luxury feel of W hotels.
The company, whose customer list includes Twitter, is preparing to buy a 42-acre tract of land in Garland, a fairly large city immediately to the northeast of Dallas. Garland officials approved an economic incentive package for the company earlier this week.
Dallas-Fort Worth is one of the top and fastest-growing US data center markets. While Garland is part of the metroplex, it hasn’t seen the kind of data center activity its neighbors Richardson, Plano, Carrollton, and Dallas have seen in recent years. City officials hope to change that, and the deal with RagingWire is their first big move.
While RagingWire’s team was shopping for a site in the Dallas market and focusing on the cities already active in data centers, Garland officials approached the company early on, Bruno Berti, director of product management at RagingWire, said in an interview.
After some examination, the city turned out to be well suited to the company’s purposes. It offered cheap power and sat next door to Richardson, where a big chunk of the Dallas market’s data centers are located. That proximity means a lot of fiber-optic network infrastructure is “a stone’s throw away,” as Doug Adams, RagingWire’s senior VP and chief revenue officer, put it.
Although RagingWire will have to build more fiber to connect its future campus to that infrastructure, it will not be over a long distance. “Because we are in the key area within the Dallas marketplace, there is a lot of fiber nearby, but it is a few miles away,” Berti said. “We will have at least two to four separate paths from those fiber points into our facility.”
The city has offered the company low-cost power, property tax breaks, and some subsidies for fiber construction as incentives, Adams said. Garland operates its own utility, called Garland Power and Light, which is why it is able to use energy rates as leverage.
At full build-out, the campus will be about 1 million square feet in size, consisting of five buildings with 16 megawatts of data center capacity each. The company will offer everything from retail colocation to 4MW vaults, Berti said. For the first time, RagingWire also plans to sell capacity in 1MW increments.
Customers that prefer to stay close to their servers will get about 1,000 square feet of office and storage space per 1MW of data center capacity they lease, Carl Lubawy, the company’s director of critical facilities design and development and architect of the future Texas site, said. The offices will be interconnected, so nobody will have to face the scorching Texas heat during their work day if they don’t want to.
Like RagingWire’s recently opened CA3 data center in Sacramento, California, its Texas campus will have amenities that will “make it feel like a W hotel,” Adams said. Its third Sacramento data center, launched earlier this year, has a sleek modern interior design, a gym, a game room, and a climbing wall.
Dallas-Fort Worth will be the company’s third market. Besides Sacramento, it has a data center campus in Ashburn, Virginia. But expect RagingWire to continue adding push-pins to the map. Dallas happened to be the first of a handful of major US data center markets the company wants to step into – an aggressive expansion plan enabled by access to funding that came with the NTT acquisition two years ago. Other potential markets on the list are New York, Chicago, Silicon Valley, and another metro in the west, such as Los Angeles, Seattle, Phoenix, or Denver, Adams said.
He views aggressive expansion as a must for a data center provider that wants to survive and especially thrive in today’s market. Large colo companies and cloud providers are doing really well, while the smaller niche players aren’t, he said. “We needed to really scale to do well. That ability was hampered by our ability to raise cash” before the NTT acquisition. The company’s growth plans were outpacing its ability to raise capital at reasonable interest rates, which is why the NTT deal was attractive.
RagingWire’s scale is also no longer limited to the US. Its customers can now tap into NTT’s vast and rapidly growing global data center empire. The NTT acquisition gave the Reno company the cash to expand on its home turf, as well as the global reach that is so important to competing in today’s data center services market.
1:00p
Mesosphere and Cisco Partner on Turnkey Data Center Platform for Open Source Tech
Looking to make it simpler for IT organizations of all sizes to embrace emerging open source technologies like Apache Spark, Kafka, Cassandra, and Akka, Mesosphere today unveiled the first in a series of turnkey platforms based on its Datacenter Operating System.
Developed in collaboration with Cisco, the Mesosphere Infinity platform makes use of Mesosphere DCOS to provide a higher level of abstraction that enables IT organizations to deploy complex technologies via a single click, Mesosphere CEO Florian Leibert said. The end result is an unprecedented level of data center efficiency in terms of both scale and manageability, according to him.
“We can now manage 10,000 servers using a single engineer,” said Leibert. “The previous industry average was somewhere between 100 to 500 servers.”
Aimed at IT organizations looking for ways to deploy complex open source technologies and drive a new generation of Big Data and Internet of Things applications, Mesosphere Infinity is built around DCOS, which in turn is based on open source Apache Mesos cluster manager software, originally developed at the University of California at Berkeley.
Infinity takes that data center operating system concept to the next logical level by addressing the deployment and operational complexities of these technologies and packaging all the key component services into a single interoperable stack. That stack can then be deployed in the cloud or on-premises, using either the community edition of Mesosphere or the commercially supported distribution that Mesosphere sells to enterprise IT organizations. If one server crashes, Leibert said, Infinity automatically redeploys the stack on the elements of the cluster that are available.
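To make that abstraction concrete, here is a minimal, hypothetical sketch of what programmatic deployment against Marathon, the scheduler that DCOS builds on, can look like. The endpoint address, app ID, and container image are placeholders invented for illustration; this is not Mesosphere's Infinity packaging format.

```python
# Illustrative sketch: submitting a containerized service to a Marathon
# endpoint (the scheduler DCOS builds on). The URL, app id, and Docker
# image below are hypothetical placeholders, not Mesosphere's actual
# Infinity package definitions.
import json
import urllib.request

MARATHON_URL = "http://marathon.example.internal:8080"  # hypothetical endpoint

app_definition = {
    "id": "/demo/streaming-service",    # illustrative app id
    "cpus": 1.0,
    "mem": 1024,
    "instances": 3,                     # Marathon keeps this many running
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/streaming-service:latest"},
    },
}

def deploy(app):
    """POST an app definition to Marathon's /v2/apps endpoint."""
    req = urllib.request.Request(
        MARATHON_URL + "/v2/apps",
        data=json.dumps(app).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(deploy(app_definition))
```

Because the scheduler tracks the declared instance count, tasks lost to a failed server are restarted elsewhere in the cluster, which is the failover behavior Leibert describes.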
While the first instance of Infinity will initially focus on IoT and Big Data applications, Mesosphere plans to extend the reach and scope of the platform to other application scenarios as part of an effort to make DCOS more accessible to the average enterprise IT organization.
In addition to Cisco, Mesosphere has recruited partner support from Elodina, a provider of Big Data-as-a-service applications and a leading contributor to the Apache Kafka messaging project; Typesafe, a leading contributor to Scala, the programming language in which Spark, Akka, and Kafka are all written; and Confluent, a provider of a stream processing platform founded by the creators of the Kafka project.
For its part, Cisco is using these technologies to drive both internal projects as well as including them as foundational components within its hybrid Intercloud initiative.
While it’s unclear just yet how many organizations actually have the level of IT maturity needed to adopt Mesosphere Infinity, what is certain is that very few of them have the time or skills needed to manually configure complex clusters. As a result, Leibert said, the only way many of them will ever be able to deploy technologies such as Spark and Kafka is to embrace a higher level of abstraction that automates the process.
Mesosphere is reportedly in acquisition talks with Microsoft.
3:00p
Transparency: Customers Demand It – Can Data Centers Provide It?
Byron Miller is the Senior Vice President of Operations at FORTRUST.
Knowledge is power. As a colocation customer, can you imagine having real-time information on the power, cooling, and relative humidity being supplied to your IT assets available 24/7 on your mobile device? Plus, what if there were a way to verify whether or not your SLAs are being fulfilled?
The colocation data center industry is in the midst of a sea change. Real-time visibility is in demand, and the era of transparency is upon us.
A data center partnership requires a lot of trust. Clients trust that they will receive the amount of power they need, the network connection and uptime they paid for, a well-maintained facility without risk from disasters, and a building supplied with equipment that will cool and power effectively, while also being monitored and maintained by a trained and attentive staff.
Without a metric or tool to check whether these SLAs are being met, they become promises a customer trusts their data center is fulfilling. A customer will only know otherwise when something goes wrong.
A customer that needs to add more power, only to learn there is none available, is left between a rock and a hard place. Or worse, clients pay for power they don’t need, costing their business thousands and taking power away from other data center clients. In other cases, servers might overheat and damage equipment, but without visibility into a monitoring system, no one knows until the situation becomes dire and costly.
Unless colocation customers plan to work next to equipment and monitor all of these variables 24 hours a day, they will never know.
However, this is changing. With the advent of Data Center Infrastructure Management (DCIM), new customer-friendly tools are emerging that give colocation clients complete visibility and transparency. These tools are incredibly convenient and offer efficient management for data center staff.
Colocation customers can see whether their SLAs are being met, as well as access real-time data about their colocated environment. They can also trend data on their colocated environment regarding power usage, temperature, and relative humidity. The alternative to this would be manually taking readings for a single point in time.
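As an illustration only (not FORTRUST's tool or any particular DCIM product), the customer-facing SLA check described above boils down to comparing streamed readings against contracted thresholds. Every sensor name, metric, and threshold in the sketch below is hypothetical.

```python
# Hypothetical illustration of an SLA check over DCIM readings.
# The data structures, thresholds, and readings are invented for clarity;
# they do not represent any specific DCIM product or contract.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str          # e.g. "cab-42-supply-temp"
    metric: str          # "temp_c", "humidity_pct", or "power_kw"
    value: float

# Illustrative contracted ranges per metric.
SLA_RANGES = {
    "temp_c": (18.0, 27.0),        # supply air temperature band
    "humidity_pct": (20.0, 80.0),  # relative humidity band
    "power_kw": (0.0, 4.8),        # per-circuit draw ceiling
}

def sla_violations(readings):
    """Return the readings that fall outside their contracted range."""
    out = []
    for r in readings:
        low, high = SLA_RANGES[r.metric]
        if not (low <= r.value <= high):
            out.append(r)
    return out

sample = [
    Reading("cab-42-supply-temp", "temp_c", 26.1),
    Reading("cab-42-humidity", "humidity_pct", 17.5),   # out of range
    Reading("cab-42-circuit-a", "power_kw", 3.9),
]
for violation in sla_violations(sample):
    print(f"SLA violation: {violation.sensor} = {violation.value}")
```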
Data center managers can ensure clients receive the precise amount of services they need and have paid for, allowing them to better manage and allocate resources.
There is a finite amount of power and cooling in any data center, and incorrectly allocating resources can be an egregious error. DCIM systems make it straightforward to assess when to expand, when to add more capacity, and when to direct cooling to specific areas of the data center. Monitoring tools allow for an intricate review of the data center, revealing potential issues before they progress.
With all of the incredible benefits, why don’t all data centers have DCIM tools?
Two reasons. First, implementing DCIM is costly, particularly for a legacy data center, and taking on that cost may simply not be a financial possibility. Second, each data center is different, and there is no “one-size-fits-all” solution.
Recently at AFCOM, a room of roughly 100 data center professionals was asked to raise their hands if they had implemented DCIM. About one-third of the hands went up.
There are DCIM tools and DCIM software available on the market, but options are limited, and it is difficult to find a solution because no two data centers are alike. The number of data points that need to be monitored varies from one data center to the next. For example, at FORTRUST we monitor close to 100,000 data points throughout our data center infrastructure.
Some data centers can’t do branch circuit monitoring at the circuit breaker level, so they can’t track power at the individual circuit level, which makes power utilization monitoring difficult. The problem with many DCIM tools is that they are not easily customizable by the end user. This has left DCIM developers scrambling to create a solution and fill the gap in the marketplace.
After researching and testing multiple Building Management Systems (BMS) and DCIM tools, I found that none worked for our facility. We finally found a system that allowed user-friendly customization for our specific needs without translators, black boxes, or additional cost from the manufacturer for customization. From that DCIM system we created our own tool, giving colocation customers 24-hour, real-time access to an in-depth view of their colocated data center environment via web login.
If your data center doesn’t offer a similar tool or isn’t operating a DCIM system, it is time to ask if this will change in the future. In the interest of transparency and data center management, DCIM and customer login monitoring tools are the wave of the future. Be sure your provider is on board.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:30p
Microsoft Offers Windows 10 Edge Testing on Downloadable VMs
The jury still appears to be out on whether the use of Microsoft’s new Edge browser with Windows 10 is going to be as successful as the company originally thought. As reported in a Computerworld article, only between one-sixth and one-third of customers using the newest operating system have also embraced Edge.
In fact, the article also stated that in the first 16 days of August, the global average daily usage share was 0.7 percent for the browser, compared to Windows 10 at 4.4 percent. In an effort to boost confidence in the compatibility and ease-of-use between the two, Microsoft has made downloadable virtual machines (VMs) available for use by anyone wanting to test various scenarios for Windows 10 and Microsoft Edge, according to our sister site Windows IT Pro.
The VMs expire after 90 days and are available thus far for Hyper-V 2012, VirtualBox, and VMware, but Microsoft promises more formats are forthcoming. The company hopes that individualized testing will convince potential customers to use the new OS with its latest internet browser.
That’s not the only promise coming from its Redmond, Washington headquarters. Since the release of Windows 10 back in July, IT administrators have been asking for Remote Server Administration Tools (RSAT) compatible with Windows 10. The current version only works with Windows 7 and Windows 8.1, but Gabe Aul, the company’s corporate vice president, says the updated RSAT should be available by the end of August.
In other Microsoft news, Windows Insiders are in the process of previewing a new build of Windows 10, as reported by The Verge, and have thus far given feedback on default color options as well as memory enhancements that would reduce the amount of memory used per process so that the OS can keep more applications in physical memory at a time.
To read the original post and to learn about a bug you should be aware of in the Windows 10 VMs, visit http://windowsitpro.com/windows-10/download-vms-test-microsoft-edge-and-windows-10
5:26p
Seagate Drops $694M on Dot Hill to Boost its Cloud Business
Seagate announced it has entered into an agreement to purchase Dot Hill Systems for approximately $694 million. The external storage array systems and software from Dot Hill will fold into Seagate’s Cloud Systems and Electronics Solutions business.
Seagate expects the hardware, software, numerous patents, and private label OEM channel from Dot Hill to foster growth in its cloud business as it evolves. The Dot Hill patent portfolio across storage, data tiering, and the cloud is extensive, and its software is white-labeled inside some large providers, most notably HP and Dell.
Seagate President of Cloud Systems and Electronics Solutions Phil Brace noted the importance of OEM customers in advancing its strategic efforts and said that the company looks forward “to welcoming Dot Hill’s strong team, which has proven experience in developing and delivering best-in-class storage solutions that are trusted by the world’s leading IT manufacturers and their channel partners.”
After purchasing the flash business from LSI last year and other strategic acquisitions, Seagate is investing in hyperscale and cloud workloads.
It is also a founding member of the Kinetic Open Storage Project launched this week by the Linux Foundation. The company has been developing this open storage platform for a number of years, as it looks to eliminate the storage server tier of traditional data center architectures.
6:22p
Linux and Windows Servers to Be Cogs in One Data Center OS Wheel
While Mesosphere may or may not become part of Microsoft – that anonymously sourced report hasn’t been denied or confirmed – the two companies are certainly working to blend their technologies.
This morning at MesosCon in Seattle, Microsoft unveiled an open source project that ports Apache Mesos, the heart of Mesosphere’s Datacenter Operating System, onto Windows Server. The end goal is to give developers and IT ops a single interface to deploy and manage applications in Docker containers across infrastructure that consists of both Linux and Windows servers.
Mesos on Windows Server is already available on the Mesos GitHub, although at the moment it’s a proof of concept, and the companies have invited other developers to take part in the open source project.
Since the majority of the world’s servers run either Windows or Linux, the project’s potential addressable audience is quite large. The idea is to enable the new generation of applications that consist of so-called “microservices” packaged in individual Docker containers, Microsoft Azure CTO Mark Russinovich explained in an interview posted on the Mesosphere blog.
Container-enabled, microservice-oriented application development benefits a great deal from the orchestration tools and cluster management made possible by Mesos and by DCOS, Mesosphere’s commercial version of the open source software born at the University of California, Berkeley.
Mesos has been road-tested in some of the world’s largest-scale data center infrastructures, including, famously, Twitter’s and Apple’s, and Mesosphere’s aim is to take it to the enterprise. It abstracts disparate data center resources, be they cloud or on-prem VMs or physical servers, and presents them as a single pool of resources to the application, turning a data center or a cloud VM cluster into essentially a single computer. Hence the data center OS nomenclature.
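For a concrete sense of that single pool of resources, a Mesos master can be queried over its HTTP API for the agents it has aggregated and the CPU and memory they advertise. The sketch below assumes a reachable master at a placeholder address and uses the state.json endpoint of 2015-era Mesos.

```python
# Sketch: listing the agents ("slaves" in 2015-era Mesos terminology) that a
# Mesos master has pooled together, with their advertised CPU and memory.
# The master address is a hypothetical placeholder.
import json
import urllib.request

MASTER = "http://mesos-master.example.internal:5050"  # hypothetical master

def list_agents():
    with urllib.request.urlopen(MASTER + "/master/state.json") as resp:
        state = json.loads(resp.read().decode("utf-8"))
    for agent in state.get("slaves", []):
        res = agent.get("resources", {})
        print(f'{agent["hostname"]}: {res.get("cpus")} cpus, {res.get("mem")} MB')

if __name__ == "__main__":
    list_agents()
```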
Extending that pool of resources to include Windows would make for an even more all-encompassing data center OS, which is the aim of the new partnership announced today.
Earlier this week, a news report surfaced saying Microsoft was in buyout talks with Mesosphere, citing anonymous sources.
6:35p
Basic and Intelligent PDUs – Understanding the Differences
The technological landscape is evolving to the point where the modern data center is truly home to many of the latest cloud and content delivery platforms. Cloud computing, IT consumerization, and mobility have all impacted how we use data center resources and optimize power utilization. As the business becomes even more reliant on the data center, administrators are faced with a range of new challenges. These include:
- Power capacity management and provisioning
- Energy management
- Computing capacity demand
- Physical and network security
Organizations and data center administrators are constantly looking for ways to improve data center control and overcome these kinds of challenges. Consider this – a recent Ponemon Institute study showed that in 2013, the average cost of downtime was a staggering $7,908 per minute. The very same study also showed us that the cost of a data breach to a company is on average $145 per affected individual and $3.5M per incident. This means we’re dealing with real capacity, management, and even security challenges when it comes to data center control. This is where intelligence can begin to make a real difference.
In this whitepaper, we learn that data centers absolutely require a new way to optimize the use of power, space, cooling, and, of course, people. This means creating an architecture built around intelligence, one that can resolve some of the most pressing data center control challenges out there. Intelligent PDUs go well beyond power distribution: today, these units help tackle power, energy, capacity, and even management challenges. Furthermore, by directly integrating real-time data collection with the entire network architecture, you begin to introduce very real security into the physical and logical sides of the data center.
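As a purely hypothetical sketch of the capacity math that outlet-level metering enables (the PDU names, readings, ratings, and 80 percent threshold below are illustrative assumptions, not any vendor's API or guidance):

```python
# Hypothetical example: aggregating per-outlet readings from an intelligent
# (outlet-metered) PDU to flag units approaching their rated capacity.
# The readings, PDU names, and the 80% threshold are illustrative only.
RATED_KW = {"pdu-a1": 5.0, "pdu-b1": 5.0}  # hypothetical per-PDU ratings

outlet_readings_kw = {
    "pdu-a1": [0.42, 0.51, 0.48, 0.39, 0.55, 0.61, 0.44, 0.50],
    "pdu-b1": [0.20, 0.18, 0.22, 0.25],
}

def capacity_report(readings, ratings, threshold=0.80):
    """Flag PDUs whose summed outlet load exceeds a fraction of rating."""
    for pdu, outlets in readings.items():
        load = sum(outlets)
        utilization = load / ratings[pdu]
        flag = "OVER THRESHOLD" if utilization > threshold else "ok"
        print(f"{pdu}: {load:.2f} kW of {ratings[pdu]:.1f} kW "
              f"({utilization:.0%}) {flag}")

capacity_report(outlet_readings_kw, RATED_KW)
```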
Download this whitepaper today to see how intelligent PDUs help meet the demands of increased computing capacity, improve the overall safeguarding of your infrastructure, and make it much easier to manage your critical data center assets.
7:22p
Equinix Brings Direct Links to Amazon’s Cloud to Osaka Data Center
Equinix has given its data center customers in western Japan the option of plugging directly into Amazon’s infrastructure cloud inside its Osaka data center.
This is the second Equinix data center in Japan to offer AWS Direct Connect, fourth in Asia-Pacific, and ninth worldwide. The service is also available at an Equinix facility in Tokyo.
AWS Direct Connect and comparable services by Amazon competitors, such as Microsoft Azure, IBM SoftLayer, or Google Cloud Platform, cater to security- and performance-conscious enterprise IT shops that want to use public cloud services but don’t want to reach them over the public internet. The services give them a direct, private cross-connect from their servers in a colocation data center to the servers of the cloud provider of their choice hosted in the same facility.
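On the AWS side, ordering a Direct Connect port is an API call. The sketch below uses boto3; the location code and connection name are hypothetical placeholders, and the valid code for a given Equinix facility comes from the describe_locations response.

```python
# Sketch: requesting an AWS Direct Connect port with boto3. The location
# code and connection name below are hypothetical placeholders; the valid
# codes for a given colocation facility come from describe_locations().
import boto3

dx = boto3.client("directconnect", region_name="ap-northeast-1")

# Discover the location codes AWS advertises for this region.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], "-", loc["locationName"])

# Request a 1 Gbps port at a (hypothetical) facility code.
connection = dx.create_connection(
    location="EqOS1",                      # hypothetical Equinix Osaka code
    bandwidth="1Gbps",
    connectionName="osaka-colo-to-vpc",    # illustrative name
)
print(connection["connectionId"], connection["connectionState"])
```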
Equinix is one of close to 70 data center providers and telco carriers offering AWS Direct Connect services around the world. Carrier Level 3 offers the service in the most locations, including data centers in the US, Latin America, Europe, and Asia-Pacific.
For data center providers like Equinix, such services are a way to take advantage of all the user growth the public cloud giants are seeing, while protecting their colocation business from shrinking as a result of companies moving applications to the cloud. They also make those customer deployments in their data centers “stickier.”
Providing private links to public clouds has been the fastest-growing source of revenue for Equinix.
Other Equinix data centers in Asia-Pacific where AWS Direct Connect is available are in Sydney and Singapore. In the US, it offers the service in Silicon Valley, Seattle, and Northern Virginia. It’s also available at Equinix data centers in London and Frankfurt.
9:21p
Microservices: Coming Soon to a Data Center Near You
Like it or not, the practice of IT management is going through some fundamental changes. Thanks to the rise of microservices, modern applications are made up of hundreds of containers that collectively are more resilient than traditional VMs but perhaps more challenging to manage.
Instead of thinking of VMs as “cattle” that can be easily replaced, or “pets” that need to be cared for, the future of virtualization in the data center will more closely resemble an ant colony where the ants are application containers, Adam Johnson, VP of business for Midokura, a provider of network virtualization software, said. Should tens or hundreds of containers suddenly get wiped out, there will be another thousand standing by to take their place.
“Applications are going to consist of little services made up of multiple containers,” he said. “That will abstract away much of the process of scheduling and delivery inside the data center.”
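As a toy illustration of that ant-colony view (not Midokura's software or any orchestrator's actual API), the core control loop amounts to reconciling a desired replica count against whatever happens to be alive:

```python
# Toy reconciliation loop illustrating the "ant colony" view of containers:
# the desired replica count is what matters, not any individual instance.
# All names here are invented for illustration.
import uuid

def reconcile(desired_replicas, running):
    """Return the set of container ids after replacing any that died."""
    running = set(running)
    while len(running) < desired_replicas:
        running.add(f"container-{uuid.uuid4().hex[:8]}")  # schedule a replacement
    return running

# Start with 5 desired replicas, then "lose" two and reconcile again.
containers = reconcile(5, [])
survivors = list(containers)[:3]          # simulate two containers dying
containers = reconcile(5, survivors)
print(f"{len(containers)} replicas running after reconciliation")
```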
Johnson is speaking at the Data Center World conference in National Harbor, Maryland, this September, where he will discuss how traditional approaches to managing VMs in the data center will soon fall by the wayside.
That abstraction layer, provided by tools such as the open source cluster management software Apache Mesos and Kubernetes, Google’s open source orchestration tools, will fundamentally change the way organizations think about IT operations, Johnson added.
In theory at least, it won’t be uncommon one day for a single administrator to manage a hundred times as many server instances as they typically do today. The fundamental challenge IT operations teams now face is not so much how each of those containers will get managed, but how to prepare today for a new IT reality that at this juncture is all but inevitable.
For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Adam’s session titled “DevOps for Networking: Pets or Cattle?”