Data Center Knowledge | News and analysis for the data center industry
Wednesday, August 21st, 2013
11:30a
Nutanix OS 3.5 Advances Virtual Data Center Operations
A new version of the Nutanix OS (NOS), the company’s intelligent software for building and operating virtual data centers, was released Tuesday, adding elastic deduplication, a more robust platform, and a user-centered design. NOS 3.5 delivers an intelligent architecture that understands data access patterns to adaptively manage storage utilization in real time and applies inline technologies to accelerate application performance.
“The release of Nutanix OS 3.5 represents a significant step forward in datacenter infrastructure,” said Rajiv Mirani, vice president of engineering, Nutanix. “The combination of accelerated performance, advanced storage optimization and consumer-like management will change long-held expectations about what is possible with converged infrastructure technologies and further accelerate market adoption.”
Built specifically for scale-out architectures, a new Nutanix Elastic Deduplication Engine accelerates application performance by spanning memory, flash and hard disk drive resources simultaneously in any storage offering or converged infrastructure platform. It continuously analyzes read I/O access patterns and granularly deduplicates data to deliver the highest possible performance and to maximize effective storage capacity.
With the release of NOS 3.5, Nutanix is introducing Nutanix Prism – a comprehensive management framework that includes a completely new interface to simplify the user experience and a REST-based API that enables Nutanix to be easily integrated into enterprise or cloud management systems. Built from the ground up, the new Prism UI organizes and presents all system and VM information for easy consumption, immediate insight and action. The Prism API, which includes a rich explorer for simple and discrete sampling of individual API commands, allows all reporting and management functionality to be accessed programmatically from a cloud orchestration system such as OpenStack or VMware vCloud Automation Center.
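For readers curious what programmatic access to a REST-based management API of this kind can look like, here is a minimal sketch in Python; the host name, endpoint path, and credentials are hypothetical placeholders, not documented Prism API calls.

```python
# Hypothetical sketch of polling a REST-based infrastructure management API.
# The host, endpoint path and credentials are placeholders, not documented Prism calls.
import requests

MGMT_HOST = "https://mgmt.example.internal:9440"  # hypothetical management address
VMS_PATH = "/api/v1/vms"                          # hypothetical resource path

def list_vms(user: str, password: str) -> list:
    """Fetch the VM inventory as a list of dictionaries."""
    resp = requests.get(MGMT_HOST + VMS_PATH, auth=(user, password), timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for vm in list_vms("admin", "secret"):
        print(vm.get("name"), vm.get("powerState"))
```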
“Nutanix pioneered the creation of Hyper-converged solutions that dramatically simplify IT infrastructure, reduce costs and improve manageability,” said Arun Taneja, founder and consulting analyst, Taneja Group. “With NOS 3.5, Nutanix is upping the ante by adding their Elastic Deduplication Engine, which improves the performance of the memory and flash layers with software-driven, inline deduplication technology. This, along with a simplified UI and the availability of Prism API, should propel Nutanix and Hyper-convergence into mainstream enterprise data centers.”
12:00p
IaaS provider Ajubeo Secures Capital to Support Growth
Virtual data center and cloud Infrastructure as a Service (IaaS) provider Ajubeo has secured additional capital from Grey Mountain Partners, a Boulder-based private equity firm. The new capital will fund continued expansion in the New York metro area, product development, staffing, and sales and marketing.
The capital comes in the wake of the company’s 600% quarterly revenue growth over the past 12 months.
“The cloud infrastructure market continues to expand at a rapid pace,” says Chuck Price, president and CEO of Ajubeo. “The question is no longer whether infrastructure-as-a-service is a viable alternative to traditional methods. Instead, customers are architecting their IT-based products and services around a diversified cloud strategy. This shift has led to strong demand for high-performance, production-ready cloud services from providers such as Ajubeo.”
Ajubeo is a national provider of high-performance, enterprise-class cloud infrastructure-as-a-service, founded and built by CIOs for CIOs. The company launched in May 2012 in the Denver market and has since deployed cloud hubs in reliable, interconnected data centers in Denver, New York, and London.
“We are excited to support Ajubeo’s leadership team, Chuck Price and Colorado CIO of the Year Tom Whitcomb, as they expand and enhance their industry-leading cloud infrastructure offering,” said Marcello La Rocca, affiliate manager at Grey Mountain Partners. “Ajubeo continues to deliver on a strong business plan, providing performance-sensitive customers with a 100% uptime cloud solution that has been measured at twice the performance of current market share leaders such as Amazon Web Services.”
Ajubeo’s IaaS offering includes enterprise-class Virtual Datacenters, Virtual Desktops, Backup, Disaster Recovery as a Service and Restore as a Service. It focuses on providing scale, integration and compliance for enterprise virtual infrastructure, and its cloud services are backed by a 100% uptime SLA.
The name Ajubeo is derived from Latin phrases which translate roughly to “beginning with strong relationships, mastery and order.”
12:30p
Taking a Leap Forward in Efficiency with Real-Time PUE
Patrick Flynn is lead sustainability strategist at IO, equipping customers with the tools they need to improve data center performance.
PUE 101
Introduced by The Green Grid in 2007, Power Usage Effectiveness (PUE) has become the de facto standard metric for tracking the energy efficiency of data centers. While data center designs have evolved tremendously to manage data more efficiently, the approach to PUE has remained constant, even in the face of ongoing concerns about its limitations. Those concerns underscore the industry’s need for a new methodology for measuring energy efficiency.
PUE is the ratio of the total energy that comes into a data center to the energy that reaches and is used by Information Technology (IT) equipment. The energy that reaches the computing equipment is considered productive, while energy used for infrastructure (e.g. cooling, lighting, security, system inefficiency) is auxiliary and viewed as waste. That waste is the place to look for efficiency gains. Data centers strive for a PUE of 1.0, which represents a hypothetical, perfectly efficient data center in which energy is used exclusively to power IT and there is no loss or overhead in the system.
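To make the ratio concrete, here is a minimal worked example in Python; the kilowatt figures are invented for illustration and do not describe any particular facility.

```python
# Worked example of the PUE ratio described above; the kW figures are invented.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

total_kw = 1500.0  # everything entering the facility: IT load, cooling, lighting, losses
it_kw = 1000.0     # power that actually reaches servers, storage and network gear

print(round(pue(total_kw, it_kw), 2))  # 1.5 -> 0.5 W of overhead for every watt of IT load
```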
Troubles with Today’s PUE Approach
Though widely used, PUE has industry-acknowledged shortcomings in terms of accuracy. For starters, PUE does not measure how efficiently a data center performs its end purpose, which is to conduct digital work. The data center’s job is not to deliver energy to IT equipment or infrastructure but to do useful and productive computing; the fact that power is reaching IT equipment does not mean that equipment is doing useful work.
The second inaccuracy is that PUE very rarely provides a reliable way to compare among data centers. Without controlling for variables like location, size, design, and data sets, PUE cannot tell you with accuracy which data center is performing better.
Accuracy aside, there are also serious (but rarely discussed) questions about the utility of today’s PUE. Typically, PUE is measured for entire facilities. A mixed-use building may house any number of functions, such as data centers, labs, and offices. For these types of mixed-use environments, determining the power usage of just the data center environment is difficult, especially when some systems share power or cooling infrastructure.
Also, PUE measurement is a challenge in a colocation data center, which is a mixed-use facility from the perspective of having multiple customers. To Tenant A, the neighbors’ power, efficiency, equipment and overhead are unknown, and do not contribute to the energy that reaches the IT gear of Tenant A. For PUE to be useful to Tenant A, it must be specific to their defined infrastructure and data systems, even – and perhaps especially – if they are a small fraction of a larger shared data center infrastructure.
Further reducing the utility of today’s PUE is the fact that it is typically calculated retroactively, looking at energy consumption over a historical period of time and then calculating an implied average power usage over the period. Such an approach masks any volatility in PUE and doesn’t create timely feedback to operators.
The Solution: Real-Time PUE
With a data center industry mandate for greater energy efficiency, we need to evolve to newer, enhanced measurement models. Though a retroactive, building-averaged PUE may serve to confirm overall progress, it falls short of helping to pinpoint opportunities for improvement. Service providers must strive to provide customers information that allows them to improve performance and make better business decisions. This is why IO is increasing the usefulness of energy efficiency measurement through an evolved methodology called real-time PUE, as measured through the IO.OS operating system.
Real-time PUE measures power efficiency instantaneously and provides a level of granularity down to the individual server. This level of specificity is made possible by our software defined data centers, which are able to capture live data from across the infrastructure, enabling monitoring, measurement, benchmarking and continuous improvement.
Additionally, our modular data center designs allow system administrators to pinpoint where power is being used and where improvements are possible. By taking the analytical lens down to the server level, we are one step closer to tying PUE to actual digital work.
For example, a company may only want to assess the performance of one data module, or how much power is being used to cool the systems. With real-time PUE, this is possible. By contrast, using the traditional PUE calculation – summing all electrical input to the entire system and attempting to allocate a portion of that energy to the data module of interest – this may prove impossible or misrepresent PUE for that particular module. This results in misallocated costs and poorly informed operational decisions.
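As a rough illustration of the module-level view described above, here is a minimal sketch assuming submetered power feeds for each module; the module names and readings are invented, and a production system such as IO.OS would pull these values from live meters rather than hard-coded numbers.

```python
# Sketch of per-module, point-in-time PUE computed from submetered feeds.
# Module names and readings are invented examples, not measured data.
from dataclasses import dataclass

@dataclass
class ModuleReading:
    name: str
    it_load_kw: float         # power measured at the IT equipment inside the module
    cooling_kw: float         # cooling power attributable to this module
    other_overhead_kw: float  # lighting, power-train losses, etc. allocated to it

    def pue(self) -> float:
        total = self.it_load_kw + self.cooling_kw + self.other_overhead_kw
        return total / self.it_load_kw

readings = [
    ModuleReading("module-A", it_load_kw=180.0, cooling_kw=45.0, other_overhead_kw=12.0),
    ModuleReading("module-B", it_load_kw=95.0, cooling_kw=31.0, other_overhead_kw=9.0),
]

for r in readings:
    print(f"{r.name}: PUE {r.pue():.2f}")
```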
Initial conversations with companies are generating strong interest in implementing real-time PUE methodology. There is a hunger to understand data center performance all the way into the applications layer. When our industry achieves this level of measurement, we will have arrived at a truly comprehensive measure of efficiency, inclusive of the work output.
Working from Where We Are
We recognize that traditional PUE measurement has a place in current customer assessments of data center efficiency and environmental sustainability. To inspire the industry to rethink data center design, IO partnered with Arizona Public Service (APS) on a comparative, independent third-party study. The goal of the study was to level-set PUE by evaluating both construction-based and modular data centers. IO runs its own data centers and operates both traditional and modular environments, so we were in a position to reliably evaluate the power-efficiency differential between Data Center 1.0 and 2.0 designs.
This month, APS released its report, showing that IO’s manufactured, modular data center approach achieves 19 percent energy cost savings compared with the traditional construction-based environment. In its research, APS analyzed 12 months of data from both IO.Anywhere modules and the traditional build-out located at the IO.Phoenix multi-tenant data center.
APS monitored PUE for calendar year 2012 and found the data center 1.0 environment had a PUE of 1.73, while the data center 2.0 modular environment had a PUE of 1.41. The portion of PUE above 1.0 denotes energy not going to IT equipment, and that’s where efficiencies can be found. We’ve reduced this portion from 0.73 down to 0.41 in our switch to the Data Center 2.0 technology, representing a 44 percent reduction in energy spent on infrastructure versus IT equipment. Over the course of the year, IO achieved annual savings of $200,000 per MW of average IT power within the modular build-out.
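The 44 percent figure follows directly from the two overhead numbers quoted above, as this short calculation shows:

```python
# Reproducing the overhead-reduction arithmetic from the APS figures quoted above.
overhead_dc1 = 1.73 - 1.0  # energy beyond the IT load in the traditional environment
overhead_dc2 = 1.41 - 1.0  # energy beyond the IT load in the modular environment
reduction = (overhead_dc1 - overhead_dc2) / overhead_dc1
print(f"{reduction:.0%}")  # -> 44%
```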
Just as our industry pushes the boundaries with data center technology innovations, we must all continue to evaluate and challenge the status quo around measuring data center efficiency. Data center infrastructure technology has gone through a transformation over the last decade, yet a gap remains between the way the industry measures PUE and the more useful information needed to continue driving performance.
At IO, we believe real-time PUE is the logical next step for the industry, especially as modular data center designs become the standard. But it is just one critical step on the path to achieving breakthrough cost and efficiency gains. While providing data center users with more useful efficiency measurement enables better decision-making, it does not by itself recommend actions to take. This is where data center analytics come into play – a very interesting area of innovation for the industry. Evolving our category’s thinking on PUE is an instrumental step in improving data center efficiencies and will serve CIOs, CFOs, facilities managers, end-users, and the planet.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:17p
Colovore Enters Silicon Valley Market
There’s a new player on the Silicon Valley data center scene. Colocation startup Colovore is moving into Santa Clara, where it plans to deploy high-density data center space.
Colovore is preparing to commence construction at 1101 Space Park, a 24,000 square foot property owned by veteran developers Pelio & Associates. The building is in the heart of the Valley’s “Data Center District” in Santa Clara, just across the street from the area’s primary connectivity hub and an electrical substation.
As it seeks to break into one of the industry’s most competitive geographies, Colovore is seeking to differentiate itself by focusing on customers needing high-density power loads of 20 kW per cabinet and beyond. The Colovore team says the appetite for converged infrastructure and highly-virtualized environments is driving demand for denser deployments that many existing facilities in the Santa Clara market aren’t designed to support.
“For us, it’s really about power density,” said Ben Coughlin, the Chief Financial Officer of Colovore. “Our belief is that the current legacy data center space doesn’t cut it these days. We see a major opportunity. We’re building for 20 kW per rack, wall to wall.”
Experienced Team
Colovore President and co-founder Sean Holzknecht knows the Bay Area market from his previous post as Vice President of Operations at Evocative, the Emeryville provider that was recently acquired by 365 Main. Chief Technology Officer Peter Harrison arrives from Google, where he was a Senior Technical Program Manager on the data center team. Coughlin brings experience in the private equity sector, where he worked with Camden Partners and Spectrum Equity.
The company has raised $8.1 million in funding, and hopes to have space available for customers in late 2013.
“Out of the gate, we’ll allow companies to scale vertically instead of horizontally,” said Coughlin. “What providers in the Bay Area can offer is 4 to 5 kW a rack, along with some focused high density space.”
Coughlin said Colovore wasn’t yet ready to discuss the details of its facility design, but that it would involve innovations in a raised-floor environment. The company won’t be using modular data centers, which were the focal point of an earlier effort to develop 1101 Space Park as a “container colo” environment.
Density as a Differentiator
Colovore’s focus on high density offers a way for the startup to differentiate itself in the highly competitive Santa Clara market, where virtually all of the data center industry’s major players are marketing space. But it can be done, as shown by Vantage Data Centers, which entered the Silicon Valley market in early 2011 and leased 20 megawatts of space in Santa Clara in less than two years.
The high density strategy also offers a way to generate as much revenue as possible from the compact footprint at 1101 Space Park.
“Power is the real variable,” said Coughlin. “We feel very good about our differentiation. Density is a topic that doesn’t get discussed that much because the incumbent providers don’t have it. We talk to a lot of system integrators and VARs, and they ask a lot about the converged infrastructure trend. They can’t find a place to deploy it. It’s great for us.”
There’s a historic tendency for data center customers to ask for higher power density than they will actually use. But Coughlin says the focus on power efficiency in recent years has led to a better understanding of density requirements.
“We want to spend a lot of time with customers, getting to know them and understanding their needs,” said Coughlin. “I think customers are more aware of their needs and the nature of their power draws.”
Colovore will focus on the colocation market, rather than data center suites.
“Wholesale is not our game,” said Coughlin. “We’ll leave that to the big boys.”
1:30p
CoreSite Gets Data Center Lease Extension In Denver
CoreSite Realty has extended the term of its lease and expanded its capacity at its DE1 facility in Denver, Colorado. CoreSite now has contractual rights to control its lease term into 2024 and has secured additional power capacity to scale to further customer requirements. It’s good news for customers, who can benefit and scale right along with CoreSite.
“We are pleased to execute the agreements supporting additional growth in the Denver market,” said Tom Ray, CEO at CoreSite. “We look forward to further enabling our customers in the region to propel revenue growth through expanded capacity, connectivity and extensive communities of networks, cloud service providers, content organizations and enterprises.”
The facility is among the most interconnected in the Rocky Mountain region and a growing component of the CoreSite Mesh, a large national fabric of deeply connected facilities. Data Center Infrastructure Management provider FieldView acts as one component of CoreSite’s larger technology transformation to cloud-enabled, connected data centers.
CoreSite operates Any2 Denver, the largest peering exchange in the region’s growing interconnection market, providing access to over 25 networks. The Denver campus consists of DE1 at 910 15th Street and DE2 at 639 East 18th Avenue.
The company is in overall expansion mode, recently announcing a facility in Secaucus, a 280,000 square foot building with a first phase of 65,000 square feet. The Secaucus facility follows the launch of CoreSite’s previously announced 15th data center, located in Reston, Virginia. CoreSite’s national platform spans nine U.S. markets, and these markets are increasingly connected as part of the CoreSite Mesh initiative.
2:00p
DCK Webinar: Data Center Build vs. Buy
When you need data center space, do you build your own or buy space from a data center wholesale company? Join Data Center Knowledge contributor Julius Neudorfer on Wednesday, September 4 for a special webinar in which Julius will discuss Build vs. Buy, the age-old question for many industries.
When making the decision to build vs. buy, in some cases the choice is driven by hard number-crunching business analytics, while in other cases it is driven by human emotion. On one hand, decisions made in the design and build stages will not only affect the total CapEx of the data center, but will forever impact its energy efficiency and long-term operating costs, as well as limit the functional life of the data center. On the other hand, today’s market offers a wide array of facility providers that lease and operate fully managed sites. How do you determine which solution is best for your business?
While there are a multitude of factors to be considered, in the end you will have to ask if building, owning and operating your own data center is a strategic advantage to your business, or just a burden on internal resources and capital.
Title: Data Center Build vs Buy
Date: Wednesday, September 4, 2013
Time: 2 pm Eastern / 11 am Pacific (Duration 60 minutes, including time for Q&A)
Register: Sign up for the webinar.
This one-hour webinar will examine the more detailed aspects, such as site selection and risk factors, choices when considering a “Buy” option, factors impacting the total cost of ownership, and more.
Following the presentation, there will be a Q&A session with your peers, Julius and industry experts from Digital Realty. Sign up today and you will receive further instructions via e-mail about the webinar. We invite you to join the conversation.
3:00p
4 Key Cloud Considerations For Data Center Managers
We’re beginning to move along in the cloud conversation. More organizations are seeing the benefits of moving toward some type of cloud infrastructure. Yet even as use cases for cloud migration are developed, concerns remain around execution and finding the right cloud model. With the evolution of the cloud platform, it’s a good time to look at a few considerations when working with the modern cloud.
Remember, there are a lot of new technologies spanning the cloud platform. We now have various “as-a-service” models and even more virtual and cloud workloads being delivered.
In working with cloud computing, organizations can quickly see how they’ll benefit from such a powerful platform. However, four major considerations must still be addressed by many data center and IT managers:
- Data migration and control. Initially, this was a major concern for organizations. Where is my data being stored? How is it being accessed? Do I even have control over it? Now, better technologies that are directly in line with the cloud mentality are easing these concerns. Consider this: new data loss prevention (DLP) and onsite cloud options are providing administrators with granular control over their data and how it’s being distributed. Furthermore, storage systems are becoming more “virtualization aware” and cloud ready. This means that some enterprise SAN systems are now able to live-migrate data from one controller or disk aggregate to another. Now, imagine being able to do that from one cloud platform to another. Powerful stuff.
- WAN, Cloud and Bandwidth. Cloud computing is, at a high level, just the transmission of data over the Wide Area Network or the Internet. Sometimes organizations use their own private cloud platform, but for the most part they still have considerations around bandwidth and WAN utilization. One of the biggest challenges around the cloud is understanding how the WAN and bandwidth play a role. When designing a cloud solution, remember the following (a rough sizing sketch follows this list):
- Number of users connecting – concurrently.
- Number of applications, desktops or workloads residing in the cloud.
- The amount of traffic to be pushed through the WAN – whether private or public.
- Distance that the data has to travel; WAN optimization may need to be considered.
An organization can have the best technologies deployed at the data center level. However, if the proper bandwidth and WAN considerations aren’t applied, the user experience will suffer.
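Here is a back-of-the-envelope sizing sketch based on the factors above, assuming a hosted-desktop style workload; the per-session bandwidth and headroom figures are rough assumptions for illustration, not vendor numbers.

```python
# Rough WAN sizing from the factors listed above; all figures are illustrative assumptions.
def required_wan_mbps(concurrent_users: int, mbps_per_session: float,
                      headroom: float = 0.3) -> float:
    """Estimate WAN capacity for a cloud-hosted workload, with growth headroom."""
    return concurrent_users * mbps_per_session * (1 + headroom)

# Example: 400 concurrent users on a hosted desktop averaging roughly 0.25 Mbps each.
print(f"{required_wan_mbps(400, 0.25):.0f} Mbps")  # -> 130 Mbps
```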
- Application compatibility in the cloud. This is part of the beauty of the cloud: for the end-user there is complete transparency, while at the data center level there is complete control. Cloud computing works very closely with virtualization and, because of that, creates a simpler platform for application compatibility. For example, you’re able to run a 32-bit application as well as a 64-bit application in the same environment while still presenting both apps under the same portal to the user. Application virtualization in conjunction with cloud computing has come a long way. Products like Citrix XenApp create a platform where heterogeneous applications can live side by side peacefully while still being presented in a single interface to the cloud user.
Working With the Right Cloud Model
There are now four distinct cloud models to choose from. Each has a specific use-case and will benefit an environment differently. At a high-level, these are the four major cloud environments to work with:
- Private – If your organization needs to stay locked down but has remote users who need to access a central data center for applications, desktops, or files, this is the right model for you. In this scenario, you are deploying your own hardware and software to facilitate the cloud solution.
- Public – Many times organizations need to test out an application or a set of servers. Instead of buying equipment, they’ll “rent” it. In this pay-as-you-go model, IT managers can provision VMs in a public cloud environment, work with them as long as needed, and then decommission those servers. All of this will have little to no impact on the local corporate data center.
- Hybrid – In some cases, two cloud platforms need to be combined, for example to test production data against a public cloud VM. Here, organizations create a secure connection between the public and private environments – effectively creating a hybrid cloud.
- Community – This is the most recent addition to the cloud model family. Imagine an application being hosted by a provider. Instead of provisioning the same application for each customer on individual VMs, the provider logically segments connections coming into its environment and allows multiple customers to connect to its cloud platform to access that single application. This is, effectively, a community cloud.
As with any newer technology, don’t let the concept overwhelm you. Cloud computing offers flexibility in the deployment options. By far, the most important part of any cloud (or IT) initiative will be the planning phase. When it comes to cloud computing, think short and long term. The power of an elastic cloud can allow an organization to scale as business needs dictate, but a poorly implemented cloud solution can become a fast budget drainer.
In moving to a cloud solution, understand the options and see the ROI. With improvements in API structures, cloud platforms (and data center infrastructure), as well as the ability to seamlessly integrate cloud services, it’s time you explore the power of the cloud.
3:30p
Intel’s Shannon Poulin to Keynote Data Center World Fall Conference
The emergence of converged infrastructure and the “software-defined data center” presents many ways to make IT operations more agile and powerful. But data center managers must first sort out the many options for taking advantage of these technologies, implementing them in their facilities, and aligning data center operations with business goals.
AFCOM, the association for data center managers, is here to help with a full plate of education and information programs at Data Center World Fall 2013, scheduled for Sept. 29 to Oct. 1 in Orlando. This year’s conference theme is “Aligning the Data Center With Business Strategy,” which includes a new track on data center business trends.
The keynote address will be presented by Shannon Poulin, vice president of Intel’s Datacenter and Connected Systems Group, who will provide insight into Intel’s initiatives to re-architect data center infrastructure and enable IT managers to become more service-driven than ever before.
“Data centers are increasingly faced with supporting and delivering rapid service,” said Tom Roberts, president of AFCOM and chairperson for Data Center World. “Intel is leading the industry in its efforts to deliver the scale and efficiency required across network, storage and servers to meet the demands of a services-oriented mobile world. We are excited that our attendees will gain insight on new approaches to delivering that service from an industry leader like Intel.”
“We have a vision for a software-defined infrastructure that will help IT accelerate the delivery of new services, make it easier for businesses to enter new markets, and enable IT to get closer to their customers,” said Poulin. “Our goal is to enable IT managers to orchestrate data center infrastructure so they can provision server, network and storage resources on demand. This is no easy task, but I look forward to sharing more information about how this can be achieved and the products Intel offers that will help data center managers successfully make this transition into a new era of rapid service delivery.”
Poulin is a vice president in the Datacenter and Connected Systems Group and General Manager of the Datacenter Marketing Group for Intel. In this capacity he is responsible for all marketing activities including the design win process, product positioning, pricing and outbound messaging across all datacenter segments.
The education program will target data center executives and decision makers with new sessions on topics covering data center business models, staffing and development plans, risk assessment, finance for the data center manager, energy productivity, thermal management, and IT and facilities organizational convergence. The new Data Center Trends track will cover the latest developments in colocation, data center automation, Big Data, real estate, the lease vs. own decision, Software-Defined Data Centers and cloud computing.
For the full agenda and to register, visit the Data Center World website. You can learn about upcoming sessions on Facebook and follow AFCOM and Data Center World on Twitter.
5:00p
Intel Leads $17 Million Financing of Storage Startup Maginatics
Software-defined storage startup Maginatics came out of stealth mode last year and has since seeded its MagFS (Maginatics File System) offering in the enterprise. Looking to increase the pace of adoption of its flagship product and fuel growth in the U.S. and internationally, the company announced it has raised $17 million in Series B financing. Intel Capital led the round, with participation from WestSummit Capital, Comcast Ventures and existing investors including Atlantic Bridge and VMware.
Mountain View, California-based Maginatics was founded in 2010, and since coming out of stealth the company has seen a dramatic increase in customers. The MagFS solution offers Network Attached Storage (NAS)-like capabilities where NAS is not an option: in the cloud and in any enterprise where distributed, concurrent and consistent access to a shared capacity pool is required. Following this latest round of investment, Maginatics also appointed two new members to its board of directors: Dharmesh Thakker of Intel Capital and Raymond Yang of WestSummit Capital.
“The shift from monolithic storage architectures to more agile, software-only architectures able to bridge enterprise IT with cloud is transforming the enterprise,” said Lisa Lambert, vice president, Intel Capital. “MagFS enables this transition by helping companies serve highly distributed operations and seamlessly migrate workloads to the cloud. Intel Capital is excited to help scale the company through significant resources, expertise and partners via our vast enterprise ecosystem.”
“Maginatics’ next generation enterprise storage software solutions are ideally suited for the high growth, multi-billion dollar storage market in the Asia Pacific region, particularly in China,” said Raymond Yang, co-founder and managing director of WestSummit Capital. “Asian enterprises demand massively scalable, highly secure and low cost storage solutions optimized for today’s mobile enterprise. Maginatics delivers on all fronts and WestSummit is delighted to support Maginatics’ expansion into this highly strategic market.”