Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 13th, 2013
12:00p
Stream Lands Tenant for Houston Data Center
An artist’s conception of the new Stream Data Centers wholesale project in Houston, which has announced its first tenant. (Image: Stream)
Stream Data Centers has secured a sizeable tenant in its new Houston data center in The Woodlands, only a short time after completing construction. The unnamed tenant is a multinational corporation leasing one of the three available Private Data Center (PDC) Suites. Each PDC suite has independent power and cooling infrastructure, delivering 1.125 megawatts of critical load, expandable to 2.2 megawatts, across 10,000 square feet of space.
In addition to using the facility for data storage, the customer is also outsourcing its mission-critical IT infrastructure operations to Stream Data Centers. The PDC Suite also provides tenants with plug-and-play disaster recovery office space, redundant private telecommunications rooms and a private utility yard.
“This response to our recently commissioned Private Data Center demonstrates the strong demand for enterprise quality data center space in the Houston area,” said Paul Moser, Co-Managing Partner of Stream Data Centers. “We look forward to providing exceptional service to this new customer.”
Located in a suburb north of Houston, the 74,901-square-foot data center is built with a 2N electrical / N+1 mechanical configuration, delivering dual feed power from separate substations. Construction began in August of 2012. The carrier-neutral facility has access to eight fiber providers through diverse, private fiber conduits, and offers seven days of onsite fuel storage.
The Stream Private Data Center is located 70 miles from the Texas coast, outside of the 500-year floodplain, away from railways and flight paths, and it has been built to withstand 185-mph wind and uplift. It has also been designed to achieve a LEED Silver Certification.
Stream Data Centers also has facilities in Dallas, San Antonio, Denver and Minneapolis.
12:30p
Smarter Data Center Migrations and Consolidations
Jim McGann is vice president of information management company Index Engines. Connect with him on LinkedIn.
Data center migrations and consolidations are a common occurrence, especially as data growth averages 40-60 percent per year, leaving data center managers with three choices: upgrade their capacity, migrate data to less expensive storage, or consolidate environments.
As budgets prohibit most organizations from blindly upgrading their storage capacity, companies are turning to migrations and consolidations to control costs.
But when faced with a migration to a new storage platform or the consolidation of multiple environments, moving outdated, abandoned and aged data complicates and pollutes the process.
This value-less data can easily account for 30-50 percent of total capacity. New data profiling technology, however, enables data center managers to eliminate data with no business value before a consolidation or migration occurs.
Data profiling is the metadata analysis of unstructured user files. By indexing data storage efficiently and cost-effectively (via NFS/CIFS/NDMP), data profiling extracts key metadata: last modified and accessed dates, owner, location and size. Even duplicate content can be located with custom queries.
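To make the idea concrete, here is a minimal sketch of metadata-level profiling, not Index Engines' implementation: it walks a mounted file share and collects the kind of attributes described above. The mount path and the three-year staleness threshold are assumptions for illustration.

```python
# A minimal sketch of metadata-level data profiling. The share is assumed to be
# reachable over an NFS/CIFS mount at the hypothetical path /mnt/fileshare, and
# "three years without access" is an assumed definition of stale data.
import time
from pathlib import Path

STALE_AFTER_DAYS = 3 * 365  # assumption: three years without access counts as stale

def profile_share(root):
    """Walk the share and yield basic metadata for each file."""
    now = time.time()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        st = path.stat()
        yield {
            "path": str(path),
            "owner_uid": st.st_uid,      # map to a user via Active Directory/LDAP elsewhere
            "size_bytes": st.st_size,
            "last_modified": st.st_mtime,
            "last_accessed": st.st_atime,
            "stale": (now - st.st_atime) > STALE_AFTER_DAYS * 86400,
        }

if __name__ == "__main__":
    records = list(profile_share("/mnt/fileshare"))
    stale = [r for r in records if r["stale"]]
    print(f"{len(stale)} of {len(records)} files have not been accessed "
          f"in over {STALE_AFTER_DAYS // 365} years")
```

A commercial profiler adds scale, full-text options and reporting on top of this, but the raw inputs are the same file-level attributes.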
In addition, integrating data profiling software with Active Directory/LDAP allows reports and analysis to be summarized by groups and departments as well as active versus inactive (ex-employees) users.
This surfaces file-level data that is no longer being used or that belongs to ex-employees, and it enables data center managers to manage usage by department and to locate documents needed for regulatory or legal purposes.
This analysis software differs greatly from existing solutions that analyze access logs and network metadata. Data profiling goes deep within the files, producing even a full-text profile if required, and delivers comprehensive access to file information. When managing files, it is the only approach that provides the level of knowledge, as well as the analytical tools and disposition capability, needed to migrate data efficiently.
Filters and Queries
Data profiling offers flexible filters, queries and dynamic summary reports that give corporate data centers the knowledge they need to make appropriate decisions.
For example, according to the Bureau of Labor Statistics, organizations currently face a 3.3 percent turnover rate. For a 5,000-employee organization, that represents 165 ex-employees annually. If each of those ex-employees generated a meager 5GB of unstructured content per year, that would amount to almost 1TB of forgotten data annually.
Considering how the files one person creates are shared within the company and stored on other people’s desktops, mail attachments, archives and backups, this number quickly turns into 10TB of annual useless content cluttering the data center. Over 10 years this will explode to 100TB of abandoned data that will continue to grow annually. Data profiling can locate and remove this data prior to a migration or consolidation.
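For readers who want to check the math, here is a quick back-of-the-envelope calculation. The turnover rate and per-user volume are the article's own figures; the roughly 10x copy-and-sharing multiplier is the author's estimate, and the article rounds the results up to 1TB, 10TB and 100TB.

```python
# Back-of-the-envelope check of the figures above.
employees = 5000
turnover_rate = 0.033          # 3.3 percent annual turnover
gb_per_ex_employee = 5         # unstructured content left behind per person, per year
copy_multiplier = 10           # copies on desktops, mail attachments, archives, backups

ex_employees = employees * turnover_rate                 # 165 people per year
orphaned_tb = ex_employees * gb_per_ex_employee / 1000   # ~0.8 TB, "almost 1TB"
cluttered_tb = orphaned_tb * copy_multiplier             # ~8 TB per year, "10TB"
decade_tb = cluttered_tb * 10                            # ~80 TB, "100TB" over 10 years

print(f"{ex_employees:.0f} ex-employees/year -> ~{orphaned_tb:.2f} TB orphaned, "
      f"~{cluttered_tb:.1f} TB/year with copies, ~{decade_tb:.0f} TB over a decade")
```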
Analysis is flexible and will allow the user to understand the current state of data as well as how it changes. From finding and managing data that has outlived its business value to finding data that must be preserved in an archive, data profiling delivers the reports and disposition tools needed to get the job done.
Data Disposition
With dynamic reports displaying your environment and narrowing the analysis of the data set, it is then easy to manage the disposition of the content, either with the built-in tools to delete, migrate or archive data, or by exporting a CSV text file for use with existing tools.
Deletion of content, while a sensitive subject, is performed in a defensible manner to protect the organization from penalties. Once you have refined a subset of content to be purged and have received sign-off from legal or compliance, it takes one click and the data is deleted. The software creates a log of this activity, including the person, time and specific files, which is stored in a database for future reference.
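As an illustration only, not the product's implementation, a defensible deletion step might look roughly like this: record who deleted what, and when, in an audit store before removing anything. The SQLite schema, database name and example file path are assumptions.

```python
# A minimal sketch of defensible deletion: log the operator, timestamp and file
# path in an audit database before removing anything. Schema and paths are
# illustrative assumptions, not the product's actual design.
import os
import sqlite3
import getpass
from datetime import datetime, timezone

def purge_with_audit(file_paths, db_path="disposition_audit.db"):
    """Delete approved files, recording each deletion for future reference."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS deletions (
                        deleted_at TEXT, operator TEXT, file_path TEXT)""")
    operator = getpass.getuser()
    for path in file_paths:
        conn.execute("INSERT INTO deletions VALUES (?, ?, ?)",
                     (datetime.now(timezone.utc).isoformat(), operator, path))
        conn.commit()            # persist the audit record before acting on it
        os.remove(path)          # assumes legal/compliance sign-off has been given
    conn.close()

# Example (hypothetical path): purge_with_audit(["/mnt/fileshare/old/report_2003.xls"])
```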
Migration of data can also be managed, whether that means moving content to a more appropriate platform, preserving it in an existing archive, or pushing it out to a cloud repository. This allows data to be tiered based on value and access requirements, freeing up expensive storage for more important content.
Streamlined Migrations and Consolidations
As the cost-saving and risk-mitigating trend of migrating and consolidating data centers continues, the process must be streamlined to be truly effective. Spending effort and expense on moving data that has outlived its business value is a significant waste of resources.
Typical enterprise servers can easily contain 22 percent abandoned data, 14 percent that has aged and outlived its business value, 24 percent duplicate content, and 6 percent personal files such as vacation photos and music libraries. This could account for over 50 percent of wasted capacity.
Managing a migration or consolidation in a way that cuts the volume in half frees up tremendous resources and expense, and those savings continue to accrue annually.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
12:36p
QTS Adds Disaster Recovery Services
One of the power rooms inside a QTS data center near Richmond, Virginia. The company has expanded its disaster recovery options. (Photo: QTS)
Data center provider QTS (Quality Technology Services) has launched two disaster recovery services designed to provide network availability in the event of virtual or physical server disruptions. The DR On-Demand and DR High Availability services feature flexible configurations for QTS’ managed hosting customers that can meet a variety of budgetary and business continuity needs.
DR On-Demand offers an image-based replication service for customers with virtual server environments. It’s a fully managed service that uses geographically dispersed offline servers that can be activated in the event of a disaster. The service offers a range of recovery options, from application level to full server failover.
DR High Availability provides real-time, continuous data replication of customers’ physical and virtual server environments, ensuring that a current copy of the data, applications and operating system is always accessible. This service uses asynchronous replication between servers across QTS data centers in several geographic markets to provide near-zero data loss. Point-in-time recovery options give customers the flexibility to fail over to the most current data or to a specific date and time.
“Preparing for natural disasters and other unplanned events is a crucial part of business in today’s environment, where a company’s network is its lifeline to productivity,” said Chad Williams, chief executive officer of QTS. “We are proud to provide our managed hosting customers with these disaster recovery options, along with our expert engineers to help customers understand which service is most appropriate for their business needs.”
Among the early implementers is IBM, which partners with QTS and has worked with a New York City-based client in the financial services industry to implement QTS’ new DR offerings.
“With QTS’ disaster recovery services, we were able to easily customize our client’s recovery solution according to their business needs and specific recovery requirements,” said Steve Fijalkowski, business unit executive, private cloud-delivery partner services for IBM Global Technology Services. “Our client was excited to find a solution that supports the replication of their large volume of data while meeting both their recovery point and time objectives.”
QTS was founded in 2005 and has grown from a single facility in Kansas to a national chain operating more than 3.8 million square feet of data center space, including several of the largest facilities in the industry. QTS is the leading provider in the Atlanta market, where it operates a huge data center downtown and also has a major data center in the suburb of Suwanee. The company also has data centers in Miami; Richmond, Va.; Jersey City, N.J.; Dallas; Sacramento, Calif.; Santa Clara, Calif.; and three facilities in Kansas.
2:30p
Infographic: Overcoming Data Center Barriers
The modern data center is at the heart of any organization. More workloads and technologies are being moved to the data center platform, and there are more users, many more devices and new requirements around data center solutions. Organizations are trying to find ways to optimize their environments and introduce new levels of agility. As organizations continue to progress in today’s business world, the data center platform will play a vital role.
Still, even with the growing emphasis on data center providers, there are challenges to work around. Direct visibility into your environment can help alleviate issues with cooling, power and floor space, and a solid management platform will help increase the uptime and resiliency of your environment.
This infographic from CA shows the impact that data center barriers can have on business operations. For example, 84% of data centers have had issues with power, space, cooling capacity, assets and uptime, and those issues went on to negatively impact the business. What were the consequences to the business?
- 31% – Delay in application rollouts
- 30% – Disrupted ability to provide service to customers
- 27% – Forced to spend unplanned OPEX budget
- 26% – Need to roll back an application deployment
Download this infographic today to see how a powerful data center infrastructure management (DCIM) solution can help create a more integrated platform for your environment. DCIM functionality has numerous benefits, including the items below (a simple monitoring sketch follows the list):
- Real-time monitoring, including power and temperature
- Alerts and alarms for power and cooling
- Inventory and asset management
- Capacity analysis and planning
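As a rough illustration of the first two items, here is a toy sketch of threshold-based monitoring and alerting. It is not any particular DCIM product; the read_sensors() stub, rack names and threshold values are assumptions for illustration.

```python
# A toy sketch of the real-time monitoring and alerting a DCIM platform provides.
# A real product would pull readings from PDUs, CRAC units and BMS integrations.
THRESHOLDS = {"rack_inlet_temp_c": 27.0, "rack_power_kw": 5.0}  # assumed limits

def read_sensors():
    """Stand-in for real sensor polling (SNMP, Modbus, BMS APIs, etc.)."""
    return {"rack-a01": {"rack_inlet_temp_c": 29.3, "rack_power_kw": 4.1},
            "rack-a02": {"rack_inlet_temp_c": 24.8, "rack_power_kw": 5.6}}

def check_alerts(readings):
    """Compare each reading to its threshold and return any violations."""
    alerts = []
    for rack, metrics in readings.items():
        for metric, value in metrics.items():
            if value > THRESHOLDS[metric]:
                alerts.append(f"{rack}: {metric} = {value} exceeds {THRESHOLDS[metric]}")
    return alerts

if __name__ == "__main__":
    for alert in check_alerts(read_sensors()):
        print("ALERT:", alert)
```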
To have a truly robust data center platform, there must be clear visibility into all of the vital underlying components within that infrastructure. When that visibility is created, a more efficient and agile data center can operate at a higher standard.
2:52p
FORTRUST Enters Wholesale Data Center Market
Several IO factory-built data center modules reside in an area of the FORTRUST Denver data center dedicated to modular growth. (Photo: FORTRUST)
Colocation provider FORTRUST will expand its offerings to enter the market for wholesale data center space, the company said today. The Denver-based provider will deliver its wholesale space using data center modules from IO, which can deploy turn-key space in increments of 200 kilowatts to 400 kilowatts.
FORTRUST says it has been seeing increased demand from enterprise customers for modular data centers. IO’s modules have now been certified as meeting the Tier III standard for reliability, a common benchmark for enterprises seeking to outsource their IT operations.
The initiative by FORTRUST continues the blurring of boundaries between “retail” colocation and wholesale turn-key space, adding a modular wrinkle to a landscape that offers a growing number of options for customers seeking data center services.
In the traditional wholesale data center model, a tenant leases a dedicated data center suite or “pod” of raised floor space, which usually offers about 1.1 megawatts of power capacity. In colocation, a customer leases a smaller chunk of space within a data center, usually in a caged-off area or within a cabinet or rack.
But in recent years wholesale suppliers have begun competing for deals of 500 kilowatts and below. In response, colocation providers like Equinix have begun offering dedicated suites for larger customers. With its wholesale offering, FORTRUST is building upon the success of IO in offering dedicated infrastructure demised within factory-built modules, rather than data halls within brick-and-mortar buildings.
“FORTRUST, like IO, recognized that DC 1.0 was broken and not serving our key customer, the IT end user,” said FORTRUST Senior Vice President and General Manager Rob McClary. “By building out with the world’s most advanced modules, we increased our total data center efficiency by 30% and added more inventory to provide what the world’s most demanding organizations require.”
As the first participant in the Powered by IO program, FORTRUST committed to use IO modular data center technology exclusively as it expands its Denver data center, where it offers colocation and disaster recovery services. i/o Anywhere is a family of modular data center components that can create and deploy a fully configured enterprise data center.
FORTRUST has used the IO designs to add 5.2 megawatts of capacity by deploying modules to the concrete floor inside its Tier III facilities in Denver, Phoenix and New Jersey.
4:24p
Big Data Analytics Startup CloudPhysics Raises $10 Million
Virtualized environments are becoming larger, denser and more complex than ever before. More integrations with other systems in the IT stack now exist, involving subtle interdependencies. Managing these environments has also become very complex. New cloud startup CloudPhysics aims to help.
Founded in 2011, CloudPhysics provides intelligent operations management for virtualized workloads. The company, whose backers include VMware co-founder Diane Greene, said today that it has raised $10 million in funding led by Kleiner Perkins Caufield & Byers and released its IT operations management service for VMware workloads.
CloudPhysics’ cloud-based approach enables it to derive collective intelligence from the more than 80 billion samples of operations data it receives daily from its global user base.
“Big Data applied in a highly personalized manner has delivered huge, disruptive benefits in every industry – except IT,” said Mike Abbott, general partner at Kleiner Perkins Caufield & Byers. “CloudPhysics is the first to apply it to IT, and they’re well positioned to disrupt this market. We are impressed with their leadership team, game-changing SaaS offering and the community they have built since coming out of stealth mode late last year.”
Helps IT “Run VMware Like Google”
Today, the company released its SaaS offering, which automatically uncovers hidden operational hazards before problems emerge and identifies efficiency improvements in storage, compute and networking for VMware workloads. It empowers IT to understand, troubleshoot and optimize virtualized systems. The company says it helps IT run VMware like Google.
“Google uses analysis of anonymized traffic data from everyone’s GPS location streams to help users avoid accidents and bottlenecks and to make better driving decisions. CloudPhysics brings that same kind of power to IT so enterprises can make better operational decisions,” said John Blumenthal, CloudPhysics CEO and Co-Founder. “Our servers receive a daily stream of 80+ billion samples of configuration, performance, failure and event data from our global user base with a total of 20+ trillion data points to date. This ‘collective intelligence,’ combined with CloudPhysics’ patent-pending data center simulation and unique resource management techniques, empowers enterprise IT to drive Google-like operations excellence using actionable analytics from a large, relevant, continually refreshed dataset.”
Card Store: Highly Focused Apps to Solve IT Problems
Card Store is the first IT operations app built on an industry-wide dataset and community. The CloudPhysics user interface is composed of “Cards,” each of which is a focused app that solves a particular IT problem. Cards are built by CloudPhysics and members of its user community using the Card Builder feature of the cloud-based service, and they range across IT operations use cases including planning, procurement, reporting, analysis, troubleshooting and capacity management.
It’s a central repository of targeted apps. Today’s IT teams often employ a myriad of homegrown, commercial and open source products to manage IT operations of the virtualized data center. Card Store is an attempt to provide the same set of weapons, but with better interoperability, and it cuts down the time needed to manage all of these applications from a variety of places.
“With the Card Store, customers benefit from the collective intelligence of the IT community at their fingertips,” said Irfan Ahmad, CTO and Co-Founder of CloudPhysics. “Purpose-built for IT operational tasks and problems, each user-defined analytic — referred to as a ‘Card’ — is a highly specialized app targeting a particular problem. To paraphrase Apple, ‘If you have an operational problem in your data center, there’s a Card for that.’ And just as Apple’s App Store revolutionized how consumers use technology, the CloudPhysics Card Store stands to revolutionize IT operations management.”
Users also have access to the CloudPhysics Card Builder. IT teams can define, build and customize their own IT Operations Cards, and publish these solutions within teams or the Card Store and larger community. Many popular Cards — such as Knowledge Base Advisor, vCheck suite, Security Hardening and Thin Provisioning Advisor — were created by experts and renowned virtualization practitioners in the industry.
“Managing virtualized IT means managing an ever-changing, dynamic set of conditions,” said Blumenthal. “Today’s static solutions can’t keep pace and a new approach is called for — leveraging collective intelligence drawn from industry-wide operations big data and delivering solutions rapidly via SaaS as users encounter new operational problems. This infusion of capital will accelerate our aggressive growth plans as we continue to hire the industry’s best engineering talent and expand our global IT operational data service.”
The $10 million Series B also included previous investors the Mayfield Fund, Mark Leslie, Peter Wagner, Carl Waldspurger, Nigel Stokes, Matt Ocko and VMware co-founders Diane Greene and Mendel Rosenblum.
“The progress CloudPhysics has made over the last 18 months is phenomenal,” said Robin Vasan, Managing Director at Mayfield Fund. “CloudPhysics’ leadership team is exceptional, and they have done an excellent job growing the team and developing their intelligent operations management SaaS.”
4:59p
OpenStack Storage Gets Boost With Riak Compatibility
The OpenStack cloud platform got a boost today with the news that Basho Technologies has added OpenStack storage support to the newly released version 1.4 of its Riak CS software, which will extend OpenStack’s capabilities for distributed storage.
The open source Riak distributed database automatically redistributes data across multiple data centers as you scale, and keeps data available when physical machines fail. Riak CS (cloud storage) is, in the company’s words, simple, available cloud storage; it is open source storage software built on Riak.
“Adding another node to a cluster is a single command,” said Bobby Patrick, CMO of Basho. “We’ve made it very easy to build large distributed cloud storage that spans multiple locations. That’s what CS does very well.”
Riak CS 1.4 Enterprise significantly boosts the performance of multi-datacenter replication by allowing concurrent replication channels, so performance can scale with the full capacity of the network and the size of the cluster.
The company has also enhanced Riak CS to support OpenStack’s Keystone authentication service, and it is now formally compatible with the OpenStack Object Storage API. This adds a second major cloud platform, since Riak CS already supports CloudStack, and Amazon Web Services S3 compatibility was part of the first Riak CS launch. The OpenStack compatibility will appeal to enterprises and service providers looking to build distributed cloud storage on OpenStack.
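Because Riak CS speaks the S3 API, standard S3 tooling can typically be pointed at a Riak CS endpoint. Here is a minimal sketch using the boto3 library; the endpoint URL, credentials and bucket name are placeholders, and boto3 is simply one example of S3-compatible tooling, not something Basho prescribes.

```python
# A minimal sketch of talking to Riak CS through its S3-compatible API via boto3.
# The endpoint, credentials and bucket name below are placeholders for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://riak-cs.example.com:8080",   # hypothetical Riak CS endpoint
    aws_access_key_id="RIAK_CS_KEY_ID",
    aws_secret_access_key="RIAK_CS_SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db/dump-2013-08-13.gz", Body=b"example payload")
print([obj["Key"] for obj in s3.list_objects_v2(Bucket="backups").get("Contents", [])])
```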
History of Basho Riak CS
“Last year, in March 2012, Riak CS turned Riak, which is a distributed, high-performance database, into a multi-cloud object store in a box,” said Patrick. “Riak CS is a little over a year old and counts over 250 users worldwide, like Yahoo Japan, which uses Riak for cloud storage. In the U.S., customers like Datapipe and the hosting provider Tier 3, and a number that we couldn’t mention, use it to power their cloud.” The customers Patrick mentions use it in combination with CloudStack. OpenStack compatibility opens the company up to a whole new user base, given OpenStack’s significant mindshare in the market, and lets users swap Riak CS in for the Swift object store.
Riak is an open source, distributed database, and Basho offers an enterprise version of both it and Riak CS. The enterprise edition adds multi-data center replication along with full 24-hour support. The company counts nearly a third of the Fortune 50 among its paying customers, meaning it has solid footing in the mission-critical world.
“The updates are about speed and performance, and making it very easy to replicate,” said Patrick. “They’re about simplifying operations management over hundreds or thousands of nodes spread across multiple data centers. Most of our additions make it easier to scale, as well as speeding up operational queries of data in object stores.”
“This is aimed at service providers or large enterprises,” said Patrick. “A large enterprise CIO who says, ‘I’ve got large infrastructure, and I want the economics and versatility of cloud,’ would use this. High availability is critical to that story, and the only way to do this is to have data stored across multiple locations.”