Data Center Knowledge | News and analysis for the data center industry
Thursday, March 17th, 2016
Not Your Grandfather’s Archive: Three Key Features for Today’s Smart Archive
Robert Cruz is Sr. Director of Information Governance at Actiance.
Yesterday’s archives can’t handle the challenges of today’s modern enterprise, let alone the challenges enterprises are bound to face in the future as communication channels rapidly evolve.
The variety of communication sources has dramatically changed, expanding to include instant messaging, unified communications, enterprise social networks, social media, and more.
Old school archives were built at a time when an organization’s primary communication vehicle was email, and social media sites like Twitter and Facebook had not yet gained massive popularity. Expecting these antiquated archives to perform in today’s world is akin to using your grandparent’s landline phone as a primary mode of contact.
So how does today’s archive differ from the archive of yesteryear?
To start, it lives in the cloud. Cloud-based archiving has been touted as the next generation of archiving for the modern enterprise because organizations aren’t faced with enormous setup and ongoing maintenance costs. Additionally, there are minimal staffing requirements, and, as a company grows, cloud archives can easily accommodate that growth without concern for infrastructure modifications.
However, not all cloud archives are created equal. All too often, existing archiving solutions are built on traditional computing architectures and have simply been “relocated” to the cloud. These archives are not purpose-built for the cloud, do not adequately exploit the advantages of cloud infrastructure, and fail to scale to handle the exponential increase in data volumes.
Finally, it should not force companies to convert rich content from unified communications, IM, and public social networks into email to be reviewed. Understanding the context of a conversation that took place across 15 tweets and 5 LinkedIn posts over 2 days is not easy if that content is archived as 20 distinct email messages. Crucial posts may easily be missed, and legal teams could waste multiple hours attempting to piece together a conversation thread to find out what really transpired.
To be considered a smart archive suited for today’s enterprise, it should have the following qualities:
Dynamic Scaling
Enterprise users generate millions of pieces of data every day in various forms, including emails, instant messages and persistent chats, blogs, wiki pages, and social media posts. Given the ever-growing list of data sources that should be archived, governed, and made discoverable, archives have to keep up with the exponential growth of data overall.
For example, social communications alone create a massive data influx. Social posts are enveloped in metadata that tell the full story of a communication thread, at any point in time, and should be captured when archiving these new forms of communication. Without the metadata, review and governance of these communications is next to impossible as there is no correlation of one post, thread or tweet to another.
The enterprise cloud archive must also be designed to handle the ingestion of both real-time data and historical data from an existing messaging source or an archive. This requires dynamic scaling to handle the increase in data volume and variety.
Fast Search
When an archive swells, search performance suffers because the underlying search technology is unable to process vast data stores efficiently. The result can be searches that take far too long and performance that becomes erratic, which not only wastes time and money, but also undermines the archive’s standing as the definitive source of truth.
Google has set an expectation for search speed and today’s archive should meet that. Users anticipate seeing search results instantly, often in less than a second, irrespective of the volume or variety of data being searched. Searching your archive for eDiscovery purposes should also deliver consistency and fidelity every time.
Context-Aware Results
With a traditional email archive, reviewers can’t understand the relationship between and among various emails unless they spend a great deal of time combing through the metadata or looking through the actual content of each individual message to try to thread them together.
Traditional archives retain real-time communications as simple, uncorrelated emails. In doing so, they fail to capture the context of those communications: their relevance is lost and the cost of review for eDiscovery and regulatory audits is increased. In short, the conversation becomes an indiscernible mess.
Smart archive solutions must save the entire conversation thread for an accurate chronological representation of the conversation, even if a portion of a conversation has been edited or deleted. Reviewers can thereby appreciate the relevance of complex interaction events, such as real-time chat, blog entries or discussion board comments in a single view.
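To make that idea concrete, here is a minimal, hypothetical sketch in Go; it is not taken from the article or from any vendor’s product, and the Message fields, thread IDs, and channel names are illustrative assumptions. It simply groups captured items by conversation and orders them chronologically, rather than storing them as disconnected emails.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// Message is a hypothetical archived item from any channel
// (email, IM, tweet, LinkedIn post), captured with its metadata.
type Message struct {
	ThreadID string    // correlates posts that belong to one conversation
	Channel  string    // e.g. "twitter", "linkedin", "im"
	Author   string
	Sent     time.Time
	Body     string
}

// threads groups archived messages by ThreadID and sorts each
// conversation chronologically, so a reviewer sees one coherent
// thread instead of disconnected items.
func threads(msgs []Message) map[string][]Message {
	byThread := make(map[string][]Message)
	for _, m := range msgs {
		byThread[m.ThreadID] = append(byThread[m.ThreadID], m)
	}
	for id := range byThread {
		conv := byThread[id]
		sort.Slice(conv, func(i, j int) bool { return conv[i].Sent.Before(conv[j].Sent) })
		byThread[id] = conv
	}
	return byThread
}

func main() {
	now := time.Now()
	archive := []Message{
		{ThreadID: "deal-42", Channel: "twitter", Author: "alice", Sent: now.Add(2 * time.Hour), Body: "Confirmed."},
		{ThreadID: "deal-42", Channel: "linkedin", Author: "bob", Sent: now, Body: "Can we discuss terms?"},
	}
	for id, conv := range threads(archive) {
		fmt.Println("conversation:", id)
		for _, m := range conv {
			fmt.Printf("  [%s] %s (%s): %s\n", m.Sent.Format(time.RFC3339), m.Author, m.Channel, m.Body)
		}
	}
}
```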
Communication and collaboration patterns have changed radically within the past five years, and it’s time to retire outdated, email-centric archiving technologies. Armed with a smart, cloud-based archive, reviewers can begin to see archiving as a source of company intelligence that not only supports information governance practices, but also builds competitive advantage.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Emerging Trends Shaping the Data Center of the Future
Jack Pouchet certainly didn’t intend to insult the standing-room-only audience at his Data Center World session Wednesday on emerging trends impacting the future of data centers.
“Show me an IT professional who can predict the exact timing, size, method, and location for their next data center and I will show you someone with a defective crystal ball. That’s the nature of this industry.”
As has been a theme throughout the conference, the VP of marketing development and energy initiatives for Emerson Network Power set the tone for his presentation by focusing on data and its potential for turning the IT world upside down.
“Five billion people that woke up today do not have daily access to the Internet,” he said, suggesting that the amount of data generated by that many more users, on top of what already flows over our crowded networks, is reason alone that the data center of today must change.
Here are some of the other key emerging trends Pouchet discussed that will have a substantial impact on how we design and build future data centers.
The Cloud of Many Drops
More and more companies are looking beyond virtualization and to the cloud to address underutilization of computing resources, and for good reason. A 2015 study by Stanford’s Jonathan Koomey found that enterprise data center servers still only deliver, on average, between 5 and 15 percent of their maximum computing output over the course of a year. A surprising 30 percent of physical servers had been comatose for six months or more. Enter the shared services cloud arena. The fact that companies can now offload space-consuming applications and non-critical workloads to shared space means fewer data center builds and a little breathing room. “That allows for more intelligent decisions on the core building they already have,” said Pouchet.
The Data Fortress
It’s hard not to put security first when it comes to data center design. The total cost of a privacy-related data security breach stands at $3.8 million, and the share of downtime incidents caused by security breaches rose from two percent in 2010 to 22 percent in 2015, according to the Ponemon Institute. This affects the way enterprises approach resiliency, availability, storage, you name it. Will data reside in the cloud or on-site? Are you capable of bringing up systems fast enough to avoid serious downtime and loss of data, and oftentimes reputation? Those are questions that every operator of an existing data center, and every builder of a new one, must consider.
Beyond PUE and Green
Data centers have certainly made plenty of headlines with respect to being energy hogs, thus the push toward greater efficiency, new cooling techniques and the acronym PUE. Today, they’re being singled out as abusers of what is rapidly becoming a rare commodity: water. In fact, when Pouchet talked about the 5 billion people without Internet access, he added that 1 billion of those do not have access to potable water. So it doesn’t bode well that, according to Pouchet, also a Green Grid board member, a modest 1 MW facility can easily consume more than 4.4 million liters (1.2 million gallons) of water annually.
This focus on water has spawned a new acronym, WUE (Water Usage Effectiveness), and new thinking about ways to cool the data center. Cooling the entire data center space has been the typical approach; Pouchet pointed to a newer approach that instead removes heat at the rack or aisle. Other new considerations include evaporative cooling technologies and economizers that utilize outdoor air. Water has become yet another key factor set to impact the future data center.
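For context on the metric itself (my addition, not from Pouchet’s talk): The Green Grid defines WUE as annual site water usage divided by annual IT equipment energy, expressed in liters per kilowatt-hour. The Go sketch below is a rough, hypothetical back-of-the-envelope calculation that assumes the “1 MW facility” above means a constant 1 MW of IT load; the result is illustrative only.

```go
package main

import "fmt"

// wue computes Water Usage Effectiveness: liters of water consumed
// annually per kilowatt-hour of annual IT equipment energy.
func wue(annualWaterLiters, annualITEnergyKWh float64) float64 {
	return annualWaterLiters / annualITEnergyKWh
}

func main() {
	const hoursPerYear = 8760.0

	// Assumption: a "modest 1 MW facility" running a constant
	// 1,000 kW of IT load for a full year.
	annualITEnergyKWh := 1000.0 * hoursPerYear // 8,760,000 kWh

	// Water figure cited above: 4.4 million liters per year.
	annualWaterLiters := 4.4e6

	fmt.Printf("WUE ≈ %.2f L/kWh\n", wue(annualWaterLiters, annualITEnergyKWh)) // prints ≈ 0.50 L/kWh
}
```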
Edge Computing
Because the fabric of the Internet is changing so rapidly, we’re seeing more and more data centers decentralizing and being supported by micro data centers. In other words, data and its processing are moving as close to users as possible, into edge facilities and a small but growing number of micro data centers. For example, just as content delivery networks cached data closer to customers, there will be more satellite data centers providing cloud-based content closer to the network’s edge. As a result, tier-two and tier-three cities grow more important as traffic moves away from tier-one cities such as London, New York, Chicago and San Francisco to the next set of markets closer to the edge and to users.
Data centers are in a constant state of flux, and the above emerging trends will definitely shape how they look, feel and perform in the future. It’s important to keep all of them, and their possible ramifications, in mind as you make decisions about building new data centers or redesigning existing ones.
Report: Amazon Losing Some Apple Business to Google’s Cloud
Apple, one of the biggest users of Amazon’s cloud services, has reportedly decided to shift some of that cloud infrastructure to Google’s cloud.
Being able to tout a company everybody knows and loves as your customer is an important tool in the toolbox of any technology company in a competitive market, but Google hasn’t officially touted Apple as its cloud customer just yet. The information was leaked to CRN by anonymous sources.
Google and Apple representatives did not respond to requests for comment in time for publication. An Amazon spokesperson responded by questioning the integrity of CRN’s sources.
“It’s kind of a puzzler to us, because vendors who understand doing business with enterprises respect NDAs (Non-Disclosure Agreements) with their customers and don’t imply competitive defection where it doesn’t exist,” the spokesperson said in an emailed statement.
Apple hasn’t fully defected from Amazon Web Services, according to CRN. But it has moved what could be a substantial portion of its total cloud services spend onto the Google Cloud Platform, the report said.
Both Microsoft and Google have struggled to win market share from the Seattle-based cloud services giant, but Microsoft appears to have pulled further ahead of Google in this race, thanks to its near-omnipresence in enterprise data centers.
When Urs Hölzle, Google’s VP of technical infrastructure, found himself having to defend Google’s “seriousness” about the enterprise cloud market on stage at a conference in San Francisco last November, he promised there would be visible change on that front “soon.”
One official announcement earlier this week was a sign of momentum for Google’s cloud services. Spotify, the popular music streaming service, announced it was moving much of its applications and data from its own data centers to Google Cloud Platform.
As AWS was celebrating its 10th birthday this week, another one of its major customers, Dropbox, announced it was also moving a lot of its data out of the public cloud and into its own data centers. The company said it had reached the scale at which it was more economical to design and control its own infrastructure.
These aren’t signs of decline for AWS. Different companies move in and out of public clouds for different reasons, some of the biggest factors being scale and cost, which are closely linked, and the nature of their services.
Apple, for example, hosts most of its data in its own mega data centers, and while it is using some cloud services, there are no indications that it will ever go all-in with cloud. In fact, the company has continually invested in expanding its own data center capacity.
A hybrid infrastructure model can work for Apple but not for all hyperscale giants. Companies often find it more efficient and economical to operate all of their infrastructure on their own. After Facebook bought Instagram, which ran exclusively on AWS, the social network eventually moved the photo sharing app into its own data centers.
Another internet giant, Netflix, went completely the opposite way. The company announced last month that it had migrated the last bits of its infrastructure from its own data centers to the cloud. Netflix now runs on AWS 100 percent.
There are clearly multiple ways to design distributed infrastructure for a hyperscale internet service, and no one method will work for every use case, which is why Spotify or Apple moving data to Google’s cloud may be a good thing for Google, but it doesn’t necessarily mean AWS is quickly losing ground to its competitor.
Understanding ARM Chips for Servers, the Cloud and IoT
By The VAR Guy
ARM: It’s one of the oldest computer chip architectures still in use today, and it completely saturates some markets. Yet it’s only just beginning to penetrate others — namely the cloud, IoT and servers. Where are ARM chips headed in these ecosystems? Here’s a primer.
First, the ARM backstory: ARM is a type of architecture for computer chips. First developed in the 1980s to power PCs, ARM chips have ended up seeing their widest use so far in mobile devices, where they account for the vast majority of market share. Today, the number of ARM processors produced totals more than 50 billion. If you lined up all the ARM chips in existence, you could circle the globe about twelve times.
Why ARM?
But that’s old news for ARM. The most promising new frontier for ARM devices is not the mobile ecosystem but servers, the cloud and IoT devices.
ARM chips offer a number of important advantages in these areas. For one, they’re usually less expensive than the x86 CPUs that have traditionally powered servers. They also tend to use less energy, which in turn means they produce less heat: two crucial selling points in an age when data center energy costs are skyrocketing and IoT devices often have limited or sporadic power supplies.
ARM also offers a more flexible architecture. Unlike x86 chips, which were designed for a very specific purpose decades ago and have been plagued by backwards-compatibility issues ever since, ARM is a broad family of chip architectures. There are many different types of ARM CPUs. Hardware vendors can get ARM chips that are optimized for particular tasks, rather than having to work with the one-size-fits-all x86 architecture.
ARM for Servers, Cloud and IoT
If ARM is so great, why isn’t it being used everywhere already? Why is it currently restricted mostly to the mobile ecosystem?
The answer has two parts. The first is that ARM actually is seeing increasing deployment on servers, in the cloud and in IoT devices. You just have not yet heard a lot about it because the market penetration so far remains small.
But it’s not unheard of. PayPal is using ARM servers in its data centers. Google has signaled a willingness to adopt ARM servers — although it remains to be seen how serious the company is about ARM. And just last week, Red Hat announced plans to bring its enterprise Linux product to the ARM architecture — although there, too, expect a long wait before an enterprise-class product appears.
The second, more important part of the answer is that you can’t just take software written for an x86 server and run it on an ARM chip. In many cases you have to rewrite some of your code, or at least recompile it so it runs on ARM chips. For that reason, switching to ARM servers or cloud applications requires overcoming a huge migration barrier. The obstacle is not just about having to buy new hardware. You have to re-architect your whole infrastructure, from the software stack through the servers.
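To illustrate just the recompilation piece (my example, not the author’s, and it assumes a fully portable codebase with no x86-specific assembly, intrinsics, or binary-only dependencies), here is a trivial Go program along with the cross-compile step that targets a 64-bit ARM server:

```go
// arch.go: trivially portable code like this only needs to be rebuilt
// for ARM; code with architecture-specific assembly or binary-only
// dependencies is where the real migration work begins.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Prints "linux/amd64" when built for x86-64 servers and
	// "linux/arm64" when cross-compiled for 64-bit ARM servers.
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```

Producing the ARM binary from an x86 workstation is a one-line cross-compile (for example, GOOS=linux GOARCH=arm64 go build arch.go); the heavier lifting the article describes lies in revalidating dependencies, performance, and deployment tooling on the new architecture.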
But the migration is becoming less difficult than it used to be. Companies like ARM-as-a-Service are offering cloud-based migration tools designed to make it easy to port applications to ARM. Similarly, Online Labs provides ARM servers hosted in the cloud, allowing organizations to migrate to ARM without having to worry about setting up new hardware. That removes one big piece of the migration puzzle.
The surge of IoT, too, promises to drive ARM adoption. As a (relatively) novel field, IoT doesn’t come with the legacy baggage of servers or the cloud. The IoT software being written now doesn’t have to be backwards-compatible with applications designed for x86 environments. It can be created for ARM from the start. Combine that with the fact that ARM chips are much more cost- and energy-effective than x86 for building IoT devices, and you have a formula for ARM success in this market.
For now, though, IoT is just coming of age. There will still be some waiting before ARM-based IoT solutions dominate the market.
Will 2016 prove to be the Year of the ARM Server, Cloud and/or IoT Device? Frankly, we don’t know. There are still too many variables and open-ended questions in this market. But ARM is certainly worth watching. It’s going places it has never gone before.
This first ran at http://thevarguy.com/cloud-computing-services-and-business-solutions/understanding-arm-chips-servers-cloud-and-iot
Products and News Announced at Data Center World 2016
The following exhibitors made major product or news announcements at Data Center World in Las Vegas this week:
Anord Critical Power, Inc.
Anord Critical Power, Inc. (ACPI) revealed the Anord Modular Product (AMP) Power Distribution Unit (PDU), which allows mission-critical facilities to improve safety, performance, management and monitoring. According to the company, the product sets new standards for PDUs by incorporating a smarter design to help exceed today’s complex electrical distribution requirements. The new offering provides transformer compartment heat reduction, compartmentalization for safety, and PDU start-up.
CABLExpress
The ratification of 16Gb Fibre Channel, and the soon-to-be-ratified 32Gb and 128Gb versions, is causing a shift in data center optical architecture and associated cabling infrastructure design. In an effort to help future-proof high-speed data centers, CABLExpress introduced its new angled patch panels. They allow the routing of patch cords directly into vertical cable managers on both sides, eliminating the need for horizontal cable management. As a result, the panels help to increase efficiency and manageability, decrease material and installation costs, and implement best-in-class connectivity solutions with high reliability and performance. The angled patch panels hold up to 96 fibers (1U) or 192 fibers (2U).
Cormant
Cormant Inc., a pure-play data center infrastructure management (DCIM) software company, and TSO Logic, a developer of IT metrics and automation software, announced a new business partnership that will bring together analytics collected from facility and compute systems to deliver total visibility into the data center and what’s running inside it. The partnership will allow customers to quickly aggregate complex and often siloed data into an actionable, fact-based view spanning compute, network, applications, virtualization, and facility. The result will be faster decision making, better capacity planning and greater cost reductions.
EPI
EPI initiated and led a committee to write a complete data center operations and maintenance standard and unveiled the result, the EPI-DCOS®, which it calls the world’s first complete data center operations and maintenance standard. The draft standard was reviewed and endorsed by individuals working in data center operations and maintenance at more than 40 organizations around the world, spanning different industries, sizes, and levels of complexity.
Iron Mountain
Iron Mountain Inc. announced that it signed a 15-year wind power purchase agreement that will match 30 percent of its North American electricity footprint with renewable energy. Additionally, the purchase of two-thirds of the power produced by a new wind turbine farm – currently under construction in Ringer Hill, Penn. – will provide Iron Mountain with long-term rate stability and expected annual savings of up to $500,000 in utility costs. The power generated by the Ringer Hill turbines will directly provide for the energy needs of Iron Mountain’s entire mid-Atlantic operations (comprising all or part of 13 states, including Washington, DC), which currently use over 80,000 megawatt-hours of electricity annually. In particular, this wind power purchase will support the energy requirements of Iron Mountain’s emerging data center business, projected to account for as much as 20 percent of the company’s electricity use in North America as the business grows.
Leviton
Leviton displayed its latest in copper and fiber network systems, the Atlas-X1 Category 8 Copper Cabling System. It is designed to provide flexible and scalable infrastructures capable of supporting future data network requirements like more frequent tech refreshes and rapid growth.
Built on a unified connector form-factor across Cat 5e, Cat 6, Cat 6A and Cat 8 applications, the Atlas-X1 System allows for a seamless migration from 1G to 40G networks. In early 2015, the company announced that Atlas-X1 connectivity had been tested to meet performance standards found in the current draft 2.0E of the TIA-568-C.2-1 Category 8 proposed standard and can support the operation of IEEE 802.3bq 25G/40GBASE-T applications up to 30 meters.
Modius
Modius Inc. demonstrated its new Capacity Planning Module (CPM). The OpenData v3.7 Capacity Planning Module provides data center managers and facilities operators with three critical capabilities: an accurate assessment of current capacity; an accurate forecast of capacity usage over time; and a series of interactive power and network diagrams for assessing the impact of changes on data center capacity. The product uses data from IT and infrastructure assets to baseline currently available capacity for power, cooling, space and network connections. The module also tracks changes to resources over time to accurately forecast and plan expansion projects before capacity thresholds are reached.
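As a loose illustration of the forecasting idea only (a generic linear-trend sketch in Go, not Modius’s actual algorithm; the readings and the threshold are invented), the snippet below fits a straight line to historical power measurements and estimates when a capacity threshold would be crossed:

```go
package main

import "fmt"

// linearFit returns the slope and intercept of a least-squares line
// through the (x[i], y[i]) points.
func linearFit(x, y []float64) (slope, intercept float64) {
	n := float64(len(x))
	var sx, sy, sxx, sxy float64
	for i := range x {
		sx += x[i]
		sy += y[i]
		sxx += x[i] * x[i]
		sxy += x[i] * y[i]
	}
	slope = (n*sxy - sx*sy) / (n*sxx - sx*sx)
	intercept = (sy - slope*sx) / n
	return
}

func main() {
	// Hypothetical monthly average IT load readings, in kW.
	months := []float64{0, 1, 2, 3, 4, 5}
	loadKW := []float64{620, 635, 655, 668, 690, 704}

	slope, intercept := linearFit(months, loadKW)

	// Invented power capacity threshold for the room.
	const thresholdKW = 900.0
	monthsToThreshold := (thresholdKW - intercept) / slope

	fmt.Printf("trend: %+.1f kW/month\n", slope)
	fmt.Printf("projected to reach %.0f kW in about %.0f months\n", thresholdKW, monthsToThreshold)
}
```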
NEXTDC
IDC’s 2015 MarketScape DCIM report forecasts a 16 percent CAGR from 2014, with the market reaching $1.4 billion in 2019; and the Uptime Institute’s global 2015 Data Centre Industry Survey shows that 56 percent of respondents had either purchased a commercial DCIM solution or were considering one. In an effort to meet the needs of data center managers globally, NEXTDC released a data center-neutral DCIMaaS platform, ONEDC, which can be customized to connect any device in any data center environment (including an organization’s own premises), replacing multiple systems and management tools with one central, cloud-based software platform to improve operational efficiency and deliver business insights.
Upsite Technologies
In an effort to help facilities managers and data center operators optimize cooling efficiency and prevent downtime, Upsite Technologies unveiled its new wireless monitoring solution, the EnergyLok® EMS 300. It is designed to track a variety of environmental conditions to help identify opportunities for improving the effectiveness and efficiency of cooling and airflow management, and it offers up to 150 wireless sensor inputs. This allows for the deployment of multiple temperature and humidity sensor configurations along with four wired digital inputs, which can be used to track open/closed doors, motion and airflow sensors, fire alarms, gas and liquid leaks, and summary alarms from critical equipment including uninterruptible power supplies and generators.