Data Center Knowledge | News and analysis for the data center industry
Monday, February 10th, 2014
12:30p |
HP Autonomy Offers eDiscovery in the Cloud
HP brings Autonomy eDiscovery to the cloud, Oracle delivers numerous updates to its Human Capital Management Cloud offering, and Sematext announces the first monitoring and alerting solution for Apache Storm.
HP Autonomy eDiscovery in the cloud. HP (HPQ) announced a new cloud-based eDiscovery offering that gives businesses a fast, easy and cost-effective path to HP Autonomy’s eDiscovery platform. HP eDiscovery OnDemand delivers the entire HP Autonomy eDiscovery platform – including processing, early case assessment, review and production – as a secure cloud offering. Legal teams can quickly and easily upload case files and manage the complete eDiscovery process with a single platform and vendor. The service undergoes regular third-party audits of its operational processes, personnel and security, and provides access to HP’s industry-leading eDiscovery software and analytics, which have been battle-tested by some of the world’s largest and most demanding organizations across a wide variety of industry verticals. “Software-as-a-Service offerings like this one empower legal professionals to focus on managing the eDiscovery process and analyzing and reviewing information,” said Alan Winchester, partner at Harris Beach, a New York–based law firm ranked one of the top law firms in the United States by the National Law Journal. “For firms without robust IT departments, it grants them the experts to manage the technology operations and security.”
Oracle updates HCM Cloud. Oracle (ORCL) unveiled significant updates to its Human Capital Management (HCM) Cloud, including Oracle Global HR Cloud and Oracle Talent Management Cloud. With these updates, organizations can achieve new levels of engagement, productivity and business results through unparalleled insights into human capital. The new release contains more than 200 new innovations, including an integrated time and attendance solution, workforce modeling, additional global support for payroll and new languages. “To be nimble and competitive in a global and connected world, organizations constantly need new and smarter ways to harness the best talent,” said Gretchen Alarcon, vice president, Oracle HCM Strategy. “With modern HR in the cloud delivered easily and intuitively, Oracle provides businesses with continual innovation to help tackle their most pressing talent needs. The latest updates to Oracle HCM Cloud promote a more collaborative, mobile and engaging experience for global HR management, leading organizations to take a more strategic approach to HR and better capitalize on their talent.”
Sematext launches monitoring and alerting solution for Apache Storm. Search and big data analytics company Sematext announced it has added Apache Storm and Redis monitoring and alerting to SPM, its core Performance Monitoring solution. An enterprise-class solution available in Cloud (SaaS) and On Premises editions, SPM monitors critical Search and Big Data services like Hadoop, Elasticsearch, Kafka, Solr, ZooKeeper and many others running on hundreds of thousands of servers all over the globe. “We are excited to add monitoring and alerting for Apache Storm and Redis to SPM,” said Otis Gospodnetic, Sematext’s founder and CEO. “Originally developed at Twitter for handling over 500 million tweets per day, Storm’s ability to perform distributed, real-time mining of high volume data streams has become indispensable to many organizations. And we didn’t just stop at adding Storm monitoring and alerting. We’ve also added support for Redis, a fast key-value store that has taken hold with developers everywhere due to its blazing performance and ability to process literally thousands of requests per second. These new additions to SPM will surely save our users significant time — and many headaches!” | 1:00p |
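The counter-sampling technique behind this kind of throughput monitoring is simple to illustrate. The sketch below is not Sematext’s code (SPM’s internals are not public); it only shows the general approach a monitoring agent might take: sample Redis’s monotonically increasing total_commands_processed counter from INFO output and derive a rate. The payloads, interval and alert threshold here are made up for illustration.

```python
# Sketch of deriving Redis throughput from INFO counters. Hypothetical
# example, NOT Sematext's implementation: sample a monotonically
# increasing counter twice and divide the delta by the elapsed time.

def parse_info(raw: str) -> dict:
    """Parse the 'key:value' lines of a Redis INFO payload into a dict."""
    stats = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip section headers and blank lines
        key, _, value = line.partition(":")
        stats[key] = value
    return stats

def ops_per_sec(sample_a: dict, sample_b: dict, interval_s: float) -> float:
    """Instantaneous command rate from two successive counter samples."""
    delta = (int(sample_b["total_commands_processed"])
             - int(sample_a["total_commands_processed"]))
    return delta / interval_s

# Two simulated INFO snapshots taken 10 seconds apart.
t0 = parse_info("# Stats\ntotal_commands_processed:1500000\n")
t1 = parse_info("# Stats\ntotal_commands_processed:1545000\n")

rate = ops_per_sec(t0, t1, 10.0)
print(f"{rate:.0f} ops/sec")   # 4500 ops/sec
if rate > 4000:                # hypothetical alert threshold
    print("ALERT: command rate above threshold")
```

A real agent would fetch the INFO payload over the Redis protocol on a schedule and ship the derived metric to the monitoring backend; only the sampling arithmetic is shown here.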
CyrusOne Buys Land for Data Center in Northern Virginia
CyrusOne’s Phoenix data center is one of the newer locations for the fast-growing data center company, which has now bought land in northern Virginia. (Photo: CyrusOne)
CyrusOne has bought land in northern Virginia to build an East Coast campus for its growing national data center network. The company has purchased a 14.3-acre site in Sterling (near the Dulles Town Square mall) for $6.87 million, according to property records. The transaction was first reported in the Washington Business Journal.
CyrusOne has a well-established footprint across the country, with more than 1 million square feet of space available in 20 data centers concentrated in Texas, Ohio and Phoenix. The missing piece in its footprint has been an East Coast presence, and the data center cluster in Northern Virginia seemed a natural destination. The company plans to build a 254,000-square-foot data center, and has requested a meeting with Loudoun County planners to discuss a zoning amendment to support a multi-building data center project.
The company is in a quiet period until earnings scheduled for February 20th.
A Competitive Landscape
CyrusOne’s arrival reshapes the competitive landscape: it is yet another data center giant vying for the lucrative northern Virginia business. Buddy Rizer, Loudoun County’s Director of Economic Development, has said on numerous occasions that there hasn’t been a day in five years without data center construction in the county.
The CyrusOne land purchase exemplifies the continuing growth of the Loudoun data center cluster. Several factors make the area attractive. Power costs are low, and Dominion Power is very accommodating to the data center sector. Connectivity is rich: the very foundations of the Internet were laid here, with major companies like AOL and Equinix setting up shop in Northern Virginia in the 1990s. Additionally, Virginia’s tax incentives are extremely friendly.
Loudoun County was home to a staggering 5 million square feet of data center space as of 2012. The numerous expansions since then, many still ongoing, along with the arrival of another data center giant, suggest that figure is now a lowball estimate. Loudoun loves data centers; they are a cornerstone of its economy, outpacing growth in every other area of real estate.
Major Investments in New Space
Dupont Fabros is investing $900 million in Ashburn, citing tax incentives as one reason for the commitment (the other being demand). Before the NTT investment, RagingWire secured $230 million, part of which is being used for expansion in NoVa. Digital Realty has been aggressively expanding in Ashburn, planning to invest $150 million through 2015 on an additional 400,000 square feet. Equinix has built out a connectivity hub of massive proportions. CoreSite, COPT, Latisys, CenturyLink Technology, Verizon Business and AT&T also operate data centers in Northern Virginia.
Continuing deal announcements, alongside expansion announcements, suggest there is room for CyrusOne to grow here. Its strength among financial-services customers in particular means it will be real competition for many down the line. The company is also making a major commitment to the Open-IX initiative, which has recently begun operations at several sites in the region. | 1:15p |
Professional Labeling Essential to Data Center Security and Audit Compliance
Craig Robinson manages sales and business development for the P-touch EDGE industrial labeling division at Brother Mobile Solutions, a provider of on-demand mobile printing and labeling products for professional contractors and installers of data centers and other telecom/datacom infrastructures.
Large data centers are repositories for some of the most mission-critical information in the cyberworld today. For this reason, there are legal and regulatory compliance requirements mandating effective data center configuration and asset management across a wide range of closely regulated industries.
Data centers owned by banks, financial management firms, utilities and energy companies, defense contractors, government agencies, healthcare and pharmaceutical companies, airlines and other transportation providers, for example, have strict compliance requirements linked to configuration and asset management of their DCs. And often their compliance must be validated through quality assurance audits with oversight by a third party.
Quality Assurance Audits
According to one executive with extensive experience in overseeing internal/external DC audit reviews: “Up to a 90 percent failure rate on audit reviews is due to improper labeling of the vast array of DC components, which include cables and wires, servers, storage components, power panels and more. Lack of professional, standards-based labeling directly ties into a DC’s configuration management and record-keeping capabilities, especially its ability to re-configure assets in the event of a catastrophic failure, such as fire, flood, or earthquake. If you can’t identify the 100,000 or so components and their exact location, you can put the enterprise at operational risk.”
The only way a DC can recover from a partial or total failure is to create a permanent record showing where everything is located and how the components are configured, so the arrangement can be accurately reconstructed. In addition, since an organization’s DC is a critical part of its information infrastructure, it is never static but continues to change, necessitating frequent moves, adds and changes (MACs). This means the permanent record of the DC configuration must also be dynamically maintained and updated to reflect the current status.
Professional Labeling Standards
The ANSI/TIA 606-B.1 Standard for identification and labeling provides clear specifications for labeling and administrative best practices across all networked systems classes, including large data centers. While not mandatory, these guidelines help professional contractors and installers ensure quality assurance for the long term.
The purpose of implementing and maintaining a durable, end-to-end labeling scheme throughout the DC infrastructure is to accelerate tracing and initiate problem-solving measures as quickly as possible to avoid costly downtime. It also helps to future-proof the installation by providing an accurate blueprint should the entire configuration need to be recreated, an unlikely scenario, but one that must be considered.
Among the Standard’s general guidelines are that text on the labels should be machine-generated and visible, and that all cables and pathways should be labeled at both ends for ease of tracing from either direction.
The labels used should meet the UL969 specification for legibility, defacement and adhesion. This means they should be rugged, tamper and smear-resistant and able to withstand environmental conditions such as heat, moisture and ultraviolet light. And, they should have a design life equal to or greater than that of the labeled components. Lastly, keeping and maintaining accurate records is of paramount importance, with identifier information stored in a permanent record and backed up securely by the facility’s administration.
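To make the machine-generated, both-ends guideline concrete, here is a minimal sketch of serialized cable labels. The identifier format below is illustrative only, loosely modeled on TIA-606-B-style conventions (floor/space, rack, panel, port); consult the standard itself for the normative formats.

```python
# Hypothetical sketch of serialized cable labels. The identifier layout
# is illustrative, NOT the normative ANSI/TIA-606-B format.

def port_id(space: str, rack: str, panel: str, port: int) -> str:
    """e.g. space '1A', rack 'B02', panel 'C', port 1 -> '1A-B02.C01'"""
    return f"{space}-{rack}.{panel}{port:02d}"

def cable_label(near: str, far: str) -> str:
    """A cable is labeled at BOTH ends; each label names the near end
    first, then the far end, so it can be traced from either direction."""
    return f"{near} / {far}"

# Serialize labels for a 24-port panel-to-panel run.
near_panel = [port_id("1A", "B02", "C", p) for p in range(1, 25)]
far_panel  = [port_id("2B", "D07", "A", p) for p in range(1, 25)]

labels_near_end = [cable_label(n, f) for n, f in zip(near_panel, far_panel)]
labels_far_end  = [cable_label(f, n) for n, f in zip(near_panel, far_panel)]

print(labels_near_end[0])  # 1A-B02.C01 / 2B-D07.A01
print(labels_far_end[0])   # 2B-D07.A01 / 1A-B02.C01
```

In practice this kind of serialized list is what gets downloaded to a handheld label printer, and the same identifiers are stored in the permanent configuration record so labels and records stay in sync.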
The labeling standards may sound complicated, but in reality they are not. In today’s marketplace, you can find next-generation industrial labeling tools that incorporate smart technology, intuitive navigation and versatile functionality to help make DC component identification and labeling relatively quick and easy to implement.
What to Look for in Smart Labeling Tools
Best-in-class industrial-grade handheld labeling tools will be ruggedly constructed and ergonomically designed to comfortably handle the rigors of large installations. Many contractors prefer thermal printers that can format and print a variety of conforming label types up to 24mm or 36mm wide using easy-to-load, snap-in tape cartridges. Durable UL-approved polyester laminated labels encapsulate the print between two protective layers to ensure long-term integrity, legibility and adhesion.
Some intelligent labeling tools feature PC connectivity options and built-in software that allow users to download and store data from common databases. They also allow for on-site download and printing of previously saved custom or pre-formatted label templates, as well as previously programmed alphanumeric serialized labels. These capabilities can dramatically increase labeling speed and productivity in the field.
Additional features to look for include:
- QWERTY-style keyboard for fast, easy input and formatting
- Large, backlit LCD display with drop-down menus of label design settings
- Fast printing speed (up to 1.2 ips, or inches per second) and automated cutting of labels to the desired size
- Optimized character font shaping and print positioning for high-quality, easy-to-read text, symbols and barcodes
- Ability to accept HGe and TZe tapes, as well as heat shrink tubing (HSe) to produce permanent heat shrink labels quickly and economically
While asset management and labeling are only part of a successful data center installation, it is clear that they play an important role in ensuring maximum uptime and audit compliance, as well as the long-term performance, security and reliability of the facility and its components.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 2:00p |
In 2014, Build Your Business Model Around Your Data Center
Forward-thinking executives are now building their business around the data center.
New technologies have emerged over the past few years that have elevated the value of the modern data center. Cloud computing, big data, IT consumerization, and numerous types of virtualization platforms have fueled direct growth in data center demand, along with new classes of services for extending infrastructure and creating your next-generation environment.
In the past, the data center was seen as a mechanism to drive and support business. Physical PCs still ruled the domain and a lot was accomplished with pen and paper. The reality these days is that business and technology have completely evolved. The modern data center is the central connection hub for organizations of all sizes and verticals. Now almost everyone is globally connected and reliant on the data center.
So what has changed? Businesses used to establish their practices and then create their IT departments. Now big (and smart) businesses are approaching data centers and technology from a completely different angle. Executives realize the value of the cloud and the data center. They see how users are connecting and how data is being distributed. These visionaries see that the future revolves around complete mobility and true device-agnostic connectivity. These executives are now building their business around the data center.
- New Ways to Compute. The modern data center has introduced a whole new way of delivering data and information. New types of endpoints allow zero-client technologies to really thrive. Plus, today’s user typically works across three or four devices to access their data. And so, many businesses are refocusing on the user: How can we deliver the best possible user experience? How can we create a consistent look and feel, regardless of the device a user is on? The idea is to create a business model around data center compute delivery: keep the infrastructure minimal, make business processes and automation work for you, and create a powerful platform to deliver content. Take Netflix as an example. Its main goal is to deliver the best possible user experience as quickly as possible, so it created a business and technology model around edge computing, which allows for fast, high-definition content delivery.
- New Ways to Distribute Data. Companies like Dropbox, Box.com and other Internet-based shops quickly understood the value of fast data replication and synchronization. Their business model is built directly around a truly distributed data center model. By utilizing WAN optimization (WANOP) technologies and advanced interconnectivity, these businesses can deliver high levels of service on a truly cloud-ready level. Many organizations are leveraging this distributed data model to create new types of services, whether data synchronization, content delivery, or backup/replication services. Here’s another example: retailers, research shops, and other organizations that process massive amounts of information are actively looking at big data and business intelligence to optimize their business model. These organizations are redesigning how they utilize data center and IT services to create even more business efficiency.
- Changes in Regulations and Governance. A big reason many organizations are changing the way they think about cloud computing and the modern data center revolves around recent regulatory changes. For example, the recent Omnibus Rule, enacted as a change to HIPAA, redefines the business associate (BA): any organization with more than just transient access to protected data (couriers such as FedEx, UPS and USPS, which have only transient access, fall outside the definition). An organization can sign a business associate agreement, taking on additional liability in order to manage protected health information (PHI). Another example is Rackspace and what it has done with PCI DSS: at a high level, it intelligently controls data as it moves between the cloud, the organization’s servers and the payment gateway. Because of that control, your organization is able to continuously govern the flow of sensitive information. Businesses looking to open an e-commerce or data distribution point now have many more options to work with.
- The Logic Behind a Data Center Business Model. At this point, you’re truly leveraging technology to help you optimize business. The point isn’t to rip out your existing infrastructure and rebuild everything – unless that’s something you need to do, of course. The idea is to shift how you think about creating a business model. Bringing technology and the modern data center infrastructure into the very first planning stages will open doors to many more potential business opportunities and markets. Because cloud computing and IT have come so far, using technology as a direct business tool makes complete sense. So, as you plan a new business venture, division, service, or even a new shop, make sure to align your organizational planning directly with your technology platform.
During the era of the cloud, the data center will be the center of the technological universe. Seeing the data center or your IT environment as a secondary asset is no longer the way to run your organization. Technology can truly empower your users and your business. There is much more support around cloud and virtualization systems, and large businesses are migrating from legacy servers and applications to a much more distributed business model.
The next-generation business will need to be completely in-tune with its next-generation data center. As you evaluate your current infrastructure and think about the future, know that the world is becoming much more interconnected. There are new ways to deliver your applications and data which can completely revolutionize the way that you conduct business. In doing so, your organization will be truly aligned and built around your IT and data center infrastructure. | 2:30p |
Top 10 Data Center Stories: January 2014
Frank Frankovsky, chairman and president of the Open Compute Project Foundation, sees the solution provider community as a key enabler of open hardware innovation. The fifth Open Compute Summit, held in January, led to much open hardware news. (Photo by Colleen Miller.)
In January, multiple data center articles were popular among Data Center Knowledge readers. Our top items included: Microsoft joining the Open Compute Project, downtime for top hosting companies, downtime for Google and Gmail, and Bitcoin, a cryptocurrency play capturing the interest of the cloud and infrastructure industry. Without further ado, here are the most viewed stories on Data Center Knowledge for January 2014, ranked by page views. Enjoy!
- Microsoft Joins Open Compute Project, Shares its Server Designs – January 27 – In a dramatic move that illustrates how cloud computing has altered the data center landscape, Microsoft is opening up the server and rack designs that power its vast online platforms and sharing them with the world.
- Lengthy Outages for Hacker News, FastHosts – January 6 – It was a rough weekend for uptime, with significant outages at UK hosting provider FastHosts and the startup news portal Hacker News.
- Closer Look: Microsoft’s Cloud Server Hardware – January 27 – As it joins the Open Compute Project, Microsoft can now show the world the custom server and storage designs that power its global armada of more than 1 million servers. DCK takes a closer look.
- As Bitcoin Infrastructure Booms, Mining Heads to the Data Center – January 21 – After getting started in garages and server closets, bitcoin mining is moving into data centers and the cloud. Some traditional data center providers will benefit, but this transition also has the potential to enrich a new generation of entrepreneurs emerging from within the bitcoin community.
- Facebook: Open Compute Has Saved Us $1.2 Billion – January 28 – Over the last three years, Facebook has saved more than $1.2 billion by using Open Compute designs to streamline its data centers and servers, Facebook CEO Mark Zuckerberg said today at the Open Compute Summit in San Jose.
- Why Does Gmail Go Down? January 2014 Edition – January 24 – So how does a widely-used app like Gmail go down, as it has today? There have been a number of Gmail outages over the years, usually involving software updates or networking issues.
- Schneider Electric Acquires AST Modular – January 10 – Schneider Electric has beefed up its position in the market for pre-fabricated data centers with the acquisition of AST Modular, the company said today. The AST deal reflects Schneider’s growing focus on modular solutions, coming just three months after it rolled out a new line of 15 enclosures.
- NSA Will Cool its Secret Servers With Waste Water – January 6 – A new data center being built by the National Security Agency (NSA) will use up to 5 million gallons a day of treated wastewater from a Maryland utility. With its use of local waste water, the NSA is emulating a strategy adopted by Google.
- Report: Data Center Leasing Surged 25 Percent in 2013 – January 9 – Data center demand from social media companies and cloud-builders contributed to a surge in leasing of wholesale data center suites in 2013, with total leasing volume up about 25 percent from 2012, according to a report from a real estate firm.
- Google Glass: A Vision of the Future for Data Center Maintenance – January 9 – When fitted with safety lenses, Google Glass provides a great opportunity to revolutionize the way data center technicians perform their daily tasks. Once dedicated maintenance and repair apps are developed, the data center technician will connect directly to the maintenance database while completing routine maintenance activities.
| 3:30p |
Schneider Electric Extends PowerChute Network Shutdown Support For Virtual Cluster Environments
Schneider Electric released an updated version of PowerChute Network Shutdown, a network-based server shutdown solution for IT equipment, providing users with the assurance that mission-critical equipment will be protected in the event of an extended power failure.
In version 3.1, Schneider has added support for virtual machine migration in virtual cluster environments. This allows virtual machines affected by power disturbances to be migrated to hosts not impacted by a specific UPS event, ensuring the virtual systems stay online as long as possible and affording IT managers the time needed to assess and address the power issue.
“While virtualization has changed the IT landscape, the need for power protection remains,” said Paul Bohan, vice president, Network Management, Schneider Electric’s IT Business. “PowerChute Network Shutdown v3.1 draws on APC by Schneider Electric’s history as a provider of dependable network management solutions by ensuring safe handling of virtual machines during downtime.”
Working with an APC by Schneider Electric Uninterruptible Power Supply (UPS) Network Management Card, PowerChute enables automatic virtual machine migration and graceful virtual machine and host shutdown in VMware and Microsoft Hyper-V clusters, protecting mission-critical equipment in the event of an extended power failure.
Additional key features of PowerChute Network Shutdown v3.1 include:
- VMware Ready certification
- Easy installation: Users can opt to deploy the solution as a virtual appliance for VMware environments, simplifying installation.
- Industry-leading usability: A patent pending “Virtual Cluster View” dashboard displays a graphical representation of the virtual cluster environment and UPS setup on a single screen. In VMware environments, this user interface can also be monitored via the vSphere Client using the new PowerChute vCenter Plugin option.
- Compatibility with current virtualization software: Built-in support for the latest virtual platforms including VMware vSphere 5.5 and Microsoft Hyper-V Server 2012 R2.
In the past, users with virtual cluster environments could only support graceful virtual machine shutdown using customized scripts. With PowerChute, VMware and Microsoft Hyper-V platforms are integrated, allowing for seamless configuration and protection via an easy to use setup wizard.
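For context on what those customized scripts had to get right, here is a hedged sketch of the ordering logic: migrate VMs off UPS-affected hosts when a healthy host is available, otherwise shut the VMs down gracefully, and only then power off the affected hosts. The inventory and names are hypothetical; a real script would drive the hypervisor’s CLI or API rather than build a plan list.

```python
# Hypothetical sketch of graceful-shutdown ordering for a virtual
# cluster on UPS power loss. Real scripts would call VMware/Hyper-V
# tooling; here actions are collected into a plan for illustration.

def plan_shutdown(hosts: dict, affected: set) -> list:
    """hosts: {host_name: [vm, ...]}; affected: hosts on the failing UPS.
    Returns an ordered action plan as (action, subject, detail) tuples."""
    plan = []
    healthy = [h for h in hosts if h not in affected]
    for host in sorted(affected):
        for vm in hosts[host]:
            if healthy:
                # keep the VM online by migrating it to an unaffected host
                plan.append(("migrate", vm, healthy[0]))
            else:
                # no safe target: shut the VM down before its host
                plan.append(("shutdown_vm", vm, host))
        plan.append(("shutdown_host", host, ""))
    return plan

inventory = {"esx1": ["web01", "db01"], "esx2": ["app01"], "esx3": []}

# Case 1: one host loses power and healthy targets exist -> migrate.
print(plan_shutdown(inventory, {"esx1"}))
# Case 2: every host is on the failing UPS -> graceful VM shutdown first.
print(plan_shutdown(inventory, {"esx1", "esx2", "esx3"}))
```

PowerChute v3.1’s value, per the announcement, is that this sequencing is built in and configured through a wizard instead of being hand-maintained in scripts like the sketch above.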
PowerChute Network Shutdown v3.1 is bundled with current UPS Network Management Cards and available as a free download via the APC by Schneider Electric product website. Users may also download the software as a virtual appliance from the VMware Solution Exchange. | 4:00p |
ABB Offers Free Preview Version of Decathlon DCIM Software
Ramping up customers into Data Center Infrastructure Management (DCIM) quickly remains a primary concern of vendors. This is the impetus behind ABB’s free version of Decathlon, its DCIM software that helps data center operators monitor their power meters. The free version of Decathlon allows data center managers to get hands-on experience with remote monitoring of up to 50 power meters. The goal is to demonstrate the software’s value and ramp more customers into Decathlon.
DCIM is a big investment and a time consuming project. Vendors have been turning to different tactics to get their foot in the door – for example, Nlyte’s SaaS version. Getting customers to try the product is a big hurdle.
Decathlon Preview for power meters is available at no cost through the Decathlon web page. The full ABB Decathlon provides tools to manage a flexible network of power, cooling and IT systems for maximum reliability, energy efficiency and optimal utilization of all data center assets. Decathlon Preview for power meters is trial software designed to provide hands-on experience with a specific feature of ABB Decathlon. The full version supports monitoring of 5,000 or more meters running virtually any protocol.
The trial period runs for 21 days, with an option to extend through a simple license renewal on 21-day calendar cycles. Decathlon Preview for power meters displays the data in ready-made dashboards with phasor diagrams, flicker indication, harmonics histograms and waveform traces.
Rich Ungar, global head of R&D for Decathlon at ABB, said the preview version was “a simple and cost-free way to discover how easily power quality can be monitored across an entire data center.”
The goal is to demonstrate value and get DCIM into customer hands easily. Decathlon Preview for power meters demonstrates how data center managers can resolve the complexities of integrating the performance data of all the equipment in a data center into a single operational view. Through that single pane view, both ABB Decathlon and the Preview version allow data center managers to easily access, view and manage system performance data from the critical power path.
The complete ABB Decathlon system allows users to create more dashboards, use preconfigured or custom reports, see where meters physically reside and compare meter values with aggregated results from other power-related data. | 9:37p |
Napier Retires from Rackspace, Graham Weston Returns as CEO
Graham Weston will return as CEO of Rackspace following the retirement of Lanham Napier. (Photo: Rackspace Hosting)
Lanham Napier is retiring from his position as CEO of Rackspace Hosting, with former CEO Graham Weston stepping in to take the helm while the Rackspace board conducts a search for a permanent chief executive, the company said today.
“I intend to explore new things, both professionally and personally, and return to my entrepreneurial roots,” said Napier. “My decision to step down as CEO was a difficult one, but it’s the right choice for me and for the company. With the board and management team aligned around our 2014 strategy and financial plan, I believe now is a natural transition point to select a new leader for the next exciting phase of Rackspace’s growth.”
Napier will remain a consultant to Rackspace for the next several months to ensure a smooth transition. The Rackspace Board of Directors has launched a comprehensive search process to hire a long-term successor CEO and will consider both internal and external candidates, the company said.
Weston, who has been serving as executive chairman, provided capital for the formation of Rackspace in December 1998 and served as Chief Executive Officer from July 1999 to August 2006.
“Expect a continuation of the strategy we’ve had in place for the last year,” said Weston. “This allows us to do the CEO search with as much diligence as possible. We’re not in a hurry to replace Lanham. We’re really going to take our time.”
“The Board has great confidence that Graham is the right person to guide Rackspace while it conducts a thorough search for a CEO with the talent and passion to lead the company during its next phase of growth,” said James Bishkin, lead director on the Rackspace Board of Directors. “Under Lanham’s leadership, Rackspace grew from a small startup to a global $1.5 billion public company, serving more than 200,000 customers, and has been one of the fastest-growing firms on the New York Stock Exchange. We are grateful for the way Lanham positioned Rackspace for continuing success in this attractive and growing market.”
“I’m personally grateful to Lanham for fourteen years of partnership and friendship as we worked side-by-side to build this company,” said Weston. “Lanham has been an inspiring and successful leader, and we wish him all the best as he pursues his other passions.”
Taylor Rhodes, the company’s Chief Customer Officer, has been appointed President. Rhodes joined Rackspace in 2007 and has served in a variety of leadership positions within the company. Prior to his role as CCO, Rhodes served as Senior Vice President and Managing Director of Rackspace International. |