Data Center Knowledge | News and analysis for the data center industry
 

Monday, July 21st, 2014

    Time Event
    12:30p
    Wrong Place, Wrong Time for Data

    Jim McGann is vice president of information management company Index Engines. Connect with him on LinkedIn.

    It’s a story that plays out in the public view all too often. An unencrypted hard drive full of personal client information is loaded onto a laptop, the laptop is left in a cab, and the entire company goes into damage control.

    This is a prime example of data being in the wrong place at the wrong time. But client records aren’t the only sets of data in the wrong place. Companies can have up to 80 percent of their data in the wrong place at the wrong time.

    Beyond the breach, storing your data

    Well beyond data breaches, having data in the wrong place eats into company resources, can add up to millions in unnecessary expenses and creates a considerable legal risk.

    Data has a habit of dying where it was born. An email sent 10 years ago that was backed up to a legacy tape is sitting in an offsite storage vault. The iTunes library the intern downloaded two years ago is saved on the user share server that’s backing up to disk.

    Data dies where it was born, where it was first backed up and first used. As this data ages, it loses context and its location is often forgotten. The data becomes sensitive, useless and/or expensive, making its current home the wrong place at the wrong time.

    Backup tape shouldn’t double as file storage and it’s definitely not an archive. Yet, there they are – boxes of tapes with nightly backups created for disaster recovery and doubling as storage and an archive in case of an eDiscovery event.

    The fact is, as this data ages the value is lost. On average, less than 5 percent of tape data has business or legal value. After five years those numbers drop to 1-3 percent. This percentage is likely composed of contracts, client/employee records and other sensitive content you can’t destroy. The likelihood of these documents being accessed again is slim, but they must be maintained for compliance and legal reasons.

    Disk-based backup has been widely accepted because of its better features and convenience, but it has two faults. First, tapes are often still created off the back end for long-term retention, and second, the systems are backing up data with no business value.

    Companies upgrade to premium disk-based backup technology only to resort back to using tape for long-term retention. This results in paying for expensive offsite storage and incurring costs to manage tapes, even though the environment is nominally “tapeless.”

    Disk also isn’t the most economical of storage platforms, and when everything is backed up to disk, including lunch requests from the past year and the intern’s iTunes library, the cost of managing this environment adds up even faster.

    In both cases, organizations need to set policy on their data, clean up what is no longer required and stop simply stockpiling legacy data in offsite storage. Long-term retention of abandoned, personal, duplicate and valueless data in the wrong place adds up.

    Cloud migration has been popular because of its affordability, easier management and smaller physical presence in the data center. But there is one way the cloud is right on track with disk and tape: vendor lock-in.

    With cloud companies popping up daily and gearing up for enterprises’ business, organizations have to be wary of depending on a single cloud backup provider that may go out of business – taking their data with it. The provider likely holds the data in a proprietary format, making a move to another vendor a lot more frustrating.

    Companies also lose physical control of the data, leading many of them to keep everything on internal servers so they can find data when they need it. This causes mass expansion of server capacity, much of it filled with junk.

    By cleaning out servers, particularly user share servers and those belonging to high-turnover departments, both capacity and cost can be reduced.

    Set parameters with data profiling

    Data profiling takes all forms of unstructured files and document types, creating a searchable index of what exists, where it is located, who owns it, when it was last accessed and optionally what key terms are in it so companies can make smarter decisions about data retention and platform.
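
    To make the idea concrete, here is a minimal, hypothetical sketch of that kind of metadata indexing using only standard Python library calls. It is not Index Engines’ product; the mount point, CSV layout and fields are illustrative assumptions.

        import csv, os, time

        def profile_share(root, out_csv="profile.csv"):
            """Walk a file share and record basic metadata for every file."""
            with open(out_csv, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["path", "extension", "size_bytes", "last_accessed"])
                for dirpath, _dirnames, filenames in os.walk(root):
                    for name in filenames:
                        path = os.path.join(dirpath, name)
                        try:
                            st = os.stat(path)
                        except OSError:
                            continue  # skip files we cannot read
                        writer.writerow([
                            path,
                            os.path.splitext(name)[1].lower(),
                            st.st_size,
                            time.strftime("%Y-%m-%d", time.localtime(st.st_atime)),
                        ])

        if __name__ == "__main__":
            profile_share("/mnt/user_share")  # hypothetical mount point

    A full profiling engine layers owner lookups (for example via Active Directory), optional full-text terms and summary reporting on top of an index like this; the point is that every later retention decision keys off the same few fields.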

    Leveraging a rich metadata or full-text index as well as powerful Active Directory integration, content can be profiled and analyzed with a single click. High-level summary reports allow instant insight into enterprise storage, providing never-before-available knowledge of data assets. During this process, mystery data can be managed and classified, including content that has outlived its business value or sensitive data that poses a corporate risk or liability.

    Then data profiling enables companies to build and automate policies to manage this data. The built-in disposition capabilities within the engine make constructing and enforcing information management, compliance, defensible deletion or other retention policies simple, auditable and automated.

    Set parameters around what data exists, who owns it, file type, when it was last accessed and where it’s located.  Disposition options include migration to cloud or lower cost storage tiers, defensible deletion, archiving, and more. Identify what has value and should be kept for long-term preservation and eliminate the save-everything strategy.
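
    As a rough illustration of how such disposition policies can be automated against a profile index, the following sketch classifies each indexed file by type and age. It builds on the hypothetical profile.csv above, and the thresholds, file extensions and tier names are assumptions for the example, not vendor defaults.

        import csv
        from datetime import datetime, timedelta

        ARCHIVE_AFTER = timedelta(days=3 * 365)   # untouched for 3+ years -> archive tier
        DELETE_EXTENSIONS = {".mp3", ".m4a"}      # e.g. personal media on a user share

        def classify(row, now=None):
            """Return a disposition action for one indexed file record."""
            now = now or datetime.now()
            last_accessed = datetime.strptime(row["last_accessed"], "%Y-%m-%d")
            if row["extension"] in DELETE_EXTENSIONS:
                return "defensible_delete"
            if now - last_accessed > ARCHIVE_AFTER:
                return "migrate_to_archive_tier"
            return "retain_in_place"

        with open("profile.csv") as f:
            for row in csv.DictReader(f):
                print(row["path"], "->", classify(row))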

    The time is now

    Data has a habit of dying where it was born because no policy is set around what to do with it. With new technology and the various storage platforms already maintained in the data center, it is the right time to manage, tier, remediate, encrypt and archive that data.

    Policy setting can remediate legacy tape, keep disk from turning into tape and maintain server size, saving companies cost, capacity, legal risks and maybe even a data breach story on the nightly news.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Intel Designs Custom Scalable Chips for Oracle’s Massive Database Machines

    Intel has designed a custom processor for Oracle that it says will enable the customer to dynamically scale frequency and the number of cores processing Oracle software workloads.

    Customizing chip designs for high-volume customers is already a big business for Intel, and one that is growing. The company makes custom chips for vendors as well as web-scale data center operators, such as Facebook, Google and Amazon.

    In June, Intel also announced a hybrid processor that combines its Xeon E5 chip with a Field-Programmable Gate Array. It has not disclosed an availability date, but once out, customers will be able to change the processor’s configuration dynamically as their needs change.

    The custom SKU Intel came up with for Oracle is a modified version of its latest Xeon E7 v2 chips. Making frequency and the number of active cores dynamic takes compute elasticity to a new level of granularity. Elastic capacity traditionally means dynamic adjustment of the number of physical or virtual machines allocated to a certain workload.
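
    The Intel-Oracle mechanism lives in the processor and the Oracle software stack, but for a rough software-level feel of what per-core, per-frequency elasticity means, here is a hypothetical sketch against Linux’s standard sysfs CPU interface. The CPU numbers and frequency cap are arbitrary assumptions, it requires root privileges, and it is not how Exadata implements the feature.

        # Coarse, OS-level analogue only: park cores and cap frequency via sysfs.
        CPU_ONLINE = "/sys/devices/system/cpu/cpu{n}/online"
        MAX_FREQ = "/sys/devices/system/cpu/cpu{n}/cpufreq/scaling_max_freq"

        def set_core_online(n, online):
            """Bring logical CPU n online (True) or take it offline (False)."""
            with open(CPU_ONLINE.format(n=n), "w") as f:
                f.write("1" if online else "0")

        def cap_frequency(n, khz):
            """Cap the maximum frequency of logical CPU n, in kHz."""
            with open(MAX_FREQ.format(n=n), "w") as f:
                f.write(str(khz))

        if __name__ == "__main__":
            # Example: park cores 8-15 and cap core 0 at 2.4 GHz during off-peak hours.
            for cpu in range(8, 16):
                set_core_online(cpu, False)
            cap_frequency(0, 2400000)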

    The companies made the announcement at the same time Oracle announced the latest model of its massive database machine, Exadata. The new machine, X4-8, uses the custom 15-core chips Intel designed to power its database server hardware.

    Diane Bryant, senior vice president and general manager of the Data Center Group at Intel, said, “This customized version of the Intel Xeon processor E7 v2, developed in collaboration with Oracle, helps maximize the power of the Exadata Database Machine X4-8 by elastically accelerating peak performance of database operations, while also reducing the data footprint.”

    Oracle says it has optimized X4-8 specifically for Database-as-a-Service and in-memory database workloads. It comes with up to 12 terabytes of DRAM and can be used to either consolidate hundreds of databases or to run massive databases in-memory.

    The system has eight-socket servers, intelligent storage, fast PCI flash cards and InfiniBand connectivity.

    Here are the hardware enhancements over the previous generation of Exadata:

    • 50 percent more database compute cores, using the Xeon E7 v2 15-core processors
    • Up to 6TB of memory per compute node for a total of 12TB per rack
    • 2x faster InfiniBand with a new PCIe card with all ports active
    • Nearly two times more local disk space
    • Up to 672 TB of disk storage and 44 TB of PCI flash per rack

    Oracle also recently announced new software that enables customers to use SQL to run queries across SQL, Hadoop and NoSQL databases.

    2:00p
    Data Center Jobs: McKinstry

    At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking a Controls Technician in Indianola, Iowa.

    The Controls Technician is responsible for coordinating facility operations, which includes:

    • training staff when applicable (initial certification, re-certification and on-boarding training for all new site technicians, as directed)
    • supporting audit initiatives, including training records and project compliance auditing
    • conducting extensive self-study (reading, research and practice) to improve and maintain technical proficiency in BMS systems
    • completing certifications as required by the company
    • providing technical assistance to team members at the data center

    To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    5:18p
    Former Switch & Data Team’s Firm vXchnge Breaks Ground on Philly Data Center

    Continuing its rise from the ashes of Switch & Data, vXchnge broke ground on a Philadelphia data center earlier this month.

    The facility is an example of the company’s strategy to build what it dubs “Built for Performance” data centers in 15 strategic, geographically dispersed markets across the country. The company, started by alums of Switch & Data, the data center provider Equinix gobbled up for about $690 million in 2009, targets underserved, well-connected regions to enable edge computing.

    Its target customers are network-centric businesses that need to expand geographic reach. “We’re not a managed services company; we want to enable managed services,” Keith Olsen, CEO of vXchnge and former CEO of Switch & Data, said.

    vXchnge is building a 70,000 square foot Philadelphia data center for high-density infrastructure with multiple network and service providers. The facility is located at 1500 Spring Garden Street, and the company expects to bring it online by mid-2015.

    vXchnge is managed by Switch & Data’s former management team, and its strategy employs a lot of Switch & Data DNA. The older company also tended to go after underserved markets.

    “Data center deployments need to be in the right geography,” Olsen said. “Philly is a tremendous marketplace. Philly is high in demand, and the supply is tight.”

    Philadelphia’s untapped data center potential

    Philadelphia is the third-densest city in the U.S. and has high demand for data center services. vXchnge is positioning its facility there as one that doesn’t have the physical and technological limitations of many existing legacy data centers and the buildings where they are housed.

    The facility also represents a sort of homecoming, as it was the location of the first Switch & Data facility. “We had a history there,” said Olsen. “When you think about business metrics, average revenue per square foot, interconnection per CPU, Philly was always at the top of the list. We know what was happening to our sites when it came to demand and power and cooling.”

    The team relied on more than its past experience in the market. It also performed an analysis of the top 50 markets in the U.S., and Philadelphia offered substantial opportunity.

    The analysis included population growth, business presence and network capacity. “I’m a telegeographer,” said Olsen. “This analysis is the macro research; then there’s the micro research. We’ve established excellent customer relations over the decade. We ask customers about what areas they’re looking at as well.”

    vXchnge doesn’t consider Philadelphia to be a secondary market, but rather a very underserved, well-connected emerging market.

    “We’re not doing this as a backup site; our strategy is the edge,” said Olsen. “How do you enable applications so you’ve effectively supported our exchange points? Our lives, personal and professional have become more network-enabled everyday. Data center deployments need to be in the right geography. We enable an application from that point.”

    Provider consolidation marches on

    The entire former Switch & Data leadership team spent 18 months speaking to customers in the market and evaluating what data center services people were looking at. “We’re maniacally focused on the exchange point,” Olsen said.

    vXchnge was formed after the Stephens Group acquired Bay Area Internet Services (BAIS), a colocation provider in Santa Clara, California, in 2013. The Little Rock, Arkansas-based private equity firm partnered with what is now the vXchnge management team on the transaction and formed vXchnge.

    The Stephens Group later merged Fiber Media, its other private equity investment, with vXchnge, expanding the company’s data center footprint. Olsen said it was now focused on building new data centers.

    Today, vXchnge operates in four other U.S. markets on east and west coasts.

    This isn’t the only new data center company that was spawned by Equinix’s Switch & Data acquisition. Another provider, 365 Main, acquired the former Switch & Data data center portfolio from Equinix in 2012.

    5:38p
    Interxion Teases Centralized Private Multi-Cloud Connection Service

    European data center provider Interxion announced Cloud Connect, an upcoming service that allows companies to connect to one or more cloud service providers through private, secure links, bypassing the public Internet. It will include service level guarantees on latency, throughput and security. BTI Systems provided the technological foundation for the service.

    Private connections to multiple clouds from Interxion data centers will make it easier for customers to integrate public cloud services into enterprise environments.

    The services will be provisioned through a BTI-made customer service portal that enables online ordering and management of customers’ virtual LAN connections to cloud providers.

    Interxion has maintained a “cloud-neutral” strategy, saying it has no intentions of getting into the cloud services business itself. It focuses on connecting customers to whatever clouds they want.

    The company has previously announced private links to Amazon Web Services and Microsoft Azure.

    “Every organization beginning a migration to the cloud faces the challenge of making the hybrid model work with their existing IT installations,” said Mario Galvez, Interxion’s vice president of product management. “The choice of data center is important because the applications running in different environments need to interact in a secure and high-performance manner. Cloud Connect makes this decision-making process easier by offering multiple private connect services from a single connection.”

    Cloud Connect will launch in London, Frankfurt and Amsterdam in the third quarter of 2015, and other markets later.

    Providing private links to public cloud services has become an important part of colocation providers’ businesses, and many of them have been racing to get public cloud points of presence installed in their facilities.

    Digital Realty Trust recently launched cloud connectivity services in the UK in partnership with communications services provider Epsilon. Equinix continues to expand its private connections to public clouds, recently rolling out Azure ExpressRoute globally, including Europe.

    Level 3 launched a Cloud Connect service in October, offering the underlying network connectivity and services for global enterprises to more effectively integrate the cloud into their evolving IT architecture.

    7:53p
    Pivotal Grabs Puppet Labs Co-founder Andrew Clay Shafer as Director of Technology

    Andrew Clay Shafer, a familiar face in the DevOps community and one of the co-founders of the IT automation startup Puppet Labs, has taken a senior technologist role at Pivotal, the EMC and VMware company led by former VMware CEO Paul Maritz.

    Shafer joins Pivotal as senior director of technology, reporting directly to president and head of product Scott Yara. Yara was a co-founder of Greenplum, a data analytics technology firm EMC bought in 2010 and made part of Pivotal along with other past acquisitions.

    Shafer is the latest addition to the star-studded management team at Pivotal, which aims to sell enterprises the technology and the mindset necessary for building and deploying modern software applications at the same pace Internet giants, such as Google and Facebook, build and deploy them.

    In a Q&A published on Pivotal’s blog announcing his appointment, Shafer said his initial focus will be on growing the Cloud Foundry community. Cloud Foundry is Pivotal’s open source Platform-as-a-Service technology.

    “For the moment, Scott asked me to focus on fostering a vibrant technical community around Cloud Foundry and look for ways to help align Pivotal’s products strategically with the opportunities in a market being transformed by cloud computing, open source, agile, DevOps and data,” he said.

    Cloud Foundry is part of Pivotal’s varied portfolio of products and services and something Shafer has been involved with in the past. “I see Cloud Foundry as an empowering technology that allows operations to declaratively manage distributed services as a top-level abstraction while ensuring consistent application of policies and self-service to the frontline developers,” he said.

    Shafer acted as an MC at the Cloud Foundry Summit this past June in San Francisco.

    After he left Puppet, the successful DevOps-style IT automation company, he went on to work as VP of engineering at Cloudscaling, helping companies build and operate OpenStack and CloudStack infrastructure. He also did a “short tour of duty” as a cloud builder at Rackspace, a service provider deeply invested in OpenStack.

    8:43p
    Rackspace to Move into Former Texas Shopping Mall … Again

    Making a habit out of using defunct malls as office buildings, Rackspace has agreed to lease a building in Austin, Texas, that used to house the Highland Mall.

    The building will give Rackspace additional office real estate in the vibrant and quickly growing city, where it already has an office building. There is a thriving tech startup scene in Austin with no shortage of venture capital.

    The building belongs to Austin Community College, which bought it in 2010. The idea to renovate the four-story 194,000 square foot building and lease it to Rackspace came from Live Oak-Gottesman, a developer who suggested it in a request for proposals to ACC.

    Live Oak-Gottesman will renovate the building and get paid by a portion of Rackspace’s lease payment, according to an ACC news release.

    Once the data center services company’s offices at Highland open – which ACC expects to happen in late 2015 – it will offer paid internships and tech training for ACC students. Rackspace is reportedly planning to relocate 570 employees to the new campus.

    Richard Rhodes, ACC president and CEO, said, “Rackspace is one of the area’s top employers and has a strong commitment to education.”

    This will not be the first time the company will have moved into a former shopping mall. Its current headquarters building in San Antonio, Texas, is the former Windsor Park Mall.

    There are scores of massive defunct properties all over the U.S. that used to house shopping malls. The list grew with the proliferation of online shopping as well as the fallout from the 2008 recession.

    Some of the properties have been repurposed, such as the two buildings in Texas. While Rackspace is a data center company that occupies former shopping malls as office buildings, there are also companies that convert these properties into data centers.

    One example is Ubiquity Critical Environments, a company established by Sears Holdings, a long-time brick-and-mortar retailer that now owns a lot of the shuttered retail real estate. Ubiquity is tasked with repurposing former Sears Auto Centers for data center use.

    Another example is a company called AiNET, which turned a former department store within a Maryland mall into a data center and in 2013 expressed interest in buying the entire property, whose owners were facing foreclosure after defaulting on a loan.

    9:00p
    New CEO Positioning RF Code as an Internet of Things Play for Data Centers

    RF Code, the company best known for its RFID-tag-based asset management solutions, has a new CEO at the helm. Ed Healy has stepped into the role as the company prepares to launch a new strategy, aiming to change the dynamics of using Big Data to run data centers. Early in his tenure, Healy sees massive opportunity in both global expansion and improving proactive asset management capabilities through software.

    RF Code provides asset management, environmental monitoring and data center optimization solutions. Its strategic imperative is to provide actionable insight into collected data. “It’s a misnomer to view RF Code as strictly an RFID play,” Healy says. He believes there is a lot more value in the data the company collects, and that value lies in predictive analytics.

    “A lot of what’s going on right now is collecting a lot of data,” he says. “We’re putting it into reports that are meaningful to operators. In terms of predictive analytics, there’s a lot of development we can throw in. We’re tied into building management, so adaptive technologies are the big opportunity going forward.”
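
    As a toy illustration of the step from collected readings to adaptive, predictive behavior – not RF Code’s software, and with the polling window, threshold and recommended action all assumed for the example – a monitor might project rack inlet temperature from its recent trend and suggest a cooling change before a limit is crossed:

        from collections import deque

        WINDOW = 12               # last 12 readings, e.g. one hour at 5-minute polls
        ALERT_THRESHOLD_C = 27.0  # assumed upper bound for rack inlet temperature

        class InletTempMonitor:
            def __init__(self):
                self.readings = deque(maxlen=WINDOW)

            def add_reading(self, celsius):
                self.readings.append(celsius)

            def projected_temp(self):
                """Naive linear projection: last value plus the average recent change."""
                if len(self.readings) < 2:
                    return self.readings[-1] if self.readings else None
                values = list(self.readings)
                deltas = [b - a for a, b in zip(values, values[1:])]
                return values[-1] + sum(deltas) / len(deltas)

            def recommended_action(self):
                projected = self.projected_temp()
                if projected is not None and projected > ALERT_THRESHOLD_C:
                    return "lower CRAC setpoint / increase airflow"
                return "no change"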

    The new CEO sees scaling revenue over the next six months to a year as the company’s biggest and most immediate challenge, but the second key challenge is more strategic. “There’s a very focused effort within the company to increase the value we provide through software. We can almost leapfrog what’s currently being done in terms of a software platform,” he says.

    Positioning as Internet of Things company

    Healy wants to make sure RF Code has a place in the world of increasingly connected devices. A self-described big user of the “Internet of Things” in his personal life, Healy brings up Nest (the intelligent home thermostat company Google bought earlier this year) as an example. Controlled by smartphones, Nest’s technology learns the temperatures you like and adjusts accordingly when you’re there. The result is lower energy bills and a thermostat you don’t need to touch, he says. It’s one of the bigger early successes in the Internet of Things.

    “If you think about what we’re doing, we’re sort of like the Nest of data centers,” Healy says. “We’re providing control over smart phones. We’re an IoT play for sure. While Nest has done a great job in terms of their software and their ability to self-learn [home temperature] patterns, we’re increasingly doing the same in data centers.”

    Aggressive push into Asia underway

    RF Code hit the 2 million assets mark in data centers in September of last year. The company has high market penetration in North America, and the EMEA region is its fastest growing market. However, Healy sees Asia as a big chance for growth and has the background to help the company hit the ground running there.

    “We’re missing a complete market for ourselves over in Asia,” he says. “That’s my background.” That background includes an executive role at Silicon Laboratories, an Austin, Texas-based semiconductor company with a big presence in Asia, and a role as senior advisor to the chairman of MediaTek, a semiconductor company headquartered in Taiwan.

    “We can grow our revenue significantly there in the short term,” he said. “We’re using on the ground distributors that I have close relationships with, and they get what we’re doing.”

    He says his existing relationships give RF Code a foothold in Asia without the need to invest money in establishing initial relationships in the new market early on. Healy has full confidence in this network, which has already started yielding results. The company recently touted a win with one of Hong Kong’s largest power companies to improve energy efficiency of its data centers.

    A month in, it’s still early for Healy, but he says he feels very good about the software talent within the company and its ability to develop further capabilities around analyzing the data its asset tags collect. “One thing I’ve always felt pretty good about is I’ve always been able to assess talent, and the developer talent is here.”

    Several data center infrastructure management providers integrate with RF Code and say it is one of their more popular integrations. Better predictive analytics and adaptive technologies will increase the value the company provides. “For me the real challenge will be to hone in and provide a real focus and strategy on what we want to do in the next year,” said Healy.

