Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, October 29th, 2013

    12:30p
    From Legacy to Legendary: Making Your Mainframe Work For You

    Andrew Wickett is director of migrations and modernizations at Micro Focus.

    ANDREW WICKETT, Micro Focus

    What do you think of when you hear the word “mainframe”? For many of us, images of clunky, outdated and expensive legacy systems come to mind. Despite the crucial role they play in powering the core of so many companies and government systems, mainframes still carry a negative reputation. A good portion of this is simply due to three common misperceptions: perceived high cost, a perceived IT skills crisis and perceived irrelevance to modern computing.

    In this article, we’ll delve into these misperceptions surrounding mainframe use and explore how organizations can maximize their current systems to fuel future business practices by understanding the changing workloads and ensuring mainframe environments are up-to-date.

    Perceived High Cost

    For many large-scale organizations, the mainframe is considered powerful, secure and unrivalled in reliability. They find these systems to be cost effective and high performing. However, when an application’s processing consumption is rising or its response times are not meeting users’ real-time expectations, often the cost benefit justification is put in jeopardy, either because MIPS/MSU costs continue to rise and in turn force hardware upgrades, or simply because the systems are becoming too complex and expensive to maintain. Organizations need to look for ways to modernize and/or mobilize components of their workloads. Luckily, with a little attention and improved understanding, companies can easily ensure that their mainframes remain legendary systems, instead of legacy systems.

    By making sure the mainframe environments are kept up to date and understanding the workloads with which the mainframe must contend, companies can overcome the surprises of degraded service, rising costs or unplanned hardware upgrades.

    It’s important to “right size” your environment and look for opportunities to optimize your workload (e.g., offloading development and testing).

    IT Skills Crisis

    Aside from erroneous ideas about cost, there is also too little academic support for the survival and development of the mainframe. The lack of IT skills in mainframe languages remains a significant issue in major enterprises and is one of the primary drivers, in addition to cost, of the growth of the outsourcing market over the last decade. Because of the apparent lack of skilled mainframe developers, organizations are seriously considering the “rip and replace” method as a means to trade in their old systems for off-the-shelf solutions. In most cases, this is not only unnecessary, but exceedingly expensive and very risky.

    Yet there is a massive disparity between what is being taught and the skills needed in business. Recent research has shown that mainframe programming languages, like COBOL, are being taught at the university level, but we still have a long way to go. Data from over 100 schools shows that only 1 out of 4 universities has COBOL as part of its curriculum. However, 71 percent of companies report that they will rely heavily on applications built in COBOL for the next decade and beyond.

    How can we bridge the divide? Proponents of the mainframe development paradigm have forged relationships with training organizations, academic institutions and others to help build the next generation of mainframe programmers. Commerce, academia and even IT students themselves must come together, with vendor support. There are examples of this happening already – with some companies retraining programming staff in key mainframe skills. Research shows that modern tooling enabled developers to pick up traditional mainframe languages like COBOL in a matter of hours, effectively eradicating the perceived skills crisis in a single stroke. The key is providing a modern development environment, such as Visual Studio or Eclipse, that supports traditional mainframe languages and environments – COBOL, JCL, VSAM, CICS, IMS and so on.

    Irrelevance to Modern Computing

    Generally speaking, mainframes go quietly about their business, processing many of the world’s most critical business applications without much fuss. As a result, there is little mindshare devoted to them and very little new competitive edge or corporate emphasis placed on them. It’s only when an outage occurs that anyone broaches the topic of how their mainframe is (or isn’t) working for them. The untold story of the enduring value of the mainframe leaves a vacuum into which negative perceptions get pulled.

    The fact of the matter is mainframes are not only relevant, but imperative for thousands of global organizations. We need more investments and innovation to retain and improve how mainframe systems operate with modern computing platforms like Windows, Linux, and AIX.

    The mainframe can and should be seen as relevant for many of today’s organizations. These are the systems that run the business – and, as such, they need the management, investment and innovation to support the future. Many of these applications have organizational IP embedded in them and differentiate a company’s products and services – something that off-the-shelf packages can’t offer. Organizations need to take this IP and modernize its usage and delivery by, for example, introducing self-serve websites, new interfaces and mobile variations, all of which can be done without modifying the code or duplicating effort. Organizations need to leverage these assets rather than try to recreate or re-invent them. Instead of labeling mainframes “legacy systems,” it’s more appropriate to call them “foundation systems.”
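
    To make that concrete, here is a minimal, hedged sketch – not Micro Focus tooling, just an illustration in Python – of how an existing mainframe transaction might be surfaced through a thin REST endpoint so a self-serve website or mobile app can call it. The gateway address and the ACCTINQ transaction name are hypothetical placeholders.

        # Minimal sketch: expose an existing mainframe transaction as a REST endpoint
        # without touching the underlying COBOL/CICS code. The gateway address and the
        # "ACCTINQ" transaction name are hypothetical placeholders, not a real API.
        import json
        import socket
        from http.server import BaseHTTPRequestHandler, HTTPServer

        GATEWAY = ("mainframe-gw.example.com", 3270)  # hypothetical transaction gateway

        def call_transaction(tran_id: str, payload: str) -> str:
            """Send a request to the (hypothetical) gateway and return its reply."""
            with socket.create_connection(GATEWAY, timeout=10) as conn:
                conn.sendall(f"{tran_id}|{payload}\n".encode())
                return conn.recv(4096).decode().strip()

        class AccountHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # e.g. GET /accounts/12345 -> account inquiry transaction
                account_id = self.path.rsplit("/", 1)[-1]
                reply = call_transaction("ACCTINQ", account_id)
                body = json.dumps({"account": account_id, "result": reply}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), AccountHandler).serve_forever()

    The point is that the core business logic stays exactly where it is; only a lightweight delivery layer is added in front of it.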

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:12p
    Public Cloud Provider CloudSigma Adds Presence In Equinix Ashburn

    A full cable tray reflects the interconnections available in an Equinix data center. CloudSigma has added space at an Equinix site in Ashburn, Virginia. (Photo: Equinix)

    Public cloud provider CloudSigma has added a second U.S. location, expanding into Equinix’s DC6 data center in Ashburn, Virginia. This adds an east coast presence to its existing infrastructure in Las Vegas at the Switch SuperNAP. The additional location can improve customer latency and performance, and provides added redundancy and a more convenient failover option within U.S. borders.

    Customers can now more easily geobalance workloads across multiple data centers, thereby keeping their data and mission-critical applications accessible in the event of a disaster or a system outage. The Las Vegas site can fail over to Virginia, and vice versa. Deploying at both sites also means a web service can be closer to end users.
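
    As a rough illustration of what that failover might look like at the application level, here is a minimal Python sketch, assuming hypothetical health-check URLs for the Las Vegas and Ashburn deployments – this is not CloudSigma’s API:

        # Minimal sketch of active/passive failover between two regions.
        # The health-check URLs are hypothetical placeholders, not CloudSigma endpoints.
        import time
        import urllib.request

        REGIONS = [
            ("las-vegas", "https://lv.app.example.com/health"),
            ("ashburn",   "https://iad.app.example.com/health"),
        ]

        def healthy(url: str) -> bool:
            """Return True if the endpoint answers 200 within 5 seconds."""
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.status == 200
            except OSError:
                return False

        def pick_active() -> str:
            """Prefer the first healthy region; callers would repoint DNS or load balancing."""
            for name, url in REGIONS:
                if healthy(url):
                    return name
            return "none"

        if __name__ == "__main__":
            while True:
                print("active region:", pick_active())
                time.sleep(30)

    In practice the “repoint” step would be a DNS or load-balancer update, but the pattern – prefer the primary region, fall back to the secondary when its health check fails – is the same.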

    CloudSigma doesn’t disclose customer numbers for any of its cloud locations, but the company says it has seen a 10-15 percent month-on-month increase in capacity usage across its various cloud locations.

    Equinix’s DC6 is highly connected, boasting more than 200 network service providers and nine of the top 10 content delivery networks. Its connections to key metro areas such as Chicago, New York and London additionally make it an ideal location for CloudSigma’s expanding public cloud offering.

    The selection of Equinix for its east coast presence was initially announced when the two companies partnered last July. Zurich and DC are the first phase of an ambitious roll-out for CloudSigma targeting select markets covered by Equinix’s global footprint.

    “The combination of increasing customer demand and our growing partnership with Equinix made this an ideal time to grow our public cloud to a second U.S. data center location,” said Robert Jenkins, CloudSigma CEO. “This expansion shows further validation for our uniquely flexible and customer-centric public cloud model – something we expect to build out in additional regions, including Asia, Latin America and the Middle East, in the near future.”

    “We help customers connect around the world and are pleased to have CloudSigma expand with us in this additional data center location,” said Dick Theunissen, Equinix EMEA CMO. “We have more than 650 customers colocated with us in the Ashburn data center campus in the D.C. metro area. The Ashburn data center campus continues to be of great strategic importance for the company, as it represents one of the largest Internet exchange points in the world, and, by bringing one of the best-of-breed IaaS providers directly to them, we are fostering even more productive computing environments for customers within our IBX.”

    CloudSigma released version 2.0 of its platform last June, embracing Software Defined Networking (SDN) and Solid State Drives (SSD). “The feedback has been excellent,” said Jenkins. “CloudSigma has been seeing a doubling of conversion rates on new business on the new platform and existing customers have also reported positive feedback with the additional feature set. We have also dropped prices significantly over the last six months, offering even better value whilst at the same time raising our overall performance levels using SSD, SDN and other technological improvements within our cloud stack.” Prices have dropped a cumulative 28 percent for CPU and 36 percent for RAM in the last six months.

    1:53p
    How Virtualization Creates Cloud-Ready Security Options

    Many organizations have turned to virtualization technologies to help them become more agile and scalable. Now, it’s time to create cloud-ready security.

    As you know, the demands of the market have created a need for companies to provision resources dynamically, without incurring too much additional cost. By using virtualization, IT administrators gain more control over their infrastructure and how resources are divided, and are able to deliver a better user experience. Over the past few years, virtualization has evolved far beyond the server – and far beyond what many of us remember it to be.

    Now, managers are delivering various workloads down to the end-user regardless of what hardware they are running or from which location they are trying to consume data. Beyond server virtualization, new technologies are entering the modernized data center. This shift revolves around technologies like:

    • Security virtualization
    • Application virtualization
    • Desktop virtualization
    • User virtualization
    • Storage virtualization
    • And many more.

    By controlling data, applications, and even desktops at the data center, administrators can deliver a secure look and feel to the end-user. By incorporating flexible, virtualization-ready solutions, data center administrators suddenly have a lot more tools at their disposal.

    In working with virtualization, just like with any other technology, security must be a priority. Although this data is always stored at the data center level, that doesn’t mean accidental or malicious events won’t happen. With that in mind, centralizing information on a virtual node allows administrators to develop new types of security methodologies. This means that policies will need to evolve and new layers of cloud security will need to be adopted.

    • Create good policies. Aimed more at the user than at the IT staff, good policies matter because virtualization is often the mechanism used to deliver data to BYOD devices. Be sure to remind users that although the devices they are using may belong to them, the information they are accessing still belongs to the organization. This means that if a user is accessing a virtual application or virtual desktop remotely, they must be aware of their connection and their surroundings. Computer usage policies can be extended to help support and cover BYOD initiatives. Within those policies, be sure to explain how virtual desktops and applications are used and monitored. Virtualization, and the information it delivers, requires IT shops to rethink security and end-user policies. Remember, even though this data doesn’t live at the end-point, new types of security threats are always aiming to take down the hottest new technologies.
    • Use next-generation security. Although it’s a bit of a buzz term, the idea behind next-generation security technologies is very real. Beyond just a standard physical firewall, next-gen security looks to introduce newer, more advanced scanning engines, virtualization technologies, and better visibility into the cloud. For example, administrators are able to incorporate mobile and device management solutions which monitor all of the incoming BYOD devices. Some of these devices may only be accessing email, while others may need access to applications and desktops. These management platforms can check for OS versions and even see if a device is rooted. Furthermore, next-generation security appliances now offer much more advanced access interrogation policies. An administrator can set a four-metric entry policy; if a device passes only two of the four interrogation metrics, it may be given access to only part of the environment (a minimal sketch of this idea follows the list). These metrics can range from having the right AV database to an OS version and patch level – and even the location of the device.
    • Control your cloud. Part of securing a virtual infrastructure is being very proactive. To do so, there must be control mechanisms in place for the virtualization infrastructure. Proactive alerts and alarms should be configured for resource utilization, access, and of course workload/hardware notifications (see the monitoring sketch after this list). By monitoring an environment, administrators are able to see spikes in data usage (both LAN and WAN), see where and what users are accessing, and continuously monitor the type of data entering and leaving the data center. Virtualization allows for the creation of a logical layer. This layer can be better monitored because policies can be put into place to trigger improved alerts and notifications. By giving administrators greater control over their cloud, virtualization helps them create a more robust infrastructure based on proactive security best practices.
    • Use intelligent AV. Just because a desktop or application is virtual doesn’t mean it’s not susceptible to a malicious attack. Traditional AV engines have always been a bit bulky and resource intensive. New technologies look to skip over the heavy resource utilization and become much more virtualization aware. For example, a virtualization aware AV engine can run at the hypervisor level, scanning all of the traffic which flows into and out of the VM. Trend Micro introduced its Deep Security platform to do just that. It will integrate directly with VMware Tools to facilitate virtualization-ready security at the hypervisor layer. Another great example is 5nine’s security model and how it interacts with Hyper-V. This way, administrators don’t actually have to install AV clients on the workloads. Because of this, the AV process becomes much more streamlined and efficient. Now, we’re introducing new levels of security and efficiency for your virtual platform.
    • Lock down apps, desktops, and users. With virtualization comes the very real need to lock down the environment. The great part here is that there are technologies which are able to help out. In working with virtual desktops and applications, workloads can be locked down based on their location, the type of device requesting access, and user group policy settings. In fact, based on the user or application, entire menu items or sections of an app can be locked down. Depending on your organization and the type of apps/desktops you’re deploying, you may have varying needs as to the level of security required. From an end-user’s perspective, creating a secure yet very functional environment is very important. End-users can be controlled by deploying effective measures of user virtualization. This means that user settings, preferences, network settings, and other personalization options can migrate with the user. Furthermore, these settings can be controlled at a very granular level. Administrators can lock down everything from, as mentioned earlier, application menu items to the usage of USB keys or ports.
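
    To make the access-interrogation idea from the “next-generation security” bullet concrete, here is a minimal sketch in Python. The metric names and access levels are hypothetical – no vendor’s policy engine is implied – but the logic mirrors the description above: full access when a device passes every check, partial access when it passes at least two of the four.

        # Minimal sketch of a four-metric device interrogation policy.
        # Metric names, thresholds and access levels are hypothetical examples.

        REQUIRED_METRICS = ["av_database_current", "os_version_ok", "patched", "location_trusted"]

        def interrogate(device: dict) -> str:
            """Map the number of metrics a device passes to an access level."""
            passed = sum(1 for metric in REQUIRED_METRICS if device.get(metric, False))
            if passed == len(REQUIRED_METRICS):
                return "full-access"
            if passed >= 2:
                return "partial-access"   # e.g. email only, no virtual desktops
            return "deny"

        # Example: a device that passes the AV and location checks but fails the OS checks.
        print(interrogate({"av_database_current": True, "location_trusted": True}))  # partial-access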
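
    And as a simple illustration of the proactive alerting described under “control your cloud,” here is a sketch that polls host utilization with the third-party psutil package; the thresholds and the notification step are placeholders:

        # Minimal sketch of proactive resource alerting; thresholds are arbitrary examples.
        # Requires the third-party psutil package (pip install psutil).
        import time
        import psutil

        THRESHOLDS = {"cpu_percent": 85.0, "memory_percent": 90.0}

        def notify(message: str) -> None:
            # Placeholder: a real setup would page an admin or raise an alarm in a console.
            print("ALERT:", message)

        def check_once() -> None:
            cpu = psutil.cpu_percent(interval=1)
            mem = psutil.virtual_memory().percent
            if cpu > THRESHOLDS["cpu_percent"]:
                notify(f"CPU utilization at {cpu:.0f}%")
            if mem > THRESHOLDS["memory_percent"]:
                notify(f"Memory utilization at {mem:.0f}%")

        if __name__ == "__main__":
            while True:
                check_once()
                time.sleep(60)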

    Remember, even though the data is centrally stored, administrators should still take the same active precautions to protect their environment. A good management plan, solid update cycles and regular testing are all good measures to ensure that preventative maintenance is kept up. The cloud isn’t perfect, so in designing a security strategy, being proactive can save time and money and – very importantly – reduce downtime due to security issues.

    2:32p
    Big Data News: Splunk Analytics For Hadoop, Big Data Storage Innovations

    As the big data conferences Strata and Hadoop World convene in New York this week, there’s lots of big data news. Live streaming is available.

    Splunk Analytics for Hadoop

    Operational intelligence software provider Splunk (SPLK) launched Hunk, software that integrates exploration, analysis and visualization of data in Hadoop, this past summer. On Tuesday, the company announced the general availability of Hunk: Splunk Analytics for Hadoop.

    Hunk is a full-featured, integrated analytics platform for Hadoop that enables users to interactively explore, analyze and visualize historical data in Hadoop. Built with patent-pending technology, Hunk offers powerful, self-serve analytics without the need for specialized programming.
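
    For orientation only – this is a generic sketch against Splunk’s standard REST search endpoint, not Hunk documentation – a search could be driven programmatically as shown below. The host, credentials and the hadoop_weblogs index are placeholders, and the assumption that a Hunk virtual index is queried with ordinary index= syntax is exactly that, an assumption.

        # Hedged sketch: submitting a search through Splunk's REST search endpoint with
        # the third-party requests library. Host, credentials and the "hadoop_weblogs"
        # index are placeholders; how Hunk exposes virtual indexes is assumed, not quoted
        # from Splunk documentation.
        import requests

        BASE = "https://splunk.example.com:8089"
        AUTH = ("admin", "changeme")

        # Create a blocking search job.
        job = requests.post(
            f"{BASE}/services/search/jobs",
            auth=AUTH,
            verify=False,
            data={"search": "search index=hadoop_weblogs status=500 | stats count by host",
                  "exec_mode": "blocking",
                  "output_mode": "json"},
        )
        sid = job.json()["sid"]

        # Fetch the finished results as JSON.
        results = requests.get(
            f"{BASE}/services/search/jobs/{sid}/results",
            auth=AUTH,
            verify=False,
            params={"output_mode": "json"},
        )
        for row in results.json()["results"]:
            print(row)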

    “Hunk is transforming the way organizations analyze their data in Hadoop by replacing drawn out development cycles with software that enables customers to deploy and deliver insights in hours instead of weeks or months,” said Sanjay Mehta, vice president of product marketing, Splunk. “Hadoop is an increasingly important technology and many organizations are storing vast amounts of data in Hadoop. However, this often creates a problem because the data sets become too big to move and more traditional approaches to analytics of raw data in Hadoop require brittle, fixed schemas. These are key reasons our customers consistently tell us about the cost, time and sheer difficulty of getting analytics out of their Hadoop clusters. With Hunk, we applied everything Splunk has learned from ten years of experience with more than 6,000 customers to this unique challenge.”

    More than 100 customers took part in the Hunk beta program. “Hunk enables our enterprise customers to achieve their big data goals,” said Kou Miyake, President and CEO, NTT DATA INTELLILINK Corp. “Hunk accelerates insights from Hadoop with a much faster time-to-value than open source alternatives. Hunk also enables enterprise developers to build big data applications because of the rich developer environment and tooling.”

    Red Hat Storage Team Adds Apache Hadoop Plug-in to Gluster

    Red Hat’s Apache Hadoop plug-in was added to the Gluster Community, the open software-defined storage community. Gluster users can deploy the Apache Hadoop Plug-in from the Gluster Community and run MapReduce jobs on GlusterFS volumes, easily making the data available to other toolkits and programs. Conversely, data stored on general purpose filesystems is now available to Apache Hadoop operations without the need for brute force copying of data to the Hadoop Distributed File System (HDFS).
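
    As a rough, hedged illustration of what running MapReduce against Gluster-resident data could look like once the plug-in is configured – the glusterfs:// scheme, jar path and volume paths below are assumptions for illustration, not taken from the plug-in’s documentation:

        # Hedged sketch: launching a stock MapReduce example against data on a GlusterFS
        # volume, assuming the Gluster Hadoop plug-in is installed and registers a
        # glusterfs:// filesystem scheme. Paths and the jar location are illustrative.
        import subprocess

        cmd = [
            "hadoop", "jar",
            "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
            "wordcount",
            "glusterfs:///analytics/input",    # data already on the Gluster volume
            "glusterfs:///analytics/output",   # results written back to the same volume
        ]
        subprocess.run(cmd, check=True)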

    The Apache Hadoop Plug-in provides a new storage option for enterprise Hadoop deployments and delivers enterprise storage features while maintaining 100 percent Hadoop FileSystem API compatibility. The Apache Hadoop Plug-in delivers significant disaster recovery benefits, industry-leading data availability, and name node high availability with the ability to store data in POSIX compliant, general purpose filesystems.

    To download the Apache Hadoop Plug-in, users can go to https://forge.gluster.org/hadoop/. For the Apache Hadoop Ambari Project, users can visit the Apache Hadoop Community at http://hadoop.apache.org/.

    GlusterFS Now Integrated with Intel Distribution for Apache Hadoop Software

    Red Hat, Inc. also announced that it has contributed software to the Gluster Community that integrates the GlusterFS open software-defined storage filesystem with the Intel Distribution for Apache Hadoop software. The resulting code, the Apache Hadoop Enablement on GlusterFS plugin, delivers a big data analytics solution that easily integrates with existing IT infrastructure. The companies jointly validated the reference architecture integrating GlusterFS with the Intel Distribution.

    Red Hat has contributed code to the Apache Hadoop community supporting the Hadoop Compatible File System standard and the Intel Manager for Apache Hadoop can now configure, monitor, and manage GlusterFS as a Hadoop-compatible File System (HCFS). The Hadoop Enablement solution avoids the cost and complexity associated with creating and managing another data silo for analytics. The integrated solution was built using community-driven innovation to deliver an open and interoperable solution.

    Combining the performance, security, and manageability of the Intel Distribution with the HDFS API compatibility and disaster recovery capabilities of GlusterFS, the integrated solution supports a scalable, cost-effective infrastructure for big data analytics. The Intel Distribution includes security mechanisms such as query authentication, data encryption, role-based access control, and auditing. GlusterFS maintains data locality as the cluster scales, avoids NameNode bottlenecks and the single point of failure in HDFS, and has built-in disaster recovery with its geo replication feature.

    Red Hat sees the evolution of analytics extending beyond Hadoop and traditional business intelligence systems into a comprehensive view of end-to-end big data analytics. Most enterprises try to manage big data from three sources – business-, machine- and human-generated data – through a workflow that includes three types of analytics systems: massively parallel processing, Hadoop clusters, and traditional business transaction processing. This broader view of big data requires a general purpose storage repository, such as GlusterFS, that can store a variety of data in its native format and serve it to a variety of analytics systems through multiple protocols. An end-to-end view of all the enterprise data and all the enterprise analytics systems offers a more comprehensive way to enable deep business insights and help drive operational intelligence.

    3:00p
    Calxeda Updates Its ARM Server Platform, Targeting Private Clouds

    A look at the Calxeda ARM server board being used in HP’s Project Moonshot servers. (Photo: Calxeda)

    ARM-based server provider Calxeda announced its second-generation product line, the EnergyCore ECX-2000 family, targeting the private cloud market. The company’s new Server-on-a-Chip (SoC) enables an integrated fabric of high-density servers and storage, and has been selected by HP for Moonshot solutions, which are tuned for hyperscale applications.

    The ECX-2000 uses standard ARM Cortex-A15 cores running at up to 1.8 GHz, with the integrated Calxeda Fleet Fabric, 10Gb Ethernet and standard I/O controllers. The company has also added a pin-compatible 64-bit SoC to its roadmap.

    Calxeda says the new SoC delivers up to twice the performance, four times the memory capacity, and three times the memory bandwidth of previous ARM-based servers. The company has been sampling the ECX-2000 to partners and data centers, and volume shipments to system vendors are expected before the end of the year.

    DreamHost Among Those Interested

    Cloud provider DreamHost, which hosts over 1.3 million Internet domains, is testing the new setup.

    “We are really impressed with the performance and low-power footprint of the Calxeda-based storage solution,” said Simon Anderson, CEO of DreamHost.  “After extensive testing of the new product, we see great potential for Calxeda in our OpenStack-based public cloud deployments and Ceph-based distributed object storage service.”

    Calxeda will demonstrate the new product with OpenStack and the open source Ceph distributed object storage software at the annual ARM TechCon event at the end of October.

    Calxeda is targeting the private cloud infrastructure market. The company believes its architecture is ideal for I/O intensive applications such as distributed storage, cloud-based gaming, media streaming and throughput-oriented private clouds.

    “The rate and pace of innovation we have seen in the mobile space, enabled by power-efficient ARM-based SoCs, is now taking place in the datacenter,” said Barry Evans, CEO and co-founder of Calxeda. “The Calxeda Fleet Fabric enables our customers to create an extremely efficient computing infrastructure that improves management of large-scale clouds at lower cost, lower power and reduced carbon footprint.  Coupled with the strength of our software and hardware partners, this is an unbeatable combination in an industry that is demanding better alternatives.”

    The ECX-2000 supports hypervisors KVM and Xen, and Canonical is officially certifying it for Ubuntu 13.10, including the latest OpenStack release, Havana.

    “Calxeda and Canonical brought the first server-grade solution based on the ARM Cortex processor family to the market in 2010,” said Mark Shuttleworth, Founder of Ubuntu and VP Products for Canonical. “We are doing it again by delivering Ubuntu’s scale-out workloads on top of ECX-2000, the first ARM-based Server platform to support virtualization and 40-bit memory addressing, which will deliver the efficient OpenStack clouds of the future.”

    Selected By HP For Moonshot Solutions

    “HP is working with Calxeda to deliver new infrastructure economics to our customers and we are excited about their new EnergyCore ECX-2000 processor,” said Paul Santeler, vice president and general manager of HP’s Hyperscale Business Unit. “When delivered in an HP Moonshot solution, we believe it will offer users a new level of performance, requiring less energy and less space than traditional solutions do today.”

    In addition to HP, notable solution partners building servers and storage appliances using the new SoC include Aaeon, Boston Ltd. and Penguin Computing, with systems available beginning later this year.

    Additionally, Frank Frankovsky, chairman and president of the Open Compute Foundation, has joined the Calxeda board, suggesting a larger role for Open Compute in the company’s future. “The Calxeda team has a terrific vision for the role of ARM-based products in the data center, and they have the talent, the technology, and the commitment to openness required to execute on that vision,” said Frankovsky.

    Pin-Compatible 64-bit SoC on Roadmap

    Calxeda announced the addition of a second 64-bit SoC to the company roadmap last year. Code-named “Sarita,” the new 64-bit ARM Cortex-A57 SoC complements the “Lago” platform, but is pin-compatible with the ECX-1000 and new ECX-2000. This unique approach reduces development time and expense for Calxeda partners, accelerating the 64-bit ARM ecosystem, and enabling customers to future-proof designs for three generations of rapid technology innovation.

    Customer Aaeon, a designer and manufacturer of advanced industrial and embedded computing platforms, puts this in perspective. “Calxeda’s 64-bit Sarita SoC gives Aaeon a pin-compatible on-ramp for 64-bit solutions next year. Now, we can engineer one design for a 3-generation lifespan. That’s really an industry first,” said Spark Chen, Aaeon Vice President of the Intelligent Services Division.

    5:00p
    ClearStory Introduces Data Analysis and Visualization Tool, Data Intelligence

    At the O’Reilly Strata Conference and Hadoop World event Tuesday, ClearStory launched Data Intelligence, a new way for anyone and any organization to speed access to internal and external data sources, discover new data and insights, and actively collaborate across diverse data assets to reach fast answers.

    The now-available solution is an integrated application and platform that changes how people across a business consume data from corporate and external sources, accelerating the pace of informed and intelligent decision-making.

    “Companies across CPG, retail, media and entertainment, financial services and almost every industry are looking for better ways to access more data, from more sources, ask questions and reach conclusions faster,” said Sharmila Mulligan, CEO of ClearStory Data. “Data Intelligence is the new way that lets business users and data stewards get to more data, uncover insights fast, actively collaborate and ease the cycle of questions to answers, where meaningful answers may span many internal and external data sources.”

    ClearStory Data is backed by Google Ventures, Kleiner Perkins Caufield & Byers, and Andreessen Horowitz.

    With the power of embedded data intelligence, data from many more sources can be put into the hands of more users for fast answers. ClearStory’s Data Intelligence solution is unique in that it speeds access to private data sources, brings simple point-and-click access to external data, harmonizes disparate data automatically, and introduces a unique data-aware collaboration model so people can actively participate in evolving insights, in context, to make informed decisions.

    “Kantar Media, the global leader in advertising measurement, is pleased to partner with ClearStory Data to bring organizations a new self-service way to access our rich advertising intelligence data,” said Libby MacDonald, Senior Vice President for National Sales at Kantar Media Intelligence. “Insights into advertising trends are critical for companies seeking to optimize their marketing campaigns. ClearStory enables business users to navigate this data easily, and blend it with their private data to quickly get the answers they need to drive their businesses.”

    5:30p
    Holodeck Deals: Conducting M&A In the Virtual World

    Would you complete a business negotiation on the holodeck, like the one used in Star Trek? Intralinks is doing just that with Virtual Data Rooms, and so far, these “virtual worlds” have been the platform for $23.5 trillion worth of deals done in corporate finance and strategic M&A.

    Let’s examine the situation. The investment banking world deals with critical and confidential information, often requiring excessive travel to meet and get deals done in person. To alleviate the need for often-onerous travel and frequent face-to-face business meetings, SunGard Availability Services customer Intralinks created Virtual Data Rooms (VDRs), a SaaS offering that provides highly secure collaboration spaces for dealmakers to share information. Given the sensitive nature of the data, as well as time sensitivities – “an hour lost on a deal can lose a transaction,” says Matt Forzio, VP of Strategy and Product Marketing at Intralinks – “VDRs need to be secure and available 24/7.”

    These deals can make or break careers. “We built our system from the very beginning with two fundamental aspects – it had to be secure, and it had to only let certain people do certain functions,” said Forzio. “The second piece was that it had to always work.” Forzio has a history in the investment banking world. He recalls several instances of flying in for meetings and sitting in closed data rooms until 3 a.m. The people involved in these deals are spread around the globe, and deals often involved several trips. “We didn’t want the buyers to know about one another, to collude,” said Forzio. “Deal making is dynamic – new financial information comes out, and we’d have to invite people to New York. A lot of deals lose steam because of arduous due diligence.”

    Intralinks provides a virtual space to accomplish business negotiations, with the same security and peace of mind as the constant person-to-person meetings that it looks to replace.

    Increased Speed and Data

    “The main driver, the main benefit of the VDR is the efficiency, the time saving,” said Forzio. “These deals average well over 300 million. There’s been deals done in our VDRs from around 5 million and up. VDRs also give the ability to close these deals 30 days faster, and for sellers to get to the buyers they need.” As people grow to accept online spaces for even the most confidential of activities, it turns the market into a truly global one.

    There’s also a business intelligence aspect to VDRs. “In the physical world that I used to be involved in, a bunch of people showed up, but we didn’t really know what level of diligence they did,” said Forzio. “In the virtual world, with the auditability of it, you get a sense of which buyer is doing all of the work. Our clients, bankers, say this gives them a real sense of who’s done the most work. It lets them know who’s reviewed the data – what individuals, what lawyers have come in. Are we going to be able to close the deal?”

    These data rooms are completely user-administered. The company also offers tools for clients to create their own data rooms and create templates, if needed. “We’ve built capability to drag and drop entire folder structures and permissions on a granular level,” said Forzio. “The other thing I’ll mention that is being more broadly adopted in the global data room is questions and answers. It’s more than due diligence – that information generated sparks questions and follow ups. People can right-click and ask questions, pass it on to subject matter experts. It’s all done through the platform.”

    Infrastructure Behind the VDR

    Given the high security and availability needs, the company turned to SunGard Availability Services for its SaaS offering, and this relationship has grown over time. “When we were pioneering Virtual Data Rooms, we were doing maybe 5 deals a month. Now it’s closer to 500,” said Forzio. “We have 2.7 million users on our platform accessing 80,000 workspaces. Really, at this point, it’s about the information that’s shared. Each deal has multiple Gigs of data. We’re talking about 50 terabytes of really critical data. Over time the complexity of that data has increased, the need to make that data faster has increased.”

    “Some of the reasons we partnered with Sungard was that passwords and all that big data was encrypted, and the company’s track record of high uptime,” said Forzio.

    Intralinks vetted several companies before deciding on SunGard AS. It was Sungard AS’ understanding of the financial industry that ultimately gave the company comfort. “It gave credibility on a global basis as well,” said Forzio. “Over time, we’ve been vigilant; SunGard has had that same vigilance.”

    “The scrutiny on M&A transactions has increased from a regulatory standpoint, as well as shareholder, fiduciary responsibility,” said Forzio. “When buying and selling an asset, you leave no stone unturned. There’s hundreds of people on each one of these deals – the system can’t be down for a minute. That’s how you potentially lose a buyer.”

    “Virtual Data Rooms are probably utilized in half of deals globally,” said Forzio. “We’re working on the other half. The adoption phase, and the comfort level we’ve built with people, has grown. I can’t overemphasize the importance of building trust in the market.”

