Data Center Knowledge | News and analysis for the data center industry
 

Thursday, March 7th, 2013

    12:05p
    Pertino Raises $20M to Bring Cloud Networking to SMBs


    Another player looking to redefine networking in the cloud era has raised a nice chunk of change. SDN-powered cloud networking player Pertino has raised $20 million in Series B funding, which it will use to expand its platform and market strategy. The company previously raised an $8.85 million Series A round in April 2012.

    Pertino looks to bring SDN-powered wide-area networks to the masses. Demand for this is growing thanks to increasing globalization, the rise of cloud computing, and an increasingly mobile workforce. The proliferation of mobile usage and remote employees means companies are becoming more dependent on wide-area networks (WANs), and Pertino wants to bring these capabilities to small and medium-sized businesses (SMBs) by cutting out the barriers of cost and complexity. Pertino offers what it calls a “cloud network engine,” which allows the creation of a cloud-based network in minutes with no hardware, expertise, or upfront investment required. In essence, it takes a previously cost-prohibitive capability to the world of SMB, hence bringing it “to the masses.”

    “SMBs comprise almost half of IT spending worldwide, yet a significant number of these organizations struggle with having the resources to deploy new technology and applications,” said Craig Elliott, Pertino co-founder and CEO. “By leveraging the cloud and SDN technology to radically simplify networking, Pertino is poised to unlock a massive opportunity and Jafco’s SMB and business development experience in Asia will help us realize it.”

    Priced for the SMB Market

    The company is targeting the home office and SMB market with its pricing as well. Pertino allows customers to build a free network for up to three members with three devices each, and then pay $10 per member per month as they grow.
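    As a rough illustration of how that pricing might scale, here is a minimal sketch in Python. The free tier and per-member rate come from the announcement; the team sizes are hypothetical, and the sketch assumes every member is billed once a network grows past the free tier, which may not match Pertino's actual billing rules.

        # Sketch of the pricing described above; assumptions noted in the lead-in.
        FREE_MEMBERS = 3
        PRICE_PER_MEMBER = 10  # USD per member per month

        def monthly_cost(members: int) -> int:
            """Estimated monthly cost in USD for a network with `members` people."""
            if members <= FREE_MEMBERS:
                return 0
            # Assumption: all members are billed once the network exceeds the free tier.
            return members * PRICE_PER_MEMBER

        for team in (3, 10, 25):
            print(f"{team} members: ${monthly_cost(team)}/month")
        # 3 members: $0/month, 10 members: $100/month, 25 members: $250/month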

    The Series B round was led by new investor Jafco Ventures with existing investors Norwest Venture Partners and Lightspeed Venture Partners also participating. Jafco’s portfolio consists of several cloud and network security companies including Reputation.com, Huddle, FireEye, and Palo Alto Networks.

    “One of the things that attracted us to Pertino is the fact they have built a cloud-based solution leveraging the most innovative and disruptive technology to hit networking in a decade – SDN, and they’re delivering it in a practical and consumable way to an underserved global market,” said Jeb Miller, general partner at Jafco. “Disruptive technology and a massive global market opportunity, coupled with the pedigree and experience of the executive team makes Pertino an ideal portfolio company for us.”

    The company launched into limited availability last month and says it saw significant demand. The limited launch came after concluding a successful beta program within the Spiceworks IT community that resulted in over 250 customers deploying and testing their own Pertino cloud network. Since then, the number of deployed customers has grown to over 700.

    1:00p
    Public Cloud Security, Readiness and Reliability


    The modern idea of the “cloud” may be new, but much of the underlying technology has been around for a while, since 1997 in fact. As with any technology, the most important aspects of deploying a new solution are an understanding of the platform and, of course, thorough planning.

    Ready for Public Cloud?

    When considering public cloud options, it’s important to understand where there is a direct fit. This means that both key business stakeholders and IT executives need to see the benefits of moving toward a public cloud “Infrastructure as a Service” environment. Although there are many benefits, administrators should weigh several considerations when looking at public cloud options.

    • Public cloud and security. This is a major consideration for any organization. Although a public cloud can certainly be secured, some organizations have specific regulations as to how their data can be delivered over the WAN. Also, securing the server and application environment will differ when these workloads are pushed through a cloud environment. Special planning meetings and considerations have to go into knowing the type of security requirements an environment might have.

    It’s important not to get overwhelmed when discussing cloud security options. Yes, there are new technologies revolving around ensuring cloud security, but it doesn’t have to be daunting. As mentioned earlier, cloud security can be broken down at a high level by examining the following:

    • Security on the LAN: The first step is understanding the security elements of your LAN. Is data being encrypted internally? Are there ACLs on the switches? How are the firewalls and load balancers configured for data leaving the local network?
    • Security at the end-point: How is the end-point accessing the data? Is it through a VPN or another encrypted connection? Is there a secure client involved? Understanding end-point security settings and policies is important to ensuring that the data reaches its destination safely.
    • Security in the middle: When data is transmitted over the WAN, there have to be security controls in place from beginning to end. That means setting up a secure tunnel for the data to travel through, constantly monitoring the links, and proactively maintaining server and LAN security policies.

    Remember one main point as you plan out your environment: cloud security isn’t really one component in itself. Rather, it’s a collection of security best practices applied to the task of transmitting data over the WAN. This is where next-generation security tools can really help. Advanced device interrogation engines as well as intrusion prevention and detection systems (IPS/IDS) can further secure a cloud platform.
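    To make the “security in the middle” point concrete, here is a minimal Python sketch, using the standard ssl module, of opening a connection that refuses to proceed unless the remote certificate validates and the traffic is encrypted in transit. The hostname is a placeholder, not a real endpoint.

        import socket
        import ssl

        # Placeholder endpoint, used for illustration only.
        HOST, PORT = "cloud.example.com", 443

        # The default context verifies the server certificate against the system
        # trust store and checks the hostname -- the "secure tunnel" described above.
        context = ssl.create_default_context()
        context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

        with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
            with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
                print("Negotiated:", tls_sock.version(), tls_sock.cipher()[0])
                # Anything written to tls_sock from here on is encrypted over the WAN.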

    • Environment readiness and reliability. Although public clouds can be easy to adapt to, some environments may not be ready for a cloud initiative. Having the right infrastructure in place to support a cloud move may be required. In these cases, organizations should take the time to evaluate their current position and see whether going to the cloud is the right move.

    Just like any other infrastructure, it’s important to create an environment capable of supporting business continuity needs. This means accepting the fact that the cloud can and will occasionally go down. For example, in one recent major cloud outage, a simple SSL certificate was allowed to expire, creating a global, cascading failure that took down numerous vital public cloud components. Who was the provider? Microsoft Azure.
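    Since that particular outage came down to an expired certificate, proactively checking certificate expiry dates is a good example of the kind of monitoring worth building in. Below is a minimal sketch using Python’s standard library; the hostname is a placeholder.

        import socket
        import ssl
        from datetime import datetime, timezone

        def days_until_cert_expiry(host: str, port: int = 443) -> int:
            """Days until the TLS certificate presented by `host` expires."""
            context = ssl.create_default_context()
            with socket.create_connection((host, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            # 'notAfter' looks like 'Mar  7 23:59:59 2014 GMT'
            expires_ts = ssl.cert_time_to_seconds(cert["notAfter"])
            expires = datetime.fromtimestamp(expires_ts, tz=timezone.utc)
            return (expires - datetime.now(timezone.utc)).days

        # Placeholder hostname; alert well before the certificate actually lapses.
        remaining = days_until_cert_expiry("portal.example.com")
        if remaining < 30:
            print(f"WARNING: certificate expires in {remaining} days")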

    • Deploying the right workload. The larger the workload (Virtual Desktop Infrastructure, or VDI, for example), the longer it will take to deliver. Some core applications require backend database connectivity for which a public cloud model may not be the right fit. Before moving to the cloud, make sure to have a complete understanding of what will be utilized in the public cloud arena; even a rough delivery-time estimate, like the sketch after this list, helps. From there, a good decision can be made as to whether a given application or even virtual node is the right fit for a cloud model.
    • Maintaining control. Just as in a local, non-cloud environment, administrators must retain control of their environment. This is especially important in pay-as-you-go models. With little control or oversight, administrators might provision virtual machines (VMs) and resources when they’re simply not needed, and this is where a public cloud can quickly lose its value. IT organizations must keep a watchful eye on their cloud-based workloads and resources to know what is being used and whether that environment is being utilized efficiently.
    • End-user and administrator training. The success of almost any new deployment hinges on user acceptance. If an organization deploys a new public cloud capable of delivering entire workloads to the end-user, there must be core training associated with it. What good is a robust, highly scalable infrastructure if the end-user is confused or unsure how to use it? Since users are often averse to change, all modifications should be gradual and well documented. Information passed to the user should be easy to understand and simple to follow. With good training and solid support on the backend, administrators can deliver powerful data-on-demand solutions to the end-user.
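    On the workload-sizing point above, even a back-of-the-envelope estimate of delivery time over the WAN can be revealing. The sketch below uses illustrative sizes, link speed and efficiency factor, not figures from this article.

        # Rough estimate of how long a workload takes to deliver over a WAN link.
        # Sizes, link speed, and efficiency factor are illustrative assumptions only.

        def transfer_hours(size_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
            """Estimated transfer time in hours for size_gb of data over a link_mbps link.

            `efficiency` discounts protocol overhead and contention on the link.
            """
            size_megabits = size_gb * 8 * 1000              # GB -> megabits (decimal units)
            return size_megabits / (link_mbps * efficiency) / 3600

        for name, gb in [("Small web app", 5), ("Database backup", 200), ("VDI image set", 1000)]:
            print(f"{name:>15}: {transfer_hours(gb, link_mbps=100):5.1f} h on a 100 Mbps link")
        # roughly 0.1 h, 5.6 h and 27.8 h respectively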

    Cloud computing is here to stay, and there are many benefits to such a powerful wide-area-network-based platform. Whether administrators need to provision a new workload or test out an application, a public cloud solution can help an organization stay innovative. Remember, as with any new environment, it’s important to plan out the infrastructure and identify the need behind the deployment. When it comes to a public cloud, administrators should evaluate their needs and see how this type of cloud platform can directly benefit them.

    The goal of many recent cloud articles is to debunk the myth that cloud computing is an insecure, Wild West environment. Unlike the dot-com bust or other failed technologies, our generation is evolving into a data-on-demand environment where cloud computing acts as the delivery mechanism for vast amounts of information. So while you may not be ready to embrace the technology, it’s important to start understanding it and learn the facts, not the hype.

    1:30p
    HIPAA and PCI Compliance Are Not Interchangeable

    Mike Klein is president and COO of Online Tech, which provides colocation, managed servers and private cloud services. He follows the health care IT industry closely and you can find more resources at www.onlinetech.com/compliant-hosting/overview.


    When thinking about compliance, many companies assume PCI DSS is interchangeable with HIPAA, or at least that the gap between the two is small. This ignores the fact that HIPAA and PCI DSS compliance protect different types of information, with different audit guidelines, safeguard requirements, and consequences for non-compliance or breaches.

    Origins and Audits

    HIPAA compliance is monitored by the Department of Health and Human Services (HHS), and audits are based on protocols from the Office for Civil Rights (OCR) that are continuously updated and enforced. These are governmental entities, not private companies. KPMG was selected as HHS’ auditor of choice, so investigations of compliance with the Security and Privacy Rules carry the fully informed and funded weight of a well-respected auditing firm.

    Conversely, PCI compliance is defined by the PCI SSC (Payment Card Industry Security Standards Council). This council is a collaboration including Visa, Mastercard, American Express, Discover, and JCB (Japan Credit Bureau), with these companies having a vested interest in keeping consumer data safe.

    Consequences of Non-Compliance

    The cost of a breach is very different between HIPAA and PCI compliance as well. HIPAA is a US federal law. There are criminal and civil penalties associated with a breach, as well as fines. This means that in addition to stiff financial consequences, willfully negligent stakeholders can go to jail for non-compliance. If a breach occurs, healthcare providers are required to post public press releases in traditional media outlets to inform patients of the potential threat to their information. This damage to the image and credibility of an institution can have long lasting impacts.

    With PCI compliance, there are contractually agreed upon fines, but no criminal charges. You aren’t going to see anyone going to jail for not being PCI compliant. This isn’t to say that PCI costs aren’t serious. A PCI breach could cost anywhere from thousands to millions in fines to the credit card companies, and could result in the loss of card processing privileges, which severely impacts business cashflow. Of course, there is also always a threat to a company’s reputation that might discourage current or future buyers.

    Requirements

    When you peel back the curtain on HIPAA and PCI requirements, they look very different. HIPAA is very focused on policies, training, and processes. It’s more subjective and broad in application, caring about how a company handles breach notification, whether an organization insists on BAAs (Business Associate Agreements) with its vendors, or whether the cloud provider associated with a company has conducted a thorough risk assessment against all administrative, physical, and technical safeguards. To this last point, the final HIPAA Privacy and Security Rules published by HHS recently clarified that data center and cloud providers are, in fact, considered Business Associates that must be HIPAA compliant if there is Protected Health Information (PHI) in their data centers or on their servers. HIPAA doesn’t precisely describe technical specifications or methods to achieve compliance. Instead, each Covered Entity and Business Associate is expected to complete a risk assessment and management plan addressing each of the HIPAA safeguards.

    The Business Associate Agreement is unique to HIPAA, and extends the ‘chain-of-trust’ and liabilities for protecting PHI from the Covered Entities (healthcare providers), throughout its network of supporting vendors. Any company that stores, processes, or accesses patient health information is automatically considered a Business Associate. As such, they will be held to the full legal liability to keep PHI safe. Turning a blind eye only makes the penalties steeper.

    PCI DSS requirements are much more prescriptive by comparison. The technical requirements are more detailed, explicitly outlining the necessity for processes like daily log review and encryption across open, public networks, while processes around training and policies are not as prevalent. PCI DSS does not have an equivalent of the Business Associate Agreement required between a company that needs to be PCI DSS compliant and its vendors.

    Do HIPAA and PCI Compliance Overlap?

    Well, yes and no. The technical PCI requirements can set up a nice framework that could serve as a prescriptive guide for some of HIPAA’s technical safeguard requirements. However, the foundation of HIPAA compliance is a documented risk assessment and management plan against the entire Security Rule, and PCI does not share this cornerstone as the basis for meeting compliance.

    The bottom line is that passing a PCI audit does not mean you’re HIPAA compliant, or that KPMG is going to care about PCI when it comes to evaluating due diligence toward HIPAA compliance.

    The reverse is also true: passing an independent audit against the HIPAA Security and Privacy Rules does not imply PCI compliance either. Even where they overlap, they’re still separate and should be treated as such. The best course when looking at hosting providers is to request an audit report, read the details, and confirm that HIPAA compliance is based on the OCR Audit Protocols and PCI compliance is based on the PCI DSS. This ensures that the business not only understands the difference between the two compliance regimes (if both are necessary), but that the company has truly been diligent about keeping your data safe. After all, compliance is not a checkmark, it’s a culture.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:15p
    LSI, NetApp Collaborate on Server-based Flash


    LSI has announced its Nytro WarpDrive family of application acceleration PCIe flash cards, which are designed to address application performance by converting server-based flash into hot data cache for critical applications. The cards are among the first PCIe flash devices to be fully tested and qualified with NetApp’s intelligent server caching software.

    “Flash memory adoption in the enterprise is a powerful complement to hard-disk-based network storage,” said Tim Russell, vice president, Data Lifecycle Ecosystem Group, NetApp. “Deploying flash as a high-speed cache in the server is a simple and cost-effective way to significantly reduce latency and I/O bottlenecks, while providing enterprise-level data protection and manageability for the entire infrastructure. Working with our server cache partners, we’re able to offer customers a complete end-to-end, high-speed solution.”

    LSI Nytro WarpDrive cards deployed in conjunction with Flash Accel software deliver automated, intelligent caching of hot data to PCIe flash storage, and an optimized cost per IOPS and cost per gigabyte across flash and hard drives. Combined, they form an easy-to-use, fully tested, end-to-end solution. Test results have shown a reduction in application and server latency of up to 90 percent while increasing throughput by up to 80 percent. Storage efficiency is also improved by minimizing the number of input/output operations between servers and back-end storage systems, which frees up shared storage resources to handle additional workloads. In addition, the Nytro WarpDrive card’s advanced “off-loaded” multiprocessor architecture uses as little as one quarter of the CPU and memory resources of competing solutions.
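    To put those percentage claims in perspective, here is a quick calculation with hypothetical baseline numbers; the baselines are illustrative assumptions, not figures from LSI or NetApp.

        # Hypothetical baselines, used only to illustrate the vendor's percentage claims.
        baseline_latency_ms = 5.0      # assumed average application latency before caching
        baseline_iops = 50_000         # assumed throughput before caching

        best_case_latency = baseline_latency_ms * (1 - 0.90)   # "up to 90 percent" reduction
        best_case_iops = baseline_iops * (1 + 0.80)            # "up to 80 percent" increase

        print(f"Latency:    {baseline_latency_ms:.1f} ms -> {best_case_latency:.1f} ms")
        print(f"Throughput: {baseline_iops:,} IOPS -> {int(best_case_iops):,} IOPS")
        # Latency:    5.0 ms -> 0.5 ms
        # Throughput: 50,000 IOPS -> 90,000 IOPS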

    “LSI Nytro WarpDrive cards help datacenter managers contend with massive data growth by increasing the speed and responsiveness of critical applications,” said Gary Smerdon, senior vice president and general manager, Accelerated Solutions Division, LSI. “The combination of Nytro WarpDrive cards and Flash Accel software allows for an optimized use of flash while extending its significant performance and TCO benefits to any server connected to NetApp storage.”

    Nytro WarpDrive cards range in capacity from 200GB to 1.6TB of MLC or SLC flash memory and are designed for simple, plug-and-play integration into today’s low-profile, high-performance system chassis.

    2:30p
    Telx Gets New Tenant for NJ Data Center Campus

    The raised-floor area at the Telx data center in Clifton, NJ.

    Colocation provider Telx said this week that DBR360’s public and private cloud solutions will be available at the NJR2 and NJR3 data center facilities located at the Telx campus in Clifton, N.J. DBR360 will offer its Infrastructure as a Service (IaaS) there, leveraging Telx’s high density of networks and interconnection services for both new and existing clients.

    The Telx Clifton data center campus encompasses the NJR2 data center facility at 100 Delawanna Avenue and the soon-to-be-completed NJR3 flagship facility. The Telx Clifton campus is a managed, carrier-neutral, secure environment for enterprises, digital media and bio-pharma companies, financial services firms, network providers and carriers. The expanded campus provides Telx and DBR360 customers with access to an end-to-end cloud and network infrastructure service for IT organizations that require enterprise cloud performance.

    “Uniting a robust network infrastructure into the cloud served is a natural fit for Telx’s high density network interconnection services,” said Anthony Lobretto, vice president of engineering and technology solutions, DBR360. “Most cloud providers offer network connectivity limited to access over the internet. Many of our customers require private networks that provide greater levels of performance, security and integration with their existing corporate networks. Oftentimes the network aspect of cloud computing is overlooked.”

    “Partnering with DBR360 on this endeavor further illustrates Telx’s role as a cloud enabler. The breadth of network and cloud service providers within our facilities is a distinguishing advantage,” said John Freimuth, Telx’s General Manager of Cloud and Enterprise Solutions. “Some clients have preexisting relationships with carriers, and the flexibility of our network combined with the customized solutions offered by DBR360, allows us to mirror our customers network architecture to their specific needs within our facility.”

    DBR360 is a managed service provider that integrates advanced networking technologies and virtualization infrastructure into private and public cloud solutions. DBR360 is headquartered in Fairfax, VA.

    3:30p
    Why Anti-DDoS Services Matter in Today’s Business Environment

    Although the Internet has been around for a while, the boom in cloud computing has increased the utilization of WAN services. Any organization now using the cloud or some type of Internet-based service must be aware of the security risks that come with the platform. The evolution of the modern data center, and the use of cloud computing, has created more targets for attackers to go after. The widespread availability of inexpensive attack tools enables anyone to carry out distributed denial of service (DDoS) attacks. This has profound implications for the threat landscape, risk profile, network architecture and security deployments of Internet operators and Internet-connected enterprises.

    With the direct increase in cloud services, organizations are utilizing more Internet services and greater amounts of bandwidth. Because of this, attackers are increasing the size and number of their attacks on targeted organizations. A recent survey conducted by Arbor Networks shows that the size of volumetric DDoS attacks has steadily grown. The truly troubling piece, however, was the report in 2010 of a 100 Gbps attack. To put that in perspective, that is more than double the size of the largest attack reported in 2009. This staggering figure illustrates the resources attackers are capable of bringing to bear against a network or service.


    Image source: Arbor Networks — Worldwide Infrastructure Security Report, Volume VI
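    To give a sense of scale, here is a crude back-of-the-envelope estimate of what it takes to generate a 100 Gbps flood. The per-host upload rate is an illustrative assumption, not a figure from the Arbor report.

        # Back-of-the-envelope estimate of botnet size for a volumetric flood.
        attack_gbps = 100        # the 2010 attack size cited above
        per_host_mbps = 2        # assumed sustained upload per compromised host

        hosts_needed = (attack_gbps * 1000) / per_host_mbps
        print(f"~{hosts_needed:,.0f} hosts at {per_host_mbps} Mbps each to sustain {attack_gbps} Gbps")
        # ~50,000 hosts at 2 Mbps each to sustain 100 Gbps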

    Although these attacks have become simpler to launch, they have certainly evolved in complexity. The methods hackers use to carry out DDoS attacks have evolved from traditional high-bandwidth volumetric attacks to more stealthy application-layer attacks, with a combination of both being used in some cases.

    In dealing with DDoS-type attacks, administrators must understand the depth of the problem. Volumetric attacks are also getting larger, with a larger base of either malware-infected machines or volunteered hosts being used to launch them. Well-known groups, such as Anonymous, have brought a new motive for DDoS attacks into scope as well: hacktivism. As these attacks become more prevalent, IT administrators need good visibility into the complex threat environment and a full-spectrum solution. Download this white paper to see how DDoS can affect a business and why a solid security infrastructure matters. In this paper, Frost & Sullivan outline the various points in creating an all-encompassing security solution. Key points include:

    • Integrity and Confidentiality vs. Availability
    • Protect Your Business from the DDoS Threat
    • Cloud-Based DDoS Protection
    • Perimeter-Based DDoS Protection
    • Out-of-the-Box Protection
    • Advanced DDoS Blocking
    • Botnet Threat Mitigation
    • Cloud Signaling

    The increase in cloud computing will result in more DDoS attacks on organizations. With more targets being presented, attackers may have any number of reasons to target an IT environment. This white paper outlines the key points in understanding DDoS attacks and how to strategically protect your environment. By creating a solid security solution, administrators can secure their infrastructure both at the perimeter and at the cloud level.

    4:29p
    Network News: Mellanox Launches Open Ethernet Initiative

    Here’s a roundup of some of this week’s headlines from the network industry:

    Mellanox launches Open Ethernet initiative. Mellanox Technologies (MLNX) announced the “Generation of Open Ethernet” initiative, an alternative approach to traditional closed-code Ethernet switches. Mellanox, a high-speed networking specialist, says the program provides customers with full flexibility and freedom to custom-design their data center in order to optimize utilization, efficiency and overall return on investment. In line with open source networking and SDN trends, Mellanox Open Ethernet is a framework to eliminate proprietary software and encourage the development of an ecosystem focused on building Ethernet switch software to move innovation forward. It is supported on Mellanox’s 10/20/40/56GbE switches, with forward compatibility to future Mellanox Ethernet solutions. “The current landscape of proprietary Ethernet switches limits the foundation of compute and storage clouds and Web 2.0 infrastructures. We are excited to facilitate change and to lead the new generation of Open Ethernet that will enable a more open and collaborative world,” said Eyal Waldman, president, CEO and chairman of the board of Mellanox Technologies. “Mellanox Open Ethernet allows users to gain control of their network and data center, and to achieve higher utilization, efficiency and return on investment, and will enable our customers to add differentiation and competitive advantages in their networking infrastructure. We have been seeing wide and strong support for this initiative from our partners and users, and expect to see a growing community around our initiative.”

    ADARA introduces Meta Controller. ADARA Networks announced the introduction of the Ecliptic Meta Controller, a layer 1-7 controller designed to address existing gaps within the Software Defined Networking (SDN) space. The controller enables the implementation of SDN for both service providers and enterprises of all sizes. It can implement and manage supervisory programs such as SDN controllers, cloud software, hypervisors and network hypervisors, alleviating the need for multiple management systems and reducing communication delays. The Ecliptic Meta Controller is available as software only for a fully virtualized solution, or as software on either purpose-built or third-party COTS appliances. “As bandwidth demand from the network continues to increase, there is a pent-up need to strengthen network communication and create a smarter network,” said Eric Johnson, Chairman and CEO of ADARA Networks. “While SDN continues to gain steam, there are major gaps that exist in all common SDN approaches, architectures, products and capabilities. Through our work with our partners, ADARA has designed and engineered the Ecliptic Meta Controller to help address these gaps, which include the lack of a coordinated network and cloud computing orchestration and the absence of a responsive and robust operation in a production environment, among others.”

    Cisco selected by St. Andrews Hospital.  Cisco (CSCO) announced that St. Andrews Hospital in Australia has selected Cisco Medical-grade Network (MGN) to develop the foundation for its digital future. The Cisco MGN is designed to support any application and any device. It enables an agile, flexible and dynamic networking environment.  The hospital has also implemented Cisco IP voice to facilitate communication and collaboration between surgeons, healthcare workers and patients. With its partners KPMG and Data Mobility Voice, St. Andrews Hospital deployed the network including Cisco Catalyst 3750-X Series Switches in the core, distribution and edge to support IP telephony and digital theatre environments. “With rapidly changing trends in point-of-care health services, real-time consultations with specialists and e-health records, we need to ensure that our intelligent network platform can support these new technologies.  It had to be highly secure, resilient, robust, with high redundancy capabilities and able to meet the Hospital’s future business requirements,” said Peter Cooper, Director: Engineering & Support Services, St. Andrews Hospital.

    7:49p
    Houston is Hottest Hosting Hub, Pingdom Says


    Houston, Texas is the favorite hosting location for the world’s most popular web sites, according to Pingdom, which has mapped the hosting universe using the top 1 million sites. The Pingdom survey found Houston was the clear winner, hosting 50,598 of those top million sites, followed by Mountain View, Calif. (29,594 sites), Dallas (24,822) and Scottsdale, Arizona (23,210).
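    Expressed as shares of the million sites surveyed, those counts work out roughly as follows (a quick calculation using the figures above):

        # Shares of Pingdom's top-1-million sample, using the city counts cited above.
        TOTAL_SITES = 1_000_000
        counts = {"Houston": 50_598, "Mountain View": 29_594, "Dallas": 24_822, "Scottsdale": 23_210}
        for city, n in counts.items():
            print(f"{city:>14}: {n:6,} sites ({n / TOTAL_SITES:.1%})")
        # Houston ~5.1%, Mountain View ~3.0%, Dallas ~2.5%, Scottsdale ~2.3%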

    Why are these locations ranked so highly? Not surprisingly, the top cities track with the locations of major hosting companies.

    An example: Houston is home to multiple data centers operated by SoftLayer Technologies, which houses more than 100,000 servers across its infrastructure (see Who’s Got the Most Servers? for more). Because SoftLayer is a “host of hosts,” popular sites hosted by large providers like HostGator and Site5 also map to Houston. Hosting isn’t the only game in town, as dozens of the world’s largest energy companies also host their infrastructure in the Houston area, much of it at CyrusOne.

    Mountain View is home to Google, which operates many of the world’s most widely used web services, including hosting offerings like Blogger and Google App Engine. The numbers for Mountain View may also reflect some of the many sites hosted in data centers in adjacent Silicon Valley hosting hubs, like Sunnyvale and Santa Clara.

    Dallas is home to more than 50 data centers for many of the world’s leading hosts, including Rackspace, SoftLayer, CI Host, Colo4Dallas, Savvis and Terremark/Verizon, as well as major hosting buildings such as the INFOMART and 2323 Bryan (the Univision building).

    Scottsdale is the location of domain registrar GoDaddy, which is also one of the world’s largest shared hosting providers (for a closer look at the company’s operations, see Inside Go Daddy’s Phoenix Data Center). Scottsdale is also home to a major multi-tenant data center for IO.

    While most of the top 20 cities in the Pingdom survey are familiar to followers of the data center industry, there are some that aren’t immediately obvious. Brea? That would be Brea, California, which is where DreamHost houses some of its operations.

    For more, including a map and photo, see the post over at Pingdom. For additional coverage, see Houston Hosts More than 50K of the Top Million Websites at The WHIR.

