Data Center Knowledge | News and analysis for the data center industry

Wednesday, October 22nd, 2014

    12:30p
    Crosby: Weak Commissioning Poses Risks to Reliability, Safety

    ORLANDO, Fla. - The data center industry isn’t serious enough about testing the performance and safety of new facilities, according to Chris Crosby, who says the shortfall will lead to problems unless it is addressed.

    Crosby, the CEO of Compass Datacenters, is an ardent advocate of the importance of commissioning, the thorough testing of mission-critical systems in new data centers. In a presentation Tuesday at Data Center World, Crosby called on service providers to commit to Level 5 commissioning of new data centers, which involves integrated testing of all mission-critical systems.

    “It’s been shocking to me how much our industry has been shirking on commissioning,” said Crosby. “It has real risks for the operation of the building, as well as safety.”

    Crosby was also critical of the practice of phased expansions of data center electrical infrastructure, saying this approach creates additional risk by introducing scenarios that cannot be tested during commissioning.

    “Some companies are saying ‘we’ve pre-designed the data center, and can add UPS capacity later,’” said Crosby. “If you haven’t tested it at full load, you’re kidding yourself.”

    The value of commissioning

    Commissioning is a quality assurance process in which a facility is thoroughly tested, preferably by an independent specialist. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) defines commissioning as “verifying and documenting that the facility and all of its systems and assemblies are planned, designed, installed, tested, operated and maintained to meet the needs of the owner.”

    In his presentation at Data Center World, Crosby outlined the various levels of commissioning a data center.

    Slide: the levels of data center commissioning, from Crosby’s Data Center World presentation.

    “Commissioning is the only time you get to test the equipment as it’s supposed to work,” said Crosby. “As the CEO, that’s the one thing that allows me to sleep at night. I know the site performs. Data centers very rarely get you kudos, but they can get you canned.”

    Crosby said Level 5 commissioning is crucial in affirming that the data center will operate reliably and safely, but many providers forego this level of testing. “Level 4 is where a lot of people are stopping nowadays,” he said.

    Before founding Compass, Crosby was an executive for many years at Digital Realty Trust when it pioneered the wholesale data center model. He has become a strong advocate of standards, including Tier certification that covers data center construction as well as design. That emphasis on independent review of data center quality extends to commissioning.

    Safety and capacity expansion

    Crosby says the commissioning issue goes deeper than performance and reliability. He says it’s a safety issue.

    “The concept of modular expansion has created an inordinate amount of risk,” he said. “Don’t fall into the phased-build model of installing extra capacity on live stuff. If you don’t test and then add something later, you increase the risk of an arc flash exponentially.”

    An arc flash is an electrical explosion that generates intense heat that can reach 35,000°F, hot enough to damage and even melt electrical equipment. Arc flash incidents also represent a significant threat to worker safety.

    A commissioning specialist who wished to remain anonymous said such safety concerns can be addressed by including commissioning agents in the design process, a practice that’s common in data center projects for large enterprises, but less so for service providers.

    Chris Crosby, CEO and co-founder, Compass Datacenters.

    Reducing arc flash hazards has been a growing priority for data center power vendors and the National Fire Protection Association (NFPA), which in 2012 introduced new regulations designed to limit the scenarios in which technicians are working with energized equipment.

    According to Crosby, data center designs incorporating phased expansion — often through the addition of modular UPS infrastructure — can create risky scenarios in which capacity is added to operational facilities. He said this is a particular challenge with larger facilities with a unified power infrastructure.

    “I firmly believe that we are going to have an issue as an industry with arc flash,” said Crosby. “We don’t do a good enough job explaining the risks of live work.”

    Design options

    One way to manage these risks would be to lease equipment for the commissioning process, so that a provider could test a facility at full projected load without laying out the capital to purchase UPS units and generators.

    “I get that these are hard decisions,” said Crosby. “But this is dangerous stuff.”

    Audience members noted that this risk can also be addressed through designs that break larger facilities into multiple chunks with independent power systems. Crosby acknowledged this point, and said more data centers need to consider this in the design process.

    “You’ve got to build out separate infrastructure systems, from the transformer all the way through,” he said.

    This is the approach Compass has taken with its data center design, which features facilities sized at 1 to 1.5 megawatts, with expansion available through additional buildings.

    3:00p
    Wag The Dog: Aligning Business and Data Center

    Dear C-Suite,

    We need to talk. We don’t know how much our data center is worth to the organization and cannot calculate return on investment. We not only have to align facilities and IT, but align the data center with business objectives. Communication is key, yet we speak different languages.

    Thanks,

    Your data center

    Tuesday, day 2 of Data Center World in Orlando, saw a heated panel discussion on the role a data center plays in the overall business. “They’re the modern equivalent of railroad infrastructure,” said Compass Datacenters CEO Chris Crosby. “Data centers are the layer necessary for business today. It has moved from the basement to the boardroom.”

    “There is a deep connection,” said Dennis Wenk, principal analyst at Symantec. “It is the foundation of the business service arm. If anything is interrupted, losses occur.”

    C-suite doesn’t speak data center

    In most instances, however, communication between the data center and the C-suite is inadequate, so business and data center strategies are not aligned. Panelists agreed that CIOs and CFOs don’t necessarily understand the data center, and vice versa.

    “Just because you think you have a data center strategy in place doesn’t mean they do,” said Jake Sherrill, founder and CEO at Tier4Advisors. “The truth is the majority of strategies are poor. A pitfall we see is that the business expects IT to have the right strategy but doesn’t give IT visibility. IT is expected to reinvent the wheel on penny and nickel budgets. The biggest pitfall is poor communication.”

    “Financial communication especially,” added Crosby. “Not many know the preferred method of communication with the C-suite.”

    It is important to understand how the CIO in particular prefers to communicate. Do they want to see the data center framed in terms of capital payback? Return on investment? Net present value? Can an asset be written off? How about a tech refresh? There are many different variables.
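
    As a rough, hypothetical illustration of how the same facility investment looks through those lenses, the sketch below computes payback period, return on investment and net present value from invented figures; none of the numbers come from the panel.

```python
# Illustrative only: hypothetical figures for framing a data center investment
# as payback period, ROI, and net present value (NPV). Not from the panel.

capex = 10_000_000               # up-front build cost ($), hypothetical
annual_net_benefit = 2_500_000   # yearly savings/revenue attributable to the facility ($)
years = 10                       # planning horizon
discount_rate = 0.08             # assumed cost of capital

payback_years = capex / annual_net_benefit

total_benefit = annual_net_benefit * years
roi = (total_benefit - capex) / capex

npv = -capex + sum(
    annual_net_benefit / (1 + discount_rate) ** t for t in range(1, years + 1)
)

print(f"Payback: {payback_years:.1f} years")
print(f"ROI over {years} years: {roi:.0%}")
print(f"NPV at {discount_rate:.0%}: ${npv:,.0f}")
```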

    “The tail doesn’t wag the dog,” said Derek Odegard, president and founder of CentricsIT. “It’s about the business driving the data center, not the data center driving the business. The chief financial officer does not understand the data center. It is not his or her job. The CFO says it’s the CIO’s job.”

    Long-term data center planning futile

    The data center is left to gauge the direction of the business in order to make its decisions (Will the company acquire another company? Will it move into new services?). The direction a business will take is often unpredictable, while technology changes at lightning speed. The panel agreed that 15-year data center strategies are impossible in this climate.

    “Capacity in small chunks is better,” Crosby said. “People talk of 15-year strategies, when we can’t see 6 months ahead.”

    Financial concerns slow tech refreshes down

    Capacity planning gets more complicated when tech refreshes are taken into consideration, themselves expensive and complicated projects. “Refresh schedules are rusted,” said Wenk. “We’re growing too quickly with no headroom.”

    “People try to refresh every 3.5 years, but they always fall behind and employ an ‘if it ain’t broke, don’t fix it’ strategy,” said Odegard.

    This can be risky, because downtime is costly given the integral role of the data center. Squeezing the maximum return on investment out of aging hardware is a gamble against the point at which it finally fails.

    But the technology moves so quickly, it’s often impossible to demonstrate that an asset has been fully depreciated, which makes it difficult to convince the C-suite to sign off on a refresh.
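
    A simple, invented depreciation example shows why: a server on a five-year straight-line schedule still carries book value when a 3.5-year refresh comes due, and that residual value has to be written off before the refresh can be approved.

```python
# Hypothetical arithmetic: why an asset may not be fully depreciated when a
# refresh is due. Straight-line depreciation over a 5-year schedule, with a
# refresh attempted at 3.5 years (figures invented for illustration).

purchase_price = 8_000      # per server, $
salvage_value = 0
schedule_years = 5
refresh_at_years = 3.5

annual_depreciation = (purchase_price - salvage_value) / schedule_years
book_value_at_refresh = purchase_price - annual_depreciation * refresh_at_years

print(f"Annual depreciation: ${annual_depreciation:,.0f}")
print(f"Remaining book value at {refresh_at_years} years: ${book_value_at_refresh:,.0f}")
# -> $2,400 still on the books, which must be written off to approve the refresh.
```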

    We’re in an age of just-in-time capacity, hybrid infrastructure and cloud. However, this has added complexity into the mix. It’s no longer about build vs. colo, but a discussion of aligning applications with infrastructure and aligning infrastructure with the business. The data center is the keystone of the service-enabled business. It is no longer in the basement, but now that everyone is in the same room, it’s time to have a discussion in a language everyone can speak.

    3:43p
    Data Integration for the Mobile Age

    Frank Huerta is CEO and co-founder of TransLattice, where he is responsible for the vision and strategic direction of the company.

    The age of mobile and its cousin, the Internet of Things, have multiplied data beyond all expectation. Trying to integrate all of this data has become significantly more difficult, particularly when you consider the fact that data centers are now often distributed around the world. There is a certain group of businesses for which this is especially challenging: companies with large subscription lists that must manage sensitive data. Examples include online gaming platforms, healthcare organizations, banks and insurance companies.

    Certain kinds of data, such as Personally Identifiable Information (PII) that includes critical details like Social Security numbers, credit card information, dates of birth, names and addresses, are part of subscription management and can be especially challenging to handle. Failure to handle such data well can lead to lost revenue, increased business risk and angry customers, all of which are bad for business.

    The modern storage dilemma

    Until recently, before the big data deluge hit, a single data storage location was sufficient. Data was easier to access and control. Today, many large enterprises have a global component to daily business transactions, with customers, partners and employees located around the world. Given the distributed nature of an organization’s users and increasing data location regulations, the traditional method of storing data on a central server to support worldwide stakeholders no longer meets business needs.

    Changing business needs have led to a data storage flat earth – companies storing their data in locations around the world. This, in turn, has led many foreign governments to become increasingly strict about data privacy and security for data originating in-country. While regulations vary by country, there are growing requirements for PII data to remain in the country of origin. This means that policies must be created and maintained to ensure that data is stored in compliance with these regulations, which might be easier said than done when a company operates across continents.

    There are, then, two options for data storage, and neither is ideal. Companies may choose to store data where it is most convenient for them, risking non-compliance with data location regulations, or they can establish separate data stores by region, per regulations.

    Each of these has its challenges:

    • Storing data in a manner that is not compliant with local regulations, though more convenient, carries serious legal and regulatory implications.
    • Storing data in distinct locations in a way that maintains compliance with data location regulations keeps organizations safe but forces them to continually consolidate and synchronize their data. Depending on the needs of the organization, this can happen several times a day, once a day, weekly or even monthly. Regardless of the frequency of the consolidation, immediate access to data in real time is not possible.

    Integrated policy management to the rescue

    Thankfully, technological advances have come to the rescue regarding data management.

    New approaches allow an organization to keep its existing infrastructure while enabling automated data location compliance. One approach is an integrated, policy-driven data management system that eliminates the challenges described above by automatically synchronizing data in real time, providing a 360-degree view of the data at all times.

    Deploying a data integration solution that preserves existing infrastructure while addressing data location compliance can significantly reduce costs and administrative time. This new approach takes advantage of a “scale-out” architecture where capabilities are extended by simply adding identical data management “nodes.” This enables easy scaling either within a data center or to multiple locations around the globe. Integrated policy management virtually eliminates the manual labor usually involved with scaling such a system and delivers a more streamlined, automated process.

    A global storage solution

    By adding data management nodes as needed, and where needed, to infrastructure that already exists, companies can run a data integration solution alongside their current data stores. As transactions are completed, most of the data is stored as usual, with region-specific data stored only on the node in that region. For example, if a company chooses to expand operations into a nation that requires all PII data be maintained in-country, it can place a node in that country and the PII data will be stored only on that node, rather than deploying a separate instance of the company’s existing database. The nodes form a geographically distributed fabric that provides data visibility in real time.

    Nodes can run alongside existing database systems and may also be deployed in remote locations to enable PII data to remain in the country of origin.
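
    As a rough sketch of the placement logic described above (the node names, policy map and route_write helper are invented for illustration and are not TransLattice’s product or API), region-bound PII can be routed to an in-region node while everything else goes to a default store:

```python
# Minimal sketch of policy-driven data placement, assuming a hypothetical
# mapping of regions to storage nodes; this is not TransLattice's API.

PLACEMENT_POLICY = {
    "pii": "in_region_only",   # PII must stay in the customer's country/region
    "default": "any_node",
}

NODES = {
    "de": "node-frankfurt",
    "us": "node-virginia",
    "sg": "node-singapore",
}
DEFAULT_NODE = "node-virginia"

def route_write(record: dict) -> str:
    """Return the node a record should be written to, per placement policy."""
    data_class = "pii" if record.get("contains_pii") else "default"
    if PLACEMENT_POLICY[data_class] == "in_region_only":
        # Reject the write if no in-region node exists.
        node = NODES.get(record["region"])
        if node is None:
            raise ValueError(f"No in-region node for {record['region']!r}")
        return node
    return DEFAULT_NODE

# Example: a German subscriber's PII lands on the Frankfurt node only.
print(route_write({"region": "de", "contains_pii": True}))   # node-frankfurt
print(route_write({"region": "de", "contains_pii": False}))  # node-virginia
```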

    Data storage is more complex than it’s ever been. Not only is there exponentially more data to be stored than at any time in human history, but regulations regarding the storing of that data have become much more complex. Companies are operating at a global level, and their customers expect instant access to data at all times. Rather than having to choose between providing great service and remaining compliant, organizations can now employ policy-driven data management solutions. A policy-driven approach offers the best of both worlds: improved response times for remote users and real-time data visibility across regions. Better yet, this approach yields hefty time and cost savings as well. Businesses that cross borders now have one less thing to worry about.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:44p
    IIX Buys Allegro to Automate its Global Network Peering Cloud

    IIX has made its first acquisition following a $10.4 million Series A this summer. The company has acquired Allegro Networks in the UK for an undisclosed amount. IIX said it was attracted to Allegro’s connectivity automation capabilities and talent. It also gains more than 30 Allegro network Points of Presence (PoPs) in the UK.

    IIX wants to make international network peering simple through distributed and interconnected Internet Exchange Points (IXPs) and believes it has the platform to make this a reality. The company describes its PeeringCloud as a global Internet exchange platform that enables customers to connect to IXPs via a single interconnection from anywhere in the world. IIX is making a case for distributed and interconnected IXPs and moving away from the interconnection “islands” that currently exist, where certain data center providers control the major interconnection points.

    Members of an industry group called Open-IX are promoting another model of Internet exchanges that extend beyond a single data center. AMS-IX, operator of the Amsterdam Internet Exchange, which expanded into the U.S. market this year, is an Open-IX member, but its model limits each distributed exchange to a single metropolitan area. This is different from the global exchange model IIX has devised.

    Allegro is a UK-based software and interconnection company that IIX believes will further its mission of providing global network interconnection services to enterprises that are easy to set up.

    Taking manual labor out of network configuration

    IIX and Allegro are aligned in their desire to eliminate the manual labor of setting up a range of connection types by distributing the platform and offering it through software instead.

    Enterprises continue to adopt hybrid strategies employing a variety of architectures dictated by specific applications’ needs. This requires being able to configure infrastructure dynamically, which is only possible through software.
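
    As a purely hypothetical illustration of what software-driven interconnection looks like in practice (the endpoint, fields and request_cross_connect helper below are invented and are not IIX’s or Allegro’s actual API), a virtual cross-connect might be ordered with a single API call rather than a manual provisioning ticket:

```python
# Hypothetical illustration of software-driven interconnection: requesting a
# virtual cross-connect through a generic REST-style API. The endpoint and
# fields are invented and are not IIX's or Allegro's actual API.
import json
import urllib.request

def request_cross_connect(api_base: str, token: str, a_end: str, z_end: str, mbps: int) -> dict:
    """Submit a provisioning request for a point-to-point virtual circuit."""
    payload = json.dumps({
        "a_end_port": a_end,      # customer port identifier
        "z_end_port": z_end,      # remote exchange/peer port identifier
        "bandwidth_mbps": mbps,
    }).encode()
    req = urllib.request.Request(
        f"{api_base}/v1/cross-connects",
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage, against a hypothetical endpoint:
# order = request_cross_connect("https://api.example-exchange.net", "TOKEN",
#                               "lon1-port-42", "nyc2-port-7", 1000)
```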

    “Our two organizations share the same philosophy, believing that the ability to provision scalable, reliable, and secure interconnects will power new innovation for the benefit of online users,” said Allegro CTO Andy Davidson.

    IIX’s platform is not an overlay product, but a complete bypass of the public Internet. By bypassing the public Internet, it improves latency, increases immunity to DDoS attacks and mitigates other issues.

    The IIX peering cloud enables users to connect to peers in other geographic regions. (Image: IIX)

    Team of peering veterans

    Allegro’s automation platform Snap allows organizations to purchase and provision virtual cross-connects, point-to-point circuits and peering interconnections on demand. Allegro’s engineering talent is crucial to the deal, as they will help evolve the platform.

    Allegro CTO Andy Davidson has been appointed IIX president of network engineering, Europe. He now joins a team of experts with deep roots in the network peering industry.

    IIX founder and CEO Al Burgio has quietly built a major contender. His team includes some Equinix veterans, including Bill Norton, co-founder of Equinix and published author on peering, Morgan Snyder, who spent a decade with Equinix, and advisory board member Jay Adelson, Equinix founder and co-creator of PAIX, formerly known as the Palo Alto Internet Exchange.

    Equinix, by far the largest incumbent in the commercially operated Internet exchange space in the U.S., is the biggest competitor of both IIX and the exchange operators that participate in the Open-IX initiative.

    5:16p
    DCIM Vendor Power Assure Dissolved

    Power Assure, a Santa Clara, California-based data center infrastructure management startup, has been dissolved, Clemens Pfeiffer, the company’s now former CTO, said.

    The company failed to raise enough money to continue operating, he said, adding that he did not have any further details about what had led to the dissolution.

    The DCIM software market is still fairly young but has already gone through a wave of consolidation. A handful of Power Assure competitors have been acquired by larger companies, which have been fleshing out their DCIM offerings to provide comprehensive infrastructure monitoring and management software suites.

    Power Assure focused on one element of the infrastructure: energy management. Its technology was initially based on algorithms that calculated the optimal server capacity an application needed and automatically shut off unnecessary capacity or spun up more based on actual application demand.

    More recently, it had been pushing the concept of “software-defined power,” the idea of dynamically shifting applications from one data center to another based on power availability and quality in a particular geography at a particular point in time. “It requires creating a layer of abstraction that isolates the application from local power dependencies and maximizes application uptime by leveraging existing failover, virtualization and load-balancing capabilities to shift application capacity across data centers in ways that always utilize the power with the highest availability, dependability and quality,” Pfeiffer wrote in an article describing the concept for Data Center Knowledge earlier this year.
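
    A minimal sketch of that idea, with invented sites, metrics and weights rather than anything from Power Assure’s product, would score candidate data centers on current power availability and quality and steer capacity toward the best one:

```python
# Minimal sketch of the "software-defined power" idea described above: rank
# candidate data centers by current power availability and quality and shift
# application capacity toward the best-scoring site. Sites, metrics, and
# weights are hypothetical, not Power Assure's actual implementation.

SITES = {
    # site -> power availability, power quality, and current utilization (0..1)
    "dc-east":    {"availability": 0.99, "quality": 0.95, "utilization": 0.80},
    "dc-west":    {"availability": 0.97, "quality": 0.99, "utilization": 0.55},
    "dc-central": {"availability": 0.90, "quality": 0.92, "utilization": 0.40},
}

def score(site: dict) -> float:
    """Higher is better: favor good power, penalize already-loaded sites."""
    return 0.6 * site["availability"] + 0.3 * site["quality"] - 0.1 * site["utilization"]

def best_site() -> str:
    return max(SITES, key=lambda name: score(SITES[name]))

print(best_site())  # "dc-west" with these hypothetical numbers
```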

    Ahead of its time?

    Jennifer Koppy, research director for data center management at IDC, said Power Assure’s energy management technology was “extremely forward-looking.”

    She speculated that perhaps timing was to blame for its inability to sustain itself. “I feel like they had a superb idea, but I don’t think the market is ready yet.”

    What the company was proposing would require an entirely new mindset about data center management and a new set of processes.

    The DCIM market, she said, is still “in the ad hoc opportunistic phase,” where a lot of new ideas are born but not yet tested against market realities. There’s a “flood of great ideas, and then there’s what actually succeeds in the market,” Koppy said.

    Wide pool of supporters

    Power Assure enjoyed support from partners, investors and the government but appears not to have been able to sustain it.

    It followed a $2.5 million Series A round in 2009 with an $11.25 million round the following year and a $14.5 million Series B in 2011. In 2010, the U.S. Department of Energy awarded the company a $5 million grant as part of the American Reinvestment and Recovery Act of 2009, also known as President Barack Obama’s “stimulus package.”

    Investors in Power Assure included ABB Technology Ventures, Dominion Energy Technologies, Draper Fisher Jurvetson, Good Energies and Point Judith Capital. The company had struck partnerships with numerous vendors, including VMware, Cisco, Dell, IBM, Intel, ABB, Raritan and In-Q-Tel.

    Market favors big players

    DCIM is one of the fastest-growing segments of the enterprise software market, according to 451 Research. The market research firm tracks about 60 vendors in the space.

    While there is a healthy growth rate, 451 revised its 2013 growth projections downward this year. In 2013, the analysts expected market size to reach $1.8 billion by 2016, but changed the forecast, saying it would reach that size two years later.

    The revenue the market generates is heavily skewed toward a dozen or so large companies. Only a handful of players manage to generate more than $200 million in annual DCIM sales, according to 451.

    The vast majority of DCIM vendors are smaller players like Power Assure, many of which make less than $4 million a year in sales.

    Some of the smaller players have been acquired by their larger competitors in deals such as CommScope’s iTRACS acquisition in 2013 and Panduit’s acquisition of SynapSense earlier this year. Companies with different portions of DCIM functionality have also partnered as everybody races to have the most comprehensive package out there.

    Consolidation is a sign that the market is starting to mature, IDC’s Koppy said. “We will most definitely see more. More partnerships and more buyouts.”

    6:00p
    SingleHop Launches VMware-based Virtual Private Cloud


    This article originally appeared at The WHIR

    Managed cloud hosting provider SingleHop announced on Tuesday that it has launched its new Virtual Private Cloud (VPC) infrastructure service. The offering leverages enterprise technologies from VMware, Veeam, EMC and others to provide a scalable private cloud.

    VPC is integrated with SingleHop’s proprietary cloud automation and orchestration platform and gives customers granular control over resource allocation.

    Based on VMware vSphere and VMware vCloud Director, SingleHop’s VPC offering has achieved VMware Hybrid Cloud Powered status. This means that its cloud service is interoperable with a customer’s VMware-based internal data center.

    Using SingleHop’s LEAP interface, customers can deploy an unlimited number of virtual private clouds, and can manage them via VMware’s vCloud Director.

    “Deploying a Virtual Private Cloud solves a number of common problems for businesses today. First, the pricing model is allocation-driven, easy to understand and fixed, meaning you don’t pay more just because you deploy a new Virtual Server inside of a Virtual Private Cloud,” Jordan Jacobs, Vice President of Products at SingleHop said. “Second, it’s scalable, in two ways. You can allocate and reallocate your resources to any Virtual Server inside of your Virtual Private Cloud anytime you want. And finally, the size of your Virtual Private Cloud itself can be scaled quickly and easily.”

    SingleHop is part of the VMware vCloud Air Network of partners, which gives the company access to VMware technology to provide a VMware-based cloud infrastructure.

    Last month, SingleHop launched its Enterprise On Demand Technology Partner Program, which brings enterprise-class technologies into its proprietary automation platform, LEAP, including solutions from initial partners Alert Logic and Radware.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/singlehop-launches-vmware-based-virtual-private-cloud

    6:40p
    Telx Adds Hybrid Cloud Option to Cloud Exchange

    Telx has added a hybrid cloud option to its Cloud Xchange platform, a centralized hub for cloud services its data center customers offer.

    Many public and private cloud services were available on the platform before, but now there is also the option of combining the two cloud flavors into a single architecture.

    Colocation and interconnection providers like Telx and Equinix have been using the exchange approach to leverage the ecosystems of service providers they have nurtured within their data centers to get a piece of the action in the thriving cloud services market without having to provide the cloud services themselves.

    In a way, it is an attempt to translate the Internet exchange-centric model to the cloud services market. Both companies put peering at the core of their business models early on and managed to attract large memberships, with Equinix holding the larger share of the U.S. peering market of the two.

    The bigger the number of networks peering in a facility, the more attractive that facility becomes. Now Telx’s Cloud Xchange and the Equinix Cloud Exchange are growing to make their data centers more attractive for cloud users.

    Three companies that provide public and private cloud services through the Telx cloud exchange are now offering the private cloud option: Intercloud Systems, Easy Street and Peak. Direct links to the Amazon Web Services public cloud are also available at Telx facilities.

    “With access to an abundant set of cloud service providers offering public, private, and now hybrid cloud solutions, we’re helping providers grow their business while enterprises benefit from tailored solutions that tackle demanding workloads and offer the flexibility needed to meet business goals,” Telx CEO Chris Downie said in a statement. “The enhanced Telx Cloud Xchange is core to our business as we continue to foster relationships with cloud service providers and enterprises.”

    Telx has 20 data centers in 13 U.S. markets. The portfolio has 1.3 million square feet of data center space and provides close to 50,000 links to various global networks.

    According to the company, cumulatively the networks connect more than 10 million square feet of data center space.

    7:30p
    How New Types of DDoS Affect the Cloud

    At a recent security meeting with a large healthcare organization, I had the privilege of looking at the logs of a private cloud infrastructure I helped design. They showed me a couple of interesting numbers and what looked like possible DDoS attacks, except these were different. The security admin mentioned that he and colleagues at other organizations have been seeing a spike in malicious DDoS attacks against their systems.

    Over the past few months, there have been more DDoS attacks against more IT infrastructures all over the world. These attacks have evolved from simple volumetric attacks to something much more sophisticated. Now, attackers are using application-layer and HTTP attacks against certain targets within an organization.

    Consider this: cloud DDoS attacks are larger than ever. The Arbor Networks 9th annual Worldwide Infrastructure Security Report illustrates this point very clearly, with the largest reported DDoS attack in 2013 clocking in at 309 Gbps. ATLAS data corroborates the report, with eight times as many attacks over 20 Gbps monitored in 2013 as in 2012. And 2014 is already shaping up to be a big year for attacks, with a widely reported NTP reflection attack exceeding 300 Gbps and multiple attacks over 100 Gbps in early February.


    Fortunately for my friend and his organization, this SQL application-based attack was stopped. Why? They have an application firewall deployed on a virtual appliance. That firewall was specifically monitoring the targeted application, so the attack was stopped and logged.
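
    For illustration only, the sketch below shows the kind of application-layer checks such a firewall applies, namely a crude SQL-injection pattern match plus per-client rate limiting; it is not the appliance this organization used, and real products are far more sophisticated.

```python
# Illustrative sketch only: the kind of application-layer checks a web
# application firewall might apply, i.e. a crude SQL-injection pattern match
# plus per-client rate limiting. Real WAFs are far more sophisticated.
import re
import time
from collections import defaultdict, deque

SQLI_PATTERN = re.compile(r"('|--|;|\bunion\b|\bselect\b.+\bfrom\b)", re.IGNORECASE)
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100    # hypothetical threshold

_recent = defaultdict(deque)     # client IP -> timestamps of recent requests

def allow_request(client_ip: str, query_string: str) -> bool:
    """Return False if the request looks malicious or the client is flooding."""
    if SQLI_PATTERN.search(query_string):
        return False                      # block obvious injection attempts

    now = time.time()
    window = _recent[client_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                  # drop timestamps outside the window
    return len(window) <= MAX_REQUESTS_PER_WINDOW

# Example:
print(allow_request("203.0.113.5", "id=1"))                            # True
print(allow_request("203.0.113.5", "id=1 UNION SELECT * FROM users"))  # False
```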

    A cloud DDoS attack is no laughing matter. Massive systems now rely on cloud environments where a single component can cause a cascading failure. This is where next-generation security and DDoS appliances are going to be helping out.

    The reality is simple: With more organizations moving onto cloud platforms, there will need to be new types of security best practices to help secure their environments. Data leaks and security breaches can be messy from an IT perspective, but they can also really hurt a company’s image. More organizations are beginning to spend serious dollars on the next-generation security industry in efforts to help mitigate a possible DDoS attack.

    What to look for and consider:

    • Next-generation security appliances and firewalls are real and have powerful cloud-layer visibility
    • Incorporate virtual security into your data center as virtual machines, appliances and more
    • DLP, IPS/IDS engines are much more powerful now and have granular visibility into your data architecture

    Whether a company is hosting its own cloud environment or using a hosting provider, new types of security measures that can effectively protect against cloud DDoS attacks will have to be evaluated. Virtual security appliances can now be placed anywhere on the network to protect different types of internal systems. This can range from a specific OS service to a full application.

    Also, new physical storage appliances are taking data correlation and security into their own hands.

    There is one final very important piece to all of this. Because of the increase in attacks against applications, internal resources, and various data points, there needs to be more collaboration between application and security teams. Application developers and administrators must clearly communicate what they need to operate with the security teams. This means understanding network, port, and services configurations. Improperly setting up an application – especially if it’s WAN-facing – can have very bad consequences.

    It’s a changing industry out there. And cloud is certainly leading the way. However, just like with any new technology, there are always plenty of new security concerns to follow. Look for next-generation security to continue to evolve to help support the very wide demands of the cloud.

