Data Center Knowledge | News and analysis for the data center industry

Wednesday, February 4th, 2015

    5:01a
    Equinix’s Cloud Connectivity Play Helps Re-Architect Enterprise IT

    Two offerings Equinix launched last year, Cloud Exchange and Performance Hub, have hit major milestones.

    Cloud Exchange now has close to 100 members, and customer adoption of Performance Hub has crossed into triple digits. Building on the robust connectivity Equinix is known for, both are helping enterprises transform IT.

    Enterprises are increasingly looking to hybrid infrastructure that combines traditional data center capacity with cloud. Equinix has tapped into this demand by providing the connectivity that kind of infrastructure requires. The company will spend 2015 deepening its reach into connectivity at the application level, Equinix CTO Ihab Tarazi said.

    Cloud Exchange provides private and secure direct links to the major clouds and networks out of Equinix data centers. Performance Hub extends companies’ corporate IT networks close to major population centers. The company recently acquired professional services firm Nimbo, a company with expertise in transforming enterprise IT from legacy to modern infrastructure.

    “People are looking for direct access to cloud with a single port,” said Tarazi. “They want automation. They need it instantly. The third thing they want is to go down to the application level of performance. For the rest of 2015, we’ll build deeper integration at the application level, providing guaranteed performance to Office 365, Google, Cisco, IBM and others.”

    Performance Hub is an attractive entry point into the Equinix model. “We had a lot of people sign up [for Performance Hub],” said Tarazi. “It’s been very popular because it solves key issues like [updating] the backbone with significant capacity. It lowers the cost 40 percent or so in most cases.”

    Cloud Exchange and Performance Hub are proving to be very complementary for early customers. “The two use cases are starting to come together for the benefit of [the] customer,” he said. “If they’re building Performance Hub, they link directly to Cloud Exchange. We see customers not only trying to build their backbone into our data centers, but connecting to hybrid infrastructure through Equinix.”

    One example is CDM Smith, a global consulting, engineering, and construction company, which chose Equinix to help transform its network architecture and leverage a hybrid cloud model. The company deployed Performance Hub at nine key locations around the globe and configured high-performance interconnection to support real-time collaboration among its 160 offices. It also connected to Cloud Exchange to transition to a hybrid cloud environment, including Microsoft Azure.

    Another example is HarperCollins Publishers, a division of News Corp., which is a resident at Equinix’s London IBX. It connected to Microsoft Azure ExpressRoute through the Equinix Cloud Exchange.

    Cloud Exchange ended 2014 with a presence in 19 of the biggest global markets. Microsoft Azure is offered in 16 of those markets, AWS in eight, Google’s cloud in 15, Cisco’s in 16, and IBM’s in nine.

    “We’ll focus our energy on these major cloud providers who are innovating like crazy,” said Tarazi. “We’ve done a good job staying close to them when it comes to innovation. People want simplicity, automation, orchestrators to work seamlessly in our data centers. No longer do people have to worry about integration.”

    Cloud Exchange recently became more dynamic through work with Apigee. An API acts as the self-service mechanism, letting customers hook into a multitude of cloud providers within the exchange automatically instead of ordering a physical cross-connect.
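    To make the idea concrete, here is a minimal sketch of what such a self-service provisioning call could look like. The endpoint, payload fields, and auth scheme are hypothetical illustrations, not Equinix’s actual Cloud Exchange API.

    ```python
    # Hypothetical self-service provisioning call against an exchange API.
    # The URL, fields, and response shape are invented for illustration.
    import requests

    EXCHANGE_API = "https://api.example-exchange.net/v1/connections"  # placeholder

    def provision_virtual_circuit(token, port_id, provider, bandwidth_mbps):
        """Request a virtual circuit from an existing exchange port to a
        cloud provider, replacing a manually ordered physical cross-connect."""
        payload = {
            "portId": port_id,            # the customer's single physical port
            "provider": provider,         # e.g. "azure-expressroute"
            "bandwidthMbps": bandwidth_mbps,
        }
        resp = requests.post(
            EXCHANGE_API,
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"connectionId": "...", "status": "PROVISIONING"}
    ```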

    4:30p
    Demystifying the Data Center Sourcing Dilemma

    Laura Cunningham is a business consultant with HP Technology Consulting, building financial business cases for CIOs and CFOs.

    A growing number of companies are faced with the decision of what to do with aging data centers and growing IT requirements for data center capacity. There is a spectrum of sourcing options to consider when selecting a data center, including retrofit, greenfield, modular, and colocation. While each option has advantages and disadvantages, variables such as size, location, and investment strategy are key components that influence which solution is the best fit.

    The following examines data center sourcing from the facilities perspective, covering the building shell and core; maintaining and operating the facility; and power and cooling. For the sake of comparison, it is assumed that the availability, resiliency, or “tier” level of the data center is the same across sourcing options.

    The options considered include retrofit, upgrades to an existing data center; greenfield, the purchase or construction of a new data center; modular, a pre-fabricated data center; and colocation, leasing data center space from a service provider. Managed services and cloud are considered along the same spectrum as colocation in this article, since they are essentially varying degrees of data center as a service.

    Data Center Capacity Needs

    When you’re considering a data center solution, one of the driving factors is the IT equivalent of “space”: kW capacity. The long-term kW capacity requirement and its ongoing variability have the largest impact on the recommended solution. Short-term and highly variable capacity needs are typically best accommodated by colocation, managed services, and cloud, as these are highly scalable. Long-term capacity needs are often best accommodated by retrofit and greenfield options, as these are more permanent solutions. Modular falls somewhere in between, since it provides the opportunity to add capacity in phases relatively quickly when needed. The trick is to forecast kW requirements over time in order to find the best data center solution.

    Across the sourcing spectrum, for a retrofit, greenfield, or modular solution, the sooner the built capacity is fully utilized, the shorter the payback period of the investment. The financial risk for these options is overbuilding and spending more than necessary on unused capacity. In comparison, colocation accommodates variability, since it charges rent and power for what is actually used (excluding any charges for reserved adjacent expansion space).

    In the end, a combination of solutions may be necessary to accommodate fluctuating business cycles and the highs and lows in capacity requirements. The goal is to find the middle ground: invest in forecasted baseline capacity while utilizing a service provider to accommodate requirements above that amount, as the sketch below illustrates.
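    A minimal sketch of that sizing logic, with hypothetical numbers: build owned capacity to a baseline of the forecast and push demand above the baseline to a colocation provider.

    ```python
    # Split a kW forecast between an owned build and colo overflow.
    # All figures are hypothetical placeholders.

    forecast_kw = [120, 150, 160, 170, 190, 210, 240, 300]  # kW per quarter

    def split_capacity(forecast, owned_kw):
        """Return (owned utilization, colo overflow) for each period."""
        owned = [min(kw, owned_kw) for kw in forecast]
        overflow = [max(kw - owned_kw, 0) for kw in forecast]
        return owned, overflow

    # Size the owned build to roughly the 75th percentile of forecast demand
    # so the facility stays well utilized while colo absorbs the spikes.
    baseline = sorted(forecast_kw)[int(len(forecast_kw) * 0.75) - 1]
    owned, colo = split_capacity(forecast_kw, baseline)
    print(f"owned build: {baseline} kW; colo overflow per period: {colo}")
    ```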

    Location is Everything

    What is true in real-estate investment holds true for data centers: location is everything. First, from a financial perspective, land prices, construction costs, service provider availability, and financial incentives vary greatly by location, and this can impact any of the data center sourcing options.

    Additionally, latency requirements will drive location requirements. Latency, the time between the moment a signal is sent and the moment it is received, is especially critical for financial services firms that need low-latency connections to the stock exchanges. Proximity hosting, a service offered by colocation providers, is popular with many companies that want the ability to interconnect with the exchanges.

    Further, for many businesses it is important to secure a data center near existing operations. This enables them to leverage trained personnel and to maintain market presence in that region. Disaster recovery requirements also affect location parameters, since industry best practices recommend keeping a specific distance between a company’s data centers.

    Finally, it is most cost-effective for retrofit, greenfield, and modular options to leverage land or buildings already in the company’s real-estate portfolio.

    What is Your Investment Strategy?

    A huge differentiator between data center sourcing options is a company’s investment strategy. Does your company prefer to own or lease facilities? Each sourcing option varies greatly in the ratio of capital expenditure (CAPEX) to operating expenditure (OPEX) required.

    Retrofit, greenfield, and modular solutions are typically owned and classified as assets on the balance sheet, and therefore require the largest initial CAPEX. Colocation, managed services, and cloud tend to be OPEX-centric, requiring low to no initial CAPEX. Owning a data center makes more sense for businesses that have or can obtain capital and want to keep OPEX low. Leasing makes more sense for businesses that need to conserve capital and can absorb ongoing OPEX.

    Financial calculations should be used when determining the right fit for your company. Total Cost of Ownership (TCO) is a calculation widely used to compare the cost of different data center sourcing options. In addition, Net Present Value (NPV) should be calculated to evaluate the present value of future cash flows for each option. Both calculations vary greatly with the time period considered: retrofit, greenfield, and modular are typically more expensive in the short term due to the large initial CAPEX, while colocation is typically more expensive over the long term, as recurring annual expenses (OPEX) can surpass the cost of the other three options.
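    The crossover is easy to see in a toy NPV comparison. The cash flows and discount rate below are hypothetical placeholders, not figures from the article.

    ```python
    # Toy NPV comparison: CAPEX-heavy owned build vs. OPEX-heavy colocation.

    def npv(rate, cashflows):
        """Net present value; cashflows[0] occurs today (year 0)."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    DISCOUNT_RATE = 0.08
    YEARS = 10

    # Owned build (retrofit/greenfield/modular): big initial CAPEX, lower OPEX.
    owned_build = [-5_000_000] + [-300_000] * YEARS
    # Colocation: no initial CAPEX, higher recurring lease-plus-power OPEX.
    colocation = [0] + [-900_000] * YEARS

    print(f"Owned build NPV: {npv(DISCOUNT_RATE, owned_build):,.0f}")
    print(f"Colocation NPV:  {npv(DISCOUNT_RATE, colocation):,.0f}")
    # At 10 years colocation is still cheaper with these numbers; around
    # year 15 the owned build crosses over and becomes the cheaper option.
    ```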

    Ownership and appetite for control also set data center sourcing options apart. Greenfield, retrofit, and modular solutions are best for companies that desire ownership and control over the data center and daily operations. Colocation, managed services, and cloud are generally preferred by companies that wish to outsource facility operations and maintain control by ensuring the correct Service Level Agreement (SLA) is in place.

    In conclusion, evaluating size, location, and investment strategy will help you narrow in on the data center solution that is right for your business.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:40p
    Piston and ActiveState Pitch Instant Private PaaS

    Piston Cloud Computing has teamed up with ActiveState Software to sell a full-package private Platform-as-a-Service solution that combines OpenStack at the infrastructure layer with the Cloud Foundry PaaS on top. The companies are promising they can deploy the entire environment in less than one day.

    Piston’s core business is standing up Infrastructure-as-a-Service clouds for customers based on OpenStack, the popular open source cloud platform. ActiveState’s product, called Stackato, is based on Cloud Foundry and Docker application containers, both also open source.

    Piston has strong ties to both OpenStack and Cloud Foundry. The company’s founder, Joshua McKenty, was one of the original developers of OpenStack. He left Piston in September 2014 to join EMC-controlled Pivotal, the company behind Cloud Foundry, to lead its Cloud Foundry business as CTO.

    The point of a private PaaS is to enable agile development teams to quickly and constantly build, test, and deploy software on secure in-house infrastructure that can scale.

    Stackato has features to make both developers and IT happy: it bundles built-in languages, frameworks, and application services for developers, and offers robust “enterprise-level” security and support for IT.

    The companies say the private PaaS will deploy faster and cost less than alternatives such as Amazon Web Services, VMware, or DIY solutions.

    “Customers will not only have access to a scalable PaaS environment enabling easy deployment of web and mobile apps, a flexible IaaS solution ideal for continuous development and delivery, but they’ll be able to deploy the entire environment and be up and running in less than a day,” ActiveState CEO Bart Copeland said in a statement.

    8:33p
    Looking for a Cloud Computing Job? Here are the Best Companies and CEOs to Work for in the Industry


    This article originally appeared at The WHIR

    The most desirable places to work in cloud computing are Google, Parallels, Virtustream, HubSpot, AppDirect, Mirantis, MuleSoft, Zenoss, and OpenDNS, according to data gleaned from Glassdoor.com on whether employees would recommend their company to a friend.

    This is according to a list compiled by Forbes.

    With so many companies using the term “cloud” to describe themselves, Forbes cut down on the number of companies it assessed by only including companies in CRN’s “100 Coolest Cloud Computing Vendors of 2015” list. It also excluded companies that didn’t have enough feedback on Glassdoor.com to give a clear picture of how they are perceived by employees.

    According to Glassdoor.com data, there were several CEOs with more than 90 percent employee approval. The highest rated CEOs included Zscaler’s Jay Chaudhry, OpenDNS’s David Ulevitch, Virtustream’s Rodney J. Rogers, HubSpot’s Brian Halligan, and New Relic’s Lew Cirne.

    However, some of the “hottest” companies that were rated relatively low by employees were Engine Yard, NTT Communications, and Internap.

    There were also some interesting discrepancies, such as the fact that 91 percent of employees approved of Splunk’s quirky CEO Godfrey Sullivan, yet only 66 percent would recommend working there.

    With 3.9 million jobs in the US alone affiliated with cloud computing, 384,478 of which are in IT, cloud jobs continue to make up a significant part of the workforce. A recent report by WANTED Analytics showed that IBM, Oracle and Amazon have the most cloud computing job openings. Recently, IBM said that it had around 1,000 job openings in its cloud group.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/looking-cloud-computing-job-best-companies-ceos-work-industry

    9:16p
    Exclusive: Rackspace Deploys FieldView DCIM Across Global Data Centers

    Rackspace has deployed data center infrastructure management software by FieldView Solutions in most of its data centers in a major win for the Edison, New Jersey-based DCIM vendor. Rackspace has data centers across the U.S., as well as in London, Hong Kong, and Sydney.

    Companies like Windcrest, Texas-based Rackspace, which provides a range of data center services (from colocation to Infrastructure-as-a-Service and everything in between), as well as pure-play colocation providers, are increasingly interested in commercial DCIM solutions and the competitive advantage they may bring.

    There are two main advantages of data center management software for providers: better capacity planning, which helps defer unnecessary data center construction costs, and the ability to give customers granular visibility into the data center resources they consume.

    Rackspace, so far, has deployed FieldView for its own data center management operations only. FieldView’s DCIM software provides power consumption and environmental monitoring and tracks how much space and power is available in any particular area of a data center. Should Rackspace choose to offer a customer DCIM portal as a service, the capability is there, said Sev Onyshkevych, FieldView’s chief marketing officer.

    Pure colo – where providing dashboards that show customers how much power they’re using or alert them to cooling irregularities in their environment makes the most sense – is only part of Rackspace’s business and not really a big focus for the company. Rackspace is much more focused on providing hands-on infrastructure outsourcing services, which means it owns and manages most of the hardware in its data centers itself and stands to benefit the most from a tool that helps it do that.

    One big example of a FieldView customer with a pure space-power-cooling play is Digital Realty, said Rhonda Ascierto, research manager at 451 Research. The San Francisco-based wholesale data center giant, which also has a retail colocation business, is using FieldView’s “white box” solution to provide monitoring dashboards to its customers under its own brand, EnVision.

    Here’s a list of select DCIM deployments by data center providers, courtesy of 451 Research (listed as provider: supplier – product):

    • Cologix: Modius – Open Data
    • Digital Realty: FieldView Solutions – FieldView
    • CenturyLink: Schneider Electric – StruxureWare for Data Centers
    • Fortrust: Baselayer – IO.OS
    • Interactive Pty: Cormant – Cormant CS
    • RagingWire Data Centers: CA Technologies – CA DCIM
    • Virtus Data Centres: iTRACS (CommScope) – CPIM; TDB Fusion – Federos

    DCIM-Enabled Capacity Planning Advantageous

    Smart capacity planning alone can be a huge advantage for a service provider. Besides being able to build out just as much capacity as needed at any point in time, it makes for faster customer on-boarding, Ascierto explained. “It’s a big one for colos,” she said.

    For a company like Rackspace it can be a sales tool. Since Rackspace provides a range of infrastructure services, and customers often need a mix of services, DCIM can help a sales rep quickly determine what kind of capacity is available where and offer a price estimate for a very complex set of services almost instantaneously. The estimate can not only offer the lowest cost to the customer but also take into account the placement of capacity that is most advantageous to the service provider.
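    A hypothetical sketch of how DCIM capacity data could feed such a quote: pick the site with enough spare capacity that is cheapest for the provider to serve. The data model, sites, and rates are invented for illustration; this is not FieldView’s or Rackspace’s actual tooling.

    ```python
    # Illustrative DCIM-backed quoting: all site data and rates are invented.

    sites = [
        {"name": "DFW1", "spare_kw": 80,  "cost_per_kw": 110},
        {"name": "LON2", "spare_kw": 35,  "cost_per_kw": 140},
        {"name": "HKG1", "spare_kw": 120, "cost_per_kw": 125},
    ]

    def quote(required_kw, margin=0.25):
        """Price the cheapest placement among sites with enough spare capacity."""
        eligible = [s for s in sites if s["spare_kw"] >= required_kw]
        if not eligible:
            return None  # no capacity: triggers a build-out conversation instead
        best = min(eligible, key=lambda s: s["cost_per_kw"])
        monthly = required_kw * best["cost_per_kw"] * (1 + margin)
        return best["name"], round(monthly, 2)

    print(quote(50))  # ('DFW1', 6875.0)
    ```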

    Another big benefit lies in predictive analytics. If a service provider can track a customer’s usage trends over time, with a powerful enough analytics engine they can potentially help the customer plan their IT capacity better, and “that’s a pretty powerful service,” Ascierto said.

    Growing DCIM Opportunity

    DCIM is a growing market, and DCIM for data center service providers is a segment that holds a lot of promise, because successful providers constantly grow their data center footprint. DCIM vendors, FieldView included, charge based on the amount of data center capacity monitored or managed, so a customer with a growing footprint is an ongoing revenue stream.

    FieldView charges per data source, and each piece of equipment tracked has several such sources. A UPS typically has about 20, a CRAC unit 15, and a rack power strip can have 10, Onyshkevych said. He did not say exactly how big Rackspace’s deployment was, beyond that it covers “many tens of thousands” of data points.
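    The per-data-source math is straightforward. Using the per-device counts Onyshkevych cites, a back-of-the-envelope estimate for a hypothetical equipment fleet looks like this (the inventory itself is invented):

    ```python
    # Data-source counts per device type, per the figures quoted above.
    SOURCES_PER_DEVICE = {"ups": 20, "crac": 15, "power_strip": 10}

    # Hypothetical equipment fleet for illustration.
    inventory = {"ups": 40, "crac": 200, "power_strip": 3000}

    total_sources = sum(
        count * SOURCES_PER_DEVICE[kind] for kind, count in inventory.items()
    )
    print(f"Monitored data sources: {total_sources:,}")
    # 40*20 + 200*15 + 3000*10 = 33,800, on the order of the
    # "many tens of thousands" described for the Rackspace deployment.
    ```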

    Deploying DCIM is an involved process, however, and one thing vendors have found especially challenging is integrating with customers’ existing capacity planning systems and change management processes. Legacy capacity management systems are often siloed and custom-built, Ascierto explained, but vendors have to integrate with them because customers are rarely willing to “switch over to DCIM on day one,” she said.

    How long and difficult the deployment process is varies greatly from customer to customer, of course. Deploying FieldView across the Rackspace footprint took several months, according to Onyshkevych.

    DCIM Hype Subsides

    The DCIM market has matured and gotten over the initial hype cycle, Onyshkevych said. Companies are now buying the software for good reasons and not because it’s the latest and greatest thing everybody is talking about.

    Because it provides the monitoring side of DCIM, FieldView is in a good position to capture revenue in the initial DCIM deployment phase. Before you deploy any other, more sophisticated DCIM functions – things like power capping, load shedding, or dynamic cooling control – you need a strong monitoring and asset management base, he explained.

    But overall, customers appear to be more knowledgeable about DCIM today than they were even a year ago. “They understand DCIM is not a single product,” Onyshkevych said. “It’s a category.”

    9:33p
    Startup Qumulo Raises $40M for Scale-Out NAS Storage Management

    Qumulo, a startup that focuses on data center storage management, has raised a $40 million Series B funding round, bringing total investment in the company to $67 million. The previous $24.5 million Series A round was used to build up the product; now it’s go-to-market time.

    The startup consists of scale-out NAS storage veterans who have identified that the problem with scale-out storage today isn’t the storage hardware itself, but the data that resides on it. Often, companies piece together different storage hardware as they grow, which causes management problems down the line. Qumulo’s technology helps track what data is where, who is accessing it, and how often.

    It’s all in the name of understanding how data is being used and tuning storage accordingly. It’s a scale-out file system for today’s Frankenstein storage, where a bunch of pieces are stitched together to act as one system.

    Just how Qumulo accomplishes this is unknown, though the company does claim active users. Still in stealth, it’s the latest in a line of well-funded startups keeping the specifics of their technology and strategy on the down-low. Often, stealth means either that the problem has been identified but the solution is still in the works, or that the solution has been figured out and the company is being protective. Stealth generates interest, and Qumulo is being covered heavily.

    While the details are unknown, the pedigree is there. The startup’s executive team has deep storage roots, stocked with former Isilon talent. EMC acquired Isilon for $2.25 billion in 2010. The scale-out NAS storage experts have spoken extensively with users to understand today’s problems, according to a company release.

    “Qumulo presents so much potential to change the enterprise storage game,” said Sujal Patel, founder of Isilon, in the release. “The team is addressing the major problem of digital data growth and management that will only continue to compound, with the rare expertise required to build high performance, massive scale-out NAS software. By attacking the root of the problem — managing the data, rather than managing the storage — new outcomes are possible. When Qumulo enters the market, it will have a significant impact on enterprise scale-out NAS.”

    The company’s first offering was a storage appliance built on commodity hardware, but Qumulo told TechCrunch that the goal is to create an all-software solution.

    The latest round was led by Kleiner Perkins Caufield & Byers with participation from previous investors Highland Capital, Madrona Venture Group, and Valhalla Partners.

    There are numerous other upstarts attempting to fix modern scale-out storage problems. Two examples are the also-healthily-funded DataGravity and Actifio; the latter also focuses on data management and raised $100 million last year.

    Two startups in the space with an SSD focus are Coho, whose flash-y SDN appliance combines commodity hardware with software for performance, and Nutanix, whose solution targets apps that require extreme speed.

    Storage needs are diverse, but the trend is always toward bigger and faster. Companies like Qumulo recognize that bigger and faster means more data and more files, which are harder to track and harder to understand in terms of how they’re needed and used.

