Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, June 10th, 2014

    7:00a
    Open-Xchange to Offer Hosted Version of Popular Email and Apps Suite

    Open-Xchange has built a sizable business providing email and a suite of apps to hosting providers to help them expand into ancillary services and raise average revenue per customer. The company is now releasing its own hosted version, called OXaaS, that service providers can white-label.

    Open-Xchange has gone from 82 million seats sold last year to 110 million. In addition to business software, it now provides the backend for several large telecoms, such as Telecom Italia, as well as large service providers and a wealth of smaller hosting providers.

    CEO Rafael Laguna said the new offering would take the burden of supporting non-core services off providers’ shoulders. “They don’t have to worry or run auxiliary services anymore,” he said.

    The go-to-market approach is a “partners only” one, to make sure Open-Xchange does not compete with its own customers. “We’re not selling to the end consumers,” Laguna said. “You can only buy this from provider partners because of channel conflicts.”

    Open-Xchange’s applications range from email services all the way to hosted desktops, where everything, including collaboration and document editing, can be accomplished within the browser.

    Hosted across several IaaS environments

    Open-Xchange is using several different Infrastructure-as-a-Service providers to run the hosted offering. It’s starting in Germany, the company’s home, and will be released in the U.S. later this year.

    In Germany, the company is using an OpenStack-based IaaS provider called Xion, a telecom spin-off. In the U.S., it is working with Hostway, with whom it has had a mutual customer: Sprint.

    “Being in Germany may not be enough for European customers,” said Laguna. “For example, we were talking to customers in Switzerland. Crazy example, but very interesting, is we’ve launched a customer in Finland on Azure. It’s 250,000 email accounts on one of the top five Finnish mail platforms.”

    Azure is an interesting choice because Open-Xchange’s app suite in some ways competes with Microsoft Office. In this case, Microsoft turns almost into a sales channel.

    Some customers also want to run Open-Xchange’s offerings on Amazon Web Services’ cloud. “We’re currently setting up a proof of concept on AWS,” Laguna said.

    Customers can choose to run the software themselves or use the new OXaaS. Some use the hosted version for a quick go-to-market and then bring it in-house if it proves successful, which lowers the barrier to entry. The initial cost is much lower, giving more providers a chance to try it out.

    Developers behind OpenOffice on board

    The company hired many members of the team behind OpenOffice, the open source office application suite, two and a half years ago. It has been leveraging that team to move into other business applications, such as spreadsheets and, more recently, a Dropbox-style offering.

    “We’re launching more and more stuff to complete the application stack,” said Laguna. “The applications have become a little more consumer friendly. In the past, we focused on small and medium business. Now we deal with consumer platforms with tens of millions of users on them, through partnerships with big telcos and hosting providers. It’s easy for the consumer to use, but grows with the user.”

    This move is similar to what Automattic did with WordPress. The popular blogging platform began as software that users hosted themselves, but with the introduction of WordPress.com, Automattic made it possible to have WordPress hosted by the company behind WordPress. The difference is that Open-Xchange will not sell directly and is open to a variety of hosting configurations.

    The public cloud services market is approaching $107 billion in revenue by some estimates, and Open-Xchange believes the best way to reduce escalating churn rates is to bring profitable software-based services to its customers. The upfront infrastructure investment required to support cloud apps and services is too high for many hosts, telcos and MSPs to handle, driving demand for solutions like the hosted flavor of Open-Xchange.

    12:00p
    With Data Campus, the Modular Data Center Scales Out

    As modular data centers become more widely adopted, vendors are expanding their offerings to support larger installations. The latest example of this was last week’s introduction of the T4 Data Campus by Cannon Technologies, a UK-based company specializing in modular solutions.

    It’s just one of the many new products in the market for modular capacity, where end users have an expanding universe of options. Modular data centers are built in factories and then deployed to a customer site via truck or rail.

    The new Cannon offering adopts the “building block” approach that allows customers to use multiple pre-engineered modules to build out IT capacity in phases. This approach is common among cloud builders like Microsoft, Google and eBay in building hyper-scale data center facilities, as well as “modular colo” providers like IO.

    The T4 Data Campus is an expansion of Cannon’s T4 modular product. The solution is notable for its wide range of configurations, including single- or multi-floor layouts, Tier 3 or Tier 4 resilience, low power usage effectiveness (PUE) ratings and free cooling options. The standard “block” offers 250kW of IT capacity.
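
    For a rough sense of how the building-block approach scales, the sketch below (a hypothetical calculation, not a Cannon sizing tool) works out how many 250kW blocks a given IT load would require once a planning margin is added.

        import math

        BLOCK_KW = 250  # standard T4 Data Campus block capacity cited above

        def blocks_needed(target_it_load_kw, growth_margin=0.2):
            """Return how many 250kW blocks cover a target IT load plus a planning margin."""
            planned_kw = target_it_load_kw * (1 + growth_margin)
            return math.ceil(planned_kw / BLOCK_KW)

        # Example: a hypothetical 1.5MW deployment with 20 percent headroom needs 8 blocks.
        print(blocks_needed(1500))  # -> 8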

    Cannon is one of three players in the UK market that have developed highly-configurable modular designs, along with BladeRoom and Colt. In the U.S., IO continues to be the market leader in configurable modular deployments, while newer players like CenterCore offer stackable solutions.

    Larger vendors target modular market

    The industry’s largest players have taken note and stepped up their game. In recent months, Schneider Electric has dramatically sharpened focus on the modular market, acquiring AST Modular and rolling out an expanded line of its own modular designs. Meanwhile, HP has begun offering to lease EcoPod modules to make them more attractive to enterprise customers.

    At the enterprise level, modular adoption started with the use of containerized modules to add capacity to power-constrained data centers. That is one reason much of the adoption of modular deployments has focused on enclosures that house power and cooling equipment.

    But that’s starting to change, as we noted in January. Enterprise modular deployments have increasingly focused on high-density requirements, where the form factor offers the best return on investment.

    The T4 Data Campus modules can be selected from a menu of “pick-and-mix” options for the construction of data halls, network operations centers, offices, entry systems, people traps and storage areas. Stair and lift modules, special application modules for security and a wide array of power and cooling modules are all included.

    “The level of flexibility offered by the T4 Data Campus sets a new benchmark in modular data center configuration and is perfect for those who want to be able to grow their facility as required,” said Mark Hirst, head of Cannon T4 Data Center Solutions.

    A view of the interior of a T4 Data Campus installation.

    12:30p
    Automate for Connected and Consistent Data

    Jeff Rauscher, Director of Solutions Design for Redwood Software, has more than 31 years of diversified MIS/IT experience working with a wide variety of technologies including SAP, HP, IBM, and many others.

    No matter what services or products your organization provides, you constantly need timely and accurate data to stay ahead and keep moving forward. But gathering and processing strategic data has always been a challenge. It takes time and often a great deal of human effort.

    For more than half a century, companies have relied on IT to reduce volumes of business data down to human size. In fact, this year marks the 50th anniversary of the IBM System/360, the machine family that defined the modern commercial mainframe. Today, the mainframe is everywhere in business and continues to serve as the backbone for many of the world’s corporate data centers.

    According to SHARE Inc., an independent organization of IT professionals, 71 percent of the global Fortune 500, 96 percent of the world’s top 100 banks, and 9 out of 10 of the world’s largest insurance companies use IBM’s System z. Contemporary mainframes have evolved and grown in tandem with other technologies, becoming considerably faster and more energy efficient, with significantly greater capacity than their predecessors.

    All business IT today owes its success, as well as much of its structure, to mainframe design. The separation of database, application and user interface evolved into the three-tier and client-server architectures that are standard for business software now. This structure, along with the mainframe itself, ushered in another technology that businesses depend on every day – the ERP.

    ERP growth

    Gartner first coined the acronym “ERP” for “Enterprise Resource Planning” in 1990, but the concept of ERP had been in practice for several decades before that. This broad concept for business management software that includes groups of integrated applications spread quickly. Today virtually every business process has at least one ERP component offered by any number of global software providers.

    Along with the mainframe and ERP, companies continue to add new technologies to enhance their data management and analytics capabilities. They rely on distributed computing as well as Software-as-a-Service (SaaS), virtual machines and cloud-based applications. At this point in the continuing history of IT and business data, executives and the people who provide them information are at the crossroads of an old problem – extracting actionable insights from their data.

    Time for Big Data

    From the earliest days of business computing, companies have used machines to cope with large volumes of data. The important difference now is the speed and performance of modern computers, which can gather and analyze huge amounts of information very quickly. While the scope of this available data is new, the challenge of Big Data is old and familiar: convert that data into actionable, meaningful information.

    That’s why more companies are investing in Big Data technology to support market intelligence and predictive analytics. Wikibon measured 58 percent growth in the market between 2012 and 2013. Just last year Gartner predicted that “By 2016, 70 percent of the most profitable companies will manage their processes using real-time predictive analytics or extreme collaboration.” But the question of exactly how this will happen still remains for many organizations.

    With decades of investment in layers of technology, how can business and IT most effectively leverage everything they have, from the trusted mainframe to the latest Big Data cruncher? The answer lies in automation.

    Automate for excellence

    Recent research from The Hackett Group found that top-performing IT organizations focus on automation and complexity reduction as part of essential IT strategy. These top companies implement 80 percent higher levels of business process automation. Using automation, these firms have 70 percent less complexity in their technology platforms and spend 25 percent less on labor. By automating processes across diverse technologies, platforms and locations, IT organizations can begin to see the real benefits of decades of technological advances.

    For example, one company that analyzes data from more than 400 million customers worldwide automated and coordinated its application, data and infrastructure processes so that they could be repeated and expanded with minimal user interaction each time they added a new client.

    The company can now link a standard set of automated data management processes to maximize client data value in a rapid, repeatable and consistent way for every customer. IT staff no longer have to complete a long list of repetitive manual tasks to set up new accounts that bridge information from diverse technologies across the enterprise. Instead, they focus on data analysis.
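
    The pattern described above can be pictured as a fixed, ordered list of automated steps that runs identically for every new client. The sketch below is purely illustrative; the step names are invented and do not represent Redwood Software's product.

        # Illustrative sketch of a repeatable onboarding pipeline; step names are hypothetical.

        def provision_storage(client_id):
            print(f"provisioning storage for {client_id}")

        def load_reference_data(client_id):
            print(f"loading reference data for {client_id}")

        def schedule_reports(client_id):
            print(f"scheduling nightly reports for {client_id}")

        ONBOARDING_STEPS = [provision_storage, load_reference_data, schedule_reports]

        def onboard_client(client_id):
            """Run the same ordered, automated steps for every new client."""
            for step in ONBOARDING_STEPS:
                step(client_id)

        onboard_client("example-client")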

    The speed at which companies acquire data continues to escalate. At the same time, businesses continue to demand more analytics. IT and knowledge workers use a mix of legacy technologies and the latest innovations in a complex landscape to manage new challenges.  But, in the end, it all comes down to execution.

    Never have accuracy and speed been more important for knowledge workers. Just as with the original industrial revolution that introduced automated manufacturing, automation provides answers for the data center, too. The only way to dramatically improve process quality and speed simultaneously, at a significantly lower cost and across every part of the enterprise, is to automate. Automated processes guarantee a level of consistency and accuracy in the complex enterprise that’s impossible through manual effort alone.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    IBM SoftLayer Offers Enterprises Private Links From Colos to Its Cloud

    SoftLayer, an IBM company, announced a new service that provides a private link from a customer’s existing IT infrastructure to compute resources on SoftLayer’s cloud platform.

    Using the service, called Direct Link, customers can privately connect to SoftLayer’s infrastructure from their own data centers and offices or from colocation facilities. They have to go through one of the colocation providers SoftLayer has partnered with: Equinix, Telx, CoreSite, Terremark, Pacnet, Interxion and TelecityGroup. There are 18 Direct Link network Points of Presence in non-SoftLayer data centers around the world.

    A dedicated network connection to a cloud service provider offers higher performance and better security than connecting through the public Internet. Other cloud infrastructure service providers, such as Amazon Web Services and Microsoft Azure, have made similar deals with colocation companies.

    Private connections to third-party services make cloud more digestible for the enterprise market. Colocation companies benefit because hosting PoPs for cloud providers makes their facilities more attractive to enterprise customers.

    Marc Jones, vice president of product innovation at SoftLayer, said the service has been available to select customers for several months. “This is the  official ‘productization’ of the service,” he said. “The offering is suitable for clients that want both bare metal and virtual instances as part of their cloud portfolio. Also, IBM’s large enterprise install base has been requesting this service. It builds out our enterprise portfolio of services.”

    Direct Link benefits include:

    • Higher network performance consistency and predictability
    • Streamlined and accelerated workload and data migration
    • Improved data and operational security.

    “Services like Direct Link afford customers a broader range of options as they explore how to best leverage hybrid cloud,” IDC analyst Brad Casemore said. “Enterprises welcome choice and alternatives, and direct high-speed access can accommodate various hybrid workloads, while also offering use cases for backup, disaster recovery and business continuity.”

    Microsoft has been busy adding private Azure links globally, most recently across Telecity’s European infrastructure.  AWS Direct Connect was launched in 2011 in partnership with Equinix and has been expanding since.

    Direct Link is immediately available to all SoftLayer customers, with pricing starting at $147 per month for 1 Gbps and $997 per month for 10 Gbps. Customers will not be charged for bandwidth usage.
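
    A quick back-of-the-envelope comparison of the two published price points (list prices only; any other fees are outside this calculation) shows the per-gigabit cost drops at the larger tier:

        # Per-Gbps monthly cost implied by the published Direct Link prices.
        price_1g, price_10g = 147, 997  # USD per month
        print(price_1g / 1)    # 147.0 USD per Gbps on the 1 Gbps link
        print(price_10g / 10)  # 99.7 USD per Gbps on the 10 Gbps link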

    2:00p
    Delivering Enhanced Visibility, Better SLA and Disaster Avoidance

    Your business is growing, you have more technology demands, and the industry is pushing you to make even more infrastructure decisions. These trends aren’t slowing down either. Cloud computing and complex applications are placing very large demands on the modern data center. In this case study from RF Code, we look at an organization that sought a solution for a growing resource challenge.

    Digital Fortress customers are demanding more power than ever before. While typical per-cabinet power usage projections used to fall in the 1.7-2kW range, these days 5kW per cabinet is the norm. In fact, some of Digital Fortress’s higher-density customers require 9-15kW per cabinet, with a few using as much as 25kW. With power densities like these, accurate capacity planning, reliable cooling and continuous environmental monitoring are critical.

    “Between cloud infrastructures and Bitcoin mining, density is playing a bigger part than ever before in our customers’ needs,” said Scott Gamble, IT manager at Digital Fortress. “Average power density per cabinet continues to rise and with it the criticality of delivering adequate cooling. It’s a lot of power and a lot of heat.”
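
    To put those density figures in thermal terms, the rough arithmetic below converts per-cabinet power into the heat a cooling system must reject, using the standard conversion of 1 kW to roughly 3,412 BTU/hr. The 20-cabinet row is a hypothetical example, not Digital Fortress's layout.

        # Rough heat-load arithmetic for a hypothetical row of 20 cabinets.
        KW_TO_BTU_HR = 3412  # 1 kW of IT load dissipates roughly 3,412 BTU/hr of heat
        CABINETS_PER_ROW = 20

        for per_cabinet_kw in (2, 5, 15, 25):
            row_kw = per_cabinet_kw * CABINETS_PER_ROW
            print(per_cabinet_kw, "kW/cabinet ->", row_kw, "kW per row,",
                  row_kw * KW_TO_BTU_HR, "BTU/hr to reject")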

    In addition to enhanced monitoring, Digital Fortress also needed improved, automated threat notification systems. Increased density has reduced the amount of time available to act on cooling failures. Reaction times have shrunk from hours to minutes, making early detection and immediate notification crucial.

    Throughout the selection process, Digital Fortress evaluated its options against the following criteria:

    • Ease of deployment
    • Performance and versatility
    • Ease of integration
    • Scalability
    • Total cost of ownership

    What were the results of the evaluation? Six days after deployment, RF Code helped Digital Fortress identify a failing CRAC unit before it burned out. After 10 days, Digital Fortress was able to drop SLA temperatures on the data center floor by more than 11 degrees. Over the course of four months, Digital Fortress has addressed dozens of issues and is preparing for a hot summer, better situated than ever before. Download this case study today to learn how the organization gained better, proactive insight into its data center infrastructure, and find out how to make your own environment more real-time and resilient.

    5:58p
    Pivotal Adds Automatic Multi-Cloud-Zone App Replication to PaaS

    Pivotal, the company that wants to enable old-school enterprises to build and deploy software like Google and Facebook do, rolled out the latest release of its Platform-as-a-Service offering Pivotal CF, adding more features it says enterprises will find attractive.

    The new capability for automatic replication of application instances across multiple cloud availability zones is perhaps the most important addition, since enterprises care a lot about uptime and Pivotal says the new feature ensures every application runs out of at least two geographically separated locations.

    As Dekel Tankel, Pivotal’s head of customer engagement for Cloud Foundry, explained it, the PaaS now distributes application instances to multiple areas across a company’s underlying infrastructure automatically, using availability zones defined by the customer.

    If there are 10 instances of an application, for example, five of them will be deployed in one availability zone and the other five in another availability zone, Tankel said.
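
    The even-spread behavior Tankel describes can be sketched as a simple round-robin placement across customer-defined zones. This is only an illustration of the idea, not Pivotal CF's actual scheduler.

        # Minimal sketch of spreading application instances evenly across availability zones.

        def spread_instances(num_instances, zones):
            placement = {zone: 0 for zone in zones}
            for i in range(num_instances):
                placement[zones[i % len(zones)]] += 1
            return placement

        # 10 instances across two zones land 5 in each, as in the example above.
        print(spread_instances(10, ["zone-a", "zone-b"]))  # {'zone-a': 5, 'zone-b': 5}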

    Practicing own preachings

    Pivotal CF 1.3 is the fourth release since the company launched the PaaS into general availability in November 2013. Pivotal is using the same agile software development principles it coaches its customers in.

    “We’re actually doing extreme rapid pace of innovation,” Tankel said. “Release with major features every three months. We are adopting the agile methodologies of Pivotal Labs in Cloud Foundry engineering.”

    Cloud Foundry is the open source PaaS Pivotal CF is based on. The company used to lead the open source development project single-handedly but recently created an independent foundation to steer it, so that more vendors would join.

    Pivotal timed the announcement of the latest release of its commercial PaaS product to coincide with the Cloud Foundry Summit taking place this week in San Francisco.

    Bringing enterprises into 21st century

    The company, created and majority-owned by the storage giant EMC, takes its name from Pivotal Labs, a custom software development shop EMC bought in 2012. Pivotal’s CEO is Paul Maritz, former CEO of VMware, another company in which EMC holds a controlling stake.

    Pivotal’s pitch to enterprises is the ability to develop new software and continuously deploy new features for their customers, using a combination of modern software development languages and frameworks as well as modern Big Data technologies. In April, the company announced it would offer all of its Big Data products in a single bundle on a subscription basis.

    The bundle includes Pivotal’s Greenplum database, GemFire, SQLFire, GemFire XD and HAWQ. Customers that commit to a certain subscription level also get a free license for Pivotal’s own distribution of Apache Hadoop.

    The company also offers customers ways to deploy their applications on modern infrastructure, combining in-house data centers and a variety of cloud infrastructure service providers.

    Expanding ‘data lakes’

    Another big addition in the latest release of Pivotal CF gives developers the ability to “bind” applications to enterprise “data lakes.” This means an application can simultaneously draw on a variety of data generated by the enterprise, regardless of the system the data reside on.

    Whether the data are stored in Redis, MongoDB, Neo4J, RiakCS or Cassandra, they can be bound to an application as managed services. The integration of these sources into Cloud Foundry (the foundation of Pivotal CF) is done through a service-broker API, Tankel said. These services are available already, but more are in the pipeline.
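
    In Cloud Foundry, an application sees its bound services through the VCAP_SERVICES environment variable, which carries connection credentials as JSON. The sketch below reads credentials for a bound Redis instance; the service label and credential field names vary by broker, so treat them as assumptions.

        import json
        import os

        # Cloud Foundry injects bound-service credentials via VCAP_SERVICES.
        vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))

        # "redis" is the service label assumed here; actual labels depend on the broker.
        for instance in vcap.get("redis", []):
            creds = instance.get("credentials", {})
            print(instance.get("name"), creds.get("host"), creds.get("port"))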

    7:17p
    HP Rolls Out Converged HPC Solutions, Pushes Down Cost of All-Flash Arrays

    At its annual Discover conference in Las Vegas this week, Hewlett-Packard announced solutions that enable a software-defined data center supported by cloud delivery models and built on a converged infrastructure spanning compute, storage and network technologies. The vendor positions its Converged System portfolio as pre-engineered, validated solutions optimized for critical workloads, including virtualization, cloud and Big Data.

    New HPC solutions: HP Apollo

    The company rolled out a new family of HP Apollo high performance computing (HPC) systems. HP’s presence on the Top500 list of the world’s fastest supercomputers has been growing steadily, with 196 systems on the November 2013 list. As a converged solution, the Apollo line combines modular design with advanced power distribution and cooling techniques.

    Three new offerings build on HP server architectures and address enterprise and scientific supercomputing. The Apollo 6000 system for the enterprise provides a selection of adaptors and power redundancy, allowing customization for specific workloads. With an air-cooled server rack design, it can hold up to 160 servers per rack, delivering greater performance and efficiency in less space than a typical blade solution.

    The Apollo 8000 system features patented technologies for a 100 percent liquid cooled supercomputer with built-in technology to protect the hardware. Built on a scalable rack design with up to 144 servers per rack, the system can offer four times the teraflops per rack compared to air-cooled designs.

    “Leveraging the efficiency of HP Apollo 8000, we expect to save $800,000 in operating expenses per year,” said Steve Hammond, director, Computational Science Center at NREL. “Because we are capturing and using waste heat, we estimate we will save another $200,000 that would otherwise be used to heat the building. We are saving $1 million per year in operations costs for a data center that cost less to build than a typical data center.”

    Mainstream all-flash arrays

    Citing IDC forecasts for a $1.6 billion market by 2016 for all-flash arrays, HP announced enhancements to its all-flash 3PAR StoreServ 7450 Storage array. Enhancements include hardware-accelerated, inline primary deduplication, thin cloning software, express indexing and new 1.92 TB commercial multi-level cell (cMLC) solid state drives.

    “Enterprises have seen the performance that flash can provide applications but have faced limitations that prevent broad use,” said David Scott, senior vice president and general manager for HP Storage. “By driving down the total cost of all-flash arrays using innovations that also boost performance and scale, and backed by availability guarantees, HP has developed a solution that makes all-flash arrays viable for a wide range of mainstream enterprises and service provider use that represent an expansion of the addressable all-flash market by billions of dollars.”

    Going far beyond the first generation of all-flash systems, the 7450 array is able to scale to 460 TB raw and more than 1.3 petabytes of equivalent usable capacity. When combined with the 1.92 TB cMLC drive, the cost of storage is lowered to less than $2 per usable gigabyte. With this flash-enabled focus on total cost of ownership, HP joins a growing list of vendors pushing solid state drives into the data center, while also taking aim at larger hybrid arrays such as EMC’s VNX.

    Using patented HP Adaptive Sparing technology, HP collaborated with SSD suppliers to extend usable capacity per drive up to 20 percent by reducing the amount of over-provisioned capacity typically reserved by media suppliers. HP is also backing the use of the 3PAR StoreServ Storage system architecture with a 6-Nines guarantee, which provides that customers with quad-controller or larger 3PAR StoreServ Storage arrays, including the all-flash 7450, will achieve 99.9999 percent data availability.
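
    As a quick sanity check on what a six-nines guarantee implies, 99.9999 percent availability leaves a downtime budget of roughly half a minute per year:

        # Downtime budget implied by a 99.9999 percent availability guarantee.
        seconds_per_year = 365.25 * 24 * 3600
        allowed_downtime = seconds_per_year * (1 - 0.999999)
        print(round(allowed_downtime, 1), "seconds per year")  # ~31.6 seconds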

    7:30p
    GoDaddy’s IPO Filing Reveals Nine Data Centers, Sophisticated Technology Stack

    GoDaddy, a popular web hosting service provider and domain name registrar, kicked off an initial public offering Monday, planning to raise up to $100 million.

    As often happens when companies go public, the U.S. Securities and Exchange Commission documents GoDaddy filed in conjunction with the IPO shed some light on its data center infrastructure, which is massive.

    GoDaddy’s 37,000 servers live in a total of nine facilities around the world. The company owns one of its data centers and leases the rest from wholesale providers.

    Its own data center is in Phoenix, Arizona, and it has two leased sites in the state: Scottsdale, which is also home to its headquarters, and Mesa. The others are in Los Angeles, Chicago, Ashburn, Virginia, Amsterdam and Singapore.

    GoDaddy’s own data center in Phoenix is more than 270,000 square feet in size.

    IaaS, PaaS, lots of open source tech

    In addition to the large data center footprint, the service provider has a fairly sophisticated IT architecture to deliver its services. The stack relies on a lot of open source technology.

    GoDaddy’s hosting services are supported by a single automated infrastructure built on OpenStack, the popular open source cloud architecture.

    One level up from the IaaS setup is the company’s Platform-as-a-Service, which provides an integrated set of services to its customers and enables the provider itself to build and deploy new products quickly and easily.

    GoDaddy also uses open source Apache Hadoop to store and process data it collects through web crawling, local listings, social and mobile platforms to provide business intelligence to its customers.

    The company uses Cassandra, an open source distributed database management system, to improve replication of customer data. A single Cassandra cluster can span multiple data centers, which enables replication across sites.
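
    Cassandra handles this kind of cross-site replication at the keyspace level with NetworkTopologyStrategy, which assigns a replica count per named data center. The sketch below uses the DataStax Python driver; the contact point, keyspace and data center names are placeholders, since GoDaddy's actual topology is not public.

        from cassandra.cluster import Cluster  # DataStax Python driver

        # Contact point and names below are placeholders, not GoDaddy's configuration.
        cluster = Cluster(["10.0.0.1"])
        session = cluster.connect()

        # NetworkTopologyStrategy keeps the given number of replicas in each
        # named data center, letting one logical cluster span multiple sites.
        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS customer_data
            WITH replication = {
                'class': 'NetworkTopologyStrategy',
                'phoenix': 3,
                'ashburn': 3
            }
        """)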

    Ambitious expansion plans

    The man in charge of this infrastructure is Arne Josefsberg, GoDaddy’s executive vice president and CIO. Josefsberg joined the hosting company in January after stints at ServiceNow and Microsoft.

    He was hired to help grow GoDaddy’s infrastructure as it executes on a plan to expand into 60 markets by 2015. In April, the company announced that it had added support in 14 new languages and expanded services into 17 new countries.

