Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, April 22nd, 2014

    1:32p
    Cloud and Managed Services Provider Birch Communications to Acquire Cbeyond for $323 Million in Cash

    This article originally appeared at The WHIR.

    Birch Communications has agreed to buy Cbeyond for around $323 million in cash, according to an announcement by the Atlanta-based companies on Monday. The acquisition is the result of a six-month strategic review process in which Cbeyond evaluated a range of alternatives in addition to a sale.

    Birch Communications provides communications, cloud and managed services, which will align with Cbeyond’s cloud and data services, communications and IT services for small and mid-sized businesses. Last year, Cbeyond expanded its service portfolio to include TotalAssist Services, a strategic set of services including cloud migration and management services for SMB customers.

    The transaction will create a combined company with approximately $700 million in annual revenue, approximately 200,000 business customers and five data centers.

    Cbeyond stockholders will receive between $9.97 and $10.00 per share in cash. At the lower end, the purchase price represents a premium of 56.8 percent over Cbeyond’s stock price on Nov. 5, 2013, the trading day before Cbeyond announced it was exploring strategic initiatives, according to a statement.

    The transaction has been approved by both companies’ boards of directors and is expected to close within six months.

    “This transaction will create a nationwide communications and technology services powerhouse and significantly advances our strategy to drive top-line revenue growth by enhancing the premier communications, cloud and managed services that are available to our business customers,” Vincent M. Oddo, president and CEO of Birch said. “The combined company will have a nationwide IP-network with a significant fiber infrastructure, an extensive data center presence in multiple markets, and a relentless focus on providing superior customer service.”

    The Cbeyond deal is the latest acquisition for Birch Communications, which has acquired 21 companies in recent years.

    “The additional revenue scale, customer density, network reach, and product offerings will allow us to comprehensively meet the evolving, long-term needs of our business customers,” Oddo said. “We’re making this investment to serve our business customers in the best way possible for many years to come.”

    This story originally appeared at http://www.thewhir.com/web-hosting-news/cloud-managed-services-provider-birch-communications-acquire-cbeyond-323-million-cash

    2:07p
    iFortress Tackles Modular Market With Panel-Based Solution

    iFortress is tackling the modular market with a panel system that it says is infinitely configurable. The approach is based on pre-configured data centers built from panels that can be deployed anywhere the customer desires, and the company says it increases speed of delivery and flexibility of the solution.

    “We wanted something nimble and came up with the panel system,” said Jerry Lyons, iFortress CEO. “Risk mitigation is extremely important, so we spent over 10 years in research and development. Everything is tested and rated as an assembly to U.S. Department of Defense (DoD) standards.”

    Capacity for Now and the Future

    The company seeks to address one of the biggest problems in the data center industry: How much do you build? Ideally, you build large enough to grow into over several years, but that is capital intensive. iFortress says that because its system is panelized, you can build based on your needs today. When it’s time to expand or contract, the initial data center is kept intact; panels are attached to the exterior, and when the expansion space is ready, the dividing wall can be removed.

    “We started 14 years ago, designing solutions around what we heard, unlike most of the industry that responded around containerized,” said Lyons. “The reason that failed is because the market was asking for one thing, and the industry gave them something else. We went down to something extremely nimble.”

    Another view of the iFortress solution. (Graphic: iFortress)

    As seen above, iFortress is based around a panel system, using panels 2 feet wide and 8 inches thick. The panels and sizes are infinitely configurable. “By virtue of this panel system and its inherent properties, every assembly is airtight, watertight, green, environmentally efficient and prevents environmental threats,” said Lyons. “It enhances the market’s ability to have a pre-configured, pre-engineered data center that can be put together in advance of site selection. While a parking lot is very different than a warehouse – it doesn’t matter where we go.”

    It has two main products: xSite MCS, a server-ready solution, and iShelter MCS, in which iFortress builds the environment as a sequence of decks. “Instead of two containers together, we build each deck so that when two elements are joined, it’s all continuous,” said Lyons.

    Economics of a Flexible System

    Lyons also points to the benefits of depreciation. “When you build a traditional data center, it appreciates the value of the property, so the taxes go up,” said Lyons. “iFortress is a piece of equipment, independent of the building architecture. Property taxes stay low, and it can be depreciated like furniture.”

    The company will also lease the modular data centers. Lyons believes the system has a wide range of applications, from hospitals, industrial, pharmaceutical and government facilities to colocation and hosting.

    The technology reached full commercialization by 2010, after partial commercialization beginning in 2004. In the early stages, Iron Mountain, IBM and the U.S. government tested deployments of the solution. The largest project currently is 140,000 square feet, but typical sizes range from 3,000 to 12,000 square feet.

    Is It A Fit for Colos?

    “The colocation industry has been founded on the ‘Build it and they will come’ approach – with iFortress we’re able to engineer a facility with a specific footprint, let’s say 3,000 square feet. It allows them to go out and build 1 or 2 as speculative builds, and then they can scale their growth with the demand of their customers,” Lyons said.

    Modular solutions have been more oriented to the enterprise, while colocation hosting follows the build-it-and-they-will-come model, according to Lyons. “It’s okay for Equinix to build out 300,000 square feet, but so many other companies can’t afford that,” said Lyons. “We have focused on next-generation hosting companies. We’re also doing this internationally with European and Asian offices. There’s a legacy of challenges around building data centers. We can make the entry point so much more palatable. We can build a showroom so a customer knows what it’s getting in advance.”

    2:41p
    Making Automation Work in Your IT Department

    Jonathan Crane is Chief Commercial Officer at IPsoft. He has been a communications industry leader for more than 35 years and has held numerous executive positions at corporations such as MCI, Savvis, ROLM, Marcam Solutions and Lightstream.

    To be responsive to the business advantages brought forth by technology forces such as cloud architectures, automation advances, ubiquitous mobility and the proliferation of information, today’s IT departments must undergo a radical transformation. The establishment of an intelligent infrastructure that can anticipate and adapt to the fast rate of changing business and market demands becomes the primary objective of the CIO.

    Crucial measures of this new architecture will be reliable and predictable performance and nearly 100 percent availability, accomplished, as usual, alongside a reduction in operational costs. In order to focus on assimilating new and disruptive technologies, labor automation coupled with informed or intelligent labor will be the enabler of success in this new era.

    Getting started with automation can be a daunting endeavor, though, given the variety of tools on the market, each with its own pros and cons. Where to even start can be bewildering for many CIOs, IT managers and their respective teams, but they can begin by examining a couple of criteria.

    Criterion #1: Process Integration

    Most importantly, organizations must evaluate process integration. Do you want an automation tool that can automate single, simple tasks, or do you need a solution capable of broader application to address linked activities? More than likely, you’ll want the latter.

    For instance, allocating resources in response to capacity shortages is an event that can (and should) be automated, but it is simply one link in a chain of actions. A number of other steps will be involved: the organization will want to measure and report on application performance, identify the underlying cause of the shortage and obtain approvals for its response, all before the capacity shortage can be remedied. It’s important to remember that the end goal of automation is to go above and beyond humans’ manual capabilities; they simply aren’t able to efficiently manage end-to-end processes in one fell swoop. Selecting a solution that is capable of automating an entire process flow can deliver significantly greater value in time and cost savings.
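
    As a rough illustration of that end-to-end idea, the following Python sketch chains the steps together – measure, obtain approval, remediate, verify. The helper functions, host name and the 85 percent threshold are hypothetical stubs invented for this example, not the API of any particular automation product.

        # Hypothetical sketch: automating a capacity-shortage response as one flow.
        import random

        CPU_THRESHOLD = 0.85  # assumed utilization ceiling for this example


        def check_utilization(host):
            """Stub: a real tool would query the monitoring system here."""
            return random.uniform(0.5, 1.0)


        def request_approval(summary):
            """Stub: a real tool would open a change ticket and await sign-off."""
            print(f"[approval requested] {summary}")
            return True


        def add_capacity(host, extra_vcpus):
            """Stub: a real tool would call the provisioning layer."""
            print(f"[provisioning] {host}: +{extra_vcpus} vCPUs")


        def remediate_capacity_shortage(host):
            usage = check_utilization(host)              # 1. measure and report
            print(f"{host}: CPU at {usage:.0%}")
            if usage < CPU_THRESHOLD:
                return                                   # no shortage, nothing to do
            if not request_approval(f"{host} over {CPU_THRESHOLD:.0%}, requesting +2 vCPUs"):
                print(f"{host}: approval denied, escalating to an engineer")
                return                                   # 2. obtain approval
            add_capacity(host, extra_vcpus=2)            # 3. remediate
            print(f"{host}: post-change CPU at {check_utilization(host):.0%}")  # 4. verify


        if __name__ == "__main__":
            remediate_capacity_shortage("web-01")

    The value comes from the chaining: each step hands off to the next without an engineer stitching the pieces together by hand.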

    Criterion #2: Process Flow

    The second critical criterion to evaluate as you select an automation tool is the flow of IT processes. Are they composed of predictable, well-defined “if A, then B” tasks, like the resource provisioning example? Or are they constantly in flux, with sequences of actions differing from one day to the next? Determining which of these process frameworks characterizes your IT environment will then lead you down one of two paths: scripted automation or autonomics.

    Scripted Automation: For Controlled Environments

    Let’s say your IT department falls into the former category – it’s made up of pre-set processes that remain largely unchanged, like rebooting a server at 6:00 a.m. every day or signaling that capacity usage has passed a given threshold. The big-name IT vendors have long served standard workflows and processes like these and are well-established scripted automation vendors. They can even provide out-of-the-box functionality with scripted templates, ready-made for commonly occurring tasks.
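
    To make the “if A, then B” pattern concrete, here is a minimal sketch of the kind of fixed script a scheduler such as cron might run every few minutes. The 90 percent threshold and the print-based alert are assumptions for illustration, not a vendor template.

        # Hypothetical example of a fixed, scripted check: if usage passes a
        # threshold (A), raise an alert (B). The logic never adapts on its own.
        import shutil

        DISK_THRESHOLD = 0.90  # assumed: flag volumes that are more than 90% full


        def check_volume(path):
            usage = shutil.disk_usage(path)
            used_fraction = usage.used / usage.total
            if used_fraction > DISK_THRESHOLD:
                # A real script would page an engineer or open a ticket here.
                print(f"ALERT {path}: {used_fraction:.0%} full")
            else:
                print(f"OK {path}: {used_fraction:.0%} full")


        if __name__ == "__main__":
            check_volume("/")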

    While these automation brands can produce significant time and cost savings in repetitive and predictable IT environments, their reliance on scripted automation can become a hindrance when more complex tasks are introduced. In heterogeneous environments that cross processes and domains, engineers could spend hours or even days scripting a single, specialized automation execution, only for that process to change, requiring a modified script and putting the engineer back to square one.

    Autonomics: For Complex Environments

    For enterprises that operate in complex, ever-changing environments, scripting could turn into a full-time job, reversing the potential resource savings of automation. A better solution for these types of infrastructures is autonomics, which can essentially script itself by observing engineers’ day-to-day activity to emulate how they interpret and respond to service issues. It takes simple task execution a step further by adding a contextual element to automate the entire process based on environmental triggers. The more it “sees,” the more its knowledge base grows, and the more it is eventually able to reduce the workload of IT engineers.
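
    The following toy sketch, written for illustration only, shows the core loop that distinguishes autonomics from scripting: the agent records how engineers resolve each incident signature and replays a known fix the next time that signature appears, escalating anything it has not seen before. Real autonomic platforms add context, confidence scoring and rollback; none of the names here come from an actual product.

        # Toy illustration of an autonomic knowledge base that grows by observation.
        from collections import defaultdict


        class AutonomicAgent:
            def __init__(self):
                # Maps an incident signature to the remediations engineers applied.
                self.knowledge_base = defaultdict(list)

            def observe(self, signature, remediation):
                """Learn from an engineer's manual fix."""
                self.knowledge_base[signature].append(remediation)

            def handle(self, signature):
                """Resolve automatically if a fix is known, otherwise escalate."""
                fixes = self.knowledge_base.get(signature)
                if fixes:
                    # Replay the most recently observed remediation for this signature.
                    return f"auto-remediate: {fixes[-1]}"
                return "escalate to engineer"


        agent = AutonomicAgent()
        agent.observe("web: OutOfMemory", "restart app pool and raise heap limit")
        print(agent.handle("web: OutOfMemory"))      # handled automatically
        print(agent.handle("db: ReplicationLag"))    # unknown, goes to an engineer

    The more incidents the agent observes, the larger the fraction it can resolve without waking an engineer, which is the workload reduction described above.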

    While the concept of autonomics may seem abstract, its potential savings are very real. In some cases, it can automate up to 80 percent of low-level, repetitive processes and can reduce mean-time-to-resolution from 40 minutes to just a few minutes. With the help of autonomics, large enterprises can cut their IT staff by up to one-half, redeploying that headcount to more strategic tasks that drive greater business value.

    Scripted Automation vs. Autonomics: Their Common Ground

    With today’s IT landscape maturing by the day, achieving operational efficiency is more than just a nice touch – it’s an absolute necessity. Deploying automation is critical to improving an organization’s bottom line, but ensuring its success means finding the right solution. And that means tapping into one that removes people-intensive administration from the equation – whether that’s removing humans from repeatable environments that lend themselves to scripted automation, or removing them from the scripting process itself. No matter what kind of solution you go with, there will always come a time where human intervention is required. The key is getting the right tool to minimize it as much as possible and, in turn, maximize operational – and commercial – success.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:00p
    Violin, Microsoft Develop All-Flash Array for Windows

    Violin Memory (VMEM) announced general availability of the Windows Flash Array (WFA), an all-flash storage array designed for delivering high-performance storage for Windows Storage Server.

    Jointly developed with Microsoft, WFA is a tightly integrated combination of Windows Storage Server and Violin’s hardware and software, purpose-built for enterprise and cloud workloads such as SQL Server, Microsoft SharePoint and Windows Server with Hyper-V virtualized applications. The solution taps the storage features of Windows Storage Server 2012 R2 embedded in the WFA, including thin provisioning, data deduplication, scalability and encryption, along with space-efficient snapshots and continuous availability through Windows failover clustering.

    “Microsoft technologies, such as Windows Server, SQL Server and Microsoft SharePoint, are already adopted by enterprises worldwide and growing dramatically,” said Bill Laing, Corporate Vice President at Microsoft. “By jointly developing this highly integrated solution with Violin Memory, we are working together to provide enterprise and cloud customers with dramatically improved performance, scalability, and economics for their Windows applications — virtualized, physical and cloud.”

    In Hyper-V environments, WFA achieved up to 40 percent higher write performance in customer beta tests. Increased storage performance means enterprises can increase host server virtual machine density, bolstering their virtualization ROI while reducing CAPEX. Customers beta testing the Windows Flash Array with SQL Server 2012 reported that the WFA delivered up to two times the write performance and up to 50 percent higher read performance compared to an industry-standard all-flash array connected via Fibre Channel.

    “We’ve been fans of the software defined storage inside Microsoft Windows Server 2012 and the flash defined storage from Violin Memory ever since we tested solutions from each of these industry leaders separately,” said Brian Garrett, Vice President at ESG. ”But we’re blown away with the Violin Memory Windows Flash Array that combines the power of both. This unique partnership has created a one of a kind solution that dramatically increases the performance and efficiency of business critical application workloads like Microsoft SQL Server and VDI.”

    “The WFA solution with Microsoft’s vast suite of server applications is critical for enterprises trying to revolutionize the performance, scale and cost of data center and cloud deployments,” said Kevin DeNuccio, CEO of Violin Memory. “This collaboration with Microsoft produces the necessary integration of compute, network, applications, and storage for customers designing the next generation of cloud and virtualized solutions.”

    4:00p
    Hortonworks, Concurrent Partner to Speed App Development on Hadoop

    Hortonworks and Concurrent announced an expansion of a strategic partnership to simplify enterprise application development for data-centric applications. Hortonworks will certify, support and deliver Concurrent’s Cascading development framework for data applications on Hadoop, and the Cascading SDK will be integrated and delivered with the Hortonworks Data Platform (HDP).

    The partnership underscores the timely importance of simplifying enterprise application development for these new data-centric applications. It benefits users by combining the robustness and simplicity of Cascading with the reliability and stability of Hortonworks Data Platform.

    Upcoming releases of Cascading will also support Apache Tez. Tez is a significant development in the Hadoop ecosystem, enabling projects to meet demands for faster response times and delivering near real-time big data processing. Tez is a general data-processing fabric and MapReduce replacement that provides a powerful framework for executing a complex topology of tasks. In addition, thousands of companies that already use Cascading, Lingual, Scalding or Cascalog, or any other dynamic programming language APIs and frameworks built on top of Cascading, have the flexibility to seamlessly migrate to newer versions of HDP that support Apache Tez, with zero investment required to take advantage of this improved processing environment.

    “Hadoop unleashes insight and value from enterprise data as a core component of the modern data architecture, integrating with and complementing existing systems,” said John Kreisa, vice president of strategic marketing at Hortonworks. ”By expanding our alliance with Concurrent and integrating with the Cascading application platform, Hortonworks’ customers can now drive even more value from their enterprise data by enabling the rapid development of data-driven applications.”

    “As more enterprises realize they are in the business of data, the need for simple, powerful tools for big data application development is a must-have to survive in today’s competitive climate,” said Gary Nakamura, CEO at Concurrent. ”Our deepened relationship with Hortonworks furthers our commitment to Hadoop and drives new innovation around the development of enterprise data applications.”

    6:44p
    Google: We’ve Bought 1 Gigawatt of Renewable Energy

    Google celebrated Earth Day by announcing a 407 megawatt deal with Iowa utility MidAmerican Energy to supply wind energy to support Google’s data center campus in Council Bluffs. The agreement will power both of Google’s current facilities and also allow for future expansion. This is the company’s seventh and largest renewable energy commitment to date, bringing total renewable energy contracted to more than one gigawatt (1,000 megawatts).

    Google is no stranger to purchasing wind energy. This agreement is similar to a 2012 agreement the company made with its Oklahoma utility, the Grand River Dam Authority. The company also recently purchased the entire electricity output of four wind farms to support data operations in Hamina, Finland.

    MidAmerican Energy will sell energy to Google’s Iowa data center, bundled and tracked by renewable energy certificates generated by projects that are part of its Wind VIII program. The wind energy will come from a few different farms.

    The Google-MidAmerican Energy relationship goes back to around 2007, when Google began building its Iowa data center. Google made a commitment to carbon neutrality in 2007 and has been a big advocate of the green movement. Its size has allowed it to negotiate with power companies and convince them to go green. The company is making the process of using renewable energy easier for other companies by advocating for renewable energy tariffs.

    Facebook also deserves kudos, as it used its leverage to convince MidAmerican Energy to power its data center with wind energy when it was negotiating for its data center in Iowa. MidAmerican is investing $1.9 billion in wind power generation, placing the largest order of onshore wind turbines to help meet these two tech giants’ demands.

    Google has invested over $1 billion in 15 renewable energy projects around the world. The timing of this announcement coincides with Earth Day and follows recent praise from Greenpeace.

    If the Internet were a country, its electricity demand would rank sixth globally. Industry research estimates that Internet data will triple from 2012 to 2017, meaning the push is on to make sure renewable and green energy powers the data centers storing this mountain of information.

    6:52p
    Microsoft Azure ExpressRoute Goes Global via 16 Equinix Data Centers

    Equinix is again boosting its ability to connect enterprises directly to cloud. Microsoft Azure ExpressRoute will be available in 16 markets globally via Equinix International Business Exchange (IBX) data centers. For Microsoft, this makes Azure more competitive with AWS on the use of direct connections for private clouds.

    Similar to Amazon’s Direct Connect, Microsoft Azure ExpressRoute lets customers in multi-tenant data centers connect directly to the Azure cloud. ExpressRoute traffic does not go over the public Internet, so it offers higher security. It also provides higher-throughput, more reliable and lower-latency connections between customer data centers and Microsoft Azure. Equinix has the jump on other providers, as it is the first that will offer the service globally.

    “We are witnessing a significant shift in how enterprises operate as they adopt hybrid cloud, and they are looking to effectively address performance and security concerns often associated with the cloud, while still benefiting from the flexibility it provides,” said Chris Sharp, vice president, Cloud Innovation at Equinix. “By expanding our partnership with Microsoft, we are able to offer our customers a secure, flexible and reliable connection to the Microsoft Azure cloud in 16 strategic markets around the world. Only Equinix’s global data center footprint can provide enterprises this scale and reach with Microsoft.”

    Announced Last Year

    Equinix and Microsoft first partnered on this in 2013. At the time, Equinix didn’t announce how many locations the service would be rolled out in, but said a “small number” of customers would participate in initial beta testing prior to the official rollout. We now know the initial official rollout consists of 16 locations.

    Equinix will enable customers to directly connect to Azure in IBX data centers across five continents: North America, South America, Asia, Europe and Australia. Additionally, customers interested in previewing the solution can now connect to ExpressRoute via Equinix’s London data center (LD5), in addition to data centers in Silicon Valley and Washington, DC (Ashburn). The service will become generally available in those markets later this spring and will be rolled out in multiple metro areas in Europe, Asia-Pacific and South America throughout 2014.

    For Equinix, the addition of Azure is part of a broader strategy to forge a central role in the network connections that tie together the global cloud. The deal also boosts Microsoft Azure by exposing a huge Equinix customer base to Microsoft’s cloud. Equinix touts over 4,500 customers around the globe.

    “Enterprises are drawn to hybrid cloud to benefit from cloud computing efficiencies, while maximizing their existing infrastructure,” said Steven Martin, general manager, Microsoft Azure at Microsoft. “With 57 percent of Fortune 500 companies already using Microsoft Azure combined with Equinix’s global data center footprint, we look forward to working together to help customers bridge their cloud and on-premises technology to build hybrid environments with enterprise-grade control and reliability.”

    Microsoft Gear in Equinix Facilities

    Microsoft’s physical infrastructure for Azure ExpressRoute resides in Equinix data centers and is available via an Equinix switching fabric that provides secure connectivity and real time provisioning. Azure is commonly leveraged for key workloads that include Big Data, storage, backup and recovery, hybrid applications, productivity applications and media.

    Through Equinix data centers, Microsoft benefits from the more than 975 networks located within the facilities, as well as the ability to scale across Equinix’s global platform, which includes more than 450 cloud providers and 600 IT service providers.

    Equinix was founded with a mission to help companies and networks connect to one another in a single location, and it has since become a powerhouse in terms of connectivity. Its goal in recent years has been to help customers connect directly with clouds. Offering direct connections to Azure helps ensure the company’s place as the king of connectivity.

    7:41p
    Shoot for the Moon – How to Create Your Next-Generation Web Server Platform

    As the modern infrastructure evolves, administrators are tasked with handling more web-based workloads and a lot more data. Specific tasks and applications are creating server density challenges, and data centers may wind up compensating by improperly allocating resources. The reality is that cloud-based traffic and big data aren’t going away. In fact, a recent HP study indicates that the amount of information traversing the WAN will only continue to increase. The study states that by 2020, we will have roughly 40 zettabytes of big data to process, 50 times more than in 2010.

    Now your enterprise can deliver high-volume workloads through new computing and data center technologies. These are then coupled with a powerful logical layer to help your organization span numerous data center nodes. Infrastructure scalability is the new data center norm. To handle big data, mobility, security and an influx of user connections, your data center model will need to evolve with the demands of the industry. In this whitepaper from HP, you’ll learn how server configurations and new SDx platforms can help optimize your organization, improve app and user performance, and better align IT goals with your future business plans.

    Remember, the modern data center is experiencing all new types of challenges. These include:

    • Proliferation of cloud
    • Much more mobility and IT consumerization
    • Big data
    • Data center and infrastructure management
    • Efficiency and scale
    • Data modeling

    To address these concerns, new platforms are allowing data centers to create a very efficient infrastructure capable of hyperscale. Furthermore, we’re now incorporating technologies that help abstract the physical layer and introduce the software-defined layer. In working to accommodate the ever-evolving Internet of Things (IoT), HP created the Moonshot platform, a server platform tailored and tuned for specific cloud-based and web server solutions. These specialized, task-oriented solutions provide optimal results for a given workload. In fact, these operations can range from dedicated hosting and web front-ends to more advanced functions such as Graphics Processing Units (GPUs) and Digital Signal Processors (DSPs).

    Download this whitepaper today to learn about a powerful platform which can help revolutionize how you create your next-generation web server platform.

