Data Center Knowledge | News and analysis for the data center industry
Monday, July 31st, 2017
What’s Behind AT&T’s Big Bet on Edge Computing

Hyper-scale cloud has big advantages in scale and efficiency, but some workloads need the computation done closer to the problem. That’s what AT&T is promising with its upcoming edge computing services, which will put micro data centers in its central offices (think telephone exchanges), cell towers, and small cells.
Eventually this edge computing network will use the future 5G standard to lower the latency even further. That will open up possibilities like using high-end GPUs AT&T says it will place at the edge of its network for highly parallel, near-real-time workloads. Take off-board rendering for augmented reality for example. Instead of rendering the overlay for AR frame by frame on your device, a cloud system that doesn’t have to worry about using up too much battery power could pre-render an entire scene and then quickly send what’s relevant as you turn your head.
“Today, one of the biggest challenges for phones running high-end VR applications is extremely short battery life due to the intense processing requirements,” an AT&T spokesperson told us. “We think this technology could play a huge role in multiple applications, from autonomous cars, to AR/VR, to robotic manufacturing and more. And we’ll use our software-defined network to manage it all, giving us a significant advantage over competitors.”
More Than a Dumb Pipe to Cloud
The definition of edge computing is a little fuzzy; often it refers to aggregation points like gateways or hyperconverged micro data centers on premises, and what AT&T is promising from its tens of thousands of sites “usually never farther than a few miles from our customers” is perhaps closer to fog computing.
“Edge is different things to different people. Every vendor defines the edge as where they stop making products, and for AT&T the edge of their network is the RAN (Radio Access Network),” Christian Renaud, IoT Research Director at 451 Research, told Data Center Knowledge. “They’re talking about multi-access edge computing, MEC, which is a component of fog computing. For AT&T, it’s their way of saying ‘don’t just treat us as backhaul, or as a dumb pipe to hyper-scale cloud’.”
 Microsoft employee demonstrates the Microsoft HoloLens augmented reality (AR) viewer in March 2016 in San Francisco (Photo by Justin Sullivan/Getty Images)
New categories of applications — from data analytics using information from industrial sensors to upcoming consumer devices like VR headsets — are pushing the demand for compute that’s closer to where data is produced or consumed. “This is because of applications like autonomous vehicles co-ordination — vehicle to vehicle and vehicle to infrastructure — or VR, where because of the demands of your vestibulo-ocular reflex for collaborative VR, there are fixed latencies you have to adhere to,” he explains. In other words, a VR headset has to render images quickly enough to trick the mechanism in your brain responsible for moving your eyes to adjust to your head movements.
“There are applications that demand sub-10 millisecond latency, and there’s nothing you can do to beat the speed of light and make data centers respond in five or 10 milliseconds,” Renaud said. “It’s impossible to haul all the petabytes of data off the sensors in a jet engine at the gate to the cloud and get the analysis that says the engine is OK for another flight in a 30-minute turnaround time.”
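Renaud’s speed-of-light point is easy to check with back-of-the-envelope arithmetic. The sketch below is not from the article; the distances and the assumption that light in fiber travels at roughly two-thirds of c are illustrative, and the figures cover propagation delay only, before any processing or queuing.

```python
# Back-of-the-envelope propagation delay; distances and the fiber factor are assumptions.
SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
FIBER_FACTOR = 0.67             # light in fiber travels at roughly 2/3 c (assumption)

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds, ignoring processing and queuing."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for label, km in [("edge site (5 km)", 5),
                  ("metro data center (100 km)", 100),
                  ("distant cloud region (2,000 km)", 2000)]:
    print(f"{label}: ~{round_trip_ms(km):.2f} ms round trip")
```

Even with zero compute time, a round trip to a region 2,000 km away consumes roughly 20 ms under these assumptions, which is why sub-10-millisecond applications have to land in the RAN or a nearby MEC site.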
See also: Edge Data Centers in the Self-Driving-Car Future
Physics dictates that the compute-analysis-action loop has to close nearer to where the data is produced. It needs to happen in milliseconds, preferably single-digit or low-double-digit milliseconds, and that dictates the geographical placement of edge computing capacity. It can take the shape of onboard compute on the device itself, a dusty old PC on the manufacturing floor, or a server in a colocation data center. Data from non-stationary devices has to go to the MEC (via a RAN) as its first stop, and there is plenty of opportunity for network operators to add value beyond transport by pushing compute closer to the edge.
That’s what AT&T is betting on. The company says it will be able to offer edge computing at tens of thousands of locations because it already has its network. Initially it’s targeting ‘dense urban areas’ – where it has more bandwidth and more cell tower sites.
“They have a unique geographical footprint that affords them all sorts of advantages they can exploit in the IoT compute and analysis battles to come by putting things into their small cells and macro cells and central offices, which are so geographically distributed and often placed very close to the source of data origination,” Renaud said.
The Role of Software-Defined, Virtual Networking
AT&T highlighted software-defined networking (SDN) and network functions virtualization (NFV) as key to delivering these services efficiently, and Renaud agrees. “You might have vehicles in motion, moving from cell tower to cell tower, while you’re also trying to dynamically say ‘here is your resource allocation’. The NFV and SDN piece is critical because of the coordination. If you’re shifting compute to be closest to the point of data origination to reduce latency, and that point of data origin is going 150 miles per hour — or 400 miles an hour if it’s an aircraft — that makes it a harder problem to solve for.”
 An Uber self-driving car drives down 5th Street in March 2017 in San Francisco. (Photo by Justin Sullivan/Getty Images)
AT&T CFO John Stephens told investors last week that 47 percent of its network functions are already virtualized, and he expects it to be 55 percent by the end of the year and 75 percent by 2020. Network virtualization and automation are key to lowering AT&T’s operating costs, hence its purchase of Brocade’s Vyatta network operating system, which includes the Vyatta vRouter and a distributed services platform. But it also gives AT&T the tools it needs to build edge computing services.
See also: Telco Central Offices Get Second Life as Cloud Data Centers
AT&T already offers local network virtualization with its FlexWare service, which uses a local, MPLS-connected appliance to let businesses run virtual network functions such as Palo Alto Networks or Fortinet firewalls, Juniper or Cisco routers, or Riverbed’s virtual WAN on standard servers.
“Pandora’s Box of Quality of Service”
The applications AT&T wants to eventually support are more complex, though, and will raise issues like multi-tenancy and ‘noisy neighbors’ when many latency-sensitive applications share the network.
“If you have a service that allows you to hail autonomous cars, and they’re one of multiple tenants on the MEC, on the radio access network, you have to work through issues like billing and prioritization,” Renaud said. “If you have emergency vehicles, they might need to commandeer the bandwidth, so would other vehicles have to pull over and stop? If there’s a VR gaming competition going on and everyone is hitting the network, and they need super low latencies, but you have all the cars driving themselves around in the autonomous zone, hitting the same data center’s resources and the same radio network, which one do you prioritize? We’re opening Pandora’s box of quality of service.”
AT&T may also have an advantage in rolling out the 5G technology it will need for some of the more ambitious uses of edge computing, like multiplayer VR at a low enough latency to avoid nausea. It’s already talking about expanding its fixed wireless 5G using millimeter wave (mmWave) beyond a pilot in Austin, Texas. The carrier also has the contract to design and operate the national FirstNet first-responder network (and it sees opportunities for edge computing for public safety in that network).
Running the FirstNet network gives AT&T access to the 700MHz spectrum, and it will start deploying wireless services in that spectrum by the end of 2017. AT&T will deploy LTE License Assisted Access (LAA) with carrier aggregation as part of building out the network, delivering additional bandwidth that lets it start preparing for 5G without having to do multiple updates to its cells and towers. With network virtualization, Renaud points out, “they don’t have to do truck rolls; they can update their infrastructure at the flip of a software switch”.
No Need to Wait for 5G to Arrive
But AT&T can start offering some services without waiting for 5G, he believes.
“When 5G comes, it’s going to shave a lot of the latency off and increase the speeds. If I’ve got a latency budget for an app like multi-party participatory VR with a shared 3D space, I can trim that on the transport side to the data center, and I can trim that inside the data center by using SDN profiles. So this much of that goes to TCP windowing, and this much goes to the speed of light, and this much goes to the fundamental RAN speed and the latencies there; I’m going to be shaving off as much as I can in each of those areas already. 5G is just going to shave [more latency off] on the access side to the MEC resources, to their central offices and data centers. But I can put assets in the MEC or the RAN or my data centers that are not predicated on standards bodies coming to a consensus on 5G. I can still solve problems with 4G and judicious placement of compute resources; maybe I’m able to solve 80 percent of those now.”
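Renaud’s latency-budget framing can be illustrated with a trivial bit of accounting. Every component name and millisecond figure below is a hypothetical assumption rather than an AT&T number; the point is only that access, transport, data center fabric, and processing each consume a slice of a fixed budget, and 5G mainly shrinks the access slice.

```python
# Illustrative latency budget accounting; every figure below is a hypothetical assumption.
BUDGET_MS = 20.0  # e.g., a motion-to-photon style target for shared VR

def report(label, parts):
    """Sum the component delays and show how much of the budget is left."""
    total = sum(parts.values())
    print(f"{label}: {total:.1f} ms used, {BUDGET_MS - total:+.1f} ms slack "
          f"against a {BUDGET_MS:.0f} ms budget")

report("4G + edge compute", {
    "radio access (4G)": 10.0,
    "transport to MEC / central office": 2.0,
    "data center fabric + TCP windowing": 3.0,
    "rendering / processing": 4.0,
})

report("5G + edge compute", {
    "radio access (5G)": 2.0,   # 5G mainly shrinks this slice
    "transport to MEC / central office": 2.0,
    "data center fabric + TCP windowing": 3.0,
    "rendering / processing": 4.0,
})
```

Under these made-up figures the 4G case already squeaks inside the budget once the compute is placed nearby, and 5G simply buys extra headroom, which is the argument Renaud is making.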
Yet Another Unanswered Question
It’s worth noting that AT&T’s network only covers the US, and that multinational customers will likely want this kind of service at all their locations worldwide. “Either AT&T will end up contracting with Telefonica and similar carriers to leverage comparable infrastructure, or this will be US-only until those standards bake a bit more,” Renaud notes.
AT&T couldn’t tell us more precisely when this service might go live. “We don’t have any target markets or trials to announce at this time,” the spokesperson told us. “We’re still in the early development phase. We hope to start deploying this capability over the next few years, as 5G standards get finalized, and that technology gets ready for broad commercialization.”
Raytheon’s Troubled GPS III Ground Control Network Slips Again

(Bloomberg) — Add at least nine more months of delays before the U.S. Air Force can deploy a fully capable version of Raytheon Co.’s ground system for advanced GPS satellites, a project that was already running about five years late.
The ground system, which was supposed to be in operation by October 2016 under Raytheon’s contract, now isn’t projected to be ready until at least April 2022, the Air Force said in response to an inquiry by Bloomberg News. Extending the schedule will increase the projected cost of the system’s development phase to $6 billion, up from $5.4 billion most recently and $4 billion in 2015.
Raytheon’s “Operational Control Network” of 20 ground stations and antennas worldwide is needed to take full advantage of new GPS III satellites being built by Lockheed Martin Corp. that promise greater worldwide coverage, accuracy and cybersecurity. The Global Positioning System is widely used for everything from helping the military to pinpoint airstrikes against Islamic State to allowing civilians to map street-by-street driving directions on their smartphones.
Space: The Ultimate Network Edge
The new satellites also have been beset by repeated delays, with the first launch now planned for the second quarter of 2018.
“The Air Force has implemented improvements across the program, including at Raytheon,” Captain AnnMarie Annicelli, an Air Force spokeswoman, said in an email. “These include development of contingency efforts and modernizing software development processes. GPS is important to every American, and the Air Force must get it right.”
Absorbing Overrun
The Air Force proposed the delay in the ground stations to Pentagon officials in June during a major review of the troubled program. The service said it will need to find an additional $630 million after 2019 because of the latest delay, but has funds through fiscal 2018. The service must absorb the overrun under the “cost-plus” award fee contract it signed with Raytheon in February 2010.
After a 24-month delay set last year, “further analysis” resulted “in a more realistic schedule baseline extension of 33 months,” Annicelli said. “The increased schedule was due in part to realized program technical risks, and includes hardware and software obsolescence,” she said.
The network from Waltham, Massachusetts-based Raytheon was called the Pentagon’s “No. 1 troubled program” last year by the Air Force’s head of space systems acquisition.
“We remain on track to deliver this essential, advanced capability” based on an Air Force decision in March, before the Pentagon review, “to add an additional six months of margin,” Mike Doble, a Raytheon spokesman, said in a statement. He said he couldn’t address the Air Force’s latest “pre-decisional” extension.
Raytheon’s current schedule, even with doubling its staff to more than 1,000 personnel, assumes “efficiencies from software engineering improvements, such as increased testing automation, that have not yet been demonstrated,” the Government Accountability Office said in a May report.
Microsoft Backs Kubernetes with Cloud Native Membership

Brought to you by IT Pro
It’s happened again. Microsoft has joined yet another open source group. Whatever happened to Redmond’s long-held belief that open source is a cancer? Times change, and evidently Microsoft has learned to change with them.
On Wednesday the company announced it’s joined the Cloud Native Computing Foundation as a top tier platinum member. The foundation is a project of the Linux Foundation, where Microsoft is also a platinum member. According to CNCF’s website, the membership is costing Microsoft $370,000 per year.
“We have contributed across many cloud native projects, including Kubernetes, Helm, containerd, and gRPC, and plan to expand our involvement in the future,” said Corey Sanders, Microsoft’s partner director. “Joining the Cloud Native Computing Foundation is another natural step on our open source journey, and we look forward to learning and engaging with the community on a deeper level as a CNCF member.”
See also: Microsoft Joins Hot Open Source PaaS Project Cloud Foundry
Although the foundation hosts at least 10 projects, including containerd and gRPC (which Sanders mentioned in his statement), the organization’s crown jewel is Kubernetes, which has become an essential element for managing containers. Having input into the direction of Kubernetes’ development is most of what Microsoft is buying with this membership.
Redmond considers Kubernetes an important part of both Azure and its Azure Container Service. So important that in May the company introduced Draft, an Azure tool to streamline application development and deployment into any Kubernetes cluster. And on Wednesday — the same day it joined CNCF — it announced Azure Container Instances for setting up containers without having to manage virtual machines or deal with container orchestration. If orchestration is wanted or needed, however, Microsoft has released an open source Kubernetes connector for ACI.
Kubernetes is important to Azure because of its wide use in the enterprise.
Like other cloud providers, Azure is seeking to lure enterprise customers to its public cloud, and many of those potential customers are deploying open source solutions — notably OpenStack — as the backbone of their private clouds. Because OpenStack clouds almost invariably employ Kubernetes as part of their container deployment infrastructure, supporting it is an important carrot to dangle in convincing enterprise users to consider Azure for any hybrid cloud plans — because it’s what they’re already using.
Kubernetes support also gives Azure (as well as Google Cloud Platform, IBM Bluemix and others) an edge over Amazon Web Services — and if you’re in the cloud business and your name isn’t Amazon, you need an edge over giant-in-the-room AWS. Kubernetes can be run on Amazon’s cloud, but not as easily as running AWS’s supported house-brand, Elastic Container Service. One problem the enterprise might have with ECS, however, is that it only runs on AWS; it’s not portable.
CNCF, of course, is happy to have Microsoft on board.
“We are honored to have Microsoft, widely recognized as one of the most important enterprise technology and cloud providers in the world, join CNCF as a platinum member,” said Dan Kohn, who is the foundation’s executive director. “Their membership, along with other global cloud providers that also belong to CNCF, is a testament to the importance and growth of cloud native technologies. We believe Microsoft’s increasing commitment to open source infrastructure will be a significant asset to the CNCF.”
As part of Microsoft’s Platinum membership, Gabe Monroy, the lead project manager for containers at Azure, will join CNCF’s governing board.
This article originally appeared on IT Pro.
Data Center Trends Shift Staff Workloads

Data centers are becoming lean, efficient strategic assets as they adopt cloud computing, XaaS, self-provisioning models, colocation, and other still-emerging technologies. Achieving the promise of these technologies, however, requires changing work assignments and updating skill sets.
“These trends are redefining the data center work environment by reducing the number of physical devices that need human intervention,” says Colin Lacey, vice president of Data Center Transformation Services & Solutions at Unisys. “This elevates the required skill sets from ‘racking and stacking’ to administering tools and automation.” While some hands-on work will always be required, it’s much less in highly automated or outsourced data centers.
Tasks Shift
Removing lower levels of work does free employees to focus on strategic business priorities, but it also establishes new tasks that didn’t previously exist. As Lacey explains, “Those new tasks relate to how you approach the cloud from a network, security and resilience perspective.”
“Take cloud computing as an example,” he continues. “When you move to a cloud, you immediately remove some administrative details. Infrastructure is prepositioned, and automation, monitoring and reporting capabilities already are in place. That eliminates some of the physical aspects of operating a data center, but it also brings a new set of responsibilities for the client.”
See also: How to Get a Data Center Job at Google
For example, while moving to a cloud has the potential to improve disaster recovery, that feature isn’t automatic. As Andrew Mametz, VP of partner operations and governance for Peak 10, elaborates, “We have a disaster recovery plan to guide recovery of our services, but it doesn’t extend to individual customers.”
Clients migrating to a cloud, therefore, must redesign their disaster recovery plans for that specific environment, either purchasing disaster recovery as an added service or designing a different strategy. The point is that data center managers can’t simply migrate to a cloud and think everything is done.
Security details also change. “Data centers probably will need a higher level of security in a public cloud, so data center managers must build that security into their cloud-based architecture rather than focusing strictly on internal security.”
Understanding the many layers and potential entry points into their cloud is vital. That’s true, too, for XaaS and virtual environments.
“Although data center management software or vendors promise certain levels of achievement, the data center managers sometimes realize the resources needed to attain those levels are so overwhelming that they delay or never fully adopt the solution,” points out Jeff Klaus, general manager of data center solutions at Intel Corp.
Incorporating sensors for the Internet of Things (IoT) into data centers is one example of how personnel can become overwhelmed, Klaus says. “Managers have the opportunity to simplify deployment, but deploying the tools to make their work lives easier often has been a nightmare in the past. Companies sometimes have found the path to the promised capabilities is three times more challenging than anticipated.”
See also: How to Get a Data Center Job at Facebook
As Mametz says, “If you see migrating to a cloud as a 1:1 move, you’re not taking advantage of the cloud’s benefits.” Achieving those benefits is most likely when one person is in charge of implementing the new solution to ensure it works and that its full value is realized.
“You can’t assume the existing staff can or will integrate the new solution,” Klaus says. “Adding a specific person to focus on any particular initiative adds costs, but reduces the risks of failure.”
Staffing
Changes in headcount depend on the compute solution and its role in the company. “One of the biggest misconceptions in moving to a cloud is that you’ll need fewer employees,” Mametz says. While staffers may no longer be needed to handle the physical equipment, they are needed to maintain the operating system. “They work remotely. It’s a different value proposition.”
“If an organization develops a private or hybrid cloud internally, it may need more resources to deal with the increased complexity,” Klaus says. “Likewise, using even an external cloud or SaaS as a strategic asset to gain more revenue, products or service may require more technical help.” When Intel adopted the Salesforce customer relationship management application to manage its leads, for instance, it still needed experts to adapt and manage the baseline templates to ensure they fit its mission.
The adoption of hybrid IT is transforming the role of IT itself, says Tamara Budec, VP of design and construction at cloud and co-lo provider Digital Realty. With its reliance on new technology to connect internal and external services, sophisticated approaches to data classification and service-oriented architectures, “Hybrid IT creates symmetry between internal and external IT that will drive the business and IT paradigm shift for years to come.”
Consequently, she continues, “The traditional role of the enterprise IT professional is becoming multifaceted. Workloads will move around in hybrid internal/external IT environments. For example, the network engineer and application engineer will be one and the same job.”
These data center trends mean, “The border between cloud computing and networking infrastructure will start to blur, as the distinction between networks and what connects to the cloud disappears. This will require skill sets to merge between telecommunication and cloud computing engineers.” While each of those specialists has specific knowledge from past architectures, in this new, blurry infrastructure environment, “They will need to learn to speak the same language,” Budec says.
DCM or Vendor Manager?
As tasks within the data center shift, so, too, does the role of the data center manager. “There is a slight, but important, difference in the competencies needed to manage a data center and to manage service providers,” Klaus says.
“In the past,” Lacey explains, “data center managers focused on capacity for servers, power and cooling. Today, with the cloud or hybrid model, that construct is becoming less challenging.” In the new, outsourced, automated, cloud or XaaS-based data center, managers are faced with managing the compute environment beyond their center’s physical boundaries.
“Data center managers need to adapt quickly or risk becoming extinct,” Mametz says. The term “endangered” may be more accurate. As smaller data centers outsource physical operations or are consolidated, the remaining data centers are becoming larger. Facilities of 100,000 square feet run by large teams are not uncommon.
With data center managers focusing externally, some of their senior administrators may step into the role of consultants. At Intel, for example, the data center capability planning manager not only evaluates new IT technology but also sometimes accompanies Intel’s sales force as a subject matter expert.
Governance
The ability to scale computing resources in a matter of minutes makes it easy to lose sight of what’s been requested and deployed and, importantly, what was never decommissioned. Managing this necessitates fresh attention to governance and core business issues.
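As a loose illustration of that governance gap, and not anything Unisys or the article prescribes, a provisioning log can be reconciled against a decommissioning log to flag resources that were requested, deployed, and then forgotten. The record names, dates, and format below are hypothetical.

```python
# Hypothetical reconciliation of provisioned vs. decommissioned resources;
# the names, dates and record format are invented for illustration.
from datetime import date

provisioned = {
    "vm-analytics-01": date(2017, 2, 1),
    "vm-test-sandbox": date(2017, 3, 15),
    "vm-reporting-02": date(2017, 5, 10),
}
decommissioned = {"vm-analytics-01"}

today = date(2017, 7, 31)
for name, created in sorted(provisioned.items()):
    if name not in decommissioned:
        age_days = (today - created).days
        print(f"{name}: still running after {age_days} days - review or decommission")
```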
IT governance in today’s environment requires a multidisciplinary approach that actively involves multiple departments within the organization. At Unisys, Lacey says, “We look at each tower – IT, security, finance, service, maintenance, etc. – and build linkages among them for a coordinated approach to the adoption and utilization of the new compute structure. Building a cloud without those components aligned is suboptimal.”
Retraining
The shift towards automation, including clouds, XaaS and colocation, clearly isn’t seamless. Data center employees need retraining to develop the new skills needed to operate efficiently in these streamlined environments.
“Generally, it’s the technical training people need,” Lacey says. “For example, they need to be trained to use VMware’s vCenter interfaces, and scripting and automation to support deployment and provisioning models in a more highly orchestrated fashion.”
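Lacey’s point about scripting is the kind of skill shift he means: instead of racking hardware, an administrator writes repeatable provisioning code. The sketch below is a hypothetical example rather than a Unisys or VMware tool; the REST endpoint, payload fields, and token are invented, and a real shop would target its own orchestration API (vCenter, a cloud provider, and so on).

```python
# Hypothetical provisioning script: the endpoint, payload fields and token are invented;
# a real environment would call its own orchestration API (vCenter, a cloud provider, etc.).
import requests

API = "https://provisioning.example.internal/api/v1/vms"  # hypothetical endpoint
TOKEN = "REPLACE_ME"                                       # hypothetical auth token

def provision_vms(count, template, cluster):
    """Request `count` VMs from a template and return the new VM IDs."""
    vm_ids = []
    for i in range(count):
        resp = requests.post(
            API,
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"template": template, "cluster": cluster, "name": f"web-{i:02d}"},
            timeout=30,
        )
        resp.raise_for_status()
        vm_ids.append(resp.json()["id"])
    return vm_ids

if __name__ == "__main__":
    print(provision_vms(3, template="ubuntu-16.04-base", cluster="prod-cluster-a"))
```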
Much of the necessary training is available through automation or service vendors at little or no cost. Therefore, Lacey says, “It’s not burdensome for the data centers to provide, and it opens a good career path and valuable skills for employees.”
Getting the Most from the Shift
The changes affecting data centers stem from the need for IT to become strategic differentiators for their organizations, Lacey says. “Analysts predict that companies that don’t embrace digital transformation will be left in the dust,” he underscores. Therefore, “Align the digital strategy to the business,” with input from the departments that will be affected, such as finance, sales and human resources.
The transformations underway in data centers typically begin with isolated approaches within a business unit before being deployed throughout the organization. “Moving bits and pieces of workloads allows validation, but it doesn’t allow significant business benefits until the entire process is moved,” Lacey says.
While XaaS, cloud computing and related automation and outsourcing trends certainly change the way data centers operate, the extent of the disruption depends upon the data center’s strategy. The new technology opens doors to managed security and monitoring as well as enhanced disaster recovery options and easy scalability.
Data Centers 2020…and Beyond
“For the next five years, we’ll see a significant portion of data center workloads existing in the traditional environment,” Lacey predicts. Gradually, they will bridge between traditional and cloud-based models.
As this occurs, he says, “The key challenge is how to bring hybrid workload management into the organization. An intentional strategy is critical to ensuring workloads run efficiently and to leverage the most appropriate, lowest-cost resources based on legal, regulatory and governance boundaries.”
This article was originally published in AFCOM’s Data Center Management magazine.
Sink or Swim? Five Steps to Big Data ROI

Neil Barton is CTO of WhereScape.
In 1997, we saw the first mention of the buzz phrase “big data” in a research paper for the NASA Ames Research Center. Scientists are no longer the only ones discussing its promise; the term has since become common parlance across the entire technology industry.
According to Cisco, global IP traffic will reach 3.3 zettabytes per year by 2021, skyrocketing past previous years’ data growth. Nearly 20 years after big data made its debut, we are still mulling over what to do with the all-encompassing buzzword.
The industry has seen a significant bump in investments. This year alone, $57 billion will be invested to bring the realities of big data to companies and individuals. Whether the goal is greater computational scale, algorithmic accuracy, deeper analysis or richer knowledge, the question remains: how can we leverage big data?
Here are five steps to maximize your big data investment and ensure a positive ROI:
Think Before Jumping on the Big Data Bandwagon
As with any exciting trend, people are jumping quickly on the big data bandwagon. Unfortunately, most organizations invest in big data without first identifying their actual business needs.
To be successful, organizations must sit down and determine a problem and goals before pursuing a big data initiative. It sounds obvious, yet as the concept of big data grows, so too do the problems. Determining the underlying issue will catapult any organization down the right road to a solution and success.
Understand the Catch
Much of the technology available to manage the ever-swelling pool of data is free or open source. And just like most free things in life, there is always a catch. Just because the software itself doesn’t require a payment does not make it cheap or easy to install and operate. While the tools themselves may be free, the skills required to implement, configure, debug, manage and develop are hard to find and can be expensive.
Open source big data components tend to lack the breadth and depth of operations and maintenance capabilities that most traditional platforms have had for decades. This puts additional burdens on IT resources to manage and monitor.
Organizations and their customers do not care that it is open-source. They care that the data is governed and secured.
While free may sound tempting, it’s important to compare the total costs and benefits of open source versus an enterprise-grade solution. You may find the enterprise solution is the more cost-effective and painless path to value in the long run.
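That cost comparison lends itself to simple arithmetic. The sketch below uses invented figures, since license, staffing, and operations costs vary widely, but the structure shows why a zero-dollar license does not translate into a zero-dollar solution.

```python
# Toy three-year total-cost-of-ownership comparison; every figure is invented.
def three_year_tco(license_per_year, engineers, cost_per_engineer, ops_per_year):
    """Annual license + specialist staffing + operations, over three years."""
    return 3 * (license_per_year + engineers * cost_per_engineer + ops_per_year)

open_source = three_year_tco(license_per_year=0, engineers=4,
                             cost_per_engineer=150_000, ops_per_year=100_000)
enterprise = three_year_tco(license_per_year=250_000, engineers=2,
                            cost_per_engineer=150_000, ops_per_year=50_000)

print(f"Open source stack:    ${open_source:,.0f}")
print(f"Enterprise platform:  ${enterprise:,.0f}")
```

With these made-up numbers the commercially supported platform comes out cheaper over three years because it needs fewer scarce specialists, which is exactly the trade-off described above.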
Mix and Match for the Best Combined Solution
Big data tools and products are most valuable when they are properly matched, and all have a significant role to play in the modern data ecosystem. However, it is critical to understand that they are not a cure-all. Rather, such tools are just parts of a larger and more complex ecosystem that is becoming the new blueprint for enterprises.
It’s important to research whether applications can work together to produce your desired result before adopting a new tool or product, or you may find yourself in the middle of a data junkyard. If you’re working in an open-source environment, consult with others for their experiences and best practices. Proactively seek out conflicts and potential exposures before they occur by learning from the mistakes of others.
Unfortunately, this means spending a good amount of time researching each component of your big data solution. But it’s best to research and prevent a problem rather than wasting time and money correcting an issue and repairing the trust and support of your management and customers.
Carefully Plan the Transition
While solutions like Spark and Hadoop are appealing on the surface level and relatively straightforward to implement in a development sandbox or another test environment, transitioning to production entails a level of governance and DevOps capabilities that these tools tend to lack.
This final hurdle can be a difficult step for organizations, especially large enterprises that still have governance, control, and auditing-related requirements, regardless of the underlying technologies being used.
The shift into production requires careful planning, the right skills, and the understanding to ensure success. Devote the time to develop your strategy for this part of the project; it will be a worthwhile investment.
Avoid the Resource Crunch
Traditional approaches to analyzing and deriving value from data cannot usually be applied to many of the new types of data being ingested. As a result, organizations need to adjust their culture and processes to accommodate the available new paradigms. But this can be tough when IT departments have downsized and eliminated their integration and architectural expertise.
Asking people to do new things when they are already stretched is often a recipe for disaster. For example, the notion of hand-coding and updating and maintaining the code as the underlying APIs change, or new/better components come to market, puts incredibly high demands on internal resources.
The truth is, most organizations are simply not equipped to handle this rate-of-change in the underlying technology and consequently fail.
Organizations with tight budgets and limited resources might struggle with the ongoing maintenance costs of big data projects. Look to innovative approaches, such as metadata-driven solutions, that can mitigate the time, cost and risk involved.
As big data investments continue to rise, understanding the complexities before jumping into the deep end will be critical for the high ROI organizations are seeking. Remember to look before you leap.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Work Starts to Combine eFolder, Axcient Into Business Continuity Giant

Brought to you by MSPmentor
Matt Nachtrab doesn’t have a ton of down time these days.
A week after being named chief executive officer at eFolder, the LabTech founder and former chief operating officer at ConnectWise finds himself presiding over a sprawling merger with Axcient, creating a combined company with the scale to be a top player in the data protection and business continuity solutions space.
In the wake of Thursday’s announcement of the deal, executives of the firms have begun the task of sorting out an entity with more than 4,000 partner managed services providers (MSPs), 50,000-plus customers and more than 300 employees – including more than 100 in research and development.
For now, Nachtrab said, the leadership team’s first order of business is to do no harm.
“We’re not rushing integration between the companies,” he told MSPmentor a day after the merger was announced. “Right now, it’s ‘don’t touch anything,’ don’t spook the partners; we’ve invited all of the employees to stay.”
The origins of the merger date back to February, shortly after Nachtrab joined eFolder as its chief strategy officer.
Axcient had been exploring options for raising money and connected with K1 Capital, which owns a stake in eFolder.
Principals at the private equity firm decided that, rather than a straight investment, a merger could be accretive to both companies.
A meeting was arranged, and eFolder founder and then-CEO Dr. Kevin Hoffman traveled from Denver, Colo., to Axcient’s headquarters in Mountain View, Calif.
“Kevin and our leader of the development group went to visit,” Nachtrab said. “They came back all excited and we started looking at numbers.”
From there, the technical experts from the two companies discussed product synergies, while K1 Capital helped negotiate the numbers.
“We were able to execute on it this week,” Nachtrab said.
Terms of the deal were not disclosed.
Product integrations
Hoffman launched eFolder in 2002 as a value-added reseller, then spun out a very early remote backup-for-files solution.
“It was an agent that ran on either servers or workstations,” Nachtrab said. “It was web based. They had a data center.”
In recent years, eFolder has purchased a number of companies to acquire increasingly innovative backup products.
Meanwhile, Justin Moore launched Axcient in 2006 and became a pioneer in disaster recovery as a service (DRaaS).
“Justin was always pretty obsessed with the recovery portion of it; testing the recovery, making sure you could spin back up quickly,” Nachtrab said.
Axcient’s end users are larger enterprises in need of more sophisticated DRaaS solutions, like very rapid, orchestrated backups.
Hoffman, the combined company’s chief technology officer, and Moore, the chief strategy officer, will work with the CEO to make sure the respective offerings are complementary.
“eFolder has some really good backup technologies,” Nachtrab said. “Axcient has this awesome recovery environment.”
One of the first product integrations will likely involve eFolder’s Replibit business continuity software and Axcient’s Business Recovery Cloud (BRC).
“There’s some advantages that the Replibit technology has over the BRC product, so we’re going to integrate,” Nachtrab said.
Conversely, Axcient runs much of its technology on the public cloud, while eFolder relies on three data centers.
The leadership sees an opportunity to learn from Axcient’s approach and deliver eFolder products more efficiently.
“When we grow, we have to order a new rack and someone has to come and install it; it’s pretty complicated financially to scale the data center,” Nachtrab said. “Axcient has found a way to leverage the public cloud.”
“It scales kind of seamlessly,” he continued. “There’s no buying equipment. It elastically scales as we sign up new customers.”
“Either way, our data centers will stay intact for a long time.”
Partner programs
Beyond that, the managers plan to reach out to partners for roadmap ideas.
“We’re going to meet with the partner councils of both,” he said. “I’m a big believer in strongly engaging them.”
“I can come up with a bunch of ideas by myself but usually they’re wrong,” Nachtrab added. “We’ll listen to what they need and try to get some near-term wins.”
Integrating partner programs is always a delicate dance and this case features disparate sales models.
“eFolder is way more channel-centric; they’ve always been,” Nachtrab said. “Axcient is very channel-centric but also has a sales team that goes directly to (end customers).”
Within hours of the merger’s announcement, competitor Datto issued a statement raising the specter of potential conflicts between Axcient’s direct sales operation and MSPs who partner with eFolder.
Nachtrab argues that eFolder’s MSP partners and Axcient’s direct sales side target different customers.
“There’s very little collision that occurs if your midmarket team is focused on larger customers,” he said. “The key is making sure that your direct team is targeting companies significantly larger than 100 employees.”
Nachtrab doesn’t anticipate huge demand for eFolder’s products by Axcient’s larger, direct customers, but he does see an opportunity for eFolder partners to sell Axcient enterprise solutions.
“I’d love to work with VARs and system integrators, and build a channel around that so that they could offer these products to their larger customers,” he said.
Two brands for ‘a while’
This week, Hoffman was at Axcient’s Mountain View offices, meeting employees and seeking to ease any jitteriness about the merger.
He and Nachtrab will continue to work out of Denver, and Moore will continue to be headquartered in Silicon Valley.
“I’m working with Justin on the rest of the business, outside of R&D,” Nachtrab said.
At this point, talk of combining the entities under a single brand is at the very preliminary stage, according to the CEO.
One key consideration: Axcient owns the “Axcient.com” domain, while eFolder uses a “.net” address.
“I’d like to do a market study,” Nachtrab said. “I’m kind of flexible. I think either way, both brands will survive for a while.”
To this point, much of the integration discussions have focused on the technological potential.
With the deal now final, the leadership team’s attention is quickly swinging to broader strategic concerns.
“This brings scale to both of us,” Nachtrab said.
“More than likely, we were the third or fourth player in the market with MSPs,” he continued. “Now we’re a contender and that scale will help us go at the market a little bit stronger.”
This article originally appeared on MSPmentor.