Data Center Knowledge | News and analysis for the data center industry

Monday, March 20th, 2017

    12:00p
    Why Isn’t Hyperconvergence Converging?

    “Composable infrastructure, I think, is like a Rubik’s Cube,” Chris Cosgrave, chief technologist at Hewlett Packard Enterprise, said.  He was introducing his company’s strategy for hyperconvergence before an audience of IT admins, DevOps, and CIOs at the Discover conference in London back in December 2015.

    “You twizzle around any combination of storage, compute, and fabric to support the particular needs that you require there. Complexity is driven by the physical infrastructure. . .  If you look at a lot of the software stacks you get — for instance, virtualization — they don’t really help you in terms of the underlying physical infrastructure, firmware updates, compliance, etc.  This is what we’ve got to tackle here.  We’re going to have a single interface, a software-defined environment, so you abstract all of that complexity away.”

    He described how, up to that point, IT departments had built their companies’ data center infrastructure through hardware procurements and data center deals.  And they typically overprovisioned.

    The whole point of a software-defined data center, he went on, is to introduce resource automation based on the need for those resources at present, not some far-off future date.  If efficiency is the goal, why provision more resources than your applications actually require?  Moreover, why make a big pool of hardware just so you can subdivide it into clusters that are then sub-subdivided into segments?  Indeed, HPE consulted with Docker Inc. with the intent of treating containerized workloads and virtual machine-based workloads equivalently.

    This was HPE’s hyperconvergence stance in the beginning.

    “This Idea of a Federation”

    In February 2017, HPE completed its acquisition of SimpliVity, a company whose publicly stated objective five years earlier for hyperconverged infrastructure was the assimilation of the data center as we know it.  With VMware already having been folded into Dell Technologies, and with Cisco making gains in marrying SDN (a technology Cisco once shunned) with servers (another technology Cisco once shunned) with analytics (rinse and repeat), HPE was perceived as needing market parity.

    In a webinar for customers just days after the acquisition, Jesse St. Laurent, SimpliVity’s vice president for product strategy, described how servers such as its existing Hyper Converged 250 (based on HPE’s ProLiant 250) saved some customers nine-digit sums over a five-year period on their estimated budgets for hardware procurements.

    “The internals that make this possible are this idea of a federation,” explained St. Laurent.  “You simplify the management experience, but customers are managing more and more data centers.  It’s not just a single-point location; we see, more and more, multiple sites.  You have a cluster for a local site, a second cluster for [disaster recovery] or obviously, for very large customers, global networks.”

    This idea of a federation, as SimpliVity perceives it, begins with the hyperconverged appliance but then extends to whatever resources lie outside the hyperconverged infrastructure sphere, including the public cloud.  It doesn’t make much sense to completely automate the provisioning of some on-premise infrastructure resources without extending that automation to the other on-premise and all the off-premise resources as well.

    Joining St. Laurent for this webinar was HPE’s director of hyperconverged infrastructure and software-defined storage, Bharath Vasudevan.  In response to a question from the audience, Vasudevan explained why HPE would offer a single console for hyperconverged infrastructure management, separate from VMware’s vSphere.  His argument was based on the idea that senior-level IT managers need to focus on broader administrative issues, so that more junior-level staff could focus on everyday tasks like provisioning virtual machines.

    But then Vasudevan issued a warning about the way things tend to work in an IT department, and why resource provisioning typically gets elevated to a red-flag event:

    “The way developers tend to request hardware and equipment doesn’t really jive well with existing IT processes, in terms of [service-level agreements],” he said.  Thus a hyperconverged infrastructure console integrated with HPE OneView would be, as he explained it, “a way for IT to still maintain control of the environment, control of their [VM] images, security policies, data retention policies, all of that, but still allow self-service access.  And here, it’s mitigation for IT of workloads getting sent out to the public cloud, because a lot of times, once that happens, it becomes increasingly difficult to repatriate those workloads.”

    After a year and a half of hyperconverged infrastructure, HPE’s stance has evolved a bit, at least from this vantage point.  Not all infrastructure is the same, even when it’s abstracted by the software-defined model.  And the infrastructure toward which developers typically gravitate may not be the preferred kind.

    “Solving Different Problems”

    Last February, HPE entered into a global reseller agreement with Mesosphere, the company that produces the commercial edition of a scheduling system based on the open source Apache Mesos project.  Now called DC/OS, the system first came to light in 2014, and Mesosphere quickly signed on reseller partners such as Cisco and big-name customers such as Verizon.  HPE and Microsoft backed Mesosphere financially in 2016.

    From the outset, DC/OS was touted as the harbinger for a new kind of software-defined infrastructure — one where the management process is much closer to the applications.

    DC/OS includes an orchestrator that makes Docker container-based workloads distributable across a broad cluster of servers.  In the sense that it operates a server cluster as though it were a single machine, and distributes and manages these workloads in a way that perpetuates the illusion, DC/OS truly is an operating system.  It provisions resources from the underlying infrastructure to give those workloads the best environment for their needs at present.
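    To make the abstraction concrete, below is a minimal sketch of how a containerized workload might be handed to DC/OS’s Marathon scheduler over its REST API.  The endpoint URL, application name, image, and resource figures are hypothetical, and exact fields can vary by DC/OS version; this illustrates the model rather than HPE’s or Mesosphere’s documentation.

```python
# Minimal sketch: hand a containerized workload to DC/OS's Marathon scheduler,
# which decides where in the cluster the instances run. URL, app id, image,
# and resource figures are hypothetical.
import requests

MARATHON_URL = "http://dcos-master.example.com/service/marathon"  # hypothetical endpoint

app_definition = {
    "id": "/web-frontend",             # logical name; Marathon picks the nodes
    "instances": 3,                    # scheduler spreads these across agents
    "cpus": 0.5,                       # fractional CPU share per instance
    "mem": 256,                        # MB of memory per instance
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:1.11", "network": "BRIDGE"},
    },
}

resp = requests.post(f"{MARATHON_URL}/v2/apps", json=app_definition)
resp.raise_for_status()
print("Scheduled:", resp.json().get("id"))
```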

    Sound familiar?

    As part of the new agreement, HPE will pre-load DC/OS on select models of ProLiant servers.  That means a customer’s choices may include a Hyper Converged 380 currently bearing SimpliVity’s branding (though not for long, says SimpliVity) or a ProLiant 380 with DC/OS ready to go.

    Is this really a choice, one or the other?

    “I think the short answer is, they’re somewhat solving different problems,” explained Edward Hsu, Mesosphere’s vice president of product marketing, in a conversation with Data Center Knowledge.  “What DC/OS does with Apache Mesos is pool compute so that distributed systems can be pooled together.”

    That actually doesn’t sound all that different from HPE’s definition of its SDDC vision from December 2015.

    Hsu went on to say that his firm has been working with HPE to build plug-ins that could conceivably enable SimpliVity’s storage pools to be directly addressable as persistent storage volumes by containers running in DC/OS.

    “But you know, right now, in the state of maturity of the technology,” he continued, “nothing pools storage the way Mesosphere DC/OS pools compute today.  Put another way, a hyperconverged infrastructure and the storage systems that are a part of it are not completely, elastically pooled as one giant, contiguous volume yet.”

    So some DC/OS customers, Hsu said, use hyperconverged infrastructure appliances to assemble their storage layers.  But then, to make that storage contiguous under a single compute pool, they add DC/OS, which makes it addressable the way hyperconverged infrastructure originally promised, at least to container-based workloads.
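    For illustration only, the pattern Hsu describes maps roughly onto declaring an externally managed volume in a Marathon app definition, so that a container mounts storage that lives outside any single node.  The volume name, storage driver, and image below are placeholders, not a SimpliVity integration, and the exact schema varies by DC/OS release.

```python
# Sketch of a Marathon app that mounts an externally managed volume, so the
# container's data survives rescheduling onto another node. Volume name,
# driver, and image are placeholders; field names vary by DC/OS version.
app_with_storage = {
    "id": "/orders-db",
    "instances": 1,
    "cpus": 1.0,
    "mem": 1024,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "postgres:9.6"},
        "volumes": [
            {
                "containerPath": "/var/lib/postgresql/data",
                "mode": "RW",
                "external": {
                    "name": "orders-data",                 # placeholder volume name
                    "provider": "dvdi",                    # Docker Volume Driver Interface
                    "options": {"dvdi/driver": "rexray"},  # assumed storage driver
                },
            }
        ],
    },
}
```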

    Seriously?  A convergence platform really needs a piece of open source software to complete its mission?

    “A Packing Exercise”

    “If there’s anything certain about the software-defined movement, it’s that blurry lines rule the day,” said Christian Perry, research manager for IT infrastructure at 451 Research, in a note to Data Center Knowledge.

    Certainly, hyperconverged infrastructure is introducing enterprises to SDDC, especially those who have never seen it work before, he said.  But to the extent that they “embrace” it, rather than incorporating compute and network fabric resources as HPE’s Cosgrave originally envisioned, they’re stopping at storage.

    “I think platforms like Mesosphere’s DC/OS get us much closer to what we might distinguish as a software-defined data center,” Perry continued.  “There could be some overlap where this type of platform exists in an environment that is already hyperconverged, but the focus on the HCI side is primarily with storage, not the entire ecosystem.  With this [in] mind, hyperconverged probably would fit in nicely within a Mesosphere type of management environment.”

    In effect, Perry is envisioning SimpliVity as a storage service subordinate to DC/OS.  And he’s not alone.

    “Strictly speaking, HCI (hyperconverged infrastructure) is merely a packing exercise,” said Kurt Marko, principal of Marko Insights, “in which more nodes and associated server-side storage are crammed into a given rack-space unit.  However, the assumption is that a high density of modestly-sized servers isn’t useful unless they are aggregated into some sort of distributed cluster that can pool virtual resources and allocate them as needed under the control of a master scheduler.  That’s where SDI comes in.”

    Marko’s suggestion is that a management function on a hyperconverged infrastructure appliance may be too far away from the workloads to properly provision resources for them in real-time.

    Originally, HCI vendors sought to build “cloud-lite” experiences for enterprises, he said, as they started adding centralized management consoles.  But it was when they introduced storage virtualization (Nutanix in particular) that the hyperconverged infrastructure concept started taking off in the market.  Architecturally, it brought with it sophisticated features for cloud-like provisioning of compute and fabric.  But even then, he says, HCI needs a complementary platform to round out its purpose in life.

    “Expect other HCI vendors to integrate [Microsoft] Azure Stack and OpenStack into ‘insta-cloud’ products for organizations that, for whatever reason, want to operate their own scale-out cloud infrastructure,” said Marko.  “These will be complemented by container infrastructure such as the Docker suite, Mesosphere, or Kubernetes for those moving away from monolithic VMs to containers.”

    When did not converging become part of the plan for hyperconvergence?  Eric Hanselman, chief analyst at 451 Research, told us the problem may lie with an enterprise’s different buying centers — the fact that different departments still make separate purchasing decisions to address exclusively their own needs.

    “As a result, storage teams wind up being the gate,” Hanselman explained.  “If server teams feel that they’re frustrated at being able to get started when they want, in the quantities that they need, guess what?  You can buy HCI systems in a manner very similar to what you’re doing with servers today, and simply have all your own storage capacity there.  One-and-done, and off you go.”

    Hanselman describes a situation where enterprises purchase HCI servers for reasons having little or nothing to do with their main purpose: staging VM-based workloads on a combined, virtual platform.  Meanwhile, development teams invest in platforms such as DC/OS or Tectonic (a commercial Kubernetes platform made by CoreOS and offered by Nutanix) to pool together compute resources for containerized workloads.  Then, when one team needs a resource the other one has, maybe they converge, and maybe they don’t.

    “The challenge, of course, from an organizational perspective, is that you now have new storage environments that sort of randomly show up, [each of which is] tied to whatever the project happened to be,” continued Hanselman.  “So you’ve got an organizational management problem.”

    Which may have been the impetus for the creation of hyperconverged infrastructure in the first place: the need to twizzle around the variables until the workloads are running efficiently.  HPE’s Cosgrave argued that software stacks can’t help solve the problems with the underlying infrastructure.  As it turns out, they may be the only things that can.

    3:00p
    Legal Battle Over Failed Data Center Cogeneration Project Settles Out of Court

    After a lengthy and often nasty legal battle that pitted The University of Delaware against The Data Centers LLC, a Baltimore superior court judge dismissed the lawsuit last week following an undisclosed out-of-court settlement between the two parties.

    Terms of the deal prevent the plaintiff and defendant from commenting, but court documents did reveal that both sides would be responsible for their own court fees and costs.

    The Data Centers, developer of a $1.3 billion plan to build a data center and 280-megawatt cogeneration plant fueled by natural gas on the school’s STAR Campus in Newark, sued the university in 2015. The company claimed that officials succumbed to community pressure to halt the project, reneging on a signed 75-year lease agreement and offering legally defensible reasons for doing so rather than the whole truth. The Data Centers said it lost $200 million, possibly more, as a result.

    “Succinctly, the university repeatedly lied to the public to save the skins of its internal bureaucrats who had signed all the contracts to bring the project to Delaware, but who had failed to anticipate the backlash from local objectors and extremist activists,” the lawsuit stated.

    See also: Firm Behind Failed Data Center Construction Sues University

    This project’s storyline definitely had a few unpredictable twists and turns and a “not in my backyard” theme. When the plan to build a large-scale data center on 43 acres, the site of the former Chrysler auto plant, was proposed back in 2013, the community largely welcomed it. Plus, the company promised to employ 290 people. However, it was ultimately the power plant and fear of the unknown that worked up residents, environmental activists, and faculty at the university.

    Shortly after plans were made public, The Data Centers met with representatives of the Sierra Club, the environmental group, seeking its endorsement. The outcome was not what the company expected.

    “Once we realized the nature of the plan, we let them know we would not ‘endorse’ a power plant and immediately set about to let neighbors in Newark know of the plan,” the group said.

    A non-disclosure agreement between The Data Centers and the city of Newark quickly became a trust-killer for residents, who were left with the impression that developers and local officials had something to hide. Residents reached out to news media to air their grievances, and the opposition group NRAPP launched a website to publicize them.

    The decision to halt the project finally came after a group of faculty and administrators produced a report that raised concerns about the environmental impact and about how noise and pollution might affect those living nearby. The report didn’t identify any specific harmful effects, only that a question mark remained over the potential for a negative impact from the combined heat and power cogeneration plant.

    That was enough to halt the project, and the rest is history—barring any further legal action by either party.


    4:51p
    Hybrid IT Startup MuleSoft Eyes Acquisitions After IPO Success

    Alex Barinka (Bloomberg) — MuleSoft Inc., the San Francisco-based maker of cloud software, soared in its trading debut after pricing its initial public offering above the marketed share price range.

    DCK: MuleSoft’s platform interconnects disparate enterprise IT systems, both on-premises and in the cloud, with each other.

    The stock jumped 46 percent to $24.75 at the close in New York, valuing the company at about $3.1 billion. MuleSoft raised $221 million, according to a statement Thursday, pricing its shares at $17 — above the marketed range of $14 to $16 each.

    MuleSoft plans to use proceeds from the IPO for general corporate purposes, including possible acquisitions, according to the prospectus. The stock is listed on the New York Stock Exchange under the ticker MULE.

    “There’s always acquisition opportunity for us as a technology company,” Chief Executive Officer Greg Schott said in an interview at the NYSE. “The areas we look at are around security as well as analytics — those are some opportunities for us to potentially do some tuck-in acquisitions over time.”

    MuleSoft is the third technology company to go public in the U.S. this year, after  Snap Inc. raised $3.9 billion including an overallotment and sponsor-backed Presidio Inc.’s $233 million IPO. Snap, which makes the disappearing-photo app Snapchat, surged in its first two days of trading. The stock has since taken a turn, paring most of its gains to close at $19.54 on Friday, about 15 percent above its IPO price of $17 each.

    AppDynamics Inc., another tech company that started to market its shares this year, instead agreed to be acquired by Cisco Systems Inc. the day before its deal was set to price.

    MuleSoft has raised $260 million in private funding, including a $128 million cash infusion from its last financing round in May 2015. Investors include Salesforce Ventures, New Enterprise Associates, Meritech Capital Partners and Lightspeed Venture Partners.

    Goldman Sachs Group Inc., JPMorgan Chase & Co. and Bank of America Corp. led the offering.

    5:26p
    BASF to Collaborate With HPE on 1-Petaflop Supercomputer

    BASF, the world’s largest chemical maker, is turning to Hewlett Packard Enterprise for its high-performance computing needs.

    BASF recently announced a collaboration with HPE to build a 1-petaflop supercomputer for industrial chemical research at the company’s Ludwigshafen, Germany, headquarters. Once built, the new system is expected to reduce the time it takes to obtain results from several months to days, according to a press release.

    Whether the Apollo 6000-based system is used to create a patented super-elastic foam for lightweight running shoes, or to develop a three-way conversion catalyst that transforms nearly all harmful emissions from gasoline-powered vehicles into CO2, it is designed to expand BASF’s ability to run virtual experiments and simulate processes more accurately.

    “The new supercomputer will promote the application and development of complex modeling and simulation approaches, opening up completely new avenues for our research at BASF,” Martin Brudermueller, BASF CTO, said in a statement. “The supercomputer was designed and developed jointly by experts from HPE and BASF to precisely meet our needs.”

    The supercomputer will make use of Intel’s Xeon processors, commonly found in enterprise-grade servers, and the chipmaker’s Omni-Path fabric, a high-speed interconnect architecture for networking nodes in high-performance computing systems.

    In layman’s terms, this means a multitude of compute nodes can work together on the same tasks at once, reducing the time it takes to complete them.
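    As a rough illustration of that idea, the sketch below splits a single numerical task across many processes and combines the partial results.  It uses the generic MPI programming model (via the mpi4py library) rather than anything specific to the BASF system, and the workload, estimating pi by numerical integration, is just a stand-in.

```python
# Generic sketch of many nodes working on one task: each MPI process (rank)
# integrates a slice of a function, and partial sums are combined on rank 0.
# This shows the programming model, not BASF's actual workloads.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's index
size = comm.Get_size()      # total number of processes across all nodes

N = 1_000_000               # total work, divided evenly among processes

def f(x):
    return 4.0 / (1.0 + x * x)   # integrand whose integral over [0, 1] is pi

step = 1.0 / N
local_sum = sum(f((i + 0.5) * step) for i in range(rank, N, size)) * step

total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~ {total:.10f}, computed by {size} processes")
```

    Launched with something like mpirun -n 512 python pi.py, the same script spreads the loop across however many processes the cluster provides.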

    “In today’s data-driven economy, high performance computing plays a pivotal role in driving advances in space exploration, biology, and artificial intelligence,” HPE CEO Meg Whitman said in a statement. “We expect this supercomputer to help BASF perform prodigious calculations at lightning-fast speeds, resulting in a broad range of innovations to solve new problems and advance our world.”

    Last August, Whitman kicked HPE’s supercomputer expertise up a notch by purchasing Silicon Graphics International for about $275 million, a move that added high-performance computing capabilities to improve data analytics. The HPC industry is an $11 billion market with an estimated compound annual growth rate of 6 to 8 percent over the next few years, according to the market research firm IDC.

    BASF turned to HPE at least one other time in 2015 for its computing  needs, outsourcing its two Ludwigshafen data centers to the US IT company and transferring roughly 100 jobs.

    6:00p
    Self-Service Analytics Are Not Sustainable

    Roman Stanek is CEO at GoodData.

    In the last five years, Business Intelligence (BI), data and analytics vendors have all focused on delivering tools that are “self-service,” granting a typical business user the ability to approach data and analytics without a background in statistical analytics, BI, or data mining. These self-service solutions turned legacy BI on its head by offering a powerful promise: Users of all skill levels would be able to operate complex software with a few easy clicks to make radically better business decisions, freeing analysts and data scientists to focus on strategic work rather than reporting. This sounds Utopian, but there’s a problem: It’s not sustainable.

    On the surface, data democratization sounds ideal. The amount of raw data that businesses of all types generate as part of their daily operations has grown exponentially, so the idea of giving everyone in the organization (not just the “Data Elite”) the ability to derive insights and value from this information is extremely attractive. But as the flood of data continues to increase, the ability of these tools to provide clear insights is rapidly degrading. And simply trying to cram more information into these tools will only compound the problem; as Emperor Joseph told Mozart, “there are only so many notes an ear can hear over the course of an evening.”

    “By 2018, most business users will have access to self-service tools, but the fact remains that there’s too much data for the average business user to know where to start,” said Anne Moxie of Nucleus Research.  “What we are starting to see is a new generation of BI where mundane daily tasks are automated while flagging more significant anomalies for employees to focus on.”

    The fundamental problem is that “self-service” tools require business users to spend more and more of their time digging through data, but an average employee can’t be expected to reliably self-identify the intricate patterns in that data that could be either an important early warning sign or nothing at all. More importantly, they should not have to. Business users don’t need access to more raw information; they need tools that take the legwork out of data analysis and automatically surface the insights they need so they can focus on doing their jobs better.
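    As a concrete, deliberately simplified example of what automatically surfacing insights can look like, the sketch below runs an off-the-shelf anomaly detector over a daily business metric and flags only the days worth a person’s attention.  The data, metric, and contamination rate are invented for illustration and are not drawn from GoodData’s products.

```python
# Minimal sketch: instead of asking a business user to eyeball raw data,
# an anomaly detector flags the handful of days that deviate from the norm.
# The data is synthetic and the contamination rate is an assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
daily_orders = rng.normal(loc=1000, scale=50, size=365)   # a year of normal days
daily_orders[[90, 200, 310]] = [400, 1800, 350]           # three injected anomalies

X = daily_orders.reshape(-1, 1)
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)                               # -1 marks an outlier

for day in np.where(flags == -1)[0]:
    print(f"Day {day}: {daily_orders[day]:.0f} orders looks unusual -- review")
```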

    The solution to this problem lies in harnessing advances in machine learning and artificial intelligence to automate analysis and surface actionable insights where they’re needed most: at the point of work. Embedding these “Smart Business Applications” into the tools that business users are already using every day will increase the penetration of data and impact of these recommendations, without every employee in the organization needing hours of training to master BI tools.  

    Imagine how powerful it would be if, rather than having to use a separate BI platform to self-perform complex data analysis, your employees were presented with simple, reliable and machine-accurate recommendations for the next business actions they should take directly inside the core business applications they use every day. If implemented correctly, this combination of embedding and machine-learning capabilities can deliver in-context automation, recommendations and insights that can truly deliver tangible results. These Smart Business Applications can automate many of the low-level decisions that business users face, freeing them up to focus on more strategic and impactful problems. The applications are limitless, but could be game changing for industries like:

    • Financial Services: Loan officers need automated insights at scale and in real time to make the most risk-averse decisions possible. While algorithms are already being used to recommend approval or denial of loans, these are antiquated systems. Using machine learning, they can be augmented and enriched with unstructured data from social media, IoT, buying behavior and more to paint a much more accurate picture of applicants far more quickly (see the sketch after this list).
    • Retail: Scorecarding and benchmarking are not enough. Adding predictive analytics around expected customer segmentation and product demand, and combining them with real-time data about how products are selling, will give product manufacturers a better view of how they are performing and what they should change in the future.
    • Healthcare: Machine learning will allow brands to access data from across their entire provider networks to benchmark hospital and doctor performance against industry-regulated KPIs and improve operating margins.
    • Accounting: Advances in predictive analytics will allow accounts payable teams to apply lessons learned from hundreds of thousands of transactions to process invoices more efficiently and automatically flag fraudulent claims for investigation.
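    To make the first bullet a bit more tangible, here is a toy sketch of augmenting a loan-approval model with an additional behavioral signal alongside traditional credit features.  The features, synthetic data, and model choice are illustrative assumptions, not a description of any production underwriting system.

```python
# Toy sketch: a classifier trained on traditional credit features plus an
# extra behavioral signal (a stand-in for the social/IoT/purchasing data
# mentioned above). Data is synthetic; this is not a real underwriting model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
credit_score   = rng.normal(650, 80, n)
debt_to_income = rng.uniform(0.05, 0.6, n)
activity_score = rng.normal(0.5, 0.2, n)        # hypothetical behavioral signal

# Synthetic ground truth: repayment likelier with good credit, low DTI, high activity.
logit = 0.02 * (credit_score - 650) - 4.0 * debt_to_income + 2.0 * activity_score
repaid = (logit + rng.normal(0, 1, n)) > 0

X = np.column_stack([credit_score, debt_to_income, activity_score])
X_train, X_test, y_train, y_test = train_test_split(X, repaid, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```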

    With the current generation of self-service BI tools, we’re asking everyone in the organization to become data scientists and business experts. And from everything I’ve seen, that’s not reasonable, and it’s not happening. Organizations should offload the work of analyzing reports to a trusted analytics partner and focus on giving their people the tools they need to get back to work. Machines are getting better at extracting insights from complex data than humans are, and companies need to invest in BI solutions that take advantage of advances in machine learning, predictive analytics and artificial intelligence to truly deliver data in-context with clear insights and machine-accurate recommendations.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

     

    6:30p
    SaaS Keeps SMBs and Solopreneurs from Falling Behind in Cloud

    Brought to You by Talkin’ Cloud

    While large enterprises and high-tech startups instigated the SaaS infrastructure revolution and primarily benefited from it, many mainstream small and medium-size businesses (SMBs), sole proprietors and “mom-and-pop” retailers may feel like they were left behind by cloud computing. The story is more complicated than that, however. Strategically minded SMBs on Main Street have also harnessed Web 2.0 to stretch their smaller workforces and appear virtually as large as the big players.

    With the rise of Amazon Web Services (AWS), Microsoft Azure and other public cloud platforms, as well as B2B SaaS applications and more, even the solopreneurs among us can tap on-demand, online software.

    “Now that SMBs and mom-and-pop shops don’t have to have their websites hosted on GoDaddy and can go live in the AWS Cloud, they have taken a giant leap forward,” says Shawn Moore, CTO, Solodev, a web experience platform. “But someone still needs to build, manage and optimize their websites. Enter the DIY CMSes like SquareSpace, Weebly, Wix and WordPress. Now your local pizzeria can build its site in WordPress, host it free on AWS cloud computing and compete with Papa John’s and Pizza Hut.”

    The missing puzzle piece? Moore believes whoever figures out how to scale technical and marketing personnel as a commodity will win in SMB software.

    SMB Software and Services Largest Segment by 2020

    Analysts forecast that SMB purchases of software products and services will become the largest IT segment by 2020, reaching 38 percent of the market, according to International Data Corporation. But even with that buying prowess behind them, options for implementing cloud software remain limited and are not always of the highest quality, according to some consultants. Logically, this could lead management at innovative SMBs to take a cautious stance on business expansion.

    But with President Donald Trump positioned as a small-business champion, according to his spokesperson, small-business confidence has hit a high not seen since 2004, a recent survey reveals. And that confidence has spread to plans for launching cloud solutions to build commercial relationships, with the help of outsourced IT professionals, according to those consultants.

    For example, Diener Precision Pumps, an SMB manufacturer of precision piston pumps, found Salesforce the right cloud platform to get closer to customers and achieve its goals, according to Bluewolf, a strategic Salesforce consulting partner.

    “We greatly needed a system of record for quick insight and reporting of our global sales, and with the right partner we were able to transform sales operations immediately,” says Mike Gann, IT engineer, Diener Precision Pumps, referring to Bluewolf Go. “In just one month, they delivered a fast Salesforce implementation. We now have the transparency to make the most of every customer moment and achieve our business goals.”

    How SMBs Should Decide to Move to Cloud Computing

    When implementing software, the question of SaaS versus on-premises remains fundamental. The decision usually depends on the relative importance of cost, performance, control and data ownership, according to IT professionals, and that weighting varies considerably with the size of the mom-and-pop business.

    If applications run in the web browser, it may not matter whether the data behind them comes from the cloud or a local network, according to Vadim Vladimirskiy, CEO of Adar, provider of Nerdio IT-as-a-Service for SMB IT staff and their MSPs. However, with some SaaS subscriptions the cost-efficiency equation changes with scale, getting more expensive as users are added, according to Vladimirskiy.

    “But SaaS options typically have a very low barrier to entry,” Vladimirskiy says. “They don’t require hardware infrastructure to be set up, software to be installed and configured or ongoing updates to be tested and deployed. It’s typical for a small SaaS deployment to be very cost effective.”
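    Vladimirskiy’s point about scale lends itself to a back-of-the-envelope comparison: per-user subscription fees grow linearly with headcount, while an on-premises deployment front-loads most of its cost. The figures in the sketch below are invented purely to show the shape of the trade-off.

```python
# Back-of-the-envelope sketch of the SaaS-versus-on-premises cost curve.
# All prices are invented for illustration; plug in real quotes to use it.
SAAS_PER_USER_PER_MONTH = 30          # assumed subscription price
ONPREM_UPFRONT = 25_000               # assumed hardware + licensing + setup
ONPREM_MONTHLY_UPKEEP = 800           # assumed admin, power, updates

def three_year_cost(users):
    months = 36
    saas = SAAS_PER_USER_PER_MONTH * users * months
    onprem = ONPREM_UPFRONT + ONPREM_MONTHLY_UPKEEP * months
    return saas, onprem

for users in (5, 25, 50, 100):
    saas, onprem = three_year_cost(users)
    cheaper = "SaaS" if saas < onprem else "on-prem"
    print(f"{users:3d} users: SaaS ${saas:,} vs on-prem ${onprem:,} -> {cheaper}")
```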

    One of the biggest considerations for SMBs weighing SaaS is the shared platform a cloud solution provides to all of the customers running in the same instance, according to Vladimirskiy. The advantage is that any upgrade to the application simultaneously benefits everyone on the same servers and databases. But the fact that the same environment is shared by many also limits feature customization for individual customers. Basically, SMBs have to take what they can get from a SaaS solution.

    SMBs Leveraging Cloud Computing to Serve Enterprises

    MSPs that offer SaaS platforms not only give SMBs the opportunity to compete with larger, more resource-laden companies; they also enable them to make some of those big enterprises their customers. For example, Agrei Consulting, a 10-person IT company, worked with BitTitan, a Microsoft MSP platform enablement provider, to migrate a worldwide company with multiple Google Apps workloads to a virtualized Microsoft productivity suite. Specifically, Agrei Consulting used BitTitan MSPComplete to automate many of the tasks involved in migrating 40,000 Gmail accounts, which it says significantly accelerated the timeline.

    “We won the business of a global conglomerate in the media space,” says Israel Heskel, founder and CEO, Agrei Consulting. “We took on and completed the project moving a company of more than 30,000 employees across the globe to Office 365 in under four weekends—a task normally only possible for an IT service provider of 5,000 employees. This thrilled executives and delighted users with a smooth transition to Office 365. BitTitan can migrate just about anything to Office 365—whether a project for 12 or more than 150,000.”

    Virtual Assistants Help SMB Execs Delegate, Stay Up-to-Date

    In the modern workplace there is little dissent when it comes to virtualization—not only the virtualization of Microsoft technology, but the delegation of tasks as well. As many SMB executives constantly work from the road, a lot of them could use virtual assistants (VAs) to take care of mundane meeting arrangements, travel schedules, memo writing and so on. A VA not only takes on delegated work such as overseeing projects and tasks, but also keeps the executive current on market trends and options.

    “An SMB executive doing all her own work will not stay competitive,” says Melissa Smith, founder, The Personal Virtual Assistant, and author, Hire the Right Virtual Assistant. “One reason large companies are so effective and successful is because they know how to use resources. They don’t spend hundreds of thousands of dollars a year to pay executives to send emails, make calls or track payroll and billing. SMBs can take advantage of the same type of services by hiring virtual assistants.”

    Smith recommends SMBs hire VAs who communicate the same way their executives do, starting the search on platforms already in use. Whether by phone, email, text or video chat, if you do not enjoy communicating the same way, the working relationship cannot survive, according to Smith.

    “Communication is so important when working with a VA—whether to communicate tasks, projects or updates. VAs are essential to collaboration,” Smith says. “Tools like Asana, Trello and Slack are great for multiple updates per day and several people collaborating. It cuts emails received, status updates and helps automatically coordinate different time zones.”

    Using Cloud-based Software at Charitable NGOs

    You might think that cloud computing platforms only have uses in the for-profit world, and even then only at SMBs. Wrong. Small charitable non-governmental organizations (NGOs) also utilize the latest in corporate innovation, such as HR software. For example, Furniture Bank, a registered charity and social enterprise NGO that provides gently used furniture to formerly homeless people and others getting back on their feet, uses TINYpulse HR software to gather unvarnished feedback about employee engagement.

    “Since 2014, we’ve used TINYpulse to gauge employee happiness and keep transparency and communication high,” says Dan Kershaw, executive director, Furniture Bank. “Since we have such an important mission, keeping our workforce happy not only helps with productivity but also in recruiting, retention and employee recognition.”

    Moreover, the tool helps leaders at Furniture Bank keep everyone on the same page and minimize miscommunications, according to TINYpulse. Companies like Furniture Bank embrace the relationship between employee engagement and customer engagement as a strategic priority, according to TINYpulse.

    Making Cloud Computing Secure for Mom-and-Pops

    With all the options available to mom-and-pop shops to get on the SaaS bandwagon, once there they need to stay secure. But lacking the security staff and expertise that larger enterprises can afford, what should they do?

    “The cloud brings big advantages to mom-and-pop shops including cost savings, efficiency and agility,” says Ofer Amitai, CEO and co-founder, Portnox, provider of network access control, visibility, management and policy compliance. “However, it also brings big challenges, especially when it comes to network security, and they don’t want to spend time monitoring the network.”

    According to Amitai, these SMBs should look for a cloud-based security platform that enables continuous risk monitoring of all individual devices, responds in real-time and protects against BYOD and IoT risks.
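    As an illustration only, the continuous monitoring Amitai describes boils down to a loop that scores each device’s posture and reacts when the score crosses a threshold. The helper functions, weights, and thresholds below are hypothetical and are not Portnox’s product.

```python
# Hypothetical sketch of a continuous device-risk-monitoring loop. The helper
# functions and scoring weights are invented; a real platform would gather
# posture data from agents or network probes and enforce policy automatically.
import time

RISK_THRESHOLD = 70   # assumed cutoff for quarantining a device

def assess_device(device):
    """Toy risk score from a few posture checks (weights are assumptions)."""
    score = 0
    if not device.get("antivirus_up_to_date"):
        score += 40
    if device.get("os_patch_age_days", 0) > 30:
        score += 30
    if device.get("unknown_iot_vendor"):
        score += 30
    return score

def quarantine(device):
    print(f"quarantining {device['name']} (risk too high)")

devices = [
    {"name": "front-desk-pc", "antivirus_up_to_date": True, "os_patch_age_days": 10},
    {"name": "lobby-camera", "unknown_iot_vendor": True, "os_patch_age_days": 400},
]

while True:                            # continuous monitoring loop
    for device in devices:
        if assess_device(device) >= RISK_THRESHOLD:
            quarantine(device)
    time.sleep(60)                     # re-check every minute (illustrative)
```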

    This article originally appeared on Talkin’ Cloud.

