Data Center Knowledge | News and analysis for the data center industry
Wednesday, May 31st, 2017
If Everything is a Service, Why Do We Need Data Centers?
Imagine a remake of the original Indiana Jones movie, Raiders of the Lost Ark. Eerie music is playing during that iconic final scene. The camera closely follows a clerk pushing a crate containing the Ark of the Covenant. It pans out. In the 21st Century version, the vast warehouse doesn’t contain endless rows of shelving and boxes. It is completely empty except for a desk, a computer and a shipping bay. Cut to a UPS truck arriving at the bay to take the crate to an unknown destination.
The IT version would be a data center manager sitting at a computer console in an otherwise empty basement. All the hardware and software that used to be neatly arranged in rows within that data center is now in the cloud. Could this be the fate that awaits us if Everything-as-a-service (XaaS) progresses to its logical conclusion?
“That vision of a manager in an empty data center could eventually apply to some businesses,” says Colm Keegan, a senior analyst at Enterprise Strategy Group. “In those cases, the data center manager’s job would be the overseer of external providers to ensure the enterprise received the performance and capabilities it required.”
This, however, would be far from a passive role. The data center manager would have to be engaged with all providers, monitoring their service levels, and constantly looking to improve performance and lower costs.
“You need visibility into operations, oversight into the health of your applications, and the ability to keep track of the usage of those services to understand why costs might be running high,” says Keegan. “You are always going to need a skeleton staff as who else is going to find out what’s causing inefficiencies such as orphan systems running in the cloud that were never decommissioned?”
The duties of the data center manager in an XaaS world, therefore, would be those of an analyst and liaison between internal business users and the cloud providers. In an initiative to add greater analytics capabilities, for example, it would be up to the data center manager to scope out the requirements and costs with the cloud provider in order to translate that business need into a technical capability. In many ways, then, the data center manager becomes the conductor of an orchestra of service “instruments” that would otherwise quickly descend into cacophony.
Another aspect of control is keeping IT in the driver’s seat and the enterprise secure. If there are official cloud providers serving the company as a whole, there are also alternatives out there seeking to tempt line-of-business managers to go rogue and use their services instead. Preventing that from happening, and closing off the insecure avenues into the enterprise it would open, requires the data center manager to stay on top of how well official service channels are meeting business objectives.
“Your internal customers care about it being easy to do business with you,” says Keegan. “If service is poor, slow, complex or pricey, they will find a way to do business with someone else.”
Hybrid Data Centers
Of course, the vision painted above may not necessarily come to pass. There will certainly be very large data centers run by service providers and IT behemoths like Google, Yahoo! and Facebook, which have discovered the benefits of building their own data centers and cutting out the middleman. Equally, some larger businesses may choose to keep a major IT arsenal in house due to compliance/security mandates, corporate preference, application cost or poor cloud service.
In many other cases, the likelihood is that a hybrid model would result in part cloud and part internal data center. Take the case of Acorda Therapeutics which has two data centers. The primary data center near New York City has 20 physical servers, 17 of which are host servers for its virtual infrastructure consisting of 360 virtual machines (VMs). It also includes two Storage Area Networks (SANs) and redundant water-cooled A/C units. Redundant UPS units can each power the data center for 30 minutes, more than enough time for the diesel backup generator to kick in, which could power the data center for 36 hours without the need to refuel. The second data center serves applications specific to its manufacturing, lab and business operations in Massachusetts. It has 30 physical servers and 12 virtual servers.
The company runs some applications in the cloud and others on premise. Cloud apps include project management, expense reporting, payroll and sales reporting. On-premise apps include file sharing (EMC Syncplicity, which can synch cloud and on-premise data), manufacturing systems, lab operations, document management, clinical trial management and drug safety.
“Many of our applications will likely stay on-premise for the foreseeable future due to the nature surrounding the sensitivity of the data and compliance requirements,” says Joshua Bauer, assistant director of network operations at Acorda Therapeutics. “Any server that connects with any physical components, such as our lab or manufacturing operations must stay on-premise. Everything else will likely be transitioned to the public cloud.”
He sees the XaaS space continuing to grow until it levels off at around 80 percent of data being hosted. The remaining 20 percent, he thinks, will be environments that are heavily regulated and/or tied to custom hardware, such as manufacturing operations or systems that serve custom-built processes.
Staying Relevant
This approach demands a shift in role for the data center manager. Keegan stressed the importance of a seamless transition between cloud and in-house data center service. The key here is being able to straddle either side and put in place tools that can work with both.
“It is likely that the traditional data center, as we know it today, will contract to support mainly mission critical workloads like online transaction processing systems, key financial applications and anything that is bound by regulatory compliance,” says Keegan. “Non-core workloads that are not directly material to business revenue like email, end-user productivity applications and CRM will increasingly be outsourced to Software as a Service (SaaS) providers.”
According to an ESG survey, the functions most likely to remain in house are accounting, financial, human resources, business intelligence, analytics, project management and industry-specific applications. Even workloads like test/dev will begin to find a home in service providers’ Platform-as-a-Service (PaaS) environments.
“Hybrid cloud computing will emerge as the preferred method for traditional enterprises to manage their business application workloads: mission critical apps on-premise and non-core apps off-premise,” says Keegan.
With so much change on the horizon, what does a data center manager do after watching his or her basement empire dwindle from a room packed with racks of servers, storage arrays, tape systems and assorted IT gear to just a few remaining racks? And what happens when, on top of that, a notice arrives saying the data center is being moved to a much smaller space?
To stay relevant, existing enterprise data centers need to be more agile and responsive to the needs of the business, and they must empower internal end users by providing many of the same self-service, on-demand infrastructure capabilities available from Amazon Web Services, Azure or Google.
“Application developers don’t have time to wait for IT to design, order, integrate, configure and provision resources over many weeks,” says Keegan. “If data centers want to survive, they have to become the ‘vendor of choice’ for these internal customers, and that means making it easy and quick for them to dial-up resources when they need them.”
That doesn’t mean only trying to keep everything in house. It is up to IT to determine when it’s best to deliver services internally or farm them out externally. But the data center manager has to know which providers to use when, and also create a framework that makes it desirable to utilize internal resources.
“IT organizations need to understand what the needs are of their internal clients and provide multiple avenues for them (hybrid cloud) to attain the services and resources they require to innovate and pursue new revenue streams for the business,” says Keegan.
Winning Strategies
Clearly, data center managers have to adapt or risk slowly fading away. Bauer’s advice is to study their opponents in this contest—the cloud providers—and see what practices they use that can be imported into the data center.
“To keep an existing enterprise data center relevant, companies should leverage technologies and practices that the larger data centers offer (on a much smaller scale of course),” says Bauer.
Much as local merchants have had to adapt in the face of competition from the Walmarts and Home Depots of the world, the key is to find a niche and deliver what the larger providers can’t: customized, more personal service, adds Bauer.
He stresses brutal honesty by data center managers in the battle of cloud versus on-premise. If they do so, they will realize that there are already many off-the-shelf, or otherwise standard applications that are well suited for the cloud. To stay ahead of the game, therefore, IT professionals who manage on-premise data centers should manage the process of transitioning these applications to the cloud, or risk being left out in the cold when someone else decides to do it on their own. This leaves IT to focus on the more customized, regulated and sensitive applications which remain in house.
But there are some cases when it could be time to abandon ship.
“It may be time to let go of an internal data center if it becomes cost-prohibitive to maintain it or there are no more applications present which cannot be hosted in XaaS,” says Bauer.
Greg Schulz, an analyst with Server and StorageIO Group, concurs.
“Some environments, big and small, will eliminate their data centers altogether by moving to an outsourced, managed or cloud service,” he says. “The key will be how data center managers and infrastructure technologists can revamp their service offerings to become a flexible, business-enabling asset instead of a cost overhead barrier to productivity.”
He lays out a series of tips on how to not only survive, but thrive in this brave new world. Some of them come straight out of the cloud service playbook:
- Study what the cloud providers offer, their dashboards, services menus, portals and cost structures.
- Add a services catalog including pricing, Service Level Agreements (SLAs) and other metrics to keep users informed (see the sketch after this list).
- Implement lower cost services with portals for self-provisioning.
- Streamline operations to remove complexity and reduce costs without cutting service capabilities.
- Move things that can easily be migrated or automated to the cloud or to other service providers, and migrate complex things that are no longer your center of attention.
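To make the services-catalog tip concrete, here is a minimal Python sketch, assuming a simple in-house catalog where each entry carries a price, an SLA target and a self-provisioning URL. The service names, prices, SLA figures and portal address are all placeholders, not any vendor's actual offering.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """One line item in an internal IT services catalog."""
    name: str
    monthly_price_usd: float   # placeholder pricing
    sla_uptime_pct: float      # promised availability
    provisioning_url: str      # hypothetical self-service portal endpoint

CATALOG = [
    CatalogEntry("Standard VM (2 vCPU / 8 GB)", 55.0, 99.9, "https://portal.example.com/vm"),
    CatalogEntry("Shared file storage (per TB)", 20.0, 99.5, "https://portal.example.com/storage"),
    CatalogEntry("Managed database instance", 180.0, 99.95, "https://portal.example.com/db"),
]

def print_catalog(entries):
    """Render the catalog the way a self-service portal might list it."""
    for e in entries:
        print(f"{e.name:<35} ${e.monthly_price_usd:>7.2f}/mo  "
              f"SLA {e.sla_uptime_pct}%  order at {e.provisioning_url}")

if __name__ == "__main__":
    print_catalog(CATALOG)
```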
Push Back
Finally, the data center manager should be willing and able to move applications from the cloud back internally if circumstances require it. Perhaps applications performance just doesn’t live up to expectations or security mandates require it to be brought inside. Just as importantly, costs can sometimes dictate what has to return to the data center.
“I’ve talked to businesses that dumped most of their usage of major cloud providers due to high costs,” says Keegan. “They liked the service, yet ended up pulling many workloads back on premise due to rising costs.”
Disgruntled Employees and Data: a Bad Combination
The impact of disgruntled individuals is as old as the history of humans. Confucius once said, “When anger rises, think of the consequences.” Although he never saw or imagined a data center, his wisdom should be carefully considered by managers of data centers.
“Data leakage by disgruntled employees is a very real problem,” says Brian Cleary, vice president at Waltham, Mass.-based Aveksa. “Organizations are struggling with the number of them who try to take confidential and highly valuable data for malicious intent or financial gain.”
Consider the following statistics from a survey of IT professionals by Ipswitch, a Lexington, Mass.-based global provider of secure file transfer solutions:
- Forty percent of employees admit to using personal email to go behind the backs of their employers and send sensitive information without being seen.
- More than 25 percent admitted to sending proprietary files to their personal email accounts, with the intent of using that information at their next place of employment.
- Nearly 50 percent of employees send classified information via standard email weekly, thereby putting payroll info, social security numbers, and financial data at risk due to lack of security.
- Forty-one percent of IT executives use personally owned external storage devices to back up work-related files monthly.
The issue is made increasingly complicated by orphaned accounts, those belonging to employees who have left the company, which remain open and accessible far too long.
“It’s absolutely critical that employees only have access to what they should have access to and nothing more,” says Cleary. The risks of disgruntled employees leaking information increase when employees gain unnecessary access privileges due to promotions or transfers within an organization.
HR Plays a Big Role
Human Resources departments should be the first line of defense for many companies. HR experts are expected to conduct thorough interviews of all candidates, using their experience to make sure that individuals being considered are honest, have impressive resumes, are there for the right reasons, and have both the right skill set and excellent references.
Next, HR should perform background checks that include credit scores and drug tests, depending on a company’s policy. This process can take from three to six weeks but pays significant dividends in identifying potentially problematic individuals.
It’s also important that HR communicates with IT on issues such as when an employee should be terminated—down to the minute—as well as how denial of access will be implemented and what other steps should be followed.
Appropriate policies and procedures should dictate the termination process to protect the organization, while an IT or operations manager needs to enforce the policies for the data center that include access control verification and no physical access without a designated escort.
One HR professional, who asked to remain anonymous, talked about a specific incident.
“Years ago, we had to let a CIO go. A CIO typically has multiple passwords and very easy access to virtually everything. We had to bring in a network specialist to make sure we had taken away his ability to get in. He was disgruntled—and so were we with him—so we suspected he might do something. We found five different ways he could get into the system. So we did an intrusion test to verify that we’d blocked those five entryways, as well as to discover whether he could find another way to get in. All this was done prior to his termination, with people who worked for him. It had to be kept extremely confidential. I don’t even think we told the people why these tests were being conducted. They thought we were just doing an intrusion test for generic security purposes, but we were really protecting ourselves against this person who had great access to everything in our system. IT and HR were very involved in coordinating this ‘underground operation.’”
The consequences we fear from unhappy employees or other internal threats can be avoided, but the price for this is vigilance. The problem itself is complex: It’s more than an IT problem or a data center problem; it is an organizational problem, and one best addressed by close coordination across departments such as HR and IT.
Best Practices
Here’s a list of best practices for mitigating IP theft, IT sabotage and fraud from CERT, home of the well-known CERT Coordination Center. Based at Carnegie Mellon University’s Software Engineering Institute, the center focuses on identifying and addressing existing and potential threats, notifying system administrators and other technical personnel of these threats, and coordinating with vendors and incident response teams to address them.
- Consider threats from insiders and business partners in enterprise-wide risk assessments.
- Clearly document and consistently enforce policies and controls.
- Incorporate insider threat awareness into periodic security training for all employees.
- Implement strict password and account management policies and practices.
- Enforce separation of duties and least privilege.
- Define explicit security agreements for any cloud services, especially access restrictions and monitoring capabilities.
- Institute stringent access controls and monitoring policies on privileged users.
- Use a log correlation engine or security information and event management (SIEM) system to log, monitor, and audit employee actions (see the sketch after this list).
- Monitor and control remote access from all end points, including mobile devices.
- Develop a comprehensive employee termination procedure.
- Implement secure backup and recovery processes.
- Develop a formalized insider threat program.
- Establish a baseline of normal network device behavior.
- Be especially vigilant regarding social media.
- Anticipate and manage negative issues in the work environment.
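As a small illustration of the SIEM and baselining items above, here is a minimal Python sketch, assuming login events have already been exported from a SIEM or log correlation engine as (user, hour-of-day) pairs. The sample data and the two-hour tolerance are made up for illustration; this is not a CERT-prescribed method.

```python
from collections import defaultdict

# Illustrative data: historical login hours per user (from a SIEM export)
# and a batch of new events to check. All values are made up.
HISTORY = [("alice", 9), ("alice", 10), ("alice", 11), ("bob", 14), ("bob", 15)]
NEW_EVENTS = [("alice", 10), ("alice", 3), ("bob", 22)]

def build_baseline(events):
    """Record the hours each user has historically logged in at."""
    baseline = defaultdict(set)
    for user, hour in events:
        baseline[user].add(hour)
    return baseline

def flag_anomalies(events, baseline, tolerance=2):
    """Flag logins more than `tolerance` hours away from anything in the user's baseline."""
    flagged = []
    for user, hour in events:
        usual = baseline.get(user)
        if usual and min(abs(hour - h) for h in usual) > tolerance:
            flagged.append((user, hour))
    return flagged

if __name__ == "__main__":
    print(flag_anomalies(NEW_EVENTS, build_baseline(HISTORY)))
    # -> [('alice', 3), ('bob', 22)]
```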
If You Think WannaCry is Huge, Wait for EternalRocks
Giridhara Raam is a Product Analyst for ManageEngine.
While the world was responding to the WannaCry attack — which only utilized the EternalBlue exploit and the DoublePulsar backdoor — researchers discovered another piece of malware, EternalRocks, which actually exploits seven different Windows vulnerabilities.
Miroslav Stampar, a security researcher at the Croatian Government CERT, first discovered EternalRocks. This new malware is far more dangerous than WannaCry. Unlike WannaCry, EternalRocks has no kill switch and is designed in such a way that it’s nearly undetectable on afflicted systems.
Stampar found this worm after it hit his Server Message Block (SMB) honeypot. After doing some digging, Stampar discovered that EternalRocks disguises itself as WannaCry to fool researchers, but instead of locking files and asking for ransom, EternalRocks gains unauthorized control on the infected computer to launch future cyberattacks.
How Dangerous Is EternalRocks?
When EternalRocks hits a computer, it downloads a Tor browser and connects that computer to its command and control (C&C) server at an unidentified location on the web. To avoid detection, EternalRocks stays dormant in the infected computer for 24 hours before activating and communicating with its C&C server.
In the early stages of the attack, EternalRocks shares an archive containing all seven exploits with its C&C server, then downloads a component called svchost.exe to execute all other actions and take over the infected system. Once that’s done, EternalRocks searches for open SMB ports to infect other vulnerable computers.
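Because EternalRocks spreads by hunting for reachable SMB ports, one practical defensive step is to audit which machines on your own network expose TCP port 445. Below is a minimal Python sketch using only the standard library; the subnet and timeout values are placeholders, and it should only be run against networks you are authorized to scan.

```python
import socket
from ipaddress import ip_network

def hosts_with_open_smb(cidr="192.168.1.0/28", timeout=0.5):
    """Return hosts in `cidr` that accept TCP connections on port 445 (SMB)."""
    exposed = []
    for host in ip_network(cidr).hosts():
        try:
            with socket.create_connection((str(host), 445), timeout=timeout):
                exposed.append(str(host))   # connection succeeded: SMB port is reachable
        except OSError:
            pass                            # closed, filtered or unreachable
    return exposed

if __name__ == "__main__":
    for host in hosts_with_open_smb():
        print(f"{host} exposes SMB (TCP 445) - verify patch level and that SMBv1 is disabled")
```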
One of the main features of EternalRocks is that it can turn into any major cyber weapon after successfully hijacking a system. For instance, it can be converted into either ransomware or a Trojan to cause more damage.
EternalRocks exploits seven vulnerabilities, including:
- EternalBlue – SMBv1 exploit tool
- EternalRomance – SMBv1 exploit tool
- EternalChampion – SMBv2 exploit tool
- EternalSynergy – SMBv3 exploit tool
- SMBTouch – SMB reconnaissance tool
- ArchTouch – SMB reconnaissance tool
- DoublePulsar – backdoor Trojan
EternalBlue, EternalChampion, EternalSynergy and EternalRomance are designed to exploit vulnerable computers, while DoublePulsar is used to spread the worm across networks. EternalRocks is far deadlier than WannaCry. Security professionals have even named it the “Doomsday Worm.”
Escape Cyberthreats With Proper Patch Management Practices
With new malware being unleashed every day since WannaCry, enterprises are looking for security solutions that can help them stay secure in spite of all these attacks. Experts suggest that employing proper patch management procedures can keep your network and devices safe from unwanted security breaches.
First WannaCry, then Adylkuzz, and now EternalRocks — all due to a single leak of NSA hacking tools. The whole world witnessed WannaCry’s impact when it used just two SMB vulnerabilities; imagine what EternalRocks can do with seven. Security researchers are still investigating EternalRocks. Until they neutralize the threat, you can stay safe and secure by staying on top of patch management.
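As a small example of what staying on top of patch management can look like in practice, the hypothetical Python sketch below shells out to the built-in Windows wmic utility to list installed hotfixes and compares them against a list of required KB identifiers. The KB numbers shown are placeholders; substitute the MS17-010 identifiers that apply to your Windows build, and note that wmic is deprecated on the newest Windows releases.

```python
import subprocess

# Fill in the MS17-010 KB identifiers that apply to your Windows build;
# the two below are placeholders for illustration only.
REQUIRED_KBS = {"KB0000001", "KB0000002"}

def installed_hotfixes():
    """List hotfix IDs reported by the built-in `wmic qfe` command (Windows only)."""
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

def missing_patches(required=REQUIRED_KBS):
    """Return the required KBs that are not yet installed on this machine."""
    return required - installed_hotfixes()

if __name__ == "__main__":
    gaps = missing_patches()
    print("Missing patches:", ", ".join(sorted(gaps)) if gaps else "none")
```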
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Morgan Stanley’s 16,000 Human Brokers Get Algorithmic Makeover
Hugh Son (Bloomberg) — Call them cyborgs. Morgan Stanley is about to augment its 16,000 financial advisers with machine-learning algorithms that suggest trades, take over routine tasks and send reminders when your birthday is near.
The project, known internally as “next best action,” shows how one of the world’s biggest brokerages aims to upgrade its workforce while a growing number of firms roll out fully automated platforms called robo-advisers. The thinking is that humans with algorithmic assistants will be a better solution for wealthy families than mere software allocating assets for the masses.
At Morgan Stanley, algorithms will send employees multiple-choice recommendations based on things like market changes and events in a client’s life, according to Jeff McMillan, chief analytics and data officer for the bank’s wealth-management division. Phone, email and website interactions will be cataloged so machine-learning programs can track and improve their suggestions over time to generate more business with customers, he said.
“We’re desperately trying to pattern you and your behavior to delight you with something you may not have even been asking for, but based on what you have been doing, that you might find of value,” McMillan said in an interview. “We’re not trying to sell you, we’re trying to find the things you want and need.”
See also: JPMorgan Marshals an Army of Developers to Automate Finance
Faced with competition from cheaper automated wealth-management services and higher expectations set by pioneering firms like Uber Technologies Inc. and Amazon.com Inc., traditional brokerages are starting to chart out their digital future. It turns out that the best hope human advisers have against robots is to harness the same technologies that threaten their disruption: algorithms combined with big data and machine learning.
The idea is that advisers, who typically build relationships with hundreds of clients over decades, face an overwhelming amount of information about markets and the lives of their wealthy wards. New York-based Morgan Stanley is seeking to give humans an edge by prodding them to engage at just the right moments.
“Technology can help them understand what’s happening in their book of business and what’s happening with their clients, whether it be considering a mortgage, to dealing with the death of a parent, to buying IBM,” McMillan said. “We take all of that and score them on the benefit that will accrue to the client and the likelihood they will transact.”
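McMillan's description, client benefit weighted by the likelihood to transact, amounts to ranking candidate actions by expected value. The Python sketch below is not Morgan Stanley's system; it is a minimal illustration of that scoring idea, with made-up actions, benefit scores and probabilities.

```python
# Rank candidate "next best actions" by expected value:
# estimated client benefit weighted by the likelihood the client transacts.
candidate_actions = [
    # (action, estimated_benefit_score, probability_of_transacting) - all illustrative
    ("Suggest refinancing the mortgage", 0.9, 0.30),
    ("Send a birthday reminder", 0.2, 0.95),
    ("Propose rebalancing into bonds", 0.7, 0.50),
    ("Flag estate-planning review after a death in the family", 1.0, 0.40),
]

def rank_actions(actions):
    """Sort actions by benefit x probability, highest expected value first."""
    return sorted(actions, key=lambda a: a[1] * a[2], reverse=True)

for action, benefit, prob in rank_actions(candidate_actions):
    print(f"{action:<60} expected value = {benefit * prob:.2f}")
```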
Morgan Stanley will pilot the program with 500 advisers in July and expects to roll it out to all of them by year-end.
Additional high-tech tools are coming: McMillan and others are working on an artificial intelligence assistant — think Siri for brokers — that can answer questions by sifting the firm’s mountain of research. (The bank produces 80,000 research reports a year.) The brokerage also is automating paper-heavy processes like wire transfers and creating a digital repository of client documents, such as wills and tax returns. Established advisers tend to be older, so Morgan Stanley is hiring associates to train those who need help.
The technology means that for the first time in decades, the balance of power between financial advisers and their employers may shift. For years, top advisers could command multimillion-dollar bonuses by jumping to a competitor. That slowed to a crawl this year because of regulatory changes, and now the technological push will further the trend, according to Kendra Thompson, a managing director at Accenture Plc.
Bonuses Obsolete
Backed by a firm’s algorithms, “advisers are going to be part of a value proposition, rather than the service conduit for the industry,” Thompson said. “The cutting of the bonus check, it’s nearly over.”
Morgan Stanley isn’t swearing off robo-advisers, either. It plans to release one in coming months, along with rivals Bank of America Corp., Wells Fargo & Co. and JPMorgan Chase & Co. The technology was pioneered by startups Wealthfront Inc. and Betterment LLC and went mainstream at discount brokers Charles Schwab Corp. and Vanguard Group Inc. Robos could have $6.5 trillion under management by 2025, from about $100 billion in 2016, according to Morgan Stanley analysts.
An in-house robo-adviser and a learning machine that acquaints itself with rich clients might alarm advisers who plan to keep working for decades. McMillan is adamant that the flesh-and-blood broker will be needed for years to come because the wealthy have complicated financial planning needs that are best met by human experts.
“When I talk to financial advisers, they’re always like, ‘Is this going to put me out of business?’” he said. “That’s always the big elephant in the room. I can tell you factually that we are a long ways away from that.”
GE Bets on LinkedIn’s Data Center Standard for Predix at the Edge
LinkedIn is spearheading a new open source standard for the way servers are designed and deployed in data centers, and it has a big partner to help grow an ecosystem around the standard. GE Digital, General Electric’s software unit, is planning to adopt it for deploying edge data center solutions for users of Predix, its industrial Internet of Things platform.
Whether it’s collecting and analyzing sensor data from a chemical manufacturing plant, an off-shore oil rig, or a jet engine, near-real-time feedback based on the analysis means there has to be some computing muscle close to where the data is generated. Even with the fastest networks, the time it would take for data to travel from a factory or an aircraft to a central data center hundreds or thousands of miles away and back would simply be too long.
“We have 1 to 2 millisecond latency [limit] to figure out if a turbine’s going to start malfunctioning,” Darren Haas, GE Digital’s senior VP for cloud and data, said in an interview with Data Center Knowledge. Haas joined GE last year after six years as head of cloud engineering at Apple, which he joined after Apple acquired Siri, Inc. He was part of the original team of engineers that built the technology that powers the world’s most famous virtual assistant.
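A back-of-the-envelope calculation shows why such a tight budget rules out a distant central data center: light in optical fiber travels at roughly 200,000 km per second (about two-thirds of its speed in a vacuum), so propagation delay alone, before any processing or queuing, consumes a 1-to-2-millisecond budget within a couple of hundred kilometers. The Python sketch below uses that approximate figure.

```python
FIBER_SPEED_KM_PER_MS = 200.0   # ~2/3 the speed of light in a vacuum

def round_trip_ms(distance_km):
    """Round-trip propagation delay over fiber, ignoring processing and queuing."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for km in (10, 100, 500, 1600):   # 1600 km is roughly 1,000 miles
    print(f"{km:>5} km away -> ~{round_trip_ms(km):.1f} ms round trip")
```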
GE’s solution to the latency problem is to ship a “Predix box,” which will combine all the hardware and software needed for processing the data on-site, to whatever location a client needs that computing muscle in, Haas said. “We’re going to get Predix completely loaded on it and just drop-ship them everywhere. The locations we’re looking at are all across the world.”
GE has a lot riding on Predix, the crown jewel of a software unit it expects to grow from $6 billion last year to $15 billion in 2020. Industrial IoT is viewed as core to the next chapter in the American titan’s history. Predix is a global cloud platform designed for developers to build and deploy industrial IoT applications. GE uses Microsoft Azure and may use Amazon Web Services in the future to host the core Predix platform, which will communicate with edge nodes at customer sites, Haas said.
He expects LinkedIn’s hardware standard, called Open19, to make the process of deploying those edge nodes easier, because the hardware will be the same, regardless of where it’s being installed. Predix is designed to enable developers to build software that can move between different “form factors, environments, and regions, but we still wrestle with different standards and systems by node, region, and vendor,” Haas said in a statement. The standard will “allow us to deliver racks quickly, reduce deployment costs, and have a wider inventory, making sourcing easier than custom solutions, regardless of environment.”
In other words, if numerous hardware makers around the world adopt the standard – and several of them have already put that process in motion – GE will not be limited to the sales and manufacturing cycles of one or two suppliers and their ability to deliver globally.
The Ecosystem is Everything
Open19 describes a cage that can be installed in a standard 19-inches-wide data center rack and filled with standard “brick” servers of various default widths and heights (half-width, full-width, single-rack unit height, double height). It also includes two power-shelf options, a single network switch for every two cages, and cable-less power and network connectors on the servers that plug into a shared backplane. A data center technician can quickly screw the cage into a rack and slide the bricks in, without the need to connect cables for every box.
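To make the brick-and-cage idea concrete, here is a minimal Python sketch of a capacity check for one cage. Treating the cage as a grid of half-width, 1U slots, along with the particular slot counts used, is an illustrative assumption for this sketch, not the Open19 specification itself.

```python
from dataclasses import dataclass

@dataclass
class Brick:
    """An Open19-style server brick, measured in half-width columns and rack units."""
    name: str
    width_halves: int   # 1 = half-width, 2 = full-width
    height_u: int       # 1 = single rack unit, 2 = double height

def bricks_fit_in_cage(bricks, cage_height_u=8, columns=2):
    """Check whether the bricks' total slot demand fits a cage of the assumed size.

    This is only an area check; it ignores the geometry of packing bricks into rows.
    """
    capacity = cage_height_u * columns                     # half-width x 1U slots
    demand = sum(b.width_halves * b.height_u for b in bricks)
    return demand <= capacity

inventory = [Brick("compute", 1, 1)] * 10 + [Brick("gpu", 2, 2)] * 2
print(bricks_fit_in_cage(inventory))   # 10 + 8 = 18 slots needed vs 16 -> False
```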
Hardware based on the standard isn’t yet production-ready, Yuval Bachar, LinkedIn’s principal engineer for global infrastructure architecture and strategy and Open19’s key advocate, said. Earlier this month, LinkedIn, together with GE, Hewlett Packard Enterprise, and a number of other hardware and software makers, launched a non-profit foundation to oversee further development of the standard and, importantly, build an ecosystem of vendors and end users around it. If that ecosystem doesn’t gain a certain critical mass, Open19 will remain little more than LinkedIn’s custom hardware spec. In other words, not a standard.
But there’s a lot of excitement about the effort among vendors, with Flex, the electronics manufacturing giant formerly known as Flextronics, using it as a platform to enter the data center market as a vendor that sells hardware directly to end users rather than simply manufacturing it on behalf of other sellers. HPE is planning to make its line of hardware products for hyper-scale data centers called Cloudline compliant with Open19, Kara Long, an HPE senior director who oversees hyper-scale product management, said.
Both Flex and the Chinese hardware maker Inspur had prototype Open19 gear on display at the foundation’s launch event held at Flex offices in Santa Clara, California. Vendors involved in the launch also included Supermicro, Wiwynn, hyve, QCT, Broadcom, Marvell, Cavium, Schneider Electric, and Vapor IO, among others.
OCP-Like Prices at Lower Purchase Volumes
After conversations with vendors about Open19 hardware for Predix, Haas expects to buy it at a price that’s competitive with hardware designed to specifications of the Open Compute Project (OCP), an established open source hardware design community spearheaded by Facebook. But vendors selling OCP gear (many of them are now also involved in Open19) are mainly interested in selling large volumes to hyper-scale buyers, such as Facebook and Microsoft, at the steep discounts that come with large-volume orders.
Unlike most OCP-compliant products, designed for custom OCP racks and electrical distribution, LinkedIn developed Open19 so that it can be deployed in any standard data center, making it more feasible for vendors to manufacture and sell the gear in smaller volumes. And Haas expects to get OCP-like prices with “very minimal purchase size.” GE needs to start rolling out the edge infrastructure for Predix fairly quickly, he said.
Read more: LinkedIn’s Data Center Standard Aims to Do What OCP Hasn’t
Zero-Touch Edge Nodes with AI Capabilities
The plan is to make Predix boxes as hands-off for customers as possible. “We don’t want any of the people that we drop these boxes on to have to do anything, so it has to be zero-touch, full automation,” Haas said.
Many of the edge nodes will have to provide high enough performance to handle Machine Learning workloads, he said. Several people involved in Open19 have indicated that high-density GPU servers for Machine Learning that are compliant with the standard are being considered.
Late last year, GE Digital acquired a startup called Wise.io to accelerate development of Predix Machine Learning capabilities. While a lot of the development and training of Machine Learning models for Predix will be done in the cloud, with the final models shipped out to the edge, Haas expects some of those edge workloads to require lots of compute muscle per square foot, with power densities going north of 20kW per rack in some cases. “We’re working on some pretty crazy modeling at the edge,” he said.
See also: Micro-Data Centers Out in the Wild: How Dense is the Edge?
The Flexibility Trade-off
While GE’s strategy for Predix computing at the edge makes sense, committing to a single hardware standard has its pros and cons, Ashish Nadkarni, computing platforms program director at the market research firm IDC, said. On the one hand, by participating in the standard’s development you can push it in a direction that’s suitable for your needs, or design custom hardware that’s compliant and can be delivered by multiple vendors; on the other hand, a standard is only as good as the ecosystem around it.
LinkedIn and GE are for now the only hardware customers who officially support Open19. Yes, there are a number of vendors involved, but more buyers will have to join for those vendors to stay committed and for new ones to sign on.
The fact that LinkedIn is now owned by Microsoft, one of the two biggest users of OCP gear and a company that has standardized on OCP across its entire global cloud infrastructure, adds another element of uncertainty. Will its new parent company eventually decide to consolidate LinkedIn’s infrastructure into its own facilities, depriving Open19 of its core founding user? Microsoft hasn’t yet made a decision either way, but that kind of consolidation is a common step for companies.
For the strategy to work, GE will also have to sell its Predix customers on Open19, Nadkarni said. “That’s going to ultimately decide how Predix pans out.” A better strategy, in his opinion, would be to make edge software for the IoT platform independent of the type of hardware it runs on, he said. Whether that’s possible technically, given GE’s plans for a zero-touch, fully automated solution and sophisticated Machine Learning workloads at the edge, is unclear. (Haas mentioned plans to work with hardware vendors on a software layer underneath Predix that would ensure the compute and memory resources deliver the kind of performance necessary for those workloads.)
“The GE Predix strategy [for edge computing] has to be more flexible,” Nadkarni said. “It has to be independent of a data center rack standard.”