Data Center Knowledge | News and analysis for the data center industry
Thursday, June 22nd, 2017
1:00p | Does LinkedIn’s Data Center Standard Make Sense for HPE and the Like?

It’s easy to look at the new Open19 data center rack standard initiative spearheaded by LinkedIn as competition for the Open Compute Project. It does have some of the same vendors involved; in particular, Hewlett Packard Enterprise, which has said it will make its OCP Cloudline hardware Open19-compliant.
However, when you look at both the scope of the new initiative and the list of vendors who’ve signed up for the Open19 consortium, it’s clear that this is a very different approach, aimed at different customers.
Senior Analyst Jeff Fidacaro from 451 Research told Data Center Knowledge that Open19 is focused on “efficient, interoperable, interchangeable hardware that can be quickly added to a rack.” You can take a pick-and-mix approach to compute, storage and networking, snapping them into existing racks retrofitted with Open19 cages and power shells. Correctly installed, that will improve servicing times, letting engineers pull and replace servers from racks in less than a minute.
For vendors though, he suggested this is “essentially repackaging existing technologies in a more efficient manner, decreasing the integration and lead time and making them more standardised” – but without having to contribute IP to the project for competitors to use.
That’s an integration model that HPE has long been familiar with, from its early blade servers to its modern Synergy hardware. In many ways, Open19 hardware is what HPE has been calling “composable infrastructure”, where compute, storage, and networking are treated as resource pools that can be configured on the fly to support application needs.
Managing that composable infrastructure is key to delivering the benefits Open19 offers, and that’s one area HPE could add significant value to the new standard. The HPE OneView management platform allows configuration at a rack level, deploying servers onto bare metal and assigning storage and virtual networks. With Open19 purely focused on hardware form factors and connectors, a software management layer is essential and HPE is well placed to deliver both hardware and software. Today OneView only works with HPE hardware; for Open19 it would have to support any hardware that meets the standard.
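For a sense of what rack-level automation through such a management layer looks like, here is a minimal sketch against a OneView-style REST interface. It follows the conventions HPE documents for OneView (a session token from /rest/login-sessions, an X-API-Version header, an Auth header, a /rest/server-hardware collection), but the appliance address, API version, and field handling are illustrative assumptions rather than a tested integration.

```python
# Hypothetical sketch: enumerate server hardware through a OneView-style REST API.
# Endpoint paths follow HPE's published conventions, but the host, version and
# field names here are assumptions -- check the OneView API reference before use.
import requests

ONEVIEW_HOST = "https://oneview.example.com"   # placeholder appliance address
API_VERSION = {"X-API-Version": "300"}          # assumed version for 2017-era OneView

def get_session(username: str, password: str) -> str:
    """Authenticate and return a session token."""
    resp = requests.post(
        f"{ONEVIEW_HOST}/rest/login-sessions",
        json={"userName": username, "password": password},
        headers=API_VERSION,
        verify=False,  # appliances often use self-signed certs; pin certs in production
    )
    resp.raise_for_status()
    return resp.json()["sessionID"]

def list_server_hardware(token: str) -> list:
    """Return the rack's server inventory as reported by the appliance."""
    resp = requests.get(
        f"{ONEVIEW_HOST}/rest/server-hardware",
        headers={"Auth": token, **API_VERSION},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json().get("members", [])

if __name__ == "__main__":
    token = get_session("administrator", "secret")
    for server in list_server_hardware(token):
        print(server.get("name"), server.get("powerState"))
```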
Management Tools
Finding the right management tool is a key issue facing anyone trying to build a modern data center. Neither enterprises nor hosting services have the talent to compete with what Fidacaro calls “the armies of developers” at the hyperscale cloud services, or at LinkedIn itself. LinkedIn can build the software to turn the composable pieces of Open19 into the data center it needs; other customers will need to buy in tools and perhaps services. That’s something HPE can deliver: a software layer that bridges the gaps between management tooling and the modern application ecosystem.
As Open19 is software agnostic and targeted at a wide range of enterprises with widely varying infrastructure skills, a strong services offering like HPE’s is likely to be essential. Getting clients up and running will provide work for its technology support services team, taking clients from a traditional data center to a private cloud and integrating tools that go beyond the scope of Open19.
The ideal modern data center is itself a massive IoT installation, full of sensors that monitor the environment and equipment. That requires specialized data center infrastructure management (DCIM) tools to handle the resulting data and to use it to manage hardware and software loads. Ideally, you want tooling that moves VMs around and powers servers up and down based on current demand. OneView doesn’t cover that, but it has APIs that connect to existing DCIM platforms.
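As a sketch of the kind of policy such DCIM-driven tooling automates, consider the loop below. It is purely illustrative: the Server model, the thresholds, and the placeholder migrate/power calls stand in for whatever telemetry and control APIs a real DCIM platform exposes.

```python
# Hypothetical demand-based capacity policy, the kind of logic DCIM tooling
# automates. The telemetry fields and the placeholder migrate/power actions
# stand in for a real platform's API; thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpu_utilization: float  # 0.0 .. 1.0, from DCIM telemetry
    powered_on: bool

SCALE_DOWN_BELOW = 0.20   # consolidate when the fleet is mostly idle
SCALE_UP_ABOVE = 0.80     # wake capacity when the fleet runs hot

def rebalance(fleet: list[Server]) -> None:
    active = [s for s in fleet if s.powered_on]
    if not active:
        return
    avg_load = sum(s.cpu_utilization for s in active) / len(active)
    if avg_load > SCALE_UP_ABOVE:
        # Demand is high: power on a standby node to absorb load.
        for s in fleet:
            if not s.powered_on:
                s.powered_on = True   # placeholder for set_power(s, "on")
                break
    elif avg_load < SCALE_DOWN_BELOW and len(active) > 1:
        # Demand is low: drain the emptiest host and power it down.
        idlest = min(active, key=lambda s: s.cpu_utilization)
        # placeholder for migrate_vms(idlest, active) then set_power(idlest, "off")
        idlest.powered_on = False
```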
How Viable is Open19?
The bigger question is whether HPE can make a business out of Open19.
LinkedIn – which started this project long before it had access to Microsoft’s cloud hardware resources – isn’t the only business looking for a data center infrastructure standard beyond OCP that doesn’t lock it in to a single vendor. As a mid-tier cloud player, LinkedIn doesn’t have the hyperscale demands that drove Facebook to create Open Compute in the first place. Instead, the target is any large data center operator looking for access to dense, composable infrastructure.
“The Open19 goal is for everyone to be able to source the components they need, which is difficult with OCP if you’re not hyperscale,” Fidacaro said. That makes Open19 an interesting approach for customers like hosting providers, large-scale IoT services, and financial services. It’s a market that’s currently under-served, and one that’s likely to grow quickly.
That market is attractive to HPE, especially because it’s already familiar with supporting composable infrastructure in its Synergy product.
Fidacaro also believes Open19 is suited to the new idea of micro-data centers: highly monitored, remotely operated, small data centers at the edge.
Working from a single rack up, Open19 offers a mix of building-block elements that can be deployed in a standard 19” rack, with simplified power and networking connectors. HPE’s existing data center products like Synergy and the OCP Cloudline both appear suitable for use in Open19 cabinets (once modified). Adapting its hardware to support Open19 won’t be difficult; choosing the right set of SKUs might be trickier.
HPE certainly needs to find new markets for its cloud hardware. The hyperscale clouds are moving away from buying vendor hardware and switching to their own custom designs, like Microsoft’s Olympus, where they’re able to focus on their specific needs. HPE CEO Meg Whitman has suggested that the company might move away from its partnership with Foxconn after one unnamed customer (widely believed to be Microsoft) reduced its order significantly. As the server and data center market shifts, there’s really no alternative for vendors like HPE but to join these open communities.
There is still an opportunity for differentiation. Open19 makes it easier for proprietary hardware to take advantage of common standards; all it mandates is power and network connections as well as case sizes. But it’s also true that what’s in the case doesn’t matter, as long as it fits in the Open19 cage in the rack. The challenge will be for HPE to preserve margins when competing with contract manufacturers and other server vendors. After all, if anyone’s server can fit into that rack, how can HPE justify a higher price? Services and software are very likely to be the answer, because at this point, the software to build and manage the composable Open19 vision is very much the missing piece.
5:29p | Get Away from the Status Quo: Modernize Your Middleware-Tier
Franco Rizzo is Senior Pre-sales Architect at TmaxSoft.
If you’ve been wondering whether your web application framework could use an upgrade, it’s time to perform a thorough evaluation of your infrastructure to confirm whether it’s still meeting the organization’s needs. There are nine critical questions you can ask to determine this.
Before we dive into that, however, it may be helpful to start with a couple of definitions for terms that sometimes are misunderstood:
Mid·dle·ware: Software that acts as a bridge between an operating system or database and applications, especially on a network.
Mid·dle–Tier: The processing that takes place in an application server that sits between the user’s machine and the database server. The middle-tier server performs the business logic.
The two terms are often used interchangeably, and in web architecture both refer to web servers, application servers, content management systems and other tools that support application development and delivery. The middleware-tier is integral to ensuring user requests are processed quickly and securely.
Application servers (the middleware-tier) are nothing new and have long been an integral part of on-premise distributed architectures. The middleware-tier connects users and facilitates the integration of legacy applications with distributed corporate data and applications. With the growth of hybrid cloud workload deployments, a well-architected middleware-tier can significantly improve performance.
Middleware-tier offers the following operational advantages:
- Ensure application portability across on-premise and public cloud environments
- Facilitate high-availability architecture
- Enable DevOps productivity
Middleware-tier is the nexus of a distributed multi-tier architecture, enabling the decoupling of the data consumer from the data producer; an apt example is decoupling a database from applications. Furthermore, the middleware-tier provides access to a wide variety of services, databases, messaging services and connections to external enterprise services such as SaaS, cloud and IoT event streaming, which allows enterprises to deploy mission-critical applications in a scalable environment. This scalability occurs in the middle: not in an application and not in a database. What’s more, functionality, flexibility and security are also improved.
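A toy illustration of that decoupling: the application below codes against a middleware facade, which decides how to reach the producer. All class and method names are invented for the example; no real product’s API is implied.

```python
# Toy sketch of middle-tier decoupling: the client codes against an interface,
# and the middleware decides which backend (database, cache, SaaS API) serves
# the request. All names here are illustrative, not a real product's API.
from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def fetch_customer(self, customer_id: str) -> dict: ...

class DatabaseSource(DataSource):
    def fetch_customer(self, customer_id: str) -> dict:
        # In a real tier this would run SQL against the intranet database.
        return {"id": customer_id, "origin": "database"}

class MiddleTier:
    """The nexus: consumers never touch the producer directly."""
    def __init__(self, source: DataSource):
        self._source = source

    def get_customer(self, customer_id: str) -> dict:
        # Cross-cutting concerns (auth, logging, pooling) live here,
        # so swapping the backend never changes application code.
        return self._source.fetch_customer(customer_id)

app_tier = MiddleTier(DatabaseSource())
print(app_tier.get_customer("42"))
```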
Most organizations already have middleware-tier architecture in place. But how can an organization be sure its current web application framework is providing the best security and performance and is cost-effective? Determining whether the current framework is meeting the enterprise’s needs requires a multi-dimensional effort.
Evaluating Your Current Middleware-Tier Framework
When evaluating an organization’s current web application system, there are nine important questions to ask:
- How flexible is it – can it scale capacity as needed?
- How quickly can the organization cluster together multiple instances of its middleware in order to accommodate business requirements?
- What type of security protocols does it offer? For example, one of the most popular protocols is the Apache JServ Protocol (AJP); the issue is that it is an open protocol that everyone has access to, which increases the possibility of hacker attacks.
- What is the overall TCO? There are a couple of ways to look at cost – there is the cost of an actual product, and there are also costs associated with not being able to scale or not having the most optimally performing architecture.
- Are you fully leveraging virtualization? Application servers that are not optimized for today’s Software Defined Data Centers can be expensive to maintain over time, so having a product licensed for today’s virtualized environment is critical.
- How are licensing costs structured – and are you only paying for what you use, or are you paying for extra capacity you don’t need?
- Does the application server conform to the latest Java EE (formerly J2EE) standards?
- Is it highly available – if an application crashes and needs to be redeployed, how smoothly does that happen?
- How robust is the administration aspect?
If the answers to any of these questions do not align with an organization’s security, throughput and productivity goals, then it may be time to explore a new middleware option.
Evaluating a New Option
When establishing a web application system in a multi-distributed environment, the most important issues for consideration are:
- Load-balancing structure;
- Inflow and request control; and
- Performance optimization and security.
Each of these must be evaluated for any new framework.
Load balancing: When configuring a web server and application server on multiple nodes, the configuration must support load balancing. Load must not be concentrated on a specific node when forwarding a request from a front-end web server to a back-end application server. A dynamic load-balancing function that weights each application server by its current load is especially critical for optimized performance.
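As a minimal sketch of that dynamic, load-aware decision (assuming the front end can see each node’s in-flight request count; real middleware weighs richer signals such as CPU and response time):

```python
# Minimal "least-load" balancer sketch: pick the back-end node with the
# fewest in-flight requests. Real middleware weighs richer signals
# (CPU, queue depth, response times); this only shows the decision shape.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.in_flight = 0  # updated as requests start/finish

def pick_node(nodes: list[Node]) -> Node:
    # Dynamic balancing: route to the least-loaded node right now,
    # instead of blind round-robin that can pile work on a slow node.
    return min(nodes, key=lambda n: n.in_flight)

nodes = [Node("app-1"), Node("app-2"), Node("app-3")]
for request_id in range(6):
    node = pick_node(nodes)
    node.in_flight += 1          # request dispatched
    print(f"request {request_id} -> {node.name}")
```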
Client inflow and request control structure: A web application system must be able to handle massive volumes of client requests, and it must also be able to manage requests targeted at a specific node, application or URI. If massive request volumes pass through the front end to the application server layer without any handling, it is too late to control them, because throttling after load congestion has already occurred imposes a performance burden of its own.

To resolve this, it is important to control the inflow of requests at the front-end web server, before a problem occurs.
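One common way to implement that front-end inflow control is a token bucket per client, node or URI. The sketch below is a generic illustration of the mechanism, with made-up rates, rather than any particular product’s implementation:

```python
# Generic token-bucket sketch for front-end inflow control: admit a request
# only if the bucket for its key (client IP, URI, node) still has tokens.
# Rates and bucket sizes are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # reject (or queue) before the app tier is congested

buckets: dict[str, TokenBucket] = {}

def admit(uri: str) -> bool:
    # One bucket per URI: hot endpoints get throttled at the web server,
    # so congestion never reaches the application servers.
    bucket = buckets.setdefault(uri, TokenBucket(rate_per_sec=100, capacity=200))
    return bucket.allow()
```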
Performance optimization: In a multi-node cluster environment, sessions are generally used for sharing application state information. Session information is managed by using a caching method where the cache hit ratio is the most important factor.
This means that in a multi-node environment, it is important to increase the probability of accessing session information from the cache area, which directly affects web application performance. A cluster structure optimized for a high cache hit ratio is a core architectural component.
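The sketch below illustrates the principle: a node-local session cache in front of a shared store, instrumented with the hit ratio the architecture tries to maximize. The backing-store lookup is a stub.

```python
# Sketch of a node-local session cache in front of a shared session store.
# The point is the hit ratio: every miss costs a remote fetch, so cluster
# designs try to route a user's requests to the node that already holds
# their session ("sticky" routing). The backing store here is a stub.
class SessionCache:
    def __init__(self):
        self.local: dict[str, dict] = {}
        self.hits = 0
        self.misses = 0

    def get(self, session_id: str) -> dict:
        if session_id in self.local:
            self.hits += 1                    # served from this node's memory
            return self.local[session_id]
        self.misses += 1                      # remote fetch: the costly path
        session = self._fetch_from_shared_store(session_id)
        self.local[session_id] = session
        return session

    def _fetch_from_shared_store(self, session_id: str) -> dict:
        # Placeholder for a Redis/replicated-cache lookup in a real cluster.
        return {"id": session_id}

    @property
    def hit_ratio(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```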
Security: In general, when configuring the web and application servers, web servers are placed in an extranet zone, while application and database servers are placed in an intranet zone. A firewall is installed between the networks for security, and a communication port is set between the web server and application server.
These settings, as well as methods for communication and data exchange between each server group, are important for security control.
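As a toy illustration of that zoning discipline (a complement to, never a substitute for, the firewall itself), an application server can refuse connections that do not originate from the web tier’s subnet; the addresses and port below are invented for the example:

```python
# Toy defense-in-depth sketch: the application listener accepts connections
# only from the web tier's subnet, mirroring the firewall rule between the
# extranet and intranet zones. Addresses and the port are illustrative.
import ipaddress
import socket

WEB_TIER = ipaddress.ip_network("10.0.1.0/24")  # assumed web-tier subnet
APP_PORT = 8009                                  # assumed web-to-app connector port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("10.0.2.10", APP_PORT))  # bind to the intranet interface only
server.listen()

while True:
    conn, (peer_ip, _) = server.accept()
    if ipaddress.ip_address(peer_ip) not in WEB_TIER:
        conn.close()  # not the web tier: drop, just as the firewall would
        continue
    # ... hand the connection to the application worker ...
    conn.close()
```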
Know Your Framework
While an organization may feel comfortable with its current web application framework, staying with the status quo can sometimes negatively impact business goals, productivity and user experience. If you don’t take the time to evaluate your current infrastructure and question whether it’s performing at optimal levels, issues can crop up, and suddenly the framework no longer functions properly.

Ask the right questions to determine whether your current framework is meeting the enterprise’s current as well as future needs; if it isn’t, start examining other options, and make load balancing, client inflow and request control structures, optimal performance and throughput, and security capabilities key requirements.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:05p | MSP Fights to Hire Scarce Cloud Engineers

Brought to you by MSPmentor
At CorpInfo LLC, cloud is their business and business has been very good.
So good, in fact, that the cloud-focused subsidiary of a legacy reseller has grown from fewer than a handful of full-time staffers 18 months ago, to 54 employees today.
They expect that headcount to swell past 150 by the end of 2018, with operations expanding beyond current markets in California and Texas, to cities across the U.S.
Such rapid growth brings with it a host of exciting if sometimes daunting challenges, and this week, MSPmentor caught up with Josh Binnie – head of talent at CorpInfo – to discuss the gargantuan and hypercompetitive mission of recruiting and hiring so many qualified cloud engineers.
“As a professional services company, if it was easy we’d all be out of work,” he said. “That’s one of the things about being a good engineer: You’re never going to find yourself out of work.”
CorpInfo is an AWS Premier Partner that specializes in migrating clients from on-premise to cloud environments and in providing managed services to help customers run cloud infrastructures.
Previously known as Desktop Solutions, CorpInfo weathered some lean years before settling on a growth strategy that includes a robust cloud component.
The company has several public-sector clients and expects growth in that business line, among others.
Their need for quality technical talent is acute.
“Mostly, it’s engineers; cloud engineers and cloud solutions architects,” Binnie said.
That means people with software and cloud development backgrounds.
“That could be Azure, AWS – hands on, paid, professional experience within cloud,” he explained.
But in an acknowledgement of the difficulty of the challenge, Binnie is quick to qualify the description of his target candidate.
“We will take people who have extraordinary talent and can learn quickly,” he said. “They must have a baseline of technical knowledge (but) we definitely don’t have a cookie-cutter approach to hiring; we’re the opposite of that.”
As much as technical proficiency, Binnie said he looks for a flair that will add something to the greater team.
“We’ve regularly hired people because they bring something extraordinary to the table rather than because their butt fits the shape of that seat,” he said.
Along with cyber security specialists, engineers with cloud skills are among the most sought-after workers in the technology space.
Landing and successfully onboarding such candidates is only half the battle.
“We expect people to have regular offers,” Binnie conceded. “We’re constantly aware that we have to retain good talent.”
CorpInfo, he estimates, spends a little less than the roughly $5,000 per hire that’s typical in the space.
But, Binnie said, the company spends more than average on retaining employees, through extensive training and development, and other means.
“It’s the nature of the working environment that makes a difference; the nature of the projects,” he said.
They must be doing something right.
This month, CorpInfo landed on Inc.com’s 2017 list of top places to work.
The growing technical demands of the modern IT managed services space point to an increasingly challenging recruiting environment going forward, Binnie predicts.
And, to be clear, the MSP is also looking for talent on the sales side.
“Talent has continued to become more and more important,” he said. “We have 20 years doing this. Your approach to talent grows and matures.
“It becomes apparent that you need more and more folks who can work independently and can work intelligently,” Binnie added. “It’s fundamental. It’s an enormous part of the business.”
9:46p | South Korean Web Host Pays $1 Million to Recover Customer Data

Brought to You by The WHIR
A web host based in South Korea has paid over $1 million to a ransomware operation called Erebus, which encrypted data belonging to 3,400 customer websites.
According to a report by Ars Technica, Nayana is working to recover the data from 153 Linux servers, but warned customers it would take time.
The company negotiated the payment down from an initial ransom demand of $4.4 million, paid in Bitcoin. It is paying in three installments, according to a blog post by Trend Micro.
Security best practices recommend that victims not pay ransoms, but companies often do so quietly, so as not to admit publicly that their networks were insecure. If a company pays a ransom, there is no guarantee that it will get its data back or that the hackers will not strike again.
Ars Technica said that the Erebus ransomware once targeted only Windows operating systems, but a new variant works against Linux systems.
For more details on the Erebus ransomware, check out Trend Micro’s blog.