Data Center Knowledge | News and analysis for the data center industry
Thursday, June 1st, 2017
The Right Questions to Ask About Data Center Strategy
Long before the explosion of data, long before the Oxford Dictionary recognized “cybersecurity” as a word, and long before every device known to man could connect to the Internet, most businesses simply built their own data centers.
Today, creating and planning a data center strategy has become incredibly complicated. Companies now have a whole slew of choices: modernize an old facility, build a new one, lease, use colocation and/or cloud services and any number of combinations. Knowing which option will best fit the needs of the business starts with asking the right questions.
Tim Kittila, director of data center strategies for Parallel Technologies, says when he first meets with customers he asks the following questions about their business goals and objectives (not data center goals) and IT growth projections.
Kittila is scheduled to speak about data center strategy at the Data Center World Local conference at the Art Institute of Chicago in July. More about the event here.
How does the organization provide value or produce products? What makes their business continue to be relevant to their customers, and what reliability level is required by the organization to meet those business goals and objectives?
Based on the company’s anticipated growth, what impact will a given data center strategy have on its assets? When asking this question, Kittila says he gets varying responses; some have forecasts and projections, while others have only scant inventories that are out of date.
Data center customers often don’t ask some of the most important questions. “Everyone gets focused on the immediate day-to-day priorities and forgets to slow down and analyze the full scenario and plan out a strategy,” he says.
“Operations seems to play second fiddle to the overall strategy of the data center. Even if a customer decided to move to a colo or cloud, understanding how to manage various ‘data center assets’ should be key to any organization. If managing an on-premise data center, operations will present the biggest risk to potential failure. If moving to a colo, understanding how to streamline moves, adds, and changes due to a facility being remote, can be challenging. For cloud practices, how do you eliminate the risk of shadow IT and those dreaded ‘expense report’ cloud purchases?”
Shadow IT refers to technology projects that are managed outside of, and without the knowledge of, the IT department. Kittila says this practice often results in processes and procedures that can prevent IT from being flexible and scalable enough to complete work and meet organizational objectives.
“We’ve seen examples of this via IT spinning up cloud services in order to accomplish company initiatives,” he says. “This is not only reckless, but could put the company/organization at great risk. Bypassing processes in order to meet initiatives should be your first red flag. At that point, the process should be reviewed; then figure out a way not to force IT into a knee-jerk strategy.”
Another important step in determining a data center strategy is to measure and calculate the business and financial risks associated with an outage and translate those potential losses into costs.
“Understanding the impact to business operations due to a data center failure allows companies to make data-driven decisions to help eliminate potential downtime and reduce overall risk to the organization,” Kittila says.
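To make that translation concrete, a simple model multiplies an hourly loss figure by expected outage frequency and duration. The sketch below is purely illustrative; the revenue, outage, and recovery-cost figures are hypothetical placeholders, not numbers from Kittila or Parallel Technologies.

```python
# Hypothetical back-of-the-envelope model for annualized outage cost.
# All inputs are illustrative placeholders, not real customer figures.

def annual_outage_cost(revenue_per_hour: float,
                       outages_per_year: float,
                       avg_outage_hours: float,
                       recovery_cost_per_outage: float = 0.0) -> float:
    """Expected yearly loss: revenue lost during downtime plus recovery costs."""
    lost_revenue = revenue_per_hour * outages_per_year * avg_outage_hours
    recovery = recovery_cost_per_outage * outages_per_year
    return lost_revenue + recovery

# Example: $50k/hour of revenue at risk, two 4-hour outages a year,
# $25k of recovery effort per incident -> $450,000 expected annual loss.
print(f"${annual_outage_cost(50_000, 2, 4, 25_000):,.0f}")
```

A figure like that, weighed against the cost of added redundancy or a colocation contract, is what turns a reliability discussion into a data-driven decision.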
While having hard-and-fast rules that apply to individual data center strategies would be convenient, it’s not realistic.
“Which option to choose always depends on business goals and objectives, and the IT requirements of the organization. We’ve experienced that it made sense to colocate when an organization’s CapEx dollars were better spent on building clinics vs. building data centers. In that case, the client moved its disaster recovery facilities into a colo. In other cases, if customers needed flexibility in a test/dev environment, we found that the cloud met their needs. Other times, we’ve had organizations choose to stay in their facilities due to the economics of an already depreciated asset that met their needs for reliability.”
Finally, having budget constraints shouldn’t deter a company from at least taking the first step.
“If money is an issue, then start with a plan. If your business is growing, and a plan is in place, then the budget for it will come in time,” he says. “I would recommend people not make hasty decisions just to ‘get something in place.’ Unraveling tactical Band-Aids can cause real headaches in the future.”
Tim Kittila will present “Asking the Right Questions for Data Center Strategy and Planning” at Data Center World Local, Chicago, on July 12 from 1:00-1:50 p.m. Register here for the conference.
Cloudflare Hires Ex-Symantec Finance Chief in Move Toward IPO
Alex Barinka (Bloomberg) — Cloudflare Inc. hired Symantec Corp.’s former chief financial officer to fill the same role at the network security company as it prepares for an initial public offering.
Thomas Seifert will join San Francisco-based Cloudflare on Thursday, according to the company.
Cloudflare considered about 30 candidates over a two-and-a-half-year search, and Seifert was the first person to receive a formal offer, Chief Executive Officer Matthew Prince said. His public-market experience, familiarity with European markets and understanding of cloud-based businesses all contributed to the decision, Prince said.
“Our management team members aren’t known entities to the public markets,” Prince said in an interview. “We were looking for someone that had that kind of gravitas to them.”
Cloudflare’s technology helps clients run websites more efficiently by speeding up performance, protecting from hacking attacks and analyzing traffic.
Founded in 2009 by Prince, Michelle Zatlyn and Lee Holloway, Cloudflare has raised $182 million from investors including Microsoft Corp., Fidelity Investments and New Enterprise Associates. It currently has an annual revenue run rate — the most recent month of revenue multiplied by 12 — of more than $100 million and gross margins of about 80 percent, Prince said.
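For readers unfamiliar with the metric, the run-rate arithmetic is simply the latest month annualized. The monthly figure below is hypothetical, chosen only to show the calculation; Cloudflare disclosed only the “more than $100 million” total.

```python
# Annual revenue run rate = most recent month's revenue x 12.
# $8.5M is a hypothetical monthly figure, not a Cloudflare disclosure.
monthly_revenue = 8_500_000
run_rate = monthly_revenue * 12
print(f"${run_rate:,}")  # $102,000,000 -> "more than $100 million"
```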
See also: How to Survive an AWS Outage
IPO Preparations
The company is aiming to be ready to launch an initial public offering by the middle of next year, Prince said, though no listing plans have been set. Seifert will help oversee key parts of Cloudflare’s pre-IPO preparations: assembling internal processes and controls, growing the finance and investor relations team, and planning how to position the company to Wall Street.
Cloudflare is joining a steady drumbeat of enterprise technology companies that are moving toward an exit. After a slow 2016, there’s been a resurgence of technology IPOs this year: 14 companies have raised a combined $6 billion so far, more than the $3.6 billion raised by 26 companies in the previous 12 months.
Seifert has already had a chance to practice one of the most crucial tasks of a public-company CFO: handling a quarterly earnings call. On a recent mock call — a practice Cloudflare has implemented in the past year — Seifert applauded the scripted introduction but said the questions, which came from the company’s existing investors, were too soft, Prince said.
Brightstar, AMD
Seifert joined Symantec from telecommunications company Brightstar Corp. in 2014. He previously worked as CFO — and later interim CEO — of chipmaker Advanced Micro Devices Inc., taking the reins in 2011 after Dirk Meyer was ousted, though he withdrew from consideration for the permanent role. Seifert joined AMD in 2009 after working at German chipmaker Infineon Technologies AG and its spinoff, Qimonda AG, which went bankrupt.
He stepped down from Symantec in November, ceding the role of CFO to his counterpart from Blue Coat Systems Inc., which Symantec agreed to buy last year for about $4.65 billion.
Seifert will be based at Cloudflare’s San Francisco headquarters, literally as Prince’s right-hand man. The company recently moved into a new office with desks in groups of six, orchestrated by co-founder and head of user experience Zatlyn, who sits to the CEO’s left.
“The desk to my right was empty,” said Prince. “I said, ‘Michelle, who sits there?’ She said, ‘That’s where the CFO sits.’”
CenturyLink Data Center Chief to Run Rackspace’s Private Cloud Business
Brought to you by Talkin’ Cloud
Rackspace announced on Thursday that it has named David Meredith president of private cloud and managed hosting, effective immediately. Meredith is the second appointment Rackspace has made in as many weeks after naming Joe Eazor CEO at the end of May.
According to Rackspace, Meredith will lead the largest revenue-driving business at Rackspace, its single-tenant business, with the goal of growth through investments and product strategy. He will report to Eazor, whose official start date is June 12.
See also: Rackspace CEO Taylor Rhodes Leaving Company
Prior to Rackspace, Meredith served as president of global data centers at CenturyLink. Previously, he held various leadership roles in international managed hosting businesses.
CenturyLink recently sold its data center unit to a newly formed company called Cyxtera, which is combining colocation services delivered from the former CenturyLink data center footprint with a robust set of enterprise security offerings.
Read more: Cyxtera Puts a Fresh Spin on CenturyLink’s Former Data Center Empire
“As a pioneer in the managed hosting business, Rackspace is strongly positioned to succeed in a market where businesses want help moving out of their legacy data centers so they can be more strategic in their IT operations,” Meredith said in a statement. “I’m pleased to be joining the Rackspace team at such an exciting time in the company’s history, and in the industry overall. I look forward to locking arms with teams across the business to deliver managed hosting and single-tenant solutions to customers in a way that is efficient, strategic and provides the highest value Fanatical Support.”
Meredith will take over some of the responsibilities of Mark Roenigk, Rackspace’s COO, who is leaving at the end of June after an eight-year tenure with the company.
“We are excited to have someone with David’s experience joining our team,” Jeff Cotten, president and interim CEO at Rackspace, said in a statement. “Rackspace helped invent the managed hosting business in the late 1990s, and it has remained the largest portion of our business to date. By strengthening our leadership position in this space, we can gain the business of enterprises that are increasingly outsourcing workloads they don’t want to re-architect for the public cloud. David will work closely with our marketing, sales and support functions to accelerate growth in this business. I am confident that David is the right person to lead this charge, and he will be a valuable addition to the company and our leadership team.”
“I’d also like to thank Mark Roenigk for his eight years of service to this company,” Cotten said. “Mark has embodied servant leadership for an entire generation of Rackers and has been integral in building and elevating Rackspace strategy around digital security, corporate social responsibility, open computing and operational excellence. We wish him the best moving forward.”
After a strong focus on investing in Rackspace’s managed cloud business, it will be interesting to watch how its private cloud and hosting business will evolve under Meredith’s leadership.
This article originally appeared on Talkin’ Cloud.
Mainframe or Cloud: It Isn’t an All-or-Nothing Decision
Christopher O’Malley is CEO of Compuware.
Recently there has been much discussion in IT circles, particularly in government, about the need to modernize legacy technologies and leverage newer alternatives. This desire is at the heart of the Modernizing Government Technology (MGT) Act, which is currently making its way through Congress. Here at Compuware, we are advocates of this bill and believe modernization is the key to delivering the type of fast, convenient services that put customers’ and citizens’ needs first.
Indeed, there are technologies in use in both the private and public sectors that could be made more efficient. For example, virtualized x86 environments are prone to sprawl and demand constant attention. So the decision to consume the IT services associated with these costly environments (email or HR for example) from the cloud can offer many benefits, including flexibility and instant scalability.
However, there’s a dangerous element of “groupthink” that often accompanies IT modernization discussions across market sectors – that “new” automatically equates to “better.” There’s also the naïve tendency toward generalizing and believing that any technology that doesn’t come with buzzwords like “cloud,” “machine learning,” “container” or “chatbot” must be replaced.
This isn’t always true. Sometimes there is simply no substitute for a modern version of the original, which is inherently better and doesn’t need to be replaced, but rather simply requires sincere stewardship. The mainframe is one example, having consistently defied predictions of its demise. Mainframes continue to support 70 percent of all enterprise data and 71 percent of all Fortune 500 companies’ core business processes.
The reasons for this longevity are simple. Mainframes, with new leading-edge models delivered every few years, have proven to be inherently more secure, powerful and reliable than the cloud and distributed architectures, even though these alternatives are often perceived to be more modern. In addition, many organizations that have tried to move their critical systems-of-record off the mainframe have found the process to be altogether too risky, expensive and time-consuming. And even if the migration succeeds, they find that the systems they are left with are even more complex than the original, of lower quality, and therefore even more difficult and costly to maintain. There is no honor or reward in being successful at being unsuccessful.
In computing, the term “legacy” connotes an old, outdated technology, computer system or application program. But the post-modern mainframe is the most reliable, scalable and securable platform on the planet. To consider it outdated or unsupported technology is malarkey with a motive. According to one recent global CIO survey, 88 percent of respondents noted they expect their mainframe to continue to be a key business asset over the next decade; another 81 percent reported that their mainframes continue to evolve—running more new and different workloads than they did just a few years ago. The mainframe remains the only platform in the world that is capable of handling the huge surge in computing volumes brought on by mobile, and numerous studies have shown it to be more cost-effective in the long run than alternative architectures.
So in reality, the post-modern mainframe is anything but legacy, although its tools and processes do have to be modernized, allowing newer generations of developers to work on it as nimbly and confidently as they do on other platforms. This means replacing the antiquated green screen development environment with a modern, familiar IDE. It also means leveraging Java-like technologies that provide visualization capabilities to help developers understand poorly documented mainframe applications along with unit testing to maintain the quality of the code. “Agile-enabled” source code management and code deployment that ensure mainframe development organizations can participate 100 percent in DevOps processes are also required. We call this “mainstreaming the mainframe.”
The prospect of modernizing on the mainframe rather than moving off may leave many mainframe-based organizations feeling like they are at a crossroads. They see the benefits of keeping their mainframe – and understand it can be a competitive asset – but does this mean they can’t leverage the benefits of the cloud? The honest answer is no, because choosing between the cloud and the mainframe does not need to be an either/or scenario. Certain applications and workloads are better suited to the mainframe, while others are better suited to the cloud. The key is knowing the difference and devising a strategy that puts the worthy ideas of serving customers and citizens first and foremost.
This “two-platform IT” strategy entails keeping all mission-critical, competitively differentiating applications running on-premise, on the mainframe. Recent breaches and outages – most notably the Amazon S3 outage on February 28, which caused widespread availability issues for thousands of websites, apps, and IoT devices – should give any company pause before relegating their most critical computing systems, where near-perfect reliability is a must and mainframes thrive, to the cloud. On the other hand, non-mission critical and non-differentiating applications, such as email or HR, are better suited to the cloud’s economy of scale. Using the cloud in concert with the mainframe is where smart money invests.
In summary, when it comes to IT modernization, we must avoid becoming blindly enamored by the newest technologies, and getting caught up in sweeping generalizations. “New” does not necessarily mean “better” and the utility of longstanding technologies must be evaluated on a case-by-case basis, instead of slapping on labels like “legacy” and automatically dismissing them. The post-modern mainframe is a key asset, and combining it with the cloud in a smart, comprehensive strategy can yield an optimum resolution and a true win-win scenario.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Cisco and IBM Team Up to Fight Cybercrime
By The VAR Guy
In the wake of the WannaCry ransomware attack that crippled systems around the globe earlier this month, two tech giants have joined forces to fight cybervillains. It’s like the start of a little cyber-Justice League, with Cisco and IBM standing together to defend against hackers and bad actors.
The new alliance is to all appearances a serious and significant one, with the Cisco Talos and IBM X-Force security teams committing to share threat intelligence during investigations of major breaches. The companies also plan to integrate their product portfolios with a series of releases over the next year.
A combination of the two companies’ security offerings makes sense, with IBM’s talents in analytics and cognitive solutions neatly fitting in with Cisco’s security infrastructure and detection capabilities.
See also: If You Think WannaCry is Huge, Wait for EternalRocks
“You marry those things and you have a really complementary set of capabilities,” Jason Corbin, Vice President of Strategy and Offering Management for IBM Security, told The VAR Guy. “Quite frankly, we’re meeting in the field anyway. A lot of customers have Cisco gear and IBM for security and analytics and incident response, and it’s just a natural progression for us to start to provide more value on top of our products in an out of box way for our joint customers.”
Cisco has said it will build apps on IBM’s QRadar security intelligence platform for Cisco products like Firepower and Threat Grid, for instance. For its part, IBM promises to lend its IT services support offerings, such as its Resilient Incident Response Platform software and Watson for Cyber Security, to Cisco products, and to offer IBM Global Services support of Cisco products for managed security service providers (MSSPs).
The companies will work to make their security tools interoperable to make it easier for customers to craft an end-to-end security solution within the Cisco-IBM portfolio, netting them a hefty corner of the outsourced cybersecurity services market. Gartner says such services comprise the largest category of spending within the $81.6 billion information security industry.
See also: As WannaCrypt Recovery Continues, Analysts Back Microsoft’s Leader
“Our clients are overwhelmed with the volume of tools and solutions that are out there. Us tying our solutions together in a meaningful way has a really big impact on our clients in terms of cost, in terms of simplifying, in terms of delivering faster detection,” said Corbin. “What that means to our channel partners is that it’s going to open up some opportunities for our partners that are selling both Cisco and IBM to start to deliver really differentiated solutions in the market, especially given our approach around openness and collaboration that we have on our programs like QRadar.”
The more consolidated portfolio of security tools has the potential to ease the stress on partners and chief security officers who must juggle dozens of different products to form a comprehensive solution. Easing the strain of forcing a suite of disparate tools to work together may give security providers more resources to devote to threat detection and response.
Lalit Shinde, head of strategic partnerships and business development at Seceon, says the kind of partnership Cisco and IBM announced today represents an unsurprising next step in the evolving security landscape. “Seceon has been predicting this – too many security solutions in various silos are not working for our customers,” Shinde told The VAR Guy. “Having good interoperability between various tools is a step in the right direction by the industry’s largest security providers.”
The IBM X-Force and Cisco Talos security teams have committed to collaborating on security research to solve the challenges of their mutual customers. IBM will also offer joint customers an integration between X-Force Exchange and Cisco’s Threat Grid to up security analysts’ game.
Corbin says that IBM and Cisco have been in talks about a partnership since the RSA security conference earlier this year. The WannaCry crisis offered an opportunity for a trial run, and both sides were pleased enough with the results to move forward with making the partnership more official.
The two companies haven’t settled on any changes in partner incentives or other program elements the partnership may necessitate, but Corbin says that it’s a conversation they’ll be having soon.
This article originally appeared on The VAR Guy.