Data Center Knowledge | News and analysis for the data center industry
Tuesday, December 8th, 2015
1:00p
Forget Hardware, Forget Software, Forget the Infrastructure
LAS VEGAS – Enterprise IT has to forget about hardware, forget about the infrastructure, forget about software, and think more about getting its job done, which is delivering services and applications.
That’s according to David Cappuccio, a VP at Gartner who oversees research in enterprise data center strategies and trends. In the opening keynote at Gartner’s annual data center management summit here Monday, Cappuccio together with colleague Thomas Bittman outlined Gartner’s vision for the new role the IT organization has to play in the enterprise.
That new role has less to do with managing disparate bits of infrastructure and more to do with selecting the best infrastructure strategy to provide a specific service. The toolbox IT can select from includes on-premises and colocation data centers as well as cloud: private, public, or hybrid, on-premises or outsourced.
“The role of IT is shifting to become an intermediary between the customer and the data center and the service provider,” Bittman, a Gartner VP and distinguished analyst, said. “The service provider might be you, but it might be Google, or it might be Salesforce. It comes down to delegating responsibility.”
Gartner expects digital business to drive more and more revenue for enterprises of all kinds, which is why the market research and consulting firm is placing so much importance on an agile, multi-dimensional approach to infrastructure strategy.
Today, digital business capabilities drive 18 percent of enterprise revenue, Raymond Paquet, a managing VP at Gartner, said. The analysts expect that portion to grow to 25 percent in two years and more than double by 2020, reaching 41 percent.
“We (enterprise IT) have the opportunity to lead and work with our businesses to drive this revenue,” he said.
By its nature, the enterprise IT environment today is complex, and it continues to grow more complex as companies release more applications and add more infrastructure components to support them. Some of that infrastructure is deployed as cloud services procured by the business side of the house, while some is set up by IT.
In addition to the primary data center, that environment is likely to include a secondary data center, some colocation space, a disaster recovery site, DR-as-a-Service, branch-office IT, Software-as-a-Service applications, micro data centers in branch offices, social-networking platforms used by staff, and so on.
The big opportunity for IT to add value is to act as a broker and reduce that complexity for its business users. But those placement decisions have to be driven by the needs of each specific application (a simple illustration follows the list below):
- Latency
- Reputation
- Service continuity
- Performance
- Security
- Data Protection
- Compliance
- Recovery time objective (RTO) and disaster recovery (DR)
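To make the broker idea concrete, here is a deliberately simplified sketch of weighing placement options against a few of those needs. The options, weights, and fit scores below are invented for illustration and are not Gartner's methodology:

```python
# Toy "IT as broker" scorer: pick a placement option for an application.
# All options, criteria, and scores here are hypothetical examples.

# Fit of each option against each need, 1 (poor) to 5 (strong).
OPTIONS = {
    "on_prem":      {"latency": 5, "continuity": 3, "security": 5, "compliance": 5, "cost": 2},
    "colocation":   {"latency": 4, "continuity": 4, "security": 4, "compliance": 4, "cost": 3},
    "public_cloud": {"latency": 3, "continuity": 5, "security": 3, "compliance": 3, "cost": 5},
}

def best_placement(weights: dict) -> str:
    """Return the option with the highest weighted score for this application."""
    return max(OPTIONS, key=lambda opt: sum(w * OPTIONS[opt][need] for need, w in weights.items()))

# An application where compliance and latency matter most:
print(best_placement({"latency": 3, "continuity": 1, "security": 2, "compliance": 3, "cost": 1}))
# -> "on_prem" with these made-up numbers
```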
Taking on this new role is easier said than done. It isn’t a one-time wholesale switch. IT leaders have to sell the new approach to their customers, taking one application and one group of users at a time, documenting the results and advertising them to the rest of the business.
Are you freeing up some time for the users? Are you reducing ineffective use of resources? Measuring and promoting results like that is how IT can convince the business that it can lead and add value, the analysts said.
Ultimately, IT has to change how business customers perceive it, Bittman said: today the perception is that IT slows things down, weighs the business down, and says "no." The new perception should be that enterprise IT accelerates time to value, adds value, and protects the enterprise.

4:00p
DuPont Fabros Planning Massive Toronto Data Center
DuPont Fabros Technology, the wholesale data center provider that leases large amounts of space and power capacity to the likes of Microsoft, Facebook, and Yahoo, is expanding into Toronto, a growing data center market the company says is underserved by providers.
The geographic expansion is part of a broad series of strategic changes the technology-focused real estate investment trust is making.
With a new CEO on board (former NTT exec Christopher Eldredge took over the helm from DFT's founding CEO Hossein Fateh in February), the company is now offering more power density and infrastructure redundancy options than before, as well as full-service leases in addition to triple-net leases, previously its only option. It has also stopped pursuing the retail colocation business it announced last year, choosing instead to double down on its traditional bread-and-butter wholesale data center model.
Toronto is the furthest along, but DFT is also going through the site selection process in the Portland and Phoenix markets. In Toronto, “we’re in the process of securing the land and doing the design,” Scott Davis, whom DFT recently promoted to CTO, said in an interview.
DFT currently has 12 data centers in four US markets: Northern Virginia, New Jersey, Chicago, and Silicon Valley. The facilities total 3 million square feet and nearly 270 MW of critical power capacity.
Its two biggest customers are Microsoft and Facebook, each driving about 20 percent of annualized rent revenue. Rackspace is third biggest, driving about 11 percent, and Yahoo is fourth, driving 7 percent of revenue. Other customers include Dropbox, Symantec, Server Central, and UBS.
The company chose Toronto after an extensive search and analysis of supply and demand dynamics across top markets in North America. Ontario’s capital, the fourth largest city on the continent by population, has robust demand for data center space, and especially wholesale, Davis said.
“Dallas (for example) is a very good market but pretty saturated from a provider standpoint, whereas Toronto has very few providers,” he said. “It’s actually underserved. We’d rather be in early to an emerging market than be the last ones into an established market.”
There are a few retail colocation providers serving Toronto, and the wholesale providers that have gone into the market have done well there, he said, citing competitor Digital Realty Trust as an example. Digital, Davis said, "sold pretty quickly, but it wasn't really large-scale."
The exact capacity DFT will bring to the Toronto market is still undetermined, but the company has no plans to deviate from its model of building massive campuses whose scale enables it to drive down cost for its customers.

4:30p
Understanding the Role of Flash Storage in Enterprise IT
More organizations across a number of industries are looking at different ways to control their storage and data. Traditional storage solutions still have their place, but new methods are giving IT shops a lot more flexibility in how they design their storage, and flash is one of the most popular options. So is it really catching on? Is the world really going solid-state?
Let’s examine one use case that’s been seeing a resurgence in the modern enterprise: VDI.
In the past, technologies like VDI were seen as heavy forklift projects that required time, resources, dedicated infrastructure, and big budgets. That has all changed with advancements in network, compute, and storage. Today, strong VDI offerings provide five-nines availability and greater scalability, as well as non-disruptive operations. With this in mind, it's important to note that for a truly successful VDI deployment, all-flash storage should be part of the VDI ecosystem. Ultimately, this will enable much higher performance for end users.
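For context, "five nines" is a concrete target: 99.999 percent availability works out to roughly five minutes of unplanned downtime per year. A quick back-of-the-envelope check (plain arithmetic, not a vendor SLA):

```python
# Back-of-the-envelope: annual downtime implied by an availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} availability -> about {downtime_minutes:,.1f} minutes of downtime per year")
```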
Oftentimes, with sub-millisecond latency from all-flash storage in the background, the user experience is even better than what users had with physical devices, and definitely better than VDI backed by spinning disks or even hybrid storage. This type of technology has become one of the big change factors that now enable successful VDI deployments.
Higher-performance flash does not necessarily mean higher cost. From an economics perspective, new types of all-flash systems actually provide some of the highest levels of data reduction. With greater than 10:1 data reduction capabilities, the cost of high-performance storage is much more reasonable than one might expect, especially given that companies tend to assume flash is a premium storage offering with premium prices.
The reality is that large data reduction capabilities result in a lower cost of storage, greater data center real estate savings, and savings in power and cooling. All of this translates into a lower total cost of ownership as well as a lower cost per desktop.
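To see why the reduction ratio matters so much, here is a simple illustration of effective cost per usable gigabyte. The raw prices and ratios below are assumptions chosen for the example, not quotes from any vendor:

```python
# Illustrative only: effective cost per usable GB once data reduction is applied.
# Raw $/GB figures and reduction ratios are assumed numbers, not vendor pricing.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per GB of logical (post-dedupe/compression) data."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 1.50   # assumed raw $/GB for all-flash capacity
disk_raw = 0.30    # assumed raw $/GB for spinning disk

print(f"All-flash at 10:1 reduction: ${effective_cost_per_gb(flash_raw, 10):.2f} per usable GB")
print(f"Disk at 2:1 reduction:       ${effective_cost_per_gb(disk_raw, 2):.2f} per usable GB")
# With these assumptions, the two land at the same $0.15 per usable GB.
```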
All-flash and solid-state solutions are revolutionizing resource utilization and reshaping the economics of the data center. Many of the older preconceptions we had about all-flash are quickly going away as more organizations adopt flash for its many benefits.
To really understand this evolution in storage, we need to look at the good parts of flash technology, the challenges, and what the future holds.
The Good
Let's start with cost. You'd think that all-flash comes at a premium, but you should definitely crunch the numbers again as they relate to your business. You can save a lot of money by nixing a massive storage controller and opting for a complementary all-flash array. That sounds great, but what about management? Well, here's where we usually talk about software-defined storage. However, modern storage management goes beyond SDS and into actually controlling enterprise functionality from the logical layer.

The next generation of data abstraction won't care about your hypervisor, your underlying hardware platform, or even which data center you're using. New types of all-flash storage systems create greater levels of data control and management. Furthermore, these data abstraction layers offer deduplication, data encryption, acceleration, and more. New flash controllers are smaller, smarter, faster, and less expensive. Plus, they fit more of your business's use cases and integrate much more seamlessly into cloud and virtualization layers.
The Bad
It's not all sunshine when it comes to controlling cloud and data storage. It's certainly exciting to have so many more conversations around next-generation storage solutions, but we also have to be realistic. We're seeing adoption happen already, but it's use-case-specific rather than enterprise-wide. Building an enterprise flash solution also means moving away from a tried-and-true method of data control. What if your data simply needs that underlying controller resource? What if you need a proprietary replication method that only a traditional enterprise controller currently offers? What if your apps require a validated design for support? These are all legitimate challenges to deploying an all-out solid-state solution.
Finally, although cost is becoming less of an issue, it's still a big consideration. Certain data types simply do not need to live on an all-flash array. Information that needs to be archived or just doesn't require much performance is better suited to traditional disk environments. And so, the role of all-flash within the enterprise will have to remain use-case-specific. It's not feasible for an enterprise organization to rip out its entire storage ecosystem and replace it with all-flash; right now, that's just not economical. However, as cloud systems evolve and offsite storage and archival become easier to integrate with, we'll see all-flash controlling a higher percentage of the enterprise ecosystem.
The Future
Big concerns around all-flash arrays are reliability, management, and resiliency. The good news is that you can now meet your uptime requirements on solid-state technologies while abstracting powerful controls into the logical layer. Virtualization and next-gen data movement let you create intelligently replicated, resilient storage platforms.
Now, let's talk about the future a bit. Commodity systems are going to make an impact. Storage administrators are already actively looking at options outside of traditional storage; just look at how well newer vendors like Nimble and Pure are doing. With that in mind, the data control layer will only continue to evolve. The future of data abstraction will revolve around the ability to manage heterogeneous environments without complexity. An organization will be able to acquire a new branch or division without having to rip out its existing infrastructure. It simply needs to import a VM, point storage repositories and data sources at that VM, and allow it to do the rest. Powerful REST APIs are also changing how we integrate with cloud automation and management solutions like OpenStack, CloudStack, and others.
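As a rough illustration of that kind of REST-driven integration, provisioning a volume for a newly imported VM might look something like the sketch below. The endpoint, payload fields, and token are hypothetical placeholders, not any specific vendor's API:

```python
# Hypothetical sketch of REST-driven storage automation.
# The base URL, fields, and token below are placeholders, not a real vendor API.
import requests

ARRAY_API = "https://storage-array.example.com/api/v1"
TOKEN = "replace-with-a-real-token"

def provision_volume(name: str, size_gb: int, dedupe: bool = True) -> dict:
    """Ask the (hypothetical) array to create a volume and return its details."""
    resp = requests.post(
        f"{ARRAY_API}/volumes",
        json={"name": name, "size_gb": size_gb, "dedupe": dedupe},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Called, for example, from an OpenStack or CloudStack orchestration workflow:
# volume = provision_volume("branch-office-vm-datastore", size_gb=500)
```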
New types of applications and use cases are demanding more resources and better delivery methodologies. Commodity platforms, virtualization, and the proliferation of all-flash technology will profoundly impact the storage market moving forward. And, as much as it's all changing, it will be up to the enterprise to understand its own evolving use cases and see where new types of solid-state technologies have a direct fit.
Admins can now pick and choose their storage platform. Applications, data sets, and users are all influencing how we create data and support our organizations. Commodity systems and converged platforms are all playing their part in shaking up the storage industry. The bottom line is there are new options out there. And, in a growing number of use cases, all-flash systems make a lot of sense.

5:00p
Detection v. Prevention: The Next Step in Enterprise Security
Nir Polak is CEO and Co-founder of Exabeam.
There's one thing every heavily publicized data breach has in common: it wasn't uncovered until it was too late. The breach at the U.S. Office of Personnel Management (OPM) in February was still active more than three months after security workers learned of it. In fact, many of these breaches have another thing in common: preventive security measures weren't enough to stop them.
Prevention has always been a major component of security. Firewalls stand at the perimeter of sensitive, private networks and attempt to keep every malicious file out. As the OPM breach and countless other disasters prove, though, that's just not enough: more than 21 million records were compromised before the breach was even detected. Prevention-focused initiatives have a place in cybersecurity, but there needs to be more. As we move into 2016 and confront new threats, detection needs to become an equally significant component of enterprise IT security standards. As in so many other parts of the enterprise, the answer to improving network security and eliminating disasters comes in the form of analytics derived from big data.
Prevention Isn’t Enough
Security without a method of stopping attacks already in progress makes it impossible for businesses to stay in front of cybercriminals. Too often, security teams and businesses only hear of attacks after a third party informs them of a possible issue. Instances of unusual data movement, suspicious remote logins, and other warning signs pile up, and it's clear something is wrong. But without a clear picture tying those events together, it can take weeks or even months for a problem to be detected and even longer for a resolution.
Beyond that, it has become increasingly difficult for most solutions designed for prevention to keep up with cybercriminals. Threats evolve every day, with hackers staying well ahead of the software and solutions designed to protect networks. A few small adjustments to the code of a virus or a misstep by an employee, such as downloading a tainted attachment, and all of that time spent trying to prevent disasters is undone, especially without anything in place to detect cybercriminals after they enter a network.
Simplifying Detection By Monitoring for Suspicious Behaviors
User behavior analytics (UBA) solutions monitor all network activity to help security teams identify issues as they're developing. Instead of learning of a six-month-old breach from a third party, companies are alerted to issues as they happen. By analyzing all users on a network, UBA tools develop a clear understanding of typical behavior; whenever an account strays from those established norms, the activity is flagged as an anomaly. Companies currently employ security information and event management (SIEM) systems that pull log data of behaviors but still spew out thousands of alerts a day. When SIEM is supplemented with UBA tools, security workers know what to look for, so they can identify issues promptly and address them before they compromise any more data.
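A minimal sketch of that baseline-and-deviation idea is shown below; it is an illustration of the concept, not how any particular UBA product is implemented. It assumes we already have per-user daily counts of some behavior, such as remote logins:

```python
# Minimal baseline-and-deviation sketch -- an illustration, not a vendor's UBA engine.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it strays more than `threshold` std-devs from the user's baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # any change from a perfectly flat baseline is suspicious
    return abs(today - mu) / sigma > threshold

# A user who normally logs in remotely 2-4 times a day suddenly does it 40 times:
print(is_anomalous([2, 3, 4, 2, 3, 3, 4], 40))  # True -> surface an alert for analysts
```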
UBA tools' algorithms learn from every bit of activity on a network. As they gather and assess more information, they become an even more effective part of a holistic security protocol. This is increasingly important because attackers evolve and change course even after they've been detected; solutions designed to spot just one or a few forms of anomalous behavior won't necessarily be able to keep track of an intruder. By automating detection and analysis, teams are able to respond quickly and focus on the major problems. Digging through the countless alerts they receive every day is inefficient and doesn't typically stop attacks. When security teams know which alerts are the most dangerous, they can put their resources into the most problematic issues.
Businesses' investments in SIEM solutions have helped both IT and security teams do their jobs more effectively. Working UBA into the equation makes SIEM more useful, leveraging the data in those repositories to grow smarter and more sensitive to ongoing attacks. UBA provides in-depth analysis of any and all suspicious users, detailing their every move through a network. This gives security teams the ability to identify every piece of data likely associated with a breach and take the steps required to resolve incidents with greater accuracy. Much of this work is currently done manually, wasting time and inevitably resulting in inaccuracies.
Advancing security protocol with better detection capability through UBA solutions has quickly become a competitive necessity. When these issues persist, companies struggle, but, more importantly, customers take their business elsewhere.
6:00p
Electronic Frontier: TPP Language on Open Source Heavy-Handed
This post originally appeared at The Var Guy
Should governments be able to force source code to be open? Arguably, yes. But the Trans-Pacific Partnership agreement prevents authorities from requiring that, as the Electronic Frontier Foundation warned recently. As a result, the TPP places severe restrictions on open source software.
The TPP, which was finalized this October, is a trade agreement between a dozen countries in the Pacific region. Among its many stipulations is an interesting one regarding open source code. As the EFF observed, an article of the TPP reads:
“No Party shall require the transfer of, or access to, source code of software owned by a person of another Party, as a condition for the import, distribution, sale or use of such software, or of products containing such software, in its territory.”
In other words, the trade agreement bars governments from requiring that software source code be shared if the entity that owns the code does not wish to share it.
This may not seem very remarkable at first glance. Most of the licenses and copyrights that protect proprietary source code already prevent it from being shared without the owner’s explicit permission. In fact, obtaining and reusing the source code of proprietary software without authorization by the code owners would be a crime in most situations.
But as the EFF notes, there are situations where it could make sense for government regulators or other authorities to require that source code be open to certain third parties, if not to the public at large. Those situations involve public health and safety.
For example, regulators might have a reasonable interest in inspecting software on devices that host personal data for flaws that could violate users’ privacy. IoT devices like home alarm systems and wearables could be safer if a third party were able to check their code for malware. Connected cars and medical devices are hard to check for safety if the only people with access to their software source code are the manufacturers.
How governments are to regulate software on devices like these, and the extent to which such code should be open source in the traditional public sense, are questions still being answered as software continues to evolve from something that powers only traditional computers into a ubiquitous presence that controls nearly every aspect of human life.
It seems clear for now, however, that explicitly preventing regulators from requiring code to be opened for inspection, as the TPP does, is probably a bad idea. It also seems like a step backward in an era when the clearly dominant trend in software is to make code more open, not less. After all, even Microsoft has learned to love open source in the past year.
Does that make the TPP one of the worst threats to open source software of 2015? It just might.
This first ran at http://thevarguy.com/open-source-application-software-companies/tpp-trade-act-threatens-open-source-iot-and-beyond-eff-sa