Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 16th, 2016
| 12:00p |
Three Steps to Manage, Protect and Secure Data in 2016
Jaspreet Singh is CEO of Druva.
To say that we live in a data-rich world is an understatement.
Every day, we create and store 2.5 quintillion bytes of data. In fact, 90 percent of the data in the world today has been created in the last two years alone, according to IBM. The amount of data created on digital information platforms daily is eight times greater than the information stored in all of the libraries in the U.S. (source: LexisNexis, 2015).
Exponential Data Creation
In addition to this exponential data creation, the mechanism for storing, managing and protecting these massive volumes of data has changed. We now live in the era of the “borderless enterprise,” a fluid and adaptable ecosystem whose resources are accessed, used and shared via mobile, social and cloud. Historically, IT had full control of protecting data, and data was centralized in a company’s data center. During the last few years, mobility and the cloud have literally pushed data out of the data center and onto various devices. Rather than remaining under control and saved on central IT systems, data is increasingly being created on mobile devices and laptops that never touch the corporate network.
According to a Ponemon Institute study, 44 percent of corporate data stored in cloud environments is not managed or controlled by the IT department. As a result, enterprises are tasked with protecting data across an increasingly dispersed landscape while also keeping it compliant with enterprise policies, global data privacy laws and industry-specific regulations (HIPAA, GLBA, COPPA). And as recently as November 2015, analyst firm Gartner predicted that by 2018 employees will have four personal computing devices they can use for work, over and above any corporate IT assets provisioned for them.
Data Protection: More Devices, More Channels
In addition to an increasing number of devices in the enterprise, business is being conducted via more channels than ever before. Rather than email, phone and documents being the formal communication channels, there are now additional channels that have to be supported. Instant messaging and chat apps may be used for real-time communications, while text messages and social media are also used to share work-related information. These text or IM messages may never touch official company IT assets, yet they may have a material impact on company decision making.
How do enterprises then manage and protect data coming from different devices through a variety of channels? This proliferation of new devices and channels represents a huge change when it comes to handling and managing data. For companies, this move from the center to the edge – this decentralization – means that data protection and compliance have to evolve as well.
Compliance Changes
Let’s talk about compliance. In addition to HIPAA and GLBA, there are many mandates and regulations businesses need to keep top-of-mind. The EU’s recently agreed General Data Protection Regulation (GDPR) clearly defines personal data and provides a common set of mandates across all 28 member states, including provisions for data handling, cloud computing and data breach notification, along with requirements for companies doing business with EU organizations. What does this mean for businesses? Enterprises will need to examine their approach to handling and managing customer data – both for general data protection and for more specific compliance mandates.
Data Protection and Compliance: What Companies Should Know
There are three steps that organizations can take to plan ahead and improve collaboration when it comes to protecting data and meeting compliance-related regulations:
Step #1: Recognize that data protection and compliance are two sides of the same coin. When done right, data protection practices protect the business at all times, capturing all data that employees create across the business and moving it to a secure secondary location. Compliance doesn’t work the same way; while there may be processes that have to be “compliant” in order for the business to run its operations, the compliance team tends to get involved only when there is a change in regulation or when an audit needs to take place.
This compliance investigation will normally demand a huge amount of time and effort to complete as the IT team searches across company files, emails and records for the required information. This workload can be reduced through smarter auditing and management of data ahead of any incident, particularly when it comes to data that might live on mobile devices.
Step #2: Move from reactive compliance to a proactive approach. As I mentioned above, compliance teams tend to operate in reactive mode, because many events can’t be forecast. While some tasks come up every year – auditors checking on financial performance, for example – the most critical compliance events cannot be predicted. An external incident can lead to a full-scale audit, and the availability of all relevant information and data is required to meet the demands of that audit. In that case, the speed with which information can be provided is essential.
This approach requires two things: automating the process of meeting any relevant compliance regulations, and automating how files and data are classified as they are created across the business. To automate the overall process, first check which regulations apply to each part of the business, then define how the requirements can be met. Alongside this, sensitive information such as protected health information (PHI), personally identifiable information (PII), payment card information (PCI) and confidential intellectual property (IP) may be created or used in new files and data all the time.
As new files are created, they can be automatically checked for any information that should be handled for compliance, and then put through the necessary process. By making use of more automation, the data protection and compliance teams can quickly assess and take corrective action for non-compliance on regulated or policy-managed end-user data.
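To make that idea of automated classification a little more concrete, here is a minimal sketch in Python. It assumes a couple of simple, hypothetical regular-expression rules for common sensitive-data patterns; a real deployment would lean on a proper data governance or DLP product rather than hand-rolled patterns, so treat the rule set and the classify/route functions as illustrative only.

```python
import re

# Hypothetical classification rules: label -> pattern.
# Real systems use far richer detection (checksums, context, ML models).
PII_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def classify(text: str) -> set:
    """Return the set of sensitive-data labels found in a piece of text."""
    return {label for label, pattern in PII_RULES.items() if pattern.search(text)}


def route_for_compliance(path: str, text: str) -> None:
    """Tag newly created files so the compliance process can pick them up."""
    labels = classify(text)
    if labels:
        # In practice this would update an index or trigger a workflow,
        # not just print a message.
        print(f"{path}: flagged for compliance handling -> {sorted(labels)}")
    else:
        print(f"{path}: no regulated data detected")


route_for_compliance("note.txt", "Patient SSN is 123-45-6789, contact a@b.com")
```

The point is not the specific patterns but the workflow: checks run as files are created, so the compliance team isn’t reconstructing data lineage after the fact.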
Step #3: Cover the whole enterprise, not just the center. As mentioned earlier, a great deal of computing and data creation is taking place at the edge of the organization – from remote workers on multiple devices.
All these employees are creating data and files that have to be stored somewhere; the challenge is that there is often no official process for controlling and protecting that data. Rather than focusing on the central IT systems, the move to remote working and greater use of cloud applications means that organizations need to focus on the endpoint instead.
As we produce, access and manage data on more devices within different channels from disparate locations, proactivity will be key to organizations’ survival. Proactivity, collaboration and planning are prerequisites for getting ahead of the data protection and compliance curve – and staying one step ahead.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 7:12p |
Docker Container Orchestration: What You Need to Know
Kubernetes. Mesos. Marathon. Centurion. If you’re a newcomer to Docker containers, these names probably sound like terms from a classical history book. But they’re actually container orchestration platforms, and they’re only a few of the many available. Confused? Here’s what you need to know about container orchestration.
To understand container orchestration, you first need to understand containers, of course. Today, when people talk about containers, they’re usually referring to virtualized apps that use Linux kernel features such as namespaces and cgroups (originally exposed through LXC) to isolate code from the underlying system.
On their own, containers only let you run individual apps. That doesn’t make them very useful. If you just want to run a few apps, you can do it more easily without containers.
But if you have a large number of apps to run — as you likely do if you have migrated to the cloud — containers come in handy. They make it easy to turn apps on and off to meet fluctuating demand. They also let you move apps seamlessly between different servers.
Unless you’re super-human, though, you can’t move container apps around very efficiently on your own. You need a management platform that will automatically spin containers up, suspend them or shut them down when needed — and, ideally, also control how they access resources like the network and data storage.
That’s where orchestration platforms come in. They provide this piece of the container puzzle.
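For a rough sense of the lifecycle work an orchestrator takes off your hands, here is a small sketch using the Docker SDK for Python (the docker package). It starts, inspects and stops a single container by hand; an orchestration platform does this continuously, across many hosts, driven by desired-state rules. The image name, port mapping and container name are just examples.

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Start a throwaway nginx container, mapping container port 80 to host port 8080.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},
    name="demo-web",
)

# Check its state. An orchestrator would poll health like this and restart or
# reschedule the container elsewhere if it died.
container.reload()
print(container.name, container.status)

# Tear it down again. Orchestrators do this automatically when demand drops
# or a newer version of the app is rolled out.
container.stop()
container.remove()
```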
Decisions, Decisions
The container ecosystem is basically open source. Open source programmers like choice. So it’s no surprise that there are now dozens of orchestration platforms that can handle containers.
Discussing all of them here would make for a very long read. So we’ll outline just the top orchestration platforms:
- Kubernetes: Descended from Borg, a platform Google developed to manage its own infrastructure, Kubernetes is well suited for very large cloud environments. It’s complex and sophisticated, which is great if you have a large data center, but overkill if you only need to manage a few hundred containers (see the short scaling sketch after this list).
- Swarm: This is Docker’s home-grown orchestration platform. Docker claims it’s faster than the competition, but results will probably vary depending on how big your environment actually is. If you build your container infrastructure using other Docker components, though, it probably makes sense to use Swarm, too.
- Mesos: Hosted by Apache, Mesos is designed for general datacenter management, not just containers. But it supports them, too. It’s an obvious orchestration choice if you are building a cloud that hosts not just containerized apps but also other virtual or bare-metal servers.
- Kontena: A new, up-and-coming (so we believe) orchestration tool that promises to “maximize developer happiness.” It’s designed for ease-of-use, making it a good option for admins new to the container game.
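As promised above, here is a brief, hedged sketch of what an orchestration operation can look like in practice, using the official Kubernetes Python client to scale a Deployment. The deployment name and namespace are placeholders, and the snippet assumes a working kubeconfig; it is meant only to show the declarative flavor of these platforms, not a production workflow.

```python
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Ask Kubernetes for a new replica count; the cluster converges to match."""
    config.load_kube_config()  # assumes ~/.kube/config points at a cluster
    apps = client.AppsV1Api()

    deployment = apps.read_namespaced_deployment(name=name, namespace=namespace)
    deployment.spec.replicas = replicas
    apps.patch_namespaced_deployment(name=name, namespace=namespace, body=deployment)
    print(f"Requested {replicas} replicas of {namespace}/{name}")


# Hypothetical deployment name -- adjust for your environment.
scale_deployment("demo-web", "default", replicas=5)
```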
Again, we could go on. But we think these are the essentials. If your favorite container orchestration tool is not listed here, feel free to let us know why you think it should be.
This first ran at http://talkincloud.com/cloud-computing/docker-container-orchestration-what-you-need-know
| 7:40p |
Steve Garvey: Living Out His Dreams
It’s something most 7-year-old boys can only dream about: being bat boy for his hometown teams, the Brooklyn Dodgers and the N.Y. Yankees, and later the Detroit Tigers during spring training.
During those five awestruck years between 1956 and 1961, a young Steve Garvey took the baseball field among the sport’s biggest heroes: Roy Campanella, Jackie Robinson, Gil Hodges, Sandy Koufax, the list goes on—all members of the champion Dodgers.
The Data Center World keynote speaker recalled memory after memory on Tuesday; one even drew a collective gasp from attendees when he talked about hearing the crack of the bat, and not just any bat: the bat of home run king Mickey Mantle.
The packed room clearly knew baseball.
It’s not so much what Garvey learned about the physicality of baseball that he remembers most but the words of wisdom from players like Robinson, who became the first black player in MLB in 1947.
“You can be the best player on the field, but if you can’t think, problem solve, deduce, be passionate and use the knowledge you have, you’re probably not going to be a professional,” the now 67-year-old paraphrased. “You play hard, you look and you listen.”
Little did that Brooklyn boy know that he had barely scratched the surface of dreams.
Garvey took those words, and many more, to heart during his years in Little League, up through high school, into college, as a professional and today as a retired champion.
He has a laundry list of accomplishments on the field as a first baseman for the Los Angeles Dodgers and the San Diego Padres. The ones he appeared most proud of as he spoke about his career didn’t focus on a batting stance or throwing motion.
“You must be passionate, you must dedicate yourself, and you must be relentless in the pursuit of your goals. If you do, you will be successful.”
For example, picture this infield from the 1975 Dodgers: Garvey’s on first base, Davey Lopes at second, Bill Russell at shortstop and Ron Cey at third. This foursome played together for a major league record eight seasons—one that Garvey insisted will never be broken.
Then there’s the streak of playing 1,207 consecutive games from Sept. 3, 1975 to July 29, 1983, a National League record. That landed him on the cover of Sports Illustrated as baseball’s “Iron Man.”
Nothing kept him from heading to the ballpark, suiting up, and playing each game with the same level of intensity. Garvey recalled playing at times with a hyperextended elbow, 22 stitches in his chin, a pulled hamstring, a bruised heel, a migraine, the flu, a 103° fever and a toenail so impacted they had to drill a hole in it to relieve the pressure.
Of course, we can’t leave out the fact that in his 19-year career, Garvey was a .294 hitter with 272 home runs and 1,308 RBI. And he played an entire season at first base without an error. Interestingly, Garvey had little to no experience at first base when he was asked to play the position.
“In our lives sometimes we’re asked to do something we may not understand, or not feel good about; but you have to seize the moment,” he said. “Sure, you make mistakes and you keep getting knocked down. But, if you don’t reach for perfection, you won’t accept excellence.”
Following his presentation, Garvey asked the audience for questions. One man stood up and asked for his thoughts about not yet being inducted into the Hall of Fame after all these years. It’s a question he’s been asked over and over again.
“When it comes down to it, it’s not so much about the back of my jersey (Garvey) as it is the front (Dodgers, Padres).” Despite living out many dreams throughout his life, the Hall of Fame remains the most elusive.
“To be honest, I am disappointed. I always thought of my career as a body of work and not just about numbers.”
| 9:35p |
IT Innovators: Delivering Television as a Service via the Public, Private and Hybrid Cloud
By WindowsITPro
Software as a service (SaaS) continues to be one of the more significant growth areas in the tech field. And it’s clear that the way we watch television is being rapidly changed by technology—in particular, cloud-based services like Netflix. Television as a Service (TVaaS), then, is a logical combination of these two areas.
Viaccess Orca, a global leader in the protection and enhancement of content services, is both leveraging the advantages of cloud-based television services and working through the kinks. The model becomes ever more appealing as the speed of the cloud continues to increase, which is a key consideration when streaming data-heavy high-definition video content.
“What we are offering is not a product or solution any more, it’s a service,” says Chem Assayag, executive vice president of sales and business development at Viaccess Orca. “We are taking care of many aspects of the solution that the content provider would have to take care of himself.”
Readers are likely familiar with “TV everywhere” services that allow consumers to access television and movie offerings on a variety of hardware, anywhere they happen to be, whether on a television at home, a phone on the city bus, or a tablet in an airport. What Viaccess Orca offers, for a variety of clients, is the SaaS that gets that service to consumers.
And while things are intentionally uncomplicated on the user end, they are considerably more complicated on the developer end. “Things might look simple from the outside,” Assayag says, but “it gets more complicated as you look further into it.”
An Increasingly Popular Option
Viaccess Orca’s TVaaS offering makes use of both the public and private cloud. Content subscribers can access the former, and the latter stores deep data on users’ viewing choices, content preferences, and engagements for the providers.
But along with the options for customization for both the client and their customers, the cloud offers other selling points. The scalability of a cloud-based model is a key financial advantage, Assayag says, both for Viaccess Orca itself and for its customers. A scalable solution lets a customer get in at an early stage, with costs that grow along with the company and, presumably, its revenues.
And being able to offer a solution to customers at every level, not just those with a large budget to spend up front, widens Viaccess Orca’s potential for sales. The scalability lets smaller companies get in the game, Assayag says, and also provides their customers with the flexibility to change their services as their needs shift.
Another advantage of the cloud-based model is that it offers considerable savings at the outset, Assayag says. “With a non-cloud model, you’d have machines sitting in the customer premises.” But with a cloud solution the “machines,” or rather the software a machine would typically run in the home, exists in the cloud. This allows customers to pay for access, which costs less than buying the hardware themselves. It also reduces the hardware needs of the client by using a hybrid-cloud model.
“You don’t have to spend a significant amount upfront,” Assayag says.
Today, consumer awareness of TVaaS options remains somewhat low. A survey from Altman Vilandrie and Co., commissioned by Epix and released in October 2015, found that only about a third of Americans are aware of “TV everywhere” offerings. But the same survey also found that the number of people cutting the cord of traditional cable is currently at 17 percent and rising. That means companies that hit the sweet spot between security, usability and speed–on both ends of the service–have an opportunity to capture several areas of this growing, tech-minded market.
Terri Coles is a freelance writer based in St. John’s, NL. Her work covers topics as diverse as food, health and business. If you have a story you would like profiled, contact her at coles.terri@gmail.com.
The IT Innovators series of articles is underwritten by Microsoft, and is editorially independent.
This first ran at http://windowsitpro.com/it-innovators/it-innovators-delivering-television-service-public-private-and-hybrid-cloud
| 9:51p |
Moving Away from AWS Cloud: Dropbox Isn’t an Anomaly, and Here’s Why
By The WHIR
As Amazon Web Services was blowing out the candles on its 10th birthday cake, AWS customer Dropbox was not afraid to be a bit of a party pooper.
Let me explain: WIRED published an article on Monday outlining how Dropbox had decided to move most of its infrastructure away from AWS cloud. Essentially, the San Francisco-based cloud storage company has mostly outgrown AWS cloud, and for the past two-and-a-half years has been building its own infrastructure. With 500 million users, Dropbox has grown considerably since its launch in 2008, and now 90 percent of its data is stored on its custom-built infrastructure.
If you’re wondering how it’s possible to outgrow a global infrastructure like AWS, which spans 12 geographic regions with 5 more to be added this year, it’s a fair question. Dropbox, and others, are doing what they can to answer it.
It Costs How Much?
A big factor is cost. While AWS is convenient and enables customers to spin instances on and off, it is not cheap. This narrative certainly is nothing new; several years ago marketing software startup Moz started to move away from AWS cloud in favor of its private cloud. In 2014, Moz CEO Sarah Bird said that it was spending “$6.2 million at Amazon Web Services, and a mere $2.8 million on [its] own data centers.” Simply put, the cloud killed its margins.
Despite the AWS price drops over the years, Dropbox told WIRED that it gets “substantial economic value” by operating its own infrastructure. Dropbox VP of engineering Akhil Gupta rightly said “[n]obody is running a cloud business as a charity…there is some margin somewhere.”
On the other hand, hiring and retaining the people to create a custom-built infrastructure is expensive. Hardware costs add up. So does cooling. And real estate. And so on and so on. You have to seriously consider all of the costs involved before you start moving data around.
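One way to frame that decision is a back-of-envelope total-cost comparison. The sketch below uses entirely made-up numbers for the cloud bill, hardware amortization, staff, power and space, purely to show the shape of the calculation; real figures vary enormously by workload and scale.

```python
# Back-of-envelope cloud vs. self-hosted comparison.
# Every number below is a hypothetical placeholder, not a benchmark.

def annual_cloud_cost(monthly_bill):
    return monthly_bill * 12


def annual_self_hosted_cost(hardware_capex, amortization_years, staff, power_cooling, space):
    # Spread hardware purchases over their useful life, then add running costs.
    return hardware_capex / amortization_years + staff + power_cooling + space


cloud = annual_cloud_cost(monthly_bill=500_000)
own = annual_self_hosted_cost(
    hardware_capex=6_000_000,
    amortization_years=4,
    staff=1_200_000,
    power_cooling=600_000,
    space=400_000,
)

print(f"Cloud:       ${cloud:,.0f}/yr")
print(f"Self-hosted: ${own:,.0f}/yr")
print("Self-hosted wins" if own < cloud else "Cloud wins")
```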
Hybrid, Hybrid, Hybrid
“Dropbox stores two kinds of data: file content and metadata about files and users,” Gupta said in a blog post. “We’ve always had a hybrid cloud architecture, hosting metadata and our web servers in data centers we manage, and storing file content on Amazon. We were an early adopter of Amazon S3, which provided us with the ability to scale our operations rapidly and reliably. Amazon Web Services has, and continues to be, an invaluable partner—we couldn’t have grown as fast as we did without a service like AWS.”
Indeed, a hybrid approach is what Dropbox’s infrastructure will look like for the foreseeable future. The company said it plans to invest in its own infrastructure and will partner with Amazon “where it makes sense”; for instance, later this year Dropbox will expand its partnership with AWS to store data in Germany for European business customers that request it. It’s unclear how many customers or how much data that involves, but Germany is known for being particularly complex and stringent when it comes to data security, so having a partner to navigate that is helpful.
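To illustrate what a hybrid split along the lines Gupta describes can look like, here is a minimal, hypothetical sketch: metadata is recorded in a locally managed database while the bulk file content is written to Amazon S3 with boto3. The bucket name, SQLite schema and helper function are invented for illustration and are not Dropbox’s actual implementation.

```python
import hashlib
import sqlite3

import boto3

# Metadata lives in infrastructure "we manage" (here, a local SQLite file);
# bulk file content goes to S3. Bucket and schema are invented for illustration.
s3 = boto3.client("s3")
db = sqlite3.connect("metadata.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, sha256 TEXT, size INTEGER)"
)


def store_file(path, data, bucket="example-content-bucket"):
    """Write content to S3 and record the metadata locally."""
    digest = hashlib.sha256(data).hexdigest()
    # Content-addressed key: identical content is stored only once.
    s3.put_object(Bucket=bucket, Key=digest, Body=data)
    db.execute(
        "INSERT OR REPLACE INTO files (path, sha256, size) VALUES (?, ?, ?)",
        (path, digest, len(data)),
    )
    db.commit()


store_file("/reports/q1.pdf", b"example file contents")
```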
With Tech, Could What’s Old be New Again?
If you follow fashion trends, you may know that a common school of thought is that trends cycle every 20 years. It does help explain the recent resurgence of 90s choker necklaces.
So if what’s old is new again, could tech trends like colocation have a comeback? Is owning a data center cool again? Not necessarily. Technology and the cloud are not one-size-fits-all. I expect we’ll continue to see companies augment their investments in their own infrastructure with AWS, and to see consultants and others that serve particular markets (enterprise, for example) add more services around moving workloads away from, and not just to, the cloud. This is where hosting providers and others come in. You know how hard it is to run a data center. You know what it takes to migrate a ton of data from one place to another. Capitalize on that knowledge and it could pay off big time.
Analysts believe Apple could be one of the next companies to move away from AWS. In February, analysts at Morgan Stanley wrote in a note that Apple adding three new data centers to its fleet could signal plans to move away from AWS over the next 18-24 months.
Of course, Dropbox and Apple are among a special breed of companies. Moving away from AWS may not be a macro trend; for every company that moves away from AWS, there is a company like Netflix going all-in. Then again, I never thought I’d regret throwing out a tattoo choker necklace I wore in the 90s, but here we are.
This first ran at http://www.thewhir.com/blog/moving-away-from-aws-cloud-dropbox-isnt-an-anomaly-and-heres-why
| 11:52p |
Why Data Center Managers Should Care about DevOps
While developers and IT operations professionals have been excited about the concept of DevOps, data center operators, the people who run the infrastructure for the teams upstream, haven’t generally been involved in the conversation. Jack Story, distinguished technologist at Hewlett Packard Enterprise, thinks that is a mistake.
And people make that mistake because there is a lot of confusion about what DevOps is and isn’t. In a session at this week’s Data Center World Global conference in Las Vegas, Story attempted to make the case that data center operators should be part of the DevOps process and explain what it is.
A lot of confusion about DevOps comes from the misconception that it is about tools and automation. “It is not about automating the processes that you have today,” Story said. “It is not a tool. It is a cultural and organizational change.”
More than anything, a switch to DevOps is a cultural one, and that is the kind of change that is the most difficult for any organization. “We resist change in our profession; we resist change as human beings,” he said.
But change in IT today is a necessity. Even if you aren’t providing cloud services as an IT organization, your customers, be they marketing directors or developers, generally expect you to provide services the way a cloud provider would.
“It’s the perception that cloud can be cheaper; it’s the perception that cloud can be faster.”
What do they expect exactly? They expect all the attributes of cloud computing the National Institute of Standards and Technology identified five years ago in its definition of the model:
- On-demand self-service
- Broad network access
- Resource pooling
- Rapid elasticity or expansion
- Measured service
This pressure is forcing IT organizations to rethink the silos they know and love. The strict delineation between network, server, storage, and facilities teams just doesn’t work for DevOps.
It is about enabling developers to switch from the “waterfall” model, where a new software deployment is given a long timeframe, allowing a lot of time for preparation and planning before it goes live, to agile methodologies, whose core elements are collaboration across teams, deploying quickly, and improving continuously.
It is about tightly aligning software development with business needs, because the waterfall model doesn’t work in this day and age. The feature or product is simply outdated by the time it goes live.
Business requirements today don’t stay static, so there’s no room to be static for the technology teams that serve the business.
Marrying operations with development to enable the DevOps model is basically about drawing all stakeholders, from development down to data center operations, into a single collaborative process.
HP’s internal IT organization made the transition and saw the benefits almost immediately, Story said. Those benefits were greater velocity, quality, and stability.
The recent split of the company in two went a lot smoother because the IT organization was functioning this way, he added. The split into HPE and HP Inc went almost unnoticed by the employees, as far as the tools they were using were concerned, but the work that was done in the background by the IT teams was “tremendous,” Story said.
The process is still ongoing at HPE, because there is never a point at which you can say, ‘OK, we are now DevOps.’ It is a philosophy and a methodology, and not everybody in the organization will like it, he warned, so not everybody will stay onboard.
“At a couple of environments we had to have people removed,” he said. “A lot of conversations are going to get emotional, because it is change.”