Wednesday, June 24th, 2015
CoreOS CEO: Application Container Standard Good News for IT Ops

SAN FRANCISCO – Developers may not be the only crowd that will benefit from the application container standardization effort announced at this week’s DockerCon here. It is bound to make life easier for data center managers and other IT staff who oversee the infrastructure where much of the code developers write will ultimately end up running.
That’s according to Alex Polvi, CEO of CoreOS, a startup with a version of Linux optimized for containers and massive-scale server clusters.
The promise of application containers is a fast and easy way to take code written on a developer’s laptop and deploy it in production, whether in a company’s data center or in the cloud. Another promise of containers, sometimes also referred to as “OS-level virtualization,” is much higher server utilization rates than even virtual machines can provide.
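As a rough illustration of that laptop-to-production workflow, the sketch below uses the Docker SDK for Python to launch a container image the same way on a developer machine or a production host. It assumes the SDK and a local Docker daemon are available; the image name and port mapping are placeholder values, not anything defined by the standardization effort described here.

```python
# Minimal sketch: running the same container image locally or in production.
# Assumes the Docker SDK for Python ("pip install docker") and a running
# Docker daemon; "myapp:1.0" and the port mapping are placeholder values.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The same image built on a developer's laptop can be pulled and run,
# unchanged, on a production host or a cloud VM.
container = client.containers.run(
    "myapp:1.0",             # hypothetical application image
    detach=True,             # run in the background
    ports={"8080/tcp": 80},  # map container port 8080 to host port 80
)
print(container.short_id, container.status)
```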
The progress toward widespread use of containers, however, had been in danger of slowing down because of a dispute over the attributes a standard container should have. That dispute now appears to be over, since all major players in the container ecosystem have gotten behind a vendor-neutral project to create a single container standard.
Given how fast Docker containers have grown in popularity over the two-plus years the company has been in existence, it is clear that in the near future many enterprise IT shops will see their developers start pushing containerized applications into production.
Because the container vendor ecosystem has now agreed to create a common set of basic building blocks, operations staff will not have to worry whether their IT stack supports Docker, CoreOS, or another platform. They will simply have to make sure their stack supports the standard created by the Open Container Project, said Polvi, whose company started the standards dispute last year.
“We have unanimous industry alignment now,” he said. “Pretty much every single vendor at the table [is] saying this is the way that we see the future of infrastructure going.”
Besides CoreOS and Docker, that list of vendors includes names like Red Hat, Microsoft, VMware, Google, Amazon, HP, IBM, EMC, Cisco, and Mesosphere, among others. In other words, the IT establishment is fully behind OCP (not to be confused with Facebook’s Open Compute Project).
Because Docker has been the undisputed leader in the space, such a standardization effort may not immediately seem like a good business move as far as the company is concerned. If most customers use its technology, and the technology is based on a proprietary standard, it’s a lot easier to compete.
But the container ecosystem still has a long way to go before it matures, and having an open standard at the core will spur that ecosystem to grow faster, which will only benefit Docker and others in it.
“Microsoft held on to IE (Internet Explorer) for as long as they could as a proprietary standard, because it’s in the best interest of their business. But it’s not in the best interest of the user,” Polvi said, using the software giant as an analogy.
“We just nipped that one in the bud as an ecosystem right now. We’re going to do this thing right upfront [and] not let anyone grab a hold of the whole market right away.”
The pieces of intellectual property Docker is donating to OCP, the base container format and runtime, represent only five percent of the company’s code base, Docker CEO Ben Golub wrote in a blog post, referring to the IP as low-level “plumbing.”
Vendors will “focus on innovation at the layers that matter, rather than wasting time fighting a low-level standards war,” he wrote. The project, under the auspices of the Linux Foundation, will define the container format and the runtime, and not the entire stack.
It is tools in the layers above that basic plumbing that will make a real difference for users, who will not be forced to choose a Docker or a CoreOS and be stuck with it. “Instead, their choices can be guided by choosing the best damn tools to build the best damn applications they can,” Golub wrote.
Laboratory in the Sky? Why Cloud-Based Labs Are Replacing Conventional Ones

Moran Shayovitch is the Marketing Manager at CloudShare.
As any educator knows, retention rates are low for mere theoretical learning. To truly internalize new information, students must have the benefit of an experiential encounter with the subject. That’s why tech lab work is so crucial. Unfortunately, for most students, the computer lab experience is limited due to time, space, and financial constraints. Enter the cloud. Cloud-based computer labs are becoming prevalent in organizations around the world, replacing conventional labs. They reduce training expenses, allow instructors to remotely monitor the progress of all students, and enable students to use the facilities both before the course begins and after it’s completed. The following shows how cloud-based computer labs can improve your training program.
Easy Does It – Simple Configuration and Maintenance
In a conventional campus lab, each student accesses a laboratory application running on a specific operating system and hardware configuration. Offering the application on different operating systems and hardware configurations requires modifying its parameters to suit each new environment, as well as ongoing maintenance, which can be costly. These multiple environments can quickly complicate set-up and lead to chaos.
With cloud computing, lab configuration is easy. You simply install your lab application once in the cloud environment, clone it and send each student a URL complete with a unique, pre-installed and configured training application. The operating systems and hardware used are irrelevant, because the application does not reside on your computers.
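A provisioning flow along those lines might look something like the sketch below. The CloudLabClient class and its methods are hypothetical stand-ins, not CloudShare’s actual API; they simply illustrate the clone-once, share-a-URL pattern described above.

```python
# Hypothetical sketch of the clone-and-share workflow described above.
# CloudLabClient, clone_for(), and the URL format are illustrative
# stand-ins, not a real vendor API.
from dataclasses import dataclass

@dataclass
class LabEnvironment:
    env_id: str
    url: str

class CloudLabClient:
    """Stand-in for a cloud training-lab provider's API."""
    def __init__(self, base_template: str):
        self.base_template = base_template

    def clone_for(self, student_email: str) -> LabEnvironment:
        # In a real service this would copy the pre-configured lab image
        # and return a per-student instance with its own access URL.
        env_id = f"{self.base_template}-{abs(hash(student_email)) % 10000}"
        return LabEnvironment(env_id, f"https://labs.example.com/{env_id}")

client = CloudLabClient(base_template="intro-networking-lab")
for student in ["ana@example.com", "raj@example.com"]:
    env = client.clone_for(student)
    print(f"Send {student} their personal lab: {env.url}")
```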
From a cost perspective, cloud-based training environments are web-based platforms that require no software installation on your company’s hardware. This eliminates the need to manage your own servers, which is especially important if you don’t have the IT expertise or staff to handle such a big project. The software is managed for you by the cloud provider. This results in significant savings on capital expenditures as well as implementation and operational expenses. According to elearningindustry.com, cloud-based online classes reduce training expenses by up to 50 percent.
Someone to Watch Over Me – Complete Instructor Oversight
In conventional labs, instructors can often see only one student session at a time and must move physically between students to interact with them. Cloud-based labs allow instructors to monitor the progress of the entire class simultaneously through real-time thumbnail images of each student’s lab session. They can then drill down into selected student instances to provide personal assistance with questions and problems – all from their own terminal.
Late Night at the Lab – After-Hours Access
A traditional training course offers fixed start and end times. To review the material before or after the course, trainees typically receive printed documentation, which they can review at home. But the lab application for a specific course is only available for a limited time. As a result, the effectiveness of the lab sessions weakens the more time elapses after the course is completed.
Since online training is not tied to a physical campus, the same lab application can easily be made accessible even beyond the official training session. This enables instructors to send trainees pre-course prep assignments, or allow students to practice in the lab long after the course concludes. Since access to the training lab is rigorously monitored, after-hours access is often offered as a premium service.
So Close and Yet So Far – Remote Lab Access
Many organizations have multiple offices distributed across the country, and conducting lab training for the entire enterprise can be difficult to manage. With a cloud-based training lab, however, this is easy to achieve. All an office requires is a working internet connection.
Cloud-Based Labs Get High Marks
With simple set-up and maintenance, remote and after-hours access, and global monitoring, the popularity of cloud-based labs is likely to continue to grow.
Dell Pushes HPC Server Envelope With New PowerEdge

Dell has upgraded its line of PowerEdge servers aimed at high-performance computing applications and revealed that 27 racks of the new HPC servers make up the Comet supercomputer recently launched at the University of California, San Diego.
Based on Intel Xeon E5-2600 v3 processors that provide up to 18 cores per socket, for a total of up to 144 cores per 2U chassis, the new PowerEdge C6320 servers can be configured with up to 512GB of DDR4 memory and up to 72TB of local storage.
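Those figures are consistent with a 2U enclosure holding four dual-socket nodes, as the quick arithmetic check below shows; the node count is inferred from the core math rather than stated in Dell’s announcement.

```python
# Back-of-the-envelope check of the per-chassis core count, assuming a
# four-node, two-socket-per-node layout for the 2U chassis (inferred).
cores_per_socket = 18   # top-bin Xeon E5-2600 v3
sockets_per_node = 2
nodes_per_chassis = 4   # assumed 2U, four-node configuration

cores_per_chassis = cores_per_socket * sockets_per_node * nodes_per_chassis
print(cores_per_chassis)  # 144
```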
Brian Payne, executive director for Dell Server Solutions, said that in addition to driving traditional HPC applications, the servers are also being used to run Big Data analytics based on Hadoop and configured as hyper-converged appliances designed to run VMware EVO: Rail software or the hyper-converged Nutanix software that Dell also resells.
As Intel continues to move up the performance curve of its processors, IT organizations are increasingly opting to run application workloads that were once the sole province of RISC servers running Unix or custom-built supercomputer platforms on Intel Xeon-class processors. According to Payne, that shift not only expands the number of HPC-class applications that IT organizations can afford to run, it also expands the marketplace for Dell into new segments the vendor previously would not have been able to play in.
“We’re not only seeing customers deploy these servers in hyper converged appliances,” Payne said. “Customers are also running Hadoop on them to save money by offloading tasks from mainframes.”
The PowerEdge C6320 comes integrated with iDRAC8 with Lifecycle Controller, which enables IT organizations to automate routine management tasks to reduce the number of steps required to deploy, monitor, and update their servers.
All told, Dell claims that the new HPC servers provide up to twice the performance of the previous generation on a leading HPC benchmark and can achieve 999 gigaflops on a single server.
In the case of UCSD, the school has deployed a total of 1,944 nodes or 46,656 cores, which represents a five-fold increase in compute capacity over the previous HPC system the university was employing.
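Those deployment figures are internally consistent, as the quick check below shows: 46,656 cores across 1,944 nodes works out to 24 cores per node, which would correspond to, for example, a dual-socket, 12-core-per-socket configuration rather than the 18-core top bin.

```python
# Sanity check on the Comet deployment figures cited above.
total_nodes = 1944
total_cores = 46656

cores_per_node = total_cores / total_nodes
print(cores_per_node)  # 24.0, e.g. two 12-core sockets per node
```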
The National Science Foundation gave the university a $12 million grant in 2013 to build the Comet system.
Google to Turn Alabama Power Plant Into Data Center

Google is planning to repurpose the infrastructure of an Alabama power plant that is scheduled for shutdown to build a data center, the company announced today.
The approach is highly unusual, but a lot of power-plant equipment can be reused for a data center project, according to Google. As in other areas of its operations, the company has never shied away from unconventional approaches to data center design and construction.
The company expects to start data center construction on the grounds of the Widows Creek coal power plant in Jackson County early next year. This will be the first Google data center in Alabama and its 14th data center location worldwide.
The power plant has been in operation since the 1960s.
“Decades of investment shouldn’t go to waste just because a site has closed; we can repurpose existing electric and other infrastructure to make sure our data centers are reliably serving our users around the world,” Patrick Cammons, senior manager for Google data center energy and location strategy, wrote in a blog post.
“At Widows Creek, we can use the plant’s many electric transmission lines to bring in lots of renewable energy to power our new data center.”
While Google hasn’t repurposed a power plant before, this is not the first time it has chosen to convert an old industrial site into a data center. The Google data center in Hamina, Finland, is a repurposed 60-year-old paper mill.
In Hamina, Google reused the cooling infrastructure that had drawn water from the Gulf of Finland to cool equipment at the mill, repurposing it for data center cooling.
As the company’s user base grows and as it adds new services, it continues to invest enormous amounts of money in data center construction, as do its rivals in the cloud services business. Data centers power all of Google’s services, and the company has to continue building out this infrastructure to keep up with demand growth.
The company has gone from running about 100 servers 15 years ago, when it started, to spending about $5 billion per quarter on building and operating data centers.
Google data center investments announced this year alone included a $300 million data center expansion in the Atlanta metro, a $380 million Singapore expansion, a $1 billion commitment to data center build-out in Iowa, and a $66 million data center project in Taiwan.
As it builds out its global infrastructure, the company also invests in renewable energy to offset carbon emissions associated with the enormous amount of power its data centers consume. Google said it will work with the Tennessee Valley Authority, its electric utility in Alabama, to bring renewable energy generation onto the grid that will serve its future data center there.
Enterprise Mobile App Use to Grow Rapidly as Companies Look to External Developers: Report
This article originally appeared at The WHIR
The number of mobile apps used by enterprises will grow rapidly over the next two years, but development challenges will lead to two-thirds of them being developed externally, according to a 451 Research report. The 2015 Enterprise Mobile Application Report (PDF), sponsored by cloud enterprise mobility solutions company Kony, shows more than half of respondents plan to deploy 10 or more apps in the next two years, despite a shortfall in capabilities.
451 Research surveyed 480 IT professionals in North America, Europe, and Australia in March, and found that budget and resource limitations, a skills shortage, legacy infrastructure, overall technology fragmentation, and immature lifecycle workflows will drive enterprises to seek help from third-party app developers.
“There is strong demand for new mobile apps, and companies are broadening their focus beyond core processes and application silos; however, enterprises are still very much in the early stages when it comes to mobile app strategies,” said Chris Marsh, study author and principal analyst, 451 Research. “IT is still in the driver’s seat when it comes to both the bulk of internal mobile app development, technology procurement and project management, although line of business want input and greater collaboration. Line of business is also starting to bring a great amount of funding support to the discussion.”
Enterprises expect their IT departments to develop most apps internally, with the amount of development outside of IT dropping from 42 percent now to 21 percent in two years. That internal development, however, will account for only 35 percent of all planned app development, while business application vendors are expected to make up 21 percent. Digital agency partners, developer partners, and systems integrators are expected to split the rest of the enterprise mobile app workload roughly evenly.
“The global market for enterprise mobility is expected to grow from $72 billion to $284 billion by 2019, nearly quadrupling in size,” said Dave Shirk, president of Products and Marketing at Kony, Inc. “Companies need to be prepared to meet this demand for mobile business solutions with proper alignment between lines of business, IT developers and IT management, to effectively manage and lead enterprise mobility projects.”
This first ran at http://www.thewhir.com/web-hosting-news/enterprise-mobile-app-use-to-grow-rapidly-as-companies-look-to-external-developers-report
Oracle to Open Cloud Data Center in Brazil

Oracle announced Wednesday plans to open a cloud data center in Brazil to support its Software-as-a-Service offerings.
As revenue from sales of business-software licenses, historically Oracle’s main source of revenue, continues a steady decline, the company has been investing heavily in cloud services. Brazil is one of the fastest-growing emerging markets for such services, and a data center there should improve user experience for Brazilian customers.
“This is great news for our customers throughout Latin America who are ready to capitalize on the region’s growing economy and take advantage of the unlimited possibilities cloud computing offers,” Oracle CEO Mark Hurd said in a statement.
Earlier this week Oracle announced a major expansion of its cloud portfolio, adding 24 cloud services to the list, including database, archive storage, Big Data, and mobile, among others.
Slated to come online in August, the São Paulo facility will be Oracle’s 19th cloud data center worldwide. The company said it will help it better manage service levels and data governance for customers in Latin America.
The company announced in April plans to launch a cloud data center in Japan.
The São Paulo facility will house Oracle’s own Engineered Systems that will support the cloud services. These are hardware systems designed specifically to maximize performance of Oracle software.
Creating a Healthy Data Center Workplace

Every CIO knows that finding and retaining the right IT talent is one of the most difficult jobs they have. Not only are IT professionals with the right skills in short supply, but competition for those people is nothing short of fierce.
Given the fact that it often takes six months or more to train IT professionals in emerging technologies, it’s in the interest of senior IT leaders to do everything possible to minimize IT staff turnover. After all, salary and benefits only buy so much loyalty, productivity, and engagement.
In a workshop at the Data Center World Fall conference in National Harbor, Maryland, in September, Gordon Blocker, principal consultant at the Table Group, will outline what it takes to create a “healthy” workplace that enables smart people to not only succeed but to thrive.
Blocker says most people become disenchanted with their jobs when they are forced to toil away anonymously, without metrics in place to determine whether they are actually succeeding, and when they become disassociated from the core purpose of the organization as a whole.
Companies that are committed to organizational health focus on the cohesion of leadership teams, organizational clarity (of purpose, values, strategy, etc.), over-communication of that clarity, and reinforcement of clarity through human systems, he says.
“Smart teams on their own don’t necessarily get healthier,” says Blocker. “But healthy teams do get smarter.”
For more information, sign up for the fall Data Center World conference, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Blocker’s session, “Tools for the Data Center Manager: Practical Skills for Increasing Employee Engagement.”
Storage, Compliance, and Regulations in the Age of Cloud

We all know that cloud computing has come a long way. We’ve got new ways to connect, new ways to deliver data, and a lot more user distribution. In an ever-connected world, the user and the organization are demanding a persistent connection regardless of device, location, or even data type. That means that both the cloud and the data center model have had to adapt to these new demands.
Well, this worked for a lot of organizations. They were able to deliver applications, desktops, and rich content via the cloud to a dispersed user base and an ever-growing organization. But it wasn’t perfect. The cloud model was only partially evolved since many eager cloud adopters were still limited in what they could do. Healthcare, pharmaceuticals, some public organizations, government, and other compliance or regulation-bound entities just couldn’t utilize the full capacity of the cloud.
So can compliance, regulations, and storage all live in the cloud? Believe it or not, there are new services and evolving models which now support a more compliance-oriented infrastructure. Here are a few examples:
- The Government Cloud. Ever hear of FedRAMP? If not, it’s time to take a look. Basically, FedRAMP is the result of close collaboration with cybersecurity and cloud experts from GSA, NIST, DHS, DOD, NSA, OMB, the Federal CIO Council and its working groups, as well as private industry. Already, cloud providers like IBM, HP, Microsoft, and Akamai are FedRAMP certified cloud service providers. Amazon is in the mix as well. For example, AWS GovCloud (US) allows US government agencies and customers supporting the US government to move more sensitive workloads into the cloud.
- PCI/DSS and the Hybrid Cloud. E-commerce in the cloud has always been a bit of a challenge. The passing of sensitive information caused serious issues for cloud providers. And so, providers like Rackspace decided to get creative. By intelligently controlling data through the cloud, the organization’s servers, and the payment gateway, you’re able to continuously control the flow of sensitive information. According to Rackspace, when you host your infrastructure in their cloud, you can also sign up with a separate payment processor to provide tokenization, which occurs when you replace credit card data with meaningless numbers or “tokens.” When you accept a payment, non-PCI data is routed to your Rackspace-hosted environment, while the tokenized credit card data is routed to your payment processor. Since your customers’ credit card data is routed only to the payment processor and never to your Rackspace-hosted infrastructure, your Rackspace environment stays out of the scope of your PCI requirements. (The sketch after this list illustrates the routing split.)
- Cloud for Healthcare. File and data collaboration, also known as the “Dropbox challenge,” has really crept up on the healthcare industry. In fact, HIPAA compliance in general can be a cloud nightmare. And so, a recent change to HIPAA (the Omnibus Rule) now treats as a business associate (BA) any organization that has more than just transient access to protected data, unlike mere conduits such as FedEx, UPS, or USPS.
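To make the routing split concrete, here is a rough sketch of the tokenization flow described in the PCI/DSS item above. The function names, the token format, and the two send helpers are illustrative assumptions, not Rackspace’s actual implementation.

```python
# Rough sketch of the tokenization split described in the PCI/DSS item above.
# route_payment(), the token format, and the two send_* helpers are
# illustrative assumptions, not any provider's real implementation.
import uuid

def tokenize_card(card_number: str) -> str:
    """Stand-in for the payment processor's tokenization call: the real PAN
    is swapped for a meaningless token, so it never touches your servers."""
    return f"tok_{uuid.uuid4().hex}"

def send_to_hosted_environment(order: dict) -> None:
    print("Stored in hosted app (no card data):", order)

def send_to_payment_processor(token: str, amount: float) -> None:
    print(f"Charging {amount} against token {token}")

def route_payment(order: dict) -> None:
    card_number = order.pop("card_number")   # sensitive, PCI-scoped data
    token = tokenize_card(card_number)       # returned by the processor

    send_to_hosted_environment(order)                   # non-PCI data only
    send_to_payment_processor(token, order["amount"])   # tokenized charge

route_payment({"order_id": "A-100", "amount": 49.99,
               "card_number": "4111111111111111"})
```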
As more organizations move towards a cloud model, there will be new rules written around cloud computing. Data centers are becoming more compliant and a lot more secure. As more users connect to obtain information via a cloud model, there will be a need for optimized security and data segregation.
Throughout the entire cloud planning and cloud storage process there are still some big takeaways to consider:
- It’s all about the use case. When you’re working with some kind of a cloud workload, make sure to understand the impacts on your IT environment, your users, and your business. In some cases compliance isn’t the only barrier to a cloud storage deployment. Applications and data sets might have some very strict delivery profiles.
- Work with your provider. It’s good to be in the data center business. More providers are offering various kinds of compliance-ready cloud services and there are even more eager customers. Through this kind of growth, make sure to work with your provider when deploying specific kinds of compliance-bound workloads. There are a lot of new options around multi-tenant segmentation and control.
- Keep security at the forefront. As cloud computing continues to boom, there will be more data stored and more targets created. In creating a compliance-ready cloud architecture, next-gen security technologies can keep data flowing safely. This includes application firewalls scanning for anomalous traffic patterns and even port-specific security policies (see the sketch after this list).
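As a small illustration of what a port-specific policy can look like, the sketch below checks traffic flows against a per-tier allow-list of ports. The tier names and port numbers are illustrative assumptions, not a recommendation for any particular provider.

```python
# Minimal sketch of a port-specific allow-list policy, as mentioned in the
# last item above. Tier names and port numbers are illustrative assumptions.
ALLOWED_PORTS = {
    "web_tier": {443},    # HTTPS only at the edge
    "app_tier": {8443},   # internal TLS between tiers
    "db_tier": {5432},    # database traffic from the app tier only
}

def is_allowed(tier: str, dst_port: int) -> bool:
    """Return True if the destination port is permitted for the given tier."""
    return dst_port in ALLOWED_PORTS.get(tier, set())

flows = [("web_tier", 443), ("web_tier", 22), ("db_tier", 5432)]
for tier, port in flows:
    verdict = "allow" if is_allowed(tier, port) else "drop and flag for review"
    print(f"{tier}:{port} -> {verdict}")
```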
The good news is that new rules are being passed allowing new kinds of industries to leverage even more cloud services. As more content becomes web-born and web-delivered, the data center provider will sit square in the middle of the entire architecture. Fortunately, the future of the cloud compute model is looking to be a bit friendlier towards compliance-driven workloads.