Data Center Knowledge | News and analysis for the data center industry
Monday, August 17th, 2015
12:00p
Five Critical Layers of Next-Gen Data Center Automation and Orchestration
If you look at the modern data center and cloud landscape, you’ll notice a lot more interconnectivity and new capabilities to dynamically pass resources. Some solutions even allow for cross-connects for the easier flow of data. The interesting piece here is how all of these technologies, which are currently influencing the end user and corporation, are directly pushing the evolution of the modern data center through data center automation. Cloud computing, Big Data, and IT consumerization have transformed the data center into the central hub for everything.
Today, there are entire organizations that are born from a cloud model which resides within the data center. Looking ahead, new cloud and data center control systems will only continue to become more critical. As more users utilize content delivered directly from cloud resources, the data center will need to be able to handle the influx of new demand.
This means creating new efficiencies at all levels via data center automation.
Here are the Five Key Automation and Orchestration Layers:
- Server: It’s no longer about 1-to-1 application-to-server mapping. Now, with high-density computing and the need for greater levels of multi-tenancy, servers are hosting more users and more workloads. Data center administrators don’t have the time to configure individual blades. Now, hardware and server profiles are built in seconds: admins only need to insert a new blade and allow the server-layer automation to take over. New technologies allow administrators to create powerful “follow-the-sun” data center models where hardware automatically re-provisions itself for the appropriate set of new users. These policies can then scale across data centers. When coupled with a load-balancing solution, you can dynamically port users to the data site that is most efficient for the workload and has the available resources. All of this is done through orchestration and automation policy.
- Software/Application: New technologies are able to look at applications running within the data center or within the cloud and help automate and control them. Powerful physical and virtual load balancers, for example, can see that a certain type of application is receiving too many connections. From there, an automated process allows the administrator to provision another instance of the application, or a new server to host the app. Further examples of automation include technologies like provisioning services: application layering and provisioning services can connect directly into virtualization brokers to help deliver and control both desktops and applications. Other platforms like CloudPlatform, OpenStack, and Eucalyptus further help automate and create true cloud orchestration. From there, organizations are able to granularly control hosts, clusters, various zones, and even core virtual machine resources.
- Virtualization: The virtualization and hypervisor layer is more important than ever. Today, it sits as the bridge between your data center and the cloud. Automation and orchestration tools aim to integrate directly with the virtual layer to better control resources, virtual services delivery, and the virtual workloads themselves. Automation has become such an integral piece that there are direct plug-ins into the hypervisor platform. For example, you can send VMs from one data center to another, or push entire repositories from an on-premises data center to a cloud facility, all from the virtual layer. You can even integrate security policy, user control, and application automation into your hypervisor. With all of this in mind, the virtualization layer is a critical (and powerful) piece when creating your orchestration and automation strategy.
- Cloud: Although still emerging, there are already very large organizations deploying technologies like CloudStack, OpenStack, and even OpenNebula for their cloud automation and extension layers. Furthermore, many cloud automation and orchestration tools now place governance and advanced policy control directly into their products. Some technologies allow cloud admins to control security aspects of their cloud. Aside from controlling costs around resource utilization, utilizing these cloud controls creates a very dynamic automated cloud platform. Finally, automation solutions like Puppet help create a unified management and automation approach to sometimes very heterogeneous data centers. Puppet is capable of controlling environments – cloud, virtual, and physical – and allows you to automate the management of compute, storage, and network resources. To support a diverse cloud model, you can use a VMware platform, CloudStack, OpenStack, Eucalyptus, Amazon, or even your own bare-metal data center.
- Data Center: Some services bridge the gap between IT and control engineers for connecting, managing, and automating industrial networks and control systems. Today’s industrial organizations are driven to increase production and reduce costs while maintaining quality and safety. As networks converge, the physical infrastructure becomes even more critical to support the demands of real-time control, data collection, and device configuration. In a recent article, we discussed the concept of a “lights-out” data center. Although we’re not quite at that sort of advanced-robotics data center automation level, many large data center providers are looking at ways to align data center power, cooling, environmental controls, and overall management to create one intelligent control layer. The future may very well aim to directly unify cloud automation with data center resource control and delivery.
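The software/application layer described above can be sketched in a few lines: a policy watches a load-balancer metric (connection count) and provisions another application instance when a threshold is crossed. This is a minimal, hypothetical sketch; the names (`Instance`, `AppPool`, `scale_if_overloaded`) are illustrative, not the API of any product mentioned here.

```python
# Hypothetical sketch of threshold-driven scale-out at the application layer.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instance:
    name: str
    connections: int = 0

@dataclass
class AppPool:
    instances: List[Instance] = field(default_factory=list)

    def total_connections(self) -> int:
        return sum(i.connections for i in self.instances)

def scale_if_overloaded(pool: AppPool, max_conns_per_instance: int) -> bool:
    """Provision a new instance when average load exceeds the threshold."""
    if not pool.instances:
        return False
    avg = pool.total_connections() / len(pool.instances)
    if avg > max_conns_per_instance:
        # In a real orchestrator this would call out to the provisioning API.
        pool.instances.append(Instance(name=f"app-{len(pool.instances) + 1}"))
        return True
    return False

pool = AppPool([Instance("app-1", connections=950), Instance("app-2", connections=900)])
provisioned = scale_if_overloaded(pool, max_conns_per_instance=800)
print(provisioned, len(pool.instances))  # True 3
```

A real orchestration policy would add cool-down timers and upper bounds so a traffic spike can’t provision instances without limit.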
When creating any sort of data center automation or orchestration architecture, remember to design around your use case and your business. The whole idea here is to simplify business processes and create new levels of efficiency. New solutions spanning the entire data center allow you to proactively manage very dynamic workloads and a diverse set of users. You’ll create better visibility into the distributed data center and be able to truly utilize the capacity of the cloud. Through it all, you’ll allow your content to flow more efficiently and your users to be much more productive.

3:00p
Flash Boys, the Capital Markets…and Solving Enterprise Application Performance in the Cloud
Mark Casey is President and CEO of CFN Services.
Michael Lewis’ book Flash Boys went a long way to capture the technology and physical infrastructure underlying high-frequency trading and the US equities market – how trading applications, equipment, colocation and connections were strategically architected and meticulously fine-tuned to send financial data and trading signals across stock exchanges and liquidity venues in a fraction of a blink of an eye.
The “black box” puzzle that eluded even the largest and most sophisticated Wall Street investors back in 2007, and ultimately transformed the last remaining physical trading floors into the accelerated electronic marketplaces of today, parallels the model now disrupting the traditional enterprise wide-area network.
The Fragmentation Challenge
While ensuring accelerated, high-performance application delivery in a highly fragmented IT environment represents a new challenge for many global enterprises, this is a problem that has been solved in the global capital markets.
The deregulation of the US financial markets over the past 10 years drove the fragmentation of liquidity, changing the market structure of securities trading and resulting in the creation of a complex and highly fragmented global electronic marketplace.
With infrastructure distributed across hundreds of liquidity pools, located in different places and connected via high-speed fiber or microwave networks, trading firms are able to move large volumes of data and execution orders in very specific and consistent ways – at latencies that can be measured at the millisecond, and even nanosecond, level.
It’s in the quest for speed and reliability that capital markets participants mastered the optimization of networks, data delivery and application performance across globally distributed platforms – solving the fragmentation challenge.
The Disruption is Real: Enterprise Apps Are Moving to the Cloud
Although running applications outside enterprise walls once raised serious concerns about security and compliance, Gartner reports these fears have largely been proven unfounded, and the unsuitability of the cloud for business critical services is “one of the top 10 myths in the world of technology.”
With cloud adoption accelerating on a global scale, the need for agility is driving enterprises to leverage the public cloud for software-as-a-service (SaaS) deployments and move mission-critical apps to the cloud. However, many are finding their application performance is lagging – and fragmentation is a key reason.
As more enterprise apps move to the cloud, there’s increased fragmentation of the software stack – from the applications and data to the physical locations from where they are served. Modern applications are no longer a single executable running in one place, but a mix of on-premise, public cloud and hybrid apps and services that span in-house and cloud provider environments.
This complexity is further compounded by device proliferation and BYOD. With the workforce no longer tethered to the PBX and enterprise LAN, users access enterprise apps from remote locations, over dozens of public and wireless networks, and from devices operating outside the enterprise IT perimeter.
While enterprises look for big agility gains from the cloud, the complexity of interconnecting and ensuring the performance of this highly fragmented ecosystem stands in the way.
From Wall Street to Main Street: Fragmentation Problem Solved
Mirroring the evolution of technology within the global capital markets, many of today’s digital enterprises are transforming their network, infrastructure and application delivery models to create the foundation for high performance and agility – migrating to next-generation architectures that integrate network and cloud hubs into enterprise data centers and the WAN.
Next-generation network architectures for distributed IT and the cloud are built on a high-performance core anchored on commercial data centers. Gone are the days when the hub-and-spoke architecture of carrier MPLS networks connected users in distributed branch locations back to centralized or regional enterprise-managed data centers.

Today’s highly fragmented application ecosystem requires a footprint of geographically distributed hubs that enable a high-performance global mesh network, optimized for interconnecting legacy enterprise apps with distributed SaaS apps, data and users. Users must be able to ingress and egress the enterprise WAN at multiple points based on their mode of access and proximity to the applications they’re utilizing.
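The ingress/egress idea above – users entering the WAN at whichever hub is nearest to them – can be illustrated with a toy hub selector. The hub names and coordinates are hypothetical, and a real deployment would select on measured network latency rather than straight-line distance; this is only a sketch of the proximity logic.

```python
# Toy "nearest hub" selection for multi-point WAN ingress (illustrative only).
import math

# Hypothetical hub locations: (latitude, longitude) in degrees.
HUBS = {
    "ashburn":   (39.04, -77.49),
    "frankfurt": (50.11, 8.68),
    "singapore": (1.35, 103.82),
}

def nearest_hub(lat: float, lon: float) -> str:
    """Pick the hub with the smallest great-circle distance to the user."""
    def haversine(a, b):
        lat1, lon1 = map(math.radians, a)
        lat2, lon2 = map(math.radians, b)
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))  # distance in km
    return min(HUBS, key=lambda name: haversine((lat, lon), HUBS[name]))

print(nearest_hub(40.71, -74.01))  # a New York user ingresses via Ashburn
```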
By integrating commercial data centers as hubs into the enterprise WAN, enterprises can extend their core network closer to the network edge and deliver higher levels of performance, reliability and predictability across the WAN. This helps reduce network latency, increase bandwidth and improve application performance.
Commercial data centers also serve as network access points or public peering locations, close to the core of the Internet and a broad range of network, cloud and IT service providers. Moving closer to the Internet core enables more simplified, reliable and direct access to public SaaS, IaaS and other cloud services; and closer end-user proximity delivers better user experience.
This transformative network and application delivery model leverages the same high-performance distributed, global infrastructure and vibrant data center ecosystem pioneered by and used to solve fragmentation and performance challenges in the global capital markets.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:02p
IBM Launches Linux Mainframes, Open Sources Mainframe Software
With large numbers of its mainframe customers already running Linux, IBM today announced the next logical step in the evolution of the big business machines: a pair of Linux mainframes that only run the open source OS, and the formation of an Open Mainframe Project under the auspices of the Linux Foundation.
Of course, Linux mainframes are nothing new. IBM has been deploying the combination for the past 15 years alongside its z/OS platform, and about half of its mainframe customers already run at least one instance of Linux on a mainframe. In contrast, the new LinuxONE offerings come installed only with a distribution of Linux that will be supported by IBM.
The company made the announcement at the LinuxCon North America conference in Seattle. IBM said the Open Mainframe Project represents the single largest amount of mainframe code contributed to the open source community. In addition to IBM, founding members of the project include Compuware, Marist College, RSM Partners and Suse.
Angel Diaz, VP of cloud architecture and technology for IBM, said the code being donated includes the technology IBM created to identify mainframe issues and help prevent failures before they happen. It also includes overall performance improvements enabled by better integration of heterogeneous networks and cloud computing environments.
A key part of the mainframe code contribution is IT predictive analytics for constant monitoring for unusual system behavior. That code can be used by developers to build similar sense-and-respond resiliency capabilities on other systems.
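A sense-and-respond capability of the kind described above reduces, at its simplest, to flagging metric samples that deviate sharply from a recent baseline. The sketch below uses a generic z-score check; it illustrates the concept only and is not IBM’s donated code.

```python
# Minimal anomaly check: flag a sample far outside the recent baseline.
import statistics

def is_anomalous(history: list, sample: float, threshold: float = 3.0) -> bool:
    """Flag a sample more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > threshold

# Recent CPU-utilization samples (percent), steady around 41.
cpu_history = [41.0, 40.5, 42.1, 39.8, 41.7, 40.9, 41.3, 40.2]
print(is_anomalous(cpu_history, 41.5))  # a normal reading
print(is_anomalous(cpu_history, 97.0))  # would trigger a response
```

Production monitoring would use sliding windows and seasonality-aware models, but the respond-before-failure principle is the same: act on the flag before the component actually fails.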
Marist College and Syracuse University’s School of Information Studies will both host clouds that will provide the developer community with access to a virtual instance of IBM LinuxONE at no cost.
IBM also will create a LinuxONE Developer Cloud for independent software providers that will be hosted at IBM data centers in Dallas, Beijing, and Boeblingen, Germany. Under that program, ISVs will gain free access to LinuxONE resources to port, test, and benchmark new applications.
IBM also reaffirmed that open source and industry tools and software including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, and Chef will run on z Systems to better foster the development of hybrid cloud deployments.
Finally, IBM is committing to new financing models for the LinuxONE portfolio that provide more flexibility in terms of how enterprises pay for what they use and scale up quickly when their business grows.
“Our goal is to both increase the number of MIPS being consumed on the mainframe and increase the number of applications that run on Linux,” said Diaz. “By working with the Linux Foundation we can accelerate the art of the possible.”

5:56p
PlumGrid, Cisco, Others Launch Open Network Virtualization Project
Moving to create a more common base of network virtualization technologies for Linux IT vendors to collaborate on, the Linux Foundation today announced the formation of the IO Visor Project, whose aim is to create a neutral forum for development of an open, programmable data plane for modern networking environments.
Early supporters of the project, announced at the three Linux Foundation conferences taking place this week in Seattle, include Barefoot Networks, Broadcom, Canonical, Cavium, Cisco, Huawei, Intel, PlumGrid, and Suse. The IO Visor Project will initially be based on technology developed by PlumGrid, a provider of virtual networking software for OpenStack clouds.
The three conferences are LinuxCon, CloudOpen, and ContainerCon.
Jim Zemlin, executive director of the Linux Foundation, said that rather than competing at the lower levels of the networking and virtualization stacks, it makes more financial sense for vendors to collaborate on the development of enabling technologies while continuing to compete at management layers above the data plane.
As the virtualization of compute, storage, and networking continues to grow, fundamental changes to the way IO and networking subsystems are designed are now required, he added. The IO Visor Project will provide the development tools for the creation of high-speed, event-driven functions for distributed network environments.
Specifically, the project is expected to lead to the creation of a programmable distributed data and forwarding plane with dynamic IO modules that can be loaded and unloaded in-kernel at runtime without recompilation. In addition, the project is chartered with delivering high-performance, distributed, scale-out forwarding without having to compromise functionality to achieve it.
The end result will be an ability to modify a Linux kernel at runtime without rebooting the server or the entire data center. This functionality is not only core to the creation of programmable software-defined networks and the Network Function Virtualization software that runs across those networks, but also to Internet of Things and cloud applications running across highly distributed networks.
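The load-and-unload-at-runtime idea can be mimicked in miniature with a plugin registry whose processing chain is edited while it runs. This is a toy userspace analogy only: the actual project targets in-kernel IO modules, which Python cannot demonstrate directly, and all names below are invented for illustration.

```python
# Toy analogy of runtime-loadable IO modules: a packet-processing chain
# whose stages can be hot-loaded and hot-unloaded without a restart.

class PacketPipeline:
    """A chain of IO modules applied to each packet, editable at runtime."""
    def __init__(self):
        self.modules = {}  # name -> callable taking and returning a packet dict

    def load(self, name, fn):
        self.modules[name] = fn  # hot-load: takes effect on the next packet

    def unload(self, name):
        self.modules.pop(name, None)  # hot-unload, no restart needed

    def process(self, packet: dict) -> dict:
        for fn in self.modules.values():
            packet = fn(packet)
        return packet

pipeline = PacketPipeline()
pipeline.load("tag_tenant", lambda p: {**p, "tenant": "acme"})
tagged = pipeline.process({"dst": "10.0.0.5"})      # module applied

pipeline.unload("tag_tenant")
untouched = pipeline.process({"dst": "10.0.0.5"})   # passes through unchanged
print(tagged, untouched)
```

The in-kernel equivalent is far harder because modules must be verified safe before loading; that verification step is a large part of what makes the real project interesting.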
Conspicuously absent from the open source network virtualization project, however, are a host of networking vendors that have already committed to building their own distributed SDN technologies, including VMware, Microsoft, HP, Juniper Networks, Dell, IBM, and just about every major telecommunications carrier.
The degree to which any Linux-centric project can address the inherent cross-platform requirements associated with the development of a programmable data and forwarding plane remains to be seen. At least theoretically, vendors participating in the project should be able to leverage economies of scale to compete more effectively against rivals that don’t participate, thereby forcing rivals to follow their technology lead.
In addition, interoperability between products based on the IO Visor Project should be higher than interoperability between those that don’t share a common base of technologies.
“Interoperability is going to be a natural here,” said Zemlin. “That’s critical because virtualization is now core to server, storage, and networking.”
At the same time, however, many vendors apparently believe that being able to develop a programmable data and forwarding platform that spans more than just Linux might be one of those areas where they still add considerable value.

7:27p
Developer Survey Shows Public Clouds are Seldom Used for Development 
This article originally appeared at The WHIR
Despite the growth in cloud hosting for production environments, nearly half of developers (44 percent) use a private, in-house cloud platform for development, ahead of Amazon Web Services (16 percent) and Microsoft Azure (13 percent), according to a new poll of 13,000 developers worldwide.
This is one of the key takeaways from the ninth edition of the annual “State of the Developer Nation”, the largest survey of developers. It was conducted over a period of five weeks in May and June 2015 by VisionMobile, a London-based firm specializing in analysis of the app economy and the developer ecosystem.
The report mentions that cloud applications hosted in local development environments can often move onto a public cloud without great upheaval. But developers are a little apprehensive about the security and resilience of public cloud as their primary development platform for the time being. Also, in-house hardware can make it easier to integrate with legacy systems, local databases, and control systems.
“In many instances the convenience of self-hosting still outweighs the advantages of the public cloud,” the report states. “The availability of cloud environments is also a factor, enabling enterprises to realise many of the advantages of cloud computing within their own infrastructure, and ensure their applications will be cloud-friendly when the advantages of a hosted solution become irresistible.”
AWS is Popular for Production, Not Particularly for Development Yet
The study notes that AWS is used by big players such as Pinterest and Yelp, and for video streaming services such as Netflix and Amazon’s own Amazon Instant Video, but among developers polled only 16 percent use Amazon as their primary cloud hosting platform for development, making it only slightly more popular than Microsoft Azure.
“Amazon may dominate the public-cloud industry, but when it comes to cloud development the most-popular hosting option is to keep things in-house,” the report states.
The Connection Between Development Languages and Cloud Providers
AWS provides the greatest linguistic variability, and portrays itself as language (and platform) agnostic, meaning that Amazon customers aren’t selecting the platform on the basis of language support. AWS’s widespread usage means that it provides a clearer picture of what languages are most used by cloud developers. Java is the most popular language, followed closely by PHP, but even those two aren’t obviously predominant with neither of them making up even a quarter of all languages used.
But every other cloud platform basically has one or two dominant languages.
Heroku is a favorite for Ruby and JavaScript developers. Ruby’s chief designer was hired by Heroku as Chief Architect in 2011, leading many Ruby developers to the Heroku cloud. There are also Ruby-specific features such as high-level API calls that enable dynamic elasticity.
Microsoft’s cloud platform has been successful in attracting developers accustomed to the Microsoft ecosystem. More than half (54 percent) of Azure developers use C#, which was originally developed by Microsoft and has emerged in the cloud era as a modern and popular language for cloud development environments.
Google’s PaaS offering, Google App Engine, is predominantly used for Java, and Digital Ocean is most often used for PHP and Python apps.
Organizations choose cloud providers on various factors such as previous relationships and pricing, but developers could also push their employers towards using a cloud provider suited to their development language and style.
Location and Education Influence Language Choice
In its assessment of “Developer Nation”, the report found that self-taught developers favor newer languages such as HTML5, JavaScript, Ruby, Lua, and Swift. More formally educated individuals generally favor more established languages such as Java, Objective C, C, C++, and C#.
North American developers were more likely than their Asian counterparts to use the newer languages. HTML5 is the primary choice of 47 percent of developers in North America.
Developer Experimentation Will Happen, Even if the Business Case Isn’t Clear
It’s interesting to note that among the 13,000 developers surveyed by VisionMobile, most are essentially amateurs. More than half of mobile developers (51 percent), and well over half of Internet of Things developers (59 percent) aren’t making a sustainable income (less than $500 a month).
The study notes that between promising consumer-oriented “Smart Home” and wearable technologies led by Google and Apple, the promise of IoT driving greater efficiencies in the workplace, and municipal “Smart City” projects that haven’t panned out as of yet, there’s a great deal of uncertainty about the eventual audience for IoT. Study authors note that many respondents were “‘not sure’ of their eventual market, but developing systems anyway…The IoT industry is still developing, and this uncertainty about eventual audience is a sign that developers, and the companies they work for, are starting to understand that.”
However, the report said that creating applications for existing Things presents opportunities that aren’t as costly as producing hardware. This is seen in retail, where more than half of developers are creating applications rather than infrastructure, producing software for existing embedded devices such as tills, bar-code readers, smart tags, and beacons rather than having to create new ones. Developers will also likely build applications around Apple Watch and Android Wear devices as they become more widely used.
Understanding developers is clearly important for any cloud service provider, and VisionMobile’s study provides some useful insights into how developers are using some of the most popular cloud services.
This first ran at http://www.thewhir.com/web-hosting-news/developer-survey-shows-public-clouds-are-seldom-used-for-development

11:02p
Digital Realty to Require Execs to Own Company Stock
Digital Realty will require each of its executives and directors to own an amount of company stock commensurate with their role and salary to bring company leadership’s interests in line with shareholders’, the San Francisco-based data center services giant announced Monday.
Under the new rules, the company’s CEO will be required to own common stock worth six times his base salary, and his direct reports will have to own stock worth three times theirs. The proportion for “certain other executive officers” is 1.5 times their base salary, and directors will be required to own stock worth 2.5 times the value of the shares they receive under Digital’s incentive award plans.
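The multiples above make for easy arithmetic. The sketch below applies them to the base-salary figures reported later in the article; treating the requirement as a flat salary-times-multiple calculation is an illustration of the stated rules, not the exact mechanics of Digital Realty’s plan.

```python
# Illustrative calculation of the minimum stock holdings under the new rules.
OWNERSHIP_MULTIPLES = {
    "ceo": 6.0,
    "ceo_direct_report": 3.0,
    "other_executive": 1.5,
}

def required_stock_value(base_salary: float, role: str) -> float:
    """Minimum company stock an executive must hold: salary times multiple."""
    return base_salary * OWNERSHIP_MULTIPLES[role]

# CEO William Stein's 2014 base salary was $750,000, so his minimum
# required holding works out to $4.5 million:
print(required_stock_value(750_000, "ceo"))  # 4500000.0
```

Note that because the multiples apply to base salary only, the required holding is a small fraction of total compensation once bonuses and incentives are counted.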
While the practice of using executive stock ownership requirements as additional incentive for company leaders to perform has been around for decades, there has been more focus on it in recent years, according to Towers Watson, a business consulting firm. A recent study by the firm found that 90 percent of Fortune 500 companies had stock ownership guidelines.
The common requirement for CEOs has been to own five times their salary in company stock, with multiples decreasing down the corporate ladder, according to Towers Watson.
While these requirements are usually based on execs’ base salaries, base salaries are usually only a portion of their total compensation.
Digital Realty CEO William Stein’s total 2014 compensation, for example, was about $5.62 million, consisting of a base salary of $750,000, a $1 million bonus, and performance-based incentives Digital gives its execs, according to the company’s SEC filings.
Stein, Digital’s former CFO, became its permanent CEO only in November of last year, replacing the company’s founding CEO Michael Foust. Foust’s total 2014 compensation was about $5.58 million, $860,000 of which was his base salary.
Scott Peterson, the company’s chief investment officer, who orchestrated its recent $1.9 billion acquisition of Telx, earned a total of about $3.26 million in 2014. His base salary that year was about $460,000.

11:22p
Lagrange Joins Google Cloud as Tech Partner
If your company website moves as slowly as a snail, it’s very likely that customers accustomed to instant access all the time won’t stay customers for long. Speed on the web is why Lagrange Systems just became a Google Cloud Platform Technology Partner, our sister site Talkin’ Cloud reported.
That means its flagship product – CloudMaestro – is now integrated with the Google Cloud and is in a better position to deliver web applications to users, increase site speeds, and eliminate downtime. Specifically, the integration is meant to provide users with the right amount of infrastructure in real time to manage constantly changing site requirements.
“The only way to always offer a consistently positive customer experience is to make sure your website infrastructure stays perfectly synchronized with demand,” said Jay Smith, CTO of Lagrange Systems. “The Google Cloud Platform with CloudMaestro can provide high-quality resources that deliver consistent performance relative to the load – an approach that helps businesses meet their user expectations in an affordable, simple and scalable way.”
Founded in 2012, Lagrange Systems is privately held and headquartered in the San Francisco Bay Area. According to the company’s website, CloudMaestro is also integrated with the following cloud providers:
- Amazon
- SoftLayer
- Azure
- Rackspace
- CloudSigma
- ScaleMatrix
- CenturyLink
- Verizon
- Joyent
- Hostway
- HP Cloud
Read the complete article at: http://talkincloud.com/cloud-companies/lagrange-systems-joins-google-cloud-platform-technology-partner