Data Center Knowledge | News and analysis for the data center industry
Tuesday, March 24th, 2015
2:00p | Pivotal’s PaaS Now Comes With Free Managed AWS Infrastructure

In a bid to meet growing hybrid cloud needs, Pivotal has added an enterprise-supported version of its hosted cloud solution, Pivotal Web Services. It includes free, managed Amazon Web Services cloud infrastructure with the purchase of a Pivotal Cloud Foundry license.
Previously available only in beta without support, the Pivotal Cloud Foundry PaaS software now includes a new, one-click AWS integration feature, enabling users to natively deploy Cloud Foundry applications on AWS infrastructure.
Cloud Foundry itself is open source, so the one-click AWS deployment with support in Pivotal Web Services serves as an enticement to buy the paid version Pivotal offers.
Pivotal wants to make it possible for customers to deploy Cloud Foundry at scale in whatever setup they want with easy portability. Scalable virtual appliance support is available for VMware vSphere, OpenStack and AWS.
The added support makes the AWS-hosted version suitable for more than small projects and means workloads can move freely between it and other types of deployment. The move is meant to satisfy two groups with different needs: lines of business and IT.
“When enterprises want to build modern, next-generation applications they turn to Platform as a Service (PaaS), but once they successfully built these applications they need to deploy them,” said Holger Mueller, vice president and principal analyst, Constellation Research. “Easier deployment options to multiple cloud vehicles like public, hybrid, and private cloud are sought after and valued highly by enterprises who look for PaaS vendors to make not only the creation, but also the operation of their new next-generation applications easier and more efficient.”
The company predominantly ships software to customers who in turn run it as a cloud service. The enterprise-supported hosted version in Pivotal Web Services complements rather than replaces that model. By filling out the ways Cloud Foundry can be deployed, Pivotal is making the PaaS suitable for more types of applications and purposes. Specifically, the hosted version is a good fit for line-of-business projects because it is a quick way to build something without waiting on internal infrastructure.
The hosted version was often used for small projects. Now larger workloads can be built there and then brought back easily to a private PaaS when needed, or vice versa.
“A lot of times, the line of business needs were under pressure to move fast and were going to Amazon themselves,” said James Watters, vice president of Pivotal’s Cloud Platform Group. “Now with Cloud Foundry, we cover the spectrum, between hosted, virtual appliance, dedicated and on-prem.”
The profit in the PaaS market, said Watters, lies in private data centers, but the company recognizes that it needs to enable all types of setups.
“We’re filling out the portfolio in terms of multi-cloud and hybrid,” said Watters. “There’s more and more pressure to offer the hosted edition. It helps with adoption, mobile, and outward facing apps. You had to find one platform that both development and operations groups could agree on. Cloud native apps really benefit from a platform approach. That platform allows you to grow and scale as opposed to automation and ad-hoc. We want to be like Starbucks—on every corner. No inhibitors.”
Watters said that there’s not a lot of workload overhead in running large Cloud Foundry deployments on Amazon. “We can slice off 512MB very efficiently, and it doesn’t cost very much per year.”
Because of that simple scalability, only a couple of operations people are needed to run the entire Amazon-hosted environment.
Pivotal’s Amazon Machine Image (AMI) accesses the user’s Amazon credentials and builds itself out to whatever scale is desired. The AMI automates directly against the AWS APIs and updates itself, something Watters said no other AMI currently does.
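Pivotal’s actual automation isn’t public, but the general pattern of programmatically launching and scaling instances from an AMI against the EC2 API can be sketched with boto3 as below; the region, AMI ID, instance type, and count are placeholder assumptions, not Pivotal’s values.

```python
# Hypothetical sketch of AMI-driven scale-out against the EC2 API using boto3.
# The region, AMI ID, instance type, and count are placeholders, not Pivotal's values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_out(ami_id: str, count: int, instance_type: str = "m4.large") -> list:
    """Launch `count` instances of the given AMI and return their instance IDs."""
    response = ec2.run_instances(
        ImageId=ami_id,
        MinCount=count,
        MaxCount=count,
        InstanceType=instance_type,
    )
    return [instance["InstanceId"] for instance in response["Instances"]]

if __name__ == "__main__":
    # Placeholder AMI ID; a real deployment would use the vendor-published image.
    print("Launched:", scale_out("ami-0123456789abcdef0", count=3))
```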
Complete hybrid cloud support lets operators migrate Cloud Foundry applications freely between public and private clouds, regardless of their underlying infrastructure.

3:00p | Joyent’s Docker Cloud Skips the VM Layer

Docker containers, which over the past two years or so have been enjoying a lot of popularity in the world where IT operations intersect with application development, have had one disadvantage when deployed on traditional public cloud infrastructure: server virtualization.
Virtualization impedes performance of applications running in Docker containers, according to Bryan Cantrill, CTO of Joyent, a San Francisco-based cloud service provider. The company announced today a new Docker cloud architecture, which takes that virtualization layer out of the equation.
Joyent has been talking about its plans to bring Docker containers as a bare-metal cloud service to market since last year. In October of 2014, the company raised $15 million to push forward its business strategy that included Docker cloud services.
Application containers in Joyent’s Docker cloud run directly on bare-metal servers. The company will provide containers as a typical public cloud service, hosted in its data centers, but users will also be able to deploy the architecture privately, in their own facilities.
The software that makes it possible is open source and available on GitHub. Users that want to deploy it in-house have the choice of downloading it and setting it up themselves or buying it as a commercial product from Joyent, in which case the company will provide documentation and support.
The open source software package is the same software that runs Joyent’s cloud, called Triton. “The stack that’s up on GitHub, that’s the stack that we run in production,” Cantrill said.
In creating Triton, Joyent has solved some technological problems with running Docker containers in a multi-tenant environment. One of them is isolation, a problem the company actually solved long ago.
The public cloud services Joyent has been providing for years run on its SmartOS operating system, which also uses application containers. “With our SmartOS history, we’ve got totally secure zones, containers,” Cantrill said.
The difference between what was already in place and Triton is that Joyent’s cloud hadn’t been executing Linux binaries in its containers and wasn’t plugged into Docker. Triton adds the ability to execute Linux binaries directly “on metal” and connects to the remote Docker API, through which developers can provision containers in the data center.
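The remote Docker API angle means a developer’s existing Docker tooling can simply point at a data center endpoint instead of a local daemon. As a rough, non-Joyent-specific illustration, here is how the Docker SDK for Python targets a remote endpoint; the endpoint URL, port, and image are placeholders.

```python
# Illustrative only: provisioning a container against a remote Docker API endpoint
# using the Docker SDK for Python. The endpoint URL and image are placeholders,
# not Joyent's actual Triton endpoint (which would also require TLS credentials).
import docker

# Point the client at a remote Docker API endpoint instead of the local daemon.
client = docker.DockerClient(base_url="tcp://docker.example.com:2375")

# Provision a container remotely, exactly as you would against a local daemon.
container = client.containers.run(
    "nginx:latest",          # image to run
    detach=True,             # return immediately with a Container object
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
)

print(container.id, container.status)
```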
Triton’s security comes from a substrate fundamentally different from the Linux one Docker containers usually run on. “Docker security problems, they are not Docker problems. They are Linux kernel problems,” Cantrill said.
Joyent’s cloud has been designed with multi-tenancy in mind from the beginning, so those problems are simply not there.
Major public cloud providers, including Amazon Web Services, Microsoft Azure, and Google Cloud Platform, offer Docker containers as a service running on top of their clouds. But in addition to lower performance, the big problem with running containers on their infrastructure is network management, Cantrill said.
Because containers are being deployed in cloud VMs, networking between containers is not a “first-class citizen,” he said. “Management is just brutal.”
In Joyent’s Docker cloud, containers are connected with VXLANs (virtual extensible LANs).
For the public cloud service, Joyent will charge per container per minute. The service will be hosted in its Ashburn, Virginia, data center (within an Equinix facility), but there are plans to expand it to other locations.

3:30p | Solving the Data Center Paradox: Enabling Higher Performance at a Lower Cost

Motti Beck is Director of Enterprise Market Development at Mellanox Technologies Inc. Follow Motti on Twitter: @MottiBeck.
For the last decade, virtualization technology has proven itself and has become the most effective way to increase data center efficiency. While initially most of the effort went into developing server virtualization technologies, the recent focus has been on advanced interconnect technologies that enable more efficient data communication between servers and storage. This trend is also a direct outcome of the adoption of innovative data center architectures, such as scale-out, in-memory compute, and solid-state-drive-based storage systems, which depend heavily on the level of functionality and performance the interconnect can deliver.
Scale-out architectures aren’t actually new. They appeared more than a decade ago in High Performance Computing (HPC). At that time, the industry realized that distributed systems built with standard off-the-shelf servers provided much higher performance, enabling a reduction in Total Cost of Ownership (TCO). Later, scale-out architectures started to be used in data centers, first in distributed database systems and later in cloud infrastructures, where faster east-west traffic is required for better communication between the various virtual machines (VMs).
On the storage side, the amount of data that must be processed in real-time and stored continues to grow, and using traditional SAN-based systems has become almost impossible due to the cost and to the exponential complexity associated with scaling such systems. Thus, as is the case on the compute side, scale-out storage architecture has proven itself as the right answer. Furthermore, in order to support mobile users and enable them to access the cloud in real-time, In-Memory processing technology has become more popular, allowing organizations to leverage the growing capacity and the lower cost of Solid State Drives (SSDs).
Innovative networking technology providers have developed interconnect products that address these emerging market trends and enable maximum efficiency of virtualized data centers at lower cost. Developed in close collaboration with hyper-scale cloud and Web 2.0 partners over the last several years, these solutions have delivered significant improvements in Return on Investment (ROI). Networking innovation has not only increased data communication speeds from 10GbE to 100GbE and reduced latency to a few hundred nanoseconds, but also added “offload” engines that execute complex functions directly on the input/output (IO) controller to minimize Central Processing Unit (CPU) overhead, resulting in much higher CPU availability for applications and improving overall system efficiency.
Vendors have also recently introduced a programmable NIC (Network Interface Card) with an Application Acceleration Engine based on a high-performance IO controller, which gives users maximum flexibility to bring their own customized protocols. All of these innovative technologies improve reliability, data processing speed, and real-time response, and lower the TCO of the virtualized data center.
Virtual Desktop Infrastructure (VDI) is a good example of the efficiency and cost savings that can be gained by solutions using high-performance end-to-end interconnects. VDI efficiency is measured by the maximum number of virtual desktop users the infrastructure can support; more users per server lowers the cost per user. One result of this approach is a record number of virtual desktops per server using a 40GbE NIC.
As previously mentioned, interconnect performance is just one dimension that continues to improve. The other is embedded engines that offload complex jobs from the expensive CPU to the IO controller, resulting in higher reliability, faster job completion, and predictable real-time response, which is extremely important to mobile users. A good example of an offload is the Remote Direct Memory Access (RDMA) engine, which moves communication tasks from the CPU to the IO controller. InfiniBand was the first fabric to carry RDMA, and the industry later standardized the capability over converged Ethernet as RoCE (RDMA over Converged Ethernet). RDMA has also been adopted by the storage industry, in Microsoft’s SMB Direct protocol as well as for iSCSI through a standard known as iSER (iSCSI Extensions for RDMA).
The performance and efficiency gains are significant for hypervisors accessing storage over RoCE, outperforming traditional Ethernet communication. One of the best examples is Fluid Cache for SAN, which cuts storage access latency by 99 percent and enables four times more transactions per second and six times more concurrent users. Furthermore, when running VDI over 10GbE with RDMA, the system can handle 140 concurrent virtual desktops, compared with only 60 over 10GbE without RDMA. This, of course, translates into significant CapEx and OpEx savings. One of the most notable deployments of RDMA is in Microsoft Azure, where 40GbE with RoCE is used to access storage at record speed while consuming zero CPU cycles, achieving massive cost of goods sold (COGS) savings.
Network virtualization is a relatively new market trend that requires support for overlay network standards such as VXLAN, NVGRE, and the emerging Geneve. These standards are a must in any modern cloud deployment, but they put a very heavy load on server resources, which directly affects performance and prevents the system from exchanging data at full speed. Advanced NICs that include VXLAN and NVGRE offload engines remove such data communication bottlenecks and maximize CPU utilization, so expensive resources in the system no longer sit idle, which improves ROI.
Mobile users expect to be able to access their required information at any time and from anywhere, which means that massive amounts of data must be analyzed in real-time. This requires adopting new scale-out architectures in which the system efficiency heavily depends on interconnect technologies that pave the way toward achieving these goals.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:14p | Telecom Group Takes Net Neutrality Debate to Court
This article originally appeared at The WHIR
The net neutrality debate moved from the FCC to the courts this week, as two separate lawsuits were filed challenging the Commission’s February ruling that Internet access should be regulated as a utility. Industry group USTelecom filed an appeal with the US Court of Appeals for the District of Columbia Circuit, and Alamo Broadband filed an appeal with the US Court of Appeals for the Fifth Circuit in New Orleans.
USTelecom represents AT&T, Verizon, and other companies. Its appeal argues that the regulations ushered in by the FCC ruling are “arbitrary, capricious, and an abuse of discretion,” as well as being technically inadmissible due to various conflicts with standing laws, regulations, and procedures, Reuters reports.
“As we have said throughout this debate, our member companies conduct their business in conformance with the open Internet principles, and support their enactment into law. However, we also support a regulatory approach that relies upon Section 706 authority of the Communications Act, and we do not believe the Federal Communications Commission’s move to utility-style regulation invoking Title II authority is legally sustainable,” said USTelecom President Walter McCormick.
The appeals were filed now to meet the appellate deadline in case the FCC’s 10-day window of opportunity starts on “the date of release or issuance of the FCC’s order” rather than on the date of Federal Register publication.
The Alamo Broadband suit makes a similar argument to USTelecom’s, according to Reuters.
The FCC’s decision is not necessarily the net neutrality guarantee advocates were hoping for. Lawsuits from industry were expected, however, and more are expected to follow if the net neutrality ruling survives that long. In the aftermath of the FCC ruling, industry analyst Jeff Kagan predicted a final decision would come from the courts and is “years away.”
The D.C. Circuit has twice rejected FCC net neutrality measures, including the Open Internet Order in early 2014. The FCC’s rules are also being challenged in Congress through the Internet Freedom Act.
This article originally appeared at http://www.thewhir.com/web-hosting-news/telecom-group-takes-net-neutrality-debate-court

5:59p | Video: Watch a Mac Mini Fly to Space

While it doesn’t have an entire blimp to send soaring above Silicon Valley or fly over the NSA data center in Utah to get its message out, Mac Mini Vault, a company that provides data center space for Apple’s desktop computers used to host websites, does have one or two Mac Minis lying around.
Mac Mini Vault, a subsidiary of Wisconsin ISP and data center provider CyberLynk Networks, is one example of several data center service providers capitalizing on the growing Mac hosting niche within the colocation market. Another provider, called MacStadium, raised $1 million in funding in December 2014.
Earlier this month, Mac Mini Vault celebrated its fifth anniversary by attaching a Mac Mini and a couple of GoPros to a helium-filled balloon and sending it into the stratosphere. The 6-pound payload (they removed most of the Mini’s internals to meet payload-weight regulations) reached a peak altitude of nearly 112,000 feet before returning to earth.
The rig took off from a spot just outside of Blythe, California, on the bank of the Colorado River.
Here’s the video shot by the Mac hosting company’s rig during the flight:
8:53p | Wells Fargo Consolidates Data Centers, Lowers Carbon Emissions

Data center infrastructure is costly on its own, but there is also the additional cost of carbon emissions associated with the enormous amount of power data centers consume.
All big U.S. companies have been under pressure to reduce carbon emissions associated with their operations in recent years. Many of them have looked to data center consolidation as one of the main ways to reduce their carbon footprint.
San Francisco-based Wells Fargo, the fourth-largest bank in the U.S., has been doing just that. The company has been consolidating its data center infrastructure since 2009, when it merged its IT functions with Wachovia National Bank.
Wells Fargo acquired Wachovia during the economic collapse of 2008, interrupting what was going to be a government-arranged Citigroup takeover of the troubled Charlotte, North Carolina-based bank, meant to prevent it from failing.
Wells Fargo has shut down close to 100 satellite data centers since early 2009, Bob Culver, the bank’s head of data center strategy and technology, wrote in a blog post. The ongoing data center consolidation effort has consisted of moving workloads from underutilized servers onto newer, more efficient gear, server virtualization, and moving infrastructure into bigger facilities.
“We estimate that our efficiency has increased 70 percent from mid-2011 to today,” Culver wrote. “Our mainframe environment takes less than half the power that it did just two years ago, yet has more processing capacity.”
Wells Fargo has gone from less than 4,000 virtual machines running in its data centers to about 30,000 now. That, coupled with spinning up more than 21,000 virtual desktops, has reduced the amount of facilities space by more than 18 percent.
Culver expects to close 11 percent more of the data center portfolio over the next two years, which would bring it to two-thirds of what it was in 2009, when its infrastructure was merged with Wachovia’s.
Data center consolidation has consistently resulted in reduction of Wells Fargo’s carbon footprint, which the company has committed to cutting by 35 percent by 2020. According to Culver, his team’s efforts resulted in a 5.5-percent reduction in 2012, a 5.8-percent reduction in 2013, and a 5.2-percent reduction last year.
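For rough context, if those yearly figures are each measured against the prior year (an assumption; the article does not say whether they are year-over-year or against a fixed baseline), they compound to roughly a 16 percent cumulative cut so far, as the short calculation below illustrates.

```python
# Back-of-the-envelope check, assuming the reported reductions are year-over-year
# (the article does not specify the baseline). Figures are the ones quoted above.
yearly_reductions = [0.055, 0.058, 0.052]  # 2012, 2013, 2014

remaining = 1.0
for r in yearly_reductions:
    remaining *= (1.0 - r)

print(f"Cumulative reduction so far: {1.0 - remaining:.1%}")  # roughly 15.6%
print("Commitment by 2020: 35%")
```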
11:09p | HP Intros Turnkey OpenStack Private Cloud Solution

HP today announced the Helion Rack, a pre-configured, pre-tuned, and pre-tested OpenStack private cloud solution that combines HP’s hardware, its distribution of Cloud Foundry (Platform-as-a-Service), and OpenStack, the open source cloud platform.
The integrated turnkey product is designed to help enterprises build and deploy private clouds for internal service delivery without having to design and configure them from scratch. It scales out by adding storage and compute nodes.
The private cloud is positioned as DevOps-enabled, allowing internal users to rapidly provision the capacity they need. Cloud Foundry provides multi-language application development, and OpenStack controls the cloud infrastructure. HP Helion acts as both private Infrastructure-as-a-Service (IaaS) and PaaS.
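The self-service provisioning that the OpenStack layer enables can be illustrated generically with the openstacksdk Python library; the cloud name, image, flavor, and network names below are placeholders, and this sketch is not specific to HP Helion Rack.

```python
# Generic illustration of self-service provisioning against an OpenStack cloud
# using openstacksdk. The cloud name, image, flavor, and network are placeholders;
# this is not HP Helion-specific code.
import openstack

# Connection details are read from clouds.yaml (or environment variables).
conn = openstack.connect(cloud="my-private-cloud")

image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="dev-test-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, then report its state.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```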
The platform is also optimized for hosting applications requiring secure, compliant, and performance-tuned infrastructure as well as for developing, designing and deploying cloud-native applications.
“The demand from lines of business and dev/test teams for fast delivery of flexible compute resources is putting many IT departments under intense pressure,” said Owen Rogers, senior analyst at 451 Research, in a press release. “While IT departments understand these needs, they don’t always have the time, resources, infrastructure, or skills needed to meet demand. HP Helion Rack can help overcome the cloud and OpenStack software skills barrier that delays many companies’ private cloud deployments.”
HP’s OpenStack private cloud solution will be available in April, with pricing based on configuration.

11:28p | Rubrik Raises $10M for Converged Data Management System

Rubrik received $10 million in an oversubscribed Series A round led by Lightspeed Venture Partners and others. The company provides a converged data management system and platform, with hardware that packs a broad set of backup and recovery functionality into one package.
With an eye toward simplification, Rubrik dubs its offering “a time machine for cloud infrastructure” that fuses data management with web-scale IT, eliminating the need for disparate backup software and acting as an all-in-one box.
The box takes up two rack units and provides backup, de-duplication, compression, and version management. Data can also move over to public clouds like AWS once it is ready.
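The article doesn’t describe Rubrik’s internals, but block-level de-duplication in general works by fingerprinting chunks of data and storing each unique chunk only once. The following minimal sketch illustrates that general idea with fixed-size chunks and SHA-256 digests; it is not Rubrik’s actual design.

```python
# Minimal illustration of fixed-size-chunk de-duplication: fingerprint each chunk
# and store only unique chunks. A sketch of the general technique, not Rubrik's design.
import hashlib

CHUNK_SIZE = 4096  # bytes; real systems often use variable, content-defined chunks

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep unique chunks keyed by digest, return the recipe."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # only previously unseen content consumes space
            store[digest] = chunk
        recipe.append(digest)     # the "backup" is just an ordered list of digests
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its chunk recipe."""
    return b"".join(store[d] for d in recipe)

if __name__ == "__main__":
    store = {}
    first = dedup_store(b"A" * 10000 + b"B" * 10000, store)
    second = dedup_store(b"A" * 10000 + b"C" * 10000, store)  # shares the "A" chunks
    assert restore(first, store) == b"A" * 10000 + b"B" * 10000
    print("unique chunks stored:", len(store))
```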
Industry analyst firm IDC estimates that businesses will spend $47 billion in 2015 on infrastructure to protect data, manage disaster recovery, enable DevOps, and archive for compliance and long-term retention. Convergence is occurring across the landscape in a bid to challenge legacy architectures.
In the storage, backup and recovery worlds, Rubrik enters an increasingly crowded field of companies looking to unite several types of data functionality, previously available as only separate software offerings, into one.
The later-stage Actifio, object storage vendor Exablox, hybrid storage startup Reduxio, and Nutanix all combine several features into one, albeit with different approaches. Nutanix, for example, focuses on the primary environment, while Rubrik focuses on the secondary environment.
Rubrik is also competing with giants like EMC, which is increasingly reimagining enterprise IT architecture. There are also larger converged infrastructure plays like VCE and SimpliVity that extend beyond data management and converge other functionality.
Rubrik co-founder and CEO Bipul Sinha brings considerable experience in the world of convergence, having been a founding investor and board member at Nutanix and at scale-out storage provider PernixData.
“For years IT has been forced to stitch together legacy pieces of infrastructure to manage data through the application lifecycle, from recovery to provisioning production replicas for DevOps,” said Sinha in a press release. “Today we are excited to announce the first act in our product journey. We have built a powerful time machine that delivers live data and seamless scale in a hybrid cloud environment. Businesses can now break the shackles of legacy and modernize their data infrastructure, unleashing significant cost savings and management efficiencies.”
Experience at Rubrik extends beyond Sinha. Co-founder and VP of Engineering Arvind Jain was a Google distinguished engineer and founding engineer of Riverbed Technology.
Soham Mazumdar, co-founder and architect, was co-founder of Tagtile (acquired by Facebook) and architect of Google’s disk-based search index.
Arvind Nithrakashyap, co-founder and CTO of Rubrik, was a co-founder of Oracle Exadata and a principal engineer on Oracle’s cluster technology.
“There is a huge opportunity to disrupt the secondary environment, which is the foundation of enterprise IT,” said Ravi Mhatre, partner at Lightspeed, in a press release. “We invested in Rubrik because of the exceptional caliber of its leadership team and game-changing technology in a market that has remained stagnant for more than a decade.”
Other investors include John W. Thompson (Microsoft chairman and former Symantec CEO), Frank Slootman (ServiceNow CEO and former Data Domain CEO), and Mark Leslie (Leslie Ventures and founding CEO of Veritas).