Data Center Knowledge | News and analysis for the data center industry
Thursday, August 17th, 2017
12:00p
Maryland Offers Local Data Centers Grants for Energy-Reducing Projects
Quite a few states offer tax breaks to attract data centers in the hope of boosting local economies, but Maryland is the first to offer grants to existing facilities, or those under construction in the state, as an incentive to employ cost-effective technologies that improve data center energy efficiency.
The Maryland Energy Administration’s Data Center Energy Efficiency Grant Program is designed to support the state’s growing IT sector and to help reduce energy usage, lower overall power usage effectiveness (PUE) ratings, improve competitiveness, and drive innovation, MEA Director Mary Beth Tung said in a statement.
Although the pilot program allows any commercial, state or local government, or non-profit data center located in Maryland with a data floor of at least 2,000 square feet to apply, each proposed project must be able to produce enough lifetime net energy benefits to offset its costs.
Applications for first-round consideration are due by Nov. 2, and all projects have a Dec. 30, 2019 completion deadline.
Grant awards for qualifying data center energy efficiency projects are expected to range from $20,000 to $200,000 per project, subject to funding availability. Grants are designed to cover up to 50 percent of the net customer cost (up to $200,000), after other incentives and grants have been applied, toward innovative and cost-effective energy efficiency solutions, according to MEA.
These include: server virtualization; air flow optimization; aisle containment; lighting controls; uninterruptible power supplies (UPS); motors and variable frequency drives; heating, ventilation, and air conditioning (HVAC) upgrades; and building insulation and envelope improvements.
Data centers that propose two or more of the above measures (not an all-inclusive list) will likely be the most competitive, according to Tung.
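To make the grant sizing rules above concrete, here is a minimal sketch of the 50 percent / $200,000 calculation; the function, constant names, and example figures are illustrative assumptions, not part of MEA’s published methodology.

```python
# Illustrative sketch of the grant sizing described above: the award covers
# up to 50% of the net customer cost, capped at $200,000 per project.
# Names and example figures are assumptions for illustration only.

GRANT_CAP = 200_000   # maximum award per project, in dollars
COST_SHARE = 0.50     # share of net customer cost the grant may cover

def estimate_grant(project_cost: float, other_incentives: float) -> float:
    """Estimate an award: 50% of cost net of other incentives, capped."""
    net_cost = max(project_cost - other_incentives, 0)
    return min(COST_SHARE * net_cost, GRANT_CAP)

# Example: a $500,000 project with $100,000 in other incentives has a
# $400,000 net cost; 50% of that is $200,000, exactly hitting the cap.
print(estimate_grant(500_000, 100_000))  # 200000.0
```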
Datacentermap.com currently lists 22 colocation data centers spread across six regions in Maryland, with a dozen in the Baltimore area.
If you or your organization would like to participate in the Data Center Energy Efficiency Grant Pilot Program Webinar, which will explain the process in detail, RSVP to datacenters.mea@maryland.gov with your name, organization, and contact information. The date and time of the webinar will be scheduled and posted to the DCEEG website soon, and applications can be found here.
For more information or assistance, contact Rory Spangler at Rory.Spangler@Maryland.gov.
4:51p
Cisco Outlook Shows Robbins Turnaround Hasn’t Spurred Growth
Ian King (Bloomberg) — Cisco Systems Inc., whose machines form the backbone of the internet, predicted another revenue decline as the company tries to remake itself amid a changing networking industry.
Revenue in the current period may decline as much as 3 percent from a year earlier, the San Jose, California-based company said. That indicates sales of as little as $11.98 billion and compares with an average analyst estimate of $12.1 billion. Net income in the fiscal first quarter, which ends in October, will be 48 to 53 cents a share. On average, analysts project earnings of 52 cents.
Cisco’s transition into a company that’s less dependent on hardware is hurting its overall growth as the software and services businesses that Chief Executive Officer Chuck Robbins is trying to build take time to gain ground. The company still gets the biggest chunk of revenue from high-priced hardware and that’s a challenge during an industry shift toward cheaper, software-based networking.
“It’s a big company and this kind of transition just has to be a gradual one,” said Simon Leopold, an analyst at Raymond James & Associates. “People will be patient. If you ask me how patient, I can’t say.”
Cisco shares fell 3.7 percent to $31.14 at 10:03 a.m. in New York Thursday. That brought its gains for the year to 3 percent, compared with a 17 percent gain by the Nasdaq Composite Index.
Cloud Competition
Robbins is working to restore the kind of growth that made Cisco one of the world’s largest companies. The networking-gear maker hasn’t reported an annual revenue gain of more than 10 percent since 2010. His efforts to fire up sales are being hampered by the shift to computing in the cloud — in remote data centers that provide services over the internet. Owners of such facilities, like Amazon.com Inc.’s Amazon Web Services, are increasingly building their own hardware and replacing traditional suppliers of servers, storage, and networking.
Profit in the fiscal fourth quarter, which ended July 29, was $2.42 billion, or 48 cents a share. Sales fell 4 percent to $12.1 billion, Cisco said Wednesday in a statement. That marked the seventh consecutive year-over-year contraction in quarterly revenue. Analysts on average projected profit of 51 cents a share on revenue of $12.06 billion, according to data compiled by Bloomberg.
Sales in Cisco’s biggest business, switching, declined 9 percent in the fourth quarter, as did revenue from routing, the second-largest unit. Collaboration, which includes videoconferencing, fell 3 percent.
Cisco’s customers had held back on ordering ahead of the company’s release of a new set of products for the switching market, according to Robbins. That was just a “pause,” and demand for the new range of switches is good, he said. Overall, orders improved in the quarter compared with the preceding three months.
Slower Transition
Cisco’s transition into a software company that books recurring revenue is more difficult than similar shifts elsewhere because it has traditionally been paid upfront for hardware, Robbins said in a phone interview. Offering switches and routers that are more flexible and come with software-as-a-service subscriptions will help speed up that shift and rekindle revenue growth, he said.
“One of the key things that we needed to do was get some energy in our core markets,” Robbins said. That started with the offering of new switch products in June. “You’re going to see more and more of that innovation coming from us.”
Software and subscription deferred revenue is now more than $5 billion, Robbins said. That represents a gain of 50 percent from the same period a year earlier. He declined to predict when company revenue will return to growth.
“I don’t believe that any type of company has been through the type of transition that we’re going through,” he said. “It may take a little longer. I feel good about where we are.”
5:22p
Coherent vs. Direct Detection in Metro Data Center Interconnectivity
Niall Robinson is VP of Global Business Development for ADVA Optical Networking.
The expansion of global data centers has been rapid. Driven by the fierce migration to cloud-based services and constantly increasing capacity needs, internet content providers (ICPs) have built bigger and bigger facilities to store and process information.
In the past couple of years, though, the trend toward ever-larger mega data centers has hit a wall. Once a single implementation grows beyond a certain number of servers, megawatts, and square feet, it no longer delivers new efficiencies and economies of scale. What’s more, as data centers expand, they represent a bigger risk of failure. That’s why cloud-scale operators are focusing on creating multi-data center clusters in particular geographical regions. These groupings of multiple facilities enable ICPs to offer superior business continuity, disaster recovery, and digital media delivery by maximizing the performance, availability, and redundancy of data services.
Of course, high-availability metro-regional clusters can only be effective if the data center interconnectivity (DCI) between them doesn’t become a bottleneck. This creates an urgent need for high-capacity, single-span DWDM DCI links: terabit-per-second networks optimized for distances of up to 80km. This DCI infrastructure needs to deliver maximum efficiency in four key areas:
- Power: From a sustainability as well as a business perspective, energy usage is as critical in DCI as it is within the data center. Current practice, such as relying on solutions that use active backplanes, often consumes unnecessary watts per bit.
- Space: As demand grows, data center real estate comes at an ever higher premium. That’s why it’s critical to leverage an optical transport platform that also minimizes footprint. Data center clusters increasingly depend on ultra-compact solutions with configurations that use the least possible rack space.
- Simplicity: With their relatively small operations teams, ICPs require solutions that offer genuine plug-and-play installation and simplified provisioning.
- Cost: While equipment flexibility is a high priority for telecommunications operators, when it comes to DCI transport, cost is king. All new innovation needs to guarantee reductions in capital and operating expenditure.
Optimizing and balancing these different considerations is crucial. An important part of this juggling act is deciding between coherent and direct detection of the optical signal. The choice between these two approaches has emerged as the key question facing today’s metro DCI infrastructure designers.
In recent years, coherent detection, which involves a local oscillator at the receiver tracking the phase of the optical transmitter, has revolutionized long-haul DWDM networks. It has enabled 100Gbit/s transport on a single wavelength over thousands of kilometers.
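For readers who want the mechanism in equation form, here is a standard textbook sketch (not specific to any vendor’s implementation): mixing the incoming signal field with a local oscillator on a photodetector produces a beat term that preserves the signal’s phase.

$$
I(t) \propto \left| E_s + E_{LO} \right|^2 = E_s^2 + E_{LO}^2 + 2 E_s E_{LO} \cos\big( (\omega_s - \omega_{LO})\, t + \phi_s - \phi_{LO} \big)
$$

The cross term carries the signal phase \(\phi_s\) as well as its amplitude, which is what coherent receivers exploit; direct detection measures only the intensity \(E_s^2\) and discards the phase.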
More recently, coherent detection has also been employed in metro DCI networks. It enhances performance and spectral efficiency, but the added cost and power consumption make coherent solutions less than ideal for these shorter links. Still, the installation simplicity of coherent systems over such short links is highly attractive to stretched ICP operations teams.
Meanwhile, advances in technology that support 100Gbit/s direct detect DCI applications are challenging the dominance of coherent detection in this market. New solutions built on pulse-amplitude modulation 4 (PAM4) technology are able to transport direct detect signals up to 80km while still meeting stringent optical signal-to-noise ratio requirements.
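The capacity arithmetic behind these formats is straightforward: a PAM-N signal carries log2(N) bits per symbol, so the raw lane rate is the symbol rate times the bits per symbol. Here is a minimal sketch of that calculation; it ignores forward-error-correction and framing overhead, which real links add.

```python
import math

def raw_lane_rate_gbps(baud_rate_gbaud: float, pam_levels: int) -> float:
    """Raw bit rate of one PAM-N lane: symbol rate x bits per symbol."""
    bits_per_symbol = math.log2(pam_levels)  # PAM4 -> 2 bits, PAM8 -> 3 bits
    return baud_rate_gbaud * bits_per_symbol

# Two 25Gbaud PAM4 lanes yield 2 x 50 = 100Gbit/s; 50Gbaud PAM4 reaches
# 100Gbit/s on a single lane, and PAM8 adds a third bit per symbol.
print(raw_lane_rate_gbps(25, 4))  # 50.0
print(raw_lane_rate_gbps(50, 4))  # 100.0
print(raw_lane_rate_gbps(50, 8))  # 150.0
```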
Here’s how the two approaches compare:
- Fiber capacity: Coherent detection offers more in terms of spectral efficiency and, therefore, optical bandwidth. However, the reduced capacity of direct detect is often more than sufficient to connect regional data center clusters.
- Size, power and cost: Direct detect networks are naturally simpler than coherent options, which require application-specific integrated circuits (ASICs) and digital signal processors (DSPs). This means direct detect systems are cheaper, smaller, and consume less power.
- Simplicity: New smart optical line systems, developed exclusively to support 100G direct detect, are leveling the installation simplicity playing field between coherent and direct detect solutions.
- Applications: Direct detect is optimized for shorter reaches and point-to-point connections in metro networks. Coherent technology, on the other hand, is uniquely suited to long distance data transport.
It’s clear why the lower power consumption and lower cost-per-bit of direct detect transmission across distances up to 80km make it a compelling choice for operators of regional data center clusters. What’s more, further direct detect research is exploring solutions such as 50Gbaud PAM4 or PAM8. This ensures direct detect will remain competitive with coherent solutions in years to come.
Direct detection certainly has a bright future. It may not be as scalable or as spectrally efficient as the coherent alternative, but its value is increasing rapidly in this expanding space.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:42p
BT Exploring Open Data Center Switch Tech to Transform Its Network
The British telco giant BT is kicking off a research project with Dell EMC to see how effective the open network switches you’d normally find in Facebook or Microsoft data centers would be at delivering telecommunications services.
The two companies are building a proof-of-concept at BT Labs in Adastral Park, a science campus near Ipswich, England, to validate the approach of managing telco traffic with open switches that run software not written for any particular piece of hardware – a departure from the tightly integrated hardware-software solutions that have powered telco networks for decades.
The overall concept is frequently referred to as “disaggregated switching.” Pioneered by hyper-scale data center operators, it is thought to be less expensive than buying specialized hardware from networking equipment giants – Dell EMC among them – and to allow more flexibility to create custom functionality. The drawback is that implementing it requires lots of engineering resources, something the internet giants have in droves.
See also: Facebook Makes Open Source Networking a Reality
BT, like other telcos, views this software-defined networking technology as a way to make its network more flexible and easier to manage, using Network Functions Virtualization (NFV) to replace expensive physical appliances, such as firewalls and routers, with software functions that run on commodity, or merchant, silicon. The overall promise is to make buying and using telco services more like the way companies buy and use cloud services from the likes of Amazon Web Services and Microsoft Azure, while reducing the cost and complexity of managing the network for the operator.
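As a toy illustration of the NFV idea just described (a fixed-function appliance replaced by plain software on commodity hardware), here is a minimal sketch of a firewall check implemented as an ordinary function; the rule format and addresses are invented for illustration and bear no relation to BT’s actual systems.

```python
# Toy "virtual network function": a firewall check as plain software.
# The rule format and addresses are invented for illustration only.

from ipaddress import ip_address, ip_network

# Deny rules as (source network, destination port); unmatched traffic passes.
DENY_RULES = [
    (ip_network("10.0.0.0/8"), 23),   # block telnet from the internal range
    (ip_network("0.0.0.0/0"), 445),   # block SMB from anywhere
]

def allow_packet(src_ip: str, dst_port: int) -> bool:
    """Return True if a packet should be forwarded, False if dropped."""
    return not any(
        ip_address(src_ip) in net and dst_port == port
        for net, port in DENY_RULES
    )

print(allow_packet("10.1.2.3", 23))    # False: hits the telnet deny rule
print(allow_packet("192.0.2.7", 443))  # True: no rule matches
```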
According to a Dell EMC announcement released this week, potential use cases the proof-of-concept will explore include:
- Instant activation of Ethernet circuits from a third party (such as an enterprise)
- Ability of the system to deliver real-time network operational data
- Bandwidth calendaring – flexing the bandwidth of an Ethernet circuit according to customer need via a predetermined calendar (see the sketch after this list)
- Delivering network telemetry data to third parties automatically
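To make bandwidth calendaring concrete, here is a minimal sketch of a predetermined calendar mapped to circuit bandwidth; the schedule format and figures are invented for illustration and are not drawn from BT’s or Dell EMC’s implementation.

```python
# Toy bandwidth calendar for an Ethernet circuit: each entry maps a daily
# time window to a committed rate. Format and figures are invented.

from datetime import time

# (window start, window end, bandwidth in Mbit/s)
SCHEDULE = [
    (time(0, 0),  time(6, 0),  10_000),  # overnight: 10G for backups
    (time(6, 0),  time(20, 0),  1_000),  # business hours: 1G baseline
    (time(20, 0), time(23, 59), 5_000),  # evening: 5G for replication
]

def bandwidth_at(now: time) -> int:
    """Return the calendared bandwidth (Mbit/s) for a time of day."""
    for start, end, mbps in SCHEDULE:
        if start <= now < end:
            return mbps
    return SCHEDULE[-1][2]  # 23:59-24:00 falls into the last window

print(bandwidth_at(time(2, 30)))  # 10000: overnight backup window
print(bandwidth_at(time(9, 15)))  # 1000: business-hours baseline
```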
BT has been providing software-defined WAN (SD-WAN) services for a couple of years, but it has been relying on partners like Cisco and Nuage for the technology.
US-based telco giants Verizon and AT&T have been transforming their networks in this new way for some time now, and both view it as essential to enabling the upcoming 5G wireless standard, considered crucial to the success of next-generation applications such as the Internet of Things (including self-driving cars) and virtual and augmented reality.
Verizon’s efforts in this area took an interesting turn earlier this week, when the company announced a partnership with AWS to provide virtual network services on the giant’s cloud to help customers manage connectivity between their infrastructure and the cloud.
See also: What’s Behind AT&T’s Big Bet on Edge Computing
7:26p
Docker Can Now Containerize Legacy Apps Running on Mainframes
Docker this week announced the first update to its rebranded flagship platform with the release of Docker Enterprise Edition (EE) 17.06. Back in March, Docker rolled out the first Docker EE, built from what had been known as Docker Commercially Supported and Docker Datacenter.
The new release comes on the heels of a report last week from Bloomberg that the container company has been raising money, which will result in $75 million being added to its coffers by the end of the month, bringing with it a new valuation of $1.3 billion — up $300 million from its previous valuation.
The features added to Docker’s flagship product indicate the company is targeting data centers, DevOps, and of course, the hybrid cloud.
Most important for the data center, perhaps, is Docker EE 17.06’s support for mainframes — specifically support for IBM System Z running Linux. “This means for an enterprise, all under one umbrella, all of your major applications can be containerized and managed by Docker,” David Messina, Docker’s chief marketing officer, told Data Center Knowledge.
See also: Why Docker is So Popular — Explaining the Rise of Containers
This should prove useful for DevOps teams saddled with legacy applications running on System Z. In March, Docker introduced the ability to port legacy apps to containers without modifying their code, and this release extends that ability to apps running on the mainframe architecture as well. Other companies had previously offered to containerize legacy apps, but the process required refactoring.
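For a sense of what managing such a containerized legacy app looks like, here is a minimal sketch using the Docker SDK for Python; the image name legacy-app:latest is a hypothetical placeholder for an already-containerized application.

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# "legacy-app:latest" is a hypothetical placeholder image standing in for
# a legacy application containerized without code changes.

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run the containerized legacy app detached, like any other container.
container = client.containers.run("legacy-app:latest", detach=True)

print(container.status)           # e.g. "created" or "running"
print(container.logs().decode())  # application output, as with `docker logs`
```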
“Companies that have a lot of old systems of record are not going to go and refactor all these applications,” Jenny Fong, Docker’s director of products, explained to Data Center Knowledge. “Sometimes for regulatory reasons they’re not allowed to touch the actual code, but they’re still looking for some improvements in terms of portability, in terms of how they can maintain and keep those applications secure.”

According to Fong, some of the legacy apps Docker has seen brought to containers are more than 25 years old, which makes security a big issue, since apps of that age are almost certain to be insecure by design. Not only is it easier to conduct security scans on apps in containers, but a containerized app is also inherently more secure than one running on bare metal or in a virtualized environment, thanks to factors like isolation and the added control containers bring, she said.
‘Bring Your Own Node’
Across all architectures, Docker EE 17.06 adds new security features as well, mainly by expanding RBAC, or role-based access control. One of these is a feature Docker calls “bring your own node.”
“New teams that want to join the Docker EE environment can bring their own nodes to the cluster,” Fong said, “and then through these new access control capabilities, the central IT team can grant them access only to those nodes. Basically, as a new team or a new line of business joining the fray, I only see those nodes. I don’t have the ability to leverage or use any of the other nodes.”
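Here is a toy sketch of the node-scoped visibility Fong describes; the data structures and names are invented for illustration and are not Docker EE’s actual API.

```python
# Toy model of "bring your own node": each team sees only the nodes it
# brought to the shared cluster. Invented names; not Docker EE's API.

CLUSTER_NODES = {
    "node-a1": "payments-team",
    "node-a2": "payments-team",
    "node-b1": "analytics-team",
}

def visible_nodes(team):
    """Nodes a team may schedule onto: only the ones it owns."""
    return [node for node, owner in CLUSTER_NODES.items() if owner == team]

print(visible_nodes("payments-team"))   # ['node-a1', 'node-a2']
print(visible_nodes("analytics-team"))  # ['node-b1']
```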
See also: IBM’s New Mainframe Encrypts Entire Data Center at Once
Policy-based automation capabilities have been enhanced as well. One example is automatic image promotion, which lets administrators establish rules an image must meet before it can get “promoted” — such as into production.
“These rules can be based on tags that developers put on the image,” she said, “but they can also be based on the results of a security scan. If you think about an image that is going into a production folder, you can set a rule that says ‘this image has to be clear of any critical or major vulnerabilities before I will move it.'”
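A toy sketch of the promotion rule described in the quote above; the image metadata fields and rule shape are invented for illustration and are not Docker EE’s actual policy syntax.

```python
# Toy policy-based image promotion: an image moves to production only if
# it passes the configured rules. Fields are invented; Docker EE differs.

image = {
    "tag": "orders-service:1.4.2",
    "scan": {"critical": 0, "major": 0, "minor": 3},  # scan results
}

def can_promote(img):
    """Promote only images clear of critical and major vulnerabilities."""
    return img["scan"]["critical"] == 0 and img["scan"]["major"] == 0

if can_promote(image):
    print("promoting", image["tag"], "to production")
else:
    print("blocked:", image["tag"], "has outstanding vulnerabilities")
```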
Particularly important for admins — as well as security teams — the new Docker EE also offers more granularity when it comes to assigning roles and responsibilities.
“The system now has the high degree of flexibility so it’s not force-fitting you into its roles and responsibilities requirements,” Messina said. “It gives you flexibility in that regard.”
Unicorn’s Revenue Reportedly Still Low
With this release of Docker EE 17.06, coupled with the current round of behind-the-scenes fundraising, it appears that new CEO Steve Singh, who took over the reins in early May, has been busy. According to Bloomberg, he intends for much of the newly raised $75 million in venture capital to be used to build a sales and marketing team to target corporate clients.
That’s a necessity, evidently. In June, Fortune wrote that sources close to Docker estimate the company’s 2016 annual revenue to be somewhere in the neighborhood of $10 million. However, the article went on to quote a Docker spokeswoman saying, “This number is lower than our current revenue number.”