Data Center Knowledge | News and analysis for the data center industry
Wednesday, October 19th, 2016
12:00p |
Digital Realty to Build Data Center Tower in Downtown Chicago Digital Realty Trust is planning to build a massive, 12-story data center in the South Loop of the Chicago Central Business District, which it says will act as an “annex” to its iconic building across the street, at 350 East Cermak, one of the world’s largest carrier hotels and a key network access point for financial services firms involved in the Chicago commodities market.
The proposed facility at 330 East Cermak will be a 54MW tower containing 660,000 square feet of space, built on two acres of land owned by industrial developer CenterPoint Properties, which will contribute it to a joint venture with Digital Realty.
Digital, a San Francisco-based data center REIT, is billing the future Chicago data center as an expansion of 350 E. Cermak, which has historically been in high demand because of the large number of networks that can be accessed there. Until recently, Telx, which Digital acquired last year, operated the network meet-me room in the building.
That room, along with meet-me rooms in other locations, was among the key attributes that made Telx attractive to Digital, which recently pivoted from being an almost strictly wholesale data center provider to an interconnection-centric one offering a mix of wholesale and retail colocation services. Bringing more capacity to downtown Chicago for companies interested in low-latency access to 350 E. Cermak will give this new strategy a big boost.
Read more: Telx Acquisition Closed, Here’s Digital Realty’s Plan
A Major Milestone
It can take years to reach the point when a project has all of the entitlements necessary to break ground in any CBD, and getting a big construction project off the ground in Chicago is especially difficult. Representatives from Digital Realty, CenterPoint, and CBRE — which is marketing the future data center — told Data Center Knowledge the project was “shovel-ready,” which is a significant milestone.
The partners expect the data center tower to be completed within two years of breaking ground. However, the timing of development, as well as the final size, power density, and redundancy configuration, will be subject to market demand.
The Leasing Challenge
The next challenge facing the development team will be to sign up one or more anchor tenants to de-risk the new development. Unlike most suburban data center campuses, there is no practical way to phase a 12-story data center tower, so upfront commitment from one or more big tenants is a must before ground gets broken.
Additionally, it is far more cost-effective to build out large data center halls in a mid-rise building in a downtown CBD while the contractor is mobilized for construction of the building shell.
 Rendering of Digital Realty’s planned data center at 330 E. Cermak in Chicago. The company’s existing carrier hotel at 350 E. Cermak is immediately to the right. (Image: Digital Realty)
The uncertainty of the development timeline will certainly make the task of signing anchor tenants more challenging for CBRE and the development partnership team. It is a classic “chicken and egg” challenge that must be overcome in order to break ground.
Demand for data center capacity in top North American markets is outpacing supply, however, and Chicago is an especially tight market, according to CBRE and other real-estate brokers who specialize in data center space.
The site’s adjacency to CHI1 (350 E. Cermak) makes it a unique data center offering for this market, and it should appeal to cloud service providers who value connectivity options and low latency and who take down data center space in big chunks.
An Interconnection Hub and Historic Landmark
 350 East Cermak in Chicago is a key hub in the region’s digital economy and one of the most connected buildings in the country. (Photo: Rich Miller)
The eight-story Lakeside Technology Center at 350 E. Cermak contains over 1.1 million square feet of space, with 109MW of utility power provided by Commonwealth Edison. The Gothic-style building was originally built in 1912 by R.R. Donnelley & Sons Co. as a printing facility for the Yellow Book and the Sears catalog.
This downtown Chicago icon was converted to telecom use in 1999 by The Carlyle Group and acquired by Digital Realty in 2005. It is connected to Digital’s CHI2 data center at 600 S. Federal in Chicago, as well as its suburban Franklin Park campus.
Read More: Chicago’s Data Fortress for the Digital Economy | 3:00p |
Intel Fourth-Quarter Sales May Miss Estimates on PC Doldrums (Bloomberg) — Intel Corp. forecast fourth-quarter sales that may fall short of estimates, sparking concern that lackluster year-end personal-computer demand will mean manufacturers have no need to replenish chip inventories built up in recent months.
Revenue will be $15.7 billion, plus or minus $500 million, the company said in a statement Tuesday. Analysts had projected $15.9 billion, the average of estimates compiled by Bloomberg. Adjusted gross margin, the only measure of profitability that Intel forecasts, will be in line with analysts’ estimates at 63 percent.
Intel’s third-quarter sales were lifted by processor orders from PC makers that decided to build up their supply of chips ahead of the holiday shopping season. The dimmer fourth-quarter outlook from the world’s largest semiconductor maker may indicate that PC demand has been slow to accelerate, meaning manufacturers will again be sitting on unused stockpiles of chips.
“It’s my concern that we just have an inventory build that could carry on into the first quarter,” said Kevin Cassidy, an analyst at Stifel Nicolaus & Co. “The market might have thought that PCs would be better into the fourth quarter.”
Intel shares, which had gained 9.6 percent this year, slipped 3.5 percent in extended trading following the announcement. Earlier, they gained 1.2 percent to $37.75.
Third-Quarter Pickup
Third-quarter net income rose to $3.38 billion, or 69 cents a share, compared with $3.11 billion, or 64 cents, in the same period a year earlier. Revenue rose 9.1 percent to $15.8 billion. Analysts, on average, had predicted a profit of 67 cents a share on sales of $15.6 billion. Adjusted gross margin, or the percentage of sales left after subtracting production costs, widened to 65 percent from 64 percent, Intel said.
The company’s client computing group, which sells PC chips, posted third-quarter sales of $8.89 billion, a gain of 4.5 percent from a year earlier.
On Sept. 16, Intel raised its forecasts for the third quarter, citing “replenishment of PC supply chain inventory.” The Santa Clara, California-based company increased its projection to about $15.6 billion from about $14.9 billion.
There were other signs of improvement in PC demand last quarter. Worldwide shipments fell 3.9 percent in the third quarter, market researcher IDC said earlier this month, a smaller drop than the decline of 4.1 percent in the second quarter. Unit sales in the U.S. rose 1.7 percent, a second consecutive quarterly gain.
Intel’s data center division, which provides server chips for computers used by corporate networks and for the large systems run by companies such as Google Inc. and Amazon.com Inc., had revenue of $4.54 billion, up 9.7 percent from a year earlier. The company set an annual target of double-digit percentage growth for that unit — a prediction it missed in the first half of the year, leading to concerns Intel might fall short for the year.
“There’s some disappointment there,” said Stifel Nicolaus’s Cassidy. | 4:10p |
Cocktail Talk: Evolution and the Future of Containers and Virtualization Chris Crosby is CEO of Compass Datacenters.
Have you ever been to a social gathering—a cocktail party let’s say—where you didn’t know anyone and it seemed like everyone was talking about something you couldn’t understand? While you stood by that fern hoping that no one would come up and ask you about the evening’s main topic of chit-chat, didn’t you wish that you knew something about the subject so that you could at least nod knowingly and laugh at the right times? Of course you did.
Sometimes the world of data centers and the cloud can be like that. Just when you think you’ve got it all down, along comes something new that everyone but you is comfortable talking about, and suddenly you’re thinking that maybe a few ferns on the raised floor might liven up the place. In the world of data centers, the trendy cocktail topic is how Docker (and other containers) has assumed the role of Homo erectus compared to virtualization’s Neanderthal man. For the less paleontologically inclined among you, this means that containers are the next step in the server efficiency evolutionary process, and virtualization will soon be coming to a museum near you. Like most discussions conducted over a few martinis, the truth lies somewhere in between.
For our purposes let’s use Docker as our container representative. In short, Docker is a container technology in which an application is housed in a file system that contains everything it needs to run: code, runtime, system tools, and system libraries, or anything else that can be installed on a Linux or Windows server. When we compare containers like Docker to virtualization, it is important to note that both technologies have their own strengths and weaknesses, but in terms of implementation they are not necessarily mutually exclusive.
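To make “everything it needs to run” concrete, here is a minimal sketch using the Python docker SDK (docker-py), assuming a local Docker daemon is available; the image and command are purely illustrative. The container brings its own Python runtime and libraries, so the host needs only the Docker engine and the image.

```python
# Minimal sketch, assuming Docker is running locally and the docker-py
# SDK is installed (pip install docker). The image name is illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# python:3.9-alpine bundles its own Python runtime; the host does not
# need Python (or any of the container's libraries) installed at all.
output = client.containers.run(
    "python:3.9-alpine",
    ["python", "-c", "import sys; print(sys.version)"],
    remove=True,  # delete the container once it exits
)
print(output.decode().strip())
```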
When we compare Docker to virtualization, the major points of differentiation are found in two areas: structure and purpose.
- Structure – In a virtual environment, each virtual machine requires a full operating system, and operations are controlled via the hypervisor layer. Due to these requirements, each virtual machine is burdened with processor-intensive overhead, the so-called hypervisor tax. Docker eliminates the need for the hypervisor layer by having all containers on a single machine share the same operating system kernel, which reduces overhead and makes more efficient use of RAM (the sketch after this list illustrates the shared kernel).
- Purpose – Whereas virtualization was developed primarily to increase the efficiency of hardware utilization and to provide server-to-OS neutrality, Docker, and containers in general, give data center and cloud operators a simplified way to build highly distributed systems, allowing multiple applications, worker tasks, and other processes to run autonomously on a single physical server or across multiple virtual machines. This design also enhances Docker’s portability: it can run on multiple platforms, including within the data center, in public or private clouds, and on bare metal.
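Here is the sketch referenced in the Structure bullet, again assuming Docker and the docker-py SDK: because a container carries no guest operating system of its own, the kernel version it reports is the host’s.

```python
# Small illustration (assumed setup: local Docker daemon plus docker-py):
# compare the kernel seen inside a container with the host's kernel.
import platform
import docker

client = docker.from_env()

container_kernel = client.containers.run(
    "alpine", ["uname", "-r"], remove=True
).decode().strip()

host_kernel = platform.release()  # the host's own kernel release

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# On a native Linux host the two values match, because the container is
# just an isolated process tree sharing the host kernel. (On Docker
# Desktop for macOS/Windows both report the kernel of its hidden Linux VM.)
```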
Due to its lower overhead, Docker is less resource-intensive than a virtual machine, enabling its applications to “spin up” quickly (in milliseconds, versus minutes for a virtual machine). This speed differential and more efficient resource usage also mean that an application propagates faster than its virtual machine counterpart and that Docker (and other container-based systems) can support four to six times the number of application instances on a server. Deployed effectively, a container system makes it possible to run more applications on less hardware, resulting in substantial savings on servers and power.
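The speed and density claims are easy to probe informally. The sketch below, with the usual caveats (it assumes docker-py, a local daemon, and an image that is already pulled; absolute numbers vary widely by host), simply times how long it takes to start several instances of the same lightweight web server on one machine; container names are placeholders.

```python
# Rough timing sketch (assumptions: docker-py, local Docker daemon,
# nginx:alpine already pulled). Container names are placeholders.
import time
import docker

client = docker.from_env()
INSTANCES = 5  # arbitrary demo count

containers = []
start = time.monotonic()
for i in range(INSTANCES):
    containers.append(
        client.containers.run("nginx:alpine", detach=True, name=f"demo-web-{i}")
    )
elapsed = time.monotonic() - start
print(f"started {INSTANCES} container instances in {elapsed:.2f}s")

# Clean up the demo containers.
for c in containers:
    c.remove(force=True)
```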
So, will Docker ultimately lead to the extinction of virtualization within data centers? At present, the answer is probably not. Container solutions like Docker are not like-for-like replacements for virtual machines. Unlike virtual machines, all containers must share the same underlying operating system, a substantial limitation in a mixed environment; at present, for example, you could not run both Windows and Linux applications on the same server. Virtual machines also offer a higher degree of security than container alternatives, making containers potentially unsuitable for an organization’s more sensitive applications. Therefore, it is likely that we will see some manner of coexistence of containers and virtualization within data centers, with the implementation of each based on the specific organizational requirements for the specific application(s) to be supported.
Docker, and containers in general, have been incorporated into various infrastructure offerings such as Google Cloud, AWS, and Azure. This makes sense, as the need to quickly spin up applications is characteristic of public cloud users’ requirements. These same capabilities are now beginning to enter the enterprise. However, the prevalence of virtualization within existing data centers, coupled with its broad base of end-user familiarity, ensures its viability in the coming years.
Based on the strengths and weaknesses of both technologies, it is not unreasonable to assume that the use of virtualization will become more limited going forward, confined to specific requirements such as security, while the use of containers within the data center will continue to expand. End users will have to factor these considerations into their future data center planning, and the educated party guest, when asked for his or her opinion on the evolutionary future of both, will be able to take a sip of scotch and water and confidently answer, “It depends…”
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 5:23p |
Botched Server Install Results in $2.14 Million HIPAA Breach Fine  Brought to you by MSPmentor
A Catholic health care system has agreed to pay $2.14 million to settle claims it failed to change the default settings after installing a new server, allowing public access to the private health records of 31,800 patients.
St. Joseph Health – which operates hospitals, community clinics, nursing facilities and provides a range of other health care services – agreed it was in potential violation of security rules of the Health Insurance Portability and Accountability Act (HIPAA).
The U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) opened an investigation on Feb. 14, 2012, after St. Joseph Health reported that files containing electronic protected health information had been publicly accessible via Google and other search engines during the entire preceding year.
“The server SJH purchased to store the files included a file sharing application whose default settings allowed anyone with an Internet connection to access them,” OCR said in an Oct. 17 statement announcing the settlement.
“Upon implementation of this server and the file sharing application, SJH did not examine or modify it,” the statement continued. “As a result, the public had unrestricted access to PDF files containing the ePHI of 31,800 individuals, including patient names, health statuses, diagnoses, and demographic information.”
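As an illustration only, and not something prescribed in the OCR settlement, the kind of post-installation check that would have caught this misconfiguration is simple to script: verify that files on a newly deployed file-sharing service are not retrievable without credentials. The URLs below are placeholders.

```python
# Hypothetical post-install check (placeholder URLs): flag any file on
# the new file share that an anonymous, unauthenticated client can fetch.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

SAMPLE_URLS = [
    "https://files.example-hospital.org/reports/patient-summary.pdf",  # placeholder
    "https://files.example-hospital.org/exports/",                      # placeholder
]

for url in SAMPLE_URLS:
    try:
        with urlopen(url, timeout=10) as resp:
            # A successful response with no credentials means the file is public.
            print(f"EXPOSED: {url} returned HTTP {resp.status} anonymously")
    except HTTPError as err:
        # 401/403 means the share is demanding authentication, as it should.
        print(f"ok: {url} refused anonymous access (HTTP {err.code})")
    except URLError as err:
        print(f"unreachable: {url} ({err.reason})")
```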
See also: Merger of Two Healthcare Giants Makes IT Transformation Inevitable
Federal investigators determined the health care nonprofit failed to conduct a thorough evaluation of the environmental and operational implications of installing the new server.
Also, multiple contractors hired by St. Joseph to assess risks and vulnerabilities of ePHI were brought on in a patchwork fashion that did not result in the enterprise-wide risk analysis required by HIPAA.
“Entities must not only conduct a comprehensive risk analysis, but must also evaluate and address potential security risks when implementing enterprise changes impacting ePHI,” OCR Director Jocelyn Samuels said in a statement. “The HIPAA Security Rule’s specific requirements to address environmental and operational changes are critical for the protection of patient information.”
See also: HIPAA Breach Case Results in Record $5.5M Penalty
In addition to the financial payment, St. Joseph Health agreed to a corrective action plan that includes a thorough risk analysis, implementation of a risk management plan and staff training.
The $2.14 million penalty brings the total amount of settlements for HIPAA security violations to $22.84 million this year, up sharply from $6.2 million in all of 2015.
This first ran at http://mspmentor.net/msp-mentor/botched-server-install-results-214-million-hipaa-breach-fine | 6:44p |
Here are the Top DCIM Software Vendors, According to Gartner Nlyte Software, Emerson Network Power, and Schneider Electric continue to lead in the DCIM software market, according to a new analyst report.
Nlyte is ahead of its two primary rivals, both much bigger companies, leading the top DCIM software vendors in terms of both vision and ability to execute, Gartner said in its 2016 Magic Quadrant for DCIM, released earlier this month. DCIM stands for Data Center Infrastructure Management.
The DCIM market is continuing to evolve, according to the analyst firm, although large enterprises continue to be the primary adopters of DCIM software solutions. And adoption in the US is happening faster than in other parts of the world.
Gartner has observed vendors taking bold steps to gain market share, including acquisitions to fill functionality gaps and efforts to improve the implementation experience and ease of use, which are two of the biggest deterrents to DCIM software adoption.
While DCIM is not industry vertical-specific, the analysts did note two verticals that have emerged as primary targets for the vendors: federal government and colocation providers.
Colo companies are deploying these tools to optimize their facilities and, in some cases, to provide DCIM as a service to their customers.
Adoption of DCIM software by government agencies is being driven by the government’s data center consolidation and optimization initiatives. The latest White House effort, called Data Center Optimization Initiative, among other things requires agencies to implement DCIM tools by the end of fiscal 2018.
Not all vendors that were on Gartner’s Magic Quadrant for DCIM last year made it to the latest quadrant. The analysts dropped ABB, Device42, Geist, Modius, Optimum Path, and Rackwise because they “did not fully meet all of the 2016 inclusion criteria.” FieldView Solutions was dropped because it has been acquired by Nlyte.
Among other criteria, DCIM software vendors have to meet Gartner’s definition of a DCIM tool, have customers in North America and at least one other region, have generated at least $5 million in 2015 DCIM revenue, and have deployed with at least 75 customers or have at least 75,000 racks under management by their DCIM solution.
Here is Gartner’s 2016 Magic Quadrant for Data Center Infrastructure Management Tools:
Source: Gartner, 2016 | 7:57p |
Cisco Revives LAN-SAN Convergence Play in the Data Center For more than eight years now, networking products vendors — and the analyst firms that cover them — have touted the promise of converging LAN and storage traffic over the same Ethernet pipes. In its latest move to make that long-extended forecast finally come true for enterprises, Cisco announced Wednesday updates to its top-of-the-line Nexus 9000 series switches that add support for Fibre Channel over Ethernet (FCoE), and add support to its MDS storage network products for Fibre Channel over IP (FCIP).
“10/40G FCoE is now available on 9K [Nexus 9000] platforms,” said Tony Antony, Cisco’s senior marketing manager for data center products and solutions, in a note to Data Center Knowledge, “in both ACI mode and NX-OS mode. On top of that, we are also introducing 25G/50G/100G IP Storage capabilities on certain 9K platforms.”
An all-new SAN Extension Module is being added to the MDS 9000 family. It includes 24 ports of 16 Gbps Fibre Channel, and 10 ports of 1 Gb and 10 Gb Ethernet on FCIP.
“The new SAN-Extension module which Cisco launched will be supported on MDS 9700 Multilayer Director Chassis (MDS 9706, MDS 9710 and MDS 9718) and will provide the FCIP SAN-Extension capability to enable backup/replication solutions between two data centers,” Antony told us. “With this module, Cisco brings a modular platform for customer requirements around speed, scale, density, availability and resiliency for SAN-Extension solutions.”
Leaf by Leaf
Cisco ACI (Application Centric Infrastructure) is the company’s policy-driven system for managing network configurations based on the needs of workloads. But it’s a very different methodology from how data centers presently operate, which is why Cisco also maintains NX-OS.
The company’s goals for Nexus 9000 (9K) are already on record: It wants enterprises to attach 9K switches into existing Cisco switch-based networks, preferably alongside the 5K tier (between the 7K and 2K tiers in a five-tier hierarchy). Here, in an NX-OS environment, 9K would play an intermediate role between top-of-rack switches and servers hosting virtual machines.
Those servers would then be endowed with Cisco’s AVS virtual switches, acting as “virtual leaves” in newer spine/leaf configurations on the newer side of the data center running ACI. This way, migration between old and new network platforms can take place at customers’ own pace.
“We recommend a step-by-step approach,” Antony told us. “Implement FCoE in the access, then move to core. . . Rip and replace is not needed. You can have a phased approach. The current applications will be perfectly fine. As new applications are deployed, customers consider more agile architecture (ACI/9K) and hence customers’ investments are fully protected.”
But into that mix of NX-OS and ACI, Cisco is now tossing either a fat pipe or a monkey wrench. It’s a bridge between LAN and SAN, the common elements here being support for 100 gigabit-per-second, 50 Gbps, and 25 Gbps IP storage networking in 9K switch models, and reciprocal support for FCIP in more models of its MDS 9700 Multilayer Storage Director (beyond the MDS 9250i), due for release sometime during Cisco’s fiscal year 2017 (which ends in Q2 of calendar year 2017).
The grout for that connection will be supplied by a revision to Data Center Network Manager (DCNM) software that adds end-to-end management of converged SAN/LAN networks.
“DCNM provides the option to either install LAN or SAN or the converged product line,” Antony told Data Center Knowledge. “With role-based access control [RBAC], one can have different departments responsible for SAN and LAN while a super DC user can see both.”
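As a toy illustration of that role split, and emphatically not Cisco’s DCNM API, the sketch below models SAN-only, LAN-only, and converged “super DC” scopes over a shared device inventory; all role and device names are hypothetical.

```python
# Toy RBAC sketch (hypothetical roles and device names, not DCNM code):
# SAN admins see SAN objects, LAN admins see LAN objects, and a "super
# DC" role sees the converged view.
from dataclasses import dataclass

ROLE_SCOPES = {
    "san-admin": {"SAN"},
    "lan-admin": {"LAN"},
    "super-dc": {"SAN", "LAN"},
}

@dataclass
class ManagedObject:
    name: str
    domain: str  # "SAN" or "LAN"

INVENTORY = [
    ManagedObject("mds9710-fabric-a", "SAN"),
    ManagedObject("nexus9k-leaf-101", "LAN"),
    ManagedObject("nexus9k-spine-1", "LAN"),
]

def visible_to(role: str) -> list[ManagedObject]:
    """Return only the objects a given role is allowed to manage."""
    scope = ROLE_SCOPES.get(role, set())
    return [obj for obj in INVENTORY if obj.domain in scope]

for role in ROLE_SCOPES:
    print(f"{role:10s} -> {[obj.name for obj in visible_to(role)]}")
```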
The Six-Year Plan
End-to-end management and RBAC are features that Cisco described as an absolute necessity for any SAN and LAN convergence to take place. . . during an online conference some five years ago [PDF].
“Cisco said that there are three key requirements,” wrote Dr. Jim Metzler, the conference’s moderator, in a summary published in September 2011. “One of those requirements is a converged management platform that includes functionality such as provisioning and monitoring. The second requirement that Cisco mentioned is role-based access control (RBAC) to enable the tasks and roles of the SAN and LAN administrators to be kept separate and contained. The third requirement is the ability to have a VM-aware topology view that shows all the dependencies from the VM out to the physical host, through the fabric, and to the storage array.”
Seven months earlier, Cisco commissioned a Forrester study [PDF] touting the benefits of using IP networking as a convergence mechanism.
“While the cost of FC switching has come down over time,” wrote Forrester’s analysts in February 2011, “Ethernet is cheaper on a per-port basis and is likely to continue to be less expensive due to the higher volumes and larger pool of vendors serving the space compared with the relatively narrow and low-volume field of FC. Moving storage traffic to a form of Ethernet, a technology that is ubiquitous in data centers today, is seen by many as a move toward more use of industry standards and reduction of costs overall.”
That was nearly six years ago already. Since 2010, Cisco has accumulated a list of customers that adopted FCoE, including Coca-Cola Bottling Co. of Charlotte, North Carolina; Boeing; the Commonwealth of Massachusetts; and the University of Texas. Unquestionably, convergence at this level is indeed happening for some firms.
While there’s little dispute over Cisco’s assertions of the basic requirements for FCoE-based LAN/SAN convergence to come to fruition, the question now is whether, at long last, the maturity of the technology delivering those requirements has reached critical mass.