Data Center Knowledge | News and analysis for the data center industry
Wednesday, November 18th, 2015
5:01a
Vapor IO Wants to Bring Server Management Tools Out of the 90s
Vapor IO, the data center infrastructure startup co-founded by Cole Crawford, former executive director of the Open Compute Project and one of the people present at the genesis of OpenStack, has made generally available an open source technology that aims to bring basic server hardware management out of 1998 and into 2015.
OpenDCRE, which stands for Open Data Center Runtime Environment, does the same things Intel’s 17-year-old Intelligent Platform Management Interface (IPMI) does but in ways that better fit modern data center management tools, Crawford said.
The company also announced partnerships around OpenDCRE with data center analytics software companies Future Facilities and Romonet and planned to demonstrate the open source technology together with HP, using it to manage HP Cloudline, the commodity servers HP makes in partnership with the Taiwanese electronics manufacturing giant Foxconn.
In addition to its software products, Vapor has an unusual, cylindrical data center rack design, which the company says is a lot more energy- and space-efficient than traditional rows and aisles.
IPMI defines interface specs for out-of-band management, or management of server vitals without involving the operating system, CPU, or firmware. Data center managers use it to switch servers on or off or to track system temperature, power consumption, fans, or physical chassis intrusion.
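For illustration only: out-of-band checks like these are commonly scripted against a server's BMC with a utility such as ipmitool. The sketch below wraps a couple of typical ipmitool calls in Python; the BMC address, credentials, and choice of commands are placeholder assumptions, not tied to any particular vendor's hardware or to Vapor's tooling.

```python
# Illustrative only: scripting classic IPMI out-of-band management via ipmitool.
# The BMC address and credentials below are placeholders.
import subprocess

BMC_HOST = "10.0.0.50"   # placeholder BMC IP
BMC_USER = "admin"       # placeholder credentials
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the LAN interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Query chassis power state (the kind of per-server poll IPMI was built for).
    print(ipmi("chassis", "power", "status"))
    # Dump temperature readings from the BMC's sensor data repository.
    print(ipmi("sdr", "type", "Temperature"))
```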
The problem with IPMI, according to Crawford, is it was created in an era when IT managers had “personal relationships” with their servers. The technology was meant for high-touch server management, which in today’s world, where the data center is becoming increasingly automated – or software-defined – is inadequate.
“Fast-forward to today, and the primary way to do management is still IPMI,” he said. IPMI is difficult to set up, and its reliance on custom silicon in Baseboard Management Controllers creates headaches for admins who want to automate infrastructure management.
Others have brought up the shortcomings of working with traditional vendor BMCs as well. Facebook created and open sourced its own OpenBMC software, saying BMC software supplied by vendors was too closed, and that vendors were too slow to make changes to the software for Facebook’s purposes.
Vapor’s OpenDCRE runs on the Raspberry Pi 2, the $35 pocket-size computer. It can manage servers designed to Open Compute specs while bypassing the BMC silicon completely, and it manages traditional servers through their BMCs.
Its biggest advantage, however, is its API, which enables data center operators, system admins, DevOps teams, and vendors to write code that automates infrastructure management using the familiar JSON format.
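As a rough sketch of what that kind of automation can look like, the snippet below polls a hypothetical JSON-over-HTTP management service from Python. The base URL, port, endpoint paths, and board/device IDs are illustrative assumptions rather than OpenDCRE's documented interface.

```python
# A minimal sketch of automation against a JSON-over-HTTP management API in the
# spirit of OpenDCRE. Endpoint paths, port, and device IDs are assumptions.
import requests

BASE = "http://rack-pi.example.com:5000/opendcre/1.2"  # hypothetical endpoint

def scan_rack() -> dict:
    """Ask the service to enumerate the boards and devices it can see."""
    return requests.get(f"{BASE}/scan", timeout=5).json()

def read_temperature(board_id: str, device_id: str) -> dict:
    """Read a temperature sensor on a given board/device (IDs are placeholders)."""
    return requests.get(f"{BASE}/read/temperature/{board_id}/{device_id}",
                        timeout=5).json()

if __name__ == "__main__":
    print(scan_rack())                         # JSON inventory of boards/devices
    print(read_temperature("00000001", "0002"))
```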
Vapor hopes more vendors will extend OpenDCRE’s functionality. Crawford expects Croydon, UK-based Romonet, for example, to use it as part of its Operations Portal, data center performance tracking software that continuously assesses energy efficiency, availability, and capacity.
London-based Future Facilities will use it to create real-time heat maps in its Computational Fluid Dynamics modeling software, he said.
Any company that wants to streamline the physical data center and improve awareness of how their servers are working will benefit from OpenDCRE, Crawford said.
Vapor has built a commercial product on top of OpenDCRE called Vapor Core. It uses data collected by OpenDCRE to analyze application performance and the cost of that performance in terms of watts and dollars. Users can sign up for the beta version of Vapor Core starting today.
6:17p
Microsoft: Connected World Requires Holistic Cybersecurity Approach
This article originally appeared at The WHIR
The volume and complexity of cybersecurity risks associated with the Internet of Things and employee-owned devices has Microsoft considering a more holistic approach to security.
Microsoft CEO Satya Nadella spoke in Washington, D.C. on Tuesday about Microsoft’s contribution and approach to cybersecurity, announcing a new Cyber Defense Operations Center and a Microsoft Enterprise Cybersecurity Group.
In his keynote, Nadella acknowledged that unless customers can trust technology, they will not use it. A Microsoft blog post elaborated on how the company protects against, detects, and responds to security threats; Microsoft has spent $1 billion on security over the past year and doubled the number of its security executives. The company recently picked up Israeli security company Secure Islands.
As part of this initiative, Microsoft announced plans to open a new Cyber Defense Operations Center, staffed with dedicated teams 24×7 and with access to thousands of security professionals, data analysts, engineers, developers, program managers, and operations specialists. Detecting and responding to threats as quickly as possible is crucial, as the lost productivity related to cybersecurity threats is in the range of $3 trillion, Nadella said.
At a security conference in Toronto last month, Microsoft said that it had taken control of botnets as part of its security strategy.
Microsoft also announced the Microsoft Enterprise Cybersecurity Group (ECG), a dedicated group of worldwide security experts that offers security assessments, ongoing monitoring and threat detection, and incident response capabilities.
According to Nadella, Microsoft performs one billion Windows updates a month and inspects over 200 million Office 365 emails for malware, scanning attachments before they are delivered to customers’ inboxes. It handles 300 billion user authentications each month.
Recently, Microsoft announced that it would start delivering cloud services via data centers in the UK and Germany in order to comply with local data protection laws.
This first ran at http://www.thewhir.com/web-hosting-news/microsoft-connected-world-requires-holistic-cybersecurity-approach
6:27p
Netflix Open Sources Continuous Delivery Platform for Multi-Cloud Environments
This article originally appeared at The WHIR
To meet the challenges of continuous delivery across multiple clouds, Netflix began developing Spinnaker a year ago, and the company announced Monday that it has released the continuous delivery platform to GitHub. Spinnaker builds pipelines that take code from creation to deployment, and it is designed to be easily extended to deliver cluster management and deployment across different cloud deployment models, Netflix said.
The importance to Netflix of maintaining continuous delivery across multiple clouds was demonstrated in September, when the company was affected by an AWS outage.
Spinnaker replaces Asgard, the company’s AWS deployment automation tool, and is intended to give development teams operational resilience; easy configuration, maintenance, and extension; and programmatic configuration and execution via a consistent API, according to the company. It also provides a global view of the environments an application passes through and repeatable automated deployments, plus the benefits of Asgard, without requiring a migration.
Clusters can be deployed and managed simultaneously across AWS and Google Cloud Platform with Spinnaker, which also supports Cloud Foundry, and will add support for Azure in the near future, according to Netflix.
Spinnaker pipelines can be triggered manually, by the completion of a Jenkins job, by a cron expression, or by other pipelines. The platform comes with a number of stages, which can be run serially or in parallel, and it can be used to manage server groups, load balancers, and security groups.
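As a loose illustration of that model, the sketch below shows the general JSON shape of a pipeline definition, expressed as a Python dict: triggers that start the pipeline and stages chained by reference IDs. The field names follow Spinnaker's general conventions, but the application name, job names, and stage wiring here are invented for the example rather than taken from Netflix's documentation.

```python
# A loose sketch (as a Python dict) of the JSON shape a Spinnaker pipeline
# definition takes: triggers plus a graph of stages. Values here are invented.
import json

pipeline = {
    "application": "myapp",            # hypothetical application name
    "name": "deploy-to-prod",
    "triggers": [
        {"type": "cron", "cronExpression": "0 0 3 * * ?", "enabled": True},
        {"type": "jenkins", "master": "ci", "job": "myapp-build", "enabled": True},
    ],
    "stages": [
        # Stages reference each other by refId; stages sharing the same
        # prerequisites run in parallel, otherwise they run serially.
        {"refId": "1", "type": "bake", "requisiteStageRefIds": []},
        {"refId": "2", "type": "deploy", "requisiteStageRefIds": ["1"]},
        {"refId": "3", "type": "deploy", "requisiteStageRefIds": ["1"]},  # parallel with 2
    ],
}

print(json.dumps(pipeline, indent=2))
```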
Netflix is engaging the developer community through a Slack channel and StackOverflow, and the GitHub package includes installation instructions and pre-existing images from Kenzan and Google.
This first ran at http://www.thewhir.com/web-hosting-news/netflix-open-sources-continuous-delivery-platform-for-multi-cloud-environments
8:15p
Telecity Data Center Outage in London Dings Cloud, Internet Exchange
Two consecutive power outages at one of TelecityGroup’s data centers in London on Tuesday afternoon local time disrupted operations for many customers, including the London Internet Exchange and AWS Direct Connect, the service that connects companies to Amazon’s cloud through private network links.
The provider’s Sovereign House data center in the London Docklands lost utility power and appears to have failed to switch to backup generators around 2pm. Power was restored but went down again, according to an incident report by EX Networks, one of the customers in the data center.
London-based Telecity has not yet said what the root cause of the data center outage was.
The facility is one of the data centers housing infrastructure of the London Internet Exchange, or LINX. Telecity told EXN it would have to shut down power to two suites that house LINX to fix the electrical infrastructure.
“Telecity are also installing six new power feeds from the non-affected train into the LINX suite in order to retain some service,” an EXN update read. “EXN will shut down affected peers during this work, however some minor disruption to connectivity services should be expected.”
AWS Direct Connect customers that use the service to connect their infrastructure at the Sovereign House data center to Amazon’s cloud data center in Ireland, which houses the cloud’s EU-West-1 region, experienced packet loss as a result of the interruption.
“We can confirm intermittent packet loss between the Direct Connect location at TelecityGroup, London Docklands, and the EU-WEST-1 Region,” an AWS status update on Tuesday read. “An external facility providing Direct Connect connectivity to the EU-WEST-1 Region has experienced power loss.”
Amazon said power had been restored about two hours later.
Sovereign House is one of eight data centers Equinix and Telecity have agreed to sell as a condition of regulatory approval for their planned merger. Redwood City, California-based Equinix announced in May that it would acquire Telecity for $3.6 billion.
9:00p
Stop Buying Magic: It’s a Data Protection Strategy that Doesn’t Work
Mike Baker is the Founder and Principal at Mosaic451.
Magic is awesome at carnivals, and it most certainly got young wizard Harry Potter out of a few jams, but when magic is deployed in the hope that it will suddenly make your firm more secure, it simply does not work.
Magic is the intoxicating lure of a quick technological fix, the blinkered belief that technology alone will keep the hackers at bay. Data centers are under constant pressure to safeguard assets; however, too many firms focus on security only for the purpose of compliance. For example, the energy industry has secrets to protect and faces huge regulatory burdens from the North American Electric Reliability Corporation (NERC), which maintains a set of cybersecurity standards for Critical Infrastructure Protection (CIP).
Cybersecurity has vaulted to the forefront of concerns for many businesses, yet fewer than one-third of organizations, whether in energy, healthcare, finance, or government, say they are prepared to meet the growing threat of an attack. Using the energy industry as an example, “We are seeing an industry that is actively moving forward with the deployment of comprehensive asset protection plans following several high-profile cyber and physical threat events,” according to an industry report from consulting firm Black & Veatch titled 2014 Strategic Directions: U.S. Electric Industry. However, only 32 percent of electric utilities surveyed for the report had integrated security systems with the “proper segmentation, monitoring and redundancies” needed for cyberthreat protection, while another 48 percent said they did not.
In 2013, a hacker compromised a U.S. Army database that held sensitive information about vulnerabilities in U.S. dams. In 2014, an internal investigation reported that Nuclear Regulatory Commission (NRC) computers had been successfully hacked twice by foreigners and once more by an unidentifiable individual within the previous three years.
The question is not “if” a cyberattack will happen but “when”. An even more important question is: Are we using the right approach to protect assets?
The Technology Disconnect
Compliance is a necessity and critically important, but here’s the big disconnect. Organizations should be devoting more resources to security for availability and for confidentiality. Do most corporations even want to be in the security business? No, but they must be because of the assets they hold.
Organizations fall short and expose themselves to cyberattacks when they over-rely on “magic and widgets”. Most companies, if they have the funds, will buy the widget because something must be put in place to comply with the latest regulations. Spending millions on the latest technology might seem useful, but on its own it is effectively useless.
Many organizations check boxes on the compliance checklist rather than look at their operations as a critical network and seek ways to defend it. They need to stop checking boxes. It is not that organizations are lazy; they simply rely on magic. There is a very human desire to buy something tangible, and technology alone often attracts people who want to avoid responsibility. The magical widget means they don’t have to learn anything. It’s the short-term, easy fix.
For any organization serious about protecting assets, the brightest minds must be deployed, and the toolset utilized is secondary to the core intellectual capital that must be developed. This is where Managed Services Providers (MSPs) come into play.
Is Your MSP Just “Mailing It In?”
Buying an “intelligent human network” to keep assets secure does not mean doing everything remotely. It’s not a mail-in service. The best MSPs are those with a hybrid approach of remote and onsite engineers. Without people onsite, a provider does not understand how information moves in times of crisis. Nothing can replace face-to-face interaction.
A traditional Security Operations Center (SOC) is responsible only for monitoring. A hybrid MSP that employs both technology and an intelligent human network of on-site personnel can monitor and act as a full operations team.
If a company expects to pass the security test, the most effective approach is to form a hybrid MSP team of the most experienced professionals available and empower them with best-in-class technology. Technology, if deployed correctly, is a force multiplier for intelligent human beings.
Threats from hackers and cyberterrorists (both perceived and real), legislative mandates with the promise of fines for non-compliance, and the opportunity to upgrade network infrastructure are all driving compliance in the energy industry. With increasingly sophisticated attacks dedicated to exploiting and compromising vulnerabilities in SCADA (supervisory control and data acquisition) infrastructure, it is more critical than ever to secure and protect networks.
Many industries exist in an environment where threats are both real and virtual; physical damage can be triggered by natural forces or nefarious intent. The best approach is preparedness, but there is no single solution or magical Patronus Charm. It takes a complex and systematic approach, one that addresses the physical elements of cybersecurity and the cyber elements of physical asset security, and that leaves organizations better equipped and educated to manage the full spectrum of attacks they will undoubtedly face.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.