Data Center Knowledge | News and analysis for the data center industry
Friday, April 14th, 2017
12:00p |
How Database Monitoring Can Improve Engineering and Development Team Productivity

Editor’s Note: In this three-part series of articles, we look at various approaches to database monitoring that can improve app performance and availability, online customer experience, and engineering and development team productivity. In this final article, we address how proactive monitoring can impact engineering and development team productivity.
Kelsey Uebelhor is Director of Product Marketing at VividCortex.
Team performance often comes down to good communication and the proper use of specialization. For DBAs, engineers, and developers, effective communication and access to information keeps everyone on the same page, from initial issue identification to after-action knowledge sharing.
Specialization within a team is often necessary, as each member offers complementary skillsets that help smooth hand-offs as work moves through the process. But specialization can also create silos. In fact, the DevOps approach was developed to improve this process, to knock down silos and create better collaboration. You find this type of collaboration at work whenever a developer pushes code loaded with new database queries and the DBA load tests and troubleshoots the new release. Each player is bringing their skills and knowledge to the task of shipping faster, more stable applications.
However, in virtually any team context, from professional sports to DevOps teams, wherever you find segmented and highly specialized resources combined with poor communication and collaboration, you are likely to find subpar productivity. Given the unique role databases and DBAs play in your software development process, you need to pay close attention to the degree of specialization within your team, the level of communication, and what role metrics can play in identifying and overcoming barriers to better productivity. As a cohesive team, everyone is responsible for better performance, and database monitoring can bring the team together based on a better understanding of shared responsibilities and results.
Specialization
In your typical software development environment, developers are focused on generating new features as quickly as possible, while DBAs are focused on service availability and meeting the system performance demands of the business. Their tasks are segmented and assigned based on the specific knowledge and skills of each team member. DBAs, for example, have experience that others on the team may not have: designing, building, and maintaining scalable database servers. The downside to this approach is that by encouraging specialization, you create a dependency on that specialist which, as you scale, can create bottlenecks and drive down team productivity. This situation is especially acute for DBAs, as they are involved in multiple handoffs, interactions, and information exchanges between different teams. Their duties and knowledge are specialized, so it’s tempting to centralize the burden on them instead of sharing it with developers.
A better approach is to create a more collaborative relationship between developers and DBAs that encourages sharing responsibilities, rather than having stark demarcations between them. To address this, you could consider implementing a DevOps approach to software development that encourages collaboration or hiring full-stack developers with broader skills. Regardless of whether you go down either of these paths, an essential first step is to start knowledge sharing between the DBAs and developers that will lead to a common understanding of each other’s roles in the process. This helps pave the way for greater sharing of tasks and responsibilities in the future. Performance monitoring is a great place to start when looking at opportunities for overlapping responsibilities and knowledge sharing, as all parties are ultimately responsible for system performance.
Communication
In the fast-paced, dynamic, and highly interdependent world of software development, strong team communication is essential, as a single misunderstanding can have potentially catastrophic consequences. Clear and unambiguous communication between developers and DBAs helps maintain the tight coordination necessary to keep teams and systems running at peak efficiency. Constant vigilance is also required: with continual code releases in production, issues must be discovered and resolved quickly to prevent slowdowns or major outages.
For optimal team productivity, there are two major elements of communication you need to think about: timeliness and quality. And just as with specialization, you need to strike the proper balance. In this case, you need the appropriate frequency so that the right people are constantly aware but not overwhelmed, and sufficient quality such that the information shared is informative and actionable. Without this balance, teams create a lot of noise, making more work for themselves as they comb through endless messages and alerts that don’t provide any meaningful information.
Improve Timeliness with Alerts
System alerts are an important feature of any application or database monitoring solution, but sometimes you can have too much of a good thing. You need to be able to fine-tune your alerts using event-based triggers as well as thresholds to control the frequency. Also, developers and DBAs should be able to set and monitor their own alerts to help share the monitoring burden and improve time-to-response when issues arise. Monitoring should not be just a DBA responsibility.
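To make this concrete, here is a minimal sketch of what a threshold-plus-duration alert rule might look like in code. The AlertRule structure and its field names are hypothetical and not tied to any particular monitoring product; the point is that requiring a breach to be sustained over a window suppresses one-off spikes and cuts alert noise.

from dataclasses import dataclass

@dataclass
class AlertRule:
    """A hypothetical alert rule combining an event trigger with a threshold."""
    metric: str          # metric to watch, e.g. p99 query latency
    threshold: float     # value that must be exceeded
    sustained_secs: int  # how long the breach must last before firing
    owner: str           # the DBA or developer responsible for this alert

def should_fire(rule: AlertRule, samples: list) -> bool:
    """Fire only if every sample in the sustained window breaches the
    threshold; samples are (unix_timestamp, value) pairs in time order."""
    if not samples:
        return False
    window_start = samples[-1][0] - rule.sustained_secs
    window = [value for ts, value in samples if ts >= window_start]
    return all(value > rule.threshold for value in window)

# A developer-owned rule: fire if p99 query latency stays above 500 ms for 5 minutes.
rule = AlertRule(metric="query_latency_p99_ms", threshold=500.0,
                 sustained_secs=300, owner="dev-team")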
Better Quality with Metrics
Improving the quality of communication means more depth and proper context that help all parties better understand issues as they arise. You need to be able to view trends and patterns to spot anomalies based on time-series data. By sharing screenshots of charts, comparing multiple graphs to find correlations, and drilling down to investigate and isolate issues, DBAs and developers can have more fruitful collaboration based on metrics-driven conversations.
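As an illustration, one simple way to surface those anomalies is a rolling-baseline check over the time series. This is a minimal sketch under assumed parameters (window size, cutoff), not a substitute for a real monitoring product.

import statistics

def flag_anomalies(series, window=12, z_cutoff=3.0):
    """Return indices whose value sits more than z_cutoff standard
    deviations from the mean of the preceding window of samples."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(series[i] - mean) > z_cutoff * stdev:
            anomalies.append(i)
    return anomalies

# Example: steady query throughput with one sudden spike at the end.
throughput = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 99, 100, 250]
print(flag_anomalies(throughput))  # -> [12]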
Conclusion
Teams work better with frequent, high-quality communication and the right mix of skills working in concert. To find productivity improvements, you should take a close look at the skill segmentation on your DBA, engineering, and development teams. See if specialization may be creating bottlenecks and whether team communication is metrics-driven and insightful, or noisy and counterproductive. By tackling these two issues you can achieve tighter collaboration between developers and DBAs, and create a more cohesive and productive team that has a better understanding of shared responsibilities and results.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 7:43p |
Salesforce Achieves Green Cloud Milestone

Brought to You by The WHIR
In a blog post Thursday, Salesforce announced that it has achieved net-zero greenhouse gas emissions and carbon-neutral cloud delivery for all customers, citing the universal impact of climate change.
The company is operating with net-zero emissions by purchasing high-efficiency equipment and making investments like virtual power purchase agreements; Salesforce says it signed two such agreements in 2016 for a combined 64 megawatts of wind power. The company is also counting the carbon and energy saved by its multi-tenant architecture compared to on-premises installations, which it says is 50 times more efficient.
These measures create an internal price on greenhouse gas and a financial incentive to reduce emissions, according to Salesforce Director of Sustainability Patrick Flynn.
“Salesforce knows that businesses can be powerful platforms for change, and we are committed to doing our part,” Flynn writes in the blog post. “We are proud to advocate for access to renewable energy and actively collaborate with like-minded peers through industry groups such as the Renewable Energy Buyers’ Alliance and the Future of Internet Power.”
Salesforce is using independent third parties to audit and verify the quantification of carbon offset projects, and the blog post outlines projects in Honduras and India it is supporting as part of its emission-reduction efforts. The company also intends to transition to 100 percent renewable energy use.
Many cloud providers and data center companies, including Microsoft, Amazon, Apple, Google, and Equinix have stated they will maintain their commitments to renewable energy goals despite changes in U.S. environmental regulations.
This post originally appeared at The WHIR. | 7:50p |
Outer-Space Hacking a Top Concern for NASA’s Cybersecurity Chief

NASA scientists glean valuable data about powerful space explosions and the energy of black holes from their Swift and Fermi satellites. The projects were supposed to last a few years. Instead, they’ve survived for more than a decade.
That’s great for researchers but a challenge for Jeanette Hanna-Ruiz because of the projects’ aging computer operating systems. As the space agency’s chief information security officer, she has to secure the data sent to and from planet Earth against cyberattacks.
“It’s a matter of time before someone hacks into something in space,” Hanna-Ruiz, 44, said in an interview at her office in Washington. “We see ourselves as a very attractive target.”
Cybersecurity at the National Aeronautics and Space Administration extends from maintaining email systems at the agency’s Washington headquarters to guarding U.S. networks in Russia, where Americans serve on crews working with the International Space Station. The agency also has to protect huge amounts of in-house scientific data and the control systems at its 20 research centers, laboratories and other facilities in the U.S.
Commandeering Controls
Among Hanna-Ruiz’s concerns is hackers breaching communications between NASA and one of its 65 spacecraft transmitting research data.
“There could be a company that wants it, there could be a nation-state that wants it,” Hanna-Ruiz said. The challenge, she said, is, “How do I harden these streams and communications flows?”
Her nightmare is a direct cyberattack on a satellite, perhaps even allowing adversaries to commandeer the controls.
Hanna-Ruiz, a lawyer, started her stint at NASA in August. She previously managed Microsoft Corp.’s consulting services and also served in cybersecurity advisory roles at the Department of Homeland Security and the White House during the Obama administration.
Her goal in the next 12 to 18 months, she said, is to “get control of our internal network” and work with the agency’s space missions on cybersecurity.
Last year, NASA reported 1,484 “cyber incidents,” including hundreds of attacks executed from websites or web-based applications, as well as the loss or theft of computing devices, according to the Office of Management and Budget’s annual report to Congress in March on federal cyber performance.
NASA aims to show “we’re leading the way in security — that’s the place we want to get to,” Hanna-Ruiz said.
Pre-Launch Testing
Building secure rockets, satellites and other instruments before they’re launched is key. Engineers submit equipment to tests to see whether it can withstand space, from subjecting it to severe vibration to temperature checks in deep-freeze chambers.
“We have a lot of people who are focused on getting this particular thing to space,” Hanna-Ruiz said. “They may not be necessarily thinking of security. The truth is I don’t know if I want them to be thinking about security. I want them to be excited and passionate about going to space.”
So Hanna-Ruiz’s cybersecurity teams step in to look for vulnerabilities in coding, firmware and other areas. The agency is also working to “harden” old industrial-control systems, such as those used to launch spacecraft, according to Hanna-Ruiz.
“At NASA that’s a big conversation for us now: What is the most valuable data and how do we secure that?” she said.
Working in Russia
NASA hasn’t sent people beyond low-Earth orbit since the final moon missions more than 40 years ago. But American astronauts travel to the International Space Station on Russian Soyuz capsules that take off from Kazakhstan. U.S. teams monitor NASA’s networks and data at the agency’s multiple offices across Russia.
“We’re always looking at those networks, we’re always looking at those systems to figure out if they’re vulnerable,” Hanna-Ruiz said, including data coming from the space station. “The great thing is NASA has a good history of working with Russia on space exploration and partnering with them to get our astronauts up to space, and that’s really worked out well for us.”
So far, that collaboration has endured amid tensions after U.S. intelligence agencies concluded Russia hacked into last year’s U.S. presidential campaign.
As Hanna-Ruiz works to bolster NASA’s cybersecurity, she says the agency’s center in Silicon Valley provides opportunities to connect with technologists attuned to the task.
“It would be great to have more people who are like aerospace engineers who have a background in cybersecurity, or who are Earth-science gurus and also have a cybersecurity background,” she said. “That niche of person is difficult to find.” | 9:26p |
Tips for Disinfecting Your Data Center

Cyberattacks have pretty much become a part of everyday life. Security firm ForeScout’s State of Cyber Defense Maturity Report found that more than 96 percent of organizations experienced a major IT security breach in the past year. One in six organizations had five or more significant security incidents in the past 12 months, and almost 40 percent had two or more incidents.
“The media reports of stolen information or compromised networks are almost a daily occurrence,” wrote Ray Boisvert, president of I-Sec Integrated Strategies. “The stories are increasingly alarming and the trend line is troublesome.”
How you respond, though, is the key factor. Here are several tips on how to disinfect your data center and beef it up against further attacks.
Anti-Virus and Firewalls Are Not Enough
A report by network security firm Netskope found that the prevailing approach to securing the cloud revolved around the use of firewalls as an effective perimeter. The study found that 90 percent of cloud application usage in the enterprise had been blocked by network perimeter devices, yet someone in IT granted exceptions so the applications could continue to run. So much for firewalls securing the enterprise.
Similarly, anti-virus software can no longer keep up with the sheer volume of daily viruses and their variants that are created in cyberspace. You might recall that cybersecurity firm Radware recently announced the discovery of a new Permanent Denial-of-Service (PDoS) botnet named BrickerBot, designed to render the victim’s hardware useless. Also known as “phlashing,” a PDoS attack can damage a system so badly that it requires replacement or reinstallation of hardware, and it is becoming increasingly popular, according to Ron Winward, security evangelist for the company.
Do you think for a moment that Sony, Target or any of the big financial institutions that have suffered breaches didn’t have firewalls and AV in place?
That said, there are plenty of useful tools out there, such as Malwarebytes, that should be used to detect and cleanse the data center of any confirmed or suspected infections.
Implement Whitelisting, Add Intrusion Detection
Whitelisting is a good way to strengthen defenses and isolate rogue programs that have successfully infiltrated the data center.
“Even if malware already exists on a workstation, it will be blocked when it attempts to call home,” wrote Stu Sjouwerman, CEO of security firm KnowBe4 in a blog.
Also known as application control, whitelisting consists of a short list of the applications and processes that are authorized to run. This strategy limits use with a “deny-by-default” approach so that only approved files or applications can be installed. According to McAfee, dynamic application whitelisting strengthens security defenses and helps to prevent malicious software and other unapproved programs from running.
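A deny-by-default check reduces to a few lines of logic. The sketch below is illustrative only: the digest in APPROVED_HASHES is a placeholder, and a real application-control product would manage the allowlist centrally and enforce it at the operating-system level rather than in a script.

import hashlib

# Hypothetical allowlist of SHA-256 digests of approved executables.
APPROVED_HASHES = {
    "3f2a9c...": "/usr/local/bin/backup-agent",  # placeholder digest
}

def may_execute(path):
    """Deny by default: a binary may run only if its digest is on the allowlist."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES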
Modern networking tools should also be considered part of the security arsenal, as they can highlight abnormal patterns when configured correctly. For example, you can set up intrusion detection to trigger when any host performs an upload larger than 20 MB more than, say, eight times in a single day. That filters out normal user behavior and helps contain existing threats.
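That trigger is simple enough to express directly. Here is a minimal sketch, assuming one day of upload events arrives as (host, bytes_uploaded) pairs; the event format is an assumption made for illustration.

from collections import Counter

LIMIT_BYTES = 20 * 1024 * 1024    # a "large" upload: more than 20 MB
MAX_LARGE_UPLOADS_PER_DAY = 8     # fire the alert past this count

def hosts_to_flag(events):
    """Given one day of (host, bytes_uploaded) events, return the hosts
    whose count of large uploads exceeds the daily limit."""
    large = Counter(host for host, size in events if size > LIMIT_BYTES)
    return [host for host, count in large.items()
            if count > MAX_LARGE_UPLOADS_PER_DAY]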
Security Analytics
Boisvert advocates real-time analytics in tandem with a methodology that focuses on likely attack vectors as the best way to augment current security practices. The web, he wrote, should be regarded as a hostile environment filled with predators. As the bad guys are already inside, data center managers should be trying to figure out how to close the timeline to discovery. “We need to use software to do the heavy lifting to combat cyber-threats and cyber-terrorists. SAS, for example, has been working on behavioral analytics to better detect internal security threats.”
Boost the Human Perimeter
Perhaps the most important thing to realize is that technology alone will never solve the problem. Perfect email filters will cause the bad guys to use the phone. Perfect phone filters lead them to target peoples’ personal social media accounts. Close one door and they will find another—it’s not unlike those movies where the thief always gets the loot or the painting, no matter how many layers of security are employed. But there is something you can do about it.
“Training and education has to be part of the solution to make people aware of these attacks and how they can detect, stop and report them,” wrote Sjouwerman.
End-user Internet Security Awareness Training is all about teaching users not to do silly things like clicking on suspect URLs in emails or opening attachments that let in the bad guys. Sjouwerman recommended putting all staff through such training. “Even when an organization has published policies and implemented the many security procedures and technologies, it still needs to train its employees. The perimeter is dead; individual employees are now the perimeter.”