When we started this series looking at a day in the life of different careers in cybersecurity, the webinar on pen testing was one of our most popular. Whilst the concept of pen testing appeals to many and the opportunities seem plentiful, one area that has not been covered is writing the post-test report, and the process of getting it right.
Andy Gill is a pen tester for Pen Test Partners. In a recent interview on the Human Factor podcast with Jenny Radcliffe, Gill said that there is a line between “waffling” and getting the core findings across. He said that too often, pen test reports overlook the business aspects in favor of the technical explanation.
He added: “A few times, when I started out, I wrote what I thought was a pretty good report but [mentor Paul Ritchie] rejected it, and told me how to fix it and make it better for the customer.”
This fits with Gill’s belief in helping people learn more, which started with him writing down instructions for a friend and led to the creation of a book and a blog.
Infosecurity caught up with Gill to talk about the art of writing the perfect pen test report. He explained that a report will commonly include “the technical stuff, what is wrong and how to fix it to protect the business, and also the technical explanations on what you’ve found such as XSS, where you need to scale up and reapply budget.”
In terms of writing, Gill said that he will typically write the technical part first, and then the executive summary. If the app had particular problems, a report “will be about 100 pages long and for a secure app maybe 10 pages, maybe tops 12.” He explained that a typical report will focus on what the issue is, how to reproduce it and how to fix it.
He said his mentor Paul Ritchie taught him that a report should be split into two sections: an executive summary, written so that anyone can pick it up and understand what is going on regardless of technical knowledge; and a technical section.
Pen test reports have long been criticized for poor writing. Gill said bad reports only contain “the core of what the client needs to do, and not tell the client what the issue is,” failing to explain what flaws were found, how to “reproduce” them and how to properly remediate them.
“At the end of the day, the report is what clients pay for, and the tangible output is the report,” he said. “I see a lot of awful reports, not just from companies I work for, but often when you go in you will be given the previous company's report and some are actually laughable. I was on site with a client and they showed us the previous company’s report, and they had basically run an off-the-shelf tool and screenshotted the tool, and there was no talk on how to reproduce the issue, what it was and how to mitigate it, and what the risks were to the business. The client could have easily done it themselves.”
Gill explained that a typical pen test engagement will involve five days on site: four days testing and one day reporting, while some will involve five days of testing and five days of reporting. “Most jobs will last five days, and the report will be processed and be checked for technical inaccuracies and hopefully we'll get it back to the client the same day or the following week.”
What about major flaws found during a pen test: do you wait until the report is out to inform the client? Gill said that if he found a critical vulnerability he would report it during the test, and add it to the report later on.
As Gill is keen to pass the baton on to the next generation of practitioners, Infosecurity asked him what he would recommend any new and aspiring report writers do. He started by recommending they “split the report” and start with the executive summary, written in a language in which “you can explain the issue you’ve found to your granny or your mum.”
Gill was also keen to stress that in the technical section, a lot of people miss the impact to the client: if there is XSS, for example, you need to highlight whether it affects the confidentiality of data within the application and potentially the integrity of the data stored. “If it is a destructive vulnerability and there is critical stuff in the app, if you can highlight the risks and talk about the impact it is more beneficial to the client as the technical lead may take this to C-level execs and highlight the risks if they don’t have funding to fix it.”
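To make the XSS point concrete, here is a minimal, hypothetical sketch (not taken from the interview) of the kind of reflected XSS flaw a report might document, alongside its fix. The function names and greeting page are invented for illustration; the point is that the report should show both how the payload lands and how escaping neutralizes it:

```python
import html

def render_greeting_vulnerable(name):
    # User input is inserted into the page unescaped, so an
    # attacker-controlled 'name' can inject a script tag.
    return f"<p>Hello, {name}!</p>"

def render_greeting_fixed(name):
    # Escaping user input before rendering turns the payload
    # into inert text instead of executable markup.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert(1)</script>"
print(render_greeting_vulnerable(payload))  # script tag survives intact
print(render_greeting_fixed(payload))       # tag is escaped and harmless
```

A reproduction step like this, paired with a plain-language note on what data the script could read or alter, is what turns a tool screenshot into a finding a client can act on.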
Gill concluded by saying that pen tests are carried out for many reasons: for compliance, for funding, “and other times a pen test is done as someone wants to justify that an app is secure and that falls back on compliance” while sometimes, people just want to be tested by a second set of eyes.
Photos courtesy of Chris Ratcliff at Steelcon