Follow the Evidence!

Digital forensics experts prone to bias, study shows

https://www.theguardian.com/science/2021/may/31/digital-forensics-experts-prone-to-bias-study-shows

A recent article in The Guardian, a British newspaper, highlighted bias in digital forensics.

“A study found that experts tended to find more or less evidence on a suspect’s computer hard drive to implicate or exonerate them depending on the contextual information about the investigation that they were given.”

This, in itself, should not be news to anyone who conducts investigations, but I feel it is important to highlight some ways to combat it. As mentioned in our previous blog post, bias is unavoidable, but recognising it and remaining aware of it can help investigators address and overcome some of the problems it causes.

The specific bias described in the article is known as confirmation bias (https://en.wikipedia.org/wiki/Confirmation_bias). Put simply, it is the tendency to favour results that support an existing hypothesis.

The sheer volume of data and the complexity of modern devices mean that it is often impossible to conduct an examination without any background information. Accurate background information is needed to reduce the scope of the analysis and to create an effective investigation plan.

So context can bias an examination, yet an examination cannot realistically be scoped without it. Spoiler alert: they’re both right.

Confirmation bias across the public and private sectors

In the public sector, an expert’s duty is to a neutral party (the court) regardless of who has instructed them (defence or prosecution). An expert performing their role professionally and ethically gains nothing by favouring either side. If anything, it is in their interest to remain neutral, as this preserves their credibility and their reputation as an ethical professional who does not yield to external pressure.

This is rarely the case in the private sector, where it is often in the interest of internal investigation teams to ‘prove guilt’, whether due to performance metrics, resource justification or cost savings. There is also a perception of reduced accountability: more often than not, the final result of an investigation is presented to another internal department (e.g. Human Resources) and is therefore less likely to be scrutinised by an opposing party with their own experts.

The solution?

A solution to this will never be simple. Processes can be engineered, but ultimately it comes down to the professionalism and ethics of the digital forensics expert themselves.

Practitioners should always be held accountable and must be able to present all findings factually and without prejudice.

This sounds simple enough, but it is often a moral minefield, especially for those new to the field. A junior analyst in the private sector may feel pressured into delivering particular results within a set timeframe.

This brings me on to training. Recently there appear to be a high number of digital forensics practitioners who lack some of the fundamental skills required of an analyst. In some cases this includes the ability to critically evaluate findings, because too much trust is placed in forensic tooling.

Incident Response =/= Digital Forensics

There are far too many training providers monetising the idea that Digital Forensics and Incident Response are the same thing when they are, in fact, not.

You might be asking yourself, what’s the difference?

Incident response is generally broken down into six stages:

  1. Prepare
  2. Identify
  3. Contain
  4. Eradicate
  5. Recover
  6. Lessons learnt

An Incident Response team focuses the majority of its time on the initial response to an incident, often at the expense of forensic evidence preservation. Changes must be made to the environment for an incident to be contained and for subsequent eradication/remediation to take place. Even purpose-built Endpoint Detection and Response (EDR) tools must alter data on a machine in order to perform their core functions.

Digital forensics, on the other hand, is all about preserving as much of the evidence as possible at a specific point in time and then analysing the data in a manner that allows it to be used to support any legal proceedings.
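To make that preservation point concrete, below is a minimal Python sketch of the kind of integrity check it implies: hash the acquired image at acquisition time, record the value, and re-hash it before analysis to demonstrate that nothing has changed. The file name is a hypothetical example, and this is an illustration of the principle rather than a production acquisition workflow.

```python
# Illustrative sketch: recording the integrity of an acquired image.
# 'suspect_drive.dd' is a hypothetical example file name.
import hashlib
from datetime import datetime, timezone

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Hash the file in chunks so large disk images do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    image = "suspect_drive.dd"  # hypothetical acquired image
    # Record the hash with a timestamp. Re-hashing the image at analysis
    # time should produce the same value, demonstrating that the evidence
    # has not changed since acquisition.
    print(f"{datetime.now(timezone.utc).isoformat()}  SHA-256  {image}  {sha256_of(image)}")
```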

The ambiguity between Digital Forensics and Incident Response is further compounded by an over-reliance on automated tools. Many practitioners are becoming dependent on push-button forensics, which in turn can amplify any preconceived bias.

Automated tools, whilst great at producing results, often lack the ability to understand context. If tool output is relied upon without an analyst digging deeper into what was reported, they run the risk of forming assumptions and arriving at an incorrect conclusion. An in-depth knowledge of how the operating system creates each artefact, and of how each tool extracts and presents that information, helps examiners recognise context.

A real-life example of all of the above combining in a perfect storm is the Casey Anthony case (https://www.nytimes.com/2011/07/19/us/19casey.html). It is a case study that, in my opinion, all practitioners should review.

To summarise the case: the prosecution’s computer forensics expert relied too heavily on the automated results produced by the forensic software and failed to recognise a situation in which the software had produced suspicious results.

The tool used for the web history analysis suggested that the defendant had visited a website containing the term ‘chloroform’ 84 times. Further research into the context would have highlighted that:

a) the result was inaccurate due to the method used by the tool to parse data

b) the context around the keyword was innocent (the result linked to a science-related website documenting the historic use of chloroform and was not an indicator of premeditated murder).

This was further compounded by a failure in the examination process: the prosecution examiner did not validate the accuracy of the tools used or of their results, which might have revealed the discrepancies.
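As an illustration of what that validation could look like, here is a hedged Python sketch that cross-checks a reported ‘visit count’ against the raw artefact. It assumes a modern Firefox places.sqlite database, where moz_places stores a summary visit_count and moz_historyvisits stores one row per actual visit; the Casey Anthony examination involved an older Firefox history format, so this is an analogy rather than a reconstruction of that case, and the database path and search term are examples.

```python
# Illustrative sketch: comparing a tool-style summary count with the
# underlying visit records in a Firefox places.sqlite database.
import sqlite3

def visits_for_term(db_path: str, term: str):
    """Return (url, summary visit_count, counted visit rows) for matching URLs."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            """
            SELECT p.url,
                   p.visit_count,                 -- summary field a tool might report
                   COUNT(h.id) AS recorded_visits -- one row per recorded visit
            FROM moz_places AS p
            LEFT JOIN moz_historyvisits AS h ON h.place_id = p.id
            WHERE p.url LIKE ?
            GROUP BY p.id
            """,
            (f"%{term}%",),
        ).fetchall()
    finally:
        con.close()

if __name__ == "__main__":
    # 'places.sqlite' stands in for a working copy of the artefact,
    # never the original evidence.
    for url, claimed, recorded in visits_for_term("places.sqlite", "chloroform"):
        note = "" if claimed == recorded else "  <-- discrepancy worth digging into"
        print(f"{url}: summary count={claimed}, visit rows={recorded}{note}")
```

If the summary field and the row count disagree, that alone does not prove anything; it simply tells the examiner that the tool’s headline number needs to be explained before it is put in front of a court.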

A further concern arose when the tool’s developer alerted the prosecutors to an issue with the forensic software. This revelation was disregarded, possibly as a result of confirmation bias on the part of those who wanted the software to confirm their hypothesis.

Without a deep knowledge and understanding of how a tool arrives at its conclusions, a practitioner cannot confidently validate its results, nor explain them if challenged in court, which brings their credibility as a subject matter expert into question.

Ultimately, the right training will help alleviate some of the bias issues highlighted in the article. At the end of the day, one of the primary ‘value adds’ of a professional digital forensics expert is the ability to draw sound conclusions from repeatable findings and observations. This goes hand in hand with an analytical mindset, but it can only be achieved if the examiner is not clouded by unrecognised bias.