Humans rely heavily on vision to understand the world around us, so it should be no surprise that science, an extension of that understanding, seeks to describe its findings in visual form. In neuroscience, this has given rise to the field of neuroimaging, which includes techniques such as electroencephalography (EEG), magnetoencephalography (MEG), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI).
These techniques all measure some proxy for neuronal activity. In the case of fMRI, the measured activity is a blood-oxygen-level-dependent (BOLD) signal, which reflects the ratio of oxygenated to deoxygenated blood in the brain. When brain cells are active they consume oxygen, and blood flow to the active area increases in response, changing the local blood oxygen level. The BOLD signal is used as a stand-in for neuronal activity, though its time-course is much slower than that of the underlying neural changes.
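Just how sluggish the BOLD response is can be seen from the canonical 'double-gamma' model of the haemodynamic response, a minimal sketch of which follows. The parameter values here are conventional illustrative defaults, not figures from this article:

```python
import math

def gamma_pdf(t: float, shape: float) -> float:
    """Gamma distribution density with unit scale, used as a building block."""
    if t <= 0:
        return 0.0
    return t ** (shape - 1) * math.exp(-t) / math.gamma(shape)

def hrf(t: float) -> float:
    """Canonical double-gamma haemodynamic response to a brief neural event:
    a positive lobe peaking around 5 s, minus a small, later undershoot."""
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

# Neuronal activity lasts milliseconds; find when the BOLD response peaks.
times = [i / 10 for i in range(0, 300)]  # 0 to 30 s in 0.1 s steps
peak_time = max(times, key=hrf)
print(f"BOLD response peaks roughly {peak_time:.1f} s after the neural event")
```

The point of the sketch: a neural event lasting milliseconds produces a vascular response that does not peak until several seconds later, which is why fMRI cannot track fast neural dynamics.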
fMRI is popular among scientists because it has the potential to reveal the roles of specific networks of brain regions. It has also taken a front seat in media coverage of neuroscientific research, in no small part because the final products of fMRI analysis are attractive, seemingly convincing images of brain activity.
This is troubling for several reasons. fMRI is popularly regarded as a direct measure of what is going on in the brain, but this is not the case. The BOLD signal is much slower than the neuronal activity that causes it, and can only measure relative changes. Most fMRI experiments must therefore compare two BOLD signals: typically one recorded during an experimental task (which can cover a broad range of cognitive and behavioural tasks and states) and one recorded at rest, used as a baseline. Because of this comparison, ‘activation’ or ‘deactivation’ is only meaningful relative to the baseline measure of brain activity. Furthermore, a region that becomes more ‘active’ during a task might be indirectly exciting or inhibiting other brain regions.
The statistics used to analyze fMRI results are another obstacle to interpreting findings. Small changes in data processing or statistical analysis can bias results, a problem exacerbated by the use of multiple analysis packages across the field, which often use different terminology and approaches for similar analyses. Neglecting steps such as correcting for multiple comparisons can lead to false positives, as intentionally demonstrated in an article in the Journal of Serendipitous and Unexpected Results by Craig Bennett and colleagues, who used fMRI to demonstrate BOLD ‘activation’ in the central nervous system of a dead fish.
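The dead-fish result arises because a whole-brain scan tests tens of thousands of voxels at once, so some will cross a significance threshold by chance alone. A toy simulation makes the point; the voxel count and threshold below are illustrative assumptions, not figures from the Bennett study:

```python
import random

random.seed(0)
n_voxels = 50_000  # a whole-brain scan contains tens of thousands of voxels
alpha = 0.05       # conventional significance threshold

# Under the null hypothesis (no real activity anywhere, e.g. a dead fish),
# each voxel's p-value is uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(n_voxels)]

uncorrected = sum(p < alpha for p in p_values)
bonferroni = sum(p < alpha / n_voxels for p in p_values)

# Roughly alpha * n_voxels (about 2,500) voxels look 'active' by chance alone.
print(f"'Active' voxels, uncorrected: {uncorrected}")
print(f"'Active' voxels, Bonferroni-corrected: {bonferroni}")
```

Bonferroni correction (dividing the threshold by the number of tests) is only the bluntest of the available fixes, but it illustrates why skipping the correction step produces spurious ‘activation’.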
Compared to other scientific studies, neuroimaging studies require additional scrutiny, because their output has more persuasive power than that of many other techniques – particularly in the case of fMRI. Several studies, including one by Deena Weisberg and colleagues published in the Journal of Cognitive Neuroscience, have found that non-experts are more likely to be persuaded by a scientific study if it is accompanied by fMRI brain images, rather than a different type of graphical representation of results.
Researchers have also shown that potential jurors who read summaries of a criminal trial were more likely to be convinced of a defendant’s guilt when fMRI evidence was produced. A study by David McCabe and colleagues in Behavioral Sciences and the Law found that the evidence provided by these brain scans was perceived as more powerful than similar evidence from polygraphs, or thermal facial imaging. This bias was removed when the participants received additional information critiquing the use of fMRI as a lie detection technique.
The use of fMRI in lie detection raises many of the same issues as the use of polygraphs, the so-called ‘lie detectors,’ which compare physiological measurements of arousal during questioning to the same measurements during a neutral baseline. Polygraphs can be moderately accurate at detecting deception (around 70 per cent), but also have very high rates of false positives (around 65 per cent). According to the creators of fMRI lie-detection methods, they are about 90 per cent accurate. However, as shown in a study by Giorgio Ganis and colleagues published in NeuroImage, associating covert movements or mental imagery with irrelevant or baseline stimuli can reduce that rate to 33 per cent – less than chance.
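Headline accuracy figures also conceal how badly a high false-positive rate degrades a test once it is put to use. A quick Bayes' rule calculation shows this; the sensitivity and false-positive figures echo the numbers quoted above, while the 20 per cent base rate of deception and the 10 per cent fMRI false-positive rate are purely illustrative assumptions:

```python
def ppv(sensitivity: float, false_positive_rate: float, base_rate: float) -> float:
    """Positive predictive value via Bayes' rule: the probability that a
    'deceptive' result actually comes from a deceptive subject."""
    true_pos = sensitivity * base_rate
    false_pos = false_positive_rate * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Polygraph-like figures: 70% detection, 65% false positives.
polygraph = ppv(sensitivity=0.70, false_positive_rate=0.65, base_rate=0.20)
# Optimistic fMRI figures: 90% detection, with an assumed 10% false positives.
fmri = ppv(sensitivity=0.90, false_positive_rate=0.10, base_rate=0.20)

print(f"Polygraph: a 'deceptive' result is correct {polygraph:.0%} of the time")
print(f"fMRI:      a 'deceptive' result is correct {fmri:.0%} of the time")
```

Under these assumptions a polygraph ‘fail’ is right only about a fifth of the time, and even the optimistic fMRI figures leave roughly a 30 per cent chance that a flagged subject was telling the truth.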
This vulnerability to false positives, and to simple countermeasures, is only one technical limit of fMRI deception detection. Other issues exist. Laboratory studies of lie detection have all involved willing, neurotypical participants (those without a developmental or psychiatric disorder strongly affecting mental function) in a safe lab setting, instructed to give a binary true/false response. None of these conditions is guaranteed to hold in law enforcement or other similar settings, as discussed by Jeffrey Simpson in the Journal of the American Academy of Psychiatry and the Law, among others.
Technically speaking, fMRI may not be effective for deception detection, and the possibility of false positives remains. There are also a host of ethical and legal issues. Given the persuasive nature of fMRI results, even equivocal findings might be taken as strong evidence of an individual’s guilt or dishonesty. Privacy issues are highly relevant, as individuals are generally assumed to have some right to their private thoughts. Furthermore, conflicts of interest abound. In a recent American civil case, for example, a private corporation scanned their client a second time after the first scan delivered a ‘guilty’ result, on the grounds that their client had been tired during the first scan.
Of particular concern is the inaccessibility of fMRI technology. Scanners are expensive, costing hundreds of dollars per hour of scanning, and processing and presenting the data requires a high level of expertise. If fMRI evidence were admitted by the legal system, criminal defendants might lack the resources to hire their own experts or pay for their own scans. The cost would widen the existing disparity between wealthy and poor defendants’ ability to defend themselves: wealthier individuals could afford convincing fMRI evidence of their innocence, while poorer individuals could not.
Similar problems of technical ability and economic disparity plague other potential applications of fMRI. While pain researchers have made a great deal of progress in determining neural correlates for the perception and experience of pain, pain is ultimately subjective, and it is not possible to say whether an individual is in pain simply by assessing brain activity. This problem exists throughout medicine, where diagnoses based primarily on brain scans can run up against subjective experience, risking further denial of medical resources to individuals who already struggle to obtain help.