Reconstructing face images from EEG data

A recently published scientific study by researchers from the Department of Psychology at the University of Toronto reports on the use of EEG data to reconstruct images of faces shown to research participants. In broad brush strokes: study participants were shown images of faces while EEG recordings of their brain activity were made and analysed. Subsequently, the participants were shown more faces, again with their EEG recorded, and these new EEG data were then used to reconstruct images of the faces the participants had been viewing.
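
To make the general idea a little more concrete, here is a toy sketch (in Python) of the kind of pipeline such studies tend to use: learn a mapping from EEG features to a low-dimensional face representation on one set of trials, then apply that mapping to new EEG trials and invert the representation to get a reconstructed image. To be clear, this is my own simplified illustration with made-up synthetic data, not the authors’ actual method.

```python
# A toy sketch of the general idea (not the authors' actual pipeline):
# learn a linear mapping from EEG features to a low-dimensional face
# representation on training trials, then invert that representation for
# new EEG trials to get "reconstructed" images. All data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_train, n_test = 200, 20
n_eeg_features = 500            # e.g. flattened channel-by-time amplitudes
img_shape = (32, 32)            # tiny stand-in for face images
n_pixels = img_shape[0] * img_shape[1]

# Synthetic "face images" and EEG responses noisily related to them
train_imgs = rng.random((n_train, n_pixels))
test_imgs = rng.random((n_test, n_pixels))
mixing = rng.normal(size=(n_pixels, n_eeg_features))
train_eeg = train_imgs @ mixing + rng.normal(scale=5.0, size=(n_train, n_eeg_features))
test_eeg = test_imgs @ mixing + rng.normal(scale=5.0, size=(n_test, n_eeg_features))

# Step 1: compress the training images into a low-dimensional "face space"
pca = PCA(n_components=30).fit(train_imgs)
train_face_space = pca.transform(train_imgs)

# Step 2: learn an EEG -> face-space mapping on the training trials
decoder = Ridge(alpha=1.0).fit(train_eeg, train_face_space)

# Step 3: predict face-space coordinates from new EEG trials and map them
# back to pixel space to obtain the reconstructed images
reconstructions = pca.inverse_transform(decoder.predict(test_eeg))
reconstructions = reconstructions.reshape((n_test, *img_shape))
print(reconstructions.shape)    # (20, 32, 32)
```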

What initially attracted my attention to this study was this CBC podcast. Have a listen (it’s 24 minutes long). What struck me was just how exciting it all sounded: for me, at least, it sounded as if EEG data were being used to create pretty accurate reconstructions of the images viewed by study participants. The podcast also includes interviews with researchers about some of the ethical and legal issues raised by this technology.

Listening to the podcast reminded me of the fMRI-based studies (from 2011, I think) in which fMRI data were used to reconstruct movies that subjects were watching. For examples of those earlier fMRI-based studies, see here and here.

The prospect of being able to use a far less expensive and more portable technology like EEG to do something similar is very enticing. So, having listened to the podcast, I then looked up the original study to learn a bit more: Nemrodov D, Niemeier M, Patel A, and Nestor A (2018) ‘The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction’, eNeuro, 5: 1-17. The reported accuracy rates seem rather impressive: 64% for images of happy faces and 69% for images of neutral faces (where 50% would be no better than chance).
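
The 50% chance level suggests a two-alternative comparison: roughly, a reconstruction counts as correct if it is more similar to the face it was meant to reconstruct than to some other face. Here is a rough sketch of how such a metric might be computed; again, this is my own illustration rather than the paper’s exact procedure.

```python
# A rough sketch of a two-alternative "reconstruction accuracy" with a 50%
# chance level: a reconstruction counts as correct if it is more similar
# (here, higher pixel correlation) to its own target image than to a foil
# image drawn from the rest of the set.
import numpy as np


def pairwise_accuracy(reconstructions, originals):
    """Fraction of (target, foil) pairs for which a reconstruction is
    closer to its own target image than to the foil image."""
    n = len(originals)
    correct, total = 0, 0
    for i in range(n):
        rec = reconstructions[i].ravel()
        target_sim = np.corrcoef(rec, originals[i].ravel())[0, 1]
        for j in range(n):
            if j == i:
                continue
            foil_sim = np.corrcoef(rec, originals[j].ravel())[0, 1]
            correct += int(target_sim > foil_sim)
            total += 1
    return correct / total


# Random "reconstructions" score around 0.5 (chance); perfect ones score 1.0
rng = np.random.default_rng(0)
originals = rng.random((10, 32, 32))
print(pairwise_accuracy(rng.random((10, 32, 32)), originals))  # ~0.5
print(pairwise_accuracy(originals, originals))                 # 1.0
```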

Having heard the podcast and read the article, here are my reflections. As usual, there seems to be a rather large gulf between what the media reports (which is rather hype-ish) and what the study actually accomplished (which appears modest). More importantly, though, to me all of the reconstructed images of faces look almost identical, even though the original images look like very different faces. That is, if you look only at the reconstructions, they all appear to be the same person, even though the original images they are meant to reconstruct clearly show different people.

If all you were shown was the main example, then here’s what you’d see, and you might even think to yourself “Wow — pretty good!”

But take a look at the following two examples:

To see what I mean, cover up the original images (left-hand side) and look only at the reconstructions (right-hand side). Then do the opposite. To me, all the reconstructed images look like the same dude (“Hi dude! And hi again, dude! And again…”), and yet the original images look like clearly different dudes (“Pleased to meet y’all, dudes!”).
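
If you wanted to put a number on this impression rather than just eyeballing the images, one simple check would be to compare the average pixel-level correlation among the reconstructions with the average correlation among the originals: if the reconstructions really do all look like the same dude, the former should be much higher. A minimal sketch, assuming you have the images loaded as arrays:

```python
# A minimal way to quantify the "they all look the same" impression: compare
# the average pixel correlation among the reconstructions with the average
# correlation among the original face images.
import numpy as np


def mean_pairwise_correlation(images):
    """Average correlation between every pair of images in a stack."""
    flat = np.asarray([img.ravel() for img in images], dtype=float)
    corr = np.corrcoef(flat)                       # n x n correlation matrix
    return corr[np.triu_indices(len(flat), k=1)].mean()


# Synthetic demo: five very different "faces" vs five near-identical ones
rng = np.random.default_rng(0)
distinct = rng.random((5, 32, 32))
similar = rng.random((32, 32)) + 0.1 * rng.random((5, 32, 32))
print(mean_pairwise_correlation(distinct))   # close to 0
print(mean_pairwise_correlation(similar))    # close to 1
```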

What do I make of this? Well, clearly this is a proof-of-concept study, and the methods the authors describe seem fine. As far as practical applications go, however, this seems to me a long way from showing that we are anywhere close to encountering the sorts of ethical and legal problems discussed in the CBC podcast. I don’t want to make too much of this, especially since the fMRI-based studies were rather impressive. And in any case, even if no studies were particularly impressive yet, there’d still be ample reason, I think, to reflect on the potential ethical and legal implications right now, rather than waiting until the technology is actually effective.

Still, I also think it’s useful to reflect on the disparity between what the media reports and what scientists are actually doing.