This Software Can Guess What People Are Looking At


When it comes to the human brain, scientists still have a lot left to learn. One function that remains mysterious: how we translate two-dimensional pictures, such as a photo of a cat on a computer screen, into objects we recognize from real life. To learn more about this process, researchers from the University of Washington have found a way to use brain implants and advanced software to decode brain signals at nearly the speed of thought.

In the new paper published in PLOS Computational Biology [PDF], the team worked with seven epilepsy patients who had already received temporary electrode implants in their brains to monitor their seizures. Seated in front of a computer, the patients were shown pictures of human faces, houses, and blank gray screens in random order, each for just 400 milliseconds at a time (they were told to be on the lookout for an upside-down house).

The electrodes in their brains were hooked up to software that had been programmed to detect two specific brain signal properties: "event-related potentials," which occur in immediate response to an image, and "broadband spectral changes," which linger after an image has already been seen. By digitizing brain signals 1000 times per second, the software was able to pinpoint which combination of electrode locations and signals best matched the image each patient saw in front of them.
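To make that concrete, here is a minimal sketch in Python of how a two-feature decoder along these lines might extract its inputs from a single trial of electrode recordings. The time windows, frequency band, electrode layout, and function names are all illustrative assumptions, not the study's actual code:

```python
import numpy as np

FS = 1000  # samples per second: signals digitized 1000 times per second,
           # so one sample equals one millisecond

def erp_feature(trial, t0=50, t1=250):
    """Event-related potential: mean voltage per electrode in an early
    post-stimulus window (window in ms; values are assumptions)."""
    return trial[:, t0:t1].mean(axis=1)

def broadband_feature(trial, t0=100, t1=400, f_lo=65, f_hi=150):
    """Broadband spectral change: mean log power per electrode in a
    high-frequency band over a later window (band is an assumption)."""
    seg = trial[:, t0:t1]
    power = np.abs(np.fft.rfft(seg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(seg.shape[1], d=1.0 / FS)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.log(power[:, band].mean(axis=1))

def features(trial):
    """Concatenate both signal types into one vector, so a classifier
    can weigh every electrode/signal combination against the others."""
    return np.concatenate([erp_feature(trial), broadband_feature(trial)])
```

Here `trial` is assumed to be an array of shape (electrodes, samples); the resulting vector pairs each electrode with both signal types.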

After an initial training round to get the software up to speed, the patients were shown an entirely new set of pictures. The computer was able to detect with 96 percent accuracy whether subjects were looking at a face, a house, or a blank screen, despite never having encountered those particular pictures before.
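That procedure amounts to a standard held-out evaluation: fit the decoder on one block of trials, then score it only on trials it has never seen. A hedged sketch of that step, using scikit-learn's linear discriminant analysis and random stand-in data (the study's actual classifier and recordings differ):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-in data: 120 training trials and 60 new trials, each already
# reduced to a feature vector (ERP + broadband values per electrode),
# as in the extraction sketch above. Real features would come from
# the electrode recordings.
n_features = 2 * 32  # two signal types for an assumed 32 electrodes
X_train = rng.normal(size=(120, n_features))
y_train = rng.choice(["face", "house", "blank"], size=120)
X_new = rng.normal(size=(60, n_features))
y_new = rng.choice(["face", "house", "blank"], size=60)

# Fit on the initial round, then score on pictures never seen in training.
clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_new, y_new):.0%}")
```

On this random placeholder data the score hovers near chance; the study's reported figure on real recordings is the 96 percent above.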

Researchers credited the computer’s success rate to its ability to detect not one but two separate brain signals. As they state in the study, event-related potentials and broadband spectral changes "capture different and complementary aspects of the subject’s perceptual state." That insight could help shed light on how the brain translates complex images like pictures on a screen.

It’s important to note, as Gizmodo does, that while the results are intriguing, the study was quite limited. Future experiments would ideally include more diverse sets of images spanning many categories. Eventually, the brain-decoding technology could be used to build devices that help patients with paralysis and other disabilities communicate.

[h/t Gizmodo]