Gravitational lensing and facial recognition

A Hubble telescope image of the galaxy cluster SDSS J1038+4849. Credit: JPL/NASA

These images of gravitational lensing, especially the one on the left, are pretty famous because apart from demonstrating the angular magnification effects of strong lensing very well, they’ve also been used by NASA in their Halloween promotional material. The ring-like arc that forms the ‘face’ is the result of a galaxy cluster, SDSS J1038+4849, lying directly on our line of sight to a more distant object and bending that object’s light around itself. Because of the alignment, the light is bent all the way around the cluster, forming what’s known as an Einstein ring. This particular instance was discovered in early 2015 by astronomer Judy Schmidt.
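For a sense of the scales involved: when source, lens and observer are aligned, the ring’s angular radius – the Einstein radius – depends only on the lens mass and the angular-diameter distances involved. The snippet below is a rough back-of-the-envelope sketch in Python, not a model of SDSS J1038+4849 itself; the mass and distances are illustrative placeholders.

```python
# Einstein radius for a perfectly aligned compact lens:
#   theta_E = sqrt( (4 G M / c^2) * D_ls / (D_l * D_s) )
# where D_l, D_s and D_ls are angular-diameter distances to the lens, to the
# source, and between lens and source. All numbers below are illustrative.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # one megaparsec in metres

def einstein_radius_arcsec(mass_solar, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Angular radius of the Einstein ring, in arcseconds."""
    M = mass_solar * M_SUN
    d_l, d_s, d_ls = d_l_mpc * MPC, d_s_mpc * MPC, d_ls_mpc * MPC
    theta = math.sqrt(4 * G * M / c**2 * d_ls / (d_l * d_s))  # radians
    return math.degrees(theta) * 3600

# e.g. a ~1e14 solar-mass cluster roughly halfway to a distant background galaxy
print(einstein_radius_arcsec(1e14, d_l_mpc=1000, d_s_mpc=2000, d_ls_mpc=1000))
# -> a few tens of arcseconds, the scale of the arcs in the Hubble image
```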

Seeing this image again prompted me to recall a post I’d written long ago on a different blog (no longer active) about our brains’ tendency to spot patterns that don’t actually exist in an image – such as looking at an example of strong gravitational lensing and spotting a face. The universe didn’t intend to form a face, but all our brain needs to see one is an approximate configuration of two eyes, a nose, a smile and, if it’s lucky, a contour. This tendency is called pareidolia. What each individual chooses to perceive in more ambiguous images or noises is the basis of the (now-outmoded) Rorschach inkblot test.

A 2009 paper in the journal NeuroReport reported evidence that human adults identify a face where there is none only about 35 milliseconds slower than when a face is really present (165 ms v. 130 ms) – and both through a region of the brain called the fusiform face area, which may have evolved to process faces. The finding speaks to our evolutionary need to identify these and similar visual configurations, a crucial part of social threat perception and social navigation. The authors of the 2009 paper have suggested using their findings to investigate forms of autism in which a person has trouble looking at faces.

My favourite practical instance of pareidolia is in Google’s DeepDream project, which is built on a neural network trained to differentiate between and recognise images, and then used to deliberately over-process them. When software engineers at Google fed a random image into the network’s input layer and asked DeepDream to transform it into an image containing some specific, well-defined objects, the network engaged in a process called algorithmic pareidolia: picking out, and then amplifying, patterns that aren’t really there. Each layer of a neural network, read bottom-up, analyses the image at a different level of abstraction, with the lower layers going after edges and simple strokes and the higher layers going after entire objects and their arrangement.
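That bottom-up hierarchy is easy to see by pushing an image through a pretrained classifier and looking at what each stage produces. The sketch below is only an illustration, assuming PyTorch and an off-the-shelf VGG16 from torchvision (DeepDream itself used a GoogLeNet/Inception model): as the image moves up the stack, the feature maps gain channels and lose spatial resolution, matching the shift from edge-like detail to object-level structure.

```python
import torch
import torchvision.models as models

# Pretrained VGG16 as an illustrative stand-in for the classifier behind DeepDream.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
image = torch.rand(1, 3, 224, 224)   # a stand-in for a real photograph

with torch.no_grad():
    activation = image
    for i, layer in enumerate(cnn):
        activation = layer(activation)
        if isinstance(layer, torch.nn.Conv2d):
            # Channel count grows and spatial resolution shrinks with depth:
            # fine, edge-like responses near the bottom, coarse object-level
            # features near the top.
            print(f"layer {i:2d}: {tuple(activation.shape[1:])}")
```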

In many instances, algorithmic pareidolia yielded images that looked similar to what the human visual cortex produces under the influence of LSD. This has prompted scientists to investigate whether psychedelic compounds cause electrochemical changes in the brain that resemble the feedback-driven processing of convolutional neural networks (the kind of network DeepDream is built on). In other words, when DeepDream dreamt, it was on an acid trip. In June 2015, three software engineers from Google explained how pareidolia took shape inside such networks:

If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.
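To make that feedback loop concrete, here is a minimal sketch of the idea, again assuming PyTorch and an off-the-shelf VGG16 rather than the Inception model Google actually used, and leaving out refinements such as input normalisation and the multi-scale ‘octave’ processing in the real implementation. The loop simply tells a chosen layer, “whatever you see there, I want more of it”, by nudging the pixels in the direction that strengthens that layer’s activations.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Pretrained convolutional network; only its convolutional stack is needed.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.to(device).eval()

def deep_dream(img_path, layer_idx=20, steps=30, lr=0.05):
    """Amplify whatever the chosen layer already responds to in the image."""
    img = Image.open(img_path).convert("RGB")
    x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0).to(device)
    x.requires_grad_(True)

    for _ in range(steps):
        activation = x
        for i, module in enumerate(cnn):
            activation = module(activation)
            if i == layer_idx:               # stop at the chosen layer
                break
        loss = activation.norm()             # "whatever you see there, I want more of it"
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)   # gradient ascent on the pixels
            x.clamp_(0, 1)                   # keep a valid image
            x.grad.zero_()
    return T.ToPILImage()(x.detach().squeeze(0).cpu())
```

Run for a few dozen steps, the patterns that layer is tuned to – textures at lower layers, eyes and animal-like shapes at higher ones – start to surface out of whatever was faintly suggested in the original photograph.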

Diving further into complex neural networks may eventually allow scientists to explore cognitive processes at a pace thousands of times slower than the one at which they unfold in the brain.