From ‘A Single Laser Fired Through a Keyhole Can Expose Everything Inside a Room’, published by Gizmodo on September 8, 2021:
The keyhole imaging technique, developed by researchers at Stanford University’s Computational Imaging Lab, is so named because all that’s needed to see what’s inside a closed room is a tiny hole (such as a keyhole or a peephole) large enough to shine a laser beam through, creating a single dot of light on a wall inside. As with previous experiments, the laser light bounces off a wall, then off an object in the room, and then off the wall again, with countless photons eventually being reflected back through the hole to the camera, which uses a single-photon avalanche photodetector to measure the timing of their return.
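The core measurement described above is time-of-flight: a photon’s round-trip time, multiplied by the speed of light, gives the total length of the path it travelled (wall, object, wall, and back through the hole). Here is a minimal toy sketch of that relationship; it is not the Stanford method, and the function names and values are illustrative assumptions.

```python
# Toy time-of-flight sketch: a photon's round-trip timing implies the total
# optical path length it travelled. Illustrative only, not the paper's method.

C = 299_792_458.0  # speed of light in vacuum, m/s


def path_length_from_timing(arrival_time_s: float, laser_fire_time_s: float) -> float:
    """Total path length (in metres) implied by a photon's round-trip time."""
    return C * (arrival_time_s - laser_fire_time_s)


# A photon detected 20 ns after the laser pulse travelled roughly 6 m in total,
# spread across the wall-object-wall bounces.
total_path = path_length_from_timing(20e-9, 0.0)
```

Recovering the hidden geometry then amounts to inverting many such path-length constraints, which is where the reconstruction algorithm does its work.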
When an object hidden in the room is static, the new keyhole imaging technique simply can’t calculate what it’s seeing. But the researchers have found that a moving object, paired with pulses of light from a laser, generates enough usable data over a long exposure time for an algorithm to create an image of what it’s seeing. The quality of the results is even worse than with previous NLOS techniques, but it still provides enough detail to make an educated guess about the size and shape of the hidden object. A wooden mannequin ends up looking like a ghostly angel, but when paired with a properly trained image-recognition AI, determining that a human (or human-shaped object) was in the room seems very feasible.
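Why does a long exposure help? Individual photon detections are extremely noisy, but binning many arrival times into a histogram lets the true return peak accumulate above the background. The following is a hedged toy sketch of that accumulation idea only; the bin widths, counts, and noise model are invented for illustration and are not taken from the paper.

```python
# Toy sketch: binning many noisy photon arrival times into a histogram makes
# the true return peak stand out over uniform background noise. All parameter
# values here are illustrative assumptions, not the researchers' numbers.
import random

random.seed(0)
BIN_WIDTH_S = 1e-9   # 1 ns timing bins
N_BINS = 100         # 100 ns observation window


def accumulate_histogram(arrival_times_s, n_bins=N_BINS, bin_width=BIN_WIDTH_S):
    """Bin photon arrival times (seconds) into a fixed-width timing histogram."""
    hist = [0] * n_bins
    for t in arrival_times_s:
        b = int(t / bin_width)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist


# Simulate a true return at ~20 ns (with timing jitter) buried in background
# noise spread uniformly across the whole window.
signal = [random.gauss(20e-9, 0.3e-9) for _ in range(2000)]
noise = [random.uniform(0.0, N_BINS * BIN_WIDTH_S) for _ in range(5000)]
hist = accumulate_histogram(signal + noise)

# After enough detections, the histogram's peak bin sits near the 20 ns return.
peak_bin = max(range(N_BINS), key=hist.__getitem__)
```

A moving object shifts that peak over time, and it is this variation that gives the reconstruction algorithm something to work with.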
That is, the technique is fairly simple, and I think researchers haven’t tried it before for a couple of reasons: they didn’t think about such forms of reconnaissance (or surveillance), and they didn’t have the tools to do it. The latter is particularly interesting. As the last sentence of the excerpt says, the technique can produce a vague image of something, but with the help of AI/ML, the operator can sharpen it to get a clearer idea of the room’s contents. This is interesting because it is a leap, of sorts: AI/ML-based methods will be able to extract significantly more information from noise, where ‘significantly’ means ‘to an extent that classical methods previously haven’t been able to achieve despite being used to exhaustion’. It is akin to the difference between knowing whether there’s someone or something moving inside a closed room and knowing the colour of their clothes, or being able to distinguish between them and the other objects in the room.
So if you visualise a graph on which moving from left to right means going from fully visible to fully hidden, AI/ML can, and will, chew away at the left side, pushing any object or person that wishes to stay hidden from recon/surveillance measures to adopt more sophisticated methods of hiding, and therefore to move further rightward on the graph. That is, both the cost of and the technological literacy required for obfuscation, obscurity, and/or complete digital cloaking will go up. By this point, in 2021, this must sound like a trivial conclusion, but it’s also fascinating, in an awful sort of way, to be mindful that for all its various touted abilities, AI/ML could also fetch almost-useful recon/surveillance methods from just behind the veil of usefulness to well in front of it.