A machine learning agent built by Google and Stanford researchers to convert aerial photographs into street maps and back was caught cheating: it hid information it would later need in "a nearly imperceptible, high-frequency signal."
However, this incident doesn't reveal some kind of malicious intelligence inherent to AI; it simply exposes a problem that has existed since the invention of the computer: computers do exactly what they are trained to do.
The researchers were aiming to speed up and improve the process of turning satellite imagery into Google's famously accurate maps. To that end, they were working with a neural network model called CycleGAN. In some early results, the agent was performing suspiciously well. But when the team examined the aerial images the agent reconstructed from its street maps, they were surprised to find details that didn't appear to be on the street map at all.
For example, skylights on a roof that were omitted in the creation of the street map would mysteriously reappear when the agent performed the reverse process. In the image below, notice the dots that are present on both aerial images but not on the street map.
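Before getting into how the agent pulled this off, it helps to see the shape of a CycleGAN training step. Below is a minimal, illustrative PyTorch sketch; the tiny stand-in networks and names here are my own simplifications for clarity, not the researchers' actual architecture.

```python
# A minimal, illustrative CycleGAN training step in PyTorch.
# The tiny stand-in networks are simplifications; the real
# generators and discriminators are far larger.
import torch
import torch.nn as nn

def tiny_net(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

G = tiny_net(3, 3)  # generator: aerial image -> street map
F = tiny_net(3, 3)  # generator: street map -> aerial image
D_map = nn.Sequential(tiny_net(3, 1), nn.AdaptiveAvgPool2d(1))  # "does this look like a real map?"

adv_loss = nn.MSELoss()   # least-squares GAN loss
cyc_loss = nn.L1Loss()    # cycle-consistency (reconstruction) loss

aerial = torch.rand(1, 3, 64, 64)  # placeholder training batch

fake_map = G(aerial)          # aerial -> street map
recon_aerial = F(fake_map)    # street map -> aerial (the full cycle)

# The generators are graded on (a) how convincing the street map looks
# to the discriminator and (b) how closely the reconstructed aerial
# matches the original -- nothing directly checks HOW the round trip
# is achieved.
realism = adv_loss(D_map(fake_map), torch.ones(1, 1, 1, 1))
reconstruction = cyc_loss(recon_aerial, aerial)
loss_G = realism + 10.0 * reconstruction
loss_G.backward()  # (real training alternates with discriminator updates)
```

The detail that matters for what follows is in the comment: the losses only measure the end points of the cycle, not the route taken between them.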
The researchers intended the agent to interpret the features of each type of map and match them to the correct features of the other. But what the agent was actually being graded on was the clarity of the street map and how closely the reconstructed aerial image matched the original. So it didn't learn how to create one map from the other; it learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial image are secretly written into the street map's actual visual data: thousands of tiny changes in color that a computer can easily detect but the human eye would never notice.
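To make "thousands of tiny changes in color" concrete, here is a toy numpy sketch, entirely my own illustration rather than the model's actual scheme, showing that a one-intensity-step nudge per pixel is invisible against the 0-255 color range yet carries one bit per pixel:

```python
# Toy illustration (not the model's actual encoding): a one-step color
# perturbation is invisible to the eye yet carries one bit per pixel.
import numpy as np

rng = np.random.default_rng(0)
street_map = rng.integers(1, 255, size=(256, 256, 3), dtype=np.uint8)  # avoid 0/255 edges

hidden_bits = rng.integers(0, 2, size=(256, 256))  # one bit per pixel: 65,536 bits

encoded = street_map.astype(np.int16)
encoded[..., 0] += np.where(hidden_bits == 1, 1, -1)  # nudge red channel by one step
encoded = encoded.astype(np.uint8)

diff = np.abs(encoded.astype(int) - street_map.astype(int))
print(diff.mean(), diff.max())  # ~0.33 and 1: far below what a human can see

decoded = encoded[..., 0] > street_map[..., 0]
print((decoded == (hidden_bits == 1)).mean())  # 1.0 -- every bit recovered
```

In a 256×256 image, that is 65,536 bits of hidden capacity for an average pixel change of a fraction of one intensity step.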
In fact, the computer got so good at this that it learned to encode any aerial image into any street map. It didn't even need to pay close attention to the "real" street map: all the information required to recreate the aerial image could be safely overlaid on a completely different street map, according to the researchers.
This practice of encoding data into images is nothing new; it's an established science known as steganography. What is new is a computer developing its own steganographic method to avoid actually learning to perform the task at hand. (Nor is the research itself brand new: the paper was published last year.)
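For reference, here is what the textbook flavor of steganography looks like: a least-significant-bit (LSB) scheme in a few lines of Python. To be clear, this is my sketch of the classic technique, not the subtler high-frequency encoding CycleGAN found, and the payload string is a made-up example. Note how, echoing the point above, the same payload can be hidden in and recovered from completely different cover images.

```python
# Classic least-significant-bit (LSB) steganography: hide a message in
# the lowest bit of each red-channel value. This is the textbook
# technique, not the encoding CycleGAN discovered on its own.
import numpy as np

def hide(cover: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    stego = cover.copy()
    red = stego.reshape(-1, 3)[:, 0]          # view of the red channel
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits
    return stego

def reveal(stego: np.ndarray, n_bytes: int) -> bytes:
    red = stego.reshape(-1, 3)[:, 0]
    return np.packbits(red[: n_bytes * 8] & 1).tobytes()

rng = np.random.default_rng(0)
secret = b"aerial detail: skylights at (12, 34)"  # hypothetical payload

# The payload doesn't depend on the cover: hide it in two completely
# different "street maps" and recover it from either one.
map_a = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
map_b = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(reveal(hide(map_a, secret), len(secret)))  # b'aerial detail: ...'
print(reveal(hide(map_b, secret), len(secret)))  # same message

# Maximum per-channel change is 1 out of 255 -- invisible to the eye.
print(np.abs(hide(map_a, secret).astype(int) - map_a.astype(int)).max())  # 1
```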
It's tempting to read this as a step in the "the machines are getting smarter" narrative, but the truth is almost the opposite. The machine, not smart enough to do the genuinely difficult job of converting these sophisticated image types into one another, found a way to cheat that humans are bad at detecting. This could have been avoided with more rigorous evaluation of the agent's results, and no doubt the researchers went on to do just that.
In other words, this is a case of the old computing adage PEBKAC: "problem exists between keyboard and chair." Or, as HAL put it: "It can only be attributable to human error." The paper, "CycleGAN, a Master of Steganography," was presented at the Neural Information Processing Systems conference last year.