@YesOhYes
The way that system works is it divides an image into 4 parts and stores the average colour of each, so every image is stored as just 4 colour values. Those 4 values are compared against the 4 values of every other image, with the fuzziness setting controlling how far they can deviate and still count as a match.
Essentially, the system doesn’t look for shapes or anything fancy like that, so if all the pixels in an image have been altered to be more saturated, the resulting stored values are going to be too different for the system to match it to the original.
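To make that concrete, here’s a minimal sketch of that kind of scheme in Python. The function names, the per-channel comparison, and the exact matching rule are my own guesses at how such a system could work, not the actual implementation:

```python
def quad_averages(pixels):
    """Average (R, G, B) colour of each quadrant of a pixel grid.

    `pixels` is a list of rows, each row a list of (r, g, b) tuples.
    Returns 4 averages: top-left, top-right, bottom-left, bottom-right.
    """
    h, w = len(pixels), len(pixels[0])
    quadrants = [(0, h // 2, 0, w // 2), (0, h // 2, w // 2, w),
                 (h // 2, h, 0, w // 2), (h // 2, h, w // 2, w)]
    averages = []
    for y0, y1, x0, x1 in quadrants:
        n = (y1 - y0) * (x1 - x0)
        sums = [0, 0, 0]
        for y in range(y0, y1):
            for x in range(x0, x1):
                for c in range(3):
                    sums[c] += pixels[y][x][c]
        averages.append(tuple(s / n for s in sums))
    return averages

def matches(avgs_a, avgs_b, fuzziness):
    """Match if every channel of every quadrant average is within `fuzziness`."""
    return all(abs(ca - cb) <= fuzziness
               for qa, qb in zip(avgs_a, avgs_b)
               for ca, cb in zip(qa, qb))
```

So a slightly darkened copy still matches at a generous fuzziness, but bumping every pixel’s saturation shifts all 4 stored averages at once and pushes the comparison past the threshold, which is exactly the failure mode described above.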
There were some trials of a more complex system some years ago, but the results weren’t much more precise for 90%+ of the cases tested, and it was more computationally intensive.