Summary: | In recent years, Deepfake has raised serious concerns due to its ability to make a forged image look genuine. Many approaches have been developed to mitigate this risk. Among them, one notable line of work applies a model's adversarial noise to the image as a watermark, so that when the image is manipulated, the output is distorted to the point that the person's facial features are no longer recognizable. Recent work has developed a cross-model universal attack that produces a single watermark protecting multiple images against multiple models, removing the earlier constraint that watermarks be image- and model-specific. However, to guarantee the desired level of distortion, the adversarial noise threshold is set relatively high, which makes the watermark visible on human faces and impairs image quality and aesthetics.
To mitigate this issue, we bring the idea of just noticeable difference (JND) into the cross-model universal attack, aiming to produce a quality-preserving universal watermark while maintaining the original protection performance. We make two changes, sketched below. First, we replace the fixed-threshold clamp at each attack step with a JND clamp. Second, we introduce a face parsing model to gain finer control over the JND values: it segments portrait images into semantic regions, and each region receives its own scaling factor for the JND values.
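As an illustration of the first change, the following PyTorch-style sketch shows one attack step in which the usual uniform-epsilon clamp is replaced by a per-pixel JND clamp; the function name and the PGD-style update are our assumptions for exposition, not the exact implementation.

```python
import torch

def pgd_step_with_jnd(x_adv, x_orig, grad, jnd_map, step_size=2.0 / 255):
    # Move along the gradient sign, as in a standard PGD step.
    x_adv = x_adv + step_size * grad.sign()
    # JND clamp: bound the perturbation per pixel by the JND map,
    # instead of clamping it to a single uniform threshold epsilon.
    delta = torch.clamp(x_adv - x_orig, min=-jnd_map, max=jnd_map)
    # Keep the result a valid image.
    return torch.clamp(x_orig + delta, 0.0, 1.0)
```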
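For the second change, a minimal sketch of region-wise JND scaling is shown below; the region labels and scaling factors are hypothetical placeholders that depend on the chosen face parsing model and on tuning.

```python
import torch

# Hypothetical per-region scaling factors (illustrative values only).
REGION_SCALES = {0: 1.0,   # background: full JND budget
                 1: 0.5,   # skin: noise here is easily noticed
                 2: 0.3,   # eyes
                 3: 0.4,   # mouth
                 4: 0.8}   # hair

def scale_jnd_by_region(jnd_map, parsing_map):
    """jnd_map:     (B, 1, H, W) per-pixel JND thresholds.
    parsing_map: (B, 1, H, W) integer region labels from a face parser."""
    scale = torch.ones_like(jnd_map)
    for label, factor in REGION_SCALES.items():
        scale = torch.where(parsing_map == label,
                            torch.full_like(scale, factor), scale)
    return jnd_map * scale
```

In such a setup, the scaled JND map would be computed once per image and then passed to the clamping step above at every attack iteration.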
Through this, we achieve good visual quality while preserving protection performance. Experiments show that the watermark produced by the new JND cross-model universal attack outperforms the previous one in both visual quality and protection performance.