Preface: This post summarizes the common attacks used against AIGC detection. The excerpts collected below are all taken from AIGC-detection papers published in the past five years. If you work in this area, the attacks listed here can serve as a reference when designing robustness experiments.
We show the robustness of the proposed method with two different post-processing methods: JPEG compression and image resize. For JPEG compression, we randomly select one JPEG quality factor from [100, 90, 70, 50] and apply it to each of the fake images. For image resize, we randomly select one image size from [256, 200, 150, 128].
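A minimal sketch of this protocol with Pillow is shown below. The quality-factor and size lists come from the quote; the in-memory JPEG round-trip and the square resize are illustrative assumptions.

```python
import io
import random
from PIL import Image

def random_jpeg_compress(img, qualities=(100, 90, 70, 50)):
    """Re-encode the image with a randomly chosen JPEG quality factor."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=random.choice(qualities))
    buf.seek(0)
    return Image.open(buf).copy()

def random_resize(img, sizes=(256, 200, 150, 128)):
    """Rescale the image to a randomly chosen target resolution (square here, an assumption)."""
    s = random.choice(sizes)
    return img.resize((s, s), Image.BILINEAR)
```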
To test this, we blurred (simulating re-sampling) and JPEG-compressed the real and fake images following the protocol in [38], and evaluated our ability to detect them (Figure 5).
Gaussian blurring (sigma: 0.1–1), JPEG quality factors (70–100), image cropping, and resizing (cropping/scaling factor: 0.25–1)
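One possible implementation of the blur and crop-and-rescale perturbations with the ranges above; note that Pillow's GaussianBlur "radius" argument is the Gaussian standard deviation, and the uniform sampling of sigma and of the crop factor is an assumption.

```python
import random
from PIL import Image, ImageFilter

def random_gaussian_blur(img, sigma_range=(0.1, 1.0)):
    """Gaussian blur with sigma drawn uniformly from sigma_range."""
    return img.filter(ImageFilter.GaussianBlur(radius=random.uniform(*sigma_range)))

def random_crop_rescale(img, factor_range=(0.25, 1.0)):
    """Crop a random fraction of the image, then scale it back to the original size."""
    f = random.uniform(*factor_range)
    w, h = img.size
    cw, ch = max(1, int(w * f)), max(1, int(h * f))
    left, top = random.randint(0, w - cw), random.randint(0, h - ch)
    return img.crop((left, top, left + cw, top + ch)).resize((w, h), Image.BILINEAR)
```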
We also evaluate the robustness of our method and the baselines on the images post-processed with Gaussian Blur and JPEG Compression.
In this section, we assess the robustness of our proposed model in the face of seven post-processing techniques, encompassing adjustments to chromaticity, brightness, contrast, sharpness, rotation, and the application of Gaussian blur and mean blur… To create a more realistic simulation of complex real-world scenarios, we’ve incorporated randomness into the parameters controlling the image alterations. For instance, the factors governing the degree of image manipulation (chromaticity, brightness, contrast) are randomly selected from a range of 0.5 to 2.5 for each image in the test dataset. Similarly, the factor controlling image sharpness is an arbitrary integer within the range of 0 to 4. Rotation degrees range from 0 to 360, and the kernel size for both Gaussian and mean filters is 5 × 5.
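One way to realize these seven randomized edits with Pillow is sketched below. Mapping "chromaticity" to ImageEnhance.Color and the blur radii standing in for the 5×5 kernels are assumptions, and since the quote does not say whether the edits are combined, each entry is a standalone transform.

```python
import random
from PIL import ImageEnhance, ImageFilter

# Each entry is one of the seven post-processing techniques; parameter ranges follow the quote.
TRANSFORMS = {
    "chromaticity": lambda im: ImageEnhance.Color(im).enhance(random.uniform(0.5, 2.5)),
    "brightness":   lambda im: ImageEnhance.Brightness(im).enhance(random.uniform(0.5, 2.5)),
    "contrast":     lambda im: ImageEnhance.Contrast(im).enhance(random.uniform(0.5, 2.5)),
    "sharpness":    lambda im: ImageEnhance.Sharpness(im).enhance(random.randint(0, 4)),
    "rotation":     lambda im: im.rotate(random.uniform(0, 360)),
    # Pillow's GaussianBlur takes a standard deviation, not a kernel size; radius=2
    # (roughly 5x5 support) is an assumption standing in for the 5x5 Gaussian kernel.
    "gaussian_blur": lambda im: im.filter(ImageFilter.GaussianBlur(radius=2)),
    # BoxBlur(radius=2) averages over a 5x5 window, i.e. a 5x5 mean filter.
    "mean_blur":     lambda im: im.filter(ImageFilter.BoxBlur(radius=2)),
}
```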
For each image of the test, a crop with random (large) size and position is selected, resized to 200 × 200 pixels, and compressed using a random JPEG quality factor from 65 to 100.
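A sketch of this crop-resize-compress chain follows. The lower bound of 0.7 on the crop fraction (to keep the crop "large") is an assumption, as is the in-memory JPEG round-trip.

```python
import io
import random
from PIL import Image

def crop_resize_jpeg(img):
    """Random large crop -> resize to 200x200 -> JPEG with quality drawn from [65, 100]."""
    w, h = img.size
    f = random.uniform(0.7, 1.0)  # "large" crop fraction; exact bound is an assumption
    cw, ch = int(w * f), int(h * f)
    left, top = random.randint(0, w - cw), random.randint(0, h - ch)
    patch = img.crop((left, top, left + cw, top + ch)).resize((200, 200), Image.BILINEAR)
    buf = io.BytesIO()
    patch.convert("RGB").save(buf, format="JPEG", quality=random.randint(65, 100))
    buf.seek(0)
    return Image.open(buf).copy()
```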
To evaluate the robustness of the proposed framework to image perturbations, we apply common image perturbations on the test images with a probability of 50% following [13]. These perturbations include blurring, cropping, compression, adding random noise, and a combination of all of them. In this subsection, the discriminator of StyleGANbedroom is used as the transformation model.
Specifically, we evaluate the robustness of detectors and attributors against adversarial example attacks, which are the most common and severe attacks against machine learning models. We leverage three representative adversarial example attacks, namely FGSM [14], BIM [18], and DI-FGSM [41], to conduct the robustness analysis. Furthermore, given that our hybrid detector and attributor consider both the image and its corresponding prompt, we propose HybridFool, which maximizes the distance between the embedding of a given image and the prompt by adding perturbations to the image. In the following, we first present each adversarial example attack we consider in this robustness analysis. Then, we show the evaluation results.
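As an illustration of the adversarial-example side, below is a minimal single-step FGSM sketch in PyTorch; BIM iterates this step with a small step size, and DI-FGSM additionally applies random resize-and-pad input transforms before each gradient step. The epsilon value, [0, 1] pixel range, and cross-entropy loss are assumptions, not the exact setup of the quoted work.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=8 / 255):
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the assumed [0, 1] range
```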
Here, we evaluate the robustness of detectors under two classes of degradation, i.e., Gaussian blur and JPEG compression, following [47]. The perturbations are added at three levels for Gaussian blur (σ = 1, 2, 3) and two levels for JPEG compression (quality = 65, 30).
JPEG compression (quality 100–60), WEBP compression (quality 100–60), resizing (scale factors 1.25, 1.0, 0.75, 0.5, 0.25)
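A sketch of the WEBP variant and the scale-factor resizing. Pillow writes lossy WEBP via save(format="WEBP", quality=...); drawing the quality uniformly from 60–100 is an assumption.

```python
import io
import random
from PIL import Image

def random_webp_compress(img, quality_range=(60, 100)):
    """Re-encode as lossy WEBP with a quality factor drawn from quality_range."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="WEBP", quality=random.randint(*quality_range))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def scale_resize(img, scales=(1.25, 1.0, 0.75, 0.5, 0.25)):
    """Rescale both dimensions by a factor drawn from the quoted list."""
    s = random.choice(scales)
    w, h = img.size
    return img.resize((max(1, int(w * s)), max(1, int(h * s))), Image.BILINEAR)
```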
Following previous works [12, 59] we use JPEG compression (with quality q), center cropping (with crop factor f and subsequent resizing to the original size), Gaussian blur, and Gaussian noise (both with standard deviation σ).
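The center crop (with crop factor f and resize back to the original size) and additive Gaussian noise (with standard deviation σ) could look as follows; working in 0–255 pixel units for the noise is an assumption.

```python
import numpy as np
from PIL import Image

def center_crop_resize(img, f):
    """Center crop a fraction f of each dimension, then resize back to the original size."""
    w, h = img.size
    cw, ch = int(w * f), int(h * f)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch)).resize((w, h), Image.BILINEAR)

def add_gaussian_noise(img, sigma):
    """Add zero-mean Gaussian noise with standard deviation sigma (in 0-255 pixel units)."""
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```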
Specifically, we adopt random cropping, Gaussian blurring, JPEG compression, and Gaussian noising, each with a probability of 50%.
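The "each with a probability of 50%" scheme reduces to a small wrapper that applies a list of perturbation callables independently at random; the helpers sketched earlier in this post (random cropping, Gaussian blur, JPEG compression, Gaussian noise) can be passed in as ops.

```python
import random

def random_perturb(img, ops, p=0.5):
    """Apply each perturbation in ops independently with probability p (50% here)."""
    for op in ops:
        if random.random() < p:
            img = op(img)
    return img
```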
We adopt the experimental setup from (Wu et al., 2023a) to perform resizing (with scales of 0.5, 0.75, 1.0, 1.25, 1.5) and JPEG compression (with quality factors of 60, 70, 80, 90, 100) on the tested images, which include both real and generated images.
The post-processing operations we used include (a sketch of the noise models and the PSNR computation follows this list):
• Blurring: Gaussian filtering with a kernel size of 3 and sigma from 0.1 to 1.
• Brightness adjustment: the adjustment parameter is from 0.3 to 3.
• Contrast adjustment: Gamma transform with γ from 0.3 to 3.
• Random cropping: for a 256×256 image, the cropping size was from 256 to 96, and we up-sampled the amplitude spectrum of the LNP of the final cropped image back to 256.
• JPEG compression: quality factors from 70 to 100.
• Gaussian noise: the sigma was set from 1 to 10, and the PSNR of the original image and the image after adding noise is from 26 to 47.
• Pepper & Salt noise: the ratio of pepper to salt is 1:1. The density of the added noise is 0.001 to 0.01, and the PSNR of the noisy images is from 18 to 31.
• Speckle noise: the sigma ranges from 0.01 to 0.1, and the PSNR of the noisy images is from 22 to 57.
• Poisson noise: the lambda is set from 0.1 to 1, and the PSNR of the noisy images is from 3 to 58.
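A sketch of these noise models and of the PSNR used to report their strength, on uint8 numpy arrays. The exact form of the Poisson noise (here additive and mean-centered) is an assumption, since the list above only gives lambda.

```python
import numpy as np

def psnr(clean, noisy, peak=255.0):
    """Peak signal-to-noise ratio (dB) between the clean and the degraded image."""
    mse = np.mean((clean.astype(np.float64) - noisy.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def salt_pepper_noise(img, density):
    """Set a `density` fraction of pixels to 0 or 255 (pepper:salt = 1:1)."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < density / 2] = 0                            # pepper
    out[(mask >= density / 2) & (mask < density)] = 255    # salt
    return out

def speckle_noise(img, sigma):
    """Multiplicative speckle noise: x + x * n with n ~ N(0, sigma^2)."""
    n = np.random.normal(0.0, sigma, img.shape)
    return np.clip(img + img * n, 0, 255).astype(np.uint8)

def poisson_noise(img, lam):
    """Additive, mean-centered Poisson noise with rate lam (form is an assumption)."""
    noise = np.random.poisson(lam, img.shape).astype(np.float64)
    return np.clip(img + noise - lam, 0, 255).astype(np.uint8)
```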
Blur (0–3.0), compression (quality 100–30), noise (0–3.0), resizing (scale 1.0–0.2)
Summary: the most common robustness tests against AIGC detection are JPEG compression, resizing, Gaussian blur, and Gaussian noise, plus a number of less frequent attacks such as image cropping, chromaticity/brightness/contrast/sharpness adjustment, rotation, random noise, salt-and-pepper noise, speckle noise, Poisson noise, adversarial examples, and so on. When testing any of these attacks, remember to account for the attack strength as well; for JPEG compression, for example, the quality factor has to be considered.
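To make the point about attack strength concrete, a robustness table is typically built by sweeping the degradation parameter and re-scoring the detector at every level, e.g. for the JPEG quality factor (the quality list below is illustrative, not from any particular paper):

```python
import io
from PIL import Image

def jpeg_quality_sweep(img, qualities=(100, 90, 80, 70, 60, 50, 40, 30)):
    """Yield (quality, re-encoded image) pairs so the detector can be evaluated
    at every attack strength rather than at a single setting."""
    for q in qualities:
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        yield q, Image.open(buf).copy()
```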