
One-Click Face Restoration: Repairing Face Images with CodeFormer


While looking something up today I stumbled across an interesting tool called CodeFormer. The authors describe it as a model for face restoration, and it looked intriguing enough that I gave it a try. This post records and shares that experiment.

First, a brief review of the face restoration task:

Face restoration is the process of recovering, repairing, or reconstructing face images that are damaged, degraded, or occluded. It draws on techniques from image processing, computer vision, and machine learning.

Face restoration is useful in a number of scenarios, for example:

1. Occlusion removal: when a face is partially covered by another object, restoration infers the content of the occluded region and fills it in. This can rely on image inpainting algorithms (e.g., texture-filling or exemplar copy-paste methods), or on face recognition and face reconstruction techniques.

2. Recovery and reconstruction: when a face image is severely damaged or defective, restoration techniques can rebuild it, for example via image completion, face super-resolution, or face synthesis, to recover a complete face image.

3. Retouching and beautification: restoration can also cover cosmetic edits, such as removing wrinkles, blemishes, or eye bags, as well as adjusting skin tone or enhancing contrast to improve overall image quality.

Commonly used model families in face restoration include the following:

1. Autoencoder: an unsupervised model that compresses the input into a low-dimensional code and then decodes it back to the original data. In face restoration, an autoencoder can reconstruct missing or damaged facial regions (see the sketch after this list).

2. Convolutional Neural Network (CNN): a deep learning model widely used in image processing. In face restoration, CNNs can align a degraded face image with a complete reference face and regress the restored result.

3. Generative Adversarial Network (GAN): a generator and a discriminator trained adversarially to produce photorealistic images. In face restoration, a GAN can synthesize the missing or damaged facial regions.

4. Texture-filling methods: these use texture synthesis and texture transfer to propagate surrounding texture information into the damaged region.

5. Structure-based generative models: these exploit facial structure, such as landmarks or meshes, and restore or reconstruct the face through shape optimization and interpolation.

Beyond these, there are many variants and hybrids, such as image inpainting, super-resolution, and shape-recovery algorithms. Each has its strengths and limitations, so the right choice depends on the specific task and requirements.
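To ground the autoencoder idea from point 1 above, here is a minimal sketch of a convolutional autoencoder trained to reconstruct a masked-out face region. This is my own illustration, not code from any of the systems mentioned; the architecture, input size, and mask are placeholders.

import torch
import torch.nn as nn

# Minimal convolutional autoencoder sketch for inpainting-style reconstruction.
# Real systems are far deeper and train on large face datasets; this only
# illustrates the encode-compress-decode idea.
class InpaintAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32 -> 64
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 64 -> 128
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = InpaintAE()
face = torch.rand(1, 3, 128, 128)      # a face crop in [0, 1]
masked = face.clone()
masked[:, :, 40:90, 40:90] = 0.0       # zero out a square "damaged" region
loss = nn.functional.mse_loss(model(masked), face)  # learn to reconstruct the clean face
loss.backward()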

Back to the main topic: CodeFormer, the subject of this post.

The authors' project is at https://github.com/sczhou/CodeFormer. The paper's abstract reads:

Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance to 1) improve the mapping from degraded inputs to desired outputs, or 2) complement high-quality details lost in the inputs.

In this paper, we demonstrate that a discrete codebook prior learned in a small proxy space largely reduces the uncertainty and ambiguity of the restoration mapping by casting face restoration as a code prediction task, while providing rich visual atoms for generating high-quality faces. Under this paradigm, we propose a Transformer-based prediction network, named CodeFormer, to model the global composition and context of low-quality faces for code prediction, enabling the discovery of natural faces that closely approximate the target faces even when the inputs are severely degraded. To enhance adaptiveness to different degradations, we also propose a controllable feature transformation module that allows a flexible trade-off between fidelity and quality. Thanks to the expressive codebook prior and global modeling, CodeFormer outperforms the state of the art in both quality and fidelity, showing superior robustness to degradation. Extensive experimental results on synthetic and real-world datasets verify the effectiveness of our method.
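To make the "discrete codebook prior" concrete: instead of regressing pixels directly, the network predicts indices into a fixed bank of learned feature vectors. Below is a minimal vector-quantization sketch of my own (not the authors' code); the sizes match the dim_embd=512 and codebook_size=1024 arguments that appear in the inference script later, but the tensors here are random placeholders.

import torch

codebook = torch.randn(1024, 512)        # learned visual "atoms" (random here)
features = torch.randn(16, 512)          # 16 encoder feature vectors of a face

# Nearest-neighbor lookup: each feature is snapped to its closest code entry,
# so restoration reduces to predicting one discrete index per feature.
dists = torch.cdist(features, codebook)  # (16, 1024) pairwise distances
indices = dists.argmin(dim=1)            # discrete code indices
quantized = codebook[indices]            # quantized features fed to the decoder

print(indices.shape, quantized.shape)    # torch.Size([16]) torch.Size([16, 512])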

The core of the method:

(a) We first learn a discrete codebook and a decoder through self-reconstruction learning to store high-quality visual parts of face images.

(b) With the codebook and decoder fixed, we introduce a Transformer module for code sequence prediction, modeling the global face composition of low-quality inputs. In addition, a controllable feature transformation (CFT) module controls the information flow from the LQ encoder to the decoder. Note that this connection is optional and can be disabled to avoid adverse effects when inputs are severely degraded, and a scalar weight w can be adjusted to trade off between quality and fidelity.
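A rough sketch of what that scalar trade-off looks like in code. This is my own illustration, not the actual CFT module; the real module's internals differ, but the key idea is the same: w = 0 relies purely on the codebook/decoder path (quality), while larger w mixes in more encoder detail (fidelity).

import torch
import torch.nn as nn

class CFTSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # predict per-pixel scale and shift from concatenated encoder+decoder features
        self.to_affine = nn.Conv2d(channels * 2, channels * 2, kernel_size=3, padding=1)

    def forward(self, dec_feat, enc_feat, w: float):
        scale, shift = self.to_affine(torch.cat([enc_feat, dec_feat], dim=1)).chunk(2, dim=1)
        # w = 0 -> pure decoder path (quality); w = 1 -> full modulation (fidelity)
        return dec_feat + w * (scale * dec_feat + shift)

cft = CFTSketch(64)
dec = torch.randn(1, 64, 32, 32)
enc = torch.randn(1, 64, 32, 32)
print(cft(dec, enc, w=0.5).shape)  # torch.Size([1, 64, 32, 32])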

Sample results:

The official repository is here:

It already has close to 10k stars, so it is clearly quite popular.

The official paper is here:

If you are interested, feel free to download and read it; my main goal here is to actually try the tool out.

Download the official project and unzip it locally:

The project ships a ready-to-use face restoration inference script, shown below:

import os
import cv2
import argparse
import glob
import torch
from torchvision.transforms.functional import normalize
from basicsr.utils import imwrite, img2tensor, tensor2img
from basicsr.utils.download_util import load_file_from_url
from basicsr.utils.misc import gpu_is_available, get_device
from facelib.utils.face_restoration_helper import FaceRestoreHelper
from facelib.utils.misc import is_gray
from basicsr.utils.registry import ARCH_REGISTRY

pretrain_model_url = {
    'restoration': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth',
}

def set_realesrgan():
    # uses the global `args` parsed in __main__
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from basicsr.utils.realesrgan_utils import RealESRGANer

    use_half = False
    if torch.cuda.is_available():  # set False in CPU/MPS mode
        no_half_gpu_list = ['1650', '1660']  # set False for GPUs that don't support f16
        if not any(gpu in torch.cuda.get_device_name(0) for gpu in no_half_gpu_list):
            use_half = True

    model = RRDBNet(
        num_in_ch=3,
        num_out_ch=3,
        num_feat=64,
        num_block=23,
        num_grow_ch=32,
        scale=2,
    )
    upsampler = RealESRGANer(
        scale=2,
        model_path="https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/RealESRGAN_x2plus.pth",
        model=model,
        tile=args.bg_tile,
        tile_pad=40,
        pre_pad=0,
        half=use_half
    )

    if not gpu_is_available():  # CPU
        import warnings
        warnings.warn('Running on CPU now! Make sure your PyTorch version matches your CUDA. '
                      'The unoptimized RealESRGAN is slow on CPU. '
                      'If you want to disable it, please remove `--bg_upsampler` and `--face_upsample` in command.',
                      category=RuntimeWarning)
    return upsampler


if __name__ == '__main__':
    # device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    device = get_device()
    parser = argparse.ArgumentParser()

    parser.add_argument('-i', '--input_path', type=str, default='./inputs/whole_imgs',
                        help='Input image, video or folder. Default: inputs/whole_imgs')
    parser.add_argument('-o', '--output_path', type=str, default=None,
                        help='Output folder. Default: results/<input_name>_<w>')
    parser.add_argument('-w', '--fidelity_weight', type=float, default=0.5,
                        help='Balance the quality and fidelity. Default: 0.5')
    parser.add_argument('-s', '--upscale', type=int, default=2,
                        help='The final upsampling scale of the image. Default: 2')
    parser.add_argument('--has_aligned', action='store_true', help='Input are cropped and aligned faces. Default: False')
    parser.add_argument('--only_center_face', action='store_true', help='Only restore the center face. Default: False')
    parser.add_argument('--draw_box', action='store_true', help='Draw the bounding box for the detected faces. Default: False')
    # large det_model: 'YOLOv5l', 'retinaface_resnet50'
    # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
    parser.add_argument('--detection_model', type=str, default='retinaface_resnet50',
                        help='Face detector. Optional: retinaface_resnet50, retinaface_mobile0.25, YOLOv5l, YOLOv5n, dlib. \
                              Default: retinaface_resnet50')
    parser.add_argument('--bg_upsampler', type=str, default='None', help='Background upsampler. Optional: realesrgan')
    parser.add_argument('--face_upsample', action='store_true', help='Face upsampler after enhancement. Default: False')
    parser.add_argument('--bg_tile', type=int, default=400, help='Tile size for background sampler. Default: 400')
    parser.add_argument('--suffix', type=str, default=None, help='Suffix of the restored faces. Default: None')
    parser.add_argument('--save_video_fps', type=float, default=None, help='Frame rate for saving video. Default: None')
    args = parser.parse_args()

    # ------------------------ input & output ------------------------
    w = args.fidelity_weight
    input_video = False
    if args.input_path.endswith(('jpg', 'jpeg', 'png', 'JPG', 'JPEG', 'PNG')):  # input single img path
        input_img_list = [args.input_path]
        result_root = f'results/test_img_{w}'
    elif args.input_path.endswith(('mp4', 'mov', 'avi', 'MP4', 'MOV', 'AVI')):  # input video path
        from basicsr.utils.video_util import VideoReader, VideoWriter
        input_img_list = []
        vidreader = VideoReader(args.input_path)
        image = vidreader.get_frame()
        while image is not None:
            input_img_list.append(image)
            image = vidreader.get_frame()
        audio = vidreader.get_audio()
        fps = vidreader.get_fps() if args.save_video_fps is None else args.save_video_fps
        video_name = os.path.basename(args.input_path)[:-4]
        result_root = f'results/{video_name}_{w}'
        input_video = True
        vidreader.close()
    else:  # input img folder
        if args.input_path.endswith('/'):  # solve when path ends with /
            args.input_path = args.input_path[:-1]
        # scan all the jpg and png images
        input_img_list = sorted(glob.glob(os.path.join(args.input_path, '*.[jpJP][pnPN]*[gG]')))
        result_root = f'results/{os.path.basename(args.input_path)}_{w}'

    if args.output_path is not None:  # set output path
        result_root = args.output_path

    test_img_num = len(input_img_list)
    if test_img_num == 0:
        raise FileNotFoundError('No input image/video is found...\n'
                                '\tNote that --input_path for video should end with .mp4|.mov|.avi')

    # ------------------ set up background upsampler ------------------
    if args.bg_upsampler == 'realesrgan':
        bg_upsampler = set_realesrgan()
    else:
        bg_upsampler = None

    # ------------------ set up face upsampler ------------------
    if args.face_upsample:
        if bg_upsampler is not None:
            face_upsampler = bg_upsampler
        else:
            face_upsampler = set_realesrgan()
    else:
        face_upsampler = None

    # ------------------ set up CodeFormer restorer -------------------
    net = ARCH_REGISTRY.get('CodeFormer')(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9,
                                          connect_list=['32', '64', '128', '256']).to(device)

    # ckpt_path = 'weights/CodeFormer/codeformer.pth'
    ckpt_path = load_file_from_url(url=pretrain_model_url['restoration'],
                                   model_dir='weights/CodeFormer', progress=True, file_name=None)
    checkpoint = torch.load(ckpt_path)['params_ema']
    net.load_state_dict(checkpoint)
    net.eval()

    # ------------------ set up FaceRestoreHelper -------------------
    # large det_model: 'YOLOv5l', 'retinaface_resnet50'
    # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
    if not args.has_aligned:
        print(f'Face detection model: {args.detection_model}')
    if bg_upsampler is not None:
        print(f'Background upsampling: True, Face upsampling: {args.face_upsample}')
    else:
        print(f'Background upsampling: False, Face upsampling: {args.face_upsample}')

    face_helper = FaceRestoreHelper(
        args.upscale,
        face_size=512,
        crop_ratio=(1, 1),
        det_model=args.detection_model,
        save_ext='png',
        use_parse=True,
        device=device)

    # -------------------- start processing ---------------------
    for i, img_path in enumerate(input_img_list):
        # clean all the intermediate results to process the next image
        face_helper.clean_all()

        if isinstance(img_path, str):
            img_name = os.path.basename(img_path)
            basename, ext = os.path.splitext(img_name)
            print(f'[{i+1}/{test_img_num}] Processing: {img_name}')
            img = cv2.imread(img_path, cv2.IMREAD_COLOR)
        else:  # for video processing
            basename = str(i).zfill(6)
            img_name = f'{video_name}_{basename}' if input_video else basename
            print(f'[{i+1}/{test_img_num}] Processing: {img_name}')
            img = img_path

        if args.has_aligned:
            # the input faces are already cropped and aligned
            img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)
            face_helper.is_gray = is_gray(img, threshold=10)
            if face_helper.is_gray:
                print('Grayscale input: True')
            face_helper.cropped_faces = [img]
        else:
            face_helper.read_image(img)
            # get face landmarks for each face
            num_det_faces = face_helper.get_face_landmarks_5(
                only_center_face=args.only_center_face, resize=640, eye_dist_threshold=5)
            print(f'\tdetect {num_det_faces} faces')
            # align and warp each face
            face_helper.align_warp_face()

        # face restoration for each cropped face
        for idx, cropped_face in enumerate(face_helper.cropped_faces):
            # prepare data
            cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
            normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
            cropped_face_t = cropped_face_t.unsqueeze(0).to(device)

            try:
                with torch.no_grad():
                    output = net(cropped_face_t, w=w, adain=True)[0]
                    restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
                del output
                torch.cuda.empty_cache()
            except Exception as error:
                print(f'\tFailed inference for CodeFormer: {error}')
                restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))

            restored_face = restored_face.astype('uint8')
            face_helper.add_restored_face(restored_face, cropped_face)

        # paste_back
        if not args.has_aligned:
            # upsample the background
            if bg_upsampler is not None:
                # Now only support RealESRGAN for upsampling background
                bg_img = bg_upsampler.enhance(img, outscale=args.upscale)[0]
            else:
                bg_img = None
            face_helper.get_inverse_affine(None)
            # paste each restored face to the input image
            if args.face_upsample and face_upsampler is not None:
                restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box, face_upsampler=face_upsampler)
            else:
                restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box)

        # save faces
        for idx, (cropped_face, restored_face) in enumerate(zip(face_helper.cropped_faces, face_helper.restored_faces)):
            # save cropped face
            if not args.has_aligned:
                save_crop_path = os.path.join(result_root, 'cropped_faces', f'{basename}_{idx:02d}.png')
                imwrite(cropped_face, save_crop_path)
            # save restored face
            if args.has_aligned:
                save_face_name = f'{basename}.png'
            else:
                save_face_name = f'{basename}_{idx:02d}.png'
            if args.suffix is not None:
                save_face_name = f'{save_face_name[:-4]}_{args.suffix}.png'
            save_restore_path = os.path.join(result_root, 'restored_faces', save_face_name)
            imwrite(restored_face, save_restore_path)

        # save restored img
        if not args.has_aligned and restored_img is not None:
            if args.suffix is not None:
                basename = f'{basename}_{args.suffix}'
            save_restore_path = os.path.join(result_root, 'final_results', f'{basename}.png')
            imwrite(restored_img, save_restore_path)

    # save enhanced video
    if input_video:
        print('Video Saving...')
        # load images
        video_frames = []
        img_list = sorted(glob.glob(os.path.join(result_root, 'final_results', '*.[jp][pn]g')))
        for img_path in img_list:
            img = cv2.imread(img_path)
            video_frames.append(img)
        # write images to video
        height, width = video_frames[0].shape[:2]
        if args.suffix is not None:
            video_name = f'{video_name}_{args.suffix}'
        save_restore_path = os.path.join(result_root, f'{video_name}.mp4')
        vidwriter = VideoWriter(save_restore_path, height, width, fps, audio)

        for f in video_frames:
            vidwriter.write_frame(f)
        vidwriter.close()

    print(f'\nAll results are saved in {result_root}')

test.jpg is a test image I downloaded from the web. Run the following command in the terminal:

python inference_codeformer.py -w 0.5 --has_aligned --input_path test.jpg
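The -w value balances quality against fidelity: smaller values tend to give sharper, higher-quality output, while larger values preserve more of the input's identity, so it is worth trying a few settings. For reference, here are a couple of other invocations built from the script's arguments above (paths are placeholders):

# whole photo: detect faces, restore them, paste back, and enhance the background with Real-ESRGAN
python inference_codeformer.py -w 0.7 --input_path ./inputs/whole_imgs --bg_upsampler realesrgan --face_upsample

# video input: restore every frame and re-encode to mp4
python inference_codeformer.py -w 0.5 --input_path input_video.mp4 --bg_upsampler realesrgan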

The original image:

The restored result:

The face looks a little distorted; I am not sure whether that is down to the input image itself.

Next, a grayscale image. The original:

The result:

It looks much, much sharper. I expected the output to be colorized as well, but it was not.

Next, let's see how well it repairs a face heavily scribbled over with a brush. Run:

python inference_inpainting.py --input_path test.jpg
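The inpainting script expects cropped and aligned 512x512 faces, and as far as I can tell it treats pure-white pixels as the damaged region to fill in. If you want to fabricate such a test input yourself, a quick sketch with OpenCV (file names are placeholders, and the white-pixel convention is my reading of the script, so verify against the bundled example inputs):

import cv2

img = cv2.imread('aligned_face.png')   # placeholder: a cropped, aligned face
img = cv2.resize(img, (512, 512))
# scribble pure-white strokes over the areas to be inpainted
cv2.line(img, (120, 150), (400, 180), (255, 255, 255), thickness=25)
cv2.line(img, (150, 350), (380, 300), (255, 255, 255), thickness=25)
cv2.imwrite('test.jpg', img)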

The original is shown below:

The result:

Image input:

Restored output:

The results look quite good.

Of course, CodeFormer can also colorize faces. Run this in the terminal:

python inference_colorization.py --input_path test.jpg
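As with inpainting, colorization appears to expect cropped and aligned face crops, and passing a folder should also work since the scripts accept folder paths. A hypothetical batch run (the folder name is a placeholder):

python inference_colorization.py --input_path ./inputs/gray_faces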

The before/after comparison is shown below:

And if you prefer not to work from the command line, the project also provides a Gradio-based app, shown below:

  1. """
  2. This file is used for deploying hugging face demo:
  3. https://huggingface.co/spaces/sczhou/CodeFormer
  4. """
  5. import sys
  6. sys.path.append('CodeFormer')
  7. import os
  8. import cv2
  9. import torch
  10. import torch.nn.functional as F
  11. import gradio as gr
  12. from torchvision.transforms.functional import normalize
  13. from basicsr.archs.rrdbnet_arch import RRDBNet
  14. from basicsr.utils import imwrite, img2tensor, tensor2img
  15. from basicsr.utils.download_util import load_file_from_url
  16. from basicsr.utils.misc import gpu_is_available, get_device
  17. from basicsr.utils.realesrgan_utils import RealESRGANer
  18. from basicsr.utils.registry import ARCH_REGISTRY
  19. from facelib.utils.face_restoration_helper import FaceRestoreHelper
  20. from facelib.utils.misc import is_gray
  21. os.system("pip freeze")
  22. pretrain_model_url = {
  23. 'codeformer': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth',
  24. 'detection': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth',
  25. 'parsing': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth',
  26. 'realesrgan': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/RealESRGAN_x2plus.pth'
  27. }
  28. # download weights
  29. if not os.path.exists('CodeFormer/weights/CodeFormer/codeformer.pth'):
  30. load_file_from_url(url=pretrain_model_url['codeformer'], model_dir='CodeFormer/weights/CodeFormer', progress=True, file_name=None)
  31. if not os.path.exists('CodeFormer/weights/facelib/detection_Resnet50_Final.pth'):
  32. load_file_from_url(url=pretrain_model_url['detection'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None)
  33. if not os.path.exists('CodeFormer/weights/facelib/parsing_parsenet.pth'):
  34. load_file_from_url(url=pretrain_model_url['parsing'], model_dir='CodeFormer/weights/facelib', progress=True, file_name=None)
  35. if not os.path.exists('CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth'):
  36. load_file_from_url(url=pretrain_model_url['realesrgan'], model_dir='CodeFormer/weights/realesrgan', progress=True, file_name=None)
  37. # download images
  38. torch.hub.download_url_to_file(
  39. 'https://replicate.com/api/models/sczhou/codeformer/files/fa3fe3d1-76b0-4ca8-ac0d-0a925cb0ff54/06.png',
  40. '01.png')
  41. torch.hub.download_url_to_file(
  42. 'https://replicate.com/api/models/sczhou/codeformer/files/a1daba8e-af14-4b00-86a4-69cec9619b53/04.jpg',
  43. '02.jpg')
  44. torch.hub.download_url_to_file(
  45. 'https://replicate.com/api/models/sczhou/codeformer/files/542d64f9-1712-4de7-85f7-3863009a7c3d/03.jpg',
  46. '03.jpg')
  47. torch.hub.download_url_to_file(
  48. 'https://replicate.com/api/models/sczhou/codeformer/files/a11098b0-a18a-4c02-a19a-9a7045d68426/010.jpg',
  49. '04.jpg')
  50. torch.hub.download_url_to_file(
  51. 'https://replicate.com/api/models/sczhou/codeformer/files/7cf19c2c-e0cf-4712-9af8-cf5bdbb8d0ee/012.jpg',
  52. '05.jpg')
  53. def imread(img_path):
  54. img = cv2.imread(img_path)
  55. img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
  56. return img
  57. # set enhancer with RealESRGAN
  58. def set_realesrgan():
  59. # half = True if torch.cuda.is_available() else False
  60. half = True if gpu_is_available() else False
  61. model = RRDBNet(
  62. num_in_ch=3,
  63. num_out_ch=3,
  64. num_feat=64,
  65. num_block=23,
  66. num_grow_ch=32,
  67. scale=2,
  68. )
  69. upsampler = RealESRGANer(
  70. scale=2,
  71. model_path="CodeFormer/weights/realesrgan/RealESRGAN_x2plus.pth",
  72. model=model,
  73. tile=400,
  74. tile_pad=40,
  75. pre_pad=0,
  76. half=half,
  77. )
  78. return upsampler
  79. upsampler = set_realesrgan()
  80. # device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
  81. device = get_device()
  82. codeformer_net = ARCH_REGISTRY.get("CodeFormer")(
  83. dim_embd=512,
  84. codebook_size=1024,
  85. n_head=8,
  86. n_layers=9,
  87. connect_list=["32", "64", "128", "256"],
  88. ).to(device)
  89. ckpt_path = "CodeFormer/weights/CodeFormer/codeformer.pth"
  90. checkpoint = torch.load(ckpt_path)["params_ema"]
  91. codeformer_net.load_state_dict(checkpoint)
  92. codeformer_net.eval()
  93. os.makedirs('output', exist_ok=True)
  94. def inference(image, background_enhance, face_upsample, upscale, codeformer_fidelity):
  95. """Run a single prediction on the model"""
  96. try: # global try
  97. # take the default setting for the demo
  98. has_aligned = False
  99. only_center_face = False
  100. draw_box = False
  101. detection_model = "retinaface_resnet50"
  102. print('Inp:', image, background_enhance, face_upsample, upscale, codeformer_fidelity)
  103. img = cv2.imread(str(image), cv2.IMREAD_COLOR)
  104. print('\timage size:', img.shape)
  105. upscale = int(upscale) # convert type to int
  106. if upscale > 4: # avoid memory exceeded due to too large upscale
  107. upscale = 4
  108. if upscale > 2 and max(img.shape[:2])>1000: # avoid memory exceeded due to too large img resolution
  109. upscale = 2
  110. if max(img.shape[:2]) > 1500: # avoid memory exceeded due to too large img resolution
  111. upscale = 1
  112. background_enhance = False
  113. face_upsample = False
  114. face_helper = FaceRestoreHelper(
  115. upscale,
  116. face_size=512,
  117. crop_ratio=(1, 1),
  118. det_model=detection_model,
  119. save_ext="png",
  120. use_parse=True,
  121. device=device,
  122. )
  123. bg_upsampler = upsampler if background_enhance else None
  124. face_upsampler = upsampler if face_upsample else None
  125. if has_aligned:
  126. # the input faces are already cropped and aligned
  127. img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)
  128. face_helper.is_gray = is_gray(img, threshold=5)
  129. if face_helper.is_gray:
  130. print('\tgrayscale input: True')
  131. face_helper.cropped_faces = [img]
  132. else:
  133. face_helper.read_image(img)
  134. # get face landmarks for each face
  135. num_det_faces = face_helper.get_face_landmarks_5(
  136. only_center_face=only_center_face, resize=640, eye_dist_threshold=5
  137. )
  138. print(f'\tdetect {num_det_faces} faces')
  139. # align and warp each face
  140. face_helper.align_warp_face()
  141. # face restoration for each cropped face
  142. for idx, cropped_face in enumerate(face_helper.cropped_faces):
  143. # prepare data
  144. cropped_face_t = img2tensor(
  145. cropped_face / 255.0, bgr2rgb=True, float32=True
  146. )
  147. normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
  148. cropped_face_t = cropped_face_t.unsqueeze(0).to(device)
  149. try:
  150. with torch.no_grad():
  151. output = codeformer_net(
  152. cropped_face_t, w=codeformer_fidelity, adain=True
  153. )[0]
  154. restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
  155. del output
  156. torch.cuda.empty_cache()
  157. except RuntimeError as error:
  158. print(f"Failed inference for CodeFormer: {error}")
  159. restored_face = tensor2img(
  160. cropped_face_t, rgb2bgr=True, min_max=(-1, 1)
  161. )
  162. restored_face = restored_face.astype("uint8")
  163. face_helper.add_restored_face(restored_face)
  164. # paste_back
  165. if not has_aligned:
  166. # upsample the background
  167. if bg_upsampler is not None:
  168. # Now only support RealESRGAN for upsampling background
  169. bg_img = bg_upsampler.enhance(img, outscale=upscale)[0]
  170. else:
  171. bg_img = None
  172. face_helper.get_inverse_affine(None)
  173. # paste each restored face to the input image
  174. if face_upsample and face_upsampler is not None:
  175. restored_img = face_helper.paste_faces_to_input_image(
  176. upsample_img=bg_img,
  177. draw_box=draw_box,
  178. face_upsampler=face_upsampler,
  179. )
  180. else:
  181. restored_img = face_helper.paste_faces_to_input_image(
  182. upsample_img=bg_img, draw_box=draw_box
  183. )
  184. # save restored img
  185. save_path = f'output/out.png'
  186. imwrite(restored_img, str(save_path))
  187. restored_img = cv2.cvtColor(restored_img, cv2.COLOR_BGR2RGB)
  188. return restored_img, save_path
  189. except Exception as error:
  190. print('Global exception', error)
  191. return None, None
  192. title = "CodeFormer: Robust Face Restoration and Enhancement Network"
  193. description = r"""<center><img src='https://user-images.githubusercontent.com/14334509/189166076-94bb2cac-4f4e-40fb-a69f-66709e3d98f5.png' alt='CodeFormer logo'></center>
  194. <b>Official Gradio demo</b> for <a href='https://github.com/sczhou/CodeFormer' target='_blank'><b>Towards Robust Blind Face Restoration with Codebook Lookup Transformer (NeurIPS 2022)</b></a>.<br>
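To actually serve the app, the inference function still has to be bound to a Gradio interface and launched. A minimal sketch of that wiring, assuming the `inference` signature above and the current gr.Interface API; the official demo's exact component layout and labels may differ:

demo = gr.Interface(
    fn=inference,
    inputs=[
        gr.Image(type="filepath", label="Input"),
        gr.Checkbox(value=True, label="Background_Enhance"),
        gr.Checkbox(value=True, label="Face_Upsample"),
        gr.Number(value=2, label="Rescaling_Factor (up to 4)"),
        gr.Slider(0, 1, value=0.5, step=0.01, label="Codeformer_Fidelity"),
    ],
    outputs=[
        gr.Image(type="numpy", label="Output"),
        gr.File(label="Download the output"),
    ],
    title=title,
    description=description,
)
demo.queue().launch()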