
How to implement image search: text-to-image and image-to-image retrieval with CLIP + a faiss vector database


This is the era of AIGC: large GPT models generate text, multimodal models handle text and images together, and new models such as Stable Diffusion and Stable Video Diffusion generate images and video, one after another. So how do you produce a richly illustrated article? How do you insert an image into the right paragraph, and where do the images come from?

Images can either be retrieved by search or generated with Stable Diffusion.

Today we will look at the search approach, which is more efficient, saves compute, and is easier to put into production.

For details on the CLIP model, see this Zhihu article:

https://zhuanlan.zhihu.com/p/511460120

or the paper: https://arxiv.org/pdf/2103.00020.pdf

What is the faiss database?

Faiss (Facebook AI Similarity Search) is a library developed by Facebook's AI team for large-scale similarity search. It is written in C++ with Python bindings and can achieve millisecond-level retrieval on indexes holding on the order of a billion vectors.

Simply put, Faiss wraps your set of candidate vectors into an index and accelerates retrieval of the top-K most similar vectors; some index types can even be built on the GPU, which makes them faster still.

https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/
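As a minimal, self-contained sketch of how such an index is used (the random data and k here are illustrative assumptions, unrelated to this article's image library):

import numpy as np
import faiss

d = 512                                           # vector dimension
xb = np.random.rand(10000, d).astype("float32")   # candidate vectors (the "library")
xq = np.random.rand(5, d).astype("float32")       # query vectors

index = faiss.IndexFlatL2(d)   # exact search with L2 distance
index.add(xb)                  # add candidates to the index
D, I = index.search(xq, 3)     # retrieve top-3 neighbors: distances D, ids I
print(I.shape, D.shape)        # (5, 3) (5, 3)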

1. Download the CLIP model from Hugging Face. The default is the English version; there is also a Chinese version. The English version tends to work somewhat better.

English version (save it as clip_model.py if you want the later scripts to import from it):

from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# image = Image.open(requests.get(url, stream=True).raw)
# inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
# image_features = model.get_image_features(inputs["pixel_values"])
# text_features = model.get_text_features(inputs["input_ids"], inputs["attention_mask"])
# outputs = model(**inputs)
# logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
# probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
# print(probs)

Chinese version (save it as chinese_clip.py; steps 3 and 4 import model and processor from it):

from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel
import torch

device = torch.device("mps")  # Apple-silicon GPU; defined here but not used below (move the model with model.to(device) if desired)
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

# url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
# image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
# texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
# # compute image features
# inputs = processor(images=image, return_tensors="pt")
# image_features = model.get_image_features(**inputs)
# image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # normalize
# # compute text features
# inputs = processor(text=texts, padding=True, return_tensors="pt")
# text_features = model.get_text_features(**inputs)
# text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # normalize
# # compute image-text similarity scores
# inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
# outputs = model(**inputs)
# logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
# probs = logits_per_image.softmax(dim=1)  # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]

2. Crawl some images to build an image library; all searches will be run against this library. The images you collect should match your business scenario: if you want to retrieve animal pictures, for example, it is enough to crawl mostly animal images. Here are some images I downloaded at random; a short download sketch follows below.
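A minimal sketch of populating that library, assuming you already have a list of image URLs (the urls list and the "image" folder name are placeholders, not part of the original article):

import os
import requests

os.makedirs("image", exist_ok=True)
urls = [
    # hypothetical image URLs go here, e.g. "https://example.com/panda.jpg"
]
for i, url in enumerate(urls):
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # write the raw bytes into the library folder
    with open(os.path.join("image", f"{i}.jpg"), "wb") as f:
        f.write(resp.content)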

3. Map the images to vectors and store them in the faiss vector database.

# from clip_model import model, processor
import faiss
from PIL import Image
import os
import json
from chinese_clip import model, processor
from tqdm import tqdm

d = 512
index = faiss.IndexFlatL2(d)  # use L2 distance

# folder containing the image library
# folder_path = '/Users/smzdm/Downloads/Animals_with_Attributes2 2/JPEGImages'
folder_path = "image"

# walk the folder and collect image files
file_paths = []
for root, dirs, files in os.walk(folder_path):
    for file in files:
        # check whether the file is an image (simple extension check)
        if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif')):
            file_path = os.path.join(root, file)
            file_paths.append(file_path)

id2filename = {idx: x for idx, x in enumerate(file_paths)}
# save the id -> filename mapping as a JSON file
with open('id2filename.json', 'w') as json_file:
    json.dump(id2filename, json_file)

for file_path in tqdm(file_paths, total=len(file_paths)):
    # open the image with PIL
    image = Image.open(file_path)
    inputs = processor(images=image, return_tensors="pt", padding=True)
    image_features = model.get_image_features(inputs["pixel_values"])
    image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # normalize
    image_features = image_features.detach().numpy()
    index.add(image_features)
    # close the image to free resources
    image.close()

faiss.write_index(index, "image.faiss")
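One note on the distance metric: because the features are L2-normalized before being added, ranking by L2 distance is equivalent to ranking by cosine similarity, so IndexFlatL2 works fine here. If you would rather read the scores directly as cosine similarities, an inner-product index is an equivalent alternative (a sketch under that assumption, not what the steps above use):

import faiss

d = 512
index = faiss.IndexFlatIP(d)   # inner product; equals cosine similarity on unit-normalized vectors
# index.add(image_features)    # add the same normalized features as above
# D, I = index.search(query_features, k)   # D is now a similarity score (higher is better)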

4. Load the mapping file and the index file, then search the library with text or with an image.

# from clip_model import model, processor
import faiss
from PIL import Image
import os
import json
from chinese_clip import model, processor

d = 512
index = faiss.IndexFlatL2(d)  # placeholder; replaced below by the index loaded from disk

# load the id -> filename mapping saved in step 3
with open('id2filename.json', 'r') as json_file:
    id2filename = json.load(json_file)
# load the faiss index built in step 3
index = faiss.read_index("image.faiss")

def text_search(text, k=1):
    inputs = processor(text=text, images=None, return_tensors="pt", padding=True)
    text_features = model.get_text_features(inputs["input_ids"], inputs["attention_mask"])
    text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)  # normalize
    text_features = text_features.detach().numpy()
    D, I = index.search(text_features, k)  # run the query
    filenames = [[id2filename[str(j)] for j in i] for i in I]
    return text, D, filenames

def image_search(img_path, k=1):
    image = Image.open(img_path)
    inputs = processor(images=image, return_tensors="pt")
    image_features = model.get_image_features(**inputs)
    image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)  # normalize
    image_features = image_features.detach().numpy()
    D, I = index.search(image_features, k)  # run the query
    filenames = [[id2filename[str(j)] for j in i] for i in I]
    return img_path, D, filenames

if __name__ == "__main__":
    text = ["雪山", "熊猫", "长城", "苹果"]
    text, D, filenames = text_search(text)
    print(text, D, filenames)
    # img_path = "image/apple2.jpeg"
    # img_path, D, filenames = image_search(img_path, k=2)
    # print(img_path, D, filenames)

For example, searching with the text queries

["雪山","熊猫","长城","苹果"] (snow mountain, panda, Great Wall, apple) returns:

['雪山', '熊猫', '长城', '苹果'] [[1.2182312] [1.1529984] [1.1177421] [1.1656866]] [['image/OIP (10).jpeg'], ['image/OIP.jpeg'], ['image/OIP (8).jpeg'], ['image/apple2.jpeg']]
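To interpret those numbers: IndexFlatL2 returns squared L2 distances, and since both feature vectors are unit-normalized, a squared distance D converts to cosine similarity as cos = 1 - D/2 (a distance of 1.12, for instance, corresponds to a cosine similarity of roughly 0.44, a typical range for CLIP text-to-image matches). As a one-liner on the D array returned by index.search:

cos_sim = 1 - D / 2  # valid only because the features were L2-normalized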



You can also search with an image instead of text: uncomment the image_search lines at the bottom of the script above.

The returned result:

image/apple2.jpeg [[0.         0.11877532]] [['image/apple2.jpeg', 'image/OIP (14).jpeg']]

The first hit is the query image itself (distance 0, an exact match); the second, as you can see, is also an apple.
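Since the query image is itself part of the library, the top hit is always this exact self-match. A simple way to hide it is to ask for one extra neighbor and drop the query's own path, as in this sketch built on the image_search function above:

img_path, D, filenames = image_search("image/apple2.jpeg", k=3)  # request one more than needed
results = [(f, d) for f, d in zip(filenames[0], D[0]) if f != img_path]  # drop the self-match
print(results)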

 