This is a third-year undergraduate computer vision course assignment, written by a beginner; there is still plenty of room for improvement~
ResNet18 is a classic deep convolutional neural network for image classification. It was proposed by researchers at Microsoft Research Asia, and the ResNet family won the 2015 ImageNet classification challenge. The "18" in the name refers to the 18 weighted layers in the network (17 convolutional layers plus one fully connected layer).
The basic structure of ResNet18:
Input layer: accepts 224x224 RGB images.
Initial convolution: a 7x7 convolution (stride 2) with ReLU, followed by 3x3 max pooling, which extracts low-level features and reduces spatial resolution.
Residual blocks: 8 residual blocks in total, each consisting of two 3x3 convolutional layers plus a skip connection. The skip connections alleviate the vanishing/exploding-gradient problems of deep networks, making them easier to train and optimize.
Global average pooling: pools each feature map down to a single value, turning the feature maps into a one-dimensional vector.
Fully connected layer: a 1000-unit fully connected layer producing the final classification logits.
Output layer: a softmax over the logits gives a probability distribution over the 1000 ImageNet classes.
In short, by introducing residual blocks, ResNet18 largely solves the gradient problems of deep convolutional networks, yielding better trainability and classification accuracy.
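The residual block described above can be sketched directly in PyTorch. This is a simplified version of the basic block (identity skip only, no downsampling branch), written for illustration rather than taken from torchvision's implementation:

```python
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Simplified ResNet basic block: two 3x3 convs plus an identity skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                               # the skip connection keeps the input
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                       # residual addition
        return self.relu(out)


block = BasicBlock(64)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)  # torch.Size([1, 64, 56, 56])
```

Because the block only ever adds the input back to the output, the gradient always has a direct path through the skip connection, which is what makes deep stacks of these blocks trainable.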
Dataset link:
The peanut dataset is the full dataset; it contains three folders: test (test set), train (training set), and validation (validation set).
Each folder contains one subfolder per class name:
test set:
train set:
validation set:
```python
import os

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Data augmentation
image_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),  # random crop to 256x256
        transforms.RandomRotation(degrees=15),                     # random rotation
        transforms.RandomHorizontalFlip(p=0.5),                    # horizontal flip with probability 0.5
        transforms.CenterCrop(size=224),                           # center crop to 224x224, ResNet's expected input size
        transforms.ToTensor(),                                     # convert to tensor, scaled to [0, 1]
        transforms.Normalize([0.485, 0.456, 0.406],                # normalize with ImageNet mean/std
                             [0.229, 0.224, 0.225])
    ]),
    'validation': transforms.Compose([
        transforms.Resize(size=256),       # resize to 256
        transforms.CenterCrop(size=224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225])
    ]),
    'test': transforms.Compose([
        transforms.Resize(size=256),       # resize to 256
        transforms.CenterCrop(size=224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225])
    ])
}

dataset = 'peanut_data'
train_directory = os.path.join(dataset, 'train')       # training set path
valid_directory = os.path.join(dataset, 'validation')  # validation set path
test_directory = os.path.join(dataset, 'test')         # test set path

batch_size = 32  # batch size

data = {
    'train': datasets.ImageFolder(root=train_directory, transform=image_transforms['train']),
    'validation': datasets.ImageFolder(root=valid_directory, transform=image_transforms['validation']),
    'test': datasets.ImageFolder(root=test_directory, transform=image_transforms['test'])
}  # keep the datasets in a dict so they can be fetched by key

train_data_size = len(data['train'])       # training set size
valid_data_size = len(data['validation'])  # validation set size
test_data_size = len(data['test'])         # test set size

# DataLoader(dataset, batch_size, shuffle); only the training set needs shuffling
train_data = DataLoader(data['train'], batch_size=batch_size, shuffle=True)
valid_data = DataLoader(data['validation'], batch_size=batch_size, shuffle=False)
test_data = DataLoader(data['test'], batch_size=batch_size, shuffle=False)

print("Training set size: {}, validation set size: {}, test set size: {}".format(
    train_data_size, valid_data_size, test_data_size))
```
Because my task has 8 classes, nn.Linear(256, 8) ends with 8; change this to match your own number of classes.
For example, with three classes it would be nn.Linear(256, 3).
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import models

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

resnet18 = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # load ImageNet-pretrained weights

for param in resnet18.parameters():
    # The pretrained backbone parameters are already trained, so freeze them by
    # setting requires_grad to False; only the new head below will be trained.
    param.requires_grad = False

fc_inputs = resnet18.fc.in_features

resnet18.fc = nn.Sequential(
    nn.Linear(fc_inputs, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, 8),
    nn.LogSoftmax(dim=1)
)

resnet18 = resnet18.to(device)

# Define the loss function and optimizer.
# NLLLoss pairs with the LogSoftmax output above.
loss_func = nn.NLLLoss()
optimizer = optim.Adam(resnet18.parameters(), lr=0.01, betas=(0.9, 0.999))
```
```python
import datetime
import time

from tqdm import tqdm


def train_and_valid(model, loss_function, optimizer, epochs=25):
    model.to(device)
    history = []
    best_acc = 0.0
    best_epoch = 0
    print('loading……')
    for epoch in range(epochs):
        epoch_start = time.time()  # record the start time of each epoch
        print("Epoch: {}/{}".format(epoch + 1, epochs), '\n')
        model.train()  # enable Batch Normalization updates and Dropout (randomly drops neurons)

        train_loss = 0.0
        train_acc = 0.0
        valid_loss = 0.0
        valid_acc = 0.0

        for i, (inputs, labels) in enumerate(tqdm(train_data)):  # training data
            inputs = inputs.to(device)
            labels = labels.to(device)

            # Gradients accumulate, so zero them before every step
            optimizer.zero_grad()

            outputs = model(inputs)

            loss = loss_function(outputs, labels)

            loss.backward()

            optimizer.step()

            train_loss += loss.item() * inputs.size(0)

            ret, predictions = torch.max(outputs.data, 1)
            correct_counts = predictions.eq(labels.data.view_as(predictions))

            acc = torch.mean(correct_counts.type(torch.FloatTensor))

            train_acc += acc.item() * inputs.size(0)

        with torch.no_grad():  # disable gradient computation during validation
            model.eval()  # switch Dropout and BatchNorm layers to evaluation mode

            for j, (inputs, labels) in enumerate(tqdm(valid_data)):  # validation data
                inputs = inputs.to(device)  # fetch inputs and labels from valid_data
                labels = labels.to(device)

                outputs = model(inputs)  # model output

                loss = loss_function(outputs, labels)  # compute the loss

                valid_loss += loss.item() * inputs.size(0)

                # torch.max returns (values, indices); for classification we only need
                # the second tensor, the index of the highest-scoring class.
                # dim=1 takes the maximum over each row (each sample in the batch).
                ret, predictions = torch.max(outputs.data, 1)
                correct_counts = predictions.eq(labels.data.view_as(predictions))

                acc = torch.mean(correct_counts.type(torch.FloatTensor))

                valid_acc += acc.item() * inputs.size(0)

        avg_train_loss = train_loss / train_data_size
        avg_train_acc = train_acc / train_data_size

        avg_valid_loss = valid_loss / valid_data_size
        avg_valid_acc = valid_acc / valid_data_size
        history.append([avg_train_loss, avg_valid_loss, avg_train_acc, avg_valid_acc])

        if best_acc < avg_valid_acc:
            best_acc = avg_valid_acc
            best_epoch = epoch + 1
            os.makedirs('result/weight', exist_ok=True)
            torch.save(model.state_dict(), 'result/weight/dataset' + '_best_' + '.pt')

        epoch_end = time.time()
        date_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        file_path = "log/train.txt"
        os.makedirs(os.path.dirname(file_path), exist_ok=True)
        with open(file_path, 'a+') as f:
            f.write(date_time + '\n')
            f.write("Epoch: {}/{}".format(epoch + 1, epochs) + '\n')
            f.write("Epoch: {:03d}, Training: Loss: {:.4f},"
                    " Accuracy: {:.4f}%, \n\t\tValidation: Loss: {:.4f}, Accuracy: {:.4f}%, Time: {:.4f}s".format(
                epoch + 1, avg_train_loss, avg_train_acc * 100, avg_valid_loss, avg_valid_acc * 100,
                epoch_end - epoch_start
            ) + '\n')
            f.write("Best Accuracy for validation : {:.4f} at epoch {:03d}".format(best_acc, best_epoch) + '\n')
            f.write('-------------------------------------------------------------------------------------------' + '\n')
    return model, history
```
```python
def test(model, loss_function):
    date_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    model.load_state_dict(torch.load('result/weight/dataset_best_.pt'))
    test_loss = 0.0
    test_acc = 0.0
    test_start = time.time()
    with torch.no_grad():  # disable gradient computation during testing
        model.eval()  # switch Dropout and BatchNorm layers to evaluation mode
        for j, (inputs, labels) in enumerate(test_data):  # test data
            inputs = inputs.to(device)  # fetch inputs and labels from test_data
            labels = labels.to(device)

            outputs = model(inputs)  # model output

            loss = loss_function(outputs, labels)  # compute the loss

            test_loss += loss.item() * inputs.size(0)

            ret, predictions = torch.max(outputs.data, 1)  # index of the predicted class

            correct_counts = predictions.eq(labels.data.view_as(predictions))

            acc = torch.mean(correct_counts.type(torch.FloatTensor))

            test_acc += acc.item() * inputs.size(0)

    avg_test_loss = test_loss / test_data_size
    avg_test_acc = test_acc / test_data_size
    test_end = time.time()
    file_path = "log/test.txt"
    os.makedirs(os.path.dirname(file_path), exist_ok=True)
    with open(file_path, 'a+') as f:
        f.write(date_time + '\n')
        f.write("test: Loss: {:.4f}, Accuracy: {:.4f}%, Time: {:.4f}s".format(
            avg_test_loss, avg_test_acc * 100, test_end - test_start) + '\n')
        f.write('-------------------------------------------------------------' + '\n')
```
```python
import matplotlib.pyplot as plt
import numpy as np


def drew_image(history):
    # Plot and save the loss curve
    plt.plot(history[:, 0:2])
    plt.legend(['Tr Loss', 'Val Loss'])
    plt.xlabel('Epoch Number')
    plt.ylabel('Loss')
    plt.ylim(0, 2)
    plt.savefig("result/pic/" + dataset + '_loss_curve.png')
    plt.close()

    # Plot and save the accuracy curve
    plt.plot(history[:, 2:4])
    plt.legend(['Tr Accuracy', 'Val Accuracy'])
    plt.xlabel('Epoch Number')
    plt.ylabel('Accuracy')
    plt.ylim(0, 1)
    plt.savefig("result/pic/" + dataset + '_accuracy_curve.png')
    plt.close()
```
```python
if __name__ == '__main__':
    num_epochs = 10

    print('start training……' + '\n')
    trained_model, history = train_and_valid(resnet18, loss_func, optimizer, num_epochs)
    print('train is over!' + '\n')

    test(resnet18, loss_func)

    print('test is over!' + '\n')
    history = np.array(history)
    os.makedirs("result/pic", exist_ok=True)
    drew_image(history)
```
```python
from PIL import Image, ImageDraw

path = 'test_pic'
classes = data['test'].classes
print(classes)
min_size = 30   # minimum contour perimeter kept during detection
max_size = 400  # maximum contour perimeter kept during detection
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
```
```python
def predict_image(folder_path):
    os.makedirs("result/exp", exist_ok=True)
    resnet18.eval()
    for filename in os.listdir(folder_path):
        if filename.endswith(".jpg"):
            img_path = os.path.join(folder_path, filename)
            img = Image.open(img_path).convert('RGB')

            img_p = transform(img).unsqueeze(0).to(device)
            output = resnet18(img_p)
            pred = output.argmax(dim=1).item()
            p = 100 * nn.Softmax(dim=1)(output).detach().cpu().numpy()[0]
            print(filename + ' predicted class:', classes[pred])

            # Print the probability of every class (8 classes in my case)
            print(', '.join('P({}) = {:.2f}%'.format(c, prob) for c, prob in zip(classes, p)))

            # Draw the predicted class and probability on the image, then save it
            img_draw = ImageDraw.Draw(img)
            img_draw.text(xy=(0, 0), text='Predicted: {}'.format(classes[pred]), fill=(255, 0, 0))
            img_draw.text(xy=(0, 20), text='Probability: {:.2f}%'.format(p[pred]), fill=(255, 0, 0))
            img.save("result/exp/" + filename)
```
```python
import cv2


def delet_contours(contours, delete_list):
    # Delete contours by index; delta compensates for the left shift caused by
    # each earlier deletion (delete_list must be in ascending order).
    delta = 0
    for i in range(len(delete_list)):
        del contours[delete_list[i] - delta]
        delta = delta + 1
    return contours


def spilt_detect():
    folder_path = os.listdir(path)
    result_path = "result/exp"
    os.makedirs(result_path, exist_ok=True)
    resnet18.eval()
    for image in tqdm(folder_path):
        img = cv2.imread(os.path.join(path, image))
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

        # HSV range treated as background (the blue backdrop)
        lower_blue = np.array([100, 100, 8])
        upper_blue = np.array([255, 255, 255])

        mask = cv2.inRange(hsv, lower_blue, upper_blue)

        # Keep only the foreground; pixels inside the mask become black
        result = cv2.bitwise_and(img, img, mask=~mask)

        # Binarize the foreground, invert it, convert to grayscale, and
        # re-threshold to get a clean binary map for contour detection
        _, binary_image = cv2.threshold(result, 1, 255, cv2.THRESH_BINARY)
        inverted_image = cv2.bitwise_not(binary_image)

        inverted_image = cv2.cvtColor(inverted_image, cv2.COLOR_BGR2GRAY)
        _, binary_image = cv2.threshold(inverted_image, 127, 255, cv2.THRESH_BINARY)

        contours, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
        contours = list(contours)

        # Filter contours by perimeter
        delete_list = []
        for index in range(len(contours)):
            perimeter = cv2.arcLength(contours[index], True)
            if perimeter < min_size or perimeter > max_size:
                delete_list.append(index)
        contours = delet_contours(contours, delete_list)

        for i in range(len(contours)):
            x, y, w, h = cv2.boundingRect(contours[i])
            img_pred = img[y:y + h, x:x + w, :]
            # OpenCV images are BGR; convert to RGB before building a PIL image
            img_pred = Image.fromarray(cv2.cvtColor(img_pred, cv2.COLOR_BGR2RGB))
            img_pred = transform(img_pred)
            img_pred = torch.unsqueeze(img_pred, dim=0).to(device)
            pred = torch.argmax(resnet18(img_pred), dim=1)
            preds = classes[int(pred)]
            cv2.putText(img, preds, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 1, cv2.LINE_AA)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
        # Save the annotated result image
        save_path = os.path.join(result_path, image)
        cv2.imwrite(save_path, img)
```
The delet_contours function removes the contours at the given indices from the contour list. It takes two arguments: contours, the list of all contours, and delete_list, the indices of the contours to delete.
The function maintains a delta counter while iterating over delete_list and deletes the corresponding contour from contours. After each deletion, the indices of later elements shift down by the number of elements already removed; delta compensates for that shift.
Finally, the function returns the updated contour list.
In one sentence: it filters out and deletes the unwanted contours.
The spilt_detect function runs segmentation-based detection on the images in the given directory, draws a bounding box and label around each detected object, and saves the result image.
The main steps are: grayscale conversion, binarization, contour detection, filtering contours against the configured minimum and maximum perimeter (deleting those outside the range), then classifying each surviving contour and drawing its box and label on the image.
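The index-shift logic of delet_contours can be checked with a plain-Python example (ordinary strings stand in for OpenCV contours):

```python
def delet_contours(contours, delete_list):
    # Each deletion shifts later elements one position left, so offset the
    # remaining indices by the number of elements already removed.
    delta = 0
    for i in range(len(delete_list)):
        del contours[delete_list[i] - delta]
        delta = delta + 1
    return contours


contours = ['c0', 'c1', 'c2', 'c3', 'c4']
print(delet_contours(contours, [1, 3]))  # ['c0', 'c2', 'c4']
```

Without the delta offset, deleting index 1 first would shift 'c3' to index 2, and the second deletion would remove the wrong element.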
```python
if __name__ == '__main__':
    folder_path = r'E:\document\3_Third_year_of_college\Computer_Vision\Experiment_13\test_pic'  # image folder path
    spilt_detect()
    # predict_image(folder_path)
```
Change folder_path here to the directory of images you want to detect or classify.
Use spilt_detect for object detection: it labels multiple peanuts in one image at once.
Use predict_image for image classification: it only supports labeling and classifying one peanut per image.
|—log (training and test logs)
|——train.txt
|——test.txt
|—peanut_data
|——test
|——train
|——validation
|—result (run outputs)
|——weight (trained model weights go here)
|——pic (accuracy and loss curves are saved here)
|——exp (prediction results go here)
|—test_pic (put the images you want to classify/detect here)
|—load_dataset.py
|—predict.py
|—train.py