I mainly experimented with pp_liteseg.
The approach was as follows (this is the config later referenced as configs/pp_liteseg/gao.yml):
batch_size: 6  # total: 4*6
iters: 10000

train_dataset:
  type: Dataset
  dataset_root: ./paddleseg/datasets/custom_dataset
  num_classes: 11
  mode: train
  train_path: ./paddleseg/datasets/custom_dataset/train.txt
  transforms:
    # - type: ResizeStepScaling
    #   min_scale_factor: 0.9
    #   max_scale_factor: 1.1
    #   scale_step_size: 0.25
    # - type: RandomPaddingCrop
    #   crop_size: [960, 720]
    # - type: RandomHorizontalFlip
    # - type: RandomDistort
    #   brightness_range: 0.5
    #   contrast_range: 0.5
    #   saturation_range: 0.5
    # - type: Normalize
    - type: Resize            # resize is required before feeding images into the network
      target_size: [512, 512] # resize the original image to 512*512 before it goes into the network

val_dataset:
  type: Dataset
  dataset_root: ./paddleseg/datasets/custom_dataset
  num_classes: 11
  mode: val
  val_path: ./paddleseg/datasets/custom_dataset/val.txt
  transforms:
    - type: Resize            # resize is required before feeding images into the network
      target_size: [512, 512] # resize the original image to 512*512 before it goes into the network

optimizer:
  type: sgd
  momentum: 0.9
  weight_decay: 5.0e-4

lr_scheduler:
  type: PolynomialDecay
  learning_rate: 0.01
  end_lr: 0
  power: 0.9
  warmup_iters: 200
  warmup_start_lr: 1.0e-5

loss:
  types:
    - type: OhemCrossEntropyLoss
      min_kept: 250000  # batch_size * 960 * 720 // 16
    - type: OhemCrossEntropyLoss
      min_kept: 250000
    - type: OhemCrossEntropyLoss
      min_kept: 250000
  coef: [1, 1, 1]

model:
  type: PPLiteSeg
  backbone:
    type: STDC1
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet1.tar.gz
  arm_out_chs: [32, 64, 128]
  seg_head_inter_chs: [32, 64, 64]
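A note on the two file lists the config points at: PaddleSeg's generic Dataset reader expects each line of train.txt / val.txt to hold an image path and the matching label path, relative to dataset_root and separated by a space. A minimal sketch of what a line is assumed to look like (the second file name is made up for illustration, and I assume the masks are .png, PaddleSeg's usual label format):

images/aWriteRectangle2ByWireEnd000.jpg labels/aWriteRectangle2ByWireEnd000.png
images/aWriteRectangle2ByWireEnd001.jpg labels/aWriteRectangle2ByWireEnd001.png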
How was this data prepared?
If your original images are a pile of BMP files and you want to turn them into JPGs (JPG is required here), the code below does the conversion:
# coding:utf-8
import os
from PIL import Image


# Convert every .bmp in a folder to .jpg
def bmpToJpg(file_path):
    for fileName in os.listdir(file_path):
        if not fileName.lower().endswith(".bmp"):
            continue
        newFileName = os.path.splitext(fileName)[0] + ".jpg"
        print(newFileName)
        im = Image.open(os.path.join(file_path, fileName))
        im.save(os.path.join(file_path, newFileName))


# Delete the original bitmaps (Windows only)
def deleteImages(file_path, imageFormat):
    command = "del " + file_path + "\\*." + imageFormat
    os.system(command)


def main():
    file_path = "D:\\BaiduNetdiskDownload\\PaddleSeg-release-2.8\\paddleseg\\datasets\\custom_dataset\\bmp"
    bmpToJpg(file_path)
    # deleteImages(file_path, "bmp")


if __name__ == '__main__':
    main()
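Before training it is also worth checking that every annotation mask is a single-channel image whose pixel values are class indices 0-10 (the config has num_classes: 11; 255 is PaddleSeg's usual ignore value). A minimal sanity-check sketch, assuming numpy and Pillow are installed; the labels path is hypothetical and just mirrors the dataset_root used above:

# coding:utf-8
import os
import numpy as np
from PIL import Image

# Hypothetical path, mirroring the dataset_root in gao.yml.
labels_dir = r"D:\BaiduNetdiskDownload\PaddleSeg-release-2.8\paddleseg\datasets\custom_dataset\labels"
num_classes = 11  # matches num_classes in the config

for name in sorted(os.listdir(labels_dir)):
    if not name.lower().endswith(".png"):
        continue
    mask = np.array(Image.open(os.path.join(labels_dir, name)))
    if mask.ndim != 2:
        print(name, "is not a single-channel mask, shape:", mask.shape)
        continue
    # Values must be class indices 0..num_classes-1; 255 is the common ignore value.
    bad = np.unique(mask[(mask >= num_classes) & (mask != 255)])
    if bad.size:
        print(name, "contains out-of-range values:", bad)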
python tools/data/split_dataset_list.py D:/BaiduNetdiskDownload/PaddleSeg-release-2.8/paddleseg/datasets/custom_dataset/ images labels

python tools/train.py --config configs\pp_liteseg\gao.yml --do_eval --use_vdl --save_interval 500 --save_dir output
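For reference, split_dataset_list.py is given the dataset root plus the names of the image and label sub-folders, so the layout assumed here is roughly (file names are illustrative):

custom_dataset/
├── images/   # the converted .jpg images
└── labels/   # the .png annotation masks (values 0-10)

It then writes the train.txt / val.txt lists that train_path / val_path in gao.yml point at.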
To my surprise, it then complained: Could not locate zlibwapi.dll.
Installation Guide - NVIDIA Docs
From the page linked above, download zlib (mainly for that DLL); I put it under c:/windows/system32.
python tools/predict.py --config configs\pp_liteseg\gao.yml --model_path output/iter_4500/model.pdparams --image_path paddleseg/datasets/custom_dataset/images/aWriteRectangle2ByWireEnd000.jpg --save_dir output/result

python tools/export.py --config configs/pp_liteseg/gao.yml --model_path output/iter_30000/model.pdparams --save_dir output/inference_model --output_op none
Note the softmax option at the end of the export command (--with_softmax; with the 2.8 export script shown above, the same choice is made through the --output_op argument).
Without softmax, the last layer outputs integers, i.e., the class ID of each pixel.
With softmax, an 11-class model outputs 11 channels in total: channel 0 holds the per-pixel probability of class 0, as floating-point values.
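To make the difference concrete, here is a small numpy-only sketch (deliberately not tied to the Paddle inference API) showing how an 11-channel probability output relates to the integer class map:

import numpy as np

np.random.seed(0)
num_classes, h, w = 11, 4, 4

# Shape a softmax output head produces: (N, C, H, W) float probabilities.
probs = np.random.rand(1, num_classes, h, w).astype("float32")
probs /= probs.sum(axis=1, keepdims=True)  # each pixel's channel values sum to 1

# probs[0, 0] is the per-pixel probability of class 0.
print(probs[0, 0])

# Taking argmax over the channel axis recovers what an argmax head would output:
# one integer class ID per pixel.
class_map = probs.argmax(axis=1)  # shape (1, H, W), integer dtype
print(class_map[0])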
Here is the full set of commands in one place; mind the forward and back slashes.
python tools/data/split_dataset_list.py /home/gao/PaddleSeg/paddleseg/datasets/custom_dataset/ images labels

python tools/train.py --config configs/pp_liteseg/gao.yml --do_eval --use_vdl --save_interval 500 --save_dir output

python tools/predict.py --config configs/pp_liteseg/gao.yml --model_path output/iter_30000/model.pdparams --image_path paddleseg/datasets/custom_dataset/test --save_dir output/result

python tools/export.py --config configs/pp_liteseg/gao.yml --model_path output/iter_30000/model.pdparams --save_dir output/inference_model --output_op none