PointPillars paper
PointPillars - gitbook_docs
Detecting Objects in Point Clouds with NVIDIA CUDA-PointPillars
Introduction to 3D Point Cloud (Lidar) Detection - PointPillars PyTorch Implementation
Model Deployment Tutorial (3): PyTorch to ONNX in Detail
PointPillars Code Walkthrough - OpenPCDet
PointPillars deployment notes
Model deployment: converting PointPillars to ONNX
The most distinctive contribution of the PointPillars algorithm is its pillar-style encoding; the point cloud is still organized into voxel-like cells. VoxelNet applies 3D convolutions to voxels directly, SECOND uses sparse convolution, while PointPillars converts the pillar representation into 2D convolutions to deepen the network, improving both efficiency and accuracy. Whether an SSD head or an RPN follows is simply a choice among 2D-convolution detection networks, made according to the application scenario and requirements.
Simply put, Lidar-based object detection in autonomous driving means localizing object boxes and classes from 3D point cloud data. Concretely, the input is a point cloud $X \in \mathbb{R}^{N \times c}$ (typically $c=4$), and the output is $n$ detection bboxes; the $i$-th bbox consists of pose information $(x_i, y_i, z_i, w_i, l_i, h_i, \theta_i)$ and class information $(label_i, score_i)$.
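To make this output format concrete, here is a minimal sketch of one detection box; the class name and field layout are illustrative only, not from any particular library:

```python
from dataclasses import dataclass

@dataclass
class DetectedBox:
    # pose: box center, size, and yaw angle in the Lidar frame
    x: float
    y: float
    z: float
    w: float
    l: float
    h: float
    theta: float
    # class information
    label: int
    score: float

box = DetectedBox(x=12.3, y=-1.5, z=-0.8, w=1.6, l=3.9, h=1.56,
                  theta=0.1, label=0, score=0.92)
```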
Lidar-based object detection models fall into four families: Point-based, Voxel-based, Point-Voxel-based, and Multi-view-based.
Point-based models process the raw point cloud directly, which reduces the loss of positional information but incurs an enormous computational cost, making real-time operation difficult.
Voxel-based models improve on point-based ones in inference speed, but because their backbones use 3D convolution, real-time performance is still hard to reach.
Compared with the other models, PointPillars has a clear advantage in inference speed (by a wide margin) while maintaining good accuracy.
Voxel-based models usually rely on voxelization. In practice we want a model that is both fast and accurate, so as a trade-off between speed and precision, VoxelNet proposed voxelizing the point cloud.
A point cloud represents objects in 3D space, so a natural idea is to divide the space into cells along its length, width, and height; each cell is called a voxel. The cloud is converted into a 3D array, which is then processed by 3D and 2D convolutional networks.
Voxelization brings its own problems: some information is inevitably lost, the result is sensitive to the voxel parameters, and extracting features from the resulting 3D array usually requires 3D convolution, which is quite time-consuming. If the voxel granularity is too coarse, more information is lost; if it is too fine, computation time grows dramatically.
On top of VoxelNet's voxels, PointPillars proposes an improved point cloud representation, the pillar, which converts the point cloud into a pseudo-image so that detection can be done with 2D convolutions. Compared with VoxelNet, which voxelizes the cloud and then processes the features with costly 3D convolutions, PointPillars' 2D-convolution network has a large advantage in inference speed.
What is a pillar? The paper describes it as "a pillar is a voxel with unlimited spatial extent in the z direction". It is quite simple: divide the space into a grid along the x and y axes only, then stretch each cell along z so that it covers the full height of the space. Each resulting column is a pillar, and every point in space falls into exactly one pillar, as the sketch below shows.
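A small sketch of this point-to-pillar assignment; the detection range and the 0.16 m pillar size are the common KITTI car settings, assumed here for concreteness:

```python
import numpy as np

# Sketch of the point-to-pillar assignment (range and pillar size assumed).
points = np.random.rand(1000, 4).astype(np.float32)  # (N, 4): x, y, z, reflectance
points[:, 0] *= 69.12                                # x in [0, 69.12)
points[:, 1] = points[:, 1] * 79.36 - 39.68          # y in [-39.68, 39.68)

voxel_size = 0.16
x_idx = ((points[:, 0] - 0.0) / voxel_size).astype(np.int32)    # column on the grid
y_idx = ((points[:, 1] + 39.68) / voxel_size).astype(np.int32)  # row on the grid

# z is never discretized: that is what "unlimited spatial extent in the
# z direction" means. Every point lands in exactly one pillar.
pillar_id = y_idx * 432 + x_idx  # flat id on the 496 x 432 x-y grid
print(pillar_id.min(), pillar_id.max())
```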
VFE (Voxel Feature Encoding), the voxel feature encoding layer, is essentially a simplified PointNet.
[Model Acceleration] PointPillars TensorRT Acceleration Experiments (1)
[Model Acceleration] PointPillars TensorRT Acceleration Experiments (2)
[Model Acceleration] PointPillars TensorRT Acceleration Experiments (3)
3D convolution is too expensive for deployment on edge devices.
PointPillars continues the line of VoxelNet and SECOND: VoxelNet runs 3D convolutions directly on voxels, SECOND switches to sparse convolution, and PointPillars uses pillars to convert the problem into 2D convolutions for speed.
PointPillars is a simple and practical model: it keeps fairly high accuracy with very high inference speed and is friendly to deployment, which makes it a very commonly used model.
The overall algorithm consists of 3 parts: data preprocessing, the neural network, and postprocessing. The original paper describes the network itself as 3 components:
PFN (Pillar Feature Network);
MFN (Middle Feature Extractor Network), which scatters the pillar features into a pseudo-image;
RPN, the 2D convolutional backbone and detection head.
Because the number of points differs from one point cloud frame to the next, the number of non-empty pillars differs too, so exporting the PFN to an ONNX model requires dynamic shapes.
Among the PFN's 8 inputs, num_points gives the actual number of points contained in each pillar, and this axis is dynamic.
Summary of the shape flow: (P, N, D) → (P, N, C) → (P, C) → (C, H, W). A minimal PFN sketch reproducing this flow is given below.
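The following is a minimal sketch of the PFN stage of that flow, assuming D = 9 input dimensions and C = 64 output channels; real implementations typically fuse the linear layer into a 1x1 convolution, but the max-pool over the points of each pillar is the essential step:

```python
import torch
import torch.nn as nn

class TinyPFN(nn.Module):
    """Minimal Pillar Feature Network sketch: a shared per-point linear
    layer, BatchNorm, ReLU, then max-pooling over the points of each pillar."""
    def __init__(self, d_in=9, c_out=64):
        super().__init__()
        self.linear = nn.Linear(d_in, c_out)
        self.bn = nn.BatchNorm1d(c_out)

    def forward(self, x):                               # x: (P, N, D)
        x = self.linear(x)                              # (P, N, C)
        x = self.bn(x.transpose(1, 2)).transpose(1, 2)  # normalize channels
        x = torch.relu(x)
        return x.max(dim=1).values                      # (P, C): one vector per pillar

pfn = TinyPFN()
feats = pfn(torch.rand(12000, 32, 9))
print(feats.shape)  # torch.Size([12000, 64])
```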
The PFN has 8 inputs:
import torch
# prepare the 8 PFN inputs; 9918 is the non-empty pillar count of this example frame
pillar_x = torch.ones([1, 1, 9918, 100], dtype=torch.float32, device="cpu")
pillar_y = torch.ones([1, 1, 9918, 100], dtype=torch.float32, device="cpu")
pillar_z = torch.ones([1, 1, 9918, 100], dtype=torch.float32, device="cpu")
pillar_i = torch.ones([1, 1, 9918, 100], dtype=torch.float32, device="cpu")
num_points_per_pillar = torch.ones([1, 9918], dtype=torch.float32, device="cpu")
x_sub_shaped = torch.ones([1, 1, 9918, 100], dtype=torch.float32, device="cpu")
y_sub_shaped = torch.ones([1, 1, 9918, 100], dtype=torch.float32, device="cpu")
mask = torch.ones([1, 1, 9918, 100], dtype=torch.float32, device="cpu")
The PFN output has shape (1, 64, pillar_num, 1), where pillar_num is the number of non-empty pillars and is a dynamic axis; again, because the point count changes from frame to frame, so does the number of non-empty pillars. A hedged export sketch using dynamic_axes is given below.
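This sketch shows how torch.onnx.export's dynamic_axes could mark the pillar axis as dynamic, using the inputs prepared above. The input names, the axis choices, and net.voxel_feature_extractor as the PFN submodule (the name used in the SECOND codebase) are assumptions for illustration, not the repo's exact export script; net is the loaded model:

```python
input_names = ["pillar_x", "pillar_y", "pillar_z", "pillar_i",
               "num_points_per_pillar", "x_sub_shaped", "y_sub_shaped", "mask"]
# axis 2 of the (1, 1, P, 100) tensors and axis 1 of num_points_per_pillar
# hold the non-empty pillar count, so they are marked dynamic
dynamic_axes = {name: {2: "pillar_num"} for name in input_names}
dynamic_axes["num_points_per_pillar"] = {1: "pillar_num"}

torch.onnx.export(
    net.voxel_feature_extractor,              # PFN submodule (assumed name)
    (pillar_x, pillar_y, pillar_z, pillar_i, num_points_per_pillar,
     x_sub_shaped, y_sub_shaped, mask),
    "pfn.onnx",
    input_names=input_names,
    dynamic_axes=dynamic_axes,
    opset_version=11,
)
```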
MFN (Middle Feature Extractor Network) converts the pillar-level point cloud features extracted by the PFN into a pseudo-image.
The MFN has 2 inputs, and both are dynamic.
The MFN output spatial_features is a feature map of fixed size (1, 64, 496, 432) and is the sole input of the RPN. A sketch of the scatter step is given below.
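The core of the MFN is a scatter operation; here is a simplified batch-size-1 sketch. The (batch, z, y, x) layout of coords follows the SECOND/OpenPCDet convention and is an assumption here:

```python
import torch

def scatter_to_canvas(pillar_features, coords, C=64, H=496, W=432):
    """Scatter (P, C) pillar features onto a (1, C, H, W) pseudo-image.
    coords is (P, 4) laid out as (batch, z, y, x); empty cells stay zero."""
    canvas = torch.zeros(C, H * W, dtype=pillar_features.dtype)
    flat_idx = (coords[:, 2] * W + coords[:, 3]).long()  # y * W + x
    canvas[:, flat_idx] = pillar_features.t()            # place each pillar column
    return canvas.view(1, C, H, W)

spatial_features = scatter_to_canvas(torch.rand(12000, 64),
                                     torch.randint(0, 432, (12000, 4)))
print(spatial_features.shape)  # torch.Size([1, 64, 496, 432])
```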
The RPN has a single input, with a fixed size:
# float_type, device, net, rpn_onnx_file and verbose come from the surrounding export script
rpn_input = torch.ones([1, 64, 496, 432], dtype=float_type, device=device)
torch.onnx.export(net.rpn, rpn_input, rpn_onnx_file, verbose=verbose)
print('rpn.onnx transfer success ...')
Code reading: SECOND (PyTorch version)
Introduction to 3D Point Cloud (Lidar) Detection - PointPillars PyTorch Implementation
GitHub - zhulf0804/PointPillars: A Simple PointPillars PyTorch Implenmentation for 3D Lidar(KITTI) Detection.
GitHub - jjw-DL/OpenPCDet-Noted: OpenPCDet code analysis and annotations
| Parameter | Meaning |
|---|---|
| P | number of non-empty pillars; at most 16000 in the training set and at most 40000 in the test set |
| N | maximum number of points stored per pillar; a pillar with fewer than N points is zero-padded, and one with more is sampled down to N points |
| C | number of channels per pillar after encoding |
| D | length of each point's encoding, which includes the point coordinates, the reflectance, the pillar's geometric center, and the point's position relative to that center |
| (P, N, D) | one point cloud sample is represented as a (P, N, D) tensor |
| M | 32 |
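As a concrete instance of D, the paper decorates each point to 9 dimensions, (x, y, z, r, x_c, y_c, z_c, x_p, y_p): the raw point, its offset from the arithmetic mean of the pillar's points, and its x-y offset from the pillar center. A sketch for a single pillar:

```python
import numpy as np

def decorate(pts, px, py):
    """Decorate the (n, 4) points of one pillar to (n, 9):
    (x, y, z, r, x_c, y_c, z_c, x_p, y_p). (px, py) is the pillar's
    geometric center on the x-y grid."""
    c = pts[:, :3] - pts[:, :3].mean(axis=0)  # offset from the points' mean
    p = pts[:, :2] - np.array([px, py])       # offset from the pillar center
    return np.hstack([pts, c, p])

pillar_pts = np.random.rand(32, 4).astype(np.float32)
print(decorate(pillar_pts, px=0.08, py=0.08).shape)  # (32, 9)
```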
second.pytorch
├── images
├── second
│   ├── apex
│   ├── builder
│   ├── configs
│   ├── core
│   ├── data
│   ├── framework
│   ├── kittiviewer
│   ├── protos
│   ├── pytorch
│   │   ├── builder
│   │   ├── core
│   │   ├── models
│   │   └── utils
│   ├── spconv
│   └── utils
└── torchplus
Because the bbox information in the GT labels is given in the camera coordinate system, it must be transformed into the Lidar coordinate system during training using the extrinsics; conversely, projecting a 3D bbox into a 2D bbox in the image for visualization requires the intrinsics. The transformations are shown in Figure 2, and the coordinate-transform code is in utils/process.py. A minimal sketch of the camera-to-Lidar mapping follows.
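For reference, a sketch of the rectified-camera-to-Lidar mapping built from the KITTI calibration matrices (Tr_velo_to_cam, 3x4, and R0_rect, 3x3, as stored in the calib files; file parsing is omitted and the example matrices are dummies):

```python
import numpy as np

def cam_to_lidar(pts_cam, Tr_velo_to_cam, R0_rect):
    """Map (n, 3) points from the rectified camera frame to the Lidar frame.
    KITTI defines x_rect = R0_rect @ Tr_velo_to_cam @ x_velo, so we invert."""
    T = np.eye(4); T[:3, :] = Tr_velo_to_cam  # velo -> cam (homogeneous)
    R = np.eye(4); R[:3, :3] = R0_rect        # cam -> rectified cam
    M = np.linalg.inv(R @ T)                  # rectified cam -> velo
    pts_h = np.hstack([pts_cam, np.ones((len(pts_cam), 1))])
    return (M @ pts_h.T).T[:, :3]

# dummy calibration just to show the call signature
Tr = np.hstack([np.eye(3), np.array([[0.27], [0.0], [-0.08]])])
print(cam_to_lidar(np.zeros((1, 3)), Tr, np.eye(3)))
```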
Data augmentation is arguably a crucial part of Lidar detection, and it differs considerably from augmentation in 2D detection: 3D pipelines do database sampling (cut-and-pasting GT bboxes, as I understand it), collision checking, and so on. This repo uses 5 kinds of data augmentation; the code is in dataset/data_aug.py.
Figure 3 visualizes the first four of these augmentations.
For an input point cloud $X \in \mathbb{R}^{N \times 4}$, how does PointPillars arrive at the bboxes step by step? See model/pointpillars.py for the code.
| Input | Meaning | Shape |
|---|---|---|
| voxels (pillars) | points gathered into each pillar | [20000, 32, 4] |
| coors (coors_batch) | grid coordinates of each pillar | [20000, 4] |
| num_points (npoints_per_pillars) | number of real points in each pillar | [20000] |
The Lidar range is [0, -39.68, -3, 69.12, 39.68, 1], i.e. (xmin, ymin, zmin, xmax, ymax, zmax). The (496, 432) pseudo-image size seen elsewhere follows directly from this range, as the next sketch shows.
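```python
# Pseudo-image size from the range above, assuming 0.16 m pillars
# (the usual KITTI car setting for PointPillars):
x_min, y_min, _, x_max, y_max, _ = [0, -39.68, -3, 69.12, 39.68, 1]
voxel_size = 0.16
W = round((x_max - x_min) / voxel_size)  # 432 columns along x
H = round((y_max - y_min) / voxel_size)  # 496 rows along y
print(H, W)  # 496 432
```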
| Output | Meaning | Shape |
|---|---|---|
| bbox_pred (bbox_preds) | bbox regression | [1, 42, 248, 216] |
| bbox_cls_pred (cls_scores) | class classification | [1, 18, 248, 216] |
| bbox_dir_cls_pred (dir_cls_preds) | heading classification | [1, 12, 248, 216] |
Introduction to 3D Point Cloud (Lidar) Detection - PointPillars PyTorch Implementation
The three Head branches predict, per anchor, the class, the bbox (offsets and size ratios relative to the anchor), and the heading class. So during training, how is the GT target for each anchor obtained? See model/anchors.py for the code.
Introduction to 3D Point Cloud (Lidar) Detection - PointPillars PyTorch Implementation
With the predictions and GT targets of the classification head, the bbox regression head, and the heading head in hand, we now turn to the loss functions. See loss/loss.py for the code.
Introduction to 3D Point Cloud (Lidar) Detection - PointPillars PyTorch Implementation
Given the Head predictions and the anchors, how are the final candidate boxes obtained? See model/pointpillars.py for the code.
The evaluation metric is the same as in 2D detection: AP, the area under the precision-recall curve. The difference is that in 3D one can compute AP for 3D bboxes, BEV bboxes, and (2D bbox, AOS). A minimal AP sketch follows.
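A minimal computation of AP as the area under the PR curve; note that the official KITTI metric samples precision at fixed recall positions, which this sketch does not reproduce:

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP as the area under the precision-recall curve.
    scores: (n,) detection confidences; is_tp: (n,) 1 where the
    detection matched a ground-truth box; num_gt: number of GT boxes."""
    order = np.argsort(-scores)         # rank detections by confidence
    tp = np.cumsum(is_tp[order])
    precision = tp / (np.arange(len(scores)) + 1)
    recall = tp / num_gt
    return np.trapz(precision, recall)  # integrate precision over recall

print(average_precision(np.array([0.9, 0.8, 0.3]),
                        np.array([1, 0, 1]), num_gt=2))
```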
car.fhd.config
model: {
second: {
network_class_name: "VoxelNet"
# voxel generation
voxel_generator {
point_cloud_range : [0, -40, -3, 70.4, 40, 1] # point cloud range
# point_cloud_range : [0, -32.0, -3, 52.8, 32.0, 1]
voxel_size : [0.05, 0.05, 0.1] # voxel size
max_number_of_points_per_voxel : 5 # max points per voxel
}
# voxel feature extractor
voxel_feature_extractor: {
module_class_name: "SimpleVoxel"
num_filters: [16]
with_distance: false
num_input_features: 4
}
# middle feature extractor
middle_feature_extractor: {
module_class_name: "SpMiddleFHD"
# num_filters_down1: [] # protobuf doesn't support empty lists.
# num_filters_down2: []
downsample_factor: 8
num_input_features: 4
}
# RPN
rpn: {
module_class_name: "RPNV2"
layer_nums: [5]
layer_strides: [1]
num_filters: [128]
upsample_strides: [1]
num_upsample_filters: [128]
use_groupnorm: false
num_groups: 32
num_input_features: 128
}
# loss functions
loss: {
classification_loss: {
weighted_sigmoid_focal: {
alpha: 0.25
gamma: 2.0
anchorwise_output: true
}
}
localization_loss: {
weighted_smooth_l1: {
sigma: 3.0
code_weight: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
}
}
classification_weight: 1.0
localization_weight: 2.0
}
num_point_features: 4 # model's num point feature should be independent of dataset
# Outputs
use_sigmoid_score: true
encode_background_as_zeros: true
encode_rad_error_by_sin: true
sin_error_factor: 1.0
use_direction_classifier: true # this can help for orientation benchmark
direction_loss_weight: 0.2 # enough.
num_direction_bins: 2
direction_limit_offset: 1
# Loss
pos_class_weight: 1.0
neg_class_weight: 1.0
loss_norm_type: NormByNumPositives
# Postprocess
post_center_limit_range: [0, -40, -2.2, 70.4, 40, 0.8]
nms_class_agnostic: false # only valid in multi-class nms
box_coder: {
ground_box3d_coder: {
linear_dim: false
encode_angle_vector: false
}
}
target_assigner: {
class_settings: {
anchor_generator_range: {
sizes: [1.6, 3.9, 1.56] # wlh
anchor_ranges: [0, -40.0, -1.00, 70.4, 40.0, -1.00] # carefully set z center
rotations: [0, 1.57] # DON'T modify this unless you are very familiar with my code.
}
matched_threshold : 0.6
unmatched_threshold : 0.45
class_name: "Car"
use_rotate_nms: true
use_multi_class_nms: false
nms_pre_max_size: 1000
nms_post_max_size: 100
nms_score_threshold: 0.3 # 0.4 in submit, but 0.3 can get better hard performance
nms_iou_threshold: 0.01
region_similarity_calculator: {
nearest_iou_similarity: {
}
}
}
# anchor_generators: {
# anchor_generator_stride: {
# sizes: [1.6, 3.9, 1.56] # wlh
# strides: [0.4, 0.4, 0.0] # if generate only 1 z_center, z_stride will be ignored
# offsets: [0.2, -39.8, -1.00] # origin_offset + strides / 2
# rotations: [0, 1.57] # DON'T modify this unless you are very familiar with my code.
# matched_threshold : 0.6
# unmatched_threshold : 0.45
# }
# }
sample_positive_fraction : -1
sample_size : 512
assign_per_class: true
}
}
}
# train input reader; original batch_size=8, num_workers=3
train_input_reader: {
dataset: {
dataset_class_name: "KittiDataset"
# kitti_info_path: "/media/yy/960evo/datasets/kitti/kitti_infos_train.pkl"
# kitti_root_path: "/media/yy/960evo/datasets/kitti"
kitti_info_path: "/home/cv/文档/datasets/KITTI_PP/kitti_infos_train.pkl"
kitti_root_path: "/home/cv/文档/datasets/KITTI_PP"
}
batch_size: 8
preprocess: {
max_number_of_voxels: 17000
shuffle_points: true
num_workers: 1
groundtruth_localization_noise_std: [1.0, 1.0, 0.5]
# groundtruth_rotation_uniform_noise: [-0.3141592654, 0.3141592654]
# groundtruth_rotation_uniform_noise: [-1.57, 1.57]
groundtruth_rotation_uniform_noise: [-0.78539816, 0.78539816]
global_rotation_uniform_noise: [-0.78539816, 0.78539816]
global_scaling_uniform_noise: [0.95, 1.05]
global_random_rotation_range_per_object: [0, 0] # pi/4 ~ 3pi/4
global_translate_noise_std: [0, 0, 0]
anchor_area_threshold: -1
remove_points_after_sample: true
groundtruth_points_drop_percentage: 0.0
groundtruth_drop_max_keep_points: 15
remove_unknown_examples: false
sample_importance: 1.0
random_flip_x: false
random_flip_y: true
remove_environment: false
# database sampler
database_sampler {
# database_info_path: "/media/yy/960evo/datasets/kitti/kitti_dbinfos_train.pkl"
database_info_path: "/home/cv/文档/datasets/KITTI_PP/kitti_dbinfos_train.pkl"
sample_groups {
name_to_max_num {
key: "Car"
value: 15
}
}
database_prep_steps {
filter_by_min_num_points {
min_num_point_pairs {
key: "Car"
value: 5
}
}
}
database_prep_steps {
filter_by_difficulty {
removed_difficulties: [-1]
}
}
global_random_rotation_range_per_object: [0, 0]
rate: 1.0
}
}
}
train_config: {
optimizer: {
adam_optimizer: {
learning_rate: {
one_cycle: {
lr_max: 2.25e-3
moms: [0.95, 0.85]
div_factor: 10.0
pct_start: 0.4
}
}
weight_decay: 0.01
}
fixed_weight_decay: true
use_moving_average: false
}
# steps: 99040 # 1238 * 120
# steps: 49520 # 619 * 80
# steps: 30950 # 619 * 80
# steps_per_eval: 3095 # 619 * 5
steps: 23200 # 464 * 50
steps_per_eval: 2320 # 619 * 5
save_checkpoints_secs : 1800 # half hour
save_summary_steps : 10
enable_mixed_precision: false
loss_scale_factor: -1
clear_metrics_every_epoch: true
}
# eval input reader; original batch_size=8, num_workers=3
eval_input_reader: {
dataset: {
dataset_class_name: "KittiDataset"
# kitti_info_path: "/media/yy/960evo/datasets/kitti/kitti_infos_val.pkl"
# # kitti_info_path: "/media/yy/960evo/datasets/kitti/kitti_infos_test.pkl"
# kitti_root_path: "/media/yy/960evo/datasets/kitti"
kitti_info_path: "/home/cv/文档/datasets/KITTI_PP/kitti_infos_val.pkl"
# kitti_info_path: "/home/cv/文档/datasets/KITTI_PP/kitti_infos_test.pkl"
kitti_root_path: "/home/cv/文档/datasets/KITTI_PP"
}
batch_size: 8
preprocess: {
max_number_of_voxels: 40000
shuffle_points: false
num_workers: 3
anchor_area_threshold: -1
remove_environment: false
}
}
Compared with other point cloud tasks (classification, segmentation, registration, etc.), point cloud detection is more complex in both logic and code, and the complexity lies not so much in the network architecture as in data augmentation, anchor and GT generation, single-frame inference, and so on.
Compared with 2D image detection, the differences include coordinate system transforms, data augmentation (collision checking, point-in-box tests, etc.), and IoU computation for rotated boxes; evaluation is also more involved because of DontCare regions, difficulty levels, and the like.
OpenPCDet
mmdetection3d
second.pytorch
PointPillars-TF
simple-pointpillar
PointPillars
nutonomy_pointpillars
A usable version:
git clone https://github.com/traveller59/second.pytorch.git
cd ./second.pytorch/second
Anaconda is recommended for managing virtual environments.
conda install scikit-image scipy numba pillow matplotlib
pip install fire tensorboardX protobuf opencv-python
pip install torchplus
pip install pycamia
pip install spconv
Install the boost library:
apt-get install libboost-all-dev
GitHub - facebookresearch/SparseConvNet: Submanifold sparse convolutional networks
git clone https://github.com/facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash develop.sh
Add second.pytorch to PYTHONPATH:
export PYTHONPATH=$PYTHONPATH:/your_second.pytorch_path/
The data I/O code is in utils/io.py. Organize the KITTI dataset as follows:
└── KITTI_DATASET_ROOT
├── training <-- 7481 train data
| ├── image_2 <-- for visualization
| ├── calib
| ├── label_2
| ├── velodyne
| └── velodyne_reduced <-- empty directory
└── testing <-- 7518 test data
├── image_2 <-- for visualization
├── calib
├── velodyne
└── velodyne_reduced <-- empty directory
Recommended mirror (no VPN needed, and the download speed is decent):
kitti-3d-object-detection-dataset
The test data is the KITTI dataset. The official KITTI site requires a VPN from mainland China and is slow to access; there are many Baidu net-disk links online, and downloading from a net disk is recommended.
create_data.py
Command usage:
Usage: create_data.py <group|command>
available groups: copy | pathlib | pickle | fire | np | imgio | sys |
box_np_ops | kitti
available commands: bound_points_jit | prog_bar | create_kitti_info_file |
create_reduced_point_cloud |
create_groundtruth_database
For detailed information on this command, run:
create_data.py --help
kitti infos
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
reduced point cloud
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
groundtruth-database infos
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
kitti
|- training
|- calib (#7481 .txt)
|- image_2 (#7481 .png)
|- label_2 (#7481 .txt)
|- velodyne (#7481 .bin)
|- velodyne_reduced (#7481 .bin)
|- testing
|- calib (#7518 .txt)
|- image_2 (#7518 .png)
|- velodyne (#7518 .bin)
|- velodyne_reduced (#7518 .bin)
|- kitti_gt_database (# 19700 .bin)
|- kitti_infos_train.pkl
|- kitti_infos_val.pkl
|- kitti_infos_trainval.pkl
|- kitti_infos_test.pkl
|- kitti_dbinfos_train.pkl
train_input_reader: {
...
database_sampler {
database_info_path: "/path/to/kitti_dbinfos_train.pkl"
...
}
kitti_info_path: "/path/to/kitti_infos_train.pkl"
kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
...
kitti_info_path: "/path/to/kitti_infos_val.pkl"
kitti_root_path: "KITTI_DATASET_ROOT"
}
cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
cd ~/second.pytorch/second/
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
python ./pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/mnt/d/datasets/archive/car_fhd --measure_time=True --batch_size=1
middle_class_name PointPillarsScatter
remain number of infos: 3769
Generate output labels...
/mnt/d/MyDocuments/cache/second.pytorch/second/../second/pytorch/models/voxelnet.py:786: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. (Triggered internally at /pytorch/aten/src/ATen/native/IndexingUtils.h:25.)
box_preds = box_preds[a_mask]
/mnt/d/MyDocuments/cache/second.pytorch/second/../second/pytorch/models/voxelnet.py:787: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. (Triggered internally at /pytorch/aten/src/ATen/native/IndexingUtils.h:25.)
cls_preds = cls_preds[a_mask]
/mnt/d/MyDocuments/cache/second.pytorch/second/../second/pytorch/models/voxelnet.py:790: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. (Triggered internally at /pytorch/aten/src/ATen/native/IndexingUtils.h:25.)
dir_preds = dir_preds[a_mask]
[100.0%][===================>][0.70it/s][44:38>00:01]
generate label finished(1.41/s). start eval:
avg forward time per example: 0.679
avg postprocess time per example: 0.012
Car AP@0.70, 0.70, 0.70:
bbox AP:0.00, 0.00, 0.00
Car AP@0.70, 0.50, 0.50:
bbox AP:0.00, 0.00, 0.00
Resource usage
For detailed steps, see: second.pytorch
# run the server
python ./kittiviewer/backend/main.py main --port=xxxx
# start a local web server
cd ./kittiviewer/frontend && python -m http.server
# open in a browser
http://127.0.0.1:8000
Traceback (most recent call last):
File "point_pillars_training_run.py", line 112, in <module>
pillar_net.load_weights(os.path.join(MODEL_ROOT, "model.h5"))
File "/home/yoyo/miniconda3/envs/ppillar/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/yoyo/miniconda3/envs/ppillar/lib/python3.8/site-packages/keras/saving/hdf5_format.py", line 835, in load_weights_from_hdf5_group
raise ValueError(
ValueError: Weight count mismatch for layer #2 (named cnn/block1/conv2d0 in the current model, cnn/block1/conv2d0 in the save file). Layer expects 1 weight(s). Received 2 saved weight(s)
Solution:
Change the weight-loading call
pillar_net.load_weights(os.path.join(MODEL_ROOT, "model.h5"))
to
pillar_net.load_weights(os.path.join(MODEL_ROOT, "model.h5"), by_name=True, skip_mismatch=True)
module 'cffi' has no attribute 'FFI'
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/numba/core/typing/context.py", line 158, in refresh
self.load_additional_registries()
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/numba/core/typing/context.py", line 701, in load_additional_registries
from . import (
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/numba/core/typing/cffi_utils.py", line 19, in <module>
ffi = cffi.FFI()
AttributeError: module 'cffi' has no attribute 'FFI'
Cause:
pip shows cffi as installed, but it cannot be found in the Anaconda environment.
Solution:
Install cffi inside the Anaconda environment:
conda install cffi
If you run into permission problems during installation:
conda install -c local cffi
Version-check error on module.__version__:
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/matplotlib/__init__.py", line 201, in _check_versions
if LooseVersion(module.__version__) < minver:
AttributeError: module 'kiwisolver' has no attribute '__version__'
Cause:
During matplotlib's version check, kiwisolver has no `__version__` attribute, which raises the error.
Solution:
Comment out the version-check code (or make it tolerant, as sketched below).
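A minimal sketch in the spirit of commenting the guard out; the helper name is made up for illustration and is not matplotlib code:

```python
from distutils.version import LooseVersion

def version_ok(module, minver):
    """Tolerant replacement for the strict check: a module without a
    __version__ attribute is accepted instead of raising AttributeError."""
    ver = getattr(module, "__version__", None)
    return ver is None or LooseVersion(ver) >= LooseVersion(minver)

import kiwisolver
print(version_ok(kiwisolver, "1.0.1"))
```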
Other similar problems can be handled the same way.
Traceback (most recent call last):
...
...
...
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py", line 4, in <module>
LooseVersion = distutils.version.LooseVersion
AttributeError: module 'distutils' has no attribute 'version'
Traceback (most recent call last):
...
...
...
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/matplotlib/__init__.py", line 209, in _check_versions
if parse_version(module.__version__) < parse_version(minver):
AttributeError: module 'dateutil' has no attribute '__version__'
The imp package is deprecated:
./pytorch/train.py:1: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Cause:
DeprecationWarning: the imp module is deprecated in favor of importlib; see the module's documentation for alternatives.
Solution:
Since imp is not actually used, simply comment it out:
# import imp
ModuleNotFoundError: No module named 'torchplus.tools'
Traceback (most recent call last):
File "./pytorch/train.py", line 21, in <module>
from second.pytorch.builder import (box_coder_builder, input_reader_builder,
...
...
...
"/mnt/d/MyDocuments/cache/second.pytorch/second/../second/pytorch/core/box_torch_ops.py", line 10, in <module>
from torchplus.tools import torch_to_np_dtype
ModuleNotFoundError: No module named 'torchplus.tools'
Cause:
The installed torchplus version is too old.
Solution:
Uninstall it and install a newer version:
pip uninstall torchplus
pip install torchplus
ImportError: cannot import name 'BeautifulSoup' from 'bs4'
Traceback (most recent call last):
File "./pytorch/train.py", line 14, in <module>
import torchplus
....
...
...
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/notion/utils.py", line 4, in <module>
from bs4 import BeautifulSoup
ImportError: cannot import name 'BeautifulSoup' from 'bs4' (/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/bs4/__init__.py)
Cause:
The installed beautifulsoup4 version is too old.
Solution:
Uninstall it and install a newer version:
pip uninstall beautifulsoup4
pip install beautifulsoup4
ImportError: 'pyctlib.watch.debugger' cannot be used without dependency 'line_profiler'.
Traceback (most recent call last):
File "./pytorch/train.py", line 14, in <module>
import torchplus
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/torchplus/__init__.py", line 33, in <module>
from .tensor import *
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/torchplus/tensor.py", line 33, in <module>
from pyctlib.visual.debugger import profile
File "/home/yoyo/miniconda3/envs/ppillar-torch/lib/python3.8/site-packages/pyctlib/visual/debugger.py", line 18, in <module>
raise ImportError("'pyctlib.watch.debugger' cannot be used without dependency 'line_profiler'. ")
ImportError: 'pyctlib.watch.debugger' cannot be used without dependency 'line_profiler'.
Cause:
The line-profiler package is missing.
Solution:
Install line-profiler:
pip install line-profiler