
Training and Predicting on Your Own Dataset with PointNet, Charles Version (Part 2)

A previous post covered how to get the Charles version of PointNet running; this one explains how to train and predict on your own dataset, i.e. how to do point-cloud semantic segmentation on your own data. The environment setup is the same as in that post. Point-cloud classification is simpler and the approach is much the same, so it is not covered separately here.

I. Semantic segmentation on your own point-cloud dataset

1. The RGB-D Scenes Dataset v.2

I use the RGB-D Scenes Dataset v.2 for these experiments; the download link is below:

RGB-D Scenes Dataset v.2

The directory structure of the downloaded dataset is as follows:

01.label holds the annotations and 01.ply the point cloud; the other scenes follow the same pattern. Since the points and their labels are stored in separate files, I wrote the script below to merge 01.label and 01.ply, attaching the label as a scalar field (a looped version covering the remaining scenes is sketched after it):

import os
import sys
import numpy as np
from plyfile import PlyData

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(BASE_DIR)
sys.path.append(BASE_DIR)

if __name__ == "__main__":
    with open(BASE_DIR + '/rgbd-scenes-v2/pc/01.ply', 'rb') as f:
        plydata = PlyData.read(f)
    print(len(plydata.elements[0].data))

    label = np.loadtxt(BASE_DIR + '/rgbd-scenes-v2/pc/01.label')
    print(label.shape)

    vtx = plydata['vertex']
    points = np.stack([vtx['x'], vtx['y'], vtx['z'],
                       vtx['diffuse_red'], vtx['diffuse_green'], vtx['diffuse_blue']], axis=-1)

    # Skip the first entry of the .label file so the number of labels
    # matches the number of points
    label = label[1:len(label)]
    label = label[:, np.newaxis]
    print(label.shape)
    print(points.shape)

    # Append the label as a 7th column: x y z r g b label
    combined = np.concatenate([points, label], 1)
    print(combined.shape)

    # Write the points into a txt file
    out_data_label_filename = BASE_DIR + '/01_data_label.txt'
    fout_data_label = open(out_data_label_filename, 'w')
    for i in range(combined.shape[0]):
        fout_data_label.write('%f %f %f %d %d %d %d\n' % (
            combined[i, 0], combined[i, 1], combined[i, 2],
            combined[i, 3], combined[i, 4], combined[i, 5], combined[i, 6]))
    fout_data_label.close()
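Since the other scenes use the same NN.ply / NN.label layout, the same merge can be wrapped in a loop. A minimal sketch, assuming the directory layout above (this loop is not part of the original script):

import os
import glob
import numpy as np
from plyfile import PlyData

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
PC_DIR = BASE_DIR + '/rgbd-scenes-v2/pc'

def merge_one(ply_path, label_path, out_path):
    # Same merge logic as the script above, wrapped in a function
    with open(ply_path, 'rb') as f:
        vtx = PlyData.read(f)['vertex']
    points = np.stack([vtx['x'], vtx['y'], vtx['z'],
                       vtx['diffuse_red'], vtx['diffuse_green'], vtx['diffuse_blue']], axis=-1)
    # Skip the first entry so the label count matches the point count (as above)
    label = np.loadtxt(label_path)[1:][:, np.newaxis]
    combined = np.concatenate([points, label], 1)
    np.savetxt(out_path, combined, fmt='%f %f %f %d %d %d %d')

for ply_path in sorted(glob.glob(PC_DIR + '/*.ply')):
    merge_one(ply_path,
              ply_path[:-4] + '.label',
              BASE_DIR + '/' + os.path.basename(ply_path)[:-4] + '_data_label.txt')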

Opening the merged 01_data_label.txt in CloudCompare gives the following display:

2. Generating the new dataset

Next I preprocess the original RGB-D Scenes Dataset v.2 (each of its .ply files) to build a new dataset.

Open 01.ply in CloudCompare and crop the cloud with the segment tool (the small scissors), keeping the table and the objects on it.

The cropped point cloud looks like this:

Then save the table cloud to disk. The other .ply files are handled the same way, keeping only the table cloud from each. I have uploaded the 12 cropped point clouds:

Link: https://pan.baidu.com/s/1rCDhruH_C_hpoZb5BreujA  extraction code: ec0h

I then cropped these 12 clouds further, producing 35 clouds, available here:

Link: https://pan.baidu.com/s/1w0hhOhgEonxniiMOvcFsIA  extraction code: 4eo6

3. Annotating the new dataset (35 point clouds)

I use only four labels: table-surface points are labeled 0, books 1, hats 2, and cups and bowls 3. The following shows labeling tabel01 - Cloud_1 (available from the Baidu netdisk link above) in CloudCompare. First split the cloud with the scissors (segment) tool into a table-surface cloud and a bowl cloud, then click the '+' button in the toolbar to attach the label value as a constant scalar field to each part.

Then select the two clouds and merge them.

The merged cloud looks like this:

If you click Yes in the dialog above, the merged cloud is saved in the following format:

If you click No, it is saved in the following format (this is the option used here):

Note, however, that the point order in the merged, labeled cloud no longer matches that of the original cloud.
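If you want to verify that only the ordering changed and not the points themselves, a quick check like the following can be used (the file names here are just examples; point them at your own exported clouds):

import numpy as np

# Hypothetical file names; adjust to your own paths.
orig = np.loadtxt('tabel01 - Cloud_1.asc')               # x y z r g b
labeled = np.loadtxt('tabel01 - Cloud_1_withlabel.asc')  # x y z r g b label

def sort_by_xyz(a):
    # Sort rows by x, then y, then z so that row order no longer matters
    idx = np.lexsort((a[:, 2], a[:, 1], a[:, 0]))
    return a[idx]

same_points = np.allclose(sort_by_xyz(orig)[:, 0:3],
                          sort_by_xyz(labeled)[:, 0:3], atol=1e-6)
print('same point set despite reordering:', same_points)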

The remaining 34 files are annotated the same way. I have uploaded the 35 labeled point clouds here:

Link: https://pan.baidu.com/s/1jbVWHlVKUWcq4t9K-Yk6bQ  extraction code: 4t92

4. Generating the HDF5 files for training

I modified gen_indoor3d_h5.py as follows:

import os
import numpy as np
import sys
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(BASE_DIR)
sys.path.append(BASE_DIR)
sys.path.append(os.path.join(ROOT_DIR, 'utils'))
# import data_prep_util
import indoor3d_util
import glob
import h5py

# Constants
# indoor3d_data_dir = os.path.join(data_dir, 'mydata_h5')
NUM_POINT = 4096
H5_BATCH_SIZE = 1000
data_dim = [NUM_POINT, 6]
label_dim = [NUM_POINT]
data_dtype = 'float32'
label_dtype = 'uint8'

# Set paths
# filelist = os.path.join(BASE_DIR, 'meta/all_data_label.txt')
# data_label_files = [os.path.join(indoor3d_data_dir, line.rstrip()) for line in open(filelist)]
output_dir = os.path.join(ROOT_DIR, 'data/mydata_h5')
if not os.path.exists(output_dir):
    os.mkdir(output_dir)
output_filename_prefix = os.path.join(output_dir, 'ply_data_all')
output_room_filelist = os.path.join(output_dir, 'all_files.txt')
fout_room = open(output_room_filelist, 'w')

# --------------------------------------
# ----- BATCH WRITE TO HDF5 -----
# --------------------------------------
batch_data_dim = [H5_BATCH_SIZE] + data_dim
batch_label_dim = [H5_BATCH_SIZE] + label_dim
h5_batch_data = np.zeros(batch_data_dim, dtype=np.float32)
h5_batch_label = np.zeros(batch_label_dim, dtype=np.uint8)
buffer_size = 0  # state: record how many samples are currently in buffer
h5_index = 0     # state: the next h5 file to save


def save_h5(h5_filename, data, label, data_dtype='uint8', label_dtype='uint8'):
    h5_fout = h5py.File(h5_filename, 'w')
    h5_fout.create_dataset(
        'data', data=data,
        compression='gzip', compression_opts=4,
        dtype=data_dtype)
    h5_fout.create_dataset(
        'label', data=label,
        compression='gzip', compression_opts=1,
        dtype=label_dtype)
    h5_fout.close()


def insert_batch(data, label, last_batch=False):
    global h5_batch_data, h5_batch_label
    global buffer_size, h5_index
    data_size = data.shape[0]
    # If there is enough space, just insert
    if buffer_size + data_size <= h5_batch_data.shape[0]:
        h5_batch_data[buffer_size:buffer_size+data_size, ...] = data
        h5_batch_label[buffer_size:buffer_size+data_size] = label
        buffer_size += data_size
    else:  # not enough space
        capacity = h5_batch_data.shape[0] - buffer_size
        assert(capacity >= 0)
        if capacity > 0:
            h5_batch_data[buffer_size:buffer_size+capacity, ...] = data[0:capacity, ...]
            h5_batch_label[buffer_size:buffer_size+capacity, ...] = label[0:capacity, ...]
        # Save batch data and label to h5 file, reset buffer_size
        h5_filename = output_filename_prefix + '_' + str(h5_index) + '.h5'
        save_h5(h5_filename, h5_batch_data, h5_batch_label, data_dtype, label_dtype)
        # Record the relative path so train.py can find the file
        fout_room.write('mydata_h5/' + os.path.basename(h5_filename) + '\n')
        print('Stored {0} with size {1}'.format(h5_filename, h5_batch_data.shape[0]))
        h5_index += 1
        buffer_size = 0
        # recursive call
        insert_batch(data[capacity:, ...], label[capacity:, ...], last_batch)
    if last_batch and buffer_size > 0:
        h5_filename = output_filename_prefix + '_' + str(h5_index) + '.h5'
        save_h5(h5_filename, h5_batch_data[0:buffer_size, ...], h5_batch_label[0:buffer_size, ...], data_dtype, label_dtype)
        fout_room.write('mydata_h5/' + os.path.basename(h5_filename) + '\n')
        print('Stored {0} with size {1}'.format(h5_filename, buffer_size))
        h5_index += 1
        buffer_size = 0
    return


# Each labeled cloud is a plain-text .asc file with columns: x y z r g b label.
# Randomly sample NUM_POINT points from every cloud to form one training sample.
path = os.path.join(BASE_DIR + '/mydata_withlabel', '*.asc')
files = glob.glob(path)
points_list = []
for f in files:
    print(f)
    points = np.loadtxt(f)
    print(points.shape)
    sample = np.random.choice(points.shape[0], NUM_POINT)
    sample_data = points[sample, ...]
    print(sample_data.shape)
    points_list.append(sample_data)

data_label = np.stack(points_list, axis=0)
print(data_label.shape)
data = data_label[:, :, 0:6]
label = data_label[:, :, 6]
print(data.shape)
print(label.shape)

sample_cnt = 0
insert_batch(data, label, True)

# for i, data_label_filename in enumerate(data_label_files):
#     print(data_label_filename)
#     data, label = indoor3d_util.room2blocks_wrapper_normalized(data_label_filename, NUM_POINT,
#                                                                block_size=1.0, stride=0.5,
#                                                                random_sample=False, sample_num=None)
#     print('{0}, {1}'.format(data.shape, label.shape))
#     for _ in range(data.shape[0]):
#         fout_room.write(os.path.basename(data_label_filename)[0:-4]+'\n')
#     sample_cnt += data.shape[0]
#     insert_batch(data, label, i == len(data_label_files)-1)

fout_room.close()
# print("Total samples: {0}".format(sample_cnt))

The output of running it is shown below:
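To sanity-check the generated file, it can be loaded back with h5py; a minimal sketch, assuming the output path used above and running from the project root:

import h5py
import numpy as np

# Path assumes the data/mydata_h5 output directory created by gen_indoor3d_h5.py
with h5py.File('data/mydata_h5/ply_data_all_0.h5', 'r') as f:
    data = f['data'][:]    # expected (num_samples, 4096, 6): x y z r g b
    label = f['label'][:]  # expected (num_samples, 4096)
    print(data.shape, label.shape)
    print('labels present:', np.unique(label))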

5. Training the segmentation network

Modify sem_seg/train.py as follows:

import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
import argparse
import math
import h5py
import numpy as np
import socket
import os
import sys
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(BASE_DIR)
sys.path.append(BASE_DIR)
sys.path.append(ROOT_DIR)
sys.path.append(os.path.join(ROOT_DIR, 'utils'))
import provider
import tf_util
from model import *

parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')
parser.add_argument('--log_dir', default='log', help='Log dir [default: log]')
parser.add_argument('--num_point', type=int, default=4096, help='Point number [default: 4096]')
parser.add_argument('--max_epoch', type=int, default=500, help='Epoch to run [default: 500]')
parser.add_argument('--batch_size', type=int, default=2, help='Batch Size during training [default: 2]')
parser.add_argument('--learning_rate', type=float, default=0.001, help='Initial learning rate [default: 0.001]')
parser.add_argument('--momentum', type=float, default=0.9, help='Initial momentum [default: 0.9]')
parser.add_argument('--optimizer', default='adam', help='adam or momentum [default: adam]')
parser.add_argument('--decay_step', type=int, default=300000, help='Decay step for lr decay [default: 300000]')
parser.add_argument('--decay_rate', type=float, default=0.5, help='Decay rate for lr decay [default: 0.5]')
parser.add_argument('--test_area', type=int, default=6, help='Which area to use for test, option: 1-6 [default: 6]')
FLAGS = parser.parse_args()

BATCH_SIZE = FLAGS.batch_size
NUM_POINT = FLAGS.num_point
MAX_EPOCH = FLAGS.max_epoch
BASE_LEARNING_RATE = FLAGS.learning_rate
GPU_INDEX = FLAGS.gpu
MOMENTUM = FLAGS.momentum
OPTIMIZER = FLAGS.optimizer
DECAY_STEP = FLAGS.decay_step
DECAY_RATE = FLAGS.decay_rate

LOG_DIR = FLAGS.log_dir
if not os.path.exists(LOG_DIR): os.mkdir(LOG_DIR)
os.system('cp model.py %s' % (LOG_DIR))  # bkp of model def
os.system('cp train.py %s' % (LOG_DIR))  # bkp of train procedure
LOG_FOUT = open(os.path.join(LOG_DIR, 'log_train.txt'), 'w')
LOG_FOUT.write(str(FLAGS)+'\n')

MAX_NUM_POINT = 4096
NUM_CLASSES = 4

BN_INIT_DECAY = 0.5
BN_DECAY_DECAY_RATE = 0.5
#BN_DECAY_DECAY_STEP = float(DECAY_STEP * 2)
BN_DECAY_DECAY_STEP = float(DECAY_STEP)
BN_DECAY_CLIP = 0.99

HOSTNAME = socket.gethostname()

ALL_FILES = provider.getDataFiles(ROOT_DIR + '/data/mydata_h5/all_files.txt')
# room_filelist = [line.rstrip() for line in open('indoor3d_sem_seg_hdf5_data/room_filelist.txt')]

# Load ALL data
data_batch_list = []
label_batch_list = []
for h5_filename in ALL_FILES:
    data_batch, label_batch = provider.loadDataFile(ROOT_DIR + '/data/' + h5_filename)
    data_batch_list.append(data_batch)
    label_batch_list.append(label_batch)
data_batches = np.concatenate(data_batch_list, 0)
label_batches = np.concatenate(label_batch_list, 0)
print(data_batches.shape)
print(label_batches.shape)

# NOTE: the same 35 clouds are used for both training and evaluation here
train_data = data_batches
train_label = label_batches
test_data = data_batches
test_label = label_batches
print(train_data.shape, train_label.shape)
print(test_data.shape, test_label.shape)


def log_string(out_str):
    LOG_FOUT.write(out_str+'\n')
    LOG_FOUT.flush()
    print(out_str)


def get_learning_rate(batch):
    learning_rate = tf.train.exponential_decay(
                        BASE_LEARNING_RATE,  # Base learning rate.
                        batch * BATCH_SIZE,  # Current index into the dataset.
                        DECAY_STEP,          # Decay step.
                        DECAY_RATE,          # Decay rate.
                        staircase=True)
    learning_rate = tf.maximum(learning_rate, 0.00001)  # CLIP THE LEARNING RATE!!
    return learning_rate


def get_bn_decay(batch):
    bn_momentum = tf.train.exponential_decay(
                      BN_INIT_DECAY,
                      batch*BATCH_SIZE,
                      BN_DECAY_DECAY_STEP,
                      BN_DECAY_DECAY_RATE,
                      staircase=True)
    bn_decay = tf.minimum(BN_DECAY_CLIP, 1 - bn_momentum)
    return bn_decay


def train():
    with tf.Graph().as_default():
        with tf.device('/gpu:'+str(GPU_INDEX)):
            pointclouds_pl, labels_pl = placeholder_inputs(BATCH_SIZE, NUM_POINT)
            is_training_pl = tf.placeholder(tf.bool, shape=())

            # Note the global_step=batch parameter to minimize.
            # That tells the optimizer to helpfully increment the 'batch' parameter for you every time it trains.
            batch = tf.Variable(0)
            bn_decay = get_bn_decay(batch)
            tf.summary.scalar('bn_decay', bn_decay)

            # Get model and loss
            pred = get_model(pointclouds_pl, is_training_pl, bn_decay=bn_decay)
            loss = get_loss(pred, labels_pl)
            tf.summary.scalar('loss', loss)

            correct = tf.equal(tf.argmax(pred, 2), tf.to_int64(labels_pl))
            accuracy = tf.reduce_sum(tf.cast(correct, tf.float32)) / float(BATCH_SIZE*NUM_POINT)
            tf.summary.scalar('accuracy', accuracy)

            # Get training operator
            learning_rate = get_learning_rate(batch)
            tf.summary.scalar('learning_rate', learning_rate)
            if OPTIMIZER == 'momentum':
                optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=MOMENTUM)
            elif OPTIMIZER == 'adam':
                optimizer = tf.train.AdamOptimizer(learning_rate)
            train_op = optimizer.minimize(loss, global_step=batch)

            # Add ops to save and restore all the variables.
            saver = tf.train.Saver()

        # Create a session
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        config.allow_soft_placement = True
        config.log_device_placement = True
        sess = tf.Session(config=config)

        # Add summary writers
        merged = tf.summary.merge_all()
        train_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'train'),
                                             sess.graph)
        test_writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 'test'))

        # Init variables
        init = tf.global_variables_initializer()
        sess.run(init, {is_training_pl: True})

        ops = {'pointclouds_pl': pointclouds_pl,
               'labels_pl': labels_pl,
               'is_training_pl': is_training_pl,
               'pred': pred,
               'loss': loss,
               'train_op': train_op,
               'merged': merged,
               'step': batch}

        for epoch in range(MAX_EPOCH):
            log_string('**** EPOCH %03d ****' % (epoch))
            sys.stdout.flush()

            train_one_epoch(sess, ops, train_writer)
            eval_one_epoch(sess, ops, test_writer)

            # Save the variables to disk.
            if epoch % 10 == 0:
                save_path = saver.save(sess, os.path.join(LOG_DIR, "model.ckpt"))
                log_string("Model saved in file: %s" % save_path)


def train_one_epoch(sess, ops, train_writer):
    """ ops: dict mapping from string to tf ops """
    is_training = True

    log_string('----')
    current_data, current_label, _ = provider.shuffle_data(train_data, train_label)
    current_data = current_data[:, 0:NUM_POINT, :]
    current_label = current_label[:, 0:NUM_POINT]

    file_size = current_data.shape[0]
    num_batches = file_size // BATCH_SIZE

    total_correct = 0
    total_seen = 0
    loss_sum = 0

    for batch_idx in range(num_batches):
        if batch_idx % 1 == 0:
            print('Current batch/total batch num: %d/%d' % (batch_idx, num_batches))
        start_idx = batch_idx * BATCH_SIZE
        end_idx = (batch_idx+1) * BATCH_SIZE

        feed_dict = {ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :],
                     ops['labels_pl']: current_label[start_idx:end_idx],
                     ops['is_training_pl']: is_training}
        summary, step, _, loss_val, pred_val = sess.run([ops['merged'], ops['step'], ops['train_op'], ops['loss'], ops['pred']],
                                                        feed_dict=feed_dict)
        train_writer.add_summary(summary, step)
        pred_val = np.argmax(pred_val, 2)
        correct = np.sum(pred_val == current_label[start_idx:end_idx])
        total_correct += correct
        total_seen += (BATCH_SIZE*NUM_POINT)
        loss_sum += loss_val

    log_string('mean loss: %f' % (loss_sum / float(num_batches)))
    log_string('accuracy: %f' % (total_correct / float(total_seen)))


def eval_one_epoch(sess, ops, test_writer):
    """ ops: dict mapping from string to tf ops """
    is_training = False
    total_correct = 0
    total_seen = 0
    loss_sum = 0
    total_seen_class = [0 for _ in range(NUM_CLASSES)]
    total_correct_class = [0 for _ in range(NUM_CLASSES)]

    log_string('----')
    current_data, current_label, _ = provider.shuffle_data(test_data, test_label)
    current_data = current_data[:, 0:NUM_POINT, :]
    current_label = current_label[:, 0:NUM_POINT]
    current_label = np.squeeze(current_label)

    file_size = current_data.shape[0]
    num_batches = file_size // BATCH_SIZE

    for batch_idx in range(num_batches):
        start_idx = batch_idx * BATCH_SIZE
        end_idx = (batch_idx+1) * BATCH_SIZE

        feed_dict = {ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :],
                     ops['labels_pl']: current_label[start_idx:end_idx],
                     ops['is_training_pl']: is_training}
        summary, step, loss_val, pred_val = sess.run([ops['merged'], ops['step'], ops['loss'], ops['pred']],
                                                     feed_dict=feed_dict)
        test_writer.add_summary(summary, step)
        pred_val = np.argmax(pred_val, 2)
        correct = np.sum(pred_val == current_label[start_idx:end_idx])
        total_correct += correct
        total_seen += (BATCH_SIZE*NUM_POINT)
        loss_sum += (loss_val*BATCH_SIZE)
        for i in range(start_idx, end_idx):
            for j in range(NUM_POINT):
                l = current_label[i, j]
                total_seen_class[l] += 1
                total_correct_class[l] += (pred_val[i-start_idx, j] == l)

    log_string('eval mean loss: %f' % (loss_sum / float(total_seen/NUM_POINT)))
    log_string('eval accuracy: %f' % (total_correct / float(total_seen)))
    log_string('eval avg class acc: %f' % (np.mean(np.array(total_correct_class)/np.array(total_seen_class, dtype=np.float64))))


if __name__ == "__main__":
    train()
    LOG_FOUT.close()

Run the file to start training.

6. Prediction with the segmentation network

I modified sem_seg/batch_inference.py as follows:

import numpy as np
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
import argparse
import os
import sys
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(BASE_DIR)
sys.path.append(BASE_DIR)
from model import *
import indoor3d_util

parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')
parser.add_argument('--batch_size', type=int, default=1, help='Batch Size during training [default: 1]')
parser.add_argument('--num_point', type=int, default=4096*20, help='Point number [default: 4096*20]')
parser.add_argument('--model_path', default='log/model.ckpt', help='model checkpoint file path')
parser.add_argument('--dump_dir', default='dump', help='dump folder path')
parser.add_argument('--output_filelist', default='output.txt', help='TXT filename, filelist, each line is an output for a room')
parser.add_argument('--room_data_filelist', default='meta/area6_data_label.txt', help='TXT filename, filelist, each line is a test room data label file.')
parser.add_argument('--no_clutter', action='store_true', help='If true, do not count the clutter class')
parser.add_argument('--visu', default='true', help='Whether to output OBJ file for prediction visualization.')
FLAGS = parser.parse_args()

BATCH_SIZE = FLAGS.batch_size
NUM_POINT = FLAGS.num_point
MODEL_PATH = FLAGS.model_path
GPU_INDEX = FLAGS.gpu
DUMP_DIR = FLAGS.dump_dir
if not os.path.exists(DUMP_DIR): os.mkdir(DUMP_DIR)
LOG_FOUT = open(os.path.join(DUMP_DIR, 'log_evaluate.txt'), 'w')
LOG_FOUT.write(str(FLAGS)+'\n')

# Labeled clouds (x y z r g b label) used as test inputs
ROOM_PATH_LIST = [BASE_DIR + "/mydata_withlabel/tabel01 - Cloud_1_withlabel.asc",
                  BASE_DIR + "/mydata_withlabel/tabel01 - Cloud_2_withlabel.asc",
                  BASE_DIR + "/mydata_withlabel/tabel01 - Cloud_3_withlabel.asc",
                  BASE_DIR + "/mydata_withlabel/tabel05 - Cloud_1_withlabel.asc",
                  BASE_DIR + "/mydata_withlabel/tabel10 - Cloud_2_withlabel.asc"]

NUM_CLASSES = 4


def log_string(out_str):
    LOG_FOUT.write(out_str+'\n')
    LOG_FOUT.flush()
    print(out_str)


def evaluate():
    is_training = False

    with tf.device('/gpu:'+str(GPU_INDEX)):
        pointclouds_pl, labels_pl = placeholder_inputs(BATCH_SIZE, NUM_POINT)
        is_training_pl = tf.placeholder(tf.bool, shape=())

        # simple model
        pred = get_model(pointclouds_pl, is_training_pl)
        loss = get_loss(pred, labels_pl)
        pred_softmax = tf.nn.softmax(pred)

        # Add ops to save and restore all the variables.
        saver = tf.train.Saver()

    # Create a session
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.allow_soft_placement = True
    config.log_device_placement = True
    sess = tf.Session(config=config)

    # Restore variables from disk.
    saver.restore(sess, MODEL_PATH)
    log_string("Model restored.")

    ops = {'pointclouds_pl': pointclouds_pl,
           'labels_pl': labels_pl,
           'is_training_pl': is_training_pl,
           'pred': pred,
           'pred_softmax': pred_softmax,
           'loss': loss}

    for room_path in ROOM_PATH_LIST:
        out_data_label_filename = os.path.basename(room_path)[:-4] + '_pred.txt'
        out_data_label_filename = os.path.join(DUMP_DIR, out_data_label_filename)
        out_gt_label_filename = os.path.basename(room_path)[:-4] + '_gt.txt'
        out_gt_label_filename = os.path.join(DUMP_DIR, out_gt_label_filename)
        print(room_path, out_data_label_filename)
        eval_one_epoch(sess, ops, room_path, out_data_label_filename, out_gt_label_filename)


def eval_one_epoch(sess, ops, room_path, out_data_label_filename, out_gt_label_filename):
    error_cnt = 0
    is_training = False
    total_correct = 0
    total_seen = 0
    loss_sum = 0
    total_seen_class = [0 for _ in range(NUM_CLASSES)]
    total_correct_class = [0 for _ in range(NUM_CLASSES)]

    # Randomly sample NUM_POINT points from the labeled cloud and arrange them
    # into a single batch of shape (1, NUM_POINT, 7)
    points = np.loadtxt(room_path)
    print(points.shape)
    sample = np.random.choice(points.shape[0], NUM_POINT)
    sample_data = points[sample, ...]
    points_list = []
    points_list.append(sample_data)
    data_label = np.stack(points_list, axis=0)
    print(data_label.shape)
    current_data = data_label[:, :, 0:6]
    current_label = data_label[:, :, 6]
    print(current_data.shape)
    print(current_label.shape)

    file_size = current_data.shape[0]
    num_batches = file_size // BATCH_SIZE
    print(file_size)

    for batch_idx in range(num_batches):
        start_idx = batch_idx * BATCH_SIZE
        end_idx = (batch_idx+1) * BATCH_SIZE
        cur_batch_size = end_idx - start_idx

        feed_dict = {ops['pointclouds_pl']: current_data[start_idx:end_idx, :, :],
                     ops['labels_pl']: current_label[start_idx:end_idx],
                     ops['is_training_pl']: is_training}
        loss_val, pred_val = sess.run([ops['loss'], ops['pred_softmax']],
                                      feed_dict=feed_dict)

        if FLAGS.no_clutter:
            pred_label = np.argmax(pred_val[:, :, 0:12], 2)  # BxN
        else:
            pred_label = np.argmax(pred_val, 2)  # BxN

        correct = np.sum(pred_label == current_label[start_idx:end_idx, :])
        total_correct += correct
        total_seen += (cur_batch_size*NUM_POINT)
        loss_sum += (loss_val*BATCH_SIZE)

    # Save x y z r g b pred_label so the result can be inspected in CloudCompare
    pred_label = pred_label[:, :, np.newaxis]
    pred_data_label = np.concatenate([current_data, pred_label], 2)
    np.savetxt(out_data_label_filename, pred_data_label[0, :, :], fmt="%.8f %.8f %.8f %.8f %.8f %.8f %d",
               delimiter=" ")

    log_string('eval mean loss: %f' % (loss_sum / float(total_seen/NUM_POINT)))
    log_string('eval accuracy: %f' % (total_correct / float(total_seen)))
    return


if __name__ == '__main__':
    with tf.Graph().as_default():
        evaluate()
    LOG_FOUT.close()

Note that the 13 in model.py needs to be changed to 4, since this dataset only distinguishes four classes.
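Concretely, that 13 is the output-channel count of the final per-point classification layer in get_model(); the fragment below (not a standalone script, and the scope name may differ in your copy of model.py) shows roughly what the changed line looks like:

# Final per-point classification layer in get_model() of sem_seg/model.py.
# The output channel count must equal NUM_CLASSES (4 here instead of the
# original 13 S3DIS classes); the scope name is illustrative.
net = tf_util.conv2d(net, 4, [1, 1], padding='VALID', stride=[1, 1],
                     activation_fn=None, scope='conv8')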

Run the file; the results are saved in dump, and the output files, which carry the predicted label as a scalar field, can be opened in CloudCompare.

As you can see, the objects are roughly segmented.

Because only 4096 points were randomly sampled from each point-cloud file during training, the data is still quite sparse and some classes are under-represented in training. This could be optimized further; I will stop here, and interested readers can keep improving it. This post only aims to show how to train and predict on your own dataset.
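One simple direction, not done here, would be to draw several independent 4096-point samples from each cloud when building the HDF5 file, so that more of each cloud's points take part in training. A rough sketch, with the repeat count as a hypothetical parameter:

import numpy as np

NUM_POINT = 4096
SAMPLES_PER_CLOUD = 5  # hypothetical; tune to the size of your clouds

def sample_cloud(points, num_point=NUM_POINT, repeats=SAMPLES_PER_CLOUD):
    # Draw several independent num_point-sized samples from one labeled cloud
    # (points has columns x y z r g b label), instead of a single sample.
    out = []
    for _ in range(repeats):
        idx = np.random.choice(points.shape[0], num_point,
                               replace=points.shape[0] < num_point)
        out.append(points[idx, ...])
    return np.stack(out, axis=0)  # shape: (repeats, num_point, 7)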

My project files are uploaded here:

Link: https://pan.baidu.com/s/1HWRCwtorUC6fVWeaKjh5Qg  extraction code: 318v

Reference blogs

制作PointNet以及PointNet++点云训练样本_点云数据集制作_CC047964的博客-CSDN博客

点云标注 - 知乎
