
[Python Advanced] Implementing multiprocessing: using the pool.map() method

Example 1 (the simplest case):

import time
import multiprocessing
from multiprocessing.pool import Pool

def numsCheng(i):
    return i * 2

if __name__ == '__main__':
    time1 = time.time()
    nums_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    max_processes = multiprocessing.cpu_count()
    print(f"Max number of processes: {max_processes}")
    pool = Pool(processes=max_processes)   # use every available core
    result = pool.map(numsCheng, nums_list)
    pool.close()        # close the pool: no new tasks will be accepted
    pool.join()         # main process blocks until all workers have exited

    print(result)
    time2 = time.time()
    print("Elapsed time:", time2 - time1)

Output:

[2, 4, 6, 8, 10, 12, 14, 16, 18]
Elapsed time: 0.21639275550842285
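As an aside, since Python 3.3 Pool can also be used as a context manager, so the shutdown is handled automatically when the block exits. A minimal sketch of the same doubling task written that way (this variant is mine, not from the original post):

import time
from multiprocessing import Pool, cpu_count

def numsCheng(i):
    return i * 2

if __name__ == '__main__':
    time1 = time.time()
    nums_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    # leaving the "with" block calls terminate() on the pool,
    # so no explicit close()/join() is needed here
    with Pool(processes=cpu_count()) as pool:
        result = pool.map(numsCheng, nums_list)
    print(result)
    print("Elapsed time:", time.time() - time1)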

Example 2 (my own modified version):

import os
import re
import matplotlib.pyplot as plt
from PIL import Image
from multiprocessing import Pool, freeze_support, cpu_count

# getMaxLineData(file_path, connect_path, middle_dir) is my own worker
# function, defined elsewhere in the script.

if __name__ == '__main__':      # these two lines prevent the error shown below
    freeze_support()

    base_dir = r"C:\\Users\jie\Desktop\轨迹_大模型任务\数据集level1"
    for sub_dir1 in os.listdir(base_dir):
        if sub_dir1 == "量测场景":
            multi_tuple = []    # list of argument tuples to feed into map
            sub_dir1_path = os.path.join(base_dir, sub_dir1)  # the ./数据集level1/量测场景/ directory
            for filename in os.listdir(sub_dir1_path):
                file_path = os.path.join(base_dir, sub_dir1, filename)
                print(file_path)  # e.g. C:\\Users\jie\Desktop\轨迹_大模型任务\数据集level1\关联表\关联结果-0.csv

                # extract the matching part: map each measurement file to its association table
                match = re.search(r"-.*\.csv$", filename)  # e.g. -0.csv
                connect_name = match.group()

                # extract the numeric part
                match = re.search(r"\d+", filename)
                number = match.group()

                scene_dir = 'new_data/场景' + number
                middle_dir = '场景' + number

                pic_cj_dir = scene_dir + "/predict_picture"
                if not os.path.exists(pic_cj_dir):
                    os.makedirs(pic_cj_dir)
                csv_cj_dir = scene_dir + "/predict_csv"
                if not os.path.exists(csv_cj_dir):
                    os.makedirs(csv_cj_dir)

                connect_name = "关联结果" + connect_name
                connect_path = os.path.join(base_dir, '关联表', connect_name)

                print("Writing:", middle_dir)
                multi_tuple.append((file_path, connect_path, middle_dir))
            # getMaxLineData(file_path, connect_path, middle_dir)  # the old single-process call; middle_dir is 场景i

            num_processes = cpu_count()  # 8 on my machine
            print("---->>>>>>>>  Using multiprocessing, CPU count is", num_processes, "  <<<<<<<--------")
            pool = Pool(processes=num_processes)
            result = pool.map(getMaxLineData, multi_tuple)  # each element of multi_tuple is passed as one tuple argument
            pool.close()  # close the pool: no new tasks will be accepted
            pool.join()   # main process blocks until all workers have exited
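One caveat worth noting: pool.map() hands each element of multi_tuple to the worker as a single tuple argument. If getMaxLineData is declared with three separate parameters, as the commented-out single-process call suggests, then pool.starmap() is the variant that unpacks each tuple into positional arguments. A minimal sketch with a hypothetical stand-in worker (the real getMaxLineData is not shown in this post):

from multiprocessing import Pool, cpu_count

def stand_in_worker(file_path, connect_path, middle_dir):
    # hypothetical replacement for getMaxLineData: it only shows that
    # the three fields of each tuple arrive as separate arguments
    return middle_dir + " <- " + file_path + " + " + connect_path

if __name__ == '__main__':
    multi_tuple = [
        ("measure-0.csv", "connect-0.csv", "scene0"),
        ("measure-1.csv", "connect-1.csv", "scene1"),
    ]
    with Pool(processes=cpu_count()) as pool:
        # starmap calls stand_in_worker(*args) for every args tuple in multi_tuple
        results = pool.starmap(stand_in_worker, multi_tuple)
    print(results)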

Partway through I hit an error, shown below:

RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable

Solution:
Just add the following code:

if __name__ == '__main__':
    freeze_support()
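The reason behind the error: on Windows (and in frozen executables) child processes are started with the spawn method, which re-imports the main module in every worker. Without the __name__ == '__main__' guard, the module-level code that creates the pool would run again in each child and try to spawn yet more processes, which is exactly what the RuntimeError complains about. So the pool-creating code has to sit under that guard; a minimal sketch of the overall shape, with a placeholder worker standing in for the real task:

from multiprocessing import Pool, freeze_support, cpu_count

def worker(x):
    # placeholder for the real task (e.g. getMaxLineData)
    return x * x

if __name__ == '__main__':
    freeze_support()   # only strictly needed when freezing the script into an executable
    with Pool(processes=cpu_count()) as pool:
        print(pool.map(worker, range(10)))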