Recently I needed to implement a program in Python to upload files sitting on an Amazon EC2 instance to S3 as quickly as possible. The files are 30 KB each.

I have tried several solutions, using multithreading, multiprocessing, and coroutines. Here are my performance test results on Amazon EC2:
3600 (number of files) × 30 KB (file size) ≈ 105 MB (total):

- **5.5 s [4 processes + 100 coroutines]**
- 10 s [200 coroutines]
- 14 s [10 threads]
The code is shown below.
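(The snippets call two helpers, `put` and `connect_to_s3_sevice`, that are omitted here. A minimal sketch of what they might look like with boto — the bucket name and key layout are my assumptions, not from the original code:)

    import os

    import boto
    from boto.s3.key import Key

    BUCKET_NAME = 'my-bucket'  # assumption: the real bucket name is not shown

    def connect_to_s3_sevice():
        # Credentials are picked up from the environment / boto config.
        return boto.connect_s3().get_bucket(BUCKET_NAME)

    def put(client, path):
        # Upload one local file, keyed by its basename (assumed key layout).
        k = Key(client)
        k.key = os.path.basename(path)
        k.set_contents_from_filename(path)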
For multithreading:

    import threading

    def mput(i, client, files):
        # Thread i uploads only the subset of files hashed to it.
        for f in files:
            if hash(f) % NTHREAD == i:
                put(client, os.path.join(DATA_DIR, f))

    def test_multithreading():
        client = connect_to_s3_sevice()
        files = os.listdir(DATA_DIR)
        ths = [threading.Thread(target=mput, args=(i, client, files))
               for i in range(NTHREAD)]
        for th in ths:
            th.daemon = True
            th.start()
        for th in ths:
            th.join()
For coroutines:

    import functools
    import sys

    import eventlet

    client = connect_to_s3_sevice()
    pool = eventlet.GreenPool(int(sys.argv[2]))  # pool size from the command line
    xput = functools.partial(put, client)
    files = os.listdir(DATA_DIR)
    for f in files:
        pool.spawn_n(xput, os.path.join(DATA_DIR, f))
    pool.waitall()
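(For the green threads to actually overlap the network I/O, I assume eventlet's monkey patching is applied at the top of the script, e.g.:)

    import eventlet
    eventlet.monkey_patch()  # make socket and other blocking calls cooperative
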
For multiprocessing:

    import multiprocessing

    def pproc(i):
        # Each process gets its own S3 connection and its own green pool,
        # and uploads only the subset of files hashed to it.
        client = connect_to_s3_sevice()
        files = os.listdir(DATA_DIR)
        pool = eventlet.GreenPool(100)
        xput = functools.partial(put, client)
        for f in files:
            if hash(f) % NPROCESS == i:
                pool.spawn_n(xput, os.path.join(DATA_DIR, f))
        pool.waitall()

    def test_multiproc():
        procs = [multiprocessing.Process(target=pproc, args=(i,))
                 for i in range(NPROCESS)]
        for p in procs:
            p.daemon = True
            p.start()
        for p in procs:
            p.join()
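(Since the coroutine version reads its pool size from `sys.argv[2]`, there is presumably a small driver; a hypothetical sketch, with `test_coroutine` as an assumed wrapper around the coroutine snippet above:)

    if __name__ == '__main__':
        mode = sys.argv[1]          # assumed: sys.argv[1] selects which test to run
        if mode == 'thread':
            test_multithreading()
        elif mode == 'coroutine':
            test_coroutine()        # hypothetical wrapper for the snippet above
        elif mode == 'process':
            test_multiproc()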
The machine is an Ubuntu 14.04 box with 2 CPUs (2.50 GHz) and 4 GB of RAM.
The best throughput achieved is about 19 MB/s (105 MB / 5.5 s). Overall, that is too slow. Is there any way to speed it up? Would Stackless Python be faster?