requests is an excellent HTTP client library that we can use to write crawlers or tests; for general usage of the library, see the earlier article
【爬虫】requests请求方式、Response、Session
This post records a few ways to download files with requests.
For a small file, such as an ordinary image, it is perfectly fine to load the whole response into memory at once, i.e. a plain GET request:
import requests

req = requests.get("http://www.test.com/xxxxx/test.jpg")
with open(r"c:\test.jpg", "wb") as f:
    f.write(req.content)
For a large file, loading everything in one go can exhaust memory, so read and write it in chunks instead, handling only a small block at a time:
import requests

req = requests.get("http://www.test.com/xxxxx/test.jpg", stream=True)
with open(r"c:\test.jpg", "wb") as f:
    for chunk in req.iter_content(chunk_size=1024):  # read 1024 bytes at a time
        f.write(chunk)
The example above reads the body in blocks with the iter_content() method; there is also an iter_lines() method that works much the same way but yields the stream split on line boundaries. Note that the last line of its docstring warns: "This method is not reentrant safe."
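As a quick aside, iter_lines() can be exercised without any network at all by attaching an in-memory byte stream as the response's raw body. This relies on requests falling back to raw.read() for non-urllib3 bodies, so treat it as a testing sketch rather than a documented public pattern:

```python
import io

import requests

# Build a Response by hand and feed it an in-memory byte stream;
# iter_lines() then yields the body split on line boundaries.
r = requests.Response()
r.status_code = 200
r.raw = io.BytesIO(b"line1\nline2\nline3")

lines = list(r.iter_lines())
print(lines)  # [b'line1', b'line2', b'line3']
```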
We can improve this a little further to support resuming interrupted downloads, so that a large download can pick up where it left off. For convenience, wrap the code in a class:
import os
import sys

import requests


class Downloader(object):
    def __init__(self, url, file_path):
        self.url = url
        self.file_path = file_path

    def start(self):
        res_length = requests.get(self.url, stream=True)
        total_size = int(res_length.headers['Content-Length'])
        print(res_length.headers)
        print(res_length)

        if os.path.exists(self.file_path):
            # Resume: treat whatever is on disk as already downloaded
            temp_size = os.path.getsize(self.file_path)
            print("Current: %d bytes, total: %d bytes, downloaded: %2.2f%%"
                  % (temp_size, total_size, 100 * temp_size / total_size))
        else:
            temp_size = 0
            print("Total: %d bytes, starting download..." % (total_size,))

        # Ask the server for the remaining bytes only
        headers = {
            'Range': 'bytes=%d-' % temp_size,
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) "
                          "Gecko/20100101 Firefox/81.0",
        }
        res_left = requests.get(self.url, stream=True, headers=headers)

        # Append to the existing file rather than overwriting it
        with open(self.file_path, "ab") as f:
            for chunk in res_left.iter_content(chunk_size=1024):
                temp_size += len(chunk)
                f.write(chunk)
                f.flush()

                done = int(50 * temp_size / total_size)
                sys.stdout.write("\r[%s%s] %d%%"
                                 % ('█' * done, ' ' * (50 - done),
                                    100 * temp_size / total_size))
                sys.stdout.flush()


if __name__ == '__main__':
    url = "https://vd2.bdstatic.com/mda-imt4u2h7u35k/xxxxxxx"
    path = "C:/test.mp4"
    downloader = Downloader(url, path)
    downloader.start()
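The progress bar inside start() is just integer arithmetic: the downloaded ratio is mapped onto a 50-character bar. Pulled out into a standalone helper (the name progress_bar is mine, not part of the class above), the logic looks like this:

```python
def progress_bar(temp_size, total_size, width=50):
    """Render the same bar the Downloader prints: filled blocks plus a percentage."""
    done = int(width * temp_size / total_size)    # number of filled cells
    percent = int(100 * temp_size / total_size)   # integer percent, as %d truncates
    return "[%s%s] %d%%" % ("█" * done, " " * (width - done), percent)


print(progress_bar(512, 1024))  # half done: 25 filled cells out of 50
```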
One more tip:
To cut down the Python-level copy loop, you can use shutil.copyfileobj() to stream the raw response straight to disk. Be aware that r.raw is the undecoded socket stream, so if the server sends the body gzip-compressed, the compressed bytes are what get written.
import requests
import shutil

def download_file(url, path):
    with requests.get(url, stream=True) as r:
        with open(path, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
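shutil.copyfileobj() moves data between any two file-like objects in fixed-size blocks (64 KB by default on most platforms), which is why it pairs well with r.raw. Its behaviour can be seen offline with in-memory streams standing in for the response and the output file:

```python
import io
import shutil

src = io.BytesIO(b"x" * 1024)   # stands in for r.raw
dst = io.BytesIO()              # stands in for the output file

shutil.copyfileobj(src, dst, length=256)  # copy in 256-byte blocks

print(len(dst.getvalue()))  # 1024
```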