
This post explains how to download files from given URLs to your local machine with Python and save them into the appropriate folders. Let's take a look.


Downloading Files from URLs with Python and Saving Them to the Corresponding Directories

Introduction

In practice, image datasets are often distributed as txt files that contain only the image URLs. To make later analysis easier, the images need to be downloaded and stored in folders by category. This article uses the image classification dataset provided by Alexander Kim on GitHub as an example: the image samples it lists are downloaded and saved by class.
Environment: Python 3.6.5, Anaconda, VSCode

1. Download the dataset files

Create a project folder, download the raw_data folder from the GitHub project mentioned above, and save it into the project directory.

2. Get the sample file locations

Write get_doc_path.py, which, given a root directory, collects the paths of all dataset files in that directory and its subdirectories.

import os


def get_file(root_path, all_files=None):
    '''
    Recursively walk root_path and its subdirectories and collect the path of
    every file. Returns a dict mapping file name -> file path.
    '''
    if all_files is None:  # avoid the mutable-default-argument pitfall
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files


if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))
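
For reference, the same traversal could also be done with the standard library's os.walk instead of manual recursion. This is only a sketch, not part of the original project (the helper name get_file_walk is mine):

import os

def get_file_walk(root_path):
    # Collect {file name: file path} for every file under root_path using os.walk.
    all_files = {}
    for dirpath, dirnames, filenames in os.walk(root_path):
        for name in filenames:
            all_files[name] = os.path.join(dirpath, name)
    return all_files

if __name__ == '__main__':
    print(get_file_walk('./raw_data'))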

3. Download the files

3.1 Read the URL lists
# paths is the dict returned by get_doc_path.get_file(root_path), see section 2
for filename, path in paths.items():
    print('reading file: {}'.format(filename))
    with open(path, 'r') as f:
        lines = f.readlines()
        url_list = []
        for line in lines:
            url_list.append(line.strip('\n'))  # one URL per line, strip the newline
        print(url_list)
3.2 Create the folders
foldername = "./picture_get_by_url/pic_download/{}".format(filename.split('.')[0])
if not os.path.exists(foldername):
    print("Selected folder does not exist, trying to create it.")
    os.makedirs(foldername)
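As a side note, on Python 3.2 and later the existence check can be folded into the makedirs call itself via exist_ok; a minimal sketch (the concrete path below is just an example):

import os

folder_path = "./picture_get_by_url/pic_download/drawings"  # example path
os.makedirs(folder_path, exist_ok=True)  # create it only if it does not exist yet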
3.3 Download the images
def get_pic_by_url(folder_path, lists):
    # requires: import os, urllib.request
    if not os.path.exists(folder_path):
        print("Selected folder does not exist, trying to create it.")
        os.makedirs(folder_path)
    for url in lists:
        print("Try downloading file: {}".format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print("File already exists, skipping.")
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print("Error occurred when downloading file, error message:")
                print(e)
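
Some image hosts reject requests that carry urllib's default User-Agent. If many downloads fail with HTTP errors, installing a global opener with a browser-like header before calling urlretrieve may help; a sketch (the header value is only an example):

import urllib.request

opener = urllib.request.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]  # example browser-like header
urllib.request.install_opener(opener)  # urlretrieve will use this opener from now on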

4. Complete source code

4.1 get_doc_path.py
import os


def get_file(root_path, all_files=None):
    '''
    Recursively walk root_path and its subdirectories and collect the path of
    every file. Returns a dict mapping file name -> file path.
    '''
    if all_files is None:  # avoid the mutable-default-argument pitfall
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files


if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))
4.2 get_pic.py
import get_doc_path
import os
import urllib.request


def get_pic_by_url(folder_path, lists):
    if not os.path.exists(folder_path):
        print("Selected folder does not exist, trying to create it.")
        os.makedirs(folder_path)
    for url in lists:
        print("Try downloading file: {}".format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print("File already exists, skipping.")
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print("Error occurred when downloading file, error message:")
                print(e)


if __name__ == "__main__":
    root_path = './picture_get_by_url/raw_data'
    paths = get_doc_path.get_file(root_path)
    print(paths)
    for filename, path in paths.items():
        print('reading file: {}'.format(filename))
        with open(path, 'r') as f:
            lines = f.readlines()
            url_list = []
            for line in lines:
                url_list.append(line.strip('\n'))
            foldername = "./picture_get_by_url/pic_download/{}".format(filename.split('.')[0])
            get_pic_by_url(foldername, url_list)
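
For completeness, the download step could also be written with the third-party requests library, which makes it easy to set a timeout so a single unresponsive URL does not stall the whole run. This is only a sketch under that assumption (requests must be installed separately), not part of the original project:

import requests

def download_file(url, filepath, timeout=10):
    # Download url to filepath; return True on success, False on any error.
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()  # raise for 4xx/5xx status codes
        with open(filepath, 'wb') as f:
            f.write(resp.content)
        return True
    except Exception as e:
        print("Error occurred when downloading file, error message:")
        print(e)
        return False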
4.3 Run results

Run get_pic.py.
If the program stops unexpectedly or is run again, it automatically skips files that have already been downloaded and continues with the remaining ones.

{'urls_drawings.txt': './picture_get_by_url/raw_data/drawings/urls_drawings.txt', 'urls_hentai.txt': './picture_get_by_url/raw_data/hentai/urls_hentai.txt', 'urls_neutral.txt': './picture_get_by_url/raw_data/neutral/urls_neutral.txt', 'urls_porn.txt': './picture_get_by_url/raw_data/porn/urls_porn.txt', 'urls_sexy.txt': './picture_get_by_url/raw_data/sexy/urls_sexy.txt'}
reading file: urls_drawings.txt
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://ak1.polyvoreimg.com/cgi/img-thing/size/l/tid/xxxxxx.jpg
Error occurred when downloading file, error message:
HTTP Error 502: No data received from server or forwarder
Try downloading file: http://akicocotte.weblike.jp/gaugau/xxxxxx.jpg
Try downloading file: http://animewriter.files.wordpress.com/2009/01/nagisa-xxxxxx-xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
...

Note: because of the nature of the sample dataset, the addresses above have been replaced with xxxxxx, and the example project is no longer available, but the approach still applies.

Update 2020-09-23: dataset address: https://github.com/ZQ-Qi/nsfw_data_scrapper. If you simply want to study and practice the code in this article, you can download that dataset and try it out.
