During development, if the program misbehaves we can attach a debugger and pinpoint exactly which step went wrong, so problems are easy to track down. Once development is finished, however, the program is deployed to production, where the code effectively runs in a black box: we can only observe its external behavior, not the state of each step as it executes. In that environment something will inevitably go wrong somewhere, possibly a problem we never hit during development. What then?
If all we know is the symptom, with no other information, our only option is to try to reproduce the problem from the symptom and then debug it step by step. That is hard: in most cases we cannot reproduce the issue precisely, and debugging itself eats an enormous amount of time, so fixing a production incident becomes very painful. If, on the other hand, we had been writing logs all along, then whether the program ran normally or crashed we would have timestamps, state records, and error records, and we could trace exactly what happened at the time and narrow down the problem quickly.
Logging is therefore essential: software without proper logging can hardly be called finished software. As developers we need to take logging seriously and do it well.
The logging module has several advantages over print: severity levels, multiple output destinations, and configurable message formats, all controllable without touching the call sites.
Powerful as the logging module is, it has its pitfalls, which we will walk through one by one.
That covers the basic architecture of the logging module and what each of its objects does; now let's look at its usage in detail.
```python
import logging

logging.debug('debug message')
logging.warning('only this one is printed...')
logging.info('info message')
```
Because the root logger's default level is WARNING, only the warning message reaches the console. This bare usage is not recommended.
-------------------------------------------------------------------
Using logging.basicConfig() to print messages to the console
```python
import logging

logging.basicConfig(format='%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s',
                    level=logging.DEBUG)
logging.debug('debug message')
logging.info('info message')
logging.warning('warning message')
logging.error('error message')
logging.critical('critical message')
```
Because level is set to logging.DEBUG in logging.basicConfig(), every message at or above DEBUG is printed to the console.
Log levels: debug < info < warning < error < critical
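The named levels are just integers under the hood, which is why this ordering holds; a quick sanity check using only the standard library:

```python
import logging

# Each named level is an integer; higher means more severe.
levels = [logging.DEBUG, logging.INFO, logging.WARNING,
          logging.ERROR, logging.CRITICAL]
print(levels)  # [10, 20, 30, 40, 50]

# A record is emitted only when its level is >= the configured threshold,
# so the ordering debug < info < warning < error < critical is numeric.
print(logging.getLevelName(logging.ERROR))  # ERROR
```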
Using logging.basicConfig() to save logs to a file
```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    filename='new.log',
                    # filemode: 'w' truncates and rewrites the log on every run,
                    # 'a' appends; 'a' is the default when omitted
                    filemode='a',
                    # log message format
                    format='%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s')
```
Note: once filename and filemode are set in logging.basicConfig(), logs are only saved to the file and nothing is printed to the console!
Addendum (2019/5/29):
When some fields are long, the log output becomes ragged; we can give the fields a fixed width:
%(asctime)s %(name)s %(levelname)-8s %(filename)-25s %(lineno)-4s %(message)s
Here -8 means left-aligned with a minimum field width of 8, padded with spaces by default.
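This is ordinary printf-style field padding, so you can see the effect without the logging machinery at all:

```python
# '-8' = minimum field width 8, '-' = left-aligned, space-padded.
for name in ('INFO', 'WARNING', 'DEBUG'):
    print('[%-8s]' % name)
# The brackets line up in a fixed-width column:
# [INFO    ]
# [WARNING ]
# [DEBUG   ]
```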
Ahem, now for the real topic: the recommended way is to use the logging components: loggers (Logger), handlers (Handler), filters (Filter), and formatters (Formatter).
A quick tour of the commonly used configuration:
Logger
Logger.setLevel() specifies the lowest-severity log message the logger will handle.
Logger.addHandler() and Logger.removeHandler() add and remove handler objects from the logger; see Handlers below.
Logger.addFilter() and Logger.removeFilter() add and remove filter objects from the logger.
Handler
logging.StreamHandler -> console output
logging.FileHandler -> file output
logging.handlers.RotatingFileHandler -> splits the log file by size: once the file reaches the given size, a new file is started
logging.handlers.TimedRotatingFileHandler -> splits the log file by time
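As a quick sketch of the size-based variant (the file name sized.log and the limits are made up for illustration):

```python
import logging
from logging.handlers import RotatingFileHandler

# When sized.log exceeds maxBytes it is renamed sized.log.1 (then .2, ...)
# and a fresh sized.log is started; at most backupCount old files are kept.
handler = RotatingFileHandler('sized.log', maxBytes=1024, backupCount=3)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('rotate_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

# Enough records to force at least one rollover past the 1 KB limit.
for i in range(100):
    logger.info('message %d', i)
```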
Formatters
A Formatter object defines the final layout, structure, and content of a log message. The default time format is %Y-%m-%d %H:%M:%S. Commonly used Formatter fields:
%(name)s | name of the Logger |
%(levelno)s | log level as a number |
%(levelname)s | log level as text |
%(pathname)s | full path of the module issuing the logging call (may be unavailable) |
%(filename)s | file name of the module issuing the logging call |
%(module)s | name of the module issuing the logging call |
%(funcName)s | name of the function issuing the logging call |
%(lineno)d | line number of the logging call |
%(created)f | time of creation, as a standard UNIX floating-point timestamp |
%(relativeCreated)d | milliseconds elapsed since the logging module was loaded when the record was emitted |
%(asctime)s | current time as a string; the default format is "2003-07-08 16:49:45,896" (milliseconds after the comma) |
%(thread)d | thread ID (may be unavailable) |
%(threadName)s | thread name (may be unavailable) |
%(process)d | process ID (may be unavailable) |
%(message)s | the message supplied by the user |
An example will make this clear:
```python
import logging
import os
import re
from logging import handlers

filename = os.path.join(dirpath, "log")  # dirpath is defined elsewhere

fmt = '%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s'
log_format = logging.Formatter(fmt)  # build the log format

sh = logging.StreamHandler()  # output to the console
sh.setFormatter(log_format)   # format used on screen

# save logs to a file, rotating by time
fh = handlers.TimedRotatingFileHandler(filename=filename, interval=1,
                                       when='M', backupCount=2, delay=True)
fh.suffix = "%Y-%m-%d_%H-%M"
fh.extMatch = re.compile(r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}$")
fh.setFormatter(log_format)

# logger
logger = logging.getLogger(__name__)
logger.setLevel("DEBUG")  # set the log level
logger.addHandler(sh)
logger.addHandler(fh)
```
TimedRotatingFileHandler rotates by time.
filename is the prefix of the output log file name, e.g. log/myapp.log.
when is a string defined as follows:
"S": Seconds
"M": Minutes
"H": Hours
"D": Days
"W": Week day (0=Monday)
"midnight": Roll over at midnight
interval is how many units of when to wait before the logger automatically rolls over to a new file. The new file's name is determined by filename + suffix; if it collides with an existing file's name, the old file is silently overwritten, so in some cases suffix must be chosen so that it cannot repeat within one when interval. backupCount is the number of old log files to keep; the default of 0 means old logs are never deleted automatically. If it is set to 2, then whenever a new file is created the library checks whether more than 2 backups exist and, if so, deletes the oldest ones first.
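For instance, a common production setup is daily rotation at midnight with a week of history (a sketch; the path daily.log is made up):

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# when='midnight' rolls over at midnight; backupCount=7 keeps a week of
# files named daily.log.YYYY-MM-DD (filename + suffix); older ones are deleted.
fh = TimedRotatingFileHandler('daily.log', when='midnight',
                              interval=1, backupCount=7)
fh.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('daily_demo')
logger.setLevel(logging.INFO)
logger.addHandler(fh)
logger.info('rotated at midnight, 7 backups kept')
```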
extMatch is initialized according to the value of when:
'S': suffix="%Y-%m-%d_%H-%M-%S", extMatch=r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}"
'M': suffix="%Y-%m-%d_%H-%M", extMatch=r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}"
'H': suffix="%Y-%m-%d_%H", extMatch=r"^\d{4}-\d{2}-\d{2}_\d{2}"
'D': suffix="%Y-%m-%d", extMatch=r"^\d{4}-\d{2}-\d{2}"
'MIDNIGHT': suffix="%Y-%m-%d", extMatch=r"^\d{4}-\d{2}-\d{2}"
'W': suffix="%Y-%m-%d", extMatch=r"^\d{4}-\d{2}-\d{2}"
If delay=False, records are written straight to the file without the delayed open, and on Windows this triggers a "Win32 error: the file is being used by another process", because the rotation is not safe across processes. Worse, in practice each process holds its own open handle to the file, so when the rollover condition is reached the processes' renames trample on each other, among other problems.
In short, logging's RotatingFileHandler and TimedRotatingFileHandler do not support multiprocess use (they are thread-safe within one process, but not process-safe). The options:
1. Modify the source; in real development this is particularly troublesome.
2. Use the third-party library ConcurrentLogHandler; but it only splits logs by file size, while splitting by time is the more common need.
3. Subclass and override the relevant methods; simple and practical. The rest of this section covers this approach:
The overridden versions floating around the web all have problems of one sort or another. There is an existing open-source implementation on GitHub that is quite complete and has survived production use, although it does have a drawback (relatively low performance):
https://github.com/kieslee/mlogging
Note that although the author says it only supports Linux, it also works on Windows with a small modification.
Here is the source, which subclasses and overrides logging's StreamHandler and friends:
```python
from logging import StreamHandler, FileHandler
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler
import fcntl, time, os, codecs, string, re, types, cPickle, struct, shutil
from stat import ST_DEV, ST_INO, ST_MTIME


class StreamHandler_MP(StreamHandler):
    """
    A handler class which writes logging records, appropriately formatted,
    to a stream. Use for multiprocess.
    """

    def emit(self, record):
        """
        Emit a record.
        First seek the end of file for multiprocess to log to the same file
        """
        try:
            if hasattr(self.stream, "seek"):
                self.stream.seek(0, os.SEEK_END)
        except IOError, e:
            pass

        StreamHandler.emit(self, record)


class FileHandler_MP(FileHandler, StreamHandler_MP):
    """
    A handler class which writes formatted logging records to disk files
    for multiprocess
    """
    def emit(self, record):
        """
        Emit a record.
        If the stream was not opened because 'delay' was specified in the
        constructor, open it before calling the superclass's emit.
        """
        if self.stream is None:
            self.stream = self._open()
        StreamHandler_MP.emit(self, record)


class RotatingFileHandler_MP(RotatingFileHandler, FileHandler_MP):
    """
    Handler for logging to a set of files, which switches from one file
    to the next when the current file reaches a certain size.

    Based on logging.RotatingFileHandler, modified for Multiprocess
    """
    _lock_dir = '.lock'
    if os.path.exists(_lock_dir):
        pass
    else:
        os.mkdir(_lock_dir)

    def doRollover(self):
        """
        Do a rollover, as described in __init__().
        For multiprocess, we use shutil.copy instead of rename.
        """
        self.stream.close()
        if self.backupCount > 0:
            for i in range(self.backupCount - 1, 0, -1):
                sfn = "%s.%d" % (self.baseFilename, i)
                dfn = "%s.%d" % (self.baseFilename, i + 1)
                if os.path.exists(sfn):
                    if os.path.exists(dfn):
                        os.remove(dfn)
                    shutil.copy(sfn, dfn)
            dfn = self.baseFilename + ".1"
            if os.path.exists(dfn):
                os.remove(dfn)
            if os.path.exists(self.baseFilename):
                shutil.copy(self.baseFilename, dfn)
        self.mode = 'w'
        self.stream = self._open()

    def emit(self, record):
        """
        Emit a record.
        Output the record to the file, catering for rollover as described
        in doRollover().

        For multiprocess, we use file lock. Any better method ?
        """
        try:
            if self.shouldRollover(record):
                self.doRollover()
            FileLock = self._lock_dir + '/' + os.path.basename(self.baseFilename) + '.' + record.levelname
            f = open(FileLock, "w+")
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            FileHandler_MP.emit(self, record)
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
            f.close()
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)


class TimedRotatingFileHandler_MP(TimedRotatingFileHandler, FileHandler_MP):
    """
    Handler for logging to a file, rotating the log file at certain timed
    intervals.
    If backupCount is > 0, when rollover is done, no more than backupCount
    files are kept - the oldest ones are deleted.
    """
    _lock_dir = '.lock'
    if os.path.exists(_lock_dir):
        pass
    else:
        os.mkdir(_lock_dir)

    def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0):
        FileHandler_MP.__init__(self, filename, 'a', encoding, delay)
        self.encoding = encoding
        self.when = string.upper(when)
        self.backupCount = backupCount
        self.utc = utc
        # Calculate the real rollover interval, which is just the number of
        # seconds between rollovers. Also set the filename suffix used when
        # a rollover occurs. Current 'when' events supported:
        # S - Seconds
        # M - Minutes
        # H - Hours
        # D - Days
        # midnight - roll over at midnight
        # W{0-6} - roll over on a certain day; 0 - Monday
        #
        # Case of the 'when' specifier is not important; lower or upper case
        # will work.
        if self.when == 'S':
            self.suffix = "%Y-%m-%d_%H-%M-%S"
            self.extMatch = r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}$"
        elif self.when == 'M':
            self.suffix = "%Y-%m-%d_%H-%M"
            self.extMatch = r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}$"
        elif self.when == 'H':
            self.suffix = "%Y-%m-%d_%H"
            self.extMatch = r"^\d{4}-\d{2}-\d{2}_\d{2}$"
        elif self.when == 'D' or self.when == 'MIDNIGHT':
            self.suffix = "%Y-%m-%d"
            self.extMatch = r"^\d{4}-\d{2}-\d{2}$"
        elif self.when.startswith('W'):
            if len(self.when) != 2:
                raise ValueError("You must specify a day for weekly rollover from 0 to 6 (0 is Monday): %s" % self.when)
            if self.when[1] < '0' or self.when[1] > '6':
                raise ValueError("Invalid day specified for weekly rollover: %s" % self.when)
            self.dayOfWeek = int(self.when[1])
            self.suffix = "%Y-%m-%d"
            self.extMatch = r"^\d{4}-\d{2}-\d{2}$"
        else:
            raise ValueError("Invalid rollover interval specified: %s" % self.when)

        self.extMatch = re.compile(self.extMatch)

        if interval != 1:
            raise ValueError("Invalid rollover interval, must be 1")

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.
        record is not used, as we are just comparing times, but it is needed so
        the method signatures are the same
        """
        if not os.path.exists(self.baseFilename):
            # print "file don't exist"
            return 0

        cTime = time.localtime(time.time())
        mTime = time.localtime(os.stat(self.baseFilename)[ST_MTIME])
        if self.when == "S" and cTime[5] != mTime[5]:
            # print "cTime:", cTime[5], "mTime:", mTime[5]
            return 1
        elif self.when == 'M' and cTime[4] != mTime[4]:
            # print "cTime:", cTime[4], "mTime:", mTime[4]
            return 1
        elif self.when == 'H' and cTime[3] != mTime[3]:
            # print "cTime:", cTime[3], "mTime:", mTime[3]
            return 1
        elif (self.when == 'MIDNIGHT' or self.when == 'D') and cTime[2] != mTime[2]:
            # print "cTime:", cTime[2], "mTime:", mTime[2]
            return 1
        elif self.when == 'W' and cTime[1] != mTime[1]:
            # print "cTime:", cTime[1], "mTime:", mTime[1]
            return 1
        else:
            return 0

    def doRollover(self):
        """
        do a rollover; in this case, a date/time stamp is appended to the filename
        when the rollover happens. However, you want the file to be named for the
        start of the interval, not the current time. If there is a backup count,
        then we have to get a list of matching filenames, sort them and remove
        the one with the oldest suffix.

        For multiprocess, we use shutil.copy instead of rename.
        """
        if self.stream:
            self.stream.close()
        # get the time that this sequence started at and make it a TimeTuple
        # t = self.rolloverAt - self.interval
        t = int(time.time())
        if self.utc:
            timeTuple = time.gmtime(t)
        else:
            timeTuple = time.localtime(t)
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        if os.path.exists(dfn):
            os.remove(dfn)
        if os.path.exists(self.baseFilename):
            shutil.copy(self.baseFilename, dfn)
            # print "%s -> %s" % (self.baseFilename, dfn)
            # os.rename(self.baseFilename, dfn)
        if self.backupCount > 0:
            # find the oldest log file and delete it
            # s = glob.glob(self.baseFilename + ".20*")
            # if len(s) > self.backupCount:
            #     s.sort()
            #     os.remove(s[0])
            for s in self.getFilesToDelete():
                os.remove(s)
        self.mode = 'w'
        self.stream = self._open()

    def emit(self, record):
        """
        Emit a record.
        Output the record to the file, catering for rollover as described
        in doRollover().

        For multiprocess, we use file lock. Any better method ?
        """
        try:
            if self.shouldRollover(record):
                self.doRollover()
            FileLock = self._lock_dir + '/' + os.path.basename(self.baseFilename) + '.' + record.levelname
            f = open(FileLock, "w+")
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            FileHandler_MP.emit(self, record)
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
            f.close()
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)
```
The modifications:
import fcntl, cPickle
fcntl cannot be installed on Windows; it is the Linux-only module used here for file locking (cPickle is likewise Python-2-only; on Python 3 use pickle).
```python
except IOError, e:   # Python 2 syntax; a SyntaxError on Python 3
    pass
```
is changed to
```python
except:
    pass
```
(A bare except works on both Python 2 and 3, though `except IOError:` would be cleaner.)
The key part is this code:
```python
f = open(FileLock, "w+")
fcntl.flock(f.fileno(), fcntl.LOCK_EX)
FileHandler_MP.emit(self, record)
fcntl.flock(f.fileno(), fcntl.LOCK_UN)
f.close()
```
fcntl.flock(f.fileno(), fcntl.LOCK_EX) takes an exclusive lock on the file; fcntl.flock(f.fileno(), fcntl.LOCK_UN) releases it.
Since the file locking here is done through fcntl, you can take any file-lock utility class from the web and swap it in.
I grabbed one at random: https://my.oschina.net/1123581321/blog/833480 — just swap it in.
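As a sketch of what such a swapped-in utility might look like (the class name FileLock and the lock-file path below are my own invention, not from the article's source): fcntl on POSIX, with msvcrt as the Windows fallback:

```python
import os

try:
    import fcntl  # POSIX file locking

    def _lock(f):
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)

    def _unlock(f):
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)
except ImportError:
    import msvcrt  # Windows fallback

    def _lock(f):
        f.seek(0)
        msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)

    def _unlock(f):
        f.seek(0)
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)


class FileLock(object):
    """Context manager that guards a critical section with an OS file lock."""

    def __init__(self, path):
        self._f = open(path, 'w+')

    def __enter__(self):
        _lock(self._f)
        return self

    def __exit__(self, *exc):
        _unlock(self._f)
        self._f.close()


# Usage: wrap the actual write, exactly where the original emit() locks.
with FileLock('demo.lock'):
    pass  # FileHandler_MP.emit(self, record) would go here
```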
With the lock in place, log records are no longer lost or interleaved under concurrency, but performance suffers: there is only one file, and locking it serializes the writers, so processes effectively queue up to write one at a time. Can we think of another way to solve this problem?
References:
python logging in depth, and adding context information automatically - xybaby - cnblogs
Python + logging: print to the screen and write logs to a file - nancy05 - cnblogs