forked from github/pelican

Refactor logging handling

The old system used manual string formatting for log messages.
This caused issues with common operations such as exception
logging, which often must be handled differently for Py2/Py3
compatibility. In order to unify the effort:

 - All logging is changed to `logging.level(msg, arg1, arg2)` style.
 - A `SafeLogger` is implemented to auto-decode exceptions properly
in the args (ref #1403).
 - Custom formatters were overriding useful logging functionality
like traceback outputting (ref #1402). They are refactored to be
more transparent. Traceback information is provided in `--debug`
mode for `read_file` errors in generators.
 - Formatters will now auto-format multiline log messages in order
to make them look related. Similarly, traceback will be formatted in
the same fashion.
 - `pelican.log.LimitFilter` was (ab)using the logging message itself,
which resulted in awkward syntax for the argument-style logging. This
functionality is moved to the `extra` keyword argument.
 - Levels for errors that result in skipping a file (`read_file`)
changed from `warning` to `error` in order to make them stand out
among other logs.
 - Small consistency changes to log messages (e.g., making them all
start with an uppercase letter) and quality-of-life improvements
(some log messages were dumping raw object information).
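The argument-style logging adopted above can be sketched stand-alone with the stdlib; the logger name and plugin name here are illustrative, not from the commit:

```python
import io
import logging

# Capture log output in memory so the example is self-contained.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger = logging.getLogger('style_demo')
logger.addHandler(handler)
logger.propagate = False
logger.setLevel(logging.DEBUG)

# New style: the logger interpolates arguments lazily; no manual
# string formatting at the call site.
logger.debug("Loading plugin `%s`", "sitemap")

print(stream.getvalue().strip())  # DEBUG: Loading plugin `sitemap`
```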
Deniz Turgut 2014-07-22 11:48:15 -04:00
commit ed3209888a
14 changed files with 271 additions and 195 deletions


@@ -142,34 +142,65 @@ Logging tips
Try to use logging with appropriate levels.
For logging messages that are not repeated, use the usual Python way:
For logging messages that are not repeated, use the usual Python way::
    # at top of file
    import logging
    logger = logging.getLogger(__name__)

    # when needed
    logger.warning("A warning that would usually occur only once")
    logger.warning("A warning with %s formatting", arg_to_be_formatted)
However, if you want to log messages that may occur several times, instead of
a string, give a tuple to the logging method, with two arguments:
Do not format log messages yourself. Use ``%s`` formatting in messages and
pass arguments to the logger. This is important, because the Pelican logger
will preprocess some arguments (like exceptions) for Py2/Py3 compatibility.
1. The message to log for the initial execution
2. A generic message that will appear if the previous one would occur too many
times.
Limiting extraneous log messages
--------------------------------
For example, if you want to log missing resources, use the following code:
If the log message can occur several times, you may want to limit the log to
prevent flooding. In order to do that, use the ``extra`` keyword argument for
the logging message in the following format::
    logger.warning("A warning with %s formatting", arg_to_be_formatted,
                   extra={'limit_msg': 'A generic message for too many warnings'})
Optionally, you can also set ``'limit_args'`` as a tuple of arguments in the
``extra`` dict if your generic message needs formatting.
The limit is set to ``5``, i.e., the first four logs with the same
``'limit_msg'`` are output normally, but the fifth one will be logged using
``'limit_msg'`` (and ``'limit_args'`` if present). After the fifth,
corresponding log messages will be ignored.
For example, if you want to log missing resources, use the following code::
    for resource in resources:
        if resource.is_missing:
            logger.warning((
                'The resource {r} is missing'.format(r=resource.name),
                'Other resources were missing'))
            logger.warning(
                'The resource %s is missing', resource.name,
                extra={'limit_msg': 'Other resources were missing'})
The log messages will be displayed as follows:
The log messages will be displayed as follows::
    WARNING: The resource prettiest_cat.jpg is missing
    WARNING: The resource best_cat_ever.jpg is missing
    WARNING: The resource cutest_cat.jpg is missing
    WARNING: The resource lolcat.jpg is missing
    WARNING: Other resources were missing
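The limiting behaviour described above can be sketched as a stand-alone filter. This is a simplified stand-in for ``pelican.log.LimitFilter``, not the actual implementation; names like ``LimitDemoFilter`` are made up for the example:

```python
import logging
from collections import defaultdict

class LimitDemoFilter(logging.Filter):
    """After `threshold` records sharing one 'limit_msg', emit the
    generic message once, then drop the rest (simplified sketch)."""
    threshold = 5

    def __init__(self):
        super().__init__()
        self._counts = defaultdict(int)

    def filter(self, record):
        group = record.__dict__.get('limit_msg')
        if group is None:
            return True  # not a limited message
        self._counts[group] += 1
        if self._counts[group] == self.threshold:
            # swap in the generic message for the final emitted record
            record.msg = group
            record.args = record.__dict__.get('limit_args', ())
            return True
        return self._counts[group] < self.threshold

class ListHandler(logging.Handler):
    """Collect formatted messages in a list for inspection."""
    def __init__(self, records):
        super().__init__()
        self.records = records
    def emit(self, record):
        self.records.append(record.getMessage())

records = []
logger = logging.getLogger('limit_demo')
logger.propagate = False
logger.addFilter(LimitDemoFilter())
logger.addHandler(ListHandler(records))

for name in ['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg', 'e.jpg', 'f.jpg']:
    logger.warning('The resource %s is missing', name,
                   extra={'limit_msg': 'Other resources were missing'})

# records: four specific warnings followed by the generic one
```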
Outputting traceback in the logs
--------------------------------
If you're logging inside an ``except`` block, you may want to provide the
traceback information as well. You can do that by setting the ``exc_info``
keyword argument to ``True`` during logging. However, doing so by default can
be undesirable, because tracebacks are long and can be confusing to regular
users. Try to limit them to ``--debug`` mode like the following::
    try:
        some_action()
    except Exception as e:
        logger.error('Exception occurred: %s', e,
                     exc_info=settings.get('DEBUG', False))
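The pattern above behaves like this self-contained sketch, where ``DEBUG`` stands in for ``settings.get('DEBUG', False)`` and the logger name is illustrative:

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('tb_demo')
logger.addHandler(logging.StreamHandler(stream))
logger.propagate = False

DEBUG = True  # stand-in for settings.get('DEBUG', False)

try:
    1 / 0
except Exception as e:
    # the traceback is appended to the record only when DEBUG is truthy
    logger.error('Exception occurred: %s', e, exc_info=DEBUG)

output = stream.getvalue()
```

With ``DEBUG = False`` the same call emits only the one-line message.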


@@ -70,16 +70,16 @@ class Pelican(object):
for plugin in self.settings['PLUGINS']:
# if it's a string, then import it
if isinstance(plugin, six.string_types):
logger.debug("Loading plugin `{0}`".format(plugin))
logger.debug("Loading plugin `%s`", plugin)
try:
plugin = __import__(plugin, globals(), locals(),
str('module'))
except ImportError as e:
logger.error(
"Can't find plugin `{0}`: {1}".format(plugin, e))
"Cannot load plugin `%s`\n%s", plugin, e)
continue
logger.debug("Registering plugin `{0}`".format(plugin.__name__))
logger.debug("Registering plugin `%s`", plugin.__name__)
plugin.register()
self.plugins.append(plugin)
logger.debug('Restoring system path')
@@ -99,7 +99,7 @@ class Pelican(object):
for setting in ('ARTICLE_URL', 'ARTICLE_LANG_URL', 'PAGE_URL',
'PAGE_LANG_URL'):
logger.warning("%s = '%s'" % (setting, self.settings[setting]))
logger.warning("%s = '%s'", setting, self.settings[setting])
if self.settings.get('ARTICLE_PERMALINK_STRUCTURE', False):
logger.warning('Found deprecated `ARTICLE_PERMALINK_STRUCTURE` in'
@@ -124,7 +124,7 @@ class Pelican(object):
'PAGE_SAVE_AS', 'PAGE_LANG_SAVE_AS'):
self.settings[setting] = os.path.join(structure,
self.settings[setting])
logger.warning("%s = '%s'" % (setting, self.settings[setting]))
logger.warning("%s = '%s'", setting, self.settings[setting])
for new, old in [('FEED', 'FEED_ATOM'), ('TAG_FEED', 'TAG_FEED_ATOM'),
('CATEGORY_FEED', 'CATEGORY_FEED_ATOM'),
@@ -203,7 +203,7 @@ class Pelican(object):
for v in value:
if isinstance(v, type):
logger.debug('Found generator: {0}'.format(v))
logger.debug('Found generator: %s', v)
generators.append(v)
return generators
@@ -217,11 +217,11 @@ class Pelican(object):
else:
writer = writers[0]
if writers_found == 1:
logger.debug('Found writer: {}'.format(writer))
logger.debug('Found writer: %s', writer)
else:
logger.warning(
'{} writers found, using only first one: {}'.format(
writers_found, writer))
'%s writers found, using only first one: %s',
writers_found, writer)
return writer(self.output_path, settings=self.settings)
@@ -308,6 +308,7 @@ def get_config(args):
config['CACHE_PATH'] = args.cache_path
if args.selected_paths:
config['WRITE_SELECTED'] = args.selected_paths.split(',')
config['DEBUG'] = args.verbosity == logging.DEBUG
# argparse returns bytes in Py2. There is no definite answer as to which
# encoding argparse (or sys.argv) uses.
@@ -404,7 +405,7 @@ def main():
logger.critical(e.args)
raise
logger.warning(
'Caught exception "{0}". Reloading.'.format(e))
'Caught exception "%s". Reloading.', e)
finally:
time.sleep(.5) # sleep to avoid cpu load
@@ -419,13 +420,7 @@ def main():
pelican.run()
except Exception as e:
# localized systems have errors in native language if locale is set
# so convert the message to unicode with the correct encoding
msg = str(e)
if not six.PY3:
msg = msg.decode(locale.getpreferredencoding())
logger.critical(msg)
logger.critical('%s', e)
if args.verbosity == logging.DEBUG:
raise
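The simplification above relies on the logger formatting the exception lazily from a ``%s`` placeholder, so any Py2 decoding can happen centrally in the logger class rather than at the call site. A minimal stand-alone sketch (logger name illustrative):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('crit_demo')
logger.addHandler(logging.StreamHandler(stream))
logger.propagate = False

try:
    raise ValueError('bad config')
except Exception as e:
    # pass the exception object itself; str() conversion is deferred
    # to the logging machinery
    logger.critical('%s', e)
```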


@@ -232,10 +232,11 @@ class Content(object):
self._context['filenames'][path].url))
origin = origin.replace('\\', '/') # for Windows paths.
else:
logger.warning(("Unable to find {fn}, skipping url"
" replacement".format(fn=value),
"Other resources were not found"
" and their urls not replaced"))
logger.warning(
"Unable to find `%s`, skipping url replacement.",
value.geturl(), extra = {
'limit_msg': ("Other resources were not found "
"and their urls not replaced")})
elif what == 'category':
origin = Category(path, self.settings).url
elif what == 'tag':
@@ -358,6 +359,5 @@ def is_valid_content(content, f):
content.check_properties()
return True
except NameError as e:
logger.error("Skipping %s: could not find information about "
"'%s'" % (f, e))
logger.error("Skipping %s: could not find information about '%s'", f, e)
return False


@@ -22,13 +22,14 @@ from jinja2 import (Environment, FileSystemLoader, PrefixLoader, ChoiceLoader,
from pelican.contents import Article, Draft, Page, Static, is_valid_content
from pelican.readers import Readers
from pelican.utils import (copy, process_translations, mkdir_p, DateFormatter,
FileStampDataCacher)
FileStampDataCacher, python_2_unicode_compatible)
from pelican import signals
logger = logging.getLogger(__name__)
@python_2_unicode_compatible
class Generator(object):
"""Baseclass generator"""
@@ -67,7 +68,7 @@ class Generator(object):
extensions=self.settings['JINJA_EXTENSIONS'],
)
logger.debug('template list: {0}'.format(self.env.list_templates()))
logger.debug('Template list: %s', self.env.list_templates())
# provide utils.strftime as a jinja filter
self.env.filters.update({'strftime': DateFormatter()})
@@ -153,6 +154,10 @@ class Generator(object):
value = list(value.items()) # py3k safeguard for iterators
self.context[item] = value
def __str__(self):
# return the name of the class for logging purposes
return self.__class__.__name__
class CachingGenerator(Generator, FileStampDataCacher):
'''Subclass of Generator and FileStampDataCacher classes
@@ -470,7 +475,8 @@ class ArticlesGenerator(CachingGenerator):
context_signal=signals.article_generator_context,
context_sender=self)
except Exception as e:
logger.warning('Could not process {}\n{}'.format(f, e))
logger.error('Could not process %s\n%s', f, e,
exc_info=self.settings.get('DEBUG', False))
continue
if not is_valid_content(article, f):
@@ -492,9 +498,8 @@ class ArticlesGenerator(CachingGenerator):
context_sender=self)
all_drafts.append(draft)
else:
logger.warning("Unknown status %s for file %s, skipping it." %
(repr(article.status),
repr(f)))
logger.error("Unknown status '%s' for file %s, skipping it.",
article.status, f)
self.articles, self.translations = process_translations(all_articles)
self.drafts, self.drafts_translations = \
@@ -594,7 +599,8 @@ class PagesGenerator(CachingGenerator):
context_signal=signals.page_generator_context,
context_sender=self)
except Exception as e:
logger.warning('Could not process {}\n{}'.format(f, e))
logger.error('Could not process %s\n%s', f, e,
exc_info=self.settings.get('DEBUG', False))
continue
if not is_valid_content(page, f):
@@ -609,9 +615,8 @@ class PagesGenerator(CachingGenerator):
elif page.status == "hidden":
hidden_pages.append(page)
else:
logger.warning("Unknown status %s for file %s, skipping it." %
(repr(page.status),
repr(f)))
logger.error("Unknown status '%s' for file %s, skipping it.",
page.status, f)
self.pages, self.translations = process_translations(all_pages)
self.hidden_pages, self.hidden_translations = (
@@ -679,7 +684,7 @@ class StaticGenerator(Generator):
save_as = os.path.join(self.output_path, sc.save_as)
mkdir_p(os.path.dirname(save_as))
shutil.copy2(source_path, save_as)
logger.info('copying {} to {}'.format(sc.source_path, sc.save_as))
logger.info('Copying %s to %s', sc.source_path, sc.save_as)
class SourceFileGenerator(Generator):
@@ -694,7 +699,7 @@ class SourceFileGenerator(Generator):
copy(obj.source_path, dest)
def generate_output(self, writer=None):
logger.info(' Generating source files...')
logger.info('Generating source files...')
for obj in chain(self.context['articles'], self.context['pages']):
self._create_source(obj)
for obj_trans in obj.translations:
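The ``__str__`` added to ``Generator`` above (paired with ``python_2_unicode_compatible`` on Py2) exists so that a ``%s`` placeholder prints the class name instead of raw object information. A stripped-down stand-in, with an illustrative subclass name:

```python
class Generator:
    """Stripped-down stand-in for pelican's Generator base class."""
    def __str__(self):
        # return the name of the class for logging purposes
        return self.__class__.__name__

class ArticlesGenerator(Generator):
    pass

# '%s' in a log message goes through str(), so the record reads
# naturally rather than showing <...ArticlesGenerator object at 0x...>
msg = 'Found generator: %s' % ArticlesGenerator()
print(msg)  # Found generator: ArticlesGenerator
```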


@@ -8,59 +8,76 @@ __all__ = [
import os
import sys
import logging
import locale
from collections import defaultdict
from collections import defaultdict, Mapping
import six
RESET_TERM = '\033[0;m'
class BaseFormatter(logging.Formatter):
def __init__(self, fmt=None, datefmt=None):
FORMAT = '%(customlevelname)s %(message)s'
super(BaseFormatter, self).__init__(fmt=FORMAT, datefmt=datefmt)
COLOR_CODES = {
'red': 31,
'yellow': 33,
'cyan': 36,
'white': 37,
'bgred': 41,
'bggrey': 100,
}
def ansi(color, text):
"""Wrap text in an ansi escape sequence"""
code = COLOR_CODES[color]
return '\033[1;{0}m{1}{2}'.format(code, text, RESET_TERM)
class ANSIFormatter(logging.Formatter):
"""Convert a `logging.LogRecord' object into colored text, using ANSI
escape sequences.
"""
def format(self, record):
msg = record.getMessage()
if record.levelname == 'INFO':
return ansi('cyan', '-> ') + msg
elif record.levelname == 'WARNING':
return ansi('yellow', record.levelname) + ': ' + msg
elif record.levelname == 'ERROR':
return ansi('red', record.levelname) + ': ' + msg
elif record.levelname == 'CRITICAL':
return ansi('bgred', record.levelname) + ': ' + msg
elif record.levelname == 'DEBUG':
return ansi('bggrey', record.levelname) + ': ' + msg
record.__dict__['customlevelname'] = self._get_levelname(record.levelname)
# format multiline messages 'nicely' to make it clear they are together
record.msg = record.msg.replace('\n', '\n | ')
return super(BaseFormatter, self).format(record)
def formatException(self, ei):
''' prefix traceback info for better representation '''
# .formatException returns a bytestring in py2 and unicode in py3
# since .format will handle unicode conversion,
# str() calls are used to normalize formatting string
s = super(BaseFormatter, self).formatException(ei)
# fancy format traceback
s = str('\n').join(str(' | ') + line for line in s.splitlines())
# separate the traceback from the preceding lines
s = str(' |___\n{}').format(s)
return s
def _get_levelname(self, name):
''' NOOP: overridden by subclasses '''
return name
class ANSIFormatter(BaseFormatter):
ANSI_CODES = {
'red': '\033[1;31m',
'yellow': '\033[1;33m',
'cyan': '\033[1;36m',
'white': '\033[1;37m',
'bgred': '\033[1;41m',
'bggrey': '\033[1;100m',
'reset': '\033[0;m'}
LEVEL_COLORS = {
'INFO': 'cyan',
'WARNING': 'yellow',
'ERROR': 'red',
'CRITICAL': 'bgred',
'DEBUG': 'bggrey'}
def _get_levelname(self, name):
color = self.ANSI_CODES[self.LEVEL_COLORS.get(name, 'white')]
if name == 'INFO':
fmt = '{0}->{2}'
else:
return ansi('white', record.levelname) + ': ' + msg
fmt = '{0}{1}{2}:'
return fmt.format(color, name, self.ANSI_CODES['reset'])
class TextFormatter(logging.Formatter):
class TextFormatter(BaseFormatter):
"""
Convert a `logging.LogRecord' object into text.
"""
def format(self, record):
if not record.levelname or record.levelname == 'INFO':
return record.getMessage()
def _get_levelname(self, name):
if name == 'INFO':
return '->'
else:
return record.levelname + ': ' + record.getMessage()
return name + ':'
class LimitFilter(logging.Filter):
@@ -73,37 +90,74 @@ class LimitFilter(logging.Filter):
E.g.: log.warning(('43 is not the answer', 'More erroneous answers'))
"""
ignore = set()
threshold = 5
group_count = defaultdict(int)
_ignore = set()
_threshold = 5
_group_count = defaultdict(int)
def filter(self, record):
# don't limit log messages for anything above "warning"
if record.levelno > logging.WARN:
return record
return True
# extract group
group = None
if len(record.msg) == 2:
record.msg, group = record.msg
group = record.__dict__.get('limit_msg', None)
group_args = record.__dict__.get('limit_args', ())
# ignore record if it was already raised
# use .getMessage() and not .msg for string formatting
ignore_key = (record.levelno, record.getMessage())
to_ignore = ignore_key in LimitFilter.ignore
LimitFilter.ignore.add(ignore_key)
if to_ignore:
if ignore_key in self._ignore:
return False
else:
self._ignore.add(ignore_key)
# check if we went over threshold
if group:
key = (record.levelno, group)
LimitFilter.group_count[key] += 1
if LimitFilter.group_count[key] == LimitFilter.threshold:
self._group_count[key] += 1
if self._group_count[key] == self._threshold:
record.msg = group
if LimitFilter.group_count[key] > LimitFilter.threshold:
record.args = group_args
elif self._group_count[key] > self._threshold:
return False
return record
return True
class LimitLogger(logging.Logger):
class SafeLogger(logging.Logger):
"""
Base Logger which properly encodes Exceptions in Py2
"""
_exc_encoding = locale.getpreferredencoding()
def _log(self, level, msg, args, exc_info=None, extra=None):
# if the only argument is a Mapping, Logger uses that for formatting
# format values for that case
if args and len(args)==1 and isinstance(args[0], Mapping):
args = ({k: self._decode_arg(v) for k, v in args[0].items()},)
# otherwise, format each arg
else:
args = tuple(self._decode_arg(arg) for arg in args)
super(SafeLogger, self)._log(level, msg, args,
exc_info=exc_info, extra=extra)
def _decode_arg(self, arg):
'''
properly decode an arg for Py2 if it's Exception
localized systems have errors in native language if locale is set
so convert the message to unicode with the correct encoding
'''
if isinstance(arg, Exception):
text = str(arg)
if six.PY2:
text = text.decode(self._exc_encoding)
return text
else:
return arg
class LimitLogger(SafeLogger):
"""
A logger which adds LimitFilter automatically
"""


@@ -131,7 +131,7 @@ class Page(object):
prop_value = getattr(rule, key)
if not isinstance(prop_value, six.string_types):
logger.warning('%s is set to %s' % (key, prop_value))
logger.warning('%s is set to %s', key, prop_value)
return prop_value
# URL or SAVE_AS is a string, format it with a controlled context


@@ -325,18 +325,22 @@ class HTMLReader(BaseReader):
name = self._attr_value(attrs, 'name')
if name is None:
attr_serialized = ', '.join(['{}="{}"'.format(k, v) for k, v in attrs])
logger.warning("Meta tag in file %s does not have a 'name' attribute, skipping. Attributes: %s", self._filename, attr_serialized)
logger.warning("Meta tag in file %s does not have a 'name' "
"attribute, skipping. Attributes: %s",
self._filename, attr_serialized)
return
name = name.lower()
contents = self._attr_value(attrs, 'content', '')
if not contents:
contents = self._attr_value(attrs, 'contents', '')
if contents:
logger.warning((
"Meta tag attribute 'contents' used in file {}, should"
" be changed to 'content'".format(self._filename),
"Other files have meta tag attribute 'contents' that"
" should be changed to 'content'"))
logger.warning(
"Meta tag attribute 'contents' used in file %s, should"
" be changed to 'content'",
self._filename,
extra={'limit_msg': ("Other files have meta tag "
"attribute 'contents' that should "
"be changed to 'content'")})
if name == 'keywords':
name = 'tags'
@@ -376,8 +380,8 @@ class Readers(FileStampDataCacher):
for cls in [BaseReader] + BaseReader.__subclasses__():
if not cls.enabled:
logger.debug('Missing dependencies for {}'
.format(', '.join(cls.file_extensions)))
logger.debug('Missing dependencies for %s',
', '.join(cls.file_extensions))
continue
for ext in cls.file_extensions:
@@ -414,8 +418,8 @@ class Readers(FileStampDataCacher):
path = os.path.abspath(os.path.join(base_path, path))
source_path = os.path.relpath(path, base_path)
logger.debug('read file {} -> {}'.format(
source_path, content_class.__name__))
logger.debug('Read file %s -> %s',
source_path, content_class.__name__)
if not fmt:
_, ext = os.path.splitext(os.path.basename(path))
@@ -423,11 +427,11 @@
if fmt not in self.readers:
raise TypeError(
'Pelican does not know how to parse {}'.format(path))
'Pelican does not know how to parse %s' % path)
if preread_signal:
logger.debug('signal {}.send({})'.format(
preread_signal, preread_sender))
logger.debug('Signal %s.send(%s)',
preread_signal.name, preread_sender)
preread_signal.send(preread_sender)
reader = self.readers[fmt]
@@ -463,8 +467,8 @@ class Readers(FileStampDataCacher):
metadata['summary'] = typogrify(metadata['summary'])
if context_signal:
logger.debug('signal {}.send({}, <metadata>)'.format(
context_signal, context_sender))
logger.debug('Signal %s.send(%s, <metadata>)',
context_signal.name, context_sender)
context_signal.send(context_sender, metadata=metadata)
return content_class(content=content, metadata=metadata,
@@ -497,9 +501,10 @@ def find_empty_alt(content, path):
)
""", re.X)
for match in re.findall(imgs, content):
logger.warning(('Empty alt attribute for image {} in {}'.format(
os.path.basename(match[1] + match[5]), path),
'Other images have empty alt attributes'))
logger.warning(
'Empty alt attribute for image %s in %s',
os.path.basename(match[1] + match[5]), path,
extra={'limit_msg': 'Other images have empty alt attributes'})
def default_metadata(settings=None, process=None):


@@ -30,9 +30,9 @@ class ComplexHTTPRequestHandler(srvmod.SimpleHTTPRequestHandler):
logging.info("Found: %s" % self.path)
found = True
break
logging.info("Tried to find file %s, but it doesn't exist. " % self.path)
logging.info("Tried to find file %s, but it doesn't exist. ", self.path)
if not found:
logging.warning("Unable to find file %s or variations." % self.path)
logging.warning("Unable to find file %s or variations.", self.path)
Handler = ComplexHTTPRequestHandler
@@ -40,13 +40,13 @@ socketserver.TCPServer.allow_reuse_address = True
try:
httpd = socketserver.TCPServer(("", PORT), Handler)
except OSError as e:
logging.error("Could not listen on port %s" % PORT)
logging.error("Could not listen on port %s", PORT)
sys.exit(getattr(e, 'exitcode', 1))
logging.info("serving at port %s" % PORT)
logging.info("Serving at port %s", PORT)
try:
httpd.serve_forever()
except KeyboardInterrupt as e:
logging.info("shutting down server")
logging.info("Shutting down server")
httpd.socket.close()


@@ -152,7 +152,8 @@ def read_settings(path=None, override=None):
local_settings['PLUGIN_PATHS'] = local_settings['PLUGIN_PATH']
del local_settings['PLUGIN_PATH']
if isinstance(local_settings['PLUGIN_PATHS'], six.string_types):
logger.warning("Defining %s setting as string has been deprecated (should be a list)" % 'PLUGIN_PATHS')
logger.warning("Defining PLUGIN_PATHS setting as string "
"has been deprecated (should be a list)")
local_settings['PLUGIN_PATHS'] = [local_settings['PLUGIN_PATHS']]
elif local_settings['PLUGIN_PATHS'] is not None:
local_settings['PLUGIN_PATHS'] = [os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(path), pluginpath)))
@@ -201,7 +202,7 @@ def configure_settings(settings):
' (see pelican --help for more information)')
# specify the log messages to be ignored
LimitFilter.ignore.update(set(settings.get('LOG_FILTER',
LimitFilter._ignore.update(set(settings.get('LOG_FILTER',
DEFAULT_CONFIG['LOG_FILTER'])))
# lookup the theme in "pelican/themes" if the given one doesn't exist
@@ -243,10 +244,9 @@ def configure_settings(settings):
]:
if key in settings and not isinstance(settings[key], types):
value = settings.pop(key)
logger.warn(
'Detected misconfigured {} ({}), '
'falling back to the default ({})'.format(
key, value, DEFAULT_CONFIG[key]))
logger.warn('Detected misconfigured %s (%s), '
'falling back to the default (%s)',
key, value, DEFAULT_CONFIG[key])
# try to set the different locales, fallback on the default.
locales = settings.get('LOCALE', DEFAULT_CONFIG['LOCALE'])
@@ -319,8 +319,8 @@ def configure_settings(settings):
old_key = key + '_DIR'
new_key = key + '_PATHS'
if old_key in settings:
logger.warning('Deprecated setting {}, moving it to {} list'.format(
old_key, new_key))
logger.warning('Deprecated setting %s, moving it to %s list',
old_key, new_key)
settings[new_key] = [settings[old_key]] # also make a list
del settings[old_key]
@@ -343,8 +343,8 @@ def configure_settings(settings):
for PATH_KEY in filter(lambda k: k in settings, path_keys):
if isinstance(settings[PATH_KEY], six.string_types):
logger.warning("Detected misconfiguration with %s setting "
"(must be a list), falling back to the default"
% PATH_KEY)
"(must be a list), falling back to the default",
PATH_KEY)
settings[PATH_KEY] = DEFAULT_CONFIG[PATH_KEY]
# Add {PAGE,ARTICLE}_PATHS to {ARTICLE,PAGE}_EXCLUDES


@@ -182,5 +182,5 @@ class TestPelican(LoggedTestCase):
logger.setLevel(orig_level)
self.assertLogCountEqual(
count=2,
msg="writing .*",
msg="Writing .*",
level=logging.INFO)


@@ -20,8 +20,10 @@ from six.moves.urllib.error import URLError
from six.moves.urllib.parse import urlparse
from six.moves.urllib.request import urlretrieve
from pelican.utils import slugify, SafeDatetime
# pelican.log has to be the first pelican module to be loaded
# because logging.setLoggerClass has to be called before logging.getLogger
from pelican.log import init
from pelican.utils import slugify, SafeDatetime
logger = logging.getLogger(__name__)
@@ -131,7 +133,7 @@ def wp2fields(xml, wp_custpost=False):
title = unescape(item.title.contents[0])
except IndexError:
title = 'No title [%s]' % item.find('post_name').string
logger.warning('Post "%s" is lacking a proper title' % title)
logger.warning('Post "%s" is lacking a proper title', title)
filename = item.find('post_name').string
post_id = item.find('post_id').string
@@ -594,24 +596,9 @@ def download_attachments(output_path, urls):
try:
urlretrieve(url, os.path.join(full_path, filename))
locations.append(os.path.join(localpath, filename))
except URLError as e:
error = ("No file could be downloaded from {}; Error {}"
.format(url, e))
logger.warning(error)
except IOError as e: #Python 2.7 throws an IOError rather Than URLError
# For japanese, the error might look kind of like this:
# e = IOError( 'socket error', socket.error(111, u'\u63a5\u7d9a\u3092\u62d2\u5426\u3055\u308c\u307e\u3057\u305f') )
# and not be suitable to use in "{}".format(e) , raising UnicodeDecodeError
# (This is at least the case on my Fedora running Python 2.7.5
# (default, Feb 19 2014, 13:47:28) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux2
try:
error = ("No file could be downloaded from {}; Error {}"
.format(url, e))
except UnicodeDecodeError:
# For lack of a better log message because we could not decode e, let's use repr(e)
error = ("No file could be downloaded from {}; Error {}"
.format(url, repr(e)))
logger.warning(error)
except (URLError, IOError) as e:
#Python 2.7 throws an IOError rather Than URLError
logger.warning("No file could be downloaded from %s\n%s", url, e)
return locations
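The consolidated handler above works because the exception object is passed as a lazy ``%s`` argument instead of being formatted eagerly with ``.format()``, which is where the old Py2 ``UnicodeDecodeError`` dance came from. A stand-alone sketch (logger name and URL are illustrative):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('dl_demo')
logger.addHandler(logging.StreamHandler(stream))
logger.propagate = False

url = 'http://example.invalid/image.png'  # illustrative URL
try:
    # stand-in for the urlretrieve() failure handled above
    raise IOError('connection refused')
except (IOError, OSError) as e:
    # pass the exception object itself; the logger formats it lazily,
    # letting a custom logger class handle decoding centrally
    logger.warning('No file could be downloaded from %s\n%s', url, e)

output = stream.getvalue()
```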


@@ -70,7 +70,7 @@ class URLWrapper(object):
setting = "%s_%s" % (self.__class__.__name__.upper(), key)
value = self.settings[setting]
if not isinstance(value, six.string_types):
logger.warning('%s is set to %s' % (setting, value))
logger.warning('%s is set to %s', setting, value)
return value
else:
if get_page_name:


@@ -293,46 +293,46 @@ def copy(source, destination):
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
shutil.copy2(source_, destination_)
logger.info('copying %s to %s' % (source_, destination_))
logger.info('Copying %s to %s', source_, destination_)
else:
logger.warning('skipped copy %s to %s' % (source_, destination_))
logger.warning('Skipped copy %s to %s', source_, destination_)
def clean_output_dir(path, retention):
"""Remove all files from output directory except those in retention list"""
if not os.path.exists(path):
logger.debug("Directory already removed: %s" % path)
logger.debug("Directory already removed: %s", path)
return
if not os.path.isdir(path):
try:
os.remove(path)
except Exception as e:
logger.error("Unable to delete file %s; %s" % (path, str(e)))
logger.error("Unable to delete file %s; %s", path, e)
return
# remove existing content from output folder unless in retention list
for filename in os.listdir(path):
file = os.path.join(path, filename)
if any(filename == retain for retain in retention):
logger.debug("Skipping deletion; %s is on retention list: %s" \
% (filename, file))
logger.debug("Skipping deletion; %s is on retention list: %s",
filename, file)
elif os.path.isdir(file):
try:
shutil.rmtree(file)
logger.debug("Deleted directory %s" % file)
logger.debug("Deleted directory %s", file)
except Exception as e:
logger.error("Unable to delete directory %s; %s" % (
file, str(e)))
logger.error("Unable to delete directory %s; %s",
file, e)
elif os.path.isfile(file) or os.path.islink(file):
try:
os.remove(file)
logger.debug("Deleted file/link %s" % file)
logger.debug("Deleted file/link %s", file)
except Exception as e:
logger.error("Unable to delete file %s; %s" % (file, str(e)))
logger.error("Unable to delete file %s; %s", file, e)
else:
logger.error("Unable to delete %s, file type unknown" % file)
logger.error("Unable to delete %s, file type unknown", file)
def get_relative_path(path):
@@ -455,10 +455,10 @@ def process_translations(content_list):
lang_items = list(lang_items)
len_ = len(lang_items)
if len_ > 1:
logger.warning('There are %s variants of "%s" with lang %s' \
% (len_, slug, lang))
logger.warning('There are %s variants of "%s" with lang %s',
len_, slug, lang)
for x in lang_items:
logger.warning(' %s' % x.source_path)
logger.warning('\t%s', x.source_path)
# find items with default language
default_lang_items = list(filter(attrgetter('in_default_lang'),
@@ -469,11 +469,11 @@ def process_translations(content_list):
default_lang_items = items[:1]
if not slug:
logger.warning((
'empty slug for {!r}. '
logger.warning(
'empty slug for %s. '
'You can fix this by adding a title or a slug to your '
'content'
).format(default_lang_items[0].source_path))
'content',
default_lang_items[0].source_path)
index.extend(default_lang_items)
translations.extend([x for x in items if x not in default_lang_items])
for a in items:
@@ -499,7 +499,7 @@ def folder_watcher(path, extensions, ignores=[]):
try:
yield os.stat(os.path.join(root, f)).st_mtime
except OSError as e:
logger.warning('Caught Exception: {}'.format(e))
logger.warning('Caught Exception: %s', e)
LAST_MTIME = 0
while True:
@@ -522,7 +522,7 @@ def file_watcher(path):
try:
mtime = os.stat(path).st_mtime
except OSError as e:
logger.warning('Caught Exception: {}'.format(e))
logger.warning('Caught Exception: %s', e)
continue
if mtime > LAST_MTIME:
@@ -596,15 +596,15 @@ class FileDataCacher(object):
with self._cache_open(self._cache_path, 'rb') as fhandle:
self._cache = pickle.load(fhandle)
except (IOError, OSError) as err:
logger.debug(('Cannot load cache {} (this is normal on first '
'run). Proceeding with empty cache.\n{}').format(
self._cache_path, err))
logger.debug('Cannot load cache %s (this is normal on first '
'run). Proceeding with empty cache.\n%s',
self._cache_path, err)
self._cache = {}
except Exception as err:
logger.warning(('Cannot unpickle cache {}, cache may be using '
logger.warning(('Cannot unpickle cache %s, cache may be using '
'an incompatible protocol (see pelican caching docs). '
'Proceeding with empty cache.\n{}').format(
self._cache_path, err))
'Proceeding with empty cache.\n%s'),
self._cache_path, err)
self._cache = {}
else:
self._cache = {}
@@ -629,8 +629,8 @@ class FileDataCacher(object):
with self._cache_open(self._cache_path, 'wb') as fhandle:
pickle.dump(self._cache, fhandle)
except (IOError, OSError, pickle.PicklingError) as err:
logger.warning('Could not save cache {}\n{}'.format(
self._cache_path, err))
logger.warning('Could not save cache %s\n ... %s',
self._cache_path, err)
class FileStampDataCacher(FileDataCacher):
@@ -656,8 +656,7 @@ class FileStampDataCacher(FileDataCacher):
return hash_func(fhandle.read()).digest()
self._filestamp_func = filestamp_func
except AttributeError as err:
logger.warning('Could not get hashing function\n{}'.format(
err))
logger.warning('Could not get hashing function\n\t%s', err)
self._filestamp_func = None
def cache_data(self, filename, data):
@@ -677,8 +676,8 @@ class FileStampDataCacher(FileDataCacher):
try:
return self._filestamp_func(filename)
except (IOError, OSError, TypeError) as err:
logger.warning('Cannot get modification stamp for {}\n{}'.format(
filename, err))
logger.warning('Cannot get modification stamp for %s\n\t%s',
filename, err)
return b''
def get_cached_data(self, filename, default=None):


@@ -68,11 +68,11 @@ class Writer(object):
raise RuntimeError('File %s is set to be overridden twice'
% filename)
else:
logger.info('skipping %s' % filename)
logger.info('Skipping %s', filename)
filename = os.devnull
elif filename in self._written_files:
if override:
logger.info('overwriting %s' % filename)
logger.info('Overwriting %s', filename)
else:
raise RuntimeError('File %s is to be overwritten' % filename)
if override:
@@ -120,7 +120,7 @@ class Writer(object):
encoding = 'utf-8' if six.PY3 else None
with self._open_w(complete_path, encoding) as fp:
feed.write(fp, 'utf-8')
logger.info('writing %s' % complete_path)
logger.info('Writing %s', complete_path)
return feed
finally:
locale.setlocale(locale.LC_ALL, old_locale)
@@ -160,7 +160,7 @@ class Writer(object):
with self._open_w(path, 'utf-8', override=override) as f:
f.write(output)
logger.info('writing {}'.format(path))
logger.info('Writing %s', path)
# Send a signal to say we're writing a file with some specific
# local context.