I'm using Python's logging module to log some debug strings to a file, which works pretty well. Now, in addition, I'd like to use this module to also print the strings out to stdout. How do I do this? In order to log my strings to a file I use the following code:
import logging
import logging.handlers
logger = logging.getLogger("")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.RotatingFileHandler(
LOGFILE, maxBytes=(1048576*5), backupCount=7
)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
and then call a logger function like
logger.debug("I am written to the file")
Thank you for some help here!
If it's set to DEBUG, does it still log to stdout AND the file? I think it's only if it's set to INFO. Correct me if I'm wrong please.
Just get a handle to the root logger and add the StreamHandler. The StreamHandler writes to stderr. Not sure if you really need stdout over stderr, but this is what I use when I set up the Python logger, and I also add the FileHandler as well. Then all my logs go to both places (which is what it sounds like you want).
import logging
logging.getLogger().addHandler(logging.StreamHandler())
If you want to output to stdout instead of stderr, you just need to pass it to the StreamHandler constructor:
import sys
# ...
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
You could also add a Formatter to it so all your log lines have a common header, e.g.:
import logging
logFormatter = logging.Formatter("%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s")
rootLogger = logging.getLogger()
fileHandler = logging.FileHandler("{0}/{1}.log".format(logPath, fileName))  # logPath and fileName are assumed to be defined elsewhere
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler)
This prints messages in the following format:
2012-12-05 16:58:26,618 [MainThread ] [INFO ] my message
logging.basicConfig() can take the keyword argument handlers since Python 3.3, which simplifies logging setup a lot, especially when setting up multiple handlers with the same formatter:
handlers – If specified, this should be an iterable of already created handlers to add to the root logger. Any handlers which don’t already have a formatter set will be assigned the default formatter created in this function.
The whole setup can therefore be done with a single call like this:
import logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(message)s",
handlers=[
logging.FileHandler("debug.log"),
logging.StreamHandler()
]
)
(Or with import sys + StreamHandler(sys.stdout) per the original question's requirements; the default for StreamHandler is to write to stderr. Look at the LogRecord attributes if you want to customize the log format and add things like filename/line number, thread info, etc.)
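For illustration, here is a sketch of that stdout variant with a couple of extra LogRecord attributes in the format string (the file name debug.log is just reused from the example above):
import logging
import sys

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(threadName)s] [%(levelname)s] %(filename)s:%(lineno)d %(message)s",
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler(sys.stdout)  # stdout instead of the default stderr
    ]
)

logging.info("Goes to debug.log and to stdout")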
The setup above needs to be done only once near the beginning of the script. You can then use logging from all other places in the codebase like this:
logging.info('Useful message')
logging.error('Something bad happened')
...
Note: If it doesn't work, someone else has probably already initialized the logging system differently. Comments suggest doing logging.root.handlers = [] before the call to basicConfig().
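As a sketch, that reset looks like the following; on Python 3.8+ you can pass force=True to basicConfig() instead, which removes and closes any existing root handlers for you:
import logging
import sys

# Remove handlers that other code may already have attached to the root logger,
# otherwise basicConfig() silently does nothing.
logging.root.handlers = []
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler("debug.log"),
        logging.StreamHandler(sys.stdout)
    ]
)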
FileHandler's signature is logging.FileHandler(filename, mode='a', encoding=None, delay=False). This means that when you just want to log into the same folder, you can simply use FileHandler("mylog.log"). If you want to overwrite the log each time, pass "w" as the second argument.
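For example (the file name is arbitrary):
import logging

# Two alternatives, pick one:
fh = logging.FileHandler("mylog.log")            # default mode='a': keeps appending across runs
fh = logging.FileHandler("mylog.log", mode="w")  # mode='w': overwrites the log on every run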
As for why you need logging.root.handlers = [] before the call to basicConfig, take a look at the function - it's annoying.
You may need to add # noinspection PyArgumentList to make PyCharm happy, due to youtrack.jetbrains.com/issue/PY-39762.
Adding a StreamHandler without arguments goes to stderr instead of stdout. If some other process depends on the stdout dump (e.g. when writing an NRPE plugin), make sure to specify stdout explicitly or you might run into unexpected trouble.
Here's a quick example reusing the assumed values and LOGFILE from the question:
import logging
import logging.handlers
import sys

log = logging.getLogger('')
log.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

# Console handler writing to stdout
ch = logging.StreamHandler(sys.stdout)
ch.setFormatter(formatter)
log.addHandler(ch)

# Rotating file handler; LOGFILE is assumed to be defined as in the question
fh = logging.handlers.RotatingFileHandler(LOGFILE, maxBytes=(1048576*5), backupCount=7)
fh.setFormatter(formatter)
log.addHandler(fh)
Here is a complete, nicely wrapped solution based on Waterboy's answer and various other sources. It supports logging to both the console and a log file, allows for different log level settings, provides colorized output, and is easily configurable (also available as a Gist):
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# -------------------------------------------------------------------------------
# -
# Python dual-logging setup (console and log file), -
# supporting different log levels and colorized output -
# -
# Created by Fonic <https://github.com/fonic> -
# Date: 04/05/20 -
# -
# Based on: -
# https://stackoverflow.com/a/13733863/1976617 -
# https://uran198.github.io/en/python/2016/07/12/colorful-python-logging.html -
# https://en.wikipedia.org/wiki/ANSI_escape_code#Colors -
# -
# -------------------------------------------------------------------------------
# Imports
import os
import sys
import logging
# Logging formatter supporting colorized output
class LogFormatter(logging.Formatter):
COLOR_CODES = {
logging.CRITICAL: "\033[1;35m", # bright/bold magenta
logging.ERROR: "\033[1;31m", # bright/bold red
logging.WARNING: "\033[1;33m", # bright/bold yellow
logging.INFO: "\033[0;37m", # white / light gray
logging.DEBUG: "\033[1;30m" # bright/bold black / dark gray
}
RESET_CODE = "\033[0m"
def __init__(self, color, *args, **kwargs):
super(LogFormatter, self).__init__(*args, **kwargs)
self.color = color
def format(self, record, *args, **kwargs):
if (self.color == True and record.levelno in self.COLOR_CODES):
record.color_on = self.COLOR_CODES[record.levelno]
record.color_off = self.RESET_CODE
else:
record.color_on = ""
record.color_off = ""
return super(LogFormatter, self).format(record, *args, **kwargs)
# Setup logging
def setup_logging(console_log_output, console_log_level, console_log_color, logfile_file, logfile_log_level, logfile_log_color, log_line_template):
# Create logger
# For simplicity, we use the root logger, i.e. call 'logging.getLogger()'
# without name argument. This way we can simply use module methods
# for logging throughout the script. An alternative would be exporting
# the logger, i.e. 'global logger; logger = logging.getLogger("<name>")'
logger = logging.getLogger()
# Set global log level to 'debug' (required for handler levels to work)
logger.setLevel(logging.DEBUG)
# Create console handler
console_log_output = console_log_output.lower()
if (console_log_output == "stdout"):
console_log_output = sys.stdout
elif (console_log_output == "stderr"):
console_log_output = sys.stderr
else:
print("Failed to set console output: invalid output: '%s'" % console_log_output)
return False
console_handler = logging.StreamHandler(console_log_output)
# Set console log level
try:
console_handler.setLevel(console_log_level.upper()) # only accepts uppercase level names
except:
print("Failed to set console log level: invalid level: '%s'" % console_log_level)
return False
# Create and set formatter, add console handler to logger
console_formatter = LogFormatter(fmt=log_line_template, color=console_log_color)
console_handler.setFormatter(console_formatter)
logger.addHandler(console_handler)
# Create log file handler
try:
logfile_handler = logging.FileHandler(logfile_file)
except Exception as exception:
print("Failed to set up log file: %s" % str(exception))
return False
# Set log file log level
try:
logfile_handler.setLevel(logfile_log_level.upper()) # only accepts uppercase level names
except:
print("Failed to set log file log level: invalid level: '%s'" % logfile_log_level)
return False
# Create and set formatter, add log file handler to logger
logfile_formatter = LogFormatter(fmt=log_line_template, color=logfile_log_color)
logfile_handler.setFormatter(logfile_formatter)
logger.addHandler(logfile_handler)
# Success
return True
# Main function
def main():
# Setup logging
script_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
if (not setup_logging(console_log_output="stdout", console_log_level="warning", console_log_color=True,
logfile_file=script_name + ".log", logfile_log_level="debug", logfile_log_color=False,
log_line_template="%(color_on)s[%(created)d] [%(threadName)s] [%(levelname)-8s] %(message)s%(color_off)s")):
print("Failed to setup logging, aborting.")
return 1
# Log some messages
logging.debug("Debug message")
logging.info("Info message")
logging.warning("Warning message")
logging.error("Error message")
logging.critical("Critical message")
# Call main function
if (__name__ == "__main__"):
sys.exit(main())
NOTE regarding Microsoft Windows: For colors to actually appear on Microsoft Windows, additional steps are necessary. There are two options (both successfully tested on Microsoft Windows 10):
1) Enable ANSI terminal mode using the following code (this enables the terminal to interpret escape sequences by setting the flag ENABLE_VIRTUAL_TERMINAL_PROCESSING):
# Enable ANSI terminal mode on Microsoft Windows
def windows_enable_ansi_terminal_mode():
if (sys.platform != "win32"):
return None
try:
import ctypes
kernel32 = ctypes.windll.kernel32
result = kernel32.SetConsoleMode(kernel32.GetStdHandle(-11), 7)
if (result == 0): raise Exception
return True
except:
return False
2) Use Python package colorama (filters output sent to stdout and stderr and translates escape sequences to native Windows API calls):
import colorama
colorama.init()
I do the logging setup in main.py, then just import logging and call logging.debug() etc. in the modules. Not really sure why it is not working in your case.
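For reference, a minimal sketch of that layout (the file and function names here are made up for illustration):
# main.py
import logging
import worker  # hypothetical module of your project

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
worker.do_work()

# worker.py
import logging

log = logging.getLogger(__name__)  # inherits the configuration done once in main.py

def do_work():
    log.debug("configured in main.py, usable from any module")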
Either run basicConfig with stream=sys.stdout as the argument prior to setting up any other handlers or logging any messages, or manually add a StreamHandler that pushes messages to stdout to the root logger (or any other logger you want, for that matter). Note that basicConfig() does not accept the stream and filename arguments together, nor either of them combined with handlers.
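Both options sketched side by side (the DEBUG level is just an example):
import logging
import sys

# Option 1: let basicConfig() create its single handler on stdout
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

# Option 2: keep whatever file logging you already have and add a stdout handler yourself
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))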
Logging to stdout and a rotating file with different levels and formats:
import logging
import logging.handlers
import sys
if __name__ == "__main__":
# Change root logger level from WARNING (default) to NOTSET in order for all messages to be delegated.
logging.getLogger().setLevel(logging.NOTSET)
# Add stdout handler, with level INFO
console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.INFO)
formatter = logging.Formatter('%(name)-13s: %(levelname)-8s %(message)s')
console.setFormatter(formatter)
logging.getLogger().addHandler(console)
# Add file rotating handler, with level DEBUG
rotatingHandler = logging.handlers.RotatingFileHandler(filename='rotating.log', maxBytes=1000, backupCount=5)
rotatingHandler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
rotatingHandler.setFormatter(formatter)
logging.getLogger().addHandler(rotatingHandler)
log = logging.getLogger("app." + __name__)
log.debug('Debug message, should only appear in the file.')
log.info('Info message, should appear in file and stdout.')
log.warning('Warning message, should appear in file and stdout.')
log.error('Error message, should appear in file and stdout.')
I get ValueError: unsupported format character 'n' (0x6e) at index 46 when I use \n and %s in the log message.
After having used Waterboy's code over and over in multiple Python packages, I finally cast it into a tiny standalone Python package, which you can find here:
https://github.com/acschaefer/duallog
The code is well documented and easy to use. Simply download the .py file and include it in your project, or install the whole package via pip install duallog.
I have handled redirecting logs and prints to a file on disk, stdout, and stderr simultaneously via the following module (also available as a Gist):
import logging
import pathlib
import sys
from ml.common.const import LOG_DIR_PATH, ML_DIR
def create_log_file_path(file_path, root_dir=ML_DIR, log_dir=LOG_DIR_PATH):
path_parts = list(pathlib.Path(file_path).parts)
relative_path_parts = path_parts[path_parts.index(root_dir) + 1:]
log_file_path = pathlib.Path(log_dir, *relative_path_parts)
log_file_path = log_file_path.with_suffix('.log')
# Create the directories and the file itself
log_file_path.parent.mkdir(parents=True, exist_ok=True)
log_file_path.touch(exist_ok=True)
return log_file_path
def set_up_logs(file_path, mode='a', level=logging.INFO):
log_file_path = create_log_file_path(file_path)
logging_handlers = [logging.FileHandler(log_file_path, mode=mode),
logging.StreamHandler(sys.stdout)]
logging.basicConfig(
handlers=logging_handlers,
format='%(asctime)s %(name)s %(levelname)s %(message)s',
level=level
)
class OpenedFileHandler(logging.FileHandler):
def __init__(self, file_handle, filename, mode):
self.file_handle = file_handle
super(OpenedFileHandler, self).__init__(filename, mode)
def _open(self):
return self.file_handle
class StandardError:
def __init__(self, buffer_stderr, buffer_file):
self.buffer_stderr = buffer_stderr
self.buffer_file = buffer_file
def write(self, message):
self.buffer_stderr.write(message)
self.buffer_file.write(message)
class StandardOutput:
def __init__(self, buffer_stdout, buffer_file):
self.buffer_stdout = buffer_stdout
self.buffer_file = buffer_file
def write(self, message):
self.buffer_stdout.write(message)
self.buffer_file.write(message)
class Logger:
def __init__(self, file_path, mode='a', level=logging.INFO):
self.stdout_ = sys.stdout
self.stderr_ = sys.stderr
log_file_path = create_log_file_path(file_path)
self.file_ = open(log_file_path, mode=mode)
logging_handlers = [OpenedFileHandler(self.file_, log_file_path,
mode=mode),
logging.StreamHandler(sys.stdout)]
logging.basicConfig(
handlers=logging_handlers,
format='%(asctime)s %(name)s %(levelname)s %(message)s',
level=level
)
# Overrides write() method of stdout and stderr buffers
def write(self, message):
self.stdout_.write(message)
self.stderr_.write(message)
self.file_.write(message)
def flush(self):
pass
def __enter__(self):
sys.stdout = StandardOutput(self.stdout_, self.file_)
sys.stderr = StandardError(self.stderr_, self.file_)
def __exit__(self, exc_type, exc_val, exc_tb):
sys.stdout = self.stdout_
sys.stderr = self.stderr_
self.file_.close()
Since it is written as a context manager, you can simply add the functionality to your Python script with one extra line:
from logger import Logger
...
if __name__ == '__main__':
with Logger(__file__):
main()
Although the question specifically asks for a logger configuration, there is an alternative approach that does not require any changes to the logging configuration, nor does it require redirecting stdout. A bit simplistic, perhaps, but it works:
def log_and_print(message: str, level: int, logger: logging.Logger):
logger.log(level=level, msg=message) # log as normal
print(message) # prints to stdout by default
Instead of e.g. logger.debug('something') we now call log_and_print(message='something', level=logging.DEBUG, logger=logger). We can also expand this a little, so it prints to stdout only when necessary:
import logging
import sys

def log_print(message: str, level: int, logger: logging.Logger):
# log the message normally
logger.log(level=level, msg=message)
# only print to stdout if the message is not logged to stdout
msg_logged_to_stdout = False
current_logger = logger
while current_logger and not msg_logged_to_stdout:
is_enabled = current_logger.isEnabledFor(level)
logs_to_stdout = any(
getattr(handler, 'stream', None) == sys.stdout
for handler in current_logger.handlers
)
msg_logged_to_stdout = is_enabled and logs_to_stdout
if not current_logger.propagate:
current_logger = None
else:
current_logger = current_logger.parent
if not msg_logged_to_stdout:
print(message)
This checks the logger and its parents for any handlers that stream to stdout, and checks whether the logger is enabled for the specified level.
Note this has not been optimized for performance.
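A small usage sketch under the same assumptions (the logger name is arbitrary): the INFO call below is emitted once by the stdout handler and not duplicated by print(), while the DEBUG call is dropped by the logger and therefore falls back to print():
import logging
import sys

logger = logging.getLogger("example")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

log_print("shown once via the stdout handler", logging.INFO, logger)
log_print("dropped by the logger, so printed as a fallback", logging.DEBUG, logger)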
For Python 2.7, try the following:
fh = logging.handlers.RotatingFileHandler(LOGFILE, maxBytes=(1048576*5), backupCount=7)
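basicConfig() only gained the handlers argument in Python 3.3, so on 2.7 the two handlers are attached by hand; a sketch reusing LOGFILE from the question:
import logging
import logging.handlers
import sys

root = logging.getLogger()
root.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

# LOGFILE as in the question
fh = logging.handlers.RotatingFileHandler(LOGFILE, maxBytes=(1048576*5), backupCount=7)
ch = logging.StreamHandler(sys.stdout)
for handler in (fh, ch):
    handler.setFormatter(formatter)
    root.addHandler(handler)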
Initialize the StreamHandler with sys.stdout, and then it will log to that instead of stderr. Remember to import sys first, and actually initialize the handler, i.e. consoleHandler = logging.StreamHandler(sys.stdout).
Add rootLogger.setLevel(logging.DEBUG) if you are trying to see info or debug messages.