Python Tutorial: Logging Advanced - Loggers, Handlers, and Formatters
Based on Corey Schafer's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Python’s logging becomes reliable only when each module uses its own logger and explicitly wires that logger to the right outputs. The core fix in this tutorial is moving away from the root logger—where configuration can be overwritten or silently ignored—and toward per-module loggers built with dedicated handlers and formatters. That shift matters because it prevents “missing” log files, mismatched log levels, and inconsistent formatting when multiple files in a project share the same global logging state.
The walkthrough starts by recreating the earlier setup: a simple script writes to sample.log using logging.basicConfig with a DEBUG level and a custom format (time, logger name, message). A second module (employee) also configures logging, but with a different file (employee.log), an INFO level, and a different format. When both modules rely on the root logger, importing one module can change the root logger’s configuration for the entire process. The result is counterintuitive behavior: importing employee creates employee.log but leaves sample.log absent, because the root logger has already been configured to INFO, so DEBUG messages from the simple script never pass the threshold. Even when INFO messages do appear, the shared root logger still produces “messy” outcomes—wrong destinations, wrong levels, and wrong formatting.
To correct this, the tutorial introduces a module-specific logger using logging.getLogger with the module’s __name__ as the logger name. Each module then logs through its own logger variable (e.g., logger.debug or logger.info), and the logger hierarchy falls back to the root logger only when needed. The next step is to stop using basicConfig for module output and instead attach handlers directly to the module logger. For employee, a FileHandler is created for employee.log, a Formatter is attached to that handler, and the logger’s level is set (e.g., INFO). After removing the basicConfig call, reruns produce employee.log with the expected formatting and the correct logger name.
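The per-module pattern looks roughly like this. It is a sketch of the employee side: in the real module the logger name would come from __name__ (i.e., 'employee' when imported), and the format string and log message here are illustrative stand-ins for the tutorial's.

```python
import logging

# In the employee module this would be logging.getLogger(__name__);
# the name is spelled out here so the sketch is self-contained.
logger = logging.getLogger('employee')
logger.setLevel(logging.INFO)

# The formatter is attached to the handler, not to the logger.
formatter = logging.Formatter('%(levelname)s:%(name)s:%(message)s')

file_handler = logging.FileHandler('employee.log')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

# Log through the module logger, never through the logging module directly.
logger.info('Created Employee: John Smith')
```

Because the handler belongs to this logger alone, another module calling basicConfig can no longer redirect or filter these records.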
The same pattern is applied to the simple script: a dedicated logger writes to sample.log with its own formatter and level. Once both modules are separated, the project gains flexibility. Handler-level thresholds can further refine output: setting the sample FileHandler to logging.ERROR filters out DEBUG logs while still allowing the module logger to remain at DEBUG. The tutorial also demonstrates richer error reporting by switching from logging.error to logging.exception inside an except block, which automatically includes the traceback.
Finally, the tutorial shows how modular logging scales by adding multiple handlers to one logger. A StreamHandler can be attached alongside the FileHandler so debug messages appear in the console while errors continue to be written to the log file. With handlers and formatters configured per module, logging becomes predictable, debuggable, and easier to extend—whether that means adding console output, email alerts, or rotating logs later.
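Attaching both handlers to one logger might look like the following sketch (message text is illustrative). The StreamHandler is left at its default level, so it inherits everything the logger passes; the FileHandler keeps its ERROR threshold.

```python
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(levelname)s:%(name)s:%(message)s')

# errors and above go to the file...
file_handler = logging.FileHandler('sample.log')
file_handler.setLevel(logging.ERROR)
file_handler.setFormatter(formatter)

# ...while everything from DEBUG up also appears on the console (stderr)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)

logger.addHandler(file_handler)
logger.addHandler(stream_handler)

logger.debug('visible in the console only')
logger.error('visible in the console and in sample.log')
```

Swapping the FileHandler for a RotatingFileHandler or an SMTPHandler later requires no change to the logging calls themselves, which is the extensibility point the tutorial closes on.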
Cornell Notes
The tutorial shows why Python logging breaks down when multiple modules share the root logger. Importing a module that calls logging.basicConfig can reconfigure the root logger’s file, level, and format for the entire program, causing missing log files and filtered-out messages (e.g., DEBUG never reaching an INFO root logger). The fix is to create a per-module logger with logging.getLogger(__name__), then attach dedicated handlers (FileHandler and optionally StreamHandler) and attach formatters to those handlers. Handler-level log thresholds enable fine-grained control, and using logging.exception inside an except block records tracebacks automatically. This modular setup makes logging predictable and easier to extend.
- Why do sample.log entries disappear after importing the employee module?
- How does using logging.getLogger(__name__) prevent cross-module interference?
- What role do handlers and formatters play compared with basicConfig?
- How can the tutorial filter only errors into sample.log while keeping the logger at DEBUG?
- What’s the difference between logging.error and logging.exception in this setup?
- How does adding a StreamHandler change logging behavior?
Review Questions
- What specific configuration change caused DEBUG messages to stop appearing after importing another module?
- In the per-module approach, which component decides the output destination: the logger or the handler?
- How does setting a FileHandler level to logging.ERROR affect messages when the logger level remains at logging.DEBUG?
Key Points
1. Avoid relying on the root logger when multiple modules call logging.basicConfig; imports can silently reconfigure global logging state.
2. Create a per-module logger with logging.getLogger(__name__) and log through that logger variable (e.g., logger.debug/info) to keep configurations isolated.
3. Attach a FileHandler to each module logger to control where records are written, and attach a Formatter to the handler to control message formatting.
4. Use logger.setLevel(...) to control what the logger emits, and use handler.setLevel(...) to control what each output actually records.
5. Inside except blocks, prefer logging.exception(...) when you want tracebacks included automatically.
6. Add multiple handlers (e.g., FileHandler plus StreamHandler) to send different subsets of logs to different destinations.
7. Handler-level thresholds enable practical filtering, such as writing only ERROR and above to a specific log file while keeping DEBUG enabled elsewhere.