
The Complete Guide to Logging for Python Developers

Image by Author

Introduction

 
Most Python developers treat logging as an afterthought. They throw around print() statements during development, maybe switch to basic logging later, and assume that’s enough. But when issues arise in production, they learn they’re missing the context needed to diagnose problems efficiently.

Proper logging gives you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can trace user actions, identify bottlenecks, and debug issues without reproducing them locally. Good logging turns debugging from guesswork into systematic problem-solving.

This article covers the essential logging patterns that Python developers can use. You’ll learn to structure log messages for searchability, handle exceptions without losing context, and configure logging for different environments. We’ll start with the basics and work our way up to more advanced logging techniques that you can use in projects right away. We will be using only the logging module.

You can find the code on GitHub.

 

Setting Up Your First Logger

 
Instead of jumping straight to complex configurations, let us understand what a logger actually does. We’ll create a basic logger that writes to both the console and a file.
 

import logging

logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.debug('This is a debug message')
logger.info('Application started')
logger.warning('Disk space running low')
logger.error('Failed to connect to database')
logger.critical('System shutting down')

 

Here’s what each piece of the code does.

The getLogger() function creates a named logger instance. Think of it as creating a channel for your logs. The name ‘my_app’ helps you identify where logs come from in larger applications.

We set the logger level to DEBUG, which means it will process all messages. Then we create two handlers: one for console output and one for file output. Handlers control where logs go.

The console handler only shows INFO level and above, while the file handler captures everything, including DEBUG messages. This is useful because you want detailed logs in files but cleaner output on screen.

The formatter determines how your log messages look. The format string uses placeholders like %(asctime)s for the timestamp and %(levelname)s for severity.
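
The logging module defines many more LogRecord attributes you can drop into a format string. Here is a short sketch of a richer formatter (the logger name is illustrative):

import logging

# module, funcName, and lineno pinpoint where the logging call was made;
# process and threadName help when debugging concurrent applications.
detailed = logging.Formatter(
    '%(asctime)s %(levelname)s %(name)s '
    '[%(module)s.%(funcName)s:%(lineno)d pid=%(process)d] %(message)s'
)

handler = logging.StreamHandler()
handler.setFormatter(detailed)

logger = logging.getLogger('format_demo')
logger.addHandler(handler)
logger.warning('Formatter placeholders in action')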

 

Understanding Log Ranges and When to Use Every

 
Python’s logging module has five standard levels, and knowing when to use each one is important for producing useful logs.

Here is an example:
 

import logging

logger = logging.getLogger('payment_processor')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

def process_payment(user_id, amount):
    logger.debug(f'Starting payment processing for user {user_id}')

    if amount <= 0:
        logger.error(f'Invalid payment amount: {amount}')
        return False

    logger.info(f'Processing ${amount} payment for user {user_id}')

    if amount > 10000:
        logger.warning(f'Large transaction detected: ${amount}')

    try:
        # Simulate payment processing
        success = charge_card(user_id, amount)
        if success:
            logger.info(f'Payment successful for user {user_id}')
            return True
        else:
            logger.error(f'Payment failed for user {user_id}')
            return False
    except Exception as e:
        logger.critical(f'Payment system crashed: {e}', exc_info=True)
        return False

def charge_card(user_id, amount):
    # Simulated payment logic
    return True

process_payment(12345, 150.00)
process_payment(12345, 15000.00)

 

Let us break down when to use each level:

  • DEBUG is for detailed information useful during development. You’ll use it for variable values, loop iterations, or step-by-step execution traces. These are usually disabled in production.
  • INFO marks normal operations that you want to record. Starting a server, completing a task, or successful transactions go here. These confirm your application is working as expected.
  • WARNING signals something unexpected but not breaking. This includes low disk space, deprecated API usage, or unusual but handled situations. The application keeps running, but someone should investigate.
  • ERROR means something failed but the application can continue. Failed database queries, validation errors, or network timeouts belong here. The specific operation failed, but the app keeps working.
  • CRITICAL indicates serious problems that may cause the application to crash or lose data. Use this sparingly for catastrophic failures that need immediate attention.

When you run the above code, you’ll get:
 

DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $150.0 payment for user 12345
INFO:payment_processor:Processing $150.0 payment for user 12345
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345
DEBUG: Starting payment processing for user 12345
DEBUG:payment_processor:Starting payment processing for user 12345
INFO: Processing $15000.0 payment for user 12345
INFO:payment_processor:Processing $15000.0 payment for user 12345
WARNING: Large transaction detected: $15000.0
WARNING:payment_processor:Large transaction detected: $15000.0
INFO: Payment successful for user 12345
INFO:payment_processor:Payment successful for user 12345

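Notice that every message appears twice, in two different formats. That happens when the root logger also has a handler installed (common in notebooks and REPLs, or after an earlier logging.basicConfig() call): records sent to the payment_processor logger propagate up to the root logger, which prints them again in its own format. If you see duplicates like this, a minimal fix is to disable propagation on your named logger:

logger.propagate = False  # stop records from reaching the root logger's handlers
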
 

Next, let us move on to logging exceptions.

 

Logging Exceptions Properly

 
When exceptions occur, you need more than just the error message; you need the full stack trace. Here is how to capture exceptions effectively.
 

import json
import logging

logger = logging.getLogger('api_handler')
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler('errors.log')
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

def fetch_user_data(user_id):
    logger.info(f'Fetching data for user {user_id}')

    try:
        # Simulate API call
        response = call_external_api(user_id)
        data = json.loads(response)
        logger.debug(f'Received data: {data}')
        return data
    except json.JSONDecodeError as e:
        logger.error(
            f'Failed to parse JSON for user {user_id}: {e}',
            exc_info=True
        )
        return None
    except ConnectionError:
        logger.error(
            f'Network error while fetching user {user_id}',
            exc_info=True
        )
        return None
    except Exception as e:
        logger.critical(
            f'Unexpected error in fetch_user_data: {e}',
            exc_info=True
        )
        raise

def call_external_api(user_id):
    # Simulated API response
    return '{"id": ' + str(user_id) + ', "name": "John"}'

fetch_user_data(123)

 

The key here is the exc_info=True parameter. It tells the logger to include the full exception traceback in your logs. Without it, you only get the error message, which often is not enough to debug the problem.

Notice how we catch specific exceptions first, then add a general Exception handler. The specific handlers let us provide context-appropriate error messages. The final handler catches anything unexpected and re-raises it because we do not know how to handle it safely.

Also notice that we log at ERROR for expected exceptions (like network errors) but CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
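
As a shorthand, the logging module also provides logger.exception(), which logs at ERROR level with exc_info=True automatically and is meant to be called from inside an except block:

try:
    json.loads('not valid json')
except json.JSONDecodeError:
    # Equivalent to logger.error(..., exc_info=True)
    logger.exception('Failed to parse JSON')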

 

Making a Reusable Logger Configuration

 
Copying logger setup code across files is tedious and error-prone. Let us create a configuration function that you can import anywhere in your project.
 

# logger_config.py

import logging
import os
from datetime import datetime


def setup_logger(name, log_dir="logs", level=logging.INFO):
    """
    Create a configured logger instance

    Args:
        name: Logger name (usually __name__ from the calling module)
        log_dir: Directory to store log files
        level: Minimum logging level

    Returns:
        Configured logger instance
    """
    # Create the logs directory if it doesn't exist
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    logger = logging.getLogger(name)

    # Avoid adding handlers multiple times
    if logger.handlers:
        return logger
    logger.setLevel(level)

    # Console handler - INFO and above
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
    console_handler.setFormatter(console_format)

    # File handler - everything
    log_filename = os.path.join(
        log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
    )
    file_handler = logging.FileHandler(log_filename)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
    )
    file_handler.setFormatter(file_format)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)

    return logger

 

Now that you have set up logger_config, you can use it in your Python scripts like so:
 

from logger_config import setup_logger

logger = setup_logger(__name__)

def calculate_discount(price, discount_percent):
    logger.debug(f'Calculating discount: {price} * {discount_percent}%')

    if discount_percent < 0 or discount_percent > 100:
        logger.warning(f'Invalid discount percentage: {discount_percent}')
        discount_percent = max(0, min(100, discount_percent))

    discount = price * (discount_percent / 100)
    final_price = price - discount

    logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
    return final_price

calculate_discount(100, 20)
calculate_discount(100, 150)

 

This setup function handles several important things. First, it creates the logs directory if needed, preventing crashes from missing directories.

The function checks whether handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would create duplicate log entries.

We generate dated log filenames automatically. This keeps individual log files from growing indefinitely and makes it easy to find logs from specific dates.

The file handler includes more detail than the console handler, such as function names and line numbers. This is invaluable when debugging but would clutter console output.

Using __name__ as the logger name creates a hierarchy that matches your module structure. This lets you control logging for specific parts of your application independently, as the sketch below shows.
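
Logger names are dot-separated, and a child logger inherits the effective level of its parent, so you can quiet one subsystem without touching the rest. A quick sketch (the module names are illustrative):

import logging

logging.basicConfig(format='%(name)s %(levelname)s %(message)s')

# 'my_app.database' and 'my_app.api' are children of 'my_app'
logging.getLogger('my_app').setLevel(logging.INFO)
logging.getLogger('my_app.database').setLevel(logging.WARNING)  # quiet a noisy subsystem

logging.getLogger('my_app.api').info('Visible: inherits INFO from my_app')
logging.getLogger('my_app.database').info('Suppressed: below WARNING')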

 

Structuring Logs with Context

 
Plain text logs are fine for simple applications, but structured logs with context make debugging much easier. Let us add contextual information to our logs.
 

import json
import logging
from datetime import datetime, timezone

class ContextLogger:
    """Logger wrapper that adds contextual information to all log messages"""

    def __init__(self, name, context=None):
        self.logger = logging.getLogger(name)
        self.context = context or {}

        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(message)s')
        handler.setFormatter(formatter)
        # Check whether an equivalent handler already exists to avoid duplicates
        if not any(isinstance(h, logging.StreamHandler) and getattr(h.formatter, '_fmt', None) == '%(message)s'
                   for h in self.logger.handlers):
            self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def _format_message(self, message, level, extra_context=None):
        """Format the message with its context as JSON"""
        log_data = {
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
            'context': {**self.context, **(extra_context or {})}
        }
        return json.dumps(log_data)

    def debug(self, message, **kwargs):
        self.logger.debug(self._format_message(message, 'DEBUG', kwargs))

    def info(self, message, **kwargs):
        self.logger.info(self._format_message(message, 'INFO', kwargs))

    def warning(self, message, **kwargs):
        self.logger.warning(self._format_message(message, 'WARNING', kwargs))

    def error(self, message, **kwargs):
        self.logger.error(self._format_message(message, 'ERROR', kwargs))

 

You can use the ContextLogger like so:
 

def process_order(order_id, user_id):
    logger = ContextLogger(__name__, context={
        'order_id': order_id,
        'user_id': user_id
    })

    logger.info('Order processing started')

    try:
        items = fetch_order_items(order_id)
        logger.info('Items fetched', item_count=len(items))

        total = calculate_total(items)
        logger.info('Total calculated', total=total)

        if total > 1000:
            logger.warning('High value order', total=total, flagged=True)

        return True
    except Exception as e:
        logger.error('Order processing failed', error=str(e))
        return False

def fetch_order_items(order_id):
    return [{'id': 1, 'price': 50}, {'id': 2, 'price': 75}]

def calculate_total(items):
    return sum(item['price'] for item in items)

process_order('ORD-12345', 'USER-789')

 

This ContextLogger wrapper does something useful: it automatically includes context in every log message. The order_id and user_id get added to all logs without being repeated in every logging call.

The JSON format makes these logs easy to parse and search, as the sketch below shows.
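
For instance, once every line is a JSON object, filtering becomes a few lines of code (the file name is hypothetical, assuming the logs above were written to a file):

import json

# Hypothetical file containing one JSON log object per line
with open('orders.log') as fh:
    records = [json.loads(line) for line in fh]

# Find every warning attached to a specific order
flagged = [
    r for r in records
    if r['level'] == 'WARNING' and r['context'].get('order_id') == 'ORD-12345'
]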

The **kwargs in each logging method lets you add extra context to specific log messages. This automatically combines global context (order_id, user_id) with local context (item_count, total).

This pattern is especially useful in web applications where you want request IDs, user IDs, or session IDs in every log message from a request.
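
The standard library also ships a built-in variant of this idea, logging.LoggerAdapter, which injects a fixed context dict into every record. A minimal sketch (the format string and names are illustrative; every %(key)s referenced in the format must be present in the adapter's extra dict):

import logging

logging.basicConfig(format='%(levelname)s [order=%(order_id)s] %(message)s')

adapter = logging.LoggerAdapter(
    logging.getLogger('orders'),
    {'order_id': 'ORD-12345'}
)
adapter.warning('High value order')
# WARNING [order=ORD-12345] High value order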

 

Rotating Log Files to Prevent Disk Space Issues

 
Log files grow quickly in production. Without rotation, they will eventually fill your disk. Here is how to implement automatic log rotation.
 

import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

def setup_rotating_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Size-based rotation: rotate when the file reaches 10 MB
    size_handler = RotatingFileHandler(
        'app_size_rotation.log',
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5  # Keep 5 old files
    )
    size_handler.setLevel(logging.DEBUG)

    # Time-based rotation: rotate daily at midnight
    time_handler = TimedRotatingFileHandler(
        'app_time_rotation.log',
        when='midnight',
        interval=1,
        backupCount=7  # Keep 7 days
    )
    time_handler.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    size_handler.setFormatter(formatter)
    time_handler.setFormatter(formatter)

    logger.addHandler(size_handler)
    logger.addHandler(time_handler)

    return logger


logger = setup_rotating_logger('rotating_app')

 

Let us now generate some log traffic to exercise the rotation:
 

for i in range(1000):
    logger.info(f'Processing record {i}')
    logger.debug(f'Record {i} details: completed in {i * 0.1}ms')

 

RotatingFileHandler manages logs based on file size. When the log file reaches 10 MB (specified in bytes), it gets renamed to app_size_rotation.log.1, and a new app_size_rotation.log starts. The backupCount of 5 means you keep five old log files before the oldest gets deleted.

TimedRotatingFileHandler rotates based on time intervals. The ‘midnight’ argument means it creates a new log file every day at midnight. You can also use ‘H’ for hourly, ‘D’ for daily (at any time of day), or ‘W0’ for weekly on Monday.

The interval parameter works together with the when parameter. With when='H' and interval=6, logs would rotate every 6 hours, as sketched below.
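
For example, a handler that rotates every six hours and keeps roughly a day's worth of history might look like this (the file name is illustrative):

from logging.handlers import TimedRotatingFileHandler

six_hour_handler = TimedRotatingFileHandler(
    'app_6h_rotation.log',
    when='H',        # rotate on an hourly schedule...
    interval=6,      # ...every 6 hours
    backupCount=4    # keep 4 old files, about one day of history
)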

These handlers are essential for production environments. Without them, your application could crash when the disk fills up with logs.

 

Logging in Different Environments

 
Your logging needs differ between development, staging, and production. Here is how to configure logging so that it adapts to each environment.
 

import logging
import logging.handlers
import os

def configure_environment_logger(app_name):
    """Configure the logger based on the environment"""
    environment = os.getenv('APP_ENV', 'development')

    logger = logging.getLogger(app_name)

    # Clear existing handlers
    logger.handlers = []

    if environment == 'development':
        # Development: verbose console output
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter(
            '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        )
        handler.setFormatter(formatter)
        logger.addHandler(handler)

    elif environment == 'staging':
        # Staging: detailed file logs + important console messages
        logger.setLevel(logging.DEBUG)

        file_handler = logging.FileHandler('staging.log')
        file_handler.setLevel(logging.DEBUG)
        file_formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.WARNING)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    elif environment == 'production':
        # Production: structured logs, errors only to console
        logger.setLevel(logging.INFO)

        file_handler = logging.handlers.RotatingFileHandler(
            'production.log',
            maxBytes=50 * 1024 * 1024,  # 50 MB
            backupCount=10
        )
        file_handler.setLevel(logging.INFO)
        file_formatter = logging.Formatter(
            '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
            '"logger": "%(name)s", "message": "%(message)s"}'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.ERROR)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    return logger

 

This environment-based configuration handles each stage differently. Development shows everything on the console with detailed information, including function names and line numbers. This makes debugging fast.

Staging balances development and production. It writes detailed logs to files for investigation but only shows warnings and errors on the console to avoid noise.

Production focuses on performance and structure. It only logs INFO level and above to files, uses JSON formatting for easy parsing, and implements log rotation to manage disk space. Console output is limited to errors only.
 

# Set the environment variable (normally done by your deployment system)
os.environ['APP_ENV'] = 'production'

logger = configure_environment_logger('my_application')

logger.debug("This debug message won't appear in production")
logger.info('User logged in successfully')
logger.error('Failed to process payment')

 

The environment is determined by the APP_ENV environment variable. Your deployment system (Docker, Kubernetes, or another cloud platform) sets this variable automatically.

Notice how we clear existing handlers before configuring. This prevents duplicate handlers if the function is called multiple times during the application's lifecycle.
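
When a branching configuration function like this grows unwieldy, the standard library's logging.config.dictConfig() lets you declare the same setup as data. A minimal sketch of just the development branch (the keys follow the dictConfig schema; the names are illustrative):

import logging.config

DEV_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'dev': {
            'format': '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'dev',
            'level': 'DEBUG',
        },
    },
    'loggers': {
        'my_application': {'handlers': ['console'], 'level': 'DEBUG'},
    },
}

logging.config.dictConfig(DEV_LOGGING)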

 

Wrapping Up

 
Good logging makes the difference between quickly diagnosing issues and spending hours guessing what went wrong. Start with basic logging at appropriate severity levels, add structured context to make logs searchable, and configure rotation to prevent disk space problems.

The patterns shown here work for applications of any size. Start simple with basic logging, then add structured logging when you need better searchability, and implement environment-specific configuration when you deploy to production.

Happy logging!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


