
3 Hyperparameter Tuning Strategies That Go Beyond Grid Search


Image by Author

 

Introduction

 
When building machine learning models of moderate to high complexity, there is an ample range of model parameters that are not learned from data but must instead be set by us a priori: these are called hyperparameters. Models like random forest ensembles and neural networks have a wide variety of hyperparameters to adjust, each of which can take one of many different values. Consequently, the number of possible ways to configure even a small subset of hyperparameters becomes practically limitless. This poses a problem: identifying the optimal configuration of these hyperparameters, i.e. the one(s) yielding the best model performance, can become like searching for a needle in a haystack, or worse, in an ocean.

This article builds on a previous Machine Learning Mastery guide on the art of hyperparameter tuning, and adopts a hands-on approach to illustrate the use of intermediate to advanced hyperparameter tuning strategies in practice.

Specifically, you will learn how to apply these three hyperparameter tuning strategies:

  • randomized search
  • Bayesian optimization
  • successive halving

 

Performing Preliminary Setup

 
Before starting, we will import the necessary libraries and dependencies. If you get a "ModuleNotFoundError" for any of these, make sure to pip install the library in question first (e.g. pip install optuna). We will be using NumPy, scikit-learn, and Optuna:

import numpy as np
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
import optuna
import warnings
warnings.filterwarnings('ignore')

 

We will also load the dataset used in all three examples: the Modified National Institute of Standards and Technology (MNIST) dataset, for classification of low-resolution images of handwritten digits.

print("=" * 70)
print("LOADING MNIST DATASET FOR IMAGE CLASSIFICATION")
print("=" * 70)

# Load digits dataset (lightweight version of MNIST: 8x8 images, 1797 samples)
digits = load_digits()
X, y = digits.data, digits.target

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(f"Training instances: {X_train.shape[0]}")
print(f"Test instances: {X_test.shape[0]}")
print(f"Features: {X_train.shape[1]}")
print(f"Classes: {len(np.unique(y))}")
print()

 

Next, we define a hyperparameter search space; that is, we decide which parameters, and which ranges of values within each, we want to try in combination.

print("=" * 70)
print("HYPERPARAMETER SEARCH SPACE")
print("=" * 70)

# Typical hyperparameters to explore in a random forest ensemble
param_space = {
    'n_estimators': (10, 200),      # Number of trees
    'max_depth': (5, 50),            # Maximum tree depth
    'min_samples_split': (2, 20),   # Min samples to split a node
    'min_samples_leaf': (1, 10),    # Min samples in a leaf node
    'max_features': (0.1, 1.0)      # Fraction of features to consider
}

print("Search space:")
for param, bounds in param_space.items():
    print(f"  {param}: {bounds}")
print()

 

As a final preparatory step, we define a function that will be reused throughout. It encapsulates the process of training and evaluating a random forest ensemble under one specific hyperparameter configuration, using cross-validation (CV) with classification accuracy to determine the model's quality. Note that this function may be called many times by each of the three strategies we will implement, as many times as there are hyperparameter value combinations to try.

def evaluate_model(params, X_train, y_train, cv=3):
    # Instantiate a random forest model with the given hyperparameters
    model = RandomForestClassifier(
        n_estimators=int(params['n_estimators']),
        max_depth=int(params['max_depth']),
        min_samples_split=int(params['min_samples_split']),
        min_samples_leaf=int(params['min_samples_leaf']),
        max_features=float(params['max_features']),
        random_state=42,
        n_jobs=-1  # Use all CPU cores for speed
    )
    
    # Use CV to measure performance
    # This gives us a more robust estimate than a single train/val split
    scores = cross_val_score(model, X_train, y_train, cv=cv, 
                             scoring='accuracy', n_jobs=-1)
    # Return the average cross-validation accuracy
    return np.mean(scores)
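
As a quick sanity check, we can call this helper directly on one arbitrary, hand-picked configuration (the values below are purely illustrative and not tuned):

example_params = {
    'n_estimators': 100, 'max_depth': 20, 'min_samples_split': 2,
    'min_samples_leaf': 1, 'max_features': 0.5
}
print(f"Example CV accuracy: {evaluate_model(example_params, X_train, y_train):.4f}")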

 

Now we are ready to try out the three strategies!

 

Implementing Randomized Search

 
As its name suggests, randomized search randomly samples hyperparameter combinations from the search space, rather than exhaustively trying every possible combination in a pre-defined grid, as grid search does. Every trial is independent, with no knowledge gained from previous trials. Nonetheless, it is a highly effective method in many situations, often finding high-quality solutions more quickly than grid search.

Here is how a randomized search can be implemented and applied to random forest ensembles for classifying the MNIST data:

def randomized_search(n_trials=30):
    start_time = time.time()  # Optional: used to measure execution time
    results = []
    
    print(f"\nRunning {n_trials} random trials...")
    
    for i in range(n_trials):
        # RANDOM SAMPLING: hyperparameters are sampled independently using NumPy's random number generation
        params = {
            'n_estimators': np.random.randint(param_space['n_estimators'][0], 
                param_space['n_estimators'][1]),
            'max_depth': np.random.randint(param_space['max_depth'][0], 
                param_space['max_depth'][1]),
            'min_samples_split': np.random.randint(param_space['min_samples_split'][0], 
                param_space['min_samples_split'][1]),
            'min_samples_leaf': np.random.randint(param_space['min_samples_leaf'][0], 
                param_space['min_samples_leaf'][1]),
            'max_features': np.random.uniform(param_space['max_features'][0], 
                param_space['max_features'][1])
        }
        
        # Evaluate the randomly sampled configuration
        score = evaluate_model(params, X_train, y_train)
        results.append({'params': params, 'score': score})
        
        # Show a progress update every 10 trials, for informative purposes
        if (i + 1) % 10 == 0:
            best_so_far = max(results, key=lambda x: x['score'])
            print(f"  Trial {i+1}/{n_trials}: Best score so far = {best_so_far['score']:.4f}")
    
    # Measure total time taken
    elapsed_time = time.time() - start_time
    
    # Identify the best configuration found
    best_result = max(results, key=lambda x: x['score'])
    
    print(f"\n✓ Completed in {elapsed_time:.2f} seconds")
    print(f"Best validation accuracy: {best_result['score']:.4f}")
    print(f"Best parameters: {best_result['params']}")
    
    return best_result, results

# Call the function to perform randomized search over 30 trials
random_best, random_results = randomized_search(n_trials=30)

 

Comments are provided alongside the code to facilitate understanding. The results obtained will be similar to the following:

Running 30 random trials...
  Trial 10/30: Best score so far = 0.9617
  Trial 20/30: Best score so far = 0.9617
  Trial 30/30: Best score so far = 0.9617

✓ Completed in 64.59 seconds
Best validation accuracy: 0.9617
Best parameters: {'n_estimators': 195, 'max_depth': 16, 'min_samples_split': 8, 'min_samples_leaf': 2, 'max_features': 0.28306570555707966}

 

Take note of the time it took to run the hyperparameter search, as well as the best validation accuracy achieved. In this case, it appears that 10 trials were already sufficient to find the best configuration.
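
As an aside, scikit-learn ships a ready-made implementation of this idea, RandomizedSearchCV, which can replace the manual loop above. Below is a minimal sketch, assuming the same X_train and y_train as before and using SciPy distributions (randint, uniform) to express the search space; it is only a pointer to the built-in API, not part of the timing comparison in this article:

from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV

# Equivalent search space expressed as SciPy distributions
rs_distributions = {
    'n_estimators': randint(10, 200),
    'max_depth': randint(5, 50),
    'min_samples_split': randint(2, 20),
    'min_samples_leaf': randint(1, 10),
    'max_features': uniform(0.1, 0.9)  # uniform(loc, scale) covers [0.1, 1.0]
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42, n_jobs=-1),
    param_distributions=rs_distributions,
    n_iter=30, cv=3, scoring='accuracy', random_state=42, n_jobs=-1
)
search.fit(X_train, y_train)
print(search.best_score_, search.best_params_)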

 

Applying Bayesian Optimization

 
This method employs an auxiliary or surrogate model (specifically, a probabilistic model based on Gaussian processes or tree-based structures) to predict the best-performing hyperparameter settings. Trials are not independent: each trial "learns" from previous trials. Furthermore, the method attempts to balance exploration (trying new areas of the solution space) and exploitation (refining promising regions). In summary, we have a smarter approach than grid and randomized search.

The Optuna library provides a particular implementation of Bayesian optimization for hyperparameter tuning that uses a Tree-structured Parzen Estimator (TPE). It classifies trials into "good" and "bad" groups, models the probability distribution over each, and samples from promising regions.

The whole process can be implemented as follows:

def bayesian_optimization(n_trials=30):
    """
    Implementation of Bayesian optimization using the Optuna library.
    """
    start_time = time.time()
    
    def objective(trial):
        """
        Optuna objective function: given a trial, returns a score.
        """
        # Optuna can suggest values based on past performance
        params = {
            'n_estimators': trial.suggest_int('n_estimators', 
                param_space['n_estimators'][0],
                param_space['n_estimators'][1]),
            'max_depth': trial.suggest_int('max_depth',
                param_space['max_depth'][0],
                param_space['max_depth'][1]),
            'min_samples_split': trial.suggest_int('min_samples_split',
                param_space['min_samples_split'][0],
                param_space['min_samples_split'][1]),
            'min_samples_leaf': trial.suggest_int('min_samples_leaf',
                param_space['min_samples_leaf'][0],
                param_space['min_samples_leaf'][1]),
            'max_features': trial.suggest_float('max_features',
                param_space['max_features'][0],
                param_space['max_features'][1])
        }
        
        # Evaluate and return the score (the study below is set to maximize it)
        return evaluate_model(params, X_train, y_train)
    
    # The create_study() function is used in Optuna to manage and run
    # the overall optimization process
    print(f"\nRunning {n_trials} Bayesian optimization trials...")
    
    study = optuna.create_study(
        direction='maximize',  # We want to maximize accuracy
        sampler=optuna.samplers.TPESampler(seed=42)  # Bayesian algorithm
    )
    
    # Run the optimization process with a progress callback
    def callback(study, trial):
        if trial.number % 10 == 9:
            print(f"  Trial {trial.number + 1}/{n_trials}: Best score = {study.best_value:.4f}")
    
    study.optimize(objective, n_trials=n_trials, callbacks=[callback], show_progress_bar=False)
    
    elapsed_time = time.time() - start_time
    
    print(f"\n✓ Completed in {elapsed_time:.2f} seconds")
    print(f"Best validation accuracy: {study.best_value:.4f}")
    print(f"Best parameters: {study.best_params}")
    
    return study.best_params, study.best_value, study

bayesian_best_params, bayesian_best_score, bayesian_study = bayesian_optimization(n_trials=30)

 

Output (summarized):

✓ Completed in 62.66 seconds
Best validation accuracy: 0.9673
Best parameters: {'n_estimators': 150, 'max_depth': 33, 'min_samples_split': 2, 'min_samples_leaf': 1, 'max_features': 0.19145126698170384}
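
If you want to inspect how the search progressed beyond the summary above, the returned study object keeps a record of every trial. For example, the following sketch (which relies on Optuna's trials_dataframe() helper and therefore requires pandas) lists the top trials by score:

# Summarize all trials and show the best ones first
trials_df = bayesian_study.trials_dataframe()
print(trials_df[['number', 'value']].sort_values('value', ascending=False).head())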

 

Using Successive Halving

 
The last of the three methods, successive halving, balances the size of the search space against the computing resources allotted to each candidate configuration. It starts with an ample array of configurations but limited resources (e.g. training data) per configuration, gradually eliminating poor performers and allocating more resources to promising configurations, much like a real-world tournament where the stronger contestants "survive."

The following implementation applies successive halving, with the training set size as the resource that grows from round to round.

def successive_halving(n_initial=32, min_resource=0.25, max_resource=1.0):
    
    start_time = time.time()
    
    # Step 1: Define the initial hyperparameter configurations at random
    print(f"\nGenerating {n_initial} initial random configurations...")
    configs = []
    for _ in range(n_initial):
        config = {
            'n_estimators': np.random.randint(param_space['n_estimators'][0], 
                param_space['n_estimators'][1]),
            'max_depth': np.random.randint(param_space['max_depth'][0], 
                param_space['max_depth'][1]),
            'min_samples_split': np.random.randint(param_space['min_samples_split'][0], 
                param_space['min_samples_split'][1]),
            'min_samples_leaf': np.random.randint(param_space['min_samples_leaf'][0], 
                param_space['min_samples_leaf'][1]),
            'max_features': np.random.uniform(param_space['max_features'][0], 
                param_space['max_features'][1])
        }
        configs.append(config)
    
    # Step 2: apply tournament-like successive rounds of elimination
    current_configs = configs
    current_resource = min_resource
    round_num = 1
    
    while len(current_configs) > 1 and current_resource <= max_resource:
        # Determine the number of training instances to use in the current round
        n_samples = int(len(X_train) * current_resource)
        print(f"\n--- Round {round_num}: Evaluating {len(current_configs)} configs ---")
        print(f"    Using {current_resource*100:.0f}% of training data ({n_samples} samples)")
        
        # Subsample training instances
        indices = np.random.choice(len(X_train), size=n_samples, replace=False)
        X_subset = X_train[indices]
        y_subset = y_train[indices]
        
        # Evaluate all current configs with the current resources
        scores = []
        for i, config in enumerate(current_configs):
            score = evaluate_model(config, X_subset, y_subset, cv=2)  # Use cv=2 (minimal)
            scores.append(score)
            
            if (i + 1) % 10 == 0 or (i + 1) == len(current_configs):
                print(f"    Evaluated {i+1}/{len(current_configs)} configs...")
        
        # Elimination policy: keep only the top-performing half
        n_keep = max(1, len(current_configs) // 2)
        sorted_indices = np.argsort(scores)[::-1]  # Descending order
        current_configs = [current_configs[i] for i in sorted_indices[:n_keep]]
        
        best_score = scores[sorted_indices[0]]
        print(f"    → Keeping top {n_keep} configs. Best score: {best_score:.4f}")
        
        # Update resources, doubling them for the next round
        current_resource = min(current_resource * 2, max_resource)
        round_num += 1
    
    # Final evaluation of the best config found, using the full training set
    best_config = current_configs[0]
    final_score = evaluate_model(best_config, X_train, y_train, cv=3)
    
    elapsed_time = time.time() - start_time
    
    print(f"\n✓ Completed in {elapsed_time:.2f} seconds")
    print(f"Best validation accuracy: {final_score:.4f}")
    print(f"Best parameters: {best_config}")
    
    return best_config, final_score

halving_best, halving_score = successive_halving(n_initial=32, min_resource=0.25, max_resource=1.0)

 

The final result obtained may look like the following:

✓ Completed in 56.18 seconds
Best validation accuracy: 0.9645
Best parameters: {'n_estimators': 158, 'max_depth': 39, 'min_samples_split': 5, 'min_samples_leaf': 2, 'max_features': 0.2269785516325355}
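
For reference, scikit-learn also offers a built-in variant of this strategy, HalvingRandomSearchCV (still experimental, so it must be enabled explicitly). The sketch below is only an illustration under the assumption of the same data and a SciPy-based search space; it is not part of the manual implementation above:

# HalvingRandomSearchCV is experimental and must be enabled before importing
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingRandomSearchCV
from scipy.stats import randint, uniform

halving_search = HalvingRandomSearchCV(
    RandomForestClassifier(random_state=42, n_jobs=-1),
    param_distributions={
        'n_estimators': randint(10, 200),
        'max_depth': randint(5, 50),
        'min_samples_split': randint(2, 20),
        'min_samples_leaf': randint(1, 10),
        'max_features': uniform(0.1, 0.9)
    },
    factor=2,              # Keep the best half of the candidates each round
    resource='n_samples',  # Grow the training set size across rounds
    cv=3, random_state=42, n_jobs=-1
)
halving_search.fit(X_train, y_train)
print(halving_search.best_score_, halving_search.best_params_)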

 

 

Comparing the Final Results

 
In summary, all three methods found a strong configuration, with validation accuracies between 96% and 97%, and Bayesian optimization achieving the best result by a small margin. The differences are more noticeable in terms of efficiency, with successive halving producing the fastest results in just over 56 seconds, compared to the 62-64 seconds taken by the other two strategies.
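
If all three snippets were run in the same session, a short summary like the one below (reusing the variables returned earlier) makes the comparison explicit:

# Side-by-side summary of the three strategies
summary = {
    'Randomized search': random_best['score'],
    'Bayesian optimization (Optuna)': bayesian_best_score,
    'Successive halving': halving_score
}
for name, acc in summary.items():
    print(f"{name:32s} CV accuracy = {acc:.4f}")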
 
 

Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
