Hyperparameter Optimization for Transfer Learning

Optimising Hyperparameters using optuna and fastai2
Author

Daniel van Strien

Published

July 1, 2020

#hide
!pip -q install fastai2 optuna swifter toolz 
#hide
from google.colab import drive
drive.mount('/content/drive')
#hide
from pathlib import Path
from fastai2.text.all import *
from fastai2.vision.all import *
import pandas as pd
import optuna
import pprint
#hide
Path('data').mkdir(exist_ok=True)
out_path = Path('/content/drive/My Drive/Models/')
path = Path('data/')

tl;dr

This post covers:

  • the motivations for ‘pragmatic hyperparameter optimization’
  • how to do this using Optuna (with an example applied to the fastai2 library)

Optimizing hyperparameters?

Deep learning models have a range of hyperparameters. These include the basic building blocks of a model, such as the number of layers or the size of embedding layers, as well as parameters that control training, such as the learning rate. Changing some of these parameters can improve the performance of a model, so there is a potential win in finding the right values for them.

Auto ML vs pragmatic hyperparameter optimization

As a way of framing ‘pragmatic search’, it is useful to contrast it to Auto ML. If you haven’t come across it before:

The term AutoML has traditionally been used to describe automated methods for model selection and/or hyperparameter optimization. - {% fn 1 %}.

In particular, what is termed Auto ML often includes a search across models and hyperparameters, but it can also refer to ‘Neural Architecture Search’, in which the objective is to piece together a new model type for a specific problem or dataset. An underlying assumption of some of these Auto ML approaches is that each problem or dataset requires a unique model architecture.

In contrast, a more ‘pragmatic’ approach uses existing model architectures which have been shown to work across a range of datasets and tasks, and utilises transfer learning and other ‘tricks’ like cyclical learning rates and data augmentation. In a heritage context, there are likely to be bigger issues with imbalanced classes, noisy labels, etc., and focusing on designing a custom architecture is probably only going to lead to modest improvements in the performance of the model.

So what remains to be optimized?

In contrast to Auto ML, which can involve searching a huge range of potential architectures and parameters, we can instead limit our focus to a smaller set of things which may have a large impact on the performance of the model.

As an example use case for hyperparameter optimization, I’ll use two datasets which contain transcripts of trials from the Old Bailey online, classified into various categories (theft, deception, etc.). One of the datasets is drawn from the decade beginning in 1830, the other from the decade beginning in 1730.

The approach taken to classifying these trials will be to follow the “Universal Language Model Fine-tuning for Text Classification” approach. {% fn 2 %}.

I won’t give an in-depth summary of the approach here, but the idea is that:

  • a language model - in this case an LSTM-based model - is trained on Wikipedia text. This provides a “general” language model that learns to “understand” general features of a language, in this case English
  • this language model is then fine-tuned on a target dataset; in the original paper this is IMDB movie reviews
  • once the language model has been fine-tuned on the target dataset, it is used as the input for a classifier

The intuition here is that by combining a pre-trained language model (the Wikipedia part) with fine-tuning, we get the benefits of a massive training set (Wikipedia) whilst also being able to ‘focus’ the language model on a target corpus which will use language differently. This makes a lot of intuitive sense, but a question in this use case is how much to fine-tune the language model on our target datasets. A reasonable assumption might be that since the language of 1730 differs more from modern English than that of 1830, we may want to fine-tune the Wikipedia-trained language model more on the 1730 dataset.

We could of course test this through some trial-and-error experiments, but it is a question which may benefit from a more systematic search for appropriate hyperparameters. Before we get into this example in more depth, I’ll introduce the library I’m using for this hyperparameter search.

Optuna: A hyperparameter optimization framework

In this post I will be using Optuna “an automatic hyperparameter optimization software framework, particularly designed for machine learning”. {% fn 3 %}.

There are some really nice features in Optuna which I’ll cover in this post as I explore the question of language model fine-tuning, so hopefully even if you don’t care about the specific use case it might still provide a useful overview of Optuna.

In this blog post my examples will use version two of the fastai library, but there really isn’t anything that won’t translate to other frameworks. Optuna has integrations for a number of libraries (including version 1 of fastai), but for this blog I won’t use these integrations.

A simple optimization example

To show the approach used in Optuna I’ll use a simple image classification example: a toy task of classifying people vs cats in images taken from 19th-century books.

Optuna has two main concepts to understand: the study and the trial. A study is the overarching process of optimization based on some objective function; a trial is a single test/execution of the objective function. We’ll return to this in more detail. For now let’s look at a simple example.

For our first example we’ll just use Optuna to test whether to use a pre-trained model or not. If the option is True, the ResNet18 model will use weights from pre-training on ImageNet; if False, the model will start with random weights.

Looking at the high-level steps of using Optuna (I’ll go into more detail later), we first create an objective function:

#collapse-hide
!wget -q https://zenodo.org/record/3689444/files/humancats.zip?download=1
!unzip -q *humancats.zip* -d data/
def objective(trial):
    # let Optuna choose whether to start from pre-trained weights
    is_pretrained = trial.suggest_categorical('pre_trained', [True, False])
    dls = ImageDataLoaders.from_folder('data/human_vs_cats/', valid_pct=0.4, item_tfms=Resize(64))
    learn = cnn_learner(dls, resnet18, pretrained=is_pretrained, metrics=[accuracy])
    learn.fit(1)
    # the last recorded metric (accuracy) becomes the value Optuna optimizes
    acc = learn.recorder.values[-1][-1]
    return acc

Most of this will look familiar if you have used fastai before. Once we have this function we create a study:

study = optuna.create_study(direction='maximize')

and then optimize this study:

study.optimize(objective, n_trials=2)
epoch train_loss valid_loss accuracy time
0 1.503035 0.710954 0.555556 00:06
[I 2020-06-04 16:58:49,862] Finished trial#0 with value: 0.5555555820465088 with parameters: {'pre_trained': False}. Best is trial#0 with value: 0.5555555820465088.
epoch train_loss valid_loss accuracy time
0 1.691165 1.218440 0.555556 00:05
[I 2020-06-04 16:58:56,272] Finished trial#1 with value: 0.5555555820465088 with parameters: {'pre_trained': False}. Best is trial#0 with value: 0.5555555820465088.

Once we’ve run some trials we can inspect the study object for the best value we’re optimizing for. In this case this is the accuracy but it will be whatever is returned by our function. We can also see the parameters which led to this value.

study.best_value, study.best_params
(0.5555555820465088, {'pre_trained': False})

This toy example wasn’t particularly useful (it just confirmed that we probably want to use a pre-trained model), but going through the steps provides an overview of the main things Optuna requires. We start by defining a function, objective:

def objective(trial):

this is the function we want to optimize. We could call it something else, but following the convention in the Optuna docs we’ll call it objective. The function takes trial as an argument.

is_pretrained = trial.suggest_categorical('pre_trained', [True, False])

here we use trial to “suggest” a categorical parameter, in this case one of two options (whether pre_trained is set to True or False). We do this using trial.suggest_categorical, passing it the potential options.

The trial.suggest_* calls define the parameter “search space” for Optuna. We’ll look at all of the options later on. The final step is the return value of our objective function, i.e. the thing we want to optimize:

return acc

This return value is the objective value that Optuna will optimize. Because it is just the return value of a function, there is a lot of flexibility in what it can be. In this example it is accuracy, but it could be training or validation loss, or another training metric. Later on we’ll look at this in more detail.

Now let’s look at the study part:

study = optuna.create_study(direction='maximize')

This is the simplest way of creating a study. It creates a study object; again, we’ll look at more options as we go along. The one option we pass here is the direction. This refers to whether Optuna should try to increase or decrease the return value of our objective function, which depends on what you are tracking, i.e. you’d want to minimize error or validation loss but maximize accuracy or F1 score.

Looking at the overview provided in the Optuna docs we have three main building blocks:

  • Trial: A single call of the objective function

  • Study: An optimization session, which is a set of trials

  • Parameter: A variable whose value is to be optimized

Parameter search space

Borrowing once more from the docs:

The difficulty of optimization increases roughly exponentially with regard to the number of parameters. That is, the number of necessary trials increases exponentially when you increase the number of parameters, so it is recommended to not add unimportant parameters

This is a crucial point, particularly if we want to use optimization in a pragmatic way. Where we have existing knowledge or evidence about what works well for a particular problem, we should use it rather than asking Optuna to find it out for us. There are some extra tricks to make the search for the best parameters more efficient, which will be explored below, but for now let’s get back to the example use case.

Fine-tuning a language model

#collapse-hide
df_1830 = pd.read_csv('https://gist.githubusercontent.com/davanstrien/4bc85d8a4127a2791732280ffaa43293/raw/cd1a3cc53674b64c8f130edbcb34e835afa665fb/1830trial.csv')
df_1730 = pd.read_csv('https://gist.githubusercontent.com/davanstrien/4bc85d8a4127a2791732280ffaa43293/raw/cd1a3cc53674b64c8f130edbcb34e835afa665fb/1730trial.csv')

For the sake of brevity I won’t cover the steps to generate this dataset; the instructions for the 1830s trials can be found here (and can easily be adapted for the 1730s trials).

#hide_input 
df_1830.head(2)
Unnamed: 0 Unnamed: 0.1 0 file broad narrow text
0 14463.0 t18361128-57a theft-housebreaking t18361128-57a.txt theft housebreaking \n\n\n\n\n57. \n\n\n\n\nJOHN BYE\n the younger and \n\n\n\n\nFREDERICK BYE\n were indicted for\n\n feloniously breaking and entering the dwelling-house of \n\n\n\nJohn Bye, on the \n21st of November, at \nSt. Giles-in-the-Fields, and stealing therein 12 apples, value 9d.; 1 box, value 1d.; 24 pence, and 1 twopenny-piece; the goods and monies of \n\n\n\nMary Byrne.\n\n\n\n\n\n\nMARY BYRNE\n. I sell fruit; I live in Titchbourne-court, Holborn. On the 21st of November I went out at one o'clock, and locked my door?I left 2s. worth of penny-pieces in my drawer, and two dozen large apples?I came...
1 19021.0 t18380917-2214 theft-pocketpicking t18380917-2214.txt theft pocketpicking \n\n\n\n2214. \n\n\n\n\nMARY SMITH\n was indicted\n\n for stealing, on the \n16th of September, 1 purse, value 2d.; 3 half-crowns, and twopence; the goods and monies of \n\n\n\nGeorge Sainsbury, from his person.\n\n\n\n\n\n\nGEORGE SAINSBURY\n. Between twelve and one o'clock, on the 16th of September, I went to sleep in the fields, at Barnsbury-park, Islington, I had three half-crowns, and twopence, in my pocket?I was awoke, and missed my money?I went to the prisoner, and charged her with it?she said she had not got it?I followed her, and saw her drop ray purse down, it had two penny piece...

We load the data using the fastai2 TextDataLoaders:

# collapse-show
def load_lm_data(df):
    data_lm = TextDataLoaders.from_df(
        df.sample(frac=0.5), text_col="text", is_lm=True, bs=128
    )
    return data_lm


# Classification data
def load_class_data(df, data_lm):
    data_class = TextDataLoaders.from_df(
        df.sample(frac=0.5),
        text_col="text",
        label_col="broad",
        valid_pct=0.3,
        bs=128,
        text_vocab=data_lm.vocab,
    )
    return data_class
data_lm = load_lm_data(df_1830)
data_class = load_class_data(df_1830, data_lm)

Create the language model learner and classifier learner:

# collapse-show
def create_lm():
    return language_model_learner(data_lm, AWD_LSTM, pretrained=True).to_fp16()


def create_class_learn():
    return text_classifier_learner(
        data_class, AWD_LSTM, metrics=[accuracy, F1Score(average="weighted")]
    ).to_fp16()

Optuna trial suggest

In the example above, trial.suggest_categorical was used to define the potential parameter. Optuna has five kinds of parameters which can be optimized, all defined through the trial.suggest_* methods.

Categorical

This can be used for models, optimizers, and for True/False flags.

optimizer = trial.suggest_categorical('optimizer', ['MomentumSGD', 'Adam'])

Integer

n_epochs = trial.suggest_int('num_epochs', 1, 3)

Uniform

max_zoom = trial.suggest_uniform('max_zoom', 0.0, 1.0)

Loguniform

learning_rate = trial.suggest_loguniform('learning_rate', 1e-4, 1e-2)

Discrete-uniform

drop_path_rate = trial.suggest_discrete_uniform('drop_path_rate', 0.0, 1.0)

The string value provides a key for the parameter, which is used to access it later, so it’s important to give each parameter a sensible name.

Limiting parameters?

Adding additional trial.suggest_* calls to your objective function increases the search space Optuna has to optimize over, so you should avoid adding parameters that are not necessary.

The other way in which the search space can be constrained is to limit the range of the search, e.g. for the learning rate

learning_rate = trial.suggest_loguniform('learning_rate', 1e-4, 1e-2)

is preferable over

learning_rate = trial.suggest_loguniform('learning_rate', 1e-10, 1e-1)

provided the optimal learning rate is unlikely to sit outside the narrower range.

How many parameters you include will also depend on the type of model you are trying to train. In the use case of fine-tuning a language model we will want to limit the options more, since language models are generally quite slow to train. If, on the other hand, we were trying to improve an image classification model which only takes minutes to train, then searching through a larger parameter space would become more feasible.

Objective function for fine-tuning a language model

The objective function below has two stages: train a language model, then use the encoder from this language model for a classifier.

The parameters we’re trying to optimize in this case are:

  • learning rate for the frozen language model
  • number of epochs to train only the final layers of the language model
  • learning rate for the unfrozen language model
  • number of epochs for training the whole language model

We use lm_learn.no_bar() as a context manager to reduce the amount of logging.

def objective(trial):
    lm_learn = create_lm()
    lr_frozen = trial.suggest_loguniform("learning_rate_frozen", 1e-4, 1e-1)
    head_epochs = trial.suggest_int("head_epochs", 1, 5)
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(head_epochs, lr_max=lr_frozen)
    # Unfrozen Language model
    lr_unfreeze = trial.suggest_loguniform("learning_rate_unfrozen", 1e-7, 1e-1)
    body_epochs = trial.suggest_int("lm_body_epochs", 1, 5)
    lm_learn.unfreeze()
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(body_epochs, lr_unfreeze)
    lm_learn.save_encoder("finetuned")
    # Classification
    cl_learn = create_class_learn()
    cl_learn.load_encoder("finetuned")
    cl_learn.fit_one_cycle(3)
    f1 = cl_learn.recorder.values[-1][-1]
    return f1

We can give our study a name and store it in a database. This allows a study to be resumed later and gives access to the history of previous trials. The various options for database backends are outlined in the documentation.

Creating the study

study_name = "tunelm1830"
study = optuna.create_study(
    study_name=study_name,
    direction="maximize",
    storage=f"sqlite:///{out_path}/optuma/example.db",
)
[I 2020-06-05 15:09:05,470] A new study created with name: tunelm1830

Optimize

Now we’ll run 3 trials, using show_progress_bar=True to give an ETA for when the trials will finish.

study.optimize(objective, n_trials=3, show_progress_bar=True)
/usr/local/lib/python3.6/dist-packages/optuna/_experimental.py:61: ExperimentalWarning:

Progress bar is experimental (supported from v1.2.0). The interface can change in the future.
(#4) [0,5.382655620574951,4.875850200653076,'00:24']
(#4) [1,5.292355537414551,4.737764835357666,'00:24']
(#4) [2,5.183778285980225,4.647550106048584,'00:24']
(#4) [3,5.11093282699585,4.608272552490234,'00:24']
(#4) [4,5.072442054748535,4.601930618286133,'00:24']
(#4) [0,4.7495622634887695,4.241390228271484,'00:27']
epoch train_loss valid_loss accuracy f1_score time
0 2.326032 2.070412 0.020000 0.017034 00:10
1 2.302230 2.136864 0.023333 0.003590 00:10
2 2.269061 2.180663 0.016667 0.004408 00:10
[I 2020-06-05 15:12:20,128] Finished trial#0 with value: 0.00440805109922757 with parameters: {'learning_rate_frozen': 0.00014124685078723662, 'head_epochs': 5, 'learning_rate_unfrozen': 0.00010276862511970148, 'lm_body_epochs': 1}. Best is trial#0 with value: 0.00440805109922757.
(#4) [0,4.713407516479492,3.7350399494171143,'00:24']
(#4) [1,3.998744249343872,3.3055806159973145,'00:24']
(#4) [2,3.6486754417419434,3.192685842514038,'00:24']
(#4) [3,3.4996860027313232,3.1756556034088135,'00:24']
(#4) [0,3.4227023124694824,3.163315534591675,'00:27']
(#4) [1,3.3954737186431885,3.140226364135742,'00:27']
(#4) [2,3.3778774738311768,3.125929117202759,'00:27']
(#4) [3,3.357388973236084,3.119621753692627,'00:27']
(#4) [4,3.3542206287384033,3.1186859607696533,'00:27']
epoch train_loss valid_loss accuracy f1_score time
0 2.368984 2.121307 0.013333 0.000759 00:11
1 2.335033 2.022853 0.250000 0.368652 00:10
2 2.296630 1.948786 0.313333 0.452365 00:10
[I 2020-06-05 15:16:49,562] Finished trial#1 with value: 0.45236502121696065 with parameters: {'learning_rate_frozen': 0.0060643425219262335, 'head_epochs': 4, 'learning_rate_unfrozen': 2.734844423029637e-05, 'lm_body_epochs': 5}. Best is trial#1 with value: 0.45236502121696065.
(#4) [0,5.3748459815979,4.851675987243652,'00:24']
(#4) [1,5.247058868408203,4.672318935394287,'00:24']
(#4) [2,5.111597061157227,4.559732437133789,'00:24']
(#4) [3,5.026832103729248,4.512131690979004,'00:24']
(#4) [4,4.982809066772461,4.5044732093811035,'00:24']
(#4) [0,4.915407657623291,4.423311233520508,'00:27']
(#4) [1,4.857243061065674,4.394893646240234,'00:27']
epoch train_loss valid_loss accuracy f1_score time
0 2.368439 2.036706 0.240000 0.360355 00:10
1 2.359790 2.093103 0.033333 0.045878 00:09
2 2.331945 2.140194 0.016667 0.013589 00:10
[I 2020-06-05 15:20:20,119] Finished trial#2 with value: 0.013588651008106425 with parameters: {'learning_rate_frozen': 0.0001971120155925954, 'head_epochs': 5, 'learning_rate_unfrozen': 1.0649951798153689e-05, 'lm_body_epochs': 2}. Best is trial#1 with value: 0.45236502121696065.

Results

You can see how trials are performing in the logs, with the last part of each log line reporting the best trial so far. We can now access best_value and best_params.

study.best_value, study.best_params
(0.45236502121696065,
 {'head_epochs': 4,
  'learning_rate_frozen': 0.0060643425219262335,
  'learning_rate_unfrozen': 2.734844423029637e-05,
  'lm_body_epochs': 5})

Multi objective

Optuna has experimental support for multi-objective optimization. This might be useful if you don’t want to optimize for only one metric.

An alternative to this approach is to report other things you care about during a trial without directly optimizing for them. As an example, you might mostly care about the accuracy of a model but also care a bit about how long it takes to do inference.

One approach is to use a multi-objective trial. An alternative is to log inference time as part of the trial and continue to optimize for the other metric. You can then balance the accuracy of different trials against their inference time later on. It may turn out that a slightly slower inference time can be dealt with by scaling vertically. Not prematurely optimizing for multiple objectives therefore gives you more flexibility. To show this in practice I’ll use an image classification dataset.

#hide
!unzip -q '{out_path}/optuma/1905_maps.zip' -d data/

The data

The data is images of maps and other things from historic newspapers. The aim is to classify whether the image is a map or something else.

dls = ImageDataLoaders.from_folder(
    "data/1905_maps/", valid_pct=0.3, item_tfms=Resize(256)
)
dls.show_batch()

#hide
learn = cnn_learner(dls, resnet50, metrics=[F1Score(average='weighted')])
learn.unfreeze()
lr_min,unfrozen_lr_steep = learn.lr_find(suggestions=True)

Working with Optuna trial data

There are now ~500 trials stored in the study. Each trial contains the parameters used, metadata about the trial, the value of the thing being optimized and, importantly for this example, the user attribute which stores the validation time. Optuna makes it very easy to export this information to a dataframe.

df = study.trials_dataframe()
df.head(3)
number value datetime_start datetime_complete duration params_apply_tfms params_do_flip params_epochs params_flip_vert params_learning_rate params_max_lighting params_max_rotate params_max_zoom params_model params_mult params_unfrozen_epochs params_unfrozen_learning_rate user_attrs_execute_time system_attrs_completed_rung_0 system_attrs_completed_rung_1 system_attrs_fixed_params state
0 0 0.872964 2020-06-07 15:03:29.911841 2020-06-07 15:03:57.151460 00:00:27.239619 True False 5.0 True 0.000158 0.515536 93.501858 2.50144 resnet50 0.797373 2.0 1.445440e-05 0.82319 NaN NaN {'pre_trained': True, 'apply_tfms': True, 'epochs': 5, 'learning_rate': 0.00015848931798245758, 'model': 'resnet50', 'unfrozen_learning_rate': 1.4454397387453355e-05} COMPLETE
1 1 0.454525 2020-06-07 15:04:11.520248 2020-06-07 15:04:18.419082 00:00:06.898834 False NaN 1.0 NaN 0.000014 NaN NaN NaN resnet18 NaN 1.0 4.041608e-07 0.67698 NaN NaN {'pre_trained': False, 'apply_tfms': False, 'epochs': 1, 'unfrozen_epochs': 1} COMPLETE
2 2 NaN 2020-06-07 15:04:32.047588 NaT NaT NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN RUNNING
#hide
df.to_csv('optuna_study.csv')
#hide
df = pd.read_csv('https://gist.githubusercontent.com/davanstrien/0c9670d02cdf8a9a866b8a467664b690/raw/cb3222f1baf8ae894923e2b8898beaa22ebeadd8/optuna_trials.csv')

We can now easily work with the trial data using pandas. Let’s start by getting the best two values:

df.sort_values(['value'], ascending=False).head(2)
Unnamed: 0 number value datetime_start datetime_complete duration params_apply_tfms params_do_flip params_epochs params_flip_vert ... params_max_zoom params_model params_mult params_unfrozen_epochs params_unfrozen_learning_rate user_attrs_execute_time system_attrs_completed_rung_0 system_attrs_completed_rung_1 system_attrs_fixed_params state
177 177 177 0.963976 2020-06-07 16:48:36.232551 2020-06-07 16:49:21.393454 0 days 00:00:45.160903000 True True 10.0 False ... 1.614175 densenet121 0.608727 4.0 7.608088e-06 0.880459 0.954955 NaN NaN COMPLETE
302 302 302 0.955064 2020-06-07 18:11:00.667449 2020-06-07 18:11:45.658241 0 days 00:00:44.990792000 True True 10.0 False ... 0.921689 densenet121 0.115708 4.0 6.210737e-10 0.878865 0.945946 NaN NaN COMPLETE

2 rows × 23 columns
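
As an aside, the same “top n” query can be written with pandas nlargest; the toy dataframe below is a hypothetical stand-in for the real study dataframe:

```python
import pandas as pd

# a toy stand-in for the study dataframe (the real one has 23 columns)
df = pd.DataFrame({"number": [0, 1, 2, 3], "value": [0.87, 0.45, 0.96, 0.95]})

# nlargest(n, col) is equivalent to sort_values(col, ascending=False).head(n)
top2 = df.nlargest(2, "value")
print(top2["number"].tolist())  # [2, 3]
```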

We can see how often transforms were applied in the trials

df['params_apply_tfms'].value_counts()
True     360
False    142
Name: params_apply_tfms, dtype: int64

Viewing the number of trials for each model which achieved a value over 0.90:

df['params_model'][df['value'] >= 0.90].value_counts()
densenet121      181
resnet50           9
squeezenet1_0      2
Name: params_model, dtype: int64

Filtering a bit more aggressively (value above 0.94):

df94 = df[df['value'] >= 0.94]
len(df94)
13

How often were transforms applied for these trials?

df94['params_apply_tfms'].value_counts()
True     11
False     2
Name: params_apply_tfms, dtype: int64

The number of unfrozen epochs

df94['params_unfrozen_epochs'].value_counts()
4.0    6
2.0    3
3.0    2
6.0    1
5.0    1
Name: params_unfrozen_epochs, dtype: int64

Getting back to the validation time, we can get the max, min and mean values:

df['user_attrs_execute_time'].max(), df['user_attrs_execute_time'].min(), df['user_attrs_execute_time'].mean()
(0.9760787487030028, 0.6313643455505371, 0.8461264789613903)

If we did care about reducing the execution time we could use these values to find the trial with the shortest execution time:

df94['user_attrs_execute_time'].sort_values()
96     0.837618
426    0.848863
394    0.849243
395    0.851704
438    0.852672
500    0.863168
344    0.875520
432    0.877422
302    0.878865
177    0.880459
473    0.884703
372    0.906770
294    0.907221
Name: user_attrs_execute_time, dtype: float64

If we were happy with slightly lower performance, we could pick the trial with the shortest execution time which still achieves an F1 above 0.94:

df94.loc[96]
number                                                   96
value                                              0.945755
datetime_start                   2020-06-07 15:57:20.382634
datetime_complete                2020-06-07 15:57:54.848296
duration                             0 days 00:00:34.465662
params_apply_tfms                                     False
params_do_flip                                          NaN
params_epochs                                             9
params_flip_vert                                        NaN
params_learning_rate                            8.47479e-05
params_max_lighting                                     NaN
params_max_rotate                                       NaN
params_max_zoom                                         NaN
params_model                                    densenet121
params_mult                                             NaN
params_unfrozen_epochs                                    2
params_unfrozen_learning_rate                   4.31178e-07
user_attrs_execute_time                            0.837618
system_attrs_completed_rung_0                           NaN
system_attrs_completed_rung_1                           NaN
system_attrs_fixed_params                               NaN
state                                              COMPLETE
Name: 96, dtype: object

This is a slightly artificial example but hopefully shows the value of logging user attributes which can easily be accessed later, without prematurely optimizing for something which may not turn out to be important.

Further reading

Hopefully this post has been a helpful overview of Optuna with a somewhat realistic use case. I would recommend reading the Optuna docs which covers things in much more detail.

References

{{ ‘Auto ML, fast.ai blog: https://www.fast.ai/2018/07/16/auto-ml2/#auto-ml’ | fndetail: 1 }}

{{ ‘Introducing ULMFiT: nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html’ | fndetail: 2 }}

{{ ‘Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD.’ | fndetail: 3 }}