Hyperparameter Optimization for Transfer Learning
Optimising hyperparameters using Optuna and fastai2
- tl;dr
- Optimizing hyperparameters?
- Optuna: A hyperparameter optimization framework
- A simple optimization example
- Parameter search space
- Fine-tuning a language model
- Optuna trial suggest
- Objective function for fine-tuning a language model
- Results
- Where to start the search?
- Multi objective
tl;dr
This post covers:
- the motivations for 'pragmatic hyperparameter optimization'
- how to do this using Optuna (with an example applied to the fastai2 library)
Optimizing hyperparameters?
Deep learning models have a range of hyperparameters. These include the basic building blocks of a model, like the number of layers or the size of embedding layers, and parameters for training, such as the learning rate. Changing some of these parameters will improve the performance of a model, so there is a potential win from finding the right values for them.
Auto ML vs pragmatic hyperparameter optimization
As a way of framing 'pragmatic search', it is useful to contrast it to Auto ML. If you haven't come across it before:
The term AutoML has traditionally been used to describe automated methods for model selection and/or hyperparameter optimization. - 1.
In particular, what is termed Auto ML often includes a search across models and hyperparameters, but can also refer to 'Neural Architecture Search', in which the objective is to piece together a new model type for a specific problem or dataset. An underlying assumption of some of this Auto ML approach is that each problem or dataset requires a unique model architecture.
In contrast, a more 'pragmatic' approach uses existing model architectures which have been shown to work across a range of datasets and tasks, and utilises transfer learning and other 'tricks' like cyclical learning rates and data augmentation. In a heritage context, there are likely to be bigger issues with imbalanced classes, noisy labels, etc., and focusing on designing a custom architecture is probably only going to lead to modest improvements in the performance of the model.
So what remains to be optimized?
In contrast to Auto ML, which can involve looking at a huge range of potential architectures and parameters, we can instead limit our focus to a smaller set of things which may have a large impact on the performance of the model.
As an example use case for hyperparameter optimization I'll use two datasets which contain transcripts of trials from the Old Bailey online and which are classified into various categories (theft, deception, etc.). One of the datasets is drawn from the decade starting 1830, the other from 1730. The approach taken to classifying these trials will be to follow the "Universal Language Model Fine-tuning for Text Classification" approach. 2.
I won't give an in-depth summary of the approach here, but the idea is that:
- A language model - in this case an LSTM-based model - is trained on Wikipedia text. This provides a "general" language model that learns to "understand" general features of a language, in this case English
- this language model is then fine-tuned on a target dataset, in the original paper IMDB movie reviews
- once the language model has been fine-tuned on the target dataset, this fine-tuned language model is used as input for a classifier
The intuition here is that by combining a pre-trained language model (the Wikipedia part) with fine-tuning, we get the benefits of a massive training set (Wikipedia) whilst also being able to 'focus' the language model on a target corpus which will use language differently. This makes a lot of intuitive sense, but a question in this use case is how much to fine-tune the language model on our target datasets. A reasonable assumption might be that since the language of 1730 will differ more from today's than that of 1830, we may want to fine-tune the Wikipedia-trained language model more on the 1730 dataset.
We could of course test this through some trial-and-error experiments, but it is a question which may benefit from some more systematic searching for appropriate hyperparameters. Before we get into this example in more depth, I'll discuss the library I'm using for this hyperparameter searching.
Optuna: A hyperparameter optimization framework
In this post I will be using Optuna "an automatic hyperparameter optimization software framework, particularly designed for machine learning". 3.
There are some really nice features in Optuna which I'll cover in this post as I explore the question of language model fine-tuning, so hopefully even if you don't care about the specific use case it might still provide a useful overview of Optuna.
In this blog post my examples will use version two of the fastai library but there really isn't anything that won't translate to other frameworks. Optuna has integrations for a number of libraries (including version 1 of fastai) but for this blog I won't use this integration.
A simple optimization example
To show the approach used in Optuna I'll use a simple image classification example: a toy task of classifying people vs cats in images taken from 19th-century books.
Optuna has two main concepts to understand: study and trial. A study is the overarching process of optimization based on some objective function. A trial is a single test/execution of the objective function. We'll return to this in more detail; for now let's look at a simple example.
For our first example we'll just use Optuna to test whether to use a pre-trained model or not. If the option is `True` then the ResNet18 model will use weights from pre-training on ImageNet; if `False` the model will start with random weights.
Looking at the high-level steps of using Optuna (I'll go into more detail later), we first create an objective function:
!wget -q https://zenodo.org/record/3689444/files/humancats.zip?download=1
!unzip -q *humancats.zip* -d data/
def objective(trial):
    is_pretrained = trial.suggest_categorical('pre_trained', [True, False])
    dls = ImageDataLoaders.from_folder('data/human_vs_cats/', valid_pct=0.4, item_tfms=Resize(64))
    learn = cnn_learner(dls, resnet18, pretrained=is_pretrained, metrics=[accuracy])
    learn.fit(1)
    acc = learn.recorder.values[-1][-1]
    return acc
Most of this will look familiar if you have used fastai before. Once we have this we create a study:
study = optuna.create_study(direction='maximize')
and then optimize this study:
study.optimize(objective, n_trials=2)
Once we've run some trials we can inspect the study object for the best value of the thing we're optimizing for. In this case this is the accuracy, but it will be whatever is returned by our function. We can also see the parameters which led to this value.
study.best_value, study.best_params
This toy example wasn't particularly useful (it just confirmed we probably want to use a pre-trained model) but going through the steps provides an overview of the main things required by Optuna. Starting with the definition of the objective function:
def objective(trial):
this is the function we want to optimize. We could call it something else, but following the convention in the Optuna docs we'll call it `objective`. This function takes `trial` as an argument.
is_pretrained = trial.suggest_categorical('pre_trained', [True, False])
here we use `trial` to "suggest" a categorical value, in this case one of two options (whether pre-trained is set to true or false). We do this using `trial.suggest_categorical` and pass it the potential options (in this case `True` or `False`). The `trial.suggest_*` methods define the parameter "search space" for Optuna. We'll look at all of the options later on. The final step is defining the end of our objective function, i.e. the thing we want to optimize:
return acc
This return value is the objective value that Optuna will optimize. Because this is just the return value of a function, there is a lot of flexibility in what it can be. In this example it is accuracy, but it could be training or validation loss, or another training metric. Later on we'll look at this in more detail.
Now let's look at the study part:
study = optuna.create_study(direction='maximize')
This is the simplest way of creating a study. It creates a `study` object; again, we'll look at more options as we go along. The one option we pass here is `direction`. This refers to whether Optuna should try to increase the return value of our objective function or decrease it. This depends on what you are tracking, i.e. you'd want to minimize error or validation loss but maximize accuracy or F1 score.
Looking at the overview provided in the Optuna docs we have three main building blocks:
- Trial: A single call of the objective function
- Study: An optimization session, which is a set of trials
- Parameter: A variable whose value is to be optimized
Parameter search space
Borrowing once more from the docs:
This is a crucial point, particularly if we want to use optimization in a pragmatic way. When we have existing knowledge or evidence about what works well for a particular problem, we should use that rather than asking Optuna to find it out for us. There are some extra tricks to make our search for the best parameters more efficient, which will be explored below, but for now let's get back to the example use case.
df_1830 = pd.read_csv('https://gist.githubusercontent.com/davanstrien/4bc85d8a4127a2791732280ffaa43293/raw/cd1a3cc53674b64c8f130edbcb34e835afa665fb/1830trial.csv')
df_1730 = pd.read_csv('https://gist.githubusercontent.com/davanstrien/4bc85d8a4127a2791732280ffaa43293/raw/cd1a3cc53674b64c8f130edbcb34e835afa665fb/1730trial.csv')
For the sake of brevity I won't cover the steps to generate this dataset; the instructions for the 1830s trials can be found here (and can be easily adapted for the 1730s trials).
We load the data using fastai2 `TextDataLoaders`:
def load_lm_data(df):
    data_lm = TextDataLoaders.from_df(
        df.sample(frac=0.5), text_col="text", is_lm=True, bs=128
    )
    return data_lm

# Classification data
def load_class_data(df, data_lm):
    data_class = TextDataLoaders.from_df(
        df.sample(frac=0.5),
        text_col="text",
        label_col="broad",
        valid_pct=0.3,
        bs=128,
        text_vocab=data_lm.vocab,
    )
    return data_class
data_lm = load_lm_data(df_1830)
data_class = load_class_data(df_1830, data_lm)
Create the language model learner and classifier learner:
def create_lm():
    return language_model_learner(data_lm, AWD_LSTM, pretrained=True).to_fp16()

def create_class_learn():
    return text_classifier_learner(
        data_class, AWD_LSTM, metrics=[accuracy, F1Score(average="weighted")]
    ).to_fp16()
Optuna trial suggest
In the example above `trial.suggest_categorical` was used to define the potential parameter. Optuna has five kinds of parameters which can be optimized. These all work through the `trial.suggest_*` methods.
Categorical
This can be used for models, optimizers, and for True/False flags.
optimizer = trial.suggest_categorical('optimizer', ['MomentumSGD', 'Adam'])
Integer
n_epochs = trial.suggest_int('num_epochs', 1, 3)
Uniform
max_zoom = trial.suggest_uniform('max_zoom', 0.0, 1.0)
Loguniform
learning_rate = trial.suggest_loguniform('learning_rate', 1e-4, 1e-2)
Discrete-uniform
drop_path_rate = trial.suggest_discrete_uniform('drop_path_rate', 0.0, 1.0)
The string value provides a key for the parameter, which is used to access it later; it's therefore important to give parameters sensible names.
Limiting parameters?
Adding additional `trial.suggest_*` calls to your objective function increases the search space Optuna has to explore, so you should avoid adding parameters that are not necessary.
The other way in which the search space can be constrained is to limit the range of the search, e.g. for the learning rate:
learning_rate = trial.suggest_loguniform('learning_rate', 1e-4, 1e-2)
is preferable over
learning_rate = trial.suggest_loguniform('learning_rate', 1e-10, 1e-1)
if it's not likely the optimal learning rate will sit outside of this range.
How many parameters you include will also depend on the type of model you are trying to train. In the use case of fine-tuning a language model we will want to limit the options more since language models are generally quite slow to train. If, on the other hand, we were trying to improve an image classification model which only takes minutes to train then searching through a larger parameter space would become more feasible.
Objective function for fine-tuning a language model
The objective function below has two stages: train a language model, then use the encoder from this language model for a classifier.
The parameters we're trying to optimize in this case are:
- learning rate for the frozen language model
- number of epochs to train only the final layers of the language model
- learning rate for the unfrozen language model
- number of epochs for training the whole language model
We use lm_learn.no_bar() as a context manager to reduce the amount of logging.
def objective(trial):
    lm_learn = create_lm()
    # Frozen language model
    lr_frozen = trial.suggest_loguniform("learning_rate_frozen", 1e-4, 1e-1)
    head_epochs = trial.suggest_int("head_epochs", 1, 5)
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(head_epochs, lr_max=lr_frozen)
    # Unfrozen language model
    lr_unfreeze = trial.suggest_loguniform("learning_rate_unfrozen", 1e-7, 1e-1)
    body_epochs = trial.suggest_int("lm_body_epochs", 1, 5)
    lm_learn.unfreeze()
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(body_epochs, lr_unfreeze)
    lm_learn.save_encoder("finetuned")
    # Classification
    cl_learn = create_class_learn()
    cl_learn.load_encoder("finetuned")
    cl_learn.fit_one_cycle(3)
    f1 = cl_learn.recorder.values[-1][-1]
    return f1
We can give our study a name and also store it in a database. This allows for resuming previous trials later and accessing the history of previous trials. There are various options for database backends outlined in the documentation.
study_name = "tunelm1830"
study = optuna.create_study(
    study_name=study_name,
    direction="maximize",
    storage=f"sqlite:///{out_path}/optuma/example.db",
)
Now we'll run 3 trials, using `show_progress_bar=True` to give an ETA on when the trials will finish.
study.optimize(objective, n_trials=3, show_progress_bar=True)
study.best_value, study.best_params
Where to start the search?
As I mentioned at the start, I think it's worth trying to think pragmatically about how to use hyperparameter optimization. I've already mentioned limiting the number of parameters and the potential options for those parameters. However, we can also intervene more directly in how Optuna runs a trial.
Suggesting a learning rate
One of the yummiest features in fastai, which has also made it into other deep-learning libraries, is the learning rate finder `lr_find()`. As a reminder:
the LR Finder trains the model with exponentially growing learning rates from start_lr to end_lr for num_it and stops in case of divergence (unless stop_div=False) then plots the losses vs the learning rates with a log scale.
Since the learning rate finder often gives a good learning rate, we should see if we can use this as a starting point for our trials.
lm_learn = create_lm()
lm_learn.unfreeze()
lr_min,lr_steep = lm_learn.lr_find(suggestions=True)
lr_min, lr_steep
study.enqueue_trial({'learning_rate_unfrozen': lr_steep})
study.optimize(objective, n_trials=1)
Using the learning rate from the LR finder gives us our best trial so far. This is likely because the learning rate is a particularly important hyperparameter. The learning rate suggested by `lr_find` may not always be the best, but using either the suggested one or one picked from the plot may give Optuna a sensible place to start, while still leaving it free to diverge from this in later trials if that helps the objective function.
Pruning trials
The next feature of Optuna which helps make parameter searching more efficient is pruning. Pruning is a process for stopping bad trials early.
For example if we have the following three trials:
- Trial 1 - epoch 1: 87% accuracy
- Trial 2 - epoch 1: 85% accuracy
- Trial 3 - epoch 1: 60% accuracy
it's probably not worth continuing with trial 3. Pruning trials helps focus computational resources on trials which are likely to improve on previous trials. The 'likely' here is important: it is possible that some trials which are pruned early would actually have done better in the end. Optuna offers a number of different pruning algorithms. I won't cover these here, but the documentation gives a good overview and includes links to the papers which propose the implemented pruning algorithms.
How to do pruning in Optuna?
Optuna has integrations with various machine learning libraries. These integrations can help with pruning, but setting up pruning manually is also pretty straightforward.
The two things we need to do are to report the value and the step in the training process:
trial.report(metric, step)
then we call:
if trial.should_prune():
raise optuna.exceptions.TrialPruned()
Depending on your objective function this will be put in different places. In the example of fine-tuning the language model, because we're trying to optimize the classification part, the pruning step can only be called quite late in the training loop. Ideally it would be called earlier, but we still save a little bit of time on unpromising trials.
The new objective function with pruning:
def objective(trial):
    lm_learn = create_lm()
    # Frozen language model
    lr_frozen = trial.suggest_loguniform("learning_rate_frozen", 1e-4, 1e-1)
    head_epochs = trial.suggest_int("head_epochs", 1, 5)
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(head_epochs, lr_max=lr_frozen)
    # Unfrozen language model
    lr_unfreeze = trial.suggest_loguniform("learning_rate_unfrozen", 1e-7, 1e-1)
    body_epochs = trial.suggest_int("lm_body_epochs", 1, 5)
    lm_learn.unfreeze()
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(body_epochs, lr_unfreeze)
    lm_learn.save_encoder("finetuned")
    # Classification
    cl_learn = create_class_learn()
    cl_learn.load_encoder("finetuned")
    for step in range(3):
        cl_learn.fit(1)
        # Pruning
        intermediate_f1 = cl_learn.recorder.values[-1][-1]  # f1 score for current step
        trial.report(intermediate_f1, step)  # report f1
        if trial.should_prune():  # let optuna decide whether to prune
            raise optuna.exceptions.TrialPruned()
    f1 = cl_learn.recorder.values[-1][-1]
    return f1
We can load the same study as before using the `load_if_exists` flag.
study_name = "tunelm1830"
study = optuna.create_study(
    study_name=study_name,
    direction="maximize",
    storage=f"sqlite:///{out_path}/optuma/example.db",
    load_if_exists=True,
    pruner=optuna.pruners.SuccessiveHalvingPruner(),
)
We can now run some more trials. Instead of specifying the number of trials, we can specify how long Optuna should search for.
study.enqueue_trial({'learning_rate_unfrozen': lr_steep})
study.optimize(objective, timeout=60*60*0.5)
and get the best trial:
study.best_trial
and best value and params:
study.best_value, study.best_params
Now we repeat the process for the 1730 dataset:
data_lm = load_lm_data(df_1730)
data_class = load_class_data(df_1730, data_lm)
lm_learn = create_lm()
lm_learn.unfreeze()
lr_min,lr_steep = lm_learn.lr_find(suggestions=True)
def objective(trial):
    lm_learn = create_lm()
    lr_frozen = trial.suggest_loguniform("learning_rate_frozen", 1e-4, 1e-1)
    head_epochs = trial.suggest_int("head_epochs", 1, 5)
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(head_epochs, lr_max=lr_frozen)
    # Unfrozen language model
    lr_unfreeze = trial.suggest_loguniform("learning_rate_unfrozen", 1e-7, 1e-1)
    body_epochs = trial.suggest_int("lm_body_epochs", 1, 5)
    lm_learn.unfreeze()
    with lm_learn.no_bar():
        lm_learn.fit_one_cycle(body_epochs, lr_unfreeze)
    lm_learn.save_encoder("finetuned")
    # Classification
    cl_learn = create_class_learn()
    cl_learn.load_encoder("finetuned")
    for step in range(3):
        cl_learn.fit(1)
        intermediate_f1 = cl_learn.recorder.values[-1][-1]
        trial.report(intermediate_f1, step)
        if trial.should_prune():
            raise optuna.exceptions.TrialPruned()
    f1 = cl_learn.recorder.values[-1][-1]
    return f1
study_name = "tunelm1730"
study = optuna.create_study(
    study_name=study_name,
    direction="maximize",
    storage=f"sqlite:///{out_path}/optuma/example.db",
    load_if_exists=True,
    pruner=optuna.pruners.SuccessiveHalvingPruner(),
)
study.enqueue_trial({'learning_rate_unfrozen': lr_steep})
study.optimize(objective, timeout=60*60*0.5)
Trials can be accessed as part of the study object. Running trials for 30 minutes with early pruning results in 20 trials:
len(study.trials)
We can also see which was the best trial.
study.best_trial.number
The number of trials to run depends mainly on how long your model takes to train, the size of the parameter search space, and your patience. If trials fail to improve on previous scores for a long time, it's probably better to actively think about how to improve your approach to the problem (better data, more data, changing the model design, etc.) rather than hoping hyperparameter tuning will fix it.
study1830 = optuna.load_study('tunelm1830', storage=f'sqlite:///{out_path}/optuma/example.db')
study1730 = optuna.load_study('tunelm1730', storage=f'sqlite:///{out_path}/optuma/example.db')
First comparing the best f1 values for both datasets:
print(f'Best 1830 value was: {study1830.best_value:.3}')
print(f'Best 1730 value was: {study1730.best_value:.3}')
Specific parameters can also be accessed
study1830.best_params['learning_rate_unfrozen'], study1730.best_params['learning_rate_unfrozen']
`plot_intermediate_values` shows the intermediate values. This can be useful for getting a sense of how trials progress, and can also help indicate whether some trials are being pruned prematurely.
optuna.visualization.plot_intermediate_values(study1830)
`plot_parallel_coordinate` plots parameter choices in relation to values. These plots can be hard to read, but they can be helpful for giving a sense of which parameter choices work best.
optuna.visualization.plot_parallel_coordinate(study1830)
We can also compare parameter importances across the two studies:
optuna.importance.get_param_importances(study1730)
optuna.importance.get_param_importances(study1830)
These are broadly similar although learning rate frozen/unfrozen are in different places for the 1730 and 1830 trials.
Multi objective
Optuna has experimental support for multi-objective optimization. This might be useful if you don't want to optimize for only one metric.
An alternative to using this approach is to report other things you care about during the trial but don't directly want to optimize for. As an example, you might mostly care about the accuracy of a model but also care a bit about how long it takes to do inference.
One approach is to use a multi-objective trial. An alternative is to log inference time as part of the trial and continue to optimize for other metrics. You can then later balance the accuracy of different trials against their inference time. It may turn out that a slightly slower inference time can be dealt with by scaling vertically. Not prematurely optimizing for multiple objectives can therefore give you more flexibility. To show this in practice I'll use an image classification dataset.
dls = ImageDataLoaders.from_folder(
    "data/1905_maps/", valid_pct=0.3, item_tfms=Resize(256)
)
dls.show_batch()
learn.unfreeze()
lr_min,unfrozen_lr_steep = learn.lr_find(suggestions=True)
def objective(trial):
    apply_tfms = trial.suggest_categorical("apply_tfms", [True, False])
    if apply_tfms:
        aug_tfms = aug_transforms(
            mult=trial.suggest_uniform("mult", 0.0, 1.0),
            do_flip=trial.suggest_categorical("do_flip", [True, False]),
            flip_vert=trial.suggest_categorical("flip_vert", [True, False]),
            max_rotate=trial.suggest_uniform("max_rotate", 0, 180),
            max_zoom=trial.suggest_uniform("max_zoom", 0, 3.0),
            max_lighting=trial.suggest_uniform("max_lighting", 0.0, 1.0),
        )
    else:
        aug_tfms = None
    dls = ImageDataLoaders.from_folder(
        "data/1905_maps/", valid_pct=0.3, item_tfms=Resize(256), batch_tfms=aug_tfms
    )
    model = trial.suggest_categorical(
        "model", ["resnet18", "resnet50", "xresnet50", "squeezenet1_0", "densenet121"]
    )
    learn = cnn_learner(
        dls, arch=eval(model), pretrained=True, metrics=[F1Score(average="weighted")]
    ).to_fp16()
    # Frozen training
    epochs = trial.suggest_int("epochs", 1, 10)
    for step in range(epochs):
        with learn.no_bar():
            learn.fit_one_cycle(
                1, base_lr=trial.suggest_loguniform("learning_rate", 1e-5, 1e-1)
            )
    # Unfrozen training, reporting intermediate values for pruning
    unfrozen_epochs = trial.suggest_int("unfrozen_epochs", 1, 10)
    unfrozen_lr = trial.suggest_loguniform("unfrozen_learning_rate", 1e-10, 1e-1)
    learn.unfreeze()
    for step in range(unfrozen_epochs):
        with learn.no_bar():
            learn.fit_one_cycle(1, lr_max=unfrozen_lr)
        int_f1 = learn.recorder.values[-1][-1]
        trial.report(int_f1, step)
        if trial.should_prune():
            raise optuna.exceptions.TrialPruned()
    # Time the validation pass and log it as a user attribute
    t0 = time.time()
    learn.validate()
    t1 = time.time()
    execute_time = t1 - t0
    trial.set_user_attr("execute_time", execute_time)
    f1 = learn.recorder.values[-1][-1]
    return f1
Create the study:
study_name = "mapsmegastudyXL" # Unique identifier of the study.
study = optuna.create_study(
    direction="maximize",
    load_if_exists=True,
    study_name=study_name,
    storage=f"sqlite:///{out_path}/optuma/blog.db",
    pruner=optuna.pruners.SuccessiveHalvingPruner(min_resource=2),
)
Queue up a trial with some sensible parameters:
study.enqueue_trial(
    {
        "pre_trained": True,
        "apply_tfms": True,
        "epochs": 5,
        "learning_rate": lr_steep,
        "model": "resnet50",
        "unfrozen_learning_rate": unfrozen_lr_steep,
    }
)
study.optimize(objective, n_trials=1, show_progress_bar=True)
Queue up a trial with some less sensible defaults:
study.enqueue_trial(
    {"pre_trained": False, "apply_tfms": False, "epochs": 1, "unfrozen_epochs": 1}
)
study.optimize(objective, n_trials=1, show_progress_bar=True)
Now optimize for 500 trials:
study.optimize(objective, n_trials=500, show_progress_bar=True)
study = optuna.load_study('mapsmegastudyXL', storage=f'sqlite:///{out_path}/optuma/blog.db')
The best finishing values and parameters:
study.best_value, study.best_params
optuna.visualization.plot_parallel_coordinate(study)
optuna.importance.get_param_importances(study)
Learning rate is by far the most important hyperparameter; again, this suggests that using the learning rate finder as a starting point makes a lot of sense.
Working with Optuna trial data
There are now ~500 trials which are stored in the study. Each of these trials contains the parameters used, metadata about the trial, the value of the thing being optimized, and importantly for this example the user attribute which stores the validation time. Optuna makes it very easy to export this information to a dataframe.
df = study.trials_dataframe()
df.head(3)
We can now easily work with the trial data using pandas. Let's start by getting the best two values:
df.sort_values(['value'], ascending=False).head(2)
We can see how often transforms were applied in the trials
df['params_apply_tfms'].value_counts()
Viewing the number of trials for each model which had a value over 0.90:
df['params_model'][df['value'] >= 0.90].value_counts()
Filtering a bit more aggressively (value above 0.94):
df94 = df[df['value'] >= 0.94]
len(df94)
How often were transforms applied for these trials
df94['params_apply_tfms'].value_counts()
The number of unfrozen epochs
df94['params_unfrozen_epochs'].value_counts()
Getting back to the validation time we can get the max, min and mean values
df['user_attrs_execute_time'].max(), df['user_attrs_execute_time'].min(), df['user_attrs_execute_time'].mean()
If we did care about reducing the execution time we could use these values to find the trial with the shortest execution time:
df94['user_attrs_execute_time'].sort_values()
If we were happy with slightly lower performance, we could pick the trial with the shortest execution time which still achieves an F1 above 0.94:
df94.loc[96]
This is a slightly artificial example but hopefully shows the value of logging user attributes, which can then be accessed easily later without prematurely optimizing for something which may not be important.
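The filter-then-sort pattern used above can be shown on a toy stand-in for `study.trials_dataframe()` output (the values and times below are made up for illustration):

```python
import pandas as pd

# A toy stand-in for study.trials_dataframe(): each row is one trial,
# with the optimized value (f1) and a logged execution time.
df = pd.DataFrame({
    "value": [0.93, 0.95, 0.96, 0.94],
    "user_attrs_execute_time": [1.2, 3.4, 5.1, 0.9],
})

# Keep trials above a quality bar, then take the fastest of those.
good = df[df["value"] >= 0.94]
fastest = good.sort_values("user_attrs_execute_time").iloc[0]
print(fastest["value"], fastest["user_attrs_execute_time"])  # 0.94 0.9
```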
Further reading
Hopefully this post has been a helpful overview of Optuna with a somewhat realistic use case. I'd recommend reading the Optuna docs, which cover things in much more detail.
References
2. introducting ulmfit nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html↩
3. Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta,and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD.↩