Transfer learning & fine-tuning

Transfer learning and fine-tuning can substantially improve model performance and reduce training time, especially when labeled data is scarce. This guide explains how both techniques work and how to apply them with Keras.

Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem. It is usually done for tasks where your dataset has too little data to train a full-scale model from scratch.

The typical transfer-learning workflow

1. Instantiate a base model and load pre-trained weights into it.
2. Freeze all layers in the base model by setting trainable = False.
3. Create a new model on top of the output of one (or several) layers from the base model.
4. Train your new model on your new dataset.
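
A minimal sketch of this workflow, using Xception as an example base model and a single-logit binary-classification head; the dataset name in the final comment is a placeholder:

```python
import keras

# 1. Instantiate a base model with pre-trained (ImageNet) weights,
#    leaving out its top classification layer.
base_model = keras.applications.Xception(
    weights="imagenet",
    input_shape=(150, 150, 3),
    include_top=False,
)

# 2. Freeze the base model.
base_model.trainable = False

# 3. Create a new model on top of the frozen base.
inputs = keras.Input(shape=(150, 150, 3))
x = base_model(inputs, training=False)  # run the base in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)  # single logit for binary classification
model = keras.Model(inputs, outputs)

# 4. Train the new top layer on the new dataset.
model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)
# model.fit(new_dataset, epochs=20, validation_data=...)  # placeholder dataset
```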

Transfer learning & fine-tuning with a custom training loop

First create a base model and freeze it, then create a new model on top. Instead of calling fit(), you write your own training loop; the workflow stays essentially the same, but you must take care to apply gradient updates only to model.trainable_weights.
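
A sketch of the same pattern with a custom loop; `new_dataset` is a hypothetical tf.data.Dataset of (image, label) batches:

```python
import tensorflow as tf
import keras

# Base model with pre-trained weights, frozen.
base_model = keras.applications.Xception(
    weights="imagenet", input_shape=(150, 150, 3), include_top=False
)
base_model.trainable = False

# New model on top of the frozen base.
inputs = keras.Input(shape=(150, 150, 3))
x = base_model(inputs, training=False)  # keep the base in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam()

# Only model.trainable_weights (the new top layer) receive gradient
# updates; the frozen base model's weights are left untouched.
for inputs_batch, targets_batch in new_dataset:  # placeholder dataset
    with tf.GradientTape() as tape:
        predictions = model(inputs_batch)
        loss_value = loss_fn(targets_batch, predictions)
    gradients = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))
```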

Using random data augmentation

When you don’t have a large image dataset, it’s a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This exposes the model to different aspects of the training data while slowing down overfitting.
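
A minimal augmentation pipeline along these lines, using random flips and small rotations:

```python
import keras
from keras import layers

# Random, realistic transformations applied during training only;
# these layers are inactive at inference time.
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ]
)
```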

Freezing layers: understanding the trainable attribute

Layers & models have three weight attributes: weights, trainable weights, non_trainable weights

Build a model

We use a Rescaling layer to scale input values (initially in the [0, 255] range) to the [-1, 1] range, which is what the Xception base model expects.
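
A sketch of how the full model can be assembled for the cats vs. dogs example; it assumes the `data_augmentation` pipeline defined in the augmentation section above, and the Dropout rate is an illustrative choice:

```python
import keras
from keras import layers

base_model = keras.applications.Xception(
    weights="imagenet", input_shape=(150, 150, 3), include_top=False
)
base_model.trainable = False  # freeze the pre-trained weights

inputs = keras.Input(shape=(150, 150, 3))
x = data_augmentation(inputs)  # augmentation pipeline defined above

# Rescale pixel values from [0, 255] to the [-1, 1] range Xception expects.
scale_layer = layers.Rescaling(scale=1 / 127.5, offset=-1)
x = scale_layer(x)

# Run the base model in inference mode so that BatchNorm statistics stay
# frozen, even after we unfreeze the base for fine-tuning later.
x = base_model(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)  # regularization for the small dataset
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)
```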

Recursive setting of the trainable attribute

If you set trainable = False on a model or on any layer that has sublayers, all child layers become non-trainable as well.
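
For example:

```python
import keras
from keras import layers

inner_model = keras.Sequential(
    [
        keras.Input(shape=(3,)),
        layers.Dense(3, activation="relu"),
        layers.Dense(3, activation="relu"),
    ]
)

model = keras.Sequential(
    [
        keras.Input(shape=(3,)),
        inner_model,
        layers.Dense(3, activation="sigmoid"),
    ]
)

model.trainable = False  # freeze the outer model...
assert inner_model.trainable == False  # ...its sub-models are frozen too
assert inner_model.layers[0].trainable == False  # `trainable` propagates recursively
```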

Standardizing the data

Raw images come in a variety of sizes, and each pixel consists of 3 integer values between 0 and 255 (RGB level values). This isn't a great fit for feeding a neural network, so we standardize the images to a fixed size and batch the data, as sketched below.
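
A sketch of this standardization step, assuming `train_ds` and `validation_ds` are tf.data.Dataset objects of (image, label) pairs (see "Getting the data" below); the 150x150 size and batch size of 64 are illustrative:

```python
import tensorflow as tf

size = (150, 150)

# Standardize to a fixed image size.
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
validation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))

# Batch and prefetch for throughput; the model's Rescaling layer takes
# care of normalizing the pixel values.
batch_size = 64
train_ds = train_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)
validation_ds = validation_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)
```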

Train the top layer

With the base model frozen, we train only the new top-layer weights for 20 epochs:

```python
epochs = 20
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
```

Do a round of fine-tuning of the entire model

Although the base model becomes trainable, it still runs in inference mode, because we passed training=False when calling it while building the model. This means the batch normalization layers inside it won't update their batch statistics, which would otherwise destroy the representations the model has learned so far.
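
A sketch of this fine-tuning round, reusing the `model`, `train_ds`, and `validation_ds` from the previous steps; the epoch count is illustrative:

```python
import keras

# Unfreeze the base model. Since we call compile() afterwards, the change
# to `trainable` takes effect; since the base was called with
# training=False, its BatchNorm layers stay in inference mode.
base_model.trainable = True

model.compile(
    optimizer=keras.optimizers.Adam(1e-5),  # very low learning rate
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

# Train end-to-end; the low learning rate keeps updates small so the
# pre-trained representations are refined rather than destroyed.
model.fit(train_ds, epochs=10, validation_data=validation_ds)
```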

Fine-tuning

Once your model has converged on the new data, you can try to unfreeze all or part of the base model and retrain the whole model end-to-end with a very low learning rate.
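
One way to unfreeze only part of the base model; the layer cutoff here is an illustrative choice, not a prescription:

```python
# Unfreeze everything, then re-freeze all but the last few layers of the base.
base_model.trainable = True
for layer in base_model.layers[:-20]:  # illustrative cutoff
    layer.trainable = False

# Recompile after any change to `trainable`, and use a very low learning
# rate so fine-tuning does not destroy the pre-trained features.
model.compile(
    optimizer=keras.optimizers.Adam(1e-5),
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)
```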

An end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset

Load the Xception model, pre-trained on ImageNet, and use it on the Kaggle “cats vs dogs” classification dataset.

Getting the data

Transfer learning is most useful when working with very small datasets. To keep the dataset small, we use only a fraction of the original training data, as in the sketch below.
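
A sketch of loading the data with TensorFlow Datasets, keeping only a slice of the training split to stay in the small-data regime:

```python
import tensorflow_datasets as tfds

# 40% of the data for training, 10% for validation, 10% for testing.
train_ds, validation_ds, test_ds = tfds.load(
    "cats_vs_dogs",
    split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"],
    as_supervised=True,  # yield (image, label) pairs
)
```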
