Warning: This project is deprecated. TensorFlow Addons has stopped development; the project will only be providing minimal maintenance releases until May 2024. See the full announcement on GitHub.

TensorFlow Addons Losses: TripletSemiHardLoss


Overview

This notebook will demonstrate how to use the TripletSemiHardLoss function in TensorFlow Addons.

TripletLoss

As first introduced in the FaceNet paper, TripletLoss is a loss function that trains a neural network to closely embed features of the same class while maximizing the distance between embeddings of different classes. To do this, an anchor is chosen along with one positive and one negative sample. [Figure: a triplet of anchor, positive, and negative samples]

The loss function is described as a Euclidean distance function:

$$\mathcal{L}(A, P, N) = \max\left(\lVert f(A) - f(P) \rVert^2 - \lVert f(A) - f(N) \rVert^2 + \alpha,\ 0\right)$$

Here A is our anchor input, P is the positive sample input, N is the negative sample input, f is the embedding computed by the network, and α (alpha) is a margin you use to specify when a triplet has become too "easy", so that you no longer want to adjust the weights from it.
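As a concrete illustration, the loss for a single triplet can be computed by hand. The following is a minimal sketch rather than the tfa implementation; the 2-D embedding values are made up for the example, and the margin of 1.0 matches the default of tfa.losses.TripletSemiHardLoss:

import tensorflow as tf

# Toy 2-D embeddings for one anchor/positive/negative triplet (made-up values)
anchor   = tf.constant([0.0, 1.0])
positive = tf.constant([0.1, 0.9])
negative = tf.constant([0.5, 0.5])
margin = 1.0  # alpha

d_ap = tf.reduce_sum(tf.square(anchor - positive))  # squared distance to positive: 0.02
d_an = tf.reduce_sum(tf.square(anchor - negative))  # squared distance to negative: 0.5

# The negative is farther away than the positive, but not by the full margin,
# so this triplet still produces a positive loss
loss = tf.maximum(d_ap - d_an + margin, 0.0)
print(float(loss))  # ~0.52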

SemiHard Online Learning

As shown in the paper, the best results come from triplets known as "Semi-Hard". These are defined as triplets where the negative is farther from the anchor than the positive, but still produces a positive loss. To find these triplets efficiently, you utilize online learning and train only on the Semi-Hard examples in each batch.
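To make the selection rule concrete: a semi-hard negative N satisfies ‖f(A) − f(P)‖² < ‖f(A) − f(N)‖² < ‖f(A) − f(P)‖² + α. Below is a minimal sketch of that filter for a single anchor with made-up distances; the actual tfa implementation vectorizes this over the full pairwise distance matrix of each batch:

import tensorflow as tf

margin = 1.0
d_ap = 0.4                                # anchor-to-positive distance (made up)
d_an = tf.constant([0.2, 0.6, 0.9, 1.7])  # anchor-to-negative distances (made up)

# Semi-hard: farther than the positive, but close enough to still produce loss
semi_hard = (d_an > d_ap) & (d_an < d_ap + margin)
print(semi_hard.numpy())  # [False  True  True False]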

Setup

pip install -U tensorflow-addons

import io
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
import tensorflow_datasets as tfds

Prepare the Data

def _normalize_img(img, label):
    # Scale pixel values from [0, 255] to [0, 1]
    img = tf.cast(img, tf.float32) / 255.
    return (img, label)
train_dataset, test_dataset = tfds.load(name="mnist", split=['train', 'test'], as_supervised=True)
# Build your input pipelines
train_dataset = train_dataset.shuffle(1024).batch(32)
train_dataset = train_dataset.map(_normalize_img)
test_dataset = test_dataset.batch(32)
test_dataset = test_dataset.map(_normalize_img)
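TripletSemiHardLoss mines its triplets from the labels within each batch, so the pipeline must keep labels paired with the images, and each batch should contain several classes. As a quick sanity check, you can peek at one batch:

# Peek at a single batch to confirm the shapes the model and loss will receive
images, labels = next(iter(train_dataset))
print(images.shape, images.dtype)  # (32, 28, 28, 1) <dtype: 'float32'>
print(labels.shape, labels.dtype)  # (32,) <dtype: 'int64'>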

Build the Model

[Figure: model architecture]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation=None),  # No activation on final dense layer
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1))  # L2-normalize embeddings
])
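Because the final Lambda layer L2-normalizes the output, every embedding lies on the unit hypersphere, which keeps the Euclidean distances used by the loss in a bounded, comparable range. A quick sketch to verify this on random input:

# Even untrained, the model should emit unit-norm 256-D embeddings
dummy = tf.random.uniform((4, 28, 28, 1))
embeddings = model(dummy)
print(embeddings.shape)                     # (4, 256)
print(tf.norm(embeddings, axis=1).numpy())  # ~[1. 1. 1. 1.]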

Train and Evaluate

# Compile the model
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.001),
    loss=tfa.losses.TripletSemiHardLoss())

# Train the network
history = model.fit(
    train_dataset,
    epochs=5)
Epoch 1/5
1875/1875 [==============================] - 13s 4ms/step - loss: 0.5952
Epoch 2/5
1875/1875 [==============================] - 6s 3ms/step - loss: 0.4562
Epoch 3/5
1875/1875 [==============================] - 7s 4ms/step - loss: 0.4207
Epoch 4/5
1875/1875 [==============================] - 6s 3ms/step - loss: 0.4040
Epoch 5/5
1875/1875 [==============================] - 6s 3ms/step - loss: 0.3964
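The fit call returns a History object; one quick way to check convergence is to plot the per-epoch loss. A sketch using matplotlib, which is not among the imports above:

import matplotlib.pyplot as plt

# Plot the per-epoch training loss recorded in the History object
plt.plot(history.history['loss'])
plt.xlabel('Epoch')
plt.ylabel('Triplet loss')
plt.show()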
# Generate embeddings for the test set
results = model.predict(test_dataset)
313/313 [==============================] - 1s 1ms/step
# Save test embeddings for visualization in projector
np.savetxt("vecs.tsv", results, delimiter='\t')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for img, labels in tfds.as_numpy(test_dataset):
    for x in labels:
        out_m.write(str(x) + "\n")
out_m.close()
try:
    from google.colab import files
    files.download('vecs.tsv')
    files.download('meta.tsv')
except ImportError:
    pass  # Not running in Colab; the files remain on disk

Embedding Projector

The vector and metadata files can be loaded and visualized here: https://projector.tensorflow.org/

You can see the results of our embedded test data when visualized with UMAP: [Figure: UMAP visualization of the embedded test data]
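Beyond the visual check, a simple quantitative probe is nearest-neighbor retrieval: if the embedding separates classes well, each test embedding's nearest neighbor should usually share its label. A minimal sketch on a 1,000-point subset, assuming results and test_dataset from above are still in memory:

# Collect test labels in the same order model.predict saw them
labels = np.concatenate([l for _, l in tfds.as_numpy(test_dataset)])

# Pairwise squared Euclidean distances via ||a - b||^2 = a.a + b.b - 2 a.b
subset, subset_labels = results[:1000], labels[:1000]
sq = np.sum(subset ** 2, axis=1)
dists = sq[:, None] + sq[None, :] - 2.0 * subset @ subset.T
np.fill_diagonal(dists, np.inf)  # exclude each point from being its own neighbor

# Fraction of points whose nearest neighbor shares their label
nn = np.argmin(dists, axis=1)
print(np.mean(subset_labels[nn] == subset_labels))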
