23.8. The d2l API Document
This section displays classes and functions (sorted alphabetically) in
the d2l package, showing where they are defined in the book so you
can find more detailed implementations and explanations. See also the
source code on the GitHub
repository.
23.8.1. Classes
- class d2l.torch.AdditiveAttention(num_hiddens, dropout, **kwargs)
Bases: Module
Additive attention.
Defined in Section 11.3.2.2
- forward(queries, keys, values, valid_lens)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
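A minimal usage sketch, following the toy example of Section 11.3.2.2 (the shapes here are illustrative, not part of the API):

    import torch
    from d2l import torch as d2l

    # 2 examples; 1 query of size 20; 10 key-value pairs with keys of size 2, values of size 4
    queries = torch.normal(0, 1, (2, 1, 20))
    keys = torch.normal(0, 1, (2, 10, 2))
    values = torch.normal(0, 1, (2, 10, 4))
    valid_lens = torch.tensor([2, 6])  # attend to only the first 2 (resp. 6) pairs
    attention = d2l.AdditiveAttention(num_hiddens=8, dropout=0.1)
    attention.eval()  # disable dropout
    attention(queries, keys, values, valid_lens).shape  # torch.Size([2, 1, 4])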
- class d2l.torch.AddNorm(norm_shape, dropout)
Bases: Module
The residual connection followed by layer normalization.
Defined in Section 11.7.2
- forward(X, Y)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
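A short sketch (toy values assumed): AddNorm adds its two same-shape inputs and layer-normalizes the sum, so the output shape matches the input shape:

    import torch
    from d2l import torch as d2l

    add_norm = d2l.AddNorm(norm_shape=4, dropout=0.5)
    shape = (2, 3, 4)  # the residual connection requires both inputs to share a shape
    add_norm(torch.ones(shape), torch.ones(shape)).shape  # torch.Size([2, 3, 4])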
- class d2l.torch.AttentionDecoder
Bases: Decoder
The base attention-based decoder interface.
Defined in Section 11.4
- property attention_weights
- class d2l.torch.Classifier(plot_train_per_epoch=2, plot_valid_per_epoch=1)
Bases: Module
The base class of classification models.
Defined in Section 4.3
- accuracy(Y_hat, Y, averaged=True)
Compute the number of correct predictions.
Defined in Section 4.3
- layer_summary(X_shape)
Defined in Section 7.6
- loss(Y_hat, Y, averaged=True)
Defined in Section 4.5
- class d2l.torch.DataModule(root='../data', num_workers=4)
Bases: HyperParameters
The base class of data.
Defined in Section 3.2.2
- get_tensorloader(tensors, train, indices=slice(0, None, None))
Defined in Section 3.3
- class d2l.torch.Decoder
Bases: Module
The base decoder interface for the encoder–decoder architecture.
Defined in Section 10.6
- forward(X, state)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class d2l.torch.DotProductAttention(dropout)
Bases: Module
Scaled dot product attention.
Defined in Section 11.3.2.2
- forward(queries, keys, values, valid_lens=None)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
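A sketch mirroring the AdditiveAttention example above; unlike additive attention, the queries and keys must share the same feature size:

    import torch
    from d2l import torch as d2l

    queries = torch.normal(0, 1, (2, 1, 2))  # query size equals key size
    keys = torch.normal(0, 1, (2, 10, 2))
    values = torch.normal(0, 1, (2, 10, 4))
    valid_lens = torch.tensor([2, 6])
    attention = d2l.DotProductAttention(dropout=0.5)
    attention.eval()
    attention(queries, keys, values, valid_lens).shape  # torch.Size([2, 1, 4])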
- class d2l.torch.Encoder
Bases: Module
The base encoder interface for the encoder–decoder architecture.
Defined in Section 10.6
- forward(X, *args)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class d2l.torch.EncoderDecoder(encoder, decoder)
Bases: Classifier
The base class for the encoder–decoder architecture.
Defined in Section 10.6
- forward(enc_X, dec_X, *args)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- predict_step(batch, device, num_steps, save_attention_weights=False)
Defined in Section 10.7.6
- class d2l.torch.FashionMNIST(batch_size=64, resize=(28, 28))
Bases: DataModule
The Fashion-MNIST dataset.
Defined in Section 4.2
- get_dataloader(train)
Defined in Section 4.2
- text_labels(indices)
Return text labels.
Defined in Section 4.2
- visualize(batch, nrows=1, ncols=8, labels=[])
Defined in Section 4.2
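A loading-and-inspection sketch (visualize draws the images with matplotlib, so it is typically run in a notebook):

    from d2l import torch as d2l

    data = d2l.FashionMNIST(batch_size=64, resize=(32, 32))
    X, y = next(iter(data.get_dataloader(train=True)))
    X.shape, y.shape  # (torch.Size([64, 1, 32, 32]), torch.Size([64]))
    data.visualize((X, y))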
- class d2l.torch.GRU(num_inputs, num_hiddens, num_layers, dropout=0)
Bases: RNN
The multilayer GRU model.
Defined in Section 10.3
- class d2l.torch.HyperParameters
Bases: object
The base class of hyperparameters.
- save_hyperparameters(ignore=[])
Save function arguments into class attributes.
Defined in Section 23.7
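The example from Section 3.2: save_hyperparameters turns the arguments of __init__ into attributes of the same name, skipping those listed in ignore:

    from d2l import torch as d2l

    class B(d2l.HyperParameters):
        def __init__(self, a, b, c):
            self.save_hyperparameters(ignore=['c'])
            print('self.a =', self.a, 'self.b =', self.b)
            print('There is no self.c =', not hasattr(self, 'c'))

    b = B(a=1, b=2, c=3)
    # self.a = 1 self.b = 2
    # There is no self.c = True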
- class d2l.torch.LeNet(lr=0.1, num_classes=10)
Bases: Classifier
The LeNet-5 model.
Defined in Section 7.6
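A training sketch along the lines of Section 7.6 (num_gpus=1 assumes a GPU is available; Trainer.fit, introduced in Section 3.2.2, drives the loop):

    from d2l import torch as d2l

    data = d2l.FashionMNIST(batch_size=128)
    model = d2l.LeNet(lr=0.1)
    # Run one dummy batch through the lazily initialized layers, then apply init_cnn
    model.apply_init([next(iter(data.get_dataloader(True)))[0]], d2l.init_cnn)
    trainer = d2l.Trainer(max_epochs=10, num_gpus=1)
    trainer.fit(model, data)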
- class d2l.torch.LinearRegression(lr)
Bases: Module
The linear regression model implemented with high-level APIs.
Defined in Section 3.5
- configure_optimizers()
Defined in Section 3.5
- forward(X)
Defined in Section 3.5
- get_w_b()
Defined in Section 3.5
- loss(y_hat, y)
Defined in Section 3.5
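An end-to-end sketch from Section 3.5, combining the model with SyntheticRegressionData and Trainer (both documented in this section):

    import torch
    from d2l import torch as d2l

    model = d2l.LinearRegression(lr=0.03)
    data = d2l.SyntheticRegressionData(w=torch.tensor([2, -3.4]), b=4.2)
    trainer = d2l.Trainer(max_epochs=3)
    trainer.fit(model, data)
    w, b = model.get_w_b()  # recover the learned parameters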
- class d2l.torch.LinearRegressionScratch(num_inputs, lr, sigma=0.01)
Bases: Module
The linear regression model implemented from scratch.
Defined in Section 3.4
- configure_optimizers()
Defined in Section 3.4
- forward(X)
Defined in Section 3.4
- loss(y_hat, y)
Defined in Section 3.4
- class d2l.torch.Module(plot_train_per_epoch=2, plot_valid_per_epoch=1)
Bases: Module, HyperParameters
The base class of models.
Defined in Section 3.2
- apply_init(inputs, init=None)
Defined in Section 6.4
- configure_optimizers()
Defined in Section 4.3
- forward(X)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class d2l.torch.MTFraEng(batch_size, num_steps=9, num_train=512, num_val=128)
Bases: DataModule
The English-French dataset.
Defined in Section 10.5
- build(src_sentences, tgt_sentences)
Defined in Section 10.5.3
- get_dataloader(train)
Defined in Section 10.5.3
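A sketch of reading one minibatch (the dataset is downloaded on first use; per Section 10.5.3, each batch holds the source tokens, the decoder input, the source valid lengths, and the labels):

    from d2l import torch as d2l

    data = d2l.MTFraEng(batch_size=3)
    src, tgt, src_valid_len, label = next(iter(data.get_dataloader(train=True)))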
- class d2l.torch.MultiHeadAttention(num_hiddens, num_heads, dropout, bias=False, **kwargs)
Bases: Module
Multi-head attention.
Defined in Section 11.5
- forward(queries, keys, values, valid_lens)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- transpose_output(X)
Reverse the operation of transpose_qkv.
Defined in Section 11.5
- transpose_qkv(X)
Transposition for parallel computation of multiple attention heads.
Defined in Section 11.5
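The shape-check sketch from Section 11.5; the output keeps the query's batch and length dimensions and has num_hiddens features:

    import torch
    from d2l import torch as d2l

    num_hiddens, num_heads = 100, 5
    attention = d2l.MultiHeadAttention(num_hiddens, num_heads, dropout=0.5)
    batch_size, num_queries, num_kvpairs = 2, 4, 6
    valid_lens = torch.tensor([3, 2])
    X = torch.ones((batch_size, num_queries, num_hiddens))
    Y = torch.ones((batch_size, num_kvpairs, num_hiddens))
    attention(X, Y, Y, valid_lens).shape  # torch.Size([2, 4, 100])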
- class d2l.torch.PositionalEncoding(num_hiddens, dropout, max_len=1000)
Bases: Module
Positional encoding.
Defined in Section 11.6
- forward(X)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
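A sketch following Section 11.6: the module adds a fixed sinusoidal encoding to its input (and applies dropout), so the shape is preserved:

    import torch
    from d2l import torch as d2l

    encoding_dim, num_steps = 32, 60
    pos_encoding = d2l.PositionalEncoding(encoding_dim, dropout=0)
    X = pos_encoding(torch.zeros((1, num_steps, encoding_dim)))
    X.shape  # torch.Size([1, 60, 32])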
- class d2l.torch.PositionWiseFFN(ffn_num_hiddens, ffn_num_outputs)
Bases: Module
The positionwise feed-forward network.
Defined in Section 11.7
- forward(X)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
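A sketch: the same two-layer MLP is applied at every position, so only the last (feature) dimension changes, to ffn_num_outputs:

    import torch
    from d2l import torch as d2l

    ffn = d2l.PositionWiseFFN(ffn_num_hiddens=4, ffn_num_outputs=8)
    ffn.eval()
    ffn(torch.ones((2, 3, 4))).shape  # torch.Size([2, 3, 8])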
- class d2l.torch.ProgressBoard(xlabel=None, ylabel=None, xlim=None, ylim=None, xscale='linear', yscale='linear', ls=['-', '--', '-.', ':'], colors=['C0', 'C1', 'C2', 'C3'], fig=None, axes=None, figsize=(3.5, 2.5), display=True)
Bases: HyperParameters
The board that plots data points in animation.
Defined in Section 3.2
- draw(x, y, label, every_n=1)
Defined in Section 23.7
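The animation sketch from Section 3.2 (every_n controls how often a point is actually drawn):

    import numpy as np
    from d2l import torch as d2l

    board = d2l.ProgressBoard('x')
    for x in np.arange(0, 10, 0.1):
        board.draw(x, np.sin(x), 'sin', every_n=2)
        board.draw(x, np.cos(x), 'cos', every_n=10)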
- class d2l.torch.Residual(num_channels, use_1x1conv=False, strides=1)
Bases: Module
The Residual block of ResNet models.
Defined in Section 8.6
- forward(X)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
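A shape-preserving sketch in the spirit of Section 8.6 (without the 1x1 convolution, input and output channels must match):

    import torch
    from d2l import torch as d2l

    blk = d2l.Residual(num_channels=3)
    X = torch.randn(4, 3, 6, 6)
    blk(X).shape  # torch.Size([4, 3, 6, 6])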
- class d2l.torch.ResNeXtBlock(num_channels, groups, bot_mul, use_1x1conv=False, strides=1)
Bases: Module
The ResNeXt block.
Defined in Section 8.6.2
- forward(X)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class d2l.torch.RNN(num_inputs, num_hiddens)
Bases: Module
The RNN model implemented with high-level APIs.
Defined in Section 9.6
- forward(inputs, H=None)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class d2l.torch.RNNLM(rnn, vocab_size, lr=0.01)
Bases: RNNLMScratch
The RNN-based language model implemented with high-level APIs.
Defined in Section 9.6
- output_layer(hiddens)
Defined in Section 9.5
- class d2l.torch.RNNLMScratch(rnn, vocab_size, lr=0.01)
Bases: Classifier
The RNN-based language model implemented from scratch.
Defined in Section 9.5
- forward(X, state=None)
Defined in Section 9.5
- one_hot(X)
Defined in Section 9.5
- output_layer(rnn_outputs)
Defined in Section 9.5
- predict(prefix, num_preds, vocab, device=None)
Defined in Section 9.5
- class d2l.torch.RNNScratch(num_inputs, num_hiddens, sigma=0.01)
Bases: Module
The RNN model implemented from scratch.
Defined in Section 9.5
- forward(inputs, state=None)
Defined in Section 9.5
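A language-modeling sketch from Section 9.5 combining RNNScratch with RNNLMScratch (documented above); num_gpus=1 assumes a GPU, and 100 epochs take a while:

    from d2l import torch as d2l

    data = d2l.TimeMachine(batch_size=1024, num_steps=32)
    rnn = d2l.RNNScratch(num_inputs=len(data.vocab), num_hiddens=32)
    model = d2l.RNNLMScratch(rnn, vocab_size=len(data.vocab), lr=1)
    trainer = d2l.Trainer(max_epochs=100, gradient_clip_val=1, num_gpus=1)
    trainer.fit(model, data)
    model.predict('it has', 20, data.vocab)  # continue the prefix for 20 steps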
- class d2l.torch.Seq2Seq(encoder, decoder, tgt_pad, lr)
Bases: EncoderDecoder
The RNN encoder–decoder for sequence-to-sequence learning.
Defined in Section 10.7.3
- configure_optimizers()
Defined in Section 4.3
- class d2l.torch.Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers, dropout=0)
Bases: Encoder
The RNN encoder for sequence-to-sequence learning.
Defined in Section 10.7
- forward(X, *args)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
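The shape-check sketch from Section 10.7; note that the encoder's outputs are time-major:

    import torch
    from d2l import torch as d2l

    vocab_size, embed_size, num_hiddens, num_layers = 10, 8, 16, 2
    batch_size, num_steps = 4, 9
    encoder = d2l.Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers)
    X = torch.zeros((batch_size, num_steps), dtype=torch.long)
    enc_outputs, enc_state = encoder(X)
    enc_outputs.shape  # torch.Size([9, 4, 16]), i.e. (num_steps, batch_size, num_hiddens)
    enc_state.shape    # torch.Size([2, 4, 16]), i.e. (num_layers, batch_size, num_hiddens)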
- class d2l.torch.SGD(params, lr)
Bases: HyperParameters
Minibatch stochastic gradient descent.
Defined in Section 3.4
- class d2l.torch.SoftmaxRegression(num_outputs, lr)
Bases: Classifier
The softmax regression model.
Defined in Section 4.5
- forward(X)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class d2l.torch.SyntheticRegressionData(w, b, noise=0.01, num_train=1000, num_val=1000, batch_size=32)
Bases: DataModule
Synthetic data for linear regression.
Defined in Section 3.3
- get_dataloader(train)
Defined in Section 3.3
- class d2l.torch.TimeMachine(batch_size, num_steps, num_train=10000, num_val=5000)
Bases: DataModule
The Time Machine dataset.
Defined in Section 9.2
- build(raw_text, vocab=None)
Defined in Section 9.2
- get_dataloader(train)
Defined in Section 9.3.3
- class d2l.torch.Trainer(max_epochs, num_gpus=0, gradient_clip_val=0)
Bases: HyperParameters
The base class for training models with data.
Defined in Section 3.2.2
- clip_gradients(grad_clip_val, model)
Defined in Section 9.5
- fit_epoch()
Defined in Section 3.4
- prepare_batch(batch)
Defined in Section 6.7
- prepare_model(model)
Defined in Section 6.7
- class d2l.torch.TransformerEncoder(vocab_size, num_hiddens, ffn_num_hiddens, num_heads, num_blks, dropout, use_bias=False)
Bases: Encoder
The Transformer encoder.
Defined in Section 11.7.4
- forward(X, valid_lens)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
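The shape-check sketch from Section 11.7.4:

    import torch
    from d2l import torch as d2l

    encoder = d2l.TransformerEncoder(vocab_size=200, num_hiddens=24, ffn_num_hiddens=48,
                                     num_heads=8, num_blks=2, dropout=0.5)
    valid_lens = torch.tensor([3, 2])
    X = torch.ones((2, 100), dtype=torch.long)
    encoder(X, valid_lens).shape  # torch.Size([2, 100, 24])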
- class d2l.torch.TransformerEncoderBlock(num_hiddens, ffn_num_hiddens, num_heads, dropout, use_bias=False)
Bases: Module
The Transformer encoder block.
Defined in Section 11.7.2
- forward(X, valid_lens)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
23.8.2. Functions
- d2l.torch.add_to_class(Class)
Register functions as methods in the created class.
Defined in Section 3.2
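The example from Section 3.2: the decorator attaches a function to an already-created class, after which even existing instances can call it:

    from d2l import torch as d2l

    class A:
        def __init__(self):
            self.b = 1

    a = A()

    @d2l.add_to_class(A)
    def do(self):
        print('Class attribute "b" is', self.b)

    a.do()  # Class attribute "b" is 1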
- d2l.torch.bleu(pred_seq, label_seq, k)
Compute the BLEU score.
Defined in Section 10.7.6
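A sketch (the token strings are made up): both sequences are whitespace-separated token strings, and k is the longest n-gram order used:

    from d2l import torch as d2l

    d2l.bleu('he is calm .', 'he is so calm .', k=2)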
- d2l.torch.check_len(a, n)
Check the length of a list.
Defined in Section 9.5
- d2l.torch.check_shape(a, shape)
Check the shape of a tensor.
Defined in Section 9.5
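Both helpers simply assert, and are used throughout the book for sanity checks, e.g.:

    import torch
    from d2l import torch as d2l

    d2l.check_shape(torch.ones(2, 3), (2, 3))  # passes silently
    d2l.check_len([1, 2, 3], 3)                # passes silently; a mismatch raises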
- d2l.torch.corr2d(X, K)
Compute 2D cross-correlation.
Defined in Section 7.2
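The worked example from Section 7.2:

    import torch
    from d2l import torch as d2l

    X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
    K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
    d2l.corr2d(X, K)  # tensor([[19., 25.], [37., 43.]])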
- d2l.torch.cpu()
Get the CPU device.
Defined in Section 6.7
- d2l.torch.gpu(i=0)
Get a GPU device.
Defined in Section 6.7
- d2l.torch.init_cnn(module)
Initialize weights for CNNs.
Defined in Section 7.6
- d2l.torch.init_seq2seq(module)
Initialize weights for sequence-to-sequence learning.
Defined in Section 10.7
- d2l.torch.masked_softmax(X, valid_lens)
Perform softmax operation by masking elements on the last axis.
Defined in Section 11.3
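The sketch from Section 11.3: entries beyond each row's valid length receive probability 0:

    import torch
    from d2l import torch as d2l

    # Keep the first 2 columns in the first example and the first 3 in the second
    d2l.masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3]))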
- d2l.torch.num_gpus()
Get the number of available GPUs.
Defined in Section 6.7
- d2l.torch.plot(X, Y=None, xlabel=None, ylabel=None, legend=[], xlim=None, ylim=None, xscale='linear', yscale='linear', fmts=('-', 'm--', 'g-.', 'r:'), figsize=(3.5, 2.5), axes=None)
Plot data points.
Defined in Section 2.4
- d2l.torch.set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
Set the axes for matplotlib.
Defined in Section 2.4
- d2l.torch.set_figsize(figsize=(3.5, 2.5))
Set the figure size for matplotlib.
Defined in Section 2.4
- d2l.torch.show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5), cmap='Reds')
Show heatmaps of matrices.
Defined in Section 11.1
- d2l.torch.show_list_len_pair_hist(legend, xlabel, ylabel, xlist, ylist)
Plot the histogram for list length pairs.
Defined in Section 10.5
- d2l.torch.try_all_gpus()
Return all available GPUs, or [cpu(),] if no GPU exists.
Defined in Section 6.7
- d2l.torch.try_gpu(i=0)
Return gpu(i) if it exists, otherwise return cpu().
Defined in Section 6.7
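A sketch; the results depend on the hardware (shown here for a machine with one GPU):

    from d2l import torch as d2l

    d2l.try_gpu()       # device(type='cuda', index=0)
    d2l.try_gpu(10)     # no 11th GPU, so: device(type='cpu')
    d2l.try_all_gpus()  # [device(type='cuda', index=0)]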
- d2l.torch.use_svg_display()
Use the svg format to display a plot in Jupyter.
Defined in Section 2.4