tfq.layers.PQC

Parametrized Quantum Circuit (PQC) Layer.

tfq.layers.PQC(
 model_circuit,
 operators,
 *,
 repetitions=None,
 backend='noiseless',
 differentiator=None,
 initializer=tf.keras.initializers.RandomUniform(0, 2 * np.pi),
 regularizer=None,
 constraint=None,
 **kwargs
)

Args

model_circuit cirq.Circuit

Circuit with sympy.Symbols to be trained.

operators Union[cirq.PauliSum, cirq.PauliString, list]

Observable(s) to measure.

repetitions Optional[int]

Number of measurement repetitions. If None, analytic
expectation is used.

backend Union[str, cirq.Sampler, cirq.sim.simulator.SimulatesExpectationValues]

Backend used to simulate the circuit. Defaults to 'noiseless'.

differentiator Optional[tfq.differentiators.Differentiator]

Differentiation scheme.

initializer tf.keras.initializers.Initializer

Initializer for circuit parameters.

regularizer Optional[tf.keras.regularizers.Regularizer]

Regularizer for circuit parameters.

constraint Optional[tf.keras.constraints.Constraint]

Constraint for circuit parameters.

**kwargs Additional keyword arguments for the parent class.
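
The initializer, regularizer, and constraint arguments accept standard tf.keras objects. A minimal sketch (the single-qubit circuit and names here are illustrative only):

import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
pqc = tfq.layers.PQC(
    cirq.Circuit(cirq.rx(theta)(qubit)),
    cirq.Z(qubit),
    initializer=tf.keras.initializers.Zeros(),   # start the angle at 0
    regularizer=tf.keras.regularizers.L2(1e-3))  # L2 penalty on the angle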

Input shape

tf.Tensor of shape [batch_size], each entry a serialized circuit (from tfq.convert_to_tensor).

Output shape

tf.Tensor of shape [batch_size, n_operators], expectation values for each operator.

This layer is for training parameterized quantum models. Given a parameterized circuit, this layer initializes the parameters and manages them in a Keras-native way.

We start by defining a simple quantum circuit on one qubit. This circuit parameterizes an arbitrary rotation on the Bloch sphere in terms of the three angles a, b, and c:

import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

q = cirq.GridQubit(0, 0)
(a, b, c) = sympy.symbols("a b c")
circuit = cirq.Circuit(
    cirq.rz(a)(q),
    cirq.rx(b)(q),
    cirq.rz(c)(q),
    cirq.rx(-b)(q),
    cirq.rz(-a)(q)
)

To extract information from our circuit, we must apply measurement operators; for now we choose a Z measurement. To observe an output, we must also feed our model quantum data (NOTE: quantum data means quantum circuits with no free parameters). Though the exact output values depend on the default random initialization of the angles in our model, the two results will be negatives of each other, since cirq.X(q) causes a bit flip:

outputs = tfq.layers.PQC(circuit, cirq.Z(q))
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(),
    cirq.Circuit(cirq.X(q))
])
res = outputs(quantum_data)
res
<tf.Tensor: id=577, shape=(2, 1), dtype=float32, numpy=
array([[ 0.8722095],
       [-0.8722095]], dtype=float32)>

We can also choose to measure the three Pauli matrices, sufficient to fully characterize the operation of our model, or choose to simulate sampled expectation values by specifying a number of measurement shots (repetitions) to average over. Notice that using only 200 repetitions introduces variation between the two rows of data, due to the probabilistic nature of measurement.

measurement = [cirq.X(q), cirq.Y(q), cirq.Z(q)]
outputs = tfq.layers.PQC(circuit, measurement, repetitions=200)
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(),
    cirq.Circuit(cirq.X(q))
])
res = outputs(quantum_data)
res
<tf.Tensor: id=808, shape=(2, 3), dtype=float32, numpy=
array([[-0.38,  0.9 ,  0.14],
       [ 0.19, -0.95, -0.35]], dtype=float32)>

A value for backend can be supplied in the layer constructor arguments to indicate which supported backend you would like to use, and a value for differentiator can be supplied to indicate the differentiation scheme this PQC layer should use. Here is how you would take the gradients of the above example using a cirq.Simulator backend (slower than the default backend='noiseless', which runs in C++):

q = cirq.GridQubit(0, 0)
(a, b, c) = sympy.symbols("a b c")
circuit = cirq.Circuit(
    cirq.rz(a)(q),
    cirq.rx(b)(q),
    cirq.rz(c)(q),
    cirq.rx(-b)(q),
    cirq.rz(-a)(q)
)
measurement = [cirq.X(q), cirq.Y(q), cirq.Z(q)]
outputs = tfq.layers.PQC(
    circuit,
    measurement,
    repetitions=5000,
    backend=cirq.Simulator(),
    differentiator=tfq.differentiators.ParameterShift())
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(),
    cirq.Circuit(cirq.X(q))
])
res = outputs(quantum_data)
res
<tf.Tensor: id=891, shape=(2, 3), dtype=float32, numpy=
array([[-0.5956, -0.2152,  0.7756],
       [ 0.5728,  0.1944, -0.7848]], dtype=float32)>

Lastly, like all layers in TensorFlow, the PQC layer can be called on any tf.Tensor as long as it is the right shape. This means you could replace quantum_data with values fed in from a tf.keras.Input.
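
For example, here is a minimal sketch of wiring the layer into a full Keras model, reusing circuit, q, and quantum_data from the examples above; a tf.keras.Input with dtype=tf.string carries the serialized circuits:

circuit_input = tf.keras.Input(shape=(), dtype=tf.string)
expectation = tfq.layers.PQC(circuit, cirq.Z(q))(circuit_input)
model = tf.keras.Model(inputs=circuit_input, outputs=expectation)
res = model(quantum_data)  # same shape as calling the layer directly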

Attributes

compute_dtype The dtype of the computations performed by the layer.
dtype Alias of layer.variable_dtype.
dtype_policy

input Retrieves the input tensor(s) of a symbolic operation.

Only returns the tensor(s) corresponding to the first time the operation was called.

input_dtype The dtype layer inputs should be converted to.
input_spec

losses List of scalar losses from add_loss, regularizers and sublayers.
metrics List of all metrics.
metrics_variables List of all metric variables.
non_trainable_variables List of all non-trainable layer state.

This extends layer.non_trainable_weights to include all state used by the layer including state for metrics and SeedGenerators.

non_trainable_weights List of all non-trainable weight variables of the layer.

These are the weights that should not be updated by the optimizer during training. Unlike layer.non_trainable_variables, this excludes metric state and random seeds.

output Retrieves the output tensor(s) of a layer.

Only returns the tensor(s) corresponding to the first time the operation was called.

path The path of the layer.

If the layer has not been built yet, it will be None.

quantization_mode The quantization mode of this layer, None if not quantized.
supports_masking Whether this layer supports computing a mask using compute_mask.
symbols The symbols that are managed by this layer (in-order).

trainable Settable boolean, whether this layer should be trainable or not.
trainable_variables List of all trainable layer state.

This is equivalent to layer.trainable_weights.

trainable_weights List of all trainable weight variables of the layer.

These are the weights that get updated by the optimizer during training.

variable_dtype The dtype of the state (weights) of the layer.
variables List of all layer state, including random seeds.

This extends layer.weights to include all state used by the layer including SeedGenerators.

Note that metrics variables are not included here, use metrics_variables to visit all the metric variables.

weights List of all weight variables of the layer.

Unlike layer.variables, this excludes metric state and random seeds.

Methods

add_loss

add_loss(
 loss
)

Can be called inside of the call() method to add a scalar loss.

Example:

class MyLayer(Layer):
    ...
    def call(self, x):
        self.add_loss(ops.sum(x))
        return x

add_metric

add_metric(
 *args, **kwargs
)

add_variable

add_variable(
 shape,
 initializer,
 dtype=None,
 trainable=True,
 autocast=True,
 regularizer=None,
 constraint=None,
 name=None
)

Add a weight variable to the layer.

Alias of add_weight().

add_weight

add_weight(
 shape=None,
 initializer=None,
 dtype=None,
 trainable=True,
 autocast=True,
 regularizer=None,
 constraint=None,
 aggregation='none',
 overwrite_with_gradient=False,
 name=None
)

Add a weight variable to the layer.

Args
shape Shape tuple for the variable. Must be fully-defined (no None entries). Defaults to () (scalar) if unspecified.
initializer Initializer object to use to populate the initial variable value, or string name of a built-in initializer (e.g. "random_normal"). If unspecified, defaults to "glorot_uniform" for floating-point variables and to "zeros" for all other types (e.g. int, bool).
dtype Dtype of the variable to create, e.g. "float32". If unspecified, defaults to the layer's variable dtype (which itself defaults to "float32" if unspecified).
trainable Boolean, whether the variable should be trainable via backprop or whether its updates are managed manually. Defaults to True.
autocast Boolean, whether to autocast layers variables when accessing them. Defaults to True.
regularizer Regularizer object to call to apply penalty on the weight. These penalties are summed into the loss function during optimization. Defaults to None.
constraint Constraint object to call on the variable after any optimizer update, or string name of a built-in constraint. Defaults to None.
aggregation Optional string, one of None, "none", "mean", "sum" or "only_first_replica". Annotates the variable with the type of multi-replica aggregation to be used for this variable when writing custom data parallel training loops. Defaults to "none".
overwrite_with_gradient Boolean, whether to overwrite the variable with the computed gradient. This is useful for float8 training. Defaults to False.
name String name of the variable. Useful for debugging purposes.
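
As a generic (non-PQC-specific) sketch, a custom layer typically calls add_weight from its build method:

import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    def build(self, input_shape):
        # One trainable kernel created with a built-in initializer.
        self.kernel = self.add_weight(
            shape=(input_shape[-1], 4),
            initializer="glorot_uniform",
            trainable=True,
            name="kernel")

    def call(self, x):
        return tf.matmul(x, self.kernel)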

build

build(
 input_shape
)

Keras build function.

build_from_config

build_from_config(
 config
)

Builds the layer's states with the supplied config dict.

By default, this method calls the build(config["input_shape"]) method, which creates weights based on the layer's input shape in the supplied config. If your config contains other information needed to load the layer's state, you should override this method.

Args
config Dict containing the input shape associated with this layer.

call

call(
 inputs
)

Keras call function.

Args
inputs tf.Tensor

Tensor of shape [batch_size], each entry a serialized circuit (from tfq.convert_to_tensor).

Returns
tf.Tensor Tensor of shape [batch_size, n_operators], expectation values for each operator.

compute_mask

compute_mask(
 inputs, previous_mask
)

compute_output_shape

compute_output_shape(
 *args, **kwargs
)

compute_output_spec

compute_output_spec(
 *args, **kwargs
)

count_params

count_params()

Count the total number of scalars composing the weights.

Returns
An integer count.
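
For the three-symbol circuit used in the examples above, the layer holds one scalar per sympy.Symbol (a sketch; the layer must be built, e.g. by calling it on data, before counting):

outputs.count_params()  # -> 3, one angle each for a, b, and c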

from_config

@classmethod
from_config(
 config
)

Creates an operation from its config.

This method is the reverse of get_config, capable of instantiating the same operation from the config dictionary.

if "dtype" in config and isinstance(config["dtype"], dict):
 policy = dtype_policies.deserialize(config["dtype"])

Args
config A Python dictionary, typically the output of get_config.

Returns
An operation instance.

get_build_config

get_build_config()

Returns a dictionary with the layer's input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you're writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns
A dict containing the input shape associated with the layer.

get_config

get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

get_weights

get_weights()

Return the values of layer.weights as a list of NumPy arrays.

load_own_variables

load_own_variables(
 store
)

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args
store Dict from which the state of the model will be loaded.

quantize

quantize(
 mode, type_check=True
)

quantized_build

quantized_build(
 input_shape, mode
)

quantized_call

quantized_call(
 *args, **kwargs
)

rematerialized_call

rematerialized_call(
 layer_call, *args, **kwargs
)

Enables rematerialization dynamically for the layer's call method.

Args
layer_call The original call method of a layer.

Returns
Rematerialized layer's call method.

save_own_variables

save_own_variables(
 store
)

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args
store Dict where the state of the model will be saved.

set_weights

set_weights(
 weights
)

Sets the values of layer.weights from a list of NumPy arrays.
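
For this layer, the list holds a single array of shape (n_symbols,). A round-trip sketch using the outputs layer from the examples above:

angles = outputs.get_weights()  # [array of shape (3,)] for a, b, c
outputs.set_weights(angles)     # restores the same angle values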

stateless_call

stateless_call(
 trainable_variables,
 non_trainable_variables,
 *args,
 return_losses=False,
 **kwargs
)

Call the layer without any side effects.

Args
trainable_variables List of trainable variables of the model.
non_trainable_variables List of non-trainable variables of the model.
*args Positional arguments to be passed to call().
return_losses If True, stateless_call() will return the list of losses created during call() as part of its return values.
**kwargs Keyword arguments to be passed to call().

Returns
A tuple. By default, returns (outputs, non_trainable_variables). If return_losses = True, then returns (outputs, non_trainable_variables, losses).

Example:

model = ...
data = ...
trainable_variables = model.trainable_variables
non_trainable_variables = model.non_trainable_variables
# Call the model with zero side effects
outputs, non_trainable_variables = model.stateless_call(
 trainable_variables,
 non_trainable_variables,
 data,
)
# Attach the updated state to the model
# (until you do this, the model is still in its pre-call state).
for ref_var, value in zip(
 model.non_trainable_variables, non_trainable_variables
):
 ref_var.assign(value)

symbol_values

symbol_values()

Returns a Python dict containing symbol name, value pairs.

Returns
Python dict with str keys and float values representing the current symbol values.
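
A short usage sketch, assuming the layer has already been called on data so that its weights exist:

outputs = tfq.layers.PQC(circuit, cirq.Z(q))
_ = outputs(quantum_data)  # builds the layer's parameter variable
outputs.symbol_values()    # maps each symbol to its current angle value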

symbolic_call

symbolic_call(
 *args, **kwargs
)

__call__

__call__(
 *args, **kwargs
)

Call self as a function.
