QFI Calculation With PyTorch #194
-
Hello,
I'm trying to use TensorCircuit to perform a QFI calculation. I'm trying to do this on what is essentially a PyTorch module. I know that TensorCircuit has backend support for PyTorch, but I'm unsure of the specifics of how to actually do the calculation. Any help would be appreciated!
-
Would the code in this tutorial be helpful? https://tensorcircuit.readthedocs.io/en/latest/tutorials/imag_time_evo.html (Note that the LHS matrix in imaginary time evolution is the same thing as the QFI.) The only subtlety is that you have to use the PyTorch backend, which lacks the JIT feature for now and might not be that efficient.
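To make the tutorial's recipe concrete under the PyTorch backend, here is a minimal sketch with a toy two-qubit circuit; the circuit structure and parameter shapes are illustrative assumptions, not your actual module:

```python
import torch
import tensorcircuit as tc
from tensorcircuit.experimental import qng

tc.set_backend("pytorch")

n = 2  # toy example: two qubits

def state_fn(params):
    # Callable mapping parameters -> output ket, which is the
    # signature qng expects
    c = tc.Circuit(n)
    for i in range(n):
        c.rx(i, theta=params[i])
    c.cnot(0, 1)
    return c.state()

params = torch.ones(n)
# metric of the variational state; conventionally the QFI is 4x the
# Fubini-Study metric, so check the normalization you need
metric = qng(state_fn)(params)
print(metric)
```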
-
What is your PyTorch version? Could you attach a runnable minimal demo to reproduce the bug?
-
I have PyTorch 2.1.0. I think the full code would be too large to post here - the `m` instance in the above code snippet is its own class, which inherits from `torch.nn.Module` and depends on many other functions.
-
Sure, and this is why I asked for a "minimal demo" to reproduce it: you keep only the necessary ingredients of the module. Say the module were only a Linear layer - would you still get the same error? A sketch of that kind of check is below.
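To illustrate the kind of minimal demo meant here (a hypothetical stand-in, not the actual module): strip everything down to a single layer and check whether the functorch-style transform alone triggers the failure:

```python
import torch

# hypothetical stand-in for the full module: a single Linear layer
lin = torch.nn.Linear(3, 3, bias=False)

def f(x):
    return lin(x)

x = torch.randn(3)
# if this already raises the same functorch error, the rest of the
# module is irrelevant to the bug; if not, add pieces back one by one
jac = torch.func.jacfwd(f)(x)
print(jac)
```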
-
Hi @refraction-ray. I've been messing with the code to understand it. I'm getting a new error message, so I'll post everything here and hopefully you can help me out!
My general class is defined as
```python
import tensorly as tl
import tlquantum as tlq
from tensorly.tt_matrix import TTMatrix
import torch
from torch import randint, rand, arange, complex128
from torch.optim import Adam
import matplotlib.pyplot as plt
from math import factorial, sqrt
import os
import numpy as np

class Metrology(torch.nn.Module):
    def __init__(self, nqubits, L, dt, approx, ncontraq, ncontral,
                 axes=['x'], scale=0.1, dtype=complex128, device=None):
        super().__init__()
        self.nqubits, self.L, self.dt = nqubits, L, dt
        self.approx, self.ncontraq, self.ncontral, self.axes = approx, ncontraq, ncontral, axes
        self.dtype, self.device = dtype, device
        self.scale = scale
        self.Js = torch.nn.Parameter(scale*rand([L])).to(device)
        self.ths = torch.nn.Parameter(scale*rand([L, len(axes)])).to(device)
        self.O = torch.tensor([1.0], device=device)
        self.W = torch.nn.Linear(1, 1, device=None, bias=False).double()
        self.sz = Flip(self.nqubits, ncontraq, device=device, dtype=dtype)
        self.state = make_state(self.nqubits)
        self.state = tlq.qubits_contract(self.state, self.ncontraq)
        self.make_U1()
        self.make_U2()
        self.Op = make_Sy(self.nqubits)

    def concatenate_parameters(self):
        parameters = []
        for param in self.parameters():
            parameters.append(param.view(-1))
        return torch.cat(parameters)

    def get_concatenated_shape(self):
        concatenated_parameters = self.concatenate_parameters()
        return concatenated_parameters.shape

    def make_U1(self):
        self.U1 = []
        for i in range(self.L):
            self.U1 += get_global_perceptron(self.nqubits, self.ncontraq, self.approx,
                                             dt=self.dt, device=self.device, J=self.Js[i])
            for a in range(len(self.axes)):
                self.U1 += get_global_rotation(self.nqubits, self.ncontraq,
                                               axis=self.axes[a], param=self.ths[i, a])
        return

    def make_U2(self):
        self.U2 = [self.sz]
        cnt = 0
        for l in range(len(self.U1)):
            mod1 = self.U1[l]
            if cnt % (len(self.axes)+1) == 0:  # We have a perceptron
                for p in mod1.parameters():
                    J = p.data
                    mod2 = get_global_perceptron(self.nqubits, self.ncontraq, self.approx,
                                                 grad=False, J=J)
                    self.U2 += mod2
            else:  # We have a rotation. The axis is given by
                a = self.axes[l % (len(self.axes)+1) - 1]
                for p in mod1.parameters():
                    theta = -p.data
                    mod2 = get_global_rotation(self.nqubits, self.ncontraq,
                                               axis=a, grad=False, param=theta)
                    self.U2 += mod2
            cnt += 1
        return

    def forward(self, phi):
        P = perturbation(self.nqubits, phi)
        c = tlq.TTCircuit(self.U1 + P + self.U2, self.ncontraq, self.ncontral)
        # return c.forward_expectation_value(self.state, self.Op).real/(self.nqubits/2)
        return self.W(c.forward_expectation_value(self.state, self.Op).real/(self.nqubits/2))

    def forward_ket(self, phi):
        def wrapper(params, phi=phi):
            params = params
            P = perturbation(self.nqubits, phi)
            c = tlq.TTCircuit(self.U1 + P + self.U2, self.ncontraq, self.ncontral)
            state = c.to_ket(self.state)
            return state
        return wrapper
```
I'm using tensorly-quantum for the quantum simulation. Now, when I try to create an instance of this quantum circuit and calculate the QFI after one pass of my circuit, I do:
```python
import tensorcircuit as tc
import torch
import numpy as np
import tensorflow as tf

Nqubits = 10
Ntrails = 30
L = 10
Nepochs = 50
batchsize = 20
axes = ['x']
Phis = torch.tensor(np.linspace(-0.5, 0.5, 100), dtype=torch.float32)
Phis_train = 0.5*(2*torch.rand(100) - 1)
m = Metrology(Nqubits, L, 0.05, 1, 2, 2, scale=0.5, axes=axes)

from tensorcircuit.experimental import qng, qng2
tc.set_backend('pytorch')

# Concatenate parameters
params_generator = m.parameters()
params_pytorch = [torch.Tensor(param.detach().numpy()) for param in params_generator]
# Convert each tensor in the list to the desired dtype
# params_pytorch = [param.type(torch.FloatTensor) for param in params_pytorch]
params_pytorch[1] = params_pytorch[1].squeeze()
params_pytorch[2] = params_pytorch[2].squeeze(dim=1)
params_pytorch = torch.cat(params_pytorch, dim=0)

state = m.forward_ket(Phis[0])  # Should be a Callable which returns a Tensor
qfi_fun = qng(state)
qfi_value = qfi_fun(params_pytorch)
```
After doing this, I get:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
File ~..., in qng.<locals>.wrapper(params, **kws)
    100 psi = f(params)
    101 if mode == "fwd":
--> 102     jac = backend.jacfwd(f)(params)
    103 else:  # "rev"
    104     jac = backend.jacrev(f)(params)

File ~..., in ExtendedBackend.jacfwd.<locals>.wrapper(*args, **kws)
   1507 jjs = []
   1508 for argnum in argnums:  # type: ignore
-> 1509     jj = self.vmap(jvp1, vectorized_argnums=2)(
   1510         pf,
   1511         args,
   1512         tuple(
   1513             [
   1514                 self.reshape(
   1515                     self.eye(self.sizen(arg), dtype=arg.dtype),
   1516                     [-1] + list(self.shape_tuple(arg)),
   1517                 )
   1518                 if i == argnum
   1519                 else self.reshape(
   1520                     self.zeros(
   1521                         [self.sizen(arg), self.sizen(arg)], dtype=arg.dtype
   1522                     ),
   1523                     [-1] + list(self.shape_tuple(arg)),
   1524                 )
   1525                 for i, arg in enumerate(args)
   1526             ]
   1527         ),
   1528     )
   1529     jj = self.tree_map(
   1530         partial(
   1531             _transform, input_shape=list(self.shape_tuple(args[argnum]))
   1532         ),
   1533         jj,
   1534     )
   1535     jjs.append(jj)

File ~..., in PyTorchBackend.vmap.<locals>.wrapper(*args, **kws)
    609 def wrapper(*args: Any, **kws: Any) -> Tensor:
    610     in_axes = tuple([0 if i in vectorized_argnums else None for i in range(len(args))])  # type: ignore
--> 611     return torchlib.vmap(f, in_axes, 0)(*args, **kws)

File ~..., in vmap.<locals>.wrapped(*args, **kwargs)
    187 def wrapped(*args, **kwargs):
--> 188     return vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)

File ~..., in vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
    262     return _chunked_vmap(func, flat_in_dims, chunks_flat_args,
    263                          args_spec, out_dims, randomness, **kwargs)
    265 # If chunk_size is not specified.
--> 266 return _flat_vmap(
    267     func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs
    268 )

File ~..., in doesnt_support_saved_tensors_hooks.<locals>.fn(*args, **kwargs)
     35 @functools.wraps(f)
     36 def fn(*args, **kwargs):
     37     with torch.autograd.graph.disable_saved_tensors_hooks(message):
---> 38         return f(*args, **kwargs)

File ~..., in _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs)
    377 try:
    378     batched_inputs = _create_batched_inputs(flat_in_dims, flat_args, vmap_level, args_spec)
--> 379     batched_outputs = func(*batched_inputs, **kwargs)
    380     return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
    381 finally:

File ~..., in return_partial.<locals>.wrapper(*args, **kws)
     62 @wraps(f)
     63 def wrapper(*args: Any, **kws: Any) -> Any:
---> 64     r = f(*args, **kws)
     65     nr = [r[ind] for ind in return_argnums]  # type: ignore
     66     if one_input:

File ~..., in PyTorchBackend.jvp(self, f, inputs, v)
    594 v = tuple(v)
    595 # for both tf and torch
    596 # behind the scene: https://j-towns.github.io/2017/06/12/A-new-trick.html
    597 # to be investigate whether the overhead issue remains as in
    598 # https://github.com/renmengye/tensorflow-forward-ad/issues/2
--> 599 return torchlib.autograd.functional.jvp(f, inputs, v)

File ~..., in jvp(func, inputs, v, create_graph, strict)
    421 with torch.enable_grad():
    422     is_inputs_tuple, inputs = _as_tuple(inputs, "inputs", "jvp")
--> 423     inputs = _grad_preprocess(inputs, create_graph=create_graph, need_graph=True)
    425 if v is not None:
    426     _, v = _as_tuple(v, "v", "jvp")

File ~..., in _grad_preprocess(inputs, create_graph, need_graph)
     83     res.append(inp.clone())
     84 else:
---> 85     res.append(inp.detach().requires_grad_(need_graph))
     86 return tuple(res)

RuntimeError: You are attempting to call Tensor.requires_grad_() (or perhaps using torch.autograd.functional.* APIs) inside of a function being transformed by a functorch transform. This is unsupported, please attempt to use the functorch transforms (e.g. grad, vjp, jacrev, jacfwd, hessian) or call requires_grad_() outside of a function being transformed instead.
```
Is this enough to understand the problem I'm having? I simply want to calculate the QFI of the output state of my quantum circuit with respect to the variational parameters. I didn't start this project with TensorCircuit, or I wouldn't be running into these problems! As I said before, I used tensorly-quantum to simulate my circuit, and I don't really have the time to rewrite that functionality in the other package. My thought process is that if I get the output of my tensorly-quantum circuit into the right format (Callable[..., Tensor]), I can feed it into TensorCircuit to calculate the QFI. Thanks!
-
The error message is even different from the previous one, so I guess the main reason is that tensorly's implementation is not very compatible with PyTorch function transforms. I suggest using tensorcircuit for the whole pipeline, or you will need to dig into the incompatibility yourself.
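If rewriting in tensorcircuit is not an option, one possible workaround (a sketch, not tested against tensorly-quantum) is to avoid the functorch-based qng path entirely and assemble the QFI by hand from a plain-autograd Jacobian of the state, using QFI_ij = 4 Re[<d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi>]. The `qfi_by_hand` helper below is hypothetical:

```python
import torch

def state_jacobian(state_fn, params):
    # plain torch.autograd (no functorch): differentiate the real and
    # imaginary parts separately, then recombine into a complex Jacobian
    jr = torch.autograd.functional.jacobian(lambda p: state_fn(p).real, params)
    ji = torch.autograd.functional.jacobian(lambda p: state_fn(p).imag, params)
    return jr + 1j * ji  # shape: (dim_state, num_params)

def qfi_by_hand(state_fn, params):
    # QFI_ij = 4 Re[<d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi>]
    psi = state_fn(params).detach()
    jac = state_jacobian(state_fn, params)   # column i is |d_i psi>
    overlaps = jac.conj().T @ jac            # <d_i psi|d_j psi>
    berry = jac.conj().T @ psi               # <d_i psi|psi>
    return 4 * (overlaps - torch.outer(berry, berry.conj())).real
```

Something like `qfi_by_hand(m.forward_ket(Phis[0]), params_pytorch)` would then sidestep the vmap/jvp combination, assuming the tensorly-quantum contraction is differentiable under ordinary autograd and the wrapper actually uses `params` to build the circuit.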