Full MPS simulation of quantum machine learning model #84

Unanswered
lolazon22 asked this question in Q&A

Hello,
I plan to implement a quantum machine learning model for some classification problems. The circuit is very shallow, but the number of qubits is relatively large, say >30. Following the beginner tutorial, I implemented the circuit and the optimization procedure:

```python
import tensorflow as tf
import tensorcircuit as tc

K = tc.set_backend("tensorflow")

n = 30
layers = 1
ising_groundstates = []  # here, one should generate a list of TFIM ground states for different lambdas

def loss(params, nqubits):
    val = 0.0
    for k in range(len(ising_groundstates)):
        c = tc.Circuit(nqubits, mps_inputs=ising_groundstates[k])
        i = 0
        for l in range(layers):
            for q in range(nqubits):
                c.ry(q, theta=params[i])
                i += 1
            for q in range(0, nqubits - 1, 2):
                c.cz(q, q + 1)
            for q in range(nqubits):
                c.ry(q, theta=params[i])
                i += 1
            for q in range(1, nqubits - 2, 2):
                c.cz(q, q + 1)
            c.cz(0, nqubits - 1)
        c.ry(0, theta=params[i])  # final rotation: params has shape [2 * layers * n + 1]

        for m in range(nqubits):
            val += c.expectation([tc.gates.z(), [m]])
    return K.real(val)

vgf = K.jit(K.value_and_grad(loss), static_argnums=1)
params = tf.Variable(tf.random.normal(stddev=0.1, shape=[2 * layers * n + 1]))
opt = tc.backend.optimizer(tf.keras.optimizers.Adam(learning_rate=0.1))
for j in range(5000):
    loss_val, gr = vgf(params, n)
    params = opt.update(gr, params)  # was "param = ...", which silently dropped the update
    if j % 50 == 0:
        print("loss", loss_val.numpy())
```

I am facing two main issues with the code.

  1. First, I have not managed to run the full simulation as an MPS. Since the circuit is very shallow, an MPS simulation should run efficiently; without it, I would not be able to run circuits with >30 qubits.
  2. The input states of the variational circuit are ground states of the 1D TFIM, which can be prepared efficiently as an MPS since the entanglement is low. I would like to know whether it is possible to create them as an MPS within TensorCircuit, or whether I have to prepare the MPS state with another package and import it (if that is the case, do you have any hints?).

I appreciate your help!


Replies: 1 comment 26 replies


  1. As long as `mps_inputs` is in the format of a quvector (an MPS in TensorCircuit), I believe you can simulate systems with more than 30 qubits, since `nlayers = 1`. Besides, if `nlayers` is larger, one can also use `tc.MPSCircuit`, which provides an MPS TEBD simulator and can simulate more qubits with an approximation/truncation trade-off. For `Circuit`, the constructor's input argument is `mps_inputs`, which accepts a quvector; for `MPSCircuit`, the input can be given either as `tensors` (a list of tensors for the MPS) or as `wavefunction` (a quvector).
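The truncation trade-off mentioned above can be illustrated in plain numpy. This is a minimal sketch of the kind of two-site update an MPS TEBD simulator performs internally, not TensorCircuit's actual implementation; the function name and tensor conventions here are hypothetical:

```python
import numpy as np

def apply_two_site_gate(A, B, gate, max_sv):
    """Apply a two-qubit gate to adjacent MPS tensors A (l, 2, m) and B (m, 2, r),
    then SVD-truncate the new bond to at most max_sv singular values."""
    l, _, m = A.shape
    _, _, r = B.shape
    # contract the two sites into one tensor and apply the gate on the physical legs
    theta = np.einsum("lam,mbr->labr", A, B)
    theta = np.einsum("abcd,lcdr->labr", gate.reshape(2, 2, 2, 2), theta)
    # split back via SVD; discarding small singular values is the approximation
    u, s, vh = np.linalg.svd(theta.reshape(l * 2, 2 * r), full_matrices=False)
    k = min(max_sv, len(s))
    err = np.sqrt(np.sum(s[k:] ** 2))  # discarded weight (truncation error)
    u, s, vh = u[:, :k], s[:k], vh[:k]
    return u.reshape(l, 2, k), (np.diag(s) @ vh).reshape(k, 2, r), err
```

With a generous `max_sv` the update is exact (zero truncation error); capping the bond dimension trades accuracy for memory, which is what lets an MPS simulator reach more qubits.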

  2. There are two steps to prepare the MPS inputs. 1) Obtain the ground-state MPS via DMRG in some package; here I suggest quimb: see https://quimb.readthedocs.io/en/latest/examples/ex_dmrg_periodic.html for an example of DMRG. 2) Transform the quimb MPS class into a TensorCircuit quvector via the function TensorCircuit provides (`tc.quantum.quimb2qop`); see https://github.com/tencent-quantum-lab/tensorcircuit/blob/master/tests/test_quantum.py#L391-L413 for an example.
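Independently of which package produces the MPS, the raw format is just a list of site tensors with one physical leg each. As a sanity check for that format (not a replacement for DMRG, which is what you need at 30+ qubits), here is a hedged numpy sketch that decomposes a small dense state into MPS tensors by successive SVDs; `dense_to_mps` is a hypothetical helper, not a TensorCircuit or quimb function:

```python
import numpy as np

def dense_to_mps(psi, nqubits):
    """Decompose a dense state vector on nqubits into a list of MPS tensors
    of shape (left_bond, 2, right_bond) via successive SVDs (no truncation)."""
    tensors = []
    rest = psi.reshape(1, -1)  # (left_bond, remaining amplitudes)
    for _ in range(nqubits - 1):
        l = rest.shape[0]
        # peel off one physical index and split with an SVD
        u, s, vh = np.linalg.svd(rest.reshape(l * 2, -1), full_matrices=False)
        tensors.append(u.reshape(l, 2, -1))
        rest = np.diag(s) @ vh  # carry the remaining state to the right
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors
```

For a low-entanglement state such as a TFIM ground state, the bond dimensions of the resulting tensors stay small, which is exactly why the MPS input is cheap to store.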

Besides, the loss currently in the code is not correct for supervised learning: the expectation of Z is only ypred, and you further need to construct the MSE or cross-entropy from it as the final loss, so that gradient descent trains the phase classifier for the TFIM.
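As a sketch of what that final loss could look like: treat the summed ⟨Z⟩ values as ypred and combine them with the phase labels, e.g. via MSE or binary cross-entropy. Plain numpy is used here for illustration only; inside the training loop above these would be backend/TensorFlow ops so that gradients flow:

```python
import numpy as np

def mse_loss(ypred, ylabel):
    """Mean squared error between predictions (e.g. sums of <Z>) and labels."""
    ypred, ylabel = np.asarray(ypred, float), np.asarray(ylabel, float)
    return np.mean((ypred - ylabel) ** 2)

def bce_loss(ypred, ylabel):
    """Binary cross-entropy; <Z>-based predictions are squashed to (0, 1) first."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(ypred, float)))  # sigmoid
    y = np.asarray(ylabel, float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```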


It works, thanks @refraction-ray !


Hello @refraction-ray. I was about to open an issue to report a possible bug, but I believe it might be too specific to my implementation. As you know, I am currently using MPSCircuit to build circuits inside an optimization loop that minimizes a cost function, with quimb generating the initial states. After the optimization has been running for several hours, the following error eventually appears and kills the program, so it is impossible to complete the optimization. I will not attach the code since it may be too large, but I can do so if needed. The error falls back to numpy, but I believe it has to do with the implementation of MPSCircuit.

```
File "/code.py", line 63, in pooling_layer
  c.cr(q1, q2, theta=param[0], alpha=param[1], phi=param[2])
File "/.local/lib/python3.9/site-packages/tensorcircuit/abstractcircuit.py", line 122, in apply
  self.apply_general_gate(
File "/.local/lib/python3.9/site-packages/tensorcircuit/mpscircuit.py", line 581, in apply_general_gate
  self.apply_double_gate(gate, *index, split=split)  # type: ignore
File "/.local/lib/python3.9/site-packages/tensorcircuit/mpscircuit.py", line 305, in apply_double_gate
  self.apply_adjacent_double_gate(
File "/.local/lib/python3.9/site-packages/tensorcircuit/mpscircuit.py", line 242, in apply_adjacent_double_gate
  err = self._mps.apply_two_site_gate(
File "/.local/lib/python3.9/site-packages/tensorcircuit/mps_base.py", line 132, in apply_two_site_gate
  U, S, V, tw = self.backend.svd(
File "/.local/lib/python3.9/site-packages/tensornetwork/backends/numpy/numpy_backend.py", line 622, in svd
  return decompositions.svd(np,
File "/.local/lib/python3.9/site-packages/tensornetwork/backends/numpy/decompositions.py", line 36, in svd
  u, s, vh = np.linalg.svd(tensor, full_matrices=False)
File "<__array_function__ internals>", line 180, in svd
File "/net/opt/python/stretch/3.9.7/lib/python3.9/site-packages/numpy/linalg/linalg.py", line 1648, in svd
  u, s, vh = gufunc(a, signature=signature, extobj=extobj)
File "/net/opt/python/stretch/3.9.7/lib/python3.9/site-packages/numpy/linalg/linalg.py", line 97, in _raise_linalgerror_svd_nonconvergence
  raise LinAlgError("SVD did not converge")
numpy.linalg.LinAlgError: SVD did not converge
```

According to this, the failure may be due to NaNs in the data. You may need to reduce the demo in code size and reproduction time so that the bug (if any) can be identified and fixed; otherwise, NaNs can be fairly common when a singular matrix occurs during optimization.
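The NaN hypothesis is easy to reproduce in isolation: numpy's SVD raises exactly this error when the input matrix contains a NaN, so checking finiteness of intermediate tensors is a cheap early diagnostic (nothing TensorCircuit-specific in this sketch):

```python
import numpy as np

mat = np.random.default_rng(0).normal(size=(4, 4))
mat[0, 0] = np.nan  # a single NaN is enough

try:
    np.linalg.svd(mat, full_matrices=False)
    converged = True
except np.linalg.LinAlgError:
    converged = False  # raises "SVD did not converge", matching the traceback above

# guard worth adding before the decomposition during debugging:
all_finite = bool(np.all(np.isfinite(mat)))
```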


Another possibility is that this is due to a memory leak somewhere. The workaround: if the crash time is fairly predictable, first store the weights after some iterations and kill the program, then start a new program that loads the weights. Otherwise, identifying the place that leaks memory is very challenging.


Or, for this one, I think you need to be patient enough to print intermediate variables and wait for the crash, in order to first identify what the reason is.
