Shot-based functions on MPSCircuit #184
-
Currently, MPSCircuit does not support shot-based functions such as sample and sample_expectation. Is this on the roadmap?
I am particularly interested in the MPSCircuit class for noisy circuit simulation, since it gives me control over the bond dimension (unlike Circuit and DMS). Relatedly, Circuit and DMS objects both have sample and sample_expectation, but only the latter accepts a noise_conf. How could I do this for sample if I just want the noisy counts?
-
> Currently MPSCircuit does not count with shot-based functions such as sample and sample_expectation

Try MPSCircuit.measure(). The sample API is just a wrapper around the measure API, and that wrapper is not yet implemented for the MPSCircuit class.
Demo usage:
```python
import tensorcircuit as tc

K = tc.set_backend("tensorflow")
n = 10
c = tc.MPSCircuit(n)
c.set_split_rules({"max_singular_values": 8})
c.h(range(n))

@K.jit
def sample(status):
    return c.measure(*range(n), status=status)[0]

r = []
for _ in range(20):
    r.append(sample(K.implicit_randu([n])))
K.stack(r)
```
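Once the stacked shots are in hand, they can be aggregated into a counts dictionary. A minimal sketch with plain NumPy, assuming the samples come back as a (shots, n) array of 0/1 outcomes (the helper name counts_from_samples and the toy data are illustrative, not part of the tensorcircuit API):

```python
import numpy as np
from collections import Counter

def counts_from_samples(samples):
    # samples: (shots, n) array of 0/1 measurement outcomes,
    # e.g. the stacked results of repeated measure() calls
    bitstrings = ["".join(str(int(b)) for b in row) for row in np.asarray(samples)]
    return Counter(bitstrings)

# toy data standing in for 4 shots on 3 qubits
demo = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0], [0, 0, 1]])
print(counts_from_samples(demo))  # Counter({'001': 3, '110': 1})
```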
-
@EmilianoG-byte, this is trickier for MPSCircuit, as we currently don't have a primitive such as general_kraus for MPSCircuit. In principle, for quantum noise with unitary Kraus operators (such as depolarizing or dephasing), it is easy to implement circuit_with_noise via the trajectory method (apply the corresponding Kraus unitary with the proper probability). However, supporting general_kraus for MPSCircuit will take more effort to enrich the codebase.
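The trajectory idea for unitary Kraus operators can be illustrated in a few lines. A minimal dense-statevector sketch (a single qubit standing in for an MPS site; not the actual MPSCircuit implementation): one depolarizing step draws a random Pauli with the channel probabilities and applies it as a unitary, so each trajectory stays a pure state.

```python
import numpy as np

# Pauli matrices: the unitary Kraus operators of a depolarizing channel
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def depolarizing_trajectory(psi, p, rng):
    # with probability p each, apply X, Y, or Z; otherwise identity
    ops = [I, X, Y, Z]
    probs = [1 - 3 * p, p, p, p]
    k = rng.choice(4, p=probs)
    return ops[k] @ psi

rng = np.random.default_rng(42)
psi = np.array([1.0, 0.0], dtype=complex)
out = depolarizing_trajectory(psi, 0.1, rng)
assert np.isclose(np.linalg.norm(out), 1.0)  # unitary Kraus preserves the norm
```

Averaging observables over many such trajectories reproduces the channel's effect; the same per-site update would keep an MPS in canonical form, which is what makes the unitary-Kraus case easy.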
-
Oh, I see. Do you think general_kraus for MPSCircuit would then look like this:

- Find the Lindbladian operators from the Kraus list. For instance, up to first order, one could use the procedure from 8.4.2.
- Use the trajectory method for MPS with the Lindbladian operators, as in III.B.
-
@EmilianoG-byte I think the implementation will look like the one in Circuit: tensorcircuit/tensorcircuit/circuit.py, line 505 in f56437d.
-
So what would be different in MPSCircuit with respect to the Circuit implementation? You mention that in the non-unitary case some functions are missing.
-
I think one needs to reimplement 
-
> Related to this, Circuit and DMS objects have the sample and sample_expectation but only the second one has the option to pass a noise_conf, how could I do this for sample if I just want the noisy-counts

Nice catch! sample should indeed support noise_conf; the feature is on the roadmap but still missing for now. However, it is easy to wrap the sample API with noise-model support at the user level.
Demo usage:
```python
import numpy as np
import tensorcircuit as tc
from tensorcircuit.noisemodel import NoiseConf, circuit_with_noise

K = tc.set_backend("tensorflow")

def sample_with_noise(c, noise_conf, shots, status, statusc=None, **kws):
    if statusc is None:
        num_quantum = noise_conf.channel_count(c)
        statusc = np.random.uniform(size=[shots, num_quantum])

    @K.jit
    def sample(status, statusc):
        cnoise = circuit_with_noise(c, noise_conf, statusc)  # type: ignore
        return cnoise.sample(
            batch=1, status=status, format="sample_bin", allow_state=True, **kws
        )

    r = []
    for i in range(shots):
        r.append(sample(status[i], statusc[i])[0])
    return K.stack(r)

n = 10
c = tc.Circuit(10)
c.h(range(n))
noise_conf = NoiseConf()
error1 = tc.channels.depolarizingchannel(0.01, 0.01, 0.01)
noise_conf.add_noise("h", error1)
r = sample_with_noise(c, noise_conf, 32, np.random.uniform(size=[32, 1]))
```
(A slightly generalized version of the above would actually be sufficient for a PR adding noise_conf support to sample; the tricky parts all come from the jit/external-randomness interplay.)
Or an integrated example for noisy sampling prior to the noise_conf setup: https://github.com/tencent-quantum-lab/tensorcircuit/blob/master/examples/noisy_sampling_jit.py 
Reference code module: https://github.com/tencent-quantum-lab/tensorcircuit/blob/master/tensorcircuit/noisemodel.py
-
- For efficiency: most of the time is actually spent on the first call of the jitted sample function (compilation time). You can benchmark the running time of each sample call within the for loop to see the difference. For the exact case in the demo, the first call takes 1.5 s while each later call takes only about 0.001 s, which is very fast. So the question reduces to how to lower the compilation time of the jitted function (of course, when you ultimately want a batch on the order of 1e6, the jit compilation time may be well amortized).
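The per-call benchmarking described above can be sketched with a small timing harness. This is an illustrative stand-in (FakeJitted mimics a jitted function whose first call pays a one-off tracing/compilation cost), not tensorcircuit code; in practice you would pass the real jitted sample function instead:

```python
import time

def benchmark(fn, n_calls, *args):
    """Time each call separately, so the compile cost (first call)
    is visible next to the steady-state cost (later calls)."""
    times = []
    for _ in range(n_calls):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return times

class FakeJitted:
    # stand-in for a jitted function: slow first call, fast afterwards
    def __init__(self):
        self.compiled = False
    def __call__(self, x):
        if not self.compiled:
            time.sleep(0.05)  # pretend compilation/tracing
            self.compiled = True
        return x * 2

times = benchmark(FakeJitted(), 5, 3)
assert times[0] > max(times[1:])  # compile cost dominates the first call
```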
-
See some examples for further speeding up compilation time in the noisy-circuit case; incorporating these techniques into the circuit_with_noise implementation may help reduce compilation time:
https://github.com/tencent-quantum-lab/tensorcircuit/blob/master/examples/mcnoise_boost_v2.py
https://github.com/tencent-quantum-lab/tensorcircuit/blob/master/examples/hea_scan_jit_acc.py
-
Thank you @refraction-ray! I will look into the sources you sent :)

> 2. by switching allow_state=False in the sample API, you are using the perfect_sampling algorithm, which may or may not be faster than the plain sampling approach, depending on the number of shots, circuit depth, qubit number, etc.

Is the perfect_sampling algorithm that you implemented what is called the "two-norm sampling algorithm" here: https://tensornetwork.org/mps/algorithms/sampling/#METTS_Algorithms_1?

Also, do you think my statement that generating expectation values is faster than generating samples holds most of the time? In the source I just sent, the algorithm for sampling scales as
-
> Is the perfect_sampling algorithm that you implemented what is called here: https://tensornetwork.org/mps/algorithms/sampling/#METTS_Algorithms_1 as "two-norm sampling algorithm"?

Yes.
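For readers unfamiliar with the term, the core of two-norm (perfect) sampling is drawing one bitstring qubit-by-qubit from conditional marginals. A minimal sketch on a dense statevector, assuming a real amplitude vector for simplicity (on an MPS the marginals come from local contractions instead of a full reshape; the function name perfect_sample is illustrative, not the tensorcircuit API):

```python
import numpy as np

def perfect_sample(psi, n, rng):
    """Draw one bitstring by sampling qubit i from its marginal
    conditioned on the outcomes of qubits 0..i-1."""
    probs = np.abs(np.asarray(psi).reshape([2] * n)) ** 2
    bits = []
    for _ in range(n):
        # marginal of the current qubit, remaining qubits summed out
        marg = probs.sum(axis=tuple(range(1, probs.ndim)))
        p0 = marg[0] / marg.sum()
        b = int(rng.random() >= p0)
        bits.append(b)
        probs = probs[b]  # condition on the sampled outcome
    return bits

rng = np.random.default_rng(0)
psi = np.zeros(8)
psi[1] = 1.0  # the basis state |001> on 3 qubits
assert perfect_sample(psi, 3, rng) == [0, 0, 1]
```

Each shot costs one sweep of conditional marginals, which is why its cost relative to a full-wavefunction computation depends on shots, depth, and qubit number, as discussed above.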
-
> Also, do you think my statement about generating expectation values being faster than generating samples to be most times true? In the source I just sent, the algorithm for sampling scales as while for local expectation values, the expectation value does not scale with N.

For expectation calculation, the default mode is still to first compute the full wavefunction, unless you specify c.expectation(..., reuse=False), in which case the tensor-network contraction mode is enabled. Even then, the contraction complexity can still depend on N if the circuit is deep enough that the causal light cone covers all qubits. Sampling can likewise be called in allow_state=True/False mode: True first computes the full wavefunction, while False uses perfect sampling. The practical running time of each case can only be benchmarked for a specific circuit and problem, and I don't think a general rule of thumb is enough to capture the complexity.