Making Every Quantum Measurement Count: Cost Reduction for Quantum Chemistry in Qrunch

Published on March 24, 2026

Kvantify Research and Applications

Quantum computing technology is advancing quickly, but the hardware is still immature. The next generation of devices – especially those based on neutral atoms and trapped ions – will likely deliver more qubits and higher accuracy. However, this leap in accuracy will come at the price of increased execution times, often by a factor of 1000, which makes time one of the most critical resources on this type of hardware.

At Kvantify, a core technology focus is to extract maximum value from every second that a workload consumes, and therefore we have set out to remedy the problem of elevated compute times on such hardware. One way to approach this problem is to reduce the number of circuits that need to be run on the hardware, which may be accomplished by using smarter measurement strategies.

A basic principle of quantum mechanics is that Nature is probabilistic. To obtain results from a quantum computer, we therefore need to measure the quantity of interest repeatedly. Careful consideration of what to measure – and how many times – is crucial for keeping the resources spent to a minimum: a naïve approach can waste substantial resources, as the quantum computer must be restarted over and over again. Hence, to maximize predictive power while minimizing the overall computational cost of a calculation – e.g., of the energy of a chemical system – optimized measurement strategies are of utmost importance. Technically, this is about reducing the so-called variance of a calculation. In simple terms, low variance means high certainty of a good result, and that means the quantum computer has been used most efficiently!
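
To get a feeling for why the number of repetitions matters, here is a minimal Python sketch (illustrative only, not Qrunch code) that estimates the expectation value of a measurement with outcomes +1 and −1, and shows the statistical error shrinking roughly as one over the square root of the number of shots:

```python
import random

random.seed(1)

def estimate(p_plus, shots):
    """Estimate the expectation value of a +1/-1-valued measurement,
    where +1 occurs with (unknown, here simulated) probability p_plus."""
    outcomes = [1 if random.random() < p_plus else -1 for _ in range(shots)]
    return sum(outcomes) / shots

exact = 2 * 0.7 - 1  # (+1) * 0.7 + (-1) * 0.3 = 0.4
for shots in (100, 10_000, 1_000_000):
    errors = [abs(estimate(0.7, shots) - exact) for _ in range(20)]
    print(shots, sum(errors) / 20)  # mean error shrinks ~10x per 100x more shots
```

Halving the error requires roughly four times as many runs – which is exactly why every avoidable measurement matters.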

Measuring Quantum Observables

One way to understand what goes into solving the problem of measurement optimization of quantum observables is by an analogy to coin flips. Suppose that we have three coins, a, b, and c. Each coin has a score – let's say 20, 30, and 50, respectively – and a coin contributes its score to the total when it lands heads and nothing when it lands tails. We want to understand what total score we should expect to get on average if we flip all of them. If we know that the coins are fair, i.e. that they land heads or tails with the same probability, the expected score from coin a is 20/2 = 10, and similarly the expected scores are 15 and 25 for coins b and c, respectively. As such, the total expected score for all three coins is 10 + 15 + 25 = 50.

In the measurement optimization problem, this situation is complicated in a number of ways. First and foremost, the coins are generally not fair coins, and we do not know their probabilities of landing heads or tails. Instead, we have a budget of, say, 10 total flips that we have to allocate as wisely as possible to achieve the most accurate estimate of the expected score.

If, in each of the 10 flips, we are restricted to flip only a single coin, one strategy immediately emerges: since coin c has the largest score, it has a larger impact on the total expected score, so it makes sense to allocate more flips to that coin.
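
Under the simplifying assumption that all three coins are fair, this intuition can be made precise: the mean squared error of the natural estimate is a sum of terms of the form score²/flips (up to a constant factor; see the derivation at the bottom of the post), and a brute-force search over allocations – a toy sketch, not Qrunch code – confirms that the flips should be split in proportion to the scores:

```python
from itertools import product

scores = {"a": 20, "b": 30, "c": 50}

def mse_proxy(n):
    # Mean squared error of the estimated total score, up to the common
    # factor p(1 - p) = 1/4 for fair coins: each coin contributes
    # score^2 / (number of flips it receives).
    return sum(scores[coin] ** 2 / n[coin] for coin in scores)

allocations = (
    {"a": na, "b": nb, "c": 10 - na - nb}
    for na, nb in product(range(1, 9), repeat=2)
    if 10 - na - nb >= 1
)
best = min(allocations, key=mse_proxy)
print(best, mse_proxy(best))  # {'a': 2, 'b': 3, 'c': 5} 1000.0
```

The optimal split 2 : 3 : 5 mirrors the scores 20 : 30 : 50 – the coin with the largest stake gets the most flips.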

In the quantum world, we have a bit more flexibility: Each coin comes with one or more colors, and in each of our 10 flips, we are allowed to flip multiple coins, as long as there is a single color that they all have in common.

[Figure: the three coins and their colors – coin a is yellow, coin b is blue, and coin c is both yellow and blue.]

Here, for instance, in one of our 10 allowed flips, we could choose to flip both a and c, since they are both yellow, or both b and c, since they are both blue; but we are not allowed to flip a and b simultaneously, since they have no color in common. In other words, rather than choosing which coins to flip, we now have to choose which colors to flip.
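
As a tiny illustration of the rule (with hypothetical names, not Qrunch code), compatibility is just a set intersection:

```python
colors = {"a": {"yellow"}, "b": {"blue"}, "c": {"yellow", "blue"}}

def can_flip_together(*coins):
    """Coins may share one of the 10 flips iff they have a color in common."""
    return bool(set.intersection(*(colors[coin] for coin in coins)))

print(can_flip_together("a", "c"))  # True  (both yellow)
print(can_flip_together("b", "c"))  # True  (both blue)
print(can_flip_together("a", "b"))  # False (no common color)
```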

With this additional flexibility, how do we allocate the 10 allowed flips to minimize the error in the estimate of the expected score? Are some colors more important than others? One possible answer is given at the bottom of the post.

As it turns out, quantum mechanics has one additional trick up its sleeve to make life more complicated: the coins can be correlated. This means that if we choose to flip b and c at the same time, it may, for example, be impossible for one of them to land heads while the other lands tails – and this correlation is also unknown to us.

In quantum chemistry applications, the total expected score corresponds to the energy of a quantum state, with the energy calculation being a combination of multiple individually measurable pieces, represented by the coins. Certain combinations of these pieces can be measured simultaneously, as indicated by the colors of the coins. The score of each coin represents a coupling between molecular orbitals, and the unknown probabilities of landing heads are defined by the quantum state itself.
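
To make the dictionary concrete, here is a schematic Python sketch (a toy two-qubit model, not the covalent-ligand Hamiltonian discussed below): the Hamiltonian is a weighted sum of Pauli strings, the strings are the coins, the coefficients are the scores, and two strings share a "color" when they can be measured in the same shot – for instance when they commute qubit-wise, one common criterion:

```python
# Toy 2-qubit Hamiltonian H = sum_i c_i P_i: the Pauli strings P_i are the
# "coins" and the coefficients c_i are the "scores".
hamiltonian = {"ZI": 0.5, "IZ": 0.3, "ZZ": 0.2, "XX": 0.4}

def qubit_wise_commute(p, q):
    """Two Pauli strings can be measured in the same shot (they share a
    "color") if, qubit by qubit, their letters agree or one is the identity."""
    return all(a == b or "I" in (a, b) for a, b in zip(p, q))

print(qubit_wise_commute("ZI", "ZZ"))  # True:  one measurement covers both
print(qubit_wise_commute("ZZ", "XX"))  # False: they need separate shots
```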

Our goal at Kvantify is straightforward: deliver high-quality quantum chemistry results at the lowest possible cost per calculation. Reducing the number of measurements needed translates directly into cost savings – or, at a fixed measurement budget, into higher-quality results.

In this post, we showcase how Kvantify Qrunch uses advanced variance reduction methods to optimize performance on available hardware, cutting the resources needed to estimate the quantities of interest by almost two orders of magnitude. The technical paper is published as a preprint on arXiv.

The Case

To illustrate the performance of these methods, we revisit our simulation of covalent ligands. Covalent ligands are a fascinating and powerful class of drugs with the promise of fewer adverse effects and lower required dosing. Well-established members of this drug class include Aspirin (acetylsalicylic acid) and penicillin. Despite these very early examples, pharma companies have for many years focused on non-covalent binding modes, as adverse effects of covalent drug molecules are difficult to detect in vitro. Modern computational tools and experimental techniques have revived interest in covalent drug research, with the aim of producing better medicine against conditions ranging from cancer to infectious diseases.

The Problem

To understand the behavior of the chemical system, our end goal is generally to estimate the lowest energy of the molecule at different points along the reaction path. The state of the system is a particular superposition of the electrons in the orbitals of the active space, and our task is to determine the superposition with the lowest energy.

At Kvantify, we have developed powerful algorithms to find low-energy states of molecular systems, and you can build workflows with them using Kvantify Qrunch. The core idea behind the approach is to search the space of all possible superpositions for the lowest-energy state: starting from a suitable reference state (for instance the so-called Hartree–Fock state), one makes a sequence of adjustments to the superposition to end up in the desired lowest-energy, or ground, state.
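
As a loose classical caricature of such a search (a toy one-parameter landscape with made-up numbers, not Qrunch's actual algorithm), each "adjustment" is a small parameter update accepted because it lowers the estimated energy:

```python
import math

# Toy stand-in for the energy of a parameterized superposition; theta = 0
# plays the role of the reference (e.g. Hartree-Fock) state.
def energy(theta):
    return -1.0 * math.cos(theta) - 0.5 * math.cos(theta - 1.0)

theta, step = 0.0, 0.1
for _ in range(100):
    # Finite-difference derivative; on real hardware, each energy value here
    # would itself have to be estimated from repeated measurements.
    gradient = (energy(theta + 1e-4) - energy(theta - 1e-4)) / 2e-4
    theta -= step * gradient

print(theta, energy(theta))  # a sequence of adjustments ending near the minimum
```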

Finding this sequence of adjustments, and ensuring progress towards lower-energy states, boils down to estimating the energy of the system – as defined by the electronic Hamiltonian – and its derivatives along the possible search directions; together, these form a collection of quantum observables. Estimating the energy of a state generally requires measuring the state, but each measurement provides information about only a portion of the energy. Only by combining many such measurements is it possible to achieve a complete, unbiased estimate, and only by performing many measurements is the estimate precise enough to be useful in the context of searching for low-energy states.
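
Schematically, the energy estimate is a coefficient-weighted combination of per-term sample averages. The simplified sketch below (with made-up numbers, ignoring the grouping and correlations between terms discussed above) shows how many single-shot outcomes combine into one unbiased estimate:

```python
import random

random.seed(7)

# Toy Hamiltonian: coefficient c_i for each measurable piece P_i.
coefficients = {"ZI": 0.5, "IZ": 0.3, "ZZ": 0.2, "XX": 0.4}

# Stand-ins for the true expectation values <P_i> fixed by the state; on
# hardware these are precisely what we do not know and must sample.
true_expectations = {"ZI": -0.9, "IZ": -0.8, "ZZ": 0.7, "XX": 0.1}

def sample_eigenvalue(pauli):
    """One shot: a Pauli measurement yields +1 or -1 with probabilities
    determined by the state."""
    p_plus = (1 + true_expectations[pauli]) / 2
    return 1 if random.random() < p_plus else -1

def estimate_energy(shots_per_term):
    """Unbiased energy estimate: weight each term's sample average by c_i."""
    return sum(
        c * sum(sample_eigenvalue(p) for _ in range(shots_per_term)) / shots_per_term
        for p, c in coefficients.items()
    )

exact = sum(c * true_expectations[p] for p, c in coefficients.items())
print(exact, estimate_energy(10_000))  # the estimate approaches -0.51 as shots grow
```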

Thus, there is great incentive to make the most out of every measurement and to figure out which measurements to perform in order to learn as much as possible about the energy and its derivatives.

Key Results

Kvantify Qrunch solves this problem by implementing state-of-the-art variance reduction methods: methods for splitting the quantum observables into measurable parts and allocating measurements to those parts, so as to gain as much information about the observables as possible while using as few quantum computing resources as possible. By combining state-of-the-art algorithms from the academic literature with a collection of in-house-developed heuristics, we can make the most of the hardware at hand.
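
One simple baseline from that literature (shown here as a hedged sketch; Qrunch's actual heuristics are more sophisticated) is to split a fixed shot budget across simultaneously measurable groups in proportion to each group's total coefficient weight:

```python
# Simultaneously measurable groups of Hamiltonian terms (the "colors"),
# with their coefficients; illustrative values only.
groups = {
    "Z-basis": {"ZI": 0.5, "IZ": 0.3, "ZZ": 0.2},
    "X-basis": {"XX": 0.4},
}

def allocate(budget):
    """Weight each group by the l1 norm of its coefficients and hand out
    shots proportionally (at least one shot per group)."""
    weights = {g: sum(abs(c) for c in terms.values()) for g, terms in groups.items()}
    total = sum(weights.values())
    return {g: max(1, round(budget * w / total)) for g, w in weights.items()}

print(allocate(1000))  # {'Z-basis': 714, 'X-basis': 286}
```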

In the plot below, we consider the problem of estimating the ground-state energy in the covalent ligand problem at one particular point along the reaction path, and we ask how many measurements are needed to reach a chemically accurate result with a given success rate. The active space is represented by 12 qubits, which is small enough that we can calculate, classically and analytically, the resource requirements one would face when running the problem on error-corrected quantum hardware. In the plot, we compare this result for the best estimator in Qrunch to that obtained with the popular “LDF grouping”.

LDF grouping methods are what you typically find in other quantum development toolkits, and the plot shows that Kvantify Qrunch achieves significantly better results here.
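
For reference, LDF ("largest degree first") grouping is a greedy graph-coloring heuristic: terms that cannot be measured together are connected in a conflict graph, and terms are then processed from most to fewest conflicts, each joining the first existing group it is compatible with. A minimal sketch, assuming qubit-wise commutativity as the notion of compatibility:

```python
def qubit_wise_commute(p, q):
    return all(a == b or "I" in (a, b) for a, b in zip(p, q))

def ldf_grouping(paulis):
    """Greedy largest-degree-first grouping of Pauli strings."""
    degree = {
        p: sum(1 for q in paulis if q != p and not qubit_wise_commute(p, q))
        for p in paulis
    }
    groups = []
    for p in sorted(paulis, key=degree.get, reverse=True):
        for group in groups:
            if all(qubit_wise_commute(p, q) for q in group):
                group.append(p)
                break
        else:
            groups.append([p])
    return groups

print(ldf_grouping(["ZI", "IZ", "ZZ", "XX", "XI"]))
# [['XX', 'XI'], ['ZI', 'ZZ', 'IZ']] -- two groups, hence two measurement settings
```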

[Plot: success rate as a function of the number of measurements for the two measurement strategies.]

The success rates, i.e. the probabilities of obtaining estimates of the ground-state energy in the covalent ligand example that are within chemical accuracy, as a function of the number of measurements. The success rates are shown for two measurement strategies: one based on LDF grouping and one based on a Cliffordized version of ShadowGrouping. Note that the latter requires significantly fewer measurements to reach the same precision – or, equivalently, it reaches significantly higher precision at a given measurement budget.

Conclusion

By exploiting advanced statistical methods in Qrunch, we can reduce the resource requirements for a particularly crucial quantum computing problem by a factor of about 30 compared to the baseline provided by other popular quantum frameworks on the same problem. This is an important step towards maximizing the potential of available quantum hardware, allowing more complicated calculations to be performed already today.

Tutorial

The code for generating the comparison above, illustrating how to use Qrunch to construct the problem for this particular case, is available here.

One possible answer to the question posed above: the decision about which 10 flips to make is the choice of how many times to flip yellow and how many times to flip blue. Let us say that we have decided on $n_{\mathrm{yellow}}$ yellow flips and $n_{\mathrm{blue}}$ blue flips, such that $n_{\mathrm{yellow}} + n_{\mathrm{blue}} = 10$. This means that coin a is flipped a total of $n_a = n_{\mathrm{yellow}}$ times, coin b a total of $n_b = n_{\mathrm{blue}}$ times, and coin c a total of $n_c = n_{\mathrm{yellow}} + n_{\mathrm{blue}}$ times. Let us write $n^+_a$, $n^+_b$ and $n^+_c$ for the number of times each of the coins lands heads. Then we can choose as our estimate of the total expected score

$$20 \frac{n^+_a}{n_a} + 30 \frac{n^+_b}{n_b} + 50 \frac{n^+_c}{n_c}.$$

This way of estimating only works if $n_a \not= 0$, $n_b \not= 0$, $n_c \not= 0$, i.e., each coin is flipped at least once, so we will go ahead and require that as well.

We want to choose the flips such that the error is as small as possible, so how erroneous is our estimate? Unfortunately, that depends on the unknown actual probabilities of each of the coins landing heads. If we know nothing about whether the three coins are more likely to land heads or tails, one option is to assume that they are in fact all fair. Then, with a bit of probability theory, one can show that the mean squared error of the estimate becomes proportional to

$$n_{\mathrm{yellow}}\left(\frac{20^2}{n_a^2} + \frac{50^2}{n_c^2}\right) +n_{\mathrm{blue}}\left(\frac{30^2}{n_b^2} + \frac{50^2}{n_c^2}\right).$$

By evaluating this expression for all valid allocations of the 10 flips, we find that – under the assumption of fairness, for this particular choice of estimate, and this particular way of evaluating errors – the best option is to make 4 yellow flips and 6 blue flips, at which point the expression above has a value of 500. This matches the intuition that coin c, having the largest score, is the most important to learn about, and that it is better to learn about coin b than coin a – yet we still have to allocate some measurements to learning about a.
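
The search space is small enough to check by brute force; a few lines of Python over the error expression above reproduce this optimum:

```python
def error_proxy(n_yellow):
    """The mean-squared-error expression from above, as a function of the
    number of yellow flips (the blue flips make up the rest of the 10)."""
    n_blue = 10 - n_yellow
    n_a, n_b, n_c = n_yellow, n_blue, n_yellow + n_blue
    return n_yellow * (20**2 / n_a**2 + 50**2 / n_c**2) + n_blue * (
        30**2 / n_b**2 + 50**2 / n_c**2
    )

best = min(range(1, 10), key=error_proxy)  # each color used at least once
print(best, 10 - best, error_proxy(best))  # 4 6 500.0
```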