Monte Carlo Method equivalent in the QC world
Looking for an approach to implement a simple stochastic model on a QC framework.
Suppose the outcome is Y = f(X, Z, W), where X, Z and W are random variables obeying a uniform, a normal and a triangular distribution, respectively (or any other distribution, for that matter).
Using a standard MC approach I would implement such a model on some computational platform, run it with N trials, and observe the distribution of the outcome Y, its sensitivities, and other characteristics of the Y variable.
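To make the toy setup concrete, here is a minimal classical sketch of that workflow in Python/NumPy. Since the original post elides the model's formula, the combination Y = X + Z·W below is an arbitrary stand-in, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # number of Monte Carlo trials

# Draw the three input variables from their stated distributions.
X = rng.uniform(0.0, 1.0, size=N)                          # uniform on [0, 1]
Z = rng.normal(loc=0.0, scale=1.0, size=N)                 # standard normal
W = rng.triangular(left=0.0, mode=0.5, right=1.0, size=N)  # triangular on [0, 1]

# Hypothetical model: any combination of the inputs serves to
# illustrate the workflow, since the real formula is not given.
Y = X + Z * W

# Empirical characteristics of the outcome.
print("mean(Y) =", Y.mean())
print("var(Y)  =", Y.var(ddof=1))
```

Sensitivities can then be estimated from the same sample, e.g. by correlating Y against each input.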
Using the Chebyshev inequality I can estimate the required number of trials N, given a desired error bound and the variance of the outcome. A reasonable number of trials normally yields an acceptably low error on ordinary digital computers, but as the desired error shrinks, the number of trials grows very fast. This is why I'm willing to explore the QC way.
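For reference, the Chebyshev bound P(|Ȳ_N − μ| ≥ ε) ≤ σ²/(N ε²) implies that N ≥ σ²/(δ ε²) trials suffice to keep the error below ε with failure probability at most δ. A small helper (the function name is mine) makes the fast growth visible:

```python
import math

def chebyshev_trials(variance: float, epsilon: float, delta: float) -> int:
    """Smallest N such that Chebyshev's inequality guarantees
    P(|sample mean - true mean| >= epsilon) <= delta,
    i.e. N >= variance / (delta * epsilon**2)."""
    return math.ceil(variance / (delta * epsilon ** 2))

# Variance 1, failure probability 5%: halving epsilon quadruples N.
print(chebyshev_trials(1.0, 0.01, 0.05))   # -> 200000
print(chebyshev_trials(1.0, 0.001, 0.05))  # -> 20000000
```

The quadratic blow-up in 1/ε is exactly what motivates looking at quantum approaches, which in some settings promise a quadratic speed-up in this dependence.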
I'm ultimately after more complex models, but I feel that working out the QC approach for this simple one would be a shortcut toward them.
Any clue? Reference? Experiences? Comment? Article? Recommendation?
Thanks in advance, Pedro.
Dr. Pedro E. Colla
Instituto Universitario Aeronáutico
I am not an expert in Monte Carlo methods. However, I have re-posted your question on 1QBit’s internal message board, and we’ll see what emerges in the way of helpful information.
As background, there is a paper from 2014 that compares population annealing (a Monte Carlo algorithm) with simulated annealing and parallel tempering Monte Carlo. https://arxiv.org/abs/1412.2104
I am mentioning it because one of the authors is Helmut Katzgraber, who heads the Optimization Team at 1QBit. Helmut has published extensively, and with a large number of co-authors. One way to begin your search for relevant articles would be to start with the above paper on arXiv and look through the citations for its authors.
I'm not sure you need to explore QC to address this particular problem. Current computers have enough CPU power to manage millions of simulations efficiently (depending on the specific simulation you're running). There are also a number of techniques you can use to reduce the simulation error while optimizing the simulation algorithm. I suggest looking at a few MC implementation resources and tricks while exploring QC. Many are available on Amazon and free online.
I'm quite familiar with Monte Carlo methods applied to stochastic systemic models; I have used them for a couple of decades now and based my PhD thesis on them. However, there are certain high-dimensionality problems (related to certain multivariate analyses) where the curse of dimensionality, together with a sparse distribution over the solution space, drives the number of trials needed to achieve a small error very high. Even though I know I can pump billions of trials through some powerful computer and get a result in a reasonable time, I'm still curious how that compares with using QC, and what it takes to do so.

Obviously the problem I posted is a "toy problem", useful for understanding the very basics involved. So far I've found little or no literature on the subject, which is the reason for my posting. You can rephrase my question as "how to implement a random variable of a given distribution using QC". Thanks, Pedro.
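On that rephrased question, one textbook idea is amplitude encoding: discretize the target density into 2^n bins, prepare an n-qubit state whose amplitude on basis state |i⟩ is √p_i, and a computational-basis measurement then yields bin i with probability p_i. The NumPy sketch below emulates the state and the measurement classically; building the actual state-preparation circuit (e.g. Grover–Rudolph style) is the hard part on real hardware:

```python
import numpy as np

n_qubits = 5
bins = 2 ** n_qubits

# Discretize a triangular density on [0, 1] with mode 0.5 into 2^n bins,
# using its closed-form CDF: F(x) = 2x^2 for x <= 1/2, else 1 - 2(1-x)^2.
edges = np.linspace(0.0, 1.0, bins + 1)
cdf = np.where(edges <= 0.5, 2 * edges ** 2, 1 - 2 * (1 - edges) ** 2)
probs = np.diff(cdf)  # probability mass of each bin

# Amplitude encoding: an n-qubit state with amplitude sqrt(p_i) on |i>.
amplitudes = np.sqrt(probs)

# A computational-basis measurement returns bin i with probability
# |sqrt(p_i)|^2 = p_i; here we emulate that measurement classically.
rng = np.random.default_rng(0)
samples = rng.choice(bins, size=10_000, p=probs)
values = (edges[samples] + edges[samples + 1]) / 2  # bin midpoints in [0, 1]
print("sample mean:", values.mean())
```

The same recipe works for any distribution with a computable CDF; the number of bins, and hence the resolution, grows exponentially with the number of qubits.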
An additional paper worth looking at is Physics-inspired optimization for constraint-satisfaction problems using a digital annealer, recently posted on arXiv by some of my colleagues here at 1QBit and Fujitsu. Its 65 references are mostly from the physics literature, and may therefore provide a fresh perspective.
Link to arXiv: https://arxiv.org/pdf/1806.08815.pdf
The Fujitsu Digital Annealer can treat Ising-type optimization problems of up to 1024 variables, with 26 and 16 bits of fixed precision for the biases and the couplers between variables, respectively. According to the authors, physics-inspired optimization techniques, such as simulated annealing and parallel tempering Monte Carlo, have been shown to outperform specialized quantum hardware, such as the currently available D-Wave devices.
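As a point of comparison for what such physics-inspired solvers do, here is a deliberately small single-spin-flip simulated-annealing sketch for an Ising energy (a toy illustration only, not the Digital Annealer's actual algorithm; all names and parameters are mine):

```python
import numpy as np

def simulated_annealing_ising(J, h, n_sweeps=2000, t_hot=5.0, t_cold=0.05, seed=0):
    """Toy single-spin-flip simulated annealing for the Ising energy
    E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i, with s_i in {-1, +1}.
    J must be symmetric with a zero diagonal."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for T in np.geomspace(t_hot, t_cold, n_sweeps):  # geometric cooling
        for i in range(n):
            # Energy change from flipping spin i in its local field.
            dE = 2 * s[i] * (J[i] @ s + h[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis rule
                s[i] = -s[i]
    return s

# Tiny ferromagnetic chain: the ground states are all-up and all-down.
n = 8
J = np.zeros((n, n))
for i in range(n - 1):
    J[i, i + 1] = J[i + 1, i] = 1.0
h = np.zeros(n)
s = simulated_annealing_ising(J, h)
print(s)  # an aligned (or nearly aligned) configuration
```

Real solvers of this class add refinements such as parallel tempering, rejection-free updates, and dynamic offsets, which is where their performance comes from.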