
March 19, 2005

Rasetti: Quantum Computers based on TQFT

Posted by Urs Schreiber

Today’s sessions in Vietri are about quantum computation. Most of them go into more detail than I currently care about, but one had an interesting, provocative theme related to high energy physics.

M. Rasetti is a theorist from the University of Torino. He says he is working on trying to figure out whether quantum computation (QC) is fundamentally stronger than classical Turing-style computation, in other words, whether QC can solve problems that are not in P in polynomial time.

I know that there is a meme floating around some brains which says that if you had a quantum system governed by a Hamiltonian whose discrete eigenvalues are a sufficiently hard-to-compute function of the integers, you could, by measuring these eigenvalues in a suitably prepared system, essentially compute this function and hence solve not-P problems in polynomial time.

I have always found this argument rather less than convincing. It suffices to note that by precisely the same argument you could claim that a classical analog computer may compute NP problems easily: just partition classical phase space into a denumerable number of subsets and pick a system whose energy is a non-computable function of the ordinal associated to each such subset. Up to some smoothness issues this is precisely analogous to the idea above, and clearly not a reasonable way to get a stronger-than-classical computer.

What Rasetti is talking about sounds much more sophisticated – though in the end it seems to reduce to more or less the above argument. Or does it?

So the idea is that there may be more types of quantum computers than are currently considered in the standard literature. In particular, according to Rasetti, using systems that have to be described by quantum field theory one might (in principle) be able to construct new and more powerful types of quantum computers.

This idea apparently goes back to Michael Freedman, who first proposed that non-abelian topological quantum field theories might exhibit the features necessary to support a model of computation capable of solving not-P problems in polynomial time.

Apparently Freedman, Kitaev and Wang in particular proposed that Chern-Simons theory is a promising candidate.

I have not yet managed to talk to Rasetti and ask him some questions about his work, but if I understood correctly what he said in the talk, the idea is this:

The computation of non-abelian holonomies as well as that of Jones polynomials is not in P. Since the observables of CS theory are precisely these quantities, all we have to do is set up a system which is governed by CS theory, somehow prepare it in a given knot state and then observe its observables. Since these will be not-P functions of these knots, we effectively have an analog quantum computer which computes not-P problems easily.

Does that make sense? Isn’t it precisely the same idea as mentioned at the very beginning: given any system whose observables are a not-P function of its states, it superficially looks like a device that computes not-P problems in polynomial time?

I have the feeling that something is wrong here.

I also feel that there should be a high-brow theoretical answer for what is wrong, or why this is not wrong, and am surprised that among QC theorists there apparently is no clarity about this.

So for me the conclusion of Rasetti’s talk is currently just this: if you are willing to be interested in systems whose observables are not-P functions of their states, then some quantum field theories in general, and Chern-Simons theory in particular, suggest themselves as laboratories, since they enjoy this property naturally.

Update: After rereading what I wrote here, it now seems to me that the issue is related to the general question of how far we want to regard analog computers as computers when it comes to complexity issues. I am not a computation theorist, so what I am saying here will sound crude to those who are, but let me say it anyway:

Clearly, when talking about computational complexity, one has to remove analog computers from the picture somehow. For instance, protein folding is a famous hard problem. It remains hard, even though I can figure out how a protein folds easily by just synthesizing it and seeing how it folds. That’s an analog computation, and I bet it scales linearly with the size of the protein. You might rather want to call it an experiment, though.

And that’s the point, I think. It’s an experiment rather than a computation. And the same is true for the Chern-Simons computer proposed by Rasetti et al. They propose to make experiments in field theory and regard the measurement of the outcome as a calculation.

For practical purposes that may be fine. If I really want to know the value of some Jones polynomial, and I really find it easier to come up with a system governed by CS theory, prepare it suitably and measure its observables with sufficient accuracy than to compute the polynomial on a digital computer to that accuracy, then that’s fine, I guess.

But

a) this has little to do with quantum computation and is all about analog computers, and

b) hence it does not show that quantum computers are inherently more powerful than classical ones.

I assume the problem is related to the fact that we really want to study universal computers, not just special-purpose ones. I believe the problem is to figure out if a universal quantum computer is more powerful than the universal Turing machine.

Posted at March 19, 2005 9:08 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/534

4 Comments & 0 Trackbacks

Re: Rasetti: Quantum Computers based on TQFT

Maybe I can help clarify a bit here.

It is true, of course, that one can define an observable for a particular TQFT which is the Jones polynomial. Calculating the Jones polynomial is very hard (it’s in a complexity class called #P). So Freedman’s original idea was that TQFTs might be powerful enough to help us calculate the Jones polynomial.
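To give a concrete feel for the counting class #P (a toy illustration only, not part of Freedman’s construction): the prototypical #P problem is counting the satisfying assignments of a Boolean formula, and the naive algorithm sums over all 2^n assignments, much as a naive state sum for a knot invariant ranges over exponentially many configurations.

```python
from itertools import product

# Toy #P-style problem: count satisfying assignments of a Boolean formula.
# The brute-force count visits all 2^n assignments; this exponential state
# sum is the kind of cost that makes exact evaluation of such quantities hard.

def count_sat(clauses, n):
    """Count assignments of n variables satisfying all clauses.
    A clause is a tuple of literals: +i means x_i, -i means (not x_i)."""
    count = 0
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and ((not x1) or x3)
clauses = [(1, 2), (-1, 3)]
print(count_sat(clauses, 3))  # 4 of the 8 assignments satisfy the formula
```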

But this logic (that there is an observable whose expectation value is classically hard to calculate and therefore we can just use the quantum system to get this expectation value) doesn’t work. Why? Because what you get from running the quantum experiment is not the expectation value, but a particular outcome which, when averaged over many runs, yields the expectation value. In computer science jargon you would say that you have an algorithm for “estimating” the value of the Jones polynomial. Now this in and of itself is also interesting: how does a quantum system’s estimate compare with classical algorithms for estimating the Jones polynomial? As far as I know, no one has shown a quantum-based estimate for the Jones polynomial which is better than a classical algorithm.
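The scaling behind this remark can be sketched in a few lines (a generic Monte Carlo illustration with made-up parameters, not a model of any specific experiment): each run of the experiment yields one outcome, and the empirical mean over N runs approaches the expectation value only at the rate 1/sqrt(N), so every extra digit of precision multiplies the number of required runs by a hundred.

```python
import random

# Sampling only *estimates* an expectation value: each run returns a single
# outcome, and the empirical mean over N runs has statistical error ~1/sqrt(N).
# Hence k digits of the expectation value cost on the order of 10^(2k) runs.

def estimate_mean(p, n_samples, rng):
    """Empirical mean of n_samples Bernoulli(p) outcomes (one 'run' each)."""
    return sum(rng.random() < p for _ in range(n_samples)) / n_samples

rng = random.Random(0)
p = 0.3  # the 'true' expectation value, unknown to the experimenter
for n in (100, 10_000, 1_000_000):
    est = estimate_mean(p, n, rng)
    print(n, abs(est - p))  # error shrinks roughly like 1/sqrt(n)
```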

I would also say that among quantum computing complexity theorists there IS clarity about this issue. Also note that the current “faster than classical” quantum algorithms don’t work in this manner (expectation value of an observable). Instead, they work more like a probabilistic classical computer. Note that there is a huge difference between an analog computer (whose power scales poorly with precision) and a probabilistic computer (whose power scales nicely with the precision of the probabilities). In fact, this simple difference is, I think, what causes many physicists to disregard quantum computers as “merely” analog machines. They’re not; they’re more similar to probabilistic machines, about which computer scientists and reasonable people all agree there is very little problem calling them true realistic computers.
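The poor precision scaling of analog machines can be made concrete with a standard toy model (an illustration only, using the chaotic doubling map rather than any physical device): a tiny error in the prepared initial condition doubles at every step, so reading out t accurate steps requires preparing the input with t extra bits of precision.

```python
# Toy model of analog precision loss: the chaotic doubling map x -> 2x mod 1.
# An initial preparation error eps grows like 2**t after t steps, so each
# extra bit of output precision costs a bit of input precision.

def doubling_map(x, steps):
    for _ in range(steps):
        x = (2 * x) % 1.0
    return x

eps = 2.0 ** -40            # tiny error in the prepared initial condition
a = doubling_map(0.3, 20)
b = doubling_map(0.3 + eps, 20)
print(abs(a - b) / eps)     # error has grown by a factor of ~2**20
```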

Another thing to note is that for a large class of TQFTs Freedman, Kitaev, and Wang have shown that these TQFTs are equal in power to a traditional quantum computer (see, for example, quant-ph/0001071). For some TQFTs it is also true that they aren’t even as powerful as a quantum computer. But so far, I know of no candidates for TQFTs which are more powerful than the traditional quantum circuit model. The main reason one might hope that the models are more powerful is that TQFTs don’t necessarily have a built-in locality structure. What Freedman, Kitaev, and Wang showed was that for a large class of models, there is indeed a notion of locality hidden in the particular TQFTs they examine.

Posted by: Dave Bacon on April 15, 2005 11:05 PM | Permalink

Re: Rasetti: Quantum Computers based on TQFT

Thanks a lot for your help.

I should mention that there were also some reactions to my post over at sci.physics.research from Ralph Hartley, who also corrected my usage of complexity class terminology.

Another reader pointed out the following to me by email: in reply to my statement that we can imagine classical as well as quantum systems whose observables are #P functions of their states, he remarked that, due to the exponential growth of errors in classical analog computers, such systems would need to be quantum in order to be useful.

Probably that’s the same point that you state as

[…] analog computer (whose power scales poorly with precision) […]

You also write:

But so far, I know of no candidates for TQFTs which are more powerful than the traditional quantum circuit model.

and

I would also say that among quantum computing complexity theorists there IS clarity about this issue.

Good to know. From listening to Rasetti’s talk I did not get this impression, but that may have been my fault.

Moreover you mention

[…] probabilistic computer (whose power scales nicely with the precision of the probabilities.) […]

I have looked up the definition of a probabilistic computer here. It seems to me that we can also think of a probabilistic computer as a decohered quantum computer, in a sense.
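That intuition can be checked in a minimal toy calculation (my own sketch, assuming a single qubit and an idealized fully-dephasing noise step): apply a Hadamard gate twice to a density matrix. Coherently the two gates cancel and the qubit returns to |0⟩ with certainty; if the off-diagonal terms are erased in between, the second gate acts on a classical coin and the outcome is 50/50, exactly a one-bit probabilistic computer.

```python
import math

# Toy sketch: a decohered quantum computer behaves like a probabilistic one.
# We evolve a 2x2 density matrix rho under the Hadamard gate H, with and
# without erasing the off-diagonal (coherence) terms in between.

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply_gate(rho, U):
    # rho -> U rho U^dagger; H is real and symmetric, so U^dagger = U
    return mat_mul(mat_mul(U, rho), U)

def dephase(rho):
    # Decoherence in the computational basis: kill the off-diagonal terms.
    return [[rho[0][0], 0.0], [0.0, rho[1][1]]]

rho0 = [[1.0, 0.0], [0.0, 0.0]]          # qubit prepared in |0>

coherent = apply_gate(apply_gate(rho0, H), H)
decohered = apply_gate(dephase(apply_gate(rho0, H)), H)

print(coherent[0][0])    # P(measure 0) ~ 1.0: interference restores |0>
print(decohered[0][0])   # P(measure 0) ~ 0.5: a classical fair coin
```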

Ok, so I have learned a couple of things. One more question I’d have is this:

I don’t quite get in which sense some people hope to find quantum field theory computers which are inherently stronger than ‘traditional’ quantum computers. After all, quantum field theory is not a different kind of quantum theory from quantum mechanics. But then, I may not be fully aware of the fine print in the definition of a ‘traditional’ quantum computer.

But what is interesting in any case is to see quantum computing people get interested in similar classes of systems as formal HEP people.

When you hear about somebody proposing a quantum computer based for instance on string field theory, please let me know. ;-)

Posted by: Urs on April 16, 2005 12:46 PM | Permalink

Re: Rasetti: Quantum Computers based on TQFT

Well, there is a real sense in which there really isn’t anything other than probabilistic computers! When you think about the physical devices we use for computing, like the transistor and the magnetic storage medium, these systems act digital and deterministic in spite of the fact that microscopically all sorts of errors are occurring in the system (electrons going the wrong way, spins flipping). What happens is that the physics of the devices enacts error correction for these errors. So we get to sit up here at our macroscopic level and talk about “the total magnetic field” or “the current.” In effect the physics makes the likelihood of error exceedingly small by enacting error correction.
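The simplest caricature of this built-in error correction is the classical repetition code (a toy model, not a description of any actual transistor or storage medium): store a bit in n noisy copies and read it out by majority vote. If each copy flips independently with probability p < 1/2, the voted bit is wrong only when a majority flips, which becomes exponentially unlikely as n grows.

```python
from math import comb

# Toy repetition code: a logical bit stored as n physical copies, each of
# which flips independently with probability p.  The majority vote fails
# only if more than half the copies flip.

def majority_error(n, p):
    """Exact probability that a majority of n copies (n odd) has flipped."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

p = 0.1
for n in (1, 3, 5, 11):
    print(n, majority_error(n, p))   # error probability shrinks rapidly in n
```

With p = 0.1 the logical error drops from 10% (one copy) to 2.8% (three copies) and keeps falling exponentially, which is why macroscopic bits look deterministic.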

This seems, at first, like a technicality. But in a few years’ time it will be much more important. Even now, one of the main problems with transistors is that they are “leaking” too much. And think about the state of the art when we get down to truly nanoscale components for our computers. Do we really think we can engineer 100% efficient switches at the nanoscale?

The deterministic digital computer is a nice approximation (one that would make any physicist proud), but an approximation it is nonetheless.

As to why a quantum field theory computer would outperform a standard quantum computer, I can make two comments. I don’t think anyone believes that a computer based on, say, QED would outperform a quantum computer. But the situation isn’t as clear for more exotic theories. Especially noteworthy are issues of locality. Suppose that in some physical theory we get interactions which are of a highly non-local nature. Oftentimes it is possible to bootstrap such large non-local interactions up into a powerful computer. For example, there are certain quantum gates which, if you could perform them in one time step across any number of computers, would allow you to do quite miraculous things from a computational complexity perspective. As far as I know, however, no such QFT has been proposed, let alone shown to have anything to do with actual physics.

I’ve not yet seen a string theory quantum computer. Scott Aaronson spent some time trying to figure out if one could take anything from loop quantum gravity and get a new computational complexity class.

But here is a sort of interesting twist on this subject. From what I understand, the computational complexity of classifying the topology of certain manifolds has been studied. The situation is something like this: for manifolds of dimension two or less the task is easy. For three-dimensional manifolds, no one knows the computational complexity. And for four- and higher-dimensional manifolds, the task is noncomputable! So we really need to worry about theories of quantum gravity where the topology of 3+1-dimensional manifolds is variable: it may be that properties of the theory are noncomputable. That would certainly make my head spin.

Posted by: Dave Bacon on April 16, 2005 9:35 PM | Permalink

Re: Rasetti: Quantum Computers based on TQFT

Suppose that in some physical theory we get interactions which are of a highly non-local nature. Oftentimes it is possible to bootstrap such large non-local interactions up into a powerful computer. For example, there are certain quantum gates which, if you could perform them in one time step across any number of computers, would allow you to do quite miraculous things from a computational complexity perspective.

Hm, could you be more specific here? I am not sure what you are thinking of. What do you mean by performing a quantum gate ‘across any number of computers’? Is this referring to operating several quantum computers which are mutually entangled? Why would it be important to think of them as separate computers instead of as a single big one?

You seem to be talking about a point here whose importance I was not aware of. Apparently you are saying that it matters where in space a computation is performed. Why is that?

For three-dimensional manifolds, no one knows the computational complexity. And for four- and higher-dimensional manifolds, the task is noncomputable! So we really need to worry about theories of quantum gravity where the topology of 3+1-dimensional manifolds is variable: it may be that properties of the theory are noncomputable. That would certainly make my head spin.

I assume ‘noncomputable’ here means ‘not recursive’, i.e. ‘not computable by a Turing machine or by λ-calculus’.

But since we were talking about it: I can easily write down a classical topological field theory whose observables are these non-computable quantities. I don’t think this field theory would be realized in nature, but in principle it could. So how does that fit into a more general theory of computability?

More generally, in principle I can think of analog computers designed to solve every imaginable problem. Can’t I? Previously we said that analog computers scale poorly with precision. But if we could solve otherwise non-computable problems with them, they would still be infinitely more powerful than any ‘digital’ computer.

Do we want to reject such devices and not call them ‘computers’ but instead ‘experiments’? Or if not, do we have a general theory of computing which incorporates them?

Somebody emphasized to me recently that ‘computing is nothing but doing physics’.

I can see how this is meant, but it worries me, somehow.

Ah, I made a quick web search. Wikipedia has an entry on something that, ironically, is called a real computer, by which is meant an idealized analog-like computer with the full precision of the real numbers. Related terms are super-Turing machines and hypercomputation, as you probably know.

I also see a reference to LQG there, with a comment on the possible discreteness of space(time).

Not sure if I understand what that would have to do with computability. It seems to me that the continuum which is essential, for instance, for a quantum computer is not that of physical space but that of the abstract Hilbert space of states. Maybe the most interesting question for the limits of computability here is how fundamental quantum mechanics really is, i.e. whether a state can really have an arbitrary phase taken from the real numbers, or rather from some very large subset of them.

Posted by: Urs Schreiber on April 17, 2005 5:04 PM | Permalink
