

September 19, 2016

Logical Uncertainty and Logical Induction

Posted by Qiaochu Yuan

Quick - what’s the $10^{100}$th digit of $\pi$?

If you’re anything like me, you have some uncertainty about the answer to this question. In fact, your uncertainty probably takes the following form: you assign a subjective probability of about $\frac{1}{10}$ to this digit being any one of the possible values $0, 1, 2, \dots, 9$. This is despite the fact that

  • the normality of $\pi$ in base $10$ is a wide open problem, and
  • even if it weren’t, nothing random is happening; the $10^{100}$th digit of $\pi$ is a particular digit, not a randomly selected one, and it being a particular value is a mathematical fact which is either true or false.

If you’re bothered by this state of affairs, you could try to resolve it by computing the $10^{100}$th digit of $\pi$, but as far as I know nobody has the computational resources to do this in a reasonable amount of time.

Because of this lack of computational resources, among other things, you and I aren’t logically omniscient; we don’t have access to all of the logical consequences of our beliefs. The kind of uncertainty we have about mathematical questions that are too difficult for us to settle one way or another right this moment is logical uncertainty, and standard accounts of how to have uncertain beliefs (for example, assign probabilities and update them using Bayes’ theorem) don’t capture it.

Nevertheless, somehow mathematicians manage to have lots of beliefs about how likely mathematical conjectures such as the Riemann hypothesis are to be true, and even about simpler but still difficult mathematical questions such as how likely some very large complicated number $N$ is to be prime (a reasonable guess, before we’ve done any divisibility tests, is about $\frac{1}{\ln N}$ by the prime number theorem). In some contexts we have even more sophisticated guesses like the Cohen-Lenstra heuristics for assigning probabilities to mathematical statements such as “the class number of such-and-such complicated number field has $p$-part equal to so-and-so.”
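As a quick sanity check on the $\frac{1}{\ln N}$ heuristic, here is a minimal sketch (plain Python; the sample size and the range near $N$ are arbitrary choices for illustration) comparing it with the observed frequency of primes:

```python
import math, random
random.seed(0)

def is_prime(n):
    """Naive trial division; adequate for the modest numbers sampled below."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# Prime number theorem heuristic: before any divisibility tests, a "random"
# number near N is prime with probability roughly 1/ln(N).
N = 10**7
sample = [random.randrange(N, 2 * N) for _ in range(5000)]
observed = sum(is_prime(n) for n in sample) / len(sample)

print(f"heuristic 1/ln(N) = {1 / math.log(N):.4f}, observed frequency = {observed:.4f}")
```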

In general, what criteria might we use to judge an assignment of probabilities to mathematical statements as reasonable or unreasonable? Given some criteria, how easy is it to find a way to assign probabilities to mathematical statements that actually satisfies them? These fundamental questions are the subject of the following paper:

Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor, Logical Induction, arXiv:1609.03543.

Loosely speaking, in this paper the authors

  • describe a criterion called logical induction that an assignment of probabilities to mathematical statements could satisfy,
  • show that logical induction implies many other desirable criteria, some of which have previously appeared in the literature, and
  • prove that a computable logical inductor (an algorithm producing probability assignments satisfying logical induction) exists.

Logical induction is a weak “no Dutch book” condition; the idea is that a logical inductor makes bets about which statements are true or false, and does so in a way that doesn’t lose it too much money over time.

A warmup

Before describing logical induction, let me describe a different and more naive criterion you could ask for, but in fact don’t want to ask for because it’s too strong. Let $\varphi \mapsto \mathbb{P}(\varphi)$ be an assignment of probabilities to statements in some first-order language; for example, we might want to assign probabilities to statements in the language of Peano arithmetic (PA), conditioned on the axioms of PA being true (which means having probability $1$). Say that such an assignment $\varphi \mapsto \mathbb{P}(\varphi)$ is coherent if

  • $\mathbb{P}(\top) = 1$.
  • If $\varphi_1$ is equivalent to $\varphi_2$, then $\mathbb{P}(\varphi_1) = \mathbb{P}(\varphi_2)$.
  • $\mathbb{P}(\varphi_1) = \mathbb{P}(\varphi_1 \wedge \varphi_2) + \mathbb{P}(\varphi_1 \wedge \neg \varphi_2)$.

These axioms together imply various other natural-looking conditions; for example, setting $\varphi_1 = \top$ in the third axiom, we get that $\mathbb{P}(\varphi_2) + \mathbb{P}(\neg \varphi_2) = 1$. Various other axiomatizations of coherence are possible.

Theorem: A probability assignment such that $\mathbb{P}(\varphi) = 1$ for all statements $\varphi$ in a first-order theory $T$ is coherent iff there is a probability measure on models of $T$ such that $\mathbb{P}(\varphi)$ is the probability that $\varphi$ is true in a random model.

This theorem is a logical counterpart of the Riesz-Markov-Kakutani representation theorem relating probability distributions to linear functionals on spaces of functions; I believe it is due to Gaifman.

For example, if $T$ is PA, then the sort of uncertainty that a coherent probability assignment conditioned on PA captures is uncertainty about which of the various first-order models of PA is the “true” natural numbers. However, coherent probability assignments are still logically omniscient: syntactically, every provable statement is assigned probability $1$ because they’re all equivalent to $\top$, and semantically, provable statements are true in every model. In particular, coherence is too strong to capture uncertainty about the digits of $\pi$.
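To make the correspondence concrete, here is a minimal toy sketch (propositional truth assignments standing in for first-order models, with an arbitrarily chosen measure) showing how a measure on models induces a coherent assignment satisfying the three conditions above:

```python
from itertools import product

# Toy "models": truth assignments to two atoms p and q, with an arbitrary
# probability measure over them (the weights here are just made up).
models = list(product([False, True], repeat=2))        # (p, q) pairs
weights = dict(zip(models, (0.1, 0.2, 0.3, 0.4)))      # sums to 1

def P(phi):
    """Probability that phi holds in a random model; phi maps a model to a bool."""
    return sum(w for m, w in weights.items() if phi(m))

p   = lambda m: m[0]
q   = lambda m: m[1]
top = lambda m: True

# The three coherence conditions from the post:
assert abs(P(top) - 1.0) < 1e-12
assert abs(P(p) - P(lambda m: not (not p(m)))) < 1e-12         # equivalent statements agree
assert abs(P(p) - (P(lambda m: p(m) and q(m)) +
                   P(lambda m: p(m) and not q(m)))) < 1e-12

print(P(p), P(lambda m: not p(m)))   # 0.7 0.3: a statement and its negation sum to 1
```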

Coherent probability assignments can update over time whenever they learn that some statement is true which they haven’t assigned probability $1$ to; for example, if you start by believing PA and then come to also believe that PA is consistent, then conditioning on that belief will cause your probability distribution over models to exclude models of PA where PA is inconsistent. But this doesn’t capture the kind of updating a non-logically omniscient reasoner like you or me actually does, where our beliefs about mathematics can change solely because we’ve thought a bit longer and proven some statements that we didn’t previously know (for example, about the values of more and more digits of $\pi$).

Logical induction

The framework of logical induction is for describing the above kind of updating, based solely on proving more statements. It takes as input a deductive process which is slowly producing proofs of statements over time (for example, of theorems in PA), and assigns probabilities to statements that haven’t been proven yet. Remarkably, it’s able to do this in a way that eventually outpaces the deductive process, assigning high probabilities to true statements long before they are proven (see Theorem 4.2.1).

So how does logical induction work? The coherence axioms above can be justified by Dutch book arguments, following Ramsey and de Finetti, which loosely say that a bookie can’t offer a coherent reasoner a bet about mathematical statements which they will take but which is in fact guaranteed to lose them money. But this is much too strong a requirement for a reasoner who is not logically omniscient. The logical induction criterion is a weaker version of this condition; we only require that an efficiently computable bookie can’t make arbitrarily large amounts of money by betting with a logical inductor about mathematical statements unless it’s willing to take on arbitrarily large amounts of risk (see Definition 3.0.1).

This turns out to be a surprisingly useful condition to require, loosely speaking because it corresponds to being able to “notice patterns” in mathematical statements even if we can’t prove anything about them yet. A logical inductor has to be able to notice patterns that could otherwise be used by an efficiently computable bookie to exploit the inductor; for example, a logical inductor eventually assigns probability about $\frac{1}{10}$ to claims that a very large digit of $\pi$ has a particular value, intuitively because otherwise a bookie could continue to bet with the logical inductor about more and more digits of $\pi$, making money each time (see Theorem 4.4.2).
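As a caricature of the exploitation idea (this is not the paper’s construction; the price, the digit $7$, and the uniform digit model are illustrative assumptions), here is what happens to a forecaster who persistently prices “this large digit of $\pi$ equals $7$” away from $\frac{1}{10}$, against a bettor who always takes the favourable side:

```python
import random
random.seed(0)

price = 0.2        # forecaster's persistent price for "this digit equals 7"; calibration says ~0.1
wealth = 0.0

for _ in range(10_000):
    digit = random.randrange(10)           # stand-in for a hard-to-compute digit (modeled as uniform)
    payout = 1.0 if digit == 7 else 0.0
    if price > 0.1:
        wealth += price - payout           # sell one share at the forecaster's price
    else:
        wealth += payout - price           # buy one share at the forecaster's price

print(round(wealth))   # grows roughly like 0.1 per round, while any single round risks at most 1 unit
```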

Logical induction has many other desirable properties, some of which are described in this blog post. One of the more remarkable properties is that because logical inductors are computable, they can reason about themselves, and hence assign probabilities to statements about the probabilities they assign. Despite the possibility of running into self-referential paradoxes, logical inductors eventually have accurate beliefs about their own beliefs (see Theorem 4.11.1).

Overall I’m excited about this circle of ideas and hope that they get more attention from the mathematical community. Speaking very speculatively, it would be great if logical induction shed some light on the role of probability in mathematics more generally - for example, in the use of informal probabilistic arguments for or against difficult conjectures. A recent example is Boklan and Conway’s probabilistic arguments in favor of the conjecture that there are no Fermat primes beyond those currently known.

I’ve made several imprecise claims about the contents of the paper above, so please read it to get the precise claims!

Posted at September 19, 2016 7:15 PM UTC


25 Comments & 0 Trackbacks

Re: Logical Uncertainty and Logical Induction

You stopped just before the fun bit! How does their logical inductor work?

Posted by: Jamie Vicary on September 20, 2016 6:20 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

They create a super-bookie by simulating all the efficiently computable bookies and taking a weighted combination. (The paper calls the bookies “traders” and the super-bookie a “trading firm”.) The super-bookie isn’t itself efficiently computable but it is computable, and it has the property that if any of the efficiently computable bookies can exploit a given algorithm for assigning probabilities then it can too.

One of the important details in the paper is that they demand the bookies’ strategies be continuous. So they can’t just say “Offer a bet in favour of statement P if the probability assigned to it is above 60%, and Against otherwise”. Instead they would have to interpolate between For and Against in some small interval around 60%. This continuity assumption allows us to use Brouwer’s fixed point theorem to show that for any given bookie there must be some probability assignment where the bookie doesn’t want to make any offers of bets at all.

The algorithm finds this probability assignment for the super-bookie and uses it. Since the super-bookie doesn’t make any bets, the algorithm definitely isn’t exploited by the super-bookie, and therefore it also isn’t exploited by any of the computationally efficient ones.
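Here is a one-dimensional caricature of that argument (a single statement and a hand-made continuous strategy, so the intermediate value theorem stands in for Brouwer): because the trader interpolates between For and Against rather than switching sharply, some price leaves it with no bet to make.

```python
def trader(p):
    """Toy continuous strategy for one statement: +1 means 'fully for', -1 'fully against',
    with a linear interpolation in a small window around 60% instead of a sharp cutoff."""
    lo, hi = 0.55, 0.65
    if p <= lo:
        return 1.0
    if p >= hi:
        return -1.0
    return 1.0 - 2.0 * (p - lo) / (hi - lo)

# Continuity guarantees a price at which the trader makes no offer; bisection finds it.
a, b = 0.0, 1.0
for _ in range(60):
    mid = (a + b) / 2
    if trader(mid) > 0:
        a = mid
    else:
        b = mid

print(round((a + b) / 2, 4))   # 0.6: at this price the trader is indifferent and places no bet
```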

Posted by: Oscar Cunningham on September 20, 2016 11:49 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

I wrote a chapter on Bayesianism in mathematics in my book ‘Towards a Philosophy of Real Mathematics’. The idea of considering the plausibility of mathematics in terms of degrees of belief is due to Polya, who gave various qualitative patterns. I took it that the most important of these involved analogy, where one sensed a ‘common ground’ between what had been established and the target.

Back in 2003, there was quite a degree of scepticism towards the Bayesianism in mathematics idea, but as I said somewhere back then, it’s hardly uncommon for mathematicians to talk about changing strengths of beliefs. It means the following reasoning is not ‘paradoxical’:

…it is my view that before Thurston’s work on hyperbolic 3-manifolds and his formulation of the general Geometrization Conjecture there was no consensus amongst experts as to whether the Poincare Conjecture was true or false. After Thurston’s work, notwithstanding the fact that it has no direct bearing on the Poincare Conjecture, a consensus developed that the Poincare Conjecture (and the Geometrization Conjecture) were true. Paradoxically, subsuming the Poincare Conjecture into a broader conjecture and then giving evidence, independent from the Poincare Conjecture, for the broader conjecture led to a firmer belief in the Poincare Conjecture. (John W. Morgan, ‘Recent Progress on the Poincare Conjecture and the Classification of 3-Manifolds’, Bulletin of the American Mathematical Society 2004, 42(1): 57-78)

I also thought Bayesianism about mathematics helped with the ‘problem of old evidence’:

The problem runs: imagine you have a piece of firmly established evidence $e$, which you believe with total confidence, $Pr(e) = 1$. You devise some theory $T$, and rate how likely it is to be true, $Pr(T)$. You then discover that $T$ accounts for $e$. What should this do to your degree of belief in $T$? Well, applying Bayes’ theorem:

$Pr(T|e) = Pr(e|T) \cdot Pr(T)/Pr(e) = 1 \cdot Pr(T)/1 = Pr(T)$.

Apparently, there should be no boost to your degree of belief in $T$. This seems odd because scientists often do get encouraged when their theories turn out to explain some already observed phenomenon.

After Polya, I suggested the analogy with someone doing some mathematics. You’ve worked out as best you can a formula for the curved surface area of a frustum of a cone (a cone with its nose chopped off), in terms of the height, $h$, and the radii of the upper and lower circles, $r$ and $R$. You’re not too sure you’ve got it right, but are encouraged to find the formula works for the area of a cylinder when you set $R = r$. Let’s imagine you’re now 50% certain that your formula is correct. A friend now says to you, “How would you feel about your formula, if I told you that if you set $h = 0$, you’ll find your formula gives you the correct answer for the area of an annulus?”. You say, “Much more confident, thank you”, although you already know the area of an annulus. The key point is that you’ve learned something new: that your formula works in a special case. So, let $F$ = “formula is correct”, and $A$ = “formula gives right result for annulus”. Then $Pr(F|A) = Pr(A|F) \cdot Pr(F)/Pr(A)$. $Pr(A|F) = 1$, but now $Pr(A) = Pr(A \& F) + Pr(A \& \neg F)$. The first of the summands is 0.5, and the second is your belief that the formula is wrong and yet still gives the right answer for the annulus, let’s say 0.1. So you update your belief in your formula to around 83%.
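Just to spell out the arithmetic of that update (the 0.5 and 0.1 are the assumed values from the story above):

```python
p_F = 0.5             # prior belief that the frustum formula is correct
p_A_given_F = 1.0     # a correct formula certainly gives the right annulus area at h = 0
p_A_given_notF = 0.1  # assumed chance a wrong formula still gives the right annulus area

p_A = p_A_given_F * p_F + p_A_given_notF * (1 - p_F)   # total probability of A
p_F_given_A = p_A_given_F * p_F / p_A                  # Bayes' theorem

print(round(p_F_given_A, 3))   # 0.833, i.e. the "around 83%" quoted above
```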

From their answer to Desideratum 15 on page 69 of the Logical Inductors paper,

The algorithm for assigning probabilities to logical claims should be able to target specific, decision-relevant claims, and it should reason about those claims as efficiently as possible given the computing resources available

it seems we are to expect very little on this front. But isn’t whatever it is that gives a mathematician a reasoned degree of belief in a conjecture one of the most interesting aspects of this field?

Posted by: David Corfield on September 20, 2016 8:29 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

I don’t quite understand. Their discussion of Desideratum 15 in section 7.4 suggests to me that the issue is that their logical inductor can’t be “directed” to “look for evidence bearing on the truth of X”. If that turns out to be a fundamental limitation (rather than simply a defect of their particular algorithm), then it suggests that truth in mathematics is “nonlocal”, i.e. there’s no way to tell in advance whether two statements are “logically related” – which would be really interesting, given the surprising propensity for apparently-different fields of mathematics to suddenly overlap. But how is this related to “whatever it is that gives a mathematician a reasoned degree of belief in a conjecture”?

Posted by: Mike Shulman on September 20, 2016 10:06 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

…it suggests that truth in mathematics is “nonlocal”

It certainly will be if the space and metric are defined by (untyped) first order logic! Get the right synthetic language and there’s more of a chance.

In a sense that was my point. The situations interesting to me (and Polya) are the professional’s hunches of similarity. These are usually set in the absence of an overarching theory.

One of Polya’s ‘rules’ is

If $A$ is analogous to $B$, and $A$ is found to be true, then $B$ becomes more likely.

He then speaks of an unknown common ground to $A$ and $B$, called $H$, which jointly implies them. Your ability to expect reasonably such an $H$ he attributes to the particularity of your experience. He tells us of examples through the history of maths, such as Euler’s ‘factorization’ of $(\sin x)/x$ to find the sum of reciprocals of squares of natural numbers. What was it about the resemblance between this function and ordinary polynomials that made him think that sums of roots can be read from a coefficient of the series expansion?

So later Euler’s hunch was justified by the common ground of the Weierstrass factorization theorem, written in a framework unavailable to Euler.

I devoted a chapter to analogies, as their detection is surely central to mathematical thought:

A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies. (Banach)

In that all the really important decisions that you make about what to explore, and what you think is likely to work, rely heavily on this analogy-detection, it should find some place in any (relatively complete) discussion of degree of mathematical belief.

Posted by: David Corfield on September 20, 2016 11:34 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

Posted by: Tim Porter on September 20, 2016 5:57 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

In the 1990s I gave a presentation on “How mathematics gets into knots” to pupils, part of which is the analogy between prime numbers and prime knots; an emphasis is that the analogy is actually between relations between knots and relations between numbers, between addition of knots and products of numbers, rather than the objects themselves.

Afterwards a teacher came up to me and said: “That is the first time in my mathematical career that anyone has used the word “analogy” in relation to mathematics!” Maths education may leave students starved of contextual discussion, of how maths does its job and progresses.

The exhibition related to the talk is now at

http://www.groupoids.org.uk/popmath/cpm/exhib/knotexhib.html

So I welcome David’s contribution!

Posted by: Ronnie Brown on September 21, 2016 10:56 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

One question to ask, I guess, is whether their algorithm for logical induction (or any such algorithm) can “detect analogies”. You seem to be assuming that it can’t. I haven’t read anything about how it works, so I don’t have any basis to judge.

Posted by: Mike Shulman on September 20, 2016 6:19 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

whether their algorithm for logical induction (or any such algorithm) can “detect analogies”

There is a certain sense in which a logical inductor will detect any analogy that can be detected in polynomial time (or in some other fixed complexity class). That is, suppose there’s an efficient method for writing down two sequences $(\phi_n)$ and $(\psi_n)$ of logical statements; perhaps the $\phi_n$ all talk about one class of mathematical object, and the $\psi_n$ all talk about another. Suppose also that these sequences are “analogous” in the sense that they are hard to decide but are highly correlated, so usually $\phi_n$ and $\psi_n$ are either both true or both false.

Then a logical inductor will pick up on and learn from this “analogy” between the domains that $(\phi_n)$ and $(\psi_n)$ talk about. That is, $\mathbb{P}_n(\phi_n)$ will be asymptotically close to $\mathbb{P}_n(\psi_n)$, except insofar as, in cases where $\phi_n$ and $\psi_n$ actually differ, $\mathbb{P}_n(\phi_n)$ differs from $\mathbb{P}_n(\psi_n)$ in a way that predicts the truth of $\phi_n$ even more accurately than just copying the judgment $\mathbb{P}_n(\psi_n)$.
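A toy caricature of why persistent disagreement between analogous statements is exploitable (nothing here is from the paper; the 95% agreement rate and the prices are made-up assumptions): when the truth values of $\phi_n$ and $\psi_n$ almost always match, buying the cheaper statement and selling the dearer one locks in the price gap on almost every round.

```python
import random
random.seed(1)

p_phi, p_psi = 0.4, 0.6   # a forecaster whose prices for analogous statements persistently disagree
wealth = 0.0

for n in range(10_000):
    truth_phi = random.random() < 0.5                                    # toy stand-in for a hard fact
    truth_psi = truth_phi if random.random() < 0.95 else not truth_phi   # "analogous": agrees 95% of the time

    # Pair trade: buy the cheaper statement, sell the dearer one.
    # Whenever the truth values agree, the payouts cancel and the price gap is pure profit.
    wealth += (1.0 if truth_phi else 0.0) - p_phi
    wealth += p_psi - (1.0 if truth_psi else 0.0)

print(round(wealth))   # roughly 0.2 per round, with losses only on the rare disagreement rounds
```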

Posted by: Tsvi Benson-Tilsen on September 20, 2016 11:06 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

That’s interesting, at least philosophically. In practice I’m not sure that very many of the analogies in question can be phrased precisely as polynomially-enumerable sequences of statements corresponding one-for-one, but it’s at least some indication that logical induction is telling us something about the detection of analogies.

Posted by: Mike Shulman on September 20, 2016 11:38 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

I agree with your analysis.

In another sense, a logical inductor will take advantage of any analogy that can be cashed out as explicit predictions about logical sentences. Suppose there’s some cognitive procedure that human mathematicians implement for repeatedly making “creative leaps”, and thereby making accurate judgments about mathematical statements, even before any formal verification. Then (presumably) some polynomial-time algorithm uses this cognitive procedure to make trades against a logical inductor P, and thereby forces P to assign probabilities at least as accurate as the predictions made by the mathematician.

Posted by: Tsvi Benson-Tilsen on September 21, 2016 12:23 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

…at least philosophically. In practice…

That’s so last century, Mike. In this country (UK) at least, we are expected to carry out ‘impactful’ research, where impact is defined as

an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia.

You can check up on our 315 pieces of impact here.

Posted by: David Corfield on September 21, 2016 9:35 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

The question of how in fact human mathematicians use analogy to think about mathematics is very interesting, but the answer probably has more to do with cognitive science than with formal mathematical theories of logical uncertainty (although it would be great if these topics turn out to be more connected than I currently think they are).

The question that is more relevant to this paper is something like, how ought an ideal but computationally limited mathematician (say, an AI mathematician) reason about logical uncertainty? Once we have an answer to both of these questions we can then compare what humans are doing to the ideal standard.

This is analogous to the difference between asking how in fact humans reason about ordinary uncertainty and asking how an ideal reasoner ought to reason about ordinary uncertainty (e.g. using Bayesian probability).

Posted by: Qiaochu Yuan on September 21, 2016 7:54 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

This seems odd because scientists often do get encouraged when their theories turn out to explain some already observed phenomenon.

To me this just seems to imply that the conditional probability $Pr(T|e)$ doesn’t do a good job of modeling the hypothetical scientist’s degree of belief in $T$ given that $e$ is known to be true.

Posted by: Mark Meckes on September 20, 2016 6:34 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

But from a Bayesian perspective, conditional probability is supposed to model degree of belief in a proposition given some other fact. I agree that it’s failing here, but there’s an obligation to explain why. The consensus seems to be that conditional probability assumes “logical omniscience”, i.e. immediately knowing all logical consequences of a fact as soon as you learn it.

Posted by: Mike Shulman on September 20, 2016 8:23 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

Yeah, I think I agree that the old evidence problem is a logical omniscience problem: when you formulate your theory you don’t yet know all of its logical consequences, and learning that it predicts some old evidence is a new logical fact to you.

The paper has some discussion of the old evidence problem in section 7.4.

Posted by: Qiaochu Yuan on September 21, 2016 7:45 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

There’s an account which maintains that you can condition on the fact that your theory has accounted for something you already knew. A guy called Garber wrote this as $P(T \mid T \vdash e)$.

I preferred (Chap. 5 of my book) to speak more informally of the probability of $T$ given you found out that it accounted for some known evidence $e$. That gives some extra leeway beyond a mere deduction of $e$ from $T$, to include “accounting well”.

You then need to have a sense of how easy it will be for rivals also to account well for $e$.

Other solutions have included pretending to yourself that you never knew $e$. However, as with Mercury’s perihelion, knowledge might have been gained many years ago, and it may not be straightforward to consider excising $e$ from your stock of knowledge.

You can also have fun wondering how scientific theories can gain a boost from predicting true mathematical results.

Posted by: David Corfield on September 20, 2016 10:12 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

David’s comment gives me an excellent excuse to link to the very satisfying Wikipedia page on Cromwell’s rule, which

states that the use of prior probabilities of 0 (“the event will definitely not occur”) or 1 (“the event will definitely occur”) should be avoided, except when applied to statements that are logically true or false, such as 2+2 equalling 4 or 5

and is not named after some obscure mathematician called Cromwell, but after famed monarchy-abolisher and Catholic-massacrer Oliver Cromwell, who wisely, memorably and hypocritically wrote:

I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

Posted by: Tom Leinster on September 20, 2016 10:30 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

Interesting to read the full text of the letter that sentence is taken from.

A note there explains the choice of ‘bowels’:

Although the Sacred Heart, Precious Blood, and Body of Christ are venerated by Christians, this odd-looking phrase should not be taken literally. Cromwell used an archaic figurative meaning, eg (from thefreedictionary.com) “The seat of pity or the gentler emotions.”

For some context of the letter itself,

Cromwell was reluctant to fight the Covenanters, with whom he shared similar religious convictions. Early in August 1650, he appealed directly to the Scottish clergy, asking them to consider whether Charles Stuart was a fitting king for a godly people. There were signs of doubt among the Covenanter leaders and army officers. The Kirk urged Charles to sign a declaration disavowing his parents’ religion and affirming his own fidelity to the Covenant, which he refused to do. Covenanter leaders became alarmed at Charles’ apparent popularity among the Scottish troops when he toured the army lines around Edinburgh and insisted that he withdraw across the Firth of Forth to Dunfermline.

Posted by: David Corfield on September 21, 2016 8:27 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

Boaz Barak has proposed trying to model logical uncertainty by using a “Bayesian” flavour of complexity theory, with a core notion being that of a Bayesian pseudo-distribution. Unfortunately, there isn’t yet much of a rigorous framework to discuss this concept, in contrast to the much more well developed “frequentist” side to complexity theory. See his blog post at https://windowsontheory.org/2015/10/01/a-different-type-of-pseudo/

Posted by: Terence Tao on September 21, 2016 11:31 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

Posted by: David Corfield on September 22, 2016 8:03 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

It might be interesting to test this against single-digit extraction algorithms; cf. for instance http://www.pi314.net/eng/nieme_digit.php. These haven’t got as far as the $10^{100}$th digit of $\pi$, of course.
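For what it’s worth, here is a minimal sketch of the Bailey-Borwein-Plouffe digit-extraction formula in the spirit of the algorithms on that page; note that it produces hexadecimal rather than decimal digits of $\pi$, and of course it gets nowhere near position $10^{100}$.

```python
def pi_hex_digit(n):
    """Return the (n+1)-th hexadecimal digit of pi after the point,
    via the Bailey-Borwein-Plouffe digit-extraction formula."""
    def series(j, n):
        # Left sum: terms k = 0..n, kept small by modular exponentiation.
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # Right tail: a few terms k = n+1, n+2, ... suffice for double precision.
        t, k = 0.0, n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return (s + t) % 1.0

    x = (4 * series(1, n) - 2 * series(4, n) - series(5, n) - series(6, n)) % 1.0
    return int(16 * x)

# The hex expansion of pi begins 3.243F6A88..., so the digits after the point are 2,4,3,F,6,A,8,8,...
print([pi_hex_digit(i) for i in range(8)])  # expected [2, 4, 3, 15, 6, 10, 8, 8]
```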

Another field might also be game theory, where strategies can be and have been efficiently coded.

Posted by: Stam Nicolis on September 25, 2016 9:21 PM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

Posted by: Todd Trimble on September 26, 2016 12:50 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

Mike wrote:

The consensus seems to be that conditional probability assumes “logical omniscience”, i.e. immediately knowing all logical consequences of a fact as soon as you learn it.

I don’t have anything to contribute except a buzzphrase: bounded rationality. To model realistic decision-making agents, instead of assuming they have infinite reasoning powers, we should take into account the cost of reasoning.

This idea is mainly used in attempts to make economics more realistic, but I suspect it’ll be important in developing a theory of probability for mathematical statements, too. And I think that, in turn, will be important for developing AI that can do mathematics.

So it may be worth reading what the economists have done with bounded rationality.

(They may need help from mathematicians, too.)

Posted by: John Baez on September 26, 2016 2:23 AM | Permalink | Reply to this

Re: Logical Uncertainty and Logical Induction

I’m pretty hopeful that this logical induction work will lead to a good theory of bounded rationality. It definitely gives us a start, in that it gives us a model of a finite-but-large reasoner with “reasonable beliefs about mathematics”. John, I’m curious: are there any particular directions you’d be excited to see this framework pushed, in order to get good models of boundedly rational reasoners?

As an aside, this is a topic I’m highly interested in, and here are some questions I have and some pointers to relevant literature, for folks who are interested:

  • Game theory for bounded agents: Which results of classical game theory hold for agents with bounded reasoning capabilities? What new phenomena arise in this setting? [1,2,3,4]
  • Metacognition: How do you select which computations to run so that you don’t waste too much compute time, but you still get the “logical information” that you need? [5,6]
  • Decision theory: In a setting with deterministic algorithmic agents, how (in general) should we evaluate the expected value of the hypothetical possible worlds in which the agent’s decision algorithm takes on different outputs, given that all but one of those worlds is logically impossible? For instance, if two copies of the algorithm are instantiated in two agents playing a prisoner’s dilemma against each other, and the algorithm outputs “defect”, then we intuitively want to be able to say that if the algorithm had output “cooperate” then the first agent would have done better, despite the fact that this is asking about an incoherent world (where a deterministic algorithm that outputs “defect” actually outputs “cooperate”). [7]

[1] Rubinstein (1998). “Modeling Bounded Rationality”

[2] Critch (2016). “Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents”

[3] Bárász et al. (2014). “Robust Cooperation on the Prisoner’s Dilemma: Program Equilibrium via Provability Logic”

[4] Leike et al. (UAI 2016). “A Formal Solution to the Grain of Truth Problem”

[5] Hay et al. (UAI 2012). “Selecting Computations: Theory and Applications”

[6] Hay et al. (2011). “Metareasoning for Monte Carlo Tree Search”

[7] Soares et al. (abridged form in AGI 2015). “Toward Idealized Decision Theory”

Posted by: Nate Soares on September 28, 2016 12:47 AM | Permalink | Reply to this
