In orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable $x$, for instance, is not allowed to equal $+3$ and $-3$ simultaneously. We will call such variables completely specified. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.
However, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:

$$x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}. \ \ \ \ \ (1)$$

Strictly speaking, the expression $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$ is not well-formed according to the grammar of first-order logic; one should instead use something like

$$x = \frac{-b - \sqrt{b^2-4ac}}{2a} \hbox{ or } x = \frac{-b + \sqrt{b^2-4ac}}{2a} \ \ \ \ \ (2)$$

or

$$x \in \left\{ \frac{-b - \sqrt{b^2-4ac}}{2a}, \frac{-b + \sqrt{b^2-4ac}}{2a} \right\} \ \ \ \ \ (3)$$

or

$$x = \frac{-b + \epsilon \sqrt{b^2-4ac}}{2a} \hbox{ for some } \epsilon \in \{-1,+1\} \ \ \ \ \ (4)$$
in order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as

“the sum of any two odd numbers is an even number”

is also not a first-order sentence; one would instead have to write something like

“for all odd numbers $x$ and $y$, the number $x+y$ is even”

or

“for all natural numbers $k$ and $l$, there exists a natural number $m$ such that $(2k+1)+(2l+1)=2m$”

instead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as $x, y, k, l, m$ which are used once and then discarded.
Another example of partially specified notation is the innocuous
notation. For instance, the assertion

when written formally using first-order logic, would become something like


which is not exactly an elegant reformulation. Similarly with statements such as

or

Below the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just

$$x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}.$$
When combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using ambient parameters, we become able to formally introduce asymptotic notation such as the big-O notation $O(X)$ or the little-o notation $o(X)$. We will explain how to do this at the end of this post.
— 1. Partially specified objects —
Let’s try to assign a formal meaning to partially specified mathematical expressions. We now allow expressions $x$ to not necessarily be a single (completely specified) mathematical object, but more generally a partially specified instance of a class $X$ of mathematical objects. For instance, $\pm 3$ denotes a partially specified instance of the class $\{-3,+3\}$ of numbers consisting of $+3$ and $-3$; that is to say, a number which is either $+3$ or $-3$. A single completely specified mathematical object, such as the number $3$, can now also be interpreted as the (unique) instance of a class $\{3\}$ consisting only of $3$. Here we are using set notation to describe classes, ignoring for now the well known issue from Russell’s paradox that some extremely large classes are technically not sets, as this is not the main focus of our discussion here.
For reasons that will become clearer later, we will use the symbol $\equiv$ rather than $=$ to denote the assertion that two partially specified objects range across exactly the same class. That is to say, we use $X \equiv Y$ as a synonym for the assertion that $X$ and $Y$ describe the same class of objects. Thus, for instance, it is not the case that $3 \equiv \pm 3$, because the class $\pm 3$ has instances that $3$ does not.
Any finite sequence $a_1,\dots,a_n$ of objects can also be viewed as a partially specified instance of a class $\{a_1,\dots,a_n\}$, which I will denote $a_1|\dots|a_n$ in analogy with regular expressions, thus we now also have a new name $a_1|\dots|a_n$ for the set $\{a_1,\dots,a_n\}$. (One could in fact propose $a_1,\dots,a_n$ as the notation for this class, as is done implicitly in assertions such as “$P(x)$ is true for $x = a_1,\dots,a_n$“, but this creates notational conflicts with other uses of the comma in mathematics, such as the notation $(a_1,\dots,a_n)$ for an $n$-tuple, so I will use the regular expression symbol $|$ here to avoid ambiguity.) For instance, $1|2|3$ denotes a partially specified instance from the class $\{1,2,3\}$, that is to say a number which is either $1$, $2$, or $3$. Similarly, we have

$$\pm 3 \equiv (-3)|(+3)$$

and

$$3|3 \equiv 3.$$
One can mimic set builder notation and denote a partially specified instance of a class
as
(or one can replace
by any other variable name one pleases); similarly, one can use
to denote a partially specified element of
that obeys the predicate
. Thus for instance

would denote a partially specified odd number. By a slight abuse of notation, we can abbreviate

as

or simply

, if the domain

of

is implicitly understood from context. For instance, under this convention,

refers to a partially specified odd integer, while

refers to a partially specified integer. Under these conventions, it now becomes theoretically possible that the class one is drawing from becomes empty, and instantiation becomes vacuous. For instance, with our conventions,

refers to a partially specified
odd perfect number, which is conjectured to not exist. As it turns out, our notation can handle instances of empty classes without difficulty (basically thanks to the concept of a
vacuous truth), but we will avoid dwelling on this edge case much here since this concept is not intuitive for beginners. (But if one does want to confront this possibility, one can use a symbol such as

to denote an instance of the empty class, i.e., an object that has no specifications whatsoever.)
The symbol $|$ introduced above can now be extended to be a binary operation on partially specified objects, defined by the formula

$$X | Y \equiv \{ z: z \hbox{ is an instance of } X \hbox{ or of } Y \}.$$

Thus for instance

$$(\pm 3) | 0 \equiv (-3)|0|(+3)$$

and

$$(1|2)|3 \equiv 1|2|3.$$

One can then define other logical operations on partially specified objects if one wishes. For instance, we could define an “and” operator $\&$ by defining

$$X \& Y \equiv \{ z: z \hbox{ is an instance of both } X \hbox{ and } Y \}.$$

Thus for instance

$$(1|2|3) \& (2|3|4) \equiv 2|3$$

and

$$(\pm 3) \& (1|2|3) \equiv 3.$$

(Here we are deviating from the syntax of regular expressions, but I am not insisting that the entirety of mathematical notation conform to that syntax, and in any event regular expressions do not appear to have a direct analogue of this “and” operation.) We leave it to the reader to propose other logical operations on partially specified objects, though the “or” operator $|$ and the “and” operator $\&$ will suffice for our purposes.
Any operation on completely specified mathematical objects can be extended to partially specified mathematical objects, by applying that operation to arbitrary instances of the class, with the convention that if a class appears multiple times in an expression, then we allow each instance of that class to take different values. For instance, if $X, Y$ are partially specified numbers, we can define $X+Y$ to be the class of all numbers formed by adding an instance of $X$ to an instance of $Y$ (this is analogous to the operation of Minkowski addition for sets, or interval arithmetic in numerical analysis). For example,

$$(\pm 3) + (\pm 3) \equiv (-6)|0|(+6)$$

and

$$(\pm 3) - (\pm 3) \equiv (-6)|0|(+6)$$

(recall that there is no requirement that the signs here align). Note that

$$2 \times (\pm 3) \equiv \pm 6$$

but that

$$(\pm 3) + (\pm 3) \not\equiv \pm 6.$$

So we now see the first sign that some care has to be taken with the law of substitution; we have

$$x + x = 2x$$

but we do not have

$$(\pm 3) + (\pm 3) \equiv 2 \times (\pm 3).$$

However, the law of substitution works fine as long as the variable being substituted appears exactly once on both sides of the equation.
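To make these conventions concrete, here is a small Python sketch of my own (not from the post): a partially specified number is modelled as a finite set of instances, and an operation is applied to every combination of instances, with each appearance of a class free to take its own value.

```python
# Toy model (mine): a partially specified number is a finite set of instances.
from itertools import product

def lift(op, *classes):
    """Apply op to every combination of instances, one from each class;
    repeated appearances of a class may take different values."""
    return {op(*choice) for choice in product(*classes)}

pm3 = {-3, +3}  # the partially specified number "±3"

print(lift(lambda a, b: a + b, pm3, pm3))  # {0, 6, -6}: the signs need not align
print(lift(lambda a, b: a - b, pm3, pm3))  # {0, 6, -6}: in particular not just {0}
print(lift(lambda a: 2 * a, pm3))          # {6, -6}: "2 × (±3)" is a strictly smaller class
```

In particular the failure of substituting $\pm 3$ into $x + x = 2x$ is visible here: the two appearances of `pm3` are lifted independently.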
One can have an unbounded number of partially specified instances of a class, for instance

$$\underbrace{\pm 1 \pm 1 \pm \dots \pm 1}_{n \hbox{ times}}$$

will be the class of all integers between $-n$ and $n$ with the same parity as $n$.
Remark 1 When working with signs $\pm$, one sometimes wishes to keep all signs aligned, with $\mp$ denoting the sign opposite to $\pm$, thus for instance with this notation one would have the geometric series formula

$$\frac{1}{1 \mp x} = 1 \pm x + x^2 \pm x^3 + x^4 \pm \dots$$

whenever $|x| < 1$. However, this notation is difficult to place in the framework used in this blog post without causing additional confusion, and as such we will not discuss it further here. (The syntax of regular expressions does have some tools for encoding this sort of alignment, but in first-order logic we also have the perfectly serviceable tool of named variables and quantifiers (or plain old mathematical English) to do so.)
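As a quick sanity check of the aligned-sign identity in Remark 1 (again a sketch of my own, with the value $x = 1/2$ chosen arbitrarily), one can compare truncated partial sums against the closed form for both consistent choices of sign:

```python
# Numerical check (mine) of 1/(1 ∓ x) = 1 ± x + x^2 ± x^3 + ... with aligned signs.
x = 0.5
for sign in (+1, -1):  # +1: take the top signs everywhere; -1: take the bottom signs
    partial_sum = sum((sign * x) ** k for k in range(60))
    print(sign, partial_sum, 1 / (1 - sign * x))  # the last two values agree
```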
One can also extend binary relations, such as $<$ or $=$, to partially specified objects, by requiring that every instance on the left-hand side of the relation relates to some instance on the right-hand side (thus binary relations become $\forall\exists$ sentences). Again, if a class is instantiated multiple times, we allow different appearances to correspond to different instances. For instance, the statement $\pm 3 \leq 3$ is true, because every instance of $\pm 3$ is less than or equal to $3$:

$$-3 \leq 3; \quad +3 \leq 3.$$

But the statement $\pm 3 \leq -3$ is false. Similarly, the statement $3 = \pm 3$ is true, because $3$ is an instance of $\pm 3$:

$$3 = +3.$$

The statement $\pm 3 < \pm 5$ is also true, because every instance of $\pm 3$ is less than some instance of $\pm 5$:

$$-3 < +5; \quad +3 < +5.$$

The relationship between a partially specified representative $x$ of a class $X$ can then be summarised as

$$x = X.$$
Note how this convention treats the left-hand side and right-hand side of a relation involving partially specified expressions asymmetrically. In particular, for partially specified expressions $X, Y$, the relation $X = Y$ is no longer equivalent to $Y = X$; the former states that every instance of $X$ is also an instance of $Y$, while the latter asserts the converse. For instance, $3 = \pm 3$ is a true statement, but $\pm 3 = 3$ is a false statement (much as “$3$ is prime” is a true statement (or “$3 = \hbox{prime}$” in our notation), but “primes are $3$” (or “$\hbox{prime} = 3$” in our notation) is false). In particular, we see a distinction between equality $=$ and equivalence $\equiv$; indeed, $X \equiv Y$ holds if and only if $X = Y$ and $Y = X$. On the other hand, as can be easily checked, the following three basic laws of mathematics remain valid for partially specified expressions $X, Y, Z$:
- (i) (Reflexivity) $X = X$.
- (ii) (Transitivity) If $X = Y$ and $Y = Z$, then $X = Z$. Similarly, if $X \leq Y$ and $Y \leq Z$, then $X \leq Z$, etc..
- (iii) (Substitution) If $X = Y$, then $f(X) = f(Y)$ for any function $f$. Similarly, if $X \leq Y$, then $f(X) \leq f(Y)$ for any monotone function $f$, etc..
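The asymmetric relation convention just described (together with laws (i)-(iii)) can be mimicked concretely; the following Python sketch is my own illustration, again modelling classes as finite sets of instances.

```python
# Sketch (mine): a relation between partially specified objects holds iff every
# instance of the left-hand side relates to SOME instance of the right-hand side.
def holds(rel, X, Y):
    return all(any(rel(x, y) for y in Y) for x in X)

eq = lambda a, b: a == b
le = lambda a, b: a <= b
pm3, three = {-3, +3}, {3}

print(holds(eq, three, pm3))  # True:  "3 = ±3", since 3 is an instance of ±3
print(holds(eq, pm3, three))  # False: "±3 = 3", since -3 is not an instance of 3
print(holds(le, pm3, three))  # True:  "±3 ≤ 3"
# Equivalence X ≡ Y corresponds to the two-sided condition:
print(holds(eq, pm3, three) and holds(eq, three, pm3))  # False: ±3 ≢ 3
```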
These conventions for partially specified expressions align well with informal mathematical English. For instance, as discussed in the introduction, the assertion that the sum of any two odd numbers is an even number can now be expressed as

$$(\hbox{odd number}) + (\hbox{odd number}) = \hbox{even number}.$$

Similarly, the even Goldbach’s conjecture can now be stated as

$$(\hbox{even number greater than two}) = (\hbox{prime}) + (\hbox{prime}),$$

while the Archimedean property of the reals can be reformulated as the assertion that

$$x \leq \hbox{natural number}$$

for any real number $x$. Note also how the equality symbol $=$ for partially specified expressions corresponds well with the multiple meanings of the word “is” in English (consider for instance “two plus two is four”, “four is even”, and “the sum of two odd numbers is even”); the set-theoretic counterpart of this concept would be a sort of amalgam of the ordinary equality relation $=$, the inclusion relation $\in$, and the subset relation $\subset$.
There are however a number of caveats one has to keep in mind when dealing with formulas involving partially specified objects. The first, which has already been mentioned, is a lack of symmetry: $X = Y$ does not imply $Y = X$; similarly, $X \leq Y$ does not imply $Y \geq X$. The second is that negation behaves very strangely, so much so that one should basically avoid using partially specified notation for any sentence that will eventually get negated. For instance, observe that the statements $3 = \pm 3$ and $3 \neq \pm 3$ are both true, while $\pm 3 = 3$ and $\pm 3 \neq 3$ are both false. In fact, the negation of such statements as $X = Y$ or $X \leq Y$ involving partially specified objects usually cannot be expressed succinctly in partially specified notation, and one must resort to using several quantifiers instead. (In the language of the arithmetic hierarchy, the negation of a $\Pi_2$ sentence is a $\Sigma_2$ sentence, rather than another $\Pi_2$ sentence.)
Another subtlety, already mentioned earlier, arises from our choice to allow different instantiations of the same class to refer to different instances, namely that the law of universal instantiation does not always work if the symbol being instantiated occurs more than once on the left-hand side. For instance, the identity

$$x - x = 0$$

is of course true for all real numbers $x$, but if one naively substitutes in the partially specified expression $\pm 3$ for $x$ one obtains the claim

$$(\pm 3) - (\pm 3) = 0,$$

which is a false statement under our conventions (because the two instances of the sign $\pm$ do not have to match). However, there is no problem with repeated instantiations on the right-hand side, as long as there is at most a single instance on the left-hand side. For instance, starting with the identity

$$0 = x - x,$$

we can validly instantiate the partially specified expression $\pm 3$ for $x$ to obtain

$$0 = (\pm 3) - (\pm 3).$$
A common practice that helps avoid these sorts of issues is to keep the partially specified quantities on the right-hand side of one’s relations, or if one is working with a chain of relations such as $X \leq Y \leq Z \leq W$, to keep the partially specified quantities away from the left-most side (so that $Y$, $Z$, and $W$ are allowed to be multi-valued, but not $X$). This doesn’t automatically prevent all issues (for instance, one may still be tempted to “cancel” an expression such as $(\pm 3) - (\pm 3)$ that might arise partway through a chain of relations), but it can reduce the chance of accidentally making an error.
One can of course translate any formula that involves partially specified objects into a more orthodox first-order logic sentence by inserting the relevant quantifiers in the appropriate places – but note that the variables used in quantifiers are always completely specified, rather than partially specified. For instance, if one expands “$x = y \pm z$” (for some completely specified quantities $x, y, z$) as “there exists $\epsilon \in \{-1,+1\}$ such that $x = y + \epsilon z$“, the quantity $\epsilon$ is completely specified; it is not the partially specified $\pm 1$. (If $y$ or $z$ were also partially specified, the first-order translation of the expression “$x = y \pm z$” would be more complicated, as it would need more quantifiers.)
One can combine partially specified notation with set builder notation, for instance the set $\{ x \in {\bf R}: x = \pm 1 | \pm 2 \}$ is just the four-element set $\{-2,-1,+1,+2\}$, since these are indeed the four real numbers $x$ for which the formula $x = \pm 1 | \pm 2$ is true. I would however avoid combining particularly heavy uses of set-theoretic notation with partially specified notation, as it may cause confusion.
Our examples above of partially specified objects have been drawn from number systems, but one can use this notation for other classes of objects as well. For instance, within the class of functions
from the reals to themselves, one can make assertions like

where

is the class of monotone increasing functions; similarly we have








(with

denoting the Fourier transform) and so forth. Or, in the class of topological spaces, we have for instance


and

while conversely the
classifying space construction gives (among other things)

Restricting to metric spaces, we have the well known equivalences

Note that in the last few examples, we are genuinely working with proper classes now, rather than sets. As the above examples hopefully demonstrate, mathematical sentences involving partially specified objects can align very well with the syntax of informal mathematical English, as long as one takes care to distinguish the asymmetric equality relation $=$ from the symmetric equivalence relation $\equiv$.
As an example of how such notation might be integrated into an actual mathematical argument, we prove a simple and well known topological result in this notation:
Proposition 2 Let $f: X \rightarrow Y$ be a continuous bijection from a compact space $X$ to a Hausdorff space $Y$. Then $f$ is a homeomorphism.
Proof: We have

$$f(\hbox{open subset of } X) \equiv f( X \backslash (\hbox{closed subset of } X) )$$

$$\equiv Y \backslash f(\hbox{closed subset of } X)$$

(since $f$ is a bijection)

$$= Y \backslash f(\hbox{compact subset of } X)$$

(since $X$ is compact)

$$= Y \backslash (\hbox{compact subset of } Y)$$

(since $f$ is continuous)

$$= Y \backslash (\hbox{closed subset of } Y)$$

(since $Y$ is Hausdorff)

$$\equiv \hbox{open subset of } Y.$$

Thus $f(\hbox{open subset of } X)$ is open, hence $f^{-1}$ is continuous. Since $f$ was already continuous, $f$ is a homeomorphism.
— 2. Working with parameters —
In order to introduce asymptotic notation, we will need to combine the above conventions for partially specified objects with a separate common adjustment to the grammar of mathematical logic, namely the ability to work with ambient parameters. This is a special case of the more general situation of interpreting logic over an elementary topos, but we will not develop the general theory of topoi here. As this adjustment is orthogonal to the adjustments in the preceding section, we shall for simplicity revert back temporarily to the traditional notational conventions for completely specified objects, and not refer to partially specified objects at all in this section.
In the formal language of first-order logic, variables such as $x, n, E$ are understood to range in various domains of discourse (e.g., $x$ could range in the real numbers, $n$ could range in the natural numbers, and $E$ in the class of sets). One can then construct various formulas, such as $P(x,n,E)$, which involve zero or more input variables (known as free variables), and have a truth value in $\{\hbox{true}, \hbox{false}\}$ for any given choice of free variables. For instance, $P(x,n,E)$ might be true for some triples $(x,n,E)$, and false for others. One can create formulas either by applying relations to various terms (e.g., applying the inequality relation $\leq$ to the terms $x, n$ gives the formula $x \leq n$ with free variables $x, n$), or by combining existing formulas together with logical connectives (such as $\wedge, \vee, \implies, \neg$) or quantifiers ($\forall$ and $\exists$). Formulas with no free variables (e.g. $\forall x \exists n: x \leq n$) are known as sentences; once one fixes the domains of discourse, sentences are either true or false. We will refer to this first-order logic as orthodox first-order logic, to distinguish it from the parameterised first-order logic we shall shortly introduce.
We now generalise this setup by working relative to some ambient set of parameters – some finite collection of variables that range in some specified sets (or classes) and may be subject to one or more constraints. For instance, one may be working with some natural number parameters $n, m$ with the constraint $m \geq n$ (say); we will keep this particular choice of parameters as a running example for the discussion below. Once one selects these parameters, all other variables under consideration are not just single elements of a given domain of discourse, but rather a family of such elements, parameterised by the given parameters; we will refer to these variables as parameterised variables to distinguish them from the orthodox variables of first-order logic. For instance, with the above parameters, when one refers to a real number $x$, one now refers not just to a single element of ${\bf R}$, but rather to a function $(n,m) \mapsto x_{n,m}$ that assigns a real number $x_{n,m}$ to each choice of parameters $(n,m)$; we will refer to such a function as a parameterised real, and often write $x = x_{n,m}$ to indicate the dependence on parameters. Each of the ambient parameters can of course be viewed as a parameterised variable, thus for instance $n$ can (by abuse of notation) be viewed as the parameterised natural number that maps $(n,m)$ to $n$ for each choice $(n,m)$ of parameters.
The specific ambient set of parameters, and the constraints on them, tends to vary as one progresses through various stages of a mathematical argument, with these changes being announced by various standard phrases in mathematical English. For instance, if at some point a proof contains a sentence such as “Let $n$ be a natural number”, then one is implicitly adding $n$ to the set of parameters; if one later states “Let $m$ be a natural number such that $m \geq n$“, then one is implicitly also adding $m$ to the set of parameters and imposing a new constraint $m \geq n$. If one divides into cases, e.g., “Suppose now that $n$ is odd… now suppose instead that $n$ is even”, then the constraint that $n$ is odd is temporarily imposed, then replaced with the complementary constraint that $n$ is even, then presumably the two cases are combined and the constraint is removed completely. A bit more subtly, parameters can disappear at the conclusion of a portion of an argument (e.g., at the end of a proof of a lemma or proposition in which the parameter was introduced), replaced instead by a summary statement (e.g., “To summarise, we have shown that whenever $n, m$ are natural numbers with $m \geq n$, then …”) or by the statement of the lemma or proposition in whose proof the parameter was temporarily introduced. One can also remove a variable from the set of parameters by specialising it to a specific value.
Any term that is well-defined for individual elements of a domain, is also well-defined for parameterised elements of the domain by pointwise evaluation. For instance, if $x = x_{n,m}$ and $y = y_{n,m}$ are parameterised real numbers, one can form the sum $x+y$, which is another parameterised real number, by the formula

$$(x+y)_{n,m} := x_{n,m} + y_{n,m}.$$

Given a relation between terms involving parameterised variables, we will interpret the relation as being true (for the given choice of parameterised variables) if it holds for all available choices of parameters $(n,m)$ (obeying all ambient constraints), and false otherwise (i.e., if it fails for at least one choice of parameters). For instance, the relation

$$x \leq y$$

would be interpreted as true if one has

$$x_{n,m} \leq y_{n,m}$$

for all choices of parameters $(n,m)$, and false otherwise.
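The following short Python sketch (my own; the finite grid of parameters and the constraint $m \geq n$ are just stand-ins for the running example) models parameterised reals as functions of the parameters, with a relation counted as true only if it holds for every admissible choice of parameters.

```python
# Illustration (mine): parameterised reals as functions of the parameters (n, m).
PARAMS = [(n, m) for n in range(1, 20) for m in range(n, 20)]  # constraint m >= n

def relation_true(rel, x, y):
    """True iff rel(x(n,m), y(n,m)) holds for all tested choices of parameters."""
    return all(rel(x(n, m), y(n, m)) for (n, m) in PARAMS)

x = lambda n, m: n + m   # a parameterised real
y = lambda n, m: 2 * m   # another parameterised real

print(relation_true(lambda a, b: a <= b, x, y))  # True: n + m <= 2m whenever n <= m
print(relation_true(lambda a, b: a < b, x, y))   # False: fails at choices with n == m
```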
Remark 3 In the framework of nonstandard analysis, the interpretation of truth is slightly different; the above relation would be considered true if the set of parameters for which the relation holds lies in a given (non-principal) ultrafilter. The main reason for doing this is that it allows for a significantly more general transfer principle than the one available in this setup; however we will not discuss the nonstandard analysis framework further here. (Our setup here is closer in spirit to the “cheap” version of nonstandard analysis discussed in this previous post.)
With this convention an annoying subtlety emerges with regard to boolean connectives (conjunction $\wedge$, disjunction $\vee$, implication $\implies$, and negation $\neg$), in that one now has to distinguish between internal interpretation of the connectives (applying the connectives pointwise for each choice of parameters before quantifying over parameters), and external interpretation (applying the connectives after quantifying over parameters); there is not a general transfer principle from the former to the latter. For instance, the sentence

$$n \hbox{ is odd}$$

is false in parameterised logic, since not every choice of parameter $n$ is odd. On the other hand, the internal negation

$$\neg (n \hbox{ is odd})$$

or equivalently

$$n \hbox{ is even}$$

is also false in parameterised logic, since not every choice of parameter $n$ is even. To put it another way, the internal disjunction

$$(n \hbox{ is odd}) \vee (n \hbox{ is even})$$

is true in parameterised logic, but the individual statements

$$n \hbox{ is odd}$$

and

$$n \hbox{ is even}$$

are not (so the external disjunction of these statements is false). To maintain this distinction, I will reserve the boolean symbols ($\wedge$, $\vee$, $\implies$, $\neg$) for internal boolean connectives, and reserve the corresponding English connectives (“and”, “or”, “implies”, “not”) for external boolean connectives.
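To illustrate the internal/external distinction numerically (a toy example of mine, checking the parameter $n$ over a finite range only), note how the internal disjunction holds at every parameter value even though neither disjunct holds externally:

```python
# Toy check (mine) of internal vs external connectives for the parameter n.
NS = range(1, 50)
is_odd  = lambda n: n % 2 == 1
is_even = lambda n: n % 2 == 0

internal_disjunction = all(is_odd(n) or is_even(n) for n in NS)  # "odd ∨ even", pointwise
externally_odd  = all(is_odd(n) for n in NS)    # the sentence "n is odd"
externally_even = all(is_even(n) for n in NS)   # the sentence "n is even"

print(internal_disjunction)               # True
print(externally_odd or externally_even)  # False: the external disjunction fails
```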
Because of this subtlety, orthodox dichotomies and trichotomies do not automatically transfer over to the parameterised setting. In the orthodox natural numbers, a natural number $k$ is either odd or even; but a parameterised natural number $k = k_{n,m}$ could be neither even all the time nor odd all the time. Similarly, given two parameterised real numbers $x, y$, it could be that none of the statements $x < y$, $x = y$, $x > y$ are true all the time. However, one can recover these dichotomies or trichotomies by subdividing the parameter space into cases. For instance, in the latter example, one could divide the parameter space into three regions, one where $x < y$ is always true, one where $x = y$ is always true, and one where $x > y$ is always true. If one can prove a single statement in all three subregions of parameter space, then of course this implies the statement in the original parameter space. So in practice one can still use dichotomies and trichotomies in parameterised logic, so long as one is willing to subdivide the parameter space into cases at various stages of the argument and recombine them later.
There is a similar distinction between internal quantification (quantifying over orthodox variables before quantifying over parameters), and external quantification (quantifying over parameterised variables after quantifying over parameters); we will again maintain this distinction by reserving the symbols $\forall, \exists$ for internal quantification and the English phrases “for all” and “there exists” for external quantification. For instance, the assertion

“for all real numbers $x, y$, one has $x + y = y + x$”,

when interpreted in parameterised logic, means that for all parameterised reals $x = x_{n,m}$ and $y = y_{n,m}$, the assertion

$$x_{n,m} + y_{n,m} = y_{n,m} + x_{n,m}$$

holds for all $(n,m)$. In this case it is clear that this assertion is true and is in fact equivalent to the orthodox sentence

$$\forall x \in {\bf R}\ \forall y \in {\bf R}: x + y = y + x.$$

More generally, we do have a restricted transfer principle in that any true sentence in orthodox logic that involves only quantifiers and no boolean connectives, will transfer over to parameterised logic (at least if one is willing to use the axiom of choice freely, which we will do in this post). We illustrate this (somewhat arbitrarily) with the Lagrange four square theorem

$$\forall k \in {\bf N}\ \exists a, b, c, d \in {\bf N}: k = a^2 + b^2 + c^2 + d^2. \ \ \ \ \ (5)$$

This sentence, true in orthodox logic, implies the parameterised assertion that for every parameterised natural number $k = k_{n,m}$, there exist parameterised natural numbers $a = a_{n,m}$, $b = b_{n,m}$, $c = c_{n,m}$, $d = d_{n,m}$, such that

$$k_{n,m} = a_{n,m}^2 + b_{n,m}^2 + c_{n,m}^2 + d_{n,m}^2$$

for all choices of parameters $(n,m)$. To see this, we can Skolemise the four-square theorem (5) to obtain functions $k \mapsto a(k)$, $k \mapsto b(k)$, $k \mapsto c(k)$, $k \mapsto d(k)$ such that

$$k = a(k)^2 + b(k)^2 + c(k)^2 + d(k)^2$$

for all orthodox natural numbers $k$. Then to obtain the parameterised claim, one simply sets $a_{n,m} := a(k_{n,m})$, $b_{n,m} := b(k_{n,m})$, $c_{n,m} := c(k_{n,m})$, and $d_{n,m} := d(k_{n,m})$. Similarly for other sentences that avoid boolean connectives. (There are some further classes of sentences that use boolean connectives in a restricted fashion that can also be transferred, but we will not attempt to give a complete classification of such classes here; in general it is better to work out some examples of transfer by hand to see what can be safely transferred and which ones cannot.)
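The Skolemisation step can be caricatured in code as follows (my own sketch; the brute-force witness search and the particular parameterised number $k_{n,m} = nm + 7$ are arbitrary choices for illustration): a witness function for the orthodox theorem, composed with a parameterised input, yields parameterised witnesses.

```python
# Sketch (mine) of transfer via Skolemisation for the four-square theorem.
from itertools import product

def four_square_witness(k):
    """Brute-force Skolem functions: return (a, b, c, d) with a^2+b^2+c^2+d^2 = k."""
    r = range(int(k ** 0.5) + 1)
    for a, b, c, d in product(r, repeat=4):
        if a * a + b * b + c * c + d * d == k:
            return a, b, c, d

k = lambda n, m: n * m + 7  # some parameterised natural number k_{n,m}
for n, m in [(1, 2), (3, 5), (4, 9)]:
    a, b, c, d = four_square_witness(k(n, m))   # parameterised witnesses a_{n,m}, ...
    assert a * a + b * b + c * c + d * d == k(n, m)
    print((n, m), k(n, m), (a, b, c, d))
```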
So far this setup is not significantly increasing the expressiveness of one’s language, because any statement constructed so far in parameterised logic can be quickly translated to an equivalent (and only slightly longer) statement in orthodox logic. However, one gains more expressive power when one allows one or more of the parameterised variables to have a specified type of dependence on the parameters, and in particular depending only on a subset of the parameters. For instance, one could introduce a real number $C$ which is an absolute constant in the sense that it does not depend on either of the parameters $n, m$; these are a special type of parameterised real, in much the same way that constant functions are a special type of function. Or one could consider a parameterised real $x_n$ that depends on $n$ but not on $m$, or a parameterised real $y_m$ that depends on $m$ but not on $n$. (One could also place other types of constraints on parameterised quantities, such as continuous or measurable dependence on the parameters, but we will not consider these variants here.)
By quantifying over these restricted classes of parameterised functions, one can now efficiently write down a variety of statements in parameterised logic, of types that actually occur quite frequently in analysis. For instance, we can define a parameterised real $x = x_{n,m}$ to be bounded if there exists an absolute constant $C$ such that $|x| \leq C$; one can of course write this assertion equivalently in orthodox logic as

$$\exists C\ \forall n, m: |x_{n,m}| \leq C.$$

One can also define the stronger notion of $x$ being $1$-bounded, by which we mean $|x| \leq 1$, or in orthodox logic

$$\forall n, m: |x_{n,m}| \leq 1.$$

In the opposite direction, we can assert the weaker statement that $x$ is bounded in magnitude by a quantity $C_n$ that depends on $n$ but not on $m$; in orthodox logic this becomes

$$\forall n\ \exists C_n\ \forall m: |x_{n,m}| \leq C_n.$$
As before, each of the example statements in parameterised logic can be easily translated into a statement in traditional logic. On the other hand, consider the assertion that a parameterised real $x = x_{n,m}$ is expressible as the sum $y_n + z_m$ of a quantity $y_n$ depending only on $n$ and a quantity $z_m$ depending on $m$. (For instance, the parameterised real $n+m$ would be of this form, but the parameterised real $nm$ would not.) Now it becomes significantly harder to translate this statement into first-order logic! One can still do so fairly readily using second-order logic (in which one also is permitted to quantify over operators as well as variables), or by using the language of set theory (so that one can quantify over a set of functions of various forms). Indeed if one is parameterising over proper classes rather than sets, it is even possible to create sentences in parameterised logic that are non-firstorderisable; see this previous blog post for more discussion.
Another subtle distinction that arises once one has parameters is the distinction between “internal” or “parameterised” sets (sets depending on the choice of parameters), and external sets (sets of parameterised objects). For instance, the interval $[0,n]$ is an internal set – it assigns an orthodox set $[0,n]$ of reals to each choice of parameters $(n,m)$; elements of this set consist of all the parameterised reals $x = x_{n,m}$ such that $x_{n,m} \in [0,n]$ for all $(n,m)$. On the other hand, the collection of bounded reals – i.e., parameterised reals $x_{n,m}$ such that there is a constant $C$ for which $|x_{n,m}| \leq C$ for all choices of parameters $(n,m)$ – is not an internal set; it does not arise from taking an orthodox set $E_{n,m}$ of reals for each choice of parameters. (Indeed, if it did do so, since every constant real is bounded, each $E_{n,m}$ would contain all of ${\bf R}$, which would make the collection in question the set of all parameterised reals, rather than just the bounded reals.) To maintain this distinction, we will reserve set builder notation such as $\{ x \in {\bf R}: P(x) \}$ for internally defined sets, and use English words (such as “the collection of all bounded parameterised reals”) to denote external sets. In particular, we do not make sense of such expressions as $\{ x \in {\bf R}: x \hbox{ is bounded} \}$ (or $\{ x \in {\bf R}: x = O(1) \}$, once asymptotic notation is introduced). In general, I would recommend that one avoids combining asymptotic notation with heavy use of set theoretic notation, unless one knows exactly what one is doing.
— 3. Asymptotic notation —
We now simultaneously introduce the two extensions to orthodox first order logic discussed in previous sections. In other words,
- We permit the use of partially specified mathematical objects in one’s mathematical statements (and in particular, on either side of an equation or inequality).
- We allow all quantities to depend on one or more of the ambient parameters.
In particular, we allow for the use of partially specified mathematical quantities that also depend on one or more of the ambient parameters. This allows us to now formally introduce asymptotic notation. There are many variants of this notation, but here is one set of asymptotic conventions that I for one like to use:
Definition 4 (Asymptotic notation) Let $X$ be a non-negative quantity (possibly depending on one or more of the ambient parameters).
- We use $O(X)$ to denote a partially specified quantity in the class of quantities $Y$ (that can depend on one or more of the ambient parameters) that obeys the bound $|Y| \leq CX$ for some absolute constant $C$. More generally, given some ambient parameters $\lambda_1,\dots,\lambda_k$, we use $O_{\lambda_1,\dots,\lambda_k}(X)$ to denote a partially specified quantity in the class of quantities $Y$ that obeys the bound $|Y| \leq CX$ for some constant $C$ that can depend on the $\lambda_1,\dots,\lambda_k$ parameters, but not on the other ambient parameters.
- We also use $Y \lesssim X$ or $X \gtrsim Y$ as a synonym for $Y = O(X)$, and $X \sim Y$ as a synonym for $X \lesssim Y \lesssim X$. (In some fields of analysis, $Y \ll X$, $X \gg Y$, and $X \asymp Y$ are used instead of $Y \lesssim X$, $X \gtrsim Y$, and $X \sim Y$.)
- If $x$ is a parameter and $x_0$ is a limiting value of that parameter (i.e., the parameter space for $x$ and $x_0$ both lie in some topological space, with $x_0$ an adherent point of that parameter space), we use $o_{x \rightarrow x_0}(X)$ to denote a partially specified quantity in the class of quantities $Y$ (that can depend on $x$ as well as the other ambient parameters) that obeys a bound of the form $|Y| \leq c(x) X$ for all $x$ in some neighborhood of $x_0$, and for some quantity $c(x)$ depending only on $x$ such that $c(x) \rightarrow 0$ as $x \rightarrow x_0$. More generally, given some further ambient parameters $\lambda_1,\dots,\lambda_k$, we use $o_{x \rightarrow x_0; \lambda_1,\dots,\lambda_k}(X)$ to denote a partially specified quantity in the class of quantities $Y$ that obey a bound of the form $|Y| \leq c_{\lambda_1,\dots,\lambda_k}(x) X$ for all $x$ in a neighbourhood of $x_0$ (which can also depend on $\lambda_1,\dots,\lambda_k$), where $c_{\lambda_1,\dots,\lambda_k}(x)$ depends on $\lambda_1,\dots,\lambda_k$ and goes to zero as $x \rightarrow x_0$. (In this more general form, the limit point $x_0$ is now also permitted to depend on the parameters $\lambda_1,\dots,\lambda_k$.)

Sometimes (by explicitly declaring one will do so) one suppresses the dependence on one or more of the additional parameters $\lambda_1,\dots,\lambda_k$, and/or the asymptotic limit $x \rightarrow x_0$, in order to reduce clutter.
(This is the “non-asymptotic” form of the $O()$ notation, in which the bounds are assumed to hold for all values of parameters. There is also an “asymptotic” variant of this notation that is commonly used in some fields, in which the bounds in question are only assumed to hold in some neighbourhood of an asymptotic value $x_0$, but we will not focus on that variant here.)
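Before turning to examples, here is a small numerical caricature (mine, not part of the definition) of what a single big-O claim unpacks to: the statement “$(N+1)^2 = O(N^2)$”, with $N$ an ambient parameter ranging over the positive integers, asserts the existence of one absolute constant $C$ with $(N+1)^2 \leq C N^2$ for every $N$; the constant $C = 4$ is exhibited and spot-checked below (a finite check is of course no substitute for the one-line proof $(N+1)^2 \leq (2N)^2$).

```python
# Spot-check (mine) of the absolute constant C = 4 in "(N+1)^2 = O(N^2)".
def witnesses_big_O(f, g, C, values):
    """Check |f(N)| <= C * g(N) on the supplied parameter values."""
    return all(abs(f(N)) <= C * g(N) for N in values)

print(witnesses_big_O(lambda N: (N + 1) ** 2, lambda N: N ** 2, C=4,
                      values=range(1, 100001)))   # True
```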
Thus, for instance,
is a free variable taking values in the natural numbers, and
are quantities depending on
, then the statement
denotes the assertion that
for all natural numbers
, where
is another quantity depending on
such that
for all
, and some absolute constant
independent of
. Similarly,
denotes the assertion that
for all natural numbers
, where
is as above.
For a slightly more sophisticated example, consider the statement

where again

is a free variable taking values in the natural numbers. Using the conventions for multi-valued expressions, we can translate this expression into first-order logic as the assertion that whenever

are quantities depending on

such that there exists a constant

such that

for all natural numbers

, and there exists a constant

such that

for all natural numbers

, then we have

for all

, where

is a quantity depending on natural numbers

with the property that there exists a constant

such that

. Note that the first-order translation of
(6) is substantially longer than
(6) itself; and once one gains familiarity with the big-O notation,
(6) can be deciphered much more quickly than its formal first-order translation.
It can be instructive to rewrite some basic notions in analysis in this sort of notation, just to get a slightly different perspective. For instance, if
is a function, then:
-
is continuous iff one has
for all
. -
is uniformly continuous iff one has
for all
. - A sequence
of functions is equicontinuous if one has
for all
and
(note that the implied constant depends on the family
, but not on the specific function
or on the index
). - A sequence
of functions is uniformly equicontinuous if one has
for all
and
. -
is differentiable iff one has
for all
. - Similarly for uniformly differentiable, equidifferentiable, etc..
Remark 5 One can define additional variants of asymptotic notation such as
,
, or
; see this wikipedia page for some examples. See also the related notion of “sufficiently large” or “sufficiently small”. However, one can usually reformulate such notations in terms of the above-mentioned asymptotic notations with a little bit of rearranging. For instance, the assertion 
can be rephrased as an alternative: 
When used correctly, asymptotic notation can suppress a lot of distracting quantifiers (“there exists a
such that for every
one has…”) or temporary notation which is introduced once and then discarded (“where
is a constant, not necessarily the same as the constant
from the preceding line…”). It is particularly well adapted to situations in which the order of magnitude of a quantity of interest is of more importance than its exact value, and can help capture precisely such intuitive notions as “large”, “small”, “lower order”, “main term”, “error term”, etc.. Furthermore, I find that analytic assertions phrased using asymptotic notation tend to align better with the natural sentence structure of mathematical English than their logical equivalents in other notational conventions (e.g. first-order logic).
On the other hand, the notation can be somewhat confusing to use at first, as expressions involving asymptotic notation do not always obey the familiar laws of mathematical deduction if applied blindly; but the failures can be explained by the changes to orthodox first order logic indicated above. For instance, if $n$ is a positive integer (which we will assume to be at least say $100$, in order to make sense of quantities such as $\log\log n$), then
- (i) (Asymmetry of equality) We have
, but it is not true that
. In the same spirit,
is a true statement, but
is a false statement. Similarly for the
notation. This of course stems from the asymmetry of the equality relation
that arises once one introduces partially specified objects. - (ii) (Intransitivity of equality) We have
, and
, but
. This is again stemming from the asymmetry of the equality relation. - (iii) (Incompatibility with functional notation)
generally refers to a function of
, but
usually does not refer to a function of
(for instance, it is true that
). This is a slightly unfortunate consequence of the overloaded nature of the parentheses symbols in mathematics, but as long as one keeps in mind that
and
are not function symbols, one can avoid ambiguity. - (iv) (Incompatibility with mathematical induction) We have
, and more generally
for any
, but one cannot blindly apply induction and conclude that
for all
(with
viewed as an additional parameter). This is because to induct on an internal parameter such as
, one is only allowed to use internal predicates
; the assertion
, which also quantifies externally over some implicit constants
, is not an internal predicate. However, external induction is still valid, permitting one to conclude that
for any fixed (external)
, or equivalently that
if
is now viewed instead as a parameter. - (v) (Failure of the law of generalisation) Every specific (or “fixed”) positive integer, such as
, is of the form
, but the positive integer
is not of the form
. (Again, this can be fixed by allowing implied constants to depend on the parameter one is generalising over.) Like (iv), this arises from the need to distinguish between external (fixed) variables and internal parameters. - (vi) (Failure of the axiom schema of specification) Given a set
and a predicate
involving elements
of
, the axiom of specification allows one to use set builder notation to form the set
. However, this is no longer possible if
involves asymptotic notation. For instance, one cannot form the “set”
of bounded real numbers, which somehow manages to contain all fixed numbers such as
, but not any unbounded free parameters such as
. (But, if one uses the nonstandard analysis formalism, it becomes possible again to form such sets, with the important caveat that such sets are now external sets rather than internal sets. For instance, the external set
of bounded nonstandard reals becomes a proper subring of the ring of nonstandard reals.) This failure is again related to the distinction between internal and external predicates. - (vii) (Failure of trichotomy) For non-asymptotic real numbers
, exactly one of the statements
,
,
hold. As discussed in the previous section, this is not the case for asymptotic quantities: none of the three statements
,
, or
are true, while all three of the statements
,
, and
are true. (This trichotomy can however be restored by using the nonstandard analysis formalism, or (in some cases) by restricting
to an appropriate subsequence whenever necessary.) - (viii) (Unintuitive interaction with
) Asymptotic notation interacts in strange ways with the
symbol, to the extent that combining the two together is not recommended. For instance, the statement
is a true statement, because for any expression
of order
, one can find another expression
of order
such that
for all
. Instead of using statements such as
in which one of
contain asymptotic notation, I would instead recommend using the different statement “it is not the case that
“, e.g. “it is not the case that
“. And even then, I would generally only use negation of asymptotic statements in order to demonstrate the incorrectness of some particular argument involving asymptotic notation, and not as part of any positive argument involving such notations. These issues are of course related to (vii). - (ix) (Failure of cancellation law) We have
, but one cannot cancel one of the
terms and conclude that
. Indeed,
is not equal to
in general. (For instance,
and
, but
.) More generally,
is not in general equal to
or even to
(although there is an important exception when one of
dominates the other). Similarly for the
notation. This stems from the care one has to take in the law of substitution when working with partially specified quantities that appear multiple times on the left-hand side. - (x) (
,
do not commute with signed multiplication) If
are non-negative, then
and
. However, these laws do not work if
is signed; indeed, as currently defined
and
do not even make sense. Thus for instance
cannot be written as
. (However, one does have
and
when
is signed.) This comes from the absolute values present in the
-notation. For beginners, I would recommend not placing any signed quantities inside the
and
symbols if at all possible. - (xi) (
need not distribute over summation) For each fixed
,
, and
, but it is not the case that
. This example seems to indicate that the assertion
is not true, but that is because we have conflated an external (fixed) quantity
with an internal parameter
(the latter being needed to define the summation
). The more precise statements (with
now consistently an internal parameter) are that
, and that the assertion
is not true, but the assertion
is still true (why?). - (xii) (
does not distribute over summation, I) Let
, then for each fixed
one has
; however,
. Thus an expression of the form
can in fact grow extremely fast in
(and in particular is not of the form
or even
). Of course, one could replace
here by any other growing function of
. This is a similar issue to (xi); it shows that the assertion 
can fail, but if one has uniformity in the
parameter then things are fine: 
- (xiii) (
does not distribute over summation, II) In the previous example, the
summands were not uniformly bounded. If one imposes uniform boundedness, then one now recovers the
bound, but one can still lose the
bound. For instance, let
, then
is now uniformly bounded in magnitude by
, and for each fixed
one has
; however,
. Thus, viewing
now as a parameter, the expression
is bounded by
, but not by
. (However, one can write
since by our conventions the implied decay rates in the
summands are uniform in
.) - (xiv) (
does not distribute over summation, III) If
are non-negative quantities, and one has a summation of the form
(noting here that the decay rate is not allowed to depend on
), then one can “factor out” the
term to write this summation as
. However this is far from being true if the sum
exhibits significant cancellation. This is most vivid in the case when the sum
actually vanishes. For another example, the sum
is equal to
, despite the fact that
uniformly in
, and that
. For oscillating
, the best one can say in general is that 
Similarly for the
notation. I see this type of error often among beginner users of asymptotic notation. Again, the general remedy is to avoid putting any signed quantities inside the
or
notations.
Perhaps the quickest way to develop some basic safeguards is to be aware of certain “red flags” that indicate incorrect, or at least dubious, uses of asymptotic notation, as well as complementary “safety indicators” that give more reassurance that the usage of asymptotic notation is valid. From the above examples, we can construct a small table of such red flags and safety indicators for any expression or argument involving asymptotic notation:
Red flag | Safety indicator |
Signed quantities in RHS | Absolute values in RHS |
Casually using iteration/induction | Explicitly allowing bounds to depend on length of iteration/induction |
Casually summing an unbounded number of terms | Keeping number of terms bounded and/or ensuring uniform bounds on each term |
Casually changing a “fixed” quantity to a “variable” or “bound” one | Keeping track of what parameters implied constants depend on |
Casually establishing lower bounds or asymptotics | Establishing upper bounds and/or being careful with signs and absolute values |
Signed algebraic manipulations (e.g., cancellation law) | Unsigned algebraic manipulations |
Asserting “$X \neq O(Y)$” | Negation of “$X = O(Y)$”; or, better still, avoiding negation altogether |
Swapping LHS and RHS | Not swapping LHS and RHS |
Using trichotomy of order | Not using trichotomy of order |
Set-builder notation | Not using set-builder notation (or, in non-standard analysis, distinguishing internal sets from external sets) |
When I say here that some mathematical step is performed “casually”, I mean that it is done without any of the additional care that is necessary when this step involves asymptotic notation (that is to say, the step is performed by blindly applying some mathematical law that may be valid for manipulation of non-asymptotic quantities, but can be dangerous when applied to asymptotic ones). It should also be noted that many of these red flags can be disregarded if the portion of the argument containing the red flag is free of asymptotic notation. For instance, one could have an argument that uses asymptotic notation in most places, except at one stage where mathematical induction is used, at which point the argument switches to more traditional notation (using explicit constants rather than implied ones, etc.). This is in fact the opposite of a red flag, as it shows that the author is aware of the potential dangers of combining induction and asymptotic notation. Similarly for the other red flags listed above; for instance, the use of set-builder notation that conspicuously avoids using the asymptotic notation that appears elsewhere in an argument is reassuring rather than suspicious.
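As a numerical caricature of the “casually summing an unbounded number of terms” red flag above (my own example, not one from the table), the summands below are bounded for each fixed index $i$, yet the sum of the first $n$ of them grows far faster than $n$, because the bounds are not uniform in $i$:

```python
# Each a(i, n) is bounded for fixed i (it equals 2^i once and 0 otherwise),
# but sum_{i=1}^n a(i, n) = 2^n, which is certainly not O(n).
def a(i, n):
    return 2 ** n if i == n else 0

for n in (5, 10, 20):
    print(n, sum(a(i, n) for i in range(1, n + 1)))  # 32, 1024, 1048576
```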
If one finds oneself trying to use asymptotic notation in a way that raises one or more of these red flags, I would strongly recommend working out that step as carefully as possible, ideally by writing out both the hypotheses and conclusions of that step in non-asymptotic language (with all quantifiers present and in the correct order), and seeing if one can actually derive the conclusion from the hypothesis by traditional means (i.e., without explicit use of asymptotic notation). Conversely, if one is reading a paper that uses asymptotic notation in a manner that casually raises several red flags without any apparent attempt to counteract them, one should be particularly skeptical of these portions of the paper.
As a simple example of asymptotic notation in action, we give a proof that convergent sequences also converge in the Cesàro sense:
Proposition 6 If $x_n$ is a sequence of real numbers converging to a limit $x$, then the averages $\frac{1}{N} \sum_{n=1}^N x_n$ also converge to $x$ as $N \rightarrow \infty$.
Proof: Since
converges to
, we have

so in particular for any

we have

whenever

. For

, we thus have





whenever

. Setting

to grow sufficiently slowly to infinity as

(for fixed

), we may simplify this to

for all

, and the claim follows.
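Finally, a quick numerical illustration of Proposition 6 (my own check, with an arbitrary convergent sequence; it is of course not a substitute for the proof): the Cesàro averages approach the same limit, just more slowly.

```python
# Cesàro averages of a convergent sequence converge to the same limit.
def cesaro_average(x, N):
    return sum(x(n) for n in range(1, N + 1)) / N

x = lambda n: 1.0 + (-1) ** n / n   # x_n -> 1 as n -> infinity
for N in (10, 100, 1000, 10000):
    print(N, x(N), cesaro_average(x, N))   # both columns tend to 1
```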