It seemed to me that John Armstrong’s comments on the proper viewpoint on differentiation deserved to be taken up more fully: partly for their own sake, partly to see what can and can’t be done in Th(Calc), and partly by way of tying together some loose strands in this blog entry.
Certain objects of Th(Calc) could be called “affine spaces”, according to the rules: (1) $1$ is an affine space; (2) if $A$ is an affine space, then so is $R \times A$; (3) if $A$ and $B$ are affine spaces, then so is $A \times B$. Up to isomorphism these are just the finite powers $R^n$ of the ring object $R$. So we are trying to do calculus on affine spaces.
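To make the inductive clauses concrete, here is a minimal Haskell sketch of the grammar (the names `Affine` and `dim` are mine, purely for illustration); it records the observation that every object so generated denotes some $R^n$:

```haskell
-- Deep embedding of the "affine space" grammar: 1, R x A, A x B.
data Affine = One | RTimes Affine | Prod Affine Affine

-- The n for which the object denotes R^n.
dim :: Affine -> Int
dim One        = 0
dim (RTimes a) = 1 + dim a
dim (Prod a b) = dim a + dim b
```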
John A. proposed that differentiation could be seen as a natural family of type

$$D_A \colon \hom(A, R) \to \hom(A, \hom(A, R)),$$

or equivalently of type

$$D_A \colon \hom(A, R) \to \hom(A \times A, R)$$

(from this point on I am going to switch to exponential notation for internal homs, so these read $R^A \to (R^A)^A$ and $R^A \to R^{A \times A}$). Intuitively, this should be the mapping

$$f \mapsto (a \mapsto df_a),$$

where the affine spaces $A$ and $R$ are identified with their tangent spaces at each of their points.
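As a purely numerical sketch of this shape in the one-dimensional case $A = R$ (taking `Double` for $R$ and a central difference with an arbitrary step size `h`; nothing here is internal to Th(Calc)):

```haskell
-- f |--> (a |--> df_a), where df_a is the linear map v |--> f'(a) * v,
-- identifying R with its tangent space at each point.  A finite
-- difference with h = 1e-6 stands in for the true derivative.
johnD :: (Double -> Double) -> Double -> (Double -> Double)
johnD f a = \v -> v * (f (a + h) - f (a - h)) / (2 * h)
  where h = 1e-6

-- johnD (\x -> x * x) 3 1  ~  6.0
```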
Elsewhere, I had suggested that one way of doing calculus might be to add a new type $D$ (an “infinitesimal object”) and mimic Synthetic Differential Geometry purely in the cartesian closed setting. On objects the tangent bundle of a type $X$ would be $T X = X^D$, and there would be an (internal) derivative operator

$$Y^X \to (T Y)^{T X}$$

arising from the internal hom $(-)^D \colon Y^X \to (Y^D)^{X^D}$. Under the Kock-Lawvere axiom we would have an isomorphism $R^D \cong R \times R$. Very simple calculations which hold generally in cartesian closed categories would then show that $T(Y^X) \cong (T Y)^X$ for all types $X$, so that $T(R^X) \cong R^X \times R^X$, and $T A \cong A \times A$ for affine spaces $A$. Thus the internal derivative would be a natural family

$$d_A \colon R^A \to (R \times R)^{A \times A},$$

intuitively described by the mapping

$$f \mapsto \big( (a, v) \mapsto (f(a), df_a(v)) \big),$$

a minor tweak on what John proposed.
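Concretely, the Kock-Lawvere isomorphism $R^D \cong R \times R$ is, in effect, the algebra of dual numbers, and the family $d_A$ is then what programmers know as forward-mode automatic differentiation. A small Haskell sketch of the one-variable case (my own illustration; none of these names come from Th(Calc)):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Dual a v stands for a + v*d with d^2 = 0: the Kock-Lawvere reading
-- of R^D as R x R (a value together with a tangent component).
data Dual = Dual Double Double deriving Show

instance Num Dual where
  Dual a v + Dual b w = Dual (a + b) (v + w)
  Dual a v * Dual b w = Dual (a * b) (a * w + v * b)  -- d^2 = 0
  negate (Dual a v)   = Dual (negate a) (negate v)
  fromInteger n       = Dual (fromInteger n) 0
  abs    = error "abs: not used in this sketch"
  signum = error "signum: not used in this sketch"

-- The internal derivative f |--> T f, i.e. (a, v) |--> (f a, f'(a) * v),
-- for any f written over an arbitrary commutative ring.
tangent :: (forall r. Num r => r -> r) -> (Double, Double) -> (Double, Double)
tangent f (a, v) = case f (Dual a v) of Dual y y' -> (y, y')

-- tangent (\x -> x * x + 1) (3, 1)  ==  (10.0, 6.0)
```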
The question I want to consider is how much of this family is expressible in the theory Th(Calc) that John B. introduced.
I only got so far: I was getting a little stuck just at the “calculus of variations” point, which might lie just beyond the purview of what one can do with a commutative ring $R$ and a single derivative operator $R^R \to R^R$.
Here’s what I have so far: suppose we have a family of natural transformations

$$d_A \colon R^A \to (R \times R)^{A \times A}$$

which is definable in Th(Calc) for some full subcategory $\mathcal{C}$ of types $A$. Then for any type $X$ we can define

$$d_A^X \colon (R^X)^A \to (R^X \times R^X)^{A \times A}$$

as the evident composite

$$(R^X)^A \cong (R^A)^X \xrightarrow{(d_A)^X} ((R \times R)^{A \times A})^X \cong (R^X \times R^X)^{A \times A}.$$
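Read as a lambda term, this composite is just an argument swap. A Haskell sketch (names mine), with a type variable `r` in place of $R$ and the given operator passed in as an argument, since only its shape matters here:

```haskell
-- From d_A : R^A -> (R x R)^(A x A) we get d_A^X on (R^X)-valued
-- functions by holding the X-argument fixed:
--   (R^X)^A ~ (R^A)^X --(d_A)^X--> ((R x R)^(A x A))^X ~ (R^X x R^X)^(A x A),
-- up to the canonical iso (R x R)^X ~ R^X x R^X.
paramD :: ((a -> r) -> (a, a) -> (r, r))           -- d_A
       -> ((a -> x -> r) -> (a, a) -> x -> (r, r)) -- d_A^X, in curried form
paramD d f aa = \x -> d (\a -> f a x) aa
```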
Moreover, if natural operators $d_A^X$ and $d_A^Y$ are defined for the same class of objects $\mathcal{C}$, then there is an operator

$$(R^X \times R^Y)^A \to \big( (R^X \times R^Y) \times (R^X \times R^Y) \big)^{A \times A}$$

defined by the evident composite: split $(R^X \times R^Y)^A$ into $(R^X)^A \times (R^Y)^A$, apply the two operators, and recombine.
Thus, if we have derivative operators

$$d_A \colon R^A \to (R \times R)^{A \times A}$$

for some class of “admissible” types $A$, then we have derivative operators $d_A^X$ for admissible $A$ and any type $X$.
Clearly $R$ is admissible: $d_R$ is definable from the given derivative operator $R^R \to R^R$ together with the ring multiplication. If $A$ and $B$ are admissible, so is their product $A \times B$: the requisite map

$$d_{A \times B} \colon R^{A \times B} \to (R \times R)^{(A \times B) \times (A \times B)}$$

may be defined by a lambda term with a free variable $f$ of type $R^{A \times B}$, as

$$\lambda ((a, b), (a', b')).\ \big( f(a, b),\ \partial_1 f (a, b)(a') + \partial_2 f (a, b)(b') \big),$$

where $\partial_1$ and $\partial_2$ are (with a little good will) partial derivative operators definable from

$$d_A^B \colon (R^B)^A \to (R^B \times R^B)^{A \times A} \quad\text{and}\quad d_B^A \colon (R^A)^B \to (R^A \times R^A)^{B \times B},$$

and the algebraic operations of sum and multiplication are defined on every type of the form $R^X$ by the obvious pointwise and coordinatewise definitions which stem from the commutative ring structure on $R$.
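In the same spirit, the product step can be sketched as a small combinator: differentiate in the $A$-direction with $b$ held fixed, in the $B$-direction with $a$ held fixed, and add the two directional derivatives using the pointwise ring structure. A hedged Haskell sketch (`Num r` standing in for the commutative ring structure on $R$):

```haskell
-- d_{A x B} f ((a,b),(a',b')) =
--   ( f (a,b), del_1 f (a,b) a' + del_2 f (a,b) b' )
prodD :: Num r
      => ((a -> r) -> (a, a) -> (r, r))     -- d_A, used with b as parameter
      -> ((b -> r) -> (b, b) -> (r, r))     -- d_B, used with a as parameter
      -> ((a, b) -> r) -> ((a, b), (a, b)) -> (r, r)
prodD dA dB f ((a, b), (a', b')) = (f (a, b), d1 + d2)
  where
    d1 = snd (dA (\u -> f (u, b)) (a, a'))  -- partial in the A-direction
    d2 = snd (dB (\v -> f (a, v)) (b, b'))  -- partial in the B-direction
```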
Taking stock of what we have so far, we have thus defined internal derivative operators

$$d_A^X \colon (R^X)^A \to (R^X \times R^X)^{A \times A}$$

which are natural in $A$ and $X$. It would be nice to extend to all types $A$ in place of affine $A$ (it suffices to do this for types of the form $R^X$; cf. calculus of variations), but I don’t see how to do this yet within the language of Th(Calc). Maybe it’s not possible(?).
Re: Classical vs Quantum Computation (Week 16)
In class, Tom Payne asked if we could use the lambda calculus to talk about things like partial differentiation with respect to a given variable, $\partial/\partial x$, and not just the abstract differentiation operator $D \colon R^R \to R^R$.
I found this fascinating, but I couldn’t think of anything intelligent to say about it.
Later, he emailed me and wrote:
I replied: