
July 9, 2004

Scandinavian but not Abelian

Posted by urs

Stepping out of the propeller plane at Karlstad airport, I found myself surrounded by pine forests and in an atmosphere quite unlike that of larger airports – but what I did not find was my luggage.

[Photo: Karlstad airport]

Apart from the obvious inconveniences this meant that a couple of papers on non-abelian 2-form fields which I had brought with me were spending the night in Copenhagen, instead of attending the conference ‘NCG and rep theory in math-phys’ with me.

Not that there weren’t plenty of other things to think about, like Schweigert’s talk on how modular tensor categories and Frobenius algebras know about open strings, as well as many very mathematical talks with categories here and functors there

[Photo: commuting diagrams]

but after I had given my talk on Loop space methods in string theory it turned out that several people were interested in nonabelian 2-form gauge theories, and on my way back to the hotel I had a very interesting conversation with Martin Cederwall about precisely the lost hep-th/0206130, hep-th/0207017, hep-th/0312112 which I had intended to pull out of my hat on precisely such an occasion.

But maybe I was lucky after all, because when on the next day at lunch I talked about gauge invariances in 2-form theories with Jens Fjelstad, we had to reproduce the essential formulas by ourselves on a sheet of scrap paper, instead of just looking them up, and somehow this triggered the right neurons for me, and after a nap that evening I got up and saw the light.

[Photo: Karlstad center]

[Update 07/15/04: The issue discussed below can now be found discussed in hep-th/0407122.]

The point is that the 2-form on target space gives rise to an ordinary 1-form connection on loop space, of course, and I think I know precisely what this 1-form connection looks like, because I can derive it from boundary state deformations.

In a somewhat schematic and loose fashion we can write

(1) $\nabla = d + \oint_A(B)\,,$

following the notation in Hofman’s paper, but including a second factor of the A-holonomy, as I have discussed before.

Using this connection and the ordinary formula for its gauge transformations, one can check that global gauge transformations on loop space correspond to the ordinary 1-form gauge transformations

(2) $A \mapsto U A U^{-1} + U\,(d U^{-1})$
(3) $B \mapsto U B U^{-1}$

on target space, while local gauge transformations on loop space give rise to

(4) $A \mapsto A$
(5) $B \mapsto B + d_A \lambda$

up to some correction terms which don’t have a target space analogue. I have given a slightly more detailed discussion of this on sci.physics.strings.

As with any riddle, after having written this down it looks pretty obvious, but at least I haven’t seen this clearly before.

The question now seems to be: can we even expect to be able to write down a theory of point particles that is local in target space and respects the above gauge symmetries? And what happens to the correction terms?

Rather, I’d suspect something like an OSFT which has the true loop space 2-form gauge invariance, but whose level-truncated effective field theory breaks some of it. But I don’t know.

When I mentioned to Martin Cederwall that we should maybe consider YM theory on loop space using the field strength $(d + \oint_A(B))^2$, he remarked that this would be a theory local in loop space, while ordinary OSFT is non-local in loop space (because the 3 ‘loops’ (or rather open intervals) involved in an interaction are not small deformations of each other and hence do not correspond to nearby points in loop space).

Well, so I don’t know what all this means. But as far as I can see nobody else does either, at least nobody understands it completely. Amitabha Lahiri kindly made me aware of a couple of papers he has on attempts to construct field theories with some reasonable 2-form gauge invariances. I will try to have a look at these papers and see whether the Lagrangians considered there might be understood in terms of the loop space connection

(6) $d + \oint_A(B) = dX^{(\mu,\sigma)}\Big(\partial_{(\mu,\sigma)} + U_A(0,\sigma)\, B_{\mu\nu}\big(X(\sigma)\big)\, X'^{\nu}(\sigma)\, U_A(\sigma,0)\Big)\,.$
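(For readers without those papers at hand: $U_A(\sigma_1,\sigma_2)$ is meant, schematically, as the Wilson line of the 1-form A along the string between the parameter values $\sigma_1$ and $\sigma_2$, i.e. the usual path-ordered exponential; conventions and prefactors are left loose here:)

```latex
U_A(\sigma_1,\sigma_2)
  = P \exp\!\left( \int_{\sigma_1}^{\sigma_2} d\sigma\;
        A_\mu\big(X(\sigma)\big)\, X'^{\mu}(\sigma) \right) .
```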
Posted at July 9, 2004 8:28 AM UTC

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/394

116 Comments & 0 Trackbacks

Re: Scandinavian but not Abelian

Hi -

In a private email (before noticing the new SCT entry), I said…

I haven’t mastered the loop space formulation, but the idea seems really natural to me. I am sure that you are correct and people will take notice of this and your deformation stuff. An obvious question: can the process be iterated? What would be the loop space of loop space? :) Would you get non-abelian 3-form stuff? :)

You responded…

Good question. My advisor asked me the same question today at lunch. My answer
was that, yes, I think this iterates. Naively at least I can imagine the loop
space over loop space, for instance (and I seem to recall John Baez having
mentioned something like that before). This would be “torus space”, the space
of all maps from the torus into target space. It should be relevant for the
supermembrane, which indeed couples to the 3-form that appears in 11d
supergravity. There should then also be nonabelian 3-forms and so on.

But - wait - we should be discussing this at the String Coffee Table. It could
need some activity. :-)

Ok! Ok! :)

Another obvious question…

Is it possible to generalize “loop” space to “string” space that includes both open and closed strings? If you could construct a “string space of string space”, then you could talk about general maps from seemingly general 2d manifolds (branes?) into target space.

Just curious, but would even the loop space of loop space admit things more general than tori? I can almost picture a Klein bottle among other things, e.g. closed strings that twist around.

[snip of some stuff about Pohlmeyer invariants I don’t think you want me reproducing in public ;)]

I also said (in regard to the lousy communication skills of most string theorists)

I know! I see you being assimilated! :) Your notes are becoming less and less comprehensible ;)

to which you replied

Ok. So we should start a project: Rephrase everything that I think can be said
about strings in loop space formulation in a way that is understandable for
non-experts in strings. I think there is a very good chance that this is
possible. Many aspects of string theory look surprisingly natural in loop
space formulation. For instance isn’t it kind of remarkable that the concept
of the spacefilling D9 brane translates in loop space language just to the
constant 0-form on loop space? What are called ‘boundary states’ in string
theory are really just the constant 0-form acted on by some unitary operators
on loop space.

This sounds like a good idea :) Maybe we can work on this together and put something on the arXiv. Then again, knowing how notoriously slow I am at writing things up, you might want to go ahead on your own. I’d be happy to at least make suggestions :) We started to do this for your string theory seminar, where I was able to write down a pretty sleek coordinate independent version of the Polyakov action. Doing so made the relation to BF-YM theory kind of obvious to me. Knowing that there are probably infinitely many ways to write down an action that reduces to Nambu-Goto “on shell” caused me to lose interest :)

This loop space idea is pretty neat though. Of course, I would suggest presenting the discrete version first, which would be much simpler :) Contraction “integrals” of continuum indices become summations, which more closely resembles the usual contraction of indices.

Just a thought…

I also said

One of the things that inspired me to learn some elementary string theory was a statement you made (several times in fact) in response to complaints that string theory does not predict anything. You said that the fact that string theory does not predict anything should not be considered necessarily a bad thing because Newton’s gravitational theory doesn’t predict anything either. Not without specifying initial conditions. Finding the right vacua in string theory is like trying to find the right initial configuration of planets in the solar system. Once this is done, you can make many predictions about the subsequent orbits of the planets. This made a lot of sense to me.

You replied

Nice to hear. I get kind of frustrated hearing [people] repeat [their] claim ‘no predictions’ no matter what. In precisely the same sense field theory as such does not make any predictions. I mean, just the statement “The world is described by field theory.” does not allow you to predict anything. You instead need to pick a particular Lagrangian. String theory has the advantage that all these Lagrangians are not just there to be picked but are part of a larger framework which in principle should tell you what to choose. So in principle string theory is more predictive than field theory.

I continued…

On the other hand, I am not quite convinced that the situation is as easy as that. Before Newton could write down his gravitational theory, he first had to invent his calculus. Once you have the calculus, you need some basic physical laws, like F = dp/dt. Once you have the physical laws, THEN you can construct a physical model, e.g. the solar system.

to which you replied

I am not sure about your distinction between laws and models. But one should
certainly agree with what everybody is saying, namely that the full picture of
string theory has not yet emerged.

In this case of Newton’s gravitational theory, the “law” is the law of gravitation. The “model” is the specific placement of planets. The model is governed by the law. I don’t know how this analogy quite translates for strings.

I continued

I get the feeling that string theory has not even passed the “calculus” phase yet, i.e. the “calculus of strings” is still being developed.

To which you replied

BTW, I think Martinec used to call conformal field theory the “calculus of
string”.

Neat :)

I continued

On another note, I have actually been working hard on the discrete stuff. I don’t have a lot to show for it though :) It might be a waste of time, but I have been LaTeX’ifying Robb’s 21 postulates for axiomatizing Minkowski space. Then I will LaTeX’ify Zeeman’s postulates and finally Penrose’s. I hope to compare and contrast them. I will then try to distill out the “meaning” of what they are trying to say. I will then try to use this “meaning” to motivate the discrete approaches of Sorkin, D&MH, and our notes. Well, that is the plan anyway :)

to which you replied

Robb has 21 axioms? So many?

Yes. Although all of them rely only on the relations “before” and “after”, it does seem unnecessarily convoluted. By the way, I forgot to mention Mundy in my list. Mundy basically reformulated what Robb did in a much more concise manner using the relation of being “light-like” separated as opposed to Robb’s “time-like” relation. Another motivation for LaTeX’ifying everything is to help me verify that Mundy actually does reproduce everything Robb does.

Gotta run!

Eric

Posted by: Eric on July 9, 2004 5:35 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Oops!

… sleek coordinate independent version of the Polyakov action.

Of course Polyakov is coordinate independent. I meant that I wrote down a sleek notation that was “coordinate free”. If coordinates do not even appear in the expression, then it is obviously coordinate independent. I prefer “coordinate free” notation whenever possible.

Eric

Posted by: Eric on July 9, 2004 5:44 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Is it possible to generalize ‘loop’ space to ‘string’ space that includes both open and closed strings?

Hm, well, er, in principle, why not? I mean, this beast certainly exists somehow. But I think an important insight is that lots of open string physics can be captured instead much more elegantly by the boundary state formalism, where open string physics happens inside closed string inner products with a closed string state inserted on one side.

Just curious, but would even the loop space of loop space admit things more general than tori? I can almost picture a Klein bottle among other things, e.g. closed strings that twist around.

Depends on how precisely you define loop space. In order for a Klein bottle to appear as a point in the loop space of the loop space of target space, the loop space of target space has to be that of unoriented loops, i.e. one where a single point in loop space corresponds to a given loop together with its orientation reversal.

Contraction ‘integrals’ of continuum indices become summations, which more closely resembles the usual contraction of indices.

That would be polygon space. Polygon space is different from a discretized loop space. But of course one could consider it, too. The problem is that it badly breaks conformal invariance on the worldsheet. I have speculated before how one could try to make sense of it anyway. I am not sure yet that there is anything interesting to be found. But maybe there is.

In this case of Newton’s gravitational theory, the ‘law’ is the law of gravitation. The ‘model’ is the specific placement of planets. The model is governed by the law. I don’t know how this analogy quite translates for strings.

So by model you mean a point in phase space, i.e. one solution of the system. Take the IKKT version of strings, as a drastic example. The law is $[A^\mu, [A_\mu, A_\nu]] = 0$ and a ‘model’ in your sense is any set of 10 large matrices $A_\mu$ that solve this equation.
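If you want to see the ‘law vs. model’ distinction at work on a computer, here is a minimal numerical sketch (my own toy illustration, with a flat Euclidean metric assumed so that raised and lowered indices coincide): any mutually commuting set of matrices trivially solves the IKKT equation, while a generic set does not.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 10, 8  # 10 matrices (one per target space dimension), each N x N

def comm(X, Y):
    return X @ Y - Y @ X

def ikkt_residual(A):
    # max over nu of the norm of sum_mu [A_mu, [A_mu, A_nu]]
    return max(
        np.linalg.norm(sum(comm(Am, comm(Am, An)) for Am in A))
        for An in A
    )

# A commuting 'model': random diagonal matrices solve the equation exactly.
A = [np.diag(rng.standard_normal(N)) for _ in range(D)]
print(ikkt_residual(A))   # 0.0

# A generic non-commuting configuration is *not* a model:
B = [rng.standard_normal((N, N)) for _ in range(D)]
print(ikkt_residual(B))   # large residual
```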

Posted by: Urs Schreiber on July 9, 2004 6:17 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

BTW, concerning loop space and CFT and the ‘calculus of string’ I should emphasize that the loop space formulation and the usual CFT language are two sides of the same coin. Usual CFT works in the Heisenberg picture with worldsheet-time dependent operators (fields), whereas the loop space context uses the canonical Schrödinger picture formulation of the worldsheet field theory. As always, some aspects of a theory are more easily visible in the Heisenberg picture, others in the Schrödinger picture. In field theory the Schrödinger picture is usually very awkward. But I think at least on the 2d worldsheet it proves to be a useful point of view for some questions.

Posted by: Urs Schreiber on July 9, 2004 7:21 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Ok. So we should start a project: Rephrase everything that I think can be said about strings in loop space formulation in a way that is understandable for non-experts in strings.

I am spending a little time thinking about this. The first thing I would suggest is that you stop calling

(1) $d_K = d + i_K$

a “deformation of the exterior derivative.” This is not a deformation of the exterior derivative, because the exterior derivative is the transpose of the boundary operator. Instead, $d_K$ is some other operator constructed from $d$ and $i_K$, i.e. it is the “square root” of the Lie derivative. Since the motivation is to try to make things more transparent to the non-experts (like myself), I think it is worth some haggling over defining terminology and notation.

Another thing, why is the kinematical configuration space of bosonic strings the space of parameterized strings on target space as opposed to unparameterized strings? Why introduce parameterizations if nothing is going to depend on them?

Another thing, can you define integration of forms on loop space? Stokes’ theorem? This is how you should define the exterior derivative on loop space and not via some abstract algebraic (unmotivated) definition. I believe that Stokes’ theorem on loop space should play a central role and as far as I can see, it hasn’t even been mentioned yet (unless I missed it).

More coming soon…

Eric

Posted by: Eric on July 10, 2004 10:14 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

In http://arxiv.org/abs/hep-th/0401175 you say

Let $(\mathcal{M}, g)$ be a pseudo-Riemannian manifold, the target space, with metric g, and let $\mathcal{L}\mathcal{M}$ be its loop space consisting of smooth embeddings of the circle into $\mathcal{M}$:

(1) $\mathcal{L}\mathcal{M} := C^\infty(S^1, \mathcal{M})\,.$

To me this seems to only define the point set of $\mathcal{L}\mathcal{M}$ and says nothing about its topology. Do two points that are “close” in $\mathcal{L}\mathcal{M}$ correspond to two loops that are “close” in $\mathcal{M}$? In both instances, how is “close” defined?

How is the topology of loop space defined?

Eric

Posted by: Eric on July 10, 2004 10:51 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Several years ago, I went bonkers over how beautiful loop space methods were and how I thought it was a shame that they didn’t seem to have a place in the “northern” approach to loop quantum gravity. I was rooting for the southern approach because it made use of the “loop derivative.” I did believe and I do believe that loop space methods are too beautiful not to be important for something.

Now here I am, coming full circle, approaching it again from a slightly different angle. I thought that for historical purposes, it might be entertaining to take a look at

Loop derivative

Eric

PS: For even more fun, check this out.

Posted by: Eric on July 11, 2004 3:06 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Walking down memory lane, i.e. rereading old s.p.r. posts, I found that a url I gave is no longer valid. Some minor tweaking turned up the current valid url

http://www-dft.ts.infn.it/~ansoldi/RedTape/Curriculum/HTML/Articles.html

In particular, I see something you (Urs) might find interesting (if you haven’t seen it)

String Propagator: A Loop Space Representation

To quote the abstract

The string quantum kernel is normally written as a functional sum over the string coordinates and the world-sheet metrics. As an alternative to this quantum field-inspired approach, we study the closed bosonic string propagation amplitude in the functional space of loop configurations. This functional theory is based entirely on the Jacobi variational formulation of quantum mechanics, without the use of a lattice approximation. The corresponding Feynman path integral is weighed by a string action which is a reparametrization invariant version of the Schild action. We show that this path integral formulation is equivalent to a functional “Schrödinger” equation defined in loop-space. Finally, for a free string, we show that the path integral and the functional wave equation are exactly solvable.

Eric

Posted by: Eric on July 11, 2004 3:19 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

String Propagator: A Loop Space Representation

I am aware of this paper. In spite of what the abstract seems to indicate, I could never really relate it to what I am concerned with, though. Maybe my fault.

Concerning the ‘loop derivative’, I have looked at some (possibly not all) of the links that you provided, in particular the simple explanation that John Baez gives here.

Seems to me that when the space of differential forms on loop space is taken to be well-behaved enough, this derivative exists and is pretty much just a Lie derivative on loop space. From what John Baez says in that message, it looks like the reason this object does not exist in ‘northern LQG’ is the fact which we have discussed before in the context of the ‘LQG string’, namely that there we have non-weakly-continuous reps and nonseparable Hilbert spaces.

Posted by: Urs Schreiber on July 11, 2004 9:46 AM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

How is the topology of loop space defined?

The best is really to think of loop space as an $\infty$-dimensional manifold with coordinates $\{X^{(\mu,\sigma)}\}$. Then the natural topology is just the usual one, where an open set is given by open intervals in each of the coordinates.

Heuristically, two loops are close together if one is obtained from the other by deforming it ever so slightly.

Posted by: Urs Schreiber on July 11, 2004 9:27 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

I would suggest that you stop calling $d_K = d + i_K$ a ‘deformation of the exterior derivative’

Here ‘deformation of X’ is meant in the sense of ‘a continuous 1-parameter family of objects such that for parameter = 0 we have the original object and for parameter ≠ 0 we have the deformed object’. In this sense this is a deformation, since really one should write $d_K = d + iT\,\iota_K$, where T, the string’s tension, is the parameter which, when turned on, deforms this operator away from the original exterior derivative.

why is the kinematical configuration space of bosonic strings the space of parameterized strings on target space as opposed to unparameterized strings? Why introduce parameterizations if nothing is going to depend on them?

Good point. The answer is: Because it is easier. The question is pretty much the same as ‘Why work with coordinates in GR if nothing is going to depend on them anyway?’ We need the coordinates to even write down the formulas which express their irrelevance! :-)

So for the string we have for instance the Polyakov action. It has a redundancy, namely conformal invariance. That’s why we get constraints, which express that physical states should not have this redundancy. But the constraints hence must act on a space of states which does have the redundancy. The subspace annihilated by them is that subspace where the redundancy is gone - the physical subspace.

But the ‘deformed’ exterior derivative above is a pretty neat way to deal with this. It is not nilpotent, but its nilpotency is restored when it is restricted to the reparametrization-invariant subspace.

So this is somewhat similar to your construction of chains from path algebras. Paths, on which the ‘boundary operator’ is not nilpotent, restrict to chains, on which it is.
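Spelled out in one line (using $d^2 = 0 = \iota_K^2$ and the tension parameter from above), the statement is just a Cartan-type formula:

```latex
d_K^2 = (d + iT\,\iota_K)^2
      = iT\,( d\,\iota_K + \iota_K\, d )
      = iT\,\mathcal{L}_K \,,
```

and the Lie derivative $\mathcal{L}_K$ along the reparametrization flow vanishes precisely on reparametrization-invariant forms.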

Another thing, can you define integration of forms on loop space?

Yes. The Hodge inner product on forms over loop space is essentially (up to a certain switch of sign in the 0-modes) the inner product on the superstring’s Hilbert space. A state in the string’s Hilbert space is an (inhomogeneous) differential form on loop space.

Stokes’ theorem?

In principle, yes. I think I haven’t come across its analogue in the CFT language yet, but it should be there.

Posted by: Urs Schreiber on July 11, 2004 9:21 AM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Hi Urs,

I think I am going to put a little more effort into trying to convince you to consider changing terminology about deformed exterior derivatives.

As we discussed here, there are at least three profound ways to “deform” the exterior derivative:

(1) 1.) $d + A$
(2) 2.) $d + i_X$
(3) 3.) $d + d^\dagger\,.$

The first one is the “square root” of the curvature, the second is the square root of the Lie derivative, and the third is the square root of the Laplace-Beltrami operator. In each case, we could append a factor T to the second term that we could, in principle, let go to zero and recover the usual exterior derivative. In this sense, we could consider each one to be a deformation of the exterior derivative. However, I would argue that this belies the geometrical meaning behind each one.
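To spell out what “square root” means in each of the three cases (a standard computation, using $d^2 = 0$, $(d^\dagger)^2 = 0$ and Cartan’s formula):

```latex
\begin{aligned}
(d + A)^2         &= dA + A \wedge A = F
                     && \text{(the curvature)} \\
(d + i_X)^2       &= d\, i_X + i_X\, d = \mathcal{L}_X
                     && \text{(the Lie derivative)} \\
(d + d^\dagger)^2 &= d\, d^\dagger + d^\dagger d = \Delta
                     && \text{(the Laplace--Beltrami operator)}
\end{aligned}
```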

Would it be so controversial to think of some name besides “deformed exterior derivative” for $d + i_X$? The first item is called the “covariant exterior derivative”. I am not too thrilled about this name either, but at least it makes clear that it is geometrically different from the exterior derivative. The third item is called the Dirac-Kähler operator, which I am perfectly happy with. I think I recall you referring to $d + i_X$ as the susy generator. Is that true? Could we call the second item the “susy generator” or something instead of deformed exterior derivative?

Ah ha! :) In your paper

On deformations of 2d SCFTs

you have a footnote referring to the equation

(4) $d_K \mapsto e^{-W}\, d_K\, e^{W}\,,\qquad d_K^\dagger \mapsto e^{W^\dagger}\, d_K^\dagger\, e^{-W^\dagger} \qquad (1.2)$

that says

Throughout this paper we use the term “deformation” to mean the operation (1.2) on the superconformal generators, the precise definition of which is given in §3.2 (p.15). These “deformations” are actually isomorphisms of the superconformal algebra, but affect its representations in terms of operators on the exterior bundle over loop space.

Quite apart from the fact that you give a precise meaning to the word “deformation” for which $d + i_X$ is not one, it looks like you are referring to $d_K$ as a “superconformal generator”. Why can’t we just use this terminology for $d_K$?

If we refer to $d_K$ as a deformation of d, then we need to refer to $e^{-W} d_K e^{W}$ as a “deformation of a deformation of d.” I understand that it is not incorrect to refer to $d_K$ as a deformation of d, but if our goal is to make things clearer, shouldn’t we try to avoid clumsy phrases like the above? Then again, I am not too sure I am fond of the term “superconformal generator” either because it is so scary :) Is there a less intimidating term we can use? Something like “the square root of the Lie derivative”? :)

Another reason why I would like to try to keep the exterior derivative as a more sacred operator is that I am enamored by the beauty and profundity of the generalized Stokes’ theorem. We should leave this temple unspoiled if we can :)

If we are going to deform d, then we had better deform the boundary map $\partial$ as well. Although we could, in principle, deform both in a way that yields a deformed Stokes’ theorem, I don’t think anyone would consider this a natural thing to do.

Random thought…

I wonder if there is some natural operator $H_X : C_p \to C_{p+1}$ such that

(5) $\int_S i_X\alpha = \int_{H_X S}\alpha\,.$

I can imagine X determining a flow which sweeps the p-chain S along, forming a $(p+1)$-dimensional chain. If there were such a thing, then maybe it wouldn’t be so unnatural to define

(6) $\partial_K = \partial + H_K$

so that

(7) $\int_S d_K\alpha = \int_{\partial_K S}\alpha\,.$

Hmm…

Anyway…

Stokes’ theorem?

In principle, yes. I think I haven’t come across its analogue in the CFT language yet, but it should be there.

If this is a hole, I think it is a significant hole and maybe we should fill it.

Eric

Posted by: Eric on July 11, 2004 3:54 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Hi again,

I wonder if there is some natural operator $H_X : C_p \to C_{p+1}$ such that

(1) $\int_S i_X\alpha = \int_{H_X S}\alpha\,.$

I can imagine X determining a flow which sweeps the p-chain S along, forming a $(p+1)$-dimensional chain.

I get the strong feeling I am reinventing old well-known material here, but this is kind of interesting :)

Given a p-chain S and a (smooth) vector field X, the flow generated by X will drag S along, sweeping out a $(p+1)$-chain. Let $\phi(t) : \mathcal{M} \to \mathcal{M}$ be the flow and let $\phi(t)_* S$ denote the p-chain S carried along the flow to time t. Then define

(2) $H_X(t)S = \int_0^t dt'\, \phi(t')_* S$

to be the $(p+1)$-chain obtained by sweeping S along X for a time t.

I could very well be (and probably am) wrong, but it looks like we have

(3) $\int_S i_X\alpha = \frac{d}{dt}\Big[\int_{H_X(t)S}\alpha\Big]_{t=0}\,.$

This feels right. If it is, that would be pretty neat.

It seems to be right for the case where $\alpha = df$ is an exact 1-form, i.e. for the left-hand side we have

(4) $\int_p i_X\, df = (i_X\, df)_p = df(X)_p\,,$

which is the directional derivative of f along X evaluated at point p. For the right-hand side we have

(5) $\frac{d}{dt}\Big[\int_{H_X(t)p} df\Big]_{t=0} = \frac{d}{dt}\Big[\int_{\partial H_X(t)p} f\Big]_{t=0} = \frac{d}{dt}\Big[f\big(\phi(t)_* p\big) - f(p)\Big]_{t=0}\,,$

which is also the directional derivative of f along X evaluated at p. So it works for this case, but this might be too special because $i_X\, df = \mathcal{L}_X f$.

I think the case when α is exact is covered in Frankel (I’ll need to check), but I don’t recall seeing this relation

(6) $\int_S i_X\alpha = \frac{d}{dt}\Big[\int_{H_X(t)S}\alpha\Big]_{t=0}$

for α not exact.
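Since I couldn’t resist, here is a quick numerical sanity check of this relation (my own throwaway script; the 2-form, the vector field, and the first-order Euler sweep are ad hoc choices, not anything from a paper):

```python
import numpy as np

# Check  int_S i_X alpha  =  d/dt [ int_{H_X(t)S} alpha ]_{t=0}
# for alpha = f dx^dy on R^2, S the unit circle, X a generic vector field.

f = lambda x, y: np.exp(x) + x * y
X = lambda x, y: (1.0 + 0.3 * y, 0.5 * x - 0.2 * y)

n = 2000
th = np.linspace(0, 2 * np.pi, n, endpoint=False)
dth = 2 * np.pi / n
gx, gy = np.cos(th), np.sin(th)        # the 1-chain S
tx, ty = -np.sin(th), np.cos(th)       # tangent d(gamma)/d(theta)
Xx, Xy = X(gx, gy)

# LHS: i_X (f dx^dy) = f (X^x dy - X^y dx), integrated over S
lhs = np.sum(f(gx, gy) * (Xx * ty - Xy * tx)) * dth

# RHS: integrate alpha over the chain swept out by the first-order flow
# Phi(s, theta) = gamma(theta) + s X(gamma(theta)), 0 <= s <= t, then take
# a difference quotient at t = 0.  (The Euler sweep agrees with the true
# flow sweep to O(t), which is all the derivative at t = 0 sees.)
ddth = lambda a: (np.roll(a, -1) - np.roll(a, 1)) / (2 * dth)  # periodic d/dtheta
dXx, dXy = ddth(Xx), ddth(Xy)

def swept_integral(t, m=50):
    total = 0.0
    for si in (np.arange(m) + 0.5) * (t / m):      # midpoint rule in s
        px, py = gx + si * Xx, gy + si * Xy
        ux, uy = tx + si * dXx, ty + si * dXy      # d(Phi)/d(theta)
        # pullback of dx^dy onto (s, theta) coordinates
        total += np.sum(f(px, py) * (Xx * uy - ux * Xy)) * dth * (t / m)
    return total

t = 1e-3
print(lhs, swept_integral(t) / t)   # the two numbers agree to O(t)
```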

Eric

Posted by: Eric on July 11, 2004 4:41 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

I think the case when α is exact is covered in Frankel (I’ll need to check), but I don’t recall seeing this relation

(1) $\int_S i_X\alpha = \frac{d}{dt}\Big[\int_{H_X(t)S}\alpha\Big]_{t=0}$

for α not exact.

Ok. I just checked Frankel and he has the expression

(2) $\frac{d}{dt}\Big[\int_{S(t)}\alpha\Big]_{t=t'} = \int_{S(t')} \mathcal{L}_X\alpha\,,$

where $S(t) = \phi(t)_* S$. This is somewhat different from what I am proposing in the quote above. However, I did check to make sure that the two expressions are compatible, and they are (or seem to be).

From the expression in Frankel, we have

(3) $\frac{d}{dt}\Big[\int_{S(t)}\alpha\Big]_{t=t'} = \int_{S(t')} \mathcal{L}_X\alpha = \int_{S(t')}\big(i_X\, d\alpha + d\, i_X\alpha\big) = \int_{S(t')} i_X\, d\alpha + \int_{\partial S(t')} i_X\alpha\,.$

Now considering the first of the two terms on the right-hand side above we have (using my expression)

(4) $\int_{S(t')} i_X\, d\alpha = \frac{d}{dt}\Big[\int_{H_X(t)S} d\alpha\Big]_{t=t'} = \frac{d}{dt}\Big[\int_{\partial[H_X(t)S]}\alpha\Big]_{t=t'}$

and considering the second term we have

(5) $\int_{\partial S(t')} i_X\alpha = \frac{d}{dt}\Big[\int_{H_X(t)\,\partial S}\alpha\Big]_{t=t'}\,.$

It seems kind of miraculous the way it works out, but unless I made a mistake, it appears that (while keeping track of orientation) we have

(6) $\partial\big[H_X(t)S\big] + H_X(t)\,\partial S = S(t) - S$

so that

(7) $\frac{d}{dt}\Big[\int_{\partial[H_X(t)S] + H_X(t)\partial S}\alpha\Big]_{t=t'} = \frac{d}{dt}\Big[\int_{S(t)-S}\alpha\Big]_{t=t'} = \frac{d}{dt}\Big[\int_{S(t)}\alpha\Big]_{t=t'}$

as it should :)

This gives me a boost of confidence that perhaps my expression is not too far off (and might actually be correct).

Recall that the reason I am interested in this in the first place is that I am trying to see if there is some natural way to extend Stokes’ theorem into something that might be called a “super Stokes’ theorem” :) I really like the sound of that :) The super Stokes’ theorem should look something like

(8) $\int_S d_K\alpha = \int_{\partial_K S}\alpha\,.$

From the above, it appears that this is not possible in the continuum. However, there certainly is a natural extension of this to the discrete theory. Yet another reason why the discrete theory is superior to the continuum ;)

I claim that in the discrete theory you can certainly write down a very natural “super Stokes’ theorem”, where the word “super” is not meant to mean “great”; it relates to supersymmetry :)

Eric

Posted by: Eric on July 11, 2004 6:46 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

By the way, if any of this can be made to make sense and there is a “super Stokes’ theorem”, then this justifies a proposal to call $d_K$ the “super exterior derivative” and $\partial_K$ the “super boundary” :)

Just a thought :)

This way we can refer to

(1) $d + A + i_K$

as the “covariant super exterior derivative.” Phew! What a mouthful :)

Eric

Posted by: Eric on July 11, 2004 6:56 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Would it be so controversial to think of some name besides ‘deformed exterior derivative’ for $d + \iota_X$?

You certainly have a point, since in any case the deformation here is different from these other deformations - at least on the surface of it. (But there is a subtle relation: actually it is possible to obtain $d + i\,\iota_K$ from similarity transformations and linear combinations of d alone. This is the content of equations (701) and (702) of this. But I so far haven’t managed to make any good use of this.)

I wonder if there is some natural operator $H_X : C_p \to C_{p+1}$ such that

(1) $\int_S \iota_X\alpha = \int_{H_X S}\alpha\,.$

I don’t know if this works for general vector fields, or for general Killing vector fields X, but I think something like that should work for $X = K$, the reparametrization Killing vector on loop space.

The reason is essentially that $\iota_K$ is nothing but the T-dual of d!

I give the explanation of that weird-sounding statement in that deformation paper. The point is that under the exchange of form creators with annihilators and of partial derivatives with multiplication by $X'$, the canonical supercommutation relations remain intact. So algebraically there is little difference between $d$ and $\iota_K$, and as $d$ generates a differential calculus based on the 0-form, $\iota_K$ should generate a differential calculus based on the top form.

You write:

(2) $H_X(t)S = \int_0^t dt'\, \phi(t')_* S$

to be a $(p+1)$-chain

Wait, now I am confused: Are you claiming that a linear combination of p-chains can be a $(p+1)$-chain? I don’t see how this should work, even in the discrete case, but maybe I am misunderstanding your notation.

As I said above, I would expect that the best way to get something like Stokes for $\iota_K$ would be to regard this operator as another exterior derivative, a T-dual one, and devise a T-dual Stokes law for it.

Posted by: Urs Schreiber on July 12, 2004 10:14 AM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Too bad! :)

I was so looking forward to hearing what you had to say that I almost couldn’t sleep last night. I rushed in (half dressed) to check SCT this morning and this is all you have to say?!?!? :)

You write:

(1) $H_X(t)S = \int_0^t dt'\, \phi(t')_* S$

to be a $(p+1)$-chain

Wait, now I am confused: Are you claiming that a linear combination of p-chains can be a $(p+1)$-chain? I don’t see how this should work, even in the discrete case, but maybe I am misunderstanding your notation.

The notation could probably use some work, but think of what I am saying. We have a p-chain S and a vector field X that determines a 1-parameter flow $\phi(t) : \mathcal{M} \to \mathcal{M}$. Following Frankel, we can define

(2) $S(t) = \phi(t)_* S$

to be the p-chain S carried along X for a time t. The p-chain S is going to “sweep out” a $(p+1)$-dimensional region as it gets carried along X. The region is going to be time dependent and I denote the corresponding $(p+1)$-chain by

(3) $H_X(t)S\,.$

There is a little more to the story because a chain is more than just a point set, but involves orientation as well. This is not too difficult to work out. In the continuum, I found that I could not construct some $h_X : C_p \to C_{p+1}$ that satisfies

(4) $\int_S i_X\alpha = \int_{h_X S}\alpha\,.$

The best I could do in the continuum was (the closely related version)

(5) $\int_S i_X\alpha = \frac{d}{dt}\Big[\int_{H_X(t)S}\alpha\Big]_{t=0}\,.$

This is the best you can do using standard continuum differential geometry, because there you think of $X_p$ as being a tangent vector at a point p, whereas a tangent vector should really be associated with a little infinitesimal line segment. If instead of standard differential geometry you were dealing with synthetic differential geometry, which does treat little infinitesimal line segments, then you could get rid of the time derivative out front. Since little line segments are fundamental in the discrete theory, things work out perfectly naturally there and you obviously do not need a time derivative out front, meaning you can have a “T-dual” Stokes theorem. (I need to read your notes because I have no idea what “T-dual” means, but what you say sounds very interesting, i.e. that $i_X$ is “T-dual” to d.)

Without reading your notes yet, it seems that if you can get a T-dual Stokes theorem, then it might mean that T-dual differential geometry is more closely related to synthetic differential geometry than to standard differential geometry. That would be interesting if true :)

Eric

PS: This

(6) $\int_S i_X\alpha = \frac{d}{dt}\Big[\int_{H_X(t)S}\alpha\Big]_{t=0}$

is NEAT!!! I am pretty sure it is correct too.

Posted by: Eric on July 12, 2004 1:11 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Too bad! :)

Oops, you are right. I wasn’t paying attention closely enough.

To my defence I could cite that my brain was really occupied with another calculation concerning nonabelian loop space connections, as well as with some responses that I got on my latest spr post (unfortunately only by private email). But I shouldn’t.

You are right, the formula you give seems to be correct. It becomes pretty obvious when one picks some Cartesian set of coordinates and everything else rectangular, too.

For instance if $\alpha = f(x)\, dx^1 \wedge dx^2$ and $X = \partial_{x^2}$ and $S = \{x^2 = 0\}$, then $H_X(t)S = \{0 \le x^2 \le t\}$ and $\iota_X\alpha = f(x)\, dx^1$ (up to orientation conventions), and we get

(1) $\int_S \iota_X\alpha = \int f(x)\, dx^1$

and

(2) $\int_{H_X(t)S}\alpha = \int_0^t dx^2 \int dx^1\, f(x)\,.$

This very manifestly satisfies your formula. And if it works for little cubes then, because we are physicists, it works for everything.

Yeah, cool. This formula looks like it should have been known for ages, but I, too, cannot remember having seen it stated explicitly anywhere.

I think if we allow ourselves to be a little cavalier with notation then we can even put it in a form that is very suggestive of the application that you have in mind, namely I suggest writing

(3) $\int_S \iota_X\alpha = \int_{H_X'(0)S}\alpha\,.$

Then it would indeed be possible to write down the ‘super Stokes’ theorem’ as

(4) $\int_S (d + \iota_X)\alpha = \int_{H_X'(0)S + \partial S}\alpha\,.$

Yes, good, I like that. Is there anything we can say about the chain $H_X'(0)S + \partial S$?

Seems to be something like the infinitely tight ‘wrapping’ of S.

BTW, I think I made some progress with nonabelian connections on loop space. I found that one should maybe first concentrate on connections which are flat on loop space, i.e. which assign the identity group element to every (contractible) closed curve in loop space, i.e. to every torus.

This is not quite as trivial as it may sound. Indeed, for the loop space connection to be flat, both the 1-form A and the 2-form B generically have to be non-flat by themselves.

Moreover, for a true boundary effect in string theory, i.e. a scenario where all the nontrivial background is really living on the brane and couples only to the ends of open strings, the flat loop space connection is precisely what we need and want.

As far as I can see everybody including me is pretty much in the dark concerning the physical interpretation of nonabelian 2-form backgrounds in string theory, but what I just wrote makes a lot of sense to me, now that I think about it. In particular it removes the confusion about how anything nonabelian could couple to a closed string. (Since, as you may have heard, in string theory non-abelianness comes from open strings that attach to several D-branes. The nonabelian N×N matrices are essentially coincidence matrices describing which end of which string ends on which of the N branes.)

So maybe flat nonabelian connections on loop space are precisely what we should really be looking for. And I think this case is non-trivial and maybe only here do all the desired properties hold.

If I find the time I’ll write that up in more detail.

Posted by: Urs Schreiber on July 12, 2004 3:26 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Too bad! :)

Oops, you are right. I wasn’t paying attention closely enough.

Don’t worry about it :) I know you’ve got a million things on your mind. I almost feel guilty for bugging you with this stuff (note: I only said “almost” :)).

I like the new (?) way to view $i_X$ very much, and the neat geometrical interpretation should carry over to loop space very naturally, which I think may help significantly in making everything more understandable. Especially considering the somewhat prominent role of $i_X$ and $\mathcal{L}_X$ on loop space.

Yes, good, I like that. Is there anything we can say about the chain $H_X'(0)S + \partial S$?

Well, since it is hard to picture $H_X'(0)$, I would suggest instead considering the visualizable chain

(1) $H_X(t)S + \partial S$

for some finite t. We can even try to understand the operator

(2) $\partial_X(t) = H_X(t) + \partial\,.$

The first thing to note is the important property

(3) $H_X(t)^2 = H_X(t)\, H_X(t) = 0\,.$

The operator $H_X(t)$ has the nice geometrical picture of sweeping a p-chain S along, forming a $(p+1)$-chain. If you sweep a p-chain once and then sweep it again, you get a degenerate $(p+2)$-chain, the integral over which will always vanish.

The neat thing about $\partial_X(t)$ is that it squares to

(4) $\partial_X(t)^2 = H_X(t)\,\partial + \partial\, H_X(t) = \phi(t)_* - 1\,,$

i.e.

(5) $\partial_X(t)^2\, S = S(t) - S\,.$

This gives a beautiful interpretation of the Lie derivative and in fact is the transpose of the Lie derivative once you put a d/dt out in front of the integral.
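In formulas: pairing chains with forms and using Frankel’s relation at $t' = 0$, the transpose statement reads

```latex
\frac{d}{dt}\Big[ \int_{\partial_X(t)^2 S} \alpha \Big]_{t=0}
  = \frac{d}{dt}\Big[ \int_{S(t)} \alpha - \int_{S} \alpha \Big]_{t=0}
  = \int_S \mathcal{L}_X \alpha \,.
```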

On loop space, reparameterization invariance means that

(6) $\phi(t)_* S = S\,,$

where $\phi(t)$ is the flow generated by sweeping points around the closed string. Therefore, it seems you could similarly state parameterization invariance via

(7) $\partial_K(t)^2 = 0\,.$

Neat, huh? :)

BTW, I think I made some progress with nonabelian connections on loop space.

Nice to hear. I bet you wish you could clone yourself now more than ever :)

Remember, these are the best years of your life so no complaining ;)

Eric

Posted by: Eric on July 12, 2004 4:02 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Since things are a little easier to understand using some finite t, I just got the idea to define some operator

(1) $i_X(t) : \Omega^p \to \Omega^{p-1}$

via

(2) $\int_S i_X(t)\,\alpha := \int_{H_X(t)S}\alpha\,.$

Then we could define operators

(3) $d_X(t) = d + i_X(t)$

and

(4) $\mathcal{L}_X(t) = d_X(t)^2 = d\, i_X(t) + i_X(t)\, d\,.$

Using your “cavalier” notation, the regular Lie derivative is then actually

(5) $\mathcal{L}_X = \mathcal{L}_X'(0)\,.$
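And combining these definitions with the chain identity from my earlier comment, the finite-t “transpose” statement comes out in one line (a sketch, modulo orientation bookkeeping):

```latex
\int_S \mathcal{L}_X(t)\,\alpha
  = \int_{\partial S} i_X(t)\,\alpha + \int_S i_X(t)\,d\alpha
  = \int_{H_X(t)\,\partial S}\alpha + \int_{\partial[H_X(t)S]}\alpha
  = \int_{S(t) - S}\alpha \,,
```

so $\mathcal{L}_X(t)$ is precisely the transpose of $\phi(t)_* - 1$, matching $\partial_X(t)^2 = \phi(t)_* - 1$ above.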

Fun fun :)

Eric

Posted by: Eric on July 12, 2004 4:13 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

In an email to Urs, I said

You mentioned that i_X was somehow “T-dual” to d. Do you know where I might be able to find a discussion about this?

to which you replied

You will find the discussion as soon as you ask about it in the SCT! :-)

Ok. Here you go! :)

Eric

Posted by: Eric on July 15, 2004 3:57 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

The answer to this question is given in section 4.2.1 of hep-th/0401175, and in particular in equation (4.12).

The canonical supercommutation relations on loop space are

(1) $\big[\partial_{(\mu,\sigma)},\, X'^{(\nu,\kappa)}\big] = \delta_\mu{}^\nu\, \delta'(\sigma - \kappa)$

and

(2) $\big[\mathcal{E}_{(\mu,\sigma)},\, \mathcal{E}^{\dagger(\nu,\kappa)}\big] = \delta_\mu{}^\nu\, \delta(\sigma - \kappa)\,.$

These canonical commutation relations are preserved under the exchange

(3) $\partial_{(\mu,\sigma)} \longleftrightarrow X'_{(\mu,\sigma)}$
(4) $\mathcal{E}_{(\mu,\sigma)} \longleftrightarrow \mathcal{E}^{\dagger}{}_{(\mu,\sigma)}\,.$

Physically this corresponds to exchanging the canonical momentum at each point of the string with its ‘winding’ excitation $X' = \frac{d}{d\sigma}X$. Since the canonical brackets are preserved (as long as the 0-mode of X is not involved), this operation preserves the constraint algebra of the string and hence maps consistent string backgrounds to consistent string backgrounds. This duality is known as T-duality. In the above-mentioned paper I demonstrate how the usual facts about T-duality - plus a little more - can be deduced from this algebra isomorphism.

Ok, so this answers your question: the exterior derivative $d = \mathcal{E}^{\dagger(\mu,\sigma)}\,\partial_{(\mu,\sigma)}$ on loop space and the operator $\iota_K = \mathcal{E}_{(\mu,\sigma)}\, X'^{(\mu,\sigma)}$ of interior multiplication with the reparameterization Killing vector are interchanged under T-duality (indices being raised and lowered with the loop space metric):

(5) $d = \mathcal{E}^{\dagger(\mu,\sigma)}\,\partial_{(\mu,\sigma)} \;\longleftrightarrow\; \iota_K = \mathcal{E}_{(\mu,\sigma)}\, X'^{(\mu,\sigma)}\,.$

A subtlety is that the above isomorphism does not work as soon as the undifferentiated coordinate field plays a role. This is related to the fact that you can T-dualize only (as far as I know) along Killing directions in target space. Namely in these cases you can find coordinates such that the target space metric is independent of the dualized directions, so that no undifferentiated X along these directions appears in the string’s constraints.

Posted by: Urs Schreiber on July 15, 2004 4:35 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Hmm…

Does this have any meaning for target space? I mean, what if d and $i_X$ are defined on target space? Is there some duality

(1) $d \longleftrightarrow i_X$

even in this case, or do we have to go to loop space to see it?

Sorry if it is obvious :)

Eric

Posted by: Eric on July 15, 2004 5:57 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Well, if you forget about loop space you can note that all I really used is that $X'^{(\mu,\sigma)}$ is a Killing vector on loop space, which is not trivially a constant vector field.

So assume more generally that there is a covariant derivative $\hat\nabla$ on some manifold and a Killing vector v, so that (if application of the derivative is written as commutation with the respective operator)

(1) $[\hat\nabla_\mu,\, v_\nu] + [\hat\nabla_\nu,\, v_\mu] = 0\,.$

This implies that the bracket

$[\hat\nabla_\mu,\, v_\nu]$

is invariant under the exchange $\hat\nabla_\mu \leftrightarrow v_\mu$, because

(2) $[\hat\nabla_\mu,\, v_\nu] \mapsto [v_\mu,\, \hat\nabla_\nu] = -[\hat\nabla_\nu,\, v_\mu] = +[\hat\nabla_\mu,\, v_\nu]\,,$

by the condition that v is Killing.

So that’s formally what is going on, and it has as such nothing to do with loop space.

However, I would not know what this exchange means when the space it is used on is not loop space.

But maybe you can figure it out… :-)

Posted by: Urs Schreiber on July 15, 2004 6:14 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Is this anything like saying

(1) $\mathcal{L}_X = [d,\, i_X]$

is invariant under the exchange $d \leftrightarrow i_X$? I doubt it, because this is true regardless of whether X is Killing.

I know I’m being dense (I feel dense at the moment), but I don’t see how invariance under

(2) $\partial_\mu \longleftrightarrow X'_\mu$

implies (or is related to)

(3) $d \longleftrightarrow i_X\,.$

In what sense is the interchange a duality?

Sorry :)

Eric

Posted by: Eric on July 15, 2004 7:21 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

It is true that $\mathcal{L}_X$ is invariant under this transformation, but that’s just a subcase, so it is no contradiction that here this holds for all X.

To see the T-dualness of d and $\iota_K$, just do the substitutions which I had given.

In the expressions

(1) $d = \mathcal{E}^{\dagger(\mu,\sigma)}\,\partial_{(\mu,\sigma)}$

and

(2) $\iota_K = \mathcal{E}_{(\mu,\sigma)}\, X'^{(\mu,\sigma)}$

substitute, literally, $X'^{(\mu,\sigma)}$ for $\partial_{(\mu,\sigma)}$ and vice versa, as well as $\mathcal{E}_{(\mu,\sigma)}$ for $\mathcal{E}^{\dagger(\mu,\sigma)}$ and vice versa.

Posted by: Urs Schreiber on July 15, 2004 8:16 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Hi Urs,

I am extremely distracted by other things at the moment, but I’d still like to try to understand this. Is there some obvious way to express this duality (on target space) in a “coordinate free” manner?

I smell some kind of neat geometrical trick lying somewhere just under the surface, but can’t put my finger on it (yet).

Eric

Posted by: Eric on July 16, 2004 3:18 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

You are right that coordinates played an unduly crucial role in what I said. I haven’t thought much about a coordinate-free realization. The most invariant statement I know is that T-duality can be done along every Killing vector of target space. But it should be possible to improve on that.

Posted by: Urs Schreiber on July 16, 2004 9:13 AM | Permalink | Reply to this

T-duality

Hi Eric -

here is one way that maybe makes the geometric interpretation more manifest:

Concentrate on the fermionic part of T-duality for the moment. The duality simply exchanges ONB form creators $\sigma^a$ with ONB form annihilators $\iota_{\sigma^a}$.

The creation and annihilation operators generate an algebra isomorphic to two Clifford algebras with generators

(1) $\gamma^a_\pm := \sigma^a \pm \iota_{\sigma^a}\,.$

You see that T-duality acts on these as

(2) $\gamma^a_+ \mapsto \gamma^a_+$
(3) $\gamma^a_- \mapsto -\gamma^a_-\,.$

This manifestly preserves the Clifford algebra. In fact, we can regard this as an orthogonal transformation in O(D,D), where D is the manifold’s dimension.

The interesting thing is that no matter what signature the manifold has, the above $\gamma^a_\pm$ generators always generate Cl(D,D). T-dualities are just the O(D,D) transformations which act on this doubled Clifford algebra, leaving it invariant.
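If you want to convince yourself of the algebra in finite dimensions, here is a small numerical sketch (my own, with D = 3 and a Euclidean ONB assumed, the creators/annihilators realized by a standard Jordan-Wigner construction):

```python
import numpy as np

D = 3
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator

def kron(ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

# Jordan-Wigner: D mutually anticommuting form annihilators iota_a,
# with creators sigma^a = iota_a^dagger
iota = [kron([Z] * k + [a] + [I2] * (D - k - 1)) for k in range(D)]
sigma = [m.T for m in iota]              # real matrices, so .T = dagger

anti = lambda A, B: A @ B + B @ A
gp = [sigma[k] + iota[k] for k in range(D)]   # gamma_+^a
gm = [sigma[k] - iota[k] for k in range(D)]   # gamma_-^a

Id = np.eye(2 ** D)
for i in range(D):
    for j in range(D):
        assert np.allclose(anti(gp[i], gp[j]),  2 * (i == j) * Id)
        assert np.allclose(anti(gm[i], gm[j]), -2 * (i == j) * Id)
        assert np.allclose(anti(gp[i], gm[j]), 0 * Id)
print("gamma_+ and gamma_- generate Cl(%d,%d)" % (D, D))
# The T-duality map gamma_- -> -gamma_- manifestly preserves all three
# relations above.
```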

The point about loop space is that there you naturally have bosonic analogs of the above fermionic construction.

Posted by: Urs Schreiber on July 16, 2004 2:15 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Hello,

I have seen this paper before and we probably even discussed it once upon a time, but here it is again

Yang-Mills Theory on Loop Space
S. G. Rajeev

We will describe some mathematical ideas of K. T. Chen on calculus on loop spaces. They seem useful to understand non-abelian Yang–Mills theories.

In it, the author makes the curious statement

The set of loops on space-time is an infinite dimensional space; calculus on such spaces is in its infancy. It is too early to have rigorous definitions of continuity and differentiability of such functions. Indeed most of the work in that direction is of no value in actually solving problems of interest (rather than in showing that the solution exists.)

Now, you are throwing around all the tools of differential geometry on loop space as if it were something trivial. Is it possible that, almost as an afterthought, you came up with a legitimate differential geometry on loop space that has apparently been eluding understanding for ages? :)

If that is true, then I think it is a BIG deal :) So far you have been treating it as a mere tool to study your deformation theory, but it seems that differential geometry on loop space is something that a lot of people would find interesting. I would suggest that you maybe consider writing up a quick preprint called “Differential Geometry on Loop Space” or “Loop Space Differential Geometry” or something like that (instead of having the material stuffed away in an appendix in some other paper). Am I getting excited about nothing, or is your idea more significant than either of us has realized so far? :)

Come to think of it, writing up this report on differential geometry on loop space is what you already suggested, but I think maybe it should take a higher priority than I originally thought. As a warmup for that, I’ve been working on regular old differential geometry. I am very happy to have (re?)discovered this geometrical definition of the interior product $i_X$ in terms of sweeping a chain along the flow generated by X. I haven’t seen this before, but it is beautiful. All textbooks on differential geometry should present $i_X$ this way. Maybe we should write a book on differential geometry :) That is what I was trying to do in my PhD thesis, but after 6 years and 350 pages, they finally kicked me out and made me graduate. I should finish that up one of these days :)

Eric

PS: This paper is also interesting

The Extended Loop Group: An Infinite Dimensional Manifold Associated with the Loop Space
Cayetano Di Bartolo, Rodolfo Gambini, Jorge Griego

A set of coordinates in the non-parametric loop space is introduced. We show that these coordinates transform under infinite dimensional linear representations of the diffeomorphism group. An extension of the group of loops in terms of these objects is proposed. The enlarged group behaves locally as an infinite dimensional Lie group. Ordinary loops form a subgroup of this group. The algebraic properties of this new mathematical structure are analyzed in detail. Applications of the formalism to field theory, quantum gravity and knot theory are considered.

and I’m trying to see the relation, if any, to your stuff.

Posted by: Eric on July 12, 2004 4:36 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

calculus on such spaces is in its infancy

Mathematicians will naturally have a different attitude towards this stuff. I have cited a couple of mathematical papers that do work on loop space and whose techniques and results are compatible with the maybe-naive relations. My guiding principle is that I know that I am just doing CFT in a different picture. This for instance tells me (and it can be checked) that the naive computations work when the background (the metric on loop space, for instance) satisfies the string’s background field equations of motion.

All I say about loop space can equivalently be rephrased in more ordinary string Hilbert space language. I would tend to think that all problems that one would encounter in rigorously defining calculus on loop space can be understood in terms of operator ordering effects and related divergences in this string Hilbert space language. But I agree that it would be desirable to have formulations that bridge between mathematical and physical perspectives on loop space.

Posted by: Urs Schreiber on July 12, 2004 10:29 AM | Permalink | PGP Sig | Reply to this

Calculus on Loop Space

Eric had cited S. G. Rajeev as writing in hep-th/0401215

calculus on such spaces is in its infancy

I had replied, essentially, that I don’t care. :-)

But is there need to be worried?

In the paper cited above Rajeev summarizes some basic elements of the approach to loop space calculus by the mathematician K.-T. Chen, who apparently has developed most of what is known about calculus on loop space, using his method of ‘iterated integrals’.

The question is: How does that compare to the apparently more naive approach that I give here?

The answer is: It is precisely the same, up to the fact that Chen makes explicit the notion of ‘loop space functions as formal power series’.

What I mean by this is that if you take the ‘naive’ notion of loop space calculus and apply it to functions on loop space of the form as in equation (14) of the above paper, which have the form of ‘iterated integrals’ (I’d rather call them path-ordered integrals), then the definition of the product in (20) as well as the definition of the exterior derivative in (24) follows. The first fact is obvious; the second is the content of equation (3.8) in my hep-th/0407122, which is a review of a paper by Getzler, Jones and Petrack - which again probably goes back to Chen himself, who presumably used this calculation to motivate his definition.
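For readers without these papers at hand: the ‘iterated integrals’ in question are, schematically, loop space functions of the path-ordered form (my notation, with conventions left loose)

```latex
\oint (\omega_1 \omega_2 \cdots \omega_n)
  := \int_{0 \le \sigma_1 \le \sigma_2 \le \cdots \le \sigma_n \le 2\pi}
       \omega_1(\sigma_1)\, \omega_2(\sigma_2) \cdots \omega_n(\sigma_n)\;
       d\sigma_1\, d\sigma_2 \cdots d\sigma_n \,,
```

where each $\omega_i(\sigma) = \omega_{i\,\mu}(X(\sigma))\, X'^\mu(\sigma)$ is a target space 1-form pulled back to the loop.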

So it seems to me that all that would be necessary to make what I have written about loop space so far rigorous and compatible with Chen is to say that the function space on loop space that I am considering is that of formal power series in these ‘iterated integrals’.

Posted by: Urs Schreiber on July 16, 2004 7:07 PM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

The question is: How does that compare to the apparently more naive approach that I give here?

I am a little confused because here you say

Yes, if we are thinking of the configuration space of string then it is

- unbased

- parameterized

- consisting of functions from the circle into target space that have a Fourier decomposition

- oriented or unoriented depending on whether we want to do type II or type I strings.

However, in your paper you define loop space $\mathcal{L}(\mathcal{M})$ as simply the space of smooth embeddings of $S^1$ into $\mathcal{M}$

(1) $\mathcal{L}(\mathcal{M}) := C^\infty(S^1, \mathcal{M})\,. \qquad (2.1)$

Would it be possible to give a more precise definition of what loop space is? :) If it is just the space of maps from $S^1$ to $\mathcal{M}$, then I can see that it is unbased, but I don’t see how it is parameterized. I also don’t see how you can make that statement about Fourier decomposition precise either. This reminds me of an old discussion with Toby Bartels about the difference between “oriented” and “orientable.” Should loop space be the space of “parameterized” embeddings of $S^1$ into $\mathcal{M}$ or the space of “parameterizable” embeddings of $S^1$ into $\mathcal{M}$? The latter seems redundant. The former means that we actually chose a parameterization for the loop, and two different parameterizations would be different points in loop space. I really don’t think that is what you want.

I tend to think that loop space should be the space of unparameterized (but parameterizable) maps from $S^1$ to $\mathcal{M}$, and a choice of parameterization is akin to choosing a coordinate chart.

Just prior to your equation

(2) $\langle U|V\rangle_X = \oint d\sigma\; g_{\mu\nu}(X(\sigma))\, U^\mu(X(\sigma))\, V^\nu(X(\sigma))\,, \qquad (2.2)$

I think you should say something like, “In a coordinate chart and for a given parameterization, the inner product is given by…” In fact, what are you integrating over? Is it $\sigma \in [0, 2\pi]$ with $X(0) = X(2\pi)$? If so, you should put the domain of integration in explicitly, e.g.

(3) $\langle U|V\rangle_X = \int_{[0,2\pi]} d\sigma\; g_{\mu\nu}(X(\sigma))\, U^\mu(X(\sigma))\, V^\nu(X(\sigma))\,, \qquad (2.2')$

or something. At the least, you should say that X is a parameterized loop instead of a parameterizable loop.

Argh. My brain hurts. It seems like you need quite a bit of surgery to clean things up. Let’s say we have vector fields U, V on $\mathcal{M}$ and a map

(4) $\gamma : S^1 \to \mathcal{M}\,;$

then could we simply define an inner product on loop space via

(5) $\langle U|V\rangle_\gamma = \int_{S^1} \gamma^*\big(g(U,V)\big)\,\mathrm{vol}\,,$

where g(U,V) is, of course, a 0-form on $\mathcal{M}$ and vol is a volume form on $S^1$ obtained from $\gamma^*(g)$, i.e. from pulling back the metric from $\mathcal{M}$ to $S^1$? A rule of thumb of mine, and I think it is a good rule, is that you do not really understand something until you can express it in a coordinate free manner. If the above is not correct, could you give the correct coordinate free version of your inner product on loop space? This does seem to be different from what you have, because it seems like your vol is normalized so that

(6) $\int_{S^1} \mathrm{vol} = 2\pi\,.$

Which is more natural? I think mine is, but I could be wrong :) Then again, mine has the effect that if the loop shrinks to a point, then the inner product goes to zero as well. I am not sure if this is a good or bad thing, but it almost seems like a good thing to me. Let’s say that we are in a region of $\mathcal{M}$ where g(U,V) is constant. Your definition will give the same inner product regardless of how big the loop is within the region and regardless of how many times it wraps around. Is this really what you want? I would almost expect that if a loop wrapped around the same curve twice, then the inner product would be scaled by a factor of 2.

Here is a crazy thought…

What if we define an inner product on points AND loops via

(7) $\langle U|V\rangle_{(p,\gamma)} = g(U,V)_p + \frac{1}{T}\int_{S^1}\gamma^*\big(g(U,V)\big)\,\mathrm{vol}\,.$

In this case, if the loop were tiny with respect to g, i.e. if

(8) $\int_{S^1}\mathrm{vol} \ll T\,,$

then the point inner product will dominate, but when the loop grows, it begins to offset the contribution from the point. In this way, the point inner product is kind of like a low energy limit. Or something :) Of course, once you iterate once, why not continue? Perhaps the inner product of higher-dimensional objects may be written something like

(9)UV=exp(αL)g(U,V),

where

(10)Lg(U,V)= S 1 γ *(g(U,V))vol.
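Spelled out, the exponential is just the iterated series

$\langle U|V\rangle = g(U,V) + \alpha\, L\, g(U,V) + \frac{\alpha^2}{2}\, L^2\, g(U,V) + \ldots,$

so with $\alpha = 1/T$ the first two terms reproduce (7).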

Sidenote: this makes me want to consider “point space” $\mathcal{P}(\mathcal{M})$, i.e. the space of maps from a point into $\mathcal{M}$. If you make this rigorous, you should end up with $\mathcal{P}(\mathcal{M}) \simeq \mathcal{M}$. Doing so might shed some light on how to make $\mathcal{L}(\mathcal{M})$ rigorous.

Continuing with the previous thought…

(11) $L^2\, g(U,V) = \int_{S^1} \gamma_2^*\Big[\int_{S^1} \gamma_1^*(g(U,V))\,\mathrm{vol}_1\Big]\,\mathrm{vol}_2,$

where $\gamma_1 \in \mathcal{L}(\mathcal{M})$ and $\gamma_2 \in \mathcal{L}^2(\mathcal{M}) := \mathcal{L}(\mathcal{L}(\mathcal{M}))$. Finally,

(12) $L^n\, g(U,V) = \int_{S^1} \gamma_n^*\Big[\int_{S^1} \gamma_{n-1}^*\Big[\cdots \int_{S^1} \gamma_1^*(g(U,V))\,\mathrm{vol}_1 \cdots\Big]\,\mathrm{vol}_{n-1}\Big]\,\mathrm{vol}_n,$

where $\gamma_i \in \mathcal{L}^i(\mathcal{M})$.

I am kind of on the fence with this one. It could either be brilliant, or just another one of my whacky ideas. What do you think? :)

Eric

PS: A friend of mine and I have a saying, “All ideas are brilliant ideas for the first five minutes.” Let’s see if this one survives :)

Posted by: Eric on July 18, 2004 8:16 AM | Permalink | Reply to this

Re: Calculus on Loop Space

One more thing…

It is probably obvious, but I just want to point out a couple of things. First, if $\phi$ is a 0-form on $\mathcal{M}$, then

(1) $L\phi = \int_{S^1} \gamma^*(\phi)\,\mathrm{vol}$

is a 0-form on $\mathcal{L}(\mathcal{M})$, although it might be interpreted as a 1-form on $\mathcal{M}$, and

(2) $L^n\phi = \int_{S^1} \gamma_n^*\Big[\cdots \int_{S^1} \gamma_1^*(\phi)\,\mathrm{vol}_1 \cdots\Big]\,\mathrm{vol}_n$

is a 0-form on $\mathcal{L}^n(\mathcal{M})$, although it might be interpreted as an $n$-form on $\mathcal{M}$.

In particular,

(3) $L^n 1$

is a 0-form on $\mathcal{L}^n(\mathcal{M})$. When you integrate, i.e. evaluate, it at a point $\gamma_n \in \mathcal{L}^n(\mathcal{M})$, you get

(4) $L^n 1\big|_{\gamma_n} = \text{“volume of } \gamma_n\text{”}.$

Of course, a point $\gamma_n \in \mathcal{L}^n(\mathcal{M})$ maps to an $n$-loop in $\mathcal{M}$, and integrating the induced volume form over the image of $\gamma_n$ also gives the volume of the image. In this way, $L^n 1$ may also be thought of as the volume form on $\mathcal{M}$.
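For $n = 1$ this is just the statement that

$L1\big|_\gamma = \int_{S^1} \gamma^*(1)\,\mathrm{vol} = \int_{S^1} \mathrm{vol} = \text{length of }\gamma,$

and for $n = 2$ the iterated integral should give the area of the image torus, and so on.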

I like this :) I am anxious to hear what you think.

Another thing…

It might be true in general, but it would be interesting to find under what conditions there exists a 1-form $\alpha$ such that for any 0-form $\phi$ we have

(5) $\int_{S^1} \gamma^*(\phi)\,\mathrm{vol} = \int_{S^1} \gamma^*(\alpha).$

If such an $\alpha$ existed, we would have

(6) $L\phi = \oint_\gamma \alpha,$

which would make it explicit how $L\phi$ may be thought of as a 1-form on $\mathcal{M}$.

Eric

PS: This somehow reminds me of the discussion over in the “big boy” thread :) I think that if you define things correctly, then for a given torus in $\mathcal{M}$, there should be only one point in $\mathcal{L}^2(\mathcal{M})$ having this torus as its image. If you get anything different, I would think that perhaps you have inappropriately defined $\mathcal{L}^2(\mathcal{M})$ and should rethink things. Maybe $\mathcal{L}^2(\mathcal{M})$ should be equivalence classes of “loops of loops”, since one loop of a loop that has the same image as another loop of a loop is really the same 2-loop, but with a different (generalized notion of) parameterization. A 2-loop is not a choice of a loop of a loop; it is something for which such a parameterization can be made. Just like a loop should not be a choice of parameterization, but something that can be parameterized in a certain way, i.e. it is parameterizable. A 2-loop is a parameterizable loop of loops, not a particular choice of parameterization.

If I am not being redundant enough, let me say it another way. If you look at a torus, there are tons of ways to write it as a loop of loops. However, I am suggesting that making such a choice is a kind of parameterization of the 2-loop, but the 2-loop, and consequently the space of 2-loops, should be independent of how you choose to split a torus into loops of loops.

This is not just a made up idea out of the blue. Rather, it is suggested by the explicit form of

(7) $L^n\phi.$

When you look at the full expanded version, it is clear that the iterated integrals (not necessarily having anything to do with those of Chen) amount to choices of parameterizations of an n-loop and the integral should be independent of this choice. It is like a wicked version of Fubini’s theorem :)

Posted by: Eric on July 18, 2004 9:32 AM | Permalink | Reply to this

Re: Calculus on Loop Space

Hi again,

It is painfully obvious how a choice of a way to express a torus as a loop of loops is a kind of parameterization. In fact, it is more than “kind of” a parameterization. It is nothing BUT a choice of coordinates. Well, almost :) It is a choice of coordinate curves. What I described as a wicked version of Fubini's theorem is nothing but plain old Fubini's theorem :) The only difference is that we specify coordinate curves without putting numbers on them :)

Now I can say it with a fair amount of confidence. There should be only one (Highlander?) point in $\mathcal{L}^2(\mathcal{M})$ having a given torus as its image.

Eric

PS: This statement will probably have to be altered if you start considering oriented loops.

Posted by: Eric on July 18, 2004 9:55 AM | Permalink | Reply to this

Re: Calculus on Loop Space

Good morning!

It seems that perhaps you took the day off of SCT today. I hope you had a nice day :)

I was just thinking about this stuff a little more. Unless I am just way off, it seems like given a 0-form $\phi$ on $\mathcal{M}$, we can think of

(1) $L^p\phi$

as either a 0-form on $\mathcal{L}^p(\mathcal{M})$ or a $p$-form on $\mathcal{M}$. However, we should not forget all the intermediate interpretations. $L^p\phi$ may also be thought of as a $(p-1)$-form on $\mathcal{L}(\mathcal{M})$, a $(p-2)$-form on $\mathcal{L}^2(\mathcal{M})$, or in general as a $(p-q)$-form on $\mathcal{L}^q(\mathcal{M})$.

In particular, given a $p$-form $\alpha$ on $\mathcal{M}$, we can think of $L\alpha$ as a $p$-form on $\mathcal{L}(\mathcal{M})$. Similarly, if $X$ is a vector field on $\mathcal{M}$, then $LX$ is a vector field on $\mathcal{L}(\mathcal{M})$.

With that said, after looking at your paper some more, it seems like I might have an alternative way to look at things.

First, given a 1-form $\alpha$ and a vector field $X$ on $\mathcal{M}$, define a map

(2) $\mathrm{contract}(\alpha \otimes X) := \langle \alpha, X\rangle.$

We want the contraction map to commute with the loop map, i.e.

(3) $L \circ \mathrm{contract} = \mathrm{contract} \circ L,$

so that

(4) $L\langle \alpha, X\rangle = \mathrm{contract}(L(\alpha \otimes X)).$

Furthermore, define a tensor product on loop space in terms of the tensor product on the target space via

(5) $(L\alpha) \otimes (L\beta) := L(\alpha \otimes \beta),$

so that we have

(6) $\langle L\alpha, LX\rangle = L\langle \alpha, X\rangle,$

due to the fact that $L$ and $\mathrm{contract}$ commute. This also gives us the wedge product on loop space in terms of that on target space via

(7) $(L\alpha) \wedge (L\beta) = L(\alpha \wedge \beta).$

Now if we write

(8) $\langle X|Y\rangle = g(X,Y) = \mathrm{contract}[\mathrm{contract}(g \otimes X) \otimes Y]$

and

(9) $\langle LX|LY\rangle = (Lg)(LX, LY) = \mathrm{contract}[\mathrm{contract}((Lg) \otimes (LX)) \otimes (LY)],$

then using the above relations, we have

(10) $\langle LX|LY\rangle = L\langle X|Y\rangle.$
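Explicitly, the two assumptions chain together as

$\langle LX|LY\rangle = \mathrm{contract}[\mathrm{contract}((Lg) \otimes (LX)) \otimes (LY)] = L\,\mathrm{contract}[\mathrm{contract}(g \otimes X) \otimes Y] = L\langle X|Y\rangle,$

using (5) to pull the tensor products inside $L$ and (3) twice to move both contractions past $L$.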

This seems to be the essence of your Equation (2.2). It seems like the important thing for constructing differential geometric operators on loop space is that

(11) (1) $L \circ \mathrm{contract} = \mathrm{contract} \circ L$

and

(12) (2) $L$ is an algebra morphism.

Not that I know anything about category theory, but it almost seems like L is a functor or something :)

I am also willing to bet that we can write

(13) (3) $d \circ L = L \circ d,$

so that

(14) $d[(L\alpha) \wedge (L\beta)] = [d(L\alpha)] \wedge (L\beta) + (-1)^p\,(L\alpha) \wedge d(L\beta).$
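One way to see (14) from (7) and (13): apply $d$ to $L(\alpha\wedge\beta)$ and use the ordinary Leibniz rule downstairs,

$d[(L\alpha)\wedge(L\beta)] = d\,L(\alpha\wedge\beta) = L\,d(\alpha\wedge\beta) = L[d\alpha\wedge\beta + (-1)^p\,\alpha\wedge d\beta] = [d(L\alpha)]\wedge(L\beta) + (-1)^p\,(L\alpha)\wedge d(L\beta).$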

This is all pretty neat and if it is at least partially correct, it takes a lot of the mystery out of the stuff in your notes. However, there still are some questions.

What are the conditions such that for any $p$-form $\alpha_L$ on $\mathcal{L}(\mathcal{M})$ there exists a $p$-form $\alpha$ on $\mathcal{M}$ such that $\alpha_L = L\alpha$?

When this condition is satisfied, we can easily translate back and forth between $\mathcal{L}(\mathcal{M})$ and $\mathcal{M}$. In fact, we could easily translate back and forth between $\mathcal{L}^p(\mathcal{M})$ and $\mathcal{L}^q(\mathcal{M})$. We can easily get differential geometry on more loopy spaces just by further applications of $L$ :)

(15) (1) $L^p \circ \mathrm{contract} = \mathrm{contract} \circ L^p$
(16) (2) $L^p$ is an algebra morphism.
(17) (3) $d \circ L^p = L^p \circ d.$

Ok. Maybe one more thing before submitting this :)

We have

(18) $i_X\alpha = \mathrm{contract}(\alpha \otimes X),$

so that

(19) $L(i_X\alpha) = i_{LX}\, L\alpha.$

Therefore,

(20) $d_{LX} = d + i_{LX}$

and

(21) $\mathcal{L}_{LX} = d_{LX}^2 = [d,\, i_{LX}].$

Furthermore, in a coordinate chart, we have

(22) $\{L\partial_\mu, L\partial_\nu\} = L\{\partial_\mu, \partial_\nu\} = 0$
(23) $\{L\,dx^\mu, L\,dx^\nu\} = L\{dx^\mu, dx^\nu\} = 0$
(24) $\{L\partial_\mu, L\,dx^\nu\} = L\{\partial_\mu, dx^\nu\} = L\,\delta_\mu{}^\nu.$

Unless I’m mistaken, this seems to be a simple restatement of your Equation (2.21), but this makes a little more sense to me.

Now, it also seems that since

(25) $d_{LX} = L\, d_X$

that we’d also have

(26) $e^{-LW}\, d_{LX}\, e^{LW} = L\big[e^{-W}\, d_X\, e^{W}\big].$

Is it possible that my dream can be fulfilled and the stuff you are doing with deformations on loop space can be interpreted as deformations on $\mathcal{M}$ followed simply by an application of $L$? :)

I really gotta run now! More later :)

Eric

Posted by: Eric on July 19, 2004 12:37 AM | Permalink | Reply to this

Re: Calculus on Loop Space

Hello again,

I was just working through some details to see if we’d also have

(1) (4) $L\, d^\dagger = d^\dagger\, L.$

It appears that this is probably the case, but the derivation highlighted something I hadn't thought of yet. If $\mathrm{vol}$ is the volume $n$-form on $\mathcal{M}$, then it makes sense to consider $L\,\mathrm{vol}$ as the volume $n$-form on $\mathcal{L}(\mathcal{M})$. The somewhat strange thing is that a $p$-form on $\mathcal{L}(\mathcal{M})$ appears like a $(p+1)$-form on $\mathcal{M}$. If $L\,\mathrm{vol}$ is an $n$-form on $\mathcal{L}(\mathcal{M})$, then there is no corresponding $(n+1)$-form on $\mathcal{M}$. So there are some forms on $\mathcal{L}(\mathcal{M})$ that have no counterpart on $\mathcal{M}$. If we don't let this bother us, then we can write down a global inner product of forms on $\mathcal{L}(\mathcal{M})$ via

(2) $[L\alpha, L\beta] = \int_{\mathcal{L}(\mathcal{M})} \langle L\alpha, L\beta\rangle\, L\,\mathrm{vol} = \int_{\mathcal{L}(\mathcal{M})} L\big[\langle\alpha, \beta\rangle\,\mathrm{vol}\big].$

It seems reasonable that we should have

(3) $\int_{\mathcal{L}(\mathcal{M})} L\big[\langle\alpha, \beta\rangle\,\mathrm{vol}\big] = \int_{\mathcal{M}} \langle\alpha, \beta\rangle\,\mathrm{vol},$

which reminds me of pull-back, so that

(4) $[L\alpha, L\beta] = [\alpha, \beta].$

Once you have this, a few lines gives Equation (4) above.

So far I have no evidence to suggest that anything I am saying makes any sense, but that has never stopped me before :) Now, we can take all of this and write down an obvious loop action for loop Maxwell’s equations on loop space. Following standard procedures, after a few lines we’d end up with the equations of motion for loop Maxwell’s equations, which are given by

(5) $d\, LF = 0$

and

(6) $d^\dagger\, LF = Lj,$

where $LF = d\,LA$ for some 1-form $LA$ on loop space. If this is correct, then it means that given any solution $F$ to Maxwell's equations on $\mathcal{M}$, we automatically get a solution $LF$ to loop Maxwell's equations by simply applying $L$ to $F$. However, loop Maxwell's equations are more like 2-form Maxwell's equations when viewed from the perspective of $\mathcal{M}$.

Since Maxwell's equations are conformally invariant, I don't see any reason why loop Maxwell's equations would not be conformally invariant as well. Hence, we've got a conformal field theory on loop space. This sounds a lot like string theory to me :) Wouldn't it be wild if loop Maxwell's equations were related to string theory? :) I can't possibly imagine that they aren't.

It is probably time to review some old discussions on p-form electromagnetism :)

Eric

PS: This stuff has messed with my sleep schedule. I couldn't get to sleep until nearly 5am last night. Then I slept all day today. What are the chances I will be able to sleep tonight? I've got to be at work at 8:30am. What are the chances that is going to happen? :)

Posted by: Eric on July 19, 2004 5:06 AM | Permalink | Reply to this

Re: Calculus on Loop Space

It seems that perhaps you took the day off of SCT today. I hope you had a nice day :)

Yes, we had a big party on Saturday, and including preparation and ‘post-production’, this really took all of the weekend!

Now I am working hard to reply to all the comments that you have posted while I was relaxing. ;-)

It is great to see you thinking so much about this stuff, but we need to get in sync again!

Most of what you write about that L operator makes sense to me, but, as I have said in another comment posted a few minutes ago, I think one should not restrict attention to loop space objects which have target space counterparts, and one should not necessarily include that induced volume factor.

So I would not give the operator $L$ the fundamental meaning that you seem to have in mind, even though it certainly exists and plays its role. For instance, except for the volume factor, it is, I think, precisely the operation that I indicate in equation (3.1) of hep-th/0407122, applied to a differential form on target space.

So the range of L would be those objects on loop space which directly come from integrating any target space object around the loop, roughly.

I think what you write about deformations at the very end is true, but it only applies to deformations with $e^{\text{something}}$ where ‘something’ is in the range of $L$. This is not true for all deformations that one might want to consider. For instance, the deformation which induces a gauge field background involves the vector field $X'$, which does not have any target space counterpart. On the other hand, in the cases where ‘something’ is of the correct form, I think the equation that you write down is correct.

I’ll stop at this point and wait for your response first.

Posted by: Urs Schreiber on July 19, 2004 11:14 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

I am anxious to hear what you think.

I think that what you write does make sense. But, as in I said in my previous comment, it is not quite what I need, I think, due to that volume factor, for instance.

Here is how I would suggest to think about these matters:

So let loop space be a suitable space of maps from $(0,2\pi)$ into target space. A vector field $V$ on this space is formally something like

(1) $V = V^{(\mu,\sigma)}(X)\,\partial_{(\mu,\sigma)} = \int_0^{2\pi} d\sigma\; V^\mu(\sigma)\,\frac{\delta}{\delta X^\mu(\sigma)}.$

A 1-form $W$ on loop space is dual to that, and the pairing must be

(2) $W(V) = W_{(\mu,\sigma)}\, V^{(\mu,\sigma)} = \int_0^{2\pi} W_\mu(\sigma)\, V^\mu(\sigma)\, d\sigma.$

In order to get an inner product we have to specify a metric on loop space. The metric that I need is

(3) $G_{(\mu,\sigma)(\nu,\kappa)}(X) = g_{\mu\nu}(X(\sigma))\,\delta(\sigma,\kappa).$
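Contracting this metric with two vectors reproduces the inner product (2.2) of the paper, since the $\delta$ collapses one of the two parameter integrals:

$\langle U|V\rangle_X = G_{(\mu,\sigma)(\nu,\kappa)}\, U^{(\mu,\sigma)} V^{(\nu,\kappa)} = \int_0^{2\pi}\!\!\int_0^{2\pi} d\sigma\, d\kappa\; g_{\mu\nu}(X(\sigma))\,\delta(\sigma,\kappa)\, U^\mu(\sigma)\, V^\nu(\kappa) = \int_0^{2\pi} d\sigma\; g_{\mu\nu}(X(\sigma))\, U^\mu(\sigma)\, V^\nu(\sigma).$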

You could in principle pick another metric, for instance one involving the induced volume factor. That would give the inner product that you are proposing, I think.

One important aspect of the above metric is that $K^{(\mu,\sigma)} = X'^{(\mu,\sigma)}$ is a Killing vector.

I think that if you define things correctly, then for a given torus in $\mathcal{M}$, there should be only one point in $\mathcal{L}^2(\mathcal{M})$ having this torus as its image.

That would be the case on unparameterized loop space. In the end, this is what is physically relevant, but to set up the formalism we really want parameterized loop space. The space of functions on parameterized loop space is our ‘kinematical space’. By restricting to those functions which satisfy the physical constraints, which in particular include reparameterization invariance, we get to the ‘physical space’ of functions on unparameterized loop space.

If you look at a torus, there are tons of ways to write it as a loop of loops. However, I am suggesting that making such a choice is a kind of parameterization of the 2-loop,

Yes, exactly. All these are physically equivalent, but correspond to different points in the parameterized loop-loop space.

I think this is important. Because if you want to associate a ‘holonomy’ with a surface in $\mathcal{M}$ you really need a connection on parameterized $\mathcal{L}(\mathcal{M})$ which assigns the same holonomy to all the corresponding elements in $\mathcal{L}^2(\mathcal{M})$. This is a pretty strong condition. It is solved trivially by flat connections on parameterized $\mathcal{L}(\mathcal{M})$. Perhaps it is solved only by these. This is what I expect from what I wrote here.

Posted by: Urs Schreiber on July 19, 2004 10:27 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Eric -

you are right, that definition I give is not good, and even wrong. For one, it should not read ‘embedding’, because self-intersections must be allowed. I'll change that. Thanks for noticing; somehow this embarrassing mistake went unnoticed so far.

Then, I am really thinking of a space of maps from the interval $\sigma \in (0,2\pi)$ into target space which are periodic with period $2\pi$. So there is a parameterization, and we can get rid of it by going to the subspace of functions on this loop space which are invariant under reparameterization, i.e. the space of functions on which $d_K$ and all of its modes are nilpotent.

Let's say we have vector fields $U, V$ on $\mathcal{M}$ and a map

Wait, that’s not what we need. We really need vector fields on loop space. Not every vector field on loop space is related to one on target space. This remark pertains to the other constructions that you mention. They don’t seem to fit into the context which I am considering.

Still, one can consider point particle limits and the like. These correspond to maps which are almost constant on $(0,2\pi)$.

Posted by: Urs Schreiber on July 19, 2004 9:35 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

One more comment. You wrote:

Your definition will give the same inner product regardless of how big the loop is within the region and regardless of how many times it wraps around. Is this really what you want? I would almost expect that if a loop wrapped around the same curve twice, then the inner product would be scaled by a factor of 2.

The definition (2.2) that I give (which, by the way, is not my invention) does not give the same result for loops that wind several times around themselves. For instance, if you substitute $X(\sigma)$ with $\tilde X(\sigma) := X(2\sigma)$ in that equation, you do not get the same result.

Posted by: Urs Schreiber on July 19, 2004 9:46 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

It is somewhat encouraging that you think that maybe the stuff I wrote is not completely garbage :)

About the inner product, maybe I do not understand your notation. Let's consider a region of $\mathcal{M}$ where $g(U,V)$ is constant, and without loss of generality set $g(U,V) = 1$, so that we have

(1) $\langle U|V\rangle_X = \int_{[0,2\pi]} d\sigma.$

How is this ever going to give you anything different from $2\pi$, regardless of whether you reparameterize the curve? If you set $\sigma' = 2\sigma$ you still get

(2) $\langle U|V\rangle_X = \int_{[0,2\pi]} d\sigma = \frac{1}{2}\int_{[0,4\pi]} d\sigma' = 2\pi.$

I think I misunderstood what you meant :) I almost suspect that you want to tell me that if you set $\sigma' = 2\sigma$, then the loop is really the same curve traversed twice. If this is the case, then I would argue that this goes against the way you stated the definitions, i.e. a curve is a map $c: [0,2\pi] \to \mathcal{M}$. If you adhere to this, then the inner product over a constant $g(U,V)$ does always give the same result regardless of the size of the loop or how many times it wraps around. If you want to allow parameterized curves from any connected interval $I \subset \mathbb{R}$ to $\mathcal{M}$, then I think you might need to state the definition of your inner product differently. Otherwise, it seems the inner product will depend on what parameterization you choose, which would be unacceptable I think.

I also do understand how my inner product differs from yours in that I have a volume form, and whether or not your inner product is your invention, I think that we should consider the slight possibility that mine is the one we want to use. Maybe :)

Gotta get ready for work. Argh! :)

More later,
Eric

Posted by: Eric on July 19, 2004 12:48 PM | Permalink | Reply to this

Re: Calculus on Loop Space

Hi Eric -

you wrote:

It is somewhat encouraging that you think that maybe the stuff I wrote is not completely garbage :)

When wondering about my enthusiasm you should always keep in mind that when I write my morning comments I do that after having moderated 30+ spr posts, possibly some sps posts, have looked into my private email and read a couple of comments on the SCT. This may be the reason, as happened recently, that I may not immediately see all the benefits of the constructions that you propose. Just be patient with me, I will get it eventually! :-)

Regarding the inner product, let me say the following:

In my conventions $\sigma$ always runs from 0 to $2\pi$, by definition. The loop parameterized by this $\sigma$ is $X: (0,2\pi) \ni \sigma \mapsto X(\sigma)$. To any given point $X$ in parameterized loop space there is a point $\tilde X$ defined by $\tilde X(\sigma) = X(2\sigma)$, which is the same loop as $X$ but traversed twice. In general, $\langle U|V\rangle_X \neq \langle U|V\rangle_{\tilde X}$, as is manifest from that formula (2.2), which in full glory reads

(1) $\langle U|V\rangle_X = \int_0^{2\pi} d\sigma\; g_{\mu\nu}(X(\sigma))\, U^\mu(X)(\sigma)\, V^\nu(X)(\sigma).$

If, of course, the target space contraction is independent of $X$, then so is $\langle U|V\rangle_X$. But there is nothing wrong with that, as far as I can see.

I think that we should consider the slight possibility that mine is the one we want to use. Maybe :)

Ok. The reason why I use the particular loop space metric that I do is that this way the inner product on loop space reproduces the usual inner product on the string's Hilbert space. So in particular this way $d_K + d_K^\dagger$ really is one of the supercharges on the worldsheet. This is related to the fact that in this metric $K^{(\mu,\sigma)} = X'^{(\mu,\sigma)}$ is a Killing vector.

But you are right that in principle we could consider other metrics on loop space, given some particular metric on target space. For general choices $X'$ won't be a Killing vector anymore, though, which would be undesirable, since it is the flow of $X'$ that we want to divide out by in the end.

For these reasons I pretty strongly tend to stick to the metric that I am using. But I have to admit that I didn't think about whether there are other metrics that might also be interesting. Maybe there are. But I expect that the metric you proposed does not have $X'$ as a Killing vector.

Posted by: Urs Schreiber on July 19, 2004 1:46 PM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

I said

It is somewhat encouraging that you think that maybe the stuff I wrote is not completely garbage :)

and your response almost seemed apologetic. No need! I am genuinely happy if you think what I wrote is not completely garbage! :) If you think it is even more than slightly interesting, that is a bonus :)

As usual, I think we have some notational issues to work through, but I think it is worth struggling through the pain so that we get on the same page.

For general choices $X'$ won't be a Killing vector anymore, though, which would be undesirable, since it is the flow of $X'$ that we want to divide out by in the end.

I could be wrong, but I think that our inner products are related by a scale factor. If that is the case, then I would think $X'$ should still be a Killing vector.

I’m still not gone for work yet! :)

More later,
Eric

Posted by: Eric on July 19, 2004 1:59 PM | Permalink | Reply to this

Re: Calculus on Loop Space

I could be wrong, but I think that our inner products are related by a scale factor. If that is the case, then I would think $X'$ should still be a Killing vector.

That's a straightforward computation, but right now I don't have the leisure to do it. One would first need to compute the Levi-Civita connection of the metric that you propose and then compute the symmetrized covariant derivative of $X'$.
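One rough way to see why a relative ‘scale factor’ is not automatically harmless (a sketch, not the full computation): a constant rescaling would be fine, since

$\mathcal{L}_K(\lambda\, G) = \lambda\,\mathcal{L}_K G = 0;$

but your metric differs from mine by the local density $\rho_\sigma(X) = |X'(\sigma)|$, and along the flow of $K$ one has $K(\rho_\sigma) = \partial_\sigma |X'(\sigma)| \neq 0$ for loops not parameterized at constant speed, so $\mathcal{L}_K(\rho\, G) = (K\rho)\, G$ need not vanish.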

But why exactly do you think the inner product that you proposed is ‘good’, or better than any other choice, anyway?

Originally it seemed that it was the coordinate-dependent formulation of ‘my’ inner product that you objected to. But all I do is specify a metric $G$ on loop space, induced from that on target space, and then set the inner product to $\langle U|V\rangle = G(U,V)$, as usual.

Posted by: Urs Schreiber on July 19, 2004 2:32 PM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

I’m in a rush, but here is a quick note…

But why exactly do you think the inner product that you proposed is ‘good’, or better than any other choice, anyway?

The answer to this question lies in a fact that we seem to agree upon. Namely,

If, of course, the target space contraction is independent of $X$, then so is $\langle U|V\rangle_X$. But there is nothing wrong with that, as far as I can see.

I think that maybe we should reconsider this fact. It is somewhat troubling to me. I think that if the contraction on target space is independent of $X$, then $\langle U|V\rangle_X$ should be proportional to the length of $X$ in $\mathcal{M}$. In this way, the generalized inner product I wrote down

(1) $\langle U|V\rangle = \exp(\alpha L)\, g(U,V)$

reduces to

(2) $\langle U|V\rangle = g(U,V)$

as the string (and all subsequent higher $n$-loops) shrinks to zero.

This is very nice in my opinion :)

I also very much like the fact that

(3) $L^n 1 \sim \mathrm{vol},$

i.e. $L^n 1$ corresponds to the volume form on $\mathcal{M}$.

I am not immovable about this idea, but I am growing more fond of it by the minute.

I think the concerns you brought up will disappear once the details are worked out. Maybe not, in which case the original inner product would clearly be superior :)

Eric

Posted by: Eric on July 19, 2004 3:50 PM | Permalink | Reply to this

Inner Products

I just had a thought…

Now, it seems to me like the original inner product you are suggesting is like an averaging of the inner product I am suggesting. To make this explicit, in the notation I will keep

(1) $\langle U|V\rangle$

for my inner product and write the original one as

(2) $\langle U|V\rangle',$

where the two are related by

(3) $\langle U|V\rangle'_X = \frac{\langle U|V\rangle_X}{\int_X \mathrm{vol}}.$

As I've said, $\langle U|V\rangle$ vanishes as $X$ shrinks to a point $p$, but we have

(4) $\lim_{X\to p} \langle U|V\rangle'_X = g(U,V)\big|_p.$

I can see why this would be desirable. It provides a way for a tiny loop to behave like a point. This is nice. On the other hand, I seem to be suggesting something a bit more radical. Well, which is really more radical? I am including points, loops, 2-loops, … in my inner product

(5) $\langle U|V\rangle = \exp(\alpha L)\, g(U,V)$

and I get my low energy limit making string stuff vanish as the string shrinks to a point. We are left with pointlike stuff because pointlike stuff is included in the inner product. In your approach, you begin with 1-loops and work your way up from there. Points never make an appearance. The stringy physics disappears as the string shrinks because of an assumed smoothness of $g(U,V)$: the average of $g(U,V)$ over a loop approaches $g(U,V)$ at a point as the loop shrinks to the point.

Ok. I see that I am butting heads with 30 years of string theory :) The question is, “Which inner product is more natural?” On the one hand, we have an averaging inner product that converges to a point-like value when the $p$-loop shrinks to a point. On the other hand, we have a kind of “instantaneous” inner product that vanishes as the $p$-loops shrink to a point, but the entire inner product does not vanish when evaluated at points. In such cases, it gives precisely the expected value. I see both as being viable, and each would give rise to different physics in subtle ways.

I feel like I probably didn't explain that very clearly, so I will most likely try again later, but I'll let this go for now. I hope you see how your inner product is really an average. To see this, try expressing your (when I say “your”, I mean “the standard inner product you are suggesting”; it's just shorthand :)) inner product in a coordinate-free and parameter-free manner. You will find that you cannot :) Your inner product is coordinate and parameter independent, as it should be (well, I hope it is!), but expressing it in a coordinate-free and parameter-free notation will not be easy. You'll see what I'm talking about if you try :)

Gotta run!

Eric

Posted by: Eric on July 19, 2004 8:09 PM | Permalink | Reply to this

Re: Inner Products

Hi Eric -

in which sense does ‘your’ inner product generalize to higher degrees while ‘mine’ does not? I’d think what you can do with one of these you can do with the other.

You may be right about your intuition about ‘your’ inner product, but right now I don’t see any fully convincing argument that it is one that is interesting.

In fact, even on purely aesthetic grounds I like the standard inner product better, because here the integration over $\sigma$ is just index contraction. In particular, we could consider polygon space instead of loop space; then $\sigma$ would take on discrete values and would look even more like an index, just like $\mu$. From this perspective, just summing over $\sigma$ seems much more natural than having a weighted sum over it, as in your proposal.

(All these are arguments over and above the fact that the standard inner product is the very obvious one in Fock space language. But, ok, I can imagine that there are in principle other interesting inner products than that standard one.)

Posted by: Urs Schreiber on July 19, 2004 10:51 PM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Hi Urs! :)

I was wondering why you were so quiet today. Now I see that you got to spend a great day talking with t’Hooft :) Very nice :)

in which sense does ‘your’ inner product generalize to higher degrees while ‘mine’ does not? I’d think what you can do with one of these you can do with the other.

Sorry. I knew I wasn't explaining myself very well. I did not mean to imply that your inner product doesn't generalize to higher degrees. It obviously does. Rather, what I was trying to say is that mine has a natural way to bundle all of the inner products together. I didn't see an obvious way to do the same with the standard inner product, but there might very well be one. Then again, maybe bundling all the inner products isn't even a good idea to start with.

You may be right about your intuition about ‘your’ inner product, but right now I don’t see any fully convincing argument that it is one that is interesting.

Sorry if you feel like I am spewing nonsense. I probably am :|

Here is something that might be a little interesting about my inner product.

On $\mathcal{M}$, we have a global inner product of forms

(1) $[\alpha, \beta] = \int_{\mathcal{M}} \langle\alpha, \beta\rangle\,\mathrm{vol}.$

It seems to me that we need to use my $L$ map together with its $\mathrm{vol}$ weighting in order to define a meaningful global inner product on $\mathcal{L}(\mathcal{M})$ such that

(2) $[L\alpha, L\beta] = [\alpha, \beta].$

I could be wrong, but it seems like this relation depends crucially on having

(3) $L\,\mathrm{vol}$

be the volume form on loop space. For this to make sense, I think you need the $\mathrm{vol}$ incorporated in $L$. If $\mathrm{vol}$ is incorporated in $L$, then $\mathrm{vol}$ should likewise be incorporated in the inner product.

Let me just clarify. I am not suggesting that the standard inner product is wrong in any way and we should get rid of it. I am simply exploring the possibility that maybe there is an alternative. This alternative would give slightly different physics. My inner product could very well turn out to be junk.

In fact, even on purely aesthetic grounds I like the standard inner product better, because here the integration over $\sigma$ is just index contraction.

Yes yes. Sorry. This is a very good point. I hope I am not trying your patience.

All these are arguments over and above the fact that the standard inner product is the very obvious one in Fock space language.

I will try to find references (on my own, maybe Szabo or something) and look this up, but I'll just remind you that I don't even know what the standard Fock space language for strings is. I imagine I might have similar reservations there, but maybe I should keep those to myself or I might push you over the edge :)

Gotta run!

More later,
Eric

Posted by: Eric on July 20, 2004 12:56 AM | Permalink | Reply to this

Re: Inner Products

I was wondering why you were so quiet today. Now I see that you got to spend a great day talking with t’Hooft :) Very nice :)

Yes, it was in fact so nice that I am now suffering from the same sleeplessness that you had yesterday (or was it today?). ;-) It is half past three in the morning right now, and since I am still not sleeping I thought I could just as well check the SCT.

Ok, let’s see. You write:

Rather, what I was trying to say is that mine has a natural way to bundle all of the inner products together. I didn't see an obvious way to do the same with the standard inner product, but there might very well be one.

Now I am confused. If in the definition of $L^n$ which you gave here we just remove all the volume factors, don't we get the $L^n$ version of the ‘standard’ inner product?

Then you write:

(1) $[L\alpha, L\beta] = [\alpha, \beta].$

Is there supposed to be an $L$ in front of the right hand side? I suppose that's what you mean. But - I apologize in advance for saying this, recall that I am very tired in a way ;-) - still, I don't quite see it.

Maybe it would help if you could write this equation out in detail, perhaps I have some wrong definition in mind.

I hope I am not trying your patience.

Not at all, really. It was my fault that I didn’t comment earlier on all the ideas that you had over the weekend! :-)

I will try to find references (on my own, maybe Szabo or something) and look this up

Oh, don’t bother. This is easily explained in a line or two.

The key is that if the adjointness relations

(2) $\left(i\,\partial_{(\mu,\sigma)}\right)^\dagger = i\,\partial_{(\mu,\sigma)}$

and

(3) $\left(X'^{(\mu,\sigma)}\right)^\dagger = X'^{(\mu,\sigma)}$

hold, then the objects

(4) $a_n^\mu := \frac{1}{2\pi}\int_0^{2\pi}\left(i\,\eta^{\mu\nu}\,\partial_{(\nu,\sigma)} + X'^{(\mu,\sigma)}\right) e^{-in\sigma}\, d\sigma,$

which satisfy the generalized oscillator algebra

(5) $[a_n^\mu, a_m^\nu] = n\,\eta^{\mu\nu}\,\delta_{n,-m}$

(or at least they would if I got the signs right…),

which identifies $a_n$ as an annihilation operator for $n > 0$ and as a creation operator for $n < 0$, satisfy the Fock space requirement that creators and annihilators be mutually adjoint. And indeed, this is what happens, due to the above adjointness relations of $\partial$ and $X'$:

(6) $(a_n^\mu)^\dagger = a_{-n}^\mu.$
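In one line, all that happens is that the $e^{-in\sigma}$ gets complex conjugated:

$(a_n^\mu)^\dagger = \frac{1}{2\pi}\int_0^{2\pi}\left(\left(i\,\eta^{\mu\nu}\,\partial_{(\nu,\sigma)}\right)^\dagger + \left(X'^{(\mu,\sigma)}\right)^\dagger\right) e^{+in\sigma}\, d\sigma = \frac{1}{2\pi}\int_0^{2\pi}\left(i\,\eta^{\mu\nu}\,\partial_{(\nu,\sigma)} + X'^{(\mu,\sigma)}\right) e^{-i(-n)\sigma}\, d\sigma = a_{-n}^\mu.$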

That’s all there is to it, essentially.

Posted by: Urs Schreiber on July 20, 2004 2:52 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Now I see that you got to spend a great day talking with t’Hooft

Oops! ’t Hooft :)

It took more time than I’d like to admit to get that apostrophe to come out right in html :)

Then you write:

(1) $[L\alpha, L\beta] = [\alpha, \beta].$

Is there supposed to be an $L$ in front of the right hand side? I suppose that's what you mean. But - I apologize in advance for saying this, recall that I am very tired in a way ;-) - still, I don't quite see it.

You know, my wife is an accountant. She is so perfect with money. Not only that, she is always perfect about making sure all the lights are out when we leave the house. Once in a while, I leave a light on and seeing that light on as we pull into the driveway instills a certain amount of fear. I’m guilty! :O However, ever so rarely, she will herself leave a light on. For some reason, when this happens it fills me with a great feeling of joy :)

Granted, it is 3am for you, but your statement filled me with a similar sense of joy. Et tu Urs! :)

All I need to do is remind you that both sides are global inner products :)

The rest of what you say about oscillator algebras is very interesting, but I am too tired at the moment to comment or else I will surely leave a light on somewhere :)

Good night!

Eric

Posted by: Eric on July 20, 2004 4:34 AM | Permalink | Reply to this

Re: Inner Products

Et tu Urs! :)

All I need to do is remind you that both sides are global inner products :)

Aha!

Just leave a light on for me… ;-)

Sorry for being dense. But I am going to switch still more lights on, because apparently I am still in the dark:

When you write $\langle L\alpha, L\beta\rangle\, L\,\mathrm{vol} = L[\langle\alpha, \beta\rangle\,\mathrm{vol}]$, why does that work? I think I see why $\langle L\alpha, L\beta\rangle = L\langle\alpha, \beta\rangle$, but I don't see why in general $L(AB) = (LA)(LB)$. $L$ is just the operation of integrating some object over the loop, right?

Posted by: Urs Schreiber on July 20, 2004 9:36 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Good morning! :)

A quick note before I’m off to work…

When you write $\langle L\alpha, L\beta\rangle\, L\,\mathrm{vol} = L[\langle\alpha, \beta\rangle\,\mathrm{vol}]$, why does that work? I think I see why $\langle L\alpha, L\beta\rangle = L\langle\alpha, \beta\rangle$, but I don't see why in general $L(AB) = (LA)(LB)$. $L$ is just the operation of integrating some object over the loop, right?

I could be wrong, but I explained this here.

Basically, $\langle\alpha, \beta\rangle$ is a 0-form and $\mathrm{vol}$ is an $n$-form, and $\langle\alpha, \beta\rangle\,\mathrm{vol}$ is really $\langle\alpha, \beta\rangle \wedge \mathrm{vol}$, and $L$ is an algebra morphism (by definition). I defined the wedge product on $\mathcal{L}(\mathcal{M})$ in terms of the wedge product on $\mathcal{M}$. Maybe that is a bad move though. It feels right :) Besides, I used that and the fact that $L$ commutes with the contraction map to derive $L\langle\alpha, \beta\rangle = \langle L\alpha, L\beta\rangle$, so it might actually be correct :)

Eric

Posted by: Eric on July 20, 2004 12:42 PM | Permalink | Reply to this

Re: Inner Products

It took more time than I’d like to admit to get that apostrophe to come out right in html :)

Ah, thanks for pointing out that the apostrophe should be encoded as &#8217;. I’ll change that in my entry.

Posted by: Urs Schreiber on July 20, 2004 11:27 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

One quick note…

Maybe it would help if you could write this equation out in detail, perhaps I have some wrong definition in mind.

See here.

Eric

Posted by: Eric on July 20, 2004 4:41 AM | Permalink | Reply to this

Re: Inner Products

Hello,

This might sound a little silly, but how are we defining the Dirac delta? Doesn't it require $\mathrm{vol}$, i.e. isn't it

(1) $\int \delta_p\, f\,\mathrm{vol} = f(p),$

where $f$ is a 0-form and $\delta_p$ is the Dirac delta? I'm pretty sure it does require $\mathrm{vol}$, or else there would be some ambiguity under reparameterization, e.g. consider a case where $\mathrm{vol} = 2\, d\sigma$, so that

(2) $\int \delta_p\, f\,\mathrm{vol} = \int \delta_p\,(2f)\left(\tfrac{1}{2}\,\mathrm{vol}\right) = \int \delta_p\,(2f)\, d\sigma.$

Should this be $f(p)$ or $2f(p)$?
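In other words, if $\mathrm{vol} = \rho\, d\sigma$ with $\rho > 0$, the two candidate $\delta$'s differ by the density:

$\delta_p^{\mathrm{vol}} = \frac{1}{\rho(p)}\,\delta_p^{d\sigma}, \qquad \int \delta_p^{\mathrm{vol}}\, f\,\mathrm{vol} = \int \frac{1}{\rho(p)}\,\delta_p^{d\sigma}\, f\,\rho\, d\sigma = f(p),$

so the ambiguity is exactly a choice of which reference measure normalizes the $\delta$.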

Put another way, let's say we have a curve $\gamma: [0,2\pi] \to \mathcal{M}$ and a 1-form $\alpha$. We can write down

(3) $\int_\gamma \alpha = \int_{[0,2\pi]} \gamma^*\alpha.$

Now, let's say that $p$ is a point on $\gamma$ with $p = \gamma(1)$. We can also write down

(4) $\int_\gamma \delta_p\,\alpha = \int_{[0,2\pi]} \gamma^*(\delta_p)\,\gamma^*(\alpha) = \int_{[0,2\pi]} \delta_1\,\gamma^*\alpha.$

What is the result going to be?

To answer, it seems we need to express $\gamma^*\alpha$ in terms of some basis. Let $\gamma$ be parameterized by $s$, i.e. $\gamma(s)$ with $s \in [0,2\pi]$. Let $\phi, \psi: [0,2\pi] \to \mathbb{R}$ be such that

(5) $\gamma^*(\alpha) = \phi\, ds = \psi\,\mathrm{vol},$

where $\mathrm{vol}$ is the volume form obtained by pulling back the metric $g$ on $\mathcal{M}$ to $[0,2\pi] \subset \mathbb{R}$. When you evaluate the integral

(6) $\int_\gamma \delta_p\,\alpha = \int_{[0,2\pi]} \delta_1\,\gamma^*\alpha = \int_{[0,2\pi]} \delta_1\,\phi\, ds = \int_{[0,2\pi]} \delta_1\,\psi\,\mathrm{vol},$

is the result going to be $\phi(1)$ or $\psi(1)$? The answer to this is important, I think. At least it is important to me if I am ever going to understand this stuff.

According to your notes, it seems you would say the integral evaluates to $\phi(1)$. If so, then you are secretly assuming that $ds$ is a unit 1-form, i.e. your volume form under some metric. Another possibility (I thought of just before submitting) is that you are choosing some parameter-specific normalization for the Dirac delta, which would seem weird. These conclusions are based on examining Equation (2.6) and similar ones in your paper. If the answer is $\psi(1)$, which I think it should be, then the Dirac delta is defined with respect to the induced volume form. This seems more natural to me.

What do you think?

I hope I can communicate how important I think this is. I can imagine that you could look at this and brush it off as being too trivial to worry about. But I worry about these things! :) I hope you can put some thought into it too if for no other reason than to put my mind at ease :)

Eric

Posted by: Eric on July 20, 2004 9:46 PM | Permalink | Reply to this

Re: Inner Products

Hi Eric -

it seems that you want to define a metric on the interval $(0,2\pi)$ and derive a measure from that. But this is not what I have in mind.

I just define, by fiat, a measure on $(0,2\pi)$, namely the standard one denoted by $d\sigma$, which assigns ordinary parameter-interval length to each open subset of $(0,2\pi)$.

The $\delta$-distributions that appear are defined with respect to this measure $d\sigma$.

If you want to define other measures, like $\mathrm{vol} = 2\, d\sigma$, then of course you need to specify what a given $\delta$-sign is supposed to denote. But if we take all $\delta$s to be defined with respect to $d\sigma$, then $\int \delta_p\, f\, d\sigma = f(p)$, by definition.

Now consider your example of a 1-form $\alpha$ on target space which is of the form $\alpha(x) = \delta(x)\,\alpha_\mu\, dx^\mu$. In writing that down, we need to specify which $\delta$ we mean here. Let me assume it is the $\delta$ with respect to the measure $d^D x$, for the same coordinates $x$ that appear in the definition of $\alpha$ above. In other words, we have $\delta(x) = \frac{1}{(2\pi)^D}\int d^D k\; e^{ik\cdot x}$.

Now if we pull that back to a given loop $X: (0,2\pi) \to \mathcal{M}$ and integrate with the measure $d\sigma$ we can apply the usual rules to obtain

(1) $\int_0^{2\pi} X^*(\alpha) = \int_0^{2\pi} d\sigma\;\delta(X(\sigma))\,\alpha_\mu\, X'^\mu(\sigma).$

Let’s assume for brevity of the discussion that X(σ) takes the value X(σ)=0 only once and is invertible in the vicinity of that point.

Moreover, rigidly rotate the coordinate system on the target so that the coordinate line of $x^1$ is parallel to $X'$ at that point and all other coordinates are orthogonal.

Write $\delta(x) = \delta(x^1)\,\prod_{i>1}\delta(x^i)$.

From now on the factor $\prod_{i>1}\delta(x^i)$ will just be a spectator.

We can now go from this $\delta$ to the $\delta$ on $(0,2\pi)$ by writing

(2) $d\sigma\;\delta(X^1(\sigma)) = d\tilde\sigma\;\big((X^1)^{-1}\big)'(\tilde\sigma)\;\delta(\tilde\sigma) = d\tilde\sigma\;\big((X^1)^{-1}\big)'(0)\;\delta(\tilde\sigma)$

for $\sigma = (X^1)^{-1}(\tilde\sigma)$ in the vicinity of that point.

Now we perform the integral and find that

(3) $\int_0^{2\pi} X^*(\alpha) = \big((X^1)^{-1}\big)'(0)\;\alpha_\mu\, X'^\mu\big((X^1)^{-1}(0)\big)\,\prod_{i>1}\delta\big(X^i\big((X^1)^{-1}(0)\big)\big).$

That's it in gory detail. I think that even if I messed up somewhere, the message is that the integration over $(0,2\pi)$ is well defined and you can compute everything by taking care of how all the objects are defined.

Posted by: Urs Schreiber on July 21, 2004 11:07 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Hi Urs,

Thank you for taking the time to explain that. At least now I can follow what you are saying :)

On the other hand, let me express a certain dissatisfaction for this approach and you can either agree or disagree, but I doubt I will be able to convince you to change (because there might not be a compelling reason to). That is ok as long as we understand each other :)

Here is why I do not like what you said…

Consider a smooth map

(1) $\gamma: S^1 \to \mathcal{M}$

and two distinct diffeomorphisms

(2) $\sigma, \sigma': [0,2\pi] \to S^1.$

It seems that the only way we can define an unambiguous, parameterization-invariant Dirac delta is if we have a volume form $\mathrm{vol}$. If you have a volume form, you can define an unambiguous, coordinate-independent Dirac delta via

(3) $\int \delta_p\, f\,\mathrm{vol} = f(p).$

Any other choice is going to be coordinate dependent.

Now let’s define

(4) $\gamma^*\alpha = f\,\mathrm{vol},$
(5) $\sigma^*(\gamma^*\alpha) = \phi\, d\sigma,$

and

(6) $\sigma'^*(\gamma^*\alpha) = \phi'\, d\sigma',$

where $f$ is a 0-form on $S^1$ and $\phi, \phi'$ are 0-forms on $[0,2\pi]$. Also let

(7) $p = \gamma(q)$

and $\sigma, \sigma'$ be parameterizations such that

(8) $p = \gamma\circ\sigma(a) = \gamma\circ\sigma'(a).$

We can now define three distinct $\delta$-functions:

(9) (1) $\int_\gamma \delta_p\,\alpha = \int_{S^1} \delta_q\, f\,\mathrm{vol} = f(q),$
(10) (2) $\left.\int_\gamma \delta_p\,\alpha\right|_{\sigma} = \int_0^{2\pi} \delta_a\,\phi\, d\sigma = \phi(a),$
(11) (3) $\left.\int_\gamma \delta_p\,\alpha\right|_{\sigma'} = \int_0^{2\pi} \delta_a\,\phi'\, d\sigma' = \phi'(a).$

Unless I am making a blunder, which is always possible, we will have in general

(12) $f(q) \neq \phi(a) \neq \phi'(a).$
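To see the mismatch explicitly (a quick computation under the definitions above): the pulled-back coefficients are related by the ‘speeds’ of the parameterizations. If $\sigma' = \sigma\circ h$ for a diffeomorphism $h$ of $[0,2\pi]$ with $h(a) = a$, then

$\phi'(s) = \phi(h(s))\, h'(s) \quad\Longrightarrow\quad \phi'(a) = \phi(a)\, h'(a),$

and similarly $\phi(a) = f(q)\,\big|(\gamma\circ\sigma)'(a)\big|$, so the three evaluations generically differ by Jacobian factors.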

Your response suggests, and maybe I am reading too much into it, that string theorists use a coordinate-dependent $\delta$-function in their formulations. They end up with coordinate-dependent expressions and then, at the end of it all, look for things that are coordinate independent, because those are of course the only meaningful things to look at.

But why?!?!??!

This is like saying you want to work with coordinate dependent maps between manifolds in standard geometry and then at the end try to mod out by coordinate transformations to reduce to things that are coordinate independent. This sounds just silly to me, especially when there are tools in place that are coordinate independent already.

If you do not use the covariant $\delta$-function, then I can understand why you need to consider loop space to be that of parameterized loops and why you need to consider two different parameterizations of the same loop as different points in loop space. This seems to be solely due to a bad choice of $\delta$-function, which propagates through to all loop space operators. It seems to me, though I am keenly aware that I could be wrong and almost expect to be, that if you defined your operators on loop space using the covariant $\delta$-function, then you could work directly with unparameterized (but parameterizable) loops. In other words, of course you may need to express things in coordinates to help you with algebra, but whatever you write down will be independent of the parameterization you choose. The way you have formulated things in your notes, different parameterizations give inequivalent objects because the $\delta$-function buried in the operators is coordinate dependent.

Please forgive me if I am wrong, but this seems to be absolutely fundamental to formulating differential geometry on loop space. How you define your $\delta$-function may seem like an innocent enough thing, but it has repercussions that resonate throughout the entire formulation. It makes the difference between having to deal with parameterization-dependent expressions or parameterization-independent expressions.

To be even more bold, while still remembering that I could be completely wrong, I claim that if you were to reformulate things with the covariant $\delta$-function, then $d_K$ would ALWAYS be nilpotent because $K$ would be zero by construction. Now, you end up with $K$ not always being zero because your expressions are parameterization dependent. I can't help but think things would be simpler if you had $d_K^2 = 0$ on the nose for all fields on loop space.

Gotta run to work, but I hope I am getting my point across.

Best wishes,
Eric

Posted by: Eric on July 21, 2004 1:22 PM | Permalink | Reply to this

The need for parameterized loops

Yes, I fully agree with what you say. Indeed, the string/loop is parameterized, and the same loop with a different parameterization is taken to be a different object.

Maybe it would be more suggestive to use the terminology map space or more precisely space of periodic maps on (0,2 π) instead of parameterized loop space.

Now why don't we just quotient out by reparameterizations and go to the true unparameterized loop space, on which indeed $K = 0$?

The reason is that the general state of the quantum string is a wave function on parameterized loop space which does not necessarily take the same value on any two maps which only differ by a reparameterization.

Most prominently, the ground state of the string is not reparameterization invariant.

Why that?

Recall the long discussion about the LQG string. There such a complete reparameterization invariance was indeed built in by hand, and in such a context one could certainly do what you have in mind, namely take the config space of the string to be the space of unparameterized loops, which automatically ‘solves’ all the reparameterization constraints, and then on that space only impose the remaining Hamiltonian constraint.

In fact, this is precisely what is done in LQG: The reparameterization constraints are ‘solved’ by explicitly constructing states that do not depend on the parameterization. (Recall that we can think of the string as 1+1 dimensional gravity coupled to scalar fields on the worldsheet.)

So what you have in mind here is an LQG-like quantization of the string.

Now, as we discussed at great length before, there is a reason why this is a dubious procedure, even though it may look so extremely natural that apparently nobody had noticed that there is a subtlety until we began to discuss the LQG-string paper.

The subtlety is that, while it is true classically that everything is perfectly reparameterization invariant, the quantization of the constraints paints a different picture. There one finds that, due to the anomaly, not all of the constraints as they follow from canonical quantization can be imposed at once! Only half of them can.

Two remarks on that:

1) It is important that I am talking about canonical quantization. The LQG people are thinking of a relaxation of the ordinary recipe of canonical quantization, and only this relaxation allows one to have fully rep-invariant ‘quantum’ states.

2) The fact that a state is not rep invariant may sound strange, but when one remembers that what counts in quantum theory is not so much a given state but rather the expectation values computed from a state, it is plausible that we should only demand that the expectation values of the reparameterization constraint operators vanish. And this is still true if a single state is only annihilated by half of the constraints. Similarly, all amplitudes computed on the worldsheet are independent of the parameterization. And this is what counts.

So at this point it may sound counterintuitive that parameterization still plays a role in a sense in doing the quantum theory. But it is demanded by ordinary quantum formalism - and it all works out consistently.

In the end it can be best understood from path integral language. In order to evaluate the string's path integral we have to compute the string's action for several configurations. This action is diff invariant, but for purely practical reasons it is necessary to introduce coordinates merely to compute it. That's how they enter. They don't affect expectation values and amplitudes in the end, but they may well show up in other formal objects that don't directly describe something physically observable.

On the other hand, I have noted in my recent paper that the loop space formalism is particularly natural if we are working with boundary states. These are states of the closed string which are indeed annihilated by all the reparameterization constraints, and so they truly live on unparameterized loop space. (It follows that they are not annihilated by the Hamiltonian constraint of the string. These boundary states are ‘off-shell’ in a sense. That's not too surprising, since they represent background configurations different from empty flat spacetime, and so they are not on-shell with respect to the constraints of empty flat spacetime.)

So as far as boundary states are concerned we could in principle indeed do as you propose and go to the space which is obtained from parameterized loop space by identifying all loop-maps which only differ by a parameterization. On this space $d_K$ is indeed nilpotent.

I am currently working on a paper where I will make some use of this perspective, and I will discuss how one can construct ‘boundary DDF invariants’ and ‘boundary Pohlmeyer invariants’, which are objects that are well defined also on unparameterized loop space, because they take rep-invariant states to rep-invariant states.

As soon as I have some rudiments finished I’ll put a pdf about that on the web.

Posted by: Urs Schreiber on July 21, 2004 5:34 PM | Permalink | PGP Sig | Reply to this

Re: The need for parameterized loops

Hi Urs,

Thanks again for your help.

I know you are busy, but I was hoping that maybe you can give me a quick homework assignment? If I am able to complete the assignment, then maybe I might have something interesting to say.

From your last response, I hear you saying that the individual states are parameterization dependent, but that any expectation values you compute are independent of parameterization, so it is ok. It seems like the “finish line” is an expectation value. The “starting line” is an assumption that strings are somehow important. So we have a path

(1)"strings""expectation values".

Could you give me an example of an expectation value computation involving strings that is simple enough that I might be able to follow, but nontrivial enough that I can see how the basic mechanics of computing expectation values in general works? A pointer to a worked out example would do if you are pressed for time.

My homework assignment will be to reproduce this calculation using an entirely parameterization-independent formalism from start to finish.

A string is independent of parameterization (or I believe it should be) and the expectation value is parameterization independent. In carrying out the computation, I see that it might help to introduce parameters, but I claim that parameters can be introduced in a way such that each step along the way is independent of the parameters chosen.

I don’t question that following the standard approach can get you from start to finish. I just question the efficiency of the path in getting you there. My experience as an engineer has taught me that often the most beautiful solution is also the most efficient one, i.e. beauty and efficiency go hand in hand. Therefore, I care about efficiency almost as much as I care about beauty :)

Besides, I’ve always tried to work with a guiding principle:

You can’t complain about something unless you offer up something better as an alternative.

I want to stop complaining about the way the standard approach works and attempt to offer something better instead. I fully expect to arrive at the same answer, but I hope to demonstrate that there is a better way to do things.

Eric

Posted by: Eric on July 21, 2004 7:39 PM | Permalink | Reply to this

Re: The need for parameterized loops

The simplest thing is to consider the following:

The generator of arbitrary reparameterizations is $K(\sigma) = X'^\mu(\sigma)\,\frac{\delta}{\delta X^\mu(\sigma)} + \text{fermionic terms}$, where we can ignore the fermionic terms (form creators/annihilators) for the moment. (You can find the full expression in equation (3.5) of hep-th/0401175.) It is very convenient to take linear combinations for different $\sigma$ by making a Fourier transformation:

(1) $\mathcal{L}_{K,n} := \int_0^{2\pi} d\sigma\; e^{in\sigma}\, X'^\mu(\sigma)\,\partial_{(\mu,\sigma)} + \text{fermionic terms}.$

As we have discussed before, taking adjoints gives us $\left(X'^\mu(\sigma)\,\partial_{(\mu,\sigma)}\right)^\dagger = X'^\mu(\sigma)\,\partial_{(\mu,\sigma)}$. This implies directly that

(2) $\left(\mathcal{L}_{K,n}\right)^\dagger = \mathcal{L}_{K,-n}.$

A state $\psi$ (a function on parameterized loop space) is reparameterization invariant if it is annihilated by all the $\mathcal{L}_{K,n}$. But say we want to impose at most

(3) $\mathcal{L}_{K,n}\,\psi = 0 \quad \forall\, n > 0,$

i.e. for all positive integers $n$ instead of for all integers $n$. Then, still, the expectation value of every $\mathcal{L}_{K,n}$ vanishes, because

(4) $\langle \psi | \mathcal{L}_{K,n}\,\psi\rangle = \langle \mathcal{L}_{K,-n}\,\psi | \psi\rangle.$
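(For $n > 0$ the left hand side vanishes because $\mathcal{L}_{K,n}\,\psi = 0$ directly; for $n < 0$ the right hand side vanishes because then $-n > 0$ and so $\mathcal{L}_{K,-n}\,\psi = 0$. So all these expectation values vanish even though only half of the constraints annihilate $\psi$.)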
Posted by: Urs Schreiber on July 21, 2004 8:02 PM | Permalink | Reply to this

Covariant Formulation of Loop Space

On a manifold $\mathcal{M}$ with metric $g$, let $K$ be a vector field,

(1) $\gamma: S^1 \to \mathcal{M}$

a loop, and

(2) $\sigma: [0,2\pi] \to S^1$

a parameterization of $\gamma$ such that

(3) $K\big|_{\gamma\circ\sigma(s)} = T\,\frac{d}{ds}(\gamma\circ\sigma)(s)$

for all $s \in [0,2\pi]$, where $T$ is a constant. The flow

(4) $\phi_s: \mathcal{M} \to \mathcal{M}$

generated by $K$ will carry a point around the loop. Let

(5) $p = \gamma\circ\sigma(0)$

and define

(6) $p(s) = \phi_s(p).$

For any 1-form $\alpha$ it is clear that we always have

(7) $\oint_\gamma \alpha = \oint_{(\phi_s)_*\gamma} \alpha.$

Consequently, we have

(8) $\oint_\gamma \mathcal{L}_K\,\alpha = \frac{d}{ds}\left[\oint_{(\phi_s)_*\gamma} \alpha\right]_{s=0} = 0$

for any loop $\gamma$ on $\mathcal{M}$. Therefore, we can take

(9) $\oint \mathcal{L}_K\,\alpha = 0$

as a statement that the integral of $\alpha$ around the loop $\gamma$ is independent of the parameterization of the loop. On $\mathcal{M}$, this is obvious. However, we would like to define an unparameterized loop space for which the above expression is also manifest.

We will consider loop space to be simply the set of unparameterized (but parameterizable) maps

(10) $\gamma: S^1 \to \mathcal{M}.$

Given a $p$-form $\alpha$ on $\mathcal{M}$, we obtain a $p$-form on loop space via the loop map

(11) $L: \Omega^p(\mathcal{M}) \to \Omega^p(\mathcal{L}(\mathcal{M}))$

defined by

(12) $L\alpha = \int_{S^1} \gamma^*(\alpha)\,\mathrm{vol},$

where $\mathrm{vol}$ is the induced volume form on $S^1$ obtained by pulling back the metric $g$ from $\mathcal{M}$ to $S^1$, and $\gamma$ is a, yet to be specified, loop. Note that because neither coordinates nor parameterizations were used to define the loop map, it is trivially independent of coordinates and parameterizations.

Proposition: The loop map is 1-1.

Let $\gamma_p$ be a loop that is contractible to a point $p$; then

(13) $\alpha\big|_p = \lim_{\gamma_p\to p} \frac{L\alpha\big|_{\gamma_p}}{\|\gamma_p\|} = \lim_{\gamma_p\to p} \frac{\int_{S^1}\gamma_p^*(\alpha)\,\mathrm{vol}}{\|\gamma_p\|},$

where

(14) $\|\gamma_p\| = \int_{S^1} \mathrm{vol}$

is the length of the loop.

We can define a tensor product on $\mathcal{L}(\mathcal{M})$ in terms of the tensor product on $\mathcal{M}$ via

(15) $L\alpha \otimes L\beta = L(\alpha \otimes \beta).$

We can also define a contraction map on $\mathcal{L}(\mathcal{M})$ such that

(16) $L \circ \mathrm{contract} = \mathrm{contract} \circ L.$

Given a vector field $X$ on $\mathcal{M}$, we can use the above relations to define a vector field $LX$ on $\mathcal{L}(\mathcal{M})$ via

(17) $\langle L\alpha, LX\rangle = L\langle \alpha, X\rangle.$

Warning: I am running out of steam and it’s getting late. :)

Before I sign off, I'll just state that everything I've said is equally valid for open curves as well as loops. I was also trying to build up to

(18) $[L\alpha, L\beta] = [\alpha, \beta].$

Once I have this, I can start looking at expectation values

(19) $\langle F\rangle = \frac{[L\alpha,\, F\, L\alpha]}{[L\alpha,\, L\alpha]}.$

If I can get this far, I was going to show that

(20) $\mathcal{L}_{LX}\, L\alpha = L(\mathcal{L}_X\,\alpha),$

so that

(21) $\langle \mathcal{L}_{LK}\rangle_{L\alpha} = \frac{[L\alpha,\, \mathcal{L}_{LK}\, L\alpha]}{[L\alpha,\, L\alpha]} = \frac{[L\alpha,\, L(\mathcal{L}_K\,\alpha)]}{[L\alpha,\, L\alpha]} = \frac{[\alpha,\, \mathcal{L}_K\,\alpha]}{[\alpha,\,\alpha]} = \langle \mathcal{L}_K\rangle_\alpha = 0,$

because

(22) $\oint \mathcal{L}_K\,\alpha = 0.$

This would almost seem like a solution to my homework problem.

Good night! :)

Eric

Posted by: Eric on July 22, 2004 4:51 AM | Permalink | Reply to this

Re: Covariant Formulation of Loop Space

Hi Eric -

I have a couple of comments and questions:

You write:

On a manifold $\mathcal{M}$ with metric $g$, let $K$ be a vector field,

Do you really want $K$ to be defined on target space? Then you have to use a different $K$ for different loops in order that $K$ restricted to a given loop really gives $\partial_\sigma$ along that loop.

BTW, here is an important point which I may not have emphasized enough but which I mentioned in my last comment:

For something on the loop to be reparameterization invariant it is not sufficient for it to be annihilated by $K$. That's because $K$ alone only generates rigid reparameterizations, i.e. those which send $\sigma \mapsto \sigma + \mathrm{const}$. In general we need $\sigma \mapsto f(\sigma)$ and this is the reason for the use of the ‘modes’ of $K$ which I mentioned last time.

Next, I am not sure why you restrict to things like $\oint \mathcal{L}_K\,\alpha = 0$, where $\alpha$ is a form on target space. More generally we have objects on loop space that don't directly come from target space.

Then, I have to apologize and admit that I don't fully understand the definition of $L$. Usually when you pull back a $p$-form $\alpha$ via $\gamma^*(\alpha)$ to a 1-dimensional manifold like a loop, the result vanishes for $p > 1$. This seems to be something other than what you have in mind.

Could you write out explicitly the action of $L$ that you have in mind, for example for a 2-form?

Posted by: Urs Schreiber on July 22, 2004 10:38 AM | Permalink | PGP Sig | Reply to this

Re: Covariant Formulation of Loop Space

Good morning :)

Do you really want $K$ to be defined on target space? Then you have to use a different $K$ for different loops in order that $K$ restricted to a given loop really gives $\partial_\sigma$ along that loop.

I think this is ok. Besides, for a single $K$ on $\mathcal{M}$ we cover LOTS of loops. Granted, I understand why you wouldn't like this.

In general we need $\sigma \mapsto f(\sigma)$ and this is the reason for the use of the ‘modes’ of $K$ which I mentioned last time.

Ok. I’ll think about this, but I think what I have in mind also works for arbitrary f(σ) without having to resort to modes.

Next, I am not sure why you restrict to things like $\oint \mathcal{L}_K\,\alpha = 0$, where $\alpha$ is a form on target space. More generally we have objects on loop space that don't directly come from target space.

Are you convinced that we need forms on loop space that are not obtained from forms on target space? If so, what is it that has convinced you?

Physics is about explaining things that can be measured. Anything that can be measured must be expressible as a differential form on target space. If it cannot be expressed as a form on target space, it is not measurable. Hence, it is not physics unless it can be mapped to something on target space.

If I am wrong about this, please tell me because this is a cornerstone of everything I understand about physics :) It would be shattering for me to learn that even this is wrong :)

Then, I have to apologize and to admit that I don’t fully understand the definition of L.

Why are you apologizing for my inability to write down something that makes sense? :)

You are absolutely correct. What I wrote is really only valid for 0-forms. I was being “cavalier” assuming that some such L existed for higher degree forms. I am still fairly sure that one must exist, but I’ll have to put in a little more effort to give it a rigorous definition.

Gotta run!

Eric

Posted by: Eric on July 22, 2004 2:20 PM | Permalink | Reply to this

Lost in Space

Hi Urs,

Believe it or not, I’m not trying to cause trouble :) I’m just trying to understand this stuff. When I try as hard as I have to understand your deformation paper, I sometimes give up and try to reformulate things myself. Sometimes this works, but I don’t seem to be getting anywhere here.

I’m thinking about trying a simplified version of loop space, i.e. sphere space on $\mathbb{R}^n$. In this model, a point in $\mathbb{R}^{n+1}$ corresponds to an oriented $(n-1)$-sphere in $\mathbb{R}^n$. For some notation, let

(1) $\pm S_a(x_1, \ldots, x_n)$

denote an oriented $(n-1)$-sphere in $\mathbb{R}^n$ centered at $(x_1, \ldots, x_n) \in \mathbb{R}^n$ having radius $a$, where the $\pm$ denotes opposite orientations.

Although it might not technically be a “map”, let me call $\mathcal{S}_*$ a map anyway from chains on $\mathbb{R}^n$ to chains on $\mathbb{R}^{n-1}$ defined by

(2) $\mathcal{S}_*(x_1, \ldots, x_n) = \mathrm{sign}(x_n)\, S_{|x_n|}(x_1, \ldots, x_{n-1}).$

In particular, we can consider “circle space on $\mathbb{R}^2$”, i.e. every point $(x, y, z) \in \mathbb{R}^3$ corresponds to an oriented circle

(3) $\mathcal{S}_*(x, y, z) = \mathrm{sign}(z)\, S_{|z|}(x, y)$

in $\mathbb{R}^2$. This toy model seems to capture some of the essence of what is going on in loop space, but because oriented circles in the plane are completely described by three real numbers, circle space is only three-dimensional. My poor brain might actually be able to comprehend that.

What do you think? Could this model be helpful?

For the time being, I will assume it is a valid toy model of loop space and develop some ideas.

Given a 1-form $\alpha$ on $\mathbb{R}^2$ and a point $p \in \mathbb{R}^3$, we can define a 0-form $\mathcal{S}^*(\alpha)$ on $\mathbb{R}^3$ via

(4) $\int_p \mathcal{S}^*(\alpha) = \int_{\mathcal{S}_*(p)} \alpha.$

This is a generalized notion of push forward and pull back.
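
To make (4) concrete - a small Python sketch under my own conventions (radius $|z|$, orientation from $\mathrm{sign}(z)$, and an arbitrary test 1-form):

```python
import numpy as np

# Evaluate the 0-form S^*(alpha) at p = (x, y, z) in R^3 as the line
# integral of the 1-form alpha over the oriented circle S_*(p) in R^2.

def alpha(x, y):
    # components (a_x, a_y) of alpha = -y dx + x dy, so d(alpha) = 2 dx^dy
    return -y, x

def pullback_alpha(p, n=4000):
    x0, y0, z = p
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    r = abs(z)
    x, y = x0 + r * np.cos(t), y0 + r * np.sin(t)
    dxdt, dydt = -r * np.sin(t), r * np.cos(t)
    ax, ay = alpha(x, y)
    # the orientation of the circle flips with sign(z)
    return np.sign(z) * np.sum(ax * dxdt + ay * dydt) * (t[1] - t[0])

print(pullback_alpha((0.0, 0.0, 1.0)))    # 2*pi*r^2 = 2*pi for this alpha
print(pullback_alpha((0.0, 0.0, -1.0)))   # opposite orientation: -2*pi
```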

Things get a little interesting if we look at an oriented curve $\gamma$ in $\mathbb{R}^3$. Fortunately, my previous discussion of the extrusion map $H_X(t)$ helps here. An oriented curve $\gamma$ in $\mathbb{R}^3$ corresponds to extruding a circle around on $\mathbb{R}^2$ with orientation defined as I defined it for the extrusion map. Therefore,

(5) $\mathcal{S}_*(\gamma)$

is a valid 2-chain on $\mathbb{R}^2$. Now, given a 1-chain $\gamma$ on $\mathbb{R}^3$ and a 2-form $\beta$ on $\mathbb{R}^2$, we obtain a 1-form $\mathcal{S}^*(\beta)$ on $\mathbb{R}^3$ via

(6) $\int_\gamma \mathcal{S}^*(\beta) = \int_{\mathcal{S}_*(\gamma)} \beta.$

After doodling some special cases, I’ve almost convinced myself that $\mathcal{S}_*$ is natural in the sense that

(7) $\partial\, \mathcal{S}_* = \mathcal{S}_*\, \partial,$

which, if true, implies

(8) $\mathrm{d}\, \mathcal{S}^* = \mathcal{S}^*\, \mathrm{d}.$

I think there is a decent chance that I haven’t made any serious blunders here, but before proceeding I’ll wait for your blessing. Do you think this is a valid toy model for loop space? If so, then I might be on the way to gaining some understanding.

The first lesson: we need to generalize our notions of push forward and pull back.

But this seems not too difficult.

Anyway…

Good night! :)

Eric

Posted by: Eric on July 23, 2004 4:12 AM | Permalink | Reply to this

Re: Lost in Space

Hi Eric -

yes, I think this is a sensible toy model and that your constructions do make sense.

(Maybe an even better toy model would be some ‘n-gon space’, i.e. the space of n-tuples of points in target space. When taking $n \to \infty$ with a suitable continuity condition this flows to loop space.)

Concerning that ‘naturalness’ condition. For loop space this is the content of the little theorem that I recall in equation (3.8) of hep-th/0407122. In general I think that what you are thinking about goes in the same direction as the stuff mentioned in that section 3.1.

Posted by: Urs Schreiber on July 23, 2004 12:01 PM | Permalink | PGP Sig | Reply to this

Re: Lost in Space

Hi Urs,

I was just looking at hep-th/0407122 and noticed something that I had seen before, but never mentioned it.

I don’t know if it is deep or a coincidence, but Equation (2.6) is essentially the Ito formula from stochastic calculus. At least if you squint your eyes enough :)

Eric

Posted by: Eric on July 23, 2004 8:24 PM | Permalink | Reply to this

Re: Lost in Space

To me it rather looks like a generalized flatness condition on some curvature. When ignoring the mode indices for a moment, we are just dealing with a structure very similar to what we talked about in the context of string field theory:

There is some odd-graded operator $Q = \mathrm{d} =$ whatever symbol you like, which squares to something. Now we want to deform it by adding some $A$ so that it still squares to that same thing, i.e. $(\mathrm{d} + A)^2 = \mathrm{d}^2$. The condition on $A$ is hence $\mathrm{d}A + AA = 0$. If $\mathrm{d}$ were just the ordinary exterior derivative this would simply say that $A$ must be a flat connection.

The question that I hint at by saying ‘One large class of solutions of this equation’ is whether one could find $A$ which don’t come from a ‘gauge’ transformation $A = U^{-1}(\mathrm{d}U)$ of the trivial connection. I.e., are there large transformations not continuously connected to the identity?

Probably there are, and deforming the superconformal algebra by them might yield something interesting. But so far I haven’t come across any. On the other hand, I haven’t checked if there is an $A$ which gives the transformation (4.33) in hep-th/0401175.

Posted by: Urs Schreiber on July 24, 2004 12:25 AM | Permalink | PGP Sig | Reply to this

n-Gon Space

Hi Urs,

I was taking your advice and looking at n-gon space. To make things as simple as possible, I started with something that is not really an n-gon but from which I thought I could still learn something. I started with “segment space,” i.e. the space of all straight line segments in $\mathbb{R}^n$. I started turning the crank on things like push forward and pull back when I found (what is probably obvious) that in order to have the natural relation

(1) $\partial\, S_* = S_*\, \partial,$

for a point $p$ in segment space, we must always have $\partial(S_* p) = 0$, i.e. the image of a point in segment space needs to be closed in target space, because we will always have $S_*(\partial p) = 0$. Since a straight line segment is not closed, things don’t really have a chance to work out as far as I can see.

The point is, whatever toy model we come up with, a point in the “higher” space must map to a closed object in target space. A downside of this is that you cannot pull back a coordinate basis on target space to a form on n-gon space, because all exact forms pull back to zero.

My next toy model might be “triangle space” although it is tempting to look at “point space” :)

A thought just before submitting…

Wait a minute! Isn’t it true that

(2) $S^*(\alpha \wedge \beta) = (S^*\alpha) \wedge (S^*\beta)$

and

(3) $S^*(\alpha + \beta) = (S^*\alpha) + (S^*\beta)$

for forms $\alpha, \beta$ on target space?

Since any form can be expressed as

(4) $\alpha = \sum_i \phi_i\, \mathrm{d}\beta_i,$

where the $\phi_i$ are 0-forms, then if this is true about pull back we have

(5) $S^*\alpha = \sum_i S^*(\phi_i)\, S^*(\mathrm{d}\beta_i) = 0$

because

(6) $S^*(\mathrm{d}\beta_i) = 0.$

This would mean that there are no forms that pull back from target space to a non-zero form on the higher space. The only way out is if I am mistaken about the pull back map (which is possible, but I doubt it) or the operations are not natural.

Does this mean I am back to the drawing board? :|

Eric

Posted by: Eric on July 24, 2004 2:29 PM | Permalink | Reply to this

Re: Lost in Space

Hi Urs,

I think my latest train of thought is going nowhere (surprise). I think I should probably return to your previous question

Usually when you pull back a $p$-form $\alpha$ via $\gamma^*(\alpha)$ to a 1d manifold like a loop, the result vanishes for $p > 1$. This seems to be something other than what you have in mind.

Could you write out explicitly the action of L that you have for example for a 2-form?

Let’s assume we will be working with n-loop space $\mathcal{L}^n(\mathcal{M})$, so consider the n-torus

(1) $T^n = \underbrace{S^1 \times \cdots \times S^1}_{n\ \text{times}}$

and a map

(2) $\gamma: T^p \to \mathcal{M}$

so that we have

(3) $\int_\gamma \alpha = \int_{S^1 \times \cdots \times S^1} \gamma^* \alpha$

for some $p$-form $\alpha$. For the time being, consider the case where $p = 2$.

What I am looking for is related to Fubini’s theorem. I am looking for some map $\tilde{\gamma}$ satisfying

(4) $\int_{S^1 \times S^1} \gamma^* \alpha = \int_{S^1} \int_{S^1} \tilde{\gamma}\, \alpha.$

In other words, $\tilde{\gamma}\, \alpha$ is something that, when integrated over $S^1$, results in a 1-form. When the resulting 1-form is integrated over $S^1$, the result is the same as if you had integrated the 2-form $\gamma^* \alpha$ over $S^1 \times S^1$. Does such a thing exist? It doesn’t seem to be that crazy of a thing, so maybe it can be made to make sense. Any ideas?

Eric

Posted by: Eric on July 24, 2004 5:38 PM | Permalink | Reply to this

Re: Lost in Space

Hi Eric -

Does such a thing exist?

Yes it does. Now we are converging! :-) This is what is used in that section 3.1 of hep-th/0407122.

Pick a 2-form $B = \frac{1}{2} B_{\mu\nu}\, \mathrm{d}x^\mu \wedge \mathrm{d}x^\nu$ on target space. ‘Pull it back on 1 index’ to the loop $X = X(\sigma)$ to obtain $\oint \mathrm{d}\sigma\, X'^\mu(\sigma)\, B_{\mu\nu}(X(\sigma))\, \mathrm{d}X^\nu(\sigma)$

(as Peter Woit has indicated here, this can be expressed in more mathematical terms, but I won’t do that here) to obtain a 1-form on loop space, which, when integrated over a loop in loop space (parameterized by $\tau$), gives the integral of the pullback of $B$ over the surface swept out by the loop of loops.

My apologies if my notation and wording are bad. You could alternatively have a look at equations (1.4)/(3.20) in Ferreira et al.’s hep-th/9710147. Even though they use essentially the same notation, maybe it helps to see it discussed in somewhat different words. (Note that for the moment you can just think of all the Ws appearing in that paper as being the unit element and just ignore them.)
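
In case a discretized version helps - here is my own toy rendering of that ‘pull back on one index’ in Python (the loop, the constant $B$, and all names are just for illustration):

```python
import numpy as np

# The loop-space 1-form obtained from a target space 2-form B has
# components A_(nu,sigma)(X) = X'^mu(sigma) B_munu(X(sigma)); pairing it
# with a deformation deltaX of the loop approximates
# \oint dsigma X'^mu(sigma) B_munu(X(sigma)) deltaX^nu(sigma).

n = 1000
sigma = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
X = np.stack([np.cos(sigma), np.sin(sigma)], axis=1)   # a loop in R^2
Xp = np.gradient(X, sigma, axis=0)                     # X'(sigma)

def B(x):
    # a constant 2-form B = dx^0 ^ dx^1, stored as an antisymmetric matrix
    return np.array([[0.0, 1.0], [-1.0, 0.0]])

A = np.einsum('si,sij->sj', Xp, np.stack([B(x) for x in X]))

deltaX = 0.01 * X      # a small radial deformation of the loop
pairing = np.sum(np.einsum('sj,sj->s', A, deltaX)) * (sigma[1] - sigma[0])
print(pairing)   # ~ integral of B over the thin swept annulus (up to orientation)
```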

Posted by: Urs Schreiber on July 25, 2004 3:55 PM | Permalink | PGP Sig | Reply to this

Still Trying

Hi Urs,

Besides obvious probable causes, e.g. too few brain cells, I don’t know why I am struggling so hard to understand differential geometry on loop space.

In my first attempts at interpreting your deformation paper, I introduced this map $L$. At the time, aside from a volume form, I didn’t think I was doing anything new, but basically rewriting what you did in a coordinate-free manner.

In your paper, you have the expression

(1) $\langle U, V \rangle|_X = \int_0^{2\pi} \mathrm{d}\sigma\; g(X(\sigma))\big(U(\sigma), V(\sigma)\big).$

Just for clarity, could we write this as

(2) $\langle U, V \rangle|_X = \int_0^{2\pi} \mathrm{d}\sigma\; g(X(\sigma))\big(U(X(\sigma)), V(X(\sigma))\big)?$

If so, could you write this as

(3) $\langle U, V \rangle|_X = \int_0^{2\pi} \mathrm{d}\sigma\; g(U, V)|_{X(\sigma)}?$

This was/is my understanding, so please correct me if I am wrong.

If I am not in trouble already, here is where I probably get into trouble. I then tried to reverse engineer this expression in order to write the tangent vector

(4) $U|_X = \int_0^{2\pi} \mathrm{d}\sigma\; U^\mu(X(\sigma))\, \partial_\mu|_{X(\sigma)} = \int_0^{2\pi} \mathrm{d}\sigma\; U|_{X(\sigma)}.$

Is that OK? I assumed it was and proceeded in my previous posts. If this is not correct, could you please correct me? I thought this was nice because it allows us to use the clever multi-index notation to write

(5) $U = U^{(\mu,\sigma)}\, \partial_{(\mu,\sigma)} = \int_0^{2\pi} \mathrm{d}\sigma\; U^\mu(X(\sigma))\, \partial_\mu|_{X(\sigma)} = \int_0^{2\pi} \mathrm{d}\sigma\; U|_{X(\sigma)}.$

I didn’t think it felt right to denote the U on the LHS with the same letter as the U on the RHS, so I’ll denote the LHS with

(6) $(L_\sigma U)|_X = U^{(\mu,\sigma)}\, \partial_{(\mu,\sigma)} = \int_0^{2\pi} \mathrm{d}\sigma\; U|_{X(\sigma)}.$

I have a lot more to say, but I’ll wait to see if I’m already headed in the wrong direction before proceeding.

Cheers!

Eric

Posted by: Eric on July 25, 2004 3:41 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

seems like once again notation kept us from communicating properly. ;-)

I can only agree with what you wrote here, if the right hand sides of the last three equations are defined by the left hand sides.

This was probably part of the problem, because naively these right hand sides seem to mean something else - at least to me! :-)

If $U$ is a vector field on target space then I would expect $U|_{X(\sigma)}$ to be the vector of that field at the point $X(\sigma)$. Then $\int_0^{2\pi} \mathrm{d}\sigma\, U|_{X(\sigma)}$ would be a formal continuous sum of vectors in target space.

But this is not what the left hand side is. The left hand side is really one single vector field, but on loop space.

But now I finally understand what you mean by that map L. Sorry for being so slow. I guess you want to have

(1) $L(U^\mu\, \partial_\mu) = \int_0^{2\pi} \mathrm{d}\sigma\; U^\mu(X(\sigma))\, \frac{\delta}{\delta X^\mu(\sigma)}$
(2) $L(V_\mu\, \mathrm{d}x^\mu) = \int_0^{2\pi} \mathrm{d}\sigma\; \mathrm{d}X^\mu(\sigma)\, V_\mu(X(\sigma))$

etc.

P.S.

Just for clarity, could we write this as

Yes, precisely. I should have made this clearer.

Posted by: Urs Schreiber on July 25, 2004 4:15 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs,

Thanks for that explanation. I feel like we are making progress, but I still have a ways to go before understanding this stuff.

First, what is the difference between

(1) $\int_0^{2\pi} \mathrm{d}\sigma\; U^\mu(X(\sigma))\, \frac{\delta}{\delta X^\mu(\sigma)}$

and

(2) $\int_0^{2\pi} \mathrm{d}\sigma\; U^\mu(X(\sigma))\, \frac{\partial}{\partial X^\mu}\Big|_{X(\sigma)}?$

Oh yeah. I didn’t understand this at all…

If $U$ is a vector field on target space then I would expect $U|_{X(\sigma)}$ to be the vector of that field at the point $X(\sigma)$.

So would I.

Then $\int_0^{2\pi} \mathrm{d}\sigma\, U|_{X(\sigma)}$ would be a formal continuous sum of vectors in target space.

Yep.

But this is not what the left hand side is. The left hand side is really one single vector field, but on loop space.

Huh?

*light bulb*

A point in loop space is a loop in target space. A point p in target space has a basis for tangent vectors at a point

(3) $\frac{\partial}{\partial X^\mu}\Big|_p,$

where $\mu \in \{1, \ldots, n\}$.

A point $\gamma$ in loop space has a basis for tangent vectors

(4) $\frac{\partial}{\partial X^{(\mu,\sigma)}}\Big|_\gamma,$

where $\mu \in \{1, \ldots, n\}$ and $\sigma \in (0, 2\pi)$. Therefore, there is a continuum of basis vectors on loop space.

Ok. I’ll have to think about that before I give it my blessing (as if it needs it) :)

If this is correct, then you have been extremely/excessively “cavalier” with your use of the symbol “$\int$”. A tangent vector $LU|_\gamma$ in loop space should be written as something more like

(5) $LU|_\gamma = \bigoplus_{p \in \gamma} U|_p,$

i.e. it is a continuum of target space tangent vectors, one for each point p on γ. Then you have

(6) $LU|_\gamma + LV|_\gamma = \bigoplus_{p \in \gamma} (U|_p + V|_p).$

This means that

(7) $LU|_\gamma = \bigoplus_{p \in \gamma} U|_p = U^{(\mu,p)}\, \partial_{(\mu,p)}|_\gamma$

is really a sum over μ and a continuum direct sum over points p of the loop, i.e.

(8) $U^{(\mu,p)}\, \partial_{(\mu,p)}|_\gamma = \bigoplus_{p \in \gamma} \sum_{\mu=1}^n U^\mu\, \partial_\mu|_p.$

Argh!! Your use of the symbol “$\int$” killed me! :) This is so simple :)

Ok. Have you seen those big oversized plastic yellow wiffle bats that kids sometimes hit these big oversized wiffle balls around with? Well, if what I write above is correct, then imagine I’ve just *bonked* you over the head with a wiffle bat!! :) Better yet, go out and get one of those wiffle bats and *bonk* yourself over the head for me please :)

Ok. Now that I think I understand what the basic idea is, I’ll need to go back and rethink a lot of things.

I’ll say this is significant progress.

Thanks!
Eric

PS: If $LU$ is a vector field on loop space and $\gamma$, $\gamma'$ are distinct loops that intersect at a point $p$ in target space, do we demand that

(9) $(LU|_\gamma)|_p = (LU|_{\gamma'})|_p$

or do we allow more general vector fields that might be multi-valued as viewed from target space? My first reaction is that I would not allow such multi-valued vector fields and would demand that every vector field on loop space correspond to a vector field on target space. I understand that this would rule out the reparameterization Killing vector $\partial_\sigma|_\gamma$ that you use so heavily, but I’m not sure that is a bad thing. There are other ways to display reparameterization invariance, e.g. have it built in by construction.

Posted by: Eric on July 26, 2004 5:36 AM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I agree with some things - but not with everything! :-) Let’s see:

Therefore, there is a continuum of basis vectors on loop space.

Sure. There are as many basis vectors as the space has dimension!

it is a continuum of target space tangent vectors

In a sense, for special cases. More precisely, it is the integral over functional derivatives. Not all these functional derivatives need to come from target space vectors. For instance $\int_0^{2\pi} \mathrm{d}\sigma\, e^{in\sigma}\, \frac{\delta}{\delta X^1(\sigma)}$ is a vector on (parameterized) loop space which does not come from a target space vector the way you indicated. What you write is true for those elements of the tangent bundle over loop space which do come from applying the map $L$ to some target space vector. But not all vectors on loop space are of this form.

or do we allow more general vector fields that might be multi-valued as viewed from target space?

It is not in our hands to allow a vector field or not. These vector fields just exist. And on parameterized loop space in general $(LU|_\gamma)|_p \neq (LU|_{\gamma'})|_p$. This is nothing to worry about. It is just an indication that the map $L$ does not have as many natural properties as you are maybe expecting.

There are other ways to display reparameterization invariance, e.g. have it built in by construction.

Parameterized loop space is the object that I am interested in. I don’t know how to handle unparameterized loop space efficiently except by getting it from parameterized loop space by dividing out the action of reparameterizations; and, worse, unparameterized loop space does not pertain to the applications that I need loop space for. Restricting to vector fields on loop space which sit in the range of $L$ similarly is not an option for these applications - and to be honest I don’t understand why that would be interesting or desirable. In stringy terms it would correspond to restricting attention to the center-of-mass mode of the string, while ignoring all excitations.

Posted by: Urs Schreiber on July 26, 2004 10:28 AM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs :)

I agree with some things - but not with everything! :-)

Well, I hope when we iron out that last notational issues that you will agree with MOST of what I said :)

For instance, I hope that you agree that an arbitrary tangent vector LU γ in loop space can be expressed as

(1) $LU|_\gamma = \bigoplus_{p \in S^1} U|_{\gamma(p)},$

where

(2) $\gamma: S^1 \to \mathcal{M}.$

Wait! I can feel you objecting already, but wait :) Let me explain the notation.

The tangent vector

(3) $U|_{\gamma(p)}$

is meant to be just some choice of tangent vector from the tangent space at $\gamma(p)$ and, for the time being, we do not need to think of $U$ as a vector field on target space, i.e. we are allowing multi-valued tangent vectors on target space.

Do you agree with this so far?

Just to clarify, if $p, q \in S^1$ and $p \neq q$ with $\gamma(p) = \gamma(q)$, we could have

(4) $U|_{\gamma(p)} \neq U|_{\gamma(q)}.$

For now.

Eric

Posted by: Eric on July 26, 2004 3:10 PM | Permalink | Reply to this

Re: Still Trying

Hi Urs,

In private email, we said

Hi Eric -

I am typing with single digits on my left hand. I was involved in a cycling accident (on my brand new racing bike!). I am OK, but broke my wrist. I won’t make it to work.

O dear. I wish you all the best then!

Fortunately we are living in an age of information technology where we can keep you in touch with the outside world - such as the latest on flat connections on loop space! ;-)

Of course you can think of the tuple of components as the vector if you want. I agree! :-)

Whoa! Where did this statement come from? I would be the last person to promote such an idea :)

I would have thought so, too. ;-)

So in finite dimensions you would always write a vector as

(1) $V = \sum_\mu V^\mu\, \partial_\mu$

instead of

(2) $V = \bigoplus_\mu V^\mu,$

right?

Apparently I caused a problem by, without much ado, turning the sum in

(3) $\sum_\mu V^\mu\, \partial_\mu$

into an integral when we are working on a space of uncountably large dimension:

(4) $\int \mathrm{d}\sigma\; V(\sigma)\, \frac{\delta}{\delta X(\sigma)}.$

But recall that $\frac{\delta}{\delta X(\sigma)}$ is the functional derivative. It acts on a “coordinate” $X(\kappa)$ on our space as

(5) $\frac{\delta}{\delta X(\sigma)}\, X(\kappa) = \delta(\sigma - \kappa).$

This delta-distribution on the right forces us to use the integral, because otherwise we’d get something like

(6) $\Big( \sum_\sigma V(\sigma)\, \frac{\delta}{\delta X(\sigma)} \Big)\, X(\kappa) = V(\kappa)\, \delta(0),$

which is not well defined.

Therefore the integral really is the right generalization of the sum over basis elements of the vector space.
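
To see this in a finite-dimensional truncation (my own discretization, just to make the $\delta(0)$ explicit):

```python
import numpy as np

# Discretize the loop into n points. Then delta/deltaX(sigma_i) becomes
# (1/dsigma) d/dX_i, so the *integral* sum_i dsigma V_i (1/dsigma) dX_k/dX_i
# applied to the coordinate X(kappa) gives V(kappa) independently of n,
# while the bare sum grows like 1/dsigma, i.e. like delta(0).

for n in (100, 1000, 10000):
    dsigma = 2 * np.pi / n
    sigma = np.arange(n) * dsigma
    V = np.sin(sigma)
    k = n // 4                            # evaluate at kappa = pi/2
    kron = (np.arange(n) == k)            # dX_j/dX_k = Kronecker delta
    integral_version = np.sum(dsigma * V * (1.0 / dsigma) * kron)
    bare_sum = np.sum(V * (1.0 / dsigma) * kron)
    print(n, integral_version, bare_sum)  # 1.0 each time vs. a growing number
```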

But let’s continue this on the SCT.

Ok! :)

I never thought of physics as an endurance sport, but with the shape my wrist is in (I can’t even see an orthopedist to have it set until tomorrow!) just getting that last bit to compile was exhausting :)

I think we are making good progress so thank you for bearing with me.

I think that expressing a tangent vector on loop space in terms of a continuum direct sum of tangent vectors on target space is a good idea. What I am suggesting is not as bad as writing a vector as a direct sum of its components as you said here

So in finite dimensions you would always write a vector as

(7) $V = \sum_\mu V^\mu\, \partial_\mu$

instead of

(8) $V = \bigoplus_\mu V^\mu,$

right?

First of all, of course that would be terribly coordinate dependent, which goes against everything I stand for and I think you know that :)

Please take a moment and draw some pictures or something to really think about what I mean when I write

(9) $LU|_\gamma = \bigoplus_{p \in S^1} U|_{\gamma(p)}.$

This expression is absolutely parameterization and coordinate independent. To see what this looks like, draw a picture of a loop $\gamma(S^1)$ in some target space and for each point $p$ of the loop assign (draw) a tangent vector $U|_p$. If the loop is self-intersecting, then you can actually choose two distinct tangent vectors at the same point in target space. Hence, $U$ need not be a vector field on $\mathcal{M}$ because of this possible multi-valuedness. The simplest example would be to draw the tangent vector

(10) $K|_\gamma = \bigoplus_{s \in [0, 2\pi]} \frac{\mathrm{d}}{\mathrm{d}s} (\gamma \circ \sigma)(s),$

where

(11) $\gamma: S^1 \to \mathcal{M}$

and

(12) $\sigma: [0, 2\pi] \to S^1.$

In other words, at each point of the loop draw a vector tangent to the loop. If the loop is a figure “8”, then at the intersection point, you will have two distinct tangent vectors.
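
Here is that figure-eight example in numbers (a minimal Python sketch with my own choice of parameterization, just to make the multi-valuedness concrete):

```python
import numpy as np

# The tangent field K along a figure eight gamma(s) = (sin 2s, sin s).
# The loop passes through the origin at s = 0 and s = pi, and the two
# passes carry two different tangent vectors, so K is multi-valued as
# seen from target space.

s = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
gamma = np.stack([np.sin(2 * s), np.sin(s)], axis=1)
K = np.stack([2 * np.cos(2 * s), np.cos(s)], axis=1)   # d(gamma)/ds

for i in (0, 4):            # s = 0 and s = pi both sit at the origin
    print(gamma[i], K[i])   # same point, tangents (2, 1) vs. (2, -1)
```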

I hope this helps clarify what I mean when I write

(13) $LU|_\gamma = \bigoplus_{p \in S^1} U|_{\gamma(p)}.$

It seems I should also clarify another important point. When I write $S^1$, I mean the manifold, i.e. it is parameterizable, but not yet parameterized. A parameterization of $S^1$ would be a map

(14) $\sigma: [0, 2\pi] \to S^1.$

I have tried to make this explicit throughout. Hence, I am really justified in saying that

(15) $LU|_\gamma = \bigoplus_{p \in S^1} U|_{\gamma(p)}$

is parameterization and coordinate independent.

Ok. So far I feel like I’ve exerted the energy equivalent of a 15K run (and have spent about the same amount of time :)). Let’s see how much further I can go :)

For the time being, let’s not sum over repeated indices, i.e. I will write all summations explicitly.

The tangent vector

(16) $\frac{\partial}{\partial X^\mu(\gamma(p))}\Big|_\gamma = \frac{\partial}{\partial X^\mu}\Big|_{\gamma(p)}$

is a basis vector for $T_\gamma(\mathcal{L}\mathcal{M})$, where $p \in S^1$. There is obviously a continuum of these, so the tangent space at a point in loop space is infinite dimensional (stating the obvious). However, this infinity is split up into a countable part and an uncountable part. In particular, we have

(17) $U(\gamma(p))|_\gamma = \sum_{\mu=1}^n U^\mu(\gamma(p))\, \frac{\partial}{\partial X^\mu(\gamma(p))}\Big|_\gamma.$

In general, there are uncountably many such tangent vectors, one for each $p \in S^1$. The crucial question now is,

How are we going to combine this continuum of tangent vectors into a single tangent vector?

Unless I am mistaken, I think you would suggest that we introduce a parameterization $\sigma: [0, 2\pi] \to S^1$ and integrate them, i.e.

(18) $U_\sigma|_\gamma = \int_0^{2\pi} \mathrm{d}\sigma\; U(\gamma \circ \sigma)|_{\gamma \circ \sigma}.$

However, before we can integrate, we need to choose a measure on the loop. It is obvious that the choice you have been suggesting is parameterization specific. This is why I included the subscript $\sigma$ above. My first suggestion was to choose the measure vol obtained by pulling back the metric tensor from target space to the loop. I think this would be an improvement over the blatantly parameterization specific construction $U_\sigma|_\gamma$, but it still has some drawbacks, e.g. it requires that there be a metric on target space. I think the correct construction should allow us to study solely topological properties, so this is out for the time being.

I am now suggesting that maybe we should consider an alternative. Instead of integrating, we take a continuum direct sum

(19) $U|_\gamma = \bigoplus_{p \in S^1} U(\gamma(p))|_\gamma.$

This does not force us to choose an ad hoc measure and is clearly parameterization and coordinate independent by construction.

Although I think it would be neat if we could treat $\sigma$ as simply a continuum index and compute away, it is probably wise to think about whether this is really justified. My gut tells me that these continuum indices are manifestly different from their countable counterparts and require a little more care than we have been providing so far.

Phew! 20K! I need some Gatorade (not to mention Percocet!!) :)

Best wishes,
Eric

Posted by: Eric on July 27, 2004 10:00 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

have we agreed on whether we are talking about parameterized or unparameterized loop space? I for one mean, and always have meant, parameterized loop space.

I am still a little confused about what you write, for instance because you have $X$ and $\gamma$ appearing together in your formulas as if these were two different things. Maybe by $\gamma$ you mean the unparameterized loop? At least on parameterized loop space $X: (0, 2\pi) \to \mathcal{M}$ is the loop.

But I feel the discussion would benefit from some actual calculations.

Consider two vector fields $K$ and $V$ on parameterized loop space with components $K^{(\mu,\sigma)}(X) = X'^\mu(\sigma)$ and $V^{(\mu,\sigma)}(X) = \cos\big(X^\lambda(\sigma)\, X^\kappa(\sigma)\, \eta_{\lambda\kappa}\big)\, e^{in\sigma}$.

What is their Lie bracket?

You probably know how I would calculate that, but I would like to see you calculate it in your notation. I am hoping that seeing your notation in action will clarify it for me. (Of course I simply made up these vectors in order to have an example. Pick any other not-too-trivial vector fields if you like.)

Posted by: Urs Schreiber on July 28, 2004 10:41 AM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Good morning :)

have we agreed if we are talking about parameterized or unparameterized loop space? I for one mean and always have meant parameterized loop space.

Too bad :) Did you at least draw the pictures I asked you to? :) After the pain I went through typing it, I hope you could at least do that! :)

For the record, let me state once and for all that I know you are dealing with a parameter-dependent loop space (PLS). I will refer to it as “parameter dependent” instead of “parameterized” because the latter does not make it as obvious what a nasty space it really is :)

I am trying to do a few things simultaneously. The first is to understand the parameter dependent machinery you are working with. I think I am forming a pretty good idea about that. Another thing I am attempting to do is to demonstrate that there is an alternative parameter independent machinery that can do the same thing you want to do, only more naturally. With that said, please understand that I am not claiming that there is anything mathematically incorrect in your papers (nor in any of the other papers you refer to that deal with parameter dependent loop space). Rather, I am suggesting that there is a better way to get the same job done.

Here is the way I’m thinking about it. Take ordinary differential geometry (ODG) on manifolds, for example. There, we can pretty much demonstrate all the machinery in pictures because ODG is coordinate independent by construction. However, let’s forget that we know about diffeomorphisms and convince ourselves that we must describe a point as an n-tuple of numbers. Call the resulting coordinate dependent space “parameter-dependent point space” (PPS). Before long we would discover that the same point in some manifold actually occupies many different points in PPS. If the original manifold was n-dimensional, then PPS could be thought of as $\mathbb{R}^N$ for $N > n$. We could obviously define calculus on $\mathbb{R}^N$, but we would find that many things we compute will depend on coordinates, which shouldn’t come as a surprise. This sad situation is cured one day when someone discovers that we can get coordinate independent results by considering the kernel of some operator $K$. We are saved! :)

No. Not really because then some guy named Riemann comes along and demonstrates that we could have built up a theory called ODG that is independent of coordinates by construction so that we no longer need to look at kernels of operators to get coordinate independent results.

This is precisely the same situation I see when I look at the loop space literature you refer to.

I’m obviously not qualified to play the role of Riemann for loop space. The best I can do is stand up on the highest mountain I can find and scream, “We need a Riemann to fix this!” Something like a mathematical physics version of the “Bat Signal.” :)

Let me refer to the, yet to be defined, differential geometry of unparameterized loop space as OLS, in contrast to PLS. The situation with OLS and PLS might not be as obviously severe as that between ODG and PPS, because the latter is finite dimensional so it is clear that $N > n$. With the former, it might not be so obvious because both PLS and OLS are infinite dimensional.

To summarize once again the answer to your question, I understand that you are dealing with PLS while I am simultaneously trying to learn PLS and develop OLS.

I am still a little confused about what you write, for instance because you have $X$ and $\gamma$ appearing together in your formulas as if these were two different things. Maybe by $\gamma$ you mean the unparameterized loop? At least on parameterized loop space $X: (0, 2\pi) \to \mathcal{M}$ is the loop.

I am getting tired and nauseous from pain medication, so let me just quote my earlier remark

It seems I should also clarify another important point. When I write $S^1$, I mean the manifold, i.e. it is parameterizable, but not yet parameterized. A parameterization of $S^1$ would be a map

(1) $\sigma: [0, 2\pi] \to S^1.$

I have tried to make this explicit throughout. Hence, I am really justified in saying that

(2) $LU|_\gamma = \bigoplus_{p \in S^1} U|_{\gamma(p)}$

is parameterization and coordinate independent.

Therefore, since $S^1$ is unparameterized, the loop

(3) $\gamma: S^1 \to \mathcal{M}$

is unparameterized. To get a parameterized map we need to compose with a parameterization, so that

(4) $X = \gamma \circ \sigma: [0, 2\pi] \to \mathcal{M}.$

This is pretty much what is normally done in ODG.

What is their Lie bracket?

I will do this later. I need to lie down now :)

Eric

Posted by: Eric on July 28, 2004 1:59 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I will do this later.

No need. It applies only to parameterized loop space (which, I would like to emphasize, should be read as ‘[parameterized loop] space’).

The vectors on the space of unparameterized loops are indeed just vector fields on these loops in target space, at least as long as there are no self-intersections (which could be subtle to deal with). I was all along not fully aware that this was all you meant to say. Sorry for that.

Now that we have clarified this (hopefully :-), I can’t refuse to add a little ‘philosophical’ comment myself: It is not always good to divide out all redundancies in physical theories. Gauge theory is elegant when the redundancies are kept and becomes awkward when a gauge is fixed or when group averaging is performed. The gauge freedom in the parameterization of the string is closely related to the gauge freedom of the target space fields it represents.

Posted by: Urs Schreiber on July 28, 2004 6:58 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

I will do this later.

No need.

Phew :) I just got back from the orthopedist. It looks like my wrist is not in good shape. I’ll need a minor surgery in the morning. Let’s see how long I can hold out here before passing out again :)

It applies only to parameterized loop space (which, I would like to emphasize, should be read as ‘[parameterized loop] space’).

I should be able to do this, or something like it, for unparameterized loop space (ULS). With my first thoughts, I didn’t get very far because it is not obvious how to define smooth vector fields on loop space, be it PLS or ULS. I have to admit I didn’t get much further than my “first thoughts” yet :)

It is not always good to divide out all redundancies in physical theories. Gauge theory is elegant when the redundancies are kept and becomes awkward when a gauge is fixed or when group averaging is performed. The gauge freedom in the parameterization of the string is closely related to the gauge freedom of the target space fields it represents.

I understand this and agree 100%. However, I think it is also possible to take this idea too far. My example of “[parameterized point] space” is an extreme case, where too much redundancy was introduced. I may soon realize that having this redundancy for “[parameterized loop] space” is not overkill, but I am not at that point yet. It still seems excessive to me.

Now, I need to pass out once again. Hopefully, I can continue work in my dreams.

Good night for now! :)

Eric

Posted by: Eric on July 28, 2004 8:22 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I wish you all the best and that your wrist will heal soon.

Maybe we should postpone this discussion until you feel better. I’ll be on holidays the next two weeks, anyway.

I haven’t thought much about unparameterized loop space, because I have little use for it in the contexts that interest me. On parameterized loop space it is easy to get the vector fields. But on unparameterized loop space things seem to become really complicated.

Single self-intersections are only the easiest aspect of the problem, which one can probably deal with in the naive way.

But it seems to become really intricate as soon as we have degenerate loops.

For instance consider the constant loop, which occupies just a single point in target space. It is not clear at all how to define a vector on loop space at this point in loop space.

Also, it is hard to deal with loops that are supposed to wrap around themselves several times.

But never mind for the moment. Your health is more important than loop space - even unparameterized loop space!

Posted by: Urs Schreiber on July 29, 2004 12:27 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs,

I’ll miss the discussions while you are on vacation, but I guess the timing is optimal in the sense that I can’t really write very well for the next several weeks anyway :) You and your girlfriend deserve the break, so have a great time! :)

In the meantime, I’ll keep thinking about this stuff. Maybe I will have something useful to say, but regardless, I think it is good (for me at least) to think about these things carefully. My copy of Zwiebach’s book arrived and I’ve been reading it when I get enough energy. It has given me several ideas about parameterizations.

I am still pretty weak, but wanted to say a couple of things before signing off…

I haven’t thought much about unparameterized loop space, because I have little use for it in the contexts that interest me.

I don’t understand how formulating a parameterization independent loop space differential geometry would not be interesting to you. I understand that there is already in place a set of tools that are parameterization dependent and that you can use these tools for calculating things of interest, e.g. expectation values, but I don’t understand why finding a better(?) way to do things would not be interesting.

On parameterized loop space it is easy to get the vector fields. But on unparameterized loop space things seem to become really complicated.

At the moment, I don’t see why it would be any simpler to get vector fields on parameterized loop space.

But it seems to become really intricate as soon as we have degenerate loops.

Naively, I don’t see why completely degenerate loops would be any more difficult than simple self-intersecting loops.

Ok. One technical remark before lying down again :)

When thinking of unparameterized (but parameterizable) loop space, I’m finding this idea of viewing things as continuum direct sums on target space to be helpful. Once you start doing this, some intermediate concepts arise naturally, which can probably be referred to as “p-loop forms” and “p-loop vector fields.” Both are defined on target space. Let

(1) $\gamma^p: T^p \to \mathcal{M},$

denote a p-loop, where

(2) $T^p = \underbrace{S^1 \times \cdots \times S^1}_{p\ \text{times}}$

is a p-torus. For a point $r \in \mathcal{M}$, we can denote 0-loops via

(3) $\gamma^0_r: T^0 \to \mathcal{M}$

with

(4) $\gamma^0_r(T^0) = r.$

A p-loop q-form at a loop $\gamma^p$ may then be defined as

(5) $L^p \alpha|_{\gamma^p} = \bigoplus_{r \in T^p} \alpha|_{\gamma^p(r)},$

where $\alpha|_{\gamma^p(r)}$ is a q-covector at the point $\gamma^p(r)$. For 1-loop q-forms we can drop the indices and write simply

(6) $L\alpha|_\gamma = \bigoplus_{p \in S^1} \alpha|_{\gamma(p)},$

as I had written in a previous post.

A p-loop vector field can be defined similarly, where the 1-loop vector field at a loop γ is given by

(7) $LU|_\gamma = \bigoplus_{p \in S^1} U|_{\gamma(p)}.$

In general, p-loop forms and p-loop vector fields will be multi-valued on target space due to the possibility of self-intersecting and degenerate loops. An exception would be for 0-loop forms and 0-loop vector fields, in which case we have

(8) $L^0 \alpha \simeq \alpha$

and

(9) $L^0 U \simeq U.$

In particular, consider a 1-loop 0-form

(10) $L\phi|_\gamma = \bigoplus_{p \in S^1} \phi|_{\gamma(p)}.$

For each point $p \in S^1$ we assign a value at $\gamma(p)$. If $\gamma$ is self-intersecting or degenerate in any way, we may get multiple values assigned to the same point in target space.

If we denote the space of p-loop q-forms on $\mathcal{M}$ by $\Omega^{(p,q)}(\mathcal{M})$, we will want a map

(11) $\mathrm{tr}: \Omega^{(p,q)}(\mathcal{M}) \to \Omega^q(\mathcal{L}^p \mathcal{M}),$

i.e. a p-loop q-form on target space gives rise, via tr, to a q-form on (unparameterized) p-loop space.

To understand the importance of the tr map, consider a 1-loop 0-form $\phi$. If this is to make sense, we should have

(12) $\int_\gamma \mathrm{tr}(\phi) = \mathrm{tr}(\phi)|_\gamma.$

I want to demand that tr commutes with the evaluation map, so that

(13) $\mathrm{tr}(\phi)|_\gamma = \mathrm{tr}(L\phi|_\gamma) \in \mathbb{R}.$

Since $L\phi|_\gamma$ is essentially a string of values assigned to each point on $\gamma$, it is clear that tr obtains a single value from the string of values. The obvious interpretation for tr is now clear: it is essentially integration along the loop.

I really hope that I can convince you of the importance of this issue because I think it transcends whether we are dealing with parameterized or unparameterized loops. In both cases, I believe this tr map plays a crucial role.

If that is the case, which I really do think it is, then we should really think about what it means to integrate along the loop. In order to integrate along the loop, we must first choose a measure. I believe the choice of measure we make is very important. Even if we are dealing with parameterized loop space, I think there is a case to be made for choosing a measure that is parameterization independent.

I have more to say, but I need to lie down. I hope I can convince you to meditate about this for at least ten minutes. I think you will quickly see what I am talking about even if I haven’t successfully communicated my ideas yet.

Take care!
Eric

Posted by: Eric on July 31, 2004 2:01 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

here is a very quick response:

Parameterized loop space is important because most string states are not reparameterization (‘rep’) invariant. Only their expectation values are. The only rep invariant states are the so-called boundary states. So when you restrict to unparameterized loop space you can only deal with boundary states, not with the usual string states. But even in that case it is much easier to use the parameterized loops and project onto functions which take the same value on equivalence classes of loops.

On parameterized loop space it is very simple to get the vector fields, because they are simply linear combinations of the holonomic basis vectors $\frac{\delta}{\delta X^\mu(\sigma)}$. As I have said before, all you have to do is to think of $\sigma$ as an extra index and then everything goes through as usual.

I am currently writing on a machine without MathML support, so I cannot read and comment on the formulas you give.

Posted by: Urs on July 31, 2004 3:26 PM | Permalink | Reply to this

Re: Still Trying

Hi Urs :)

It’s good to hear from you :) I will go ahead and reply to this and I may even write several subsequent posts, but let me just say once again that I hope you have a great trip and I will be patiently awaiting your return. Even when you return, I know you will have a lot of things to catch up on and this will be here for whenever you settle down. No matter how long it takes. In fact, I will completely understand if you never get around to responding. Sometimes I learn a lot already just by asking the question :) The last thing I want to do is be a gnat in your ear :)

Parameterized loop space is important because most string states are not rep invariant. Only their expectation values are.

Hmm… I don’t understand this statement. I am thinking that if the expectation values you compute are all independent of parameterization, then you should be able to find representations which are independent of parameterization as well. Is that obviously incorrect somehow? You seem to be responding, although understandably due to shortage of time, in broad strokes, where you seem to suggest that everything about loop space, parameterized or not, is already known. For example, here

The only rep invariant states are the so-called boundary states. So when you restrict to unparameterized loop space you can only deal with boundary states, not with the usual string states.

it seems like you are telling me that even if I successfully develop (rediscover?) a full working framework of unparameterized (but parameterizable) loop space differential geometry, I will only be able to capture a small subset of relevant string states, i.e. the so-called boundary states. I could be wrong, but this sounds a little premature given that, as far as I know, such a working loop space differential geometry does not exist :)

Maybe I should make my reservations a little more blatant so that perhaps you can understand my motivation a little better. On occasion, I get the impression that you are actually trying to study string theory. I don’t know what gives me this impression :) This implies a certain reverence, i.e. you seem to have some belief that string theory is somehow correct as is. If it isn’t already obvious, I do not share this reverence. Not at all. In fact, I do not hesitate to think (I actually assume it) that string theory needs to be improved.

That is not to say that I think string theory doesn’t have anything interesting to say, because it obviously has a lot of interesting things to say. If I weren’t interested, I wouldn’t spend so much time and money trying to learn it! :)

The most basic and profound thing I’ve learned from string theory is that extended objects, e.g. strings, branes, etc., are important :) On the other hand, if you start out with the notion that

Strings are fundamental

I don’t think there is only one road this principle leads down. So basically I am starting with this notion and seeing where it takes me. As far as I’m concerned, if it leads me to string theory as we know it, then that is good for string theory, but it isn’t really a concern of mine whether it does or not. I only care that I uncover something sensible.

So in my personal journey, with a lot of help from you, I have come to expect that the correct formulation of loop space differential geometry will have something important to say for a theory of strings (and other extended objects). To be blunt again, I have reservations about your formulation of loop space differential geometry. You seem to be reassured by the fact that your formulation corresponds to well-known objects in string theory. Rather than take this as reassurance that you are doing something right, my reservations about parameter-dependent loop space differential geometry suggest to me that the corresponding objects in string theory may be problematic. Imagine that! :)

To be a little more specific, I think the measure you have chosen in order to change a summation over indices to an integral is bogus. Please correct me if you think I’m wrong, but I don’t see any way that

(1) $\langle \alpha, U \rangle|_\gamma = \alpha_{(\mu,\sigma)}\, U^{(\mu,\sigma)} = \frac{1}{2\pi} \int_0^{2\pi} \mathrm{d}\sigma\; \alpha_\mu(\sigma)\, U^\mu(\sigma)$

can be correct. Sorry! :) Using a notation more similar to Zwiebach’s, an acceptable definition would be

(2) $\langle \alpha, U \rangle|_\gamma = \int_0^a \mathrm{d}s\; \alpha_\mu(s)\, U^\mu(s) = \int_0^{2\pi} \mathrm{d}\sigma\, \frac{\partial s}{\partial \sigma}\, \alpha_\mu(\sigma)\, U^\mu(\sigma),$

where $s$ parameterizes length (energy) along the string and $\alpha_\mu(\sigma)$ is short for $\alpha_\mu(s(\sigma))$. This has the, perhaps undesirable, effect that I discussed before that the evaluation vanishes as the loop shrinks to zero. If this really is undesirable (I’m not sure it is), we can consider the alternative

(3) $\langle \alpha, U \rangle|_\gamma = \frac{1}{a} \int_0^a \mathrm{d}s\; \alpha_\mu(s)\, U^\mu(s) = \frac{1}{a} \int_0^{2\pi} \mathrm{d}\sigma\, \frac{\partial s}{\partial \sigma}\, \alpha_\mu(\sigma)\, U^\mu(\sigma).$

This version would coincide with yours as the loop shrinks to zero, but would differ for finite loops. It would also agree if $s = a\sigma/2\pi$.
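
To make the difference visible - a small Python sketch (my own toy pairing and reparameterization) comparing the naive $\mathrm{d}\sigma/2\pi$ measure with the arc-length measure $\mathrm{d}s = |X'(\sigma)|\,\mathrm{d}\sigma$ on one and the same loop:

```python
import numpy as np

# Evaluate a pairing on one and the same unit circle, parameterized two
# ways. The naive dsigma/2pi measure changes under reparameterization;
# the arc-length measure ds = |X'| dsigma does not.

def pairing(reparam, measure, n=4000):
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    s = reparam(t)                                   # sigma -> f(sigma)
    X = np.stack([np.cos(s), np.sin(s)], axis=1)     # same unit circle
    speed = np.linalg.norm(np.gradient(X, t, axis=0), axis=1)
    alpha_U = X[:, 0] ** 2     # a stand-in for alpha_mu U^mu at X(sigma)
    w = speed if measure == 'arclength' else np.full(n, 1.0 / (2 * np.pi))
    return np.sum(alpha_U * w) * (t[1] - t[0])

identity = lambda t: t
squeezed = lambda t: t + 0.5 * np.sin(t)   # a genuine reparameterization

for m in ('naive', 'arclength'):
    print(m, pairing(identity, m), pairing(squeezed, m))
    # 'naive' gives two different numbers; 'arclength' gives pi both times
```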

This has a kind of neat QM interpretation as an expectation value of an operator over a string vacuum state $1_\gamma$. We can define

(4) $\alpha|_\gamma = \alpha\, 1_\gamma$

and

(5) $(i_U \alpha)|_\gamma = U\alpha|_\gamma = U\alpha\, 1_\gamma$

so that

(6) $\langle U\alpha \rangle|_\gamma = \frac{\langle 1_\gamma, U\alpha\, 1_\gamma \rangle}{\langle 1_\gamma, 1_\gamma \rangle}.$

Now we have

(7) $\langle 1_\gamma, U\alpha\, 1_\gamma \rangle = \int_0^a \mathrm{d}s\; \alpha_\mu(s)\, U^\mu(s)$

and

(8) $\langle 1_\gamma, 1_\gamma \rangle = \int_0^a \mathrm{d}s = a$

so that

(9) $\langle U\alpha \rangle|_\gamma = \langle \alpha, U \rangle|_\gamma = \frac{1}{a} \int_0^a \mathrm{d}s\; \alpha_\mu(s)\, U^\mu(s).$

This is not completely uninteresting :)

Gotta run for now.

Best wishes,
Eric

Posted by: Eric on July 31, 2004 7:39 PM | Permalink | Reply to this

Re: Still Trying

Please correct me if you think I’m wrong, but I don’t see any way that

(1) $\langle \alpha, U \rangle|_\gamma = \alpha_{(\mu,\sigma)}\, U^{(\mu,\sigma)} = \frac{1}{2\pi} \int_0^{2\pi} \mathrm{d}\sigma\; \alpha_\mu(\sigma)\, U^\mu(\sigma)$

can be correct.

Perhaps another way to say this more succinctly is that if $X'^\mu(\sigma')$ is a different coordinate patch with a different parameterization, we should demand

(2) $\langle \alpha, U \rangle|_\gamma = \alpha_{(\mu,\sigma)}\, U^{(\mu,\sigma)} = \alpha'_{(\mu,\sigma')}\, U'^{(\mu,\sigma')}.$

This requirement seems obvious and unavoidable to me. Satisfying it requires a better choice of measure when writing a summation over indices as an integral. The length-based (energy-based) measure clearly satisfies this criterion.

Eric

Posted by: Eric on July 31, 2004 8:45 PM | Permalink | Reply to this

Re: Still Trying

On parameterized loop space it is very simple to get the vector fields, because they are simply linear combinations of the holonomic basis vectors $\frac{\delta}{\delta X^\mu(\sigma)}$.

I hope that you will agree that this is nothing special about parameterized loop space. I can do precisely the same thing with unparameterized (but parameterizable) loop space. The only difference as far as I can see is that when you construct the proper loop space you can write down

(1) $U = U^{(\mu,\sigma)}\, \partial_{(\mu,\sigma)} = U'^{(\mu,\sigma')}\, \partial'_{(\mu,\sigma')},$

where $X^\mu(\sigma)$ and $X'^\mu(\sigma')$ are different parameterizations of the same loop.

As I have said before, all you have to do is to think of sigma as an extra index and then everything goes through as usual.

Right. I understand this, but again it is nothing special about parameterization dependent loops. They only need to be parameterizable. If you choose the correct measure for changing the summation to integration, everything works out in a parameterization independent manner. I hope that if I keep saying the same thing enough times in different ways that I will eventually make sense :)

Eric

Posted by: Eric on July 31, 2004 11:56 PM | Permalink | Reply to this

Re: Still Trying

Hi Urs,

After rereading your post, I see something you will probably disagree with in my response. Hopefully this will clarify things. You said

For instance $\int_0^{2\pi} \mathrm{d}\sigma\, e^{in\sigma}\, \frac{\delta}{\delta X^1(\sigma)}$ is a vector on (parameterized) loop space which does not come from a target space vector the way you indicated.

First of all, I would suggest rewriting this as

(1) $\bigoplus_{s \in [0, 2\pi]} e^{ins}\, \frac{\delta}{\delta X^\mu}\Big|_{\gamma \circ \sigma(s)},$

where

(2) $\sigma: [0, 2\pi] \to S^1$

and

(3) $\gamma: S^1 \to \mathcal{M}.$

This might not be 100% correct either, but the point is you need to stop using that “$\int$” symbol to mean a continuum direct sum. In an integral you do not remember the point you summed over. In a direct sum, you do.

Second of all, I’m willing to bet that you don’t really care about this tangent vector on loop space anyway. What you probably care about is

(4) $\bigoplus_{s \in [0, 2\pi]} e^{ins}\, \frac{\mathrm{d}X^\mu}{\mathrm{d}s}\, \frac{\delta}{\delta X^\mu}\Big|_{\gamma \circ \sigma(s)},$

which, I believe, can be rewritten as

(5) $\bigoplus_{s \in [0, 2\pi]} e^{ins}\, \partial_s\big|_{\gamma \circ \sigma(s)}.$

This tangent vector obviously has a target space representation as the bunch of tangent vectors tangent to the loop at each point in target space weighted by a Fourier mode.

Regardless of whether the above tangent vector on loop space is equivalent to the counter example you gave, I suspect that the tangent vector

(6) $\bigoplus_{s \in [0, 2\pi]} e^{ins}\, \partial_s\big|_{\gamma \circ \sigma(s)}$

does what you want it to do, i.e. computes Fourier modes of functions defined on loops. Or something like that :)

Ok. Now you can let me have it. I’m ready for brutal enlightenment :)

Best wishes,
Eric

Posted by: Eric on July 26, 2004 3:55 PM | Permalink | Reply to this

Re: Still Trying

I admit I am not being careful about distinguishing between $\delta$, $\partial$, and $\mathrm{d}$, but I think what I wanted to write was

(1) $\bigoplus_{s \in [0, 2\pi]} e^{ins}\, \frac{\mathrm{d}}{\mathrm{d}s}\Big|_{\gamma \circ \sigma(s)}.$

Eric

Posted by: Eric on July 26, 2004 4:01 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I am not a notation purist and will be content if we find some way to communicate abstract ideas - but… :-)

First but: A vector is not a direct sum, is it? It is an element of a vector space, and if that has a basis it is a linear combination of the basis elements of that vector space. Here the basis elements are $\frac{\delta}{\delta X^\mu(\sigma)}$, and when I write a linear combination of these I do want to write an integral, so that the result really is a nice derivation on the algebra of functions over loop space, as it should be.

Second but: I do need modes of vector fields other than the rep Killing vector. I do not understand why you want to ignore most of the vector fields that there are on loop space.

Let me emphasize again that I think the best way to think about loop space differential geometry is to put the spacetime index $\mu$ together with the parameter $\sigma$ into a single index $I = (\mu, \sigma)$ and then proceed precisely as in finitely many dimensions. If you accept this prescription it automatically answers all the questions that we are currently discussing. If you do not accept it, then let me know why! :-)
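
Spelled out for, say, the Lie bracket of two vector fields, the prescription reads (schematically, with the repeated multi-index implying a sum over $\nu$ and an integral over $\rho$):

```latex
[U,V]^{(\mu,\sigma)}
  = U^{(\nu,\rho)}\,\partial_{(\nu,\rho)} V^{(\mu,\sigma)}
  - V^{(\nu,\rho)}\,\partial_{(\nu,\rho)} U^{(\mu,\sigma)}
  = \int_0^{2\pi} \mathrm{d}\rho\,\Big(
        U^{\nu}(\rho)\,\frac{\delta V^{\mu}(\sigma)}{\delta X^{\nu}(\rho)}
      - V^{\nu}(\rho)\,\frac{\delta U^{\mu}(\sigma)}{\delta X^{\nu}(\rho)}
    \Big)
```

This is just the finite-dimensional component formula for the bracket with $I = (\mu, \sigma)$.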

Posted by: Urs Schreiber on July 26, 2004 4:50 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs :)

First but: A vector is not a direct sum, is it? It is an element of a vector space and if that has a basis it is a linear combination of the basis elements of that vector space.

A direct sum can be a vector and in what I described it is a vector. Really all we need to check is that we have well defined scalar multiplication and addition. For scalar multiplication we have

(1) $k(LU|_\gamma) = \bigoplus_{p \in S^1} k(U|_{\gamma(p)}).$

Addition is similarly obvious

(2) $LU|_\gamma + LV|_\gamma = \bigoplus_{p \in S^1} (U|_{\gamma(p)} + V|_{\gamma(p)}).$

This seems to be a well defined vector space to me. For each point on the loop we have a different vector space and the vector space structure on loops inherits from that on target space.

Here the basis elements are δδX μ(σ) and when I write a linear combination of these I do want to write an integral, so that the result really is a nice derivation on the algebra of fucntions over loop space, as it should be.

I would define a function Lf at a point γ in loop space as a direct sum of functions f at points along a loop in target space. Again, the function can be multi-valued as seen from target space, i.e.

(3) $Lf(\gamma) = \bigoplus_{p \in S^1} f(\gamma(p)).$

With this definition, my tangent vectors on loop space are certainly derivations of functions on loop space, which is directly inherited from target space, i.e.

(4) $LU(Lf\, Lg)|_\gamma = LU(Lf)|_\gamma\, Lg(\gamma) + Lf(\gamma)\, LU(Lg)|_\gamma.$

Before I proceed, please remember that I am just trying to understand this stuff and not trying to cause trouble :) In fact, this whole thing is just my way of attempting to rewrite what you have already done. From what you say, I suspect that I am further away from understanding than I thought I was. Nonetheless, I don’t think the construction I am building is vacuous and uninteresting. In fact, it may end up being equivalent once we start computing expectation values, which is my goal. So I appreciate your patience even though it may not be that obvious why I’m bothering with this :)

Second but: I do need modes of vector fields other than the rep Killing vector. I do not understand why you want to ignore most of the vector fields that there are on loop space.

Actually, I wouldn’t say that I necessarily want to ignore them. Not yet anyway. I may decide to ignore them later if there appears a reason to do so. It might be possible that this tangent vector you say you need can be replaced by

(5) $\bigoplus_{s \in [0, 2\pi]} e^{ins}\, \frac{\partial}{\partial X^\mu}\Big|_{\gamma \circ \sigma(s)}.$

Granted, $Lf$ is not what one might usually think of as a 0-form on loop space, because it results in a string of scalar numbers when evaluated at a point $\gamma$ in loop space, but we can probably correct this if necessary by adding some measure to the loop and integrating the numbers to a single value. Something like a trace around the loop. Using notation similar to our notes, this would be something like

(6) $\langle 1, Lf(\gamma) \rangle.$

Before leaving, let me also say that everything I am trying to do here is motivated by a sentence in your deformation paper

The tangent space $T_X \mathcal{L}\mathcal{M}$ of $\mathcal{L}\mathcal{M}$ at a loop $X: S^1 \to \mathcal{M}$ is the space of vector fields along that loop.

My equation

(7) $LU|_\gamma = \bigoplus_{p \in S^1} U|_{\gamma(p)}$

is nothing more than my attempt to write down what I thought “vector fields along a loop” should look like.

Eric

Posted by: Eric on July 26, 2004 8:07 PM | Permalink | Reply to this

Parameterized vs Unparameterized Loops

Hi Urs,

I have started writing up some notes on loop space differential geometry. I think you will like them very much. Especially all the figures :)

I am very happy to announce that I have finally come to a clear understanding about something that has been causing us frustrating communication problems :)

The key to understanding one another is to understand these essential points:

What I have been calling an “unparameterized loop” is a map

(1) $\gamma: S^1 \to \mathcal{M},$

where $S^1$ is to be thought of as an intrinsic smooth manifold, not necessarily embedded in $\mathbb{R}^2$. In particular, there is no predefined parameterization of $S^1$. Conceptually, I think of $S^1$ as something like a loop of thread that can be tossed around and smoothly deformed without changing its nature, i.e. it is just a bunch of points making up a closed loop. I certainly do not think of $S^1$ as having a pre-existing parameterization. It is parameterizable, not parameterized. This is a crucial point and is why I refer to a map

(2) $\gamma: S^1 \to \mathcal{M}$

as an unparameterized loop. Aside from some manageable details, we can define a parameterization of $S^1$ as a diffeomorphism

(3) $\sigma: \mathbb{R}/2\pi \to S^1.$

The loops you are dealing with are what I would consider to be a composition of a parameterization of $S^1$ followed by an unparameterized loop, i.e.

(4) $X = \gamma \circ \sigma: \mathbb{R}/2\pi \to \mathcal{M}.$

I am pretty sure that you would agree with everything I’ve said up to this point. But here is something crucial that I think we’ve both misunderstood (or I haven’t explained myself clearly enough yet) about my version of unparameterized loop space so far. Given two distinct unparameterized loops

(5) $\gamma, \gamma': S^1 \to \mathcal{M}$

we could still have their images coincide in target space, i.e.

(6) $\gamma(S^1) = \gamma'(S^1)$

even though we may have

(7) $\gamma(p) \neq \gamma'(p)$

for all $p \in S^1$. Because of this, I can still define vector fields $K$ corresponding to your Killing vectors in my unparameterized loop space. In fact, I was puzzled by why you thought I couldn’t define $K$ on my loop space. The confusion is most definitely my fault because I never gave you a precise definition of my loop space (then again, you never gave me a precise definition of yours either! :)). Worse, I probably gave you conflicting information as my ideas were evolving. That is the cost of brainstorming in the open like this, but I think the reward far outweighs the cost, so I will continue providing half-baked ideas here. When an idea begins to solidify, we can write things up more formally. That is what I’m doing now. I’m pretty sure you will like the result (I hope!) :)

Best wishes,
Eric

PS: Here is a parting thought…

If

(8) $\gamma, \gamma': S^1 \to \mathcal{M}$

are distinct unparameterized loops and

(9) $\sigma, \sigma': \mathbb{R}/2\pi \to S^1$

are distinct parameterizations of S 1 , then you might still end up with

(10) $X = \gamma \circ \sigma = \gamma' \circ \sigma'.$

In other words, the way that $\gamma$ differs from $\gamma'$ might be compensated by the way $\sigma$ differs from $\sigma'$. This subtlety may complicate a comparison of our frameworks.

Posted by: Eric on August 2, 2004 3:05 PM | Permalink | Reply to this

Re: Parameterized vs Unparameterized Loops

Hi Urs,

Last night, I reread all 92 posts in this thread so far :) Now that I am writing up these semi-formal notes, things are becoming more and more clear.

Just for the record, I see that when you have been talking about unparameterized loop space, you are often talking about loop space where all points in loop space that have the same image in target space are identified. At one point, I suggested this idea for “loop space of loop space.” For terminology, let me refer to this as “reduced loop space”, i.e. in reduced loop space, two points are identified if they have the same image in target space.

Let me just clarify that there is an intermediate loop space between your space of parameterized loops and that of reduced loop space. That is the space of all “loops”, where I define a loop as a smooth map

(1) $\gamma: S^1 \to \mathcal{M}$

and we are to think of $S^1$ intrinsically as a manifold without a predefined parameterization. On the other hand, a parameterized loop is a smooth map

(2) $X: \mathbb{R}/2\pi \to \mathcal{M}.$

Therefore, it seems there are three versions of loop space we can deal with:

1.) Loop Space
2.) Parameterized Loop Space
3.) Reduced Loop Space

Since a loop γ : S^1 → ℳ and a parameterized loop X : ℝ/2π → ℳ are related by a diffeomorphism

(3) Σ : ℝ/2π → S^1,

I am beginning to think that loop space and parameterized loop space are actually equivalent :) If that is the case, then I propose we work with loop space, i.e. loops

(4) γ : S^1 → ℳ.

This is the approach I am taking in our notes, and as I work out the details, it seems I am getting essentially the same kind of machinery that you've been working with all along, but the parameterization-free formulation seems a bit more natural. Everything that you discuss is still perfectly fine; you would just preface everything by saying something like "in a coordinate chart and for a particular choice of parameterization, we have…"

That’s all for now…

Eric

Posted by: Eric on August 3, 2004 4:17 PM | Permalink | Reply to this

Loop Space Differential Geometry

It’s a start :)

Loop Space Differential Geometry
Urs Schreiber and Eric Forgy

Abstract: These are informal notes on loop-space methods resulting from a series of discussions between the authors.

1. Loop Space

Coordinates are evil.

One of the most ironic things a student encounters when first learning differential geometry is the amazing effort that is made in defining all the geometrical gadgets, e.g. manifolds and tensor fields, in terms of coordinates and parameterizations, only to spend an equally amazing amount of effort subsequently proving that all of the gadgets are independent of the coordinates and parameterizations chosen in the first place. This irony is even more pronounced when considering the role coordinates and parameterizations play when dealing with loop spaces.

One of the primary purposes of differential geometry is to provide a means to study intrinsic properties of manifolds without thinking of them as being embedded in ℝ^n for some sufficiently large n. With this in mind, let's have our first

Definition 1.1 Given a manifold ℳ, a loop is a smooth map γ : S^1 → ℳ.

Although S^1 may be thought of as being embedded in ℝ^2, we will instead think of it intrinsically as a manifold with no predefined parameterization. In other words, S^1 is parameterizable, but not yet parameterized.

Definition 1.2 A parameterization of S^1 is a diffeomorphism Σ : ℝ/2π → S^1.

Definition 1.3 A parameterized loop is a smooth map γ ∘ Σ : ℝ/2π → ℳ for some loop γ and a specific choice of parameterization Σ : ℝ/2π → S^1.

Definition 1.4 Loop space ℒℳ is the set of all loops γ : S^1 → ℳ.

Definition 1.5 The loop map L, from ℒℳ to subsets of ℳ, is defined via L(γ) = γ(S^1).

In other words, for each point γ ∈ ℒℳ, there is a corresponding loop L(γ) (see Figure 1).

Figure 1: A point in loop space ℒℳ corresponds to a loop in ℳ via the loop map L.

1.1 Vector Fields

Tangent vectors are easily envisioned as being tangent to some curve on a manifold. This is also true for tangent vectors on loop space. However, a curve on loop space corresponds to sweeping a loop through target space, so the relation between tangent vectors on loop space and those on target space is a little more involved.

Consider the curve on the left side of Figure 2 extending from the point γ to the point γ′ in loop space. The point γ in loop space corresponds to the outer loop L(γ) in target space, while the point γ′ corresponds to the inner loop L(γ′), as shown on the right side of the Figure. As the curve is traversed in loop space, each point on L(γ) traverses its own curve in target space, so that the tangent vector X_γ on loop space corresponds to a continuum of tangent vectors in target space, as illustrated. In other words, we have a generalized notion of the push forward of tangent vectors

(1) L_* : T_γ(ℒℳ) → ⊕_{p ∈ S^1} T_{γ(p)}(ℳ),

where each tangent vector X_γ ∈ T_γ(ℒℳ) gets sent to a continuum direct sum of corresponding tangent vectors in target space.

Figure 2: A tangent vector X_γ on loop space pushes forward to a continuum direct sum of tangent vectors on target space.

The need for a direct sum can be motivated by considering a tangent vector X_γ at a self-intersecting loop. If γ is self-intersecting, i.e. if there are distinct points p, q ∈ S^1 with γ(p) = γ(q), then the tangent vector X_γ will in general map to two distinct tangent vectors in T_{γ(p)}(ℳ) = T_{γ(q)}(ℳ), as illustrated in Figure 3. Consequently, if X is a vector field on loop space, then L_*(X) will, in general, not correspond to a vector field on target space, due to this multi-valuedness.

Figure 3: A tangent vector X_γ on loop space pushes forward to multi-valued tangent vectors on target space when the loop is self-intersecting.
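As an aside, the multi-valuedness is easy to see numerically. Here is a toy Python sketch (the figure-eight loop and the deformation field are both made up for illustration): the two preimages of the self-intersection point carry two different pushed-forward vectors.

```python
import numpy as np

# A figure-eight loop with a self-intersection at the origin:
# gamma(0) = gamma(pi) = (0, 0).
def gamma(t):
    return np.array([np.sin(t), np.sin(2 * t)])

# A deformation field along the loop: the image under L_* of a
# tangent vector X_gamma on loop space.
def V(t):
    return np.array([np.cos(t), 0.0])

p, q = 0.0, np.pi
print(np.allclose(gamma(p), gamma(q)))  # True: one point of target space
print(V(p), V(q))                       # [1, 0] vs [-1, 0]: two distinct vectors
                                        # sitting in the same tangent space
```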

1.2 Equivalent Loops

As defined, there is a certain amount of redundancy built into loop space. To see this, consider two distinct loops

(2) γ, γ′ : S^1 → ℳ

having the same image in target space, i.e.

(3) L(γ) = L(γ′).

This happens whenever we have any diffeomorphism

(4) F : S^1 → S^1

and set

(5) γ′ = γ ∘ F.

It is tempting to introduce a reduced loop space ℒℳ/~ by defining an equivalence relation

(6) γ ~ γ′ ⟺ L(γ) = L(γ′),

which identifies these equivalent loops. For the time being, we will content ourselves by simply noting this redundancy and thinking of it as a new kind of gauge freedom. In the remainder, we will deal with the full, non-reduced, loop space.

1.3 Equivalent Vector Fields

With the redundancy on loop space as discussed in the previous Subsection, it follows that for each γ ∈ ℒℳ there will be a loop subspace

(7) ℒℳ_γ = { γ′ ∈ ℒℳ | L(γ′) = L(γ) }.

Any curve in ℒℳ_γ will, by definition, connect equivalent loops having the same image in target space. The tangent vectors for such curves push forward to distinguished collections of tangent vectors on target space.

Since the image of equivalent loops in loop space is the same loop in target space, as the curve is traversed in loop space the corresponding points in target space sweep around that single loop. Therefore, pushing forward a tangent vector K_γ of such a curve in loop space gives rise to a collection of tangent vectors in target space, each of which is tangent to the given loop (see Figure 4).

Figure 4: A tangent vector K_γ on a curve connecting equivalent loops pushes forward to tangent vectors on target space that are all tangent to the loop.

If two points γ, γ′ ∈ ℒℳ have the same image in target space, we would expect such points to be physically indistinguishable. We've just seen that this redundancy in loop space gives rise to tangent vectors in target space that are tangent to the loop. This should make us suspect that only tangent vectors in target space that are transverse to the loop are physically relevant, i.e.

(8) L_* : [T_γ(ℒℳ)]_physical → ⊕_{p ∈ S^1} [T_{γ(p)}(ℳ)]_transverse.
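To illustrate Equation (8) concretely, here is a toy Python sketch (the loop and the vector are invented for illustration) that removes the tangential piece of a pushed-forward vector at one point of a loop, leaving the part we are calling physical:

```python
import numpy as np

def gamma(t):                  # a loop: the unit circle
    return np.array([np.cos(t), np.sin(t)])

def gamma_dot(t):              # direction tangent to the loop at gamma(t)
    return np.array([-np.sin(t), np.cos(t)])

def transverse_part(v, t):
    """Remove the component of v along the loop direction at gamma(t)."""
    u = gamma_dot(t)
    u = u / np.linalg.norm(u)
    return v - np.dot(v, u) * u

v = np.array([1.0, 1.0])        # some pushed-forward tangent vector at gamma(0)
print(transverse_part(v, 0.0))  # [1, 0]: the piece [0, 1] tangent to the loop is gone
```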

1.4 Coordinate Bases

Given a point γ ∈ ℒℳ whose image L(γ) lies completely within a coordinate patch on ℳ, a tangent vector U_γ pushed forward to a continuum direct sum of tangent vectors on target space may be expressed as

(9) L_*(U_γ) = ⊕_{p ∈ S^1} U^μ(γ(p)) ∂/∂x^μ |_{γ(p)}.

Furthermore, if a specific parameterization

(10) Σ : ℝ/2π → S^1

is chosen, the push forward may also be expressed as

(11) L_*(U_γ) = ⊕_{σ ∈ ℝ/2π} U^{(μ,σ)}(γ) ∂/∂x^μ |_{γ∘Σ(σ)},

where

(12) U^{(μ,σ)}(γ) = U^μ(γ∘Σ(σ)).

This choice of notation is made because we would like to express U_γ on loop space via

(13) U_γ = U^{(μ,σ)}(γ) ∂/∂x^{(μ,σ)} |_γ

so that σ takes on the role of a continuum index for summation. This motivates us to write

(14) L_*( ∂/∂x^{(μ,σ)} |_γ ) = ∂/∂x^μ |_{γ∘Σ(σ)},

which takes care of the individual basis elements, but we still need to specify how we are to combine the continuum of basis elements into a general tangent vector on loop space.

For the time being, let's drop the implied summation over σ and write this as an integral over a measure dΩ[γ∘Σ(σ)] that is yet to be determined, i.e.

(15) U_γ = ∫_0^{2π} dΩ[γ∘Σ(σ)] U^{(μ,σ)}(γ) ∂_{(μ,σ)} |_γ,

where we have set

(16) ∂_{(μ,σ)} |_γ = ∂/∂x^{(μ,σ)} |_γ.

The measure should be chosen so that

(17) U_γ = ∫_0^{2π} dΩ[γ∘Σ(σ)] U^{(μ,σ)}(γ) ∂_{(μ,σ)} |_γ = ∫_0^{2π} dΩ[γ∘Σ′(σ′)] U^{(μ,σ′)}(γ) ∂_{(μ,σ′)} |_γ,

which would allow us to bring back the implied summation, resulting in the simplified expression

(18) U_γ = U^{(μ,σ)}(γ) ∂_{(μ,σ)} |_γ = U^{(μ,σ′)}(γ) ∂_{(μ,σ′)} |_γ. (*)

If we push forward both expansions for the tangent vector to target space, we find that in order to satisfy Equation (*), we must have

(19) dΩ[γ∘Σ(σ)] = dΩ[γ∘Σ′(σ′)],

i.e. the measure should be independent of the parameterization. The only measure that is independent of the choice of parameterization would be proportional to the volume form. Therefore, we will set

(20) dΩ[γ∘Σ(σ)] = T_γ ds[γ∘Σ(σ)] = T_γ dσ ∂s[γ∘Σ(σ)]/∂σ,

where ds is the line element, i.e. the volume form on the loop, and T_γ is some constant, so that

(21) U_γ = T_γ ∫_0^{2π} dσ (∂s[γ∘Σ(σ)]/∂σ) U^{(μ,σ)}(γ) ∂_{(μ,σ)} |_γ.

Finally, we can use the above as the implied summation so that we arrive at the desired expression

(22) U_γ = U^{(μ,σ)}(γ) ∂_{(μ,σ)} |_γ,

where

(23) U^{(μ,σ)}(γ) ∂_{(μ,σ)} |_γ ≡ T_γ ∫_0^{2π} dσ (∂s[γ∘Σ(σ)]/∂σ) U^{(μ,σ)}(γ) ∂_{(μ,σ)} |_γ.

For points γ ∈ ℒℳ such that L(γ) lies entirely within a given coordinate patch, this allows us to formally manipulate vector fields

(24) U = U^{(μ,σ)} ∂_{(μ,σ)}

on loop space as we are accustomed to on target space. The main difference is that we have a continuum of indices. It should be noted, however, that additional care would be necessary if target space did not admit a single coordinate patch covering the entire manifold. If L(γ) extended beyond the coordinate patch, this notation would obviously cease to be valid.
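As a numerical check of the measure argument above, here is a toy Python sketch (the ellipse, the warp, and the test function are all made up): integrating against ds is insensitive to the choice of parameterization, while integrating against dσ is not.

```python
import numpy as np

# Target-space loop: an ellipse, and a test function f on target space.
def curve(u):
    return np.array([2 * np.cos(u), np.sin(u)])

def f(x):
    return x[0] ** 2

def integrals(warp, n=100000):
    """Return (integral of f dsigma, integral of f ds) for the
    parameterization u = warp(sigma)."""
    sig = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = curve(warp(sig))                   # shape (2, n)
    vals = f(pts)
    dgamma = np.gradient(pts, sig, axis=1)   # d(gamma o warp)/d sigma
    speed = np.linalg.norm(dgamma, axis=0)
    dsig = 2 * np.pi / n
    return vals.sum() * dsig, (vals * speed).sum() * dsig

i1_dsig, i1_ds = integrals(lambda s: s)
i2_dsig, i2_ds = integrals(lambda s: s + 0.3 * np.sin(s))
print(i1_dsig, i2_dsig)  # differ: the d(sigma) integral is parameter dependent
print(i1_ds, i2_ds)      # agree: the ds integral is reparameterization invariant
```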

Posted by: Eric on August 3, 2004 9:01 PM | Permalink | Reply to this

Re: Loop Space Differential Geometry

Hi Urs,

I wish you were here :)

I’ve been working hard on this write up. I was at the point where I could finally write down Stokes’ theorem on loop space. But, alas, the way I have defined things, it doesn’t seem to work out.

As you know (by the time you read this, anyway), I have been making tons of arguments about choices of measures when converting a sum over indices to an integral. I had argued that the correct measure should be

(1) U^{(μ,σ)} ∂_{(μ,σ)} ≡ T_γ ∫_0^{2π} dσ (∂s[γ∘Σ(σ)]/∂σ) U^{(μ,σ)} ∂_{(μ,σ)}

As of this moment, the way things are defined in the paper, I can’t see how this is going to allow us to write down Stokes’ theorem.

However, in my despair, I noticed that if we simply use the measure you have been using all along, i.e.

(2) U^{(μ,σ)} ∂_{(μ,σ)} ≡ ∫_0^{2π} dσ U^{(μ,σ)} ∂_{(μ,σ)},

then I can prove Stokes’ theorem on loop space in a few lines.

Grumble!! How can this be?!?! :)

If this is true, then it could mean that we really do NOT want loop space to be the space of maps

(3) γ : S^1 → ℳ

and we really do want to work with parameterized loop space being the space of all parameterized loops

(4) X : ℝ/2π → ℳ

after all.

If this turns out to be the case, I doubt you will be very surprised :) On the bright side, if Stokes’ theorem dictates that we work on parameterized loop space, then that would be another notch on the ax for Stokes’ theorem :)

Hmm…

Eric

PS: Fortunately, the way I developed the paper, changing it from loop space to parameterized loop space (and hence to agree with your papers) will require only very minor modifications. I think it will make a very nice little paper. Especially the bit about Stokes' theorem on loop space.

Posted by: Eric on August 6, 2004 8:39 AM | Permalink | Reply to this

Re: Loop Space Differential Geometry

PS: Fortunately, the way I developed the paper, changing it from loop space to parameterized loop space (and hence to agree with your papers) will require only very minor modifications.

As promised, it didn’t take long to make the changes to parameterized loop space. Now I have two versions of the paper. I’m still a little befuddled why I couldn’t get my unparameterized loop space Stokes’ theorem to work out. I don’t give up that easily, so I may still try to find a way. At least it is good to have something that works as a backup :)

If we do end up going with parameterized loop space, which seems likely, then my arguments about the measure being parameter dependent will be moot because EVERYTHING in sight will be parameter dependent. This is probably what you have been saying all along :)

Sorry I’m a “little” slow :)

The good news is that if we write up some notes on loop space differential geometry that even I can understand, then it means anybody could understand it :) This was the goal from the beginning.

As a parting thought, I just wrote a very cute section

1.4.3 Fundamental Theorem of Loop Calculus

(1) ∫_Γ dϕ = ∫_0^1 tr⟨ dϕ_{L∘Γ(t)}, (d/dt) L∘Γ(t) ⟩ dt
 = ∫_0^1 [ (1/2π) ∫_0^{2π} ⟨ dϕ_{L_σ∘Γ(t)}, (d/dt) L_σ∘Γ(t) ⟩ dσ ] dt
 = (1/2π) ∫_0^{2π} [ ∫_0^1 (d/dt) ϕ_{L_σ∘Γ(t)} dt ] dσ
 = (1/2π) ∫_0^{2π} [ ϕ_{L_σ∘Γ(1)} − ϕ_{L_σ∘Γ(0)} ] dσ
 = tr(ϕ_{L∘Γ(1)}) − tr(ϕ_{L∘Γ(0)})

so that we have the fundamental theorem of loop calculus

(2) ∫_Γ dϕ = ∫_{∂Γ} ϕ.

Phew! It took me longer to get that equation aligned correctly in WebTeX than it took me to make all the changes in the paper :)

Of course you will need to see the paper to understand all the notation, but you can get an idea for how the mechanics works from the equation.
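In the meantime, here is a toy numerical check of the mechanics in a Python sketch (the curve of loops and the function are invented, and tr is approximated by an average over sampled points): the t-integral of the derivative of the loop-space 0-form reproduces the boundary terms, as in equation (2).

```python
import numpy as np

N = 360                                  # points used to sample each loop
sig = np.linspace(0, 2 * np.pi, N, endpoint=False)

def Gamma(t):
    """A curve in loop space: circles of radius growing from 1 to 2."""
    r = 1.0 + t
    return np.stack([r * np.cos(sig), r * np.sin(sig)])

def f(x, y):                             # a function on target space
    return x * x + 3 * y * y

def phi(loop):
    """0-form on loop space: tr of f around the loop, (1/2pi) int f dsigma."""
    return f(loop[0], loop[1]).mean()

# Left side: integrate d(phi)/dt along Gamma by numerical differentiation.
ts = np.linspace(0, 1, 1001)
vals = np.array([phi(Gamma(t)) for t in ts])
lhs = np.trapz(np.gradient(vals, ts), ts)

# Right side: the boundary term phi(Gamma(1)) - phi(Gamma(0)).
rhs = phi(Gamma(1.0)) - phi(Gamma(0.0))
print(lhs, rhs)                          # both 6, up to numerical accuracy
```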

Eric

Posted by: Eric on August 6, 2004 4:02 PM | Permalink | Reply to this

Re: Loop Space Differential Geometry

Hi Urs,

I imagine you will be returning soon. I hope you had a nice escape while it lasted :) The notes are coming along nicely. Of course the progress is slow, but there is progress.

I’m still a little befuddled why I couldn’t get my unparameterized loop space Stokes’ theorem to work out.

I have found a new map to be pretty handy that helps explain this a little bit. For some σ ∈ ℝ/2π, let

(1) L_σ : ℒℳ → ℳ

be defined by

(2) L_σ(γ) = γ(σ).

This map is only defined for parameterized loop space. For a curve

(3) Γ : [0,1] → ℒℳ

on parameterized loop space, we have a family of curves

(4) L_σ ∘ Γ : [0,1] → ℳ

on target space. The existence of this family of curves is special to parameterized loop space and is what allows us to push forward a tangent vector on loop space to a continuum direct sum of tangent vectors defined along a loop on target space.

For the space of maps

(5) γ : S^1 → ℳ

there is no such natural way to define the push forward of tangent vectors on loop space to target space.

This is one compelling reason, among others, why we need to work with the space of parameterized loops. Kind of neat :)

Ok. I am now officially convinced that we should be dealing with parameterized loop space :) Having the ability to push forward tangent vectors is crucial in order to be able to define integration. Integration is crucial to physics because every measurement involves integration in some way or other. Therefore, if loop space is to have anything to do with physics, it should be parameterized loop space. Ahhh… enlightenment :)

Eric

Posted by: Eric on August 12, 2004 3:14 AM | Permalink | Reply to this

Re: Loop Space Differential Geometry

Hi Eric -

I am back from vacation and ready to do some work again.

I have had a look at your comments. As you note, apparently I didn't fully understand in which sense you were talking about unparameterized loop space. Indeed, to me unparameterized loop space was the space of equivalence classes of maps from (0, 2π] into target space, where maps are taken to be equivalent if they have the same image in target space.

I believe that this is a very awkward space to deal with directly.

I see that you have convinced yourself of some of the merits of parameterized loop space. But I am surprised about what you call Stokes' theorem on loop space. Seems to me that what you wrote down as section 1.4.3 is some fancy version of Stokes' theorem (the 'fundamental theorem') on a single loop, not on loop space.

Stokes' theorem on parameterized loop space must involve some integration over loops, not over points on a single loop. And it can, formally, be written down easily, since when treating σ as a continuous index everything looks exactly as in finitely many dimensions. The only subtlety is to take care that the formal expressions used this way are well defined. But that part can be dealt with by writing functions on loop space as formal power series, the way it was done by Chen, as described in hep-th/0401215, which we talked about before. If you/we are serious about writing some mathy stuff about loop space differential geometry, it would probably be indispensable to first carefully read Chen's book on this topic. I suspect that most of what you are currently thinking about has been done there already.

I’ll see if I can get ahold of a used copy of this book (since it is apparently out of print). Maybe the papers collected in that book are even available online somewhere?

Posted by: Urs Schreiber on August 12, 2004 4:13 PM | Permalink | PGP Sig | Reply to this

Re: Loop Space Differential Geometry

I am back from vacation and ready to do some work again.

Hey hey! :) Good to see you back :)

Seems to me that what you wrote down as section 1.4.3 is some fancy version of Stokes’ theorem (the ‘fundamental theorem’) on a single loop not on loop space.

No worries, there are many details missing in that blurb. Later tonight, I will email a copy of my notes. With some more work and your blessing, maybe we can make them available on the arXiv.

For the time being, let me just reassure you that in Section 1.4.3 above, dϕ is a 1-form on loop space and

(1) Γ : [0,1] → ℒℳ

is a curve on loop space. You'll see right away how it works when you have the notes in your hands. I tried to be thorough and to make it self-contained, so it should hopefully be readable (or I failed miserably :)).

It will be necessary to be aware of Chen’s work for references, but I am having too much fun discovering this stuff on my own to ruin the fun and read his papers. If everything I do turns out to be unoriginal, that is fine. We can just promote it as a “review” article. Hopefully, I can add some new insights that might help others learn the material at least.

Best wishes,
Eric

Posted by: Eric on August 12, 2004 7:31 PM | Permalink | Reply to this

Sigma

Hi Urs,

I actually spent some time this weekend trying to make some of the changes to the notes you suggested, but I don't feel like I accomplished much. A big reason is that since my accident 3 weeks ago, I haven't been able to get a good night's sleep. This cast has got my wrist bent down at just the right angle to make it impossible to find a comfortable position to sleep. The best I can do is catch several "naps" throughout the night. I was also napping throughout the day Saturday and today, which is not very conducive to work.

With my excuses out of the way, back to the fun stuff :)

The thing I spent the most time meditating about this weekend was the issue of “mixed forms”, i.e. forms obtained by wedging coordinate basis 1-forms at different values of σ, e.g.

(1) dx^{(μ,σ)} ∧ dx^{(ν,σ′)},

where σ ≠ σ′. Of course I can appreciate the desire to simply postulate that σ take on the role of a continuum index, in which case the above would be a natural thing to do, but I haven't been able to convince myself yet that doing so is completely justified, or perhaps "motivated" is a better word.

I could be wrong, but skimming your deformation paper, I didn't find anything that explicitly makes use of distinct σ's. In fact, unless I missed something, where distinct σ and σ′ appear, they are accompanied by a δ(σ, σ′).

If that is true, then it would seem that we could, in principle, rewrite that paper using exclusively the concept that I refer to in the notes as "loop tensor fields," i.e. tensors defined along a loop in target space, e.g. a loop vector field X_{L(γ)} along the loop L(γ) in target space is defined by

(2) X_{L(γ)} = ⊕_{σ ∈ ℝ/2π} X_{γ(σ)},

where

(3) γ : ℝ/2π → ℳ

is a parameterized loop, i.e. a point in loop space, and

(4) L : γ ↦ γ(ℝ/2π)

is the loop map which takes a point in loop space to its image in target space.

To put it another way, which is probably easier to refute: it seems like nothing would be changed if we set

(5) ℰ†^{(μ,σ)} ℰ†^{(ν,σ′)} = δ(σ, σ′) ℰ†^{(μ,σ)} ℰ†^{(ν,σ′)}

and

(6) ℰ_{(μ,σ)} ℰ_{(ν,σ′)} = δ(σ, σ′) ℰ_{(μ,σ)} ℰ_{(ν,σ′)}

Then we’d still have

(7) {ℰ†^{(μ,σ)}, ℰ†^{(ν,σ′)}} = 0,
(8) {ℰ_{(μ,σ)}, ℰ_{(ν,σ′)}} = 0,
(9) {ℰ_{(μ,σ)}, ℰ†^{(ν,σ′)}} = δ_{(μ,σ)}^{(ν,σ′)}

as before.

If you could demonstrate that this would lead to results that are incompatible with the paper, that would help me understand what I am missing because right now I am thinking that it is, in fact, compatible.

Eric

Posted by: Eric on August 16, 2004 2:49 AM | Permalink | Reply to this

Re: Sigma

Hi Eric -

I feel for you concerning your wrist. A couple of years ago I broke my right foot, and I know how such a seemingly peripheral injury can bedevil body and soul.

I volunteered to baby-sit my mother's dog today, which makes me spend a rather unproductive but very relaxing day, mostly filled up with riding my bike through the park like a madman - with the dog still being faster than me.

But my dad has an age-old PC, and I'll see if I can successfully use it to post to the SCT.

You are right that many nice and useful objects on loop space involve single integrals over loops only. But that's just because that's the simplest case. When you look at my 2-form paper, or the one in preparation on super-Pohlmeyer stuff, you'll see that, for instance, when nonabelian things come into play, multi-integrals (or 'iterated integrals' in Chen's language) are unavoidable, and these in general involve the 'mixed forms' that you are talking about.

And that’s no wonder. After all, as I have mentioned in private mail recently, when you take two generic 1-forms on loop space and wedge them together, the result contains ‘mixed’ forms. There is nothing scary about these.

Also notice that I am not trying to 'postulate' that sigma plays the role of an index. This is a result, not an axiom. We talked about what a vector on loop space looks like, and by looking at the resulting expressions you see that the integral over sigma plays precisely the same role as the sum over indices. The polygon-space approximation maybe makes it easier to see why this must be true. Here the space one is working on is essentially the direct sum of several copies of target space, and this direct sum obviously induces indices that involve several copies of the original indices.

So let's try to approach this systematically. I am claiming that once we know what a vector on loop space is, everything about rank (m,n) tensors follows. Do you agree with what I wrote about loop space vectors last time? Can you see how the tensor product of two such vectors necessarily (in general) involves 'mixed' elements? Let me know which of these steps you do not agree with.

Posted by: Urs on August 16, 2004 3:17 PM | Permalink | Reply to this

Re: Sigma

Hi Urs :)

Thanks for the image of you being pulled along behind your mom’s dog. Injuries aside, work has been extremely unpleasant lately so anything to make me chuckle is more than welcome :)

So let's try to approach this systematically. I am claiming that once we know what a vector on loop space is, everything about rank (m,n) tensors follows. Do you agree with what I wrote about loop space vectors last time? Can you see how the tensor product of two such vectors necessarily (in general) involves 'mixed' elements? Let me know which of these steps you do not agree with.

Ok, but first of all, let me clarify that my conviction one way or another is not strong enough for me to “not agree” with any particular step. Not yet anyway. At the moment, I think a better way to describe the situation is that I don’t yet understand things.

What I think I understand is that a tangent vector X_γ on loop space pushes forward to a loop vector field X_{L(γ)} on target space, as described in my notes. I am pretty sure this is correct, especially since you liked that figure so much :)

I think I also understand that if γ ∈ ℒℳ is a loop for which L(γ) is contained within a coordinate chart on target space, we can express a covector as

(1) α_γ = ∫_0^{2π} [ α_{(μ,σ)} dx^{(μ,σ)} |_γ ] dσ,

where, for the time being, I want to explicitly write down the integral over σ. From this, it follows that we should have

(2) α_γ ∧ β_γ = ∫_0^{2π} ∫_0^{2π} [ α_{(μ,σ)} β_{(ν,σ′)} dx^{(μ,σ)} |_γ ∧ dx^{(ν,σ′)} |_γ ] dσ dσ′.

What I don’t understand is why we do not set

(3) dx^{(μ,σ)} |_γ ∧ dx^{(ν,σ′)} |_γ = δ(σ, σ′) dx^{(μ,σ)} |_γ ∧ dx^{(ν,σ′)} |_γ

so that

(4) α_γ ∧ β_γ = ∫_0^{2π} [ α_{(μ,σ)} β_{(ν,σ)} dx^{(μ,σ)} |_γ ∧ dx^{(ν,σ)} |_γ ] dσ.

If you do not do this, then the tensor product on loop space corresponds to a non-local operation on target space, where you combine elements in different cotangent spaces.

If we forget about loop space for a second, there is a place where this kind of nonlocal tensor product appears in standard differential geometry. That is in the Green’s function

(5) G = G(x, x′) dx ⊗ dx′.

In the case of EM, we can write things like

(6) A = ∫_ℳ G ∧ j,

where j is a current 1-form and A is the 1-form produced by the source j. In EM, I would call G a "multi-form" because it is a form, not on ℳ, but on ℳ × ℳ, i.e.

(7) G ∈ Ω^1(ℳ) ⊗ Ω^1(ℳ).

Of course, I would never suggest that the Green's function is unimportant, because it obviously is extremely important. However, I don't think anyone would think anything wrong if I said that G was not a form on ℳ.
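To show what I mean, here is a crude discrete caricature in Python (grid, kernel and current all made up): sampling G(x, x′) gives a matrix, i.e. an object living on ℳ × ℳ, which eats the sampled current and returns an ordinary field on ℳ.

```python
import numpy as np

# Discrete caricature of A(x) = int G(x, x') j(x') dx' on a periodic 1-D grid.
n = 200
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

# Sampled two-point kernel: G lives on M x M, not on M.
X, Xp = np.meshgrid(x, x, indexing="ij")
d = np.minimum(np.abs(X - Xp), 2 * np.pi - np.abs(X - Xp))  # periodic distance
G = np.exp(-(d ** 2))                                       # some smoothing kernel

j = np.sin(3 * x)   # a sampled 'current'
A = G @ j * dx      # the resulting 'potential': again a field on M alone
print(A.shape)      # (200,)
```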

Similarly, I am not suggesting that mixed forms, or better "multi-forms", on loop space are unimportant. I am just wondering if such a thing would be more naturally associated with tensor products of spaces of forms on loop space. For example, if you have an application (non-abelian connections being a prime example) where you need things like

(8) dx^{(μ,σ)} ∧ dx^{(ν,σ′)}

for distinct values of σ and σ′, then is it possible that it would be more appropriate to think of this as a form

(9) dx^{(μ,σ)} ∧ dx^{(ν,σ′)} ∈ Ω^1(ℒℳ) ⊗ Ω^1(ℒℳ)

rather than

(10) dx^{(μ,σ)} ∧ dx^{(ν,σ′)} ∈ Ω^1(ℒℳ)?

If my hunch is correct that your deformation paper could be completely rewritten using loop tensor fields on target space, that would add substance to my suspicions.

Again, I’m not convinced either way, I’m just still trying to understand things and this is where I am at present.

Eric

Posted by: Eric on August 16, 2004 5:18 PM | Permalink | Reply to this

Re: Sigma

Thanks for the image of you being pulled along behind your mom’s dog. Injuries aside, work has been extremely unpleasant lately so anything to make me chuckle is more than welcome :)

The full truth is even more amusing than that:

The dog is not on the leash, usually, because she (a female) takes orders and the park/forest is rather large. So she runs around and invents games to play with us. One of these games is called: 'Bet I am faster through the undergrowth than you are by bike on the path.' She really takes the rigid self-invented rules of that game quite seriously, to the extent that people stop and watch us in amazement.

So we align like in an olympic contest, she in the scrubs, me on the path. She trembles in excitement, her eyes staring at me, but she won’t move until I give the correct sign, like shouting ‘los’ or clapping my hands. When I do, she darts off fiercely. (Sometimes I apparently violate the rules of the game by making some mistake with the start sign. Then she complains about my dumbness, instead of starting to run.)

I'll try to catch up on my bike. But she reaches a certain point (always the same point, which she somehow chooses in the beginning) that marks the end of the race, usually before I do (not quite always, though - I am a more serious competitor than my mom…). Then she meets me halfway, basking in her victory - and ready for the next run.

Ok, back to loops. I am glad that you wrote:

What I don’t understand is why we do not set

dx^{(μ,σ)} ∧ dx^{(ν,σ′)} = δ(σ − σ′) dx^{(μ,σ)} ∧ dx^{(ν,σ′)}

because that allows us to quite precisely address the point under discussion.

My answer is: We don’t do that because we are not free to define what ‘tensor product’ is supposed to mean. If you claim to provide a rank 2 tensor on loop space you are not free to redefine the laws of tensor multiplication.

(Besides, I think the definition you gave is self-inconsistent, because it would imply dx^{(μ,σ)} ∧ dx^{(ν,σ′)} = δ^n(σ − σ′) dx^{(μ,σ)} ∧ dx^{(ν,σ′)} for an arbitrary power n, which is not well defined.)

If you do not do this, then the tensor product on loop space corresponds to a non-local operation on target space, where you combine elements in different cotangent spaces.

Yes! That is the case and very important. For instance, in my 2-form paper I noted that gauge transformations on loop space only avoid certain non-localities of this sort in the resulting field strength if a certain consistency condition is met.

Still, these non-localities are there in general and they need to be there.

But the best thing is that you secretly came to the same conclusion, apparently without noticing ;-) Namely, you wrote:

is it possible that it would be more appropriate to think of this as a form

dx^{(μ,σ)} ∧ dx^{(ν,σ′)} ∈ Ω^1(ℒℳ) ⊗ Ω^1(ℒℳ)

That's not only more appropriate, that's what I have been saying all along! :-) Think about this same equation not for forms on loop space but for ordinary forms. Then you will immediately see that this is the very definition of tensor product. The space of rank 2 tensors is the tensor product of the space of rank 1 tensors. That's precisely how things are defined. And this is what I was getting at when telling you to consider the tensor product of any 2 vectors or 1-forms on loop space. The result certainly sits in the above tensor product space, and it contains mixed forms!

You continued to write:

rather than

dx^{(μ,σ)} ∧ dx^{(ν,σ′)} ∈ Ω^1(ℒℳ)

Maybe you have a typo here? As stated, the left hand side is a 2-form, the right hand side the space of 1-forms. But even if you put Ω^2 on the right, we have Ω^2 ⊂ Ω^1 ⊗ Ω^1 and the same as above applies.

Hopefully this convinces you. If not, let me know about further doubts. :-)

Posted by: Urs Schreiber on August 16, 2004 6:42 PM | Permalink | PGP Sig | Reply to this

Re: Sigma

Wait! Wait! I feel like I walked into a trap :)

I made a typo and wrote something that maybe looks correct on accident :)

If you take a look at this paper, you will see that the Green's function is a double form. The reference to double forms points back to de Rham, "Differentiable Manifolds." Too bad I don't have this book handy.

I guess a more appropriate way to write the Green's function is maybe

(1) G ∈ Ω^{(1,1)}(ℳ × ℳ)

rather than

(2) G ∈ Ω^1(ℳ) ⊗ Ω^1(ℳ)

as I mistakenly said in that last post. I hope you agree that

(3) G ∉ Ω^1(ℳ) ⊗ Ω^1(ℳ).

Before I can make my point about multi-forms on loop space, we had better try to settle on what multi-forms on target space are.

This is the way I understand it so far, which might not be 100% correct. Say we have manifolds ℳ and 𝒩 with their respective spaces of forms

(4) Ω(ℳ) = ⊕_{p=0}^n Ω^p(ℳ)

and

(5) Ω(𝒩) = ⊕_{p=0}^n Ω^p(𝒩).

Given a p-form α ∈ Ω^p(ℳ) and a q-form β ∈ Ω^q(𝒩), we can construct a double (p,q)-form

(6) α ⊗ β ∈ Ω^{(p,q)}(ℳ × 𝒩).

Keep in mind that this might not be what one technically should call a double form, but it is what I imagine it should be, not having a proper reference in front of me.

Probably a better thing to do would be to define injection maps

(7) ι_1 : Ω^p(ℳ) → Ω^{(p,0)}(ℳ × 𝒩)

and

(8) ι_2 : Ω^q(𝒩) → Ω^{(0,q)}(ℳ × 𝒩)

via

(9) ι_1(α) = α ⊗ 1

and

(10) ι_2(β) = 1 ⊗ β

so that

(11) ι_1(α) ∧ ι_2(β) = α ⊗ β

and we define

(12) (α_1 ⊗ β_1) ∧ (α_2 ⊗ β_2) = (α_1 ∧ α_2) ⊗ (β_1 ∧ β_2).

Perhaps ⊗ is not the best symbol to use for bookkeeping purposes, but I hope you see what I am getting at. Come to think of it, perhaps ⊗ is what we want, because we would like to be able to move 0-forms across it (I think).

The point is that given a p-form α ∈ Ω^p(ℳ) and a (p,p)-form G ∈ Ω^{(p,p)}(ℳ × 𝒩), we obtain a p-form α′ ∈ Ω^p(𝒩) via

(13) α′ = ∫_ℳ G ∧ α.

This last expression is the important one and the one we want. Everything preceding that is me trying to fill in the blanks. No guarantees on how well I did :) Oh yeah, the case we are interested in, of course, is ℳ = 𝒩.

The whole point is that I am trying to figure out if

(14) dx^{(μ,σ)} ∧ dx^{(ν,σ′)}

for σ ≠ σ′, should be thought of as an element of

(15) Ω^1(ℒℳ) ⊗ Ω^1(ℒℳ)

or

(16) Ω^{(1,1)}(ℒℳ × ℒℳ).

I’m pretty sure you will tell me it is the former, but I don’t see why it isn’t the latter.

Eric

Posted by: Eric on August 16, 2004 8:59 PM | Permalink | Reply to this

Re: Sigma

Hi Eric -

I am somewhat reluctant to discuss ‘multi-forms’ until we manage to agree on the simple issue of ordinary forms. I think the elementary question at hand does not require any such notions.

Let me again emphasize the simple point under discussion:

You agreed that dx^{(μ,σ)} is a 1-form on loop space. The fact that the symbol σ appears here has nothing to do with the 'multi-forms' and Green's functions that you mentioned. It is just a label identifying this particular form. Don't think in terms of forms on target space. We are talking about forms on loop space. Maybe it helps if we just write α for it. Then pick any other 1-form β on loop space.

Now you can wedge these together to obtain α ∧ β (which, by the way, is = α ⊗ β − β ⊗ α, at least in the ordinary sense in which I am using the tensor product here).

The simple fact is that α ∧ β = 0 if and only if α ∝ β. But you want to introduce a rule which makes α ∧ β vanish even when α and β are not linearly dependent. Any such rule breaks the ordinary notion of forms and wedge products.

Of course one may make up new definitions as one desires, but please let us first agree on the ordinary notions.

Posted by: Urs Schreiber on August 17, 2004 11:09 AM | Permalink | PGP Sig | Reply to this

Re: Sigma

Hi Urs,

Sorry if I seem to be going off on tangents (no pun intended :)). The thing is that in my attempt to understand loop space differential geometry, I started writing up those notes. I actually understand my notes :) This means that I am only starting to understand loop space differential geometry.

The simple fact is that α ∧ β = 0 if and only if α ∝ β. But you want to introduce a rule which makes α ∧ β vanish even when they are not linearly dependent. Any such rule breaks the ordinary notion of forms and wedge products.

There is another case where α ∧ β = 0 even when α is not proportional to β. That is when the support of α and the support of β do not overlap.

In the case we are talking about,

(1) dx^{(μ,σ)} |_γ and dx^{(ν,σ′)} |_γ

for σ ≠ σ′ correspond to cotangent vectors in target space in completely different cotangent spaces. Hence, even though it may turn out to be wrong, I don't think it is so outlandish to ask the question.

The following is a question I’ve been saving for a rainy day. Even though it is sunny outside it may be relevant here. First, “What is a 0-form on loop space?”

After writing my notes, it became clear to me that a 0-form on loop space corresponds to a loop 0-form on target space, i.e. a 0-form defined along the image of loops in target space. Evaluating the 0-form on loop space at a point γ corresponds to integrating the loop 0-form along the loop in target space via the parameter measure dσ, a process I called tr, i.e.

(2) ϕ_γ = tr(ϕ_{L(γ)}) = ∫_0^{2π} ϕ_{γ(σ)} dσ.

Ah ha! :) I shouldn’t have saved this question because I think it will help me understand things :)

The main question is, “How do we wedge two 0-forms together?” Taking a cue from standard geometry, we could write down

(3) (ϕ ∧ ψ)_γ = ϕ_γ ψ_γ,

where the multiplication on the right is done in ℝ. If this is correct, then we have

(4) (ϕ ∧ ψ)_γ = ∫_0^{2π} ∫_0^{2π} [ ϕ_{γ(σ)} ψ_{γ(σ′)} ] dσ dσ′

and we have a similar “problem.” I would have been inclined to write

(5) (ϕ ∧ ψ)_γ = ∫_0^{2π} [ ϕ_{γ(σ)} ψ_{γ(σ)} ] dσ,

i.e. you take values at corresponding σ’s in target space.

Of course if

(6) ϕ_{γ(σ)} ψ_{γ(σ′)} = δ(σ, σ′) ϕ_{γ(σ)} ψ_{γ(σ′)}

then the two (would seem to) agree and we are back to square one :) At least getting a handle on this would seem to be slightly easier :)

It now seems to me to be a matter of order of operations. Do we multiply first and then integrate, or integrate first and then multiply? The operations clearly do not commute. I think this gets to the heart of my concerns. What do you think? In standard geometry, from which the wedge product above took its cue, the problem does not appear because multiplication and integration DO commute on "point space", whereas they do not commute on loop space.

Hmm, interesting :) It is a subtle, but I think important, point.

What do you think about the commutation of multiplying and integrating? Depending on the choice we make, we will get two different versions of loop space differential geometry. In my notes, I chose to multiply first and then integrate. In your papers, you apparently integrate first and then multiply.

I hope you agree that the fact that we have an “order of operations” issue giving rise to two distinct flavors of loop space differential geometry is at least slightly interesting :)

Gotta run!
Eric

Posted by: Eric on August 17, 2004 2:24 PM | Permalink | Reply to this

Re: Sigma

Hi Eric -

you wrote:

That is when the support of α and the support of β do not overlap.

No, we don’t need to take that complication into account. Everything I am saying applies to one single cotangent space at a given point in loop space.

And yes, we could do the same discussion for 0-forms.

You ask what a 0-form on loop space is. It is just a function on loop space. The most important example of such a function which is not local in your sense is the Wilson line of some connection around the loop. Let A be any connection on target space and let F be the function on loop space which assigns

F(γ) = Tr P exp( ∫_0^{2π} dσ A_μ(γ(σ)) γ′^μ(σ) ). This is a 0-form on loop space and it involves products of 0-forms on target space at different points. This is not a bug, but a feature.
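If it helps, here is a toy Python sketch of this (the particular matrix-valued A is made up, and the path-ordered exponential is approximated by an ordered product of small matrix exponentials): F(γ) depends on the whole loop at once and cannot be written as a single integral of a target-space 0-form.

```python
import numpy as np
from scipy.linalg import expm

# Two Pauli matrices, used to build a made-up su(2)-valued connection.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def A(x):
    """Components (A_1, A_2) of a matrix-valued 1-form on R^2 (invented)."""
    return 0.3 * x[1] * s1, 0.3 * x[0] * s3

def wilson_loop(gamma, n=2000):
    """Approximate Tr P exp( int_0^{2pi} A_mu(gamma(s)) gamma'^mu(s) ds )
    by an ordered product of small matrix exponentials."""
    ds = 2 * np.pi / n
    U = np.eye(2, dtype=complex)
    for k in range(n):
        s = k * ds
        x = gamma(s)
        xdot = (gamma(s + 1e-6) - gamma(s - 1e-6)) / 2e-6   # gamma'(s)
        A1, A2 = A(x)
        U = expm((A1 * xdot[0] + A2 * xdot[1]) * ds) @ U    # later points on the left
    return np.trace(U)

circle = lambda s: np.array([np.cos(s), np.sin(s)])
print(wilson_loop(circle))  # depends on the whole loop, not on any single point
```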

You ask how we multiply 0-forms. As we always do! By multiplying their values. We don't change the rules of differential geometry at all. So (ϕψ)(γ) = ϕ(γ)ψ(γ). This has nothing to do with loop space. This is just the general rule for doing differential geometry.

If you like to think of a generalization of this which you address as ‘order of operations issue’ that’s fine with me, but let’s first agree on the ordinary way to do differential geometry.

Please! :-)

Posted by: Urs Schreiber on August 17, 2004 4:46 PM | Permalink | PGP Sig | Reply to this

Re: Sigma

Hi Urs! :)

Thanks for your patience so far. I feel like as patient as you’ve been, I better say something intelligent quick or I will push you over the edge :)

You ask how we multiply 0-forms. As we always do! By multiplying their values. We don't change the rules of differential geometry at all. So (ϕψ)(γ) = ϕ(γ)ψ(γ). This has nothing to do with loop space. This is just the general rule for doing differential geometry.

I have a feeling that what I am about to say is not very intelligent, but hopefully you can bear with me for just a little longer. Focusing on 0-forms, I’m sure we can straighten things out soon.

The rule

(1) (ϕψ)(p) = ϕ(p)ψ(p)

is absolutely unquestionable for 0-forms on “point space”, i.e. on standard manifolds where the points have no internal structure.

However, loop space is more subtle. When you say

let’s first agree on the ordinary way to do differential geometry

you are assuming that there is an ordinary way to do loop space differential geometry. If this were the case, then I don’t think Rajeev would have made the statement

The set of loops on space-time is an infinite dimensional space; calculus on such spaces is in its infancy. It is too early to have rigorous definitions of continuity and differentiability of such functions. Indeed most of the work in that direction is of no value in actually solving problems of interest (rather than in showing that the solution exists.)

A point on loop space, unlike a point in a usual manifold, has some internal structure to it. This internal structure can potentially lead to some unordinary behavior. In particular, it can potentially lead to an “order of operations” ambiguity. This ambiguity does not appear in ordinary differential geometry because there the points have no internal structure.

To help explain what I mean, consider two ordinary 0-forms on some ordinary manifold whose points have no internal structure. In this case, we can multiply first and then integrate giving

(2) ⟨ϕψ⟩_p = (ϕψ)(p).

Or we could integrate first and then multiply giving

(3) ⟨ϕ⟩_p ⟨ψ⟩_p = ϕ(p)ψ(p).

Since

(4) (ϕψ)(p) = ϕ(p)ψ(p),

it doesn’t matter in which order we perform the operations.

However, on loop space, the points have some internal structure so evaluating a 0-form on loop space involves an extra integral over this internal structure, i.e.

(5) ⟨ϕ⟩_γ = (1/2π) ∫_0^{2π} ϕ(σ)|_γ dσ,

where the normalization is chosen so that ⟨1⟩_γ = 1. Because of this internal structure, now it does matter in which order you perform the operations. For instance, if we multiply first and then integrate, we get

(6) ⟨ϕψ⟩_γ = (1/2π) ∫_0^{2π} ϕ(σ)|_γ ψ(σ)|_γ dσ.

On the other hand, if we integrate first and then multiply, we get

(7) ⟨ϕ⟩_γ ⟨ψ⟩_γ = (1/2π)(1/2π) ∫_0^{2π} ∫_0^{2π} ϕ(σ)|_γ ψ(σ′)|_γ dσ dσ′.

Therefore, due to the internal structure of points on loop space, we have

(8) ⟨ϕψ⟩_γ ≠ ⟨ϕ⟩_γ ⟨ψ⟩_γ.

This might seem unordinary, but maybe loop space differential geometry is supposed to be unordinary :)
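Here is the discrepancy in a few lines of arithmetic, as a toy Python sketch (the sampled values of ϕ and ψ along the loop are made up):

```python
import numpy as np

sig = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

# Values of two target-space 0-forms along one fixed loop gamma.
phi = np.cos(sig)      # phi at gamma(sigma)
psi = np.cos(sig)      # psi at gamma(sigma)

multiply_then_integrate = (phi * psi).mean()       # <phi psi>_gamma = 0.5
integrate_then_multiply = phi.mean() * psi.mean()  # <phi>_gamma <psi>_gamma = 0.0
print(multiply_then_integrate, integrate_then_multiply)
```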

Although it should always go without saying, I am still not convinced one way or the other and am just trying to explore the various ways to define these basic operations. The internal structure of points on loop space seems to add a subtlety that might make these discussions worth while.

Eric

Posted by: Eric on August 17, 2004 6:31 PM | Permalink | Reply to this

Re: Sigma

Hi Eric -

don’t worry, I am ready to discuss this indefinitely. Maybe I found it irritating that it seemed to me that you were fighting against the obvious. But let’s sort that out - either way! :-)

Yes, we have to be careful on loop space; due to its infinite dimensionality there can be divergences in naive expressions, and means must be taken to circumvent these. But our discussion is not at that point yet. The ambiguity that you are worried about is not a problem of this sort but - if I may say so - a confusion in your notation! :-)

In my opinion your emphasis on target space geometry while we are really talking about loop space geometry gets in the way of some simple insights.

A point on loop space does not have substructure. No point does.

In what you think of as a counterexample, where you write

(1) ⟨ϕψ⟩_γ = (1/2π) ∫_0^{2π} ϕ_{γ(σ)} ψ_{γ(σ)} dσ

you are implicitly using the wedge product on target space on the left. At the point we are discussing, this is extra structure which I would really urge you to forget for a moment. When you think about it, this does not follow the general scheme that you mentioned at the beginning of the comment. There is no reason for this to agree with the expression ⟨ϕ⟩_γ ⟨ψ⟩_γ, which is the correct expression.

Moreover, by far not every function on loop space is of the form that you are assuming here. I.e., not every F is of the form F(γ) = ∫_0^{2π} ϕ(σ)|_γ dσ. Just take the Wilson loop I mentioned as an example.

Posted by: Urs Schreiber on August 17, 2004 7:13 PM | Permalink | PGP Sig | Reply to this

Re: Sigma

Hi again Eric -

last night I kept thinking about how I could convince you concerning our discussion. It occurred to me that it might help to again look at n-gon space.

In particular, let's look at "triangle space" Triangle ℳ, the space of 3-tuples of points in target space ℳ, which should be thought of as indexed by a discrete σ taking the values 0, 2π/3 and 4π/3.

This space already features all the issues that we are talking about, and it is finite-dimensional, so that we can safely ignore the problems with divergences that occur in loop space and hence convince ourselves that these play no role for the identification of the differential geometry over loop space.

Let me for simplicity just assume that target space is ℝ^2. Then the first important thing is that Triangle ℝ^2 is just the same as ℝ^6.

This makes my point that we should not think about points in Triangle having ‘substructure’ in the sense you used this notion in your comment.

Namely, differential geometry on Triangle ℝ^2 = ℝ^6 does not depend on our interpretation of a point in ℝ^6 as describing a triangle in ℝ^2. Given two functions ϕ and ψ on Triangle ℝ^2 = ℝ^6, their product is obviously

(1) (ϕψ)(x) = ϕ(x)ψ(x).

Next consider 1-forms on Triangle ℝ^2 = ℝ^6. If the coordinates on ℳ = ℝ^2 are labeled x^1 and x^2, it is convenient to label the coordinates of Triangle ℝ^2 = ℝ^6 as

x^{(1,0)}, x^{(1,2π/3)}, x^{(1,4π/3)}, x^{(2,0)}, x^{(2,2π/3)}, x^{(2,4π/3)}

We could just as well label them x^1, x^2, x^3, x^4, x^5, x^6, but the above labeling reminds us about how we want to interpret the points in Triangle ℝ^2 = ℝ^6. But let's use both notations equivalently:

x^1 = x^{(1,0)}, x^2 = x^{(1,2π/3)}, x^3 = x^{(1,4π/3)}, x^4 = x^{(2,0)}, x^5 = x^{(2,2π/3)}, x^6 = x^{(2,4π/3)}

In these coordinates, the basis 1-forms are of course

dx^1 = dx^{(1,0)}, dx^2 = dx^{(1,2π/3)}, dx^3 = dx^{(1,4π/3)}, dx^4 = dx^{(2,0)}, dx^5 = dx^{(2,2π/3)}, dx^6 = dx^{(2,4π/3)}

where again the equality signs just equate two notations.

And from these we obtain non-vanishing 2-forms like

(2) dx^1 ∧ dx^2 = dx^{(1,0)} ∧ dx^{(1,2π/3)}.

Note that just because we want to interpret points in ℝ^6 as describing triangles in ℝ^2, the differential geometry on ℝ^6 does not change. Still, in the interpretation of ℝ^6 as Triangle ℝ^2, the above ordinary 2-form on ℝ^6 has a 'non-local interpretation' in target space, in a sense. But this is not a problem and in fact inevitable and good. Given any triangle in ℝ^2, one can certainly consider changes of the x^1 coordinate of its vertex labeled by σ = 0 together with changes of the x^1 coordinate of its vertex labeled by σ = 2π/3. This gives the above 2-form.

Similarly for 0-forms on Triangle ℝ^2. Consider the function F on Triangle ℝ^2 = ℝ^6 which maps any triangle in ℝ^2 to its perimeter, and consider the function G which maps a triangle to the length of its longest edge. Both F and G are ordinary functions on Triangle ℝ^2 = ℝ^6 and their product FG is certainly

(3) (FG)(x) = F(x)G(x)

which, when evaluated at any point x of Triangle ℝ^2 = ℝ^6, returns the product of the perimeter of the given triangle with the length of its longest edge.

Furthermore note that the functions F and G do not come from summing 0-forms on target space over the vertices of the triangle.

The functions which you considered as 0-forms on loop space in your latest comment would have triangle space analogs H of the form

(4) H(x) = Σ_{i=1}^6 h_i x^i

with {h_i} a set of constants. Clearly, this is only a very small subset of all functions on Triangle ℝ^2 = ℝ^6.
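For concreteness, here is a toy Python sketch of these two functions (I order the six coordinates vertex-by-vertex for convenience, which differs from the grouping above): F and G are ordinary functions on ℝ^6 that multiply pointwise, and neither is of the linear form H.

```python
import numpy as np

def vertices(x):
    """Read a point x of Triangle R^2 = R^6 as three vertices in R^2."""
    return x.reshape(3, 2)

def edge_lengths(x):
    v = vertices(x)
    return np.array([np.linalg.norm(v[i] - v[(i + 1) % 3]) for i in range(3)])

F = lambda x: edge_lengths(x).sum()   # perimeter of the triangle
G = lambda x: edge_lengths(x).max()   # length of the longest edge

x = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])  # a right triangle
print(F(x), G(x), F(x) * G(x))                # (FG)(x) = F(x)G(x), pointwise as always
```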

I hope that these examples at least clarify my point. I would suggest that we first try to agree on the differential geometry of triangle space before continuing the discussion of loop space.

Posted by: Urs Schreiber on August 18, 2004 10:57 AM | Permalink | PGP Sig | Reply to this

Triangle Space

Hi Urs,

Thanks for putting up with these questions. Believe it or not, none of this stuff is obvious to me, and I'm not simply trying to cause trouble :) On the bright side, if I am struggling with this, I can pretty much guarantee you that I'm not alone. Of course there will be anomalies of incredibly bright students, like yourself, for whom this stuff is obvious, but I'm willing to bet that for a majority of students, it is far from obvious.

Work is distracting me at the moment (or should it be the other way around? :)), but I think your idea of working with triangle space is a good one.

More later…

Gotta run,
Eric

Posted by: Eric on August 18, 2004 6:57 PM | Permalink | Reply to this

Re: Triangle Space

Hi Eric -

I am glad that you replied, I was getting worried that I might have annoyed you.

Yes, let's consider triangle space and maybe n-gon space for arbitrary n. The major part of the interesting stuff that I am looking forward to discussing can be discussed on n-gon space just as well as on loop space. Once the concept of n-gon space is familiar, loop space is just an afterthought - and we can then concentrate on the subtleties it brings with it.

Maybe one further comment that might help familiarize with these notions:

Obviously n-gon space is nothing but the configuration space of n distinguishable particles. From that point of view it may not seem all that exotic. A point in n-gon space is a configuration of a physical system consisting of n distinguishable particles. A vector on n-gon space hence is associated with an infinitesimal shift in the configuration of these particles, and so on.

Posted by: Urs Schreiber on August 18, 2004 7:26 PM | Permalink | PGP Sig | Reply to this

Re: Triangle Space

Hi Urs :)

I am glad that you replied, I was getting worried that I might have annoyed you.

Hah! "You" annoy "me"? I'm more worried about the other way around :) Occasionally, one or two people in my life have tried hard enough to actually annoy me, but you are unlikely to ever achieve that feat :)

Yes, let’s consider triangle space and maybe n-gon space for arbitrary n. The major part of the interesting stuff that I am looking forward to discuss can be discussed on n-gon space just as well as on loop space. Once the concept of n-gon space is familiar loop space is just an afterthought - and we can then concentrate on the subtleties it brings with it.

Sounds great to me :)

The first troubling thing for me is now this issue of reparameterization invariance. For n-gon space on ℝ^m, there are 2n (isolated) points in ℝ^{mn} that correspond to the same n-gon in target space. This comes from n cyclic permutations in the one direction and n cyclic permutations in the other. For example,

(1) (x^{(1,0)}, x^{(1,2π/3)}, x^{(1,4π/3)}, x^{(2,0)}, x^{(2,2π/3)}, x^{(2,4π/3)})
(2) (x^{(1,2π/3)}, x^{(1,4π/3)}, x^{(1,0)}, x^{(2,2π/3)}, x^{(2,4π/3)}, x^{(2,0)})

and

(3) (x^{(1,4π/3)}, x^{(1,0)}, x^{(1,2π/3)}, x^{(2,4π/3)}, x^{(2,0)}, x^{(2,2π/3)})

all correspond to the same triangle in ℝ^2. It's going to take some work before I'm even comfortable with n-gon space :)
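Just to convince myself, here is a toy Python sketch (names made up) that enumerates the 2n coordinate tuples representing one and the same triangle - the n cyclic shifts in each of the two orientations:

```python
import numpy as np

def representatives(verts):
    """All 2n coordinate tuples for one n-gon: n cyclic shifts of the
    vertex list, in each of the two orientations."""
    n = len(verts)
    out = []
    for v in (verts, verts[::-1]):       # both orientations
        for k in range(n):
            out.append(np.roll(v, k, axis=0))
    return out

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
reps = representatives(tri)
print(len(reps))         # 6 = 2n points, all the same triangle
for r in reps:
    print(r.flatten())
```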

Gotta run!
Eric

Posted by: Eric on August 18, 2004 10:38 PM | Permalink | Reply to this

Re: Triangle Space

Hi Eric -

all correspond to the same triangle in ℝ^2

True. But this true fact should not keep you from getting to the interesting stuff.

If you prefer, call the space the ‘labeled triangle space’ which consists of triangles with distinguishable vertices. The space of unlabeled triangles is a subspace of this space and that’s fine.

Posted by: Urs Schreiber on August 19, 2004 1:57 AM | Permalink | PGP Sig | Reply to this

Re: Triangle Space

Hi Urs,

all correspond to the same triangle in ℝ^2

True. But this true fact should not keep you from getting to the interesting stuff.

Right. I pretty much understand that this is the same concept as what I called “equivalent loops,” i.e. points in loop space with the same image in target space differing only by parameterization, so it doesn’t bother me. But what is different here is that “equivalent n-gons” are discrete isolated points in n-gon space, where we had a continuum loop subspace before. This subspace of equivalent loops allowed us to define equivalent vector fields corresponding to loop vector fields tangent to the loop, i.e. your K vector fields.

This means that we cannot define this reparameterization vector field K on n-gon space because we cannot define curves on these discrete points. I am tempted to suggest considering n-gon space over a discrete manifold, e.g. n-diamond, as in our notes :)

That might make an interesting follow up paper ;)

Another thing about this redundancy of equivalent n-gons is the 2n-fold symmetry under cyclic permutations of the coordinates. That wasn't quite obvious to me before. This translates to an ∞-fold symmetry on loop space (at least for loop space over ℝ^m).

With that said, I agree that we can still learn a lot about loop space by studying n-gon space. At the same time, we should understand the differences, e.g. no K vectors.

More later, but goodnight for now,
Eric

Posted by: Eric on August 19, 2004 6:11 AM | Permalink | Reply to this

Re: Triangle Space

Hi Eric -

yup, there is no continuous reparameterization invariance on n-gons. That's what we are losing in this approximation.

BTW, the cyclic permutation symmetry is n-fold on oriented n-gon space and 2n-fold only on non-oriented n-gon space.

Are you planning to include a discussion of the differential geometry of n-gon space in your notes?

Posted by: Urs Schreiber on August 19, 2004 11:06 AM | Permalink | PGP Sig | Reply to this

Post a New Comment