

November 25, 2006

Bulletholes

This blog, like the String Coffee Table and the n-Category Café, is served as application/xhtml+xml to compatible browsers. All three, therefore, need to be well-formed at all times. Otherwise, visitors will see a “yellow screen-of-death” instead of the desired content.

To ensure well-formedness, user input is validated before it can be posted. A local copy of the W3C Validator is hooked into the “preview” function for comments and entries. And, in the case of comments, we rigorously enforce that they validate before they can be posted.

That sounds great in theory. And, in practice, it seems to have worked quite well. One might even be forgiven for complacently thinking the arrangement bulletproof.

But then Henri Sivonen came along¹, to point out that one has been living in a fool’s paradise. The W3C Validator fails to enforce even well-formedness. Actually, the fault is not in the software written by the W3C, but in the onsgmls SGML parser, which has only limited support for XML.

Far from being bulletproof, the arrangement made it quite trivial to introduce non-well-formed content onto these blogs. That none of the previous six thousand or so comments have done so can be attributed either to dumb luck, or to the essential goodness of humanity. Needless to say, neither can be counted upon.

So, as a quick-and-dirty hack, if the W3C Validator says your comment is valid, I run it through a real XML parser, just to be sure. It seems a bit redundant, and the XML parser bails at the first well-formedness error (so it could take several passes to catch all the well-formedness errors missed by the W3C Validator). A better solution would be for someone to fix OpenSP 1.5.2, to ensure that onsgmls actually checks for well-formedness when operating in XML mode.
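
In outline, the extra check just hands the comment to libxml2 and sees whether it parses. Here’s a minimal sketch of the idea, using the XML::LibXML Perl module (an illustration of the technique, not the plugin’s actual code; the sample fragment is mine):

use strict;
use warnings;
use XML::LibXML;

# Return the parser error message if $fragment is not well-formed XML,
# or undef if it is. XML::LibXML dies at the first well-formedness error,
# which is why it can take several passes to flush out all the errors in
# a single bad comment.
sub well_formedness_error {
    my ($fragment) = @_;
    eval { XML::LibXML->new->parse_balanced_chunk($fragment) };
    return $@ ? "$@" : undef;
}

# An unclosed <em> sails right past onsgmls, but not past a real XML parser.
if ( my $err = well_formedness_error('<p>An <em>unclosed element.</p>') ) {
    print "Not well-formed: $err";
}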

Update (11/27/2006):

It seems to me that there are only about 3 people in the world using the MTValidate plugin, but I might as well release an updated version.

Version 0.4 of the plugin incorporates a new configuration option in /plugins/validator/config/validator.conf. Setting

XHTML_Check  = 1

runs ostensibly “valid” comments through a real XML parser, ensuring that they really are well-formed. To use this option, you’ll need the XML::LibXML Perl Module.
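
If you don’t already have XML::LibXML, it’s installable from CPAN in the usual way (it needs the libxml2 C library on your system); something like

perl -MCPAN -e 'install XML::LibXML'

should do it.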

The new version also incorporates yet more user-friendly error messages from version 0.74 of the W3C Validator.


¹ In response to a bit of flamebait from Anne van Kesteren.

Posted by distler at 2:33 AM | Permalink | Followups (13)

November 24, 2006

Dalí Thanksgiving

Posted by distler at 3:57 PM | Permalink | Post a Comment

November 19, 2006

Bulk Validator


Maybe everyone else knew about this, but I recently stumbled upon Validator, a bulk XML (including XHTML¹) validator. It features a GUI for MacOSX and Windows and a commandline tool for Linux.

Hand it a file, and it will validate it. Hand it a directory, and it will happily recurse through all subdirectories, validating every XML file it can find.

At least on my system, I needed to install XML::LibXML. For commandline use, I added

alias validate '/usr/bin/perl /Applications/Validator.app/Contents/Resources/script /Applications/Validator.app \!$'

to my .cshrc file (the \!$ is csh history-substitution syntax for the alias’s last argument). Now I can do things like

validate ~/Sites/blog/ | grep -v "Valid\|Well-formed"

Sweet!
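
Incidentally, if you want just the recursive sweep, without the GUI application, it’s easy enough to mimic in a few lines of Perl. A rough sketch (mine, not Validator’s actual script; it checks only well-formedness, not validity against a DTD or Schema):

use strict;
use warnings;
use File::Find;
use XML::LibXML;

my $parser = XML::LibXML->new();
my $root   = $ARGV[0] || '.';

# Recurse through $root, attempting to parse every XML-ish file we find.
find( sub {
    return unless /\.(xml|xhtml|svg)$/;
    my $file = $File::Find::name;
    eval { $parser->parse_file($file) };
    print $@ ? "$file: NOT well-formed\n" : "$file: Well-formed\n";
}, $root );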


¹ … and DocBook, SMIL, SVG, XML Schema, etc.

Posted by distler at 7:19 PM | Permalink | Followups (1)

November 16, 2006

Segal on QFT

I spent a delightful afternoon, yesterday, discussing quantum field theory, the renormalization group, and such matters with Graeme Segal. Earlier, he gave a nice talk in the Geometry and String Theory seminar on his approach to QFT.

Posted by distler at 4:37 PM | Permalink | Followups (6)

November 11, 2006

Localized

Frenkel, Losev and Nekrasov have put out Part I of a huge project to study topological field theories “beyond the topological sector.”

It sounds like we will spend some time discussing their work in the Geometry and String Theory Seminar, so it might be good to give a little summary here.

They’re interested in a set of related theories in various dimensions:

  • $d=1$: A certain supersymmetric quantum mechanics model, to be discussed below.
  • $d=2$: A topological $\sigma$-model (the “A” model), which is related to Gromov-Witten Theory.
  • $d=4$: Topologically-twisted $N=2$ SYM, which is related to Donaldson Theory.

In each case, the field space, $\mathcal{F}$, is an infinite-dimensional supermanifold (of bosonic and fermionic fields), with a nilpotent, odd symmetry, $Q$. If one computes the expectation value of topological observables (functions on $\mathcal{F}$ which are $Q$-invariant, modulo $Q$-exact), one finds that the computation localizes on a finite-dimensional subspace of $\mathcal{F}$ which, in each case, is called “instanton moduli space.” In the $d=2$ case, an “instanton” is a holomorphic map from the worldsheet, $\Sigma\to M$. In the $d=4$ case, an “instanton” is an anti-self-dual connection (modulo gauge transformations).

But there’s another way in which this localization can occur. Consider the $d=4$ case. The Euclidean action
$$ S_E = \int \frac{1}{2g^2} \operatorname{tr} F\wedge {*F} +\frac{i\theta}{8\pi^2} \operatorname{tr} F\wedge F +\dots $$
can be written
$$ S_E = -\frac{i}{4\pi}\int \left(\tau\, F^-\wedge F^- +\overline{\tau}\, F^+\wedge F^+\right) + \dots $$
where $F^\pm = \tfrac{1}{2} (F\pm {*F})$ and
$$ \tau = \frac{\theta}{2\pi} +\frac{4\pi i}{g^2},\qquad \overline{\tau} = \frac{\theta}{2\pi} -\frac{4\pi i}{g^2} $$
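
(To see why the rewriting works, modulo overall normalization conventions about which I’m being cavalier: for two-forms in four Euclidean dimensions, ${*F}\wedge{*F} = F\wedge F$, so
$$ F^\pm\wedge F^\pm = \tfrac{1}{4}\left(F\pm {*F}\right)\wedge\left(F\pm {*F}\right) = \tfrac{1}{2}\left(F\wedge F \pm F\wedge {*F}\right) $$
Hence $F^+\wedge F^+ + F^-\wedge F^- = F\wedge F$ carries the $\theta$-term, while $F^+\wedge F^+ - F^-\wedge F^- = F\wedge {*F}$ carries the Yang-Mills term, and the two reassemble into the $\tau$, $\overline{\tau}$ combination above.)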

If we send $\overline{\tau}\to -i\infty$, while holding $\tau$ fixed, we localize on the ASD configurations
$$ F^+ = 0 $$
But that’s crazy! you say: $\tau$ and $\overline{\tau}$ are complex conjugates of each other. True, they are, if $\theta$ is real, as required by CPT invariance (more precisely, Reflection-Positivity). However, if we are willing to deform the theory, in a CPT-violating fashion, by giving $\theta$ a large, negative imaginary part, we can take $\overline{\tau}\to -i\infty$, with $\tau$ fixed, by simultaneously going to weak coupling, $g^2\to 0$.

The price we will pay, in a canonical formalism, is that the spaces of “in” states and “out” states will no longer be isomorphic. The computation of topological observables is unaffected. But Frenkel et al. want to go beyond the topological sector and compute the matrix elements of arbitrary observables in this localized limit.

The first (100-page) installment is about the SUSY Quantum Mechanics case.

Posted by distler at 11:19 PM | Permalink | Followups (6)

November 3, 2006

del Pezzo

There are several reasons why a lot of recent work on string compactifications has focussed on the case of Type-II orientifold backgrounds. The most obvious is that turning on fluxes gives a mechanism for moduli stabilization.

Equally important, however, is that, in contrast to heterotic backgrounds, many of the phenomena of interest (nonabelian gauge theories coupled to chiral matter) are localized in the target space. This localization (along with the warping due to large fluxes) provides a nice mechanism for generating a large hierarchy of scales. It also emboldens one to take a “tinker-toy” approach, putting together pieces of the desired physics, localized at different locations in some (unspecified) Calabi-Yau orientifold (the Standard Model on a stack of branes at this singularity over here, supersymmetry-breaking over there, …). Whether, in fact, these pieces can be assembled together in arbitrary ways, or whether there are important constraints that one misses in the purely local analysis, is a crucial question.

Another question, to which there is still not a clear-cut answer, is to what extent one can get the Standard Model (without extra junk) out of these local constructions. We know that, with suitable bundles, on the heterotic side, one can get the Standard Model field content, on the nose. (I am thinking, here, of the work by the Penn group in the context of heterotic M-Theory.) Moreover, on the simply-connected covering space (i.e., before Wilson-line breaking), there is a known dictionary between the heterotic compactification, with bundles constructed via the Friedman-Morgan-Witten construction, and the F-theory dual. Unfortunately, the same is not true after Wilson-line breaking. So, whereas one can get GUT groups, one cannot, currently, get the Standard Model on the F-theory side.

The “state of the art,” in terms of local constructions, seems to be the model of Verlinde and Wijnholt. Certain complex codimension-3 singularities of a Calabi-Yau look like complex cones over del Pezzo surfaces, and Verlinde and Wijnholt manage to fit a Standard Model-like theory into the biggest and best of the del Pezzo singularities, $dP_8$. Recently, Buican et al. showed how the above construction can be embedded in a compact Calabi-Yau geometry.

Posted by distler at 11:32 PM | Permalink | Followups (23)