May 29, 2004
Movin’ On Up
MT 3.0
Yes, if you look down at the sidebar, you see that Musings and the String Coffee Table have been upgraded to MT 3.0. That’s 29 plugins¹ and 657 lines² of (unified diff) patches to the MT source code. Actually, 657 lines is a good bit shorter than my accumulated patches for MT 2.661. 6A actually fixed several of the bugs on my list in 3.0, while introducing, so far, only two new ones.
Remarkably, the whole thing seems to work.
(Well, OK, the comment-posting code was busted for a while, and people could preview, but not post their comments. Thanks to Srijith for catching this.)
iBook
I type these words on my new 14" G4 iBook. My aging G3 iBook was suffering from the dreaded backlight problems, and intermittent trackpad wonkiness. Keeping an external monitor and a USB mouse plugged in kinda defeats the concept of “laptop.” So, when I had a chance to purchase a brand new 1 GHz 14" iBook for under $1000 (ya gotta know the right people ;-), I decided to go for it. It arrived yesterday. I installed the RAM upgrade (to 640 MB), booted the machine in FireWire Disk Mode, and proceeded to clone my old machine’s hard drive onto the new one³. Upon rebooting, iTunes demanded that I authorize the new machine, Mathematica demanded that I re-enter my License Code (it demands that, whenever you so much as sneeze), but everything else worked flawlessly.
My only miscalculation was the naive presumption that I could re-use the Airport card from the old machine. Nope! “Airport Extreme-Ready” means incompatible with the old cards. Guess I’ll be adding another $100 to the cost of this baby …
¹ The 30th plugin, MT-Blacklist, awaits a 3.0-compatible update, and another was rendered superfluous by 3.0. Six of the 29 were plugins of mine.
² More precisely, that was: 14 files patched, 289 lines added and 29 lines removed from the MT codebase. This doesn’t count patches to other people’s plugins. I just copied the plugins over from my MT 2.x plugins directory.
³ N.b. I had MacOSX 10.3.4 (build 7H63) installed, a later build than the one included with the machine. This is important.
May 27, 2004
High Energy Supersymmetry
I’ve really gotta stop posting about the Landscape. Posts on the subject rot your teeth and attract flies.
Still, it helps to sort out one’s thinking about anthropic ideas, which definitely clash with the sort of “explanation” we have become used to in field theory. In any scientific framework, one needs to understand what’s just given — input data, if you will — what needs to be “explained” and (most importantly) what counts as an explanation. There’s a temptation to mix and match: to invoke the anthropic principle to explain some things, and “technical naturalness” to explain others. But that is simply inconsistent; a statistical distribution in the space of couplings does not favour technically-natural ones over others.
Consider the question: why is the QCD scale so much lower than the Planck scale (or the GUT scale)?
We are accustomed to saying that this large hierarchy is natural because it arises from renormalization-group running. The QCD coupling starts out moderately small at the GUT scale ($\alpha_s(M_{GUT})\sim 1/25$), and increases only logarithmically as we go down in energy.
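To make the exponential sensitivity explicit, recall the one-loop dimensional-transmutation formula (a textbook formula, not a quote from any of the papers under discussion; $b_0$ is the one-loop β-function coefficient):

$$\Lambda_{QCD} \simeq M_{GUT}\, e^{-2\pi/\left(b_0\, \alpha_s(M_{GUT})\right)}$$

An $O(1)$ change in $\alpha_s(M_{GUT})$ thus moves $\Lambda_{QCD}/M_{GUT}$ by many orders of magnitude.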
But, in the Landscape, there’s a probability distribution for values of $\alpha_s(M_{GUT})$, which might just as easily come out somewhat bigger or smaller. What sounded like a virtue now sounds like a vice. The ratio $\Lambda_{QCD}/M_{GUT}$ depends exponentially on $1/\alpha_s(M_{GUT})$, and so is an exquisitely sensitive function of the moduli — exactly the sort of thing about which it is hard to make statistical predictions.
Instead, there’s an anthropic explanation for the value of $\Lambda_{QCD}$. Namely, the proton mass (which is essentially determined by the QCD scale) is tightly constrained. Vary $m_p$ by a factor of a few, and stars cease to exist. Hence $\Lambda_{QCD}$ must be pretty close to its observed value; otherwise, we aren’t here.
Similarly, point out Arkani-Hamed and Dimopoulos, the electroweak scale cannot be vastly different from $\Lambda_{QCD}$. For the ratio $v/\Lambda_{QCD}$ (with $v$ the electroweak VEV) enters into the neutron-proton mass difference. If the neutron were lighter than the proton, there would be no atoms at all. If it were much heavier, all heavy elements would be unstable to beta decay, and there would be only hydrogen. Either way, we would not exist.
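To spell out the dependence (a schematic, textbook-style decomposition, not a formula from the original post), the mass difference splits into a quark-mass piece and an electromagnetic piece,

$$m_n - m_p \simeq c_1\,(m_d - m_u) - c_2\,\alpha\,\Lambda_{QCD}\,,\qquad m_d - m_u \propto v\,,$$

with $c_1$, $c_2$ positive $O(1)$ coefficients. Shifting $v/\Lambda_{QCD}$ shifts both the sign and the size of $m_n - m_p$.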
If the electroweak scale is anthropically-determined, is there any reason to expect any beyond-the-Standard-Model particles below the GUT scale? We don’t need low-energy supersymmetry to make $m_W/M_{pl}$ natural. Arkani-Hamed and Dimopoulos posit a scenario where supersymmetry is broken at a high scale, with squarks and sleptons having masses in the $10^{13}$ GeV range (more on that below), whereas the “'inos” (the higgsino, the gluino, the wino, zino and photino) survive down to low energies.
Light fermions are, of course, technically natural. But there’s no reason to expect the theory to have approximate chiral symmetries. So technical naturalness is not, in this context, an explanation for the light fermions. Instead, Arkani-Hamed and Dimopoulos argue that low-energy supersymmetry does have one great virtue — it ensures the unification of couplings around $10^{16}$ GeV. The “'inos” contribute to the β-function at 1-loop, so the 1-loop running in this model is exactly as in the MSSM. The squarks and sleptons contribute at 2 loops (as they come in complete multiplets, their 1-loop contribution does not affect the unification of couplings), and removing them from low energies actually improves the fit somewhat.
Arguing for coupling constant unification sounds equally bogus until you turn the argument on its head (thanks to Aaron Bergman for helping me see the light). Assume that at short distances one has grand unification. Then one needs light “'inos” so that the 3-2-1 couplings flow to their anthropically-allowed values at long distances.
Once we’ve abandoned low-energy SUSY breaking, why not let the SUSY breaking scale be all the way up at the GUT scale? The reason is, again, anthropic. The gluino is a light colour-octet fermion, and hence very long-lived (it decays only via the exchange of the now-very-heavy squarks, so its lifetime grows with the squark mass). If you push the SUSY breaking scale up too high, the long-lived gluino creates problems for cosmology. Arkani-Hamed and Dimopoulos favour a SUSY-breaking scale $\tilde{m}\sim 10^{13}$ GeV.
This gives a big improvement over low-energy SUSY in the context of the Landscape. Flavour-changing neutral currents are no longer a problem. And it ameliorates, but does not really solve, the problem of proton decay.
Recall that there’s no reason for the generic vacuum on the Landscape to respect R-parity. R-parity is respected only on very high codimension subvarieties of the moduli space (if it’s present at all). So, generically, one expects R-parity violating terms in the superpotential to be unsuppressed. Since the squarks are much lighter than $M_{GUT}$, the dominant contribution to proton decay comes from squark exchange, and the proton lifetime is roughly $\tau_p \sim \tilde{m}^4/(\lambda^4 m_p^5)$, where $\tilde{m}$ is the squark mass and $\lambda$ is the strength of the R-parity violating Yukawa couplings.
For TeV-mass squarks, the anthropic bound on the proton lifetime already forces $\lambda$ to be absurdly small, and the observational bound is smaller still. Pushing the squark masses up to $10^{13}$ GeV, the bound on $\lambda$ is no longer so absurdly small. But the observational bound still sits some 4 orders of magnitude below the anthropic one, a discrepancy which needs explaining.
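To see the scaling (my arithmetic gloss on the lifetime estimate above): holding the bound on $\tau_p$ fixed,

$$\lambda_{max} \propto \tilde{m}\,\left(\tau_p\, m_p^5\right)^{-1/4}\,,$$

so raising the squark mass from $10^3$ GeV to $10^{13}$ GeV relaxes the bound on $\lambda$ by ten orders of magnitude.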
I think that’s still a serious challenge for the anthropic principle. Why are the R-parity violating Yukawa couplings 4 orders of magnitude smaller than required by the anthropic bound?
A possible way out was suggested to me by Nima in the course of our email conversation. The lightest superpartner (one of the neutralinos) also decays through R-parity violating interactions (a similar diagram to the one which led to proton decay, but with one R-parity violating and one R-parity preserving vertex, instead of two R-parity violating vertices). If we want the lightest superpartner to furnish a candidate for the dark matter (leading to structure formation and hence to us), we need its lifetime to be at least comparable to the age of the universe. For an LSP mass of a few hundred GeV, to get a lifetime of order $10^{17}$ seconds, one ends up requiring a correspondingly tiny $\lambda$.
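The analogous estimate for the LSP lifetime (my sketch, with $g$ an ordinary R-parity preserving coupling) carries only two powers of $\lambda$, since only one vertex violates R-parity:

$$\tau_{LSP} \sim \frac{\tilde{m}^4}{g^2\,\lambda^2\,m_{LSP}^5}$$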
Perhaps it is the existence of dark matter that “explains” the nearly exact R-parity in our universe. I’m still pretty sceptical, but I’m keeping an open mind. So, there may well be more posts on this subject in the future …
May 26, 2004
Now More User-Friendly
If you’ve ever used the W3C Validator, you probably noticed a couple of things:
- The error messages produced by the `onsgmls` parser are pretty obscure.
- The latest version of the Validator attempts to improve the situation by including its own, more verbose error messages, in addition to the terse ones from `onsgmls`. These messages are not necessarily the clearest, but they are a big improvement.
If you’ve ever commented on this blog, you know that we run comments through a local copy of the Validator, yielding the same obscure error messages as the “old” W3C Validator. Alexei Kosut seems to have lost interest in his MTValidate plugin (at least, he never answered any of my emails). So I decided to update the plugin to use the new, more user-friendly error messages.
The result is mtvalidate-0.2. To install:

- There are some Perl module prerequisites. Get your webhost to install them, or use CPAN to install them in your `extlib` directory.
- Make sure you have the `onsgmls` SGML parser installed on your system. It comes with RedHat Linux, it’s available via fink for MacOSX (`fink install opensp3`), and for other OS’s, you can always download and compile the source code.
- Download and uncompress the sgml-lib directory and put it inside the `validator` directory.
- Edit the `SGML_Parser` line in `validator/config/validator.conf` to reflect the location of `onsgmls` (see the example after this list).
- Move `MTValidate.pl` and the `validator` directory into your MovableType `plugins` directory.
- Follow my previous instructions to enable validation of comments. Alexei has instructions to enable validation of entries.
Let me know how you like the new, “improved” error-reporting. And let the W3C know if you have any suggestions for improving the error messages.
May 24, 2004
Rampant Paranoia
Now that MacOSX has been smitten with two remote protocol handler vulnerabilities in less than a week, people are running a bit scared. Jason Harris claims to have found a new one, in which a hostile attacker gets LaunchServices to register a new URI scheme, for which a surreptitiously-downloaded hostile application is the default handler.
Mr. Harris provides two sample exploits, differing in the protocol used to download the hostile application to the victim’s machine. If successful, they are supposed to create a file, `owned.txt`, in the victim’s home directory. When I tried the exploits in Mozilla, the hostile attempts were blocked with the messages, “malware is not a registered protocol.” and “guardian 452 is not a registered protocol.” No disk was remote-mounted (I do have the `disk://` protocol disabled using the RCDefaultApp PreferencePane) and no file was downloaded via FTP.
I was equally unsuccessful in getting either exploit to work in Safari, though no helpful diagnostic error message was given. I’m not saying there’s no possibility of an exploit here (though I’m somewhat incredulous that the mere act of downloading an application — not launching it, not installing it in `/Applications/`, merely downloading it — would be enough to get LaunchServices to register it as the default handler for some unknown URI scheme), but it’s a bit premature of Mr. Harris to claim:
Because this sample exploit registers its own URI scheme, none of the methods people had been using involving disabling certain scripts, moving Help.app or changing the ‘help’ URI scheme would protect against it. At this time, only Paranoid Android provides protection from it.
Dick Cheney makes me paranoid, but the author of my beloved Chicken of the VNC? Nah…
Update (5/25/2004): John Gruber has a more thorough analysis of this new “threat”. According to John, the hostile application gets registered with LaunchServices when it is displayed in the Finder (still sounds wacky to me, but if you say so …). That would happen, for instance, if you had the Finder assigned as your `ftp://` helper. Me, I have that task assigned to Mozilla. If the hostile application doesn’t get registered, it can’t be used to attack you.
I find this “display an application in the Finder, and it’s automatically registered as a URI handler” behaviour — if true — very disturbing. Only applications in `/Network/Applications`, `/Applications` and `$HOME/Applications` should be automatically registered as URI handlers. That’s true of Services; why should URI handlers be different?
Update (6/7/2004): The 2004-06-07 Security Update has a more comprehensive fix for this whole class of problems. Kudos to Apple for their quick work on the issue and for their forthright and comprehensible explanations of their fixes.
May 22, 2004
WordPress 1.2, MathML Goodness
Update (3/21/2005): With WordPress 1.5, many of the troubles discussed below have gone away. A new version of the plugin, along with a simplified setup procedure, is detailed here.

WordPress 1.2 has just been released. Congratulations to Matt and his team for numerous improvements and a shiny new plugin architecture!
In celebration of the event, I’m releasing an itexToMML plugin for WordPress 1.2 and above. This brings easy-to-use mathematical authoring to the WordPress platform.
Installation involves a few simple steps.
- First, you need to download and install the itex2MML binary. There are precompiled binaries for Linux and Windows, and a precompiled MacOSX binary is included with my source distribution.
- Edit line 22 of the plugin to reflect the location where you installed the binary. By default, it says `$itex2MML = '/usr/local/bin/itex2MML';`
- Install the plugin as `wp-content/plugins/itexToMML.php`. (A sketch of what such a text-filter plugin looks like appears after this list.)
- Apply the following patch, which makes sure that the installed text-filtering plugins — wptexturize, Textile (1 and 2) and Markdown — play nice with MathML content. (These changes will, hopefully, be in the next release of WordPress.)
- Activate the plugin in the administrative interface.
- Start serving your blog with the correct MIME Type.
- If you want people to be able to post comments in itex, add the requisite list of MathML tags to your `mt-hacks.php` file.
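For orientation, here’s an illustrative sketch of how a WordPress 1.2 text-filter plugin of this sort hangs together. The function name and the details of piping text through the binary are my own; the real `itexToMML.php` differs in detail:

```php
<?php
/*
Plugin Name: itexToMML (illustrative sketch)
*/
// $itex2MML is the binary location from the second step above.
$itex2MML = '/usr/local/bin/itex2MML';

// Pipe text through the itex2MML binary, returning XHTML+MathML.
function itex_filter($text) {
    global $itex2MML;
    $spec = array(0 => array('pipe', 'r'),  // child's stdin
                  1 => array('pipe', 'w')); // child's stdout
    $proc = proc_open($itex2MML, $spec, $pipes);
    if (!is_resource($proc)) return $text;  // binary missing: pass text through
    fwrite($pipes[0], $text);               // feed the itex-flavoured source
    fclose($pipes[0]);
    $out = '';
    while (!feof($pipes[1])) $out .= fgets($pipes[1]);
    fclose($pipes[1]);
    proc_close($proc);
    return ($out != '') ? $out : $text;
}
add_filter('the_content', 'itex_filter');   // filter posts
add_filter('comment_text', 'itex_filter');  // and comments
?>
```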
That’s the good news.
The bad news is that WordPress 1.2 has a serious bug, which renders the plugin nearly useless for serious work. Like its ancestor, b2, WordPress eats backslashes. Type “\\a” in the entry form, and “\a” gets posted to your blog. Re-edit the post, and “\a” gets turned into “a” when re-posted. Since TeX relies heavily on backslashes, this is a pretty debilitating feature. Hopefully, it’ll get fixed soon.
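My guess (and it is only a guess) at the failure mode: an extra round of `stripslashes()` somewhere in the save/edit cycle. The effect is easy to reproduce:

```php
<?php
// A guess at the failure mode, not a diagnosis of the actual WordPress bug:
// one stripslashes() too many halves the backslashes on every save.
$typed    = '\\\\a';                // the author types \\a
$posted   = stripslashes($typed);   // after the first save:   \a
$reedited = stripslashes($posted);  // after re-edit and save: a
echo "$typed -> $posted -> $reedited\n"; // prints: \\a -> \a -> a
?>
```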
The other thing that is less than ideal is that enabling the plugin is all-or-nothing. When enabled, all your posts and comments get filtered through itexToMML, even those with no math in them. That’s rather wasteful of resources.
But, again, I’m pretty sure that this will have to change in subsequent versions of WordPress. Forget about the people using itexToMML. Consider the choice of text filters for composing posts. Currently, there are four: wptexturize (the default), Textile1, Textile2 and Markdown. Say you have been using Textile for a while and decide one day to switch to Markdown. Guess what? You can’t! If you disable Textile and enable Markdown, this choice applies to all your posts. But the syntaxes of these two markup dialects are incompatible. Your old posts will break horribly if you switch. Once you’ve accumulated a body of posts using one text filter, you are basically stuck, regardless of whether something better comes along, tempting you to switch.
MovableType lets you assign a choice of text filter to each of your posts individually. If you decide one day to switch from Textile to Markdown, your old posts don’t break, because they still get processed with Textile. I added the ability to assign a choice of text filter to each comment in MT. That way, commenters can compose their comments in their favourite idiom, rather than yours.
It seems to me that, once you start giving people a choice of text filters for formatting their posts, it’s inevitable that you’ll need to allow them to make that selection on a per-post basis. WordPress actually allows multiple text filters to be applied to (every) post. If you want to use itexToMML with Textile formatting, you just activate both plugins. In MovableType, I had to create a third text filter plugin, whose sole purpose was to daisy-chain the other two together. It will be cool to see how WordPress eventually handles this. Perhaps there will be a set of checkboxes in the composition window, letting you select which text filters apply to the post you’re composing.
But all that is for the future. Right now, WordPress users have a shiny new toy to play with. I hope they enjoy my small addition to the party.
MIME Types for WordPress
Those familiar with this blog will know that to get MathML to render in Gecko-based browsers (Netscape 7, Mozilla, Firefox, …) and in IE/6 with the MathPlayer 2.0 plugin, you need to serve your pages as `application/xhtml+xml`. My MovableType solution involves using `mod_rewrite` to set the HTTP `Content-Type` headers.
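For the curious, the `mod_rewrite` trick amounts to something like this (a simplified sketch, not the actual rules used on this site; the real thing needs the full User-Agent caveats discussed below):

```
RewriteEngine On
# Serve .html pages with the XML MIME type to Gecko-based browsers;
# everyone else gets Apache's default (text/html).
RewriteCond %{HTTP_USER_AGENT} Gecko
RewriteCond %{HTTP_USER_AGENT} !(Chimera|Camino|KHTML)
RewriteRule \.html$ - [T=application/xhtml+xml]
```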
In WordPress, as in any PHP-based system, it’s probably preferable to set the headers directly in your PHP code. It would be great if someone wrote up a definitive guide to doing this in WordPress. Unfortunately, most of the existing instructions, like Simon Jessey’s, are written under the misapprehension that the correct thing to do is to set the `Content-Type` based on the `Accept` headers sent by the browser.
This is wrong. It may be “morally correct,” but it doesn’t actually work with real-world browsers.
Both Camino and Opera 7.5 include `application/xhtml+xml` in their `Accept` headers. Both cough up hairballs when served XHTML+MathML content with that MIME type. IE/6, with the MathPlayer 2.0 plugin installed, handles `application/xhtml+xml` (either straight XHTML or XHTML+MathML) just fine, even though it doesn’t say so in its `Accept` headers.
The only correct thing to do is to send the MIME type based on the `User-Agent` string sent by the browser. Anybody want to take a crack at writing up some instructions for WordPress? The logic would be something like this,
```php
if ( (preg_match("/Gecko|W3C_Validator|MathPlayer/", $_SERVER["HTTP_USER_AGENT"])
      && !preg_match("/Chimera|Camino|KHTML/", $_SERVER["HTTP_USER_AGENT"]))
     || preg_match("/Camino.*MathML-Enabled/", $_SERVER["HTTP_USER_AGENT"]) ) {
  // MathML-capable browsers get the XML MIME type and the MathML DOCTYPE.
  header("Content-type: application/xhtml+xml; charset=utf-8");
  print('<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0//EN"
  "http://www.w3.org/Math/DTD/mathml2/xhtml-math11-f.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
');
} else {
  // Everyone else gets XHTML 1.0 Strict, served as text/html.
  header("Content-type: text/html; charset=utf-8");
  print('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
');
}
```
to be placed in `wp-blog-header.php` or at the top of whatever pages need to be served with the correct MIME type. Either way, you need to remove the hard-coded DOCTYPE declaration and opening `<html>` tag in the affected pages.
May 19, 2004
Remote Exploit for MacOSX
[Via Jay Allen] This is the sort of thing one expects from our friends in Redmond.
- Help Viewer.app will happily run scripts on your local machine.
- Help Viewer.app is the default helper application for `help://…` URI’s.
- Ergo, an evil web master can execute scripts on your computer by redirecting an (innocent-looking) link that you click on to a `help:runscript=/path/to/some/script.scpt` URL.
By itself, this is limited to executing scripts or applications that are already on your machine (this includes the ubiquitous `OpnApp.scpt`, which can execute shell commands). For extra fun, Mr. Evil can get you to remote-mount a disk image by redirecting you to a `disk://…` URL, and then use the previous trick to run an application on the mounted disk image.
That is really, really, evil.
Workaround: Use the RCDefaultApp PreferencePane to disable the `help://…` helper application. And, similarly, disable `disk://…` and `disks://…`.
Update (5/21/2004): Apple has released an update to Help Viewer.app to address this issue, Security Update 2004-05-24 (also available through Software Update):
HelpViewer: Fixes CAN-2004-0486 to ensure that HelpViewer will only process scripts that it initiated. Credit to lixlpixel <me@lixlpixel.com> for reporting this issue.
Update (5/22/2004): John Gruber points out another vulnerability, this time in Terminal.app’s handling of the `telnet://` URI scheme. Following a `telnet://-npath%2Fto%2Fsome%2Ffile` link will overwrite any file you have write-access to. Best to disable that URI scheme too, until Apple fixes Terminal.app. (It’s fixed in 10.3.4.)
May 17, 2004
del Pezzo
Seiberg Duality is one of the mysterious and wonderful features of strongly-coupled supersymmetric gauge theories to have emerged from the interplay between string theory and gauge theory. In the purely gauge-theoretic context, it’s a bit of a black art to construct Seiberg dual pairs of gauge theories. A stringy context in which a large class of examples can be found, and hence where one can hope to find a systematic understanding, is D-branes on local del Pezzo surfaces.
Let $X$ be the noncompact Calabi-Yau 3-fold which is the total space of the canonical bundle of a del Pezzo surface, $B$ ($\mathbb{CP}^2$ with $n\le 8$ points blown up). $X$ has a minimal-sized surface, $B$ itself, embedded via the zero section. “Compactify” Type IIB on $X$, and consider space-filling D3-branes and D5-branes wrapped on cycles of $B$. Varying the Kähler moduli of $X$ is an irrelevant deformation of the resulting 4D gauge theory. So, studying the different D-brane descriptions which arise as one moves in the Kähler moduli space gives a concrete description of Seiberg Duality. (I’m lying slightly, here, but part of the mystery of the subject is understanding exactly when that’s a lie.)
At certain loci in the moduli space, a nonabelian gauge invariance is manifest, and one has a quiver gauge theory with massless bifundamentals and, typically, some gauge-invariant superpotential for them. There’s a close relation between $D^b(B)$, the derived category of coherent sheaves on $B$ (in which the aforementioned D-branes are objects), and the derived category of quiver representations (with relations given by the derivatives of the superpotential).
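Schematically, the relation is a tilting equivalence (the standard statement, going back to Bondal, rather than a formula quoted from either paper): a strongly exceptional collection $\{E_i\}$ of sheaves on $B$ yields

$$D^b(\mathrm{Coh}\, B) \simeq D^b(\mathrm{mod}\text{-}A)\,,\qquad A = \mathrm{End}\Big(\oplus_i E_i\Big)\,,$$

where $A$ is the path algebra (with relations) of the associated quiver.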
There’s a rich literature on this subject, but two recent papers provide a good entrée into it for those (like yours truly) who haven’t been following it in much detail. Chris Herzog argues that admissible mutations of strongly exceptional collections of coherent sheaves (which, in turn, form a basis of objects in the derived category) implement Seiberg Duality. Aspinwall and Melnikov discuss the same issue from the point of view of tilting equivalences between the derived categories of quiver representations (an approach pioneered by Berenstein and Douglas).
May 16, 2004
Best $20 I’ve Spent in a While
I took the family kayaking on Town Lake today.
We paddled past groups of turtles basking in the sunlight. We watched egrets fishing in the shallows and a Guinea Fowl chasing pigeons off its “turf” on the bank. A blue heron glided past, a few feet from us. And we were accosted by two, extremely hopeful, swans.
All in the space of an hour, literally a stone’s throw from downtown Austin.
Brouhaha
MT 3.0 was released this week, and the announcement was greeted with an unprecedented furor. No longer would MT be free for (unlimited) personal, noncommercial use. Instead, the license fee would depend on the number of blogs and authors. In response to the outpouring of complaints, Six Apart “clarified” (read: changed) their licensing terms to read as follows.
|  | Free Version | Personal Edition | Personal Edition 10 | Personal Edition 13 | Personal Edition add-on |
|---|---|---|---|---|---|
| “Active” Weblogs | 3 | 5 | 10 | 13 | 1 |
| “Active” Authors | 1 | 5 | 10 | 13 | 1 |
| Price | free | $99.95 | $149.95 | $189.95 | $9.95 |
| Introductory Price | free | $69.95 | $119.95 | $149.95 | $9.95 |
There’s a certain schadenfreude in contemplating the feverish late-night discussion which led to a revised pricing scheme in which it is cheaper to buy the 5-author license and 8 add-on packs ($179.55) than to buy a 13-author license ($189.95). [OK, at the introductory price, it’s only $0.40 cheaper to go à la carte, but still …]
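Spelled out: $99.95 + 8 × $9.95 = $179.55, versus $189.95 for Personal Edition 13; at the introductory prices, $69.95 + 8 × $9.95 = $149.55, versus $149.95.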
Even more significant than the increased weblog/author limits was a liberalization in what those terms mean, for the purposes of the license. The number of weblogs is no longer defined by the number in the user-interface (many people use several “sub-blogs” to assemble their weblog), but rather by the number of distinct URL’s. And only “active” authors and weblogs (those with new posts in the previous 90 days) count towards the total. This helps a lot. With the two blogs I host here on golem, I can comfortably get by with the $70 license.
There are three obvious options for most MT users:

- Switch to an open-source alternative, like WordPress.
- Stick with MT 2.661, and its more liberal licensing scheme.
- Upgrade to MT 3.0.
MT 2.661 did not suddenly cease to work when 3.0 was released. So there’s no reason not to keep using it, if it’s working for you. On the other hand, like any piece of abandon-ware, you cannot expect Six Apart to release fixes for any of the numerous bugs which have been uncovered in 2.661, nor for any that will surely be discovered in the future. Luckily, you have the source and, if you are comfortable patching it, you can keep it working forever.
I’m not averse to paying for software that I like. Like Mark Pilgrim, I’m a MacOSX user. Instead of running Linux (or Darwin), I’m willing to pay for the privilege of running software with features not available in its open-source alternatives. But, unlike Apple, which I expect to be around for some time to come, and which I expect will release major revisions to its OS at regular intervals for a price of $129 (considerably cheaper under UT’s site-license), I’ve no idea whether Six Apart will be around for the long haul, nor what their licensing terms for MT 4.0 will be.
That uncertainty is very troubling to me. I’ve spent a lot of effort developing the software for people to be able to post mathematical content on their blogs. I chose MT as a platform, not only because it was the most capable weblog software for the purpose, but also because I expected that it would remain free for personal, noncommercial use. Now I’m not so sure …
MT 3.0 sports significant extensions to the API, which ought to drastically simplify life for plugin authors like me. Perhaps (I haven’t really looked closely enough) my modifications to MT could be achieved without extensively patching the source code. A set of plugins and some modified templates, without any nasty patching of source-code, would make it infinitely easier for others to replicate what I’ve done here at Musings. But, with the new, restrictive licensing, fewer people would want to avail themselves of the easier installation.
For the vast majority of MT users, the obvious answer is to switch platforms. WordPress, in its current state of development, is adequate to their needs. And they can be assured that it will always be free.
I’m not in that position. If WordPress had the features needed to do what we do here at Musings, or if I could feasibly add those features, I would switch in a minute. But it doesn’t, and I can’t. I say that, not out of malice, but out of wistfulness. Matt’s a friend, and he and his colleagues are doing a great job with WordPress. It’s just not a feasible alternative for me … yet.
For the present, I’m sticking with MT. I’ll probably even pony up for the upgrade to 3.0. But if a suitable open-source alternative comes along …
Update (5/18/2004): In response to a comment by Anil Dash of Six Apart, I wrote the following on Phil Ringnalda’s blog. I think I ought to reprint it here:
But, after all, this *is* a developer release, and somebody like you can do things that you couldn’t do before (not just technically– you can sell services around MT now, like installation or plugins, and you can become a hosting partner for MT) so hopefully that makes up for things a little bit.
…
And, since I’m not involved in running or judging the plugin contest, I can say that I hope you enter and get yourself your share of the goodies. :)
Oh Joy! Now I can make money developing plugins for MT.
Thanks. I’ll keep my day job.
I can’t speak for Phil, but I didn’t get into developing for MT because I thought I could somehow make money off it.
To the contrary, the thing that worries me the most is that people, unhappy with the licensing restrictions, will flee the platform and my work will have been wasted.
If not with MT 3.0, then with MT 4.0 or …
Already, the idea of using MT 3.0 as courseware seems to have been rendered prohibitive by the new licensing terms.
I want 6A to succeed as a company. But I also want to know that I am not wasting my time.
Update (6/17/2004): 6A’s new pricing seems much more reasonable, both for individual and Educational users. And, much to my delight, they’ve upgraded me to an “Unlimited” Personal Edition.
May 8, 2004
So it’s Come Down to This
From Mazar-i-Sharif to Guantanamo to … , we’ve seen this one coming, haven’t we? We’ve just been in denial.
“As I understand it, technically unlawful combatants do not have any rights under the Geneva Conventions.”
— Donald Rumsfeld, January 11, 2002
Perhaps some MPs will be court-martialed. And some mid-level officers will take early retirement. But pinning the blame on a few “aberrant” troops (it’s always someone else to blame, isn’t it?) won’t cut it this time. As Brad DeLong would have it, the fish rots from the head.
May 3, 2004
Digging up the Landscape
Much excitement has been generated by the work of KKLT. At least for one class of compactifications down to 4 dimensions (F-theory backgrounds with fluxes), we seem to have the physics which lifts the degeneracy of the moduli space under good control.
I say seem, because there are some important gaps, one of which got filled today.
May 1, 2004
Write-In
Thanks to the machinations of Tom DeLay and the Texas Republicans, Austin is now the largest city in the nation without its own seat in the House of Representatives. Travis County was chopped up and parceled out among three Districts:
- District 21, which extends to San Antonio.
- District 10, which reaches the suburbs of Houston.
- District 25, which snakes hundreds of miles south to the Mexican border.
Our former Congressman, Lloyd Doggett, is running in District 25. We ended up in District 10, a challenge viewed as so hopeless that the Democrats did not even field a candidate in the Primary. That’s right: here in our part of the City of Austin, there’s no Democrat on the ballot come November.
My colleague at UT, Mathematics Professor Lorenzo Sadun, was sufficiently disgusted that he decided to run as a write-in candidate.
Sign the ballot petition (if you’re registered to vote in the 10th District), give him money or just wish him luck … He’ll need it.