

December 21, 2006

Blogs vs Wikis

There’s an interesting cross-blog conversation about using blogs as a research collaboration tool. I thought I’d take a little break from calculating KR groups and make a few comments.

What makes a good collaboration tool? The particular project I’m taking a break from is one that Dan Freed, Greg Moore and I have been slowly plodding along with. We’ve been conversing by conference-call and emailing around TeXed notes. Most of those notes need revising, in light of subsequent conversations … Not at all atypical. Indeed, it more or less describes most of the collaborations I’ve ever had. At the end of the day, I have an email box full of notes and comments and revisions thereof, all jumbled together in a not-very-coherent mess.

What we really need is a Wiki, where we can collect our results, make corrections, raise issues to be dealt with, etc. Blogging software is all very nice, but it really doesn’t lend itself to this “going back and revising” process that characterizes ongoing research. It’s a great tool for communicating with others, but it’s less than ideal for the purpose I’ve described. Heck, blogging systems don’t even have Revision Control.

I’ve been looking into Wikis for a while now. There are lots of different ones out there. But if we start restricting to those which have reasonable facilities for doing Math, the field narrows quickly.

  • MediaWiki uses texvc to render formulæ. Dave Harvey (whom I met in Minnesota) designed blahtex as a drop-in replacement for texvc. But MediaWiki isn’t XHTML-safe, so the advantage of blahtex (the ability to output MathML) may be moot.
  • Bob McElrath uses ZWiki, and he maintains LatexWiki, a plugin for ZWiki which produces PNG equations. Unfortunately, ZWiki is a resource-pig, and for that and other reasons, Bob seems to have given up on it.
  • TiddlyWiki, with the jsMath plugin is mind-blowingly cool (once you realize that the whole damned Wiki is running locally … in Javascript … in your browser). There is a server-side implementation, which Bob seems to be maintaining now. With access-control features and version-control, that may graduate TiddlyWiki from “personal notebook” to the “collaboration tool” I’m after. But jsMath (like most client-side tools for rendering math) seems kinda slow. And I wish it supported more of LaTeX.

There are other Wiki possibilities; MoinMoin seems quite good. But everything I’ve mentioned needs work before I’d find it a completely satisfactory solution. Whatever the installation requirements on the server-side, I want entering content to be as easy and LaTeX-like as possible. Ideally, it would use itex2MML server-side, and serve static pages, as much as possible.

Public or Private

A lot of the discussion revolves around whether this online research ought to take place in public or in private. It seems to me rather strange to advocate hard for doing it publicly and then, when you actually go to set it up, do so privately.

Personally, I’m of the opinion that most people really don’t want to know how the sausage is made. Urs Schreiber is marvellously uninhibited when it comes to discussing work-in-progress in his blog. That’s great, if you can do it. One of my New Year’s resolutions is to try to do more of that kind of “thinking aloud” here on Musings.

Ultimately, it’s not an either/or proposition. Some things are best kept under wraps; others would benefit from outside input. For instance, even if I had that hypothetical private Wiki set up for this project with Dan and Greg, there would be at least one public page, entitled Examples to Calculate. It would be great to get some feedback on orientifold backgrounds to which to apply our analysis. Right now, for instance, I’d like some examples of Calabi-Yau orientifolds with O7-planes, where the underlying Calabi-Yau has

  • Nontrivial fundamental and/or Brauer group.
  • $\mathrm{H}^2(X)_{\text{tor}} \oplus \mathrm{H}^3(X)_{\text{tor}}$ remains nontrivial after tensoring with $\mathbb{Z}[1/2]$.
  • If the example is physically-interesting, as a Type-IIB flux vacuum, so much the better.

Suggestions?

Update: Wiki Wishlist

Just for clarity, what I think I’m looking for in a Wiki (list to be updated, as warranted):

  1. Serves static (X)HTML pages.
  2. When the user clicks “edit”, uses AJAX to swap the (X)HTML+MathML content with the wiki+LaTeX text for editing.
  3. Is sufficiently pluggable that I could wire in itex2MML on the server-side.
  4. Either is good enough to emit well-formed XHTML (sounds unlikely), or could use Sam Ruby’s Javascript to allow MathML in HTML4.
  5. I’m willing to use Apache’s native access control capabilities, so built-in ACL’s are a plus, but not a requirement.
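
For item 3, the server-side wiring could be as simple as piping page source through the converter. Here’s a sketch, assuming itex2MML is installed as a stdin-to-stdout command-line filter (the function name is my own invention):

```python
import shutil
import subprocess

def render_itex(source, converter="itex2MML"):
    """Pipe itex/LaTeX source through a stdin-to-stdout converter binary,
    returning the XHTML+MathML output.  Raises if the binary is absent."""
    if shutil.which(converter) is None:
        raise FileNotFoundError(f"{converter} not found on PATH")
    proc = subprocess.run([converter], input=source,
                          capture_output=True, text=True, check=True)
    return proc.stdout
```

A wiki engine that exposes a “render” hook could call something like this on each save, caching the result so that static pages get served thereafter.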
Posted by distler at December 21, 2006 8:52 AM

TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/1083

83 Comments & 0 Trackbacks

Re: Blogs vs Wikis

Jacques: Your remark about my advocating hard for one model and then doing another is a little sad. Are you against the idea of people learning something from their discussions and modifying their ideas? I’m certainly not, and I’ll venture that you should not be either. There’s no shame in changing one’s mind, and the reason I raised the idea in the first place (in that first post) was to get people’s opinions. That’s why I blog, in fact… to hear others’ opinions on things, to help me learn. I learned that the private aspect people were concerned about is *indeed* important when setting something up that involves people who are just learning to find their own voices in the field. They don’t want to do that in public, and I had not appreciated that as much in my original thoughts. So I set up a private system for my students and myself to work on projects within our group. This is in fact different from what the other model was proposed to achieve, and so there simply is no contradiction. I still think that the public model I advocated (globally participated in by many research groups) has a place too. (See my recent comment to Moshe in the recent thread of Urs.) But that was not what I was trying to set up in the instance I talked about in the more recent post; privacy was paramount in this case, and I focused on that, mindful of things that people - including yourself - had said in the comments of that first post.

There’s nothing wrong with learning from others’ opinions, experience and ideas, Jacques. That’s all I did.

Cheers,

-cvj

Posted by: cvj on December 21, 2006 10:54 AM | Permalink | Reply to this

Public vs Private

I probably misconstrued your comment, then. (If so, I apologize.) I had understood that you had decided to make your research blog private because of the disapprobation you felt you had encountered, in the comments to your original post, for the idea of a public research blog.

It seemed to me that, just because people were telling you that a public research blog wasn’t right for them (or for many of their colleagues) should have little bearing on whether it was right for you (which, at the time, you clearly seemed to indicate it was).

Diff’rent strokes for diff’rent folks. Urs, after all, has been running an outrageously successful public research blog for years now. And nobody has voiced the opinion that he is making a mistake in doing so.

I am, however, entirely sympathetic if you say that a public research blog wasn’t the thing that would meet your needs (and those of your students) at this particular juncture.

I see no contradiction there, as should be clear from what I have written in the body of my post.

Posted by: Jacques Distler on December 21, 2006 11:20 AM | Permalink | PGP Sig | Reply to this

Re: Public vs Private

Hi,

The idea in my original post was primarily about a research blog (or other device.. maybe a wiki is better.. I don’t know) that many research groups around the world could take part in as an ongoing conversation or series of conversations… a sort of wildlife reserve of ideas, if you like. A resource for everybody. The issue of just how public it should be, whether it should be by membership only, etc, etc, is a secondary discussion, in fact. It was part of a discussion I was trying to get going about using the blog format as a research tool, to sit alongside, say, the arXiv. The discussion got hung up on several points that I would have preferred it not to, but that’s life. I did learn a lot about people’s concerns for privacy and also about priority issues - ones that I had not anticipated would be quite so strong concerns. I learned from that. Given the mixed feelings people had about the whole thing, it did not seem to be the right time to go ahead and try to set something up. I backed off on that more lofty goal, but there are other goals to achieve.

This more recent post was about how I implemented something that works for me in the context of having discussions within my group. Very different goals to be achieved there. And no contradiction with the first goal/idea.


So good, we agree.

As for just going ahead and implementing something for the more global idea without taking into account the wants, needs and strong concerns of the very people I’d like to see participating in it… that would be an odd way to proceed, wouldn’t it?

Cheers,

-cvj

Posted by: cvj on December 21, 2006 11:48 AM | Permalink | Reply to this

Re: Public vs Private

As for just going ahead and implementing something for the more global idea without taking into account the wants, needs and strong concerns of the very people I’d like to see participating in it… that would be an odd way to proceed, wouldn’t it?

Actually, no it wouldn’t.

I’m a strong believer in the “Release early, release often.” philosophy. Build something you think people might be interested in, put it online, see what the reaction is, and then adapt to make it better.

That’s the way the arXivs were created, and indeed the Web itself. It’s a very successful model. It works much better than sitting around discussing what the ideal way to do “X” is (whatever “X” is) and speculating about how popular an implementation of “X” would be.

Posted by: Jacques Distler on December 21, 2006 3:05 PM | Permalink | PGP Sig | Reply to this

Re: Public vs Private

Up to a point, you are correct. But not taking into account opinion at all is also not a good way to proceed. A little bit of both approaches. I was also around when the arXiv started. It did not just spring up out of nothing. Opinions were sought.

-cvj

Posted by: cvj on December 21, 2006 3:32 PM | Permalink | Reply to this

Re: Public vs Private

Furthermore, I would say that in the last two days of discussion about this issue, across three blogs, the “sitting around discussing” you refer to has probably saved a huge amount of time and effort, since whoever builds the prototype system will not have to waste time figuring out some of the best and most useful features… A lot of great suggestions are being ironed out with a little discussion first.

Best,

-cvj

Posted by: cvj on December 21, 2006 3:41 PM | Permalink | Reply to this

The arXivs

Opinions were sought.

As an Assistant Professor at Princeton, I must have been out of the loop.

All I remember was some people at Aspen complaining that their email inboxes were over-quota (this was in the days when there was still such a thing as disk quotas) because they were subscribed to Joanne Cohn’s email list. Ginsparg then went home and wrote some shellscripts. I don’t recall him telling anyone beforehand what he had in mind (let alone asking them how they thought it should work).

But, as I said, I may have been out of the loop.

Posted by: Jacques Distler on December 21, 2006 4:22 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

I have made several MediaWiki installs over the past few months, and I’ve recently started playing around with GroupWiki, a MediaWiki fork which seems to allow finer control over who can edit which page. MediaWiki appears to be sturdy software, but as soon as I try to add any but the most trivial features, I stumble into a thicket of PHP which resembles nothing so much as the mutant offspring of MS-DOS batch files.

One thing I have sorely missed while doing math markup in Wikipedia articles has been the ability to use \newcommand. Perhaps I rely upon macros too much, but they can be an awfully nice way to enforce consistency and make the markup human-readable. (Once I had a healthy set of definitions built up, I discovered I could type my notes in LaTeX as fast as my professors could lecture, which was convenient.)
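
The consistency point is easy to illustrate with a couple of hypothetical definitions (the macro names here are invented for the example):

```latex
% Hypothetical macros: define notation once, use it everywhere.
\newcommand{\Hilb}{\mathcal{H}}              % the Hilbert space
\newcommand{\ket}[1]{\left|#1\right\rangle}  % Dirac ket
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}

% The markup stays readable, and renaming \Hilb later is a one-line change:
% For every $\ket{\psi} \in \Hilb$ with $\norm{\psi} = 1$, ...
```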

In my experience, wikis have been a good way to generate material when the goal is a specific document: a set of class notes, a journal article, etc. They don’t have very convenient features for handling discussions. To have a semblance of threading on a Wikipedia talk page, for example, you have to indent each paragraph manually with lots of colons (which the parser turns into nested <dd> and <dl> tags, in a manner which breaks if you also include <div> tags, as I discovered yesterday afternoon). Recently, I started hacking on a MediaWiki extension which lets you drop a tag onto a wiki page and conduct a blog-style discussion within the wiki. If the people around the office find it useful, it might turn out to be a good tool.
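
The colon-indentation convention can be sketched as a toy renderer — an illustration only, not MediaWiki’s actual parser, which is considerably hairier:

```python
def colons_to_html(lines):
    """Toy rendering of MediaWiki-style colon indentation into nested
    <dl><dd> tags, roughly what the parser does on talk pages."""
    html, depth = [], 0
    for line in lines:
        # Indentation level = number of leading colons.
        level = len(line) - len(line.lstrip(":"))
        text = line.lstrip(":").strip()
        while depth < level:           # open one nesting level per colon
            html.append("<dl><dd>")
            depth += 1
        while depth > level:           # close levels when dedenting
            html.append("</dd></dl>")
            depth -= 1
        html.append(text)
    while depth > 0:                   # close anything left open
        html.append("</dd></dl>")
        depth -= 1
    return "".join(html)
```

One can see why dropping a `<div>` into the middle of such a structure breaks it: the closing tags are generated purely from the colon count, with no awareness of other block-level markup.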

Posted by: Blake Stacey on December 21, 2006 11:39 AM | Permalink | Reply to this

Re: Blogs vs Wikis

It seems that with MediaWiki you are always forced to fork to get math working properly, and then, when it comes to updates, to remix and refork. At least that was my experience a year ago.

Actually, I wonder if I can claim the first PhysRevD citation of a wiki: Phys. Rev. D 73, 013009 (2006) cites http://www.physcomments.org/wiki/index.php?title=Bakery:HdV, one of my un-arxivable compilations of half-baked ideas. The problem came when the provider upgraded the PHP support, forcing me to upgrade MediaWiki to a new version which was incompatible with my math patch. Add the database-oriented way of storing the information, and voilà, you inherit a missing link.

Posted by: A. Rivero on December 21, 2006 1:20 PM | Permalink | Reply to this

Re: Blogs vs Wikis

…That’s why I maintain my own server. If it’s broken, you can be sure it’s my fault. ;)

Posted by: Bob McElrath on December 23, 2006 9:32 PM | Permalink | Reply to this

Re: Blogs vs Wikis

It might be worth pointing out the String Vacuum Wiki. I’m not sure what its status is, but it’s definitely an attempt to do this sort of thing.

Posted by: Aaron Bergman on December 21, 2006 11:54 AM | Permalink | Reply to this

Re: Blogs vs Wikis

UniWakka does support LaTeX-like math.

Posted by: Zack on December 21, 2006 12:10 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

It might also be worth pointing out the “Physics Tools for the LHC” Wiki, which if I remember correctly was created by Matthew Schwartz. This followed discussions at various workshops at which there was broad agreement that such a Wiki would be useful, but it has seen little use so far. The hope, as I understand it, was that a knowledge base of how to use and interface various Monte Carlo tools could be accumulated in one place. I expect such a site would have to develop a critical mass of existing content before many people start to use it.

In a small MC software project with just a few participants, we have recently found PBWiki to be a useful (because free and easy) way to set up a collaborative online space. I wouldn’t want to use it for anything requiring TeXed equations, though.

Posted by: Matt Reece on December 21, 2006 1:12 PM | Permalink | Reply to this

TW + jsM

physicswiki

is a working example of TiddlyWiki used with jsMath, with many modifications.

“This is my Wiki. There are many like it, but this one is mine…”

Posted by: garrett on December 21, 2006 3:03 PM | Permalink | Reply to this

Re: TW + jsM

I apparently can’t enter a hyperlink properly.

physicswiki

Posted by: garrett on December 21, 2006 3:08 PM | Permalink | Reply to this

Re: Blogs vs Wikis

I’m flattered you think TiddlyWiki+jsMath is “mind-blowingly cool”, though honestly my plugin is tiny; the credit goes to Jeremy Ruston for TiddlyWiki, and to Davide Cervone for jsMath. I have a rather extensive (private) site now with my notes on it, and tons of plugins. With the plugins you can add a calendar, a to-do list, FAQs, math, alternate wiki rendering, XHTML, arXiv references, an RSS feed, an XML reader (in fact I read the arXiv in my TiddlyWiki), trackbacks; blog-like plugins like CommentPlugin and RecentPlugin together make a decent blog-like site. Personally, I thought the CommentPlugin turned every TiddlyWiki into a bathroom wall, or “sandbox” in wiki lingo, and it was just a mess. (I generally hate all forum-like sites because I hate reloading the page 800 times to have a conversation.)

Obviously I’ve been thinking about these issues for a few years, so I’ll share some. (Hey, maybe I should start a blog too…) While you’ve been figuring out how to get TeX on your blog, I’ve been doing the same for wikis. Though, I’ve probably been less visible to the physics community than you. ;)

I started my original ZWiki Notes page for exactly the reasons you cite, and a few others. The original site had an email interface, so that email discussions among myself and collaborators, which typically contain some tex, would be rendered. This worked fine in principle, unless you made a mistake in your tex, and then you had to go on the web and fix it, or the page containing the email would break. I did later adapt itex to run in ZWiki too. Though I stopped using ZWiki, I still cringe a bit every time I write tex in an email (or some ascii-art math), and I still want a tool like this. FYI, my LatexWiki plugin for ZWiki has continued in the Axiom Project (a computer algebra package that all physicists should take a serious look at!)

This really hits at the heart of the problem, though. A Wiki is designed to be simple. Its text markup rules are supposed to be so simple that you’ll never screw them up. And it’s supposed to be tolerant of your mistakes, rendering the page in some sensible way anyway. Obviously, this limits the complexity of wiki input. TeX math is simply not simple. Not everyone has adhered to this (perhaps for the better), and everything from tables to math to music notation has been added to wikis. Wikipedia certainly supports the largest set of weird kinds of input that are neither simple nor error-tolerant. But if you’re going for complex input, why bother with a wiki in the first place? Why not write it directly in TeX/XHTML/RTF? A strong argument can be made, however, for adding tex math to a wiki, as it is common practice to include simple equations in simple (email) conversations.

HTML, TeX, and various other document types are based around a write/compile/check cycle. Once any input format gets too complex, you’ve just got to check everything you write. (Including this blog entry, grr…) In such circumstances, you want to keep the write/compile/check cycle as short as possible.

So, TeX on a blog or wiki is halfway between the wiki idea and a full blown tex document. You need a write/compile/check cycle but you want it to be fast. The TiddlyWiki+jsMath accomplishes most of this, but I lost the original idea of collaborating in email. Maybe I should get over my love affair with mutt+vim and move to the web era for communication/collaboration…or maybe someone will be adventurous enough to add some kind of email gateway to ZiddlyWiki.

It’s clear that TiddlyWiki represents a fundamentally different way of thinking about documents. Jeremy Ruston and Gregg Wolfe recently visited northern CA and I had some very interesting conversations with them on this topic. There is, for instance, a book on web campaigning available. You can zip up your tiddlywiki after taking some notes, and email it to someone (thus keeping it private). People generally keep one on a USB stick. Many people take notes in class using it. People embed gmail using an <iframe> and check their email inside a tiddlywiki.

Regarding your comments about itex, one should keep a few things in mind. LaTeX is a macro system. Most modern markup and computer languages are described by a GLR grammar (because it is very predictable); itex uses a grammar (flex/bison). It is generally not possible to translate a macro-expansion system into a grammar. The fundamental reason is that LaTeX macro expansion is Turing complete. You can, in general, use it to perform computations. Most other “simple” languages such as HTML, fortran, or C are not Turing complete – though C++ is Turing complete via its template system and the C Preprocessor is a macro expansion system and is Turing complete. Therefore, itex can never emulate all of latex. You may have programmed in more macros that don’t require Turing completeness than are present in jsMath, but itex will never be capable of truly parsing latex. (Try feeding it a revtex file, for example.)

jsMath (used by my TiddlyWiki plugin) is a literal and faithful translation of the TeX parsing rules as laid out in the TeXbook into javascript. (Quite a feat, if you ask me!) Thus, in my mind, it is the best it could possibly be, without actually being Turing complete. (Though it might actually be – I’m not sure.) I think it would be better to simply define your favorite macros, such as the AMS symbols. So, arguing that we need more of latex in tex converter X (e.g. itex, tex4ht, jsMath, etc.) is arguing to turn X into a Turing-complete macro language. Here lies madness. Instead, I think it would be better to define some subset of commonly used TeX in a way that can be recognized by a parser. (In particular, the subset corresponding to MathML is an obvious choice.) This is exactly what itex is. Don’t confuse your favorite set of macros with TeX itself. Everyone has different favorites.
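
The macro-expansion point can be seen in miniature. A grammar-based converter can tokenize the following, but expanding it is evaluation, not parsing (uses e-TeX arithmetic; illustrative only):

```latex
% Macro expansion as computation: a recursive countdown.
\def\countdown#1{%
  \ifnum#1>0
    #1,\ \expandafter\countdown\expandafter{\the\numexpr#1-1\relax}%
  \else
    liftoff%
  \fi}
% \countdown{3} typesets: 3, 2, 1, liftoff
```

No fixed grammar can anticipate what `\countdown` produces; the only way to find out is to run the expansion, which is precisely what a parser-based tool like itex declines to do.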

I do agree with you on jsMath’s speed. The fundamental reason is that it must “measure” the size of glyphs by making a hidden <div>, drawing them, and then asking for the size of the <div> using javascript. Not exactly efficient. I have started a LaTeX MathML plugin that converts tex directly to MathML, based on ASCIIMathML. (This isn’t quite finished or fully working; feel free to grab it and try it though – if you enhance it, please shoot me an email.) As soon as this is working I’m going to dump jsMath, because I do believe the future is MathML. MathML rendering is still rather horrid in appearance, though. It’s just not as pretty or finely-tuned as TeX. But the more of us use it, the more we’ll complain about and fix these kinds of things in browsers.

Happy Holidays to all!

Posted by: Bob McElrath on December 23, 2006 8:50 PM | Permalink | Reply to this

Re: Blogs vs Wikis

“Most other “simple” languages such as HTML, fortran, or C are not Turing complete – though C++ is Turing complete via its template system and the C Preprocessor is a macro expansion system and is Turing complete.”

I realize I’m ignorant here, but this statement sounds odd. Isn’t C Turing complete? I thought the amusing thing about the C++ template system is that templates alone are Turing complete – they can be made to do calculations at compile time.

My understanding had been that you don’t need much at all for Turing completeness. In fact, I thought I recalled some discussion of how features were specifically not added to some markup language out of a puzzling desire to not have a Turing-complete language, though I’ve forgotten the details.

But, I know essentially nothing about computer science, so perhaps I should seek enlightenment. Wikipedia appears to back up my preconceptions….

Posted by: Computer Science Ignoramus on December 23, 2006 11:07 PM | Permalink | Reply to this

One Tiddly to Rule them all

I’m flattered you think TiddlyWiki+jsMath is “mind-blowingly cool”, though… [long list of plugins which do all kinds of amazing things]… People embed gmail using an <iframe> and check their email inside a tiddlywiki.

Clearly, TiddlyWiki is just one step away from World Domination. All you need is a plugin to implement Emacs and the takeover will be complete!

More seriously, TiddlyWiki seems like the perfect personal document system.

It’s clear that TiddlyWiki represents a fundamentally different way of thinking about documents. … You can zip up your tiddlywiki after taking some notes, and email it to someone (thus keeping it private). People generally keep one on a USB stick.

Yes, one can zip up a bunch of Tiddlers and mail them to a friend, but what I am trying to get away from is the business of emailing documents (and revisions thereof) back and forth. What I want is a collective workspace.

Maybe one can achieve that by storing a TiddlyWiki on a WebDAV share. I haven’t really thought this through very well. One still needs locking (and, hopefully, some crude versioning), otherwise you run into trouble with two people editing the same file at the same time.
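
For reference, the locking half of this is standardized: a WebDAV client takes out a lock with a LOCK request carrying an XML `lockinfo` body (RFC 4918). A minimal sketch of building one — the owner URL and helper names here are my own, and actually sending the request would of course need a real WebDAV server:

```python
from xml.etree import ElementTree as ET

DAV = "DAV:"  # the WebDAV XML namespace

def lock_body(owner_url):
    """Build the XML body for an exclusive write lock (RFC 4918 LOCK)."""
    ET.register_namespace("D", DAV)
    root = ET.Element("{DAV:}lockinfo")
    ET.SubElement(ET.SubElement(root, "{DAV:}lockscope"), "{DAV:}exclusive")
    ET.SubElement(ET.SubElement(root, "{DAV:}locktype"), "{DAV:}write")
    owner = ET.SubElement(root, "{DAV:}owner")
    ET.SubElement(owner, "{DAV:}href").text = owner_url
    return ET.tostring(root, encoding="unicode")

def lock_headers(timeout_seconds=600):
    # Timeout and Depth per RFC 4918; the server replies with a Lock-Token
    # header that must accompany subsequent PUTs to the locked resource.
    return {"Timeout": f"Second-{timeout_seconds}", "Depth": "0",
            "Content-Type": "application/xml; charset=utf-8"}
```

Crude versioning could then live entirely on the server, snapshotting the file on each unlocked PUT.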

jsMath (used by my TiddlyWiki plugin) is a literal and faithful translation of the TeX parsing rules as laid out in the TeXbook into javascript. (quite a feat, if you ask me!)

Davide Cervone, whom I also met in Minneapolis, has mad skilz. You should have seen the WYSIWYG math editor, based on jsMath, that he demo’d there (“a couple of days’ work,” he said). In many ways, I like jsMath much better than itex2MML. But, everyone agrees, it’s slow. MathML is the way of the future.

Of course, I also like the idea of moving away from XML’s Draconian error-handling.

As soon as this is working I’m going to dump jsMath because I do believe the future is MathML. MathML rendering is still rather horrid in appearance though. It’s just not as pretty or finely-tuned as TeX. But, the more of us use it, the more we’ll complain about and fix these kinds of things in browsers.

I’ve been doing my share of bug-reporting, as of late. I urge you to do the same.

Posted by: Jacques Distler on December 23, 2006 11:59 PM | Permalink | PGP Sig | Reply to this

Re: One Tiddly to Rule them all

If you want collective editing, there are two options currently: ZiddlyWiki (which I need to get around to releasing a new version of…) and ccTiddly. We have had several discussions about unifying some of this, and about using WebDAV. In principle, all you need for a server is WebDAV, and to a large extent ZiddlyWiki already does it, in that your tiddlers are all separate files. Zope is WebDAV-capable. In principle all one needs is WebDAV read/write and an empty directory for tiddlers, and you have a server side. Many of us want this, but no one has had the time to implement it yet. Search the TiddlyWiki or TiddlyWikiDev Google groups for more discussion on this. If you want to help out, the ZW code is in Subversion.

Currently ZiddlyWiki does locking using its own method, but WebDAV does have its own locks and ZiddlyWiki respects them (so it avoids conflicts if someone is editing with WebDAV while someone else edits inside their TiddlyWiki). Versioning is done with the Zope revision system, however, and would need to be rewritten to work on WebDAV. I want this on WebDAV because Zope is a pain in the ass to set up. If I hadn’t already been running Zope, I doubt I would have jumped into ZiddlyWiki. (However, note that there are free Zope servers out there.)

Anyway, you want a server-side for collaborative editing. I haven’t tried it for collaborating yet, but maybe I will. (My collaborator Ben Lillie discovered TiddlyWiki+jsMath because of your blog and was very excited about it.) Note that you can keep things private to your collaborators by enforcing logins and adding the private tag to tiddlers, thereby achieving some mixture of public and private content. But how would you maintain the flow of a conversation? TiddlyWiki is very much like the original hypertext: a dizzying array of interconnected nodes. Perhaps I’ll look again at CommentPlugin+RecentPlugin to see if this will work.

Finally, I’ve had a couple conversations with Davide about putting a MathML output module into jsMath. He’s quite receptive, and it should be quite fast. Now we just need to find someone with the time to write it.

Posted by: Bob McElrath on December 24, 2006 10:31 AM | Permalink | Reply to this

Access Control

Currently ZiddlyWiki does locking using its own method, but WebDAV does have its own locks and ZiddlyWiki respects them (so it avoids conflicts if someone is editing with WebDAV while someone else edits inside their TiddlyWiki)

I really should look into ZiddlyWiki (not that I really have any desire to set up Zope, just for this purpose). But I find this confusing.

How does ZiddlyWiki know whether someone is editing a Tiddler (since that editing happens locally on the user’s machine)?

Note that you can keep things private to your collaborators by enforcing logins and adding the private tag to tiddlers…

Is that some access-control enforced by ZiddlyWiki? I don’t see how a “private” tag can (on the client side) prevent me from viewing-source on a Tiddler.

Anyway, as I explain in my followup entry, I think Instiki does the whole access-control thing wrong (or, at least, in too simple-minded a fashion for my tastes).

Posted by: Jacques Distler on December 24, 2006 6:40 PM | Permalink | PGP Sig | Reply to this

Re: Access Control

How does ZiddlyWiki know whether someone is editing a Tiddler (since that editing happens locally on the user’s machine)?

Because when you edit a tiddler, it does locking on the server to prevent conflicts. It’s not a WebDAV lock, but it respects those, in case you’re using Zope’s External Editor – which just uses WebDAV and feeds the file to your favorite editor.

Is that some access-control enforced by ZiddlyWiki? I don’t see how a “private” tag can (on the client side) prevent me from viewing-source on a Tiddler.

Tiddlers tagged with private or onlyAdmin are not delivered to anonymous clients when they visit your page. Essentially, ZiddlyWiki assembles the TiddlyWiki when you request it, and can exclude some tiddlers if the access permissions are not sufficient. You can’t view-source on tiddlers you don’t have. ;) Also, if you log in, it will then download anything to which you didn’t previously have access. (No page reload!)

The Zope security capabilities are massive, massive overkill for the needs of ZiddlyWiki. So if the ZiddlyWiki permissions are not enough for you, it is not too hard to add more (just a couple of lines of Python, really).
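
The tag-based filtering Bob describes could be sketched roughly like this — function and field names are my invention, not ZiddlyWiki’s:

```python
def assemble(tiddlers, user_roles):
    """Toy sketch of per-request assembly: drop tiddlers whose tags
    exceed the requester's permissions before the page is ever sent.

    `tiddlers` is a list of dicts; `user_roles` is a set, empty for an
    anonymous visitor."""
    visible = []
    for t in tiddlers:
        tags = set(t.get("tags", []))
        if "onlyAdmin" in tags and "admin" not in user_roles:
            continue                    # admins only
        if "private" in tags and not user_roles:
            continue                    # hidden from anonymous visitors
        visible.append(t)
    return visible
```

Because the exclusion happens server-side, before assembly, there is simply nothing in the delivered page for an unauthorized client to view-source on.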

Posted by: Bob McElrath on December 25, 2006 1:01 AM | Permalink | Reply to this

Re: Access Control

Because when you edit a tiddler, it does locking on the server to prevent conflicts.

That does not appear to be a feature of standard TiddlyWiki Tiddlers. Clicking on the “edit” button changes the template with which the Tiddler is displayed, but does not induce any new communications with the server.

  • Do ZiddlyWiki Tiddlers have some modified Javascript code which sends an HTTP request back to the server when you click the “edit” button?
  • What happens when you host such a modified Tiddler on a non-ZiddlyWiki host, or move a standard Tiddler to ZiddlyWiki?
Posted by: Jacques Distler on December 25, 2006 10:08 AM | Permalink | PGP Sig | Reply to this

Re: Access Control

Remember the standard TiddlyWiki does no AJAX and makes no HTTP requests. It is solely a client-side application.

Do ZiddlyWiki Tiddlers have some modified Javascript code which sends an HTTP request back to the server when you click the “edit” button?

Yes. The ZiddlyWikiPlugin (note: unreleased code) hijacks the load/save callbacks (among others) to do this using AJAX.

What happens when you host such a modified Tiddler on a non-ZiddlyWiki host, or move a standard Tiddler to ZiddlyWiki?

The operations you are describing are essentially cut-and-paste copies of the original content. Therefore, no synchronization can be maintained with the original source.

I had some very interesting discussions with Jeremy Ruston about tiddler fingerprints – being able to keep track of the “genealogy” of a piece of content, if you will. But at present this is just idle discussion and doesn’t exist.

Posted by: Bob McElrath on December 28, 2006 1:20 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Why the “vs”?
It sounds like forum vs blog, or forum vs wiki (a wiki with comments is no longer a pure wiki! Want comments? Install a forum!).

There is no need to separate, rather there is need to converge.

Say, Trac: wiki + issue tracker.
Say, NPJ: wiki + blog + issue-tracker plugin.

Posted by: Arioch on December 31, 2006 6:31 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Hi,

I wonder if technology has progressed enough since this discussion that modifying Mediawiki source to accommodate itex2MML has become a real possibility?

I am writing a diary of my experience over the past 2 days here:

Porting nLab to Mediawiki

I’m new to PHP, but my impression so far is that it should certainly be doable.

Any words of wisdom (or even collaboration) would be greatly appreciated. I’m extremely happy with itex2MML and I am also a fan of Mediawiki, so getting itex2MML to work in Mediawiki would be fantastic.

Posted by: Eric on June 1, 2009 3:54 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Maybe a more specific question would be about the issue of “XHTML safe”. Is it still the case that Mediawiki is not XHTML safe? And what does that mean?

In the comments to Parser.php, it says:

PHP Parser - Processes wiki markup (which uses a more user-friendly syntax, such as “[[link]]” for making links), and provides a one-way transformation of that wiki markup into XHTML output / markup (which in turn the browser understands, and can display).

I’m thinking that since Parser produces XHTML, then we can slip a call to itex2MML in there somewhere. Is it that easy? (I know it never is)

Posted by: Eric on June 1, 2009 4:09 AM | Permalink | Reply to this

Re: Blogs vs Wikis

I wonder if technology has progressed enough since this discussion that modifying Mediawiki source to accommodate itex2MML has become a real possibility?

What do you mean by “the technology”? What do you expect to have changed?

I’m new to PHP, but my impression so far is that it should certainly be doable.

Depends what “it” is.

Maybe a more specific question would be about the issue of “XHTML safe”. Is it still the case that Mediawiki is not XHTML safe?

Yes.

And what does that mean?

“Does not — and cannot be easily modified to — reliably output well-formed XML.”

Changing that would mean

  1. a considerable expenditure of effort
  2. forking Mediawiki

I can’t imagine why anyone would be so in love with that particular piece of software that they wished to expend the required effort.

But, if you are, then creating an XHTML-safe fork of Mediawiki would surely be a worthwhile endeavour.

Best of luck!

Posted by: Jacques Distler on June 1, 2009 8:29 AM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Thanks!

So, if I understand correctly, Mediawiki now produces XHTML (or so it claims in the source code), but that XHTML is not “safe”? If that’s true, then it seems like a bigger problem that they need to address. I’m not sure if that would kill the idea of adding itex2MML to it though. Would it?

By the way, it’s not that I’m so much in love with Mediawiki. It’s just that Instiki is lacking some features that we’d like. We can either spend effort improving Instiki or spend effort getting Mediawiki to work with itex2MML. My personal opinion is that itex2MML should become a standard for doing mathematics on the web. If we can get itex2MML working with Mediawiki, then this might work its way into the main Mediawiki source, which would then make it available on Wikipedia. That has a poetic sound to it.

Posted by: Eric on June 1, 2009 10:08 AM | Permalink | Reply to this

Re: Blogs vs Wikis

So, if I understand correctly, Mediawiki now produces XHTML…

It has always produced “XHTML”. That means nothing.

It’s just that Instiki is lacking some features that we’d like.

Such as?

If we can get itex2MML working with Mediawiki, then this might work its way into the main Mediawiki source, which would then make it available on Wikipedia. That has a poetic sound to it.

As one can discern from the fate of the BlahTeX project, the developers of Mediawiki and/or the maintainers of Wikipedia (since the primary purpose of Wikimedia is to drive Wikipedia, there’s not much point in distinguishing between the two) are not the least bit interested in going down that road …

Posted by: Jacques Distler on June 1, 2009 10:22 AM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Such as?

Redirects is one.

The jury is out on whether it is due to the server or to Instiki, but the nLab often slows to a crawl if 3 or more people are editing content (even on separate pages) simultaneously. There’s also been some extremely weird behavior where what one author types appears on another author’s screen (and vice versa).

Like I said here, Instiki is great for what it was designed for, but it does not seem to be designed for large semi-permanent encyclopedic projects involving 10s to 100s of authors.

Preview would be nice.

Sharing a developer base with Mediawiki would be nice so that 10 years from now, we won’t be completely obsolete.

Etc etc.

Re blahtex…

Just because something was not accepted 3 years ago doesn’t mean it was a bad idea or that it will not be accepted in the future.

What was your opinion of blahtex? How did it compare to itex2MML?

I may be dreaming, but I’m looking for an Achilles’ heel, i.e. some spot where we can sneak in a call to itex2MML that requires very little code modification.

Posted by: Eric on June 1, 2009 11:02 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Redirects is one.

Maybe you can explain to me exactly what is required, then.

I can think of two things.

  1. Multiple URLs pointing to the same resource (i.e., to the same wiki page).
  2. The software takes “Wikilinks” — text between double square-brackets, [[...]] — and generates URLs. Perhaps you want multiple Wikilinks to generate the same URL.

In either case, the Mediawiki implementation is B.A.D., since neither of the above requires HTTP redirects, much less special (“redirect”) wiki pages.
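Either interpretation can be implemented with a simple lookup at link-generation time. A minimal Python sketch (the alias table and names are made up for illustration): every alternative name is mapped straight to the canonical page’s URL, with no HTTP redirect and no special “redirect” pages involved.

```python
# A table of alternative names for pages. When a Wikilink is turned
# into a URL, the table is consulted, so every alias yields the
# canonical page's address directly.
ALIASES = {"infinity-category": "∞-category"}

def wikilink_url(name, base="/wiki/show/"):
    """Resolve a Wikilink name straight to the canonical page's URL."""
    return base + ALIASES.get(name, name)

print(wikilink_url("infinity-category"))  # → /wiki/show/∞-category
print(wikilink_url("∞-category"))         # → /wiki/show/∞-category
```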

The jury is out on whether it is due to the server or to Instiki, but the nLab often slows to a crawl if 3 or more people are editing content (even on separate pages) simultaneously.

I gather there have been some performance issues with ncatlab.org, lately. I don’t know what the cause is. I made some configuration changes, yesterday, that may or may not fix them.

In any case, I doubt very much that they have anything to do with having 3 or more users editing at the same time.

There’s also been some extremely weird behavior where what one author types appears on another author’s screen (and vice versa).

What are you talking about?

Instiki … does not seem to be designed for large semi-permanent encyclopedic projects involving 10s to 100s of authors.

Could you elaborate on what aspects of its design are ill-suited to the use-case you are envisioning?

Preview would be nice.

I’ve heard this request before. I’m not sure why it’s desirable to distinguish between “Submit” and “Preview”.

Unlike Mediawiki, multiple successive “Submit”s do not generate multiple revisions. This way, you diminish the chance of data-loss (through the browser crashing, or people simply forgetting to click “Submit” after previewing). The latter is the single biggest complaint about Mediawiki.

Sharing a developer base with Mediawiki …

Since you are talking about forking the latter project, I don’t see how you’re “sharing” anything.

What was your opinion of blahtex? How did it compare to itex2MML?

BlahTeX was designed as a drop-in replacement for texvc. It supports PNG generation (as texvc does), but can also generate MathML.

The particular dialect of TeX that it supports is – by design – supposed to be the same as texvc (hence the phrase “drop-in replacement”). itex2MML is not similarly constrained.

Otherwise, they are quite comparable.

I may be dreaming, but I’m looking for an Achilles’ heel, i.e. some spot where we can sneak in a call to itex2MML that requires very little code modification.

I think you are missing the forest for the trees.

Posted by: Jacques Distler on June 1, 2009 11:36 AM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

There’s also been some extremely weird behavior where what one author types appears on another author’s screen (and vice versa).

What are you talking about?

Urs could tell you better than I could because he experienced it, but apparently he and Andrew Stacey were editing a page simultaneously. What Urs wrote, appeared, in real time, in Andrew’s edit box, as if Andrew had a ghost writer sitting at his computer. And vice versa. What Andrew typed in his edit box appeared, in real time, on Urs’ monitor. Like some accidental “chat”.

I think you are missing the forest for the trees.

My humble task will be complete when/if I can go to nLab, copy some wikitext, open an edit box in (some possibly modified version of) Mediawiki, paste the wikitext and have it render the page correctly using itex2MML in the background. If that can be accomplished with a cheesy one-line hack that does not solve the XHTML safe issue, I’ll be more than satisfied.

Posted by: Eric on June 1, 2009 12:25 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Urs could tell you better than I could because he experienced it, but apparently he and Andrew Stacey were editing a page simultaneously. What Urs wrote, appeared, in real time, in Andrew’s edit box, as if Andrew had a ghost writer sitting at his computer. And vice versa. What Andrew typed in his edit box appeared, in real time, on Urs’ monitor. Like some accidental “chat”.

I think you are confused.

My humble task will be complete when/if I can go to nLab, copy some wikitext, open an edit box in (some possibly modified version of) Mediawiki, paste the wikitext and have it render the page correctly using itex2MML in the background. If that can be accomplished with a cheesy one-line hack that does not solve the XHTML safe issue, I’ll be more than satisfied.

I’m not sure why you would find that satisfying.

But, if it gives you pleasure …

Posted by: Jacques Distler on June 1, 2009 1:00 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

I think you are confused.

That wouldn’t be the first time. Andrew just explained:

Eric, looking at your comments on Jacques’ blog, may I just correct one thing. The time when Urs and I got each others’ output was when we were logged in to the server simultaneously and is due to the strange way that the virtual server is configured. It has nothing to do with Instiki.

You said:

I’m not sure why you would find that satisfying.

I was just describing my specific goal for this project, which I think is relatively humble, i.e. no grand plans to clean up Mediawiki to the satisfaction of XHTML standard setters, etc. I hope the justification for that goal is clear, i.e. it would mean that we have a functioning Mediawiki that uses itex2MML.

As far as why I would find it satisfying, I am generally happy to accomplish a goal :)

Is there a difference between “forking” Mediawiki and building an “extension” for Mediawiki? If my goal is simply to modify Mediawiki so that it can process and correctly display itex (safe or not), then I hope the trick can be isolated into an extension or something, so that we would retain the benefits of piggybacking on the Mediawiki developer community.

Posted by: Eric on June 2, 2009 9:55 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Is there a difference between “forking” Mediawiki and building an “extension” for Mediawiki?

Sure.

All I am saying is that making Mediawiki XHTML-safe will require much more than “an ‘extension’”. The required changes would amount to forking the project.

If my goal is simply to modify Mediawiki so that it can process and correctly display itex (safe or not) …

I am afraid I don’t catch your distinction.

Ill-formed XML will not “display” — correctly, or otherwise.

Posted by: Jacques Distler on June 2, 2009 10:09 AM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Thanks. I’ve been thinking about this problem for all of 3 days and, like I said in my diary, “I have no particular skill that qualifies me for the task of creating an itex2MML extension for Mediawiki.” I wasn’t saying that just to be humble :)

I don’t even know the difference between XHTML and XML (I had never even heard of XHTML although I’ve used XML before). From tinkering around, I now know that MathML will not display at all if there is some bogus code, but XHTML apparently will. I suppose, and it makes sense, that this is what you’ve been trying to tell me. Sorry I’m slow.

I’ll dig around some more. In the worst case, I’ll merely end up learning something. In the meantime, I like Mike Shulman’s advice:

Long-term support. On this one I would be inclined not to fix something that ain’t broke yet. As has been said, the content of the nLab is stored in a very portable form, and if and when the Internet moves on from instiki, it shouldn’t be difficult to move it to whatever format seems best then – and what to move to will probably be more evident then than now.

Your argument about submit vs preview makes a lot of sense.

If we could make one change to Instiki that would have the largest benefit to the nLab right now, I think it would be redirects. Currently, there is a lot of work on (∞,1)-categories, etc. This has resulted in pages referred to as:

(infinity,1)-category of (infinity,1)-categories

This is obviously not very pleasing to the eyes. It would be much better to have pages like:

∞-category.

After some back and forth, we decided not to use the unicode titles because it would cause headaches when writing articles. Redirects would solve this problem. For example, we could have a link

[[infinity-category]]

automatically redirect you to

[[∞-category]].

Currently, we’re inserting

category: redirect

in pages that simply point to other pages.

PS: We’ve thought about moving the redirect from [[∞-category]] to [[infinity-category]] so that the page you end up on has the nice unicode title, but since there are so many links to [[infinity-category]], this would force us to either change all the links or force everyone to click two links to get to the right page. A real redirect would solve this problem too.

Posted by: Eric on June 2, 2009 10:59 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Regarding Jacques’ earlier comment on redirects, I think that either option would work. Just to be sure I understand the ideas, in one case http://…/show/infinity-category would display the contents of the page “∞-category” (perhaps with a comment about having been redirected, which is what mediawiki does right?), and in the other a link like [ [infinity-category] ] would just send you to http://…/show/∞-category – right? I think I would mildly prefer the first option, partly so that people linking to pages from outside also wouldn’t have to type Unicode, and partly so that the reader could be notified that a redirect has occurred.

Certainly, neither approach requires an HTTP redirect. (Does Mediawiki use HTTP redirects? It doesn’t look like it to me.) But I’m not entirely clear what you mean by not requiring special wiki pages; do you mean that you would prefer that a redirect A→B be stored some other way than with text like “#REDIRECT B” in the source of page A? If so, why?

Posted by: Mike Shulman on June 2, 2009 12:16 PM | Permalink | Reply to this

Re: Blogs vs Wikis

But I’m not entirely clear what you mean by not requiring special wiki pages; do you mean that you would prefer that a redirect A→B be stored some other way than with text like “#REDIRECT B” in the source of page A?

Correct.

If so, why?

Because the Mediawiki implementation is utterly brain-dead.

I explained the scheme which Instiki will follow in an email of which, alas, you were not a recipient.

Here is the scheme:

Here, roughly, is what I propose to implement. It consists of two pieces

  1. A facility for renaming pages
  2. A facility for “redirecting” Wikilinks to a page of another name.

Say you create a page, “nitwits”. Later, you decide that the name of the page should be singular, rather than plural.

  1. Go to the existing “nitwits” page, and click on “Edit”.
  2. Click on the “Rename page” checkbox and enter the new page name.
  3. The software will add a

    [[!redirects nitwits]]
    

    to the top of the page text.

  4. When you click “Submit”, the page will be renamed, and — automagically — Wikilinks to

    [[nitwits]]
    

    will point to the ‘new’ page, named “nitwit”, instead.

You can add additional redirection links to a page. For instance, if you add

    [[!redirects nincompoop]]

to the “nitwit” page, then Wikilinks to

    [[nincompoop]]

will point to the “nitwit” page (instead of being a create-a-new-page link).
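The resolution rule behind this scheme can be sketched in a few lines of Python (hypothetical data structures, not Instiki’s actual internals): each page carries the list of extra names it claims via [[!redirects ...]], and a real page with a given name always wins over a redirect claim on that name.

```python
# Each page records the names it claims via [[!redirects ...]].
pages = {"nitwit": {"redirects": ["nitwits", "nincompoop"]}}

def resolve(link, pages):
    """Return the page a Wikilink points to, or None (create-a-new-page)."""
    if link in pages:  # a real page with that name always wins
        return link
    for name, page in pages.items():
        if link in page["redirects"]:
            return name
    return None

print(resolve("nincompoop", pages))  # → nitwit
print(resolve("halfwit", pages))     # → None
```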

Posted by: Jacques Distler on June 2, 2009 2:24 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Because the Mediawiki implementation is utterly brain-dead.

That doesn’t convey any information to me. You already said it was broken; I’m curious in what way it is broken or brain-dead or whatever other disparaging adjective you prefer.

Here is the scheme:

That sounds fine to me. What happens if the page “nincompoop” already exists when you add [[!redirects nincompoop]] to “nitwit”?

Posted by: Mike Shulman on June 2, 2009 8:53 PM | Permalink | Reply to this

Re: Blogs vs Wikis

I’m curious in what way it is broken or brain-dead or whatever other disparaging adjective you prefer.

Because it adds layers of indirection and complexity (both for the programmer and for the wiki editors), where none are necessary.

It’s an “If all you have is a hammer, everything looks like a nail.” kind of solution.

What happens if the page “nincompoop” already exists when you add [[!redirects nincompoop]] to “nitwit”?

Immediately? Nothing. Real, existing, pages always trump redirects. (However, if you later delete the “nincompoop” page, all [[nincompoop]] links then become links to “nitwit”.)

I can see arguments for flipping this behaviour, so that redirects trump real pages. But I’d prefer to start with this, and see how it works.

Try it out.

Posted by: Jacques Distler on June 2, 2009 10:32 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Awesome.

I just tried to rename

[[∞-category]]

to

[[infinity-category]]

as a test, but it won’t let me. It complains that [[infinity-category]] already exists.

That brings up another issue. What if we rename a page, but later decide that we want to reverse the redirect?

For example, in my test, I was going to redirect [[∞-category]] to [[infinity-category]]. If that worked, I was going to copy the contents of [[infinity-category]] to [[∞-category]] and reverse the redirect so that when you go to [[infinity-category]] you end up on [[∞-category]]. Is that possible?

Once a page is renamed, does it become unavailable if we want to “undo” the redirect?

Very nice. I like this solution a lot.

PS: My test was probably ill-conceived, but I think we do need the ability to redirect a page to a page that already exists.

Posted by: Eric on June 3, 2009 12:41 AM | Permalink | Reply to this

Re: Blogs vs Wikis

I think you have misunderstood what this solution does. It’s much simpler than what you were trying to do.

You give a page the name you want it to have. Then you provide a list of names which point to that page. You don’t move content around, create superfluous “redirect” pages, or any such crap.

If you change your mind about what a page should be called, you change its name. Period. The previous name gets added to the list of names that point to the current page.

Now, alas, as recently as a few hours ago, Toby was still busy adding superfluous “redirect” pages to the Wiki.

Hopefully, that foolishness can stop now.

Posted by: Jacques Distler on June 3, 2009 1:07 AM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Ok. But what about this example…

Say that Urs writes an article about [[nitwits]], I write an article about [[nincompoops]], and John writes an article and rightly names it [[nitwit]]. All three pages contain content. Later we realize that we created three pages on the same subject and Urs and I decide we both want to redirect to [[nitwit]]. What do we do?

This is not a hypothetical example. Several times, authors have created content on a page they think no one has discussed before. Later, they find out that a page already exists under a different, but equivalent name. They then transfer their material to an existing page and then place a redirect on the page they created. Presumably the page they created has a sensible name and it would make sense for them to have a redirect there.

Does the current implementation accommodate this scenario?

Posted by: Eric on June 3, 2009 1:27 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Does the current implementation accommodate this scenario?

No, of course it does not. Only ‘foolishness’ and ‘brain-dead’ ideas can accommodate this very real scenario.

Posted by: Toby Bartels on June 3, 2009 1:04 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Sorry, I'm getting a little snippy here. Thank you, Jacques, for working on this.

Posted by: Toby Bartels on June 3, 2009 1:34 PM | Permalink | Reply to this

Re: Blogs vs Wikis

“Rename” implies you have some content on a page and you want to rename the link to that content.

When I think of “Redirect”, I think of just an automatic content-less reference to another page that contains the content.

In my example with [[∞-category]], what I envision as a nice solution would be that instead of a “Rename” button, you have a “Redirect” button. The same edit text box appears for the name of the page you want to redirect to. When you click Submit, if the current page still contains content, you receive an error message:

To redirect to a new page, you must remove all content from this page first. Please transfer any material to the page you are redirecting to, delete this content, and try to submit the redirect again.

In other words, if the page we are redirecting from has content, we need to move that content over before it will let you redirect.

If we did something like this, I would avoid giving an option to automate the process, i.e. no option “Would you like to transfer this content now?” I can imagine myself accidentally deleting content.

Posted by: Eric on June 3, 2009 1:00 AM | Permalink | Reply to this

Re: Blogs vs Wikis

I have a knack for doing careless things, which sometimes comes in handy when debugging code :)

I can imagine a situation where someone with good intentions creates a redirect, but the redirect was ill-conceived or otherwise vetoed by others. There should also be some way to undo a redirect without requiring administrators I think.

Maybe that is one reason why Mediawiki does it that way?

Posted by: Eric on June 3, 2009 1:09 AM | Permalink | Reply to this

Re: Blogs vs Wikis

I’m afraid I don’t understand.

The redirection is achieved by adding a little bit of text:

[[!redirects ... ]]

to the page. Anyone can remove/alter that piece of text. (And this includes the “automatic” redirect which is added when you rename a page — that, too, is just a piece of text that can be edited/removed.)

Say that Urs writes an article about [[nitwits]], I write an article about [[nincompoops]], and John writes an article and rightly names it [[nitwit]]. All three pages contain content. Later we realize that we created three pages on the same subject and Urs and I decide we both want to redirect to [[nitwit]]. What do we do?

Presumably, that’s a scenario where either

  1. You want to consolidate the relevant text in one place, and have all three terms, [[nitwit]], [[nitwits]] and [[nincompoop]] point to that one, consolidated article. The other pages can be renamed and/or deleted.
  2. You want to maintain separate articles, and merely establish links between them.

Which you do is totally up to you.

I’m just making it possible for you to choose.

You’ve posted 4 comments in relatively rapid succession, essentially asking, “Why doesn’t this work like Mediawiki’s redirect pages?”

Might I suggest that, instead of trying to fit this square peg into a conceptual round hole, you forget about how Mediawiki does things, take this mechanism for what it is, and see what you can do with it?

I think you will find that it does what you want, in a simple, intuitive fashion.

The other reason I’d like you to stop arguing and start using the new feature(s) is that I would like y’all to give them a good workout. This was relatively rapidly-written code. It’s all less than a day old. Doubtless, there are bugs I have missed, and edge-cases I failed to consider. Surely, some of those will come to light when y’all systematically start cleaning up the nlab.

(I suppose it’s too much to ask that someone — Andrew? — contribute some tests of the new features to the Instiki test suite. That’s the most important part of good programming practice, and I’m afraid I have, so far, only written a couple of token tests, where there should be a dozen.)

Posted by: Jacques Distler on June 3, 2009 9:43 AM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

If I understand what you’re saying, if both A and B exist and we want B to redirect to A, we should delete page B and add the !redirects B instruction on page A. However, what I don’t see is how to delete a page.

I am kind of confused by everything that went on at the Sandbox, but possibly Toby objects to this solution because the edit history of page B will be lost?

Posted by: Mike Shulman on June 3, 2009 10:44 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Deleting pages (since it is an irrevocable step) requires an Administrative password (which, I believe, Toby has but apparently you do not).

Merely renaming page “B”, however, is something that anyone can do.

Posted by: Jacques Distler on June 3, 2009 10:51 AM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

So you’re saying that if both A and B exist and I want B to redirect to A, I should add [[!redirects B]] to A and then rename B to something like oldB or B/history. Then the edit history of page B will be preserved at B/history, if anyone knows to go look for it there.

I have to say I don’t see why this is cleaner than the Mediawiki solution, where page B still exists with its edit history and a #REDIRECT instruction.

Posted by: Mike Shulman on June 3, 2009 12:24 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Mike wrote in part:

I am kind of confused by everything that went on at the Sandbox

I did a lot of nonsense there before I figured out how it works, so you probably shouldn't pay attention to most of that.

Then the edit history of page B will be preserved at B/history, if anyone knows to go look for it there.

The good thing about this method is that, if we consistently use /history for old pages (and for nothing else), then it will be clear where to go look for such stuff by scanning the backlinks at the bottom of the page. (MediaWiki has special code to tell you which backlinks are redirect pages.)

Posted by: Toby Bartels on June 3, 2009 1:36 PM | Permalink | Reply to this

Re: Blogs vs Wikis

The other pages can be renamed and/or deleted.

How can they be renamed? What can you rename them to? The best idea that I've come up with so far is to add /history to their names while still making them into redirect pages in the old way. Then (after putting in the proper !redirects notices) internal links to the old name will work, but we can't actually get rid of those pages.

I know, of course, that you would like to delete them, regardless of what edit history they contain. But the rest of us, as far as I can tell, would not.

Posted by: Toby Bartels on June 3, 2009 1:09 PM | Permalink | Reply to this

Re: Blogs vs Wikis

First of all, thanks so much, Jacques, for implementing a redirect feature so quickly!! I am really excited about getting this working. If this is the wrong place to be reporting problems and making requests, please tell me where I should go.

It appears that the target of the link [[nitwits]] is whichever page claimed to [[!redirects nitwits]] at the time when the page containing [[nitwits]] was saved, rather than at the time the link is displayed. This seems like a bug to me; it means that if we change what page we want “nitwits” to redirect to, we have to go around and re-save all of the pages that link to it in order for their links to get updated. And in fact, if there was no page claiming to [[!redirects nitwits]] when the link was saved, the link seems to remain as a “create this page” link, which could result in unknowing duplication of effort.

Also, here is another problem with the current implementation: I sometimes like to go directly to a page by typing in http://.../show/nitwits. (Actually, I wrote a firefox search bar plugin where I can just type “nitwits” and it will take me there.) But this doesn’t work if nitwits is redirected; instead of taking me to the page that nitwits is supposed to redirect to, it takes me to a creation page. I might then be fooled into believing that the page doesn’t exist, and go ahead and create it, destroying the redirect.

One more thing: searching. If I search for “nitwits” which redirects to somewhere, then the page that it redirects to only shows up as a “page containing search string in the page text.” It seems that ideally, when I search for X, the list of pages containing X in their title should be augmented by the pages which are redirected to by a page containing X in their title.
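The search augmentation being suggested is straightforward to state in code. A Python sketch (made-up data structures, for illustration only): a page matches a title search if either its own title, or any name that redirects to it, contains the search string.

```python
# Each page records the names redirected to it. A title search should
# match on those names as well as on the title itself.
pages = {
    "nitwit": {"redirects": ["nitwits"]},
    "halfwit": {"redirects": []},
}

def title_search(query, pages):
    """Pages whose title, or any name redirecting to them, contains query."""
    hits = set()
    for name, page in pages.items():
        if query in name or any(query in r for r in page["redirects"]):
            hits.add(name)
    return sorted(hits)

print(title_search("nitwits", pages))  # → ['nitwit']
```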

Posted by: Mike Shulman on June 3, 2009 11:12 AM | Permalink | Reply to this

Re: Blogs vs Wikis

I have to say I don’t see why this is cleaner than the Mediawiki solution, where page B still exists with its edit history and a #REDIRECT instruction.

For a page that already exists, and whose edit history you wish to preserve, it’s a wash.

But there’s no need to create new pages whose sole function is to “#REDIRECT” to the page containing the actual content.

And, moreover, in those instances where you don’t feel a sentimental attachment to the edit history of “B”, there’s no necessity to keep the page around, for the sole purpose of redirecting to “A”.

If this is the wrong place to be reporting problems and making requests, please tell me where I should go.

Bug reports might be more usefully handled by ordinary email.

It appears that the target of the link [[nitwits]] is whichever page claimed to [[!redirects nitwits]] at the time when the page containing [[nitwits]] was saved, rather than at the time the link is displayed.

When you save the page which [[!redirects nitwits]], it should expire the cache for all the pages containing [[nitwits]]. When you visit those pages afresh, the new, correct link should be created.

I tested this, and it seemed to work for me. I’m surprised it didn’t work for you. Could you explain the exact steps required to reproduce the problem?

Also, here is another problem with the current implementation: I sometimes like to go directly to a page by typing in http://…/show/nitwits. (Actually, I wrote a firefox search bar plugin where I can just type “nitwits” and it will take me there.) But this doesn’t work if nitwits is redirected …

Correct. That’s why I asked whether redirection of URLs was required, or just redirection of Wikilinks.

The downside of allowing pages to be renamed is that HTML hyperlinks get broken when you do. Broken hyperlinks are the curse of the World Wide Web. They can, to some extent, be mitigated by HTTP redirects. But, ultimately, it’s better – as Toby has long advocated – to pick a good naming convention at the outset, and to try, wherever possible, to stick to it.

One more thing: searching. If I search for “nitwits” which redirects to somewhere, then the page that it redirects to only shows up as a “page containing search string in the page text.” It seems that ideally, when I search for X, the list of pages containing X in their title should be augmented by the pages which are redirected to by a page containing X in their title.

Good point.

Posted by: Jacques Distler on June 3, 2009 12:50 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Broken hyperlinks are the curse of the World Wide Web. They can, to some extent, be mitigated by HTTP redirects. But, ultimately, it’s better – as Toby has long advocated – to pick a good naming convention at the outset, and to try, wherever possible, to stick to it.

It's not a dichotomy; you should keep old links working and pick a good naming system to make that task easier (and for other purposes).

Posted by: Toby Bartels on June 3, 2009 2:50 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Also, here is another problem with the current implementation: I sometimes like to go directly to a page by typing in http://.../show/nitwits.

I think that this is the only serious problem with the current system. (The failure to smoothly handle merges and current redirect pages, which I complained about before, is handled by /history pages if people are happy with that.)

Perhaps here we could also use HTTP redirects? It's nice that internal links bypass this, but external links seem to need them. I'm thinking that HTTP 302 is the best, since it might not be permanent (such as if [[foo]] first redirects to [[bar]] but the relevant material is later made into a separate article) and 307 is more complicated to use properly. (Then again, does anybody really cache 301 results?)

It would also be nice to have an HTTP redirect for http://.../new/nitwits; this could very easily lead to accidental duplication. (In fact, it would be nice if that redirected when [[nitwits]] exists as well!) I see that http://.../edit/nitwits is already handled.
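To make the suggestion concrete, here is a sketch (in Python, with invented names; real Instiki would do this inside its own request handling) of how an incoming /show/ URL could be answered with a 302:

```python
def respond_to_show(name, pages, redirects):
    """Decide how to answer GET /show/<name>.

    Returns ("serve", name) when a real page exists, or
    (302, location) for an HTTP redirect. 302 rather than 301,
    so that clients do not permanently cache a redirect that
    may later be undone.

    pages: set of existing page names.
    redirects: dict mapping a redirected name -> target page.
    """
    if name in pages:
        return ("serve", name)            # real page: serve it directly
    if name in redirects:
        return (302, "/show/" + redirects[name])
    return (302, "/new/" + name)          # nothing there: offer creation
```

The same dispatch, applied to /new/ URLs, would cover the accidental-duplication case as well.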

Posted by: Toby Bartels on June 3, 2009 1:34 PM | Permalink | Reply to this

Re: Blogs vs Wikis

I’d also like to express thanks to Jacques for implementing this so quickly.

I think I see what I was doing wrong. I’ll try it again once the server comes back to life. It seems to be dead (or at least crawling) now.

Posted by: Eric on June 3, 2009 11:46 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Because it adds layers of indirection and complexity (both for the programmer and for the wiki editors), where none are necessary.

It’s an “If all you have is a hammer, everything looks like a nail.” kind of solution.

Well, just because I have a hammer doesn’t necessarily mean that something isn’t a nail. (-:

I’m failing to see why the Mediawiki way is more complicated for wiki editors than the way you implemented it. That way, if I want to redirect a page that doesn’t exist, I have to create a new page and add #REDIRECT to it, instead of adding [[!redirects]] to the existing page. But conversely, if I want to redirect a page that does already exist, in your implementation I have to delete or move that page first and then edit the target, rather than just adding #REDIRECT to the existing page. Toby listed some other issues here.

Of course, I can’t say what is more complicated for the programmer, not being familiar with the code. But I would naively expect that when figuring out where A redirects to, it would be easier to just look at the source of page A, rather than searching through the rest of the wiki for pages containing [[!redirects A]].
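To illustrate the two lookup directions (hypothetical Python; a `sources` dict of page name → wikitext stands in for the actual storage, and real Instiki presumably keeps an index rather than scanning):

```python
import re

def target_mediawiki(name, sources):
    # MediaWiki-style: the redirect lives in the source of the
    # redirected page itself, so resolving it is one dictionary
    # lookup plus a parse of that page's text.
    m = re.match(r"#REDIRECT \[\[(.+?)\]\]", sources.get(name, ""))
    return m.group(1) if m else None

def target_instiki(name, sources):
    # Instiki-style: the redirect lives on the *target* page, so
    # naively every page must be scanned for a [[!redirects name]]
    # directive (unless an index of directives is maintained).
    directive = "[[!redirects %s]]" % name
    for page, text in sources.items():
        if directive in text:
            return page
    return None
```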

Posted by: Mike Shulman on June 3, 2009 12:44 PM | Permalink | Reply to this

Re: Blogs vs Wikis

I think I’m starting to get the idea, but am probably not all the way there yet. Sorry for being dense. However, this still seems to be incapable of accommodating a common scenario.

This implementation seems to be geared towards redirecting new pages, not existing pages.

For example, if [[nitwit]] does not already exist and we put content on [[nitwits]], we can place a

[[!redirects nitwit]]

at [[nitwits]]. It then creates a new page [[nitwit]] with the contents of [[nitwits]]. From that point, any link to [[nitwits]] is automatically redirected to [[nitwit]].

So far so good.

Now, if there does not already exist a [[nincompoops]], then we can place a

[[!redirects nincompoops]]

at [[nitwit]]. From that point, any link pointing to [[nincompoops]] is automatically redirected to [[nitwit]].

If that is correct, then so far so good.

The scenario that it seems we cannot currently accommodate, but I think is really the more common case is that all three pages, [[nitwits]], [[nincompoops]], and [[nitwit]] already exist. If all three pages already exist and we decide that we want to consolidate all three to [[nitwit]] and place redirects at [[nitwits]] and [[nincompoops]] to [[nitwit]], we currently cannot do that.

In the above scenario, it seems we would have to first consolidate the content of [[nitwits]] and [[nincompoops]] to [[nitwit]] (fine) and then ask an administrator to delete [[nitwits]] and [[nincompoops]]. Once they are deleted, we can add

[[!redirects nitwits]]
[[!redirects nincompoops]]

to [[nitwit]].

Is that correct?

I have a crazy idea.

What if all redirects, even new ones, are created by adding the redirect command on the target page rather than the source page. For example, if we want links to both [[nitwits]] and [[nincompoops]] to automatically redirect to [[nitwit]], then instead of modifying [[nitwits]] and [[nincompoops]], we simply add the commands

[[!redirects nitwits]]
[[!redirects nincompoops]]

to [[nitwit]] and it pulls the redirects toward it, regardless of whether those pages already exist and regardless of whether they already contain content.

An author would know that the redirect exists because links point to the new page. They can then undo the redirect by temporarily removing the

[[!redirects nitwits]]

command from [[nitwit]]. Then the content at [[nitwits]] becomes available again and the author can continue to consolidate the content. Once they’re happy with the consolidation, they can reinsert the

[[!redirects nitwits]]

command at [[nitwit]].

This would seem to allow us to create redirects with existing pages and also to easily undo a redirect if we want to since the redirect command is seen on the source code of the target page.

Posted by: Eric on June 3, 2009 12:54 PM | Permalink | Reply to this

Re: Blogs vs Wikis

For example, if [[nitwit]] does not already exist and we put content on [[nitwits]], we can place a

[[!redirects nitwit]]

at [[nitwits]]. It then creates a new page [[nitwit]] with the contents of [[nitwits]].

Sigh. No.

No pages are created by adding a [[!redirects ...]] directive. All it does is affect where Wikilinks (in this case, a [[nitwit]] Wikilink) point.

I have a crazy idea.

What if all redirects, even new ones, are created by adding the redirect command on the target page rather than the source page.

That’s how it currently works.

Posted by: Jacques Distler on June 3, 2009 1:05 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

Excellent! If that is the way it is supposed to work, then all we have to do is fix the bug, because it is currently not working.

For example, at [[nitwit]], I inserted

[[!redirects nitwits]]
[[!redirects nincompoops]]

Yet, when we go to [[nitwits]] or [[nincompoops]], it is not redirecting to [[nitwit]].

Since the source code at [[nitwit]] contains

[[!redirects nitwits]]
[[!redirects nincompoops]]

then once [[nitwits]] and [[nincompoops]] actually do redirect to [[nitwit]] we’ll be in business.

Posted by: Eric on June 3, 2009 1:25 PM | Permalink | Reply to this

Re: Blogs vs Wikis

That’s how it currently works.

Eric’s proposal isn’t quite how it currently works; he was saying that a [[!redirects foo]] at “bar” should take effect whether or not the page “foo” already exists. That would at least remove the need to delete or rename an existing page when you redirect it.

Posted by: Mike Shulman on June 3, 2009 1:33 PM | Permalink | Reply to this

Re: Blogs vs Wikis

The scenario that it seems we cannot currently accommodate, but I think is really the more common case

I think that in the long run, this will actually be far less common. We have a lot of legacy cases of this now, but new ones will occur only when we want to merge pages (including cases of accidental duplication) and these have, historically, been much rarer than merely wanting to rename/move pages.

The more that I think about Jacques's method, the more that I like it. But we still have to:

  • make sure that people are happy with /history pages for legacy redirects and new merges, or else implement something like Eric's solution for them; and
  • figure out something for incoming links.

Then I will be happy! (^_^)

Posted by: Toby Bartels on June 3, 2009 1:42 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Yeah. I was confused at first (Mediawiki-itis), but after I started understanding, I think Jacques’ solution is MUCH better.

The only thing is that it currently only works for new pages. If it can be generalized to existing pages the way I tried to describe, i.e. if the [[nitwit]] example could be made to work, it would be 100% perfect. We would be able to undo redirects. We would be able to keep unconsolidated content at the old page, which would also keep its history. Etc etc.

I don’t think the scenario I describe will go away or become less common. If anything, I think it will become more common as more people participate.

A recent example illustrates the point…

One of my goals is to work through Urs’

[[Exercise in Groupoidification: The Path Integral]]

When we were walking through this on the original blog article, I (re)invented something I called “Exploding a Category”. I thought it helped (me anyway) understand what is going on, so I created a page

[[Exploding a Category]]

Only later, did I learn that an “Exploded Category” is the same thing as a [[category of elements]].

In an ideal world, I could consolidate my content (which I’ve done) to [[category of elements]] and then insert a redirect at [[category of elements]] that will pull any links away from [[Exploding a Category]]. In that way, my original page still exists and is accessible by removing the redirect, and it still contains some useful comments that are of interest to [[Exploding a Category]] but not necessarily to [[category of elements]].

This scenario happens frequently. It is the nature of category theory that two people could write pages that at first seem distinct, but turn out to be “equivalent”. It then makes sense to consolidate existing content and insert a redirect.

Posted by: Eric on June 3, 2009 2:05 PM | Permalink | Reply to this

Re: Blogs vs Wikis

It is the nature of category theory that two people could write pages that at first seem distinct, but turn out to be “equivalent”. It then makes sense to consolidate existing content and insert a redirect.

I rather disagree.

In your example, I think that [[Exploding a Category]] (however we fix the name) should remain available to casual readers, not hidden (so that you have to remove !redirects code from [[category of elements]], then look at [[Exploding a Category]], then replace the !redirects code when you're done). I like your proposal for dealing with current redirects and future merges, but only if there's no real content at the redirect page; it's OK if you have to go through a bit of rigmarole to view archived content, since that'll be pretty rare, but not current content like your explanations of what exploding a category means to you.

More generally, I like that we have (for example) separate pages for [[strict 2-group]] and [[crossed module]], even though these are equivalent. The ideas developed in different ways, the definitions look very different, and so we explain them differently … even though we also say that they are equivalent and link each to the other.

Perhaps you want to make merges more frequent than they have been, and perhaps they will be, but I don't want that. (^_^)

Posted by: Toby Bartels on June 3, 2009 2:21 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Aesthetically, I would prefer Eric’s solution ([[!redirects foo]] on “bar” takes priority over an existing “foo”) to having pages named like “foo/history”. However, it raises the problem of how I would be able to see that history, if show/foo and edit/foo are both trapped to go to “bar” (or somewhere else). So perhaps “foo/history” is better.

On further reflection, I think you are right that now that we have the ability to actually rename pages, bringing their edit history along with them, this will not be so much of an issue in the future.

Posted by: Mike Shulman on June 3, 2009 2:07 PM | Permalink | Reply to this

URL Generator

Let me clarify one important point about how the URL generator now works, and how it might be changed to accommodate some of the concerns raised.

Say a page contains [[foo]]. The steps that are taken to turn this into a URL are

  1. If there is an existing page by that name, then the URL /show/foo is generated.
  2. If there’s no such existing page, but, instead, there’s a page “bar”, which contains a [[!redirects foo]] directive, then the URL /show/bar is generated.
  3. Finally, if there’s no existing page with that name, and no page which redirects that name, then the URL /new/foo is generated.
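In code, the three steps amount to something like this (a Python sketch with invented data structures, not the actual Instiki implementation; `redirects` maps a redirected name to the page declaring [[!redirects name]]):

```python
def generate_url(name, pages, redirects):
    """Turn a wikilink [[name]] into a URL.

    pages: set of existing page names.
    redirects: dict mapping a redirected name to the page
               that declared [[!redirects name]].
    """
    if name in pages:                  # (1) an existing page wins
        return "/show/" + name
    if name in redirects:              # (2) otherwise follow a redirect
        return "/show/" + redirects[name]
    return "/new/" + name              # (3) otherwise offer to create it
```

Switching the order of (1) and (2) amounts to swapping the first two clauses.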

I wavered considerably about the ordering of (1) and (2). There are arguments for having real pages trump redirects and there are arguments for redirects to trump real pages.

If someone can make a persuasive case for switching the order of (1) and (2), we can do so. But I thought it was more productive to push this change out, and get some real-world experience with it, rather than argue endlessly in the abstract.

As to HTTP redirects, that’s something we should definitely think about. If there was a solution that I really liked, I’d code it immediately. Since nothing currently stands out as an attractive solution, I’m gonna mull it over in my head for a while longer …

Posted by: Jacques Distler on June 3, 2009 2:08 PM | Permalink | PGP Sig | Reply to this

Re: URL Generator

In my opinion, what you’ve done is perfect. It is elegant. I like it a lot.

Is it possible to add

4. If there exists a page by that name, but there also exists a page [[bar]] containing [[!redirects foo]], then show bar.

?

That way, the contents and history of [[foo]] still exist independently and can be recovered by removing [[!redirects foo]] from [[bar]].

Posted by: Eric on June 3, 2009 2:18 PM | Permalink | Reply to this

Re: URL Generator

Is it possible to add 4. […]?

That's exactly what switching (1) and (2) would do. (You have to know how programmers think to see that switching them would also move an ‘if’ clause around.)

Posted by: Toby Bartels on June 3, 2009 2:23 PM | Permalink | Reply to this

Re: URL Generator

Here’s a slightly more thought out suggestion…

Say a page contains [[foo]]. The steps that are taken to turn this into a URL are

  1. If there exists a page [[bar]] containing [[!redirects foo]], then show/bar.
  2. If there exists a page [[foo]], then show/foo.
  3. Finally, if there’s no existing page with that name, and no page which redirects that name, then the URL /new/foo is generated.

Here is pseudocode:

    function url = urlgenerator(foo)
        bar = getredirect(foo);
        if ~isempty(bar)
            url = show/bar;
        elseif exists(foo)
            url = show/foo;
        else
            url = new/foo;
        end

PS: This is not quite the same as switching Jacques’ original 1. and 2.

Posted by: Eric on June 3, 2009 2:53 PM | Permalink | Reply to this

Re: URL Generator

PS: This is not quite the same as switching Jacques’ original 1. and 2.

Actually, that was what I meant by “switching (1) and (2)”, namely that [[!redirects ...]] directives would trump existing pages, instead of (as now) vice versa.

As I said, I can see arguments for and against either behaviour.

Getting some real-world experience would be helpful in articulating which of these two behaviours is actually “better.”

Posted by: Jacques Distler on June 3, 2009 5:27 PM | Permalink | PGP Sig | Reply to this

Re: URL Generator

I see. Then my vote would (obviously) be to switch.

We have many real situations where we want redirects from existing pages to existing pages.

This, in combination with Toby’s */history (if desired) would seem to be ideal.

Posted by: Eric on June 3, 2009 6:39 PM | Permalink | Reply to this

Re: URL Generator

If we switch, then we don't really need */history; we just keep them where they are.

Posted by: Toby Bartels on June 3, 2009 7:00 PM | Permalink | Reply to this

Re: URL Generator

All the more reason to switch.

For more examples of existing redirects: if we switched, we could take care of all of these

category: redirect

Posted by: Eric on June 3, 2009 9:03 PM | Permalink | Reply to this

Re: URL Generator

Mike argues persuasively otherwise.

I think Mike wins.

Posted by: Jacques Distler on June 3, 2009 9:15 PM | Permalink | PGP Sig | Reply to this

Re: Blogs vs Wikis

The reason I can think of to distinguish between “Submit” and “Preview” would be so that one can write something and then test it out and see how it looks before making it visible to anyone else. I personally do not always write itex code that looks exactly the way I want it to on the first try (and occasionally the first try turns out quite ugly and wrong). But diminishing the chance of data-loss is also a worthwhile goal; I wonder if there could be a way to solve both problems at once.

Posted by: Mike Shulman on June 2, 2009 11:54 AM | Permalink | Reply to this

Re: Blogs vs Wikis

Maybe you can explain to me exactly what is required, then.

For what it's worth, a discussion of desired features (before the current system was implemented) had already begun (and still continues) here.

Posted by: Toby Bartels on June 3, 2009 2:18 PM | Permalink | Reply to this

Re: Blogs vs Wikis

Hi all,

I have just come back, and back online, from a vacation last week. I am very happy to see all your efforts here. From the point of view of somebody who has to browse, in addition to the very useful but by now also very lengthy discussion here, on the order of 10^2 emails, let me for the moment just add one minor comment:

whatever the result of the discussion here, it would be great if somebody finds the time to summarize the main points, in as far as they are effective for work on the nLab, at our HowTo page.

Posted by: Urs Schreiber on June 7, 2009 11:23 AM | Permalink | Reply to this

Editing tables in nLab

Hello.

I have been trying to make line breaks in the nLab article Timeline of category theory and related mathematics, but failed. Is there a way to have a table in nLab with line breaks? Regardless of how the article turns out, it is an important question for the future.

I also think that someone should add a “create new page” link in nLab. On Wikipedia you can start a page that a search could not find with one click on the search page; nLab doesn’t have this option, and creating a link first, editing it, and then removing the link feels like cheating.

Posted by: Rafael Borowiecki on July 9, 2009 3:47 PM | Permalink | Reply to this

Re: Editing tables in nLab

I have been trying to make linebreaks in the nLab article Timeline of category theory and related mathematics but failed. Is there a way to have any table in nLab with linebreaks? Regardless how the article will turn out it is an important question for the future.

I’m not sure what effect you are trying to create. Could you say what (X)HTML code you are trying to produce?

Obviously, Markdown’s table support is somewhat limited. One can get quite far, by adding CSS styling to the table, using Markdown’s metadata syntax.

If those two, taken together, don’t suffice, you can, as a last resort, enter raw (X)HTML.
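For instance, a forced line break inside a table cell can be had with a raw XHTML table (a sketch; the cell text is a placeholder, and whether the wiki’s sanitizer passes this exact markup through unchanged is an assumption):

```html
<!-- A raw XHTML table; <br /> forces a line break inside a cell,
     which Markdown's pipe-table syntax cannot express. -->
<table>
  <tr>
    <td>1993</td>
    <td>Kenji Fukaya<br />(second line of the entry)</td>
  </tr>
</table>
```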

I also think that someone should add a link “create new page” in nLab. In wikipedia you can start new pages it could not find with a search by a click in the search page, nLab don’t have this option, and to create a link first to edit it and then removing the link feels like cheating.

One imagines that there surely must be at least one existing page which would be improved by a link to the page you wish to create.

Failing that, I suppose you could always type a

 /nlab/new/My+Completely+Unrelated+Page

URL into the address bar of your browser.

Posted by: Jacques Distler on July 9, 2009 4:03 PM | Permalink | PGP Sig | Reply to this

Re: Editing tables in nLab

This is related to moving the page Timeline of category theory and related mathematics from Wikipedia to nLab. You have to see its table code in nLab for yourself. There are entries in the table, such as the 1993 Kenji Fukaya entry, that start a new line in the middle of the text. How can this be done in nLab?

I tried a simple HTML table in nLab but got only errors. No other special code except for a direct BR tag and new lines in the editor.

I know little CSS and no Markdown; I just started editing in nLab.

Posted by: Rafael Borowiecki on July 10, 2009 3:57 PM | Permalink | Reply to this

Re: Editing tables in nLab

I looked at the links but could not find anything that would help.

Is this a hard question?
Maybe an example with CSS, if it is too complicated?

I really need to be able to edit tables in nLab with the ability to start a new line anytime.

PS.
In my previous post, after “direct”, the HTML command BR for a line break was not displayed but executed.

Posted by: Rafael Borowiecki on July 19, 2009 11:26 PM | Permalink | Reply to this

Post a New Comment