Recent Posts
|
posted 14 years ago
distler
123 posts
|
Same thing happen(ed) when you typed (the equally useless)
Fixed in Revision 744. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Next bug. For some strange reason, empty anchors mess up wikilinks:
means that Instiki does not process the wikilink. If I put text in the anchor, then it’s fine. If I replace the … This does feel a bit like a “when I bang my head on the wall then it hurts” bug, but it is still strange behaviour, particularly given the tag-dependence. (See http://ncatlab.org/nlab/show/Sandbox for some experiments.) |
|
posted 14 years ago
Andrew Stacey
118 posts
|
If you can’t remember, then I’ll experiment with taking it out and see who complains, and about what. |
|
posted 14 years ago
admin
64 posts
|
Ah. I see. I was deceived by your example: “Theorem. This is italic text. But this is not.”
In general, the more specific CSS selector wins, and there was some circumstance where inheritance from … |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Great! I’ll apply those tomorrow morning. (And I learnt a new word. Doubt I’ll be able to get it in to Boggle, though.) |
|
posted 14 years ago
Andrew Stacey
118 posts
|
It’s about inheritance. Let me do my example again.
“Theorem. This is italic text. This is normal, but this is not.” The link should inherit the … |
|
posted 14 years ago
admin
64 posts
|
Works for me. “Theorem. This is in italics. This is normal. And this is italic, again. This whole paragraph is normal. This paragraph is in italics.” was generated by …
Aside from the kludgy bit about applying styles to spans, I have no idea what your issue is. |
|
posted 14 years ago
admin
64 posts
edited 14 years ago |
Thanks for the report. Fixed in Revision 742. I think this bug was a long-standing one. The other, fixed in that Revision, was completely iatrogenic. Oh, and your double-escaping bug is fixed in Revision 743. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
(Not so much a bug, but also not a feature request, so it gets put here.) Is there a reason why the italic style in theorems is done via:
…
rather than:
…
The … will come up as italic, I think. Actually, I can test it here: “Theorem. Yes, it did.” So the normal CSS inheritance is effectively bypassed by the … I’m generally reluctant to modify stuff that you’ve put in place! Is there a reason for the …? |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Okay, next one. I’m recording the error message first. It’ll take me a few minutes to track down exactly what is causing it.
Actually, it didn’t take long at all. The last line of the document was … I guess that getting a more sensible error (this causes smoke) would involve hacking maruku more than you’d like. |
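[A guard along these lines is roughly what “a more sensible error” would look like from the Ruby side; this is only a sketch, not Instiki’s actual error handling, and the fallback markup is my own invention.]

```ruby
# Sketch only: wrap the Maruku call so a parser crash surfaces as a readable
# message instead of smoke. Not Instiki's actual code.
require 'rubygems'
require 'maruku'
require 'cgi'

def render_with_fallback(source)
  Maruku.new(source).to_html
rescue StandardError => e
  "<pre>Markdown rendering failed: #{CGI.escapeHTML(e.message)}</pre>"
end
```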
|
posted 14 years ago
Andrew Stacey
118 posts
|
I originally thought that this one was due to old browsers, but I’m now using FF6.0 so it can’t be that. Anyway, when I create a new page, the “Page X does not exist” message is getting escaped once too often and I see:
|
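[The escaped output itself did not survive in this archive, but the symptom described is the classic double-escape: text that already contains character entities gets HTML-escaped a second time, so the browser shows the entities literally. A minimal reconstruction; the message text here is an assumption.]

```ruby
# -*- coding: utf-8 -*-
# Minimal reconstruction of the symptom, not Instiki's code. The message text
# is a guess; the point is what a second round of escaping does to it.
require 'cgi'

once  = "Page &#8220;HomePage&#8221; does not exist"  # already entity-encoded (assumed)
twice = CGI.escapeHTML(once)
# => "Page &amp;#8220;HomePage&amp;#8221; does not exist"
# which the browser then renders literally as: Page &#8220;HomePage&#8221; does not exist
```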
|
posted 14 years ago
Andrew Stacey
118 posts
|
Thanks! I’ve updated the nLab and Azimuth Project. That’s great. |
|
posted 14 years ago
admin
64 posts
|
Fixed in Revision 736. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Forum: Heterotic Beast – Topic: Bugs Long lines (of code?) make the posts a bit wide. Currently, my view on http://golem.ph.utexas.edu/forum/forums/instiki/topics/bugs#post_77 has all the posts reaching into the grey region on the right-hand side. |
|
posted 14 years ago
Andrew Stacey
118 posts
edited 14 years ago |
Maruku doesn’t like bold text starting with bizarre Unicode symbols, or with named entities (presumably these are converted to Unicode symbols). Example (this causes a crash in instiki): …
PS: This showed up after I installed the latest version of instiki. |
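[For anyone trying to reproduce this: the exact failing input in the original post was lost in the archive, so the dagger below is only a stand-in for “a bizarre Unicode symbol”.]

```ruby
# -*- coding: utf-8 -*-
# Stand-in reproduction: bold text whose first character is an unusual Unicode
# symbol, or a named entity such as &dagger;. Reported to crash Maruku as
# shipped with the then-current Instiki.
require 'rubygems'
require 'maruku'

puts Maruku.new("**† starts with a dagger**").to_html
puts Maruku.new("**&dagger; starts with an entity**").to_html
```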
|
posted 14 years ago
admin
64 posts
|
Forum: itex2MML – Topic: Feature Requests What should |
|
posted 14 years ago
admin
64 posts
|
Discuss bugs in itex2MML. |
|
posted 14 years ago
distler
123 posts
edited 14 years ago |
Forum: Heterotic Beast – Topic: Bugs No, it doesn’t. Deleting the only post in a topic also deletes the topic. Possibly, the redirect (which normally goes to …) |
|
posted 14 years ago
distler
123 posts
edited 14 years ago |
The former. You’re trading off workload on the server against stale data. Since computers are supposed to serve humans, rather than the other way around, the question is: does this improve the user experience? Say you implement the above suggestion. On the one hand, the user always (or almost always, depending on the implementation) receives the cached page, i.e. gets a quick response. On the other hand, the data is invariably stale. Leaving these pages alone, the user is guaranteed to receive fresh data, but there could be a significant delay if the page has to be regenerated. What percentage of requests for these pages hit the cache? A better solution is to pull in the … As to moving away from Maruku, to some …
|
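[On the “what percentage of requests hit the cache” question above: assuming the usual Instiki setup, where page-cached copies are served straight from public/cache by the webserver and never reach Rails, one rough estimate is to compare the webserver access log with the Rails log. A sketch; the paths, URL and log patterns are all assumptions.]

```ruby
# Rough estimate of the cache-hit rate for a given page. Paths, URLs and log
# formats are assumptions about a typical Instiki-behind-Apache setup.
def count_matches(path, pattern)
  n = 0
  File.foreach(path) { |line| n += 1 if line =~ pattern }
  n
end

total   = count_matches('/var/log/apache2/access.log', %r{GET /nlab/recently_revised})
dynamic = count_matches('/path/to/instiki/log/production.log', /recently_revised/)
hits    = total - dynamic
printf("cache hits: %d of %d requests (%.1f%%)\n", hits, total, 100.0 * hits / total)
```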
|
posted 14 years ago
Andrew Stacey
118 posts
|
Forum: Heterotic Beast – Topic: Bugs Deleting the first post in a topic leads to a “page does not exist” error. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Bother. I can’t count. I saw that the last log file was numbered 23 and assumed that that meant I hadn’t missed any log files. Sadly not. So my logs aren’t complete. Ah well. Incidentally, why name them numerically? Why not name them by a date-and-time stamp? Then they wouldn’t all have to be renamed when the next one is created.

Thinking about Recently Revised and All Pages: you suggested (somewhere) taking them out of the sweeper as a way of stopping them being regenerated every time a page is edited (I don’t know if this was one of your “If you’re going to do something crazy, here’s a way of limiting how crazy you’re going to be” suggestions, or if you thought it was actually a good idea). Then I’d have to manually regenerate them every, say, hour by deleting the cached copy so that the next hit recreated it. It occurred to me that the same cron job that deleted the cache could also hit the page on the server to force the regeneration. It then occurred to me that it was silly having that go via the webserver when it was on the same machine as the program.

With my current knowledge of instiki, what I thought of for getting round this was to have an instiki process running, invoked “from the command line”, and listening on, say, port 2500. I block that port from all outside traffic and use it only for localhost. Then the cron job hits that port, avoiding the webserver. The alternative would be to have it so that I could call … |
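[A sketch of the cron job described here, assuming a second, localhost-only Instiki instance on port 2500 and cached copies under public/cache; the paths and page names are placeholders, not the real nLab layout.]

```ruby
#!/usr/bin/env ruby
# Cron-able sketch of the scheme above: throw away the cached copies of the
# expensive pages, then request them from the localhost-only Instiki instance
# on port 2500 so they are rebuilt immediately.
require 'fileutils'
require 'net/http'
require 'uri'

CACHE_DIR = '/path/to/instiki/public/cache'   # assumed cache location
PAGES     = %w[recently_revised list]         # Recently Revised and All Pages (assumed names)

PAGES.each do |page|
  FileUtils.rm_f(File.join(CACHE_DIR, 'nlab', "#{page}.html"))
  Net::HTTP.get(URI.parse("http://localhost:2500/nlab/#{page}"))
end
```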
|
posted 14 years ago
distler
123 posts
|
Did a lot of profiling this weekend, and produced a few tweaks to Maruku’s parsing, which speeded it up a little. Unfortunately, the main discovery was that (with that test page as input) 3/4 of Maruku’s time is spent in the … I hope that one of your guys finds formal grammars sufficiently “categorical” to be worthy of a small bit of their attention. |
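[For reference, this is the sort of profiling run being described; ruby-prof is my assumption for the profiler (any Ruby profiler would do), and the test-page filename is a placeholder.]

```ruby
# Profile Maruku on the problem page and print a flat call-time breakdown.
require 'rubygems'
require 'ruby-prof'
require 'maruku'

source = File.read('testpage.md')   # placeholder for the 175KB test page
result = RubyProf.profile { Maruku.new(source).to_html }
RubyProf::FlatPrinter.new(result).print(STDOUT)
```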
|
posted 14 years ago
distler
123 posts
|
Pretty much everything beyond the standard Markdown syntax needs to be written, though some folks in the peg-markdown network seem to have included Michel Fortin’s Markdown-Extra extensions (albeit, along with some of their own, incompatible, extensions). Presumably, this fork of peg-markdown would be directly linked to the itex2MML functions which process inline and display equations (ie, it would not use the Ruby bindings provided by the itextomml gem). So there’s a little bit more to do than write the peg grammar. But not a lot more … As to whether you want to ask someone to do this, I’ve already explained my desire to replace Maruku (for licensing reasons). Here’s another motivation, from efficiency. Instiki’s performance does genuinely suck, in this instance. The question is, are they (your nlab colleagues) willing to do anything about improving it? Feel free to point them to this discussion, and to the previous one on Markdown alternatives. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Forum: Heterotic Beast – Topic: Bugs May I suggest “Other users online: XYZ”? Okay, that’s odd. I suggested the above having seen the “Users online: distler” message, and not seeing my own name. Now I click back to the forums list and I am listed there this time. So either it got it wrong the first time, or you’re playing with the code and I should shut up and let you get on with it. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Now that I see the break-down, I agree with you that it’s a waste of time optimising the wikilinks for now. That’s an astonishing amount of time for maruku to take! I wonder how much of that is itex; I guess I can test that for myself by running maruku on a few files on my machine here, with and without itex. I could ask on the nForum about volunteers for writing a PEG grammar. That sounds like a good, specific task that someone might just be willing to do. Shall I ask? Do you have a clear idea of which bits of maruku’s syntax are missing, or would the task involve determining that? |
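[The “with and without itex” comparison can be done crudely from the command line; this is only a sketch, and stripping the maths with regexps is a blunt approximation to itex delimiters, not Instiki’s real chunk handling. The filename is a placeholder.]

```ruby
# Crude timing comparison: render a page as-is, then again with the inline and
# display maths stripped out.
require 'benchmark'
require 'rubygems'
require 'maruku'

source  = File.read('nlab-page.md')                          # placeholder filename
no_itex = source.gsub(/\$\$.*?\$\$/m, 'X').gsub(/\$[^$]*\$/, 'x')

puts Benchmark.measure { Maruku.new(source).to_html }
puts Benchmark.measure { Maruku.new(no_itex).to_html }
```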
|
posted 14 years ago
distler
123 posts
edited 14 years ago |
On …
Note that it does spend (what I consider to be) a significant amount of time querying the database, but it is totally dwarfed by the rendering time. (I don’t know why yours is spending an order of magnitude longer querying the database. Seems like something’s very wrong there, even though the conclusion is the same.) The page, of course, contains a number of markup errors, like …
which sent Maruku into convulsions. Surprisingly, correcting those errors did not appreciably affect the rendering time (the above-reported time is after making the relevant markup corrections). On my laptop, a typical time was …
SQLite3 is faster than MySQL, but the machine itself is significantly slower than the iMac. Of those 49 seconds spent rendering the page, 43 were spent in Maruku (for obscure reasons, Maruku has to be run twice, so really we’re talking about 22 seconds to process the 175KB source). Maruku doesn’t particularly care about the number of WikiLinks, so that has nothing to do with why it takes so long to render this particular page. Of the remaining 6 seconds, 4 were spent in the Instiki Sanitizer. I don’t think there’s much to optimize there. The remaining 2 seconds were, largely, spent in the Chunk-Handler – the thing that processes WikiLinks (and, presumably, cares about how many of them there are). 2 seconds is still a long time, but it’s not surprising. Doing on the order of 6,000 RegExp substitutions (5360 chunk-masking and 686 chunk-unmasking operations, to be precise) on a 175KB string takes significant time. Using Regexps to process long strings sucks. I have looked at various optimizations of the Chunk-Handler code, but nothing I can do will contribute much to the speedup of rendering this page, which is dominated by Maruku. Now, if one of your nLab folks were to volunteer to write a PEG grammar for Maruku’s extended Markdown syntax, … Update:
Since I’m not gonna hold my breath for that to happen, I decided to spend some time (alas, more than I expected) making Maruku faster. The new rendering times for that page are … on … and … on my laptop. Roughly a factor of 2 improvement in the total rendering time, in both cases. Still not great, but it’s the best that I am going to achieve. |
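[To make the chunk-masking point above concrete, here is a toy version of the idea: each wikilink is swapped for an opaque token before the Markdown pass and swapped back in afterwards, and with thousands of chunks on a 175KB string those gsub passes are where the remaining couple of seconds go. This is an illustration only, not Instiki’s actual Chunk-Handler; the filename and URL scheme are assumptions.]

```ruby
# Toy illustration of chunk masking/unmasking; NOT Instiki's actual code.
require 'rubygems'
require 'maruku'

source = File.read('nlab-page.md')               # placeholder for the 175KB page
chunks = {}

masked = source.gsub(/\[\[([^\]]+)\]\]/) do      # mask: hide each [[wikilink]]
  token = "chunk#{chunks.size}mask"
  chunks[token] = Regexp.last_match(1)
  token
end

html = Maruku.new(masked).to_html                # the Markdown pass sees no wikilinks

unmasked = html.gsub(/chunk\d+mask/) do |token|  # unmask: swap rendered links back in
  name = chunks[token]
  %(<a href="/nlab/show/#{name}">#{name}</a>)    # URL scheme assumed, for illustration
end
```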
|
posted 14 years ago
Andrew Stacey
118 posts
|
Forum: Heterotic Beast – Topic: Bugs Vanilla stores some information in the user database, including, for each discussion, the last comment that you read. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Forum: Heterotic Beast – Topic: Bugs Incidentally, if you want to get a feel for what Vanilla looks like but don’t want to sign on to the nForum, I have a test forum set up: http://www.math.ntnu.no/~stacey/Mathforge/Test/. I can easily add any plugins from the nForum that you might want to play with. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
The “print” view wasn’t much faster: 71303ms. |
|
posted 14 years ago
Andrew Stacey
118 posts
|
Okay, let’s take http://ncatlab.org/nlab/show/smooth%20infinity-groupoid%20--%20structures which, according to grep, has on the order of 409 wikilinks. (Actually, it has 409 hits for the string …) Now I delete it from the cache, and try again. A cup-of-tea later, and I get the following: 74074ms (View 72773, DB: 1276). On the receiving end, I get 82s and 87s for the delivery times. The second time, similar figures. The time this gets a bit annoying is when editing a page, since then it has to regenerate it each time. That’s a fair wait if you’ve only changed a couple of spelling mistakes. |
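[For what it’s worth, the grep count can be reproduced in Ruby; counting occurrences of the opening [[ is presumably what gives the 409 figure. The filename is a placeholder for the exported page source.]

```ruby
# Count wikilinks by counting opening [[ brackets in the page source.
puts File.read('smooth-infinity-groupoid.md').scan(/\[\[/).size
```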