

December 9, 2008

Science Citation Index

Posted by John Baez

When I heard some of the top journals in category theory aren’t listed by the Science Citation Index, I posted a question on the category theory mailing list.

I asked:

Dear category theorists -

Thomson Scientific runs the well-known “Science Citation Index”, which “provides researchers, administrators, faculty, and students with quick, powerful access to the bibliographic and citation information they need to find relevant, comprehensive research data”. I believe data from this index is used in tenure and promotion decisions at some universities.

I just heard that Theory and Applications of Categories and Cahiers are not listed on the Science Citation Index, while - for example - Elsevier’s journal Homeopathy is listed there.

Is this true? Is there some way to improve the situation?


I got a lot of replies. Here’s what Robert Dawson said about the journal Homeopathy:

That would be the one with one character per issue printed among 128 blank sheets, right?

Most of the replies, alas, were less funny… and some verge on the tragic, like this one by Joachim Kock. (I got his permission to quote this in its entirety.)

Hello category theorists,

Just to report that in Spain, Thomson’s Science Citation Index is now the main measure of quality of publication in mathematics: every report and application has to indicate impact factor (*) and citation count (**) of all one’s papers — and papers in journals not indexed (or in conference proceedings) simply don’t count as papers! As a concrete example, I received last year an evaluation from the Ministry of Science and Education explicitly telling me that I need to improve the number of papers published in indexed journals.

(*) Impact factor is really a silly measure for quality: for example many learned societies distribute papers into several different journals only according to length but using otherwise the same criteria for acceptance, whereas those different journals can have very different impact factors in Thomson’s index. (It may interest some of you that the Elsevier journal Chaos Solitons & Fractals edited by El Naschie has a higher impact factor than Annals of Mathematics.)

(**) Of course the citation count is Thomson’s count, which counts only citations from Thomson indexed papers, and even fails to identify preprint citations to papers later indexed. (E.g. paper A cites preprint B. When B is published in an indexed journal the citation from A does not count.)

It has a very bad effect, especially on young researchers, who have to follow the rules of Thomson’s and Ministry’s game, and look up impact factors before choosing which journal to submit to, instead of following scientific criteria.

Furthermore, access to Thomson’s database is not free. (The Spanish Ministry has paid access for all Spanish universities, instead of using that money to fund research.) It is more than likely that Thomson is affiliated in some way with Elsevier and other publishing houses — in any case they share the same goals of extracting money from science budgets — and therefore free journals represent a threat, and it is not very likely that any free electronic journal will be included in Thomson’s index. It did happen with Geometry & Topology, though…

I agree with George Janelidze that it is important to get TAC and Cahiers into the AMS citation database. This should be possible on scientific grounds alone. Until that happens I think there is not much hope of entering Thomson’s index…

The real problem is to convince science foundations and other funding agencies to boycott Thomson. Just getting more and more good journals into Thomson’s index is not going to help with that.

Getting the category theory journals into the AMS citation database will help provide a strong alternative to Thomson.


Andrej Bauer added that in Slovenia, the Science Citation Index is used in a formal way in government funding decisions, and also in decisions on promotions at his university. As a result, academics feel they must publish in journals reviewed by this index.

Giuseppe Longo pointed us to this protest against the use of citation indexes for the assessment of research.

Ronnie Brown pointed to this report, which explains many problems with basing decisions on citation statistics.

It may be hopeless to prevent bureaucrats from seeking numerical ways to measure the quantity and quality of research. As mathematicians, we understand the limitations of numbers better than many people… but this understanding will take a long time to spread, even if we work very hard at it. In the meantime, can we try to persuade the bean counters to count more relevant beans?

Posted at December 9, 2008 3:14 AM UTC

12 Comments & 0 Trackbacks

Re: Science Citation Index

Both MathSciNet and the arXiv index citations per paper, but the arXiv’s Citeseer index seems to be dysfunctional at the moment. Maybe the effort died for lack of funds. These at least give an indication of the number of citations to a particular author or paper.

Any citation datum is, at best, a gross measure of quality; sort of like saying “the woman with blonde hair” is an adequate description of a particular person, or that blonde hair is the only measure of attractiveness or intellectual ability. (Spare us the jokes, please.)

Administrators want easy measures of quality. It is part of our job as mathematicians to educate the public (and our bosses) about the meaning of measure or the measure of meaning. Detailed measures of the quality of research require time, patience, and an extreme lack of prejudice on the part of the reviewer. Consequently, quality is best described in prose rather than numbers. On the other hand, a variety of numbers can help delineate a multi-dimensional image of quality.

The IMU released this report on the misuse of citation indices. Department chairs should provide that information to their deans.

An item of continuing concern is the role of blogging and wiki-ing in evaluations for promotion and tenure. While these venues are NOT peer reviewed, they are reviewed. Some very popular blogs, such as this one, are providing a service to the greater community and improving science and science communication. However, popularity, and indeed the number of comments on a given blog, are not measures of the quality of the entry. If they were, this blog might be the “Animals that are verbs”-category café.

Posted by: Scott Carter on December 9, 2008 3:17 PM | Permalink | Reply to this

Re: Science Citation Index

Scott wrote:

However, popularity and indeed the number of comments on a given blog are not measures of the quality of the entry. If they were, this blog might be the “Animals that are verbs”-category cafe.

Hey! I thought that was a very high quality blog entry. And it’s the only one here that appears in the Science Citation Index. In fact it played a major role in my last promotion.

Posted by: John Baez on December 10, 2008 7:23 AM | Permalink | Reply to this

Re: Science Citation Index

Scott said:

Both MathSciNet and the arXiv index citations per paper, but the arXiv’s Citeseer index seems to be dysfunctional at the moment.

If I look myself up on MathSciNet I am told that I have been cited by people other than myself precisely zero times. Fortunately I know this to be untrue, so I won’t slit my wrists just yet.

The MathSciNet citation count has the same problems as the others - the category theory journals are not among those considered to “count”. Presumably MathSciNet uses the same “AMS citation database” that Joachim refers to in his message. Anyway, the list of journals it deems worthy of counting can be found here and does not include TAC, Cahiers, Applied Categorical Structures or ENTCS (for example).

On top of this, I am falling foul of point (**) that Joachim makes:

The citation count…even fails to identify preprint citations to papers later indexed. (E.g. paper A cites preprint B. When B is published in an indexed journal the citation from A does not count.)

I make my papers available on the arXiv long before they are officially published, and a Google search shows I have quite a few citations of the preprint versions from before the published version appeared. So these don’t count.

Furthermore, the “valid” journals are, in my experience, the ones with the longest delay between submission and publication, which makes this particular problem even worse. I have a paper which was part of my 2002 thesis and appeared in Math. Proc. Cam. Phil. Soc. in 2006. Another I put on the arXiv in 2003, and it will appear in JPAA in 2009! All the citations I’ve had in the interim don’t count, even if they were from papers in “valid” journals.

This is particularly bad for young researchers as the “invalid” period of time takes up a huge proportion of our active research life so far.

I recently had to list the number of citations (excluding auto-citations) for each of my publications for a grant proposal, so I became aware of this issue with MathSciNet although at the time I didn’t fully understand what the problem was. I spent hours (with the help of a nice friend) trawling the internet to try to find my true citation count. It wasn’t zero.

Posted by: Eugenia Cheng on December 9, 2008 4:49 PM | Permalink | Reply to this

Re: Science Citation Index

“If I look myself up on MathSciNet I am told that I have been cited by people other than myself precisely zero times. Fortunately I know this to be untrue, so I won’t slit my wrists just yet.”


But then again, as of this writing, you have 14,323 hits on a single YouTube entry! And Fame is a fickle food.

Is Citeseer functional now?

Posted by: Scott Carter on December 9, 2008 5:15 PM | Permalink | Reply to this

Re: Science Citation Index

The UK Computing Research Committee (an expert panel comprising representatives of the British Computer Society, the Council of UK Professors and Heads of Computing, and the Institution of Engineering and Technology) made a submission to the UK Higher Education Funding Council (HEFCE) on their proposed use of citation metrics to assess research performance by UK academics. After reading this report, a rational policy maker would surely abandon any such policy, but I doubt that such will be the case for HEFCE.

The report includes these statements:

“It would be incompetent and unprofessional to introduce a citation-based Research Excellence Framework until it has been established that there is an adequately complete, consistent and auditable set of data, available from multiple sources free of any commercial bias, that can be relied on to be kept up to date, that includes citations in journals, conferences, PhD theses, industrial reports and institutional repositories — and that assessments based on citation counts from these sources leads to cost-effective assessment of research quality that does not lead to undesirable changes in the way research is carried out or published or on standards or variety of teaching. We do not see any convincing evidence that these criteria have been met.”


“We have conducted an exercise on one paper that has over a hundred and fifty “references” on Google Scholar. The ISI citation count is 24; the citation count from ACM is 12 (ACM is the major professional body in international computing). There are no citations in common between the ACM and ISI lists! Many of the extra Google Scholar references are in industrial reports, books, PhD theses and conferences. A study based on a single paper can only be illustrative but it shows that much wider analysis is needed to find robust and valid bibliometric indicators.” (Italics in original.)

Posted by: peter on December 9, 2008 11:14 PM | Permalink | Reply to this

Re: Science Citation Index

I’ve just noticed yet another way in which I fall foul of the MathSciNet system: I have an article in Electronic Notes in Theoretical Computer Science (with Hyland and Power). However, this journal is not even part of the database at all, so the paper doesn’t count as a publication! To clarify: my papers in TAC do at least “count” as publications, even if citations from TAC don’t “count” as citations.

And guess what: in reality this non-publication is my most cited paper, but none of those citations “count”, even if it is from a “valid” journal. For example, this paper has been cited from the Journal of the LMS and Advances, each of which contributes zero to my total citation count of…zero!

I’m clearly going to spend the rest of the night pondering just how badly I’ve been playing this game.

Posted by: Eugenia Cheng on December 10, 2008 12:38 AM | Permalink | Reply to this

Re: Science Citation Index

Many years ago during an ego-search, I checked the paper bound citation index and found that Sheila Carter and I were listed as co-citers. She is a geometer interested in immersions, so the human looking through the papers must have gotten confused. I am sure I have never cited her work, and I expect she would not cite mine.

Eugenia’s claim of “not playing the game correctly” strikes me as odd. Whom are you playing for? Do your peers in the UK actually look carefully at citation numbers when determining grants? I don’t think NSF math reviewers take these data too seriously. Isn’t the content of the proposal the thing that matters? If not, then the quality of the researcher?

Posted by: Scott Carter on December 10, 2008 1:57 AM | Permalink | Reply to this

Re: Science Citation Index

This issue is in the air, here's a cartoon. (But the cartoonist forgot to correct for the number of times that your article or its citation appears in the premier journal of your field when that journal is not included in the citation indexes.)

Posted by: Toby Bartels on December 10, 2008 5:01 AM | Permalink | Reply to this

Re: Science Citation Index

Posted by: Toby Bartels on December 10, 2008 8:53 PM | Permalink | Reply to this

Re: Science Citation Index

Here is a follow-up cartoon (less relevant; I won't point to any more unless they become more relevant again).

Posted by: Toby Bartels on December 17, 2008 1:54 AM | Permalink | Reply to this

Darwinian Fitness trumps h-index; Re: Science Citation Index

31 Jan 2008 10:30 am
Why the h-index is little use

“In 2005 Jorge E. Hirsch published an article in the Proceedings of the National Academy of Science (link), proposing the ‘h-index’, a metric for the impact of an academic’s publications.”

“Your h-index is the largest number n such that you have n papers with n or more citations. So, for example, if you have 21 papers with 21 or more citations, but don’t yet have 22 papers with 22 or more citations, then your h-index is 21.”

“Hirsch claims that this measure is a better (or at least different) measure of impact than standard measures such as the total number of citations. He gives a number of apparently persuasive reasons why this might be the case….”

“The first class of exception is people with very few papers. Someone with 1-4 papers can easily evade the rule, simply because their distribution of citations across papers may be very unusual. In practice, though, this doesn’t much matter, since in such cases it’s possible to look at a person’s entire record, and measures of aggregate performance are not used so much in these cases, anyway.

The second class of exceptions is people who have one work which is vastly more cited than any other work. In that case the formula (*) tends to overstate the h-index….”
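The definition quoted above (“the largest number n such that you have n papers with n or more citations”) is simple to compute. Here is a minimal sketch in Python; the function name and the example citation lists are illustrative, not from the post:

```python
def h_index(citations):
    """Return the largest n such that at least n papers have n or more citations."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break  # counts are sorted, so no later paper can qualify
    return h

# The example from the quote: 21 papers with 21+ citations, but not 22 with 22+.
print(h_index([21] * 21))          # 21
print(h_index([100, 5, 3, 1, 0]))  # 3
```

The second example illustrates the “one vastly more cited work” exception: a paper with 100 citations contributes no more to the h-index than one with 3.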

Be that as it may, at the August 2008 conference at Harvard for the 60th birthday of the geometer Shing-Tung Yau, Iz Singer and Edward Witten gave what were felt to be particularly interesting talks.

Singer’s talk, “The Interface between Geometry and Physics, 1967–2007”, summarized some of the advances in this area with which he’s been involved. 1967 was the year of a Battelle conference in Seattle on the intersection of mathematics and physics, organized by DeWitt and Wheeler. Singer displayed a copy of a 1966 letter from Feynman to Wheeler turning down an invitation to attend, with the explanation:

“I am not interested in what today’s mathematicians find interesting.”

Feynman only had, what, 17 refereed papers published? That’s the way that I remember it, because 17 is my favorite integer. But surely he had more influence than SCI suggests.

And how to quantify the influence of Bohr, best remembered for something that was disbelieved almost a century ago, and simultaneously revered and underappreciated?

I, for one, go out of my way not to play the SCI game. I publish a relatively small number of refereed papers, mostly in the venues where my coauthors want them to be. Since about 1975, when I started working with Ted Nelson, the father of hypertext, I’ve been intentionally putting millions of words (in the old magazine metric of 1 word = 6 alphanumeric characters, including spaces) of implicit hypertext into print, radio, video, and digital media.

I just don’t explicitly build all the hotlinks, because the proper link infrastructure designed by Ted Nelson was not implemented in the World Wide Web (which omits, for instance, transclusion, bidirectionality, and micropayments).

I am trying to maximize the Darwinian Fitness of my writings.

The SCI hardly lives in the same metric space.

As 2009 is the bicentennial of Darwin, and sesquicentennial of “On the Origin of Species”, I wonder if we’ll have informed debate on the evolution of texts (including science and math papers) by artificial selection.

Posted by: Jonathan Vos Post on December 13, 2008 10:38 AM | Permalink | Reply to this

Elsevier and Regret; Re: Science Citation Index

Since the “The Case of M. S. El Naschie, Continued” thread seems closed to comments, let me throw in this science fiction/fantasy quotation, which seems to carry synchronicity:

“She held Elsevier’s indigo eyes, willing Elsevier to understand her need, and her longing – and her regret.”

Joan D. Vinge, The Snow Queen, 1980.

Posted by: Jonathan Vos Post on January 3, 2009 7:27 PM | Permalink | Reply to this
