Paper Type | Opinion
Title | Journal Impact Factors - How useful are they?
Author | Robert Molloy
Email | -
Abstract: With increasing importance being attached nowadays to doing research and writing papers for publication, the question of journal selection arises. Apart from the obvious criterion of choosing a journal which corresponds with the subject area of the paper, it has become fashionable to use the journal impact factor as a criterion for selection. Indeed, it seems as though the value of the impact factor is becoming more important than the journal itself. But what exactly is this “impact factor” and how useful is it?

Historically, the idea of an impact factor was first mentioned by Eugene Garfield in Science magazine in 1955 [1]. That paper is considered to be the primordial reference for the concept of what we know today as the Science Citation Index. Some years later, in the early 1960s, Garfield and Irving Sher created the journal impact factor to help select journals for the new Science Citation Index. The impact factor was based on two elements: the numerator, which is the number of cites in a given year to articles published in the journal in the previous 2 years, and the denominator, which is the number of articles published in the journal in the same previous 2 years. Thus, a journal’s impact factor for 2006 would be:

Impact Factor 2006 = (number of cites in 2006 to articles published in 2004 and 2005) ÷ (number of articles published in 2004 and 2005)
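To make the arithmetic concrete, the short Python sketch below (not part of the original article) applies the two-year formula just given. The helper function impact_factor is purely illustrative; the first call uses the Chiang Mai Journal of Science figures from the 2004 TCI list quoted later in this article, and the second call is a hypothetical rerun showing how a single large special issue dilutes the ratio when the cites do not rise in step.

def impact_factor(cites, articles):
    """Two-year impact factor: cites in year Y to items published in
    years Y-1 and Y-2, divided by the number of articles published in
    those same two years."""
    return cites / articles

# 2004 TCI figures for the Chiang Mai Journal of Science (quoted later
# in this article): 6 cites to 53 articles.
print(round(impact_factor(6, 53), 3))        # 0.113

# Hypothetical illustration of the dilution effect discussed later: a
# 52-paper special issue enlarges the denominator, so unless the cites
# rise in proportion, the ratio roughly halves.
print(round(impact_factor(6, 53 + 52), 3))   # 0.057

Nothing in the sketch goes beyond the definition above; it simply makes visible how sensitive the ratio is to the denominator, which underlies the discussion of the Journal’s own figures below.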
Nowadays, both journals and publishers alike attach great importance to their impact factors. If they are high enough, they use them for promotional purposes. This is a far cry from Garfield’s original intention. At the International Congress on Peer Review and Biomedical Publication in Chicago, USA, in 2005, Garfield reflected on the past 50 years since his original idea and commented: “In 1955, it did not occur to me that ‘impact’ would one day become so controversial. Like nuclear energy, the impact factor is a mixed blessing. I expected it to be used constructively while recognizing that in the wrong hands it might be abused.”

During its lifetime, the impact factor has gradually evolved into a parameter whose influence now far outweighs its intended application. For example, it now influences research assessments, grant applications, and even staff promotions in ways that Garfield could never have imagined. Even in its main role as an index of journal impact, its value is often overstated. I have heard it said, even by respected academics, that journals with impact factors of less than 1 are not worth considering for publication. But the fact is that there are many high-quality journals in the fields of science, technology and engineering with impact factors of less than 1. Our Polymer Research Group, for example, has just had a paper accepted for publication in the journal International Polymer Processing, which is generally regarded as one of the leading journals for the polymer industry worldwide, yet it has a 2005 impact factor of 0.466. This is because some journals, especially industry-related journals, tend to publish a proportionately larger number of articles (the denominator) that are of general interest rather than research articles and which tend not to be cited (the numerator). Other journals simply publish in specialist areas that are well read by a particular community but are also not well cited. This skewness of citations amongst journals is well known and is one of the main arguments used by critics of the impact factor.

Turning our attention closer to home, we now have Thai journal impact factors compiled by the Thai Journal Citation Index Centre (TCI) [2]. With more new journals appearing every year, an analysis of the quality of academic journals published in Thailand has recently been reported in ScienceAsia [3]. Two years ago in 2005, our Chiang Mai Journal of Science was ranked an impressive 3rd out of a total of 75 Thai journals in the TCI’s 2004 list of impact factors. Our impact factor was 0.113 which, when broken down into its component numerator and denominator parts (as in the equation above), came from 6 ÷ 53 = 0.113. However, when the following year’s 2005 list came out, we had plummeted to joint 53rd out of 126 with an impact factor of 0.016 (= 1 ÷ 65). In the latest 2006 list, we are now 64th out of 166 with an impact factor of 0.019 (= 2 ÷ 108). On the face of it, this downward trend would seem to suggest that our Chiang Mai Journal of Science is losing its “impact” relative to other Thai journals. It is only when you analyse where the figures come from that you can understand the reasons why. For example, in September 2005, our Journal published a special issue for the Smart Mat-’04 International Conference which alone contained 52 papers, the equivalent of about 5 regular issues.
Unless this number (which contributes to the denominator) is offset by a proportionate increase in the number of cites (the numerator), the impact factor is bound to go down. Hence, our impact factor for 2006 suffered accordingly, and the same will be true for 2007. In 2008, our impact factor will no doubt go soaring back up again since the 2005 data will no longer be counted. However, with the Journal already committed to publishing 2 more special conference issues in 2008, our impact factors for 2009 and 2010 will decrease again. Another contributing factor is that only cites in Thai journals are counted, not those in international journals. Most of the cites to our Journal, including self-cites, appear in international journals and are therefore not counted. Of course, it can be argued that, because the numbers involved, especially for the numerator, are so small, the fluctuations in the impact factor from year to year will inevitably be large. Nonetheless, this analysis serves to illustrate the point that trends in impact factors can be misleading.

From the positive feedback that we have received in recent years, we would like to think that our Chiang Mai Journal of Science is improving in quality year by year, despite the recent downward trend in the impact factor. Soon, to meet the increasing demand, we will need to increase publication from 3 to 4 issues per year, which is clearly a positive sign. Yet as we continue to attract more papers, including from overseas, our impact factor will not improve unless the number of cites in Thai journals increases proportionately.

As one defender of the impact factor has put it: “Impact Factor is not a perfect tool to measure the quality of articles or journals but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion that we have in each field of the best journals in our specialty.”

Personally, I would not disagree with this view. The system is clearly not perfect but it is the best that we have. Some observers, including librarians, have argued that the numerator (number of cites) in the impact factor calculation is much more relevant to a journal’s “impact” than the denominator (number of articles). Therefore, why not weight them differently, or simply ignore the denominator completely and consider only the number of cites? They claim that this would also bring review journals more into line with research journals, since the high impact factors of review journals are artificially enhanced by the relatively low number of articles that they publish. The detailed arguments for and against impact factors are too numerous to mention here. Suffice it to say that it is not so much their use as their “overuse” (some would say “abuse”) which is the main cause for criticism.

Finally, returning to the theme of journal selection, it was mentioned at the start of this article that there is a growing trend for aspiring authors to select journals primarily by impact factor rather than journal content. In my opinion, while the impact factor is certainly important, it should not be the prime consideration in journal selection.
The prime consideration should be the suitability (in terms of content and style) of the journal itself for the subject matter of the paper. Not only does this enhance the paper’s chances of being accepted, it also ensures that it will be read by fellow workers in the same specialist field. Going for a higher impact factor in a less suitable journal only increases the risk of rejection.

In conclusion, journal impact factors are undoubtedly useful if used constructively. Until someone comes up with a better idea, they are here to stay for the foreseeable future, probably with some fine adjustments along the way. Having said that, impact factors, and especially their growing influence in other areas such as research assessments, remain controversial. We can therefore expect the debate to continue as to how “important” they really are. |
Start & End Page | 269 - 271
Volume | Vol.34 No.3 (SEPTEMBER 2007)
Citation | Molloy R., Journal Impact Factors - How useful are they?, Chiang Mai J. Sci., 2007; 34(3): 269-271.