Journal Impact Factor: A Critical Review

Tommy Boone, PhD, MPH, MA, FASEP, EPC
Professor and Chair
Director, Exercise Physiology Laboratories
The College of St. Scholastica
Duluth, MN 55811

“If you want to truly understand something, try to change it.” -- Kurt Lewin
It is widely recognized that “impact factors” are used to rank journals and, therefore, to identify the so-called best journal in which to publish one’s work. This is true today even though most researchers fail to appreciate that the Science Citation Index (SCI) is frequently misunderstood, if not used indiscriminately. After a careful examination of the cumulative impact data, it is clear that the quantitative influence reflects more than the assumed quality of the journal [1]. From a historical point of view, Gross and Gross suggested in 1927 that scientific journals should be ranked [2]. In 1955, Garfield stated that “reference counting could measure impact” [3]. However, it wasn’t until the early 1960s that such thinking led to the development of the SCI, which in turn led to the publication of the Journal Citation Reports (JCR) in 1975 [1]. Since then, according to Garfield [1], the bibliographic “impact factor” has taken several interesting twists, for example:
“The most used data in the JCR are impact factors – ratios obtained from dividing citations received in one year by papers published in the two previous years. Thus, the 1995 impact factor counts the citations in 1995 journal issues to ‘items’ published in 1993 and 1994. I say ‘items’ advisedly. There are a dozen major categories of editorial matter. JCR’s impact calculations are based on original research and review articles, as well as notes. Letters of the type published in the BMJ and the Lancet are not included in the publication count. The vast majority of research journals do not have such extensive correspondence sections. The effects of these differences in calculating journal impact can be considerable.” [1, p. 2]
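To make the ratio Garfield describes concrete, here is a minimal sketch of the calculation in Python. The journal, citation count, and article count are invented solely for illustration; only the arithmetic follows the JCR definition quoted above.

```python
# Minimal sketch of the JCR-style impact factor described by Garfield [1].
# All numbers below are hypothetical and serve only to illustrate the arithmetic.

def impact_factor(citations_to_prior_two_years: int, citable_items_prior_two_years: int) -> float:
    """Citations received in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in those two years."""
    if citable_items_prior_two_years == 0:
        raise ValueError("No citable items published in the two-year window.")
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 420 citations in 1995 to items published in 1993-1994,
# and 150 research/review articles published across those two years.
print(f"1995 impact factor: {impact_factor(420, 150):.2f}")  # -> 2.80
```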
The counting of “items” per se is a serious source of misplaced use of the “impact factor.” Garfield [1] states that absolute citation counts “…preferentially give highest rank to the largest or the oldest journals.” Journals with the highest impact factor may not be the very best journals, although the JCR is used to rank, evaluate, categorize, and compare journals. Numerous factors go into increasing a journal’s impact, including: (a) subject matter, such as dermatology vs. molecular biology; (b) papers that cite all of the relevant literature; (c) the time required to review manuscripts; and (d) other subtleties, such as letters, discussions, proceedings, notes, news stories, and editorials published in journals [4].
Another condition that favors a high impact factor is whether the journal is a print-copy publication rather than an electronic journal. For example, the Journal of Exercise Physiologyonline and the Professionalization of Exercise Physiologyonline do not have an impact factor. Recently, a colleague from another university said: “There isn’t any incentive for me to publish with JEPonline. Tommy, my administrators require published articles in journals with high impact factors. If I want tenure, that is what I must do.” In general, the statement is consistent with the thinking of many administrators. But, of course, there are many inconsistencies across the university system regarding what is important for promotion and tenure.
Whether exercise physiologists should be concerned when submitting a manuscript to a journal, electronic or otherwise, that does not have an impact factor needs further study. In fact, given the information in this article, it is unfortunate that exercise physiologists would submit their research only to journals with an impact factor.
Garfield [4, p. 3] states that, “As a general rule, the journals with high impact factors are among the most prestigious today.” This perception may be wrong. Clearly, the notions of influence or importance are subjective, if not biased. The impact factor, as Hoeffel [5] has stated, “…is not a perfect tool to measure the quality of articles….” In fact, the impact factor does not define the quality of scientific journals within a particular field. A journal’s impact factor is more complicated than most researchers realize. Finding an objective measure of the impact of different journals is not in itself a bad idea. The “citation analysis” approach may not be the best method, however. Just because a researcher cites a journal article does not automatically mean that the journal itself is important. It may simply indicate that the article in the journal is important to a researcher’s subject matter. Thus, the assumed influence or impact of the journal is not so much an objective measure of the journal as it is a reflection of interest in a particular topic of research.
For example, although a journal’s impact factor is based on the numerator (the number of citations in the current year to any items published in the journal in the previous 2 years) and the denominator (the number of substantive articles published in the same 2 years) [4], there is wide variation from journal to journal in what constitutes the numerator and the denominator. Obviously, both are important in the calculation of the impact factor. The process tends to highlight older journals more than journals that publish more manuscripts [1, 6]. There is room for considerable error and inflation when items such as letters to the editor, editorials, book reviews, and news reports are counted alongside research papers or review articles [6]. These confounding variables indicate that the impact factor is not an objective measure of the influence of journals in a particular field. Informed and careful use of the impact factor is essential to avoiding administrative mistakes. Hence, contrary to the view of Hopkins [7], who supports the use of the impact factor in making administrative decisions, it is clear that the impact factor should not be among the primary considerations used by exercise physiologists to improve their chances of promotion and/or tenure.
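The sensitivity of the calculation to what counts as a “substantive” item is easy to demonstrate. The sketch below uses invented counts: citations to every item type enter the numerator, while the denominator changes depending on whether front-matter items (letters, editorials, news reports) are counted as citable, so the same journal ends up with noticeably different impact factors.

```python
# Hypothetical counts for one journal over the two-year window.
# The specific numbers are invented; the asymmetry between numerator and
# denominator is the point.
citations_received = 500           # citations this year to anything published in the window
research_and_review_articles = 180
letters_editorials_news = 120      # front-matter items that may or may not be counted

# Denominator restricted to "substantive" items:
if_narrow = citations_received / research_and_review_articles

# Denominator counting every published item:
if_broad = citations_received / (research_and_review_articles + letters_editorials_news)

print(f"Impact factor, articles-only denominator: {if_narrow:.2f}")  # -> 2.78
print(f"Impact factor, all-items denominator:     {if_broad:.2f}")   # -> 1.67
```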
If librarians want to use the JCR as a tool for managing library journal collections, then so be it. However, jumping to ill-formed conclusions about the quality of published articles and/or journals is something altogether different. The idea that publishing in a journal with a high impact factor is better than publishing in a journal with a small or no impact factor does not make sense. It is the same unfortunate notion that “bigger is better.” Exercise physiologists should be encouraged to publish their work when and where possible, particularly if the journal provides a timely opportunity to position the article in the scientific literature. There really isn’t any serious advantage to career development in holding out for a so-called more influential journal. There are many other ways to demonstrate scholarly work and/or publication that are just as objective as measures of overall quality, if not better measures of the researchers’ work. It is incorrect to conclude that the quality of research is somehow inferior when it is published in journals with a small or no impact factor. There is room for significant improvement in the ranking of print-copy and, potentially, electronic publications, particularly since the JCR data are used by advertisers to enhance the marketing of a product [8].
The first improvement might be a complete re-evaluation of the use of the bibliographic “impact factor.” Recently, Porta [9] pointed out that Garfield [10-12] argued early on that the impact factor “…is often not the scientometric indicator of choice” [9]. The impact factor “average” comes from a “…highly skewed distribution (often, 85% of citations received by a journal are actually received by about 15% of the articles it published)” [9, 13, 14]. Even then, Jefferson [15] concludes that, while highly cited journals are read more often than journals with lower impact factors, there is no evidence that the journals with higher impact factors are of higher quality. Jefferson and colleagues have also questioned the measurement of quality of the editorial peer review process, stating that there is no internationally accepted definition of quality [16]. Jefferson further concludes that “…far from assessing the quality of scientific production, which underscores the altruistic nature of the scientific enterprise, citation rates are being used to apportion research money….” [15]. It is disconcerting, to say the least, that the impact factor may also be directly linked to the researchers’ pockets [17]. If so, the impact factor, which Jefferson views as “a circulation business indicator,” is likely to influence healthcare research and decisions about health and wellness in ways that may have negative consequences in the public sector.
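To illustrate the skew Porta describes above, the short sketch below uses made-up citation counts for 100 articles: when roughly 15% of the articles collect most of the citations, the journal-level average (the impact factor) says little about the typical article.

```python
# Hypothetical citation counts for 100 articles published by one journal.
# The distribution is invented purely to illustrate the skew Porta describes.
from statistics import mean, median

highly_cited = [40] * 15   # 15 articles with 40 citations each
rarely_cited = [1] * 85    # 85 articles with 1 citation each
citations = highly_cited + rarely_cited

top_share = sum(highly_cited) / sum(citations)
print(f"Share of citations going to the top 15% of articles: {top_share:.0%}")  # -> 88%
print(f"Journal-level average (impact-factor-like): {mean(citations):.2f}")     # -> 6.85
print(f"Median article: {median(citations):.1f} citations")                     # -> 1.0
```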
This kind of thinking is highly unfortunate. Researchers should be able to publish their work where possible, regardless of the so-called usefulness of a journal. And all journals should be given the same professional respect for publishing quality articles. A journal’s impact factor should not be used to assess a faculty member’s publications for the purpose of obtaining a job, tenure, or promotion. It just doesn’t make any sense, given the vast number of factors unaccounted for in the assumed “quality analysis” of an article and/or journal. In other words, “more is not always better,” and, in this case, the notion that only good work is published in journals with high impact factors is a gross violation of publication and/or scientific accountability. By that logic, researchers who submit their work to a journal with no impact factor must be drawing from the bottom of the barrel and, without any justification, their submissions and publications must be a waste of time. This is not in fact true.
The quality of published manuscripts is defined by the integrity interwoven into the research and/or writing process. It is not defined by impact factors per se. Perhaps what is needed today is increased adaptive capacity to help members of different professions respond quickly and intelligently to needed change. If so, a whole new decision-making process must evolve to allow today’s researchers to act and to evaluate the results of their work, instead of relying on a model more than 50 years old. Exercise physiologists, in particular, should assess the results of their work and the work of their colleagues by reading the published work and then critically reflecting on the merits of the published product. This understanding is within their grasp, since peer reviewers for exercise physiology research come from within the ranks of the evolving profession of exercise physiology.
Although this proposal may appear radical to the exercise physiology community, before exercise physiologists can learn to be leaders as healthcare professionals, they must learn to think for themselves. Academic programs in kinesiology must become exercise physiology programs, exercise physiologists as technicians must become inspired professionals and, as many have noticed with the founding of the American Society of Exercise Physiologists, exercise physiologists must learn how to recover from their lack of an exercise physiology philosophy and begin with questions such as:
- Why is the impact factor important to publishing, promotion, and tenure?
- Why should the widely believed notion that ranking journals is important go unquestioned and, therefore, be accepted as dogma?
- What experiences are important in explaining why exercise physiologists publish in journals with the highest impact factors?
- Is the content of journals without impact factors important to exercise physiologists as healthcare professionals?
- What role does publishing articles play in the professional development of exercise physiology?
- What can ASEP do to encourage new thinking about the quality of journals and articles?
- How can exercise physiologists help others understand that journals and articles come in every size, shape, and disposition, and that each deserves the respect of the leaders, researchers, and writers in the discipline?
- What can the ASEP leadership do to encourage risk taking, especially in developing an overriding philosophy that “to publish is to do something good”?
- How can exercise physiologists be encouraged to speak up and set the direction in the professional development of exercise physiology?
There is growing evidence that the compulsion to use impact factors to identify quality journals and articles is seriously flawed. With all the intellect that the world has, it is a wonder that researchers have fallen victim to nurturing such a bad idea. Perhaps it is true that some researchers have gotten too caught up in the apparent correlation between the “impact factor” and NIH grants, big money, and elaborate laboratories, and in the extraordinary and potentially troubling notion that research is just another form of competition. Is it possible that many have forgotten why they are doing research? It is common knowledge that poets and writers talk about the freedom to do their work, to publish, and even to make mistakes in being creative. Why can’t researchers just enjoy doing research and publishing their work in any journal that will publish it? It seems so pointless to hear adult men and women saying, “my journal is better than your journal” or “my article was cited more than your article.”
Liu [18] is correct about one thing: “…the citation number of individual papers should be adjusted according to discipline to improve on an imperfect but widely used indicator of research quality.” However, this in itself will predictably fail to correct the problem, which is largely related to the single numerical summary measure [19]. And, as Liu [18] and Jefferson and colleagues [20] highlighted in their published works, there is no solid empirical evidence on the effectiveness of traditional peer review. In fact, recently, Robergs [21] published an extensive article about the issues and concerns associated with the traditional peer-review process. It is a fearless and optimistic article about real problems in publishing that few researchers seem to have the courage to question. In a way, to begin to understand the problem with impact factors is to understand that peer review needs critical assessment as well.
This article is a snapshot of the impact factor “problem” as it is presently used. Researchers (not librarians, information scientists, or publishing houses and companies, to mention a few) ought to be the primary players in deciding the actual value and quality of their research. The problem is that only a few researchers, exercise physiologists included, seem to understand that research, as it is presently understood, is held captive in thinking that is decades old. It will continue down the same path unless the entrepreneurs of the 21st century demand their publishing rights. But, first, they must look deeply and critically at two questions. Why aren’t the researchers in charge of publishing? And, if researchers continue to distance themselves from the right to publish in the journal of their choosing, what are the social and professional implications?
“Leaders…can conceive and articulate goals that lift people out of their petty preoccupations, carry them above the conflicts that tear a society apart, and unite them in pursuit of objectives worthy of their best efforts.” -- John W. Gardner, No Easy Victories
What is presently needed in academics is strong leadership to argue the case that the Institute for Scientific Information (ISI), the database publishing company that publishes the SCI and the impact factor, has crossed the line of common sense. In particular, since journals are moving from print-copy to electronic publication [22-25], librarians will have less concern about the financial costs typically associated with hard copies of journals in the library. So, why not discontinue the impact factor, since it is misused in determining the “…importance of individual researchers, research programs, and even the institution hosting the research” [26]? Hecht, Hecht, and Sandberg [26] recommend abolishing the impact factor. It should not be used to assess the relevance of research, and it should not be considered when making decisions about promotion and funding research proposals [27]. In short, the impact factor should be used with extreme care due to the many factors that influence citation rates. Golder [28] points, for example, to the bias toward English-language journals compared with journals in other languages.
Just as many medical researchers submit their manuscripts to generalist medical journals (like the New England Journal of Medicine and the Lancet) to gain the widest recognition for their published work, rather than publishing in specialist medical journals with a smaller audience [29], exercise physiologists often do exactly the same thing. It is not necessarily wrong, but that doesn’t make it right either. Publishing in a lower-profile journal is not popular among many researchers; yet, according to Sloan and Needleman [29], it isn’t unusual for dentists to prefer specialist journals to generalist journals. This kind of thinking is revolutionary. Perhaps exercise physiologists should also question the value of using impact factors in the assessment of publication quality. It is an old “numerology” system that is simply misused and misconstrued. It is also an inappropriate measure of scientific quality.
In sum, the impact factor is an ill-conceived, intellectually and technically flawed, and misleading effort to assess the academic worth of a paper [30]. And, if that’s not enough, Walter and colleagues [30] point out that the impact factor (IF) has now “…spawned a range of flawed offspring, including ‘Scope-adjusted IF’, ‘Discipline-specific IF’, ‘Journal-specific influence factor’, ‘Immediacy index’ and ‘Cited half-life’”. Exercise physiologists should reflect critically on the issues presented in this article. Although this article is by no means complete in its analysis, it is nonetheless sufficient to help with the decision to make a clean break from the use of impact factors.
References

1. Garfield, E. (1996). Fortnightly Review: How can Impact Factors be Improved? BMJ. 313:411-413 (17 August). [Online]. http://bmj.com/cg/content/full/313/7054/411?ijkey=QHqq/zTqoo0Qk

2. Gross, P.L.K. and Gross, E.M. (1927). College Libraries and Chemical Education. Science. 66:385-389.

3. Garfield, E. (1955). Citation Indexes for Science: A New Dimension in Documentation Through Association of Ideas. Science. 122:108-111.

4. Garfield, E. (1999). Journal Impact Factor: A Brief Review. CMAJ. 161:1-7 (19 October). [Online]. http://www.cmaj.ca/cgi/content/full/161/8/979?ijkey=nr8.IXo1aXxvc

5. Hoeffel, C. (1998). Journal Impact Factors. [letter]. Allergy. 53:1225.

6. Joseph, K.S. and Hoey, J. (1999). CMAJ’s Impact Factor: Room for Recalculation. CMAJ. 161:1-5 (19 October). [Online]. http://www.cmaj.ca/cgi/content/full/161/8/977?ijkey=.LEQZ2kRXqDUE

7. Hopkins, W.G. (2000). Impact Factors of Journals in Sport and Exercise Science. Sportscience. 4:1-6. [Online]. http://www.sportsci.org/jour/0003/wgh.html

8. British Medical Journal. (2003). Quality of Impact Factors of General Medical Journals. [letter]. [Online]. http://bmj.com/cgi/content/full/326/7383/283

9. Porta, M. (2003). Quality Matters – and the Choice of Indicator, Too. BMJ.COM. [letter, February 6]. [Online]. http://bmj.com/cgi/eletters/326/7383/283

10. Garfield, E. (1985). Uses and Misuses of Citation Frequency. Current Contents Life Sciences. 43:3-9.

11. Garfield, E. (1986). Which Medical Journals Have the Greatest Impact? Ann Intern Med. 10:313-320.

12. Garfield, E. (1987). Prestige Versus Impact: Established Images of Journals, Like Institutions, are Resistant to Change. Current Contents Life Sciences. 38:3-4.

13. Porta, M. (1996). The Bibliographic ‘Impact Factor’ of the Institute for Scientific Information, Inc.: How Relevant is it Really for Public Health Journals? J Epidemiol Community Health. 50:606-610.

14. Seglen, P.O. (1992). How Representative is the Journal Impact Factor? Research Evaluation. 2:143-149.

15. Jefferson, T.O. (2003). Quality of Impact Factors of General Medical Journals – PRAVDA Wins Hands Down. BMJ.COM. [letter, February 8]. [Online]. http://bmj.com/cgi/eletters/326/7383/283

16. Jefferson, T.O., Wager, E., and Davidoff, F. (2002). Measuring the Quality of Editorial Peer Review. JAMA. 287:2786-2790.

17. Jimenez-Contreras, E., Delgado-Lopez-Cozar, E., Ruiz-Perez, R., and Fernandez, V.M. (2002). Impact-Factor Rewards Affect Spanish Research. Nature. 417:898.

18. Liu, J.L.Y. (2003). The Assessment of Research Quality Using a Combination of Approaches. BMJ.COM. [letter, February 19]. [Online]. http://bmj.com/cgi/eletters/326/7383/283

19. Smith, R. (1998). Unscientific Practice Flourishes in Science. BMJ. 317:1036-1040.
20. Jefferson, T.O., Alderson, P., Davidoff, F., and Wager, E. (2003). Editorial Peer-Review for Improving the Quality of Reports of Biomedical Studies (Cochrane Methodology Review). In The Cochrane Library, Issue 1. Oxford: Update Software.
21. Robergs, R.A. (2003). A Critical Review of Peer Review: The Need to Scrutinize the “Gatekeepers” of Research in Exercise Physiology. Journal of Exercise Physiologyonline. 6(2):i-xiii. [Online]. http://www.asep.org/asep/asep/EDITORI1.doc

22. E-J Miner. (2003). Electronic Journal Miner. [Online]. http://ejournal.coalliance.org/browseTitles.cfm?filter=P

23. DOAJ. (2003). Directory of Open Access Journals. [Online]. http://www.doaj.org/alpha/

24. The University of British Columbia Library. (2003). Freely Available Electronic Journals. [Online]. http://toby.library.ubc.ca/ejournals/unrestrictedlist.cfm

25. Open Access Now. (2003). Campaigning for Freedom of Research Information. [Online]. http://www.biomedcentral.com/openaccess/archive/?page=home&issue=2

26. Hecht, F., Hecht, B.K., and Sandberg, A.A. (1998). The Journal “Impact Factor”: A Misnamed, Misleading, Misused Measure. Cancer Genet Cytogenet. 104:77-81.

27. Bloch, S. and Walter, G. (2001). The Impact Factor: Time for Change. Aust N Z J Psychiatry. 35:563-568.

28. Golder, W. (2000). Wer kontrolliert die Kontrolleure? Zehn Thesen zum sogenannten Impact Factor. [Who controls the controllers? Ten theses on the so-called impact factor.] Onkologie: International Journal for Cancer Research and Treatment. 23:73-75.

29. Sloan, P. and Needleman, I. (2000). Impact Factor. British Dental Journal. 189:1-2. [Online]. http://www.eastman.ucl.ac.uk/~pdarkins/iceph/Impact%20factor.pdf

30. Walter, G., Bloch, S., Hunt, G., and Fisher, K. (2003). Counting on Citations: A Flawed Way to Measure Quality. MJA. 178:280-281.