Issues raised by the following article were discussed at the May 24, 1999 authorship retreat in Montreal. We are still interested in receiving your comments, criticisms, or questions. You may submit comments by sending an e-mail to Veronica Yank at firstname.lastname@example.org.
Academia and Clinic
Disclosure of Researcher Contributions: A Study of Original Research Articles in The Lancet
Veronica Yank, BA, and Drummond Rennie, MD
Background: Authorship disputes and abuses have increased in recent years. In response to a proposal that researcher contributions be specified for readers, The Lancet began disclosing such contributions at the end of original articles.
Objective: To analyze the descriptions researchers use for their contributions and to determine how the order of names on the byline corresponds to these contributions, whether persons listed on the byline fulfill a lenient version of the criteria for authorship specified by the International Committee of Medical Journal Editors (the Vancouver Group), and whether the contributions of persons listed as contributors overlap with the contributions of those who are acknowledged.
Design: Descriptive study.
Measurements: A taxonomy of researchers’ contributions was developed and applied to researchers’ self-reported contributions to original research articles published in The Lancet from July to December 1997.
Results: Contributors lists occupied little page space (mean, 2.5 cm of column length). Placement on the byline did not indicate the specific category of task performed, although the first-contributor position corresponded to a significantly greater number of contributions (mean numbers of contributions: first-contributor position, 3.23; second-contributor position, 2.51; third-contributor position, 2.20; and fourth-contributor position, 2.51) (P < 0.01). Forty-four percent of contributors on the byline did not fulfill a lenient version of the Vancouver Group’s criteria for authorship. Seventy percent (corrected from 60%, as stated in the original article) of the most common categories of activities described on contributors lists overlapped with those on acknowledgments lists.
Conclusions: Publication of lists that specify contributions to research articles is feasible and seems to impart important information. The criteria for authorship outlined by the Vancouver Group do not seem to be congruent with the self-identified contributions of researchers.
Ann Intern Med. 1999;130:661-670.
From the Institute for Health Policy Studies, University of California, San Francisco, California. For current author addresses, see end of text.
As the number of authors per original biomedical research paper has increased, accountability has become dislocated from credit (1, 2) and disputes and abuses of authorship have increased (3-5). This situation led us to propose a new system (2, 6, 7) that 1) acknowledges as a contributor everyone who has performed important work on a project that results in an article, 2) lists descriptions of their contributions for the reader, 3) includes on the byline the names of those who contributed most substantially, and 4) lists as guarantor those who can take responsibility for the integrity of the entire work (8). Science does not exist until it is published, and the idea is to make what is published truly an “inscription under oath” (9). This can best be achieved by publicly identifying the contributions of each researcher within the paper itself rather than in less public locations (academic departments, research units, curricula vitae) that readers can rarely access.
In July 1997, The Lancet became the first biomedical journal to implement our proposal. The Lancet requests that each contributor to original research articles describe the tasks that he or she performed; this information is published in a “Contributors” list at the end of the article. The researchers are not required to use any predefined categories or checklists of contribution; they are simply asked to devise their own descriptions of their work. The Lancet does not require that one or more contributors be identified as guarantors for the paper. In addition, many of the articles include an “Acknowledgments” section that recognizes the work of some participants who are not named on the byline or in the contributors list. In January 1998, the BMJ became the second general medical journal to adopt the proposal but added the requirement that at least one guarantor be designated. Other journals have since implemented versions of the proposal (10-13).
The advantages of replacing obscure coding schemes, such as the order of authors, with a clearer system are obvious (2). However, it is still essential that the emerging strategies of disclosing and assessing contribution are scrutinized to determine whether they are reliable, valid, relevant, and feasible.
These concerns (14) prompted our study, which assessed researchers’ published self-reports of their contributions to original research articles in The Lancet. We wanted to determine whether, and if so how, such descriptions could help readers accurately assign credit and accountability for published research. We developed a taxonomy of contributions and used it to analyze descriptions of contributions. Specifically, we examined how actual contributions correlated with the order of names on the byline, fulfilled the criteria of authorship as defined by the International Committee of Medical Journal Editors (the Vancouver Group), and differed between the byline and the acknowledgments section.
We assessed information on the contributions of researchers to all original research articles (“papers” and “early reports”) (n=121) that appeared in The Lancet from July to December 1997. We identified the different sources of information as the byline (the list of names appearing beneath the title of the article) and the two lists that can appear at the end of the article. These lists are the contributors list, which describes the respective activities of contributors named on the byline (Appendix) and sometimes names additional participants, and the acknowledgments list, which recognizes persons who are not named on the byline or in the contributors list.
Assessment of Contributions
We identified and excluded the articles without contributors lists. For the remaining articles, we counted persons listed on the byline and in the contributors and acknowledgments lists and measured the column length devoted to these lists. We used a very conservative definition to identify potential guest authors: a person who was listed on the byline of an article that contained a contributors list but had no contribution attributed to him or her. If the contributors list contained any general description of group contribution (for example, “all authors participated in the writing of the paper”), none of the contributors were defined as guest authors. We also identified contributors lists that contained “additional information” describing the work performed by contributors not named on the byline. Finally, we examined the potential impact of the disclosure policy by comparing information on the byline and in the acknowledgments lists for the study period with that included in original articles published in The Lancet during the previous 6 months (January to June 1997), before the policy was implemented.
We also defined and collected data on the special case of the trials list, which is specific to large, multicenter trials that often list committees of investigators. Because the findings were similar to those for the acknowledgments list, we report only the acknowledgments list data.
Development of Taxonomies of Contribution Categories
We found that we could group The Lancet’s descriptions of contributions under general category headings. We defined the percentage of participation as the percentage of all contributors on the contributors and acknowledgments lists (n=852 and n=650, respectively) who contributed to a specific category. The percentage of participation was used to identify the most common activities. We developed separate taxonomies of categories for the contributors and acknowledgments lists (Appendix). The categories in the contributors list taxonomy encompassed a wide range of involvement. To cite two examples for the “clinical” category, the descriptions “was responsible for the medical monitoring of patients” and “was responsible for ophthalmic care, photocoagulation treatment, and scoring of the fundus photographs” were found. The categories in the acknowledgments list taxonomy were necessarily more vague because the terminology was less precise (for example, “physicians”) (Appendix).
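As an illustration only (this code is not part of the original analysis), the percentage of participation can be computed as follows. The category names and contributor data here are hypothetical.

```python
from collections import Counter

def participation_percentages(contributions):
    """Given one set of contribution categories per listed person,
    return the percentage of all listed people who contributed to
    each category (each person counted at most once per category)."""
    n = len(contributions)
    counts = Counter()
    for categories in contributions:
        counts.update(set(categories))
    return {cat: 100.0 * c / n for cat, c in counts.items()}

# Hypothetical pool of three contributors and their self-described categories
people = [
    {"designed study", "wrote paper"},
    {"collected data", "wrote paper"},
    {"performed statistical analysis"},
]
pcts = participation_percentages(people)
# "wrote paper" was claimed by 2 of 3 contributors, i.e. about 66.7%
```

Categories are counted per person rather than per mention, matching the definition of percentage of participation given above.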
Development of the Top 10 Most Common Contributions
We calculated the percentage of participation for each contribution category. Those with the highest participation were identified as the top 10 categories and were used to distinguish between the activities performed by persons in different positions on the byline and in acknowledgments lists. Categories other than the top 10 categories were “minor” (categories with a percentage of participation below approximately 5%) or “ambiguous” (categories whose activities or level of involvement were unclear).
Definition of “Major” and “Partial” Contributions
For some categories, we were able to identify contributions that could be assigned to mutually exclusive subcategories of “major” and “partial.” Major indicated that a contributor fulfilled a majority of the activities for that category; partial indicated that the contributions had been more limited (Appendix). For example, in the category “analyzed or interpreted the data,” the contributions “carried out all data analyses” and “assessed data and interpreted findings” were tallied as “major.” In contrast, “analyzed death certificate data” and “performed outcomes assessment part of study” were tallied as “partial.” When a group of people (more than four) were listed without explanation as having made the same contribution (for example, “wrote paper”), we determined that everyone had contributed to the partial subcategory “performed part of writing or editing.” In contrast, when four or fewer researchers “wrote the paper,” we tallied their contribution as a major contribution. Because we had no means of determining the relative importance of partial contributions in some categories compared with major contributions in others, we gave the same weight to both types of contributions in all calculations unless otherwise specified.
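The group-size rule for shared writing credit can be sketched as follows (an illustrative reconstruction, not the authors’ actual code; the names are hypothetical).

```python
def tally_shared_writing(contributor_names):
    """Apply the rule described in the text: when more than four people
    share an unexplained 'wrote paper' credit, tally each as 'partial';
    when four or fewer share it, tally each as 'major'."""
    kind = "partial" if len(contributor_names) > 4 else "major"
    return {name: kind for name in contributor_names}

small_group = tally_shared_writing(["A", "B", "C"])          # 3 people
large_group = tally_shared_writing(["A", "B", "C", "D", "E"])  # 5 people
```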
Testing of the Contributors Taxonomy
We tested reliability by assessing 31 of the 115 articles that contained contributors lists. We were not blinded to the study hypotheses, but each of us was blinded to the other person’s assessments. We assigned the contributors mutually exclusive codes: “1” indicated that they had contributed to a given category and “0” indicated that they had not contributed.
We report the level of agreement between coders using the κ statistic and Landis and Koch’s guidelines (14): a κ value greater than 0.75 denotes excellent reproducibility, a κ value greater than 0.4 but less than or equal to 0.75 denotes good reproducibility, and a κ value of 0.4 or less denotes marginal reproducibility. We assessed contributions in all broad categories and in subcategories (Appendix) (15). After performing our tests of reliability, we discussed the cases about which we disagreed until we agreed on the final coding.
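For readers unfamiliar with the statistic, Cohen’s κ for two coders’ binary codes, with the Landis and Koch thresholds cited above, can be sketched as follows (illustrative only; the codes shown are hypothetical, not study data).

```python
def cohens_kappa(codes1, codes2):
    """Cohen's kappa for two raters' binary (0/1) codes on the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(codes1) == len(codes2) and codes1
    n = len(codes1)
    observed = sum(a == b for a, b in zip(codes1, codes2)) / n
    p1 = sum(codes1) / n  # rater 1's proportion of "contributed" codes
    p2 = sum(codes2) / n  # rater 2's proportion of "contributed" codes
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

def landis_koch(kappa):
    """Classify reproducibility using the thresholds given in the text."""
    if kappa > 0.75:
        return "excellent"
    if kappa > 0.4:
        return "good"
    return "marginal"

# Hypothetical codes for one category across six contributors
k = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0])  # ≈ 0.667
```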
We tested external validity by comparing original research in The Lancet with original research in BMJ. We determined whether the taxonomy was also applicable to the contributors lists of BMJ, which would indicate that it had a certain degree of external validity (Appendix). With one exception, all descriptions in BMJ were encompassed by the categories of contributions already identified for The Lancet. (The exception was the “guarantor” category, which was used only in BMJ.)
Testing of Hypotheses: Analyses of The Lancet’s Contributors Lists
Correspondence between Persons Who Appeared on the Byline and Their Specific Contributions
It has been suggested that certain positions on the byline correspond to specific jobs (for example, the statistician is listed second) or to especially numerous contributions (for example, the first person does the most). To address these issues, we stratified the contributors in the first, second, third, and last positions into mutually exclusive categories. The person listed last was defined only as “last.” For each position, we identified the mean number of contributions and the contribution categories with the highest percentage of participation. Articles that lacked contributors lists or listed only a corporate entity on the byline were excluded. We examined how actual contributions correlated with the order of names on the byline by performing an analysis of variance and a Student-Newman-Keuls test of the numbers of contributions to detect differences among the byline positions. The latter test is a statistical procedure for performing all pairwise comparisons between several experimental groups (16).
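For illustration, the one-way ANOVA F statistic used to compare numbers of contributions across byline positions can be computed as in this minimal pure-Python sketch; the contribution counts shown are hypothetical, and the follow-up Student-Newman-Keuls pairwise procedure is not reproduced here.

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across several groups (e.g., lists
    of contribution counts for each byline position): the ratio of
    between-group to within-group mean squares."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k = len(groups)                 # number of groups (byline positions)
    n = len(all_vals)               # total number of observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical contribution counts for two byline positions
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4]])
```

In practice a statistics package (for example, scipy.stats.f_oneway) would be used rather than hand-rolled sums.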
Correspondence between Contributors Listed on the Byline and Fulfillment of the Vancouver Group Criteria for Authorship
Since 1985, the Vancouver Group has promulgated criteria (17) that require persons listed as authors to have made all of the following substantial contributions: 1) conceived and designed the paper or analyzed and interpreted the data, 2) drafted the paper or revised it critically for important intellectual content, and 3) approved the final version of the paper before publication (18). The criteria also include the caveat that participation solely in the collection of data, supervision of study activity, or acquisition of funding does not constitute authorship.
In our analyses, we modified the criteria to be much more lenient. We judged that a contribution to any of the following categories would fulfill the first part of the Vancouver Group’s criteria for authorship: conceived of study, designed study, analyzed or interpreted data, performed laboratory analysis, performed statistical analysis, managed or analyzed clinical aspects, or performed field work or epidemiology. We judged that anyone who wrote the paper, wrote part of the paper, or revised the paper fulfilled the second part of the Vancouver Group’s criteria; in addition, we made the generous assumption that 100% of persons listed on the byline had fulfilled the third part of these criteria.
To determine whether contributors on the byline fulfilled our lenient version of the Vancouver Group’s criteria, we ascertained 1) the proportion of all persons listed on the byline who fulfilled these criteria, 2) whether researchers in certain positions on the byline (first, second, third, and last) had higher or lower percentages of fulfillment, and 3) whether researchers who had contributed to the “Vancouver caveat” categories (data collection, supervision, and funding) had higher or lower percentages of fulfillment. We also identified researchers who contributed solely to the “Vancouver caveat” categories.
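The lenient criteria check described above can be expressed as a simple set-membership test (an illustrative sketch, not the authors’ actual procedure; the category strings are paraphrased from the text).

```python
# Categories judged to satisfy part 1 (conception/design/analysis/interpretation)
PART1 = {
    "conceived of study", "designed study", "analyzed or interpreted data",
    "performed laboratory analysis", "performed statistical analysis",
    "managed or analyzed clinical aspects", "performed field work or epidemiology",
}
# Categories judged to satisfy part 2 (writing or critical revision)
PART2 = {"wrote the paper", "wrote part of the paper", "revised the paper"}

def fulfills_lenient_vancouver(categories):
    """Lenient check: at least one PART1 category and at least one PART2
    category; part 3 (final approval) is assumed fulfilled for everyone
    on the byline, as in the text."""
    cats = set(categories)
    return bool(cats & PART1) and bool(cats & PART2)

ok = fulfills_lenient_vancouver({"designed study", "wrote the paper"})
not_ok = fulfills_lenient_vancouver({"collected data"})  # caveat category only
```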
Overlap between the Contributors and Acknowledgments Lists
We hypothesized that researchers were using their own criteria to determine who should be listed on the byline as authors rather than included in the acknowledgments list. Thus, although overlap between the contributors and acknowledgments lists may indicate a continuum of activities between the two groups, such confluence could also signify that groups were making their own idiosyncratic determinations about where to draw the line in determining authorship along this continuum. Differences of opinion about what degree of participation, let alone what activities, constituted “enough” to merit authorship may also explain why some persons on the byline did not fulfill the Vancouver Group criteria. Therefore, to identify overlap, or lack of overlap, between the activities of persons on the contributors and acknowledgments lists, we compared the top 10 categories of these two lists and identified activities that were common to both or unique to either.
Six (5.0%) of the 121 articles did not include a contributors list. The total number of researchers’ names on the bylines was 785, the total number of names in the contributors lists was 852, and the total number of names in the acknowledgments lists was 650. Three (0.4%) of the 785 researchers on the byline were identified as guest authors, according to our lenient definition. Four contributors lists (3.5%) contained additional information; that is, they named contributors not included on the byline.
As seen in Table 1, the data in the byline and the acknowledgments section do not differ meaningfully between articles published before (n=119) and after (n=127) The Lancet implemented the contributors list policy. Furthermore, the column space devoted to the contributors list is very short (although the list is printed in small type).
Development of Taxonomies
We found that the descriptions of contributions were often detailed. These descriptions could, however, be combined into groups under general category headings, such as “designed the study” or “performed clinical analyses or management” (Table 2).
Testing of Taxonomies
Our κ statistics for assignment of contributions to all categories in Table 2 indicated excellent reproducibility (κ range, 0.77 to 0.95), except for “data management,” which had good reproducibility (κ=0.58). Assignment of contributions to all subcategories had excellent or good reproducibility (κ range, 0.43 to 0.90), except for the subcategory of “clinical analysis or management,” which had marginal reproducibility (κ=0.31).
Application of the Taxonomies: Do the Data Have Meaning?
Six articles (5.0%) were excluded from the analyses because they lacked contributors lists or listed only a corporate entity on the byline. All other articles had at least two names listed on the byline and in contributors lists (n=115).
Contributors List: How Major Contributions Correspond to Order on the Byline
The most prominent ambiguous categories of contribution were “performed supervision” (7.7% participation) and “investigator” (3.1% participation), which were defined as ambiguous because the activities and level of involvement were unclear.
Table 2 identifies the top 10 categories of contribution for all researchers on the contributors list. When stratified by byline placement, the most common contribution categories of persons in the first, second, third, and last byline positions closely corresponded to the top 10 categories of the entire pool of contributors (Table 2).
However, the first contributor on the byline was twice as likely as the other contributors to have coordinated the study. The first and last contributors on the byline were also more likely to have written the paper, designed the study, and analyzed the data. In contrast, the second and third contributors were twice as likely to have managed the data and were more likely to have performed clinical, statistical, or laboratory assessments.
Similar patterns can be observed for the subcategories of the major contributions (Table 2, corrected from Table 3, as stated in the original article). For example, the first contributors on the byline had a considerably higher percentage of participation than did the rest of the contributors for the subcategories of having written the paper, designed the study, analyzed the data, and coordinated the study; between the second and third contributors, fulfillment of these subcategories diminished progressively. On the other hand, the third contributor on the byline was more likely than the rest to have fulfilled the subcategories of “collected data” and “performed clinical analysis or management.”
The number of contributions differed for contributors in each byline position (P < 0.01). The mean numbers of contributions for each byline position were as follows: first, 3.23; second, 2.51; third, 2.20; and last, 2.51. Further analysis revealed that the first contributor on the byline made more contributions (P < 0.01); however, statistically significant differences were not found among contributions from persons in the other byline positions.
Fulfillment of the Vancouver Group Criteria for Authorship
Table 3 shows that approximately 44% of all contributors on the byline do not fulfill even our lenient interpretation of the Vancouver Group criteria for authorship. The subgroup of contributors who indicated that they had obtained funding fulfilled the Vancouver Group criteria most often. All other subgroups had comparable participation in the first part of the Vancouver Group criteria (conception, design, analysis, or interpretation). Participation in the second part of the Vancouver Group criteria (writing or critical revision) varied more widely. The fulfillment of all parts ranged from highs of 92.3% for those who obtained funding and 71.3% for researchers in the first byline position to a low of 46.7% for persons listed third on the byline.
Researchers listed on the byline commonly contributed to the “Vancouver caveat” categories. The percentages of participation were 20.1% for data collection, 8.4% for supervision, and 1.7% for funding. However, these were rarely a person’s only contribution. Indeed, participation in data collection seems to be as good a marker for having fulfilled the Vancouver Group criteria as (or a better marker than) second or third position on the byline. Four byline contributors (0.5%) had contributed solely to “data collection,” two (0.3%) had contributed solely to “general supervision,” and none had contributed solely to “funding.”
Overlap with the Acknowledgments Lists
Table 4 reports the top 10 categories of contribution for the acknowledgments list and identifies the 7 categories that appear on the top 10 lists for both the contributors and acknowledgments lists. This overlap between a majority of categories (70%) demonstrates that a continuum in activities exists across the two lists. Indeed, persons on both lists have a percentage of participation of about 20% for the “collected data” category. However, Table 4 also highlights the fact that the acknowledgments lists contained more inexplicit categories (“general contributions” and “names of people only”).
Our findings indicate that the use of contributors lists to disclose contributions to published articles is feasible in a widely circulated general medical journal. By feasible, we mean that researchers were able to describe their contributions in a meaningful and reproducible way, and editors were able to publish them. The contributors lists occupied little page space, which eliminates the concern that publication of contributors lists is awkward or wasteful. Our assessments also determined that contributors lists can generate information that seems relevant.
The analyses of the information in contributors lists showed several limitations to interpretation of the byline. First, order of placement on the byline conveyed limited meaning about the category and number of contributions. Second, only 56% of those persons listed on the byline fulfilled even our lenient version of the Vancouver Group authorship criteria. Finally, the activities of persons named on the byline and those named in the acknowledgments list overlapped considerably.
Placement on the Byline
The order of placement on the byline gave little indication of a person’s specific category of contribution. Rather, the first, second, third, and last byline contributors shared the majority of the most common categories of contribution. Therefore, clear descriptions of the actual work performed can be obtained only from the contributors list.
The number of contributions made by the first contributor is statistically significantly greater than that made by the other contributors. However, it is less clear that subsequent names are arrayed in descending order (or another order) of contribution. Therefore, in the absence of specific descriptions of contributions, readers should avoid interpreting the byline order as a consistent hierarchy indicating degree of contribution. It is clear that the order is meant to convey some meaning, but that meaning is obscure without descriptions of contributions.
One explanation for the lack of distinction made among the contributions of researchers listed in byline positions other than the first may be found in the ways that research groups are organized. The assignment and performance of the tasks necessary for completion of a project may vary both within and between research groups. The contributors lists of some groups indicated that the researchers shared the work almost equally. In contrast, individual contributors in other groups performed tasks that were limited to one area of expertise or varied greatly in the level of involvement.
Data on the meaning of the byline order are scarce (2). In a survey of editors of clinical journals by the American Association for the Advancement of Science (Survey of journals conducted by the American Association for the Advancement of Science. 1995. Internal document), only 7 of 39 editors reported that they understood the meaning of the order of authors in their journals because their journals had policies on the subject. However, these policies differed widely on who should be named first, ranging from the senior researcher to the student and from the person who did the most work to the person previously identified by the official protocol. Similarly, a study by Davies and colleagues (19) found that most of the deans of Canadian medical universities did not have criteria for assessing their faculties’ contributions to published research and that they disagreed about what was meant by a specific position on the byline. Moulopoulos and colleagues (20) could not model the individual contributions of different researchers on the byline; no amount of modeling could capture the details of what the researchers actually did. Shapiro and coworkers (21), in a survey of persons listed first on the byline of 200 articles in 10 leading biomedical journals, found that the contributions of the other persons on the byline, even those placed second and last, were markedly inconsistent; an appreciable number had made negligible contributions to the research. In short, practices vary across investigators and disciplines, and at present, the meaning of the byline order is never made apparent to the reader.
Vancouver Group Criteria
According to the self-reports of contributors, the modified Vancouver Group criteria for authorship were not fulfilled by 44% of the contributors listed on the byline. Hoen and coworkers (22) found that researchers rarely knew the Vancouver Group criteria. Bhopal and colleagues (3) and Rennie and coworkers (2) found that even when researchers did know the criteria, they often disagreed with them, describing the criteria as out of touch with the realities of modern research. After examining the same articles assessed in our study, the editor of The Lancet, Richard Horton, found a similar level of fulfillment of the Vancouver Group criteria: only 50% (23). Contributors to articles thus seem to be applying their own criteria for determining who qualifies for byline recognition.
The Contributors and Acknowledgments Lists
The overlap between activities on the contributors and acknowledgments lists is extensive. This suggests that the contributions of individual persons exist along a continuum of activities and that researchers may be applying idiosyncratic criteria when they draw a line between the “authors” and “acknowledgees” on their teams. Therefore, having two lists only reinforces a distinction that is artificial. We have not found any previous studies that examined the contributions of those listed in the acknowledgments section.
Our findings should be interpreted with caution. They are preliminary findings on disclosures of contribution in 6 months of issues of a single biomedical journal. Our test of external validity of the categories of contribution was limited to assessment of 2 months of issues of BMJ. We have not attempted to determine whether researchers have over- or underreported their contributions, nor do we have data on their opinions, or those of readers, about reporting contributions. We do not have data on the systems of disclosure at other journals or information on whether the categories of contribution would be comparable to those identified for other types of biomedical publications. Finally, we cannot report on whether the implementation of the disclosure policy has influenced the prevalence of instances of authorship disputes or misdeeds. We attempted to minimize our limitations by performing appropriate tests of reliability and external validity.
Our analyses determined that contributors lists can generate information that seems meaningful. In addition, The Lancet has executed the proposal with apparent ease (23; corrected from Survey of journals conducted by the American Association for the Advancement of Science. 1995. Internal document, as stated in the original article). However, our findings will need to be replicated in larger studies that assess other journals and address the limitations noted above. Future research should examine the feasibility, reliability, and internal and external validity of The Lancet’s system of disclosure and of variations of the proposal that are being implemented at other journals.
Our findings indicate that, in general, readers, editors, and members of review committees should not depend on authorship standing or byline order as reliable indicators of the type or degree of work performed. The caveat for this is the case of the first contributor listed on the byline, who does make significantly more contributions. Even in that case, however, the specifics of the activities are not clear without further disclosure.
We have argued elsewhere (2) that researchers should list their contributions to papers and that editors should publish these lists for readers. Our present findings provide some objective support for the feasibility and appropriateness of those recommendations. We therefore offer the following policy recommendations.
First, the Vancouver Group criteria for authorship should be modified to reflect the following principles: Researchers should describe their contributions to papers, and editors should publish these descriptions for readers.
Second, journal articles should have only one list of contributors that names and describes the work of all the researchers who contributed substantively (including those who previously would have been mentioned only in the acknowledgments list). Those who contributed the most work should be named on the byline (2). Use of the acknowledgments section should be restricted to recognition of funders or corporate bodies.
Third, journals (and others) should develop uniform methods of collecting and analyzing data on the different systems that disclose researchers’ contributions to their papers.
Fourth, professional societies should consider identifying categories of contribution appropriate to their discipline and advising journal editors and members of review committees about the meaning and importance of these categories.
Fifth, review committees at academic centers and funding agencies should consider requesting that descriptions of contributions, preferably published, accompany all publications cited by applicants.
Example of a Contributors List
Article citation: Joensuu J, Koskenniemi E, Pang X, Vesikari T. Randomized placebo-controlled trial of rhesus-human reassortant rotavirus vaccine for prevention of severe rotavirus gastroenteritis. Lancet. 1997;350:1205-9.
Names as they appeared on the byline: Jaana Joensuu, Eeva Koskenniemi, Xiao-Li Pang, Timo Vesikari.
Contributors list: Jaana Joensuu was the clinical investigator responsible for the clinical conduct of the study and data collection. Eeva Koskenniemi did the statistical analysis. Xiao-Li Pang was responsible for rotavirus detection and typing. Timo Vesikari designed the protocol with E.T. Zito and P.H. Towinadre (Wyeth Ayerst Research) and wrote the first draft of the paper. (Note that Zito and Towinadre were not listed on the byline.)
Development of Taxonomies of Contribution Categories
We developed the taxonomies of categories of contribution by first reading all of the contributors lists selected for the study. Second, from a subgroup of 25 randomly selected articles, we built a database of all descriptions of contributions on these lists, broken down into units of contribution; original wording was maintained. Third, we formulated general headings for the natural categories these formed. Fourth, once we had this preliminary taxonomy of category headings, we discussed whether the headings succeeded in capturing the meaning of the original descriptions and modified them as necessary. Fifth, we scanned the contributors lists of the remaining articles to determine the accuracy of the taxonomy for these articles, repeatedly revising the taxonomy as new categories or subdivisions of categories emerged. The contributors list taxonomy underwent three stages of revision. Finally, using the revised taxonomy of category headings, we separately scanned the entire pool of descriptions on the contributors lists and determined by mutual agreement that the taxonomy encompassed the full range of activities described. The taxonomy was then considered complete and final. Blinded to each other's determinations, we used the taxonomy to assess the categories of contribution described on the contributors lists. The taxonomy for the acknowledgments lists was developed by the same means.
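In our study, this coding was done entirely by hand, by two raters working blind to each other. Purely as an illustration of the coding step itself, the sketch below maps free-text units of contribution to taxonomy headings with simple keyword rules; the rules and the trimmed-down category names are assumptions for the example, not the study's method.

```python
# Illustrative sketch only: the study coded contributions by hand.
# Keyword rules and category names here are simplified assumptions.

KEYWORD_RULES = [
    ("wrote", "Wrote paper"),
    ("statistical", "Performed statistical analysis"),
    ("designed", "Designed the study"),
    ("collect", "Collected data"),
    ("coordinat", "Coordinated study"),
]

def categorize(description: str) -> list[str]:
    """Return every taxonomy heading whose keyword appears in the text.

    A single description may describe several contributions, so more
    than one heading can match.
    """
    text = description.lower()
    matches = [heading for keyword, heading in KEYWORD_RULES if keyword in text]
    return matches or ["Uncategorized"]

print(categorize("did the statistical analysis"))
# → ['Performed statistical analysis']
print(categorize("designed the protocol and wrote the first draft"))
# → ['Wrote paper', 'Designed the study']
```

Note that the function returns a list rather than a single heading: as in the study, one researcher's description often maps to several categories of contribution.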
The following are summaries of the taxonomies' category headings. For ease of reading, we have paraphrased the headings to make them more concise; the longer, more descriptive headings that we actually used reflected the researchers' own wording. The broadest (highest-level) categories, which encompassed major contributions, are not indented. The narrower subcategories, which encompassed partial contributions and were found for the contributors list only, are indented.
Contributors List Taxonomy
- Conceived the study
- Designed the study
  - Designed part of the study
- Coordinated study
  - Coordinated part of study
- Collected data
  - Performed randomization or matching
  - Collected part of data
  - Performed data collection in a systematic review
  - Performed follow-up data collection
- Managed data
  - Managed part of data
- Performed quality control of data
  - Performed part of quality control of data
- Analyzed or interpreted data
  - Performed part of data analysis or interpretation
- Performed statistical analysis
  - Performed part of statistical analysis
- Performed laboratory analyses
  - Performed part of laboratory analyses
- Performed clinical analysis or management
  - Performed part of clinical analysis or management
- Performed epidemiologic or public health work
  - Performed part of epidemiologic work
- Wrote paper
  - Performed part of writing or editing
- Advised on the study
  - Advised on part of the study
- Secured funding
- Recruited study participants
- Performed previous work that was foundation of current study
- Obtained permissions
- Performed training
- Provided data
- Performed economic analysis
- Performed translations
- Consulted with government bodies
- Titles only included (tasks performed were not detailed)
  - Principal investigator
  - Steering committee member
  - Writing committee
  - Data or safety monitoring committee
  - End point validation committee
  - Center of recruitment
  - Center of statistical coordination
- Wrote original grant application
- Participated in the execution of the study
- Realized the studies that were a part of the paper
- Contributed to general organization
- Was essential to the establishment of a register of the study population
- Was responsible for the ascertainment of cases
- Prepared cohort for screening
- Did section evaluation
- Responsible for office administration
- Prepared data
Acknowledgments List Taxonomy
- Names of people only, without explanation
- Names of institutions only, without explanation
- Funders of study
- Clinical staffpersons or activities (for example, physicians, nurses)
- Laboratory staffpersons or activities (for example, pharmacists)
- Field work staffpersons or activities (for example, epidemiologists)
- Other study implementation staffpersons or activities
- Participants in study
- Clerical staffpersons or activities
- Data collection
- Data provision (for example, persons or institutions that provided data, samples, or patients)
- Drugs or materials provision (for example, manufacturers, institutions, and persons providing nonmonetary support)
- Data management
- Assistance with data analysis or interpretation
- Independent review of data
- Performance of or assistance with statistical analysis
- Reading or review of paper
- Editing of paper
- Action by or consultation with government bodies
- Thanks for permission to do or publish research
- Members of supporting organizations
- Project officer of granting agency
- Government bodies
- Laid foundation for or inspired study
- General help or contribution
- Heads of department
- Cooperation or collaboration
- Assistance in preparing paper
- Steering group or committee
- Data or safety monitoring committee
- Writing committee
Ambiguity in Acknowledgments List Taxonomy
The category headings in the acknowledgments list taxonomy were more vague because the terminology researchers used to describe the contributions of those on the acknowledgments lists was less precise. Even when an attempt was made to be specific about contribution, researchers used general terminology: for example, lists of names accompanied only by such descriptions as "physicians," "nurses," "chemists," and "statisticians." This imprecision prevented us from subdividing category assignments by level of contribution (for example, "major" and "partial").
Testing of the Contributors Taxonomy
Because the researchers themselves chose the words and phrases, we considered their disclosures of contribution to have internal validity.
Test of External Validity: Comparison with BMJ
We tested whether the taxonomy developed from The Lancet's contributors lists was applicable to the contributors lists of BMJ. We analyzed information on contribution in all original research articles ("papers" or "general practice" articles; n = 69) in the January and February 1998 issues of BMJ. Three articles were missing contributors lists. We narrowed our assessments to the contributions of the "first" and "last" byline researchers. We categorized their contributions using The Lancet taxonomy. We also identified for each journal the categories with at least 5% participation and observed whether the journals overlapped in these categories.
We found that, with one exception, all of the descriptions of contributions on BMJ contributors lists were encompassed by a category already identified in the taxonomy developed for The Lancet. The exception was the category "guarantor" (59.1% participation), which was present on BMJ lists alone. Following our recommendation (2), BMJ asked contributors to identify guarantors: persons who had contributed substantially but had also made added efforts to ensure the integrity of the entire project before and after publication. Excluding the "guarantor" category, 8 of BMJ's top 10 categories for the first-byline position were also top 10 categories for the first-byline position in The Lancet: "wrote paper," "designed study," "analyzed or interpreted data," "collected data," "coordinated study," "conceived of study," "performed statistical analysis," and "advised on design or analysis." The 2 Lancet top-10 categories that were not on BMJ's top 10 list, "performed clinical analysis or management" and "performed laboratory analysis," were nonetheless present on BMJ lists, with lower percentages of participation.
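The between-journal comparison above reduces to a set overlap between each journal's top categories. A minimal sketch, assuming hypothetical top-10 lists: the category names follow the taxonomy, but the composition of the two sets is illustrative, not the study's data.

```python
# Compare each journal's top categories for the first-byline position.
# The two sets below are illustrative assumptions, not the study's data.

lancet_top = {
    "wrote paper", "designed study", "analyzed or interpreted data",
    "collected data", "coordinated study", "conceived of study",
    "performed statistical analysis", "advised on design or analysis",
    "performed clinical analysis or management",
    "performed laboratory analysis",
}

bmj_top = {
    "guarantor",  # present on BMJ lists only
    "wrote paper", "designed study", "analyzed or interpreted data",
    "collected data", "coordinated study", "conceived of study",
    "performed statistical analysis", "advised on design or analysis",
    "performed data collection in a systematic review",  # illustrative
}

# Exclude "guarantor", which has no Lancet counterpart, then intersect.
shared = (bmj_top - {"guarantor"}) & lancet_top
print(len(shared))  # → 8 categories shared between the two top-10 lists
```

Set difference and intersection make the "excluding guarantor, 8 of 10 overlap" arithmetic explicit and easy to recompute for other byline positions or thresholds.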
Ms. Yank first conceived of and designed this study; collected, analyzed, and interpreted the data; and wrote the article. Dr. Rennie assisted with refining the concept and design, assisted with data collection, and critically revised the article for important intellectual content.
Requests for Reprints: Drummond Rennie, MD, Institute for Health Policy Studies, University of California, San Francisco, Box 0936, Laurel Heights, San Francisco, CA 94143-0936.
Current Author Addresses: Ms. Yank and Dr. Rennie: Institute for Health Policy Studies, University of California, San Francisco, Box 0936, Laurel Heights, San Francisco, CA 94143-0936.
1. Rennie D, Flanagin A. Authorship! Authorship! Ghosts, guests, grafters, and the two-sided coin [Editorial]. JAMA. 1994;271:469-71.
2. Rennie D, Yank V, Emanuel L. When authorship fails. A proposal to make contributors accountable. JAMA. 1997;278:579-85.
3. Bhopal R, Rankin J, McColl E, Thomas L, Kaner E, Stacy R, et al. The vexed question of authorship: views of researchers in a British medical faculty. BMJ. 1997;314:1009-12.
4. Wilcox LJ. Authorship: the coin of the realm, the source of complaints. JAMA. 1998;280:216-7.
5. Flanagin A, Carey LA, Fontanarosa PB, Phillips SG, Pace BP, Lundberg GD, et al. Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical journals. JAMA. 1998;280:222-4.
6. Godlee F. Definition of “authorship” may be changed. BMJ. 1996;312:1501-2.
7. Lundberg GD, Glass RM. What does authorship mean in a peer-reviewed medical journal? [Editorial] JAMA. 1996;276:75.
8. Rennie D, Yank V. If authors became contributors, everyone would gain, especially the reader [Letter]. Am J Public Health. 1998;88:828-30.
9. Lederberg J. Communication as the root of scientific progress. The Scientist. 8 February 1993:10-4.
10. Proto AV. Radiology-1998 and the future. Radiology. 1998;206:1-2.
11. Northridge M. Annotation: new rules for authorship in the journal: your contributions are recognized-and published! Am J Public Health. 1998;88:733-4.
12. What AJPH authors should know. Am J Public Health. 1998;88:721.
13. Information for authors. Ann Intern Med. 1998;128:I11-6.
14. Tacker M. Authorship discussion gains momentum at CSE retreat. CSE Views. 1998;21:131-2.
15. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-74.
16. Glantz SA, Slinker BK. Primer of Applied Regression and Analysis of Variance. New York: McGraw-Hill; 1990:300-2.
17. Uniform requirements for manuscripts submitted to biomedical journals. International Committee of Medical Journal Editors. Br Med J (Clin Res Ed). 1985;291:722.
18. Uniform requirements for manuscripts submitted to biomedical journals. International Committee of Medical Journal Editors. JAMA. 1993;269:2282-6.
19. Davies HD, Langley JM, Speert DP. Rating authors’ contributions to collaborative research: the PICNIC survey of university departments of pediatrics. Pediatric Investigators’ Collaborative Network on Infections in Canada. CMAJ. 1996;155:877-82.
20. Moulopoulos SD, Sideris DA, Georgilis KA. For debate . . . Individual contributions to multiauthor papers. Br Med J (Clin Res Ed). 1983;287:1608-10.
21. Shapiro DW, Wenger NS, Shapiro MF. The contributions of authors to multiauthored biomedical research papers. JAMA. 1994;271:438-42.
22. Hoen WP, Walvoort HC, Overbeke AJ. What are the factors determining authorship and the order of the authors’ names? A study among authors of the Nederlands Tijdschrift voor Geneeskunde (Dutch Journal of Medicine). JAMA. 1998;280:217-8.
23. Horton R. The unmasked carnival of science. Lancet. 1998;351:688-9.