
Good publishing practice

Thomas F. Lüscher
DOI: http://dx.doi.org/10.1093/eurheartj/ehr506 557-561 First published online: 1 March 2012

This comment refers in part to ‘Relations between professional medical associations and the health-care industry, concerning scientific communication and continuing medical education: a Policy Statement from the European Society of Cardiology’, by ESC Board on page 666 and ‘Conflict of interest policies and disclosure requirements among European Society of Cardiology National Cardiovascular Journals’, by F. Alfonso et al., on page 587

Writing and information

The history of mankind really began with the advent of writing. As information grew in complexity, its representation in writing systems involving letters, numerals, and other markings became a necessity. The system of writing invented by the ancient Sumerians in Mesopotamia some 5000 years ago enabled information to be preserved and accessed and, in turn, forms the cornerstone of our modern information society. Today, the internet permits unrestricted access for everyone to virtually everything we know. The relevance, quality, and trustworthiness of this overwhelming amount of information have thus become a key issue, particularly in the scientific literature.

A changing environment

Two developments are noteworthy in this context: first, the ethos of academic institutions has changed with their increasingly close collaboration with business and industry. The search for truth within the ivory tower is no longer the primary mission of universities, scientists, and physicians; indeed, the commercialization of their knowledge and products has become an additional strategy. In 1980 the US Congress passed the Bayh–Dole Act, which allowed universities to patent discoveries made possible with federal grants.1 Shortly thereafter, the US Supreme Court decided in the case Diamond vs. Chakrabarty that genetically engineered organisms are patentable. Given the declining number of federal grants, such novel opportunities were welcomed by most scientists and physicians, but they obviously introduced unprecedented interests into the scientific process.2 Despite this, European universities eagerly followed the example of their American counterparts.

Secondly, the introduction of randomized clinical trials by Sir Austin Bradford Hill (1897–1991) in 19483 not only changed the treatment of tuberculosis, but even more so the requirements for clinical evidence and, as a consequence, those for the development of drugs and later also devices. Half a century ago, the US Food and Drug Administration (FDA) did not have the authority to require drug manufacturers to demonstrate efficacy and reasonable benefit–risk relationships of novel drugs.4 In 1961, US Senator Estes Kefauver introduced legislation that—in response to the thalidomide scandal that had evolved in Europe5 and Australia6—eventually gave the FDA authority to force pharmaceutical companies to provide efficacy and safety data before the introduction of their products into the market. Over the years, such legislation was adopted by most European registration agencies, including the European Medicines Evaluation Agency (EMEA). This made double-blind, controlled, and randomized clinical trials mandatory for drug development and changed the practice of medicine, particularly in the cardiovascular area. Obviously, such trials could not be performed without a close collaboration of academic scientists and physicians with industry. Although this led to an enormous boost in the quality and amount of clinical research, it again introduced novel interests in academia that increasingly attracted the attention of investigative journalists, politicians, and the public at large. Thus, the quality and trustworthiness of the publication process and its products have come into question recently—and this requires a credible response.

Publishing is communicating

Publishing makes scientific writing available—only findings that are published exist. From the ancient Greeks to modern science, discoveries have been written down in books and papers for the use of current and future readers. Over the centuries and particularly over the last decades, the number of scientific journals and publications has increased tremendously; indeed, in 2010 well over a million scientific papers were published in close to 40 000 journals worldwide, and it is anticipated that with the rise of China and India as scientific nations, this number will increase further. Of note, most of the scientists who have ever lived are working today.

With the abundance of scientific information, readers are faced with two major problems. (i) How can we ever acquire and digest what is published in books, journals, and the internet? (ii) How can we assess the quality of the information? The internet has changed the way we access information dramatically. No longer do we have to spend hours, days, and weeks in libraries to obtain the information we require; rather, search engines help us to find what we are looking for within seconds. Comprehensive summaries and abstracts of papers and books have further facilitated searching and reading. Furthermore, peer-reviewed journals regularly provide expert reviews on relevant topics and highly cited guidelines for use in clinical practice.

The second problem, however, remains: what is the quality of the information we get? Is the question raised relevant? Are the methods used appropriate? Is the statistical analysis correct? Have the data presented really been obtained? And lastly, are the results relevant for clinical application?

The peer review system

The peer review system was introduced to help readers obtain the best scientific information. Who is a peer? A peer is a person of the same civil rank or standing, an equal before the law, as The American College Dictionary puts it.7 In science and medicine, this would be a colleague of similar experience and knowledge; in short, an expert in the field. It is said that Henry Oldenburg (1618–1677), the long-time secretary of the Royal Society,8 introduced the system when he was editor of the Philosophical Transactions. As a theologian, he did not feel competent enough to judge all the papers submitted and thus relied on the judgement of colleagues from other fields. Ever since, the system has been adopted by most journals in science as well as in medicine.

Currently, most journals use two to five reviewers for submitted papers. Editors, with the help of such experts, base their decisions on distinct criteria such as originality, importance, appropriateness of the methods and statistics used, as well as the quality of illustrations and the discussion. Commonly, the reviewers remain anonymous to the submitting authors. Although the appropriateness of the anonymity of the peer reviewers has been debated, it certainly does ensure a more open assessment and rating of manuscripts and it avoids future personal conflicts between authors and reviewers.

Less than perfect

While the peer review system is widely used, it has remained controversial. Indeed, what can be said about the peer review system has been said by Churchill about our political system: ‘Democracy is the worst form of government, except for all those other forms that have been tried from time to time.’9 Every editor receives letters from authors who are disgruntled about the quality of the reviewers assigned to their rejected manuscript. Further, peers—and for that matter editors—may not always be as qualified as they should be, they may not always be as thorough as they are supposed to be, and they are not necessarily free of conflicts of interest of a personal, scientific, or financial nature. While we are beyond the times when the Inquisition of the Catholic and other Churches mistreated or burned eminent scientists such as Galileo Galilei, Giordano Bruno, or Michael Servetus,10 even recent history is full of examples where prejudices, rivalry, and jealousy have delayed or hindered scientific discoveries, for instance the one made by Werner Forssmann.11 Furthermore, strong believers in certain concepts—in paradigms, as Thomas S. Kuhn would put it12—can still make it difficult to accept novel findings. It appears, however, that in times of ‘normal science’, the peer review system works reasonably well, particularly if several experts are invited to assess the value of the work submitted. Importantly, if personal or financial conflicts exist, reviewers should decline an invitation to review. Many journals, among them the European Heart Journal and its affiliated journals, have therefore included such a statement in their invitation letter to potential reviewers.

The position of the European Society of Cardiology

The White Paper of the European Society of Cardiology (ESC) published in this issue13 discusses these issues carefully with a particular focus on conflicts of interest. The document states that ‘All manuscripts must be subject to anonymous, independent peer review. There should be independent statistical review of every accepted manuscript. Members of the editorial board and reviewers should decline any invitation to edit or review any manuscripts relating to topics, drugs, or devices in which they have significant commercial or academic interests.’ Thus, scientific journals are expected to implement a strict policy to meet such requirements. Only under such circumstances can published work fulfil the criteria of ‘certified knowledge’.

In another paper also published in this issue of the European Heart Journal, the editors of the national journals of the ESC report on a survey analysing such policies in cardiology journals in Europe.14 They found that most journals do indeed have such a policy, particularly for the disclosure of financial conflicts, but only 57% published the disclosures of all authors. Although most journals ask potential reviewers in their invitation letter to decline the request in the case of conflicts, it remains difficult for the editors involved to exclude potential bias of their reviewers—they must rely on their experience and on trust.15

What is a conflict?

After all, what really is a conflict? The word ‘conflict’, derived from the Latin word confligere, means to come into collision, to clash. In a conflict as discussed here, two sorts of interests collide: scientific integrity and the desire for personal or financial success. Such desires may be conscious or not, but may lead to biases. The term ‘bias’ means tending or leaning towards a particular outcome.16 Obviously, numerous biases can arise in the scientific process and in publishing. First and foremost, authors may want to prove pre-conceived notions and to advance their careers. Indeed, although Sir Karl Popper saw the scientific process evolving between conjectures and refutations,17 scientists truly strive to prove rather than falsify their own notions and hypotheses. This behaviour reflects the basic motivation of researchers as well as the incentives of the academic reward system. This will not pose a problem as long as editors and their peers ensure that the enthusiasm of authors for their findings is supported by appropriately obtained and analysed data and that the results and conclusions are discussed in a balanced manner. Although to the best of our knowledge no statistics exist on the degree of concordance or discordance of reviewers, it appears that in most instances they agree to a large extent on the quality of a submitted manuscript.

Recently, new conflicts have arisen: in some instances, authors may want to gain direct or indirect financial benefit from their research. An increasing number of findings leads to patents, and obviously such results are intended to translate into marketable products. Under such circumstances, it cannot be excluded that the question raised and the patient set chosen are intended to optimize the results, that unfavourable data are not presented, and that the interpretation and conclusions are overly optimistic. Since financial ties are not visible, full disclosure of such relationships must be required, as again outlined by the ESC White Paper. Although the availability of such information by itself does not solve the problem, it will help the editors and reviewers to ask the right questions and request appropriate revisions of the manuscript. Furthermore, it helps the reader to assess critically potential biases in the published paper—possibly with the help of a balanced editorial.

Obviously, publishers and editors are also prone to conflicts. First and foremost, publishers want to sell their product and editors want their journal to be successful. Large trials that involve a substantial purchase of reprints by the sponsor are just as attractive as juicy findings that are taken up by the lay press and media and make the journal visible. Certainly, a 1998 article linking measles, mumps, and rubella vaccine to autism was such a story; it was accepted although the number of patients was admittedly small and the potential implications were huge.18 The paper received enormous press coverage—and was later found to be based on fraud (see below). Editors must be aware of such potential biases in their decision process and should not allow themselves to be seduced by media attention and potential sources of income. To that end, they must stand firm against pressures from industry, if required.19

Reverse conflicts

Lastly, the most stringent critics of conflicts may themselves have a conflict. Indeed, the attention that critics of industry-sponsored studies receive in the media may introduce a bias as well, since they are usually heavily cited in articles, interviews, and TV features. In addition, not all of such criticisms have stood the test of time. In 1995, Circulation published a meta-analysis in which the authors claimed to demonstrate a dose-dependent increase in myocardial infarction with calcium antagonists, a risk that had apparently been denied by industry.20 The methodology was criticized, particularly the part related to dose–response, which is why the journal invited three editorialists to comment. Later, others also suggested biases in studies on calcium antagonists sponsored by industry.21 In the large ALLHAT trial, however, sponsored by the NIH rather than by industry, Curt Furberg and colleagues could not confirm their initial findings, since amlodipine fared very well even when compared with an angiotensin-converting enzyme (ACE) inhibitor,22 as other calcium antagonists did in subsequent controlled trials.23 Thus, we do not get closer to the truth simply by uncovering conflicts—in the end it is the reproducibility of scientific findings, the unbiased search for truth, that is essential, and not the discussion of conflicts alone.

The worst

It should be stressed, however, that conflicts are distinct from fraud; indeed, although authors with conflicts of interest report the data as they are, the conclusions they reach may not be fully supported by the data and the implications they foresee may be overstretched. Fraud, on the other hand, involves partial omission or fabrication of data. Unfortunately, while the ethos of science assumes a disinterested and honest pursuit of truth, not all members of the community comply with such a requirement. Indeed, just recently the UK's General Medical Council found Andrew Wakefield from University College London guilty of dishonesty and serious scientific misconduct.24 It was found that his now retracted paper published in the Lancet18 in 1998, linking measles, mumps, and rubella vaccine with autism and bowel disease, was ‘elaborate fraud’. Similarly, the Dutch psychologist Diederik Stapel falsified data in numerous studies. He managed to conceal the fraud for so long through elaborate methods: he fabricated entire data sets that he claimed to have obtained from collaborating scientists, keeping even his own post-docs unaware.25 In November 2011, Erasmus Medical Center in Rotterdam fired Don Poldermans, a well-known cardiovascular clinical researcher, for violations of academic integrity.26 In a statement, the Erasmus Medical Center said that Poldermans was careless in obtaining his results and used fictitious data to prop up his findings.

What can editors and their peers do about this? Obviously, they cannot prevent fraud from being published; indeed, particularly the most sophisticated fraud is hard to uncover, as outlined above. Nevertheless, editors should be cautious in accepting small sets of data, just because of their appeal, the current fashion of science, and potential for press coverage. Furthermore, reviewers should be aware of such behaviour, particularly when data appear too good to be true.

Editors' expectations

What do editors expect from authors? First of all, papers must be submitted according to the instructions for authors. It is unfortunate that journals do not appear to be able to agree on a common format, but, with close to 40 000 journals worldwide, this is understandable. Most journals reject up to half of the submissions without review. Thus, to pass the first hurdle, the message must be crisp and clear, and this is best achieved in the abstract, i.e. that part of the paper that is read first. In another paper soon to be published in a forthcoming issue of the European Heart Journal, Winnik and colleagues have analysed abstracts submitted to the ESC Annual Congress.27 Of note, abstracts on basic science, those reporting on >100 patients, and particularly those describing the results of randomized trials were most likely to be accepted for the programme. The fate of such abstracts was then further tracked in PubMed, and it appeared that again studies on larger cohorts and randomized trials were most likely to be published and well cited. Thus, editors and peers expect relevant studies with an appropriate number of patients and an optimal design. In particular, the findings must be novel to achieve sufficient priority for publication. Only published data exist and only the first set of them is truly innovative—thus, one has to be first. A tragic example of being late in publishing is the work of Rosalind Franklin, who had already performed seminal experiments on the structure of DNA in 1951.28 She showed her unpublished crystallographic pictures of the helical structure of DNA to James Watson when he visited her lab on 30 January 1953 and thereby gave him an essential hint for the groundbreaking publication in Nature on the molecular structure of nucleic acids together with Francis Crick.29 At the end of their paper they acknowledged: ‘We also have been stimulated by … the unpublished experimental results and ideas of … Dr. R. E. Franklin …’. In 1962, however, when Watson and Crick received the Nobel Prize together with Maurice Wilkins, the name of the dark lady of DNA was missing.

This leads to the question: who is an author? An author should have made substantial intellectual contributions, specifically to the conception and design, data acquisition or analysis and interpretation, and drafting and/or writing of the study. Of note, authors should also be able to take responsibility for part of or the entire study and its content. It is obvious that particularly in clinical research there is an abuse of authorship, and the International Committee of Medical Journal Editors (ICMJE) therefore recommended that the contributions of each author be outlined in the submission letter. Further, authors should confirm that their work has not been published elsewhere in whole or in part, in order to avoid duplicate publication, which is clearly scientific misconduct.

For the integrity of clinical trials it is of utmost importance that the initial hypothesis and design as well as the anticipated statistical analysis are defined from the very beginning. To ensure this, the ICMJE recommended that trials should be registered30 and a design paper be published before the results are analysed.

Conclusions

Thus, good scientific publishing is more demanding than it used to be. Since ancient times, however, the basic principles of science, i.e. precision and honesty, remain the most important. Additional requirements must now be considered before submitting a manuscript (Table 1). The peer-review process can ensure optimal quality of published manuscripts, provided editors and their peers perform a rapid and fair assessment of the submitted work. While not perfect, this has clearly improved the level of research considerably. We could certainly try to do even better, but, while doing so, we should not forget Salvador Dalí's words: ‘Don't go for perfection, you will never reach it!’—and he certainly knew what he was talking about.

Table 1

Good scientific publishing: summary of requirements

Ethos of science
  • Transparency: Describe how you obtained the data and how you were funded
  • Honesty: Only describe what you have observed
  • Trustworthiness: Confirm in your submission letter that the results of your manuscript have not been published previously or elsewhere, that they were obtained after approval by the local ethics or animal committee, and that all authors have approved the final version of the manuscript
  • Registration: Register your trial at www.clinicaltrials.gov or any other database. Provide a design publication for clinical trials

Structure of the paper
  • Authors: Only list those who have significantly contributed and gave their written approval before submission. Indicate the individual contributions of each author in the submission letter
  • Abstract: Summarize the most important findings and the conclusions drawn from them
  • Methods: Describe precisely how you obtained the data and/or recruited patients, and what measurement techniques and statistics you used
  • Results: Only report results that you have obtained (in relative and absolute values) and that have not been previously published. Use state-of-the-art statistics to analyse your results and use figures with appropriate scales
  • Discussion: Discuss the main findings and then every aspect of the study. Give credit to and reference the previous work of others
  • Acknowledgements: List those who helped but have not significantly contributed to the study. List financial support by institutions and industry
  • Conflict of interest statement: Report any financial conflicts related to the manuscript for all authors individually
  • References: Give credit to those who previously worked in the area by appropriate referencing

Footnotes

  • The opinions expressed in this article are not necessarily those of the Editors of the European Heart Journal or of the European Society of Cardiology.

References