
Research Ethics for Scientists

C. Neal Stewart

Description

Research Ethics for Scientists is about best practices in all the major areas of research management and practice that are common to scientific researchers, especially those in academia. Aimed towards the younger scientist, the book critically examines the key areas that continue to plague even experienced and well-meaning science professionals. For ease of use, the book is arranged in functional themes and units that every scientist recognizes as crucial for sustained success in science: ideas, people, data, publications and funding. These key themes will help to highlight the elements of successful and ethical research, as well as challenging readers to develop their own ideas of how to conduct themselves within their work.

* Tackles the ethical issues of being a scientist rather than the ethical questions raised by science itself
* Case studies used for a practical approach
* Written by an experienced researcher and PhD mentor
* Accessible, user-friendly advice
* Indispensable companion for students and young scientists


Page count: 457

Publication year: 2011




Contents

Cover

Title Page

Copyright

Preface

Acknowledgements and Dedication

Chapter 1: Research Ethics: The Best Ethical Practices Produce the Best Science

Judge yourself

Morality vs ethics

Inauspicious beginnings

How science works

Summary

Judge yourself redux

Chapter 2: How Corrupt is Science?

Judge yourself

“Scientists behaving badly”

Do scientists behave worse with experience?

Judge yourself

Crime and punishment

Judge yourself

Judge yourself redux

Judge yourself redux

Judge yourself redux

Summary

Chapter 3: Plagiarise and Perish

Ideas

Sentences

Phrases

A hoppy example

What is plagiarism, really?

Judge yourself

How many consecutive identical and uncited words constitute plagiarism?

Self-plagiarism and recycling

Judge yourself

Judge yourself

Tools to discover plagiarism

Self-plagiarism and ethics revisited

Judge yourself

Is plagiarism getting worse?

The case of the plagiarising graduate student

Judge yourself redux

Judge yourself redux

Judge yourself redux

Summary

Chapter 4: Finding the Perfect Mentor

Caveat

Choosing a mentor

Judge yourself

Choosing a graduate project

Judge yourself

Mentors for assistant professors

How to train your mentor

Choosing the right research project: the new graduate student's dilemma

Judge yourself redux

Judge yourself redux

Summary

Chapter 5: Becoming the Perfect Mentor

Grants and contracts are a prerequisite to productive science

Judge yourself

Publications are the fruit of research

On a personal level

Judge yourself

Common and predictable mistakes scientists make at key stages in their training and careers, and how being a good mentor can make improvements

Judge yourself redux

Judge yourself redux

Summary

Chapter 6: Research Misconduct: Fabricating Data

Why cheat?

Judge yourself

The case of Jan Hendrik Schön, “Plastic Fantastic”

The case of Woo-Suk Hwang: dog cloner, data fabricator

Judge yourself

Detection of image and data misrepresentation

Judge yourself

Neither here nor there – the curious case of Homme Hellinga

Judge yourself

Lessons learnt

Judge yourself redux

Judge yourself redux

Judge yourself redux

Summary

Chapter 7: Research Misconduct: Falsification and Whistleblowing

A “can of worms” indeed: the case of Elizabeth “Betsy” Goodwin

Judge yourself

Judge yourself

Judge yourself

Judge yourself

Deal with ethical quandaries informally if possible

Judge yourself

Cultivating a culture of openness, integrity, and accountability

Judge yourself redux

Judge yourself redux

Judge yourself redux

Judge yourself redux

Judge yourself redux

Summary

Chapter 8: Authorship: Who's an Author on a Scientific Paper and Why

The importance of the scientific publication

Judge yourself

Who should be listed as an author on a scientific paper?

Judge yourself

How to avoid author quandaries

Authorship for works other than research papers

The difference between authorship on scientific papers and inventorship on patents

Other thoughts on authorship and publications

Judge yourself

Judge yourself redux

Judge yourself redux

Judge yourself redux

Summary

Chapter 9: Grant Proposals: Ethics and Success Intertwined

Why funding is crucial

Judge yourself

Path to success in funding

Fair play and collaboration

Judge yourself

Judge yourself

Recordkeeping and fiscal responsibility

Pushing the limits on proposals

Judge yourself redux

Judge yourself redux

Judge yourself redux

Summary

Chapter 10: Peer Review and The Ethics of Privileged Information

The history of peer review

The nature of journals and the purpose of peer review

Which papers to review?

Anonymity

Judge yourself

Grant proposals

Confidentiality and privileged information

Reviewers

Judge yourself

Judge yourself redux

Judge yourself redux

Summary

Chapter 11: Data and Data Management: The Ethics of Data

Stewardship of data

Judge yourself

Judge yourself

Judge yourself

The land of in-between: ethics of data presented at professional meetings

Judge yourself

Future of data management

Judge yourself redux

Judge yourself redux

Judge yourself redux

Judge yourself redux

Summary

Chapter 12: Conflicts of Interest

The dynamic landscape of conflicts of interest

Potential conflicts of interest for university scientists

Judge yourself

Conflicts of interest within labs or universities

Judge yourself

Judge yourself redux

Judge yourself redux

Summary

Chapter 13: What Kind of Research Science World Do We Want?

“A culture of discipline and an ethic of entrepreneurship”

Judge yourself

Too much pressure?

Integrity awareness through ethics education

Accountability

We scientists

Judge yourself redux

Summary

Appendix

References

Index

This edition first published 2011

© 2011 by John Wiley & Sons, Ltd

Wiley-Blackwell is an imprint of John Wiley & Sons, formed by the merger of Wiley’s global Scientific, Technical and Medical business with Blackwell Publishing.

Registered office: John Wiley & Sons, Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial offices: 9600 Garsington Road, Oxford, OX4 2DQ, UK; The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK; 111 River Street, Hoboken, NJ 07030-5774, USA

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

The right of the author to be identified as the author of this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Stewart, C. Neal Jr.
Research ethics for scientists : a companion for students / Neal Stewart.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-470-74564-9 (pbk.)
1. Research–Moral and ethical aspects. 2. Scientists–Professional ethics. I. Title.
Q180.55.M67S76 2011
174′.95–dc23 2011016038

A catalogue record for this book is available from the British Library.

This book is published in the following electronic formats: ePDF 9781119978879; Wiley Online Library 9781119978862; ePub 9781119979869; Mobi 9781119979876

First Impression 2011

Preface

My initial involvement with research ethics was quite accidental (to me) and commenced just as I began my own PhD programme as a student. I was selected by the Associate Dean of the graduate school to be the Chief Justice of my university’s graduate honour system. To this day, I still don’t understand how that all happened, but now I realise the huge effect it subsequently had on my career. Unbeknownst to me at that time, it paved the way for this book some 20 years later. As Chief Justice, my duties were to help investigate and hear cases of plagiarism, research misconduct, and cheating in courses by graduate students – my peers. I still recall my major professor’s response when I asked him what he thought about my taking the job. “If you don’t mind judging your fellow students…” In other words, I don’t think he believed it was such a good idea. I wasn’t altogether convinced about this new gig either – I thought it had the potential to be a significant diversion from the research I needed to do to graduate. Plus, truly, what scientist wants to judge the allegedly bad practices of his peers in research? This, I find, is a common feeling among scientists. Few scientists are comfortable policing the conduct of other scientists.

The Graduate Honour System cases of alleged student misconduct were heard and decided by a panel of faculty members and graduate students. I simply presided over the proceedings and administered the system. If a guilty verdict was reached, then a penalty would be prescribed, and I was the guy to tell the accused of their fates. These penalties ranged from probation to dismissal. After the hearings I walked downstairs from the hearing room into the ersatz waiting room and personally delivered the good or bad news to the graduate student; always an anxious moment. This simple bearing of good or bad news showed me in a profound way that there is a face and a heart behind every case of scientific misconduct.

Hearing these cases over three years opened my eyes to a world of bad behaviour in science (and most of the cases we heard were in fields of science or engineering) that I hadn’t realised even existed. It also helped me understand some of the psychology and pressures that precipitated academic misconduct. That experience helped steer my own career clear of major potholes and fatal wrecks alike. Oh, I still made my share of mistakes, but none were fatal. I had simply been given the rare chance to learn from lots of other people’s mistakes. And I think I could have steered clear of a few more of my wanderings had I read a book such as this one and/or sat through a one-hour graduate course on research ethics. I’ll make my own confessions throughout the book, and we will examine real and fictional case studies that should be fuel for thought as scientists wind their way through their careers.

With my PhD in hand and the busy day-to-day tasks of running a lab and teaching, the days of my ethical “trials” were a distant memory. Real-life research integrity didn’t hit home until just a few years ago, when I was the “victim” of plagiarism. I vividly recall reading my own words in another person’s paper and thinking, “this looks familiar – and the writing’s not so hot.” A student’s plagiarism of my own work inspired me to pursue ethics anew in the form of co-teaching a graduate course on practical research integrity. This book then naturally arose from my teaching experiences, and from the fact that when my colleague and I searched for a book or material to help teach our graduate-level research ethics course, we learned that there is a plethora of works on bioethics but far fewer that address research ethics. As a practicing biologist, I don’t consider this book to be a scholarly treatise in ethics; it is written to practically address common problem issues in scientific research with narrative and case studies. I wrote it as a guidebook of sorts – both for undergraduate students contemplating a life in science and for those graduate students and early career scientists who find themselves in the thick of it. In the end, the book turned out to be more autobiographical than I’d set out for it to be. That said, all opinions are my own and all names I use in the fabricated case studies are also fabricated. Any resemblance to real people is purely accidental.

I am thoroughly convinced that the best ethical practices lead to the best science. Granting agencies such as the National Institutes of Health (NIH) and the National Science Foundation in the US must agree, as they require research integrity training of their awardees. I think it is simply a matter of time before all US funding agencies follow suit. I see more and more scientists now motivated to teach courses in research ethics to address these needs. Aside from mandates set by funding agencies, there seems to be a growing number of colloquia, informal meetings and workshops on research ethics being held. This is a welcome trend to proactively address real concerns in a complicated research world. Research integrity is for everybody!

Knoxville, TN, USA
March 2011

Acknowledgements and Dedication

Many people have shaped my life and career and have therefore shaped this book. I’m greatly indebted to my scientific mentors who took a chance on me as a trainee: Erik Nilsen and Wayne Parrott. To each I was unproven and a significant risk, but they saw past the risks to the potential. They are both superb mentors. I’m also indebted to my own trainees. My career was born and sustained because of these tireless and dedicated individuals who work the pipettors so I can attempt to make contributions in other ways. In addition, they and others have taught me innumerable and valuable lessons about best practices in science research.

I’m grateful to people who have joined me in teaching research ethics, especially Lannett Edwards, who co-founded our course four years ago. Charlie Kwit and Lana Zivanovic joined me in teaching research ethics last year, and graduate students H.S. Moon and Blake Joyce were teaching assistants and acted as peer teachers in the 2009 version of the course. In 2010, Mark Radosevich joined Lana in teaching the course, and graduate students Charleson Poovaiah and Jonathan Willis have acted as teaching assistants. Without EPSN funding for partial graduate teaching assistantships for these four students, we would not have had their input in our course, and we’d all have been poorer for it. I’m also grateful for the help provided by graduate student Kim Nagle during the class. I’ve learned a lot about ethics from teaching with all of these capable individuals.

I also include Gary Comstock in this list of key people to thank. When I first got interested in teaching research ethics, I was fortunate enough to call Gary to get his advice on the subject and attend one of his research ethics workshops. His vision and input were critical to what the course, and ultimately this book, became. He is a real professional in this field and is one of its leaders.

So many people helped on the book manuscript by rendering figures, organisation, proofreading, editing, and getting permissions, among other things. At the top of this list are our group’s able administrative specialists, Michelle Hassler and Jennifer Young Hines, who did much of the administrative work (and there was lots of it!) for the book. Reggie Millwood, Blake Joyce, H.S. Moon, Mitra Mazarei, Virginia Sykes, Dave Mann, Muthukumar Balasubramaniam, Jonathan Willis, Jason Abercrombie and other people in my lab played critical roles in contributing and fine tuning the contributions.

Thanks to Bob Langer and Daniel Anderson for allowing me to interview them on mentorship. Bob, especially, has personally inspired me to become a better mentor. Unbeknownst to him, he was also the inspiration for me to allow my lab to continue its growth beyond my self-imposed and arbitrary cap.

Thanks to Izzy Canning, Fiona Woods, Rachel Wade, and all the great people at Wiley-Blackwell in Chichester for the encouragement and guidance throughout this project. They were both kind and firm in their guidance, i.e., the perfect editorial staff. I also sincerely appreciate the time that the many peer-reviewers took to critique the various stages of the manuscript. I typically did not look forward to receiving the reviews, and was then not very happy with much of what they suggested, especially in the early stages, but in retrospect, their advice was critically important to the quality of the book. I owe a debt of gratitude to both the editorial team and the reviewers.

This book is dedicated to my first and best mentor, Charles Neal Stewart, Sr. (1930–2010), who looked forward to seeing this book in print. I talked about this project with him on a regular basis during its development and he encouraged me to see it to completion. Thanks Dad.

To God be the glory.

Chapter 1

Research Ethics: The Best Ethical Practices Produce the Best Science

ABOUT THIS CHAPTER

Research science is becoming increasingly complex and riddled with pitfalls and temptations.
Global competition and cooperation will likely change the face of science in the future.
Science is an iterative loop of ideas, funding, data, and publication, in turn leading back to more ideas.
Ethics can be a guide toward best practices.
Best scientific practices lead to the best science results and discoveries.
Best practices and mentorship produce the best scientists.

It seems that it is increasingly difficult to be a research scientist. The number and complexity of rules, electronic forms, journals and publishing, and government and university regulations are ever-growing. The competition for funding is often ruthless, and the criteria exacted to warrant publication in good journals also seem to be on the rise. Indeed, not just the pressure to publish, but the pressure to publish the “right” papers in the “right” journals is also increasing. The preparation of proposals and publications has ostensibly been made simpler by computer technology, yet computers have also enabled both real and faux research productivity. Technology is a double-edged sword: it enables high levels of knowledge creation and dissemination, but it also enables research fraud and shoddy science. Thus, ethical dilemmas seem to be appearing at an increasingly rapid pace, with research misconduct regularly being the subject of news articles in Science, Nature, and The Scientist. I wouldn’t be surprised if these scientific periodicals hire ethics reporters who specialise in reporting misbehaviour. Even people who don’t keep up with science news are familiar with the term “cold fusion” and the infamous stem cell cloning and data fabrication case from South Korea. While the most notorious cases of misconduct have occurred in higher-profile fields of science, such as physics and biomedicine, it is clear that no area of science is immune to unethical behaviour (Angell 2001; Judson 2004).

We live in a “multiscience” world. Multitasking, multidisciplinary work and multi-authored works, to name a few, are ingrained in the fabric of science culture, and certainly multi-multi is expected in order to succeed and move up the scientific ranks. The isolated small laboratory with the lone professor and few staff (see Weaver 1948 for a perspective) has given way to larger labs interacting in complex collaborations in interdisciplinary science. Complex relationships are accompanied by tough decisions regarding authorship, dicing the funding pie, and how to treat privileged data. And immense amounts of data at that, which are shared (or not) and curated in useful and meaningful ways (or not). In all this mix, the temptation to cheat, cut corners, and misbehave seems to be at its zenith for scientists wishing to compete at the highest levels of science, striving to get tenure and become rich and famous. Of course, one alternative to honest competition and competence, as seems to be the case for some scientists, is to con their way to the top. Cheating is front-page news in business, politics and sports sections alike. Perhaps a bigger problem than outright fraud is cutting ethical corners. Thus, we have an apparent paradox – the antithesis of this chapter title – that the best (or highly rewarded) science is compromised by seemingly endless ethical issues. Whereas the lone professor and his or her graduate student worked in simpler and more linear paths in the past, modern science seems far too convoluted for its own good (Munck 1997). How can we win? How can sound science prevail in the face of all the obstacles?

If the situation is not complicated enough, it seems that there is growing concern about the abuse of graduate students and postdocs by their mentors. Some senior scientists feel that coercion, micromanagement and general overbearance toward their trainees are an effective means to ensure high productivity. While research misconduct garners headlines, causing all sorts of angst among university administrators, it might be the case that defective mentorship is actually a much weightier problem than outright cheating (Shamoo and Resnik 2003). But is it possible that these two problems could be interconnected (Anderson et al. 1997)? Mentorship is a current hot topic in science that has spawned cottage industries, self-help books and strategising among faculty members and university administrators alike. Everyone knows that finding good mentors is crucial for the young (and sometimes not-so-young) scientist wishing to be propelled into a sustainable career in the academic world of research and teaching or the private sector of research. Mentors share the unwritten rules of science. Mentors explain how these rules are intermeshed with research ethics and advise on best practices. Mentors help their students and postdoctoral trainees fulfil their dreams (should their dreams involve being a scientist). Bad mentors can shatter dreams and stagnate their trainees’ careers. But perhaps even the best mentoring is not effective in deterring certain research misconduct.

Research misconduct is a major threat to science. As much as some scientists wish to point fingers at politicians and the public as the principal bad players responsible for the lack of appreciation and funding that science deserves, I think the real enemy is within our own ranks. Indeed, Brian Martin (1992) maintains that modern science, the “power structure of science,” is to blame for much misrepresentation in research. Essentially, scientists are not allowed to “tell it like it is” and must tell publishable stories (he refers to these stories as “myths”). Research misconduct is insidiously damaging to the credibility of science and scientists in society since it erodes trust – not only trust in the individual researchers but in the system of science itself. Self-patrolling the profession from within is needed to reverse this damaging trend; the major pinch points for detecting research misconduct are at the levels of grant applications and manuscript review.

The ethical dilemmas in data collection, collaboration, publication and granting are likely to become even more complex and vexing in the future. More than ever, graduate students and postdocs must master more techniques, technologies and concepts in order to become and stay competitive in science. At the same time, young scientists must generate good ideas and raise increasingly scarce funds to make their research a reality. Global competition from scientists in developing countries, especially in Asia, is a new fact of life for researchers in the West, who were formerly accustomed to the deck being stacked in their favour. At the same time, researchers in China, India, the Middle East, and other rapidly developing countries are enjoying increased levels of new funding. These new resources are coupled with even higher government and institutional expectations not only for results and publications, but for groundbreaking results published in the most prestigious journals (e.g., Qiu 2010). From East to West, being a practicing scientist is certainly not getting any easier.

I don't wish to paint a picture of doom and gloom, however. Honestly, I can think of no more exciting time to be a scientific researcher than today with the booming innovations and opportunities to be found around every corner. We can also innovate and connect with other scientists and stakeholders across the globe in nearly instantaneous fashion these days. Certainly, the positive science news outweighs the negative news and its complications, but there is great consensus among scientists and others that the broken parts are in need of attention and fixing (Titus et al. 2008).

About four years ago, a colleague and I became convinced, for all of the above reasons (as well as others discussed later in this chapter), that a new course on research ethics needed to be taught to graduate students at my university; thus necessity spawned my new foray into ethics. After a couple of years teaching our new graduate course, which met for one hour one day per week for 14 weeks, I decided that a book of this sort could be helpful to support the course (see Appendix for our syllabus), but also as a general help to young scientists just starting their research careers and to undergraduate students contemplating a career in scientific research. This book could be viewed as part guidebook, part virtual mentor, and part friendly polemic that should be helpful in addressing pragmatic problems that all research scientists experience. While virtual mentoring was part of my motivation, substituting any book for finding a real mentor would be a mistake, which is one main reason a couple of chapters on mentorship are included. This book is on research ethics – a users’ guide to success in science by following the rules that scientists largely agree are requisites for success. This book will not focus on greater issues of morality or bioethics – these are vastly different topics than the one we’re embarking on here. In addition, many, if not all, of the chapters in this book are subjects in their own right, falling within the deep expertise of researchers in the social sciences, philosophy and education.

And with that, I'll state up front that I don't have all the answers. I think I do ask most of the pertinent questions, but like most things in life, asking the questions is a good bit easier than answering them. One of my main goals in asking the questions is to enable readers to judge themselves with regard to best practices. When I started in science, I expected that there would be one clearly illuminated right way to do experiments, analyse the data and write up the paper. It didn't take long to learn that this was not the case, and indeed, I judged myself then and still frequently do now. Science is very creative and individualistic. There are many ways to answer scientific questions, and many ways also to go wrong. That is not to say that we can't learn from our mistakes, or at least avoid dooming ourselves to repeating the same mistakes over and over again.

So, I urge the reader to think about the questions, the answers, and the ideas expressed here, especially analysing the case studies for current and future action where applicable. Talk about these issues with your colleagues and mentors. If the topics in this book are discussed more widely in labs, hallways, and classrooms, then the best ethical practices will be advanced throughout the fields of science. After I began teaching on research ethics, I found that the lively new hallway discussions about various topics related to our course content were proof positive that our new effort towards promoting best practices was worthwhile.

Judge yourself

Why are you interested in research ethics?

What are your motivations for pursuing research?

In what ways are these motivations synergistic or antagonistic with one another?

Morality vs ethics

What is the difference between morality and ethics? If morality is the foundation that ethics is built upon, research ethics is the top floor that is visible from the air. That moral foundation often has religious or spiritual ingredients and is grounded in substance that is far beyond the scope of this book. Ethics can be considered a sort of practical or professional morality that sets boundaries so that the work of research is played out fairly. That is, if we think of problems not so much in terms of right and wrong, but in terms of ought and ought not, then I think we understand how to parse morality vs. ethics. Many people are uncomfortable discussing morality, religion and politics. In contrast, most scientists are happy to share their opinions on the ethics of their fields and of science in general. It's OK if we don't all agree on the fine points of all the ethical considerations posed in this book. I worry more about the big picture.

One way to think about research ethics is in terms of best practices in conducting all aspects of research science – to maximise benefits and minimise harm. A very important ethics concept is non-maleficence – doing no harm (Barnbaum and Byron 2001). While the definitions and delineations of research ethics might seem a bit squishy, let's keep in mind that there is plenty of room for opinion. This book is about ethics much more than morality, and about practical research ethics as opposed to the theoretical ethics that would interest a philosopher. This book is for scientists. This book is about integrity in performing research. Summed up, this book is about scientific integrity.

Indeed, for our purposes here, this book is also about how to be a successful scientist. It can easily be argued that philosophers have thought about ethics (e.g., Plato and other ancient Greek philosophers) for much longer than scientists have thought about science (a word not coined until the 1800s; Shamoo and Resnik 2003). There are many viewpoints that philosophers have taken to conceptualise ethics. A few of these are utilitarianism, deontology and virtue ethics.

Utilitarianism is an example of a teleological theory, which is based on outcomes rather than process. Utilitarianism seeks to do the most good for the most people; it is important to consider others and not just yourself. The utilitarian essentially performs a cost-benefit analysis to guide a person's path and decisions, an approach that is widely used these days (Barnbaum and Byron 2001).

Deontology is the ethics of duty. It strives to universalise rules that apply to everyone in guiding actions. One example here is the Golden Rule (or the rule of reciprocity): “Do unto others as you'd have them do unto you.” “Morality as a public system” (Gert 1997, p. 24) applies to research ethics in that all scientists know the rules to be followed, and it is not irrational for people who agree to participate in the system to follow those rules.

Virtue ethics focuses on living the good life. In this system, a person ought to do what a virtuous person would do in any circumstance. Like the two systems above, virtue ethics considers the potential for harm and avoids doing things that harm others, as this is what the virtuous person ought to do.

A last, self-centred way to look at ethics is through the eyes of egoism (Comstock 2002). Egoism states that a person ought to do what is in his or her own self-interest. If a scientist wants to have a long and fulfilling career, then he or she should follow the rules and perform the best science. It will also be in his or her self-interest, especially in the long run, to care about others and tell the truth in science.

As a scientist, it is difficult for me to decide which of these various systems is most effective. To me, they all point in the same general direction to guide behaviour. If we mash them up, a virtuous scientist will seek the truth for the greater good of humanity, following the rules that most scientists agree upon, because doing so also serves the self-interest of individual scientists. Scientists, by definition, should desire to maximise benefit and minimise harm (normative principles).

Inauspicious beginnings

Up until the past few years, I had no real interest in ethics as a topic of study (except a fleeting fling during my PhD training), much less in writing a book about ethics. I reasoned that everyone valued common-sense ethics and there was no need to study or discuss it. When I decided to pursue science and move towards obtaining the master's, then the PhD, after a stint of teaching in public schools, I was totally focused on science and research – no time for what I considered to be lollygagging in philosophical musings. In my mind, this singular focus on research was by necessity. I found myself so far in over my head and out of my comfort zone in science, with a motivation to learn as much as I could as fast as I could. It seemed to take every drop of energy I could muster, especially in the early part of graduate training, to keep from drowning. Even then, at times, I felt I was floundering in my classes and research. I think I would have considered any training or discussion about ethics, best practices in science, or even how to be a scientist a real distraction from science itself. How wrong I was!

Let's imagine a fictitious mechanical engineer who is fascinated with cars. The engine design, drive train, tires, chassis, brakes, the whole thing, is an obsession. Now after studying the theory of everything automotive, our ambitious engineer designs and builds a fully functional 500 horsepower machine that's capable of going 0 to 60 mph in less than four seconds. And after all these years, our engineer will now finally drive his first car ever – the one of his own design. Unfortunately, before taking the wheel, he never learnt the rules of the road. He doesn't know what that octagonal sign means, whether to drive on the right or left side, and let's not even consider motoring courtesies. No, our engineer considered all these things to be a distraction from what was really important – the car itself – the engineering. A disastrous crash and the destruction of the beautiful work of motoring machinery are highly likely without this key knowledge. Sad to say, the unpleasant result could have been avoided by a short course on how to drive while sharing the road with others.

While this might seem like a trivial example, it illustrates how many young scientists – myself included – approach learning science and being a scientist, seemingly by osmosis. One might argue that our automotive engineer would gradually learn the traffic laws and the accepted motoring behaviour over time, perhaps aided by a competent personalised driving instructor. But how much damage could be done in the meanwhile? As more and more students come into my lab and leave as budding scientists, I've become thoroughly convinced that learning best ethical practices earlier rather than later in a research career results in a big payout both to the scientist and the science itself. There is merit to having a driving course and a handbook.

How science works

The illustration below summarises the flow of science, at least how it is currently practiced, with all its necessary components. Science is actually a reiterative loop in which successes beget successes and failures cause the research loop to be broken. One of the primary drivers of success, as indicated by a completed and reiterative loop, or of failure, as indicated by a broken loop, is scientists themselves. Having the best-trained people who are eager to do research using best practices is at the heart of all successful science (Figure 1.1).

Figure 1.1 The flow of research, which starts with a great idea and background information and ends with the public distribution of new discoveries and information.

Source: C. Neal Stewart original

For the sake of discussion, we will designate a spot in the loop as the logical endpoint: publications. The end product of science is actually new knowledge, which must be canonised as peer-reviewed journal articles. Although there are other legitimate outlets for knowledge dissemination, such as presentations in professional meetings, books, book chapters, patents, and oral histories, the “gold standard” for credible science is peer-reviewed journal articles. This has largely been the case since 1665, when the first scientific journal, the Philosophical Transactions of the Royal Society, was published.

In most cases, a science paper is built on data from well-designed experiments that test hypotheses. While professors likely have a hand in designing experiments and formulating hypotheses, it is the graduate students, postdocs and other bench scientists who actually collect and analyse data, and do most of the writing. Actually doing modern science from inception to publication is a rare luxury that few senior professors currently enjoy. While the old-professor-in-the-white-lab-coat myth continues to live in popular culture, professors are producing fewer and fewer data with their own hands in the lab; in the grand universe of data, professor-collected data are minuscule.

That's because they're busy writing grant proposals! Before I peeled away to the woods today to work on this writing project, my morning was consumed with writing parts of three separate grant proposals with three different principal investigators (PIs). The PI is defined as the scientist taking the lead role in the proposal and funded project – typically the professor, or, as my students fondly refer to me, the boss. None of the proposals, of course, was completed this morning. Proposals develop over weeks or months in response to requests for proposals from funding agencies. Proposal writing is so important because money is the fuel for science. In most colleges and universities, the only scientists who are typically paid from “hard” funding, that is, from university-level funding, are professors (and even then, in US medical schools, most professors are required to raise their own salaries from grants). It is ironic that in the world of science the least productive people, data-wise, are the ones who have a tenure system to protect their employment status and salary stability. Everybody else – the ones doing the work – is typically on “soft” (grant) money or term-limited funds. Why the disparity? A partial explanation is that faculty teach and are paid from university tuition income, but it is widely known that professors who attract a lot of grant funding and those with high research productivity (read: publications) are the scientists who are most esteemed in science (and by higher education administrators). In science, these professors are typically the scientists with the highest statures and salaries. Again, why? They are the ones who enable the funding of science to collect the data to publish the papers. Famous papers containing groundbreaking science in turn yield status to institutions (and more money), and thus the financial circle is completed.
Universities successful in research gain greater reputations and funds, enabling them to amass even larger coffers, hire more faculty members and continue the trend of the rich getting richer.

If money is the fuel of science, ideas and preliminary data are the drill and refinery, respectively. Without ideas coupled with sufficient data to demonstrate that the ideas are sound and worth pursuing, it is difficult, if not impossible, to find appreciable funding for science research. Funders are generally a risk-averse lot. It is a long-dead myth that famous scientists can get funding on the basis of their name recognition alone. Science does not allow resting on laurels. To remain successful, scientists must continually generate good ideas for grant proposals. Do they do that alone? In most cases, no. They get help from postdocs and graduate students to get science started and keep it going round: ideas → funding → data → publications (Figure 1.1).

One can see two potential problems arising already. First, many critical steps are performed by young scientists-in-training who might be inexperienced with both the ethics and politics of science, not to mention the nuances of the science itself. Hence they could simultaneously be targets for exploitation and temptation. Tales abound of graduate students who are taken advantage of and not treated in a way that enables their professional success. Second, each of these steps toward publication can be a stumbling block where scientific and ethical problems might arise. Therefore, addressing potential ethical dilemmas in the context of modern scientific practices should be of some practical help to students just beginning this journey. In fact, I argue here that understanding the rules of science is necessary for running a laboratory and research projects. Subsequent chapters will build upon these themes.

Summary

Arguably, science is the most exciting and invigorating of pursuits and careers, with seemingly endless opportunities to create knowledge. With increased funding for and emphasis on scientific research worldwide, science is also growing increasingly complex, with opportunities for funding and publications becoming more and more competitive. We can think of research ethics as a framework for creating a fabric of integrity, which should, in turn, make research findings stronger and the researcher happier.

Judge yourself redux

Why are you interested in research ethics?

What are your motivations for pursuing research?

In what ways are these motivations synergistic or antagonistic with one another?

In the redux sections I will offer some of my own reflections. For many of these “judge yourself” questions there are no right or wrong answers, but the questions are designed to help the reader ponder his or her opinions and feelings about pertinent topics.

Most scientists with whom I've spoken about research ethics are not as interested in ethics as an academic discipline as they are in the result of ethics, which is to say, robust and trustworthy research results. The best scientists I know are driven by the quest for knowledge and are eager to understand how things work. There are many legitimate reasons to become a scientist, including curiosity, autonomy, and the opportunity to research topics of interest. It is important to measure motivations as you mature. With more at stake, career motivations can change, and sometimes not for the better. Often, however, motivations become more noble and less self-centred. It is important to continually judge your own motivations with regard to actions – to see oneself clearly as if looking into a clean mirror.

Chapter 2

How Corrupt is Science?

ABOUT THIS CHAPTER

More than ever, science is in the public eye.
Science is funded mainly by public sources and therefore held publicly accountable.
Many scientists admit to dubious and unethical behaviour.
Older scientists misbehave more often than younger scientists.
Unethical behaviour can reach beyond activities classified as research misconduct, which is defined generally as fabrication, falsification and plagiarism (FFP).
FFP carries strong sanctions.

This chapter title itself seems corrupt, bombastic and inflammatory. Substitute a number of other professions for “science” and few people bat an eye. How corrupt is … auto repair, or entertainment, or law, or politics? Justified or not, any of these seems, at least, to sound better than insinuating that science can somehow be a crooked pursuit. Like the arts, science seems to be one of those unassailable undertakings in which the pursuer has a higher calling; where idealism and truth trump money and comfort. Ask any scientist, “Are you in it for the money?” While a few scientists have found the field to be quite lucrative, invariably, no scientist to whom I've ever posed this question has answered in the affirmative. Contrast this with many other fields requiring a great deal of education. True, scientists typically don't enter science for the money, but motivations can change, and with them, behaviours can also change between the beginning and end of a career. Besides money, there is also the factor of sheer survival in a field populated with creative and smart people. Corruption can also be born of the notion of gaining an unfair advantage and of the “publish or perish” culture (Woolf 1997).

In addition to examining motivations, we need to assess how widespread scientific misconduct is. Does breadth necessarily define impact on true discovery and science itself? That is the key question. After all, perhaps science corruption is akin to cheating on taxes. Little money is at stake for a low-wage earner, but if a billionaire cheats, then significant funds are in play. Of course, there is a huge practical difference between plagiarism on an undergraduate assignment and fabricated data sets in a highly visible Nature publication. But on the other hand, cheating is cheating, and little to none should be tolerated in science when it is found out.

Judge yourself

Why did I decide to enter science as a profession? What were my motivations?

Am I generally tempted to cheat? In what ways?

“Scientists behaving badly”

A number of surveys and meta-analyses have been conducted on research misconduct through the years. For example, Ashworth and Bannister (1997), Eastwood et al. (1996), Fanelli (2009), Hard et al. (2006), Maurer et al. (2006), Pryor et al. (2007), Swazey et al. (1993), and Yank and Barnes (2003) have all examined scientific misconduct using survey data. I wish to focus much of the following discussion on just one survey for simplicity's sake, yet I invite readers to dig deeper into the literature to judge for themselves the breadth and depth of the problem. In 2002, a survey was mailed to US National Institutes of Health (NIH) grant recipients, which asked them to anonymously self-report research misconduct by yes/no responses to several questions (Martinson et al. 2005; see also Anderson et al. 2007). The survey was sent to mid-career scientists who had received their first full-sized (R01) grants – these were typically associate professors, with a mean age of 44 years. It was also sent to younger scientists who were supported by NIH postdoc fellowships; on average, these junior respondents were 35 years old. Of the few thousand surveys mailed, 52% of the older group and 43% of the younger group responded. To this point, I am struck by two surprises already. First, a high proportion of scientists were willing to report their misdeeds. Even if respondents could maintain anonymity, why risk self-reporting bad behaviour? Confession is good for the soul, true, but why take the chance of being discovered for data fraud or other bad behaviours? Second, the sampling of scientists chosen as the study subjects was highly skewed. NIH R01 grant recipients are nearly always very well-qualified scientists who are serious about performing biomedical research. The scientists surveyed were no dilettantes.
R01 grants are simply not easy to win, as current funding rates hover just above 10% (http://report.nih.gov/nihdatabook/); and the postdocs receiving NIH fellowships are no slouches either. These postdocs represent former graduate students whose grades, aptitudes and research are among the best in US biomedical fields. Prior to reading the results, and based upon these initial conditions and assumptions, I would have been shocked to learn that either group reported much bad behaviour; I just would not expect them to participate in research misconduct or in suspect research practices. What did the survey report?

Before we look at these particular results, the prevailing opinion, as reported by Martinson et al. (2005), was that the incidence of falsification, fabrication and plagiarism in science is probably in the 1–2% range. But in 2004 the editorial office of the Journal of Cell Biology (Rossner and Yamada 2004) estimated that the proportion of papers containing questionable data might be as high as 20% (Anonymous 2006). One editor of a Chinese “campus” journal recently reported that 31% of submissions contained plagiarism (Zhang 2010). While longitudinal data on cheating don't exist, most people in science would agree that if we do have a 20% incidence of misconduct, or even of “questionable data,” then we have a huge research integrity problem – and that is before considering plagiarism (Butler 2010).

According to Martinson et al. (2005), three researchers out of every thousand admitted to fabricating or “cooking” data. Cooking refers to altering existing data to “improve” a finding rather than outright data fabrication. Another 1.4% of respondents admitted to using others’ ideas without proper credit (plagiarism). For these two items, there were no differences between mid-career and early-career scientists. There were differences for at least two notable questions, with mid-career scientists being worse. Twenty-four of a thousand older scientists, as opposed to just eight of a thousand younger ones, had used supposedly confidential information in their research. One would hope that as scientists age, they would adopt better practices, not worse ones. In a bigger offence, 20.6% of mid-career scientists admitted to caving in to pressure from funding sources to alter their experimental design, methods, or results (emphasis added), as opposed to 9.5% of early-career scientists. Herein we have a striking dichotomy. Most scientists would agree that while they might be passionate about their science, scientists ought to approach research dispassionately and objectively, since the main objective of science is the pursuit of truth. However, there is a widely held public belief that the profit motives or ideological leanings of funders often drive or alter the research results that are reported. The high numbers of researchers altering experimental design, methods, and especially results demonstrate that this sceptical viewpoint has some validity. I might add that the NIH is among the most benign of funders in this regard; their grantees should feel no ideological or economic pressure to find one result over another. It would be interesting to perform the same survey among recipients of funds from pharmaceutical, agricultural and chemical companies, where a research agenda is more obviously skewed toward economic impact.

If researchers were found guilty of many of the items above, university and government sanctions would be levied against the perpetrators. But we actually hear of very few cases in which guilt is discovered and penalties are exacted. Most misbehaviour in science seems to go undetected. One of the reasons why this is the case will be seen in Chapters 6 and 7. Scientists are not fond of non-anonymously reporting blatant misconduct or even sloppy science; it seems that few people want to be known as whistleblowers or, in the childhood vernacular, tattletales. This is understandable, and in the above survey, 12.5% of scientists admitted to overlooking others’ flawed data or their own “iffy” interpretations of data. This figure doesn't even take into account any close examination of papers by peer reviewers looking specifically for misconduct. Therefore, we see that many scientists would rather look the other way than “objectively” report on research honesty – even if they could do so anonymously, which is often the case in peer reviewing of grant proposals and publications. While scientists might grumble over bad players during the social hour, they are not anxious to call out their peers, even if not doing so results in damage to science as a whole (Gunsalus 1998). It would seem that the proverbial rug covers a profundity of scientific dirt.

The Martinson study illustrates that the magnitude of self-reported dubious behaviour among scientists can hardly be considered insignificant. In fact, results in some of the categories are startling. What is even more profound is that the survey asked scientists to report on their behaviours during only the past three years. Furthermore, the sheer frequency of misconduct is staggering. Martinson et al. (2005) reported that best estimates of falsification, fabrication and plagiarism prevalence were thought to be in the neighbourhood of 1–2% – indeed this is in the range of a meta-analysis of FF (not P) surveys (Fanelli 2009). In the Martinson survey, however, one-third of respondents admitted to deeds that research officers would consider to be sanctionable! For early-career scientists, the frequency was 28%, and for mid-career scientists it was 38%! Therefore, it is almost unimaginable to estimate how high the proportion of scientists is who are guilty of gross unethical behaviour over a scientific lifetime. What if they had surveyed even older scientists? If we extrapolate, well over half of researchers nearing retirement would admit to bad behaviour during their final three years in science. Considering that the average mid-career scientist in this survey had perhaps 20 or more years left in his/her career, the opportunity and propensity to participate in dubious activities are overwhelming. The survey selected relatively young scientists who had, seemingly, spent little time in science. And let's remember that the numbers reported here are likely conservative, since it is doubtful that all the guilty parties would self-report their misdeeds for fear of some sort of retribution.

Do scientists behave worse with experience?

Anecdotally, I've observed that many scientists become savvy enough about the rules of science to know which ones they can break without being caught. The Martinson survey indicates this might be systematic among biomedical researchers. Of the 16 questions posed, 6 had statistically significantly different responses between mid-career and early-career scientists. In each case, the older scientists reported a higher incidence of misconduct compared with younger scientists. The conclusion we must draw is that age and experience are important factors causing scientists to go bad. As Lutz Breitling opined in Nature later in 2005, younger scientists likely have higher ideals and enthusiasm compared with older scientists, who might become jaded: “In the rough world of today's science, they are exposed to an environment in which impact factors and awards are more important than advancing the knowledge of mankind” (Breitling 2005). Sad to say, these same scientists are also probably running roughshod over their trainees. He thinks that the practice of science itself, as defined by how we do modern science with its pressures and rewards, is the problem; at its root, disillusionment is the problem. While I somewhat agree with his conclusion, I think the ultimate root of the problem must run deeper than mere disillusionment. Breitling further states that he doesn't think sanctions are the answer to the problem. I'm not so sure I agree with him. In fact, another letter writer to Nature, Kai Wang, thinks that education along with stiffer penalties would go a long way toward improving scientific integrity. Wang, a graduate student, points out that there is little ethics education in graduate school (Wang 2005).

Judge yourself

What factors of science research might cause you to cheat? For example, how do you deal with pressure?

How might these be counteracted?

How much of a deterrent is embarrassment and punishment?

How do the results of the Martinson survey make you feel about the profession? About yourself?

Crime and punishment

Reporting and sanctions are important to addressing research misconduct. Nobody, it seems, is anxious to police science. Editors, to some degree, are the most proactive players in science in this regard, but clearly peers, and especially students, are not anxious to make waves. As we'll see in Chapter 7, whistleblowing often comes at a steep price. But as Wang (2005) succinctly points out, “If the benefits of misbehaving outweigh the possibility of being punished, academic misbehaviour is probably inevitable.”

Is scientific misconduct inevitable? Unfortunately, to some degree I think it is indeed unavoidable, inasmuch as corruption is present in every profession; why would science be immune? That said, we shouldn't abandon high standards of honesty or give up trying to stem the tide of unethical behaviour. What can be done? First, as has been mentioned as at least a partial solution, education about the expectations and rules of science is crucial (Titus et al. 2008; Titus and Bosch 2010). Second, we must be aware of factors leading to potential disillusionment and corruption in mid-to-late career. Third, scientists need to self-police the profession more effectively. I am not referring to the state of mind that Nobelist Marie Curie abhorred: “There are sadistic scientists who hurry to hunt down errors instead of establishing the truth.” Quality assurance is critical in verifying that published data and information are honest and real; I think that's one duty of science and scientists. One emerging subdiscipline in science publishing is informatics tools to catch cheating – either pre- or post-publication. Algorithms and routines to spot plagiarism and image manipulation exist and should improve. Journals should be in the vanguard of these activities, since they arguably have the most to lose by publishing papers containing FFP or dubious results (Berlin 2009; Butler 2010).