It may be spring but it’s kind of autumnal in academia. Instead of falling leaves, assignments have fallen from our students and need to be marked. After our academic autumn comes our summer. And that means desperately trying to meet the deadlines for things we said we’d do but haven’t got to. For me, that includes writing a chapter about teaching ethics.
If you ask Google Scholar for a précis of the history of psychological codes of ethics, you will learn many interesting things. For example, many introductory psychology students think we have ethics codes only because of people like Stanley Milgram and his faux-electrical obedience experiments, or Philip Zimbardo and his slightly-too-real fake prison study. They would be wrong.
We have ethics codes because of medical experimentation. The Nuremberg Trials after World War II shone a light on “medical” atrocities such as freezing Jewish prisoners to death to simulate how long a Nazi pilot might expect to survive in inclement conditions.
And not just the Nazis. There was the Tuskegee Study of Untreated Syphilis in the Negro Male, which ran for 40 years from 1932. In it, African American men were promised free medical care in return for participation. The kicker is that they were not told the true purpose of the study, or whether they had syphilis. About a quarter of the men died of complications of the disease, even though effective treatment was readily available before the “study” ended.
If you think this is bad, Googling “controversial medical experiments” will teach you that incarceration in a US prison could have exposed you to having your testicles removed without permission, to irradiation or injection with cancer cells, or to being scarred by tests of dermatological treatments.
Even before Tuskegee was brought to light, much of the world had already taken steps to protect human “subjects”. The 1947 Nuremberg Code described 10 principles covering “permissible medical experiments” that include many we’re familiar with today – voluntary and informed consent; the research must have benefits that outweigh harms and can’t be achieved any other way; and if you know in advance that death might occur, that’s a hard no. Except, the code states, “perhaps, in those experiments where the experimental physicians also serve as subjects”.
In 1947, the World Medical Association was established, and in 1964 it created the Declaration of Helsinki, describing a double handful of principles. Thanks to outrage at things like the Tuskegee Study, the 1970s saw a robust discussion about ethical principles that largely inform what we have today. They are based around four primary bioethical principles: autonomy (of the person to consent to research or treatment), beneficence and nonmaleficence (do good, not bad), and justice (use your expertise to make the world a fairer place where you can).
Before we throw the first stone, it’s worth noting we have our own unfortunate history of dodgy medical experiments – lest we forget the National Women’s Hospital cervical cancer study that began in 1966. The subsequent Cartwright Inquiry, in 1988, led to the Code of Health and Disability Services Consumers’ Rights in 1996.
Registered psychologists, and others who belong to a variety of psychological organisations, are required to adhere to the Code of Ethics for Psychologists Working in Aotearoa/New Zealand. The current version, ratified in 2012, is framed around respect for the dignity of persons and peoples, responsible caring, integrity in relationships, and social justice and responsibility to society.
The code is currently being revised, so we can expect a new version in the near future. I would anticipate continued reliance on traditional principles, but with a stronger emphasis on biculturally informed research and practice.