
Experimental Ethics:
Issues Raised by Obedience Research

Alan C. Elms


Science leads to powers whose exercise spells disaster and nuclear weapons are not the deepest of these. In the great strides in the biological sciences, and far more still in the early beginnings of an understanding of man's psyche, of his beliefs, his learning, his memory and his probable action, one can see the origin of still graver problems of good and evil.

--J. Robert Oppenheimer, quoted in the New York Times, May 23, 1956

Psychology thrusts no new moral dilemmas upon us. At most, by increasing possibilities of prediction and control, it demands that we attend more seriously to the solution of old moral and philosophical dilemmas.

--Raymond B. Cattell, The Scientific Analysis of Personality

In most of Stanley Milgram's research, compliant volunteers engaged not in constructive but in destructive obedience, and seemed willing to abdicate their moral responsibility as they did so. But of course they were not freed from moral responsibility just because the experimenter demanded that they obey. Neither the moral codes of most modern religions nor the deliberations of such bodies as the Nuremberg tribunal would grant them that freedom. Milgram, in exploring the external conditions that produce such destructive obedience, the psychological processes that lead to such attempted abdications of responsibility, and the means by which defiance of illegitimate authority can be promoted, seems to me to have done some of the most morally significant research in modern psychology. A number of ministers who have based sermons on this research, and the Germans and Israelis who were the first to publish translations of Milgram's papers, apparently agree.

But by an odd twist of moral sensibilities, the Milgram studies have themselves been more extensively attacked on ethical grounds than any other recent psychological research. Part of the criticism has come simply from coffee-break moralizers, who feel their own vague personal codes of ethics to have been somehow bruised. But the studies have also been used as stepping-off points for serious discussions of the psychologist's ethical responsibilities, both to his research participants and to society at large.

Psychologists had not ignored the moral and ethical issues involved in experimentation prior to Milgram's research. The American Psychological Association's Ethical Standards of Psychologists has gone through several editions, with another major revision in progress (Ad Hoc Committee on Ethical Standards, 1971). Several APA divisions have additional codes of ethics, or committees to investigate ethical violations; and a good many college psychology departments have their own machinery for policing the ethical practices of faculty and student researchers. An APA committee investigated Milgram's obedience research not long after its first appearance in print (holding up his APA membership application for a year in the meantime), and judged it ethically acceptable. But the criticisms have continued, partly because the APA code is foggy enough to allow for wide differences of opinion, partly perhaps because really clear cases of violation seldom come along and some psychologists were looking for a convenient battleground. One's moral purity is hard to establish nowadays without a good fight.

The first and most widely published criticism of the Milgram studies, by Diana Baumrind (1964), raised several key issues. Milgram (1964b) has published a careful rebuttal of her specific points, but the general issues hold implications for other psychological research as well, and are worth considering at length.

THE ISSUE OF “ENTRAPMENT”

Is the psychologist ever justified in leading a volunteer into a situation the volunteer has not anticipated and to which he has not given his prior consent? It's tempting here simply to generalize from good medical research practice and argue (as Baumrind does) that the volunteer should always give his "informed consent" in advance, that he should be told what's going to happen and what the dangers are so he can decide whether he really wants to participate or not. This may be feasible when you want to inject a bacillus into people, because telling them about it won't do much to the bacillus one way or the other. But if you want to study the conditions under which a volunteer will obey an authority's orders, you'd better not tell him, "I want to see when you'll obey me and when you'll disobey me," or you might as well go play a quick game of ping-pong instead. The same is true for social conformity experiments, and in fact for a large proportion of the questions studied by social and personality psychologists.

This doesn't mean that deception in psychological experiments is a trivial problem, or above reproach. Misrepresentation or lying, in the service of science as well as anywhere else, is both unfair and possibly damaging to the recipient of the lie, and may ultimately harm the deceiver's interests as well. As more and more deception is used and becomes known in psychology, potential groups of volunteers are likely to build up layers of distrust toward any kind of unusual or particularly demanding situation, for fear of suddenly being told something like "Smile, you're on Candid Camera!" Not only will this inhibit spontaneous behavior; not only does it cause serious public relations problems for psychologists (particularly with their students); it also renders the results of many psychological experiments even outside the laboratory thoroughly suspect. I would distrust the results of any new social-psychological experiment using student volunteers from Harvard, Yale, Michigan, or several other over-researched universities, unless I knew much more than most experimenters learn about how the volunteers actually saw the experimental situation. Many researchers now make an attempt to question volunteers after an experiment about their perception of deception; but the attempt is often directed more toward reassuring the experimenter about the efficacy of his cover story than toward obtaining useful information about the incidence of doubt.

What can be done, then, to protect both volunteers and psychologists? An easy suggestion is to reduce deception to a minimum. Some psychologists have gone the other way: I know of a study where the volunteer was led to believe he was helping the psychologist dupe another volunteer who presumably thought he was helping dupe a third volunteer (and maybe there was a fourth one in there somewhere), when of course only volunteer number one was really being fooled and the others were experimental confederates. It may have been vital to go to such lengths in that particular case; but I think it's safe to say there's much more deception going on than need be. Part of it is simple one-upmanship; the psychologist feels good knowing a few things the volunteer doesn't know. I've done research myself in which I laid on perhaps twice as much deception as was necessary, to make the experiment more stimulating to my potential readers. I can say now that I wasn't mature enough then to know better, and that I won't do it again, and that I hope others won't either. But is that enough?

Herbert Kelman (1967) and others have suggested that we might instead resort to roleplaying: tell our volunteers what we're trying to study, and then have them go through the experimental situation while pretending they don't know what it's all about. This approach assumes that sophomores and other volunteers already know all there is to know about their own probable responses in real life, or that they will do the same things in a simulated situation as in a realistic one, both of which are patently wrong. I've already mentioned two occasions within the Milgram series alone that indicate the likely invalidity of such roleplaying studies: the fact that nonparticipants, even highly trained psychiatrists, are miserable predictors of real participants’ responses, and the discovery that Milgram participants, asked to describe what they would do in other situations of destructive obedience, often gave answers totally discrepant with their behavior in the obedience situation. Maybe they would do all those things; but I prefer to believe their real behavior, even in a laboratory experiment, over their roleplayed behavior. Milgram also asked volunteers in another study (1965a) what they'd be likely to do in the basic obedience situation, without actually putting them through it. Nobody said he'd go higher than 300 volts, though of course many real participants went to 450 volts. Roleplaying looks like one of the worst of many possible resolutions to the deception problem (see also Freedman, 1969).

So we have a set of studies such as Milgram's, which must involve some deception if they are to be done at all. Once we've limited that deception to the necessary minimum, is that all we can do? Just say, "Oh well, deception is necessary," and drop it?

No, we can still do a couple of things. One is to make it clear to the volunteer that any time the experiment becomes distasteful to him, calls upon him to do things he would not willingly do, he can get out of it. In a medical study this might not be very useful, if the patient is already under anesthesia or has his belly open or a needle in his spine. But for consenting adults in a psychology experiment, it can be made a live option throughout, and a number of college psychology departments make it an alternative to "informed consent." In the Milgram study, all volunteers were explicitly told at the beginning that they were being paid simply for coming, not for completing the experimental procedure, and they were paid in advance to free them from financial pressure to continue. In other types of experiments where willingness to obey is not an issue, volunteers can also be told repeatedly that they are free to stop at any time during the experiment.

We can also tell the volunteer, as soon as possible, what has been going on: undeceive him, so he won't go around still believing the misrepresentations. This can often be done most feasibly immediately after the experiment, particularly with college students who are likely soon to scatter to the four winds. Such a practice may still create difficulties where an experiment is spread over an appreciable time: participants are likely to talk to other people about their experience, if it is of any interest at all. Careful studies have shown that even when students sign agreements that they won't tell, volunteer performance may differ significantly between those who participate early in an experiment and those who participate late. What Milgram initially did was a reasonable compromise: he told volunteers as soon as their participation was over that the victim hadn't gotten nearly as much shock as they'd thought. Then Milgram waited for several months, until the bulk of the studies was completed, to notify participants fully of the experiment's purpose, the extent of the deceptions, and the early results, as well as emphasizing the value of their participation. Volunteers who participated after the experimental series was further along were told immediately afterward exactly what was going on.

It's sometimes tempting not to "dehoax" volunteers at all. That way, presumably, word doesn't get out about the study before it's finished, the volunteer isn't made to feel like a fool for having been duped, and psychology isn't given a bad name by publicizing all those deceptions. According to this reasoning, maybe I shouldn't write anything about deception here. But word will get out sooner or later anyway, if the psychologist makes any effort to circulate information on his research, as he properly should, or if he trains his students at all in the techniques that research involves, as he also properly should. Better the word get out intentionally and in context, than be publicized as an exposé several years later. If social psychology gets a bad name simply from the dispassionate or even sympathetic description of its research techniques, it will surely have earned that bad name. I don't think it has earned such a name; as I've said, experimental deception is sometimes a legitimate means to legitimate ends, and it can be presented to volunteers in that light. Deception will remain a technique to use in minimal fashion, only when nothing else will get the requisite information. Most psychologists have enough ethical distaste for deception as such to avoid it when possible, and maybe greater publicity about it will force the rest to devise alternate means of study. But deception need not be thought of as a mortal sin in and of itself.

THE ISSUE OF PSYCHOLOGICAL INJURY

The question of whether it's all right to make the volunteer feel like a fool by telling him you've tricked him brings me to another of Baumrind's points: Milgram's assailing of volunteers' "sensibilities." He upset them emotionally, pushed them into a situation where they could only obey and then hate themselves for it afterward, and all this was "potentially harmful because it could easily effect an alteration in the subject's self-image or ability to trust adult authorities in the future."

The phrase "adult authorities" suggests how Baumrind really sees Milgram volunteers: she is a child psychologist and the volunteers are all children at heart, unable to resist the experimenter's wiles and therefore needing protection by someone who knows better, namely Dr. Baumrind. But of course the volunteers are actually grown men, in full legal possession of their senses and their wills, undrugged and faced with no physical force. If they could do nothing but obey, Milgram would have no experiment. But they can do something else (as the victim, by his protests, seeks to remind them), and a goodly number of them do; it's their choice, not Milgram's. The experimenter does demand that they obey, true; but if he is an "adult authority," they are adult moral agents in their own right. As Milgram (1964b) says, "I started with the belief that every person who came to the laboratory was free to accept or to reject the dictates of authority. This view sustains a conception of human dignity insofar as it sees in each man a capacity for choosing his own behavior. And as it turned out, many subjects did, indeed, choose to reject the experimenter's commands, providing a powerful affirmation of human ideals."

Several psychological studies have been conducted where the volunteer undergoes certain morally questionable manipulations with no choice on his part. He is falsely told that he has homosexual tendencies or that he is stupid or is repugnant to some group of respected people. Baumrind herself (1967) has placed small children in situations where they are sure to fail a task, in order to observe their responses to failure. One might be able to justify even these kinds of manipulations, if they are absolutely necessary to an important scientific enterprise and if great care is taken to alleviate afterward any misimpressions that the psychologist may have created in his volunteers. But I would still have doubts about such techniques, simply because the element of choice has been removed. The element of choice remains continually present in the Milgram situation; and choice, after all, is what makes morality.

Baumrind is distressed not only because Milgram puts his volunteers in a situation where (she says) they can do nothing but obey, but also because he gets them extremely excited and anxious and upset, because the whole situation is so "traumatic." Their disturbance, of course, is another sign that they recognize a moral dilemma; I for one am glad that few if any delivered shocks to the victim with cold impassiveness. But did Milgram have any right to upset them at all?

An unfortunate tradition within certain regions of social psychology dictates that social interactions under study should be stripped of any real-world referents. The dynamics of a small group discussion, for instance, may be studied by isolating volunteers in separate cubicles and having them pass notes to each other through little slots, in this way conducting a slow-motion argument about an issue of absolute triviality. Such studies have their partisans, but I am not one and I have avoided discussing them at any length in this book, since their social relevance seems minimal. It's possible to conduct research without raising a participant's blood pressure or arousing his slightest concern. But since the important aspects of human life often involve concern and heightened blood pressure, I don't feel researchers should avoid them.

Volunteers may not always want their concerns aroused, and we must keep the rights of the volunteer constantly in mind. But such considerations are not completely impervious to other demands. Just as the conformity experiment participant faces a conflict between objective and social information, the psychologist who wishes to study humans faces a conflict between the general value of his research, not only to "science" but to humanity at large, and the possible harm that participation may cause to his volunteers. He has a strong obligation to reduce negative effects to the minimum. But must he, as Baumrind demands, cancel his experiment if he cannot eliminate every possibility of "permanent harm, however slight"?

Most serious students of ethics at some point reach the conclusion that there is more than one source of ethical obligation, whether they approach ethics from a religious viewpoint or a philosophical or even a scientific one. You cannot use absolute words in talking about moral obligations toward the individual or toward society; you cannot insist on either the "greater good" or "the individual good" with any degree of certitude. When a majority of psychologists come to that realization, and they begin transferring their psychological investments from their own personal moral codes to a hard critical consideration of professional ethical complexities, perhaps we'll attain a more honest confidence about the resolution of our perplexing ethical decisions than we hold now. But absolute certitude, I doubt. The medical profession, with a much longer history of trying to deal with such issues, hasn't got there yet, and doesn't seem about to.

Milgram, as Baumrind herself indicates, was concerned with a very important social problem, one which has been directly implicated not only in the physical destruction of millions of people during the twentieth century alone, but in the moral abdication of millions more. The ethical case for his upsetting several hundred people in the laboratory in order to study this problem seriously is surely much better than the case against, though neither can at this point be established absolutely. He does have an obligation to minimize any long-term negative effects of such upsets as thoroughly as possible. Has he done so?

Baumrind, reading Milgram's two-sentence description of his procedure for alleviating volunteers' emotional tensions in his first report, didn't think so: "In view of the effects on subjects, traumatic to a degree which Milgram himself considers nearly unprecedented in sociopsychological experiments, his casual assurance that these tensions were dissipated before the subject left the laboratory is unconvincing." But the tension-dissipation procedures were in fact anything but casual; they were, as far as I know, nearly unprecedented in sociopsychological experiments.

The standard procedure in other sorts of experiments involving deception has been a straightforward explanation of what the study was all about, with the assumption that this would relieve any disturbances; in most cases it probably has. A few experiments, such as Asch's, have also involved discussions with the experimenter and sometimes with his confederates afterward, to clear up any misunderstandings and to further alleviate any obvious distress. Milgram went well beyond that. As I've indicated previously, the volunteer was told post-experimentally that the machine could not administer shocks harmful to humans; the victim came out in a very friendly mood, explaining he had been overly excited but not hurt; he stressed he had "no hard feelings," shook the volunteer's hand at least a couple of times, and often said he would have done the same in the volunteer's shoes; and the experimenter explained that the volunteer's behavior was by no means unique and indeed was what was expected in a memory-and-learning experiment. Between experimenter and "victim," a very effective job was done of relieving whatever tensions the volunteer had accumulated during the experiment. Baumrind actually complains that the experimenter was too nice: "the subject finds it difficult to express his anger outwardly after the experimenter in a self-acceptant but friendly manner reveals the hoax." But the volunteers seldom indicated any thought of anger at this point, and whatever anger existed was soon dissipated by their relief at not having hurt the "learner," or by the volunteer-accepting (not self-accepting) manner of both experimenter and victim. All this was reinforced later when volunteers received the written description of the experiment's true purpose. As Milgram says of this report, "Again their own part in the experiments was treated in a dignified way and their behavior in the experiment respected."

But Milgram didn't stop there. He had an extensive program of evaluating the effectiveness of these tension-relief procedures, in addition to the on-the-spot evaluations at the end of each volunteer's participation. For one thing, a short questionnaire was mailed to each volunteer along with the written report of the research results. Most returned the questionnaire and most indicated a continued positive response to their participation; only 1.3 per cent indicated any negative feelings about it. In my separate interviews with forty volunteers, conducted before the report was sent out, I questioned them about whether they had subsequently "felt bothered in any way about having shocked the learner," and probed gently about such things as guilt feelings or bad dreams. Only two people indicated even mild concern. Indeed most obedient volunteers were willing to participate again, as either teacher or learner. A majority of the defiant subjects interviewed were also willing to return to act as teacher again, though they had refused to do so at some point in the original experiment.

I didn't question these volunteers extensively about their feelings toward the obedience research, but a psychiatrist not otherwise connected with the studies did interview another forty in depth a year after their participation, purposely choosing those who seemed most likely to have suffered psychological harm from their participation. His conclusion (as quoted in Milgram, 1964b): "None was found by this interviewer to show signs of having been harmed by his experience. . . . Each subject seemed to handle his task [in the experiment] in a manner consistent with well-established patterns of behavior. No evidence was found of any traumatic reactions." A recent obedience study by Ring, Wallston, and Corey (1970) found a similar absence of lingering disturbance following experimental participation, as long as volunteers were told – as in the Milgram studies – that the victim was not actually harmed and that their own experimental behavior was appropriate.

In view of all these precautions and all the evidence that they were effective, I'd say that the Milgram research, far from being attacked for ethical shoddiness, might well be held up as a model of how one should proceed in conducting a serious experiment on serious human problems, with full consideration of the ethical issues involved in experimentation. If there's anything I'd complain about, it's that Milgram made it too easy for the obedient volunteers to ignore the ethical implications of what they'd done, by assuring them that other people did the same kinds of things and by implying that this was acceptable behavior. Maybe he should have left them a little more shook up than they were.

This raises the issue of what people consider to be psychological harm. Baumrind's position, again, is, "I do regard the emotional disturbance described by Milgram as potentially harmful because it could easily effect an alteration in the subject's self-image or ability to trust adult authorities in the future." Along somewhat the same lines, Herbert Kelman (1967) has argued, "In general, the principle that a subject ought not to leave the laboratory with greater anxiety or lower self-esteem than he came with is a good one to follow." Milgram rightly responds to the latter part of Baumrind's statement by noting that people should not indiscriminately trust authorities who order harsh and inhumane treatment. But what about this matter of self-esteem? Maybe the volunteers didn't have bad dreams later, or develop any neurotic symptoms as a result of shocking the learner; but surely some recognized the weakness of their own ethical systems, realized they'd proven themselves poorer human beings than they had previously thought? And aren't you unjustly harming a person when you lower his own evaluation of himself in this way?

Stanley Coopersmith (1967) and other psychologists have observed in some individuals a "discrepant" or "defensive" self-esteem, which is not based on accurate assessment of the individual's own behavior and indeed may be concealing truths about the person that he'd be better off knowing. I see no obligation for the psychologist to strengthen or maintain such defensive self-esteem, though if his actions are likely to weaken it he should be prepared to help the individual find more realistic bases for rebuilding a positive self-image. Self-esteem is another of those things that looks like an absolute good at first, to the ethics-oriented thinker looking for absolute goods, but that may prove at least partly illusory.

THE ISSUE OF INFLUENCE

Some would continue to reject such a view, arguing not only that psychological researchers should "do no harm," but that they should avoid intervening in an individual's behavior uninvited even if the behavior's continuation is likely to damage the individual's own interests or the interests of others. Kelman (1965) puts this in rather strong terms: ". . . for those of us who hold the enhancement of man's freedom of choice as a fundamental value, any manipulation of the behavior of others constitutes a violation of their essential humanity." He grants that manipulation is sometimes necessary to achieve good ends, but insists that "even under the most favorable conditions manipulation of the behavior of others is an ethically ambiguous act." Certain critics of psychology extend this argument not just to manipulating others' behavior, but to collecting information on others' behavior. One writer has gone so far as to suggest that a psychologist may be acting unethically simply by using his psychological skills to draw inferences about another person in casual interaction - much as it is a serious crime for a prizefighter to hit a private individual with his fists.

Such critics are, however, demanding that psychologists observe a moral absolutism that is found in no other field of human endeavor. Sound moral judgments cannot be made in the abstract, and when we try to make them in reality, there are nearly always conflicting claims on conscience, as Kelman (1967) recognizes. Psychologists in some cases (for instance in testing employees of certain governmental agencies and private corporations) may indeed have gone too far in invading personal privacy; but even privacy has no absolute moral guarantees. As Ross Stagner (1967) argued in testifying before a congressional committee on the creation of a National Social Science Foundation:

“[G]reat social dangers cry for investigations which may be blocked by excessive emphasis on the right of privacy. Consider the case of the rapist, the violent criminal. As a youth he may certainly object to ‘prying questions’ which might reveal his explosive, destructive, antisocial tendencies. Yet society is clearly entitled to look for measures to protect women from his hostile sexuality. A loaded way to phrase this question is to ask how we balance the right of the young man to privacy against the right of the woman to walk safely in the streets. A more defensible question is: how can social scientists gather the data which we so desperately need, the basic information for the prevention and correction of violent behavior, with proper consideration for the right to privacy?”

That should be language Congress can understand. If you think rapists and crime in the streets have been too much abused, look at it as a question of whether we should guarantee absolutely the right of the potentially destructive authoritarian against a confrontation with his own personality, or whether we should give some weight to developing means to prevent the manning of more gas chambers and concentration camps. Or would the moral absolutists still bar the psychologist from research on such problems, and trust their own moral indignation to keep the next six million away from the cyanide showers?

I am not here advancing the you-can't-make-an-omelette-without-breaking-eggs line. Omelettes aren't necessary foodstuffs and human personalities are not eggs for the breaking. But psychologists do have important work to do; human behavior must be understood, if man himself is to survive in any dignity; and I do not see that anyone has any clear right to noninterference with his private person as long as the psychologist takes full care to do the minimum of probing and influencing necessary for his investigations, and to maintain the psychological health of his research participants.

Not only can one justify imposing certain unpleasant experiences on psychological research participants; one might even argue that the experience these people undergo can sometimes be a moral good in itself. As I've noted, Milgram did not falsely attribute any despicable qualities to his volunteers, as has occurred in a few studies. It happens to be quite true that the obedient volunteers were willing to shock innocent human beings upon command, and each volunteer proved this to himself. Should we instead leave people to their moral inertia, or their grave moral laxity, so as not to disturb their privacy? Who is willing to justify privacy on this basis? Who would have done so, with foreknowledge of the results, in pre-Nazi Germany? Do we not try to wake our friends, our students, our followers or leaders from moral sloth when it becomes apparent, and are we bound to use our weakest appeals when we do so? Who now condemns the Old Testament prophets for having tried to arouse people to the evil within themselves? Milgram doesn't claim prophetic stature, but his experiments may similarly awaken some of the people involved. It's true that these people didn't ask to be shown their sinful tendencies; but people rarely do. That's why ministers lure people with church social functions, why writers clothe their hard moral lessons in pretty words and stories, why concerned artists blend morality and estheticism: because people prefer not to face the truth about themselves if they can avoid it. I have heard the other side of this argument, come to think of it: the argument that a certain group of people doesn't want to be educated, that maybe they'd prefer to remain in happy ignorance, and therefore should be left to their familiar pattern of life. Yassuh, massa.

The thrust of such arguments in Milgram's case is that he was simply too effective in bringing volunteers into dramatic confrontation with their own conflicting moral trends and their own weaknesses. We don't hear the same complaints about other psychological studies, or about most public speakers or writers or teachers or preachers, because they seldom move their audiences enough to make complaints worthwhile. Plenty of ministers, I am sure, would be ecstatic over the possibility of giving their congregations such a harrowing contact with their own immoral inclinations as Milgram has done, and would feel the process producing this experience to be truly heaven-sent. (In fact, one doctor of divinity who was a research volunteer asked Milgram afterwards whether he would put some of the good reverend's divinity students through the procedure, and let the good reverend in on the results. Milgram, feeling a bit of doubt as to the ethics of such a procedure, said no.)

The Beatles once sang, "I'd love to turn you on." Thousands of moral advocates would love to turn their own audiences really on, at least for a few moments. Milgram has done just that, though most of his volunteers seem to have possessed sufficiently firm psychological defenses to backslide into their old ways again soon after. I wouldn't contend that the obedience studies are justified by such self-revelations as I've been describing. I am arguing that the studies' justification in terms of adding to our knowledge of authoritarian behavior and destructive obedience is not invalidated by any taint of questionable ethics through the induction of such self-revelations.

Further, it's impossible to avoid upsetting someone, lowering someone's self-esteem, sooner or later, if you do any significant research and publicize it. I didn't tell my rightist volunteers in Dallas what I thought of the bases for their political activity. But when they read this book, as I hope some will, they may feel a bit less self-esteem at finding they're either projecting inner conflicts or easing their social relationships or just getting their kicks out of rightist activity, instead of doing it because it's the most rational position around. Diana Baumrind is going to lower at least a few parent volunteers' self-esteem, when her conclusions that their child-raising techniques produce personal characteristics not valued by our culture eventually filter out into the popular press. Some psychologists seem to hope their research can continue in a vacuum, can be bound up forever in the small-print journals and thus never help or hurt anyone, so that they needn't consider whether it's moral to do such research or not. They need to examine the ethical implications not only of their research, but of their wish to keep it as private knowledge.

Any research influences its participants, and some would not choose that influence. I've done research in which I tried to persuade people to stop smoking, and I've never heard any criticism of it on ethical grounds. But obviously it could be so criticized; it involves tampering with people's opinions when they don't necessarily want to be tampered with. I do generally try not to change anyone's behavior except through rational appeals, through overcoming psychological defenses which themselves are ultimately harmful; and the same could be said for Milgram's studies. But we might still get arguments on the morality of it all, from the tobacco companies if nowhere else. Any worthwhile research must influence its audiences, both the primary audience of participants and the secondary audiences that get the information through mass media, teachers, and community leaders - though again, even among the secondary audiences, some would just as soon not be influenced. Influence itself is neither moral nor immoral; it is a vehicle for ideas and feelings, and the content rather than the vehicle is what should be judged. In the case of Milgram's studies, I cannot see that the content of the influence, insofar as it got across, was anything but supremely moral.

THE ISSUE OF ULTIMATE VALUE

Baumrind finally comes down to arguing that although certain ethically ambiguous actions may sometimes be justified (as in medical research "at points of breakthrough," where harm to volunteers could be outweighed by benefits to humanity), no psychological experiments are of sufficient worth to warrant the slightest ethical risk. Although she agrees that Milgram's research deals with an important topic, she sees his methods as trivializing it: the psychological laboratory and Yale University's prestige together induce such high levels and unique patterns of obedience that the results are in no sense generalizable. We've dealt with this sort of argument before, pointing out that other volunteers were almost equally obedient in a non-university setting with a minimum of prestige, and that anyway many other institutions in our society have the potential for inducing at least as much obedience as Yale and its psychologists can. Further, our future obedience in contravention of ethical standards may be more and more commanded by appeals based on respect not for the traditional authorities, but for the authority of science and technology: obey, so that society can cope with the population explosion, the expansion of the cities, the shortcomings of the gene pool. Studies of obedience to scientific authority may be even more useful in the long run than studies of obedience to governmental authority.

So maybe a good word can be said for the value of Milgram's research. But what of the broader position advanced by Baumrind (who is mainly a clinician rather than an experimentalist) – the idea that psychological experimentation as a whole is so worthless that its value can be counted zero in any moral equation?

Psychologists are often inclined to exaggerate the immediate importance of their work, particularly when applying for research grants. But this entire book is an argument for the ultimate significance of experimental social psychology and related empirical work. Only if we confine ourselves to piddling trivialities that affront no one's moral sensibilities in the slightest, are we likely to fail to influence the development of human civilization in radical ways. Obedience to authority, for instance, is only one aspect of social psychological efforts to understand the conditions of human freedom and control. The whole area of experimental persuasion is crucial here too, and discoveries in this area could potentially be as dangerous as the ultimate nuclear weapon, or more so. People can't be held down by fear of destruction forever; but a really smoothly persuaded person likes having been persuaded, so why should he ever rebel?

Such possibilities raise all the problems the physicists have faced: "value-free science," cooperation with governments, individual freedom to pursue or not to pursue a potentially destructive/beneficial line of research. Social psychologists have not yet actually devised the principles or methods that would allow them to wreak vast changes upon society; they haven't yet gotten their fingers burned seriously, as the nuclear scientists did in the blasts of Hiroshima and Nagasaki. So most psychologists have hardly begun to work through the moral implications of their research. Even the most serious and responsible among them, such as Herbert Kelman, have so far come up mostly with very limited and tentative formulations. Of those who do announce their concern, too many seem to play around with problems of good and evil mainly to raise their own self-esteem. We must all look at the moral implications of our field for weightier reasons than that; and once we are looking, we must pay much more attention to the problems of moral ambiguity, and to our individual and collective responsibilities as psychologists, than anyone in the field has paid so far.

We have our experts on response set and cognitive inconsistency and small-group dynamics. It is time we began producing our ethical experts as well. We don't rely merely on intuition to decide whether our experimental results are statistically significant; no more should we rely entirely on our moral intuition. We may occasionally seek expert mathematical advice on our statistical problems; and perhaps we should likewise look outside our field for moral inspiration. But the religionists and the philosophers have their own ethical hang-ups, so I'm afraid they'll be of no more than peripheral use as expert consultants. The severe criticisms of Milgram's morally important research for its presumed immorality indicate that somebody needs to straighten out our moral sense, or at least make it a good bit less curvilinear than it is presently. And this somebody had better start working on the problem now, before we really do have the ability to get ourselves and mankind either out of the frying pan or into the fire.


[From Alan C. Elms, Social Psychology and Social Relevance, Chapter 4, pp. 146-160. Boston: Little, Brown, 1972. Copyright © 1972 by Little, Brown and Company; copyright renewed 1998 by Alan C. Elms.]


