Group of authors

Bioethics


[…] for gains to be accompanied by losses. John Maynard Smith, in his paper on ‘Eugenics and Utopia’,6 takes this kind of ‘broad balance’ view and runs it the other way, suggesting, as an argument in defence of medicine, that any loss of genetic resistance to disease is likely to be a good thing: ‘The reason for this is that in evolution, as in other fields, one seldom gets something for nothing. Genes which confer disease‐resistance are likely to have harmful effects in other ways: this is certainly true of the gene for sickle‐cell anaemia and may be a general rule. If so, absence of selection in favour of disease resistance may be eugenic.’

      It is important to recognize that different characteristics may turn out to be genetically linked in ways we do not yet realize. In our present state of knowledge, engineering for some improvement might easily bring some unpredicted but genetically linked disadvantage. But we do not have to accept that there will in general be a broad balance, so that there is a presumption that any gain will be accompanied by a compensating loss (or Maynard Smith’s version that we can expect a compensating gain for any loss). The reason is that what counts as a gain or loss varies in different contexts. Take Maynard Smith’s example of sickle‐cell anaemia. The reason why sickle‐cell anaemia is widespread in Africa is that it is genetically linked with resistance to malaria. Those who are heterozygous (who inherit one sickle‐cell gene and one normal gene) are resistant to malaria, while those who are homozygous (whose genes are both sickle‐cell) get sickle‐cell anaemia. If we use genetic engineering to knock out sickle‐cell anaemia where malaria is common, we will pay the price of having more malaria. But when we eradicate malaria, the gain will not involve this loss. Because losses are relative to context, any generalization about the impossibility of overall improvements is dubious.

      It may be suggested that there is a more subtle threat. Parents like to identify with their children. We are often pleased to see some of our own characteristics in our children. Perhaps this is partly a kind of vanity, and no doubt sometimes we project on to our children similarities that are not really there. But, when the similarities do exist, they help the parents and children to understand and sympathize with each other. If genetic engineering resulted in children fairly different from their parents, this might create problems in their relationship.

      There is something to this objection, but it is easy to exaggerate. Obviously, children who were like Midwich cuckoos, or comic‐book Martians, would not be easy to identify with. But genetic engineering need not move in such sudden jerks. The changes would have to be detectable to be worth bringing about, but there seems no reason why large changes in appearance, or an unbridgeable psychological gulf, should be created in any one generation. We bring about environmental changes which make children different from their parents, as when the first generation of children in a remote place are given schooling and made literate. This may cause some problems in families, but it is not usually thought a decisive objection. It is not clear that genetically induced changes of similar magnitude are any more objectionable.

      A related objection concerns our attitude to our remoter descendants. We like to think of our descendants stretching on for many generations. Perhaps this is in part an immortality substitute. We hope they will to some extent be like us, and that, if they think of us, they will do so with sympathy and approval. Perhaps these hopes about the future of mankind are relatively unimportant to us. But, even if we mind about them a lot, they are unrealistic in the very long term. Genetic engineering would make our descendants less like us, but this would only speed up the natural rate of change. Natural mutations and selective pressures make it unlikely that in a few million years our descendants will be physically or mentally much like us. So what genetic engineering threatens here is probably doomed anyway. […]

      […] One of the objections [to genetic engineering] is that serious risks may be involved.

      Some of the risks are already part of the public debate because of current work on recombinant DNA. The danger is of producing harmful organisms that would escape from our control. The work obviously should take place, if at all, only with adequate safeguards against such a disaster. The problem is deciding what we should count as adequate safeguards. I have nothing to contribute to this problem here. If it can be dealt with satisfactorily, we will perhaps move on to genetic engineering of people. And this introduces another dimension of risk. We may produce unintended results, either because our techniques turn out to be less finely tuned than we thought, or because different characteristics are found to be genetically linked in unexpected ways.

      If we produce a group of people who turn out worse than expected, we will have to live with them. Perhaps we would aim to produce people who were especially imaginative and creative, and only too late find we had produced people who were also very violent and aggressive. This kind of mistake might not only be disastrous, but also very hard to ‘correct’ in subsequent generations. For when we suggested sterilization to the people we had produced, or else corrective genetic engineering for their offspring, we might find them hard to persuade. They might like the way they were, and reject, in characteristically violent fashion, our explanation that they were a mistake.

      The risk of disasters provides at least a reason for saying that, if we do adopt a policy of human genetic engineering, we ought to do so with extreme caution. We should alter genes only where we have strong reasons for thinking the risk of disaster is very small, and where the benefit is great enough to justify the risk. (The problems of deciding when this is so are familiar from the nuclear power debate.) This ‘principle of caution’ is less strong than one ruling out all positive engineering, and allows room for the possibility that the dangers may turn out to be very remote, or that greater risks of a different kind are involved in not using positive engineering. These possibilities correspond to one view of the facts in the nuclear power debate. Unless with genetic engineering we think we can already rule out such possibilities, the argument from risk provides more justification for the principle of caution than for the stronger ban on all positive engineering. […]

      Suppose we could use genetic engineering to raise the average IQ by fifteen points. (I mention, only to ignore, the boring objection that the average IQ is always by definition 100.) Should we do this? Objectors to positive engineering say we should not. This is not because the present average is preferable to a higher one. We do not think that, if it were naturally fifteen points higher, we ought to bring it down to the present level. The objection is to our playing God by deciding what the level should be.

      On one view of the world, the objection is relatively straightforward. On this view, there really is a God, who has a plan for the world which will be disrupted if we stray outside the boundaries assigned to us. (It is only relatively straightforward: there would still be the problem of knowing where the boundaries lie. If genetic engineering disrupts the programme, how do we know that medicine and education do not?)

      The objection to playing God has a much wider appeal than to those who literally believe in a divine plan. But, outside such a context, it is unclear what the objection comes to.