Robert Newman

Neuropolis: A Brain Science Survival Guide



sort of evidence. According to Radio 4’s Inside Science programme, for example, we’ll soon have robot lawyers.

      A senior IBM executive explained to Inside Science listeners that while robots can’t do the fiddly manual jobs of gardeners or janitors, they can easily do all that lawyers do, and will soon make human lawyers redundant.

      Interestingly, however, when IBM Vice President Bob Moffat was himself on trial in the Manhattan Federal Court, accused in 2010 of the largest hedge-fund insider trading in history, he hired one of those old-time humanoid defence attorneys. A robot lawyer may have saved him from being found guilty of two counts of conspiracy and fraud, but when push came to shove, the IBM VP knew there’s no justice in automated law.

      Not all the gigabytes in the world will ever make a set of algorithms a fair trial. There can be no justice in the broad sense without procedural justice in the narrow sense. Even if the outcome of a jury trial is identical to the outcome of an automated trial, due process leaves one verdict just and the other unjust. Justice entails being judged by flesh and blood citizens in a fair process. Not least because victims increasingly demand that the court consider their psychological and emotional suffering – which computers cannot do.

      There’s a curious contradiction here that nobody ever talks about: at the same time as science proclaims its moral neutrality, proponents of AI want machines to become moral agents. Never more so than with what Nature has taken to calling ‘ethical robots’.

      Ethical robots, it seems, will come as standard fittings on the driverless cars being developed by Apple, Google and Daimler. They will answer the big questions, automatically …

      Should driverless cars be programmed to mount the pavement to avoid a head-on collision? Should they swerve to hit one person in order to avoid hitting two? Two instead of four? Four instead of a lorry full of hazardous chemicals? This is what the ‘ethical robot’ fitted into each driverless car will decide. How will it decide? In July 2015, Nature published an article, ‘The Robot’s Dilemma’, which explained how computer scientists:

      have written a logic program that can successfully make a decision … which takes into account whether the harm caused is the intended result of the action or simply necessary to it.

      Is the phrase ‘simply necessary’ chilling enough for you?
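For readers curious what such a decision procedure actually looks like, here is a minimal sketch of a double-effect style rule of the kind the Nature article gestures at. Everything here is an illustrative assumption: the names, the harm scores, and the tie-breaking are mine, not the researchers’ actual logic program.

```python
# A toy double-effect decision rule, loosely modelled on the kind of
# logic program described in 'The Robot's Dilemma'. All names and the
# scoring are illustrative assumptions, not the researchers' code.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harm: int               # number of people harmed if taken
    harm_is_intended: bool  # harm is the goal, not a side effect


def permissible(action: Action) -> bool:
    """Double-effect test: harm may never be the intended result,
    only a foreseen side effect ('simply necessary') of the action."""
    return not action.harm_is_intended


def choose(actions: list[Action]) -> Action:
    """Among permissible actions, pick the one causing least harm."""
    allowed = [a for a in actions if permissible(a)]
    return min(allowed, key=lambda a: a.harm)


swerve = Action("swerve onto pavement", harm=1, harm_is_intended=False)
stay = Action("stay in lane", harm=2, harm_is_intended=False)
print(choose([swerve, stay]).name)  # swerve onto pavement
```

Note how bloodless the machinery is: once harm is reduced to an integer and intention to a boolean, the ‘simply necessary’ clause is a single line of code.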

      One of the computer scientists behind this logic program argues that human ethical choices are made in a similar way: ‘Logic’, he says, ‘is how we … come up with our ethical choices.’

      But this can scarcely be true. For good or ill, ethical choices often fly in the face of logic. They may come from gut instinct, natural cussedness, a desire to show off, a vague inkling, a shudder, a sense of unease, or a sudden imaginative insight.

      I am marching through North Carolina with the Union Army, utterly convinced that only military victory over the Confederacy will abolish the hateful institution of slavery. But I no sooner see the face of the enemy – a scrawny, shoeless seventeen-year-old farm boy – than I throw away my gun and run sobbing from the battlefield. This is an ethical decision resulting in decisive action, only it isn’t made in cold blood, and it goes against the logic of my position.

      Computer scientists writing the logic program for an ethical robot may appear as modern as modern can be, but their arguments come from the 1700s. The idea that ethics are logical appeals to what – in another context – Hilary Putnam describes as:

      The thinking may be strictly 1700s, but the technology isn’t. The US Department of Defense is at work on tiny rotorcraft known as FLACs (Fast Lightweight Autonomous Crafts) that will be able to go inside flats and houses, office blocks and restaurants, and deliver a one-gram explosive charge to puncture the cranium. These FLACs are a type of Lethal Autonomous Weapons System (LAWS). If drones weren’t bad enough, LAWS are on a whole new level. With drones, a human always makes the decision whether to kill, from however far away. But LAWS are a break with tradition. They are fully autonomous.

