
to REM sleep can indicate early-onset Alzheimer's. Alzheimer's is also indicated by repetitive patterns of movement, which the sensor also detects. Changes in a person's gait can indicate the development of Parkinson's disease.

      By applying AI to a low-cost, wall-mounted RF sensor, we can monitor vital signs, sleep states, and falls, and provide clinical insight into the development of conditions that include Alzheimer's, COPD, Parkinson's, and depression. All without a single wire being attached to the patient. The system operates without a camera, which limits privacy concerns. Patients are monitored with their full consent, comfortable in the knowledge that no video images are ever gathered. The sensor enables clinicians to “see” a wealth of information, all gathered wirelessly. We started this story with Star Wars and ended up with the sickbay beds from Star Trek.

      Super-sensing turbocharges simple sensors with AI to reveal more about our world. Google's “Project Soli” technology uses short-range radar signals and machine learning to detect fine-grained finger gestures. This approach will transform sliders, buttons, and twistable crowns on mobile and wearable devices into virtual controls. More super sensors are detailed in Chapter 10 on the future of healthcare. What other super sensors will we create? What super sensors will your business create?

      Solving Complex Problems by Learning from Experience

      Some challenges, such as optimizing a system with many variables or programming a robot to walk on two legs, are too difficult, too complicated, or too laborious to tackle with traditional, hand-coded software. AI solves some of these tricky problems using a technique called reinforcement learning.

      Reinforcement learning teaches computers to perform complex optimizations, control complex equipment, and play games really, really well. In 2018, researchers trained an AI to play the classic Sega console game Sonic the Hedgehog. Sonic has two simple controls: run and jump. The AI took the video game display as its input and the game controls as its output. In reinforcement learning, AIs have an additional input known as a reward function. As the AI trains, it tries to maximize that reward. Game points increase the reward, and the reward decreases substantially if Sonic loses a life. At first, the AI plays terribly. Over time, the AI optimizes its model to run and jump at just the right moments, score maximum points, and keep the adorable blue hedgehog alive. The AI does not learn based on simple timing; it learns from what is happening on the screen, so it can succeed on game levels it has not seen before.
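
      To make that training loop concrete, the sketch below shows reinforcement learning in miniature. It uses tabular Q-learning on a tiny, made-up one-dimensional “level” rather than the deep network and screen-pixel input used for Sonic; every name, number, and reward value here is illustrative, not taken from the actual experiment.

```python
import random

# Minimal sketch of the reward-driven loop described above, using tabular
# Q-learning on a toy one-dimensional "level": the agent starts at cell 0
# and earns a reward for reaching the goal cell. This illustrates learning
# from reward, not the deep RL setup used for Sonic (which learns from
# raw screen pixels).

GOAL = 5            # rightmost cell of the toy level
ACTIONS = [-1, +1]  # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index] estimates the long-term reward of each action.
Q = {s: [0.0, 0.0] for s in range(GOAL + 1)}

for _ in range(EPISODES):
    state = 0
    for _ in range(50):                      # cap the episode length
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])

        next_state = min(max(state + ACTIONS[a], 0), GOAL)
        reward = 1.0 if next_state == GOAL else -0.01   # small cost per step

        # Q-learning update: nudge the estimate toward reward + best future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state
        if state == GOAL:
            break

# After training, the learned policy should step right (action index 1) everywhere.
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)])
```

      At first the agent wanders at random; after a few hundred episodes the reward signal has shaped a policy that heads straight for the goal. The same loop, scaled up with deep neural networks reading raw pixels, is what lets an AI learn to clear Sonic levels.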

      The most regularly cited example of reinforcement learning is DeepMind's AlphaGo system. DeepMind, a subsidiary of Alphabet, built AlphaGo to play the ancient Chinese game of Go. Winning strategies for Go are opaque. Even grandmasters can't always describe why they choose some of the moves they make; they say the move just “feels right.” There are more possible configurations of pieces on a Go board than there are atoms in the observable universe. To build a machine that understands the nuances and subtle strategies of this complex game is a monumental challenge.

      In 2017, DeepMind's next machine, named AlphaGo Zero, became a master Go player by playing millions of games against itself inside a simulation. It developed its game strategies through practice rather than by observing human play. AlphaGo Zero now thrashes the original AlphaGo machine, and no human grandmaster can beat it.
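
      The sketch below illustrates the self-play idea on a far simpler game than Go. Two copies of one tabular learner play Nim (take one, two, or three sticks from a pile; whoever takes the last stick wins) against each other and update their move values only from the final outcome. This is a toy stand-in, not AlphaGo Zero's actual method, which pairs deep neural networks with Monte Carlo tree search; all names and numbers are illustrative.

```python
import random

# Toy self-play: the same value table plays both sides of Nim and learns
# purely from win/loss outcomes, with no human games to imitate.

PILE = 10
ALPHA, EPSILON = 0.5, 0.2
Q = {}  # Q[(sticks_left, move)] = learned value of taking `move` sticks

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def choose(sticks):
    moves = legal_moves(sticks)
    if random.random() < EPSILON:             # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

for _ in range(20000):
    sticks, history = PILE, []                # history of (state, move) per turn
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    # The player who took the last stick wins (+1); the other player loses (-1).
    outcome = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + ALPHA * (outcome - old)
        outcome = -outcome                    # turns alternate, so the sign flips

# The learned policy tends to leave the opponent a multiple of four sticks,
# which is the known winning strategy for this game.
for sticks in range(1, PILE + 1):
    best = max(legal_moves(sticks), key=lambda m: Q.get((sticks, m), 0.0))
    print(sticks, "->", best)
```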

      Reinforcement learning isn't just used to play games. Researchers at Warsaw University used reinforcement learning to train bipedal robots to walk more efficiently. The AI that controls a robot varies the combinations of movements made by the robot's motors and experiments with different walking strategies. It earns a small numerical reward for strategies that improve the efficiency and pace of the walk. With this approach, the roboticists achieved more efficient, more natural-looking walking motions for their robots. One robot learned to walk almost twice as fast as it could using the best initial walking strategy programmed by its human creators.
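
      The robot example hinges on how the reward is defined. The sketch below is a hypothetical stand-in: walk_trial is a made-up simulator that maps two gait parameters to distance covered and energy used, the reward trades speed against energy, and simple random-search hill climbing stands in for whatever learning algorithm the researchers actually used.

```python
import random

# Hypothetical sketch of learning a gait from experience. Everything here
# (the fake simulator, the parameters, the constants) is illustrative.

def walk_trial(stride, rate):
    """Fake physics: return (distance_m, energy_j) for a ten-second walk."""
    speed = stride * rate                        # metres per second
    wobble = max(0.0, stride - 0.8) * 3.0        # overlong strides cause stumbles
    distance = max(0.0, (speed - wobble) * 10.0)
    energy = 5.0 + 20.0 * rate ** 2 + 30.0 * stride ** 2
    return distance, energy

def reward(distance, energy):
    # Reward fast, efficient walking: distance dominates, energy is penalized.
    return distance - 0.05 * energy

best = {"stride": 0.3, "rate": 1.0}              # initial hand-coded gait
best_r = reward(*walk_trial(best["stride"], best["rate"]))

for _ in range(2000):
    # Perturb the current best gait; keep the change only if the reward improves.
    candidate = {
        "stride": max(0.1, best["stride"] + random.gauss(0, 0.05)),
        "rate":   max(0.5, best["rate"] + random.gauss(0, 0.05)),
    }
    r = reward(*walk_trial(candidate["stride"], candidate["rate"]))
    if r > best_r:
        best, best_r = candidate, r

print("learned gait:", best, "reward:", round(best_r, 2))
```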

      AI's ability to learn from experience is used to solve many business problems, including complex optimizations. AIs optimize traffic control systems, industrial chemical reactions, advertising bids, industrial automation, supply chain flow, product design, warehouse operations, inventory levels, yields, trading strategies, wind turbine controls, medication doses, smart grids, and commercial HVAC systems. Reinforcement learning also teaches AIs to drive. Like humans, AIs learn to drive by practicing. They drive real cars in real-world conditions but also drive millions of miles inside realistic software simulations. Tesla's AIs, for example, learn to drive in part from sensor data gathered while owners are driving.

      Creating and Co-Creating Content

      Computers now have the capacity for imagination. Artificial intelligence can compose music, paint pictures, and even write poetry. AI also co-creates content with humans: a partnership of digital intelligence with artists, designers, and engineers.

      Generative adversarial networks (GANs) have incredible potential. They will fundamentally change the way we work. Within the next decade, many of you reading this book may partner with GAN-based AIs to co-create content and collaborate on business tasks.