With a background in physics, computational neuroscience, mathematical logic and philosophy, Nick Bostrom is a philosophy professor at Oxford University and author of the book Superintelligence: Paths, Dangers, Strategies. He is also the founding director of the Future of Humanity Institute, a multidisciplinary research center that brings together mathematicians, philosophers and scientists to investigate the human condition and its future.
This metaphysical discussion, reminiscent of a college philosophy course, explores how older A.I., explicitly programmed rule by rule, has evolved into modern machine learning. "Rather than handcrafting knowledge representations and features," Bostrom says, "we create algorithms that learn from raw perceptual data." In other words, machines can learn in the same ways that children do.
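For readers curious about the distinction Bostrom draws, here is a minimal sketch (not from his talk; the synthetic data and model are illustrative assumptions) contrasting the two paradigms: a rule a human writes by hand versus a simple model that discovers its own rule from raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "raw perceptual data": 8x8 images. Class 1 images are brighter
# on the left half, class 0 on the right half, plus noise.
def make_images(n):
    X = rng.normal(0.0, 1.0, size=(n, 8, 8))
    y = rng.integers(0, 2, size=n)
    X[y == 1, :, :4] += 1.5   # class 1: bright left half
    X[y == 0, :, 4:] += 1.5   # class 0: bright right half
    return X.reshape(n, -1), y

X_train, y_train = make_images(500)
X_test, y_test = make_images(200)

# Old-style A.I.: a handcrafted feature and rule, written by a human.
def handcrafted_rule(X):
    imgs = X.reshape(-1, 8, 8)
    left = imgs[:, :, :4].mean(axis=(1, 2))
    right = imgs[:, :, 4:].mean(axis=(1, 2))
    return (left > right).astype(int)

# Machine learning: logistic regression trained on raw pixels by gradient
# descent. The decision rule is learned from data, not written by hand.
w, b = np.zeros(64), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))   # predicted probabilities
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

learned = ((X_test @ w + b) > 0).astype(int)
print("handcrafted rule accuracy:", (handcrafted_rule(X_test) == y_test).mean())
print("learned model accuracy:  ", (learned == y_test).mean())
```

On this toy task both approaches work, but only the second one generalizes: change what distinguishes the classes and the handwritten rule breaks, while the learned model simply retrains on the new data.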
Bostrom theorizes that A.I. will be the last invention humanity will ever need to make: eventually machines will be better at inventing than humans, which may leave us at their mercy as they decide what to invent next. One way to keep A.I. under control, he suggests, is to ensure it shares human values rather than serving only itself (cue James Cameron's Terminator franchise).