2001, AI, and Cognitive Bias

AI and bigger picture questions

During this holiday season, I’ve had a chance to think about some bigger picture questions. One question is the nature of Artificial Intelligence, as I’m about to get into AI in depth. More on that later.

Late on Xmas night, I rewatched the movie 2001. Some of you might remember the HAL 9000 computer. It was the model of perfection, until it was not. HAL was effectively the 6th member of the crew, enhanced with a vision of AI from 1968.

The nature of the mission meant that the crew needed the ability to act independently. Messages from Earth, at the speed of light, would take over half an hour to arrive. So the crew had to rely heavily on HAL.

The powers that be had confidence in HAL. It had no known record of errors. But what’s an error? It depends on the teacher, and what the student learns from the teacher.

If HAL was perfect and error-free, how did it decide that the mission was more important than the lives of the astronauts aboard? And if the mission was to find evidence of aliens, where’s the logic in excluding humans from the mission?

That’s where cognitive bias comes in. Computers can certainly solve well-defined problems more quickly and more accurately than humans. But the choices that AI makes depend on how it’s trained. And that depends on the experience of the teacher. The phenomenon is known as “cognitive bias.”
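The teacher-and-student point can be illustrated with a toy sketch (my own hypothetical example, not anything from the film): a trivial learner that labels inputs by majority vote over its training examples. If the teacher's experience is skewed, the learned rule is "error-free" on the training data yet quietly learns away the rare but critical case.

```python
from collections import Counter

def train(examples):
    """Learn the most common label the teacher gave each input.

    examples: list of (feature, label) pairs supplied by the teacher.
    """
    votes = {}
    for feature, label in examples:
        votes.setdefault(feature, Counter())[label] += 1
    # The learned rule: whichever label the teacher used most often.
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

# The teacher's experience: 99 harmless anomalies for every critical one.
teacher_examples = [("anomaly", "ignore")] * 99 + [("anomaly", "alert")] * 1

model = train(teacher_examples)
print(model["anomaly"])  # prints "ignore" -- the rare case is trained away
```

The student makes no "error" by its own lights; it faithfully reflects what it was shown. The bias lives in the teacher's sample, which is the point.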

While I believe in the future of AI, I recognize the problems, including GIGO and cognitive bias.
