In a recent Science article titled ‘Basic Instincts,’ freelance writer Matthew Hutson discusses how robots need to think and learn more like humans. He suggests that further progress in robotics will require computers to learn both from trial and error and from built-in, instinct-like responses.
Systems based on deep learning networks have enabled robots to beat humans at games and to learn how to flip a burger, but they lack the ability to adapt to unfamiliar environments. For instance, a robot trained to play chess may beat a human chess master, yet if it attempted a game of checkers against a child, it would most likely lose. Robots also lack common sense and cannot always improvise on the spot.
By putting robots in random scenarios and letting them learn through trial and error, researchers hope a robot’s ‘brain’ may come to process information more like a human’s. Hutson believes that a few key human instincts will need to be baked into computers before robots can grasp concepts such as death or cost-benefit tradeoffs.
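Trial-and-error learning of the kind described here can be illustrated with a minimal reinforcement-learning sketch. The example below is not from the article; the function name, reward values, and parameters are all hypothetical. It shows an agent that, through repeated noisy attempts (an epsilon-greedy bandit), gradually learns which of several actions pays off best:

```python
import random

def trial_and_error_learning(true_rewards, episodes=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: learn which action pays off best purely by
    trying actions and averaging the rewards observed (trial and error).
    `true_rewards` gives the hidden average payoff of each action."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each action
    counts = [0] * len(true_rewards)       # how often each action was tried
    for _ in range(episodes):
        if rng.random() < epsilon:
            # Explore: try a random action to gather new information.
            action = rng.randrange(len(true_rewards))
        else:
            # Exploit: pick the action currently believed to be best.
            action = estimates.index(max(estimates))
        # Observe a noisy reward for the chosen action.
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        # Update the running average estimate for that action.
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# Hidden payoffs are illustrative; the agent is never told them directly.
learned = trial_and_error_learning([0.2, 0.8, 0.5])
```

After enough episodes, the learned estimates single out the best action, but only for the one environment the agent trained in: change the reward structure and it must start over, which mirrors the adaptability gap described above.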
Hutson notes that AI companies such as Vicarious and DeepMind have explored ways of implementing human instincts in a computer. Additionally, researchers at institutions such as MIT and the University of New South Wales are studying how the human brain and machines can function in similar ways.