Scientists from Cambridge and elsewhere are warning of the potential misuse of artificial intelligence (AI) in a report out today, the BBC and other sources report. The concerns include drones with facial recognition being targeted at specific individuals, accidents being caused deliberately in driverless cars and a great deal more besides.
Intelligent Sourcing was discussing something more interesting, and no less disturbing, only yesterday. Yes, all of the above is technically possible, and if it's possible then someone's bound to try it sometime. The actions described are already illegal, but enforcing the law when someone can perpetrate such actions remotely is going to be challenging. We can only hope that the AI on the side of the good guys will keep pace.
Something that's going to be just as difficult, however, is how to invest AI with some sort of moral compass. Take an example. A driverless car faces two errant pedestrians, say they're drunk, and hitting one of them is inevitable. One is a petty criminal, the other a judge, but the judge is 80 years old so, logically, has a lot less life left to preserve than the criminal.
What should the AI do? As a sentient flesh-and-blood being I'd find that an impossible call, so where does a piece of AI end up taking it?
Criminals and terrorists deliberately using AI is something we're going to have to face; it's a sad inevitability. The more nuanced questions are going to take some serious thought.