How do we build trust in AI, with a stick, a carrot – or both?
Trust in AI is essential – an idea we all agree on. The question is: how do we best build that trust so that AI does not get away from us?
Europe is trying the stick approach to enforce trust in AI.
Europe’s digital strategy chief, Margrethe Vestager, believes that Europe will take the lead in managing trust in AI by imposing stinging fines on companies or countries that contravene new rules on what is acceptable.
“Trust in AI is a must,” she said, “and the EU is spearheading the development of new global norms to make sure AI can be trusted.”
The fines will amount to 6% of turnover or €30 million, whichever is greater, and the rules will set out what AI can and cannot be used for. These largely reflect the rules of society: no exploitation, no spying, no putting people’s jobs at risk in unfair ways; and governments may use AI only in counterterrorism and a few other defence-related activities.
This approach, she believes, will build trust in AI.
Which is fine, as far as it goes.
The real problem with building trust in AI is much bigger than fines targeting specific areas. The very manufacture of AI is hugely exploitative.
A scholar of this wider picture, Kate Crawford, has spent several years tracing the real cost and hardship that go into producing AI. She believes that talking about ‘ethics in AI’ is now too narrow and has become an easy way of justifying certain behaviours: set up a team called ‘ethics in AI’, and the job is considered done.
Crawford, who has published several books on the subject, has spent a lot of time looking at AI from the point of view of the materials required to produce it, the inequalities it magnifies and the manipulation it is capable of. Not since the railways and the railroad barons has so much power been in so few hands. It scares her to the point that she believes trust in AI can only be achieved if we understand that it is not a ‘neutral’ technology but a frighteningly powerful one.
Crawford is happy with the regulatory approach, but believes that if you really want to build trust in AI – real trust – then we must stop thinking of it as a technology that will fix things and start seeing it as a societal revolution, one that can only be controlled by powerful civil rights groups and global organisations like the UN. In fact, AI belongs in the basket of technologies that need a peacekeeping force. One with teeth.
Building trust in AI will not be easy, and AI is already a long way out of the box. But as Vestager says, “trust is a must.” Disruptive.Asia