Is the serious matter of AI ethics being treated as nothing more than ‘flavour of the month’ by our businesses and media outlets?
Emergence measured relative search volume for AI ethics-related terms against a timeline of the most notable ethical AI scandals of recent years, such as those surrounding Cambridge Analytica or Microsoft's Tay chatbot. The results (below) show that companies take note – and search volume for the term "ethical AI" spikes – only when these high-profile incidents occur, then lose interest again soon after.
The positive is the overall upward trend. The negative is the lack of consistency and substance behind each uptick in AI ethics recognition.
More organisations have articulated responsible AI principles and values, but in some cases these amount to little more than a thin marketing veneer – companies that aren't backing up their proclamations with anything concrete.
“Part of the challenge lies in the way principles get articulated. They’re not implementable,” Forrester analyst Kjell Carlsson recently told Information Week. “They’re written at such an aspirational level that they often don’t have much to do with the topic at hand.”
In other cases, initiatives feel rushed and forced, leading to PR disasters or wasted resources. Google, for example, dissolved its AI ethics board barely a week after launching it, following complaints about one member's anti-LGBTQ views and the fact that another was the CEO of a drone company whose AI was being used in military applications.
Achieving ethical harmony in technology is much more than a box-ticking exercise. There are broad universal values we can all agree on – the EU's ethics guidelines for trustworthy AI, for example – but ethics must go deeper than compliance or high-level principles, to the very heart of your company, your values and the industry in which you operate.
Getting a handle on the specific ethical quandaries facing your stakeholders takes more than lip service to basic human rights. This work must be put in upfront – otherwise the flaws in your design will come back to bite you.
Take meaningful action with our Ethics in Technology Assessment
We built our Ethics in Technology Assessment (ETA) programme to help businesses go a little deeper in safeguarding stakeholders from the ethical risks associated with technology. ETA offers an independent and rigorous assessment of your digital conduct and processes and identifies strengths and weaknesses to help you address risks and become a digitally ethical brand.
The assessment is designed to provide a high-level but rounded view of the state of digital ethics in your organisation. The framework consists of five building blocks:
- Use of data: This goes beyond regulatory compliance to fair practices in handling and managing clients’ data
- Fairness in Artificial Intelligence (AI) and algorithms: Guidance and embedded good practice in the use of AI and algorithms
- Democratisation of digital skills: Removing the fear of unfamiliar, complex technology by making product information and learning resources accessible to interested parties and clients
- Employee enablement: Employees' engagement with and understanding of new technologies, and the level of corporate support that enables them to do their jobs better and to use advanced technologies considerately and productively
- Culture and mindset: The extent of adoption of ethical digital practices across the organisation, from leadership teams making strategic decisions to employees working in front- and back-office roles.
At the end of the six-week assessment period you'll get an actionable and impartial assessment of where you are on your journey to becoming a digitally ethical business, as well as clarity on what to do to improve and to mitigate risks.
Technology vendors are already seeing the need for ETA, as their clients begin to ask questions about their algorithms and use of data. We are delighted to be working with a number of them already.