"One of the questions that has been bouncing around in the staff’s heads is how to square the resource demands for implementing Responsible AI within an organization? In particular, we keep running into scenarios where the organization is interested in implementing a Responsible AI program but has very little idea and sometimes no planned commitment for dedicating resources towards the implementation of that program. How should we navigate this challenge?"

Unfortunately, arguing from a moral standpoint that it's simply the right thing to do isn't enough. The angle to take is to help executives realize the concrete, strategic benefits of implementing Responsible AI for their organization. To do this, one should start by building company ethics guidelines, and then implementing them.

We can look at this through the lens of performativity.

The following ideas are taken from this excellent MOOC on AI and ethics: ethics-of-ai.mooc.fi/ch…. I'll summarize and quote some passages below:

"[...] performativity is the capacity of words to do things in the world. That is, sometimes making statements does not just describe the world, but also performs a social function beyond describing. As an example, when a priest declares a couple to be “husband and wife”, they are not describing the state of their relationship. Rather, the institution of marriage is brought about by that very declaration – the words perform the marrying of the two people."

Similarly, ethics guidelines can serve as performative texts:

"Guidelines as assurances:

Others have argued that ethics guidelines work as assurance to investors and the public (Kerr 2020). That is, in the age of social media, news of businesses’ moral misgivings spread fast and can cause quick shifts in a company’s public image. Publishing ethics guidelines makes assurances that the organization has the competence for producing ethical language, and the capacity to take part in public moral discussions to soothe public concern.

Thus AI ethics guidelines work to deflect critique away from companies; from both investors and the general public. That is, if the company is seen as being able to manage and anticipate the ethical critique produced by journalists, regulators and civil society, the company will also be seen as a stable investment, with the competence to navigate public discourses that may otherwise be harmful for its outlook."

"Guidelines as expertise:

With the AI boom well underway, the need for new kinds of expertise arises, and competition around ownership of the AI issue increases. That is, the negotiations around AI regulation, the creation of AI-driven projects of governmental redesign, the implementation of AI in new fields, and the public discourse around AI ethics in the news all demand expertise in AI and especially the intersection of AI and society.

To be seen as an expert yields certain forms of power. Being seen as an AI ethics expert gives some say in what the future of society will look like. Taking part in the AI ethics discussion by publishing a set of ethical guidelines is a way to demonstrate expertise, increasing the organization’s chances of being invited to a seat at the table in regards to future AI issues."

If done and communicated properly, the above implies major benefits for a company. Not only is it the right thing to do in the face of increasingly ubiquitous and capable AI, it is in my view an indispensable strategic advantage to focus on.

But talk alone isn't enough. To truly cement oneself as a trustworthy expert and avoid the trap of ethics washing, one also needs to implement the guidelines in a way that makes tangible changes to how the company does AI. This reinforces the points made above.

Then the company can start implementing some of the great practical suggestions from the previous newsletter, which will be easier once an ethics team is in place.

Just some thoughts.

I love that you are thinking about how to make ethics more practical. I'm taking notes and researching that myself too. I've also been wondering what the best approaches could be to get more people interested and involved in ethics, rather than focusing only on the technical aspects. From my POV this is more of a challenge with applied AI practitioners than with researchers.
