Christina J. Colclough
Meaningful Inclusivity
- in Governing the AI Revolution
Joining Beeban Kidron, film director, chair of the 5Rights Foundation, and expert on children and AI; Renée Cummings of the School of Data Science at the University of Virginia, speaking on women and minorities; Helena Leurent, director-general of Consumers International, on consumers; and Patrick Lafayette, CEO of the Twin Audio Network, on people with disabilities, this panel was all about how to include underrepresented groups in the governance of AI.
Tasked with speaking about the inclusion of workers in workplace AI governance, hear me argue why the current commodification of workers must be stopped, why dialogue needs to be fashionable again, and why any form of governance devoid of the workers' voice is not governance at all.
Here is what I said (starting at 1:15:00):
Transcript

CHRISTINA COLCLOUGH:
Thank you very much. It is always a little bit daunting being the last one because so many good and valid points have been made.
I want to pick up on what Patrick said about the governance of these new AI technologies. In workplaces this governance is, in the majority of cases, totally lacking when it comes to the inclusion of workers: their voices, their agreement first of all to the surveillance they are subject to, but also to how their data is used, what is inferred from it, for what purposes, whether the data is sold, and so on.
This is stunning to me. It is stunning that the majority of us, whether self-employed, in the informal economy, in regular employment, or working on digital labor platforms, are workers, and yet this whole notion of co-governance of algorithmic systems is totally out of fashion.
And here, a little wink with a smile to Renée: in the work you do with the C-suite, include the workers. As I said to the OECD ministers when they adopted their great AI principles, which I saw Joanna referring to earlier in the chat: "This is great. Now you must ask, fair for whom?" What is fair for one group of, in my case, workers is not necessarily fair for another. How can we make those tradeoffs and those decisions explicit but also consensual, in the sense that at times we might have to apply positive discrimination towards one group, and what is our preparedness for this?
Then you can ask: What if we don't do this? The Why Not Lab, why not? What would happen if we don't do all of this? Then I am afraid that the current commodification of work and workers will continue to the point where it is almost beyond repair, where the inferences about us predict our behavior, where we as humans become irrelevant, where in three years we might be chatting at a conference like this about how to defend the right to be human. For all our failings and beauties, our good sides and bad sides, this is what is at stake.
Unions have always fought for diverse and inclusive labor markets, and I am very afraid—and I think Renée's work in criminology points in this direction—that we are heading toward a judgment against a statistical norm that will exclude lots and lots of people and therefore harm the diversity and inclusion of our labor markets.
My call here is very much: let's find a model for the co-governance of these systems. Let's put workers first. We have AI principles that put people and planet first. But we cannot honor them if we do not bring dialogue back into vogue.
It is also very telling that if you look at data protection regulations across the world, either workers are outright exempt from enjoying those protections or workers' data rights are very, very poorly defined. We see that in the CCPA. We see that in Thailand and in Australia. Even the draft GDPR had stronger articles specifically on workers' data.
My call here would be to bring dialogue back into vogue. We have to stop making enemies of one another. We should definitely work on workers' collective data rights, moving away from the individual notion of rights enshrined in much of our law toward collective data rights. We need to balance out this power asymmetry, which is growing to a dangerous and, as I said, irreparable level. Then we must talk about the regulatory requirement for the co-governance of these systems, which is not to say that workers should carry the responsibility; that must lie with the companies and organizations deploying these systems. We need much stronger transparency requirements between the developers and the deployers of these technologies. We must avoid a situation where developers can hide behind intellectual property or trade secrets to avoid adjusting their algorithms, their training data, and so on.
My last call is that we need our governments to up their game. This cannot work under differing national laws. We need the Global Partnership on AI (GPAI). We need the OECD. We need the United Nations to start working towards a global governance regime that also caters for value and supply chains and for the varying economic situations in each country, and that stops what I call the "colonialization" of much of the developing world in this digital change.
This is a macro thing. We need governments to regulate; we really need to get them to the table. I am on the board of GPAI, and I can say there is resistance to committing to any form of joint regulation. And we need companies to include their workers. Data protection impact assessments on workers' data must include the workers. Algorithmic systems deployed on workers, for scheduling, hiring, or anything else, must include the workers.
Then we all have to stop making enemies of one another, and we must realize that most of the people listening to this are workers, and we should have a voice. Thank you.
/end/
- The panel was hosted by the International Congress for the Governance of AI.