• Christina J. Colclough

AI and Human Rights

In this, one of the most interesting panels I participated in in 2021, the speakers discuss the current relationship between AI and human rights. Hear the Why Not Lab's Christina Colclough argue why we need not just one global convention on AI but many, why algorithmic systems must be co-governed, why the draft EU AI Act needs to be scrapped, and why we all need to uphold Article 1 of the Universal Declaration of Human Rights: “all humans are born free and equal in dignity and rights”.


Panel at the 2021 "Athens Roundtable on AI, Human Rights and the Rule of Law". With Elizabeth Thomas-Raynaud from GPAI, Marielza Oliveira from UNESCO, Cornelia Kutterer from Microsoft EU, Patrick Penninck from the Council of Europe, and the Why Not Lab's Christina Colclough



Here's what the Why Not Lab's Christina Colclough said:


Thank you Bruce. It has been fascinating to hear all of you speak, and it is changing what I had thought I was going to say. But let me take your questions and, as usual, and Elizabeth, you're going to laugh now, I will do it in reverse order.


Should there be a convention, you ask, in the singular? No! There should be many! And this is the thing: there are lots and lots of comments in the chat here about the complexity of all of this. Well, let's peel the layers off the onion and really start looking at the core features of artificial intelligence, its deployment in the public sphere, the private sphere and in workplaces, and see where it is that we actually need conventions. These could be around transparency, around co-governance, around the co-design of algorithmic systems to ensure that they do not intentionally or unintentionally harm. That's number one.


I agree with most of what my fellow speakers have said, but I really think we need to start from the ground up here. What I do in the Why Not Lab is work with workers and unions across the world, in all regions, to bridge a huge knowledge gap around data, AI and algorithms. How do we understand these new technologies that are being introduced into workplaces, and how, on that understanding, can workers and unions start building a response, with the ultimate aim of tabling an alternative digital ethos?


Now this leads into the idea of a convention. We are seeing several things in workplaces, and I'm going to limit my comments to the workplace. What we see is that management are introducing tools and systems which, in the vast majority of cases, are third-party systems. Management have not necessarily been trained in identifying harms or risks, or in understanding what the unintended consequences of these systems could be in terms of violations, discrimination and bias. There are also lots of other harms workers are subject to: increased work speed, intensity and so on.


So what we see here is that management are introducing these systems and not governing them, and if they are governing them at all, it is from a risk perspective: the risk of being hacked, or safety, or something similar. It is not from a socio-technical perspective. One of the things the Why Not Lab is helping unions with is starting that conversation with management around how we could co-govern these systems, not to remove the responsibility and liability from management, but to ensure that management takes that responsibility seriously. So empowerment from the ground up is, I think, essential.


Can law keep up?

Now this is a question that almost fixes law as a constant. Law can keep up, if our politicians take responsibility. We are standing, and I said this when I bowed out of the GPAI Steering Committee, on the shoulders of giants: politicians who, earlier in history, dared to take responsibility. And I think the world is now looking at today's politicians and saying: take responsibility.


Let's face it: the current digital ethos running around the world right now is doing more harm than good, especially from a human rights perspective. The Universal Declaration of Human Rights, which has formed the basis of many human rights laws around the world, is so profound, and I really want to support what Marielza has said: we just have to enforce these absolute rights.

Thinking that through: at the moment, so many workers and citizens are being manipulated to a degree that we must ask, do they really have freedom of thought? How is this manifesting in relation to their work opportunities? For example, are we narrowing the labour market into very exclusive labour markets where anything outside the norm is thrown out the window?

And then I really want to say something, because I am, if I can be so rude, really tired of hearing governments and high-level politicians talk about how they respect human rights while allowing the abuse of human rights within their borders. Just in the world of work, union busting, for example, is an abuse of human rights. So I really think we should tidy up our own backyards, and then acknowledge that we don't need just one international convention, we need several.


And we need to break this down into the very core of artificial intelligence, or whatever you want to call it, algorithmic systems so that it is a co-building, a co-governance of these systems no matter where they are deployed.


Moderator: Christina I’m sure you have some things you want to say on that topic of how companies can step up more?


Has Microsoft ever consulted with their employees?

Absolutely, I absolutely do. Cornelia [from Microsoft], now I'm lovingly looking at you.

In everything that Microsoft has done, have you included a representative sample of your employees in forming those policies or practices? If you have, do you regularly check in with them around lived harms, lived experiences and so forth?

What companies could do is bring dialogue back into vogue. To stop perceiving their employees as their enemies. To really value that the union representatives or the workers themselves have their ear to the ground. They are the ones who are living the impacts, or in the majority of cases, the harms that these systems are subjecting them to.

Management are not experts

And I am so frustrated: in almost every single governance model that has been produced by academia, experts and think tanks, there is an assumption that management know what they're dealing with. They don't. This is a fallacy. The majority of companies I've spoken to who are deploying third-party systems do not know how to govern these systems in a socio-technical environment. So we need to bring dialogue back into vogue. That is one thing.


Second thing, what can companies do? Respect collective agreements, respect human rights, freedom of association and the right to collective bargaining, and through collective agreements start actually discussing the implementation and the purposes of these systems.

Certification

My third point is, and this is another mind-boggling thing about international law, including the EU AI Act: if you certify a system at all, you are certifying it as it is at the time of certification. You don't need to be much of a technical expert to know that the majority of these systems either self-learn through machine learning or get adapted because the instructions to the algorithms are changed. You cannot certify once. And here, if Paul Nemitz is still on the call, one of the genius things about the GDPR, although not many are living up to it, is the periodic reassessment of data protection impact assessments. This we need to understand: we need to periodically reassess these systems, and nobody can justifiably do this unilaterally. You have to do it with multiple stakeholders at the table.

Co-governance

So what can companies do? They can co-govern these systems, take their responsibility, educate themselves so they know what they are actually dealing with, commit to periodic reassessments, and, if harms are being experienced, throw the system out the window if it cannot be adjusted.


Moderator: I'm sure you would agree with this Christina, but we need big tech to step up and say here's a great way of doing it, let's not wait to be regulated let's actually do some of the things that you described, which would be best practices. We can get people to behave more in the right way as opposed to just not doing anything.

Bruce, I would love that, but how many of the big tech companies coming out of your country have a tradition of collective agreements and positive, constructive dialogue with their employees? None. So I don't think best practice is going to come from them.

All algorithmic systems are harming

I just want to say very quickly that all algorithmic systems are harming certain people, certain groups, certain countries. So when is a harm big enough to lead to a ban? I think we have to turn this upside down and say: we cannot introduce any algorithmic system if it is not governed, and it has to be governed by those who are subject to its harms or impacts. That's how I would turn it around.

EU AI Act doesn’t mention workers once

On the EU AI Act: can you imagine, they come out with a draft Act and do not mention workers with a single word? They admit that systems introduced in workplaces are high-risk systems, yet they shy away from any form of governance, saying self-assessment and self-regulation are good enough. This is an absolute disaster, and why they moved away from the rights-based GDPR to the risk-based EU AI Act I do not understand. It has to be redone totally.

In conclusion: Uphold UDHR art 1

I could go on with you all forever, this is so great. Now I would like to pose a question. There are lots of responsible technologies being produced that are very underfunded. AI could do a lot of good in relation to the world of work. Where and why is this responsible technology not being funded?


If people and governments really were committed to human rights, there would be funding towards responsible tech. Then I just want to end by quoting Article 1 of the Universal Declaration of Human Rights and remind us all that:

“all humans are born free and equal in dignity and rights”, and this, I think, we should commit ourselves to uphold.
 

See all keynotes and panels from the 2021 Athens Roundtable here