- KL - more is needed
On 23 May 2022, Kommunernes Landsforening (KL, the association of Danish municipalities) published its Digitalisation Policy. It is a step on the way, but like the strategy from Digitaliseringsstyrelsen, it is simply not sufficiently novel or deeply thought through. Nor is it future-proof. Here are my comments, in roughly arbitrary order.

Nowhere does the policy mention that the digital systems in use must be continuously governed, and that this must be done in collaboration with relevant citizen groups and employees. That is a striking omission. Across Denmark, Europe and the world, digital systems have had very harmful effects on citizens and employees. In the Netherlands, the government had to resign when algorithms meant to spot child benefit fraud turned out to discriminate against non-ethnic Dutch citizens. In Denmark, an error in an algorithmic system meant that 513 heart patients may have risked undergoing unnecessary surgery. In England, algorithmic systems consistently gave children from poorer areas lower grades than children from wealthier areas. The examples are many - this database of failures in digital systems gives good cause for furrowed brows. It is therefore a serious concern that KL either does not know of these many cases, or does not understand the importance of continuously governing digital systems so that (un)intended harms are spotted, corrected and avoided in good time.

KL's technology think tank does recommend that municipalities "think in terms of collaboration and citizen involvement when you develop digital solutions." But it is not enough for citizens (and, may we add, relevant employees) to be involved in the development phase. They must emphatically also be involved in the continuous evaluation (governance) of the implemented systems mentioned above.

Although the importance of citizens' trust is mentioned several times, the policy contains no clear targets for transparency. Citizens' trust depends directly on their knowing:
a. what their data is used for,
b. which data is used,
c. what their avenues of complaint are,
d. whom to contact in the municipality,
e. which systems use this data, and for what purpose,
f. who developed the system, and
g. what rights the system developer has to use the data for other purposes,
and much more. It would have been desirable for KL to include transparency requirements in its digitalisation policy.

In focus area 5, under "The future must be built on the joint municipal IT foundation", it says that municipalities must "avoid vendor lock-in". It is indeed a very good idea that the municipalities develop their own solutions. But everything currently suggests that the vast majority of systems are developed by third parties. Competencies in the public sector must be substantially increased if municipalities are to develop and improve their own systems.

Under point 2, Better Service, it rightly says that "municipalities must each time assess whether the offer to citizens should and can be digital". This is the only place in the entire policy where the precautionary principle can be glimpsed. On the whole, the policy reads as if the digital is 1. inevitable, 2. necessary, and 3. desirable. Without proper governance of these systems, all three assumptions can lead, and already have led, to unintended and often harmful impacts on citizens and employees. It would have suited KL to adopt the data minimisation principle from the GDPR:

"The principle of 'data minimisation' means that a data controller should limit the collection of personal information to what is directly relevant and necessary to accomplish a specified purpose. They should also retain the data only for as long as is necessary to fulfil that purpose. In other words, data controllers should collect only the personal data they really need, and should keep it only for as long as they need it." (EDPS)

On competencies, the policy notes: "It is therefore important that new technologies become broadly anchored, and that employees have the competencies and the courage to use them." That these same employees must have the right competencies to critically govern and manage the systems from a rights perspective, and to ensure that citizens' rights are not violated or harmed, is not mentioned. Again, this is a serious shortcoming of the policy. If the municipal employees do not know how and why they should govern the systems, nor whom to contact if they suspect or observe biases in a system, then who will?

Municipalities must also "use data as a value-creating resource". And further: "There is a need to clear every stone from the road to be able to share data where it benefits citizens." Here it would be interesting to dig into the choice of the words "value" and "resource". And who actually decides what "benefits the citizens"? My personal data is not a commodity to be altered, traded and resold with profit (read: added value) as the goal. My data must be seen in a human rights perspective, and it should be managed in that light. This is not about value, or about resources, but about rights. It is therefore worrying that KL's policy contains the wish that: "The municipalities and KL must continue to work purposefully in both joint-municipal and joint-public settings to use data wisely and to ensure coordination across domains for the citizen. Legislation must support this, not stand in its way." If data is to be shared between municipalities, it should only be possible where the citizen has freely and explicitly consented. There are good experiences and practices in Estonia's data tracker system, which lets citizens see which public authority has accessed their data and why. Again, transparency could be brought much more to the fore in KL's policy.

And finally: there is really nothing new in KL's entire policy. Citizens' trust cannot be won or maintained through small patches on a system that has repeatedly failed. Although ethics or 'ethical' is mentioned a few times, the policy does not go into any depth on how this ethics is to be formulated, interpreted and actually lived up to. Again, inclusive governance mechanisms must be part of this policy for it to claim ethics at all. Beyond that, I miss visions for how municipalities could lead the way in using decentralised servers, differential privacy or similar techniques, more democratic citizen involvement, data as a collective good, data trusts and much more. In other words, more is simply needed if this policy is to be considered visionary. It is not, and that is in itself deeply worrying.
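For readers unfamiliar with differential privacy, here is a minimal, purely illustrative sketch of the idea in Python - it is not something KL or any municipality uses. A count statistic is published with calibrated Laplace noise, so that no individual citizen's record can be inferred from the output:

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (one person's record changes the
    result by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy. Smaller epsilon means stronger
    privacy and a noisier answer.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative only: a municipality could publish how many citizens in a
# register are over 65 without revealing whether any one person is in it.
ages = [23, 37, 41, 58, 62, 71, 80]
print(dp_count(ages, lambda age: age >= 65, epsilon=0.5))
```

The point, in the policy context above, is that useful aggregate statistics can be shared between authorities without trading in identifiable citizen data.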
- Just published: "Righting the Wrong: Putting Workers’ Data Rights Firmly on the Table"
Christina J. Colclough contributes this chapter to a new book edited by Professor Mark Graham and DPhil Fabian Ferrari. Read more about the book, published by MIT Press, below.

Understanding the embedded and disembedded, material and immaterial, territorialized and deterritorialized natures of digital work. Many jobs today can be done from anywhere. Digital technology and widespread internet connectivity allow almost anyone, anywhere, to connect to anyone else to communicate and exchange files, data, video, and audio. In other words, work can be deterritorialized at a planetary scale. This book examines the implications for both work and workers when work is commodified and traded beyond local labor markets. Going beyond the usual "world is flat" globalization discourse, contributors look at both the transformation of work itself and the wider systems, networks, and processes that enable digital work in a planetary market, offering both empirical and theoretical perspectives. The contributors - leading scholars and experts from a range of disciplines - touch on a variety of issues, including content moderation, autonomous vehicles, and voice assistants. They first look at the new experience of work, finding that, despite its planetary connections, labor remains geographically sticky and embedded in distinct contexts. They go on to consider how planetary networks of work can be mapped and problematized, discuss the productive multiplicity and interdisciplinarity of thinking about digital work and its networks, and, finally, imagine how planetary work could be regulated.

Get the book as a pdf, or even better, buy it! https://direct.mit.edu/books/oa-edited-volume/5319/Digital-Work-in-the-Planetary-Market

Download Chapter 17: Righting the Wrong: Putting Workers' Data Rights Firmly on the Table here

Contributors: Sana Ahmad, Payal Arora, Janine Berg, Antonio A. Casilli, Julie Chen, Christina Colclough, Fabian Ferrari, Mark Graham, Andreas Hackl, Matthew Hockenberry, Hannah Johnston, Martin Krzywdzinski, Johan Lindquist, Joana Moll, Brett Neilson, Usha Raman, Jara Rocha, Jathan Sadowski, Florian A. Schmidt, Cheryll Ruth Soriano, Nick Srnicek, James Steinhoff, JS Tan, Paola Tubaro, Moira Weigel, Lin Zhang
- Reminding the G7 - workers' rights are human rights
The Why Not Lab was invited to the G7 Labour Dialogues to discuss the digitalisation of work and workers with the German Minister of Labour and Social Affairs, Hubertus Heil. See what we discussed here.

The German government holds the Presidency of the G7 this year. In the Labour Dialogues held in Berlin on May 12 and 13, union leaders had the opportunity to discuss climate change, the new social contract, workers' rights post-pandemic, supply chains and digitalisation. Hosted by FES and DGB, these discussions with ministers were highly valuable - not only for the G7 unions, but also in relation to ensuring that the G7 respects and bridges global divides.

Digitalisation and the need for inclusive governance

Hubertus Heil covered many issues in his opening speech: from Germany's just-adopted human rights and due diligence law, to the scheduled law improving workers' data rights, to the EU AI Act. The Why Not Lab's Christina J. Colclough was asked to comment on the Minister's speech and the German G7 political priorities on digitalisation. Here is what she covered.

Standards and certifications as envisioned in the draft EU AI Act, the policies of GPAI, the OECD and the G7 must be accompanied by mandatory, inclusive and periodic governance obligations. This requirement is critically missing in all multilateral political discussions. In workplaces, this co-governance must include workers and their representatives. Linked to this, and in line with the German government's planned law on workers' data rights, workers must have much stronger collective data rights. This is to prevent the commodification of work and workers, and the subsequent narrowing of labour markets. Workers' rights are human rights, and the current manipulation of workers through algorithmic systems must stop. Politicians should stop fetishizing the 'free flow of data' and, in line with Shoshana Zuboff, work towards banning markets in human futures. We need a de-datafication of labour markets.

Colclough then moved on to highlight the need to ensure that the developers of new digital systems, and the companies/workplaces deploying them, have the necessary competencies to govern these technologies from a human rights perspective. So must we ensure that workers and their unions have the competencies they need to truly engage in this human rights and workers' rights governance.

On one crucial point, Colclough disagreed with the Minister on some of the details in the EU AI Act. She stressed that in its current form the Act deviates from the GDPR in key areas: namely the missing requirement for transparent impact assessments, the missing recommendation to consult with workers on workplace systems, and the missing right of authorities to access these assessments. This leads to self-regulation of even high-risk workplace systems, which is simply unacceptable. She also pointed out that it is somewhat interesting that the EU AI Act is a risk-based regulation, again deviating from the rights-based GDPR. The Minister took careful note of all of these points.

E-commerce/Digital Trade

We then discussed digital trade. Colclough argued that many of the positive policy wishes the Minister presented during the meeting actually contradict the German G7 policy priorities' push for a reform of the WTO to include digital trade. Here she underscored that the free flow of data does not equal free and equal access to data.
That governments and the EU Commission keep repeating this demand will only lead to the increased commodification of workers and citizens, and will only benefit the companies that already extract enormous amounts of data. Colclough mentioned the other demands on the table, all of which would stifle governments' scope to regulate, deepen digital divides and disempower citizens and workers. These are:
1. A ban on data localisation requirements
2. A removal of technology transfer obligations
3. A removal of the obligation to reveal source code
4. A removal of all obligations to have a physical and/or legal presence in the host country.
Colclough concluded that the current e-commerce trade demands must be refuted and blocked. Read why in the briefing the Why Not Lab has written for the G7 meeting here.
- Towards New Labor Futures: Voices from the Frontlines
The DataSyn team at the renowned ITforChange asked 21 experts from across the world about their views on some of the big victories of the labor movement in the past few years in the context of platformization, the biggest challenges facing workers, and the key opportunities for strategic intervention for workers in the future. Read all 21 inspiring, worrying, yet also uplifting inputs here.

Here are the insights from the Why Not Lab's Christina Colclough.

What do you count as some of the big victories of the labor movement in the past few years in the context of platformization?

The continued effort from old and new trade unions is testified to by the number of cases raised in courts across the world questioning the classification of workers as independent contractors. As a result, more and more courts are ruling that workers on digital labor platforms are indeed employees. This is very important. Work is work, and all workers should enjoy the same benefits and rights. Another interesting recent development is how some unions are using data protection regulations to submit data subject access requests and to challenge algorithmic management. Here, the work of the App Drivers and Couriers Union (ADCU) in the UK is pioneering, and the wins they have obtained in their cases against Uber deserve celebration. A third positive development is the increase in the number of unions which are signing collective agreements with digital labor platforms.

What are the biggest challenges facing workers in the present moment?

Across the world, precarious forms of work are on the rise, stripping workers of their rights and leaving them to bear the risk of the market on their own shoulders. This is not least driven by digital labor platforms that utilize regulatory gaps and insufficiencies to outcompete brick-and-mortar companies as well as to exploit labor. In connection with this, the digitalization of work and workers is increasingly threatening workers' fundamental rights, freedoms, and autonomy. Algorithmic management, obscure or black-box algorithms and automated decisions, coupled with, or even facilitated by, regulatory sluggishness, are leading to the commodification of workers. At the same time, digital technologies are increasingly being used by platforms to identify and destroy organizing efforts by workers. These organizing efforts are further hampered by market regulations that are aimed at preventing cartels: given that workers on digital labor platforms are still mainly regarded as independent contractors and therefore sole proprietors, market rules prevent them from organizing. All of this combined is leading to an exploitation of labor and a downward pressure on working conditions and rights. Acknowledging that digital labor platforms do offer workers in many parts of the world a means to earn an income, the flexibility offered to the workers should not come at the expense of their rights. Here, governments need to take responsibility and regulate digital labor platforms in order to protect workers' fundamental rights, freedoms, and autonomy.

What are key opportunities for strategic intervention for workers in the future?

First of all, unions must lead the way and demand that governments close regulatory gaps so that all workers, in all forms of work, have the same strong social and fundamental rights. These rights must be enforceable. Secondly, algorithmic management systems and practices must be regulated.
This regulation must include stringent demands for transparency, accountability, and fairness, as well as for the necessary ongoing governance of said systems and practices by workers and platforms in co-operation. Thirdly, while the data-driven commodification of workers must ultimately be refuted, digital labor platform workers and their unions can tap into the potential of digital technologies by collecting and analyzing their own data. Here, several good examples exist, from Driver's Seat to GigBox and the app WeClock, to name a few. In these cases, the power imbalances between workers and platforms that arise through unequal access to information have been successfully addressed. Fourthly, unions must spearhead a human rights-centered campaign aimed at highlighting the violations of said rights through the unfettered digitalization of work and workers. Importantly, the four issues raised here demand governmental cooperation, action, and responsibility. Discussions held in intergovernmental fora such as the Organization for Economic Co-operation and Development (OECD), the International Labor Organization (ILO), the Global Partnership on AI (GPAI), and the G7 unfortunately provide little indication that governments are prepared to regulate from a rights perspective. Changing this will be one of the most crucial battles for unions going forward.

Dr. Christina Colclough is the founder of the Why Not Lab - a boutique value-driven consultancy that puts workers at the centre of digital change. She is regarded as a thought leader on the futures of work(ers) and the politics of digital technology, and works with unions, interest organisations and governments across the world on issues such as AI governance, workers' data rights and human rights, and the development of responsible digital technology. Christina is a Board and Committee member in several international bodies focussed on the ethics of AI. See Christina's Wikipedia page here.
- Union Club Spotlights Data Ethics
FSU-DK's union club in Nordea has put its sharp spotlight on the bank's use of employee data. Together with the Why Not Lab and their union colleagues in Sweden, Norway, Finland and Poland, they are putting data ethics on the agenda.

President of the Nordea Union Board Mette Højby comments: "Employees' data rights are a topic that we are going to work more on in the coming months. The collection and use of data is here to stay, but transparency is important, and only the necessary data should be saved. The employees must know what data about them Nordea is saving, for how long, and for what purposes. We must demand that these purposes are well founded."

Head of Secretariat Christine Asmussen continues: "We see that Nordea is very focussed on customer data - how it is used and saved. The same considerations must be given to employee data."

Asmussen and Højby continue their reflections in the article by adding that new technology could be applied to really good purposes: a reduction of overtime, a fairer distribution of tasks, and a more balanced assessment of employee performance. But they caution that the very same technologies can be used to violate privacy rights. Read their article (in Danish) here:
- AI and Human Rights
In this, one of the most interesting panels I participated in in 2021, the speakers discuss the current relationship between AI and human rights. Hear the Why Not Lab's Christina Colclough argue why we don't need just one global convention on AI, but many; why algorithmic systems must be co-governed; why the draft EU AI Act needs to be scrapped; and why we all need to uphold Article 1 of the Universal Declaration of Human Rights: "all humans are born free and equal in dignity and rights".

Panel at the 2021 "Athens Roundtable on AI, Human Rights and the Rule of Law", with Elizabeth Thomas-Raynaud from GPAI, Marielza Oliveira from UNESCO, Cornelia Kutterer from Microsoft EU, Patrick Penninck from the Council of Europe, and the Why Not Lab's Christina Colclough.

Here's what the Why Not Lab's Christina Colclough said:

Thank you Bruce - fascinating to hear all of you speak, and it's changing what I had thought I was going to say. But let me take your questions and, as usual - Elizabeth, you're going to laugh now - I will do it in reverse order. Should there be a convention, you ask, in the singular? No! There should be many! And this is the thing we need to do. I mean, there are lots and lots of comments in the chat here about the complexity of all of this. Well, let's peel the layers off the onion and really start looking at the core features of artificial intelligence, its deployment in the public and private sectors and in workplaces, and see where it is that we actually need to have some conventions. These could be around transparency, around the co-governance, around the co-design of algorithmic systems to ensure that they do not intentionally or unintentionally harm - that's number one.

I agree with most of what my fellow speakers have said, but I really think we need to start from the ground up here. What I do in the work in the Why Not Lab is really work with workers and unions across the world, in all regions, to bridge a huge knowledge gap here, and that is around data, around AI, algorithms. How do we understand these new technologies that are being introduced into workplaces, and how, on that understanding, can workers and unions start building a response - with the ultimate aim of tabling an alternative digital ethos?

Now, this leads into the idea of a convention. What we are seeing now is several things in workplaces - and I'm going to limit my comments to the workplace. What we see is that management are introducing tools and systems which, for the vast, vast majority, are third-party systems. They have not necessarily been trained in identifying harms or risks, in understanding what could be the unintended consequences of the use of these systems in terms of violations, of discrimination and bias and so forth - but also lots of other harms we can see workers are subject to: increased work speed, intensity, etc. So what we see here is that management are introducing these systems and they are not governing them, and if they are governing them at all, it's from a risk perspective - risk of being hacked, or safety, or something. It's not from a socio-technical perspective. One of the things the Why Not Lab is helping the unions with is actually starting that conversation with management around how we could co-govern these systems - not to remove the responsibility and liability from management, but actually to ensure that management does take that responsibility seriously. So empowerment from the ground up, I think, is extremely essential.

Can law keep up?
Now this is a question which almost fixates law as a constant. Law could keep up if our politicians took responsibility. We are standing - and I said this when I bowed out of the GPAI Steering Committee - we are standing on the shoulders of giants: politicians who, at earlier points in history, dared take responsibility. And I think the world is now looking at the current global politicians to say: take responsibility. Let's face it, the current digital ethos which is running around the world right now is doing more harm than good, especially from a human rights perspective. The Universal Declaration of Human Rights, which has formed the basis of many human rights laws around the world, is so profound - and I really want to support what Marielza has said: we just have to enforce these absolute rights. Thinking that through: at the moment, so many workers and citizens are being manipulated to a degree that we must ask, do they really have freedom of thought? How is this being manifested in relation to their work opportunities? For example, are we narrowing the labour market into very exclusive labour markets where anything outside the norm is simply thrown out the window?

And then I really want to say something, because I am - if I can be so rude - really, really tired of hearing governments and high politicians talk about how they respect human rights while they are allowing the abuse of human rights within their borders. Just in the world of work: union busting, for example, is an abuse of human rights. So I really think we should tidy up our own backyards and then acknowledge that we don't just need one international convention, we need several. And we need to break this down into the very core of artificial intelligence - or whatever you want to call it, algorithmic systems - so that it is a co-building, a co-governance of these systems, no matter where they are deployed.

Moderator: Christina, I'm sure you have some things you want to say on that topic of how companies can step up more?

Has Microsoft ever consulted with their employees?

Absolutely, I absolutely do. Cornelia [from Microsoft] - now I'm lovingly looking at you. In everything that Microsoft has done, have you included a representative sample of your employees in forming those policies or practices? If you have, do you regularly check in with them around lived harms, lived experiences and so forth? What companies could do is bring dialogue back into vogue. To stop perceiving their employees as their enemies. To really value that the union representatives, or the workers themselves, have their ear to the ground. They are the ones who are living the impacts - or, in the majority of cases, the harms - that these systems are subjecting them to.

Management are not experts

And I am so frustrated that in almost every single governance model that has been produced by academia, experts and think tanks, there is an assumption that management knows what they're dealing with. They don't. This is a fallacy. The majority of companies I've spoken to who are deploying third-party systems do not know how to govern these systems in a socio-technical environment. So we need to bring dialogue back into vogue. That is one thing. Second thing - what can companies do? Respect the collective agreements, respect human rights, freedom of association, the right to collective bargaining, and through collective agreements start actually discussing the implementation and the purposes of these systems.
Certification

My third point is - and this is another mind-boggling thing about international law, including the EU AI Act - if you certify at all, you are certifying a system as it is at the time of certification. I don't think you need to be much of a technical expert to know that the majority of these systems either self-learn through machine learning or get adapted because the instructions to the algorithms get changed. You cannot certify once. Here - and I think, if Paul Nemitz is still on the call - one of the genius things of the GDPR, although not many are living up to this, is the periodic reassessment of data protection impact assessments. We need to understand this: we need to periodically reassess these systems, and nobody can justifiably do this unilaterally. You have to do this with stakeholders, multiple stakeholders, at the table.

Co-governance

So what can companies do? That's it. Co-govern these systems, take their responsibility, educate themselves so they know what they're actually dealing with, and periodically commit to reassessments - and, if harms are being experienced, to throwing the system out the window if it cannot be adjusted.

Moderator: I'm sure you would agree with this, Christina, but we need big tech to step up and say "here's a great way of doing it" - let's not wait to be regulated, let's actually do some of the things that you described, which would be best practices. We can get people to behave more in the right way as opposed to just not doing anything.

Bruce, I would love that, but how many of the big tech companies coming out of your country have a tradition of collective agreements and positive, constructive dialogue with their employees? None. So I don't think the best practice is going to come from them.

All algorithmic systems are harming

I just want to very, very quickly say that all algorithmic systems are harming certain people, certain groups, certain countries. So when is a harm big enough for it to lead to a ban? I just think we have to turn this upside down and say we cannot introduce any algorithmic system if it is not governed, and it has to be governed by those who are subject to its harms or impacts. So that's how I would turn it around.

The EU AI Act doesn't mention workers once

On the EU AI Act - can you imagine, they come out with a draft Act and they don't mention workers with one single word. They admit that systems introduced in workplaces are high-risk systems, yet they shy away from any form of governance, saying self-assessment, self-regulation is good enough. This is an absolute disaster, and why they moved away from the rights-based GDPR to the risk-based EU AI Act I do not understand. It has to be redone totally.

In conclusion: Uphold UDHR Article 1

I could go on with you guys forever, this is so great. Now I would like to pose a question. There are lots of responsible technologies being produced which are very underfunded. AI could do a hell of a lot of good in relation to the world of work. Where and why is this responsible technology not being funded? If people and governments really were committed to human rights, there would be funding for responsible tech. Then I just want to end by quoting Article 1 of the Universal Declaration of Human Rights and remind us all that "all humans are born free and equal in dignity and rights" - and this, I think, we should commit ourselves to uphold.

See all keynotes and panels from the 2021 Athens Roundtable here
- A Digital Rallying Cry to Politicians and Business Leaders
The Why Not Lab's Christina Colclough was interviewed by a Danish newspaper. Here she argues that far too many business leaders, and not least politicians, are guilty of knowing far too little about artificial intelligence, yet are opening the doors to a technology whose consequences they don't know. See the Danish article below, published by FAA on 15 February 2022. Photo and text by journalist David Bernicken.
- Collectivise data, or not?
In this short impulse, Dr Christina Jayne Colclough argues why workers could benefit from collectivising their data, yet also why this is, or could be, a dangerous path. Responsible data stewardship is a must to prevent the commodification of work and workers.

"In relation to this conference, I'm really torn. On the one hand, it would be fantastic if workers could collectivize their data, start analyzing them and push back on the very corporate-driven narratives that we are fed. Those who have the data are also those who, in a way, can monopolize 'the truth.' On the other hand, we also have to consider, if unions or workers start collectivizing their data, whether we are entering into a world where workers accept that data is seen as a commodity. And if we think that thought through, are we then also accepting to become commoditized ourselves? So on the one hand I want to stop this commodification of work and workers, and on the other hand I think there's a deep need for us, maybe in the interim, to really collectivize our data, break the monopolization of truth, and let the world know what workers' realities really are. Here, in that interim period, the idea of a data cooperative, a data trust, could be really, really helpful - but with caution and with care. I think the whole idea of responsible data stewardship must come to the forefront of all the cooperatives or data trusts that we are talking about today."

About the New Common Sense Conference

From November 12 to 18, 2021, the Platform Cooperativism Consortium (PCC), the Institute for Ecological Economy Research (IÖW), and the WZB Berlin Social Science Center hosted #TheNewCommonSense, the sixth edition of the annual PCC Conference. Over seven days, more than 80 cooperative pioneers, workers, trade unionists, policymakers, researchers, and activists from over 20 countries came together to take stock of how platform co-ops have responded to the pandemic, to analyze supportive policy, and to discuss current experiments with data co-ops, token systems, and feminist tech infrastructures, among other subjects. Watch all of the other impulses from activists, researchers, coop founders and workers here
- Tech-Talk on AI and EU Labour Market
In this panel debate moderated by ETUI's Aida Ponce del Castillo, Jan P. Brauburger from IndustriALL and I discuss how AI is affecting workers in the European labour market. Organised by the Competence Centre on the Future of Work of the Friedrich-Ebert-Stiftung as part of their TechTalk series, we cover a range of issues such as the #GDPR, #algorithmicmanagement, jobs and skills. It was a high energy discussion - do listen up!
- WeClock feat. in Süddeutsche Zeitung
In an article in Süddeutsche Zeitung, one of Germany's leading newspapers, WeClock is featured: "Because new forms of employment also require new forms of observation of employment relationships, apps such as weclock.it give gig jobbers a kind of self-tracking tool that enables them to analyze how many hours are unpaid - an important concern for employees who are paid per 'gig' but often idle for hours waiting for orders." Read the original article in German here, translation below.

Human as a drone

Future of work? The "gig economy" is more of a relapse into times before the welfare state. But now the precarious jobbers are taking matters into their own hands. By Michael Moorstedt

What some neoliberal thought leaders still regard as the future of work is, in fact, a really shitty job. We are of course talking about the so-called gig economy - for example, at the delivery service Gorillas. In view of the billions in investor valuations, centuries-old achievements of the welfare state, such as labour law, are forgotten. The drama surrounding the question of whether the company's precarious employees are even allowed to elect a works council has been dragging on for more than a year. Just last week, the delivery service announced that it would again take legal action against the works council.

Labelled by smart spin doctors as a way to combine self-determination and gainful employment, gig work is in truth inherently inhuman. Because regardless of whether it is a delivery or driving service, in it people become drones. The instructions are received from an algorithm on the smartphone. The software has authority and decides who is deployed where and when. A right to have a say? Rather not. In addition to the already poor working conditions, the main issue is the lack of accountability: without a human contact person, objection is pointless. Uber drivers have often wondered why the system does not take them into account. Anyone who does not meet their quota is quietly downgraded and no longer receives any orders. In modern parlance this is called a "shadowban". Even the tips that some customers give before they settle back onto the couch for the evening's "Tatort" after delivery are often enough withheld by the company.

The gig workers defend themselves against the exploitation with their own tracking apps

The labour dispute of the 21st century is not fought only on familiar fronts such as wage equity or self-determination. It also extends to new battlefields such as big data analyses, because the rules by which one is preferred or disregarded remain opaque. Under the EU's General Data Protection Regulation, every single gig worker may obtain information about the data collected about them - but that does not help to establish a comprehensive set of rules, applicable to all, that could give platform workers more insight. In order to make the balance of power a bit more even, software developers have published numerous apps in recent months that are meant to help collectively record working hours, detect wage theft, track underpayment, collect data and, last but not least, build solidarity and organise. Because new forms of employment also require new forms of observation of employment relationships, apps such as weclock.it give gig jobbers a kind of self-tracking tool that enables them to analyze how many hours are unpaid - an important concern for employees who are paid per "gig" but often idle for hours waiting for orders.
Instead of relying only on the arbitrariness of the algorithms, these programs promise answers to essential questions: Which platform is the right one for me? Which working hours are the most lucrative? Or even: Is it worth it at all? Other projects go a step further and try to build delivery platforms that are collectively owned by the workers themselves. So the wheel of history is turning another lap: after pig capitalism comes communism.
- WeClock featured in Huck
In the article "Gig workers are fighting back by surveilling their employers - Turning the Tables" journalist Anna Dent presents ways in which gig workers are fighting back against algorithmic management. WeClock is one of the ways this is done. Read the full article here, excerpt below. Gig workers are fighting back by surveilling their employers In the fight against bad bosses, workers are adopting the platform tactics used against them and adapting them for their own benefit. (photo Unsplash: Carl Campbell) Pay rates for Deliveroo drivers in Edinburgh have gone up recently, but workers don’t know why. The variables determining how much they get paid are contained by an algorithmic ‘black box’, which no-one outside the company has access to. Workers are left in the dark about why their incomes rise and fall – an unpredictability which causes stress and insecurity, says Deliveroo rider Lena (who asked for her surname to be withheld to protect her anonymity). “At the end of the week you’re anxious, because you don’t know how much money you will earn,” she tells Huck. “Rates are changing every day. We don’t know enough about why rates change.” [....] But relying solely on the market to boost incomes is risky, as riders could just as easily see their earnings fall again if more people join the platform, says Roz Foyer, general secretary of the Scottish TUC. If riders knew more about how their rates were determined, they might be able to achieve permanent increases. [....] In response to this data-driven power imbalance, platform workers, unions and academics are fighting back. Dr Christina Colclough, founder of the Why Not Lab, sums up the challenge: “How do we, without increasing the surveillance of workers, give them control?” One approach gaining momentum is to adopt platform tools and techniques and adapt them for workers’ benefit. “Companies have an attitude of ‘We can watch you and not tell you’… Now workers are wanting to say, ‘We can watch you too’,” says Cailean Gallagher, coordinator of the Edinburgh Workers’ Observatory. [....] Dr Colclough, when serving as director at UNI Global Union, worked with partners at Guardian Project, MIT Media Lab and Cambridge University to develop WeClock. The app allows workers to track a wide range of data produced while they work, such as a record of time spent in the workplace, whether proper breaks are taken, and time between shifts. The data might show that workers are putting in more hours than they are paid for or not given the breaks they are entitled to, helping them to gather evidence and build a case for change or compensation. These new tools are not just the preserve of well-funded startups. In many cases, they are built by gig workers themselves, using their existing coding or copywriting skills. Boyan, for example, uses data skills gained from his new job in tech and his degree. It’s early days for many of these initiatives, with teams only just getting to grips with what is technically and legally possible, and the best tools to achieve their goals. On top of fighting for better working conditions in the short-term, this idea of workers taking control of their data could feed into bigger conversations. This includes the future of platform ownership, and whether worker-owned platforms could one day become the norm. For Roz Foyer, it is critical that unions embrace these new tactics and help to build digital tools that provide new ways of empowering workers. 
But Dr Colclough sounds an important note of caution, emphasising the need to avoid replicating the problematic aspects of the platforms workers want to challenge: “We have to be very, very careful that we don’t jump into this new technology without thinking about protecting the privacy of members,” she says.
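To make concrete what self-tracking tools of this kind compute, here is a minimal sketch in Python. It does not use WeClock's actual code or data formats (neither is shown in the articles above); it simply assumes a hypothetical log of paid gigs within one shift and derives the unpaid waiting time:

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%d %H:%M"

# Hypothetical self-tracked log: (start, end) of each paid gig in a shift.
paid_gigs = [
    ("2022-05-02 09:05", "2022-05-02 09:40"),
    ("2022-05-02 10:30", "2022-05-02 11:10"),
    ("2022-05-02 12:45", "2022-05-02 13:20"),
]

shift_start = datetime.strptime("2022-05-02 09:00", FMT)
shift_end = datetime.strptime("2022-05-02 14:00", FMT)

# Total paid time is the sum of gig durations; everything else in the
# shift is unpaid waiting, which the worker can document as evidence.
paid = sum(
    (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
     for start, end in paid_gigs),
    timedelta(),
)
total = shift_end - shift_start
unpaid = total - paid

print(f"On shift: {total}, paid: {paid}, unpaid waiting: {unpaid}")
```

Aggregated over weeks and across many workers, this kind of simple arithmetic is what lets riders build a documented case about unpaid hours and missing breaks.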
- WeClock featured in Wired
'Worker Data Science' Can Teach Us How to Fix the Gig Economy

Written by Dr Karen Gregory on the back of the amazing conference she and her team put together, Digital Worker Inquiry, this article in Wired is a great resource for getting an idea of all the union and worker activism currently in place to support workers in their fight against poor and exploitative working conditions in the 'gig economy'. It includes a mention of our app WeClock, as well as a wealth of information about workers' data collectives, legal rights, practical experiences in collectivising data, algorithmic management and more! The Why Not Lab is quoted too:

For Christina Colclough, the founder of the Why Not Lab, unions must specifically build capacity to understand the "ins and outs of data and algorithms" and develop their own teams of data analysts. As Colclough has argued, trade unions have a fundamental role to play in protecting workers' collective digital rights. While digital inquiry tools may offer new forms of data, it is essential that these projects help build union strength, rather than fracture or privatize worker interests. Any long-term change that might be made possible through these tools will come through drawing unions into larger political conversations about data governance.

Read the full article as pdf here
- Your digital self-defense
The digital world is still relatively new to us humans. Unlike in nature, where we can recognize a threat, we are still learning to move safely in the digital jungle. We can't help but be watched on our journey, but here are my top ten tips to protect you against the worst predators. This article was originally written for the Danish Insurance Sector Union. See the Danish article below, English version under the Danish insert.

My top ten tips

This article wraps up the ones I have written for the Danish Insurance Sector Union on the challenges of the digital world. This one is all about my top ten tips for protecting yourself and others from the shadow side of the digitised world. The digital jungle, I might be tempted to call it, because while it may look charming on the surface, threats are hiding everywhere. As with the jungle's mosquito net, boots and protective clothing, my recommendations for your journeys in the digital jungle will also seem cumbersome. That's the way it should be, because digital tools, apps and services are precisely designed to make it as easy as possible for you to use them without having to think too much about what they do and what you don't see. They offer you simplicity and convenience while collecting data from you and tying you ever more strongly to them. To protect your own and others' rights, I hope you will consider changing some of your digital habits. Let's look at what that means.

Tip 1: Employer-provided mobile phones and computers

If your employer has given you a mobile phone, they probably also installed Mobile Device Management (MDM) software on it. MDM protects the data that you produce when you use your company's email system, download and save files, and the like. But MDM also allows your employer to see which apps you've installed and how you generally use your phone. Let's say you downloaded an app about pregnancy: will it tell your employer something that you may not have told them yourself yet? Or maybe your employer has turned on a "find my phone" service. In principle, it allows your employer to track where the phone is at any time of the day - and therefore also where you are. So, did you call in sick while you were actually visiting your family on the other side of the country? There could be consequences. An employer-paid mobile is the employer's property. Do you use it for your social media? To save pictures or take notes? Keep in mind that all of this information is in principle your employer's. Anything you store privately on your employer's mobile can, at worst, be used against you. So my first tip is quite simple, but also one of the more cumbersome: buy a private mobile phone and turn the employer's phone off after working hours. Only use the employer-paid mobile for work purposes. Everything else you should do from your private mobile. And the same applies to employer-paid computers, tablets and other devices: use them for work only and turn them off after working hours. Use your private devices for everything else.

Tip 2: Your work email address

This leads me to the next piece of advice: your work email. It is also your employer's property. Every single email is theirs, and they have the right to check your emails if they have good reason to do so. There are several reasons why we should be careful when using our work email. The General Data Protection Regulation (GDPR) contains something called the 'right of access by the data subject', exercised through a data subject access request (DSAR).
This right ensures that a person can obtain a copy of all the personal data that a company holds on him or her. As an employee, you can use this right to get a copy of the personal data your employer has collected about you. For example, they should then give you a copy of all emails, files, SMS texts, images from surveillance systems and data-driven inferences that include your data. They must also provide you with information about who this data has been disclosed to and what else the information has been used for. While the DSAR is a strong right, it also means that if you have used your work email address to write about a particular person, and that person makes use of a data subject access request, your emails, too, must be included in the package they receive. This most probably raises a lot of questions, and maybe some nervousness, with you as well. So the second tip is that you should use your work email address for work purposes only. Nothing else.

Tip 3: Mind the apps!

Fortunately, Apple and Android (Google) have developed their services so that you now get an overview of which apps on your phone or tablet have access to which sensors on your device. That is good and practical, as we - hand on heart - never read the endless privacy policies, terms and conditions, but just blindly accept them when we install an app. But we should read them. Your mobile phone and tablet typically contain 14 sensors that can, for example, provide information about where you are, at what speed you are moving, and even what floor of a building you are on. Apps can collect a wealth of information through these sensors - unless you take the time to take control over which sensors your apps have access to and which they don't. The tip here is that you should make use of the possibilities to restrict your apps from collecting data about you and your whereabouts. Delete the apps you never use. Every one of them is a potential tool for collecting information about you.

Tip 4: Social media

We've become accustomed to using social media to stay in touch with our surroundings, check what they're doing, see their pictures and show others what we are doing. There's a whole bunch of social media, and you're probably using more than one. Did you use the same email address to sign up for them? Have you used your mobile phone number for that? Bear in mind that while these services can be really convenient, their whole business model is built on harvesting data from you, analysing it and selling it on. It's fun and exciting to use their services, but there's no free lunch here - you pay one way or another. So here are some tips to protect your identity and privacy on social media.

Stop using them! Many have. I have stopped using Facebook to protect my privacy. This might seem like a big step, so instead try to share less about you and your life. You really don't have to share everything. These services are not only interested in you; they also extract lots of information about the places you visit and the friends you have. Do you ask for permission before uploading pictures of your friends? Or your children? Online life is forever. Even a 15-year-old photo can compromise a person's job or opportunities in life. There are already many companies that make a living vacuuming social media and creating profiles of people's activities and attitudes. They are used by HR managers, recruitment experts, banks and governments far more often than we would want to know.

Use a dedicated email address.
When creating social media accounts, use a dedicated email address that you only use for this purpose. Never use your work email or your private email.

Beware of criminals. If you provide your real name, date of birth, mobile number and the like, it is not difficult for criminals to steal your identity.

They love to share. Facebook - or Meta, as they call themselves now - owns WhatsApp and Instagram. Google owns YouTube. They love to share data between their different services. Therefore, think twice when using these services. For example, try Signal or Telegram instead of WhatsApp, especially if you need to send and receive confidential data.

Tip 5: Beware of free wi-fi connections

When you're out and about traveling, or just in a public space, think twice before logging on to free wi-fi connections. Remember that nothing is free! Even if they don't charge money, for every millisecond you're on these networks you are giving them data. Often you need to sign in with your email or flight number, and then they know who you are and where you're going. If you have exceeded the amount of data in your mobile plan, consider using a virtual private network (VPN). If you still have data left, use it - it's your safest connection.

Tip 6: Virtual Private Networks (VPN)

In fact, we should always connect to the Internet through a VPN connection that creates a secure connection to the digital world. A VPN is a Virtual Private Network. Think of it as a way to conceal your online activities and mask what you are doing and where you are. Use a VPN service wherever you go. It will give you protection greater than even a secured Wi-Fi hotspot: a VPN makes it hard to snoop on your traffic and pick up on what you are doing. VPNs can also prevent websites and Internet services from knowing your real location. And with a VPN, your ISP can't monitor your activity. I have a VPN system on my mobile phone that blocks all sorts of things. Some months it has blocked up to 18,000 trackers!

Tip 7: Cookies

Cookies - digital cookies - are the reason you get all these pop-up messages when you are browsing the web. Cookies ensure that we do not have to log in again and again on websites we frequently visit. They also allow websites to keep track of what webpages we visit, how often, what news articles we read and much more. Cookies also provide data to some of the most popular analytics tools, such as Google Analytics. Google Analytics (and systems like it) provides website owners with real-time reports about users' location, what webpages they read, for how long they are on a site - and much, much more. Already in 2009, the EU adopted the so-called 'cookies directive'. It places restrictions on the tracking of online activities and requires website owners to ask for a user's informed consent to the use of cookies. That's why you get these many cookie pop-ups. This important law aims to protect your privacy. So use the few seconds it takes to "reject all cookies" - or at least only allow "strictly necessary" ones. If you "accept all cookies" you will probably be tracked in everything you do online.

Tip 8: Go incognito

Related to rejecting cookies is the option of private mode - incognito or 'private internet browsing' - when using the web. This is an option in most browsers and ensures that the browser doesn't store your browsing history, your temporary Internet files and/or cookies. In Google Chrome, it is called 'Incognito Mode'.
In Firefox it's the 'Private Browsing' option, and Edge/Internet Explorer uses the name 'InPrivate' browsing. Common to them all is that they are privacy features that prevent a lot of tracking. But even these private modes are not entirely private. When you search in incognito or private mode, your ISP can still see your browsing activity. If you use a company computer, your IT department can follow it as well. Even websites you visit can track you. So whilst incognito browsing has certain advantages, do use the other tools to protect your digital privacy, not least a VPN.

Tip 9: Keep everything up to date

Updates help you keep your mobiles, computers, and other devices safe. A good tip is to check all your devices for system updates at regular intervals, daily or weekly. Those who have developed the systems we use actually do what they can to protect us from hacking and other bad stuff. I start the week by checking all my devices and installing updates. You can also set your devices to do this automatically, but it is often more convenient to have updates downloaded only when the devices aren't being used anyway, rather than in the middle of a meeting.

Tip 10: Remember rest time

My final tip has to do with our increasing reliance on our smart devices. Let's just admit it - we hardly go anywhere without our mobiles. But maybe we should! There is no need for Google or Apple to know where we are 24 hours a day, 7 days a week. Go for a walk without your phone. We must dare to turn it off every now and then and give ourselves the right to a free space and rest. I wish you the best of luck in changing your digital habits. Do it for your own sake - and for all those affected by your data. Make protecting your privacy a fun thing to do together with your family, friends and colleagues.
- Worker Data Collectives - Between commodification and empowerment
In this 8-minute impulse for FES #DigiCap I introduce my thoughts on worker data collectives. Listen up for why I think that data collectives have their merit in the medium term - but not unconditionally so. The impulse is part of a panel featuring Professor Trebor Scholz and Senior Fellow Sigurt Vitols. We discuss how democratic participation is changing in the European economy, and to what extent digitalisation is putting pressure on it. Which innovative ideas, such as data collectives or platform cooperatives, can we use to strengthen co-determination and democratic participation in the economy?

Worker Data Collectives Demands

In my introductory impulse I argue that I am torn between two opposites: will workers collecting data add to the commodification of workers, or will it empower them? Worker data collectives have their merit, but not unconditionally so. Here I lay out some of the basic requirements for the responsible collection of worker data (a minimal sketch of the first principle follows below):

1. We need to learn from the finest principle in the GDPR: data minimisation. Workers should only collect the data necessary for a particular purpose. Hoarding data is not the solution.
2. Red lines must be set for what the data can be used for, by whom, and why.
3. Worker data collectives should avoid the temptation to sell access to the data: selling means we accept that data is a commodity, and therefore that we are too.

I continue by arguing that data collectives can be necessary to push back on the industrial monopolisation of the narrative that we are currently subject to. When businesses are the ones hoarding the data and turning it into uncontested "truths", workers (indeed all citizens) are being manipulated exclusively for the good of capital. But regulation is needed; here is what we need:

Regulation Needed

To prevent the uncontested commodification of work and workers I call for the following global policies:

1. Workers need the right to know what data-generated inferences they are subject to. No regulation in the world provides for this; even the GDPR fails us here. Yet these inferences can significantly impact who gets a job, who is penalised, who is offered which opportunities, and so forth.
2. We need to ban the markets that trade in human futures, i.e. the trading of datasets and inferences that include people's data. This market, which nobody really knows who occupies, is leading to the commodification of us all.
3. And I call for the co-governance of algorithmic systems in workplaces. No audit, impact assessment or governance can take place without the voices of those subject to these systems.

See the full FES event website here: https://www.fes.de/en/digitalcapitalism for all of the amazing speeches and interviews. The entire panel debate here: https://youtu.be/zIpcQ4ABpJk
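As a rough illustration of the data minimisation and red-line requirements above, here is a minimal sketch in Python. All field names, purposes and the retention period are hypothetical; the point is only that a worker data collective can enforce purpose limitation and deletion in code, rather than hoarding everything:

```python
from datetime import date, timedelta

# Fields a collective might hold, mapped to the single purpose that
# justifies collecting them (names and purposes are illustrative).
PURPOSE_FIELDS = {
    "unpaid_hours_claim": {"shift_start", "shift_end", "paid_minutes"},
    "pay_gap_analysis": {"hourly_rate", "job_category"},
}

RETENTION_DAYS = 90  # red line: delete records once the purpose is served

def minimise(record, purpose):
    """Keep only the fields needed for the stated purpose; drop the rest."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

def expired(collected_on, today=None):
    """True if a record has passed its retention limit and must be deleted."""
    today = today or date.today()
    return today - collected_on > timedelta(days=RETENTION_DAYS)

raw = {"name": "A. Worker", "shift_start": "09:00", "shift_end": "17:30",
       "paid_minutes": 360, "home_address": "Elm Street 1"}
print(minimise(raw, "unpaid_hours_claim"))
# -> {'shift_start': '09:00', 'shift_end': '17:30', 'paid_minutes': 360}
print(expired(date.today() - timedelta(days=120)))  # -> True
```

Note that nothing here permits selling access to the data: under the red lines above, each purpose is served and the records are then deleted.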
- Human Rights Must be Upheld
Today I held my last intervention as a Steering Committee member of the Global Partnership on AI (GPAI) - an intergovernmental cooperation counting 19 governments and the EU. I was asked how GPAI could be developed, and offered three areas of utmost importance.

Human Rights, Diversity and Commitment

I mentioned how governments today are standing on the shoulders of giants, where world leaders at crucial moments in history dared commit to one another. I stressed that the world today needs the same level of commitment: tech knows no boundaries, and global regulation is needed. On diversity, which is key to GPAI's future, I asked for new structures and policies that would allow the voice of those subject to the harms and impacts of AI to be present - even if they do not have the resources to volunteer their participation. And I added that the most pertinent issue is that of upholding human rights: we can debate ethics, we can debate values, but we cannot debate human rights. These rights should permeate all of GPAI's work. And then some more.

See the video recording of the interventions by Co-Chair Baroness Joanna Shields (excellent opening), Co-Chair Jordan Zed, incoming SC member Dr Yuko Harayama and me here
- Seven ways platform workers are fighting back
The Trades Union Congress (TUC) in the UK has just published this collection of essays on how platform workers are fighting back against algorithmic management, the datafication of work, and precarity. It includes an essay by Christina Colclough, the founder of the Why Not Lab.

Collection Summary

Platform working is an expanding part of the economy. Globally, the number of platforms has grown five-fold in a decade. And the coronavirus pandemic seems to have been the catalyst for a further surge in platform growth in the UK and elsewhere, as many homebound workers opted for internet shopping and food delivery. New polling data published in this collection shows that 14.7 per cent of working people in England and Wales, equivalent to approximately 4.4 million people, now undertake platform work at least once a week. Almost a quarter (22.6 per cent) of workers have done platform work at some point. It can seem that practices like casualisation, management by algorithm rather than by humans, and a complete absence of trade unions are baked into the way the platform economy is run. There are fears that it is therefore only a matter of time before these spread to other jobs. Yet platform workers, who range from private hire drivers to translators, and their representatives are fighting back in many important areas and have secured notable victories. However, this activity is often conducted in isolation: the labour lawyers plot improvements to employment rights in one corner while the tech enthusiasts highlight discriminatory algorithms in another. Meanwhile, union organisers plug away at the vital work of signing up new members. This essay collection seeks to unite these experiences to build a picture of the various areas where platform workers are fighting for their rights, with the aim of informing and inspiring future union activity. All work should be decent work. These essays set out ways this might be achieved in the platform economy. See the full table of contents here:

Building Union Data Capacity

Colclough's essay argues that we need to turn the tides and tip the scales so workers can be empowered and can protect their rights. Digital platform workers and their unions could beneficially tap into the powers of digital technologies to form their responses. While it would be ill-advised to simply duplicate, or increase, the surveillance of workers and the commodification of work they are already subject to, she presents two inspiring possibilities (WeClock and Driver's Seat), a helpful guide (Lighthouse) and a vision for the future (workers' data collectives). Common to them all is that they empower workers through the responsible collectivisation of worker data. Read her essay below, download it, and do read the full publication.