Blog Posts (83)
- Protecting workers’ rights in digitised workplaces
Published by Equal Times on May 4, 2023, this op-ed by the Why Not Lab argues that, to protect workers' fundamental rights, values and freedoms, workers and their unions must understand the means through which digital technologies cause harm. This implies acquiring new capacities to fill the holes in current #AI and #dataprotection regulation through collective bargaining. Read on to find out what must be done, and why employers and governments need to know what they need to know, too!

Protecting workers' rights in digitised workplaces: knowing what we need to know

Across all sectors of the global economy, automated and algorithmic management systems (AMS) of various kinds are being deployed by employers to, supposedly, increase productivity and efficiency. Whilst this quest is in no way new – management has surveilled workers and sought to improve productivity since the dawn of capitalism – the depth and breadth of the impacts of these systems are. Whilst some AMS can have a positive impact on working conditions, many don't. Across the world, workers have reported a range of negative impacts, among them the intensification of work and its pace, discrimination and bias, and the loss of autonomy and dignity.

Whilst these are, unfortunately, issues workers and their unions had to deal with long before the digitisation of work, the means to these harms are different. Preventing them in digitised workplaces therefore requires that we understand those means. In the case of (semi-)automated and algorithmic management systems, this implies understanding what data, algorithms, artificial intelligence (AI)/machine learning, inferences and much more are, and how these in turn can affect workers.

So what AMS are we already seeing? The Berkeley Labor Center's classification is useful here and identifies three different types of systems:
- Human resource analytics, such as hiring, performance evaluation, and on-the-job training.
- Algorithmic management, such as workforce scheduling, coordination, and direction of worker activities.
- Task automation, where some or even all of the tasks making up a job are automated using data-driven technologies.

Common to all three types is that they: 1) delegate managerial decisions to (semi-)automated systems; 2) use and collect data, from workers and their activities, from customers (for example, how they rate a worker), and/or from third-party systems (such as online devices, profiling systems, public datasets, previous employers, and/or data brokers); and 3) have been programmed to fulfil a particular purpose. Some have been given instructions as to how to fulfil that purpose. More advanced systems that, for example, use machine learning or deep learning are, however, not told how to fulfil the purpose but rather get feedback from a human when they are on the right track.

Regardless of the individual system, humans have been involved at some point in its life cycle, from development to use. They have determined the purpose, developed the system, perhaps decided to reuse a system designed for one purpose and altered it to fit another, written the instructions, decided which datasets it should be trained on and later use, and so forth.

Preventing harm requires capacity building

All of the above hints at what we need to know and understand so that we can defend workers' rights in digitised workplaces. Firstly, we need to know what digital technologies are being used in our workplaces.
We then need a basic understanding of them: who developed them, how do they work, what data was used to train the system, and are these data representative of our culture, traditions and institutions? We need to know what algorithmic inferences are, and which ones are being used in the system and/or are subsequently made. We must find out what the instructions to the systems have been and who has given them, and how all of this, together and independently, can impact workers' rights up and down value and supply chains, today and in the future.

Admittedly, this is not a small set of tasks. Knowing what to ask and what to look for requires specific, and for many new, competencies. In some countries, management might be obliged by law to provide workers with some of this information. In some, management might be keen to engage with the workers and happy to share what it knows. In others, management might be tight-lipped, saying nothing and refusing to engage meaningfully with the workers. In all cases, workers and their representatives could begin by defining general principles for the use of digital technologies in workplaces (see, for example, the British TUC's 2021 report When AI is the Boss). They could then map, analyse and query each system used. With this knowledge, they can negotiate guardrails and permissions around the systems' current as well as future impacts on workers.

Managerial fuzz

Yet it is not only the workers who urgently need to build capacity – so too do the employers who are deploying digital technologies. Reports from unions across the world reveal that managers "don't know what they should know" either. Maybe the human resources department is using an automated scheduling tool that the IT department purchased on the orders of executive management. Who is responsible for the impacts of the tool? Who has been trained to identify and remedy harms? In many cases, the division of responsibility between managers with regard to the governance of these technologies has not been made clear. Managerial fuzz abounds. Who has informed the employees about the system? Do the systems actually do what they claim? How should they be governed for (un)intended harms, and by whom? Who is evaluating the outcomes and making the final decision to go with the system's recommendations or results, or not? What are the rights of those affected?

It is alarming, to say the least, that so many workers report never having been informed about what digital technologies their employer is using to manage them. Equally concerning is the fact that managers are deploying technologies they have not properly understood. Given that the vast majority of digital technologies deployed in workplaces are third-party systems, if employers aren't governing the technologies that are designed by others yet deployed in their workplace, control and power seem to be slipping further away from the workers and into the hands of the vendors and/or developers. The labour-management relation is thus becoming a three-party relation, yet few fully realise this. The increasing power of third-party vendors and developers comes at the expense of the autonomy of both labour and management. This, in turn, will indirectly, if not directly, weaken worker power.

Governmental omissions

Many governments across the world have already improved, or are looking into improving, data protection regulation, and many are also looking into regulating AI.
An element of these regulatory proposals has to do with mandatory audits or impact assessments. Whilst this is good, there are some worrying tendencies. Firstly, no government is proposing that these audits or assessments be conducted in cooperation with the workers and/or their representatives. This includes within the EU – otherwise heralded as a region in support of social dialogue. Secondly, they all assume that the tech developers and/or management have the competencies they need to meaningfully conduct these audits and assessments. Do they, though? Is self-assessment sufficient? Is it acceptable that they alone decide (if they actively decide at all) that a system is acceptable to use if it is fair to 89 per cent of the workers? What about the remaining 11 per cent? Shouldn't the workers concerned have a say?

Capacity building is happening

To fix all of these issues, there can be little doubt that capacity building is required. Fortunately, over the last one to two years, more and more unions have been doing exactly this:
- The global union federation for public service workers, PSI, is this year concluding a three-year capacity-building project called Our Digital Future. It is training regional groups of digital rights organisers, trade union leaders and bargaining officers, and equipping them with tools and guides to help bridge the gap from theory to practice and strengthen their collective bargaining.
- The International Transport Workers' Federation is running a two-year union transformation project that introduces unions to a tailor-made Digital Readiness Framework, which seeks to help unions tap into the potential of digital technologies, responsibly and with privacy and rights at heart.
- Education International has launched a three-part online course on its ALMA platform on the challenges of EdTech and possible union responses.
- The British TUC has just launched an e-learning course, Managed by Artificial Intelligence: Interactive learning for union reps, that in a practical and guided way helps unions map the digital technologies used and, from there, form critical responses.
- The AFL-CIO in the United States has created a Technology Institute that, according to AFL-CIO president Liz Shuler, is "a hub for skills and knowledge to help labour reach the next frontier, grow and deploy our bargaining power, and make sure the benefits of technology create prosperity and security for everyone, not just the wealthy and powerful."
- Three Norwegian unions, Nito, Finansforbundet and Negotia, have collaborated to create an online course for shop stewards that gives a general introduction to AI systems in workplaces and provides a tool to support shop stewards in asking the questions necessary to protect workers' rights and hold management responsible.

Many other national, regional and global unions are leaping into this capacity-building work through workshops and conferences on the digitalisation of work and workers. These events are inspiring their continued work to transform their strategies and table new demands in collective bargaining. The thrust from the unions will bring employers to the table and, in turn, entice them to know what they need to know to address the union demands. Given the sluggishness of, and gaping holes in, current governmental discussions on AI regulation, collective bargaining will be essential for workers and their unions to reshape the digitisation of work in respect of workers' fundamental rights, freedom and autonomy.
- Keeping Work Human (podcast)
In this episode of the Pondering AI podcast with Kimberly Nevala, we get to cover many of the topics I find pertinent in our quest to reshape the digitalisation of work: tech determinism, the value of human labour, what I call 'managerial fuzz', collective will, digital rights, and participatory AI deployment. Hear us discuss the path of digital transformation and the self-sustaining narrative of tech determinism, and why I think there is an urgent need for robust public dialogue, education and collective action. Hear me decry the 'for-it-or-against-it' views on AI and explain why I embrace being called a Luddite. Why? See the image below, and do read Jathan Sadowski's full article, "I'm a Luddite. You should be one too". It's not all doom and gloom, though. We also discuss the concept of "Inclusive Governance": how AI technologies could be less harmful and more supportive of fundamental rights if management and labour governed the technologies together. To this end, we all need to build capacity so we can tap into the benefits of AI while avoiding harm. I end with what Kimberly calls a "persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance". Access the transcript of this episode here.
- Når din chef er en algoritme / When your boss is an algorithm (podcast)
In late 2022, nursing a cold, I was invited to discuss surveillance at work with Christiane Vejlø and Peter Svarre. It's in Danish - jump on in and listen to examples of what goes wrong when your boss is an algorithm. Description below, translated from the Danish. Podcast here.

Do you sometimes feel that your boss treats you unfairly? Then the thought of replacing him or her with technology free of prejudice and grudges might be tempting. Imagine a working life where you are judged precisely on what you do, and no one comments on your gender, your age, your skin colour, your sexuality, your appearance or your bad mood. You are simply assessed objectively on your performance. It's called algorithmic management. And it is going to be big. But will it be good? We grapple with the algorithmic boss in today's episode of Del og Like. On the panel: digital advisor Peter Svarre and expert in the future of work(ers) Christina Jayne Colclough. The host is Christiane Vejlø. The programme is produced by Elektronista Media for the ADD project.
Other Pages (7)
- TOOLS & GUIDES | The Why Not Lab | Championing ALT Digital
Tools & Guides One of the Why Not Lab's missions is to co-develop and deploy digital technologies that empower minority groups and safeguard human rights. Check out our privacy-preserving self-tracking app WeClock . Or the tool for good data governance Lighthouse . Or dive into our guides for ensuring worker empowerment in digitalised workplaces. Young Workers Lab: Quote Thoughtexchange Worker Empowerment Guides (updated) by the Why Not Lab Check out our Data Life Cycle at Work guide that will help you navigate the topics for bargaining for much stronger workers' data rights. A void left by inadequate regulations across the world. Our Co-governance of Algorithmic Systems Guide buckets 19 questions into 7 different themes. Essential for getting a seat at the table and decision-power over the algorithmic systems in place at work. A check list of sorts, walk this guide and keep management responsible and liable for the systems they use. Note you can turn each question into a collective bargaining clause! Check out our beta version of a step-by-step guide to negotiating for improved Data Rights . It brings legal rights to you as you map the digital tools used and prepare negotiations. Provided here by Public Services International. How do you transform your union so you tap into the potentials of digital technologies but responsibly and with your members privacy at the core? Go through our digital readiness framework - map where you are now, pick a dimension to work on and redo the framework in 6 months. Provided here by Public Services International. WeClock the app for workers by workers WeClock offers a privacy-preserving way to empower workers and unions in their battle for decent work. WeClock gives an indication of the present and changing nature of work by providing insight about the: presence or absence of decent or fair work, working conditions, or work/life balance. Built with workers in mind, WeClock empowers change. Check out the app's website here for more info and download links. WeClock Lighthouse Lighthouse Online Tool for Good Data Governance Given that WeClock will be a data-gathering tool for unions, we decided to work with a UK union Prospect , along with Digital Public and Small Scale , to develop Lighthouse – an online, privacy-preserving tool to help unions become stewards of good data governance. Lighthouse takes the form of an online guide - or quiz - where those participating get to rate their methods and practices along a range of topics. Use Lighthouse courtesy of Prospect here Digital Tools for Trade Unions 2019 Report As part of the Young Workers Lab project, we scanned the market for digital tools either built by trade unions or that could be an inspiration for trade unions. Our insights are available in our "Connective Action: Digital Tools for Trade Unions report." Download the report here . Thoughtexchange the crowd-sourcing platform As work becomes more precarious, piecemeal and decentralised, we needed a communication tool that could reach all members, no matter where they were and when they worked. Thoughtexchange is that very tool! Read user stories from UNI Global's members and sectors here: Engaging members in new ways (Unions21) Grim Reality for Young Workers Strengthening Union Democracy Contact Thoughtexchange here
- About | The Why Not Lab | Championing ALT Digital
About the Why Not Lab

The Why Not Lab is a boutique, value-driven consultancy that exclusively serves progressive organisations, trade unions, public services and governments. The Why Not Lab has a three-fold mission to ensure that the digital world of work is empowering rather than exploitative. We:
- Equip workers and their unions with the right skills, know-how and know-what to shape an alternative digital ethos that ensures collective rights in the digital age;
- Put workers' interests centre stage in current and future digital policies;
- Support public services and governments in ensuring that the use of digital technologies respects human rights and is fair, transparent and responsible.

The Why Not Lab is run by Dr Christina J. Colclough, a fearless optimist who believes that change for good is possible if we put our minds and hearts to it. She works with experts and partners across the world to provide the best advice at all times. Read more about Dr Colclough below.

Please note: we support workers and organisations across the world and have adopted a differential pricing principle. Do contact us with any inquiries.

Dr Christina J. Colclough
Widely regarded as a thought leader on the futures of work(ers) and the politics of digital technology, Dr Christina J. Colclough is an advocate for the workers' voice and for strong, quality public services. She founded the Why Not Lab with the aim of reshaping the current digitalisation trajectory so that human rights, freedoms and autonomy are respected and protected. Christina's background is in labour market research and in the global labour movement, where she led future of work policies, advocacy and strategies for a number of years. She was the author of the union movement's first principles on Workers' Data Rights and the Ethics of AI. A globally sought-after keynote speaker and workshop trainer, with over 400 speeches and trainings over the last three years, Christina created the Why Not Lab as a dedication to improving workers' digital rights. She is included in the all-time Hall of Fame of the world's most brilliant women in AI Ethics.

Trusted Positions
Christina is a Fellow of the Royal Society of Arts in the UK and an Advisory Board member of the Carnegie Council's new programme, the AI and Equality Initiative. She is also a member of the UNESCO #Women4EthicalAI Platform and the OECD One AI Expert Group, and is affiliated to FAOS, the Employment Relations Research Centre at Copenhagen University. In 2021, Christina was a member of the Steering Committee of the Global Partnership on AI (GPAI).

Current Projects
- Our Digital Future: a three-year project with Public Services International aimed at building the capacity of unions in all regions of the world on the digitalisation of work and workers, and at co-designing union responses. Training material (reports and slides) is available upon request.
- MOOC for unions: with Education International we recorded over 30 videos on the impact of education technologies on the rights, freedoms and autonomy of teachers and education support personnel. It is now a full-blown MOOC, brilliantly edited by EI for their members.
- Co-governing A.I.: through thematic advice and training we are supporting a group of unions in a European country on the co-governance of algorithmic systems in workplaces. Their aim is to scale to the entire labour market.
- UnionTech: find the material and recordings from this four-part series of workshops on #UnionTech here, courtesy of participants, presenters and FES. These workshops united participants to build their capacity to critically use and challenge digital technologies.
- Digital Training USA: pretty honoured to be working with a top-notch university in the US to create a series of workshops on digitalisation and its impacts on work and workers. The first round of workshops is tailor-made for union leaders.
- Data Storytelling: the team behind WeClock offers, with support from FES, an in-depth, hands-on course on data storytelling. From responsible data collection to designing and running the campaign, participants learn how to analyse their data and use it in their campaigning.

Testimonials
John C. Havens, Executive Director, IEEE Global Initiative on Ethics of Autonomous & Intelligent Systems & Council on Extended Intelligence:
"In an environment where rhetoric often rules all, Christina provides hard-hitting yet pragmatic and solutions-oriented counsel on issues including the future of work, human autonomy, human rights, and technology governance in general. She is my 'go to' person on any issues related to AI and the future of work based on her specialized knowledge of workers' rights and actual global policy and economics relating to these issues versus only aspirational techno-utopian ideals. She is also a gifted and personable speaker, transforming highly nuanced and complex technical and political issues into conversational, story-oriented speeches."
- Podcasts | The Why Not Lab | Futures of work | Championing ALT Digital
Podcasts

Keeping Work Human - Pondering AI 🎧 Podcast: Apr 19, 2023
Covering many of my overall views on the digitalisation of work, in this podcast with host Kimberly Nevala we discuss the value of human work, the divisiveness of digital platforms, and sustainable governance. We also discuss why emerging AI regulations give workers short shrift, whether AI regulation is being privatised, and the dangers of the quantification of humans.

Selvmål eller high score? Tracking af din arbejdsindsats 🎧 Podcast: Dec 2, 2022
Podcast in Danish. Do you sometimes feel that your boss treats you unfairly? Then the thought of replacing him or her with technology free of prejudice and grudges might be tempting. It's called algorithmic management. And it is going to be big. But will it be good? We grapple with the algorithmic boss in today's episode of Del og Like.

Worker surveillance is spreading 🎧 Podcast: Oct 5, 2022
As more of our daily work takes place on digital platforms, it has become possible for employers to collect data on everything we do. It is, however, a development that should be discussed and regulated, say critics Peter Svarre and Christina Colclough in this podcast in Danish. Read the English summary here.

Digital Technology at Work 🎧 Podcast: Feb 16, 2022
Tune into this conversation between Christina Colclough and Henrik Skaug Sætra on power, efficiency, digital tech and data at work. Hear Christina argue why the lack of knowledge about the depth and breadth of digital technologies is leading to a lack of regulation and governance. This leads to human rights abuses and the dangerous commodification of work and workers.

Disrupted Asia - AI, data rights & futures of work 🎧 Podcast: Jan 26, 2021
Interviewed by FES in Asia for their podcast series Disrupted Asia, this episode analyses the effects of digitalisation, artificial intelligence and data protection in the workplace. We talk about worker surveillance, the data life cycle at work, algorithmic decision-making, union responses, and ways forward for the labour movement in the Asia-Pacific region.

A Brave New World of Work 🎧 Podcast: Oct 20, 2020
In this podcast, Dr Christina J. Colclough discusses with Simon Sapper what unions can, should and must do to safeguard their members in the digital world of work, and what's likely to happen if they don't! We home in on the urgent need for union capacity building and for new negotiation strategies on the governance of AI and worker surveillance.

The Future of Work in Education 🎧 Podcast: Jul 29, 2020
Listen up for why Dr Christina Colclough cautions that the current unfettered digitalisation of education is a fundamental attack on human rights, and what unions across the world should both do and demand to shape a safer, better and more inclusive future of education. Recorded by Martin Henry from Education International - the global union for 32.5 million teachers and other education employees across the world.

Empowering Workers in the Digital Future 🎧 Podcast: May 29, 2020
Companies increasingly use digital systems to hire, fire, and monitor employees. But who is keeping employers in check? Former director at UNI Global, and one of the most influential thinkers in the ethics of AI, Dr Christina Colclough joins Azeem Azhar from Exponential View to explore how to ensure that the increasingly digital workplace of the near future protects workers.

Yours, mine and our data 🎧 Podcast: Feb 11, 2020
Recorded at a debate meeting at Kulturhuset in Oslo, Dr Christina Colclough takes the audience through an awareness-raising journey on why we need to push for a new digital ethos that protects human rights, our right to be human, our data rights, democracies and more. This podcast is all about data... (starts in Norwegian but continues in English)

In Data We Trust? 🎧 Podcast: Sep 9, 2019
Element AI's Marc-Etienne Ouimette spoke about taking back control of our data and the notion of data trusts (or, as Dr Christina Colclough also calls them, "workers' data collectives") with some of those leading the charge: Ed Santow, Australia's Human Rights Commissioner; Neil Lawrence, Professor of Machine Learning at the University of Sheffield; and Dr Christina Colclough.

The Robots Are Coming! 🎧 Podcast: Jul 22, 2018
Host Paul Dillon from the Office Block delves into one of the biggest topics facing people working in finance today. What are tech changes doing to our jobs? What will a future finance sector look like, as automation and digitalisation continue apace? What's all this about data and our rights? Featuring Dr Larry Stapleton, Dr Michelle O'Sullivan, Dr Lisa Wilson and Dr Christina Colclough.

Taming The Robots 🎧 Podcast: May 6, 2018
Are the fears for the future justified? How can we use this new technology to our benefit? There is no one better qualified or more articulate on this most pressing of subjects. In a heartfelt defence of the need for "human agency" in the industrial process, Dr Christina Colclough and Jonnie Penn set out not just why this is so important, but how we can make change happen.

Digital Future of Work 🎧 Podcast: Dec 15, 2017
Listen to this podcast where Dr Christina Colclough explains why unions need to fight for workers' data rights and protection at a time when digital change is upon us. We must ask whether data, big data, algorithms and AI are taking the human out of human resource management, and we must know what is needed to keep the balance of power in companies. The podcast is made by Kvinneperspektiv's Nina Hanssen from LO-Norway.