
  • Podcast: Digital technologies have entered the chat

    Regulating EdTech. Recorded in Canada by the Canadian Teachers' Federation, this podcast discusses: digital technologies, including artificial intelligence, and how they are impacting teachers and education workers; protecting labour rights and privacy in the current trajectory of digitisation; support and practical tools to leverage when bargaining around digital tech; next steps in digital rights advocacy; and more! Tune in to hear my recommendations for union action, how we can get a grip on #EdTech, and why I think our governments are letting workers down in their AI policy proposals. EPISODE HIGHLIGHTS What is artificial intelligence (AI) exactly? Christina Colclough (CC): In technical terms, AI is a set of rules in computer programming code aimed at solving a particular problem or performing a task. But there's no one definition of AI. What we can do is think about AI as a recipe, imagining that we’re going to make tomato soup, for example. It needs ingredients and instructions. Your tomato soup will not taste as good if your tomatoes are rotten. So, if the data the system relies on are not representative of your culture or your people, or if they are for some reason historically biased, then the outcome will be biased. Your tomato soup will not taste good if the data – or the ingredients – were rotten. That said, we should be saying digital technologies, because there's a lot of digital technology out there that is having a lot of impact but is not AI. It can be machine learning or deep learning, which are subcategories of AI, or it can just be data-driven analysis. What are the research findings on the impacts of education technology on education workers, unions, and the system? CC: What was very clear from the research is that there's a sharp increase in the use of education technology or “EdTech”. 
Another thing we found in that research was that, across the world, teachers were simply not consulted on what their needs are and what technologies they would prefer. Another thing that stood out was that teachers’ training needs on all things digital were overwhelmingly not met. So, they were asked to use technologies, but they were not trained in using or understanding them. And to no surprise, work intensification was a big problem. Lastly, another conclusion that stood out is the whole idea of negotiating for our data rights and for the right to have a seat at the table in the use of these technologies. What role do education unions play in protecting teachers’ labour and digital rights? CC: First and foremost, we have to understand that we're in this together, and it’s important that we all speak the same language. Once we have that common vocabulary in place and we've done a critical analysis of the benefits and harms, there are a few key topics to bring to the negotiation table: data rights and the right to be free of algorithmic manipulation (the right to know what data is being collected, why, and by whom); the need for teachers to be consulted in a continuous evaluation of how these systems are actually working and their impacts; and ensuring a good balance between work life and private life. One example of this in practice is in California, where they recently amended their data protection regulation to include a de-commodification clause: the right to opt out of the selling of your data. This is the only such clause in any data protection regulation in the world. What do you see as the next steps in advocating for digital rights in education? CC: Well, we need to address a gaping hole in a lot of our governmental discussions around the world regarding certifying these systems so that they’re allowed into the education system. 
But what none of our governments are seriously discussing is how we acknowledge that the majority of these systems are fluid and changeable. And if they can learn something, they can also learn the wrong thing. So, if we want to protect human rights, workers’ rights and citizens’ rights, then we have to continuously govern them. A bridge we need to build is making governance inclusive by including the voices of teachers, education workers and students, and making that mandatory in the use of education technology. #Education #AI #DigitalTechnologies #CollectiveBargaining

  • Digitisation in the public sector

    Recommendations for union action. A report by the Why Not Lab for the TUC. Written for the Trades Union Congress in the UK, this report presents recommendations for union action as public services and work in those services become increasingly digitised. Methodologically, the report is based on interviews with trade unions and workplace representatives conducted in late 2022, as well as desk research. The report starts with an introduction to the current digitisation trajectory and the interviewees’ experiences. Three sections then follow, each listing recommendations for union action. The first is concerned with the level of national policy, the second with sectoral and workplace collective bargaining, and the third with training needs and structures for union officers and workplace representatives. Digitisation riddled with problems Whilst most interviewees sympathise with the need to keep public services efficient, the transition to new digital technologies is riddled with problems of a structural, organisational and political nature. Structurally, the systems’ design process and agile rollout mean that systems are taken into use before they are fully complete and checked for errors. As a result, citizens are harmed and their rights violated. For the workers, this is having detrimental effects on their rights and working conditions. Privacy rights in relation to third-party access to sensitive data through the use of private sector developers and vendors are also a major concern, although not one explicitly mentioned by the interviewees. Organisationally, the lack of transparency, coupled with the top-down rollout, lack of meaningful consultation, and deficient co-design/co-governance efforts, is violating workers’ dignity, rights, freedoms and autonomy. 
Politically, the cost-saving aims of ‘improving’ public services are partially sought through the digitisation of public services, but also through negative pay policies, layoffs, office closures and more. In addition, the increasing reliance on private sector solutions and the lack of involvement of the workers and/or their unions in this process are posing a threat to workers’ rights and to inclusive and diverse labour markets. Policy and Collective Bargaining Recommendations The report offers a number of concrete national policy recommendations that address some of the workplace realities as reported by the interviewees. These include: the lack of transparency; the lack of meaningful consultation with workers with regards to the systems deployed, their effects on workers, and functionality requests from the workers; the lack of sufficient training in how to use the systems; and the lack of meaningful governance of the (un)intended effects of the systems on the workforce as well as the public, along with the lack of structures to deal with these effects in an inclusive manner. It then proceeds to offer concrete collective bargaining recommendations, which mostly address issues that were indirectly raised by interviewees. What this means is that the interviewees described the consequences of the use of digital systems and some of the organisational, structural and political solutions that could overcome them. However, they often did not go one step deeper into addressing the causes of these effects, and/or what could be done to overcome exactly those. For example, one interviewee spoke about the need to bypass the digital system due to its faults: Sometimes we need to lie to the system otherwise we can’t get to the next stage. We call it “workarounds”. It is much improved compared to what it was. I used to train people on how to be a case manager. One of the most difficult things was to teach them how to use the system and the workarounds. 
The causes here relate to poor system design and poor system purpose definitions. They are probably also rooted in poor communication between developers and deployers. If the workers were meaningfully consulted on the digital systems, and if their feedback was listened to and the systems amended, the need for workarounds would disappear rather than become, as it seems, rather institutionalised. What this indicates is that there is a much stronger need for inclusive governance across the entire design and deployment life cycle. The recommendations offered expand on the TUC's principles by adding: Anti-commodification clause - that datasets that include workers’ personal data and/or personally identifiable information cannot be sold, given away or transferred to third parties without the explicit consent of the workers. Note: The Californian data protection regulation, the CPRA, is the only data protection regulation in the world to include an anti-commodification clause. It states that workers (and consumers) have the right to opt out of employers selling or sharing their data with third parties such as data brokers. Governance clause - in line with the national policy recommendation listed above, all digital technologies deployed at the workplace must be inclusively and periodically governed. Responsibility clause - stipulating that management is at all times the responsible party and is liable for intended as well as unintended harms caused by the deployment of the digital technologies. Explainability clause - that all systems deployed must be explainable by management. 
They continue by recommending clauses that not only hold the public services in compliance with the UK GDPR, but also demand: transparency; data protection and rights; inclusive governance; management and workplace representatives’ digital competencies; the right to training/lifelong learning; limitations on employers’ surveillance of workers, including outside of working hours via apps on private devices; and unions’ rights to organise remote or hybrid workers. Training Recommendations The report concludes with recommendations for the competencies workplace reps and unions need to monitor the deployment and effects of digital technologies effectively. These include foundational and advanced training modules that will help unions identify, map, access, understand and table collective bargaining demands. Read the full report on the TUC's website here

  • Negotiating the two faces of digitalisation

    Published in Equal Times, this op-ed offers insights into how we can begin to map the current and future impacts of the digitalisation of work and workers. It suggests we focus on a quadrant consisting of (semi-)automation and quantification processes across a timeline from immediate to long-term. This disruption mapping will provide workers and unions with an overview of real and possible futures and can be a good method to use when planning for collective bargaining and/or policy advocacy. Full or semi-automation/robotisation Whilst the process of automation is nothing new, its extent and speed are. This is not least due to the launch this year of corporate-driven generative AI systems, such as OpenAI’s #ChatGPT and Google’s #Bard. It has been estimated that over 300 million jobs worldwide will be severely affected by these systems. In time, this disruption will hit workers across all occupations. In education, teachers will be able to use these systems to prepare lesson plans or evaluate student exams. In the film and media world, scripts could be written, special effects designed, and actors replaced by automation. Journalism can be automated; fiction writing too. In the health sector, patient care plans, illness diagnosis and even care workers can be substituted by machines. Coders, accountants and game developers could all be out of work. Customer call centres could be fully automated; research jobs as well. Yet the impact of this disruption will not be equally felt across the world and across skill levels. A recent report from global consulting firm McKinsey finds that: “Adoption is also likely to be faster in developed countries, where wages are higher and thus the economic feasibility of adopting automation occurs earlier. 
Even if the potential for technology to automate a particular work activity is high, the costs required to do so have to be compared with the cost of human wages.” It further states that “generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work”. To map this process, start by reflecting on the immediate impacts that (semi-)automation/robotisation will have on jobs, tasks, worker autonomy and working conditions. Then reflect on the long-term consequences of these disruptions. Quantification The process of quantification is more opaque, yet just as disruptive. Quantification refers to how data and algorithmic systems turn our actions and non-actions into measurable events. Put simply: “You are late six days out of 10” or “your productivity rate is higher than your peers’”. In reality, these calculations can include many more inputs: your gender, age, ethnicity, postcode, educational level, shopping habits, BMI or other health data, and much, much more. The calculus can also be far more complex: it can compare all of your attributes against very large datasets. It can find patterns and thereby create ‘facts’ or ‘truths’ that few have insight into. These opaque quantifications can have immediate consequences: you can get fired, hired or promoted. But importantly, they are also fed into algorithmic systems that future workers will be measured against. For example (again, simplified for the sake of explanation): a system has found that your productivity is declining and has been for the last three years. You are 52 years old, a woman, divorced, and you rent a small apartment on the outskirts of a small town. Your health seems to be in decline as your BMI is rising. Future job applicants who share all or most of these characteristics will likely be flagged as ‘less productive’ or of ‘declining productivity’ by a (semi-)automated hiring system. 
This means they will most likely never be considered for a job similar to yours. But what if the calculations missed some crucial facts about your life? You had been in a car accident and broke your knee a year ago. You have been going to physiotherapy since, but feel progress is slow as you still can’t run as fast as you used to. You have never owned a house and have always rented, as you believe this is better. You prefer small village life. What if the algorithms rated your behaviour negatively, whilst much of it shouldn’t be? Now take those miscalculations and replicate them by the thousands. The effects on future workers would be very real, yet all on the wrong basis. Such quantification and labelling occurs all the time. Wolfie Christl, a privacy researcher at the Vienna-based research institute Cracked Labs, recently discovered a 650,000-row spreadsheet on ad platform Xandr’s website. It revealed a massive collection of ‘audience segments’ used to target consumers based on highly specific, sometimes intimate, information and inferences. To map the immediate and possible long-term effects of quantification, start by describing the digital systems the employer is using. These could be automated scheduling tools, hiring tools or productivity scores. Then imagine the profiles/inferences you think these systems are creating, and on which data. Map the effects that these might have in the future. Union responses Unions must mitigate these severe disruptions to ensure inclusive and diverse labour markets now and in the future. To do so, they could unpack these two faces of the digitalisation of work as well as their immediate and future effects. This disruption mapping will provide workers and unions with an overview of real and possible futures and can be a good method to use when planning for collective bargaining and/or policy advocacy. 
Here are a few pointers as to what could be included in either or both: Disruption’s obligations: In most countries across the world, companies that introduce disruptive technologies have few obligations towards their employees. Unions could demand that disruption cannot take place without obligations towards those who are disrupted. This could include the obligation to continuously upskill or re-skill workers during working time, and to offer support to workers who will lose their jobs in the form of career advice, training programmes and the like. The cost of these programmes should be borne exclusively by the disruptive employer. Data: We need to know what data management (and, importantly, third parties who might have access to this data) collects, from which sources and for what purpose(s). Inference transparency: We should have the right to be informed about all of the inferences/profiles created using our data. Importantly, we also need to demand the right to know what inferences we are subjects of, measured against and manipulated by. Far from all data protection regulations in the world provide us with these rights, especially not the latter. Freedom from algorithmic manipulation: Human rights/civil rights should be extended to include the right to be free from algorithmic manipulation. This is strongly linked to the demand for inference transparency but goes further by offering an all-encompassing opt-out clause. There can be no freedom of thought, expression or being if our life opportunities are algorithmically defined and limited. Data rights: We should also negotiate for our rights around the use of the data extracted from us, including the inferences. These rights must be extendable to the employer and to any third party who might have access to our data. Whilst many data protection regulations include a number of rights, far from all grant data subjects (i.e. us) the eight rights European workers have under the GDPR. 
These should be clearly extended to inferences – collective and individual, present and future. This includes adding a new data right currently only found in the Californian data protection regulation, the CPRA, namely the right to prohibit the selling of one’s personal data. Co-determination and governance rights: Whilst it is nearly impossible to explain these two big topics in a single paragraph, workers should have the right to: (a) be consulted on the introduction of any new technology (German workers have strong rights in this respect); (b) co-determine the purpose, the data used and the instructions provided to the system; (c) know about the inferences made and edit or block them; and (d) be involved in the necessary continuous governance of digital technologies at work so that their experiences and opinions are part of the impact assessments. This latter point is very important, as obligations towards ex-post governance seem to be lacking in many policy proposals. Read the full op-ed on Equal Times in EN, SP and FR
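The quantification example described in this op-ed (a past worker's inferred profile being used to score future applicants) can be illustrated with a minimal sketch. Everything below is hypothetical and invented for illustration: the attributes, the threshold and the labels do not come from any real hiring system.

```python
# Hypothetical sketch of how a naive hiring filter can replicate a past
# miscalculation: applicants who resemble a profile previously labelled
# "declining productivity" get flagged, whether or not that label was justified.

# A profile a system inferred from one past worker (simplified, per the op-ed).
flagged_profile = {"age_group": "50+", "gender": "F",
                   "housing": "rent", "bmi_trend": "rising"}

def similarity(applicant: dict, profile: dict) -> float:
    """Share of profile attributes the applicant matches."""
    matches = sum(applicant.get(k) == v for k, v in profile.items())
    return matches / len(profile)

def flag_applicant(applicant: dict, threshold: float = 0.75) -> bool:
    """Flag as 'likely declining productivity' purely by resemblance."""
    return similarity(applicant, flagged_profile) >= threshold

# A new applicant who merely *resembles* the past worker is penalised,
# even though nothing is known about her actual productivity.
applicant = {"age_group": "50+", "gender": "F",
             "housing": "rent", "bmi_trend": "stable"}
print(flag_applicant(applicant))  # prints True (3 of 4 attributes match)
```

As the op-ed stresses, nothing in this score reflects the applicant's actual productivity: the flag rests entirely on resemblance to a profile that may itself have been miscalculated.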

  • Protecting workers’ rights in digitised workplaces

    Published by Equal Times on May 4, 2023, this op-ed by the Why Not Lab argues that workers and their unions must understand the means through which digital technologies cause harm in order to protect workers' fundamental rights, values and freedoms. This implies acquiring new capacities to fill the holes in current #AI and #dataprotection regulation through collective bargaining. Read on to find out what must be done, and why employers and governments, too, need to know what they need to know! Protecting workers’ rights in digitised workplaces - Knowing what we need to know Across all sectors of the global economy, automated and algorithmic management systems (AMS) of various kinds are being deployed by employers to, supposedly, increase productivity and efficiency. Whilst this quest is in no way new – management has surveilled workers and sought to improve productivity since the dawn of capitalism – the depth and the breadth of the impacts of these systems are. Whilst some AMS can have a positive impact on working conditions, many don’t. Across the world, workers have reported a range of negative impacts, amongst others the intensification of work and of the pace of work, discrimination and bias, and the loss of autonomy and dignity. Whilst these effects unfortunately are issues workers and their unions have had to deal with since long before the digitisation of work, the means to these harms are different. Preventing them from happening in digitised workplaces therefore requires that we understand those means. In the case of (semi-)automated systems and AMS, this implies that we understand what data, algorithms, artificial intelligence (AI)/machine learning, inferences and much more are, and how these in turn can affect workers. So what AMS are we already seeing? The Berkeley Labour Center’s classification is useful here and identifies three different types of systems: Human resource analytics, such as hiring, performance evaluation, and on-the-job training. 
Algorithmic management, such as workforce scheduling, coordination, and direction of worker activities. Task automation, where some or even all tasks making up a job are automated using data-driven technologies. Common to all three types is that they: 1) delegate managerial decisions to (semi-)automated systems; 2) all use and collect data, from workers and their activities, from customers (for example, how they rate a worker), and/or from third-party systems (such as online devices, profiling systems, public datasets, previous employers, and/or data brokers); and 3) have been programmed to fulfil a particular purpose. Some have been given instructions as to how to fulfil that purpose. More advanced systems that, for example, use machine learning or deep learning are, however, not told how to fulfil the purpose but rather get feedback from a human when they are on the right track. Regardless of the individual system, at some point in the life cycle from its development to its use, humans have been involved. They have determined the purpose and developed the system; maybe they have decided to reuse a system designed for one purpose and altered it somehow to fit another; someone has determined the instructions, decided which datasets it should be trained on and later use, and so forth. Preventing harm requires capacity building All of the above hints at things we need to know and understand so that we can defend workers’ rights in digitised workplaces. Firstly, we need to know what digital technologies are being used in our workplaces. We then need a basic understanding of them: who developed them, how do they work, what data has been used to train the system, and are these data representative of our culture, traditions and institutions? We need to know what algorithmic inferences are and which ones are being used in the system and/or are subsequently made. 
We must find out what the instructions to the systems have been and who has given them, and how all of this, together and independently, can impact workers’ rights up and down value and supply chains, today as in the future. Admittedly, this is not a small set of tasks. To know what to ask and what to look for requires specific, and for many new, competencies. In some countries, management might be obliged by law to provide workers with some of this information. In some, management might be interested in engaging with the workers and therefore happy to share what they know. In others, management might be tight-lipped, saying nothing and not engaging meaningfully with the workers. In all cases, workers and their representatives could begin by defining general principles for the use of digital technologies in workplaces (see, for example, the British TUC’s 2021 report When AI is the Boss). They could then map, analyse and query each system used. With this knowledge, they can negotiate guardrails and permissions around the systems’ current as well as future impacts on workers. Managerial fuzz Yet it is not only the workers who urgently need to build capacity – so too do the employers who are deploying digital technologies. Reports from unions across the world reveal that managers “don’t know what they should know” either. Maybe the human resources department is using an automated scheduling tool that the IT department purchased on the order of executive management. Who is responsible for the impacts of the tool? Who has been trained to identify and remedy harms? In many cases, the division of responsibility between managers with regards to the governance of these technologies has not been made clear. Managerial fuzz abounds. Who has informed the employees about the system? Do the systems actually do what they claim? How should they be governed for (un)intended harms, and by whom? 
Who is evaluating the outcomes and making the final decision on whether or not to go with the system’s recommendations or results? What are the rights of those affected? It is alarming, to say the least, that so many workers report that they have never been informed about what digital technologies their employer is using to manage them. Equally concerning is the fact that managers are deploying technologies they have not properly understood. Given that the vast majority of digital technologies deployed in workplaces are third-party systems, if employers aren’t governing the technologies that are designed by others yet deployed in their workplace, control and power seem to be slipping further away from the workers and into the hands of the vendors and/or developers. The labour-management relation is thus becoming a three-party relation, yet few fully realise this. The increasing power of third-party vendors and developers is occurring at the expense of the autonomy of both labour and management. This, in turn, will indirectly, if not directly, have a negative influence on worker power. Governmental omissions Many governments across the world have already improved, or are looking into improving, data protection regulations, and also into regulating AI. An element of these regulatory proposals has to do with mandatory audits or impact assessments. Whilst this is good, there are some worrying tendencies. Firstly, no government is discussing that these audits or assessments should be made in cooperation with the workers and/or their representatives. This includes within the EU – otherwise heralded as a region in support of social dialogue. Secondly, they all assume that the tech developers and/or management have the competencies they need to meaningfully conduct these audits and assessments. Do they, though? Is self-assessment sufficient? Is it acceptable that they alone decide (if they actively do at all) that a system is acceptable to use if it is fair to 89 per cent of the workers? 
What about the remaining 11 per cent? Shouldn’t the workers concerned have a say? Capacity building is happening To fix all of these issues, there can be little doubt that capacity building is required. Fortunately, over the last one to two years, more and more unions have been doing exactly this. The global union federation for public services, PSI, is this year concluding a three-year capacity building project called Our Digital Future. It is training regional groups of digital rights organisers, trade union leaders and bargaining officers, and equipping them with tools and guides to help bridge the gap from theory to practice and strengthen their collective bargaining. The International Transport Workers’ Federation is running a two-year union transformation project that is introducing unions to a tailor-made Digital Readiness Framework that seeks to help unions tap into the potential of digital technologies - but responsibly and with privacy and rights at heart. Education International has launched a three-part online course on their ALMA platform on the challenges of EdTech and possible union responses. The British TUC has just launched an e-learning course called Managed by Artificial Intelligence: Interactive learning for union reps that, in a practical and guided way, helps unions map the digital technologies used and from there form critical responses. 
The AFL-CIO in the United States has created a Technology Institute that, according to AFL-CIO president Liz Shuler, is “a hub for skills and knowledge to help labour reach the next frontier, grow and deploy our bargaining power, and make sure the benefits of technology create prosperity and security for everyone, not just the wealthy and powerful.” Three Norwegian unions, Nito, Finansforbundet and Negotia, have collaborated to create an online course for shop stewards that offers a general introduction to AI systems in workplaces as well as a tool to support shop stewards in asking the necessary questions to protect workers’ rights and to hold management responsible. Many other national, regional and global unions are leaping into this capacity building work through workshops and conferences on the digitalisation of work and workers. These events are inspiring their continued work to transform their strategies and table new demands in collective bargaining. The thrust from the unions will bring employers to the table, and in turn entice them to know what they need to know to address the union demands. Given the sluggishness of, and gaping holes in, current governmental AI regulation discussions, collective bargaining will be essential for workers and their unions in order to reshape the digitisation of work with respect for workers’ fundamental rights, freedom and autonomy.

  • Keeping Work Human (podcast)

    In this episode of the Pondering AI podcast by Kimberly Nevala, we cover many of the topics I find pertinent in our quest to reshape the digitalisation of work: from tech determinism to the value of human labour, what I call 'managerial fuzz', collective will, digital rights, and participatory AI deployment. Hear us discuss the path of digital transformation and the self-sustaining narrative of tech determinism, and why I think there is an urgent need for robust public dialogue, education and collective action. Hear me decry the ‘for-it-or-against-it’ views on AI and why I embrace being called a Luddite - why? See the image below and do read Jathan Sadowski's full article called "I’m a Luddite. You should be one too". It's not all doom and gloom though. We also discuss the concept of "Inclusive Governance" - how AI technologies could be less harmful and more supportive of fundamental rights if management and labour governed the technologies together. To this end, we all need to build capacity so we can tap into the benefits of AI while avoiding harm. I end with what Kimberly calls a "persuasive call for responsible regulation, radical transparency and widespread communication to combat collective ignorance". Access the transcript of this episode here.

  • Når din chef er en algoritme (When your boss is an algorithm). Podcast

    In late 2022, nursing a cold, I was invited to discuss Surveillance at Work with Christiane Vejlø and Peter Svarre. It's in Danish - jump on in and listen to examples of what goes wrong when your boss is an algorithm. Description below, translated from the Danish. Podcast here Do you sometimes feel that your boss treats you unfairly? Then the idea of replacing him or her with technology free of prejudice and grudges might be tempting. Imagine a working life where you are assessed precisely on what you do, and no one comments on your gender, your age, your skin colour, your sexuality, your appearance or your bad mood. You are quite simply judged objectively on your performance. It's called algorithmic management. And it's going to be big. But will it be good? We grapple with the algorithm boss in today's episode of Del og Like. On the panel: digital advisor Peter Svarre and expert in the future of work(ers) Christina Jayne Colclough. The host is Christiane Vejlø. The programme is produced by Elektronista Media for the ADD project.

  • Speech: New Regulation Needed

    In this 15-minute speech at the Chilean Congreso Futuro, held in January 2023, I suggest a number of regulatory policies needed to protect workers' rights in digitalised workplaces. In the speech I make 3 main points: (1) the current digital trajectory is leading to narrow, exclusive labour markets and violations of human rights; (2) globally, regulation specific to the digitalisation of labour markets and workers is poorly developed; (3) we need our governments to reprioritise labour. Offering examples of the harms workers are experiencing in digitalised workplaces, I end the speech with suggestions for the new - or revised - policies we need, as well as why the current regulatory move towards regulating AI through certifications and standards is problematic. Congreso Futuro The congress, Without Real Limits, is organised by the Chilean government. It welcomes scientists from across the world to offer their visions of what a universe of infinite possibilities would be like. The Congreso is open to the public across all 16 regions in Chile as well as live-streamed to over a million viewers.

  • The Competence Wheel

    Based on a municipality's wish to deploy digital technologies responsibly and knowledgeably, the Why Not Lab has created the Competence Wheel - a 5-step journey to ensure fairness, transparency, empowerment and ownership over the digital technologies deployed. Identifying the Problem In this decentralised municipality, over 300 digital systems are in use - some more heavily than others. Most of the systems had been developed in each department independently of the other departments. None of the current systems used artificial intelligence, machine learning or deep learning, yet the municipality expected such systems to be used sooner rather than later. Most of the systems are third-party systems, developed either by the association of municipalities or by private companies. The municipality was rather surprised by the number of digital systems they had, and they had no clear idea of the criteria on which the systems had been procured. Whilst the systems did support some decision making, the leadership also believed that they could benefit far more from evidence-based (data-driven) policy making than they already do. Many of the department heads reported that the competencies to operate the systems were not widely shared, nor did the municipality have a digital strategy across departments. Realising the need for a more coherent, municipality-wide strategy to support policy making and to protect the generalised trust between citizens and the municipality, the Why Not Lab was tasked with offering suggestions to support a transparent, inclusive and deliberate digital transformation in the municipality. Deploying Digital Technologies Responsibly Drawing on conversations with the working group as well as lessons learnt from public and private sector failures in the deployment of digital technologies, the Why Not Lab created the Competence Wheel - 5 key competencies that should be mainstreamed within and across all municipal departments. 
These competencies will ensure: that digital technologies empower the public sector and the civil servants responsible for deploying them; that the use of digital systems is made transparent to citizens, together with clear lines of responsibility and points of contact; that relevant staff and political heads understand and can explain how the systems work, what their purpose is, and what rights those affected by the systems have; that all digital systems are periodically governed to ensure that rights are respected and (un)intended negative consequences are identified and rectified; and that all deployed systems can be adapted based on the outcomes of that governance. The 5 competencies in short The 5 competencies are interrelated: each strengthens the others, and all depend on the others for a coherent and responsible digital deployment. #1 - Negotiate: This competence concerns the demands the municipality puts in the contracts with suppliers and procurement partners. Here it is important to negotiate for (1) joint data access and control, (2) stringent third-party limitations on the repurposing of data derived from the systems, and (3) the right to demand changes to the systems if harms or other negative outcomes are detected. This competence is strongly linked to the next one on understanding the systems. #2 - Understand: To ensure that the civil servants and departments deploying the systems are the responsible parties, they must understand how the systems work, what data they are trained on, what the instructions are, and how the systems reach the outcomes they do. This is not a given, especially in machine learning systems, but it is at the same time an essential competence if humans are to be the responsible agents. #3 - Explain: The municipality must at all times be able to translate this understanding into coherent explanations of how the systems work and how outcomes have been reached. 
This competence is essential to protect citizen rights, to ensure inter-municipal learning, and to uphold trust between citizens and the public sector. #4 - Govern: The governance competencies relate to conducting impact assessments prior to deploying digital technologies, as well as establishing governance bodies tasked with periodically reviewing the technologies for harms and other unintended or unwanted negative outcomes. These bodies must be inclusive - in other words, consist of representatives of those who are subjects of the systems, in addition to management and employees. #5 - Adapt: How systems should be adapted will depend on the outcome of the governance stage. In addition, adaptation rights will depend on the contracts signed and on whether, in the negotiation stage, the public authority secured the right to demand adaptations if harms or other negative outcomes are identified. At all times it must be the inclusive governance body that decides whether the technical adaptations are sufficient and acceptable. In other words, governance and adaptation responsibilities must never be a purely technical endeavour. Working with the Competence Wheel The competence wheel will be used to address the problems identified at the start of the project. In a form of centralised decentralisation, the municipality's various departments will be guided through the competence wheel in cooperation with a central team. The aim is to ensure that the employees and management team take control and ownership over the systems deployed, and to ensure that any third-party systems are designed and run with citizens' rights and privacy at heart. For the citizens affected by the outcomes of these digital systems, the result should be an increased understanding of how municipal decisions are made, on what basis, what their rights are, and who to turn to with questions and queries. 
The Why Not Lab will be supporting the work in the municipality as they pursue the 5 competencies and the sub tasks under each.
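As a purely illustrative aside (the system name and code below are invented by the editor, not part of the Why Not Lab's materials), the wheel can be thought of as a repeating per-system review cycle - Negotiate, Understand, Explain, Govern, Adapt, then back to governing - which a municipality might track along these lines:

```python
# Hypothetical sketch: tracking which of the 5 competencies have been
# completed for each deployed system, treating the wheel as a cycle.

COMPETENCE_WHEEL = ["Negotiate", "Understand", "Explain", "Govern", "Adapt"]

def review_cycle(system: str, completed: set) -> list:
    """Return the competencies still outstanding for a given system."""
    return [step for step in COMPETENCE_WHEEL if step not in completed]

# e.g. an imaginary third-party scheduling tool where the contract is
# signed and the system understood, but governance has not yet started:
outstanding = review_cycle("scheduling-tool", {"Negotiate", "Understand"})
print(outstanding)  # ['Explain', 'Govern', 'Adapt']
```

The point of the cycle, as the text stresses, is that "Adapt" never ends the process: adapted systems go back through governance, and the decision of whether an adaptation suffices stays with the inclusive governance body, not the technicians.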

  • We Don't Know, What We Don't Know

    Unless workers and their unions build the capacity to understand how digital technologies work, and what causes the harms and otherwise negative impacts on workers, they will forever be one step behind. They simply must know, what they need to know, to collectively reshape the digitalisation of work and workers. Collective Lessons from 2022 During 2022, the Why Not Lab has provided training courses, keynote speeches and workshops for trade unions in all regions of the world. And one lesson stands out: they don't know, what they don't know. Put differently, they - like the majority of folks - do not know how digital technologies work, what data or inferences are, the role of algorithmic instructions, and how all of these, together and independently, can impact workers' rights. They therefore don't know why and how digital technologies affect workers, and in turn don't know where to begin, what questions to ask, and what demands to make to flush these harms out and remedy them. Without these insights into what makes digital technologies so different from their analogue ancestors, the depth and breadth of the often uneven effects of these technologies remain unexplored and uncontested. Instead, unions and workers are left dealing with the consequences after harms are caused, rather than putting safeguards and demands in place to prevent, as best as possible, these harms from ever happening. Empowering Workers Requires Capacity Building There is no doubt that to fix this, capacity building is required. Fortunately, this is happening across the world, and we are heavily involved. Three Norwegian unions have collaborated to create a course for shop stewards that is a general introduction to artificial intelligence systems in workplaces. The global union for public services unions, PSI, is running a 3-year capacity building project, Our Digital Future. 
It is training regional groups of Digital Rights Organisers, trade union leaders and bargaining officers, equipping them with tools and guides to help bridge the gap from theory to practice and strengthen their collective bargaining. The International Transport Workers' Federation, ITF, is running a 2-year Union Transformation project that uses the Why Not Lab's Digital Readiness Framework to help unions tap into the potentials of digital technologies - but responsibly, and with privacy and rights at heart. Education International has launched a MOOC for their members on the pitfalls, challenges and potentials of EdTech. Many other national and regional unions are leaping into this work too, through workshops and conferences on the digitalisation of work and workers. These events are inspiring their continued work to transform their strategies and table new demands in collective bargaining. Data, Algorithms and AI So what, more concretely, do unions and workers need to learn? First and foremost: data. What it is, where it is extracted from, how it is inferred and otherwise used, and what then happens to it. From our everyday lives to the workplace, we can hardly escape this datafication of self and others. A common thought seems to be: "I've done nothing wrong, so who cares if they (the apps or digital platforms) take my data?" Shared by many, this very thought expresses all too well that the connection between data, inferences and algorithms and the jobs we are offered, or the advertisements and opinions we are fed, is not known. Nor is it known that what you do, and don't do, can affect the lives of others. Understanding algorithms, AI and machine learning is therefore also important. If an algorithmic system (based on data inputs) has found a connection between age, gender, postcode and productivity, then the likelihood of someone who doesn't match that pattern getting the job will significantly decline. 
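To make that mechanism concrete, here is a deliberately simplified, entirely hypothetical sketch (the names, postcodes and numbers are invented, and no real hiring system works this crudely): a scoring rule that has "learned" a spurious postcode-productivity link from skewed historical data will rank the more skilled candidate lower simply for not matching the pattern.

```python
# Toy illustration of learned bias: past hires skewed towards one
# postcode, so the "learned" score penalises candidates from elsewhere
# regardless of their actual skill.

def learned_score(candidate: dict) -> float:
    """Mimics a model that found a spurious link between postcode
    and 'productivity' in its biased training data."""
    score = candidate["skill"]            # the legitimate signal
    if candidate["postcode"] != "NW1":    # the spurious signal
        score -= 30                       # penalty learned from skewed data
    return score

a = {"name": "A", "skill": 85, "postcode": "NW1"}
b = {"name": "B", "skill": 90, "postcode": "E17"}  # more skilled candidate

print(learned_score(a))  # 85
print(learned_score(b))  # 60 - loses purely because of the biased correlation
```

The rotten-tomatoes point from the interview above is exactly this: the rule itself is mechanical; the harm comes from the historical data it was built on.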
(A Why Not Lab presentation held December 2022) Managerial Fuzz Yet it is not only the workers who urgently need this capacity building - so too do the employers who are deploying these technologies. Reports from unions across the world reveal that managers don't know, what they should know, either. Maybe the human resources department is using an automated scheduling tool that the IT department purchased on the order of executive management. In many cases, the division of responsibility between managers with regards to the governance of these technologies has not been made clear. Managerial Fuzz abounds. Who has informed the employees about the system? Do the systems actually do what they claim 'on the tin'? How should they be governed for (un)intended harms, and by whom? Who is evaluating the outcomes and making the final decision to go with the system's recommendations or results, or not? What are the rights of those affected? It is alarming, to say the least, that so many workers report that they have never been informed about what digital technologies the employer is using to manage them. Equally concerning is the fact that managers are deploying technologies they have not properly understood. Yet the employers are, and should be held, responsible for the impacts of these systems. The Pac Man Interestingly, models for algorithmic audits or impact assessments never question whether management actually has the competencies needed to conduct these audits and assessments, or indeed to govern the technologies. They all implicitly assume management does. This in turn begs the question: who is actually in control here? Given that the vast majority of digital technologies deployed in workplaces are third-party systems, the answer is probably the vendors and/or developers. The labour-management relation is thus becoming a three-party relation, yet few realise this. The increasing power of the Pac Man, i.e. 
the third-party vendors and developers, is occurring at the expense of the autonomy of labour and management. This, in turn, will indirectly if not directly influence worker power. What's next? Looking ahead into 2023, the Why Not Lab's calendar is bursting with exciting projects that will continue to build the capacity of unions, governments and public services. Together with the unions we will expand the contents of our courses. We will refine the tools we have piloted to support unions' collective bargaining on workers' digital rights. And we will work with unions to co-develop and implement our Algorithmic Co-governance Guide. We will also be working on projects with universities, bridging the expertise of academia and workers' realities. Excitingly, plans are in the making to author a "book" - a combination of text, podcasts and videos - for workers about the digitalisation of work and workers. And we will not stop doing our part to highlight the gaps identified above within the OECD, the G7, G20, the EU, the WTO and elsewhere with regards to ensuring that digital technologies respect workers' rights, freedoms and autonomy. This year has been a blast. High fives to every organisation who has shown leadership in the quest to understand and reshape digitalisation!

  • Panel: Worker Insights & Charting a Better Path for Workplace AI

    In Partnership on AI's webinar, "Worker Insights & Charting a Better Path for Workplace AI", held on Wednesday, October 12, PAI's Research Scientist for the Shared Prosperity Initiative, Stephanie Bell, presented the results of an international study with workers on their experiences of AI, and on opportunities to improve job quality and business outcomes by increasing worker voice and participation in the development and deployment of workplace AI. In the panel debate, Dr Christina Colclough from The Why Not Lab, together with Dr Laura Nurski (Research Fellow & Program Lead, Future of Work, Bruegel), were asked to comment on the study. AI and Job Quality - Insights from Frontline Workers The report is well worth a read. It highlights: how workplace AI's organisational function and use, and the status of the workers using it - not its technology type - shape outcomes for workers; the importance of inclusive governance and of increasing worker voice in AI development and deployment, both for improved job quality for workers and better business outcomes for employers; the common disconnects between the technological needs perceived by AI creators and purchasers (e.g., senior company leadership) and the needs identified by the frontline workers who use these technologies; the need for better public education, especially for workers and managers, on workplace AI systems and their potential impacts; and the connections to current and proposed legislation related to AI and work (including the EU's GDPR and AI Act), with recommendations for policymakers to take action on these issues. Many of these insights mirror the Why Not Lab's work with unions and the policy changes we recommend. Check what Colclough had to say from 33:33 onwards. Kick back and watch the video, but also do read Laura Nurski's newest paper, "The impact of artificial intelligence on the nature and quality of jobs" - it is excellent.

  • Training Bargaining Officers - summary from Peru

    This article was written by Mayra Castro, Public Services International Inter-Americas. "As trade unionists, we cannot accept that ‘the system’ is more important than the workers" This statement was made by Christina Colclough during her presentation on the fundamentals of data, artificial intelligence and algorithms in society and at work at PSI's workshop on collective bargaining on digitalisation for negotiators and trade union leaders, which took place in Lima on October 16-17. The Covid-19 pandemic forced everyone to adapt to a new reality in which work and most day-to-day activities moved to the virtual world. Even professions for which we never imagined this would be possible, including health care and basic education, moved to the digital world. While this reality existed before the pandemic, it was accelerated and intensified, making the big tech companies even more profitable while workers had to learn how to demand their rights in this new reality. Governments around the world are implementing digital solutions using new technologies with little oversight or accountability. Christina presented the basic concepts of data and algorithmic systems in our society and at work: "it is the trade union movements, especially those in the public sector, that have the greatest capacity to advance the debate about the power of large digital companies". Digitalisation of public services is already a reality in many countries, and trade unions must work to protect workers' rights so that digital transitions are transparent and inclusive, without letting the systems overtake them. "As trade unionists we cannot accept that ‘the system’ is more important than the workers," said Christina. 
While presenting the definition of an algorithm and how such systems work on the basis of data, Christina reminded us that, in the end, it is we humans who determine what algorithms should do, and therefore it is up to trade unions to question their use and demand a more transparent and fairer implementation. It is also up to trade unions to negotiate the right to our data and data protection. "Data is about power; we have to understand that there is power in data. So we have to think about how we deal with that power in trade unions and how to make those that hold and control data more accountable," said Christina. A solution to this reality, according to Christina, is that "we have to reshape the digitalisation of work and workers. We, as trade unions, can say that some technologies are good, but we can't do that without reshaping digitalisation. We are being commodified. We must also protect our right to be human." Unions must negotiate how the data is collected; why and by whom it is analysed; where the data is stored and who has access to it; and, at the last stage, where it ends up after it has been used (offboarding). "The solution to data ownership and the benefits of data is data taxation. We need to stop quantifying people. I hope that we can limit this through collective bargaining," she explained. Christina presented a guide for the establishment of collective bargaining for the co-governance of algorithmic systems, which includes questions trade unionists can ask during the implementation of digital systems in workplaces. Priority areas include transparency, accountability of the tools, the right to redress, data protection and rights, threats and benefits, whether it is possible to make adjustments to the tool, and finally co-governance. Digital Bargaining Hub PSI’s Digital Bargaining Hub was presented at the end of the workshop. The hub is an online resource for trade unionists and others interested in promoting workers' rights through collective bargaining. 
It helps users to understand key issues related to the digitalisation of work, and includes real-world bargaining clauses and language that can be adapted and used at the bargaining table. "We have categorised the information into key topics and subtopics, with commentary to help unions find what they need. The information can also be accessed through the database of existing clauses and model texts collected from unions around the world," explained Hanna Johnston, an expert who supported the creation of the Hub. The issue of digitalisation at PSI Gabriel Casnati, the PSI project coordinator responsible for the digitalisation project in the region, explained that the "strategy to grow this in PSI is to make clear how it cuts across the main priorities of the organization: the future of trade union organisation, the future of work, freedom of association, democracy, tax justice, human rights, free trade agreements and quality public services". Read the original article in English and Spanish here

  • Worker surveillance is spreading - podcast

    This podcast (in Danish) is concerned with the rise of digital tools and systems used to surveil workers. Read the description of the podcast in Danish here; English translation below. Workflow ep. 26: Worker surveillance is spreading As more of our daily work takes place on digital platforms, it has become possible for employers to collect data on everything we do. It is, however, a development that should be discussed and regulated, say critics. As more and more of our daily work takes place on digital platforms, it has become possible for employers to collect data on everything we do: how many e-mails we write, how often we call customers or other employees, how many times we press the keyboard, how much time we spend on social media or other pastimes, and so on. It can also mean tracking how much we move, and perhaps even what we weigh and how much we eat, if we participate in company health programs. On the one hand, this might help us plan our work better, ensure more focus time, optimise meetings and make us more productive (and perhaps even healthier); on the other hand, there is a risk that employee monitoring becomes a massive intrusion into our privacy. We have spoken to digital consultant and author Peter Svarre and consultant and trade union expert Christina Colclough about what kind of tools companies use, what the idea behind them is, and what the situation looks like here in Denmark. Links / show notes Also read IDA's topic: Are you being monitored at work? 
Read more about Peter Svarre on his website Read about Christina Colclough's Why Not Lab Read more about the ADD project: Algorithms, Data & Democracy Read more about HK's digital little sister HK LAB Find out about Workflow - a podcast about the future of working life Find Workflow on Apple Find Workflow on Spotify Workflow is produced for IDA, the Danish Society of Engineers, by: lecturer at the Department of Sociology at the University of Copenhagen and labour market researcher Nana Wesley Hansen, and tech journalist Anders Høeg Nissen

  • Training material - Digitalisation of Work and Workers

    The global union for public services unions - PSI - is running a 3-year capacity building project called "Our Digital Future". The Why Not Lab is fortunate to be working with PSI on this project. It trains trade unions in all regions of the world on the digitalisation of public services and employment. We are making all of the training material public. Read on! Digital Rights Organisers The project works with 3 distinct groups of trade unionists. Firstly, in each region - Africa, Asia-Pacific, Latin America, and Europe, North America and the Caribbean - Digital Rights Organisers participated in 3 in-depth sessions aimed at equipping them with key insights, knowledge and practical tools to become the regional experts who can support unions in their collective bargaining and political advocacy for much stronger rights for workers in digitalised workplaces. See the training material for Europe, North America, Canada and the Caribbean below. If you are interested in the material for the other regions, please contact me. Union Leaders The second group we held workshops for were executive-level trade union leaders. They are the gatekeepers of union transformation, standing in murky waters as political, organisational and strategic decisions need to be made, yet with little union-specific direction available. These workshops offered the union leaders a space to share their concerns, experiences and strategies. Once again, the material for these workshops is available for all regions upon request, including our bespoke Digital Impact Framework - an online guide to help unions tap into the potentials of digital technologies while at the same time protecting their members' privacy and upholding human rights. Collective bargaining officers and negotiators The third and final group we worked with are those responsible for collective bargaining and negotiations, as well as the union staff who support them. 
As more and more workplaces and public services deploy digital tools and systems, and as many workers across the world have very few legal rights and protections, collective bargaining will be key to reshaping digitalisation so that fundamental rights and freedoms are protected. These hands-on, practical workshops were held in autumn/winter 2022-2023 and introduced participants to 3 tools developed for this project: The Negotiating Data Rights Tool - an online tool that, step by step, helps bargaining officers traverse the complex fields of data protection and managerial opaqueness, with the aim of bridging legal gaps and ensuring workers have strong, collective data rights. The Co-Governance Guide to Algorithmic Systems - a guide consisting of 7 themes and 21 questions workers should be asking management to ensure digital transparency, liability and responsibility, while working with management to ensure these systems protect labour market diversity and rights. (sneak peek the guide here) PSI's Digital Bargaining Hub - a searchable database of collective bargaining clauses related to digitalisation. These clauses and framework agreements come from public and private sector unions across the world, organised in a taxonomy. As mentioned above, we can share all of the training material for all regions. Simply ask

  • OECD report: Using Artificial Intelligence in the workplace

    July 8, 2022: OECD Social publishes a new working paper on the main ethical risks in connection with the deployment of #AI in workplaces. It's a lengthy report, but well worth a read. Some of the Why Not Lab's work is cited in the report. Abstract Artificial Intelligence (AI) systems are changing workplaces. AI systems have the potential to improve workplaces, but ensuring trustworthy use of AI in the workplace means addressing the ethical risks it can raise. This paper reviews possible risks in terms of human rights (privacy, fairness, agency and dignity); transparency and explainability; robustness, safety and security; and accountability. The paper also reviews ongoing policy action to promote trustworthy use of AI in the workplace. Existing legislation to ensure ethical workplaces must be enforced effectively, and serve as the foundation for new policy. Economy- and society-wide initiatives on AI, such as the EU AI Act and standard-setting, can also play a role. New workplace-specific measures and collective agreements can help fill remaining gaps. Conclusions from the Executive Summary Trustworthy use of workplace AI means recognizing and addressing the risks it can raise about human rights (including privacy, fairness, agency and dignity); transparency and explainability; robustness, safety and security; and accountability. AI’s ability to make predictions and process unstructured data is transforming and extending workplace monitoring. The nature of the data that can be collected and processed also raises concerns, as it can link together sensitive physiological and social interaction data. Formalizing rules for management processes through AI systems can improve fairness in the workplace, but AI systems can multiply and systematize existing human biases. The collection and curation of high-quality data is a key element in assessing and potentially mitigating biases – but presents challenges for the respect of privacy. 
Systematically relying on AI-informed decision-making in the workplace can reduce workers’ autonomy and agency. This may reduce creativity and innovation, especially if AI-based hiring also leads to a standardization of worker profiles. On the other hand, the use of AI systems at work could free up time for more creative and interesting tasks. As for transparency and consent, job applicants and workers may not be aware that an AI system is in use, and even if they are, they may not be in a position to refuse its use. Understandable explanations about employment decisions that affect workers and employers are too often unavailable with workplace AI systems. Improved technical tools for transparency and explainability will help, although many system providers are reluctant to make proprietary source code or algorithms available. Yet enhanced transparency and explainability in workplace AI systems has the potential to provide more helpful explanations to workers than traditional systems. Workers can struggle to rectify AI system outcomes that affect them. This is linked to the lack of explainability, but also to a lack of rights to access the data used to make decisions, which makes those decisions difficult to challenge. Contract and gig workers in particular can face such issues. AI systems present many opportunities to strengthen the physical safety and well-being of workers, but they also present some risks, including heightened digital security risks and excessive pressure on workers. It can also be more difficult to anticipate the actions of AI-based robots due to their increased mobility and decision-making autonomy. Deciding who should be held accountable in case of system harm is not straightforward. Having a human “in the loop” may help with accountability, but it may be unclear which employment decisions require this level of oversight. Audits of workplace AI systems can improve accountability if done carefully. 
Possible requisites for audits include auditor independence; representative analysis; data, code and model access; and consideration of adversarial actions. Enforcing and strengthening existing policy should be the foundation for policy action, even as society-wide and workplace-specific measures on AI help fill gaps. The reliance of workplace AI systems on data can bring them into conflict with existing data protection legislation. For example, cases brought under Article 22 of the EU’s General Data Protection Regulation (GDPR) have required companies to disclose data used in their AI systems, or to reinstate individuals dismissed solely based on algorithms. Employment anti-discrimination legislation is relevant to addressing some concerns about workplace AI bias. Legislation on deceptive practices and consumer protection is being used to require more transparency from companies about the functioning of workplace algorithms, and to require developers to meet the ethical standards they advertise for their products. Workers’ legal rights to due process in employment decisions can be used to require increased transparency and explainability. A number of OECD countries are considering society-wide AI legislative proposals that would also apply to the workplace. A notable example is the EU AI Act, which would classify some AI systems used in employment as “unacceptable risk” (e.g. those considered manipulative) and the rest as “high risk”. This would subject them to legal requirements relating to data protection, transparency, human oversight and robustness, among others. National or international standard-setting, along with other self-regulatory approaches, can provide technical parameters for trustworthy AI systems, notably for workplace use. Regulatory efforts have also zeroed in on the use of AI in the workplace. In the US, Illinois and Maryland require applicant consent for the use of facial recognition tools in hiring. 
The New York City Council mandates annual algorithmic bias audits for “automated employment decision tools”. Formalising an agreement between unions and business associations, legislation in Spain now mandates transparency for AI systems affecting working conditions or employment status. Indeed, social partners have proactively set out proposals on workplace AI use and will be key stakeholders in developing new legislation.

Citation: Salvi del Pero, A., P. Wyckoff and A. Vourc'h (2022), "Using Artificial Intelligence in the workplace: What are the main ethical risks?", OECD Social, Employment and Migration Working Papers, No. 273, OECD Publishing, Paris,
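At the heart of the bias audits mandated in New York City is a comparison of selection rates across demographic groups, summarised as impact ratios. A minimal sketch of that calculation is below; the function name and the toy data are invented for illustration, and real audits follow the methodology set out in the applicable rules.

```python
from collections import Counter

def impact_ratios(candidates):
    """Per-group impact ratios for a screening tool.

    candidates: list of (group, selected) pairs, selected being a bool.
    A group's impact ratio is its selection rate divided by the selection
    rate of the most-selected group.
    """
    totals = Counter(group for group, _ in candidates)
    selected = Counter(group for group, was_selected in candidates if was_selected)
    rates = {group: selected[group] / totals[group] for group in totals}
    best_rate = max(rates.values())
    return {group: rate / best_rate for group, rate in rates.items()}

# Toy data: group A is selected at 50%, group B at 25%.
data = ([("A", True)] * 5 + [("A", False)] * 5 +
        [("B", True)] * 2 + [("B", False)] * 6)
ratios = impact_ratios(data)
print(ratios)  # {'A': 1.0, 'B': 0.5}
```

Group B's ratio of 0.5 would fall well below the 0.8 (“four-fifths”) benchmark commonly used in US employment discrimination analysis, flagging the tool for closer scrutiny.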

  • Reshaping the Digitization of Public Services

    This article in the New England Journal of Public Policy argues that the current digitalisation of public services is occurring in a void. The void is created by poor public procurement and/or supplier contracts, insufficient digital laws with a lack of governance processes and bodies, and competency gaps among all parties involved. The article suggests how and why this void can be filled to protect quality public services and decent work.

    Abstract: From the vantage point of public services as a service as well as a workplace, the article discusses potential remedies to ensure that digitalization does not negatively affect the quality of public services as services and as places of employment. It spells out the additional measures that will be needed to fill the void ethically and to ensure that fundamental human rights, freedoms, and autonomy are protected. It concludes that we need to simultaneously slow down and hurry up: we must take the time to get the necessary safeguards in place and continually ask whether more technology really is the right solution to the challenges we face, but we must also hurry up to build a critical understanding of the current mode of digitalization so that alternatives can be tabled. The article is based on conversations with union members across the world, a literature review, and the author’s own studies of the digitalization of public services and employment.

    Get the full article here:

    Recommended Citation: Colclough, Christina J. (2022) "Reshaping the Digitization of Public Services," New England Journal of Public Policy: Vol. 34, Iss. 1, Article 9. Available at:

  • Building a network of tech champions

    In the spring of 2022, the Why Not Lab held a training course for the UK trade union Community aimed at empowering the staff to become Digital Champions. Below, Anna Mowbray, Research and Policy Officer at Community, reflects on the course.

    Community staff recently finished participating in a training course delivered by Dr Christina Colclough, Head of the Why Not Lab and expert in the interrelationships between work and technology. Staff from across Community have been trained up as tech champions, empowered to tackle digital issues in all of our workplaces.

    Over the course of the sessions, we explored the data lifecycle at work: what technologies are used in workplaces, the different stages of the process, and the relevant, useful questions we can ask about our members’ data and their rights at each stage. We also drew out the ways in which we as a union can ensure that workers have a seat at the table: questions we should ask, points we should raise, and structures we should set up to make sure that we have a space to work with management to shape technology change, for example by setting up a tech forum.

    For me, a particular highlight of the training was the algorithm game. In groups, we set about designing steps for a recruitment algorithm, which showed us just how much room there is for biases to creep in, and how trying to formulate shortcuts and rules to categorise and classify candidates can so easily create gross unfairness. This unfairness during recruitment is another crucial reason why trade unions like Community are taking this conversation so seriously: it’s about ensuring workers’ rights are protected even as the world of work changes around us.

    During the sessions we talked about algorithmic management, an umbrella term for a whole range of technologies used to discipline, measure, track, hire and fire workers. Recently, we’ve seen examples of smart cameras being used to observe workers’ behaviour in invasive ways that threaten their right to privacy. We considered the key questions that we all need to be asking of management throughout the process, for example: what rights of redress can we ensure if this goes wrong?

    Christina equipped us with a whole range of tools to enforce our members’ rights. We talked about how we can use the articles in the GDPR, including by running a transparency survey with shop stewards (based on GDPR Article 13 on data collection). And we explored the importance of going beyond this to negotiate for stronger collective data rights where there are gaps in the law, such as being able to object where workers are subject to inferences that are not directly related to their own personal data.

    Running this training at Community was critical because if we don’t know what technologies are used at work, we cannot move further to protect workers’ rights, dignity, and autonomy. Sometimes the conversation about digital tools at work sounds remote, or isolated from our workplaces, but the examples that shone out during the sessions show that these issues are happening in our workplaces today. Another key takeaway for me was how much it matters how we talk about this issue, to make sure that people can see how critical it is. Does talking about automation and technology leave you cold or confused? Perhaps it’s better to be talking about digital?

    This training course followed the work that Community has done developing recommendations for bargaining around technology at work. The next step in this important process will be the reps training due to be delivered this year, equipping our reps with the same tools to address digital systems at work.
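The lesson of the algorithm game can be sketched in a few lines of code: a seemingly neutral shortcut rule quietly filters out candidates with career breaks, even when their skills are identical. The rule, the candidate fields and the names below are invented purely for illustration and are not the actual game materials.

```python
def shortcut_screen(candidate):
    """A deliberately naive screening rule of the kind built in the game:
    reject anyone with an employment gap longer than 12 months."""
    return candidate["gap_months"] <= 12

candidates = [
    {"name": "Sam", "skills": 9, "gap_months": 2},   # no career break
    {"name": "Ada", "skills": 9, "gap_months": 18},  # e.g. parental leave
    {"name": "Lee", "skills": 4, "gap_months": 0},
]

shortlist = [c["name"] for c in candidates if shortcut_screen(c)]
print(shortlist)  # ['Sam', 'Lee']
```

Ada is screened out despite having the same skills as Sam, while a weaker candidate gets through: a tiny example of how categorising shortcuts can produce gross unfairness, and disproportionately so for workers (often women) who take caring breaks.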
    Thank you to Christina and to all the staff who joined the sessions and took the time to share such powerful insights, which we hope we can translate into real changes in our workplaces going forwards. If you’re a workplace rep, check out Christina’s guide to workplace co-governance of algorithmic systems and Community’s guide to bargaining around technology in the workplace, and reach out to the research team if you want to discuss digital at work.

    ****************************

    Original post here

    A huge thank you from the Why Not Lab to the Community staff who were so actively engaged. We look forward to following your work on this.
