
Blog Posts (74)

  • Training material - Digitalisation of Work and Workers

    The global union for public services unions - PSI - is running a 3-year capacity-building project called "Our Digital Future". The Why Not Lab is fortunate to be working with PSI on this project, which trains trade unions in all regions of the world on the digitalisation of public services and employment. We are making all of the training material public. Read on!

    Digital Rights Organisers

    The project works with 3 distinct groups of trade unionists. Firstly, in each region (Africa; Asia-Pacific; Latin America; and Europe, North America and the Caribbean), Digital Rights Organisers participated in 3 in-depth sessions aimed at equipping them with key insights, knowledge and practical tools to become the regional experts who can support unions in their collective bargaining and political advocacy for much stronger rights for workers in digitalised workplaces. See the training material for Europe, North America and the Caribbean below. If you are interested in the material for the other regions, please contact me.

    Union Leaders

    The second group we held workshops for were executive-level trade union leaders. They are the gatekeepers of union transformation, navigating murky waters as political, organisational and strategic decisions need to be made with little union-specific guidance available. These workshops offered the union leaders a space to share their concerns, experiences and strategies. Once again, the material for these workshops is available for all regions upon request, including our bespoke Digital Impact Framework - an online guide to help unions tap into the potential of digital technologies while protecting their members' privacy and upholding human rights.

    Collective bargaining officers and negotiators

    The third and final group we will work with are those who are responsible for collective bargaining and negotiations, as well as the union staff who are there to support them.
    As more and more workplaces and public services deploy digital tools and systems, and as many workers across the world have very few legal rights and protections, collective bargaining will be key to reshaping digitalisation so that fundamental rights and freedoms are protected. These workshops, which will be held in autumn/winter 2022-2023, will be hands-on and practical, and will introduce participants to 3 tools developed for this project:

    • The Negotiating Data Rights Tool - an online tool that helps bargaining officers traverse, step by step, the complex fields of data protection and managerial opaqueness, with the aim of bridging legal gaps and ensuring workers have strong, collective data rights.
    • The Co-Governance Guide to Algorithmic Systems - a guide consisting of 7 themes and 21 questions workers should be asking management to ensure digital transparency, liability and responsibility, while working with management to ensure these systems protect labour market diversity and rights. (Sneak peek the guide here.)
    • PSI's Digital Bargaining Hub - a searchable database of collective bargaining clauses related to digitalisation. These clauses and framework agreements come from public and private sector unions across the world, organised in a taxonomy.

    Stay posted for our training material! As mentioned above, we can share all of the training material for all regions. Simply ask.

  • OECD report: Using Artificial Intelligence in the workplace

    July 8, 2022: OECD Social publishes a new working paper on the main ethical risks connected with the deployment of #AI in workplaces. It's a lengthy report, but well worth a read. Some of the Why Not Lab's work is cited in the report.

    Abstract

    Artificial Intelligence (AI) systems are changing workplaces. AI systems have the potential to improve workplaces, but ensuring trustworthy use of AI in the workplace means addressing the ethical risks it can raise. This paper reviews possible risks in terms of human rights (privacy, fairness, agency and dignity); transparency and explainability; robustness, safety and security; and accountability. The paper also reviews ongoing policy action to promote trustworthy use of AI in the workplace. Existing legislation to ensure ethical workplaces must be enforced effectively, and serve as the foundation for new policy. Economy- and society-wide initiatives on AI, such as the EU AI Act and standard-setting, can also play a role. New workplace-specific measures and collective agreements can help fill remaining gaps.

    Conclusions from the Executive Summary

    • Trustworthy use of workplace AI means recognizing and addressing the risks it can raise regarding human rights (including privacy, fairness, agency and dignity); transparency and explainability; robustness, safety and security; and accountability.
    • AI's ability to make predictions and process unstructured data is transforming and extending workplace monitoring. The nature of the data that can be collected and processed also raises concerns, as it can link together sensitive physiological and social interaction data.
    • Formalizing rules for management processes through AI systems can improve fairness in the workplace, but AI systems can multiply and systematize existing human biases. The collection and curation of high-quality data is a key element in assessing and potentially mitigating biases - but presents challenges for the respect of privacy.
    • Systematically relying on AI-informed decision-making in the workplace can reduce workers' autonomy and agency. This may reduce creativity and innovation, especially if AI-based hiring also leads to a standardization of worker profiles. On the other hand, the use of AI systems at work could free up time for more creative and interesting tasks.
    • On transparency and consent: job applicants and workers may not be aware of AI system use, and even if they are, they may not be in a position to refuse it. Understandable explanations about employment decisions that affect workers and employers are too often unavailable with workplace AI systems. Improved technical tools for transparency and explainability will help, although many system providers are reluctant to make proprietary source code or algorithms available. Yet enhanced transparency and explainability in workplace AI systems have the potential to provide more helpful explanations to workers than traditional systems.
    • Workers can struggle to rectify AI system outcomes that affect them. This is linked to the lack of explainability, but also to a lack of rights to access the data used to make decisions, which makes those decisions difficult to challenge. Contract and gig workers in particular can face such issues.
    • AI systems present many opportunities to strengthen the physical safety and well-being of workers, but they also present some risks, including heightened digital security risks and excessive pressure on workers. It can also be more difficult to anticipate the actions of AI-based robots due to their increased mobility and decision-making autonomy.
    • Deciding who should be held accountable in case of system harm is not straightforward. Having a human "in the loop" may help with accountability, but it may be unclear which employment decisions require this level of oversight. Audits of workplace AI systems can improve accountability if done carefully.
    • Possible requisites for audits include auditor independence; representative analysis; data, code and model access; and consideration of adversarial actions.
    • Enforcing and strengthening existing policy should be the foundation for policy action, even as society-wide and workplace-specific measures on AI help fill gaps. The reliance of workplace AI systems on data can bring them into conflict with existing data protection legislation. For example, cases brought under Article 22 of the EU's General Data Protection Regulation (GDPR) have required companies to disclose data used in their AI systems, or to reinstate individuals dismissed solely on the basis of algorithms.
    • Employment anti-discrimination legislation is relevant to addressing some concerns about workplace AI bias. Legislation on deceptive practices and consumer protection is being used to require more transparency from companies about the functioning of workplace algorithms, and to require developers to meet the ethical standards they advertise for their products. Workers' legal rights to due process in employment decisions can be used to require increased transparency and explainability.
    • A number of OECD countries are considering society-wide AI legislative proposals that would also apply to the workplace. A notable example is the EU AI Act, which would classify some AI systems used in employment as "unacceptable risk" (e.g. those considered manipulative) and the rest as "high risk". This would subject them to legal requirements relating to data protection, transparency, human oversight and robustness, among others. National or international standard-setting, along with other self-regulatory approaches, can provide technical parameters for trustworthy AI systems, notably for workplace use.
    • Regulatory efforts have also zeroed in on the use of AI in the workplace. In the US, Illinois and Maryland require applicant consent for the use of facial recognition tools in hiring. The New York City Council mandates annual algorithmic bias audits for "automated employment decision tools". Formalising an agreement between unions and business associations, Spain has passed legislation that mandates transparency for AI systems affecting working conditions or employment status. Indeed, social partners have proactively set out proposals on workplace AI use, and will be key stakeholders in developing new legislation.

    Citation: Salvi del Pero, A., P. Wyckoff and A. Vourc'h (2022), "Using Artificial Intelligence in the workplace: What are the main ethical risks?", OECD Social, Employment and Migration Working Papers, No. 273, OECD Publishing, Paris, https://doi.org/10.1787/840a2d9f-en.

  • Reshaping the Digitization of Public Services

    This article in the New England Journal of Public Policy argues that the current digitalisation of public services is occurring in a void. The void is caused by poor public procurement and/or supplier contracts, insufficient digital laws, a lack of governance processes and bodies, and competency gaps among all parties involved. The article suggests how and why this void can be filled to protect quality public services and decent work.

    Abstract

    From the vantage point of public services as a service as well as a workplace, the article discusses potential remedies to ensure that digitalization does not affect the quality of public services as services and as places of employment. It spells out the additional measures that will be needed to fill the void ethically and ensure that fundamental human rights, freedoms, and autonomy are protected. It concludes that we need to simultaneously slow down and hurry up. We must take the time to get the necessary safeguards in place and continually ask whether more technology really is the right solution to the challenges we face. But also, we need to hurry up to build a critical understanding of the current mode of digitalization so alternatives can be tabled. The article is based on conversations with union members across the world, a literature review, and the author's own studies of the digitalization of public services and employment.

    Get the full article here: https://scholarworks.umb.edu/nejpp/vol34/iss1/9/

    Recommended Citation: Colclough, Christina J. (2022) "Reshaping the Digitization of Public Services," New England Journal of Public Policy: Vol. 34: Iss. 1, Article 9. Available at: https://scholarworks.umb.edu/nejpp/vol34/iss1/9


Pages (7)

  • About | The Why Not Lab | Championing ALT Digital

    About the Why Not Lab

    The Why Not Lab is a boutique, value-driven consultancy that puts workers at the centre of digital change. We offer our expertise exclusively to progressive organisations, trade unions and governments. The Why Not Lab has a two-fold mission to ensure that the digital world of work is empowering rather than exploitative. We:

    • Equip workers and their unions with the right skills, know-how and know-what to shape an alternative digital ethos that ensures collective rights in the digital age;
    • Put workers' interests centre stage in current and future digital policies.

    To bridge digital divides and prevent the objectification of workers that is currently underway, workers must be empowered so they can table an alternative digital ethos. The Why Not Lab aims to support exactly this through our training, policy and strategic support.

    The Why Not Lab is run by Dr Christina J. Colclough - a fearless optimist who believes that change for good is possible if we put our minds and hearts to it. She works with experts and partners across the world to provide the best advice at all times. Read more about Dr Colclough below.

    Please note: All workers across the world need to champion ALT Digital strategies and policies. We have therefore adopted a differential pricing principle so we can support workers and organisations from all regions. Do contact us with any inquiries.

    Dr Christina J. Colclough

    Regarded as a thought leader on the futures of work(ers) and the politics of digital technology, Christina is an advocate for the workers' voice. She has extensive global labour movement experience, where she led future of work policies, advocacy and strategies for a number of years. She was the author of the union movement's first principles on Workers' Data Rights and the Ethics of AI.
    A globally sought-after keynote speaker and workshop trainer, with over 200 speeches and trainings in the last 3 years, Christina created the Why Not Lab as a dedication to improving workers' digital rights. She is included in the all-time Hall of Fame of the world's most brilliant women in AI Ethics. See Christina's Wikipedia page here.

    Trusted Positions

    Christina is a Fellow of the Royal Society of Arts in the UK and an Advisory Board member of the Carnegie Council's new programme, the AI and Equality Initiative. She is also a member of the OECD One AI Expert Group and is affiliated with FAOS, the Employment Relations Research Center at Copenhagen University. In 2021, Christina was a member of the Steering Committee of the Global Partnership on AI (GPAI).

    Current Projects

    • Our Digital Future - a 3-year project with Public Services International aimed at building the capacity of unions in all regions of the world on the digitalisation of work and workers, and at co-designing union responses. Training material (reports & slides) is available upon request.
    • MOOC for unions - With a good camera, professional lighting and a yummy Røde microphone, the scene is set to shoot a number of short videos that in time will become a full-blown MOOC. It's all about the datafication of work, digital technologies and union responses.
    • Co-governing A.I. - Through thematic advice and training we are supporting a group of unions in a European country on the co-governance of algorithmic systems in workplaces. Their aim is to scale to the entire labour market.
    • UnionTech - Find the material & recordings from this 4-part series of workshops on #UnionTech here - courtesy of participants, presenters and FES. These workshops brought participants together to build their capacity to critically use & challenge digital technologies.
    • Digital Training USA - Pretty honoured to be working with a top-notch university in the US to create a series of workshops on digitalisation and its impacts on work and workers. The first round of workshops is tailor-made for union leaders.
    • Data Storytelling - The team behind WeClock offers, with support from FES, an in-depth, hands-on course on data storytelling. From responsible data collection to designing and running a campaign, participants learn how to analyse their data and use it in their campaigning.

    Testimonials

    John C. Havens, E.D., IEEE Global Initiative on Ethics of Autonomous & Intelligent Systems & Council on Extended Intelligence: In an environment where rhetoric often rules all, Christina provides hard-hitting yet pragmatic and solutions-oriented counsel on issues including the future of work, human autonomy, human rights, and technology governance in general. She is my "go to" person on any issues related to AI and the future of work, based on her specialized knowledge of workers' rights and actual global policy and economics relating to these issues, versus only aspirational techno-utopian ideals. She is also a gifted and personable speaker, transforming highly nuanced and complex technical and political issues into conversational, story-oriented speeches.

  • Contact | The Why Not Lab | Championing ALT Digital

    Let's Connect! And reshape Digital @ Work. We believe in the richness of diversity, equal opportunities and inclusive meetings, panels, speaker line-ups etc. We urge all requestors to diversify their events as much as possible, and will happily recommend excellent folks in our stead.

  • Workshops | The Why Not Lab | Championing ALT Digital

    Workshops

    See examples of the workshops we hold right here. Get inspired! We can mould and combine them so they fit your needs. Our workshop series include practical exercises, good literature to read and lots and lots of valuable information to boost your digital demands.

    • What's all this about digitalisation? - A critical introduction to the myths and realities of digitalisation. What's all this about digitalisation, and how is it affecting our life and career opportunities?
    • Algorithms @ work - Surveillance and monitoring systems are in sharp demand. What should workers demand to protect their human rights and stop the commodification of workers?
    • Co-governing algorithms - Algorithmic systems at work must be governed. We have the model! Learn how to hold management responsible and accountable for the digital systems they are applying.
    • Rights, gaps - use & fill them - Digital systems sometimes cut across existing laws and regulations. What are our legal rights, where are the gaps, and how should we fill them?
    • Negotiate the data lifecycle - Workers' collective data rights need improving across the world. We have developed the data lifecycle at work to illustrate key areas where unions and regulators must step in - and why!
    • The future of skills is competencies - Many future of skills debates neglect the key role of human competencies. This workshop puts them back on the agenda. Appraisal systems need improving to ensure diverse, inclusive labour markets!
    • Unions & data storytelling - Use tech wisely and get access to new sources of data that you can use in your negotiations, campaigning and organising. Know what's available, and how you can tap into it responsibly.
    • Organisational change - Transforming your organisation requires changes on many fronts: cultural, strategic, leadership, tools, skills and more. We have the tools to guide you. Map the disruption and head for change.
