OECD report: Using Artificial Intelligence in the workplace
July 8, 2022: OECD Social publishes a new working paper on the main ethical risks raised by the deployment of #AI in workplaces. It's a lengthy report, but well worth a read. Some of the Why Not Lab's work is cited in the report.
Artificial Intelligence (AI) systems are changing workplaces. AI systems have the potential to improve workplaces, but ensuring trustworthy use of AI in the workplace means addressing the ethical risks it can raise. This paper reviews possible risks in terms of human rights (privacy, fairness, agency and dignity); transparency and explainability; robustness, safety and security; and accountability. The paper also reviews ongoing policy action to promote trustworthy use of AI in the workplace. Existing legislation to ensure ethical workplaces must be enforced effectively, and serve as the foundation for new policy. Economy- and society-wide initiatives on AI, such as the EU AI Act and standard-setting, can also play a role. New workplace-specific measures and collective agreements can help fill remaining gaps.
Conclusions from the Executive Summary
Trustworthy use of workplace AI means recognizing and addressing the risks it can raise about human rights (including privacy, fairness, agency and dignity); transparency and explainability; robustness, safety and security; and accountability.
AI’s ability to make predictions and process unstructured data is transforming and extending workplace monitoring. The nature of the data that can be collected and processed also raises concerns, as it can link together sensitive physiological and social interaction data.
Formalizing rules for management processes through AI systems can improve fairness in the workplace, but AI systems can multiply and systematize existing human biases. The collection and curation of high-quality data is a key element in assessing and potentially mitigating biases, but presents challenges for privacy.
Systematically relying on AI-informed decision-making in the workplace can reduce workers’ autonomy and agency. This may reduce creativity and innovation, especially if AI-based hiring also leads to a standardization of worker profiles. On the other hand, the use of AI systems at work could free up time for more creative and interesting tasks.
On transparency and consent, job applicants and workers may not be aware that AI systems are being used, and even when they are, they may not be in a position to refuse their use.
Understandable explanations about employment decisions that affect workers and employers are too often unavailable with workplace AI systems. Improved technical tools for transparency and explainability will help, although many system providers are reluctant to make proprietary source code or algorithms available. Yet enhanced transparency and explainability in workplace AI systems have the potential to provide more helpful explanations to workers than traditional systems.
Workers can struggle to rectify AI system outcomes that affect them. This is linked to the lack of explainability, but also to missing rights to access the data used to make decisions, which makes those decisions difficult to challenge. Contract and gig workers in particular can face such issues.
AI systems present many opportunities to strengthen the physical safety and well-being of workers, but they also present some risks, including heightened digital security threats and excessive pressure on workers. It can also be more difficult to anticipate the actions of AI-based robots due to their increased mobility and decision-making autonomy.
Deciding who should be held accountable in case of system harm is not straightforward. Having a human “in the loop” may help with accountability, but it may be unclear which employment decisions require this level of oversight.
Audits of workplace AI systems can improve accountability if done carefully. Possible requisites for audits include auditor independence; representative analysis; data, code and model access; and consideration of adversarial actions.
Enforcing and strengthening existing policy should be the foundation for policy action, even as society-wide and workplace-specific measures on AI help fill gaps.
Workplace AI systems’ reliance on data can bring them into conflict with existing data protection legislation. For example, cases brought under Article 22 of the EU’s General Data Protection Regulation (GDPR) have required companies to disclose data used in their AI systems, or to reinstate individuals dismissed solely based on algorithms.
Employment anti-discrimination legislation is relevant to address some concerns about workplace AI bias.
Legislation on deceptive practices and consumer protection is being used to require more transparency from companies about the functioning of workplace algorithms, and to require developers to meet the ethical standards they advertise for their products.
Workers’ legal rights to due process in employment decisions can be used to require increased transparency and explainability.
A number of OECD countries are considering society-wide AI legislative proposals that would also apply to the workplace. A notable example is the EU AI Act, which would classify some AI systems used in employment as “unacceptable risk” (e.g. those considered manipulative) and the rest as “high risk”. This would subject them to legal requirements relating to data protection, transparency, human oversight and robustness, among others.
National or international standard-setting, along with other self-regulatory approaches, can provide technical parameters for trustworthy AI systems, and notably for workplace use.
Regulatory efforts have also zeroed in on the use of AI in the workplace. In the US, Illinois and Maryland require applicant consent for the use of facial recognition tools in hiring. The New York City Council mandates annual algorithmic bias audits for “automated employment decision tools”.
In Spain, legislation formalising an agreement between unions and business associations now mandates transparency for AI systems affecting working conditions or employment status. Indeed, social partners have proactively set out proposals on workplace AI use, and will be key stakeholders in developing new legislation.
Salvi del Pero, A., P. Wyckoff and A. Vourc'h (2022), "Using Artificial Intelligence in the workplace: What are the main ethical risks?", OECD Social, Employment and Migration Working Papers, No. 273, OECD Publishing, Paris, https://doi.org/10.1787/840a2d9f-en.