AI, Language and Social Justice

Overview

  • Credit value: 30 credits at Level 7
  • Convenor and tutor: Dr Kinga Kozminska
  • Assessment: a presentation (20%) and 4000-word report (80%)

Module description

AI technologies are becoming an integral part of social life, transforming our communicative routines and sociolinguistic capabilities. Yet the benefits of these technologies are not distributed equally across contexts and users. Research shows that natural language processing technologies may challenge, but also perpetuate, reinforce or even amplify, existing biases and social inequities.

This module equips you with critical tools for examining AI technologies from a sociolinguistic perspective and for contesting emerging disparities. By combining classic studies in linguistics and intersectional theory with recent work on voice AI and machine translation, you will make interdisciplinary connections to identify and analyse the ethical implications of natural language processing technologies, and better understand debates surrounding societal inequities. You will develop strategies to foster inclusivity in AI-driven applications and in discussions of how these technologies are introduced and used in wider society.

Indicative syllabus

  • Introduction to human language and large language models
  • Understanding bias in language: linguistic diversity and allocational bias
  • Understanding bias in language: representational bias
  • Data, models and design choices
  • Voice AI: automatic speech recognition, dialect prejudice and racial disparities
  • Voice AI: intersectional approaches to bias in text-to-speech technologies
  • Machine translation and gender bias
  • Translation quality disparities and questions of cultural differences
  • Linguistic bias in AI-driven decision-making
  • Reflections and future directions

Learning objectives

By the end of this module, you will be able to:

  • critically evaluate how allocational and representational biases are embedded, reinforced and challenged in the design and use of AI technologies
  • analyse the societal impact of biased language and algorithms on marginalised communities and users of under-resourced language varieties
  • design and propose strategies to mitigate observed biases in AI systems and foster equitable practices
  • identify ethical principles that support justice in the development and deployment of AI-powered speech recognition and natural language processing technologies
  • engage in informed discussions about the responsibilities of technologists, policymakers, linguists and users in addressing biases.