
Critical Voices on AI - Part 5

When:
Venue: Birkbeck Central

Book your place

In the fifth talk of the Critical AI seminar series, we welcome Emine Akar and Dr Nathan Moore from Birkbeck Law School.


  • "Emotional Artificial Intelligence and Its Impact on 'Emotional Privacy'" - Emine Akar

Emotion AI (EAI) refers to the process by which machines interpret data concerning an individual's emotional state, drawing insights from facial expressions, body language, vocal tone, and other behavioural indicators. This practice raises significant ethical concerns, as the commodification of human emotions for surveillance or categorisation by powerful corporate and governmental entities directly challenges the principle of human dignity.

Despite the emergence of various legal frameworks addressing EAI, such as the EU’s AI Act, which explicitly acknowledges technologies aimed at emotion detection, a robust theoretical foundation for these legal measures remains underdeveloped. This presentation seeks to address this theoretical void by introducing and conceptualising the notion of "emotional privacy."

Emotional privacy encompasses two inherently complex concepts: emotions and privacy. By exploring the critical roles these concepts play in everyday life, it becomes possible to evaluate the potential risks EAI poses to them. The analysis categorises emotions based on their survival, existential, and social functions to underscore their significance. Building on this foundation, the concept of emotional privacy is theorised at a granular level, providing a structured framework for future legal and ethical discourse.

Attendees will gain a clear understanding of EAI, how it functions, and its reliance on biometric data to infer emotions. They will learn about the scientific and ethical flaws in emotion detection, including the epistemological gap and the lack of a direct link between emotions and physical expressions. The lecture will introduce the concept of emotional privacy, explaining its significance in an era where emotions are increasingly commodified. Guests will also explore legal and regulatory perspectives, such as the EU AI Act and GDPR, highlighting concerns about emotional surveillance. They will be encouraged to rethink traditional privacy metaphors and consider emotional privacy through collective, contextual, and harm-based frameworks. Finally, attendees will leave with a critical perspective on EAI’s societal impact, its risks of discrimination and manipulation, and the necessity for stronger legal and ethical safeguards.


  • "Could AI dissent?" - Dr Nathan Moore

The appellate courts sit at the apex of the UK court hierarchy. They are composed of panels of judges, each of whom can give a reasoned decision in a case. Under this practice, it is possible for the judges to disagree with one another: when a judge does not agree with the majority of the panel, their judgment is a dissenting one. Against this background, the lecture asks how we might imagine AI deciding legalistically, and what its potential to do so might be.

It is already clear that when it comes to textual legal work – reading, collating and synthesizing – AI is extremely capable; but what of more significant legal tasks? In particular, can we imagine AI being able to decide a legal claim in the adversarial context of the common law? More to the point, in an adversarial system that also allows for dissents? If, in the future, we are prepared to accept such decision making from AI, what will have to have changed for that to be so?

Nathan will discuss the specificity of judicial decision making by focusing on dissenting judgments. It is not true to say that the dissenting judge does not understand the law, nor that they are wrong about its potential application; similarly, we cannot simply bracket dissents on the basis that they are inherently less likely to be consequential.

Key to this is the question: why allow dissenting judgments at all? What does their existence and acceptance tell us about judicial deciding? More to the point, can we imagine AI dissenting? Would that be anything more than a case of algorithmic undecidability and, if so, would it be at all comparable to what currently happens in judicial dissent?


Agenda:

14:00: Registration

14:30: "Emotional Artificial Intelligence and Its Impact on 'Emotional Privacy'" delivered by Emine Akar

15:00: "Could AI dissent?" delivered by Dr Nathan Moore

15:30: Q&A

16:30: Event Close


We encourage academics and students from all areas as well as interested laypeople to join us! The event will be held in room BCB 211 of Birkbeck Central.


Speakers
  • Dr Nathan Moore

    Dr Nathan Moore is a senior lecturer at Birkbeck Law School. He is convenor of Land Law and the postgraduate module Algorithms and the Law. His latest publication is ‘Holy Motors: Law and Technology’ in Becci, Katsiginis, and Van Daalen, Law and Film, 2024.

  • Miss Emine Akar

    Emine Akar is a legal scholar specialising in the intersection of law, technology, and ethics. Her academic journey began with an LLM in General Public Law at Gazi University, Ankara, where her thesis on the legal status of child soldiers was later published as a book. She continued her studies with an LLM in Public Law at University College London, earning an Outstanding Distinction for her thesis on recognising Artificial Intelligence as legal persons. Ms Akar has also served as a Visiting Researcher at the Digital Governance Centre, Erasmus School of Law.

    In addition to her research, Ms Akar is a Lecturer in Law at Birkbeck, University of London. She previously taught as a Visiting Lecturer in Public Law at King’s College London. Currently pursuing her doctoral studies, her research examines the potential ramifications of Emotional Artificial Intelligence on privacy rights. Her PhD is generously supported by the ESRC through the London Interdisciplinary Social Science Doctoral Training Partnership studentship.

    Publication: https://link.springer.com/chapter/10.1007/978-94-6265-639-0_4