OPUS 29
Grant Number 2025/57/B/HS6/00608
Funded by the National Science Centre, Poland (NCN)
At the Egocentric Morality Lab, we investigate how self-interest shapes moral and fairness judgments. Our earlier research demonstrated that individuals systematically distort moral evaluations when personal or group interests are at stake.
This project extends that framework to algorithmic decision-making. As artificial intelligence increasingly governs access to jobs, credit, healthcare, and education, fairness is no longer only a technical question—it is a psychological one. Even when systems meet formal fairness criteria, public acceptance depends on how those decisions are perceived.
This research program advances a broader theory of motivated fairness cognition in both human and algorithmic decision-making systems. It asks not only whether decisions are objectively fair, but why people accept or reject them.
How does self-interest bias shape perceptions of fairness in algorithmic decision-making (ADM) compared to human decision-making (HDM)?
Do individuals perceive objectively fair algorithmic decisions as unfair when outcomes are personally unfavorable?
Are people more accepting of biased or imperfect decisions when those outcomes benefit them?
Does the type of decision-maker (algorithm vs. human vs. hybrid system) influence the strength of egocentric distortions?
What psychological mechanisms underlie self-interest bias in fairness judgments?
Do perceived closeness, perspective-taking, and perceived expertise moderate reactions to ADM systems?
Can self-interest bias in fairness evaluations be reduced?
Do fairness norm activation, accountability, or educational framing increase acceptance of objectively fair decisions?
Do cultural context and institutional domain shape how people evaluate fairness in AI-driven systems?
The project employs rigorous experimental methods from social and moral psychology across fifteen preregistered studies conducted over four years.
Experimental designs systematically manipulate:
Decision outcomes (self-beneficial vs. self-costly)
Decision-maker type (algorithmic, human, or hybrid systems)
Psychological moderators (closeness, perspective-taking, perceived expertise)
Debiasing interventions (fairness norm activation, accountability prompts, educational framing)
Studies are embedded in realistic institutional scenarios, including hiring, credit allocation, and public resource distribution.
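As a minimal sketch, the manipulated factors above imply a fully crossed condition space; the snippet below enumerates it using hypothetical factor names and levels (the actual designs may cross only subsets of these factors per study):

```python
# Hypothetical sketch of the crossed experimental conditions implied
# by the manipulated factors. Factor names and levels are illustrative,
# not the project's actual preregistered design.
from itertools import product

outcomes = ["self-beneficial", "self-costly"]
decision_makers = ["algorithmic", "human", "hybrid"]
interventions = ["none", "fairness-norm", "accountability", "educational-framing"]

# Full factorial crossing: every combination of outcome,
# decision-maker type, and debiasing intervention.
conditions = [
    {"outcome": o, "decision_maker": d, "intervention": i}
    for o, d, i in product(outcomes, decision_makers, interventions)
]

print(len(conditions))  # 2 x 3 x 4 = 24 crossed conditions
```

In practice a single study would typically cross two factors and treat the rest as between-study variation, which keeps per-cell sample sizes manageable.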
Data are collected from both WEIRD (Western, Educated, Industrialized, Rich, and Democratic) and non-WEIRD samples via international recruitment platforms, supporting cross-cultural generalizability. All studies follow open science practices, including preregistration, transparent reporting, and reproducible analysis pipelines.
This project reframes algorithmic fairness through a psychological lens. Rather than treating fairness solely as a mathematical property of decision procedures, it conceptualizes fairness as a motivated social judgment shaped by self-interest, identity, and context.
By integrating moral psychology with AI governance research, the program contributes to:
Theories of motivated moral cognition
Models of fairness perception in institutional contexts
Human-centered AI design
Public trust in algorithmic systems
Together with prior research on egocentric moral bias, this project forms a coherent, long-term research agenda on the psychological foundations of moral objectivity and distortion in modern societies.