Papers
arxiv:2604.19548

Taming Actor-Observer Asymmetry in Agents via Dialectical Alignment

Published on Apr 21
· Submitted by Li Bobo on Apr 28
Abstract

Large language model agents exhibit a cognitive bias: self-reflection and mutual auditing yield inconsistent attributions of the same errors. A dialectical reasoning framework is proposed to promote perspective-invariant decision making.

AI-generated summary

Large Language Model agents have rapidly evolved from static text generators into dynamic systems capable of executing complex autonomous workflows. To enhance reliability, multi-agent frameworks assigning specialized roles are increasingly adopted to enable self-reflection and mutual auditing. While such role-playing effectively leverages domain expert knowledge, we find it simultaneously induces a human-like cognitive bias known as Actor-Observer Asymmetry (AOA). Specifically, an agent acting as an actor (during self-reflection) tends to attribute failures to external factors, whereas an observer (during mutual auditing) attributes the same errors to internal faults. We quantify this using our new Ambiguous Failure Benchmark, which reveals that simply swapping perspectives triggers the AOA effect in over 20% of cases for most models. To tame this bias, we introduce ReTAS (Reasoning via Thesis-Antithesis-Synthesis), a model trained through dialectical alignment to enforce perspective-invariant reasoning. By integrating dialectical chain-of-thought with Group Relative Policy Optimization, ReTAS guides agents to synthesize conflicting viewpoints into an objective consensus. Experiments demonstrate that ReTAS effectively mitigates attribution inconsistency and significantly improves fault resolution rates in ambiguous scenarios.

Community

Paper author Paper submitter

A few things from the paper:

(1) Multi-agent self-reflection has a built-in cognitive trap. The same model attributes the same failure to opposite sources just because we re-label its role. When it acts and then self-reflects, it blames external factors. When it observes another agent during mutual auditing, it blames internal faults. We call this Actor-Observer Asymmetry, after the analogous effect in human social psychology. Across most frontier models it shows up in over 20% of cases on our Ambiguous Failure Benchmark.
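The asymmetry measurement described above can be sketched as a simple flip-rate count: query the same model on the same failure trace under an "actor" prompt and an "observer" prompt, and count how often the blamed source changes. This is an illustrative sketch, not the paper's benchmark code; the `attribute` callable and its "external"/"internal" labels are assumptions standing in for the actual prompting setup.

```python
# Sketch: counting Actor-Observer Asymmetry (AOA) flips on a set of
# ambiguous failure traces. `attribute(trace, role)` is a hypothetical
# stand-in for querying an agent with the same trace under a different
# role label; it returns "external" or "internal".

def aoa_flip_rate(traces, attribute):
    """Fraction of traces whose blame flips when only the role label changes."""
    flips = 0
    for trace in traces:
        actor_view = attribute(trace, role="actor")        # self-reflection
        observer_view = attribute(trace, role="observer")  # mutual auditing
        if actor_view != observer_view:
            flips += 1
    return flips / len(traces)
```

Under this framing, the paper's headline finding is that `aoa_flip_rate` exceeds 0.2 for most frontier models on the Ambiguous Failure Benchmark.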

(2) Our fix is ReTAS (Reasoning via Thesis-Antithesis-Synthesis). Instead of forcing the model to commit to one perspective, we train it to surface both attribution sides as separate hypotheses and then synthesize a perspective-invariant resolution. The recipe is dialectical chain-of-thought combined with GRPO.
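The thesis-antithesis-synthesis structure can be pictured as a three-stage prompt scaffold. The section wording and the single-string format below are illustrative assumptions, not the ReTAS training format.

```python
# Sketch of a dialectical chain-of-thought scaffold: the model must argue
# both attribution sides before committing to a perspective-invariant
# synthesis. Stage names mirror the thesis-antithesis-synthesis recipe;
# the exact phrasing is a guess, not the paper's template.

def dialectical_prompt(failure_trace: str) -> str:
    return (
        "A multi-agent pipeline failed. Reason in three stages.\n"
        f"Failure trace:\n{failure_trace}\n\n"
        "Thesis (actor view): argue the failure was caused by external factors.\n"
        "Antithesis (observer view): argue the failure was caused by internal faults.\n"
        "Synthesis: weigh both arguments and give one perspective-invariant "
        "attribution, citing evidence from the trace."
    )
```

The key design point is that the model is never asked to adopt a single role; both perspectives are surfaced as explicit hypotheses before the synthesis step.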

(3) The probe domains are FinQA (financial QA) and Spider (text-to-SQL), since both have clear correct-answer signals and multi-step pipelines where attribution is genuinely ambiguous. ReTAS substantially narrows the Actor-Observer gap and consistently improves end-task accuracy over self-reflection baselines.
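The GRPO component pairs the dialectical chain-of-thought with group-relative reward normalization: each sampled completion's reward is scored against the other completions in its group rather than by a learned value function. A minimal sketch of that advantage computation, with toy rewards in place of the paper's reward design:

```python
import statistics

# Sketch of GRPO's group-relative advantage: normalize each sampled
# completion's reward by the mean and standard deviation of its group.
# Reward values here are placeholders, not the paper's reward function.

def group_relative_advantages(rewards):
    """Return per-sample advantages normalized within one sampling group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mean) / std for r in rewards]
```

Completions scoring above the group mean get positive advantages and are reinforced; in this setting, that rewards syntheses that resolve the attribution correctly regardless of which role label the prompt carried.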

Data: huggingface.co/datasets/BradNLP/ReTAS
Code: github.com/unikcc/ReTAS
Paper: arxiv.org/abs/2604.19548


