December 1, 2025
Peter Carr Seminar – Friday, 5 December 2025, 14:00–15:30
Università di Bologna – Auditorium, Piazza Scaravilli 1 + Microsoft Teams Meeting
Pasquale Cirillo – A Calculus for Biased Agents and Explainable AI
What if an AI could reason through its own worldview (biases and all) without breaking the rules of probability? We introduce a formal yet intuitive framework that extends Bayesian inference to subjective agents. The core idea is simple: pass ordinary probabilities through a “lens” that encodes an agent’s personal map of reality. This lens defines a consistent subjective calculus (inspired by non-classical arithmetic), so belief updates stay coherent even when the worldview is skewed. Through the lens, posterior beliefs remain the usual ones, just seen fully subjectively. Crucially, if the lens flattens small but real chances to zero, the agent becomes blind to rare events, capturing threshold effects and “black swan” blindness. We derive a generalized Bayes’ rule for this setting, pinpoint when subjective and objective views coincide, and show how to recover the lens from judgments using either parametric or isotonic calibration. A toy safety case study illustrates how the approach reproduces non-linear expert reasoning. The result is a rigorous, interpretable foundation for explainable, flexible, and safety-aware agentic AI.
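To give a feel for the idea, here is a minimal toy sketch. It is not the speakers' actual calculus: the lens is modelled, purely as an assumption, as a power-distortion function with a hard threshold, and the "generalized Bayes" update simply pushes prior and likelihood through that lens before normalizing. All names and parameter choices (`lens`, `gamma`, `threshold`) are hypothetical.

```python
def lens(p, gamma=0.6, threshold=0.01):
    """Hypothetical lens: flatten tiny probabilities to zero, then apply
    a simple power distortion to the rest (chosen only for illustration)."""
    if p < threshold:
        return 0.0
    return p ** gamma

def subjective_posterior(prior, likelihood):
    """Toy 'generalized Bayes' update: pass prior and likelihood through
    the lens, then renormalize so beliefs stay a coherent distribution."""
    weights = [lens(pr) * lens(lk) for pr, lk in zip(prior, likelihood)]
    total = sum(weights)
    if total == 0:
        raise ValueError("lens flattened every hypothesis to zero")
    return [w / total for w in weights]

# Two hypotheses: a rare failure (objective prior 0.005) vs. normal operation.
prior = [0.005, 0.995]
likelihood = [0.9, 0.1]  # the data actually favour the rare-failure hypothesis
print(subjective_posterior(prior, likelihood))  # → [0.0, 1.0]
```

Because the lens maps the 0.5% prior below its threshold to exactly zero, the agent assigns posterior probability 0 to the rare event no matter how strong the evidence: the "black swan blindness" the abstract describes.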