The process of belief revision is impacted by trust relationships between agents. In the simplest case, information is reported by a single source, and the belief change that occurs depends on the extent to which that source is trusted over a particular domain. In this paper, we are concerned with the more complicated case where the new information is reported by a set of agents. We first introduce a simple model of trust in an individual agent and show how it influences the process of belief revision. We then define a joint notion of trust, built by combining the trust held in each individual agent in the reporting set. We use this formal framework to define precisely when a collection of agents can be seen as a trusted authority over a particular formula for revision. While our framework is based on a particular model of trust, we argue that this approach can be used to define a suitable notion of joint trust in a wide range of settings.
Article ID: 2023S1
Publisher: Canadian Artificial Intelligence Association