Security Games with Ambiguous Beliefs of Agents

9 Aug 2015 · Hossein Khani, Mohsen Afsharchi

Currently, the Dempster-Shafer based algorithm and the Uniform Random Probability based algorithm are the preferred methods for solving security games in which defenders can identify attackers and only the attackers' strategies remain ambiguous. However, these models are inefficient in situations where resources are limited and both the identity of the attackers and their strategies are ambiguous. The aim of this study is to find a more effective algorithm to guide defenders in choosing which outside agents to cooperate with under both ambiguities. We designed an experiment in which defenders were compelled to engage with outside agents in order to maximize the protection of their targets. We introduced two important notions: the behavior of each agent in target protection and the tolerance threshold in the target protection process. From these, we proposed an algorithm applied by each defender to determine the best potential assistant(s) to cooperate with. Our results show that the proposed algorithm is safer than the Dempster-Shafer based algorithm.
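The abstract does not spell out how the behavior measure and the tolerance threshold interact, so the following Python sketch is only an illustration of one plausible reading: each defender tracks a per-agent behavior score (the fraction of past interactions in which the candidate protected a target) and cooperates only with candidates whose score clears a tolerance threshold. The names OutsideAgent, behavior, select_assistants, and tolerance are hypothetical and are not taken from the paper.

    # Hypothetical sketch of threshold-based assistant selection. The behavior
    # score and tolerance threshold used here are assumptions, not the paper's
    # actual definitions.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class OutsideAgent:
        """A candidate assistant, tracked by its observed protection behavior."""
        name: str
        protections: int = 0   # times the agent helped protect a target
        defections: int = 0    # times the agent failed to protect a target

        @property
        def behavior(self) -> float:
            """Fraction of interactions in which the agent protected a target."""
            total = self.protections + self.defections
            return self.protections / total if total else 0.0


    def select_assistants(candidates: List[OutsideAgent],
                          tolerance: float = 0.6,
                          k: int = 1) -> List[OutsideAgent]:
        """Return up to k candidates whose behavior score meets the tolerance."""
        eligible = [a for a in candidates if a.behavior >= tolerance]
        eligible.sort(key=lambda a: a.behavior, reverse=True)
        return eligible[:k]


    if __name__ == "__main__":
        agents = [
            OutsideAgent("A", protections=8, defections=2),  # behavior 0.8
            OutsideAgent("B", protections=3, defections=7),  # behavior 0.3
            OutsideAgent("C", protections=6, defections=4),  # behavior 0.6
        ]
        chosen = select_assistants(agents, tolerance=0.6, k=2)
        print([a.name for a in chosen])  # ['A', 'C']

In this reading, the tolerance threshold acts as a safety filter: a defender refuses to cooperate with any outside agent whose observed protection behavior falls below it, regardless of how few eligible assistants remain.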
