Technical Report

Abstract or Description

Audits to detect policy violations, coupled with punishments, are essential to manage risks stemming from inappropriate information use by authorized insiders in organizations that handle large volumes of personal information (e.g., in the healthcare, finance, and Web services sectors). Our main result is an audit mechanism that effectively manages organizational risks by balancing the cost of audit and punishment against the expected loss from policy violations. We model the interaction between an organization (defender) and an employee (adversary) as a suitable repeated game. We assume that the defender is fully rational and the adversary is near-rational (i.e., acts rationally with high probability and in a byzantine manner otherwise). The mechanism prescribes a strategy for the defender that, when paired with the adversary’s best response to it, yields an asymmetric subgame perfect equilibrium. This equilibrium concept, which we define, implies that the defender’s strategy is approximately optimal (she might gain only a small bounded amount of utility by deviating) while the adversary gains nothing at all by deviating from her best response strategy. We provide evidence that a number of parameters in the game model can be estimated from prior empirical studies, suggest specific studies that can help estimate other parameters, and design a learning algorithm that the defender can use to provably learn the adversary’s private incentives. Finally, we use our model to predict observed practices in industry (e.g., differences in punishment rates of doctors and nurses for the same violation) and the effectiveness of policy interventions (e.g., data breach notification laws and government audits) in encouraging organizations to conduct more thorough audits.
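To make the repeated audit interaction concrete, the following is a minimal simulation sketch, not the paper's actual mechanism: function names, the 50/50 byzantine coin flip, and all parameter values are illustrative assumptions. It captures the deterrence logic in the abstract: a rational adversary violates only when the expected benefit exceeds the expected punishment, and with small probability eps the adversary acts in a byzantine manner.

```python
import random

def near_rational_violates(benefit, expected_penalty, eps, rng):
    """Adversary best-responds with probability 1 - eps; otherwise
    acts in a byzantine (here: arbitrary coin-flip) manner."""
    if rng.random() < eps:
        return rng.random() < 0.5          # byzantine round (assumption: uniform)
    return benefit > expected_penalty      # rational best response

def average_defender_loss(rounds, audit_prob, punishment, audit_cost,
                          violation_loss, benefit, eps, seed=0):
    """Per-round defender loss: expected inspection cost plus losses
    from violations that the audit does not catch."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        violated = near_rational_violates(
            benefit, audit_prob * punishment, eps, rng)
        total += audit_prob * audit_cost   # cost of auditing at this rate
        if violated and rng.random() >= audit_prob:
            total += violation_loss        # violation slipped past the audit
    return total / rounds
```

Under these assumptions, raising the audit rate above the deterrence threshold (benefit / punishment) eliminates rational violations, leaving only the residual byzantine ones; the defender's problem is to balance that inspection cost against the expected loss from undeterred violations.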