Jurisdictions around the world have adopted algorithmic policing technologies. The response to concerns about fairness and privacy often includes a claim that these tools are necessary to control crime without ballooning the size of police departments. Thus, effectiveness and payroll are put forward as factors to balance against fairness concerns. However, there is little evidence to suggest that deploying such tools without significant new hiring can improve policing outcomes. AI tools increase the volume and flow rate of police information. For these tools to be effective, police must be able to keep up with that flow, while also ensuring the integrity of the algorithm and incoming data over time. We argue that algorithmic policing tools therefore require the hiring of additional personnel. These new resource demands represent a digital variant of Parkinson’s Law.
Given the documented risks these tools pose to some populations, it is necessary to work towards a mathematical understanding of the minimum incremental staffing needed for such tools to operate successfully, potentially providing benefits greater than their harms. In an effort to stimulate further research in this area, we offer a preliminary "digital Parkinson’s law," a first attempt to quantify the relationships among the factors involved.
Article ID: 2022L11
Publisher: Canadian Artificial Intelligence Association