The problem of algorithmic bias in AI-based military decision support systems
In this article for Humanitarian Law and Policy, Ingvild Bode and Ismail Bhila explore biases in AI models.
Algorithmic bias has long been recognized as a key problem affecting decision-making processes that integrate artificial intelligence (AI) technologies. The increased use of AI in military decisions relevant to the use of force has intensified questions about bias, both in the technologies themselves and in how human users interact with and rely on data shaped by hierarchized socio-cultural norms, knowledges, and modes of attention.