Abstract
Against a backdrop of algorithms that disempower, dehumanise, disenfranchise and discriminate, there are increasing calls to centre the human in Artificial Intelligence (AI) development processes and to humanise AI development in practice; centring dignity in AI development could provide a way forward. Although dignity is included in many AI ethics frameworks, as with many other AI ethics principles there is little operational understanding of what dignity can look like in practice when developing algorithms. Drawing on cybernetics and a model of dignity developed in the field of international conflict resolution, this paper presents our work-in-progress tool, the Dignity Lens, for considering dignity throughout the AI development lifecycle, along with practitioner reflections from using the tool. This work is an initial step towards articulating what dignity-centred AI development could look like in practice, assisting practitioners who design and develop algorithms to actively consider dignity.