Summary: The use of facial recognition technology by police presents a range of risks to the enjoyment of human rights across society. Human rights law helps legislators understand the legal parameters within which this technology can be used without undermining the rule of law or enabling widespread state surveillance.
The focus of this thesis, the use of live facial recognition (LFR) by the Metropolitan Police Service (MPS) in London, is an example of the risks posed by police use of artificial intelligence (AI). The MPS invokes the language of human rights to justify its use of LFR without ensuring that appropriate safeguards are in place to protect rights and prevent widespread surveillance. The MPS has used LFR since 2016, and this use threatens the enjoyment of a range of protected rights, including the right to respect for private and family life under Article 8 of the European Convention on Human Rights. While the technology itself poses novel challenges to the enjoyment of a range of rights, the contribution made by this thesis is to highlight the risk of police ineffectually self-regulating their use of AI tools without effective oversight, accountability or appropriate rights protections. The conclusions reached in this thesis call for appropriate and effective regulation and oversight of police use of LFR to ensure rights are protected from unlawful and disproportionate harm.