Independent review – Independent advisory group on new and emerging technologies in policing: final report
The final report of the Independent advisory group on new and emerging technologies in policing.
5. Ethical and social implications and good practice
This chapter covers ethical considerations including the use of ethics panels and lessons learned in Scotland, social and ethical implications, and good practices in ethical frameworks from other fields and jurisdictions. It is based on the work of the commissioned research report (Connon et al., 2023) and the report of the first workstream of the IAG (Daly et al., 2023), with some input from the oversight, scrutiny and review workstream of the IAG (Ross et al., 2023).
Ethics:
The workstream 4 report (Ross et al., 2023) considers ethics as a system of moral principles that shape how people make decisions, lead their lives and carry out their work. They point out that Police Scotland's Code of Ethics sets out the standards expected of officers and staff, which reflect the values (integrity, fairness, respect and human rights) of the service. As Daly et al. (2023) state, ethical considerations associated with emerging technology in policing can be operationalised through 'live' impact assessment documents (which can adapt to new knowledge), and through advisory engagement or debate on proposed initiatives via consultation and panels/forums. Force policies, guidance and training are also important to inform officers and staff about ethical considerations and standards, and the ways in which behaviour is compliant with bias-mitigating efforts. As Raab (2020) points out, there has been a large volume of work seeking to define principles and frameworks for the ethical use of advanced technologies, and there has been a regulatory 'turn' to ethics, including through the use of ethics panels.
IAG public consultation:
As outlined by Daly et al. (2023), some of the responses to the IAG's 'Call for Evidence' discussed ethical dimensions, with both an 'ethical and legal assessment framework' and 'ethics panels' proposed to address challenges relating to ethical standards. Many responses highlighted that introducing technology into operational domains without proper ethical frameworks to engage critical assessment or external consultation is likely to result in negative outcomes for all stakeholders, including eroding public trust. Ethics panels were said to allow subject matter experts from a range of disciplines to independently grapple with the ethical and legal issues associated with emergent policing technologies. It was recommended that practitioner, professional, community and academic voices should be included in such fora. One response suggested that ethics panels should include people who understand power asymmetries in the use of technologies. As ethics panels influence decision making and inform public policy, it was argued that these spaces should not include individuals or groups with financial interests. It was suggested that ethics panels should embrace an equality and human rights-based approach to understand impacts on individuals, and that outcomes should provide strong and unbiased evidence on whether the proposed technology will entrench existing inequalities. Clearly this links to considerations from chapter 4.
Ethics advisory panels:
As outlined by Ross et al. (2023), Police Scotland have introduced four tiers of Ethics Advisory Panels (EAPs), which provide an opportunity for staff, officers, and external participants to discuss ethical dilemmas. The ethics panels are not decision-making bodies but provide advice and support to the decision maker (or dilemma holder), who remains responsible for taking the decisions, with due consideration of the panel's views in their rationale. The objectives of panels include improving service delivery, supporting police officers, staff and leaders, developing and enhancing a visible ethics culture and supporting organisational learning.
Information about the four tiers of panels may be found in Ross et al. (2023: 20-21) and Daly et al. (2023: 39-41) but, in short, Regional Panels (North, East and West) focus on ethical dilemmas that impact on local and/or operational decision making, are comprised of staff and officers, and are chaired by trained senior officers and staff. National Panels focus on ethical dilemmas which impact upon national, strategic and tactical decision making, comprise those with a national remit, and are chaired by trained senior officers and staff. The Independent Panel is chaired by an independent member, with DCC Professionalism as co-chair, and considers dilemmas that impact public service and confidence (e.g. Remote Piloted Aircraft Systems, BWV), providing external consideration and scrutiny from members drawn from a broad spectrum of society, to advise the decision maker. Chaired by the Convenor of the Scottish Youth Parliament's (SYP) Justice Committee (with CIU Ethics and Preventions holding the role of PS Delegate), the Youth Panel sits parallel to the Independent Panel and is run in partnership with the SYP, with trained MSYPs engaging the voice of Scotland's young people in police decision making.
It should be noted that the Regional and National EAPs only have internal Police Scotland officers or staff in attendance, and the organisation would benefit from ensuring that external participants are present at these to ensure a variety of subject matter expertise. Whilst EAPs may help Police Scotland to improve service delivery and consider ethical implications when deliberating the implementation of new technologies, there is not currently a clear expectation that their findings be included in the Full Business Case template. Therefore, a clear explanation of how the findings and advice from the EAPs helped shape the solution, planned implementation or preferred option for the new technology should be included in a new section of the FBC template. This links to a recommendation from chapter 8 (key consideration 4 in Ross et al., 2023) to develop a sixth ethics and human rights case in Business Cases.
In my view as Chair, in addition to improving clarity on action taken as a result of EAPs (i.e. how the findings/advice from ethics panels are used to shape decision making) and embedding these expectations in oversight (e.g. through business cases), there are potentially some further enhancements that could be made to ethics panels. For example, it does not appear that minutes are made public, so it is suggested that anonymised minutes or a summary of meeting discussions and outcomes are published (either publicly by Police Scotland or to relevant SPA committees) in order to enhance transparency.
Reflections from Dr Marion Oswald (Daly et al. 2023: 40) on Police Scotland's Ethics Advisory Panels suggest that it is important to link the administrative or committee arrangements that are being established to operationalise the framework, in order that there are clear oversight processes to ensure the framework is implemented (and it is not just principles on paper). Oswald points out that the proposed structure is quite different to the structure of the West Midlands PEC and Police Data Ethics Committee (West Midlands Police website), which was established to oversee technological developments, and has specific terms of reference detailing its aims, principles against which projects will be reviewed, transparency, independence etc. Although Oswald acknowledges there are still many issues (Marion Oswald research paper) with this sort of oversight, including the relationship with legal compliance and practical issues around budget and resourcing, the structure is generally regarded as best practice in the absence of any nationally agreed model because of its semi-independence and the commitment of the force to the model. Oswald questions whether Police Scotland's EAPs have the expertise and independence to influence technological developments within the force. She also emphasises that consideration should be given to how EAPs could be involved in a system of rolling review, from proposal/pilot to implementation, in order to track progress and give ongoing advice.
Data ethics framework:
Data and data-driven technology provide new opportunities and the potential for innovation, but this needs to involve responsible and trustworthy use of data. Police Scotland's new Data Ethics Framework will guide this responsible use and provide the governance required to identify and address ethical challenges posed by novel uses of data and data-driven technology. It has been developed in collaboration with the Centre for Data Ethics and Innovation (CDEI) and through engagement across policing and externally. It is being introduced in order to ensure that 'data-driven' technology solutions are using data responsibly, and that any associated data ethics risks are identified, managed and scrutinised appropriately (internally and externally). Whilst the Independent Ethics Advisory Panels address the 'should we' type of individual ethical dilemmas, the Independent Data Ethics Group (similar to WMP) will focus on the 'how do we' implement the new data-driven technologies, typically reviewing project proposals. For more information on the Data Ethics Framework see Police Scotland's account (Daly et al. 2023: 42) and for its role in governance see chapter 8 below and Ross et al. (2023).
The framework is principles based, using questions which encourage robust, evidence-based responses and are open to internal and external scrutiny in order to enhance trustworthiness. Key themes covered in the questions include: value and impact (measured and evidenced benefit to individuals or society); effectiveness and accuracy (assess ability to improve accuracy, with a need for monitoring or independent evaluation for sensitive projects); necessity and proportionality (intrusion must be necessary to achieve policing aims and be proportionate in relation to benefits); transparency and explainability (ensuring purpose, details and notice of deployment are understandable and made public and open to scrutiny); reliability and security (measures in place to ensure data is used securely and protects privacy). This approach is designed to help the policing system in Scotland to use data ethically by helping to identify potential harms, risks and challenges and weigh these up with potential benefits and opportunities.
The approach of the framework to embedding good governance (see chapter 8) is anticipated to contribute to building public confidence but also to assist with: being transparent and open (communicating uses clearly, accessibly and proactively where possible); engaging with diverse views (and where possible demonstrating the path to impact such engagement has); drawing on specialist and multi-disciplinary expertise (to ensure the use of data and data-driven technology is robust, evidence-based and effective); clearly articulating the purpose and value (and ensuring these are measured and met and include trade-offs and public acceptability); identifying and mitigating potential harms; and creating an environment for responsible innovation (where new approaches are explored within frameworks of rigorous oversight, evaluation and transparency).
Reflections from Dr Marion Oswald (Daly et al. 2023: 43) on Police Scotland's Data Ethics Governance Framework acknowledge the value of adopting a triage process to identify high risk applications. Oswald highlights the value of long-term and robust evaluation methods and the importance of data and outputs of data-driven technology being accurate and not leading to detrimental unintended consequences. Oswald emphasises how crucial it is to publish papers, advice and minutes relating to the new Independent Data Ethics Scrutiny Group, and the need to allocate budget for secretariat support.
In summary, Daly et al. (2023) conclude that ethical considerations around emergent technologies in policing can relate to ensuring and communicating the legal basis for police use of a technology, but also typically consider how technology reifies or augments power relations. Examples include technology enabled mass surveillance or social sorting, expansion of use cases of technology (i.e. function creep), potential chilling effect on populations, collateral intrusions, and insufficient safeguards surrounding analytical capabilities. Independent oversight of ethics processes and due transparency over them is crucial to ensuring ethical outcomes.
Lessons learned and good practices relating to ethical considerations:
Some of the lessons learned and good practices highlighted by Daly et al. (2023) through the Scottish case studies are of relevance here. In relation to Cyber Kiosks, many ethical contentions arise when it comes to viewing the contents of an individual's mobile device. For example, there is potential for collateral intrusion to occur (intrusion into the private life of friends, family, and other people situated in the social network of the individual). In addition, there is potential for police overreach (if searches are not targeted, personal data may be viewed and invasive levels of privacy interference may occur).
The End of Project Report (EPR) for Cyber Kiosks recognised that relevant stakeholders' concerns relating to the use of Cyber Kiosks were not fully considered or consulted on, and that not enough time was spent considering public perceptions or concerns. Therefore, public consent, public concern, engagement and consultation and ethical considerations should be addressed in future through effective risk management, via business cases (following HM Treasury Green Book's framework and accounting for ethical considerations) and impact assessments. Police Scotland asserts (Daly et al. 2023: 52) that lessons learned through Cyber Kiosks have resulted in improvements relating to the implementation of policing technologies, e.g. through the involvement of external stakeholders and reference groups and the use of post-implementation reviews and enhanced governance (business cases, EqHRIA, DPIA etc.).
Social and ethical implications:
This section summarises findings from a review of literature undertaken by Connon et al. (2023: 27-59) for the IAG to explore various social and ethical implications associated with three broad categories of technology type: electronic databases; biometric identification systems; and electronic surveillance and tracking devices. This is based on a systematic review of interdisciplinary social sciences research literature on the development, trial and implementation of emerging technologies in policing.
Electronic databases:
Connon et al. (2023: 27-28) describe what is meant by electronic databases and consider various specific types of electronic database technologies and uses discussed in the literature. Data sharing and third-party data sharing platforms raise a number of social and ethical issues under the following themes: safety of information held; human rights and privacy; lack of standardisation and accountability; differences in organisational practice; and bias embedded in data, data organisation and data sharing processes. Ensuring the safety of information held and preventing data breaches is central to, for example: preventing risk of increased victimisation, inequalities or inefficiency (Clavell, 2018); and building public confidence and facilitating information sharing with the police online as well as in person (Aston et al., 2021a).
In relation to human rights and privacy, for example, Holley et al. (2020) highlight concerns with the decentralisation and fragmentation of security of personal information, with greater control given to private security governance professionals. Neyroud and Disley (2008) argue that the effectiveness of electronic databases in detecting and preventing crime should not be separated from perceptions of legitimacy and ethical and social questions surrounding the impact on civil liberties. Therefore, they argue, strong, transparent management and oversight of these technologies is essential, including ensuring integrity and reliability of the technology, alignment between purpose and use in deployment, transparency in governance, and ensuring public confidence in the technology. McKendrick (2019) argues for broader access to less intrusive aspects of public data and direct regulation (including technical and regulatory safeguards to improve performance and compliance with human rights legislation) of how those data are used, including oversight of the activities of private-sector actors. It is worth noting that private sector actors should only be processing personal data collected for policing purposes when acting as a processor for a competent authority or if data is shared with them by a competent authority (who has obligations to ensure this complies with data protection law). As data flows are complex, data protection must be considered at the design and procurement stage.
In relation to lack of standardisation and accountability, Babuta and Oswald (2020) note that there is a lack of organisational guidelines or clear processes for scrutiny, regulation and enforcement, and that these standards and clear responsibilities for policing bodies in relation to them should be addressed as part of a new draft code of practice. Differences in organisational practices can result in digital divides and problems with data integration (Sanders and Henderson, 2013). Bias embedded in data, data organisation and data sharing processes can include data containing existing biases reflecting over-policing of certain communities (e.g. disadvantaged socio-demographic backgrounds) and racial bias, which are reproduced by the application of datasets (Babuta, 2017).
Community policing applications raise risks of enhancing racial inequalities, e.g. via community-instigated policing such as the Nextdoor app, which embeds unchallenged racist attitudes in neighbourhood monitoring data (Bloch, 2021), and exacerbation of inequalities via the use of community policing apps as part of hot spots policing (Hendrix et al., 2019). Instead of enhancing inclusion and social and technological capital, community policing applications can widen the gulf in participation and community-police relations and result in inequalities between those who provide information and those whose information is being recorded (Brewster et al., 2018). Maintaining public trust is a central issue: for example, van Eijk (2018) argues that transparency about the aims of engagement and how data will be held is important, O'Connor (2017) stresses that visibility and storage of information must be considered, and Aston et al. (2021a) emphasise key concerns around anonymity and privacy of information, risk of abuse of personal data and the importance of allowing people to opt out of having personal data stored.
Challenges in relation to data pulling platforms include inequalities in police resources impacting the utilisation of big data and its integration with administrative and open data sources and platforms, impacting effectiveness (Ellison et al., 2021), and different cultures and practices in various sectors regarding the collection, sharing, processing and use of different types of data, creating shifts in the distribution of power between various sectors (National Analytics Solutions, 2017).
Social media platforms and data storage raise a number of issues, including lack of alignment in organisational culture impacting the collection, storage, management and use of social media data. This means, for example, that social media has not helped facilitate the desired interaction between police and communities in England (Bullock, 2018), and the lack of clear policies and guidance for the collection, management and use of social media data poses a potential ethical risk (Meijer and Thaens, 2013). With regard to the legitimacy of police action, the availability of social media data can be used by the public to question police over their practices (Ellis, 2019), whilst Goldsmith (2015) highlights reputational problems arising from off-duty use of social media by police officers. Issues with the management of use of sensitive information obtained through social media data are raised in relation to use in police surveillance activities, digital forensics and covert online child sexual exploitation investigations, along with the ethical issues of extended surveillance and storage of data (Fussey and Sandhu, 2020). Risks of enhancing actual and perceived social injustices posed by social media include unprecedented capacities to monitor the police and expose injustice (Walsh and O'Connor, 2019), but also the ability to monitor social media data streams, which risks enhanced surveillance of particular community groups and may negatively affect police-community relations (Williams et al., 2013).
Open-source data can result in increased victimisation if not adequately managed (Clavell et al., 2018), may 'drive' predictive policing strategies and sometimes unnecessary pre-emptive police action (Egbert and Krausmann, 2020), and can lead to over-policing in this sphere (Kjellgren, 2022).
Vulnerable population databases and datasets raise various issues, including surveillance of vulnerable populations: for example, improperly restricted data availability can lead to disproportionate profiling, policing and criminalisation of marginalised groups (Hendl et al., 2020), and guidance as to how and when information should be shared is important (Storm, 2017). Issues of human rights and justice include the potential for discrimination through systematic marginalisation, and Malgieri and Niklas (2020) highlight issues of consent and call for vulnerability-aware interpretation. They also call for greater communication with vulnerable people as to how data is stored and used, and Lumsden and Black (2020) discuss the importance of ensuring data and service areas are responsive to the needs of deaf citizens. Lack of guidance and prioritisation for data collection and management is an issue: e.g. Babuta (2017) calls for the development of a clear decision-making framework to ensure ethical use of vulnerable population data.
Biometric identification systems:
Facial recognition technology literature raised various social and ethical issues, including trust and legitimacy, identified by Bradford et al. (2020) as important factors in the acceptance and rejection of these technologies, whilst McGuire (2021) explains that perceptions of misuse of technologies and denial of rights can threaten the viability of policing. Bragias et al. (2021) argue that the technology risks a deterioration of police-citizen relations, with the public often sceptical about how the police will use it and for what purposes. It also carries a risk of enhancing inequalities for marginalised groups, with police concerns including anti-discrimination law (Urquhart and Miranda, 2021), and Hood (2020) discussing the dangers of integration of facial recognition into police body-worn camera devices and the risks of reinforcing racial marginalisation. Furthermore, Chowdhury (2020) argues that even with improved accuracy, facial recognition technologies, used disproportionately against people and communities of colour, will likely still exacerbate racial inequalities. Privacy and security concerns raised include the right to respect for private life (Keenan, 2021). Lack of standardised ethical principles and guidance was raised by Babuta and Oswald (2017).
Artificial intelligence raised issues including the reproduction of systemic bias of human decision makers in predictive policing (Alikhademi et al., 2022). Issues of accuracy, fairness and transparency include bias and lack of operational transparency (Beck, 2021), with decisions of algorithms viewed as less fair than a police officer's decision (Hobson et al., 2021), and with Asaro (2019) raising concerns about treating people as guilty of future crimes for acts they have not committed or may never commit. Risks of racial and gender bias may be embedded in the design and implementation of AI technologies (Noriega, 2020), although the potential of AI to promote a non-biased environment was also acknowledged. There was a call for clear ethical guidelines and laws to minimise potential harms associated with AI in policing. The risk of potential use of AI by perpetrators of crime was also acknowledged (Hayward and Maas, 2021).
Voice recognition technologies and mobile, cloud, robotics and connected sensors are associated with concerns relating to: privacy and security; political and regulatory factors affecting interoperability, and concerns about standards (Lindeman et al., 2020); and human rights and a lack of well-established norms covering the use of AI technology in practice (McKendrick, 2019).
Surveillance systems and tracking devices:
Drones raised issues relating to the legitimacy of use of unmanned devices by police departments (Miliakeala et al., 2018); the development of an aerial geopolitics of security, e.g. implications for power relations (Klauser, 2021); public confidence and trust, e.g. issues with using drones to monitor political protest in the US (Milner et al., 2021); concerns relating to racial biases in deployment, with e.g. Page and Jones (2021) questioning the ability of drones to make policing more efficient and 'race-neutral'; and serious concerns about use in domestic policing, personal privacy and the intrusion of surveillance into people's daily lives (Sakiyama et al., 2017).
Smart devices and sensors raised key ethical issues relating to privacy, e.g. concerns regarding the level of increased surveillance (including of officers) posed by highly networked systems (Joh, 2019), and trust and legitimacy of police use, e.g. Joyce et al. (2013) emphasising that it requires ongoing collaboration with the public and researchers.
Location and 'hot spot' analysis tools literature discusses issues relating to effectiveness in reducing crime, with the applications being used for surveillance and enforcement and having little if any direct measurable impact on officers' ability to reduce crime in the field (Koper et al., 2015). The use of advanced electronic monitoring schemes (combining GPS tracking and radio frequency technology) in the context of the privatisation of probation in England and Wales raises challenges concerning the legitimacy of product selection, given enquiries relating to providers overcharging the government for their services (Nellis, 2014). Lack of guidance or integration of technology within specific crime reduction agendas raises concerns about policing adopting technologies without giving consideration to how they fit within their operational goals (Hendrix et al., 2019).
Body worn video cameras literature highlighted implications for public-state relationships: e.g. Hamilton-Smith et al. (2021) found that technologies such as hand-held cameras and BWV had a detrimental impact on police-fan relationships, interactions and dialogue. In relation to impacts on police officers and police practice, Henne et al. (2021) argue that the use of BWV redefines police violence into a narrow conceptualisation rooted in encounters between citizens and police, and directs attention away from the structural conditions that perpetuate violence. Miranda (2022) concludes that the use of cameras, and how they operate technically, raises ethical issues for data management and storage. Concerns about racial biases inherent in deployment of the technology were raised by Hood (2020) regarding racial marginalisation, and by Murphy and Estcourt (2020), who argued they could contribute to over-surveillance of minority communities.
Serious ethical challenges with autonomous security robots were discussed by Asaro (2019), as they can potentially deploy violent and lethal force against humans, and there is increased interest in developing and deploying robots for enforcement tasks, including robots armed with weapons. Though violence is not usually acceptable, police officers are authorised by the state to use violent and lethal force in certain circumstances in order to keep the peace and protect individuals and the community from an immediate threat; therefore, the design of human-robot interactions (HRIs) in which violent and lethal force might be among the actions taken by the robot poses problems.
CCTV and visual/optical technologies pose concerns regarding a lack of standards and principles (Brookman and Jones, 2022), with Clavell et al. (2018) arguing that if they are not managed correctly, they can result in increased victimisation, inequalities or inefficiency.
Best practices:
Connon et al. (2023) also outline best practice for implementation and dissemination from research and policy relevant literature on electronic databases; biometric identification systems; and electronic surveillance and tracking devices.
Electronic databases:
Recommendations for improving databases and third-party data sharing include better management of expectations and communication of the needs of different organisations to strengthen the interoperability of working with multiple datasets, as well as managing data subjects' privacy and human rights (Neiva et al., 2022), and the need for greater material, social and organisational integration to enable effective use of technologies (Sanders and Henderson, 2013). Neyroud and Disley (2008) argue that strong, transparent management and oversight of data sharing technologies with third party organisations are essential. McKendrick (2019) recommends clear transparency regarding the handling of data, especially by private companies, and clear information and communication as to data access and limitations by third parties.
The National Analytics Solutions (2017) provide specific guidance for greater standardisation of practices and argue there is a need for greater clarity over legal obligations on data storage and processing across all parties, consent issues relating to data subjects and the duration of storage (see Connon et al., 2023: 83). They provide an ethical framework (underpinned by four dimensions of society, fairness, responsibility and practicality) for data management and sharing. Babuta (2017) recommends standardisation of concepts for entering information into police databases and the creation of Multi-Agency Safeguarding Hubs (MASH) for better data sharing practices underpinned by the development of a clear decision-making framework at the national level to ensure ethical storage, management and use of data.
Regarding social media platforms and data, Williams et al. (2021) recommend greater cooperation between policymakers, social science and technology researchers for the development of workable, innovative guidance for working with social media data in the policing of hate crime and malicious social media communications. In relation to vulnerable population databases and datasets, Asaro (2019) recommends an Ethics of Care approach to the management of use of data, whereas Babuta (2017) suggests that MASH databases would help facilitate this. Community policing applications literature argues that improvements to data storage systems, protections and procedures may help improve public confidence in policing and information sharing (Aston et al., 2021a), and Clavell et al. (2018) present a set of ethical guidelines.
Biometric identification systems:
Recommendations pertaining to the use of facial recognition technologies focus on improving public support and emphasise the need to devise new ethical principles and guidelines for its use, including calling for: transparency (Bragias et al., 2021); interrogation of biases prior to development (Williams, 2020); a draft code of practice (Babuta and Oswald, 2020); clear ethical principles and guidance implemented in a standardised manner (Smith and Miller, 2022); further trials (National Physical Laboratory and Metropolitan Police Force, 2020); and a generational ban until further guidelines (plus mandatory equality impact assessments, collection and reporting of ethnicity data, independent audits etc.) and legal stipulations have been developed (Chowdhury, 2020).
In relation to Artificial Intelligence, the focus of the existing research is on minimising biases towards marginalised communities, establishing standards for predictive policing technologies and raising awareness. Asaro (2019) recommends an AI Ethics of Care approach, taking a holistic view of the values and goals of system designs, whereas Whittlestone et al. (2019) argue that high-level principles can help ensure the costs and benefits of using technologies for marginalised groups are weighed up prior to implementation for specific purposes. Alikhademi et al. (2022) develop a set of recommendations for fair predictive policing to minimise racial bias, including pre-processing of data to reduce dependence on variables identified as discriminatory, use of counterfactual analysis processes to detect and correct bias, post-processing of results to make them respect group and individual fairness, and analysing results to evaluate the fairness of outcomes for groups (see Connon et al., 2023: 90).
Surveillance technologies and tracking devices:
In relation to location and 'hot spot' analysis technologies, Koper et al. (2019) call for greater training on strategic uses of IT for problem-solving and crime prevention, and greater attention to the behavioural effects of technology on officers, while Hendrix et al. (2019) suggest that police should improve planning regarding how these forms of technology fit within operational goals and guiding philosophy. Regarding body worn video (BWV) cameras, Lum et al. (2019) emphasise that, to maximise positive impacts, more attention needs to be paid to the ways and contexts (organisational and community) in which BWV are most beneficial or harmful, and to how they can be used in police training, management and internal investigations to achieve their long-term potential to improve police accountability and legitimacy. Murphy and Estcourt (2020) recommend that the public should be involved in the formulation of police guidelines concerning the use of BWV, whilst Todak et al. (2018) recommend a comprehensive planning process that incorporates the views of all stakeholders in implementation.
Asaro (2019) states that, given the serious challenges of automating violence, the use of autonomous security robots requires at the very least the development of strict ethical codes and laws, but ultimately argues that their use should be banned in policing. Pertaining to CCTV and visual/optic technologies, Brookman and Jones (2020) recommend the need to introduce and refine clear standards and principles concerning their use in forensic investigations.
Best practice in ethical frameworks:
Drawing on research for the development of ethical standards in relation to facial recognition technologies, Almeida et al. (2021) argue for the need for better checks and balances, transparency, regulation and audit, and pose ten ethical questions to be considered for ethical development, procurement, rollout and use. These include: who controls the development, purchase and testing to challenge bias; the purposes and contexts for use; what specific consents, notices and checks and balances should be in place for these purposes; the basis for building facial data banks and the consents, notices, checks and balances in place for fairness and transparency; limitations of performance capabilities; accountability for different usages and how it can be audited; complaint and challenge processes; and counter-AI initiatives to test and audit (see Connon et al., 2023: 96).
On Artificial Intelligence, Whittlestone et al. (2019) explore various published prescriptive principles and codes, e.g. the Asilomar AI Principles, which list ethics and values AI must respect; the Partnership on AI, which established a set of criteria guiding the development of AI that technology companies should uphold; the five principles from the House of Lords Select Committee on AI and the cross-sector AI code; and the Global Initiative on Ethics of Autonomous and Intelligent Systems' set of principles for guiding ethical governance. They found substantial agreement and overlap between the different sets of principles. Oswald (2019) draws on lessons learned from the West Midlands data ethics model to recommend a three-pillar approach (law, plus guidance and policy interpreted for the relevant context; ethical standards attached to personal responsibility and scientific standards; and a commitment to accountability at all levels) to achieving trustworthy and accountable use of AI, including lessons in relation to effective accountability and the role and necessity of a human rights framework in guiding the committee's ethical discussion. Oswald recommends that a national ethics approach would require clear scientific standards written with the policing context in mind.
Dechesne (2019) draws on research and lessons learned from policing in the Netherlands to develop a set of recommendations for the responsible use of AI to ensure alignment with ethical principles. These include: creating an AI review board and considering an AI ombudsperson to ensure independent critical evaluation; updating the organisational 'code of ethics'; incentivising the inclusion of ethical, legal and social considerations in AI research projects; training AI scientists on ethical considerations; developing a redress process; clear processes for accountability and responsibility; evaluation procedures; auditing mechanisms; measures to prevent, detect and mitigate errors; transparent systems to enable accountability; respect for privacy; and human agency (see Connon et al., 2023: 98).
Lessons learned from health, children and family sectors:
Connon et al. (2023: 102-107) also cover lessons learned from research on the trial and adoption of emerging technologies in the health, children and family sectors. With regard to electronic databases, Facca et al. (2020) examined ethical issues with digital data and its use in relation to minors within the health sector, including consent, data handling, minors' data rights, private versus public conceptualisations of data generated through social media, and gatekeeping. Concerns were raised regarding the preclusion of minors from important research (given ethical considerations) and the need for greater discussion between researchers and minors to co-produce guidelines or standards concerning ethical practice. Schwarz et al. (2021) explored the effects of sharing electronic health records with people affected by mental health conditions and found that access to information about themselves was associated with empowerment and trust (though negative experiences resulted from inaccurate notes, disrespectful language or undiscussed diagnoses), and recommended guidelines and training. This raises important considerations for policing in relation to setting standards regarding subjects' access to records held about them. Birchley et al. (2017), on ethical issues involved in smart-home health technologies, emphasise the provision of clear information about the sharing of data with third parties. It is worth noting that the ICO has produced an Age Appropriate Design Code which certain online services must conform to, and which is also a useful reference on how to ensure the best interests of the child are considered in any service.
Regarding Artificial Intelligence, Ronquillo et al. (2021) identified challenges in the context of nursing, emphasising that professionals need to understand the relationship between the data they collect and the AI technologies they use; the need to meaningfully involve professionals in all stages of AI (from development to implementation); and the need to address limitations in knowledge so professionals can contribute to the development of AI technologies. Work on smart devices and sensors highlights concerns around privacy (Birchley et al., 2017) and privacy and security (Zhu et al., 2021), again arguing that professionals should be involved in the design and implementation of these technologies to help promote ethical awareness and practice.
In respect of lessons learned relating to ethical frameworks from these sectors in relation to AI, voluntary guidelines on ethical practices (from governments and other professional organisations) are regarded as weak in terms of standards for accountability, enforceability and participation, and for their potential to address inequalities and discrimination (Fukuda-Parr and Gibbons, 2021). They argue that governments need to develop more rigorous standards, grounded in international human rights frameworks, that are capable of holding Big Tech to account. They recommend that AI guidelines should be honest about their potential to widen socio-economic inequality, not just discrimination, and that governance of AI design, development and deployment should be based on a robust human rights framework to protect the public interest from threats of harmful application. Leslie (2019) outlines critical components of an ethically permissible AI project (see Connon et al., 2023: 107), including the project being fair and non-discriminatory, worthy of public trust, and justifiable. Furthermore, the ICO guidance on AI and data protection provides practical guidance on how to ensure that the use of AI is fair and transparent and how bias and discrimination can be addressed.
Chapter 5 summary and conclusion
Ethical considerations can be particularly contentious and difficult to operationalise in the domain of policing. They can be considered in practical terms through the use of impact assessments (understood to be 'live documents' able to adapt to new knowledge), and through advisory engagement or debate on proposed initiatives. Police Scotland uses Ethics Advisory Panels and is introducing a new Data Ethics Framework. Force policies, guidance and training may be used to inform officers and staff about ethical standards and the ways in which behaviour can comply with bias-mitigation efforts.
Ethical considerations around emergent technology in police work can relate to ensuring and communicating the legal basis for police use of a technology, but also typically consider how technology reifies or augments power relations. Examples include technology-enabled mass surveillance or social sorting, expansion of the use cases of a technology (i.e. function creep), potential chilling effects on populations, collateral intrusion, and insufficient safeguards surrounding analytical capabilities. It is important to ensure appropriate safeguards are in place, but there is also a need to facilitate the adoption of technology in order for police to fulfil their statutory duties. Therefore, the expectations in terms of evidence gathering, evaluation and oversight related to the introduction of new technologies should vary depending on the existing evidence base and level of risk. Police Scotland has many governance processes in place, and others being introduced, to address these ethical issues, and independent oversight and transparency over them is central to ensuring ethical outcomes.
Social and ethical issues associated with various forms of emerging technologies explored by Connon et al. (2023) included the storage of sensitive information, risks of entrenching social injustices, and the surveillance of vulnerable groups relating to certain uses of electronic databases. Issues of accuracy, fairness and transparency were particularly discussed in relation to Artificial Intelligence applications and usage in predictive policing. Live facial recognition raised questions regarding trust and legitimacy, privacy, personal security and the entrenchment of inequalities, and the lack of standards, ethical principles and guidance was highlighted as a key gap. Concerns relating to privacy, surveillance of minorities and public confidence were particularly pertinent when discussing surveillance systems and tracking devices, including drones, smart devices and sensors, location and 'hot spot' analysis, body worn cameras, autonomous security robots, and CCTV and visual/optical technologies.
Best practices for the implementation of emerging technologies in policing highlighted by Connon et al. (2023) include, in relation to electronic databases and third-party data sharing: strong, transparent management and oversight; a clear decision-making framework and standardisation of practices regarding data storage, management, sharing and use; and better management of expectations and communication of the needs of different organisations to strengthen the interoperability of working with multiple datasets as well as to manage data subjects' privacy and human rights. Research relating to the use of live facial recognition (LFR) technologies focuses on ethical principles and guidelines, including calling for a code of practice, transparency, interrogation of biases prior to development, further trials, and a ban until further guidelines and legal stipulations have been developed. In relation to Artificial Intelligence, the focus is on various recommendations to minimise biases towards marginalised communities and establish standards for predictive policing technologies. Asaro (2019) states that the use of autonomous security robots requires the development of strict ethical codes and laws, but argues that their use should be banned in policing. Best practice in ethical frameworks includes, for example, ten ethical questions in relation to facial recognition technologies (Almeida et al., 2021), various published prescriptive principles and codes on AI (Whittlestone et al., 2019), and Oswald's three-pillar approach.
A number of key considerations relating to ethical and social implications are outlined here but see Appendix C (and Connon et al., 2023: 4-7) for more details.
5.1 Police Scotland should continue to reflect on and evaluate its uses of technologies, recognising lessons learnt and the implementation of measures such as ethics panels, improved internal processes, engagement and transparency.
5.2 Police Scotland and the SPA should continually improve the use of Ethics Advisory Panels (EAPs) to enhance external involvement and independence, transparency, and the role of EAPs in continual review.
5.3 Consideration could be given to a number of potential policy and practice suggestions highlighted by Connon et al. (2023: 134-139), relating to various technologies (electronic database technologies, biometric identification systems and AI technologies, surveillance systems and tracking devices), which may be found in full on pages 134-139 of the Stirling report.
5.4 a) Policing bodies and scrutiny bodies should ensure a monitoring mechanism, to record data on its equality and human rights impacts, is incorporated into the design and implementation of an emerging technology. Police Scotland should routinely gather and use equality information relevant to all protected characteristics, including ethnicity data, which should be reported transparently in order to protect minority groups. Policing bodies should make data on the equality impacts of trial uses of technologies publicly available. b) Training to ensure awareness of equality and human rights obligations should be given to all officers involved in the use or monitoring of emerging technologies. Force policies, guidance and training (developed in accordance with the Equality Act 2010 (EA2010) and the Public Sector Equality Duty (PSED)) may be used to inform officers and staff about ethical standards and the ways in which behaviour can comply with bias-mitigation efforts.
Contact
Email: ryan.paterson@gov.scot