Building trust in the digital era: achieving Scotland's aspirations as an ethical digital nation
An expert group, supported by public and stakeholder insights, has reviewed evidence and provided recommendations that will support and inform future policy. The report focuses on building trust with the people of Scotland by engaging them in digital decisions that affect their lives.
Harm Protection when Online
Objects of Trust:
Privacy: is my information confidential? Are there laws/regulations to protect me?
Fairness: could it be used for discrimination? Is it exploitative?
Transparency: are the people behind it being truthful? Are there other motives?
What Is Harm Protection Online?
Engaging online opens up a world of opportunity, but this does not come without risks and challenges. The term Online Harms refers to psychological, financial, physical and societal damage arising from our engagement with internet platforms and social media. Examples include accidental poisoning from fake medicines sold online; self-harm encouraged by toxic forums; bullying and sexual exploitation; romance scams and identity theft for financial gain; and propaganda aimed at undermining social cohesion, trust in institutions or democratic processes. Harm may occur indirectly, for example by consuming harmful information posted online, or directly, by being targeted by bad actors.
A great deal of illegal and hurtful content is posted online, and this can make it difficult to feel safe when using digital services. These harms can stem from the behaviour of people towards each other online, from deliberate malice, or from inadvertent carelessness. Personal information warrants the same protection online as it would offline. For example, an individual should feel just as safe logging into their online banking app to manage their finances as they would walking into a brick-and-mortar bank on the high street.
In order to be protected from online harms, there is a need to develop a culture of transparency, trust and accountability that is supported by strong regulatory practices. It is important that harmful content and behaviours do not undermine the benefits that data and digital can offer to society. This is why protection against online harms, such as child abuse and cybercrime, should be a priority for citizens, organisations and governments.
Why is Harm Protection Online Important?
Online harms can surface in many ways and can put people at risk of serious emotional or physical damage. These harms are very real and cannot be ignored. What users see and experience online can have an immediate and lasting impact, particularly on vulnerable groups such as children and young people, and on businesses and organisations.
Some of the ways that online harms can present are:
- Mis/disinformation
- Financial harms (e.g. scams)
- Cybercrime
- Bullying and harassment
- Data or identity theft
- Digital gambling.
Case Study:
Elections and Social Media
Prof. Shannon Vallor
As we gain access to ever more information, we are also more exposed to harm from false and misleading information, which can have devastating impacts on wider society.
One example of this is the potential impacts mis/disinformation can have on the political landscape. Online behaviours aimed at influencing political opinions and voter choices represent a substantial portion of social media activity globally and in Scotland. Social media lower many traditional barriers to political engagement. For those with a smartphone, tablet or computer, the services are free and easy to use. They do not require travel outside the home, or formal affiliation with a party or other political organisation.
However, online social media are widely recognised as contributing to a number of democratic ills: most notably, misinformation (false or misleading information shared unwittingly); disinformation (false or misleading information shared with the intent to deceive); manipulation (targeting emotional or psychological vulnerabilities of others in order to undermine their capacity for reasoned political choice) and inauthentic political behaviour (political activity that misrepresents the intentions, identity or nature of the author or authors). Of course, misinformation, disinformation, manipulation and inauthentic political behaviour are nothing new; each has been a part of political life since politics began.
However, their online manifestations on social media pose unique risks to the health of Scotland’s political community, not only due to the unprecedented speed and scale of their influence, but also the potential to leverage new forms of data and increasingly sophisticated algorithmic techniques to coordinate their impact, disguise their origin, amplify their negative effects, and make them harder for authentic political actors to mitigate or resist.
Between 2018 and 2020, Facebook removed hundreds of accounts linked to the Islamic Republic of Iran Broadcasting Corporation, which were associated with suspicious online activity in numerous countries including the United Kingdom. Pages removed included Free Scotland 2014 and The British Left (Scotsman, 2018); both posted about the 2014 Scottish independence referendum. These removals followed the Russian foreign interference campaign associated with the 2016 Brexit referendum.

In August 2018, unverified reports and opinion pieces in The Herald (Leask, 2018 & Jones, 2018) alleged that local Scottish activists may have used ‘retweet bots’ – spambots that use automated scripts to seek out posts to retweet – to boost the hashtag #dissolvetheunion, and to attack pro-independence Scottish women. Later that year, a report[7] commissioned by MEP Alyn Smith confirmed that Scots were a target for malign bots controlled by state and non-state actors, with between 4% and 12% of Scottish Twitter activity determined to be “potentially malign.” Along with the report, a website (scotorbot.scot, currently inactive) was launched to connect people with free ‘bot detection’ tools. In 2020, The Times reported that “SNP cybersecurity experts have detected a rise in divisive social media posts” linked to accounts in the United States, “particularly in relation to transgender rights” (McLaughlin & Andrews, 2020).
So why is inauthentic online activity a serious problem for democratic health at all, given that deception and obfuscation have always been part of the political landscape? One reason is that inauthentic activity seeks to exploit cognitive biases that are antithetical to effective reasoning and deliberation – such as our tendency to be irrationally influenced by how many times we have heard an idea, how recently we have heard it, or how widely it circulates within our own social circle. When we cannot reason effectively, we cannot self-govern effectively. Nor can we effectively deliberate together with our civic fellows. Thus the exploitation of these biases, at online scales and speeds not previously accessible to political manipulators, not only strikes at the weakest point of any democracy; it does so with far greater force than we are used to.
How to be safe online
There are many benefits to online activity, and with ‘digital by default’ quickly becoming the norm it is important that all members of society have the skills and confidence to take small steps to protect themselves online. This must be balanced with a need to ensure that businesses, technology organisations and governments are well equipped to enforce stronger regulatory processes that promote safe online practices.
Education and awareness of the types of online harm that can surface, and of the steps that individuals can take to protect against them, are particularly important. Having the confidence and ability to fact-check information, identify a trusted app or site, and understand what information is being shared online will help citizens to be more confident about their online activity.
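One of those small, individual-level checks – asking whether a link uses a secure connection and points at a domain you recognise – can be illustrated with a short sketch. This is a toy example only: the trusted-domain list and the `is_trusted` function are hypothetical, and no real-world safety check should rely on a fixed allowlist alone.

```python
# Toy illustration of checking a link before trusting it.
# The domain list below is hypothetical, not an official allowlist.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"gov.scot", "nhs.scot", "mygov.scot"}

def is_trusted(url: str) -> bool:
    """Return True if the URL uses HTTPS and its host is a trusted
    domain (or a subdomain of one)."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return False  # insecure connection: do not trust
    host = parts.hostname or ""
    return any(host == d or host.endswith("." + d)
               for d in TRUSTED_DOMAINS)
```

Note that the final check matters: a look-alike address such as `gov-scot.example.com` contains the familiar words but is not a subdomain of `gov.scot`, so it is rejected – the same habit of reading the whole domain carefully applies when checking links by eye.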
Platforms, meaning the hardware and software used to host an application or service, have a responsibility to make sure that, as far as possible, their sites and content do not promote online harms to their users. This could be achieved in a number of ways, such as:
- Controlling advertising algorithms
- Removing harmful content automatically
- Flagging fake news and mis/disinformation
- Having clear and user-friendly routes for reporting online abuse or harassment
- Stronger age verification for child safety.
(National Digital Ethics Public Panel Insight Report, 2021)
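The automated flagging mentioned above can be sketched in miniature. This is purely illustrative: real platforms use large-scale machine-learning classifiers combined with human review, not keyword lists, and the `BLOCKLIST` patterns and `flag_content` function here are invented for the example.

```python
# Toy sketch of pattern-based content flagging.
# Real moderation systems are far more sophisticated; this only
# illustrates the idea of matching posts against known harm patterns.
import re

# Hypothetical patterns associated with common scam wording.
BLOCKLIST = [
    r"send .* gift cards?",
    r"verify your bank details",
    r"guaranteed returns",
]

def flag_content(text: str) -> list[str]:
    """Return the blocklist patterns that match the given text."""
    matches = []
    for pattern in BLOCKLIST:
        if re.search(pattern, text, flags=re.IGNORECASE):
            matches.append(pattern)
    return matches
```

In practice a flagged post would typically be queued for human review rather than removed outright, which reduces the risk of over-blocking legitimate speech – a trade-off at the heart of the regulatory debate described in this report.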
“As users if we want to be online, we have to take responsibility for looking out for ourselves and not assuming everything is benign.”
National Digital Ethics Public Panel Insight Report, 2021, P. 33
Contact
Email: digitalethics@gov.scot