
Balancing human and machine perspectives: what is the ‘public interest’ in the AI era?
By: Tom Orrell and Melissa Stock

Tom Orrell is the Founder of DataReady, a company that provides services and research in the digital policy and international development fields. Melissa Stock is a practising information law barrister and manages the privacy law barrister blog. Together, we are exploring the legal dimensions of the AI era. If you are working on issues touched upon in this blog, we’d love to hear from you.

Privacy law has never been a straightforward affair in England. Its development has been piecemeal over the past two centuries.[1] In fact, the laws that govern information today are spread across multiple frameworks, including: libel (protecting reputation), data protection (rights over the processing of personal data), breach of confidence (disclosure of confidential information), freedom of information (accessing information recorded by public authorities), misuse of private information[2] (where private information is unjustifiably used), and harassment[3] (making information public to cause alarm or distress to a person).

Which framework is applied depends on the nature of the information, how it was obtained and is being used, the relationship between the parties, and what solution is being sought. Whether or not there is a public interest in the disclosure, or retention, of the information is often a question posed to the courts, and the way the ‘test’ is applied in answering this question varies across privacy, defamation, data protection and freedom of information laws.

Although the precise wording may differ between them, public interest tests generally require that: “a public authority, or oversight body, weigh the harm that disclosure would cause to the protected interest [e.g. the right to privacy] against the public interest served by disclosure of the information.”[4]

In the world we live in today, unanticipated technological advances have led to the near-ubiquitous use of electronic devices for mass communication, online consumerism and the distribution of information. As societies increasingly use and share information in more complex and diverse ways, data is becoming big business. Only now, as the flaws emerge — the Cambridge Analytica scandal, Google’s internet tracking of Apple Safari users in 2011, the use of Facebook in Burma to spread disinformation targeting the Rohingya Muslim minority — are we asking whether our laws adequately address the unanticipated negative effects that can result from the use of information at scale. Infringements in the digital era can affect millions of people, whole societies and communities, and even democratic structures.

The ‘public interest’ is a concept that is bound closely to fundamental notions of how rights and interests are balanced fairly and proportionately in a democratic society. It is simultaneously a legal, sociological and policy issue. A question that, in our view, remains under-explored is where Artificial Intelligence (‘AI’) fits within the current legal frameworks that govern information law, particularly in the context of the public interest. How do public interest tests apply to, and impact upon, data and information produced by automated means?

The European Union’s much-hyped General Data Protection Regulation (‘GDPR’), which has recently come into force, grants individuals enhanced rights to effectively control and monitor the use of their data. Whilst Article 22 of the GDPR refers to ‘automated processing’ and ‘profiling’, the rights it gives individuals to object apply only where there have been ‘legal effects concerning the data subject’.

It is unclear how this would apply in situations where the outcome of automated processing or profiling is less tangible, but no less important on a large scale. The GDPR does not directly address the question of how to challenge the use of information that is created by AI with the input of data relating to (potentially) millions of people at a time.

In our view there are at least four scenarios in which the concept of the public interest, under the broad umbrella of information law, manifests in relation to AI methods. We believe that each of these areas merits more detailed guidance on how public interest tests should be applied. The four areas cover cases where:

  1. big data techniques are used to analyse consumer and citizen-generated content online and identify trends (potentially without individuals’ knowledge);
  2. bots (automated programmes that function on the world wide web) produce data, target individuals and distribute (dis)information through social media and other digital channels;
  3. automated algorithms generate information that is potentially libellous or in breach of an individual’s right to privacy — i.e. can a public interest test designed to balance two human rights (e.g. privacy vs. free expression) be used to balance an automated machine output against a human right?
  4. individuals file freedom of information requests with public bodies requesting information on how automated programmes are being used — but what happens when an automated neural network produces an output that lacks traceability but impacts upon the public in some way?

The need for guidance in this area is well documented. In the UK, the House of Commons Digital, Culture, Media and Sport Committee recently published an interim report on disinformation and ‘fake news’ in light of the Cambridge Analytica scandal. Its conclusions call for more research, the development of guidance, and collaboration with other countries and organisations to better understand how these technologies impact public life. We hope to contribute by exploring the above four scenarios in more depth.

It is time we opened up a conversation, as a society, about what our collective ‘public interest’ is in relation to (dis)information production, dissemination and consumption. We need inclusive, multi-stakeholder conversations between AI developers and innovators, lawyers and legal reformers, politicians and civil servants, journalists and media professionals, as well as civil society and private actors. Together, we need to devise common sets of principles and guidance that can help us re-imagine and delineate the boundaries of what we are, and are not, willing to accept as being in the public interest in an AI era.

[1] The law of confidence is considered to have originated in the case of Prince Albert v Strange (1849) 41 ER 1171; prior to this, most cases concerned letters and literary works disputed under private property or copyright law.

[2] The leading case of Campbell v Mirror Group Newspapers Ltd [2004] UKHL 22; [2004] 2 A.C. 457 created a two-part test.

[3] The Protection from Harassment Act 1997.

[4] Right2Info, ‘Harm and the public interest test’. Available at: https://www.right2info.org/exceptions-to-access/harm-and-public-interest-test [accessed 6 October 2018].
