
Fake News: Interview with Professor Vian Bakir


This blog post featuring Professor Vian Bakir first appeared as an interview in CSI-COP's third newsletter.

Please tell us a little about yourself?

I am a Professor in Journalism and Political Communication at Bangor University, Wales, UK – perhaps among the most beautiful university locations in the world, nestled between the mountains and the sea. The expanse and intensity of nature allows me the mental space to delve into complex, sometimes very dark, and often frustratingly opaque political, social and cultural phenomena to do with the impact of the digital age on dataveillance, disinformation and deception.

As a result, I’ve produced books on Intelligence Elites and Public Accountability (2018), which explores relationships of influence between intelligence agencies, their wider political, commercial and military networks, and civil society; Torture, Intelligence and Sousveillance in the War on Terror (2016), which examines the secretive torture-intelligence nexus put in place by the administration of George W. Bush following 9/11, as well as how this was exposed by civil society; and Sousveillance, Media and Strategic Political Communication: Iraq, USA, UK (2010), which examines the impact of social media on the ability of nations and military institutions to control their message during the 2003 Iraq War and its aftermath.

One of my first big projects was to co-edit a book (with David Barlow) on Communication in the Age of Suspicion: Trust and the Media (2007) – so you can see a trend here in what my academic interests are. Right now, I am trying to finish a book (with Andrew McStay) on the rise of ‘Emotional AI’ (namely, the profiling of emotions using automated systems) and disinformation. It takes a global look at these twin phenomena, with a view to diagnosing key harms, assessing the solutions proposed so far, and keeping an eye on the horizon in terms of how these emergent technologies may develop. Our book is contracted with Springer, and should be out by the end of next year.

It was inspired, really, by the work that I’ve been doing with Andrew McStay advising parliaments on these issues. Across 2017–19, we advised the UK Parliament’s Inquiry into Fake News and Disinformation. We’ve also advised the UK All-Party Parliamentary Group on AI (2020), the UK All-Party Parliamentary Group on Electoral Transparency (2019–20) and, most recently, the Parliament of Victoria (Australia) Electoral Matters Committee (2020–21) on the impact of social media on elections. I’ve also worked (with Paul Lashmar) with the UK’s National Union of Journalists to prepare guidance for journalists seeking to avoid undue surveillance by the security state; and (with Andrew McStay) I’ve advised business on public attitudes towards emergent technology.

Our paper on this topic, Fake News and the Economy of Emotions (co-written with Andrew McStay), has attracted a lot of attention across the world, including from policymakers and governments. It was among the first to address the issue of ‘fake news’, and it assesses key solutions that a range of stakeholders had proposed at that point in time (2017). It also suggests that we need to pay more attention to the commercial drivers of fake news and online disinformation, as well as keeping an eye on likely future technological developments that could make things even worse. We published an update to this in a chapter in last year’s edited book, Affective Politics of Digital Media: Propaganda by Other Means. The book I’m currently writing with Andrew McStay very much builds on these papers.

Can you tell us about your Emotional AI project?

I’m lucky enough to be on a three-year, interdisciplinary project with fabulous colleagues, examining Emotional AI in Cities in the UK and Japan. Funded by the ESRC and the Japan Science and Technology Agency (JST), we’re exploring the social impact of ‘emotional AI’, namely technologies that use affective computing and AI techniques to sense, learn about and interact with human emotional life. Usually, emotional AI refers to technologies such as biometrics (tracking various bodily signals and traces), for instance in the facial coding of expressions or in voice analytics, but I also regard social media as the globally dominant form of emotional AI (as I mentioned earlier, platforms are designed to maximise users’ engagement by profiling their emotions).
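For readers unfamiliar with the term, a toy sketch may help convey what ‘profiling emotions using automated systems’ can mean in its simplest, text-based form. The lexicon and example post below are invented purely for illustration; real emotional AI systems rely on trained models over facial, vocal or behavioural data rather than keyword lists.

```python
# A minimal, hypothetical sketch of emotion profiling: scoring text against a
# small emotion lexicon. The lexicon and example post are invented; real
# systems use trained models, not keyword matching.
from collections import Counter

EMOTION_LEXICON = {  # invented toy lexicon, not a real resource
    "furious": "anger", "outraged": "anger", "scandal": "anger",
    "terrified": "fear", "dangerous": "fear", "warning": "fear",
    "wonderful": "joy", "love": "joy", "amazing": "joy",
}

def profile_emotions(text: str) -> Counter:
    """Count emotion-category hits in a piece of text."""
    words = text.lower().split()
    return Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)

post = "Outraged by this dangerous scandal - share this warning now"
print(profile_emotions(post))  # Counter({'anger': 2, 'fear': 2})
```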

Our overall project investigates the societal implications of the emergence of these technologies: how they will be deployed in our cities, what is coming next, how citizens feel about it, whether policies are appropriate, and how to account for data ethics in societies with quite different histories and demographics. The team has conducted interviews with diverse stakeholders interested in the deployment of emotional AI (e.g. policing, security, law, car manufacturers, political campaigners, privacy NGOs, technology standards developers) to get their perspectives on this rapidly emerging area.

At the UK end, we’ve also just completed a series of online workshops with ‘ordinary people’ to surface their views on various possible deployments of this emergent technology. That was fun, as it involved developing a fictional narrative of a world where near-future use cases of emotional AI are deployed as the protagonist moves across a city; at each use case, a contravision of better and worse outcomes is presented (while avoiding the polarising utopian/dystopian trope), and we asked participants for their views at each juncture. We developed our fictional narrative on Twine – an online platform akin to the choose-your-own-adventure story books popular in the 70s and 80s, where you could choose different plot outcomes as you progressed through the book. We’re looking forward to getting stuck into analysing this data – our initial run-through shows it to be rich and insightful. Expect a stream of academic papers on this next year!
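For the curious, the underlying structure of such a branching narrative is simple. The sketch below is hypothetical (the node names and passage text are not from the project’s actual workshop story), but it shows the choose-your-own-adventure shape: each passage offers branches towards better or worse outcomes, and the participant’s choices steer the protagonist onward.

```python
# A minimal sketch of a Twine-style branching narrative, as a graph of
# passages. Nodes and text are hypothetical, invented for illustration.
STORY = {
    "start": {
        "text": "You board a bus fitted with cameras that read passengers' moods.",
        "choices": {"better": "bus_better", "worse": "bus_worse"},
    },
    "bus_better": {
        "text": "The operator uses aggregate mood data to improve the service.",
        "choices": {"continue": "end"},
    },
    "bus_worse": {
        "text": "Your irritation is logged and flagged to an insurer.",
        "choices": {"continue": "end"},
    },
    "end": {"text": "You arrive in the city centre.", "choices": {}},
}

def play(node: str = "start") -> None:
    """Walk the story, asking the reader to choose at each branch."""
    while True:
        passage = STORY[node]
        print(passage["text"])
        if not passage["choices"]:
            return
        options = list(passage["choices"])
        choice = input(f"Choose {options}: ").strip()
        node = passage["choices"].get(choice, node)  # re-prompt on bad input

play()
```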

What do you see as some solutions to misinformation/disinformation? For example, how can we avoid Nicki Minaj-type episodes that waste health professionals’ time in debunking ridiculous claims?

The Nicki Minaj episode [1] entangles at least three highly emotional phenomena: (1) anti-vax conspiracies (which generally feed off people’s lack of trust in authority, e.g. government, medicine, experts, ‘Big Pharma’); (2) fear of harm to self and future generations (in her case, her tweet that the COVID-19 vaccine makes a man’s testicles swell); and (3) her massive pop fanbase that, by the nature of fandom, is probably adoring of Minaj and pays attention to what she says. And all conducted over social media. Big data studies from across the world repeatedly show that both emotional and deceptive content travels further, faster and deeper on social media than non-emotional or truthful content.

This is because social media platforms are carefully designed and constantly tweaked to maximise user engagement, and it is emotional content that is most engaging to humans. Deceptive content can also be maximally emotional, as it need bear no relation to the truth. The ultimate solution would be to change the business models of social media platforms, so that they do not seek maximal user engagement at all costs, and so that they do not design algorithms that make emotional and/or deceptive content go viral. But this will never happen without either (a) a mass exodus of users (unlikely) or (b) strong government intervention (possible in certain parts of the world, but probably at the cost of free speech). So we are left to tinker at the edges with solutions. I see four broad solution areas here.
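A toy sketch can make the mechanism concrete. Assuming, purely for illustration, a feed that ranks posts by predicted engagement, and a model in which engagement tracks emotional charge rather than accuracy, false but emotive content floats to the top:

```python
# A toy illustration of engagement-maximising ranking. The posts, emotion
# scores and scoring weights are all invented for illustration only.
posts = [
    {"text": "Committee publishes routine budget report", "emotion": 0.1, "truthful": True},
    {"text": "SHOCKING claim about vaccine side effects!", "emotion": 0.9, "truthful": False},
    {"text": "Study finds modest improvement in air quality", "emotion": 0.3, "truthful": True},
]

def predicted_engagement(post: dict) -> float:
    # Hypothetical model: engagement tracks emotional charge, not accuracy.
    return 0.2 + 0.8 * post["emotion"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
for p in feed:
    print(f"{predicted_engagement(p):.2f}  {p['text']}")
# The false but highly emotional post ranks first.
```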

1) Educating people to recognise, avoid and be sceptical of deceptive, emotive claims on social media is a good place to start – but it is massively difficult (people are full of biases that lead them to ignore information they don’t agree with, or aren’t familiar with), and it’s a slow process (you’d need to educate everyone on social media).

2) Dominant social media platforms can (and in the case of COVID-19, do) moderate content to flag, take down or rebut false and socially harmful claims (e.g. ones that have gained viral attention and advocate stances that go against the scientific consensus). A problem with COVID-19 is that our knowledge base on this virus is still evolving and there is still much that scientists do not know, so few scientific claims about it are, as yet, absolute. This opens a massive gateway to widespread scepticism and doubt – and these doubters take many guises: in Minaj’s case a global celebrity, but we’ve also seen such stances from politicians, and even doctors, nurses and academic propaganda experts! At the best of times, content moderation is resource-intensive: it requires human moderators to understand conversational nuance (so that humour, irony or political speech is not censored, for instance), and algorithms need to be constantly updated in order to keep pace with developments in language, slang and code words. As a result, content moderation is not universally enforced, even in countries where social media platforms devote maximal resources – and certainly not in most countries around the world. This is problematic given the global scale of COVID-19 disinformation and misinformation, as well as its weaponisation by political actors for soft power or information warfare purposes.
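To see why moderation at scale is so hard, consider a deliberately naive, hypothetical keyword filter (the blocklist and example posts below are invented): it flags an accurate scientific sentence while missing a coded scam.

```python
# A toy keyword-based moderation filter, illustrating why naive matching
# fails: it misses coded language and flags legitimate speech. The blocklist
# and posts are invented for illustration only.
BLOCKLIST = {"miracle cure", "vaccines cause"}

def flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted phrase."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKLIST)

posts = [
    "Vaccines cause your immune system to produce antibodies.",  # accurate science
    "Try this m1racle cure doctors won't tell you about!",       # coded spelling
]
for p in posts:
    print(flag(p), p)
# Output: True for the accurate sentence, False for the scam - exactly
# backwards, which is why human moderators and constantly updated models
# are needed.
```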

3) As well as education and content moderation, we should also try to encourage public figures to be less deceptive. Enforcing higher standards in public life might work for some elected politicians (although, looking at the UK, it is hard to see this really working!), but I can’t see that approach working for celebrities.

4) Ultimately, I think, a strong, vibrant public sphere is needed, where truthful, impartial and trusted information can be easily and freely found. This requires sustained support from government (i.e. subsidies), because the business models of the press have been destroyed by social media and search engine platforms, driving press outlets out of business and increasing journalism’s reliance on public relations handouts and clickbait, thereby damaging the quality of the news (and people’s trust in it).


[1] Nicki Minaj BBC article, September 2021: https://www.bbc.co.uk/news/newsbeat-58571353
