As the UK heads to the polls tomorrow (July 4th), the use of artificial intelligence (AI) in the electoral process is a hot topic. Recent revelations have shown that AI-driven bots on the social media platform X have generated over 60,000 tweets, as reported by Global Witness, which have been viewed approximately 150 million times since the election was called. This has intensified the debate over AI's impact on democracy. But as concerns mount, it's essential to explore not just the risks, but also the potential benefits of AI in enhancing democratic engagement and transparency.
AI and Election Integrity: A Double-Edged Sword
The deployment of AI in elections presents multiple issues. On the one hand, AI bots can distort public discourse by amplifying divisive content and spreading misinformation. This manipulation threatens the core of democratic engagement, potentially swaying voter opinions and undermining trust in electoral integrity. For instance, bots spreading anti-Semitic, Islamophobic, and climate change denial content have already been identified, raising alarms about the ethical use of AI. On the other hand, AI has the potential to strengthen democracy through training and content creation.
Ethical Concerns of AI and Politics:
Manipulation of Public Opinion: AI bots can create an illusion of widespread support or opposition, skewing public perception and influencing voter behaviour. Not only is AI trained to push political content on social media and in news pop-ups, depending on search history and engagement, but it can now replicate human behaviour on these platforms to further amplify political ideas and support. This undermines the democratic process by drowning out authentic voices.
Spread of Misinformation: The ability of AI to generate realistic deepfakes and misinformation is a significant concern. These technologies can create videos and audio clips of politicians saying things that are either out of context or completely fabricated. With the widespread distribution of ‘fake news’, members of the public can be easily misled by constant exposure to these falsehoods.
Foreign Interference: The potential for foreign actors to exploit AI to interfere in elections is real. Previous incidents, such as the use of Russian bots during the Brexit referendum, highlight the threat. Such actions can destabilise democratic processes and sway election outcomes.
Balancing the Scale: The Benefits
Amidst these challenges, there is potential for AI to be harnessed to strengthen democracy rather than undermine it. Campaign Lab, for example, is working to integrate AI tools into political campaigns to enhance democratic engagement. Its initiatives include:
AI-Enhanced Campaign Tools: Campaign Lab runs hack days to develop tools that use AI to support election campaigns. For instance, they use ChatGPT to draft election leaflets, reminding campaigners to review AI-generated content carefully, given its tendency to distort facts.
Chatbots for Canvassing: AI-powered chatbots are being tested to train canvassers, making doorstep conversations more engaging and effective. This approach aims to free up time for campaigners to focus on direct voter interaction.
An Arena for Debate: Polis, an AI-powered tool that allows groups of people to share opinions and make decisions through votes and discussion, can be used to facilitate debates. At present it is better suited to resolving local debates than nationwide elections, but it has the potential to grow into something more.
Implications for Investment Managers
The ethical and practical implications of AI in politics can have a direct bearing on investment strategies; consider, for example, the reputational risk to companies that develop or deploy AI technologies for political manipulation. Although AI interference doesn’t appear to affect investment decisions directly, the continued evolution and influence of AI in politics is something investment managers should note, for reasons including:
Regulatory Compliance: As governments tighten regulations to safeguard democratic processes, compliance becomes paramount. The UK’s Online Safety Act and similar regulations globally require tech companies to mitigate risks associated with AI-driven disinformation. Non-compliance can result in substantial fines and legal consequences, affecting a company’s financial health and market position.
Long-Term Sustainability: Ethical AI practices are essential for the long-term sustainability of tech companies. Firms that adhere to responsible AI guidelines are better positioned to thrive amid regulatory scrutiny and public demand for transparency. Investors should prioritise companies committed to ethical AI use, ensuring sustainable growth and mitigating risks related to disinformation and manipulation.
There is no magical formula that will ensure we reap the benefits of AI over its potential risks, but it is beneficial to investment managers (and the public as a whole) to be aware of such influences. As AI evolves daily, so does the list of issues with using it in crucial parts of our society, particularly in politics. However, by raising awareness of reliable information and AI fabrication, we can help steer people towards informed decisions, both in tomorrow’s election and with their investments.