Is it OK for politicians to use AI? Survey shows where the public draws the line
New survey evidence from the UK and Japan shows people are open to MPs using AI as a tool, but deeply resistant to handing over democratic decisions to machines.
Artificial intelligence is creeping into every corner of life and is becoming a feature of politics. Conservative MP Tom Tugendhat recently criticised colleagues for using ChatGPT to draft their parliamentary speeches, warning that elected representatives should not outsource their judgment to machines. His comments capture a wider unease. Should AI have a place in democratic decision-making?
Supporters of AI in parliament argue it could help MPs cope with the flood of legislation, public submissions and policy documents they have to deal with in their work. But critics worry that over-reliance on AI may undermine accountability and public trust.
In our new research, our TrustTracker team surveyed people in the UK and Japan to see where they drew the line on the use of AI by the people who represent them. Respondents were cautiously accepting: they were far more comfortable with politicians using AI as a source of advice than as a replacement for human judgment when making decisions.
In the UK, almost half of our 990 respondents said they did not support at all the idea of MPs using AI even in a supporting role. And nearly four in five rejected outright the notion of AI or robots taking decisions in place of parliamentarians.
Our 2,117 Japanese respondents were slightly more open, as we might expect given Japan's considerable experience with automation and robotics. But they too expressed strong opposition to the idea of delegating decisions to robots. Support for AI assistance was higher, but still cautious.
Younger men were consistently more supportive of AI in politics; older people and women were more sceptical. We also found that trust matters: people who trust their government were more willing to back AI in supporting MPs.
Our results also strongly reflected participants' broader attitudes towards AI. People who see AI as beneficial, and who feel confident using it, were much more supportive. Those who fear AI were strongly opposed.
Curiously, ideology also plays a role, but in opposite directions. In the UK, people on the political right were more supportive of AI in parliament. In Japan, it was people on the left who expressed more openness.
Public tolerance for the use of AI in politics exists, but with limits. Citizens want their representatives to use new tools wisely. They do not want to hand over the reins to machines.
That distinction between assistance and delegation is key. AI can make parliaments more efficient, helping MPs sift through evidence, draft better questions, or simulate the outcomes of policy choices. But if citizens feel that AI is replacing human judgment, support evaporates.
For parliaments, which are institutions that depend on trust and legitimacy, this is a red flag. Public wariness could quickly turn into backlash if reforms outpace public consent.
National contrasts
The cross-national comparison is interesting. Japan has a cultural openness to robotics and automation. Concepts like Society 5.0 frame AI as part of a positive national future. Yet even here, people draw a line when it comes to political decision-making. In the UK, debates tend to be framed in terms of ethics and accountability. British respondents are generally more cautious, but also more polarised by ideology.
Taken together, these cases show that public opinion does not simply mirror cultural stereotypes. Support is conditional, context-specific, and tied to wider trust in politics.
AI is coming to politics whether we like it or not. Used carefully, it could help parliaments work better, faster and more transparently. Used carelessly, it could erode trust and legitimacy at the heart of democracy. In other words: AI can advise, but it cannot rule.
Steven David Pickering receives funding from the ESRC (grant reference ES/W011913/1) and the JSPS (grant reference JPJSJRP 20211704).
