
AI chatbots give inaccurate and inconsistent medical advice that could present risks to users, according to a study from the University of Oxford.
The research found people using AI for healthcare advice were given a mix of good and bad responses, making it hard to identify what advice they should trust.
In November 2025, polling by Mental Health UK found more than one in three UK residents now use AI to support their mental health or wellbeing.
Dr Rebecca Payne, lead medical practitioner on the study, said it could be "dangerous" for people to ask chatbots about their symptoms.
Researchers gave 1,300 people a scenario, such as having a severe headache or being a new mother who felt constantly exhausted.
They were split into two groups, with one using AI to help them figure out what they might have and decide what to do next.
The researchers then evaluated whether people correctly identified what might be wrong, and if they should see a GP or go to A&E.
They said the people who used AI often did not know what to ask, and received different answers depending on how they worded their questions.
The chatbot responded with a mixture of information, and people found it hard to distinguish between what was useful and what was not.
Dr Adam Mahdi, senior author on the study, told the BBC while AI was able to give medical information, people "struggle to get useful advice from it".
"People share information gradually", he said.
"They leave things out, they don't mention everything. So, in our study, when the AI listed three possible conditions, people were left to guess which of those can fit.
"This is exactly when things would fall apart."
Lead author Andrew Bean said the analysis illustrated how interacting with humans poses a challenge "even for top" AI models.
"We hope this work will contribute to the development of safer and more useful AI systems," he said.
Meanwhile Dr Bertalan Meskó, editor of The Medical Futurist, which predicts tech trends in healthcare, said there were developments coming in the space.
He said two major AI developers, OpenAI and Anthropic, had recently released health-dedicated versions of their general chatbots, which he believed would "definitely yield different results in a similar study".
He said the goal should be "to keep on improving" the tech, especially "health-related versions, with clear national regulations, regulatory guardrails and medical guidelines".
