
The new GLP-1 weight-loss wonder jabs have become one of the biggest health stories of recent years, transforming lives.
But they also come with side-effects – some expected, some serious and some that are only now starting to become clear.
This was brought home to me vividly a few weeks ago, when I saw a 27-year-old teacher in A&E with such severe abdominal pain he needed morphine.
He’d been struggling with his weight, was finding it harder to exercise and decided he wanted to improve his health with Mounjaro.
But within weeks of starting on it he’d had to come to hospital – and when I checked his blood tests, his levels of an enzyme produced mainly by the pancreas were sky-high.
He had pancreatitis – inflammation of the pancreas – which in extreme cases can be life-threatening. He had to be admitted and given intravenous fluids.
Of course, he came off Mounjaro, never to be taken again. His pancreas settled and he thankfully recovered.
Pancreatitis is a known potential risk of weight-loss drugs – but his story was a stark reminder that even treatments with enormous benefits can sometimes cause very real harm.
Medicine has always been good at learning from formal research such as clinical trials and official reporting systems, where doctors and patients report suspected side-effects.
It is much less good at listening to huge numbers of patients all at once when a treatment moves from being tested in relatively small groups of people to being used out in the big wide world.
And once a treatment is being used by huge numbers, the question is no longer simply whether it works – we know it does – but how quickly we can recognise problems when they appear.
That is why a new study by the University of Pennsylvania, in Nature Health, is significant.
The researchers looked at the side-effects of GLP-1s and – as well as those most of us know about (nausea, constipation, diarrhoea) – they picked up previously overlooked side-effects, including menstrual changes (e.g. heavy bleeding and irregular cycles), chills, hot flushes, fatigue and sleep difficulties.
The important point is that this does not prove the drugs caused all these symptoms, but it does show the sort of patient-reported effects that may not be well captured in formal trials or in drug patient leaflets.
The researchers were able to spot these thanks to artificial intelligence – they used it to analyse more than 410,000 posts on the website Reddit from people discussing semaglutide and tirzepatide (the GLP-1 drugs in Ozempic and Mounjaro).
Social media posts like these are useful for doctors not because they are perfect science – clearly they’re not – but because patients often talk online in massive numbers, quickly and honestly, about symptoms they may not mention in a ten-minute appointment.
(We saw something similar with Covid, where trends in Google searches, such as ‘loss of smell’, preceded rises in the cases being reported to doctors – with researchers arguing this data is useful for monitoring disease spread.) No doctor, regulator or scientist could read that volume of conversation on social media and spot patterns across it. AI can.
That, for me, is where this becomes so important.
AI has had a lot of bad press, but it’s been positively revolutionary in medicine.
For instance, it can help develop new drugs, searching vast amounts of scientific data for possible new treatments.
For families living with a rare disease (there are around three million such people in the UK), the advances in medicine due to AI matter more than most people can imagine.
There are no textbooks, and very few – if any – studies for these diseases, and sometimes patients know more than the professionals simply because they are living with the condition every day.
They are on social media, comparing symptoms and setbacks, discussing treatments and noticing patterns that no formal study has captured. AI can pull those patterns together.
As well as looking at data from patients, AI can help the patient sitting in front of the doctor.
It is already being used to improve our interpretation of scans and X-rays, and to make better cancer diagnoses with more accurate interpretation of pathology slides and blood tests.
One of the things I worry about most in A&E is not the obvious disaster but the subtle one: the patient with a broken hip after a fall whose X-ray is thought to be normal, so they are discharged.
Or worse, the patient with a sudden headache whose brain scan contains a tiny bleed that is missed, with devastating consequences.
We like to imagine that doctors get these things right every time. We do not.
That is why the new wave of AI being used to support diagnosis in emergency medicine is starting to look like another area where AI could significantly improve patient care – and I can’t wait to start using it for my patients.
A large review, published in the journal Annals of Medicine, pulled together 26 studies and found that when clinicians used AI to help interpret bone X-rays, they became markedly better both at spotting genuine fractures and at correctly recognising when an X-ray was normal.
It’s not just broken bones. A study this year, led by Oxford University Hospitals NHS Trust looking at CT brain scans, found that AI helped A&E clinicians detect more critical signs, such as small bleeds, that are easy to miss but potentially devastating.
Most strikingly, A&E clinicians using AI performed similarly to specialist radiologists at picking up serious problems on the scans.
That matters because in real life we often wait for specialist radiology reports before making decisions to discharge patients based on normal CT scans. With AI we won’t need to.
And it’s increasingly apparent how AI can help us understand the risks of treatment faster, sifting through vast amounts of information and data quickly and spotting patterns and harms earlier, which might otherwise have taken years to emerge.
Traditionally, medicine had a set way of doing that.
First come the animal studies, then come the clinical trials – and once the drug is out in the world, we rely on doctors and patients reporting suspected side-effects through systems such as the Yellow Card scheme run by the Medicines and Healthcare products Regulatory Agency.
Those systems are life-saving but imperfect, as they depend on someone spotting a possible link and actually reporting it.
In the messiness of real life, that does not always happen, especially when many of the side-effects are vague, embarrassing or easily brushed aside.
It’s been estimated that only around 10 per cent of serious reactions and 2 to 4 per cent of non-serious ones are reported to the Yellow Card scheme.
I don’t think AI will replace doctors – as the human touch is so key – but it has the potential to work at both ends of medicine at once, helping us understand trends across millions of patients while also helping us make better decisions for one frightened patient in one room on one day.
Every powerful medical advance has risks and every tool can be misused. But that has never been a reason to reject progress.
The teacher I saw is a reminder of why we need systems that can detect patterns earlier and feed that knowledge back to patient care before more people are harmed. AI can help with that.
@drrobgalloway