With deepfake images circulating all over social media, scientists have identified a potential threat to healthcare systems. A study published in Radiology found that even seasoned radiologists struggled to distinguish AI-generated X-rays from real scans. A deepfake can be a photo, video or any other visual that is not based on reality and has been created or altered using artificial intelligence. Even the AI models that generate deepfakes can take time to detect a synthetic scan; however, some experts have identified a few patterns associated with these fabricated images. Experts say better detection tools are needed to prevent tampering with medical records. In 2025, the All India Institute of Medical Sciences (AIIMS) was caught up in a controversy over an AI-generated X-ray report that went viral. Instances like these are raising concerns among experts who fear AI could also be used to fabricate medical information.

What did the study find?

Scientists from the Radiological Society of North America recruited 17 radiologists from 12 centers across six countries: the United States, France, Germany, Turkey, the United Kingdom and the United Arab Emirates. The participating radiologists had work experience ranging from 0 to 40 years, some beginners, others specialists. Of the 264 images shown to them, half were real and the other half had been fabricated using artificial intelligence. The first set presented a mix of real and AI-generated scans of various anatomical structures, while the second set comprised X-ray images that were half real and half generated by a sophisticated AI model developed by Stanford Medicine researchers. None of the radiologists were told about the manipulated X-ray scans. When simply asked whether they noticed anything unusual in the images, only 41 percent identified the AI-generated images. However, once the radiologists were informed about the fabricated scans, detection accuracy rose to 75 percent. Interestingly, when the same images were uploaded to different versions of ChatGPT, Gemini and Llama, their detection rates ranged from 57 percent to 85 percent. Even the version of ChatGPT used to create the fabricated X-rays could not easily detect the manipulated parts. The study found no correlation between a radiologist's years of experience and their ability to catch the deepfake X-rays.

Some patterns in deepfake X-rays

According to one of the researchers leading the study, deepfake X-rays share some telltale patterns:
- They look too perfect.
- Bones look overly smooth.
- Lungs look too symmetrical.
- Blood vessel patterns look unusually uniform.
- Fractures seem unusually clean and often appear on one side of the bone.

Experts, however, recommend using high-quality tools to detect fabrication, as well as adding invisible watermarks to real X-rays.
Concern

Dr Tordjman, one of the leading scientists on the study, said fabricated X-rays are just the tip of the iceberg: with better AI tools, 3-D images such as CT scans and MRIs could be generated in the coming years. The experts also stressed the need to safeguard medical information, which could be manipulated for fraudulent purposes such as insurance claims and cyberattacks.