The results show that AI can support clinicians in flagging potential abuse early, said Dr. Bharti Khurana, one of the study’s authors and an emergency radiologist at MGB.
“The idea is to share resources sooner rather than later,” Khurana said. “This is something I call proactive screening, instead of waiting for them to disclose [abuse] and then offering services.”
The Centers for Disease Control and Prevention estimate that one in three women and one in six men have experienced intimate partner violence in their lifetimes. Despite its frequency, few people who experience domestic violence disclose it to medical professionals, whether out of fear of being judged, fear of their partner finding out, or financial or psychological dependence on the person abusing them.
Khurana said she began noticing subtle patterns in the scan results of patients who were experiencing intimate partner violence. But radiologists, who review X-ray, CT scan, and MRI results for only a couple of minutes at a time, don't have the capacity to scour past medical records for additional signs of abuse.
AI, however, could review electronic medical records to pick up on clues that someone might be in danger at home. It’s one of the latest examples of AI being used to augment the limited time and attention of clinicians and flag conditions that would otherwise go unnoticed.
In the study, the authors said they plan to develop a decision support tool that would be embedded in electronic medical record systems to perform risk evaluation. (It’s too early to say how that would be implemented in a non-research setting, or how privacy concerns would be addressed.)
Khurana and her team trained the AI models to pick up on certain injuries, like those to the face, neck, and upper body; types of visits; and even the time of day a patient came to the emergency department.
“All these patterns became more visible once we started using machine learning,” Khurana said.
The researchers developed their models using patient data from nearly 850 women enrolled at the Brigham's domestic abuse intervention and prevention center from 2017 to 2019 and from 2021 to 2022. Patients from 2020 were excluded because of the unique nature of the COVID-19 pandemic. The models were also trained on a group of about 5,200 control patients who had not experienced intimate partner violence, but were otherwise similar in demographics to the patients who had.
The models were then tested against a different group of patients from Mass General Hospital, Khurana said.
The study included three AI models: one that evaluated data such as medications, vital signs, and demographics; one that evaluated clinical notes and radiology notes; and one that considered both. The combination model had the highest accuracy in predicting violence.
Navigating conversations about intimate partner violence is difficult, and doing so incorrectly can have detrimental effects.
While the AI models in this study are useful advances, researchers and clinicians have to be careful that the tools don’t become crutches, said Dr. Brigid McCaw, who previously served as the medical director of the Kaiser Permanente Family Violence Prevention Program.
“We need to be very, very cautious about how [AI] information is used for clinicians so that they don’t become over-reliant on algorithms without understanding what the data are that drive the algorithms,” McCaw said.
McCaw emphasized that it’s important that any domestic violence screening tools be rigorously tested — and that they take survivors’ opinions into account.
“This is very early, and there’s so much excitement about AI,” McCaw said. “I’m putting the caution out there that there’s a lot that needs to be learned and that we really need the voices of survivors.”
Khurana at MGB said she had to adjust the models to make sure they were catching as many victims as possible without flagging patients who weren't actually at risk.
“If there are too many false positives, then you lose trust and nobody’s using it,” Khurana said.
Khurana’s team is continuing to train the models on data through 2025 and is in talks with researchers globally about how they can strengthen the tool.
“My hope is to bring more institutions in so that we can learn from different ZIP codes, different areas, not only in the US,” she said.
Marin Wolf can be reached at marin.wolf@globe.com.