All in the Family – Spouses Work Together on AI and Breast Ultrasound   

Hello BRIGHT Run Family, 

I hope you are doing well and gearing up for our 18th BRIGHT Run. 

In my last letter, I mentioned that one of our research projects was presented at the International Symposium on Biomedical Imaging (ISBI 2025) held in Houston, Texas. As a recap, I discussed the application of an advanced type of AI model, called a vision-language model (VLM), for categorizing low-suspicion and high-suspicion mammograms. 

VLMs are designed to understand both images and text, allowing them to describe images through words. I also mentioned one of the two reasons that made the conference special for me. 

In this letter, I will discuss the second research project and the second reason the event was so special for me. 

In this study, we investigated whether existing medical VLMs (trained on medical images and text) and non-medical VLMs (trained on general images and text) performed differently when used to categorize breast ultrasound images as benign or malignant. We then explored how to improve the performance of these VLMs. 

Our findings were as follows: 

  •  The existing models performed differently depending on how the tumour labels were worded; for example, performance changed when we used “non-malignant” instead of “benign.” 
  •  Performance improved when we adapted the existing models by using examples from breast ultrasound datasets. 
  •  We found that adding visual markers, such as outlining the tumour in the image, helped AI focus on the tumour and improved classification accuracy. This technique is called visual prompting.  
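For the technically curious, the label-wording finding above can be illustrated with a toy sketch of how a CLIP-style VLM classifies an image: the model embeds the image and each candidate label as vectors, then picks the label whose vector is most similar to the image's. Every number below is invented for illustration; a real VLM would compute these embeddings with its image and text encoders.

```python
# Toy sketch of CLIP-style zero-shot classification.
# NOTE: the embedding vectors are made up for illustration; a real VLM
# produces them with trained image and text encoders.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(image_emb, label_embs):
    """Pick the label whose text embedding is most similar to the image's."""
    scores = {name: cosine(image_emb, emb) for name, emb in label_embs.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical 3-dimensional embeddings (kept tiny for readability).
image_emb = [0.9, 0.1, 0.4]           # an image of a benign-looking tumour
text_benign = [0.8, 0.2, 0.5]         # text embedding of "benign"
text_non_malignant = [0.5, 0.5, 0.5]  # text embedding of "non-malignant"
text_malignant = [0.2, 0.9, 0.3]      # text embedding of "malignant"

# Same image, two wordings of the same concept: the similarity scores
# differ, so the model's confidence (and sometimes its decision) changes.
label, scores = classify(image_emb, {"benign": text_benign,
                                     "malignant": text_malignant})
print(label, scores)
```

Because "benign" and "non-malignant" map to different text embeddings, swapping one for the other shifts the similarity scores even though the image is unchanged, which is the wording sensitivity our study observed.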

VLMs require enormous computing power and effort to train. Our work adds to the literature showing how pre-trained VLMs can be adapted to specific medical datasets. While we performed this work for breast ultrasound, we hope the approach will extend to other breast imaging types, reducing costly model training. 

This project was very special because I worked on it in collaboration with my husband, Dr. Dibyendu Mukherjee, who has expertise in VLM adaptation (see attached photo). You could call this “homegrown” research. The conference is technically focused, meaning it emphasizes technical novelty and engineering contributions in biomedical imaging. Hence, it was ideal for our interdisciplinary partnership. This is similar to how doctors with different specializations team up to investigate a complex health issue. 

Stay well. 

Best, 

Ashirbani 

Dr. Ashirbani Saha is the first holder of the BRIGHT Run Breast Cancer Learning Health System Chair, a permanent research position established by the BRIGHT Run in partnership with McMaster University.