Artificial Intelligence and Radiology Reports – First Steps
Hello BRIGHT Run Family,
Happy June! I hope you are enjoying the summer.
I am happy to share some research publication news. Since last year, I have been working with my collaborators Levi Burns, MD candidate at the DeGroote School of Medicine, McMaster, and Dr. Ameya M. Kulkarni, radiologist at McMaster University, on reviewing AI applications that use radiology reports in breast cancer. The work is now published as an open-access article (you will not need any subscription for access) and you can find it here: Frontiers | A scoping review of natural language processing of radiology reports in breast cancer (frontiersin.org).
Specifically, we performed a scoping review of AI applications that use radiology reports in breast cancer. A scoping review is a preliminary assessment of the potential size and scope of the research literature available in a specific area of study.
Radiological images come in various types, such as mammography, digital breast tomosynthesis (DBT), positron emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). After these images are acquired, a radiologist reads them and looks for important findings.
The radiologist documents these findings in free-text sentences, so the resulting reports are in an unstructured data format. In contrast, structured data would be represented as a table, such as a spreadsheet with well-defined headers and cell values (for example, a column headed "BI-RADS category" holding one value per report).
This practice of unstructured reporting has prevailed for many years. The reports used to be handwritten, but since the advent and proliferation of Electronic Health Record (EHR) systems, they are digital, typed or dictated. These digital reports are much easier for natural language processing (NLP) algorithms to process, both to structure them and to build AI tools on top of them.
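For curious readers, here is a tiny sketch in Python of what "structuring" a report can look like in its simplest rule-based form. The report excerpt, field names, and patterns below are invented purely for illustration; the NLP systems in the studies we reviewed are far more sophisticated than this.

```python
import re

# A hypothetical, invented report excerpt (not from any real patient).
report = (
    "The breasts are heterogeneously dense. A 12 mm oval mass is seen "
    "in the left breast at 2 o'clock. BI-RADS category 4."
)

def structure_report(text: str) -> dict:
    """Pull a few fields out of free text using simple pattern matching."""
    fields = {}

    # BI-RADS assessment category (0-6).
    m = re.search(r"BI-?RADS\s+(?:category\s+)?([0-6])", text, re.IGNORECASE)
    fields["bi_rads"] = m.group(1) if m else None

    # Breast density, one of the four standard descriptions.
    m = re.search(
        r"(almost entirely fatty|scattered fibroglandular|"
        r"heterogeneously dense|extremely dense)",
        text,
        re.IGNORECASE,
    )
    fields["density"] = m.group(1) if m else None

    # Largest reported lesion size in millimetres.
    m = re.search(r"(\d+)\s*mm", text)
    fields["lesion_size_mm"] = int(m.group(1)) if m else None

    return fields

print(structure_report(report))
# {'bi_rads': '4', 'density': 'heterogeneously dense', 'lesion_size_mm': 12}
```

The output is exactly the kind of spreadsheet-style row described above: well-defined headers with one value each, extracted from free text.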
However, to structure these reports, or to extract information and make predictions from them using AI, one needs to follow methodological steps: obtain the data, remove patient identifiers to protect patient privacy, and prepare gold-standard labels to train and evaluate the AI tool.
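To illustrate just the privacy step, here is a minimal, hypothetical sketch of rule-based de-identification. The sample text and the patterns are invented for illustration; real de-identification pipelines are much more thorough and carefully validated.

```python
import re

# Hypothetical report header (invented; no real patient data).
text = "Patient: Jane Doe, MRN 1234567, exam date 2023-05-14. Findings: ..."

# A few illustrative identifier patterns and their placeholder tags.
PATTERNS = {
    r"Patient:\s*[A-Z][a-z]+\s+[A-Z][a-z]+": "Patient: [NAME]",
    r"MRN\s*\d+": "MRN [ID]",
    r"\d{4}-\d{2}-\d{2}": "[DATE]",
}

def deidentify(report: str) -> str:
    """Replace identifier patterns with placeholder tags."""
    for pattern, placeholder in PATTERNS.items():
        report = re.sub(pattern, placeholder, report)
    return report

print(deidentify(text))
# Patient: [NAME], MRN [ID], exam date [DATE]. Findings: ...
```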
Per our findings, the majority of studies using breast cancer radiology reports worked with mammography reports for screening and diagnosis. Some studies combined radiology reports with other clinical information (such as pathology reports, demographic data, or the images themselves) to pursue objectives relevant to treatment, follow-up, or palliative care; these studies therefore rely on multiple sources of patient data. The AI tools were not limited to reports written in English and were also applied to reports in Dutch, Italian, Persian, Chinese, and other languages.
We also found that researchers are increasingly releasing their computer code for fellow researchers to use, but the release of data from these studies has not increased. Researchers are also adopting newer AI developments to improve performance and to answer important questions.
We hope that, with the advent of new tools and systems, we will see more progress in this area in the future.
And again, this tells us how important it is to continue research using newer tools and techniques, and to keep making progress toward improving the lives of patients with breast cancer and of breast cancer survivors.
With happy thoughts about a better future,
Best,
Ashirbani
Dr. Ashirbani Saha is the first holder of the BRIGHT Run Breast Cancer Learning Health System Chair, a permanent research position established by the BRIGHT Run in partnership with McMaster University.