Large language models (LLMs) are reshaping radiology through their advanced capabilities in tasks such as medical report generation and clinical decision support. However, their effectiveness is heavily influenced by prompt engineering—the design of input prompts that guide the model’s responses. This review aims to illustrate how different prompt engineering techniques, including zero-shot, one-shot, few-shot, chain of thought, and tree of thought, affect LLM performance in a radiology context.
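The techniques named above differ mainly in how much worked context the prompt supplies. A minimal sketch in Python, assuming an invented chest X-ray classification task (the report texts, labels, and function names are illustrative, not from the review):

```python
# Hypothetical sketch of the prompting styles named above, applied to a
# toy radiology task: classifying a report as NORMAL or ABNORMAL.

def zero_shot(report: str) -> str:
    # Zero-shot: instruction only, no worked examples.
    return (f"Classify this chest X-ray report as NORMAL or ABNORMAL.\n"
            f"Report: {report}\nAnswer:")

def few_shot(report: str, examples: list[tuple[str, str]]) -> str:
    # One example makes this one-shot; several make it few-shot.
    shots = "\n".join(f"Report: {r}\nAnswer: {a}" for r, a in examples)
    return (f"Classify each chest X-ray report as NORMAL or ABNORMAL.\n"
            f"{shots}\nReport: {report}\nAnswer:")

def chain_of_thought(report: str) -> str:
    # Chain of thought: ask the model to reason stepwise before answering.
    return (f"Classify this chest X-ray report as NORMAL or ABNORMAL.\n"
            f"Report: {report}\n"
            f"Think step by step about each finding, then give the final answer.")

examples = [("Lungs clear. No effusion.", "NORMAL"),
            ("Right lower lobe consolidation.", "ABNORMAL")]
prompt = few_shot("No acute cardiopulmonary process.", examples)
```

Tree of thought extends the chain-of-thought idea by branching into several candidate reasoning paths and selecting among them, which requires orchestration beyond a single prompt string.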
-
Of Prompt Engineers and Babel Fish
By now, you’ve almost certainly heard of Chat Generative Pre-Trained Transformer (ChatGPT) and, more recently, DeepSeek, artificial intelligence (AI) chatbots based on GPT large language models (LLMs) [1,2]. One way a foundational GPT LLM can be adapted into a task-specific or domain-specific system is through prompt engineering. Prompt engineering involves crafting instructions for the model as natural language questions or commands that may include relevant context…
-
Using a large language model for post-deployment monitoring of FDA approved AI: pulmonary embolism detection use case
Artificial intelligence (AI) is increasingly integrated into clinical workflows, but the performance of AI in production can diverge from initial evaluations. Post-deployment monitoring (PDM) remains a challenging component of ongoing quality assurance once AI is deployed in clinical production.
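One common PDM signal is agreement between the deployed AI's outputs and a reference derived from the final radiology reports. A minimal sketch, assuming an LLM has already extracted a PE-positive/negative label from each report (all names and the 90% threshold are illustrative assumptions, not the paper's method):

```python
# Hypothetical PDM sketch: compare the deployed PE-detection AI's flags
# against labels an LLM extracted from the signed radiology reports, and
# alert when agreement drifts below a monitoring threshold.

def agreement_rate(ai_flags: list[bool], report_labels: list[bool]) -> float:
    """Fraction of cases where the AI flag matches the report-derived label."""
    matches = sum(a == r for a, r in zip(ai_flags, report_labels))
    return matches / len(ai_flags)

def pdm_alert(ai_flags: list[bool], report_labels: list[bool],
              threshold: float = 0.90) -> bool:
    """True when case-level agreement falls below the threshold."""
    return agreement_rate(ai_flags, report_labels) < threshold
```

In practice such a check would run over a rolling window of recent cases, so that gradual performance drift surfaces before it affects care.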
-
State of the Field: Radiology Residency Clinician Educator Tracks
Radiology resident Clinician Educator Tracks (CETs) are designed to prepare residents for a career in academia. The number and structure of United States radiology CETs are unknown. This study sought to describe the current state of the field of U.S. diagnostic radiology CETs.