Document Type
Article
Publication Date
2026
Abstract
This exploratory study evaluates evidence-based AI information literacy strategies for health sciences professionals by comparing two prompt engineering frameworks: the CLEAR (Concise, Logical, Explicit, Adaptive, and Reflective) framework and the PICO (Population/Patient/Problem, Intervention, Comparison/Control, and Outcome) model. Nineteen fourth-year medical students participated in a crossover-design study involving three instructional sessions on using an AI-assisted interdisciplinary database to respond to patient-care scenarios. Participants' prompts were analyzed for word count, format, and keyword usage. Results showed that prompt instructions influenced construction patterns: participants using PICO created more question-based prompts and showed lower variability in prompt length. Both frameworks increased word count and age-related keyword inclusion over time. While both frameworks led to more detailed prompts, all seven participants in the post-search interviews said they preferred PICO for its clarity and direct clinical applicability. Participants also highly valued the health literature-based AI instruction, appreciating the credibility and transparency of the peer-reviewed sources the AI provided. These findings underscore the importance of context-specific, domain-relevant frameworks in AI literacy training for health sciences professionals, suggesting that effective health sciences-focused AI tools must provide reliable sources and features tailored to clinical decision-making needs. The results also suggest that health sciences librarians can contribute substantially by integrating information literacy and AI prompt engineering into evidence-based professional education.
Publication Title
Journal of Academic Librarianship
Recommended Citation
Chen, Hsin-liang (Oliver) and Langenau, Erik E., "AI prompts for clinical questions: Evaluating information literacy frameworks in health sciences education" (2026). PCOM Scholarly Works. 2358.
https://digitalcommons.pcom.edu/scholarly_papers/2358
DOI: https://doi.org/10.1016/j.acalib.2026.103217
Comments
This article was published in Journal of Academic Librarianship.
The published version is available at https://doi.org/10.1016/j.acalib.2026.103217.
Copyright © 2026 Elsevier Inc. This postprint is available under a CC BY-NC-ND 4.0 license.