Large Language Models (LLMs) can be powerful tools for data science, research, and healthcare applications, enabling tasks like automated summarization, predictive modeling, and more efficient knowledge extraction. However, the sensitive nature of clinical and biomedical research data requires careful risk management when training, tuning, and deploying these models.

This document provides introductory guidance — not an exhaustive review or a policy — to help you navigate data security, regulatory compliance, and ethical risks when working with LLMs. As AI tools and LLMs evolve, we will periodically update this guidance to reflect new policies, available tools and models, and implementation details as they become available.

Important

At Fred Hutch, you can contact dataprotection@fredhutch.org for support with current policies, best practices, and resources related to this content.

Data Privacy and Security

Guidance

What NOT to Do

Performance, Ethics and Bias Mitigation

Guidance

What NOT to Do