Regulatory and Ethical Considerations for Using De-Identified Patient Data for AI
The context and location in which patient data is collected often govern how that data may be used or disclosed. In the U.S., for example, HIPAA governs the use and disclosure of most patient data and delineates when data is “de-identified” and therefore no longer subject to its requirements. Yet it is often unclear whether HIPAA applies at all, whether data properly de-identified under HIPAA will remain de-identified once new data elements are added to the data set, or whether other regulatory or contractual requirements will restrict how the data is used. Even when de-identified patient data is used to create an algorithmic or artificial intelligence (AI) product, HIPAA’s concepts of data privacy and security can still help clarify the purchaser’s or user’s expectations and frame the contractual relationship. AI product developers and users must also consider ethical issues in using and relying on de-identified patient data, including whether the data is or may become biased over time, leading to inaccurate or unfair decision making, and whether there was transparency at the time of collection about possible subsequent uses of the data. Join us as we discuss these and other regulatory and ethical considerations for using de-identified patient data for AI.