#3 Exempt? Expedited? Convened Review? Exploring the Levels of IRB Review
This month’s blog discusses the criteria that determine how elaborate an IRB’s review of an AI submission might be, i.e., exempt, expedited, or full board review. Before proceeding, though, I should say that if a scientific project even hints at “research” involving human participants, it would be very foolish to withhold it from IRB submission. Although this blog does not represent Emory University’s or any other institution’s IRB rules and policies, I feel certain that any IRB would agree that investigators should always err on the side of caution and submit their applications when a project involves research on, or data derived from, human beings, no matter how strongly they feel otherwise. This blog explores how AI research applications in particular can occasionally challenge an IRB’s decision about the level of review it will require. What follows are some things I’ve gleaned from the literature and from conversations with colleagues.
Exempt or not?
The most obvious cases requiring non-exempt IRB review are AI research applications that involve data collected from randomized clinical trials with living humans and whose purpose is to influence medical decisions. The data the model examines might come from a tissue sample or from testing a drug, device, biologic, assay, or medical software.
So, suppose an AI scientist wants to develop a model that improves breast cancer detection. The investigator’s team collects data from their university’s breast imaging center and uses only those data to create a dataset for training and then testing their model. Further assume that all the data have been collected from (previously) consenting patients and have been properly scrubbed of identifiers, so thoroughly that the research team would not be able to identify the original patient-participants. While the project would need to be submitted to the IRB, I’d think it would likely be exempt (or expedited at most) if it doesn’t contemplate collecting and analyzing any new data for which new consents would be required. If, on the other hand, the project included an additional validation study requiring new data and additional participant consents to test the model, or if the team wanted to test the model at another site to assess its generalizability, a full review might be required.
What if the data are collected only from decedents? Or what if the data come only from an open-access source to which the subjects consented to have their data made available and which has been acceptably de-identified? In either case, I suspect the IRB would exempt or expedite the project: in the first case, because the research participants are deceased and hence are not understood as human subjects (although HIPAA rules might still apply to data use), and in the second, because the model is analyzing publicly available, de-identified data that were not collected by institutional personnel who enrolled and consented the institution’s own patients for that purpose.
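To give a concrete sense of what being “scrubbed of identifiers” can involve in practice, here is a minimal, purely illustrative Python sketch of a naive identifier screen. The patterns below are my own assumptions for illustration; a real de-identification effort (HIPAA Safe Harbor or expert determination) is far more extensive and should be validated by privacy and compliance staff, not by a short script.

```python
import re

# Purely illustrative, assumed patterns -- a real de-identification pipeline
# is far more involved and must be reviewed by privacy/compliance officers.
NAIVE_IDENTIFIER_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn_like": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def flag_possible_identifiers(text: str) -> dict:
    """Return any naive pattern matches found in a free-text field."""
    hits = {}
    for label, pattern in NAIVE_IDENTIFIER_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

if __name__ == "__main__":
    note = "Pt seen 03/14/2022, MRN: 448812, call 404-555-0199 with results."
    print(flag_possible_identifiers(note))
    # Flags the date, the MRN-like string, and the phone number for human review.
```

A screen like this only flags candidates for human review; it neither proves nor guarantees that a dataset has been acceptably de-identified.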
A perplexing situation, however, can arise in distinguishing quality improvement “research” from human subjects research. Typically, a straightforward quality improvement project not requiring IRB oversight is one that doesn’t involve treatment interventions or treatment decision making; doesn’t involve data collected from living patients; only intends to use findings to improve the institution’s service delivery; doesn’t plan to generalize those findings beyond the institution; and hopes to implement those findings rapidly into institutional programs and practices rather than require additional studies to confirm or advance the findings of the original one. (Conducting an Effective IRB Review of Artificial Intelligence Human Subjects Research) But things can get sticky when the presumptive quality improvement project analyzes treatment protocols that are already accepted as standard of care and attempts to identify the best one; collects those data from its patients, who consent to participate; and plans to disseminate its findings to all of the 100 hospitals in its network. This latter investigation no longer seems all that “local,” and its goal is clearly to affect treatment decisions by determining the most effective intervention among multiple standard-of-care interventions. I would think such a project would clearly be non-exempt, even if it is touted as “quality improvement.”
More perplexing cases
Microvascular anastomosis requires enormous surgical skill. Nicolas Gonzalez-Romo and his research team used a convolutional neural network to study hand motions during such surgeries. The model used video data of the hand positions of experts, intermediates, and novices. The analysis produced a template of expert surgical technique, one that eliminated excessive hand motions, so that less experienced surgeons could study and improve their technique by mimicking the experts.
As a training project, such an AI product would typically be IRB exempt. But Gonzalez-Romo’s study obviously needed to secure informed consent from the participants, and the possibility existed, however remote, that the participants could be re-identified. Yet the study assumed rather than questioned the quality of the surgical techniques the experts used, so the acquisition of generalizable knowledge does not appear to have been at issue. Furthermore, the anastomosis procedures were simulated on synthetic vessels, not performed during actual surgeries on humans. So, is this human subjects research or not? I’d be inclined to say it isn’t because, in the end, the project doesn’t appear to be attempting to confirm a hypothesis or perform interventions on living human beings.
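For readers curious about what quantifying hand motion can look like computationally, here is a toy Python sketch that is entirely my own and not the Gonzalez-Romo team’s pipeline. It assumes hand-position coordinates have already been extracted from video (for example, by a pose-estimation model) and simply measures how far a tracked hand travels, the kind of signal a template of “economical” expert motion might build on.

```python
import numpy as np

# Toy illustration only: not the published method. Assumes per-frame (x, y)
# hand positions are already available as an array of shape (n_frames, 2).
def path_length(points: np.ndarray) -> float:
    """Total distance traveled by a tracked hand across frames."""
    steps = np.diff(points, axis=0)               # frame-to-frame displacement
    return float(np.linalg.norm(steps, axis=1).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)[:, None]
    expert = np.hstack([t, t])                    # smooth, economical path
    novice = expert + rng.normal(0, 0.02, expert.shape)  # extra wasted motion
    print(f"expert path length: {path_length(expert):.2f}")
    print(f"novice path length: {path_length(novice):.2f}")  # notably larger
```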
Cases like this one point to an interesting aspect of deciding the level of IRB review: it might not be any one factor in an AI application that decides an IRB’s ruling, but rather how a host of variables in the application weights the decision in one direction or another. Because of that, I’ll conclude with this table identifying common properties of non-exempt and exempt projects presented to an IRB; a small, purely illustrative sketch of how such factors might be tallied follows the table.
| Non-exempt projects | Exempt projects |
| --- | --- |
| The model is intended to assist, inform, or drive medical decisions for human beings | The model only performs a literature review or produces a feasibility/proof-of-concept study with no human participant involvement |
| The model is planned to evaluate a drug, device, intervention, biologic, assay, or medical software | The model only uses synthetic or retrospective data; it stops short of evolving analytic or decisional functions |
| The research findings are intended to be generalizable | The model is only to be used within an institution, and its outputs are not intended to be generalizable |
| The data are collected from or about living persons | The data are collected from deceased persons or from surveys or published interviews |
| Research participants are identifiable or re-identifiable | The data are anonymized or the research participants are not identifiable |
| Research participants expect their data to be protected | Informed consent is not required |
| The model is being used in a randomized clinical trial with a fixed protocol and a specific number of patients to be enrolled | The model is intended to be used for improving institutional practices and protocols involving non-health-related activities, e.g., marketing, scheduling, security, billing, program evaluation, etc. |
| The model’s developers intend to submit it for FDA approval and a marketing permit | FDA approval is not planned |
| The project’s timeline is long and intends to confirm a hypothesis at the project’s end | The project’s timeline is short, with no hypothesis confirmation planned |
| The model seeks to establish a new standard of care for patients | The model is using evidence-based guidelines to bring current practices up to the standard of care |
| The model is planned to be used at multiple sites beyond the geographical location of the original research institution | The model will only be used at the originating institution |
| Participation involves more than minimal risk | There is no risk in participating |
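To make the “host of variables” point concrete, here is a minimal, purely illustrative Python sketch of how the factors in the table might be tallied. The field names and thresholds are my own assumptions paraphrased from the table; no IRB decides this way, and the output is only a prompt to consult your IRB, never a determination.

```python
from dataclasses import dataclass, fields

# Purely illustrative checklist -- real determinations are made by the IRB
# itself, never by a script. Field names and thresholds are invented here.
@dataclass
class AIStudyProfile:
    informs_medical_decisions: bool = False
    evaluates_regulated_product: bool = False   # drug, device, biologic, assay, software
    findings_intended_to_generalize: bool = False
    data_from_living_persons: bool = False
    participants_identifiable: bool = False
    multi_site_deployment: bool = False
    fda_submission_planned: bool = False
    more_than_minimal_risk: bool = False

def suggest_review_level(p: AIStudyProfile) -> str:
    """Tally non-exempt indicators and return a rough, non-binding suggestion."""
    score = sum(getattr(p, f.name) for f in fields(p))
    if p.more_than_minimal_risk or score >= 5:
        return "leans toward convened (full board) review -- ask your IRB"
    if score >= 2:
        return "leans toward expedited review -- ask your IRB"
    if score >= 1:
        return "possibly exempt -- submit for an IRB determination"
    return "may not be human subjects research -- still check with your IRB"

if __name__ == "__main__":
    # The multi-hospital treatment-comparison project described above:
    project = AIStudyProfile(
        informs_medical_decisions=True,
        findings_intended_to_generalize=True,
        data_from_living_persons=True,
        participants_identifiable=True,
        multi_site_deployment=True,
    )
    print(suggest_review_level(project))  # leans toward convened (full board) review
```

The point of the sketch is simply that no single field settles the question; it is the accumulation of factors that pushes a project toward one level of review or another.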
Once again, the above does not represent the views or practices of Emory University’s IRB or of any other research institution. These observations are rather gleaned from the literature, which continues to evolve with newer AI applications. But because IRBs developed many of their practices prior to the age of big data and machine learning, it’s easy to imagine certain AI research investigations submitted to an IRB causing perplexity and disagreement. Consequently, scholars who follow and contribute to these conversations should find much to study and discuss at the national level in the coming years.
Note: Many thanks to my colleagues at Emory, especially Professor Aryeh Stein, who made very helpful comments on an earlier version of this blog entry. Any errors or misrepresentations in this blog, however, are mine and mine alone.
Author
— by John Banja, PhD, professor at the Center for Ethics at Emory University and a member of the Regulatory Knowledge and Support Program of the Georgia CTSA, 8/2024
Continue the conversation! Please email us your comments to post on this blog. Enter the blog post # in your email Subject.