Prioritizing initiatives for institutional review board (IRB) quality improvement

Name / volume / issue

60856

Page number

265-274

Author(s)

Daniel E. Hall, Ulrike Feske, Barbara H. Hanusa, Bruce S. Ling, Roslyn A. Stone, Shasha Gao, Galen E. Switzer, Aram Dobalian, Michael J. Fine & Robert M. Arnold

Tag(s): Journal article

Abstract

Background: Institutional review boards (IRBs) have been criticized for inconsistency, delay, and bias, suggesting an opportunity for quality improvement. To support such quality improvement, this study aimed to determine IRB members' and investigators' priorities regarding IRB review at 10 Veterans Affairs (VA) IRBs.

Methods: Six hundred and eighty anonymous Internet surveys were sent to 252 IRB members and staff and to 428 principal investigators and project coordinators at 9 VA Medical Centers and the VA Central IRB. Surveys included 27 statements adapted from Koocher and Keith-Spiegel's IRB-RAT describing IRB activities or functions (e.g., "An IRB that is open to reversing its earlier decisions"). Respondents indicated how well each statement described both their "ideal" and "actual" IRBs. The difference between the ratings of the actual and ideal IRBs was calculated for each item, along with estimated 95% confidence intervals. Ratings outside those intervals indicated activities or functions with relatively good or poor performance compared to the ideal IRB.

Results: Three hundred and ninety (57.4%) responses from 165 IRB members and staff (65.5%) and 225 investigators and project coordinators (52.6%) demonstrated that these IRBs were closest to the ideal when protecting human subjects, treating investigators with respect, and taking appropriate action for alleged scientific misconduct. The IRBs were furthest from the ideal regarding duplicative forms, timeliness of review, and provision of complete rationales for decisions. Although IRB members reported near-ideal willingness to reverse earlier decisions, investigators rated this capacity far from ideal. Conversely, investigators rated IRB members' knowledge about procedures and policy as nearly ideal, but IRB members themselves rated this aspect far from ideal. Noteworthy site-level differences in the ratings of IRB functions and activities were also identified.

Conclusions: Although these VA IRBs perform well in some areas, these data support the need for ongoing quality improvement. The described method of administering and analyzing the IRB-RAT may help identify and monitor site- and activity-specific initiatives for quality improvement.
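The per-item actual-versus-ideal comparison described in the Methods can be sketched in code. The snippet below is a minimal illustration only, not the authors' analysis: the column names (item, actual_rating, ideal_rating), the long-format data layout, and the normal-approximation confidence interval are all assumptions made for demonstration.

```python
# Illustrative sketch of an actual-minus-ideal gap analysis with 95% CIs.
# Column names and CI construction are assumptions, not the paper's code.
import pandas as pd
from scipy import stats


def gap_scores_with_ci(df: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    """For each IRB-RAT item, compute the mean difference between 'actual'
    and 'ideal' ratings plus a normal-approximation confidence interval.

    Expects one row per respondent-item with columns
    'item', 'actual_rating', 'ideal_rating' (hypothetical names).
    """
    df = df.assign(gap=df["actual_rating"] - df["ideal_rating"])
    summary = df.groupby("item")["gap"].agg(["mean", "sem", "count"])
    z = stats.norm.ppf(1 - alpha / 2)
    summary["ci_lower"] = summary["mean"] - z * summary["sem"]
    summary["ci_upper"] = summary["mean"] + z * summary["sem"]
    # Items with the most negative means fall furthest short of the ideal.
    return summary.sort_values("mean")


# Toy usage example with two items and three respondents each:
ratings = pd.DataFrame({
    "item": ["timeliness", "timeliness", "timeliness",
             "respect", "respect", "respect"],
    "actual_rating": [3, 4, 2, 6, 7, 6],
    "ideal_rating": [7, 7, 6, 7, 7, 7],
})
print(gap_scores_with_ci(ratings))
```

Under this sketch, items whose confidence intervals lie entirely below (or above) a chosen reference band would flag relatively poor (or good) performance, mirroring the interpretation described in the abstract.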
