
Newsroom

New study provides insights into ethical implementation of AI

Report evaluates effectiveness of algorithm review boards for responsible AI governance


RESEARCH TRIANGLE PARK, N.C. — Experts from RTI International, a nonprofit research institute, led a newly published study on algorithm review boards (ARBs) to understand their effectiveness in providing responsible artificial intelligence (AI) governance. The work is one of the first detailed explorations of the use of ARBs across sectors as a governance approach to manage the risks associated with AI.

ARBs, committees of individuals with expertise in data science and AI, cybersecurity, law, and ethics, have been proposed as a method of providing oversight and approval for the use of AI within or by an organization.

“In the U.S., there is little to no regulation or guidance on how to manage AI risks, even as AI technologies are being used in practical settings,” said Emily Hadley, a research data scientist at RTI. “ARBs offer an exciting solution to support responsible AI use by organizations, particularly in regulated industries such as health and finance where they are already gaining traction.”

Hadley and her colleagues interviewed 17 practitioners in the finance, government, health, and tech sectors to understand their experiences with responsible AI governance. Participants were asked about their experiences, if any, with ARBs, along with their thoughts on other responsible AI approaches and on institutional review boards (IRBs) as a means of managing AI risks. Participants also commented on the attributes responsible AI governance needs to mitigate potential challenges and succeed at an organization.

Their results provide the first detailed findings on ARBs in practice, including their membership, scope, and measures of success, and confirm the existence of ARBs in the finance and health sectors.

The study’s findings suggest that IRBs alone are insufficient for algorithm governance, and ARBs are often used in tandem with other responsible AI approaches. Integration with existing processes and leadership buy-in were also considered critical to the success of internal responsible AI governance.

“We hope that technical practitioners and organizational leaders can apply these insights to enhance their own internal governance and ensure the ethical deployment of AI,” Hadley added.

Read how RTI is contributing to a national consortium about responsible AI practices in research

Learn more about RTI’s commitment to ensure the ethical and responsible use of AI