
In-Depth With Our Experts

Using Data to Advance Responsible Artificial Intelligence

Emily Hadley is a research data scientist who uses data for the public good.

Emily Hadley

Throughout her career, Emily Hadley, a statistician and research data scientist, has stuck to a guiding question: "What can I do in the space of public good?" She has been gathering data all along – not just in the research sense, but in a way that has helped determine her own path. Now an expert in responsible artificial intelligence (AI) at RTI, Hadley reflects on how data has shaped her character and career.

Serving a Rural Community with Data Science

Hadley was first exposed to research while pursuing her bachelor's degree in statistics and public policy at Duke University, which sparked her interest in how data can be used to shape policymaking. Despite that interest, Hadley knew immediately following graduation that she wanted to give back to others who shared her rural upbringing. She packed her bags, joined AmeriCorps, and moved to Sampson County, North Carolina.

Hadley worked as a college advisor with AmeriCorps for two years through College Advising Corps, which prides itself on a near-peer model that places advisors with backgrounds similar to their students'. Her experience as an advisor gave her the opportunity to work with students who had overcome tremendous adversity and were often on the path to be the first in their family to attend college – and, for some, the first to finish high school. She used data to show areas of success and areas for improvement for her students.

“I led the ACT and SAT prep for my students, and we used clustering, an analysis I learned about in undergrad that groups students together by their ACT scores to identify who needs to work on math or English, etc.,” Hadley said.
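The idea Hadley describes can be sketched in a few lines of code. The example below is purely illustrative – the scores are invented and the tiny one-dimensional k-means routine stands in for whatever tooling an advisor might actually use – but it shows how clustering can separate students into groups that need different kinds of test-prep support.

```python
# Illustrative sketch of clustering students by a test-section score so
# tutoring can be targeted. All scores below are made up; a real analysis
# would use actual data and a library such as scikit-learn.

def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means (k >= 2): returns (centers, cluster assignments)."""
    s = sorted(values)
    # Deterministic init: spread the initial centers across the sorted range.
    centers = [s[(len(s) - 1) * i // (k - 1)] for i in range(k)]
    assignments = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest center.
        assignments = [min(range(k), key=lambda c: abs(v - centers[c]))
                       for v in values]
        # Move each center to the mean of its assigned values.
        for c in range(k):
            members = [v for v, a in zip(values, assignments) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, assignments

# Hypothetical ACT math section scores (1-36 scale).
math_scores = [14, 15, 16, 17, 25, 26, 27, 28, 33, 34]
centers, labels = kmeans_1d(math_scores, k=2)

# Students in the lower-scoring cluster are flagged for extra math prep.
low_center = min(centers)
needs_math_help = [s for s, a in zip(math_scores, labels)
                   if centers[a] == low_center]
print(sorted(needs_math_help))  # [14, 15, 16, 17]
```

With two clusters, the low-scoring group falls out cleanly; in practice an advisor might cluster on several section scores at once to decide whether a student's time is better spent on math or English.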

She realized that many people are unfamiliar with data and sometimes feel uncomfortable with it. Using data for good prompted Hadley to further her knowledge through NC State's Master of Science in Analytics program, where she was eventually recruited to join the team at RTI.


Responsible Artificial Intelligence in Research

In recent years, Hadley has turned to the pursuit of responsible research practices in the understanding and use of AI. She leads RTI's contribution to the National Institute of Standards and Technology (NIST) U.S. Artificial Intelligence Safety Institute Consortium. Her role includes creating documentation for a more systematic way of reviewing AI, particularly as generative AI grows and introduces a new set of risks.

“The heart of what you’re getting at is what are the ethics for an AI system and how can we use it in ways that help,” Hadley said.

Hadley offered an example of a risk commonly associated with AI technology: self-driving cars. Because the technology is relatively new, it has very little regulation or governance, which is why drivers are required to keep their hands on the wheel and their eyes on the road. If the vehicle hits someone or something, is the person sitting in the car responsible for the accident, or the company that designed the AI in the car? Questions like this are among many that address the ethics of AI and the real-world consequences of AI systems.

Using a Machine Learning Model to Better Understand Long COVID

As Hadley pursues standards and best practices for the ethical use of emerging AI, she also uses it to expand the capabilities of her research. Recently, Hadley worked with three other RTI colleagues to address missing Long COVID information with the help of AI.

Most hospitals use an electronic health record system with an associated code for each diagnosis. Long COVID was not given a unique code until October 2021, despite many people having Long COVID symptoms starting in early 2020. The delay created a barrier to studying Long COVID accurately.

Hadley’s team relied on a machine learning model using a computable phenotype developed collaboratively with other contributors, including the National COVID Cohort Collaborative (N3C) and RECOVER. Based on a description of Long COVID, the model interpreted patient data and predicted which patients might have Long COVID. The N3C database currently holds 22 million patient records. For this study, Hadley’s team used a RECOVER-specific cohort of 5.9 million patients, and the machine learning model suggested that 1.1 million of them had most likely had Long COVID.
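To make the idea of a computable phenotype concrete, here is a heavily simplified sketch, not the team's actual model: hypothetical per-patient features extracted from health records are combined by invented weights into a logistic score, and patients above a threshold are flagged as probable Long COVID cases. Every feature name, weight, and patient below is made up for illustration.

```python
# Purely illustrative sketch of flagging probable Long COVID patients from
# EHR-derived features. Feature names, weights, threshold, and patients are
# invented; the real model was trained on actual electronic health records.
import math

# Hypothetical per-patient features extracted from the health record.
patients = {
    "patient_a": {"post_acute_visits": 9, "fatigue_dx": 1, "dyspnea_dx": 1},
    "patient_b": {"post_acute_visits": 1, "fatigue_dx": 0, "dyspnea_dx": 0},
    "patient_c": {"post_acute_visits": 5, "fatigue_dx": 1, "dyspnea_dx": 0},
}

# Invented weights standing in for learned model coefficients.
WEIGHTS = {"post_acute_visits": 0.4, "fatigue_dx": 1.2, "dyspnea_dx": 0.9}
BIAS = -3.0
THRESHOLD = 0.5

def long_covid_probability(features):
    """Logistic score: sigmoid of a weighted sum of EHR features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# Flag every patient whose score clears the threshold.
likely = [pid for pid, feats in patients.items()
          if long_covid_probability(feats) >= THRESHOLD]
print(likely)  # ['patient_a', 'patient_c']
```

Applied to millions of records at once, a trained version of this kind of scoring is how a cohort-scale estimate – such as the 1.1 million probable cases in the 5.9 million-patient cohort – can be produced without a dedicated diagnosis code.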

Challenges Working with AI and Hope for Its Future

Hadley knows that AI brings on a new level of skepticism and problems. She described the field of data and its challenges, particularly the rapid transformation of data capturing technologies over the last decade. 

“It can be pretty challenging to keep up with the latest trends to make sure we’re staying on the forefront of the responsible AI piece,” she stated. “There’s so much we still don’t know about how these tools work, and there are real concerns that what we’re doing might no longer be considered good practice in a couple years when we’re able to finally learn more about some of the systems that we’re trying to use right away.” 

Despite the unknowns, Hadley is optimistic about the future of AI and its potential to help others. She believes AI offers tremendous opportunities to make sure people receive the benefits they qualify for.

“RTI is extremely well positioned for our work with government agencies to help them implement AI in ways that benefit taxpayers and citizens, but also in a way that’s really attentive to security and privacy concerns,” Hadley said. “I’m less interested in robots taking over the world and more interested in how we can use this for good in ways that benefit people.”