By Elizabeth Green, The Chronicle of Social Change
As big data tools like predictive analytics become more prevalent, child-welfare agencies must grapple with implicit racial bias to ensure that these tools do not cause harm, according to a new white paper published last month by the Kirwan Institute at Ohio State University.
This week, the paper’s author, Kirwan Research Associate Kelly Capatosto, joined Daniel Heimpel, publisher of The Chronicle of Social Change, for a webinar about her research, which examines how individual cognitive barriers and broader historical and societal barriers can stack the deck of predictive analytics against families and communities of color.
The Kirwan Institute at OSU focuses on education, equity and sustainable communities; public and community health; criminal justice; and how structural racialization and race in cognition create barriers to opportunities in each of these areas.
“Predictive analytics has often been described as a way to predict the future using data from the past,” Capatosto said in the webinar, highlighting its key characteristics: the use of large data sets and the assignment of risk levels to various outcomes in certain situations. She explained that predictive analytics is “increasingly sought after to guide decision making in child welfare fields.”
The white paper provides insight into how implicit racial bias must be accounted for to protect communities of color, who may be disproportionately represented in the data inputs fed into predictive analytics models. Those models are employed to help find hidden patterns, streamline service delivery, and decrease budgets, Capatosto told webinar attendees.