
Oregon to Stop Using AI Tool to Decide Which Families Are Investigated by Social Services After Racial Bias Concerns

The program uses publicly available data to calculate the probability of child abuse

Oregon will no longer use an algorithm to flag potential cases of child abuse after a report revealed the software may have a racial bias.

Comparable AI programs are in use in 11 states with the goal of preventing neglect and abuse.

The Department of Human Services in Oregon told its hotline workers in a May email that it would stop using the Safety at Screening Tool, following concerns about racial disparities in which families the algorithm flagged.

The program will be completely out of use by the end of June, and the department will shift to a less automated review system.

“Making decisions about what should happen to children and families is far too important a task to give untested algorithms,” Democratic Senator Ron Wyden of Oregon said in a statement per Engadget. “I’m glad the Oregon Department of Human Services is taking the concerns I raised about racial bias seriously and is pausing the use of its screening tool.”

On April 29, the Associated Press published an extensive report about similar software used in Pennsylvania.

The outlet “identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system.”

During the first year that the program was used by Allegheny County, it “showed a pattern of flagging a disproportionate number of Black children for a ‘mandatory’ neglect investigation, when compared with white children.”

The algorithm used data from public records to calculate risk scores that would indicate the likelihood of child abuse occurring in the home.

The county “initially considered including race as a variable in its predictions about a family’s relative risk but ultimately decided” against including the information in 2017.

An independent review of the data found that “if the tool had acted on its own to screen in a comparable rate of calls, it would have recommended that two-thirds of Black children be investigated, compared with about half of all other children reported.”
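A check of that kind amounts to holding the tool's overall screen-in rate fixed at the rate human screeners produced, then comparing flag rates across groups. The sketch below shows the calculation on synthetic data; the scores, group labels, and 55 percent overall rate are stand-ins, not the review's actual inputs.

```python
# Sketch of the disparity check the review describes: fix the tool's
# overall screen-in rate at the human screeners' rate, then compare
# flag rates by group. Scores, groups, and rates are synthetic.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random(10_000)                  # model risk score per call
group = rng.choice(["a", "b"], size=10_000)  # anonymized group label per call

overall_rate = 0.55                          # share of calls humans screened in
threshold = np.quantile(scores, 1 - overall_rate)  # flag the same share of calls
flagged = scores >= threshold

for g in ("a", "b"):
    print(f"group {g}: {flagged[group == g].mean():.0%} of calls screened in")
```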

Oregon’s Safety at Screening Tool, launched in 2018, was developed from the system created by the Allegheny County Department of Human Services.

In a November 2019 report regarding the algorithm, the government said that “the Safety at Screening Tool utilizes techniques from a field of computer science called machine learning.” 

“The procedure involves using a computerized technique to discover how to associate Child Welfare administrative data elements with future outcomes of interest,” noted the DHS. “By linking data elements… regarding historical information to live information about an incoming report of abuse/neglect, it is possible to generate a prediction about whether the report will lead to a removal if the report is assigned to investigation, and/or whether screening out the report will lead to another future investigation.”
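The report does not publish the model itself, but the procedure it describes, pairing historical administrative data elements with later outcomes of interest, is a standard supervised-learning setup. The sketch below shows that setup; the feature names, the synthetic data, and the choice of logistic regression are assumptions for illustration, not details of the actual tool.

```python
# Minimal sketch of the supervised-learning setup DHS describes:
# historical reports paired with later outcomes. Feature names, the
# synthetic data, and the model choice are hypothetical; the actual
# Safety at Screening Tool is not public.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000  # synthetic stand-ins for historical reports

# Hypothetical administrative data elements for each past report.
X = np.column_stack([
    rng.integers(1, 6, n),   # children named in the report
    rng.integers(0, 10, n),  # total past reports on the family
    rng.integers(0, 4, n),   # past reports assigned to investigation
])

# One of the two outcomes DHS names: whether an assigned investigation
# led to a removal. (Random labels here, purely for illustration.)
y_removal = rng.integers(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y_removal, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# For an incoming report, the model yields a probability, not a decision.
incoming = np.array([[2, 4, 1]])
print(f"P(removal if investigated): {model.predict_proba(incoming)[0, 1]:.2f}")
```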

Data elements included the number of children in the report and the total number of past reports. Information about the family’s welfare status was also included in Pennsylvania’s algorithm.

“The tool comes to learn how to use administrative data elements to calculate the probability that a child will be removed from home and/or involved in a future investigation,” the department noted.
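A probability on its own is not a screening decision, so a tool like this typically maps its predictions onto a small set of tiers a screener can act on. The cut-points and tier names below are invented for illustration; Oregon's actual scoring rules are not public.

```python
# Hypothetical conversion of predicted probabilities into a discrete
# tier a hotline screener might see. Tier names and cut-points are
# invented for illustration; Oregon's actual scoring is not public.
def risk_tier(p_removal: float, p_future_investigation: float) -> str:
    score = max(p_removal, p_future_investigation)  # worse of the two outcomes
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "moderate"
    return "low"

print(risk_tier(0.15, 0.55))  # -> moderate
```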

The report warned that using the algorithm could result in “automation bias,” which leaves child welfare officials feeling pressured to rely on the “predictive risk scores despite clear contradictory evidence.”

An Oregon DHS spokesman, Jake Sunderland, said the algorithm could not be used in the state’s new screening process and was therefore “no longer necessary.”

While he did not provide a specific explanation of why state officials decided to suspend the program, he said there was “no expectation that it will be unpaused soon.”

Oregon intends to switch to a review system known as the Structured Decision Making model, which is currently in use in New Jersey, California, and Texas.

