An investigation revealed potential bias and transparency issues surrounding the increasing use of algorithms in the troubled US child welfare system. While some see these tools as a promising way to help overwhelmed social workers predict which children may face harm, others say their reliance on historical data risks automating past inequalities.

  • Gaywallet (they/it)
    1 year ago

    This is exactly the kind of AI application that is almost guaranteed to appear in financially strained systems, especially chronically underfunded government agencies, and it is the kind most at risk of causing serious harm, because nearly all of these algorithms are biased and, in particular, racist.

    This is the use of AI that scares me the most, and I’m glad it’s facing scrutiny. I just hope we put extremely strong protections in place ASAP. Sadly, most people in politics do not see how dangerous using AI for these applications can be, so we will most likely see a lot more of this before we see any regulation.

    If you’re curious why these kinds of applications are nearly always biased, the following quote from the article helps to explain:

    The Allegheny Family Screening Tool was specifically designed to predict the risk that a child will be placed in foster care in the two years after the family is investigated.

    They are comparing variables to an outcome, and the outcome is one that is shaped by existing social structures and biases. This is like predicting the risk of ending up in jail from factors that loosely correlate with race. What ends up happening is that the strongest indicators of race, particularly of being black, also become the strongest indicators of ending up in jail, because our system carries these biases and jails black individuals at a much higher rate than individuals of other races.

    The same thing is happening here. The chances of a child being placed in foster care depend heavily on the parents’ race. The tool is not assessing how well the child is being treated or whether the family needs support; it is assessing the risk that the child will be moved to foster care, which can alternatively be read as assessing the likelihood that the child is non-white. This distinction is critical to understanding how AI reinforces existing biases.
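
    To make that mechanism concrete, here is a minimal sketch in Python using entirely synthetic data and made-up feature names (this is not the Allegheny tool or its actual features). It trains a model to predict a historically biased placement outcome without ever being given race, and the model still scores the affected group as higher risk, because a correlated proxy feature carries the racial signal baked into the historical labels.

    ```python
    # Toy illustration (hypothetical, synthetic data): a model trained on a
    # historically biased outcome learns proxies for race even when race itself
    # is excluded from the features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    # Protected attribute (never shown to the model).
    race = rng.binomial(1, 0.3, n)

    # A "neutral-looking" proxy feature that correlates with race
    # (e.g. neighborhood of residence).
    neighborhood_flag = rng.binomial(1, 0.2 + 0.6 * race)

    # Actual need for support is independent of race in this toy setup.
    need = rng.binomial(1, 0.1, n)

    # Historical outcome (foster-care placement) reflects both need AND bias:
    # families in the correlated group were placed at a higher rate.
    placement_prob = np.clip(0.05 + 0.4 * need + 0.25 * race, 0, 1)
    placed = rng.binomial(1, placement_prob)

    # Train only on the proxy and need; race is deliberately withheld.
    X = np.column_stack([neighborhood_flag, need])
    model = LogisticRegression().fit(X, placed)

    # Average predicted "risk" by race: the disparity reappears anyway,
    # because the proxy reconstructs the racial signal in the labels.
    scores = model.predict_proba(X)[:, 1]
    print("mean risk score, group 0:", scores[race == 0].mean())
    print("mean risk score, group 1:", scores[race == 1].mean())
    ```

    Running this prints a noticeably higher average risk score for group 1, even though the model never saw race and actual need was identical across groups: the bias lives in the outcome label the model was asked to predict.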