Gov't aid algorithm
Many government agencies in the US use a software system to evaluate
whether individuals are entitled to certain government aid. Many of
these systems make horrible decisions.
In the case described at the start of the article, the program was
developed by a contractor that is not known for developing free
software. We must suspect that the program is nonfree and that the
staff of the agency have no way of understanding why people get
rejected. Indeed, the agency staff seem to have no way to respond to
horrible decisions except to say, "Fill in the form again."
Some of these programs may use machine learning. (I doubt many of them
make use of bullshit generators.) Machine learning can learn to
predict outcomes by recognizing patterns, but when there is no
objective information about what the right answer was in the training
cases, it can't learn to predict the right answer.
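A toy sketch may make the point concrete. The data and the "model" below are entirely hypothetical: the rule is a hand-written stand-in for what a learner would extract, since the only pattern in the training cases is the past clerks' behavior, right or wrong.

```python
# Hypothetical past cases: (income, household_size) -> past decision.
# Suppose every past clerk rejected any applicant with income over 1000,
# regardless of household size -- even when that decision was wrong.
training_cases = [
    ((900, 4), "approve"),
    ((1100, 4), "reject"),
    ((950, 1), "approve"),
    ((1200, 6), "reject"),   # arguably a wrong decision, but the data
                             # contains no information saying so
]

def learned_rule(income, household_size):
    """A stand-in for a learned model: it reproduces the pattern
    present in past decisions, because that pattern is all the
    training data contains."""
    return "reject" if income > 1000 else "approve"

# The rule faithfully reproduces the past decisions -- including the
# wrong ones, which it has no way to distinguish from the right ones.
for (income, size), past_decision in training_cases:
    assert learned_rule(income, size) == past_decision
```

With no objective record of which past decisions were correct, a system trained this way can only predict what clerks did, not what they should have done.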
When it comes to the question of whether a person deserves state aid,
this approach is simply wrong. It should be illegal to use a software
system to make such decisions about people unless the decision-making
criteria that the system implements are precisely documented, so that
each specific wrong decision can be traced to a specific cause.
For bureaucrats whose priority is reducing expenses, a mysterious
program that erroneously and inexplicably rejects 27 percent of
applicants is a welcome excuse to respond, "Computer says no." The
kind of Christian who practices systematic non-charity might call it a
"godsend".
Many years ago I posted about a nonfree program that courts used to
decide whether an accused person can be given bail.
There is a campaign to restrict the use of software for making
decisions about how to treat specific people.