This piece was originally published for the Reboot Democracy blog.
This January, the British broadcaster ITV is airing "Mr Bates vs the Post Office." The four-part miniseries has brought national attention to the decades-long legal battles of more than 900 sub-postmasters in the UK who were wrongly prosecuted for financial shortfalls that were in fact caused by Horizon, a flawed accounting system supplied by Fujitsu. Despite repeated reports of problems with the technology, Post Office leadership vigorously pursued fraud convictions through 2015. Beyond the hundreds of criminal convictions, at least four of the accused sub-postmasters took their own lives. The scandal is dominating front pages across the United Kingdom.
The easy argument here would be that tech itself was the villain in this story. Some observers cite the Horizon scandal as a cautionary tale against the unchecked use of opaque technologies in governmental processes today. It's the same argument that has been copy-pasted into the AI conversation: technological innovation is not worth the loss of the human touch.
However, in the Horizon case the technology's malfunction was compounded by corruption and malfeasance, undermining the argument that human oversight alone will solve the problem of overreliance on technology.
The scandal is chock full of human malintent. After all, Post Office bosses secretly decided in April 2014 to sack the forensic accountants who had found bugs in the IT system. Key witnesses' appearances at the government's inquiry were delayed by the Post Office's failure to disclose thousands of relevant documents. Fujitsu has continued to receive billions in government contracts since the alleged cover-up. Additionally, English law presumes that computer systems operate "reliably" unless proven otherwise, a presumption that cleared the way for hundreds of criminal prosecutions. The deliberate wrongdoing and unethical practices by Fujitsu and certain officials within the Post Office culminated in what is now referred to as "the most widespread miscarriage of justice" in British legal history.
We want technological systems to "align" with human values. But what happens when those values are out of whack? The Horizon scandal serves as a potent reminder of the critical need to scrutinize not only the technology we use but also the intentions, actions, and moneyed interests of those who wield it.
Because AI technologies learn and improve, it should be possible, unlike with older technologies, to align them with the goals we are trying to achieve. If the goal is to provide people with much-needed benefits and to optimize for helping those in need, we should be able to design systems that do just that. But if we build tools on the assumption that benefits applicants are trying to bilk the system, the technology will work very differently.
That is why it is so important to ensure transparency and accountability not only for the technology but also for the assumptions that go into designing it. We need to be clear about the values we are trying to align the technology with, and to set those values in a democratic and deliberative manner, scrutinizing the inputs and the outputs and creating institutional safeguards to protect against human and machine failings alike.
It is not enough for technology to be on tap – rather, justice, fairness, equity and democracy need, ultimately, to be on top.