By Lora Benson
Educational charity and professional body the Chartered Institute for Securities & Investment (CISI) is drawing key ethical lessons for global financial services professionals from one of the UK’s most significant technology failures: the Post Office Horizon scandal.
In an exclusive interview for the CISI member magazine The Review, Richard Moorhead, professor of law and professional ethics at the University of Exeter and a member of the Horizon Compensation Advisory Board, outlines how the scandal provides a powerful warning about poor transparency, weak accountability and an overreliance on technology.

Professor Moorhead’s analysis highlights the need for an ethical framework for financial services firms when integrating new technologies such as AI, big data and distributed ledgers into their operations. This includes taking a hard look at potential conflicts of interest and prioritising accountability and ethical oversight at every stage of tech adoption.
1. Incentives should reward ethics, not short-term gains
Professor Moorhead identifies distorted incentives as a key failing behind Horizon. Fujitsu, the system’s developer, was at risk of penalties for faults that it diagnosed, creating pressure to conceal errors rather than address them. “The evidence shows that even during pilots, Fujitsu could tell things were wrong,” he says, but the incentive structure meant that errors were ignored or minimised.
Financial services firms, he suggests, must ensure that rewards are tied to ethical outcomes rather than short-term profits. Independent audits, regular incentive reviews and protection for whistleblowers are essential safeguards to prevent similar ethical blind spots.
2. Build a culture of accountability and honesty
A defining feature of the Horizon case was a lack of accountability: “Nobody took ownership of the problem,” says Moorhead. “It was reputation management over substance. But the underlying horror is in the disregard for the impact on postmasters’ lives.”
He explains that this type of ‘moral disengagement’ occurs when individuals justify harmful actions by distancing themselves from their internal sense of right and wrong. Financial services firms can learn from this by encouraging honesty and humility, and by prioritising a culture of integrity when approaching technological uncertainty, rather than a “fake it until you make it” mentality.
He also suggests firms cultivate an environment where employees feel safe speaking up without fear of negative consequences. Those involved in Horizon, he says, emphasised good news, suppressed negative stories and ignored conflicts of interest. To avoid these scenarios, Moorhead urges leaders to take ownership of tech problems and to encourage a culture where failure is not only tolerated but seen as an opportunity to learn, and where concerns are addressed promptly and transparently.
3. Ethics should guide technology, not the other way around
Moorhead cautions that technological optimism, driven by the rapid pace of advancement, can cloud judgement. “Good teams want things to work,” he says, but that can lead to a reluctance to question whether systems are doing what they should.
In a fast-moving digital environment, the CISI urges firms to embed ethical oversight into every stage of technological adoption. Independent testing, scenario planning and worst-case analysis can help mitigate optimism bias and ensure new systems deliver fair outcomes.
“People must feel able to challenge some of the basics,” he says.
4. The case for a clear ethical framework
The UK’s Senior Managers and Certification Regime has helped to strengthen governance and accountability culture across many firms in our sector by clarifying responsibilities and making senior individuals personally accountable. As the CISI Review article notes: “If your firm is based in a country with a similar control regime, make sure you’re clear about how it relates to your use of technology. Or if you’re based in a country that does not have a regime like SMCR, consider how you can implement your own structures for control and accountability.”
For firms in countries without comparable regimes, Moorhead recommends a framework that any firm could follow. CISI member firms in the UK could also use this framework as a complement to the CISI Code of Conduct and Guidance on Digital Ethics.
Moorhead’s framework includes mapping stakeholders to assess who gains or loses from technological change, defining principles such as fairness and transparency, embedding accountability mechanisms and continually reviewing real-world impacts.
5. Lessons for AI and automation
As financial institutions increasingly rely on AI for decision-making, the CISI Review article highlights the risk of replicating the same ethical failures seen in Horizon. Poorly trained models can create bias, while profit-driven incentives can undermine integrity.
Automation does not eliminate human error: every system reflects the biases and pressures of those who create it. Firms should therefore scrutinise the incentives built into AI, ensure oversight of automated systems, and reinforce human judgement at critical decision points.

CISI CEO Tracy Vegro OBE said: “The Horizon case shows that technology without ethics can destroy trust. At the CISI, we believe financial services professionals must lead with integrity and challenge groupthink, especially as AI reshapes the sector.
“Ethical courage and independent judgement are the best safeguards against future scandals. With these technology issues in mind, and as a charitable and educational body working to enhance public trust and confidence in financial services, we launched our Certificate in Ethical AI in 2023. This award-winning short online course offers our sector professionals the opportunity to understand the fundamental ethical and management issues in the deployment of AI in finance.”
Five key takeaways for financial professionals:
- Overreliance on technology amplifies ethical risks: maintain human oversight.
- Incentive structures should link to ethical, not solely financial, outcomes.
- Accountability and open reporting must be embedded into culture.
- AI ethics demand fairness, transparency and bias mitigation.
- A structured ethical framework enables responsible technology governance.
Read the full CISI Review magazine interview with Professor Moorhead