
At The Old Bailey, London, in February 2026, the Justice For All series of events covered the topic of Justice for the Accused: an “in-depth examination of the rights of the accused, delays in the Courts and justice systems, and the critical importance of procedural fairness and access to legal representation”. Hosted by Alderman and Sheriff Robert Hughes-Penney, Sheriff and Deputy Keith Bottomley, and His Honour Judge Mark Lucraft KC, the event considered how technology and AI may improve the efficiency and fairness of the justice system.

The keynote speech was given by Master of the Rolls the Rt. Hon. Sir Geoffrey Vos, and the panellists were the Rt Hon David Gauke, Prof. Richard Susskind CBE KC (Hon), Charlie Taylor, Katie Wheatley, and Andrea Coomber KC (Hon).

The event covered many excellent points; as an attendee, my takeaways, viewed through the lens of a UK large-industry employee and IET Artificial Intelligence Technical Network member, are as follows.

  1. Re‑examining Business Purpose and Intent

We need to refocus on why our processes and documents exist, rather than simply repeating how things have been done for years; the environment has changed. Each technical process in industry was originally written to achieve a specific goal, and while periodic reviews refine the wording or steps, have we adequately revisited the underlying goals themselves and considered whether a fundamentally different approach is now possible given the growing proliferation of AI? When new technologies and capabilities warrant it, we should challenge legacy assumptions and redesign processes from first principles.

  2. Growing Data Volumes and Rising Complexity

The volume of engineering data, technical documentation, and project complexity has increased dramatically compared with generations past. Delays are not caused by people intentionally slowing work, but by the rising complexity of roles, projects, and information. Human processing capacity has not increased at the same rate, so backlogs will inevitably continue to grow unless we change how we work at a structural level.

An example given at the event translates readily to industry as well as the legal profession: AI can now produce substantial quantities of text instantly. A claimant can create a multi‑page, legal‑sounding document for free using a generative AI service such as ChatGPT, something that used to require paid legal expertise (a filter at the point of creation). Such text may well be riddled with AI-generated false authorities, and the burden shifts to the people who must read and process these documents using traditional methods. This dynamic will only accelerate as AI lowers the barriers to producing ever larger volumes of content. How do we ensure this is sustainable? See point 3.

  3. Efficiency vs. Resourcing

Simply adding more people is not a solution; it increases cost, complexity, and organisational inefficiency. The response to “too much work” should be smarter, more efficient systems rather than expanding headcount. As datasets grow, our processes and ways of working must evolve at the same pace.

  4. Scalable, Tailored Approaches

We need scalable processes that deliver “good enough” outcomes where appropriate. Not every situation requires a gold‑standard, exhaustive response. We may claim to tailor processes in industry, but in practice do we do this enough, or has “tailored” become an empty term? A scalable framework in which effort aligns with significance is essential.

  5. Design for the Future, Not Just Today

If we design solutions only for current needs, they will quickly become obsolete. We should instead design systems that address enduring goals. The goal may not have changed, but the approach to achieve it should be reconsidered.

Black & Decker’s example is instructive: customers don’t want a drill; they want a hole made in some material. We should explore how to identify what outcomes we truly need and design processes and solutions that deliver those outcomes in flexible, future‑proof ways.

  6. The Role of AI, its Independence, and Decision-Making

AI can assist with tasks such as document checking and identifying inconsistencies. However, questions remain about whether a machine can be independent or impartial, for example when serving as an authority or ‘Chair’. How far can human decision‑making be informed or supported by machine-generated assessments, and where must human authority remain central?

That said, these complex decisions should not block practical short‑term wins: for example, adopting AI to generate minutes and actions rather than continuing with slow, manual notetaking and report generation.

  7. Legal and Ethical Considerations

Relevant frameworks such as Article 6 of the European Convention on Human Rights and Article 14 of the EU AI Act must be understood in relation to fairness, impartiality, and transparency. These should inform how we incorporate AI into decision‑making processes.

  8. Rapid Technology Cycles and Vendor Dynamics

Software vendors are shifting to rapid update cycles, often with only two years of support, because technological advancement (and the exploitation of vulnerabilities) is accelerating so rapidly. This ever faster change cycle presents an opportunity for consumers to adopt new software and, importantly, the new ways of working associated with it. If an organisation does not adapt, its customers may increasingly prefer more modern, agile competitors that are adapting.

  9. Workforce Expectations and Generational Change

Further to point 8, an organisation that remains rigid or old-fashioned in how it works risks disengaging new joiners and undermining its future competitiveness and attractiveness, to the benefit of competitors that have embraced the technology expected by current and future generations of workers.

In closing, the points and reflections above are offered for further discussion in the comments.