The Coming Wave: why IPs should use AI early – and talk about it properly



Mustafa Suleyman’s The Coming Wave is not a productivity tips book. It is a warning about containment: how society retains the practical ability to steer, limit, and, when necessary, constrain technologies whose upside is enormous and whose downside can be system-level.

For insolvency practitioners, that lands in a very specific place. We operate where pressure is highest: time, money, distress, scrutiny, and evidential standards. In that environment, AI is not a novelty. It is a force multiplier – both helpful and hazardous in the same breath.

The core dilemma: extraordinary good, extraordinary failure

Suleyman is explicit that these technologies can be profoundly positive, able to ‘deliver an extraordinary amount of good… from helping unlock the next generation of clean energy to producing cheap and effective treatments.’

He is equally direct about the failure mode: not a tool going wrong, but societies being harmed at scale. He warns that the coming wave ‘threatens to fail faster and on a wider scale than anything witnessed before’ and that it ‘needs worldwide, popular attention.’

That’s the correct framing for our profession: the question is not whether AI is useful, but whether we can adopt it without degrading judgement, ethics, or trust.

Why early predictions fail: AI doesn’t arrive alone

One of Suleyman’s most practical arguments is that technology does not arrive as one neat thing at a time. It compounds.

He describes how new technologies evolve ‘by colliding and combining with other technologies’ and how technology is ‘a commingling set of parts to combine and recombine.’ This is why the impact cannot be anticipated accurately early on. It is not ‘AI plus a chatbot’. It is AI plus automation plus ubiquitous data plus new business models – what Suleyman calls a ‘supercluster.’

For IPs, the implication is straightforward: if we treat AI as a standalone drafting tool, we will miss its real effect – on speed, evidence, stakeholder expectations, and, critically, what regulators and courts will come to regard as ‘reasonable steps’.

Containment is rare – but where it worked, dialogue mattered

Suleyman asks whether containment has ever been achieved and points to nuclear weapons as an exception, noting they did not ‘endlessly proliferate’ in the way many expected.

He attributes that to hard engineering, but also to something more human: ‘the long, patient hours of discussion, the decades of painstaking treaty negotiations… [and] international collaboration.’

That is the key lesson for professionals: containment is not only a technical safety problem. It is norms, incentives, institutions, enforcement, and professional ethics – built through sustained dialogue at multiple levels.

The cautionary tale: when we don’t contain, we patch later (badly)

Suleyman argues that in many technologies we have adopted first and governed later – often clumsily. He observes that even for social media there is ‘no consistent approach’ to regulating a powerful platform, and debates fragment into silos. He cautions it is ‘not enough to have dozens of separate conversations’ about different risks; the issues are ‘interrelated.’

You can see that ‘patch later’ dynamic now playing out in social policy. Australia has legislated a minimum age requirement for certain social media platforms, requiring providers to take ‘reasonable steps’ to prevent under-16s from holding accounts, commencing 10 December 2025. Whether one supports it or not, it is a textbook example of the pattern: mass adoption, then social cost, then regulation.

IP takeaway: if we want AI to raise quality rather than corrode it, we cannot leave governance solely to legislators, platform providers, or large firms with dedicated compliance teams. We need our own professional-level dialogue and norms.


What this means in practice for insolvency practitioners

1) Use AI early – but inside professional guardrails

Waiting for ‘perfect rules’ is the surest way to become dependent on others’ interpretations. Early use builds literacy: you learn what these tools do well, where they hallucinate, where they overconfidently simplify, and where they miss nuance.

But for IPs the controls must be explicit:

  • Human in the loop: AI drafts; the office-holder decides.
  • Confidentiality discipline: do not feed sensitive case data into general-purpose tools unless you have a secure arrangement you understand.
  • Audit trail: keep a note of what was generated and what was changed (especially where outputs influence statutory reports, correspondence, or decisions that may later be scrutinised).
  • Competence and due care: train staff to challenge AI outputs, not simply accept them.

Suleyman is clear that ‘regulation alone is not enough.’ The profession must build capability and norms that stand up under stress and scrutiny.

2) Don’t confuse speed with quality

AI will compress time. That is its superpower.

In insolvency, compressed time can be an advantage – better responsiveness to directors in distress, quicker triage, faster stakeholder communications, less wasted time on boilerplate. But compressed time also creates new risks:

  • shorter reflection cycles;
  • weaker checking;
  • template creep (where every case starts to sound the same);
  • and false confidence (because the prose is fluent).

The discipline is to use AI to buy time, then spend the time you saved on the higher-judgement work: strategy, stakeholder management, evidence, and ethics.

3) Make dialogue part of your operating model

Suleyman points to wider societal mechanisms – civil society, campaigns, and even citizen assemblies – to make containment ‘collective’ and ‘grounded.’

In our world, ‘dialogue’ means something very practical:

  • partners agreeing firm-wide acceptable use rules;
  • sharing near-misses internally (what the AI got wrong and how it was caught);
  • comparing approaches across the profession (what is becoming market standard);
  • and feeding real-world insight into professional bodies and regulators.

If we don’t do this, we will end up with reactive, fragmented rulemaking that is divorced from how cases actually run.


A 30-minute ‘AI governance’ discussion for IP firms

Use these questions in a partner meeting or technical meeting:

  1. Which AI use cases do we encourage today (drafting, triage, checklists, research, training)?
  2. Which uses are prohibited (e.g. feeding case-specific confidential data into insecure tools, issuing unreviewed statutory content, anything that could mislead stakeholders)?
  3. What does ‘human in the loop’ mean in our firm – what must be checked, by whom, and recorded?
  4. How do we evidence competence and due care (training log, review steps, supervision)?
  5. How do we prevent ‘template creep’ and keep case narratives genuinely case-specific?
  6. What’s our approach to transparency with clients and stakeholders, where relevant?
  7. What is our incident process if an AI-assisted document contains an error?
  8. What will we review quarterly as tools and expectations evolve?


The challenge: try VAi properly for one week

If you are an IP and you have not yet tested a specialist insolvency co-pilot, you are operating with a major blind spot.

Try VAi for one week – not as a ‘Google replacement’, but as a structured colleague. For that week, make VAi your ‘experienced IP in the corner office’: the person you would naturally bounce things off when you want a second view, a tighter structure, or a reminder of angles you might have missed.

Use it on:

A) Non-standard letters (where templates don’t help)

  • Bespoke letters to directors addressing unusual factual patterns
  • Stakeholder correspondence where tone, clarity and risk sensitivity matter
  • Follow-up letters where you must explain decisions without inviting dispute

Ask VAi to draft, then adjust to your house style and your actual facts.

B) File notes (fast, consistent, and more defensible)

  • Call notes from director conversations
  • Creditor / solicitor / funder discussions
  • Internal decision notes and ‘why we did what we did’ contemporaneous records

Used properly, this becomes a practical way to improve consistency and evidencing, especially under time pressure.

C) Strategy exploration and decision documentation

This is where an AI co-pilot can be most valuable for senior practitioners:

  • Explore options and constraints (legal, commercial, ethical)
  • Stress-test assumptions (‘what would a creditor challenge here?’)
  • Document your reasoning: why X was rejected, why Y was chosen
  • Produce a coherent narrative you can refer to later if scrutiny arises

In other words: VAi can help you think in writing, then convert that into an auditable record.

D) Your ‘bounce-it-off’ partner

Use VAi as the reliable second mind for:

  • ‘What am I missing?’ prompts
  • Drafting alternative approaches and pros/cons
  • Sanity-checking tone and stakeholder reactions
  • Creating clearer explanations for directors in distress

Two rules to keep it professionally safe:

  1. Human sign-off every time (you remain the office-holder; VAi is the assistant).
  2. Keep a learning log: what VAi got right, what it missed, and what you corrected – because that’s how your firm builds competence rather than outsourcing judgement.

If you adopt early, you will be better placed to shape the professional norms – rather than simply comply with whatever arrives later.

Finally, let me ask: which books on AI are you reading, and which would you recommend? I’d be delighted to know.
