Artificial intelligence can now produce a Business Continuity Plan in seconds.
Type “write me a BCP” into any AI tool and you will get something that looks reassuringly complete: headings, tables, roles, recovery times, even crisis communications.
For many organisations, this feels like progress. Business continuity should be easier now. Cheaper. Faster. Less reliant on specialist support.
In some ways, that is true. But there are also real risks in treating AI as the starting point for a Business Continuity Plan.
Ask an AI tool to write a BCP and what you will get is, in effect, a template. Not 'your' plan. A generic one.
AI models are trained on large volumes of material pulled from across digital sources. In the case of business continuity, that material overwhelmingly consists of downloadable templates. Many of these templates were not written by business continuity practitioners. They are often produced by legal firms, HR consultancies, accountants, software vendors, or lead‑generation sites whose primary goal is to capture contact details.
A quick skim of many publicly available BCP templates shows the problem. They look reasonable on the surface, but they often fall apart on closer inspection.
We provide a BCP template ourselves, intentionally structured for smaller organisations. However, it is deliberately built on our proven underlying methodology: it is designed to take organisations down the right path, not just help them fill in boxes. Even then, the best results come when a consultant supports the organisation as they write it. This is often a very affordable way of accessing expert input while still retaining ownership of the plan.
A template written by someone who has never delivered, exercised, or defended a BCP in the real world is not a safe foundation. AI has no way of knowing which templates are battle‑tested and which are lead‑generation forms masquerading as plans. It simply averages them, regurgitating the most common patterns.
As an example, an AI‑generated BCP might define disruption scenarios before it understands what the organisation actually does, what services matter, or what failure would even look like. The scenarios feel sensible, but they are generic assumptions, not discovered risks.
It is often said that “the prompts are everything” when using AI. That is true.
The difficulty is that good prompts for business continuity are not obvious unless you have written solid BCPs before. Knowing what to ask AI requires an understanding of the underlying methodology itself.
A consultant could almost certainly use AI to produce a better draft than a non‑specialist. But that is because the consultant already understands the method, context and nuances. The AI is still dependent on human expertise to guide it.
Without that grounding, organisations tend to ask AI to “write a BCP”, rather than asking it to support specific, well‑defined outputs within a broader continuity framework.
The biggest limitation of AI‑generated plans is not what they include. It is what they miss.
Much of our work with clients is not about analysing what is already written down. It is about identifying and formalising what is not. Critical knowledge often exists only in people’s heads, never captured in any document.
These gaps surface through experience‑led questioning. Often, they are questions we ask because of challenges we have faced on previous projects, or failures we have seen unfold during exercises or real incidents.
This is the difference between a plan that sits on a shelf and a plan that saves your business during a crisis.
AI can only work with the information it is given. It cannot interrogate silences. It cannot sense discomfort in a workshop. It cannot recognise when an answer sounds technically correct but is operationally unrealistic.
The most useful information in a business continuity plan is often the most nuanced. It is precisely the sort of detail that is least likely to be captured by an automated drafting process.
This is where experienced consultants add the most value.
Used at the right point, AI can genuinely add value to business continuity work. The key is understanding what stage you are at and what problem you are trying to solve.
In our experience, AI tends to work well in a few specific situations.
If an organisation has completed a proper Business Impact Analysis, understands its critical services, tolerances, dependencies and recovery strategies, AI can help shape and present that information more effectively. At this point, AI is working with validated inputs rather than guessing what should exist.
AI is far better at improving clarity than discovering truth. It can help restructure content, simplify language, remove duplication and make documents more readable for senior audiences. What it should not be doing is deciding recovery priorities or determining whether strategies are actually viable.
Business continuity documentation often grows over time. AI is useful for spotting inconsistencies across sections, such as mismatched role names, conflicting assumptions, or variations in terminology that could cause confusion during an incident.
Once plans exist, AI can be very effective at generating realistic exercise injects, media prompts and scenario developments. This allows organisations to run more engaging and immersive exercises without the cost or time traditionally associated with scenario design.
AI works best when it is clearly positioned as a support tool. Decisions about risk appetite, prioritisation, trade‑offs and feasibility still need to be made by people who understand the organisation and will be accountable when things go wrong.
Just as important is being honest about where AI struggles.
AI is a poor choice when those foundations are missing. Without a clear understanding of critical services, dependencies and tolerances, it tends to produce documents that look complete but lack depth, coherence and defensibility.
In practice, our business continuity consultants use AI carefully and transparently.
Clients do not engage our BCP consultants purely to produce documents. They engage them for their years of experience, judgement, challenge, reassurance and accountability. AI cannot replace those things.
Where we have found AI genuinely useful is in support roles, once the critical thinking has already been done.
AI can quickly raise the standard of written English. We use it to help turn our analysis, commentary and recommendations into clearer, more board‑ready language. The content remains ours, but it is easier to read and more consistent in tone.
Whilst we still peer‑review each other’s work, AI is effective as a first pass. It can quickly highlight inconsistencies such as one section referring to an “Incident Manager” while another refers to an “Incident Responder”, or recovery assumptions that do not align across sections.
Our exercising has improved significantly. AI allows us to generate realistic injects, media prompts and scenario developments very quickly. This means that even a simple, affordable desktop exercise can feel more realistic and immersive.
Used in this way, AI enhances our work without diluting the value clients come to us for.
AI has a place in business continuity planning. Ignoring it would be unrealistic.
But using it as a shortcut to a finished BCP is risky. At best, it produces a document that looks compliant. At worst, it creates false confidence.
Our view is simple: get the foundations right first. That usually requires experienced human input. Once those foundations exist, AI can be a powerful tool to support, refine and enhance the output.
Business continuity is not about having a plan. It is about being able to respond and recover when it matters. No AI can take responsibility for that.