Safeguarding report writing and artificial intelligence

Artificial Intelligence is rapidly moving from a novelty tool to the new norm across sectors, including education, charities and faith-based organisations. Increasingly, safeguarding practitioners and organisations are asking whether generative AI tools can assist with drafting safeguarding reports, structuring case notes, summarising chronologies, or improving written clarity. Whether you work professionally in safeguarding or volunteer as a safeguarding officer for a charity, there are several questions to reflect on when thinking about AI.

The central question, however, is not whether AI can produce text.

It can.

The more important question is whether AI should be used in safeguarding documentation, and if so, under what conditions.  Safeguarding reports are not routine administrative documents. They might:

  • Inform statutory decision-making

  • Be disclosed to police or social services

  • Form part of court bundles

  • Be scrutinised in disciplinary processes

  • Be read by survivors and families

As with a safeguarding report written the old-school way, precision, evidential integrity, accountability, and fairness are essential.

Let’s explore the potential benefits, the significant risks, and the professional safeguards required if you’re planning on using AI in safeguarding report writing.

The Potential Benefits

Administrative efficiency

The National Workload Action Group’s[1] supplementary report on AI in case recording recognises that administrative burden is a significant pressure within children’s social care. It explores whether AI tools may assist with aspects of case recording, drafting and summarising, potentially freeing practitioners to focus on relational practice.

Used well, AI may assist with:

  • Structuring documents

  • Formatting headings

  • Creating consistent templates

  • Correcting grammar and readability

Note that these are mechanical tasks rather than analytical tasks.

Consistency and structure

AI can generate structured frameworks for reports, helping ensure key sections are consistently addressed, for example:

  • The nature of the concern

  • The context in which the safeguarding incident happened

  • Evidence and observations of persons involved/witnesses

  • Analysis of safeguarding incidents

  • Actions taken during and after the incident

  • Reflection on safeguarding practice

Research in Practice[2] has discussed generative AI’s potential role in children’s social care case recording, particularly in relation to consistency and reflective prompts. However, they emphasise that professional judgement must remain central.

Consistency can support quality assurance, provided the substance remains practitioner-authored and professionally accountable.

Plain English and accessibility

The Nuffield Family Justice Observatory[3], in its briefing on AI in the family justice system, acknowledges that AI may assist in drafting clearer, more accessible documents. This can be valuable where reports need to be understood by non-specialist readers, including families, but that clarity must not come at the cost of evidential accuracy.

Reflective prompts and gap identification

Some safeguarding practitioners report using AI to identify missing dates, unclear chronology, or unanswered questions within drafts. In this sense, AI may function as a reflective tool, similar to a checklist, rather than a decision-maker, and this distinction is crucial.

The Risks and Professional Concerns

While the potential benefits are real, the risks attached to using AI in safeguarding report writing are substantial and widely recognised by professional bodies.

Hallucinations and fabricated detail

Generative AI systems are probabilistic: they predict plausible language patterns and can insert details that were not provided, smooth inconsistencies, or fill informational gaps.

The Information Commissioner’s Office (ICO)[4], in its guidance on AI and data protection, highlights risks relating to accuracy, fairness, and explainability. In safeguarding contexts, even minor inaccuracies can materially affect risk assessment.

A changed verb.
An added descriptor.
An assumed sequence of events.

These are not trivial and can have a significant impact on process, understanding and, most importantly, the people at the centre of the safeguarding issue.

Distortion through paraphrasing

One of the most serious dangers in safeguarding report writing is the paraphrasing of disclosures. AI can:

  • Replace ambiguous language with definitive phrasing

  • Strengthen or weaken certainty

  • Alter the emotional tone of the report

  • Substitute legally significant terminology

For example, the difference between “pushed,” “assaulted,” and “physically restrained” carries evidential weight.

Social Work England’s[5] report on the emerging use of AI in social work education and practice cautions that generative AI may reshape meaning while presenting output as fluent and authoritative.

In safeguarding, meaning matters.

“These are not necessarily the person’s own words”

If AI refines rough notes into polished narrative, the resulting text may read as though it reflects verbatim disclosure when it does not.

This presents serious risks:

  • Evidential contamination

  • Credibility challenges in court

  • Allegations of narrative shaping

  • Survivor distress if language feels inauthentic

In adversarial contexts, the process by which a document was produced can itself become subject to scrutiny.

When equipping staff and volunteers in report writing after a disclosure, we always teach that the person’s own words should be used. Using AI in a document that might end up in a justice setting raises important concerns about transparency, reliability, and procedural fairness.

Confidentiality and data protection

Safeguarding data frequently involves special category personal data under UK GDPR. The ICO’s guidance on AI and data protection stresses lawfulness, fairness, transparency, data minimisation, and security[6].

Entering identifiable safeguarding information into non-approved AI tools risks:

  • Unlawful processing

  • Data breaches

  • Loss of control over sensitive information

  • Reputational and regulatory consequences

Internationally, there have also been concerns where child protection workers have used generative AI tools inappropriately with sensitive data. In 2024, the Department of Families, Fairness and Housing (DFFH) in Victoria, Australia, reported an incident to the Office of the Victorian Information Commissioner in which a child protection worker had used ChatGPT to draft a report that was submitted to the Children’s Court. ChatGPT altered or misrepresented elements of the safeguarding assessment, thereby changing the content of the report; this included evidence of harm being reported as a positive factor. The Victorian Information Commissioner ordered the DFFH to ban staff from using generative AI tools[7] so that this type of error would not happen again.

Bias and inequality

AI systems are trained on large datasets that reflect societal patterns, including bias. The ICO and professional bodies caution that AI may reproduce or amplify discriminatory assumptions.

In safeguarding, where anti-oppressive practice is central, even subtle bias can influence tone, interpretation, and assessment.

False authority and over-reliance

The British Association of Social Workers (BASW)[8], in its statement on generative AI, emphasises the ethical responsibility of social workers to retain professional judgement and accountability.

AI-generated text is often polished and confident. This can create a false sense of reliability that crowds out professional curiosity, insight and decision-making skills. Safeguarding decisions and outcomes must not be outsourced to predictive text systems.


Professional judgement is not automation

An empirical study from Cardiff University comparing ChatGPT outputs with social workers’ judgements demonstrates that while AI can sometimes approximate structured reasoning, it does not replicate contextual understanding, relational nuance, or ethical accountability.

Safeguarding decisions are not pattern-recognition exercises; they are professional, relational, and morally complex.

Court, Statutory Scrutiny, and Evidential Integrity

When safeguarding reports are shared with statutory agencies or placed before a court, key principles apply:

  • Clear authorship

  • Evidential traceability

  • Distinction between fact, observation and opinion

  • Transparent rationale throughout the report

  • Accountability for professional judgement

If AI has been used, organisations must be prepared to explain:

  • What tool was used

  • For what purpose

  • Whether identifiable data was processed

  • How accuracy was verified

  • How bias was mitigated

In court contexts, a lack of transparency can undermine credibility. Charges can be dropped and cases thrown out because the evidence (i.e. the report you wrote) is not fit for purpose, and the person at the centre of the court case is denied justice.

Safeguarding report writing is not merely a descriptive exercise; it is accountable to all those involved.

 

A proportionate approach to AI and safeguarding report writing

The extremes of an outright ban or uncritical adoption of AI are, at best, unhelpful and, at worst, unlawful, with devastating impacts for victims and survivors. It is the responsibility of organisations to develop a clear policy and framework for using AI well; it is not the responsibility of staff, volunteers or the person at the centre of the safeguarding issue to have to navigate AI on their own.

Where AI is used, a balanced organisational framework might include:

Clear red lines

  • No identifiable safeguarding data entered into unapproved AI systems

  • No AI-generated direct quotations

  • No AI-generated professional analysis or threshold decisions

  • No automated drafting of survivor disclosures

Limited permitted uses (with oversight)

  • Structural templates

  • Grammar and readability improvements

  • Formatting assistance

  • Converting practitioner-written paragraphs into clearer prose without altering meaning

Mandatory safeguards

  • Human verification against original contemporaneous notes

  • Explicit distinction between fact and analysis

  • Clear audit trail of authorship

  • Supervisory oversight where AI tools are used

  • Alignment with ICO data protection guidance

 

Equipping staff and volunteers

Once you have created your policy and framework for using AI in the safeguarding context, training staff and key volunteers is essential. As part of your organisation’s ongoing safeguarding practice and professional development, incorporate discussion of receiving and recording disclosures well, including the use of AI, into that wider conversation.

Remember the core principles of safeguarding

Safeguarding practice is fundamentally relational, contextual, grounded in lived experience and underpinned by legislation and practice. It requires:

  • Active listening

  • Professional curiosity

  • Ethical judgement

  • Trauma-informed practice and sensitivity

AI can assist with administrative frameworks, but it cannot replicate professional accountability. The danger is not that AI writes poorly; it often writes extremely well. The danger is that it writes convincingly.

In safeguarding, convincing is not the same as accurate.
Polished is not the same as evidential.
Efficient is not the same as safe.

Safeguarding reports are not simply text; they are first and foremost someone’s lived experience.

If you would like support for your organisation to develop good safeguarding reporting practices, Kaizen Safeguarding is here to assist.  Contact | Get in Touch Today — Kaizen Safeguarding 



References

[1] National Workload Action Group (2023) Artificial intelligence in case recording: Supplementary report. Department for Education. Available at: https://assets.publishing.service.gov.uk/media/68da6c86c487360cc70c9e95/Artificial_intelligence_in_case_recording_-_national_workload_action_group_supplementary_report.pdf

[2] Research in Practice (2024) Discussing the use of generative AI in children’s social care case recording. Available at: https://www.researchinpractice.org.uk/children/content-pages/videos/discussing-the-use-of-generative-ai-in-children-s-social-care-case-recording/

[3] Nuffield Family Justice Observatory (2024) Artificial intelligence in the family justice system: Briefing. Available at: https://www.nuffieldfjo.org.uk/wp-content/uploads/2024/05/NFJO_AI_Briefing_Final.pdf

[4] Information Commissioner’s Office (ICO) (2023) Guidance on AI and data protection. Available at: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection

[5] Social Work England (2023) Understanding the emerging use of artificial intelligence (AI) in social work education and practice in England. Available at: https://www.socialworkengland.org.uk/media/ge5plflg/understanding-the-emerging-use-of-artificial-intelligence-ai-in-social-work-education-and-practice-in-england_v1_final_.pdf

[6] Information Commissioner’s Office (ICO) (2023) Explaining decisions made with artificial intelligence. Available at: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence

[7] Guardian Australia (2024) AI ban ordered after child protection worker used ChatGPT in Victorian court case. Available at: https://www.theguardian.com/australia-news/2024/sep/26/victoria-child-protection-chat-gpt-ban-ovic-report-ntwnfb?utm

[8] British Association of Social Workers (BASW) (2025) Statement on social work and generative artificial intelligence. Available at: https://basw.co.uk/sites/default/files/2025-03/181372%20Statement%20on%20Social%20Work%20and%20Generative%20Artificial%20Intelligence.pdf