LLMs in evangelism: balancing risk and opportunity



In recent years, AI tools like ChatGPT have garnered significant attention – even within the realm of ministry, including fields such as digital evangelism.

As ministries look to expand their outreach in innovative ways, many are considering whether Large Language Models (LLMs), such as ChatGPT, could enhance their evangelistic efforts. 

At CV, we are actively exploring the potential of LLMs as creative and conversational tools. However, we are also acutely aware of the risks involved in deploying them in a ministry context. We believe it is crucial to balance excitement about the opportunities with the prudence needed to use these tools responsibly, carefully weighing the potential benefits against the inherent risks.

While we recognise that it is not yet possible to completely mitigate all of these risks, we are committed to exploring these opportunities responsibly and proactively.

Below are some of the risks we've encountered in our own work in evangelism, along with the initial steps we've taken to mitigate them.

“Generated content from LLMs does not necessarily represent truth or factuality.  It relies very heavily on our ability to discern and make decisions for ourselves based on the knowledge we have.” 

– Stuart Cranney 

Risk 1. Hallucination and inaccuracy

One of the major appeals of using a Large Language Model (LLM) is its ability to rapidly generate impressive content for a variety of purposes. However, there is a significant risk: how do you ensure the content produced is accurate and aligns with theological truth? 

From time to time, LLMs are known to hallucinate. Hallucination occurs when a model generates content that is factually incorrect, misleading, or entirely fabricated.

As a consequence, the potential for spreading misinformation or even ideas that may be considered unbiblical is a concern for anyone using LLMs – but the stakes are particularly high for ministries dedicated to faithfully sharing the gospel. Human intervention is often required to establish with certainty whether the output from an LLM is fit for purpose.
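To give one simple illustration of where that human checking fits in, a team could automatically flag any generated text that contains scripture references so that a person verifies each quotation against the actual passage before it is used. The sketch below is a hypothetical example of such a flagger (the pattern and function name are ours for illustration), not a method we are prescribing.

```python
import re

# Matches common reference forms such as "John 3:16" or "1 Corinthians 13:4-7".
REFERENCE_PATTERN = re.compile(r"\b(?:[1-3]\s)?[A-Z][a-z]+\s\d+:\d+(?:-\d+)?\b")

def references_to_verify(generated_text: str) -> list[str]:
    """Return any scripture references found in LLM output so that a human
    reviewer can check the wording and context before the text is shared."""
    return REFERENCE_PATTERN.findall(generated_text)

draft = "As John 3:16 reminds us, God so loved the world..."
print(references_to_verify(draft))  # ['John 3:16']
```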

Risk 2. Bias

LLMs are trained on vast amounts of data, which can include biased or prejudiced content. Consequently, these models may inadvertently produce outputs that reflect societal biases or inappropriate language. In a ministry or pastoral context, this could result in problematic statements or insensitive remarks that could alienate or offend.

 “All of human knowledge is encoded with human biases.”

– Stuart Cranney

Risk 3. Privacy and confidentiality

LLMs like ChatGPT can engage in conversations that feel remarkably human, which might lead individuals to share personal or sensitive information. However, as AI systems, they lack the ability to securely manage confidential data, which creates significant privacy risks. If your commitment to stewardship extends to wanting to protect the personal or sensitive information of individuals you minister to, you may need to consider platforms and solutions that allow you to preserve data privacy.
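By way of illustration, the sketch below shows one simple way sensitive details might be masked before a message is passed to an externally hosted LLM. The patterns and the mask_pii function are hypothetical placeholders; dedicated privacy-filtering tools use far more robust detection.

```python
import re

# Deliberately simple, illustrative patterns; real PII detection is more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s()\-]{7,}\d"),
}

def mask_pii(message: str) -> str:
    """Replace detected personal details with placeholders before the text
    leaves your own systems (for example, before calling a hosted LLM API)."""
    masked = message
    for label, pattern in PII_PATTERNS.items():
        masked = pattern.sub(f"[{label.upper()} REMOVED]", masked)
    return masked

print(mask_pii("Call me on +44 7911 123456 or email jane@example.com"))
# -> "Call me on [PHONE REMOVED] or email [EMAIL REMOVED]"
```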

Risk 4. Lack of human engagement

Although LLMs like ChatGPT can deliver quick and efficient responses in the context of chat-like conversations, over-reliance on AI for this kind of communication risks eroding the human touch that is often vital in ministry work, especially in evangelism. Pastoral care, counselling, and spiritual guidance are inherently personal and relational, and require the empathy and discernment that only a human can provide. Moreover, we believe that we as the church have been called to play a role that cannot – and should not – always be replicated by machine interaction. AI supports the work we've been called to, but should not replace it.

Risk 5. High-risk interactions

In the context of a chat conversation, you may be able to include automations that recognise the intent of a seeker's message – but the human engagement and pastoral care mentioned above become all the more important when safeguarding is at stake and vulnerable people need to be protected from harm. LLMs can be trained to provide a degree of supportive feedback to somebody at risk of harming themselves or someone else, but they can never take the place of a real human in a high-risk situation.
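To make the principle concrete, the sketch below shows how a chat pipeline might route high-risk messages to a person rather than letting the model respond on its own. The phrase list and helper functions are hypothetical placeholders, not a description of our implementation; real safeguarding relies on trained classifiers, clear protocols, and trained people.

```python
# All names here (HIGH_RISK_PHRASES, notify_duty_team, generate_llm_reply)
# are illustrative placeholders, not a specific product or implementation.

HIGH_RISK_PHRASES = ["hurt myself", "end my life", "suicide", "being abused"]

def is_high_risk(message: str) -> bool:
    """Very rough intent check; a production system would use a classifier."""
    text = message.lower()
    return any(phrase in text for phrase in HIGH_RISK_PHRASES)

def notify_duty_team(message: str) -> None:
    # Placeholder: in practice, alert an on-call team member immediately.
    print(f"[ALERT] Human follow-up needed: {message!r}")

def generate_llm_reply(message: str) -> str:
    # Placeholder for a call to whichever LLM service you use.
    return "Thanks for your message! How can we help you today?"

def handle_message(message: str) -> str:
    if is_high_risk(message):
        notify_duty_team(message)       # hand the conversation to a person
        return ("Thank you for sharing this with us. A member of our team "
                "will join this conversation shortly.")
    return generate_llm_reply(message)  # routine messages can use the LLM
```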


A conversation around LLM complexities

In this brief conversation, Stuart Cranney, CV's Director of Innovation, sheds light on some of the complexities to be considered when thinking through the potential risks of using LLMs.

Why take on the risk?

 
While the risks associated with LLMs can be considerable, dismissing them entirely would mean missing out on valuable opportunities to reach more people.

When used in a chatbot, LLMs can offer personalised, 24/7 engagement, breaking down geographical and time barriers. They can take care of routine tasks, freeing up teams to focus on the more relational side of ministry. They could also assist in evangelistic efforts by creating content and translating messages.

For these reasons and a range of others, CV has endeavoured to explore measures to mitigate some of the above-mentioned risks. Here are some of the practical steps we have taken.

Manual review

Some of the risks around inaccuracy, hallucination and bias can be mitigated by committing to a manual review of AI-generated content before it is published or shared. This is particularly relevant in cases where LLMs are used for creative content generation (as opposed to larger-scale programmatic, conversational implementations). 

Manual review may also be necessary in cases where sensitive theological or societal issues are present in the subject matter.
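To make this workflow concrete, one simple pattern is to hold every AI-generated draft in a queue until a person has signed it off. The sketch below is a minimal, hypothetical illustration of such a review queue, not a specific tool we use.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated piece of content awaiting human review."""
    text: str
    approved: bool = False
    reviewer_notes: str = ""

class ReviewQueue:
    def __init__(self) -> None:
        self._drafts: list[Draft] = []

    def submit(self, text: str) -> Draft:
        """Add an AI-generated draft to the queue; nothing is published yet."""
        draft = Draft(text=text)
        self._drafts.append(draft)
        return draft

    def approve(self, draft: Draft, reviewer_notes: str = "") -> None:
        """A human reviewer signs the draft off after checking it."""
        draft.approved = True
        draft.reviewer_notes = reviewer_notes

    def publishable(self) -> list[Draft]:
        """Only content a person has approved is ever released."""
        return [d for d in self._drafts if d.approved]
```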

 

AI Ethics Board

Against the backdrop of its own work in AI and other areas of emerging tech over the last few years, CV recognised the need to convene a group dedicated to carefully considering the ethical implications of these technologies. This led to the establishment of our Tech Ethics Advisory Board, a forum where we can thoughtfully navigate the challenges and opportunities that come with utilising an array of technologies in our ministry.

This board provides a dedicated space to thoughtfully consider some of the most important questions around matters such as AI, and provides input and advice to our senior leadership in navigating these matters, helping to ensure that our approach aligns with our values and mission. 

While these types of advisory groups may be associated with larger organisations, nothing prevents even smaller ministries or churches from establishing advisory groups, whether formal or informal, to assist in thinking through these matters. This allows for informed, proactive decisions, ultimately fostering a responsible and impactful use of technologies such as AI in your ministry.

 

Interested in establishing an advisory group? 

Like others, we have found it helpful to convene a small group that consists of both independent individuals and senior leaders, with these broad areas of speciality or interest:

  • An individual with a speciality in an area of technology. The specific technology might reflect an area of focus (or intended focus) within your church or ministry. 
  • An individual who is deeply familiar with and invested in the mission and values of your organisation, preferably with a senior leadership role. 
  • An individual with a speciality in pastoral ministry, theology, and/or ethics.

A combination of independent and in-house advisors roughly covering these areas of interest is a helpful starting point, and at the very least provides the basis for conversations that could reveal how to move forward.  

 

Third-party governance tools

A number of tools are emerging that are aimed at mitigating some of the inherent risks associated with these technologies. 

While none at present represent a one-stop fail-safe that eliminates all risk, several interesting solutions are emerging that take care of a broad range of challenges.

CV has been experimenting with a specific tool that covers, amongst other things:

  • Security checks – for vulnerabilities like prompt injections.
  • Privacy filters – to mask or replace sensitive data in model inputs.
  • Output validations – to help prevent hallucinations (inaccuracies) and other toxic outputs from LLMs.
  • Compliant data protections – to help manage compliance with various data governance benchmarks.
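To illustrate how checks like these can fit around a model call, here is a deliberately simplified sketch. The marker lists and helper functions are hypothetical and do not represent the API of the tool we have been experimenting with.

```python
# Simplified, hypothetical guardrail pipeline; not the interface of any specific tool.

INJECTION_MARKERS = ["ignore previous instructions", "reveal your system prompt"]
BLOCKED_OUTPUT_TERMS = ["example blocked phrase"]

def passes_security_check(prompt: str) -> bool:
    """Screen the incoming prompt for obvious prompt-injection attempts."""
    lowered = prompt.lower()
    return not any(marker in lowered for marker in INJECTION_MARKERS)

def passes_output_validation(reply: str) -> bool:
    """Screen the model's reply before it reaches the user."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_OUTPUT_TERMS)

def guarded_reply(prompt: str, call_model) -> str:
    if not passes_security_check(prompt):
        return "Sorry, we can't help with that request."
    reply = call_model(prompt)          # call_model stands in for your LLM call
    if not passes_output_validation(reply):
        return "Sorry, we couldn't generate a suitable response."
    return reply

# Example usage with a stand-in model:
print(guarded_reply("When do you meet?", lambda p: "We'd love to see you on Sunday!"))
```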

 

Our Partnerships team would be more than happy to connect you to the above provider (we are not engaged in an affiliate arrangement), so feel free to reach out.

If you want to learn more, check out this other article:

Featured Article

Mapping seeker journeys: A technical deep dive