AI: Opportunities and threats
In this interview, Fr Jean Gové, a Maltese priest and philosopher, summarizes for us some of the questions surrounding AI.
Interview between Stephen Watt and Fr Jean Gové.
Would you say a little about your background and how you came to be studying for a PhD in philosophy at St Andrews?
I’m a diocesan priest from the Archdiocese of Malta, ordained just over five years ago in 2019. As I was completing my theological studies in Malta, my Archbishop asked me to pursue further studies in analytic philosophy, and St Andrews was singled out as an ideal place to do this. I first took a postgraduate diploma, then followed on to the Master’s programme, and finally the PhD, which I successfully defended last April.
Your main research interests centre around the philosophy of mind, language and epistemology. How important do you think AI is in relation to this general field?
I suppose it would be better to ask the opposite question - how important are philosophy of mind, language and epistemology to AI? The answer there would undoubtedly be “A lot!”
Within these fields, we discuss a myriad of questions. How do we think, reason and understand? How do we know what we’re thinking? What is the relationship between our thoughts and words about objects, and those objects themselves? How do we become acquainted with new objects and grasp new concepts? What is it to be conscious, and in virtue of what are we conscious? The list goes on…
The more complex AI systems become, and the more ‘life-like’ they seem to be, the more people seem to be confused about whether and how AI systems differ from us humans. The philosophical study of mind, language, and epistemology, as the small sample of questions provided above shows, thus offers us invaluable tools and concepts to help distinguish between genuine thought and sentience on the one hand, and mere computational processes on the other.
Although AI in some form has been around for a while now, the recent explosion in the use of LLMs (large language models) has resulted in AI being a much more obvious part of everyday life for many people. What do you think the greatest threat and the greatest opportunity are likely to be from this expansion in technology?
As you rightly pointed out, AI has been around for a while, but it’s only in the past few years that we have seen this marked expansion and, more importantly, a more diffuse usage of this technology - specifically with respect to what is called ‘Generative AI’ (GenAI). GenAI refers to AI systems, such as LLMs, which are able to create new material, be it text, image, or audio, based on the existing data a system has been trained on.
AI systems in general have the capacity to automate a good number of menial or time-consuming tasks that, up until now, could have only been carried out by a human agent. Doctors and educators, for example, can devote more time to actually being with their patients or students, instead of being taken up carrying out time-consuming administrative tasks that can now be offloaded onto AI systems.
Furthermore, given AI systems’ great ability to predict and pick out patterns, they also have the potential to be used to solve more complex problems. A fantastic example of this is the AI system AlphaFold, developed by DeepMind. A great problem in the scientific community, commonly known as the ‘protein folding problem,’ was that of figuring out the structure of complex proteins. This knowledge is important for a number of reasons, such as the development of new medicines. AlphaFold, launched in 2018, essentially solved this problem.
So the opportunities for great benefit are definitely there. But there is equally a threat, or rather, a number of threats that AI poses to us. I’ll just focus on a couple. With the ability to create vast amounts of new content in a short span of time, a very real worry relates to disinformation campaigns. We’ve already seen this a number of times in the news where election interference is involved. The use of AI technology in creating deepfake media can cause great harm - whether it’s a fake phone call pretending to be President Biden dissuading voters from going to vote, or a fake video of President Zelensky ordering Ukrainian troops to surrender. Another, related, point concerns the issue of fairness. As time passes, AI systems become more embedded in our daily lives. Major decisions with long-lasting consequences - be they financial, medical, or civil - are slowly being taken by AI systems. This is, in part, due to an assumption that AI systems are necessarily always better than a human decision-maker (or, at the very least, cheaper!). However, this is not the case. While AI systems are better than human agents at examining vast amounts of data, it has been seen time and time again that AI systems replicate the same biases found in human agents (an excellent textbook example is the COMPAS AI system employed in US law courts).
There have been many long-standing concerns among philosophers and theologians about the general impact of technology and the machine age on modern life. Do you think the problems of AI are simply a continuation of these sorts of concerns or do you think it raises unique problems of its own?
In some sense, there is a continuation of certain issues. Technology is always framed in the perspective of ‘tools.’ Now we generally take certain rudimentary tools to be morally neutral. This means that the tools in themselves are not ‘good’ or ‘bad’; rather, it’s what a subject does with them (or by means of them) that is morally good or bad. However, this argument becomes more complicated and less clear once we begin to consider more complex tools – more complex technologies. In this sense, the discussion surrounding AI is just a continuation of the debate on the impact technology has on us.
But at the same time, as we have seen in the past with other technologies, novel tools tend to bring about novel challenges. A major novel challenge that AI poses, one that we have not really been confronted with before (at least not on this scale), is the possibility of AI replacing our interactions with other humans – especially in meaningful or important contexts relating to friendship, dating, therapy, and education, to mention a few. Are interactions with AI systems equivalent to interactions with other human beings? And if not, how does this adversely affect us individually, and as a society?
I’d like to focus on two problems that might be raised by AI. The first is whether AI might raise issues regarding the creation of artificial persons, and the second is whether its use might damage the flourishing of existing human persons. Starting with the first problem, do you think there are any implications for our understanding of personhood that AI raises particularly for theology? For example, might we have to wrestle with whether artificial persons can be saved?
This is quite an important question – and one that, given my background, I tend to focus on greatly. And while this question might seem a bit far-fetched and theoretical at the moment, we can very easily imagine a not-too-distant future wherein the AI systems employed seem to be, for all intents and purposes, identical to the way we reason and feel. What do we do then? Should such AI systems be accorded the same rights we have? Affirming that such AI systems are indeed persons like you or me might also imply that, as you mentioned, they could or should also be saved.
Theologically speaking, we tie the concept of personhood, and the dignity that we enjoy, to the fact that we are created in the ‘image of God’. We have normally expressed this biblical phrase in terms of abilities that we possess (as in CCC §357) and which no other animal does. But what if an AI system were able to have these same abilities (assuming that such a system would not simply be imitating these abilities, but genuinely performing them)? The Incarnation might also have something to tell us here. Christ took on this particular form – this human, organic body. Does this mean that to be in the ‘image of God’ necessarily implies having an organic, flesh-and-blood body? This is not entirely clear to me.
Let me be clear, I am not implying that the AI systems we have today should be considered as persons (or are even conscious). Furthermore, I have significant reservations about whether or not this might be possible. However, what is sure is that these questions will be raised, and we need to have the theological and philosophical arguments to confront them.
Turning to the second problem, a worry that seems to be common is that AI might become too intelligent for our own good, with the takeover by Skynet in the Terminator films being a popular representation of this. Do you think that this is a real worry and is there anything that might be done to mitigate this threat?
While I would not categorically exclude this possibility, I don’t think this should be a major cause for panic – at least not presently anyway. On the contrary, instead of worrying about rogue AI, I think we should be more concerned about AI being used for malevolent ends by what are termed ‘bad actors’ – individuals who make use of AI tools in order to advance a harmful agenda. We should also be worrying about the great power that a few companies wield in this area, and about the lack of legislation to safeguard against the abuse of such tools.
As well as the ‘big issues’, two of which I’ve just mentioned, there are many smaller but still important issues raised by AI which are of perhaps more immediate concern. One example of a more immediate threat is the way that assumptions unconsciously or consciously built into algorithms controlling AI can result in issues of social justice, for example, in discrimination against minority groups. Moreover, because AI is being developed and applied by experts with little moral or political accountability, there is a danger of such control being exercised against the common good. Do you think this sort of injustice is a real problem and, if so, are there any possible insights that Catholic social teaching can bring as to how this problem might be mitigated?
As mentioned above, these are the more pressing issues that we should actually be worrying about and doing something about. The COMPAS case is an excellent example of racial bias. Furthermore, while AI has its benefits, there is also a real risk that such tools may further exacerbate the inequalities already present in our societies. Catholic social teaching has many points that can be helpful in these cases, from issues relating to labour, to the environment, to what Pope Francis repeatedly refers to as the ‘technocratic paradigm.’ However, I’d just like to highlight one principle in particular – subsidiarity. The use of AI tools, if not done properly, has the potential to further centralise power and decision-making, thus affecting the autonomy of individuals and of smaller community groups. We must guard against this and ensure that AI remains a tool that does not enslave, but liberates.
A further but more hopeful opportunity that AI seems to be presenting just now is the expansion of cheap educational tools. Do you think that the Catholic Church might find opportunities here particularly for catechesis?
Yes! While of course not discarding the importance of human relationships and interactions, especially in the important activity of transmitting the faith, AI can serve a role in this area too. A particular project that I’m contributing to does precisely this – Magisterium AI. This is an LLM that is able to answer questions related to the faith by referencing a vast number of magisterial texts, along with writings from patristic authors, Doctors of the Church, and other saints and theologians. It is constantly being updated, and is a great tool not only for individuals who would like to know more about the faith, but also for pastoral workers and researchers alike.
Overall, are you optimistic or pessimistic about the future of AI?
As Catholics we are always invited to be hopeful. The Lord has endowed us with many gifts and talents to use, to look after the planet, to build God’s kingdom of love, and to ultimately be united with Him. Problems and challenges will always present themselves, but I believe that the desire for good and truth that is placed in the heart of every human being will ultimately prevail and lead us forward.
Could you recommend to our readers any sources for exploring AI and Catholicism further?
In January, the Pope published two messages relating to AI that merit reading – the Message for the World Day of Peace, as well as the Message for the World Day of Social Communications. Furthermore, a number of Dicasteries within the Holy See are working tirelessly in this area, most notably the AI Research Group within the Dicastery for Culture and Education. Their recent publication, ‘Encountering AI’ (which can be freely downloaded), is a great resource. In addition, an ever-increasing number of Catholic academics are also publishing writings on this issue.
*This was a written interview sent via email to Fr Gové.