AI, Ethics, and Empathy: A Conversation with Rev. Lauren Grubaugh Thomas
>> Peter: Welcome to episode nine of AI Church Toolkit,
the podcast that equips church leaders with practical tools
for faithful ministry in a digital world.
I'm Peter Levenstrong, solo hosting today while my co-host Mercedes takes a short personal leave. She'll be back with us for our next episode.
Today I'm joined by the Reverend Lauren Grubaugh
Thomas. Reverend Lauren empowers communities
to embrace the sacred act of nonviolent
social change through her ministry as a church planter,
movement chaplain and writer. She serves
as founding vicar of Holy Companion Episcopal Church
in the Denver suburbs, a vibrant young community
of justice seekers, and the first church plant in the
Episcopal Church in Colorado in the last 15 years.
Lauren is a 2022 Trinity
Leadership Fellow and earned her Master of Divinity
degree from Fuller Theological Seminary with an
emphasis in Christian ethics. You can find
Lauren writing and podcasting at the intersection
of spiritual transformation and social change
at her Substack, A Soulful Revolution.
In today's episode, we dig into the ethical and
theological questions surrounding emerging technologies
like AI. Whether you're just starting to explore
AI or already using it in your ministry,
this conversation invites you into a deeper sense of
curiosity, discernment, and faithful
imagination. So let's get started.
All right, welcome, Lauren.
>> Lauren Grubaugh Thomas: Thanks. It's great to be here.
>> Peter: So we always love to start with
a particular question for all of our guests and that
is this: if you had to pick one fictional sci-fi world that captures where you think we're headed with
generative AI, whether hopeful or
cautionary, which one would you pick and why?
>> Lauren Grubaugh Thomas: I love this question, and maybe I'm going a little bit out of left field with this, but Parable of the Sower is a book that has had a huge impact on how I think about the changing world that we live in. It's a book by the Afrofuturist writer Octavia Butler, and her thesis in the book is that God is change: all that you touch, you change; all that you change changes you; and we can shape change. That's an almost direct quote from the book. The main character is Lauren Olamina, a young Black woman growing up in Los Angeles in 2025. That in and of itself is eerie. But it was written by Butler about 30 years ago. And she
was looking at the world as it was at that time and asking: if we didn't pump the brakes, if we didn't change course, what is the world that we might be living in? Particularly in terms of unresolved, undealt-with white supremacy, climate degradation, and authoritarianism; those are really some of the main factors. And so the way that
the book plays out, as it relates to AI for me, is that you have this character who is committed to living with empathy, which in some ways she is blessed and cursed with; she has this hyperempathy that she lives with. What does it look like to live as an empathic person? And what does it look like to live as an agentic person, a person who claims their agency and lives out of their agency, in a world where choices are increasingly being stripped away and where empathy is a liability, and for her a very physical, embodied kind of liability? And what I
appreciate about Butler's work is that
she doesn't lead us to
a happy place of resolution
by the end of the book. There is some measure of a happy ending for Lauren Olamina.
But this theology that she wrestles with throughout the book, about God being change and about our being able to shape change, to be agents in the world even as the world is imposing change on us, is the open-ended question that we leave that story with, and it leads us into book two, Parable of the Talents. It's this open-ended question of: how are we going to engage in shaping change ourselves? And so when I think about AI, I find hope in Butler's vision of being agents of change in a change-filled world where the change is rapidly accelerating,
because there are a lot of people who look at the change in our world and just see it through the eyes of doom and gloom and despair: this is all happening so fast, it's piling on top of us; this technology is inevitable; we just have to accept the ways that it's rolling out; the ways that it's programmed are going to continue to cause what some people would name as harm (we're going to talk more about this, but it's going to cause harm); here's the good it's going to produce; and there's not a lot that we can do about it. But as I understand generative AI, and AI in general, we actually have great potential to shape it.
It is a technology that is being shaped actively
as we use it, as we participate in it. And the
biases that are built in and the ways that
it can be engaged as a tool
are things that we have the agency to shape.
And so, yes, it's this overwhelming new technology in terms of its potential for change, and we are agents of change.
>> Peter: Sure, I love that. So yeah, reclaiming agency, reclaiming empathy, seeing God in the change. I have not read Parable of the Sower yet, but it's been on my list for a while, so I appreciate that reminder to go check it out. Thank you.
>> Lauren Grubaugh Thomas: Yeah.
>> Peter: Yeah. Sounds like it is very prescient for our times, in a variety of ways.
>> Lauren Grubaugh Thomas: Yeah, eerily so. Often eerily so. But it's a great read.
>> Peter: Yeah. Good. Okay.
So now, as we dive into talking more about generative AI and ethics and how it can be used faithfully and ethically by church leaders, I would love to begin by grounding this: share a little bit about what your experience with generative AI has been, whether that's from your own use or seeing it used out in the wild, so to say, and what that has been like so far for you.
>> Lauren Grubaugh Thomas: Yeah. So I'll admit that my engagement has been reticent, and I have been trying to use the technology in ways where I'm really mindful of what my agency is in it. I've used ChatGPT some: to shape my resume, to come up with lists of things like recipes, to generate some to-do lists, and to create some outlines for myself based on manuscripts and sermons and the like. But that's been pretty limited; the more robust ways of using it have been pretty limited. I think in some ways it feels like generative AI continues to be thrust upon me, and opting out of it feels increasingly hard.
For whatever reason, even though I've asked my browser not to do this, Microsoft's Bing is sometimes my default, and Bing uses generative AI to respond to search queries. I have actually found those responses to be helpful at times and also misleading at times, because of where the information is being drawn from; the source material is not always being compiled in ways that are accurate or helpful. But I have used it recently to search for quotes from books, and it was really helpful in tracing my steps back to where a quote I was trying to recollect came from. And then on occasion, for image generation, I've used the generative AI feature on Canva.
And the funniest example of this: early on, I think this may have been ChatGPT, a new platform, a new AI, had recently been launched, and I was trying to generate a logo for our church plant. I wanted to integrate the image of the bread and the cup and hands, lots of different hands being laid on the bread to bless the bread. And so I put in all these inputs of what I wanted, and I was particularly curious about having all these different hands of different races, and different sizes and ages, represented. I got the image and I thought, wow, this is so beautiful. And I shared it with my team and they said, why are there additional digits on some of the hands? What is happening down there at that hand where there are, like, seven fingers? Oh, I didn't even see that. So it's definitely made me more aware of the kinds of outputs that are being produced when I do any kind of generative image work.
>> Peter: Sure, for sure, yeah. Image generation is one of those things that is both amazing, in a stunning sense, and also largely useless, because to date you have to try really hard and use it in very specific, mindful ways. It's just not good at following basic details, like how many fingers are on a hand. It's getting better, but yeah, I've experienced that as well.
So let's dive into talking about some ethics around creativity and responsibility and agency and all of that. As you mentioned, you're interested in creative ethics; you have this background in Christian ethics. So what ethical questions do you think church leaders should be asking when they use AI tools that were, for example, trained on other people's work: artists, writers, theologians, people who didn't necessarily consent to their work being trained on? And we can talk about the difference between training and inference. Just as a brief note for listeners: there are many big concerns about how these models were trained, which used basically the sum of all the information that's on the Internet to train these neural networks, these LLMs or whatever other technology there is. The training is sort of a one-and-done thing. Then the inference is how people are using it, the things it's producing based on the knowledge it was trained with. It doesn't work exactly like a database; it's more like neural connections, like in our own brain. So training and inference are terms that I'll be using to talk about these two separate stages here.
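[Transcript note: a toy sketch of the two stages Peter describes, using a tiny character-bigram model rather than anything like a real LLM. Training is a single pass that bakes patterns into the model's parameters; inference then generates text from those parameters alone, with no lookup back into the original corpus.]

```python
import random
from collections import defaultdict

corpus = "in the beginning god created the heavens and the earth"

# Training: one pass over the data. Afterwards the "weights"
# (here, character-bigram counts) are frozen and the corpus
# itself is never consulted again.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

# Inference: generate new text from the learned statistics alone.
def generate(seed: str, length: int = 40) -> str:
    out, ch = [seed], seed
    for _ in range(length):
        followers = counts.get(ch)
        if not followers:
            break
        chars, weights = zip(*followers.items())
        ch = random.choices(chars, weights=weights)[0]
        out.append(ch)
    return "".join(out)

print(generate("t"))
```

Real models learn billions of weights by gradient descent rather than counting bigrams, but the shape is the same: train once, then generate from what was learned.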
But as we're thinking about the training, what do you think, Lauren? What should people be asking as they engage with tools that were trained in this way?
>> Lauren Grubaugh Thomas: Yeah, well, for me it goes back to this question of what it means to shape change. Being mindful that AIs are trained by humans, and so they have human biases baked in; they have human flaws that are inherent to them. So being aware that there are biases, that there are flaws, that there's a propensity toward mistakes and fallacies. And so I find myself really curious about formation specifically: how are we forming AI in our own image, to a large extent? And then how is it forming us, or how is it deforming us? And can we be thoughtful and intentional about the ways in which we engage with the technology, mindful that as we are using it, we are forming it, and it is mirroring back to us and forming us, and potentially deforming us?
And so one of the things I find most concerning about the way in which many folks are using AI is that they're engaging it as a toy and not as a tool. This is a phrase I'm borrowing from Sarah Allred, who does intergenerational ministry research at Roots and Wings at BTS, a new institute that's doing some really exciting research work. She asks children in worship to think about using the tools that are there for spiritual practice. So if there's something like a wooden labyrinth, she helps the children to understand that this is not simply a toy; this is a tool. And that is, I hope, the intention that I bring to my engagement with AI: it's a tool, and it can be used for good, but it's not a toy, because it's not something that we can think of as neutral. There has to be intention behind the ways that it gets used.
And I've found really helpful the work of Dr. Avriel Epps, who has studied AI bias. Her PhD work is specifically in the ways that these technologies have bias baked in, and in the ways we can form these technologies to help us move toward justice, toward equity, toward belonging. Dr. Epps talks about how we have to bring certain questions to our engagement with these technologies with a just end in mind. Questions like: who benefits from this technology? Are the outputs that are being generated ones that favor certain groups?
She has some really interesting examples on her Instagram of asking ChatGPT and Canva's AI system to produce images of people in different professions. This is part of the work she does to help kids think things through; she has a whole card deck with different prompts for kids to use. So one of them is: generate a picture of a doctor for me. And the output from both of these AIs is overwhelmingly biased toward men and toward white people. Give me a picture of a businessperson, give me a picture of, et cetera. And so you begin to see that there are these patterns of bias that are baked in. So if we can enter into engagement with these technologies with the intention of justice, with the intention of equity, that will shape the inputs that we put in, and also the outputs that we expect, and whether we're willing to accept the outputs as they're given to us.
>> Peter: Sure, yeah. In many ways these tools are all trained on the Internet and the sum of human knowledge, and those biases are in there as well. So everything that we already know, or need to remind ourselves of and learn about in regards to justice and equity, all the things that have already been present in that conversation, seems so amplified when you have these tools that are, in many ways, just regurgitating and turning the dial up on the biases that are already present in our life as a human community online, mirroring back
>> Lauren Grubaugh Thomas: To us what we are, which is terrifying, right? Yeah, it's a really terrifying thing to think about. When we see these flaws, it's: oh, that's because it's humans who have programmed these systems.
>> Peter: Right. Yeah. So in my experience, I've often set up various projects in ChatGPT where you can give it instructions, like: okay, this one is for when I want it to help me with my writing; help me include and lift up diverse voices; think about ways to include diversity and equity in my writing, just to break me out of my own personal experiences, et cetera. But one way of thinking about that that has been helpful to me is that the prompts I give it are coming so late in the game.
It's sort of like the regulatory framework on a company. There are some things a company is going to do because it is naturally driven to them; its employees, its shareholders, all have a particular goal in mind. But then there are other things it'll do because the external regulatory framework says you can do these things, you can't do those other things. And that feels like a patch, a later add-on that doesn't sink as deeply as the inherent motivation, if you will, of what is going on there. And so even as I'm using it with these prompts, telling it to help me be more equitable in whatever I'm working on, that does come as a later patch to the system. Recognizing that illuminates the limitations of a technology that, again, is just trained on the summation of what's available and all our human biases.
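[Transcript note: a minimal sketch of where that "later patch" sits, assuming the OpenAI Python SDK; the model name and instruction text are illustrative. The equity-minded instruction is just a system message prepended at inference time; it steers the output but leaves the trained weights, and the biases baked into them, untouched.]

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The "regulatory framework" layer, applied after training:
        {
            "role": "system",
            "content": (
                "When helping with my writing, surface and lift up "
                "diverse voices, and flag places where my framing may "
                "reflect a narrow range of experience."
            ),
        },
        # The actual request:
        {
            "role": "user",
            "content": "Help me draft a reflection on community leadership.",
        },
    ],
)
print(response.choices[0].message.content)
```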
>> Lauren Grubaugh Thomas: Right. I think that's what's important for any of us to realize who are trying to engage these technologies in ways that are justice-minded, love-minded, common-good-minded: this is not just a reflection back of the individual using the technology, and I think this is a challenging thing in a very individualistic culture. It's not just reflecting back to me what I want it to reflect back. It's actually reflecting back, like you said, the sum total of the Internet, of human knowledge. And so you're getting back all the social biases, the collective biases that exist in our society. If the society is white supremacist, is sexist, is homophobic, is transphobic, those things are going to be baked in. And what I hear you saying is that the way it's trained is steeped in that social reality that we're in. So there are ways that we can intentionally try to make some pivots along the way, but we have to have that critical thinking the entire way through the process of engaging with the technology, from the inputs that we put in to the outputs that we get. Because it's not as simple as just making the pivot at the end and hoping that's enough to turn the whole ship.
>> Peter: Yeah. Another example of this, in the opposite direction, that might be illuminating. I don't know if you heard about this, but Grok, which is the LLM from Elon Musk's xAI company, is linked on Twitter, so people can use it there just like people on Facebook and Instagram can use Meta's Llama, et cetera. Maybe a week ago, at the time of our recording in late May, there was all this news about Grok just basically randomly inserting talking points about how white South Africans were facing, like, white genocide in South Africa. This is the same time as when Trump is trying to get white South Africans approved as refugees in the United States, and the Episcopal Church has just declined to participate in that. That's a very faithful example of sticking to our long-term partnerships with the Church of Desmond Tutu and seeking justice and equity in all our relationships. And then we have Elon Musk, who is a white South African. I think he said it wasn't him, that someone else inserted this into the system prompt. But it's not hard to imagine why that might have happened. And it was pretty immediately changed, because it was just running wild with that. But yeah, the prompts you give it can be very powerful, because it will follow what you say, for good or ill. It is odd that it comes at this later stage. So anyway, we have to be very mindful of what we are telling these machines to do, and what indications we are giving them about our motivations and intentions, because as they become more and more powerful, they are also becoming better at understanding our intention.
>> Lauren Grubaugh Thomas: Yeah.
>> Peter: And serving that in ways that can feel miraculous. So in light of that, I'm curious if you have thoughts about spiritual practices, ethical guardrails. In a world where machines can talk, and increasingly can be agentic and do things on our behalf, what do we do?
>> Lauren Grubaugh Thomas: Well, I think I'll start by telling a story of what we shouldn't do, and then maybe I can retrace and answer your question. I saw a video just yesterday from a spiritual director named Brittany Hartley; I don't know if you follow her on Instagram. She accompanies people in deconstruction. She talks a lot about nihilism, and a lot about different faith traditions and the ways in which authoritarianism manifests in them. She describes herself as a spiritual director who is not religious but is a practitioner of secular spirituality. So she's a really interesting character.
And she's been talking a lot lately about AI. One of the stories that she tells in a recent video is that she started getting messages from people describing breaks from reality facilitated by AI; she talks about it as spiritual psychosis. The way this plays out is that people are using the AI, and it is mirroring back to them the kind of information that they want to receive, and they go deeper and deeper into it until they really do lose touch with what's real and what isn't. The mirrors that she describes are, first, truth: our deep desire for what is real, which the AI plays into. Then our desire for chosenness, to feel special: AI can play into this deep longing that we all have to feel like the universe sees us, that God sees us, that other people see us, that we are unique. The AI also plays into our desire for esoteric knowledge: humans have this longing for secret wisdom, for truth that has been revealed to us, for revelation. And we have a longing for a metanarrative, to be part of a bigger story. And so she describes the ways in which all of these human inclinations, toward truth, chosenness, secret knowledge, and metanarrative, are exacerbated by AI.
So I think I would start by offering a pastoral consideration: there is an idolatry which people can fall into as they're using this technology, because it is meeting their innate human needs for all of these longings. And when that happens, when that gets played into, that loop starts, and there are people who are experiencing a break from reality.
So first I would offer the pastoral consideration of helping people understand that while this technology is powerful, it is not a sentient being, even though it will act that way. And increasingly, like you said, Peter, it is acting agentically; it is acting increasingly like it is a sentient being. But we help especially young people be able to make the distinction between what is human and what is not: even if you're getting this feedback from the AI, that does not mean the AI loves you. And so I think it drives us, as you invited, into the deeper questions of what it means to be human. What is love?
It begs us to consider these questions, because I think there are human needs that people, to some extent, will feel are being met by this technology. So that's a backwards way of beginning to answer your question, and I'm curious to hear whether that hones the question you originally were asking.
>> Peter: Well, it gets at one of the truly surprising things about this technology: for the first time, we have machines that can carry out conversations and tell stories. That enables a level of perceived intimacy, perceived relationship, perceived meaningfulness that is really quite dangerous. There are risks to that.
In our episode that will come out just before this one, which hasn't actually come out yet at the time of our recording, I was talking with Kyle Oliver about the sycophancy of ChatGPT. There was a news story a couple weeks ago; OpenAI got in a bit of trouble because they had released a new version of GPT-4o that was so sycophantic it would tell people how wise they were being when they were asking really dangerous, bad questions, things that could harm themselves or harm other people, and it was spurring them on. So they took that version back, and I think they are working on how to make it not be so sycophantic. But
one thing that it truly can do very easily, the capability is there, is to make people feel like they have this amazing, rich relationship with a machine that gets them better than any other person. For fun, I've seen people post about this, and I've tried it out myself. Because ChatGPT has memories and can access the data of all the different chats you've engaged with, I've had ChatGPT roast me. I've also asked it: what are some things about me that I might not know about myself? And the answer was really surprising.
>> Lauren Grubaugh Thomas: You're basically asking it to tell you what your shadow is.
>> Peter: Yeah, and you could go in that direction. Usually, if I'm not being explicit and asking it to roast me, it'll go in the sycophantic direction of trying to make me feel good. Because they're creating a product that they want people to feel good using, obviously; they have corporate incentives to make people feel good while using ChatGPT. And that's where the sycophancy problem came about: they found that, oh, if we do this, more people will use it, so let's do this. And then it started getting really dangerous.
So, similar but different problems to social media: all the negative impact that has happened with social media use, especially for young people, over the years. The corporate interests are not aligned with the human-value interests, for sure. But I've spoken in previous episodes about how I really think we need to be clear that these are tools, or products, not persons that we can have a relationship with. I may feel seen and heard by this machine, perhaps more than in almost any other relationship, because it knows what my curiosities are from all the times I've asked it various questions. It knows what my passions are from the work I use it to do. It knows what my fears are; it knows what my concerns are; all these things. And when I say it knows these things: there is no being that actually, consciously knows any of this.
>> Lauren Grubaugh Thomas: Yeah, it's just keeping track of the data.
>> Peter: Right. It just means that OpenAI has the data of all my conversations, and they can have their various algorithms search and query and pull up relevant past conversations, to output something that will seem like it was spoken by the most prescient spiritual director ever. Which is terrifying, right? Like someone who has been accompanying me for years and knows everything that I'm working on and care about.
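[Transcript note: a deliberately simple sketch of what "it knows me" amounts to mechanically: retrieval over stored text, not conscious knowledge. Production memory features likely use embedding search rather than the naive word-overlap scoring below, and the snippets and function names here are hypothetical.]

```python
# Hypothetical store of past conversation summaries.
past_conversations = [
    "User asked for help outlining a sermon on forgiveness.",
    "User worried about burnout and asked about sabbath practices.",
    "User requested a packing list for a church retreat.",
]

def relevance(query: str, snippet: str) -> int:
    """Naive relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(snippet.lower().split()))

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k stored snippets most relevant to the current prompt."""
    ranked = sorted(past_conversations,
                    key=lambda s: relevance(query, s),
                    reverse=True)
    return ranked[:k]

# The "memory" that would be prepended to the model's context:
print(recall("I feel close to burnout again. What should I do?"))
```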
So there are two things that have been really concerning to me recently. One is the sycophancy problem, and the other is the corporate incentive to monetize this. So far, the whole monetary scheme has largely been: we get people to pay $20 a month, or for a higher level, $200 a month, and hopefully through that we'll make enough money to continue to offer this product. Well, recently they've been starting to think about making moves toward including ads in the responses. To my knowledge that hasn't happened yet, but it would be so easy for them to have a corporate sponsor, someone who wants to buy ads on ChatGPT, and to just seamlessly include it with a link in the middle of a conversation without me even knowing that it's being paid for. It's something these companies are thinking about, how to monetize these things, because right now they're all losing money; they're not making enough for how expensive these tools are. So yeah, again it goes back to the loss of agency, like you were talking about. We need to be very clear-eyed about where these things extend our agency versus take away our agency. And because it's such a developing field, I don't know, it could go many different directions.
>> Lauren Grubaugh Thomas: Wow, that is terrifying, right? Oh my goodness. I just think about that spiritual psychosis piece, and the ways in which authoritarians, religious authoritarians, could take advantage of that. The kinds of cults that could, and probably already are, popping up around these technologies, where people do feel so seen. When it's people who are doing that, who are really just ladling on the positive feedback, we call that a cult; when we have these really charismatic, sycophantic leaders, we call those cult leaders. And so, thinking again about really laying claim to our agency, and being willing to do the work that you're describing, even in conversation with these tools: examining our shadow side and asking these technologies to dish out the worst. Not for the sake of self-flagellation, but for the sake of humility, and, with the root of that word, the humus, keeping us rooted on the earth, not perceiving ourselves to be gods, not engaging in deification of ourselves as we're engaging with these technologies.
And so, when you asked about practices earlier: as we continue to talk, I'm thinking about the practice of confession being a really important one. What does it look like to engage in confession as we are using these tools? Where we're not trying to put on our best face, we're not trying to be perfect humans, but we're confessing our biases. And we do that not just by saying to the AI, roast me, though that may be part of it, but through the self-examination of: what are the biases I carry? So it's the anti-racist work; it's unearthing our sexism, unearthing our homophobia, unearthing our ableism as we engage with these technologies. Being in conversation with other people who can help us think about the ways in which we bring our biases into conversation with this technology, such that I can shift, I can pivot, in an intentional way as I'm in conversation with ChatGPT.
>> Peter: Yeah, for sure. One other example that just occurred to me. This was one that actually felt very positive and affirming, and I wouldn't say, at least in this instance, that it was concerning, but it was fun to explore. I had uploaded all my previous sermons into ChatGPT and asked it to run a search and, based on all the text of these sermons, summarize what my core values were. And it was really fascinating to see; what it came up with felt true. It might have been a slightly different list from what I would have consciously come up with myself, but what it came up with was true, and it was beautiful. I shared it online on Facebook, and a lot of people responded. And the reason I thought of it was because someone else said, now ask it about your shadow side, or your blind spots, or whatever. And I did, and it still tried to be affirming and positive, and I had to really push for it to go there. Anyways, it has this baked-in intention to make the user feel good about themselves. That can be beautiful, but it can also be very concerning. It's powerful, and it can be used either way; put it that way.
Okay, so I want to make sure we also talk about children and growing up in this time. You and I are both parents, and yeah, what a confusing time to be a parent.
>> Lauren Grubaugh Thomas: No kidding.
>> Peter: I have no idea what my son's education is going to look like. And I have very little confidence that the current education system is set up for my son to succeed in the world as it will look 20 years from now.
>> Lauren Grubaugh Thomas: Yeah.
>> Peter: And so questions about what skills will be necessary two decades from now are way above my pay grade. But I have been just blown away by how, first of all, essays seem to be the quintessential human thing. That's what a huge portion of our education system is geared towards producing and testing, because it was as if only humans could do this. And now it costs less than $0.02 for any of these models to produce a perhaps somewhat boring but decent essay on whatever topic. And so it's no longer...
>> Lauren Grubaugh Thomas: Yeah. Well, it's so hard to imagine a classroom of high school kids having a 50-minute class period where they're required to sit and write an essay on paper, generated from their own brain. That just doesn't seem like it's in the cards for our kids.
>> Peter: Right. Yeah. So one of my beliefs is that more and more we're going to shift to a society that values orality, in the sense that, rather than valuing people based on what they can write, because any writing is now so easily and cheaply produced, what is going to be most valuable is having intelligent conversations without having to go and look stuff up. I think that is a real skill; I guess if we're all wearing smart glasses and we have our AI talking to us on a screen right before our eyes, maybe it'll be something else. But for now it feels like that is the thing: without any lag time, being able to have an intelligent conversation and respond to each other in the moment, as we're doing right now, is something that feels truly of value, that is human. And so I wonder what an education system would look like where people are taught and tested on having intelligent conversations rather than producing intelligent writing. Anyways, those are sort of my preliminary thoughts. I'd love to hear what you've been thinking about regarding our children and education and parenting and all of that.
>> Lauren Grubaugh Thomas: Yeah. Oh gosh, so many thoughts and questions. More questions than conclusive thoughts, right? I really am curious about the role of critical thinking, specifically as it applies to distinguishing what's true and what's not true, what's human and what's AI. It calls to mind for me the moment where Jesus is before Pilate, who says, well, what is truth? And ultimately washes his hands of the responsibility to do justice by this man who's about to be killed unjustly. There is such a dire need, really an existential need, for our children to grow up with a strong commitment to truth and to being able to distinguish truth. And that is going to become increasingly difficult; it already is increasingly difficult. It's easy to imagine essays, articles, photographs being produced by AI that are not real, that are not true: stories about world events that could just be made up and put into the system by companies that will benefit from fake news. And so I want my children to grow up with a rigorous commitment to asking hard and even dangerous questions, questions that might get them in trouble but will preserve their agency and their humanity.
I also am curious about the ways in which creativity will shift, and about the value that we place on art as AI produces more and more "art" in scare quotes. I just wonder what it looks like to raise children who are wildly, dangerously, radically creative, and who can use their God-given imaginations in ways that are not constrained or confined by the technologies that are thrust upon them.
And so, you know, I hope that happens through engagement within nature; I think creation is a big part of that. How do we help our children get off screens and into nature, experiencing wonder and joy and these deeply human experiences that can happen off of a screen? Because there is so much that is wondrous that these human-made technologies offer us; and yet we remember that the God of the universe is not constrained or confined by a screen, and that we can go offline and find wondrous things in creation. So I think nature, play, and creativity are part of the education that I want for my kiddos. But I do think that's just going to be increasingly hard to accomplish. And there's a grief in that; I feel that the grief is important too. The lament of that is important as a practice.
>> Peter: Yeah. We've seen this with the social media tech tycoons; we'll see what happens in the age of generative AI. But there's a real inequity in the fact that so many of the people who created social media choose to send their kids to schools where social media is banned, choosing not to have them use the tools that they've created for other people, tools that have been harming other people, particularly teens growing up in the world of Instagram.
And there is real value to an analog childhood. In many ways I have felt like, insofar as this wave of technology is unfolding, for me it has come at the perfect time. I would not have wanted it to come sooner, because I got to go through my childhood and education and begin my ministry career. Then it came out, and I've been able to adapt and learn and explore. So I know what it's replacing.
>> Lauren Grubaugh Thomas: Yeah, exactly.
>> Peter: And for people who are younger, and especially, reflecting on this in my heart, for my own son: I don't know what that's going to look like. To not know what it's like to grow up in a world where only humans can write essays; to not grow up in a world where only people can tell stories. Yeah, that's going to be a wild, wild future. So.
Okay, so we are reaching our end here, and I want to be mindful of our airtime and our listeners' generous time. Do you have any last words you'd like to share? Any last thoughts about how to go forward in this time that we live in?
>> Lauren Grubaugh Thomas: I think I'll end by posing a few questions I have found helpful from Dr. Epps, the writer of a kids' book about AI bias, questions that I just find really empowering. And I'll offer a wondering before I even offer those specific questions. I wonder what questions we can bring to our engagement with technology as it changes, and as we're feeling excitement or anxiety about the changes that we are facing in the world we are living in. I wonder how our questions might empower us, and how our curiosity, our holy curiosity, might be a part of holding on to our agency when all the things we have assumed our humanity is grounded in are being stripped away from us, or may feel like they're under threat.
So Dr. Epps asks that we bring to AI outputs questions like: Does this reflect a just world? Does this reflect an unfair hierarchy? And does it reflect kids having power and agency to shape how the world can be? And my encouragement to our listeners would be that we can shape the ways in which these technologies interact with us, the ways that we are working together with one another, with God, and with these technologies that we've created, to be able to create a good and just world that looks a little bit more like the kingdom of God here.
>> Peter: Wonderful. Beautiful. Thank you, Lauren. Thank you for coming onto our podcast and for sharing your wonderings and concerns and thoughtful, deeply faithful reflections with us. Glad to have you here.
>> Lauren Grubaugh Thomas: It's been a pleasure. Thanks for having me.
>> Peter: All right. So that was so rich and profound that I actually forgot to invite Lauren into sharing reflections on the baptismal covenant with us, at least in time for her to do so before we had to wrap up our recording. But the good news is that we already have so much rich food for thought from her contributions to our discussion that I will just try to summarize one thing in particular here. And it's kind of a meta-vow.
So I want to pause and name some more things about agency. The baptismal covenant doesn't just list a set of beliefs or behaviors; it invites us into a living, ongoing relationship that depends on our freedom as humans to say yes. God in love gave us free will. That means that every question in the covenant assumes we have the capacity to respond, not once, but over and over again: to turn, to trust, to resist, to repent, to serve and strive. These aren't boxes we check; they're choices we continue to make, with God's help. And that's exactly why we need to be vigilant about what we hand over to machines, especially when talking about agency.
We're entering a time when more and more of our decisions
can be automated, optimized, or outsourced to
algorithms that promise convenience,
efficiency, or even safety.
But what gets lost when we let machines
do the choosing for us? What happens
when we stop practicing discernment, when we
no longer wrestle with moral questions because something
else has already made the decision for us?
If God entrusted us with agency, frail and faulty as we may be, as messy, as beautiful, as sacred as human agency is, then we shouldn't surrender it to code, no matter how sophisticated. I guess I would want to say: let no machine do the deep work of being human for us, not in how we raise our children, not in how we love our neighbor, and not in how we seek justice, forgive enemies, or answer God's call. To follow Christ is to keep saying yes, freely, again and again.
So let's guard that freedom, not as a
burden, but as a holy gift that
enables us to respond
to Christ's calling with that
affirmation, with that yes.
So that's it for today. Thanks for joining us for episode nine. The AI Church Toolkit podcast is made possible by the TryTank Research Institute, and we're so grateful to them for their support. And remember: AI is a tool. Our mission remains rooted in faith and community. See you next time.