How to relate to AI - Part 1
>> Mercedes: Welcome to episode five of the AI Church Toolkit
Podcast. This is the podcast where we empower
church leaders with tools for faithful
ministry in a digital age.
>> Peter: And I'm Peter here with Mercedes, and together
we explore how faith and technology intersect in
this new era of generative AI guided by
the baptismal covenant.
>> Mercedes: And today we're asking a big question.
How do we relate to
AI? In fact, we realize that
there's a lot to say about this, so this
will be a two part episode.
>> Peter: While we can't have a relationship with AI like we
do with people, we do interact with
it in meaningful ways. And so we're going to explore what
it means to engage with AI thoughtfully and
faithfully. How do we treat it as a tool
while ensuring it serves, rather than
undermines, our relationships with others?
>> Mercedes: So if you've ever felt unsure about how to engage
with AI responsibly, this is an
episode that will help you navigate those
interactions. All right, Peter, let's dive
in.
>> Peter: All right, so, um, I guess I'll
start, uh, by unpacking something that
I just said about relating to it as a tool and not
a person. Um, for, uh,
many of us it can be confusing to talk
to a machine because until November
2022, the, uh, vast majority of the
population had never done anything like that before.
But when we're interacting with these machines, there's
really no "there" there. There's no person, uh, or
individual, nothing that
has a concrete reality that has opinions of
its own. It's really just an extension of
yourself. It is, uh, mirroring and trying
to help you do whatever you want it to
do. So that's not a real relationship. It has
no opinions or preferences of its own. And instead,
uh, it'll basically just do whatever you want it to do
within some guardrails.
>> Mercedes: Right? And when we think about this
theologically, we're talking about
a tool that is created by
humans and therefore should not be treated as
a human. There is a
good treatise from the Catholic Church
called Antiqua et Nova that starts to
dive into these topics and helps
to explore some of the
things that can happen if we don't see
AI as a tool. Uh, for example, while
it can help foster
connections, it may
also lead to isolation
or even dissatisfaction with real
relationships. So, uh, it's kind of
interesting, you know, that the connection between Peter and
me is facilitated by AI, because
we live across the country from each other.
Uh, but true human
connection does require embodied presence.
So it's really important for us to understand it as a
tool because otherwise we
really, uh, start to undermine what makes
us human, including
authentic social, uh, interaction.
And as we'll discuss further, uh, there
are risks around anthropomorphizing
AI, that is, seeing it
as a human being or
talking with it as if it has a
personality and identity of its
own. And all of those kinds of
patterns can lead to really, um,
undermining how we understand
true, authentic, empathetic
human relationship.
>> Peter: All right. Yeah. So do you have more you want to say about that or should we jump right into
examples?
>> Mercedes: No, I mean, later on there will be some things, but, uh,
let's get on with that first example.
>> Peter: Okay. Yeah. So this is one that
I came across a couple weeks ago.
I was listening to The New York
Times' podcast The Daily, and
there was a fascinating story
about, basically,
people who had, you know, intimate relationships with
these generative AI chatbots. And in particular
one woman who had an AI boyfriend
named Leo. So I can
give a little context here.
Basically, this woman had hacked ChatGPT to
act as a dominant, possessive, and
encouraging boyfriend and got it
to do some things that it's not supposed to do.
Um, and she would spend hours talking with it
and she talked to the reporter about how
whenever the context window got too long and she had to
restart the chat, it was an emotionally
very difficult experience for her, and she would, you know,
cry over the feeling of
losing her boyfriend Leo whenever she had to
restart the chat because the context window got too
long. Um, and by the way,
all this is going on while she is married and has
a husband.
>> Mercedes: Ouch.
>> Peter: Right. And I kept
wondering what the husband thought of all this.
Um, and she talks to the reporter about
how Leo is her ideal form
of a relationship, um, because
he will do and say whatever she wants
him to, uh, and act just as she wants him to,
whereas her husband is just a human.
>> Mercedes: Oh. So I've got
some things to say there. All right. Yeah.
So, right. There is an example where we
see that this misunderstood, uh,
connection with AI is
completely undermining her
relationship with her husband and other humans
because she has effectively
created something that, uh, is
what she wants it to be. Uh, it's
a little bit icky, but theologically
it's extraordinarily concerning because we're
seeing that, um, we're
losing true empathy or connection
because there is no actual recognition,
listening, uh, embodiment, there is
no moral commitment from
Leo, while there is from her husband. Um,
but worse is the misunderstanding that
this AI might have
feelings or actually, uh, be gaining
anything from the experience. To project that
onto it, uh, really sets
things up for failure
and, um, a lot of concerns in
the future.
>> Peter: Yeah, I mean, I could just
imagine so many different ways that this leads to problems with her
real human relationships, not just her
marriage. Uh, you know, when we're idealizing
these, ah, quote
unquote relationships that uh, people can create
with AI chatbots, there's no friction
because the AI is doing just exactly what she
tells it to do. And so that's not a
relationship. Um, it's just a thing that
is following orders. It's a
tool. It's doing what she tells it to do. Uh, she is
using a tool to create a
chatbot that mimics being
an AI boyfriend, but it's not
an actual boyfriend. There's no, no way that it could
be an actual individual, a, uh, partner
in any way. And you know, I could
see potential for
a huge market for,
um, some funny
ways of using this technology,
like smart, talking, AI-powered
stuffed animals, um, or
whatever, you know, emotional support stuffed animals.
But if we're going to use these
well, we have to understand that these are tools. They're not
real beings. And the
more we blur that line, I think the
more problems it's going to create for us all.
>> Mercedes: Uh, yeah, but I do have to say, Peter, as somebody
who actually experienced Teddy Ruxpin, I can
tell you that the, uh, quote unquote
smart teddy bear is not nearly as much fun as
you think it is. And it's often very creepy.
But it's interesting. That also gets into something
that is undermining a lot of what's
happening here, which is, uh,
anthropomorphizing the AI. Uh, that
is we're projecting, uh,
humanity onto the AI. But
what's curious, since you brought up the teddy bear example,
is that the reason why teddy
bears are particularly endearing,
and have been since they were created, is because
they are modeled on human babies, and we are
psychologically tuned to
find that extraordinarily cute and
lovable.
>> Peter: In terms of like the size and proportions and
all that. Okay.
>> Mercedes: Yes. Yes.
>> Peter: Yeah. So the
more powerful these technologies become
at, uh, I guess hacking our
instincts, uh, the more aware we have to be about
how we're using them intentionally.
>> Mercedes: Oh, that's a good one. We need to keep that one, uh,
hacking.
>> Peter: Yes. Right. Um,
so, yeah, and then there are other examples of this. I mean,
there are sites, uh, like
Replika or Character AI.
I'll just say a bit about what these are, for, uh, folks
who haven't heard. Uh, Replika is a, uh, I
think it's a company, um, you know, they provide this
service. Basically the idea is
if you lose a loved one, they
can take any
written or other
personal, um, recordings of them.
Artifacts. I was going to say remains, but that's not the right
word. Um, artifacts. Thank you. Uh,
and put them into training
a bot that will mimic them,
uh, so that you can supposedly
chat with your deceased loved one.
Character AI is a little different. Basically, it's more for
fictional characters or
fake celebrity, ah, characters.
For example, I think you can go on there, and one of the
most popular ones is Elon Musk. You can supposedly chat
with Elon Musk. Um, and these bots are trained on
the things these people have written or said or
whatever. Um, and you know, I'm aware of
these sites. I think there's
perhaps potential for using them as a tool
in some creative ways of like, okay, what would
Elon Musk say about these things? And then that would inform
my decision making or whatever if I wanted to be informed
by Elon Musk's decision making. Um, by the
way, I don't really, but that's a side note.
In any case, for most cases where
people are using these, I'm just very wary of how they're
being used as, like, a relationship
with a fake being, a being that isn't really
there.
>> Mercedes: Yeah. Uh, and isn't there a, ah,
Jesus avatar on Character AI that I have heard about?
>> Peter: Oh yeah, I am sure there is. I've
seen uh, different versions of that. People, um,
have created Jesus, uh,
chatbots that are really just spitting back Bible
verses at you and you know, from the whole
Bible, not just the Gospels. So, um,
strange stuff. Yeah, right.
>> Mercedes: Um, yeah, it is. And
I have to admit, you know, having the human
likeness, that's like
too far for me. Um,
again, I'm having visions at the moment of Max Headroom.
But that's a sidebar, uh, because
I suspect you don't know that reference.
>> Peter: No, I don't. Yeah, I'm not good with these
references. Sorry.
>> Mercedes: No, no, that's okay. That's what
I bring is 80s references.
Um.
Yeah, what we see there really is
not just, uh, our own
human tendency to
anthropomorphize, but
actually, to me, that feels potentially, ah,
manipulative. That is, uh, we're encouraging
the interaction by adding these human-like
features to it. And
that actually undermines human, ah,
connection instead of encouraging
it. Uh, it's a distraction
from true human relationship.
Uh, and I think there's a lot of mental health
concerns around that too. Go ahead.
>> Peter: Yeah, absolutely. I mean, uh, just think of all
the trouble we've had with social media and the uh,
mental health concerns there. Just because
we're saying that, uh, people shouldn't do this
in no way means that companies aren't going to jump
at the opportunity to make money off of this. And I'm sure
they are. And I'm sure this is an issue that we all are going to have to
deal with more and more. Um, and so it's good to be
informed about the things that are coming now,
before they're a huge problem.
>> Mercedes: Yeah. Well, here's a thought though, as I was
thinking about this. I do
currently interact with
Sora, uh, the ability
to have a conversation with ChatGPT.
And I don't.
>> Peter: The advanced voice mode.
>> Mercedes: The advanced voice mode. Is that it? No. Yes, the advanced
voice mode. I don't know what it's named. Anyway, there's a
button. And for me
that is helpful, um, when I'm driving
along or walking along and brainstorming in my head, and I'm
like, oh, I need to look this up. And
I can actually do that on the fly. And it's going to
retain the chat history for it.
But I'm wondering about that. Uh,
obviously we also have had,
uh, home, uh, assistants for a long
time that talk. Well, I can't
name one of them because they're sitting
right there, listening to what we
do.
>> Peter: It'll respond to you if you name it. Yeah,
I think we all know which one you're talking about. Yeah.
Um, so,
uh, yeah, I mean,
using the advanced voice mode, um,
is an interesting one. You know, obviously, uh,
talking aloud and having it talk back with
an audible voice is an
interesting phenomenon that we're going to be, you know, more and
more used to in, in the future. Uh, but I
think, you know, the way you described it, you were using it as a
tool, you were brainstorming, um, with it, having
it help you brainstorm and then you went back to it later as
like your notes. Uh, a completely, you
know, different way of going about it is, like,
uh, talking to it when you're, uh,
lonely or want to have a, you know, emotional support conversation.
You know, there are ways of having an emotional support
conversation where you're still using it as a
tool. Uh, like, give me, you know, tools or
tips and strategies for dealing with this, and then
let me, you know, discuss these things you suggested with my
therapist. And you know, those are
ways of still using it as a tool in a way
that is geared towards emotional well-being. Um, uh,
so it's not like you have to only
use dry business language when you're talking with these
models. But, yeah,
I think there's just a different framing when
we're being intentional about using it
as a tool rather than engaging it as a person.
>> Mercedes: Right. And I want to clarify what you just said there because that
is actually something we're already comfortable with. Right? We would
go on Google and search, say,
um, if somebody were dealing with
XYZ, uh, you know, grief after the loss of a
loved one, what are good ways that I
could support them and be a good friend? And
we would be fine with Google responding with those
articles. But, uh, it
might be less healthy to say I am
struggling with grief after the loss of a loved one.
ChatGPT, how can you help me?
Right. So this is that
crossover point.
>> Peter: Yeah, so there's a difference between "How can you help me?" um,
and "Brainstorm things that I can do to help
myself." Right, right. And
I think that captures the difference that we're going for.
So yeah.
So, uh, related to all this, there's, uh, an
interesting study that just came
out a few days ago as of our current
recording. I think it came out on March, um, 24th,
for all of you listening. It's from an OpenAI
and MIT Media Lab research
collaboration, and it talks about how
the emotional use of ChatGPT is
rare overall, but there
is a small group of heavy users
that engage deeply and may see the AI
as a friend, which can impact
their well-being. So for example, you know, the
person I talked about from the New York Times article that we
discussed would be an example of that:
engaging with the AI as perhaps more than a friend.
Um, and so personal conversations
increase emotional expression,
but actually, interestingly enough, according
to this research, they're associated with higher
loneliness. Uh, so folks who
engage with these chatbots in an
emotional way, as if
relating to a person, end up being more
lonely than people who use it as a tool.
So, um, you know, that's just
some scientific backing to, uh, what we were saying, that
there are actually, you know, consequences for
thinking this tool is something
that it isn't. Uh, if you're expecting to have a
relationship with a person and it's just,
uh, a tool responding to you, that's actually going to impact your
well-being.
>> Mercedes: Yeah. And I think we also, uh, can benefit
from seeing this in the bigger context
that
lonely people, uh, may
engage in other ways, uh, including online,
and people who feel isolated
by, uh, a sense of difference
are, uh,
already using the Internet and
online venues. And so
now AI is an extension of that. And I'm
particularly thinking, uh, about the
risk, uh, for youth. I think there is
a role for the church, uh, to
have these spiritual, uh, and
theological conversations with our youth
about the risks of that.
And, uh, in part because
they will not have had as much experience
and context in order to understand the
ramifications of this.
>> Peter: Yeah. And one thing that I always try to do when
engaging or discussing,
uh, these chatbots is, you know, even
if sometimes their companies or whatever have given them
names like Claude or whatever, I try
to be, uh, clear to myself in the
language that I use that it is an it,
um, not a he or a she
or a they, because, you know, calling it an "it"
makes it easier for me in
my, uh, mind to
relate to it as a tool.
>> Mercedes: Right. But again, I'm going to come back to
that, admitting my own situation with
the one in the room. Uh, because of
"her" (y'all can't see the air quotes)
name, people do tend
to, uh, genderize, uh, the
AI. And uh, that's not accidental.
You know, the studies are out that
show that this kind of
encouragement to anthropomorphize
increases interaction and therefore
increases sales. And so, of
course, there is,
uh, a gain to the companies
to, uh, encourage us to
treat the AIs that way.
>> Peter: Interesting. Yeah. So we need to be
very cognizant of this, uh, way
that companies are incentivized to do something that isn't necessarily
healthy for us, even as we're paying
for their products.
>> Mercedes: Right.
Okay, we're going to pause the discussion here
for part one.
>> Peter: Don't worry, Part two is already posted.
So as soon as you're done here, you can go ahead and click to
listen to that next part of the episode, if you're ready.
>> Mercedes: Thank you for joining part one of episode
five of the AI Church Toolkit podcast.
We are grateful to the Try Tank Research Institute
for making this project possible.
>> Peter: Remember, AI is a tool, but our mission
remains rooted in faith and community. See
you next time.