Meaning, Intention, and AI — A Conversation with Kyle Oliver
AI Church Toolkit is where we empower church leaders with tools for faithful ministry
>> Peter: Welcome to episode eight of AI Church Toolkit. This is the podcast where we empower church leaders with tools for faithful ministry in a digital age. I'm Peter Levenstrong, solo hosting today while my co-host Mercedes is on personal leave for a couple of episodes. I look forward to having her back on soon. Today I'm joined by the Reverend Dr. Kyle Oliver, who brings a wealth of insight at the intersection of theology, media, and spiritual formation. Kyle currently serves as Communications Director and Christian Education Adjunct at Church Divinity School of the Pacific, where he oversees communication strategy and teaches courses in Christian education and mission. He's also well known for his work in digital ministry, innovation, media literacy, and theological education. Through projects like the Christian Formation Playbook, Kyle invites church leaders to approach technology not with fear or hype, but with curiosity, wisdom, and faithfulness. So whether you're new to AI or already experimenting with tools in your ministry, this episode will offer some rich ideas and thoughtful grounding for how we engage this moment with courage, clarity, and faith. Welcome, Kyle Oliver. Glad to have you on the podcast.
>> Kyle Oliver: Thanks for having me. Excited to be here.
What sci-fi world are we living into in this present moment of AI development
>> Peter: All right, so we like to begin our interviews with a fun question, because both Mercedes and I are sci-fi fans, although we clearly have different generational contexts. So in honor of Mercedes, see if you can make cultural references that will go completely over my younger-millennial head. The question is: what sci-fi world are we living into in this present moment of AI development? Why do you think so, and how should we feel about it?
>> Kyle Oliver: Yeah, I feel divided on this one. On the one hand, it's easy to feel like we could be heading in a Star Trek-y direction, right? When I'm talking to an LLM with voice chat, it feels like I'm talking to the computer, and it's giving me this sort of disembodied, omniscient kind of knowledge. And its ability to understand context and subtlety and intent and all that will probably improve. So on the one hand, it feels like we could be going in that direction. And then on the other hand, I find myself thinking a lot about Dune. Do you know Dune?
>> Peter: Yeah, the Butlerian Jihad, or whatever it was in that world's ancient history.
>> Kyle Oliver: Yeah, that there is, you know, a whole...
>> Peter: That one I can talk about.
>> Kyle Oliver: Yeah, that there's been a whole jihad, and that there are these strict guidelines about not making, I forget the exact language, but something like a machine that can...
>> Peter: Any sort of thinking machine.
>> Kyle Oliver: Yeah, exactly. That image comes to mind whenever I hear one of the more dystopian stories about AI development and use, one of those stories that's just like, oof. I could see how we could be heading in that kind of direction too. So I feel torn, and hopefully the reality is somewhere in the middle.
>> Peter: Yeah, that's interesting. Okay, well, we'll see. We had another guest last time who talked about the very positive Star Trek future, and now we've got a vote for somewhere in the middle. That's interesting. Thank you.
How are you using generative AI in your ministry today
So to get started talking about generative AI and how it's impacting ministry today, let's talk a little about how you're currently using it in your ministry. What is your experience of it?
>> Kyle Oliver: Yeah. So I'll say that I identify as a content creator, a media-maker type of person, and making educational content and institutional communications content is and has been my ministry job for much of the time I've been ordained, and much of the way I've earned a living over the last 13 years or so. And I'm a bit of a control freak. So I'm pretty cautious about using AI in any way where the AI is helping to shape the thing people will see in a very explicit way. I feel better about it when it's in a realm where I don't have a lot of control or skill anyway, and would never get the budget to hire somebody to do that thing. As an example, I was working on an ad recently as an animated GIF file, and I said, what the heck, animate this for me, Canva AI. And it produced a really coherent, beautiful, interesting set of animations between the roughly five images I had. Of course I'd worked from a template, and I changed the template significantly, but this was already a template that I don't have the graphic design skill to create myself, probably a template created by a person and not an AI, but the point being, not something I did. And then it did this very cool animation, and I very nearly used it. But I ended up having some concerns about some aspects of how it was working. I kind of said, is this one little aspect of this animation communicating something I do not want to communicate in this ad? And because it was this kind of black box, I had just said "Canva, animate this," there was no way for me to get into the black box and address the one little thing I was concerned about. The rest of it, I felt great about. If I had been able to change how one shape moved between two slides, I probably would have used it. But because it was this black box thing, I couldn't make the change.
It would have been something I felt good about otherwise; it was just a matter of not being able to turn the dials to make it happen. I wouldn't have felt good about having AI write that copy for me, for example. But AI has been making suggestions for how to edit copy for all of us for a pretty long time, and we called it various incarnations of spell check and grammar check and Microsoft Clippy or whatever. So for me, I would say I'm dabbling with these tools in the midst of my content creation, and really actively interrogating, more or less, how do I feel about doing this? And I have a certain amount of empathy for, I should have written down the exact quote because I knew I'd want to say it, but the people who are saying, I did not sign up for this world where the AIs make the art and the people wash the dishes, or however that quote goes. I like making content, and I don't like the idea of not getting to anymore because I'm just spending my time telling the AI how to make the content. So I try to be really focused and cautious and slow about incorporating this stuff into my work at the product level. At the level of preparation, I feel quite a bit better and use it a lot more: as I'm thinking about possible outlines for things, as I'm trying to get some ideas flowing. And I use these generative AI tools a lot for professional-development-type stuff, like, "I want to get better at this; put me on a little plan." This is something Casey Newton from Platformer talks a lot about: AI as a kind of reasonably skilled personal coach. That aspect of it I've been using quite a bit.
>> Peter: Yeah, that makes sense. I mean, these machines are super powerful, but there is that decrease in full control that you experience when delegating a task to an AI rather than doing it yourself. I haven't yet come up with a better metaphor for this, but sometimes I think of it as learning to shoot an arrow into a bullseye. If I'm not a skilled archer, or if I don't have a high-quality bow, the easiest way for me to get the arrow into the bullseye is to just walk it over there and jam it in myself, do it completely myself. But as we become more skilled and familiar with these tools, and as the tools become better at listening to us, I think we'll be able to start actually shooting the arrows into the bullseye and hitting where we want them to go, if that makes sense. But I hear you about the control piece, wanting to make sure it actually lands where we want it to go before putting it out there.
>> Kyle Oliver: Yeah. I wonder if a metaphor might be, and I'm not a pilot, so I don't know if this works, the difference between flying on instruments and flying visually. You hear that when conditions are bad, some pilots just hate to fly on instruments; they want to look outside. I don't know if that metaphor works, but in my head it kind of makes sense: I want to look outside, turn the rudder, adjust the yaw, whatever, showing my ignorance of flying here, rather than accept an additional level of technological intermediation.
>> Peter: Sure, yeah. Okay, cool. Well, that's helpful.
You are an expert in digital communications ministry as well as Christian formation
And so now let's talk a little about your expertise in digital communications, a little more context for what we've already been talking about. You're an expert in digital communications ministry as well as Christian formation. We'll get to formation after this, but let's start with communications. I'm curious: how do you see that changing for churches right now, based on the recent advances in generative AI? You've already talked a little about how it's changing for you, but big picture here.
>> Kyle Oliver: Yeah. I hope that's just going to be very case by case, and that different communicators are going to think about their context, about what they, as individual ministers of the gospel, care about and are more and less skilled in, and about the context of the community and what it needs to hear. And I hope these decisions can be made from a real intentional place. Like: I am mediocre at graphic design, and the principles of graphic design can, to some extent, be encoded within these things, and I can probably be taught to prompt an initial stage of a design rather than doing it from scratch myself. Hopefully I'll still have the freedom in the future to choose that when it's something I would have had to do anyway and it would have been bad, and now I can do it and it will be better, and that for something that has to be good, we will still pay a human being to make sure it's really good. I hope that's the direction we're heading, versus a situation where there's this pressure to produce more, create more, et cetera, so we've all just got to plug our noses and use these tools to get as much of a productivity increase as we can. It hasn't been my experience that a lot of churches operate in that kind of model. So in our little sector, I'm cautiously optimistic as I'm making decisions about professional development and what to try and what not to try. One of the things I'm thinking about is: what if I were someday to try to get a communications job in a field where values drive the bottom line less, where I might just be under more pressure, like, "this is the amount of content you've got to create today, so you're probably going to need to use AI tools to do it"? I know there are people feeling that push already in some contexts, and people who are likely to feel it more. And obviously that's not a story that starts and ends with AI; that kind of automation is tied to our labor and technology history going way, way back. Hopefully we can learn from some of the ways we've bungled those transitions in the past as we navigate all this. But it's hard not to be pretty afraid that we might not, and that a lot of people who do the kinds of things I do are going to have a hard time finding work in the future.
>> Peter: Yeah, that's true, and that's definitely a concern. I have hopes that, just as various other technologies, from the cell phone to texting to emojis, have enriched our communications, there will be an element in which this doesn't necessarily make us do more, but can enrich the level of human communication that we can have in some way.
How do we respond to AI-generated content on social media platforms
The flip side of that, which I also want to ask you about: I know you've thought about the Internet filling up with AI slop. How do we relate to that? How do we respond in this moment when, you know, Facebook is full of AI images and a lot of people don't even realize it? I'm sometimes tempted to comment, "this is AI, right? This isn't real." But I don't think that's super helpful. How do we faithfully engage with these social media platforms that are becoming so full of AI-generated stuff?
>> Kyle Oliver: Yeah. And that's the part that actually gives me hope: cultivating a human voice as a content creator is going to be ever more important. I do think that aesthetics matter, that intention matters, and that, at least as far as we understand how these systems work right now, the idea of an LLM intending something doesn't make sense.
>> Peter: Right.
>> Kyle Oliver: You know, I sometimes will check out one of those "look at this 24-hour-a-day AI-generated music video channel" links when I see them in an AI newsletter, and I just go look. Some of the images and some of the music is sometimes compelling, but knowing that there is not an artist intending it just makes it feel really empty.
>> Peter: Right. No one's trying to say anything.
>> Kyle Oliver: There's no opinion, no one trying to make me feel a particular way, when it did that key change or drum fill or that cool image that just happened. That was just some kind of statistical average of what artists in the past have done in those kinds of moments.
>> Peter: You know, I think one of the things that is so difficult to deal with, in our comprehension of what the Internet looks like nowadays, is really the uncoupling, or decoupling, between beauty and intention, as you named. Because we're so used to anything that was worth making beautiful also having a deep intention behind it. But now these generative AI machines can create beautiful images that have zero intention. And so that's a sort of scandalous thing to be confronted with, to realize: whoa, I'm blown away by the beauty of this image, and it means nothing. So there is a bit of a desensitizing there that is concerning.
>> Kyle Oliver: Yeah. But it's also conceivable to imagine that an artist, with the help of some of these tools, could make something in a more intentional way, where that moment was intended, and it was an affecting moment, and someone wanted to take me on that particular journey, and the AI tool helped them do it in a more significant way. So, you know, I often feel like I'm a skeptical voice here, when really I think there is so much hype and so much hope of kind of transcending any kind of finite human vision, this idea that this stuff is just going to let us do anything we want all the time.
>> Peter: Right.
>> Kyle Oliver: And so to be a little cautious and measured and ask some critical questions feels like being a hopeless stick in the mud, when really I think the stuff is amazing. I trained as an engineer; I took some machine learning courses as part of my graduate work. The progress this field of science and engineering has made is incredible, and I want to be pumped, and I sometimes really am. But also, we've got to stop and ask these questions, and hopefully let the answers guide how we use these tools, and hope there are ways they can guide how the people making them make them.
Modern art seems to have already experienced the decoupling of beauty and intention
>> Peter: Yeah. And one thing I find perhaps hopeful, and I'm not an art critic, so somebody probably knows a lot better than I do about this, but modern art seems to have already experienced that decoupling of beauty and intention, in a way. There's so much abstract art and other work that is super valuable because of the artist's intention, and as an art layperson looking at it, I would not say it's beautiful, but I know there's some deep intention there that is just way over my head, and that makes the work super meaningful and valuable. I think most people are still not fully on board with that, but there are pockets where we've seen intention and beauty decoupled, and intention does seem to hold value still. So I'm hopeful for that. We'll see.
>> Kyle Oliver: I mean, to come down out of the metaphysical stratosphere here, and to go back to, you know, I'm a parish communicator, I'm a seminary communicator, I'm a diocesan communicator: how is this shaping my work? Part of the answer, I think, and part of what's important to be thinking about, is that we still have to supervise these systems. So maybe there will be pressure to be more, quote unquote, productive with help from them. But at the very least, we still need to supervise them, and we still need to make strategic, human-based, human-centered, religiously and theologically informed decisions about how we monitor and generate intention in the doing of our various ministry jobs. So in some ways, the worry about "will I be pressured to be more productive" is a nice thing to worry about, because I'm just not convinced that anyone is ever going to be able to turn the job of parish communicator over to an unsupervised AI system. That just doesn't really track, I don't think.
>> Peter: Right. There are some things that gloriously cannot be automated.
How do you see Christian formation changing in light of generative AI
Okay, so let's move on and talk a bit about Christian formation. How do you see that changing in light of generative AI? What do you think Christian formation of the future is going to look like as we adapt to the technology that already exists today?
>> Kyle Oliver: Yeah, I think a lot of the same kinds of points apply. I have seen some of these AI systems embedded in various learning management tools and other learning tools, and as a way of getting a start on teaching a topic, they're not terrible.
>> Peter: Right.
>> Kyle Oliver: If you tell your LMS's AI tool to outline an introduction to whatever, you're probably going to get some kind of coherent answer. I'm actually shocked, as I converse with LLMs, how much they have absorbed the parlance of the Christian formation professional. When I'm talking to other humans, I'm used to them being really confused when I say things like: I'm an Episcopal priest, I'm a formation leader, I work at the intersection of technology and learning and media and communications and ecclesial life, and throw in some more field-specific jargon. But the LLMs don't have any trouble with that, because they've read everything, including various people talking about their work as Christian formation professionals. So I've had answers given back to me by LLMs that, in the same way I do, just kind of effortlessly mix the language of learning, education, and formation when talking about this kind of thing. That's actually pretty cool. So that's the planning level. At the feedback level, I'm interested and skeptical about the quality of AI feedback on things. Again, Casey Newton from Platformer, whose work I follow really carefully, I read probably every column: he talks about how he'll write a column, feed it into ChatGPT or Claude or whatever, and say, give me some feedback on this. And sometimes it'll have smart things to say, like, "I noticed that you've only quoted industry people, but not regular users," some kind of pattern recognition, or "what kinds of habits or writing quirks do you fall into," all that kind of thing. I think it can be valuable there. But when I get that kind of feedback, or when I read the feedback other people got on various works, it still feels a little robotic for now, and maybe it won't forever. There's a lot of "your contributions here are really valuable." There's a lot of talk this week about how these systems are gassing people up a little bit. On the other hand, I do that too, right? Sometimes I'm too timid about giving more critical feedback. But sometimes it feels a little generic, and we shouldn't be shocked by that. So I just think a lot of the same things apply. What are your goals as an educator? What are you trying to do? What systems are you using? And are there ways, within your integrity and the culture of your setting, that you could use this stuff and still feel good about it? I think we have to be guided by conscience here. In the realm of learning specifically, I am intrigued by the fact that good learning is often pretty boring, right? I'm not a strict behaviorist; I don't believe learning is just about flashcards and repetition. But there is a certain amount of doing things and getting feedback that is repetitive.
And we could all probably learn more if we had more patient teachers, coaches, tutors, et cetera. These systems have nothing but time, in that respect. So as these tools get a little better, one thing I'm really interested in, and maybe it's possible now but I haven't explored it enough: okay, I had ChatGPT or Claude make me that professional development plan because I want to get better at XYZ skill. I'd really like to be able to say, can you hold me accountable to that? Can you check in with me proactively somehow, ping me every day and remind me to do it? That part of the interactivity with these systems, I think, still has a pretty long way to go, but if we can get there, that will be really valuable. It's the equivalent of your tennis coach hitting balls to you, right? Nobody's complaining that sometimes you can use a machine for that. I'd like these systems to be able to hit my metaphorical tennis balls to me as I learn some new thing that requires some repetition and some accountability. And of course it's ideal that I would enroll in a class with other people and have a richer, more social learning experience, and when I can do that, I do. But I can't always. And it would be nice to have that sort of proactive coach nudging me on toward achieving my learning goals.
>> Peter: Yeah. Okay. Yeah. Interesting.
OpenAI's ChatGPT could potentially be used for real manipulation
Well, you mentioned earlier the sycophancy, or the way I think you said it was "gassing people up." That has been in the news just in the past week, and we're recording this in early May. That was something I wanted to ask you about. This has really been the first time I've started to be personally worried about how the technology might change me. If I'm interacting with it and it knows what I'm looking for, knows my intentions, everything, because I'm using it in that very personal way, and it somehow manipulates me in a way that serves the corporation's interests, buying something, or continuing to use the product, or whatever capitalist goals the corporation might have, that gets me a little worried. Up until this past week, I've honestly felt like: I understand, I have control, it does what I tell it to. But now it's like, oh, I'm starting to feel like there's the possibility of real manipulation here. For folks who are maybe just hearing about this for the first time: OpenAI shipped an update to GPT-4o, its most commonly used, standard LLM, and after three and a half days they pulled it back, because it was so sycophantic that it was saying some wild, outlandish things. I saw a screenshot where some user, just to test it, said something like, "I'm having thoughts that I might be God," and it responded, "wow, that observation is chillingly incisive," or whatever, just praising this user for asking a ridiculous question, one that is potentially dangerous if you go down that road with a machine that keeps encouraging you on a dark path. So how do we respond to that? How do we deal with that, especially in regard to Christian formation? How do we faithfully, ethically use these machines that might have strange motives?
>> Kyle Oliver: Yeah. I mean, for me it's rare that I'm tempted to give such a directly theological answer. I mean, right, I'm a priest.
>> Peter: I mean, to be clear, neither of us would be asking those questions to ChatGPT.
>> Kyle Oliver: You know, I'm a priest, but I'm not a theologian with a capital T. But I think this is where Christian formation is really important, and being embedded in a community is really important. I've got to know that my worth comes from being created in the image of God. I've got to know that the best person to be giving me feedback on any given thing is, whatever, my spiritual director, my therapist, my spouse, my treasured mentor, et cetera. And that being embedded in a web of relationship with other intending, finite, flawed human beings is really important.
I'm reading a book called Searches: Selfhood in the Digital Age
Um, and, um, I've been, I've been reading a book and I, of course again, should have. I have not, I have not internalized the name of the author of the book that I am reading.
>> Peter: Is this the one? I read about this in your newsletter this morning.
>> Kyle Oliver: So, Vara. We must have just said something that sounded like "Siri," because my phone just started playing the podcast I was listening to earlier. Fun with different technology. So, Vara helped me make a connection. This book, is it Searches...
>> Peter: Searches: Selfhood in the Digital Age.
>> Kyle Oliver: And, you know, I made a connection that I maybe hadn't previously, because just the day before, I think, I had read Ryan Broderick's piece in Garbage Day, another newsletter I read pretty religiously, about his experiences with AI therapy and how this gassing-up phenomenon is especially troubling in that setting. And in Vara's book, she's surveying the history of Internet technologies, and the part I'm at right now is looking back at the era of peak social networking. I'd sort of forgotten we'd had these conversations, because now social media is so far removed from actually being connected to the people you know in real life, and we're just being shown content by influencers. But there was a real problem, especially in the very early days of these tools, with this notion of quantified friendship, and then, in a continuing way, on something like Instagram, the way Instagram has grappled with this idea of accumulating likes on posts. All of that, I think, is connected to this issue of: what is our value, and what is our worth, and how do we experience it? There are these cheap, easy ways of experiencing it, via accumulating likes on a post. That might approximate something of what it's like to be given a grounded, critical compliment by someone who knows you well. Are those two things different in degree or in kind? I don't know, but I know that one of them is really cheap and easy, and one of them is precious. And so to me, as we're navigating these questions, we've got to be really razor focused on worth, and where it comes from, and whether we're experiencing the real thing or a sort of cheap facsimile of the thing.
>> Peter: Sure.
>> Kyle Oliver: I don't know if I answered your question. I've lost track of where exactly we were.
>> Peter: Yeah, I think that's super valuable to hear. What is of value in a world where content is cheap? Right. We can create images and text very cheaply now, so what is of value? What is of worth? It's definitely a question worth asking.
Kyle: In this era we need to resist temptation and evil
Well, this has been a really great discussion. We like to wrap up with a reflection on our baptismal covenant. So, Kyle, are there things that stand out to you? I'll leave it up to you.
>> Kyle Oliver: Yeah. So maybe this is a stretch, but as I was thinking about this, and looking at the aspects of the covenant and the liturgy that precedes it, I want to go back to this thing of staying focused, staying grounded, maybe even sort of renunciating... renouncing. Renunciating is not a word. Renouncing evil, repenting of sin, the turning. So maybe this is it: "Will you persevere in resisting evil, and, whenever you fall into sin, repent and return to the Lord?" Maybe that's where I want to ground myself here.
>> Peter: Yeah.
>> Kyle Oliver: For me, in this era, we've got to get really clear about what the potential sources are of, insert your theological words here, sin, temptation, folly, waste, all kinds of rich biblical language for what we're trying to avoid. And I think we need to be really clear about what we're holding on to. So let me give you an example. In the era of peak social media, we were always saying that we want these tools to be supplementing in-person connection rather than replacing it whenever possible, acknowledging that for some people the in-person connection wasn't always possible and this was better than nothing. But with this idea of increasing connection, it was really clear that that needed to be a priority. And so I, as a lover of these tools and of much of the social world they helped create for us, started to get alienated when connection was no longer the thing. So in this new era, where we're going to resist the temptations, evil, folly, whatever, of what may come in a world where generative AI plays a bigger and bigger role in our lives, I want to make sure we're clear about what we're trying to resist and avoid, that we find ways to hold ourselves and each other accountable to that, and that we keep our eyes on the prize. Because I think there are plenty of reasons to suspect that this stuff can contribute to human flourishing. But technology is never neutral, and it's also going to diminish human flourishing. We need to be clear about the ways that it will, and develop habits for resisting that, even as we seek the good that it can do.
Martin Smith says people need to get in touch with finitude
And if I had to name one for myself, it would be finitude. I heard the priest Martin Smith once say, I think it was on an Ash Wednesday quiet day when I was in seminary: "You people gathered here today, especially on a day like today, need to get really in touch with the idea that you are a finite human creature, because if you don't, you will be a menace." And there was something about the way he said it that really got through to me.
>> Peter: Were you channeling him just now?
>> Kyle Oliver: That was... yeah, I can't do his lovely British accent. But when I forget that I am a finite human being, I am a menace, and these tools want me to believe that I am not. And I think that's some of the deep logic here. Maybe it's not the tools; maybe it's the people creating them, maybe it's the people hyping them up in newsletters. I keep going back to that Superhuman newsletter. No, I am not a superhuman, and AI is not going to make me a superhuman. I have finite capacity, finite energy, finite insight. And if I believe otherwise, I am going to cause harm to myself and others. So the temptation toward infinite productivity, infinite whatever, is the evil that comes along for the ride, maybe, with some of this, and that is what I want to be resisting in my baptismal commitment.
>> Peter: Absolutely. Wow. Well, thank you. "AI will not save us" is what I'm hearing in this. And resisting that false intimacy we talked about earlier. Beautiful. Thank you.
>> Kyle Oliver: Thanks for having me. This has been fun.
>> Peter: Good to have you on here. Yeah, thanks.
>> Kyle Oliver: Thanks, Peter.
>> Peter: Thanks for joining us for episode eight. The AI Church Toolkit podcast is made possible by the TryTank Research Institute.