Take2

Allison Cohen

00:00-48:37

All Rights Reserved

Transcription

Some Mormon transhumanists believe that it is their responsibility to develop the technology to bring everyone back from the dead and make everyone immortal. They believe that once this technology is developed, God can then do the rest. There is an intersection between Mormonism and transhumanism due to the belief in a materialist metaphysics, in which everything, including the spirit, is made of material. This belief leaves room for technology to play a role in achieving goals such as digital resurrection and immortality. Mormon transhumanists see these goals as both technological and theological projects.

Some Mormon transhumanists believe that maybe it's our job to figure out how to resurrect everyone. Maybe God is waiting for us to develop the technology to make everyone immortal and also bring everyone back from the dead. So once we've done that, then God can do all the heaven stuff that comes after that.

Welcome to The World We're Building, a podcast on a mission to infuse the AI hype cycle with hope, agency and critical thinking. I'm Ali, and I'll be your host as we hear from an eclectic group of guests, ranging from annotators to activists and designers to policymakers, each pursuing radical work that challenges dominant narratives about AI development. Today we'll be hearing from Kyle Roth. Kyle is a PhD student at the University of Montreal studying natural language processing. He's currently working on improving RAG-based LLMs with functionality like retrieval and reasoning; we'll get more into what that means a little bit later. Kyle was born and raised as a Mormon in the United States. Interestingly, Mormonism seems to be a religion that is embracing certain AI capabilities. In his paper, "Speaking as from the Dust: Ideologies of AI and Digital Resurrection in Mormon Culture,"
Kyle describes the reasons for optimism in the Mormon community, specifically around newfound opportunities for things like digital resurrection spoken about among certain members of the Church of Jesus Christ of Latter-day Saints. Today we'll be hearing about various AI projects that are underway in the Mormon community, what they can tell us about the intersection of AI and theology, and what the implications of all this might be for those unfamiliar with Mormonism. But before we dive in, let's start the discussion, as we always do, by situating ourselves so that the listener has a better idea of the context for today's conversation, acknowledging points of reference and properly valuing experiential forms of knowing. So, Kyle, before we jump in, I'm hoping you can start by helping us situate your knowledge and experience. Can you talk a little bit about some of the intersectional elements of your identity or personal experiences that you see as being highly connected to your professional trajectory, your work or your outlook right now?

I was born and raised in the northwestern United States, not right in the area where there are lots of Mormons or members of the church, but peripheral to that area. My parents were born and raised Mormon as well. I would say that my upbringing was very embedded in a standard American Mormon culture. We were Mormon within a world that wasn't Mormon. If you live in Utah, let's say, you're surrounded by members of the church. Everybody is, and it's a notable thing if you're not a member. Whereas I grew up in a standard Western American context where, yes, there were a good number of Mormons, but not a ton. So I guess I grew up seeing myself as different from the rest of the world in that way, because we had slightly different practices or beliefs than other Christians around us.
One important piece of context is that the Mormon church is a relatively conservative community or culture, and the community I grew up in was interested in maintaining the status quo in a lot of ways. But I grew up very committed both to faith and to intellectualism. I believed things like, for example, that God is a scientist: God works in the world, and he does his things by using natural laws. So God is not an exception to science; he does science, in a sense. To me, my faith and my intellectualism didn't contradict one another. That was a really important part of my upbringing, I think.

For the folks in the audience that may not know very much about Mormonism in general, can you tell us about some of the fundamental values, beliefs and maybe traditions that are practiced within the Mormon community?

Yeah, so Mormonism is a branch of Christianity that began about 200 years ago in upstate New York. There was a prophet named Joseph Smith who claimed that God came to him and told him that none of the existing Christian churches were correct and that he needed to start his own church. He also received an ancient record written on golden plates, a record of ancient people's interactions with God thousands of years ago here in the Americas. He received these plates and then translated them through the power of God, and the Book of Mormon is the result of that. It's a scriptural basis for a lot of the differences that arise between Mormonism and the rest of Christianity. Mormons believe in the Bible, but they also believe in the Book of Mormon and the stories that are there. And in addition to the Book of Mormon, Joseph Smith revealed a lot of other new and different doctrines. And they're within the Mormon church, or within the Mormon community.
Sorry, I should say the Mormon community, because there are actually many different churches under the Mormon umbrella that all branched out from these initial revelations of Joseph Smith. There are, of course, differences between all of these churches. The main church, the largest church, is called the Church of Jesus Christ of Latter-day Saints, and that's the one that's based in Utah. But one of the important concepts across all of Mormonism is ongoing revelation. In a lot of Christianity, we say that the Bible is closed: with the last prophets, that was all the revelation that we needed, and now we're waiting for the last days. Whereas Joseph Smith said God is speaking to us now. God can continue to speak to us. There are prophets today. The leader of the LDS church in Utah is considered to be the prophet, just like Abraham or Moses were prophets back in their day. So there's an expansive idea of what revelation can do, because God can always give us more truths; the list of truths is not complete. And that has resulted in an expansive theology on what heaven looks like and what the goal of being sent to earth is.

I guess people who are listening to this podcast are likely wondering: OK, this is interesting, but it's a lot about theology. How does this actually connect with technology? I'm curious about some of the developments that you've seen in the Mormon community. I don't know if this is adopted in a widespread way across the churches or whether this is unique to one of the church communities that you were describing, but there seems to be an interest in, and a potential adoption of, AI. So I'm curious how you see that happening and to what extent it's widespread across the Mormon community at large.

Yeah, sure.
So just generally, across all of Mormonism, there's what we would call a materialist metaphysics, which means that all the things that exist are material. In Christianity, we have the sense of a body and a spirit, and that those are two separate things; that's called metaphysical dualism. But within Mormonism, there's this belief that the spirit is made of a more refined kind of matter, that everything is material, including spirit. God has a body, for example. He's not just an ether or a presence; he's a physical being just like the rest of us. And heaven is also material, and it exists somewhere in the universe. So everything being material leaves a lot open for what technology can do, because our modern concept of technology is that it deals with the material. I'd say there are two aspects to this. There's the belief that AI may be used as a tool for the church's purposes on Earth, like bringing more people into the religion and things like that. But then there's also a more theoretical subset of Mormon believers who are interested in bigger uses of technology, more theological uses of technology. Like digital resurrection.

Can you maybe say more about digital resurrection and these concepts around transhumanism that I've heard you speak about in the past? I think this concept is really interesting and sort of revealing as well.

So transhumanism is a philosophy that we should be very open to ideas about using technology to move beyond what it means to be human currently. It would be really easy to tie a lot of sci-fi tropes into transhumanism: uploading your brain into a computer would fall under this umbrella, or curing all diseases so that we can live hundreds of years or maybe even forever.
So achieving immortality: some of these things that we used to think of as magical, or as dreams, or that had a religious nature, transhumanists say, why not take on these goals as a technological project? What's really interesting is that there's actually an intersection between Mormonism and transhumanism, and I think that's because of this materialist metaphysics that Mormonism has. Because technology operates on material, and God operates by natural laws, God could be using technology to do things like resurrecting everyone at the end of the world. All these things that are part of God's plan may be done using systems that we would understand as technology today. And transhumanists are also interested in doing some of those same things. So there's this whole group of people called Mormon transhumanists. They associate themselves with the ideals of transhumanism and also with the theological basis, or at least the philosophical basis, of Mormonism. Some Mormon transhumanists believe that maybe it's our job to figure out how to resurrect everyone. Maybe God is waiting for us to develop the technology to make everyone immortal and also bring everyone back from the dead. So once we've done that, then God can do all the heaven stuff that comes after that. There are these ideas, and it's not very orthodox; there are a lot of different kinds of beliefs within Mormon transhumanism, and varying levels of belief in the standard Mormon tenets. But broadly, Mormon transhumanism is interesting because it's this intersection of the Mormon metaphysics with a passion for technology, using it to accomplish some of these almost religious goals.

So maybe we can go back a little bit into how AI tools are being used in this context.
Yeah, maybe we can start with a case study that's being built called the Wilford Woodruff AI Learning Experience. I'm hoping you can tell us more about this.

Yeah, yeah. So the Wilford Woodruff AI Learning Experience is something that was developed as kind of a proof of concept. For a little bit of context, Wilford Woodruff was the fourth prophet of the Mormon church that's based in Utah. So there was Joseph Smith, and then a few others, and then Wilford Woodruff. He was very prolific with his journaling, for reasons that we can get into in a minute. So we have a lot of records: his personal journal, as well as documents of meetings that he was in, and other records from around the time. There's a whole foundation dedicated to collecting and managing all of these papers relevant to his life. And they created this system. I believe they fine-tuned a large language model on a subset of these historical documents, as well as some facts about his life outside of the documents. The experience is a game built on the Unity engine where you walk into Wilford Woodruff's house, you meet him, and you can sit down and have a conversation with him about whatever you like. The messages that you type in the game are used somehow to prompt the language model to produce responses as if Wilford Woodruff were talking to you. The point is that it's trained on his personal documents and on records related to his life, so it has a lot of information about his life. And the point of the experience is to maybe get you more familiar with this prophet. You know, this was still more than 100 years ago.

And I know we've been seeing some developments in chatbots that are connected to religion in a way that helps educate people about that religion.
But what I found unique is that it seems like it's not just a tool for proselytizing. It seems to also be a tool for revelation through AI. So if, for example, the AI model spits out something that never existed in his original documents, something that actually speaks to an entirely new revelation, how would that be treated? Is there room for that? Is that something that people are excited by?

Yeah. So one of the things that piqued our interest in this, and made it kind of the centerpiece for our work, is the fact that on their website they have some testimonials from test users. They were so grateful to be listening to the testimony of a prophet, and they felt the spirit confirming the truth of what the prophet was talking about. To make that make sense to a non-Mormon audience, it's important to know that within Mormonism there's this concept of global revelation, revelation for the whole church or for everybody in the world, and then personal revelation. The prophet's job is to provide this global revelation that applies to everyone. Personal revelation is something that you can receive from God when you pray, when you seek answers. And Mormons believe that the spirit is what communicates that message. So when a Mormon says, "I felt the spirit when I was hearing this person speak," or "when I was interacting with this system," it means they felt that God was confirming to them the truth of what was being said. That would count as personal revelation under the Mormon framework. And so we found it very interesting that at least some Mormons are comfortable using this AI system to interact with something that is trying to be like a prophet, not the current prophet, but an old prophet.

And in their minds, does this achieve the goal that you had sort of described about transhumanism?
Is Wilford Woodruff transcending death in a certain way and being reconnected? I guess there's really no body; there's maybe an element of soul. How would you describe how transhumanists might be seeing and interpreting this type of revelation, for example?

Yeah, to give a little bit of context, transhumanists have this concept we've talked about of mind upload, where you maybe take a brain, scan it somehow, and then reproduce it on a computer. And then you've placed this person, or whatever makes this person a person, onto a computer. They also have this concept of indirect mind upload, which is where maybe we don't have the brain anymore because the person is already dead, or maybe it's decomposed somewhat, or maybe we don't have a perfect scan. So we use other sources of information, like journals or videos or the memories of other people who talked to this person, to reconstruct the original person that was lost. That's called indirect mind upload. And this is very compatible with the Mormon conception of resurrection. I don't think that Mormons would see this as literally Wilford Woodruff coming back from the dead, even in a sense. I think it's just more plausible to them that this could be a source for communication with God, because this man was a prophet, because his documents were journals from a life in which this man had a special connection with God. The result is that if we have this AI system built on these documents, then there's a possibility that maybe you could also commune with God through it, because this person was a prophet when they were alive. It does the thing that a prophet does, which is connecting you with God and giving you the chance for communication with him.
And it's also doing it in maybe the way that is unique to Wilford Woodruff, because Mormons acknowledge that every prophet had different themes that were important to them, or different things that they really stressed. So by connecting with this Wilford Woodruff system, maybe you're reconnecting with God in the way that Wilford Woodruff connected people with God, rather than the way that the current prophet connects people with God.

It's interesting to think that there is some return to an authentic connection with the prophet, when we know that this technology is so cutting edge and there's nothing nostalgic about engaging a large language model.

Yes, it's an interesting juxtaposition, right? The documents are very nostalgic for Mormons; Mormons are very interested in old documents from their own history. But then the AI is so new. One thing is very old and nostalgic, and the other is bleeding edge and very progressive, I guess, in a certain sense.

To me, there is obviously a very exciting benefit to that, because people feel compelled to have very specific religious revelations that they might not have otherwise experienced. But to me, the glaring concern that comes along with that is maybe a lack of critical thinking about how the technology was built, how these models were trained, maybe even the capacity of these models to hallucinate in ways that might not have actually been the intention of Wilford Woodruff. So when you think about responsible AI technology development, are there certain aspects of this that you think should be communicated to the users of this technology, so that they have a sufficient amount of critical thought around their interaction with the tool, or even to make sure that the tool's outputs remain in line with the intended use?

I can give an account from a Mormon perspective of how a Mormon might want to solve this problem, and also the technical perspective.
And I don't think that a Mormon would disagree with the technical perspective; I think they would just want to use both. From a technical perspective, there's only so much that we can do to try to limit the kinds of outputs that are created, and to verify and check them. I don't think that's anywhere near a solved problem. Sometimes there's an illusion of trust that comes when you have a source attached to every output. If you can say, "here's the source document for what this model is saying right now," then people tend to maybe not check. They could check, but it would take them extra time, because they have to parse through the document themselves and decide if the output aligns with what the document is saying. So sometimes that just creates the illusion of trust rather than real verification. But an important piece of this for Mormons is that, like I mentioned before, the spirit is supposed to confirm the truth of the words that are being spoken or read. In fact, the Mormon church, the Utah-based Mormon church, has come out with some official guidelines for using AI within a Mormon context, within a religious context. These are guidelines that it has said it will use in its own work, when it comes to applications for proselytizing or whatever kind of media it would like to produce in the future. One of the things it recommends is to always be listening for the spirit to confirm the truth of what is being said. So for those test users of the Wilford Woodruff system who gave testimonials, it's not a stretch for them to try to listen for the spirit to confirm what an AI system is saying.
That's not that new of an idea, because they're already used to this concept: maybe they're at church and someone says something that they're not sure about, and they need to be in tune with the spirit in order to decide if what that person said was correct. They're already used to doing this when they're reading and listening. So I think Mormons are going to generally follow this principle of listening for the spirit to confirm the words, wherever they're coming from.

I'm currently learning more about the challenges in data annotation and red teaming, and there's such a lack of emphasis placed on the skills and expertise of the people that are verifying these models. In the context you're describing, you not only have to have Mormons who are very well versed in and excited by tools like this looking at the outputs, but also Mormons who can connect with and listen to the spirit to check whether these model outputs are in line with what Wilford Woodruff would likely be thinking and saying. It just speaks to a premium level of data annotation. In the computer science community, at least, it's a domain that seems so undervalued, but it almost seems like in Mormon culture this would become very important work, and it makes me wonder just how differently annotators, if they would even call them that, would be treated.

That's a really interesting question, because, leaving AI aside for a second, the Mormon church has always been very concerned with falsehoods. The church was involved in a lot of drama and controversy very early on in its existence, and so it very early on had detractors and people who had joined and then left.
So almost immediately upon its creation, it was very concerned with making sure that people got the right sources of information and that the things that were said were correct, and it's always been preoccupied with this. As the church matured over time, it started establishing a system for who gets to decide official doctrines or beliefs, who gets to decide whether a miracle was performed by the power of God or by the power of the devil, all of this. There's a hierarchy of truth; there's definitely this concept of an official stamp on things. So from an annotation perspective, I would imagine that for anything the official church does using AI, they're very heavily focused on making sure that things are in line with the official doctrines, the official beliefs, the official stance of the church.

You were talking about the importance of truth and having centralized sources of information and truth. And we know with artificial intelligence that there are all kinds of concerns around misinformation and the propagation of misinformation, whether through algorithms or misleading text or false images. Is there any way that the Mormon church, or the Mormon community at large, is dealing specifically with the fear and risks around misinformation?

I think a lot of it comes down to the approaches that I've already described. One thing that the church's AI guidelines say is that when a piece of content is produced by AI and it's not obvious, they'll mark it as such, so that it's clear: hey, I should think critically, this isn't necessarily fully approved by the church in the way that other materials produced by the church are. Other than that, I don't think there are really any uniquely Mormon cultural approaches to solving the misinformation problem that is coming about because of AI.
You also mentioned a concept earlier that I think would be helpful to define, which was retrieval, and maybe even RAG-based systems. And then I'd like to jump a little bit more into your research specifically. So do you mind helping us understand those terms better?

Yes. So in a normal context, retrieving something means you're picking it up from where it is. In computer science, information retrieval is the study of coming up with algorithms that help you pick the right pieces of data out of a big collection of data. And by "right," we mean relevant to a query. Everyone in the world who uses the Internet does retrieval every day when they do a Google search, because they're putting in a piece of text, a small search query, and then Google is returning what it thinks is the most relevant set of documents for what they're looking for. So that's retrieval. Within the modern AI context, we use retrieval pretty often. As I'm sure many people listening to this podcast know, you may have interacted with ChatGPT, asked a question, and gotten an answer that you later found out was wrong, or so poorly contextualized that it was basically wrong. A really common strategy for helping with this a little bit is this: say you have a big collection of documents, let's say all of Wikipedia. Before you take your question to ChatGPT, you take your question to Wikipedia, put it in the search bar, and find maybe the five most relevant documents. Then you copy and paste the contents of those documents into the prompt of ChatGPT, put your question at the bottom, and say: based on these documents, answer the question. Because when you're prompting these language models, they don't access the Internet.
They don't have a store of documents that they're referring to, so any information they happen to be right about is just statistical luck. If ChatGPT happens to be able to output the right year for the birth of Abraham Lincoln, let's say, that's just because it happened to see, enough times in its training data, that that year was statistically correlated with the name Abraham Lincoln. Whereas if you're using RAG, retrieval-augmented generation, you're doing some retrieval first. Maybe you take a query, "when was Abraham Lincoln born?", search over Wikipedia, find that document, put it into the prompt and say: based on this document about Abraham Lincoln, tell me when he was born. Then ChatGPT is going to be right way more often. Sometimes it may pick the wrong year from the document, because there are several different years listed, but on something obvious like that, it's almost guaranteed to get the right answer.

So would you describe the difference between the two systems this way: RAG-based large language models are actually able to identify the specific correct information from the documents and then summarize it back to the user, whereas the foundational large language model is learning the connection between words and just outputting information that seems to flow by virtue of the connections that exist between words?

Yeah, that's right. If you're just doing pure prompting, what's happening is that it happens to be outputting "truth," quote unquote, only because truth was statistically more likely in the data set it was trained on.
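The retrieve-then-prompt workflow Kyle describes can be sketched in a few lines of Python. This is a minimal illustration, not the Wilford Woodruff system's actual implementation: the bag-of-words cosine scoring stands in for a real search engine, and the final `llm.generate` call at the end is a hypothetical placeholder, not a real API.

```python
from collections import Counter
import math

def similarity(query: str, doc: str) -> float:
    """Cosine similarity between bag-of-words vectors (a toy stand-in for real search)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda doc: similarity(query, doc), reverse=True)[:k]

def build_rag_prompt(question: str, docs: list[str]) -> str:
    """Paste the retrieved documents above the question, RAG-style."""
    context = "\n\n".join(f"Document {i}:\n{doc}" for i, doc in enumerate(docs, 1))
    return f"{context}\n\nBased only on the documents above, answer:\n{question}"

corpus = [
    "Abraham Lincoln was born on February 12, 1809, in Kentucky.",
    "The vibraphone is a percussion instrument similar to the marimba.",
    "Lincoln served as the 16th president of the United States.",
]
question = "When was Abraham Lincoln born?"
prompt = build_rag_prompt(question, retrieve(question, corpus, k=1))
print(prompt)
# The assembled prompt would then go to a language model, e.g.
#   answer = llm.generate(prompt)   # placeholder, not a real API
```

The model answers from the pasted context rather than from whatever was statistically likely in its training data, which is the whole point of the RAG pattern.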
In the case of the Wilford Woodruff project, it seems like what they did is they took a general-purpose language model and then fine-tuned it, which means doing further training on these documents relevant to Wilford Woodruff, so that it was more likely to be right about the details of Wilford Woodruff's life, and so that it would sound a lot more like Wilford Woodruff. It would still benefit from doing some retrieval, to make sure that we at least have sources attached to any claims it's making.

I've heard from you in the past about how closely you've worked with a philosophy PhD in exploring some of these themes, making sure that your work benefits from that interdisciplinary collaboration. So I'm hoping you can share how the perspective of a philosophy PhD has influenced you in the course of your work.

So I think, broadly, not even just in academia but in Western culture in general, we have this really strict dichotomy between STEM and not-STEM, or technical and non-technical. And I think that's really unfortunate, because it leads to some really impoverished philosophical ideas on the technical side, and then a severe lack of detailed understanding of the systems we're building on the humanities side. There are very few people who cross that divide, and I think whatever crossing over can happen between those two is really valuable, because technical people don't realize our biases. For example, a lot of technical people think that technology is neutral in a political sense. The problem for me is not that people have those preferences, because people can have different political preferences; the problem is that we don't question those assumptions. And it's all baked into how the academic study of technical subjects goes.
And I think that more technical people need to think about that.

I'm wondering if you have any specific example of how that difference in thinking might have challenged you, or anything that you were quite sure of before bringing in the philosophical perspective.

Yeah. From a technical perspective, looking at RAG-based systems or LLMs or whatever, it's like we already know what these systems are for: producing text. From a technical perspective, that's the purpose of a language model, to do completions, to take a prompt and then produce more text. That's its purpose. So when people train a new language model that's bigger or better in some way, and it has a lower perplexity score on some benchmark, we say: oh, this model produces text better than another model. That assumes that having lower perplexity on this data set has some correlation with a notion of good text, and it ignores the situations that these systems are placed in. So this study was really useful for me; it opened my mind to the idea that where these systems are placed really matters. People have created all these chatbots, like an Abraham Lincoln chatbot. Technically, they're taking a bunch of documents about Abraham Lincoln and then fine-tuning, or doing some kind of RAG or whatever, to create this chatbot, and they're doing the same thing when they create this Wilford Woodruff AI system. But the sociotechnical role this system plays is different, because this is the Mormon prophet in a Mormon context. So it's important that scientists know what their stuff is being used for. Technical people in general just aren't aware of these really interesting edge cases in how these systems are being used, where they have extra social valence. They do things in the world that are not what you expect
When you're building the system. So for me, I really try to push back against the concept of scaling because that is so prominent in the AI circle. The notion that I can scale sort of indefinitely is exciting because it means that the opportunity for profit is massive. But in reality, people are interacting with these tools in entirely specific and unique ways. And the way that we're doing something like benchmarking, as you were saying, which is evaluating how the models are performing, might be a completely biased or misleading metric when considering that people are, you know, it doesn't take into account those specific applications and use cases for which that metric is not a very good indicator of success or uptake. Yeah, I have something to say about that. I've heard language models described as agency amplifiers that they they take, you know, the agency that everyone has to accomplish things in the world and then just amplify it. That's how general we're being when we talk about like what these AI systems can do. There's sort of these everything machines. And so when something is an everything machine, it has to be good at everything. And being good at everything means you have to measure everything that it's going to be possibly used for. And I think that as technical people, we have a very, very narrow sense for what these systems will be used to accomplish socially. And so I don't think it's possible to measure in any meaningful way. From my perspective, the way to solve that is by having very narrow scoping exercises where you look at an application in a very particular space, domain, religion, and try to connect with community members there and understand from their perspective what makes this valuable. And acknowledging that that is an iterative process that you have to do over time so that you continue capturing what makes this technology relevant and useful to an evolving community. 
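Editor's note: perplexity, the benchmark score Kyle mentioned earlier, is computed from the probabilities a model assigns to the tokens of an evaluation text. The numbers below are hypothetical, just to show what "lower perplexity" actually measures, and why it says nothing by itself about how the text lands socially.

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """Perplexity is exp of the average negative log-probability the
    model assigned to each token in the evaluation text. Lower means
    the model found the text less surprising."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Hypothetical per-token probabilities from two models on the same text.
model_a = [0.5, 0.25, 0.5, 0.125]  # less confident predictions
model_b = [0.8, 0.5, 0.9, 0.6]     # more confident predictions

# Model B "wins" on this benchmark, but that only means it predicts
# this dataset well, not that its output is "good text" in any
# particular social context.
print(round(perplexity(model_a), 2), round(perplexity(model_b), 2))  # 3.36 1.47
```

A model that perfectly predicted every token would score exactly 1.0; the metric is entirely relative to the chosen evaluation dataset, which is the assumption Kyle is questioning.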
Are those some of the best practices you would see for dealing with this problem, that measurement is quite challenging for a product at scale? Or is there something else you've been thinking about? Yeah, I mean, the tension is that we're seeing that models trained on lots of different tasks end up doing better at each of those tasks individually. Machine translation is a really good example: we can train a model just to translate English to French, but if we also train it to translate English to Spanish, it gets better at translating English to French. So the tension here is that I agree we should evaluate AI systems in their particular context in the world and based on what they're being used for, but from a technical perspective we have a tendency to want to train on everything, because that gives these other benefits, right. I think the solution, in my opinion, is to judge based on the situation that they're placed in. You had said earlier that it's so important to have representatives from the humanities embedded in your projects, to see things in a different way and challenge the way computer scientists might approach this work. I'm curious whether you think all projects should have a philosopher or an ethicist, or whether a representative from any domain within the humanities would be helpful. How would you describe that type of humanities expertise, and what domain fits the bill, I guess? Yeah, I think within STEM, let's say broadly, we have this sense that other people don't understand the technology, and so we sort of gatekeep having a say or an influence. So I think there are two things. First, we need to stop reifying the technical knowledge that we have.
That will make it easier for humanities people to feel comfortable learning about this stuff, so they don't feel like it's incomprehensible magic that they can't have an influence on. And then also, I think it's important, whatever you're working on, to try to find people in a field of study that studies it from a social perspective, because at the end of the day, everything we create is social. But bringing those people in is not enough. I think we also need to start having more respect for that field of study, or that way of thinking about things, and then take it on ourselves and do a bit more of that thinking ourselves, which comes from collaborating with humanities experts, but also just from being aware that we don't know and wanting to learn more. So I want to transition us to a conversation about the future. You know, we're seeing a lot of religious chatbots coming out, like Bible AI and Quran GPT. Is there a connection between being someone who is spiritual or religious and feelings about this really novel technology? And of course, this is going to come from your perspective. I know this is a very broad question, but do you see any trends you think are interesting that we should be paying attention to? I mean, I can answer this most easily from a Mormon perspective. First, North American Mormons are generally pretty cool with technology. They think it's awesome and that it's going to solve a lot of their problems. They're aware of the risks, because they've been told about some risks by the media, but they're generally, as we have been before, cautiously pro-technology. I guess the goal in my mind, from a religious perspective, is that the point is to connect people in a new way to the text.
Whereas with these embodied, personalized chatbots, the goal is to connect you with these people, and people are a bit messier than a piece of paper. Especially with a historical figure, our perception of what that person was like is fraught with our current historical understanding of what people were back then. We like to think that if we just focus on the documents, if we train enough on just the documents from his life, then maybe it will be correct. But there's always this view from the future being placed on these people. I'm hoping people will see that maybe these personalized ones are a little bit problematic, that they may not fully represent the person as they would want to be represented, and that it's not really a resurrection of that person in any sense. It's more just a thing being done now to look back historically. And interpretations of scripture change all the time, over time. Yeah, I think you raise a really interesting point about this out-of-distribution aspect, where people are asking very modern-day questions of figures who only know the lifestyles, considerations, and concerns of a very particular time, which I find really interesting. And maybe that connects to my next question: in a worst-case scenario, how do you see this playing out? What would be your biggest fears and concerns about the adoption of this technology? From a religious context? Yeah, in a worst-case scenario, I would be worried that people will take the outputs as truth, or that maybe they acknowledge, yeah, sometimes it could be wrong, but the experience is helping me learn more. And so I think the bad news would be if these systems end up influencing people's experience of religion, or manipulating it in some way.
The person who trained this model has a lot of power over which documents were used, and when they measure performance for themselves, they're the ones deciding what to measure. So when people who are unaware of that bias end up using the system, it can influence a person's religious experience or religious beliefs in a way that might be a little bit normative, because the Wilford Woodruff experience was created by a single person, and someone else is not going to come along and make a competing one. I think it would be very difficult for that to happen, because that person would also need access to the documents, and there's not much of a drive to create something that already exists. So there will be just one, and now we have just one view of what Wilford Woodruff was like. Whereas, just as an individual is unique, so too might be the AI technology that replicates that individual's experience. You're not going to have three Wilford Woodruff experiences that are all slightly different. I mean, I think it's also interesting that you could sort of shop around theologically: who do I agree with most, or who most supports the lifestyle I choose to live? But yes, I could see how that would be really challenging, and it might actually create almost a digital reformation. I'm thinking in the context of Judaism, where different rabbis interpret the Torah very differently and people might subscribe to the interpretations of one rabbi rather than another. Imagine chatbots that may even represent the same rabbi or the same prophet, but with fractured, diverging opinions, and what that would create in the community in terms of diverging interpretations. And I don't know whether that would also fracture the trust in any one chatbot. But on a more optimistic note, what is the best-case scenario you could imagine for this technology?
I think one of the good things about it is that it could help people engage more with their own history. Mormonism has a unique history, and this might give a non-specialist audience the chance to interact with some very old documents they wouldn't have encountered otherwise, because people don't just go scrolling through the Wilford Woodruff papers online unless they're way into it. Having something like this might expose the material to a broader audience, leading people to actually engage with the ideas of some of the older prophets in the Mormon church. And more broadly, outside the Mormon perspective, if this connects people with the documents of their religion in a way that makes them think new ideas and engage with old ideas, then I think that's a source of diversity, or a source of new concepts, or a new level of engagement with one's own culture, I guess. And what a great way to wrap up this conversation. Kyle, this was phenomenal. Thank you so much. Where can people find your work if they'd like to learn more from you? You can find me on my website. It's a little hard to find because it drops a couple of letters from my name: it's kylrth.com. That's my personal website, and I put what I'm working on there, along with random blog posts that don't get turned into papers. That would be the best way to reach me; my contact info is there. Thank you so much, Kyle. And to our audience, if you'd like to ask us any questions, please reach out to us at theworldwearebuilding@gmail.com. We'll see you in our next episode.
