Artificial Intelligence through the Lens of Literature | OpenAI Sora is the tech in every dystopia

What if you bring in a video of your alibi ...and somebody says, "Well, that could be AI-generated!"

If I think about it too long I’m going to spiral.


This past week, OpenAI told us about Sora, a text-to-video generative model. This isn't the first time we've seen something like this, but for a lot of us, it's the first time we've seen it be this good.

Right now it's in testing, but Sora seems like something we're going to be able to use, much like we use Midjourney (the AI image generator), within the next few months. My first thought when I saw it was that this is just absolutely amazing.

Because my first inclination with generative video would be to make Night at the Museum a real thing. Imagine if students could have actual conversations with historical figures in real time...not just watch a video.


Additionally, people who don't have animation skills at all, like me, could make visual stories using AI. For example, you could make a cartoon story with an AI version of your own child and put them as the main character. Or, if you were reading a book, as soon as you finished a chapter, you could put it through AI and it could visually show you what you just read. 

As a writer, I wonder if I could put my chapters through AI and see what it looks like. That would be so helpful in trying to figure out what details I'm missing in a scene and what my dialogue actually looks and sounds like when "real people" say it. 


There's just so many cool uses for it.


If you've seen any of my videos about AI, you know that I really like to use ChatGPT as a sounding board, particularly for writing projects. I've had it try to write a novel, I've had it write fanfiction, and I even had it help me figure out some ideas for this article.


And in some of our favorite books, we see the potential that AI has to improve human life. In the Scythe trilogy, we see that people can literally be brought back to life because of AI medical advancements. In Feed, we see people able to automatically tap into the internet and have all the world's knowledge imported into their brains (scarily similar to the whole "mind link" thing Elon Musk is working on, but we'll table that for a different day's discussion).

But here's the thing: all of these books show AI's potential in their initial worldbuilding. Once you get into the story, you see how it all goes downhill.


In fact, there are so many warnings about artificial intelligence in literature that it's hard to ignore the parallels when you see them happening in real life. So, I wanted to look at what those warnings are, what the potential of AI is, and what cool things AI does in literature.

Warnings of AI

First, I want to start with the warnings we get about heavy AI integration into human society in literature.


First and foremost, there's the potential of dehumanization, where we lose an essential piece of what it means to be human. In a lot of books with AI, there's a recurring theme that if we rely too heavily on it, we're going to lose the ability to have empathy for other people, and we're not going to understand the importance of human connection anymore because we'll have AI instead.


In Feed by M.T. Anderson, the main characters are genuinely stupid because they know they don't have to learn anything; they're connected to the internet through a physical link in their brains. They have this understanding that anytime they need an answer, it will appear in their minds automatically. But the problem is that their feed only shows them what they want to see, which ends up being entertainment and shopping. Not very different from our current social media feeds.


So in reality, these kids should have unlimited learning, but because their feed doesn't show them any of that, they've stopped learning entirely. Yet learning new things is one of the most human things to do, and we see the implications of that lack of learning further down the line in the Scythe trilogy.


In the trilogy, there is a massive truth that the humans need to know. And the thing is, it's easily available as public record...but only if people go looking for it. The AI itself, the Thunderhead, can't just tell them because of its own restrictions. In this case, the AI is relying on people to go looking for answers, but no one does, because they know the AI will take care of them.


They've basically stopped asking why... which is another very human trait. In fact, it's one of the first things you do as a kid: The sky is blue. Why? You can't have chocolate for dinner. Why? (Well, I still ask that question.)

Ethical Concerns

There's also an underlying theme here: what are we actually willing to give up when it comes to our humanity?


Here's another way of putting it: What are we willing to sacrifice for the convenience that things like AI can give us?


You could say we're already facing that question, specifically when it comes to social media and its algorithms, which obviously already have some AI integration.

Obviously, authors writing novels decades ago probably couldn't picture what AI looks like today, but those ideas (not asking why anymore, not learning anymore, never striving for anything) go back to books that explore technology in general and how it can cause us to lose our humanity and autonomy.


In books like 1984, where surveillance and technology have gotten so intense that human life is being boiled down to its simplest form and simplest language, it becomes a major ethical conundrum.


There are usually a lot of questions in these books about consciousness, free will, and, more recently, the rights of artificial beings. In A Beautifully Foolish Endeavor (the second book of the An Absolutely Remarkable Thing duology), the main character ends up asking the AI what gives it the right to make choices that she doesn't agree with.

And the AI says the reason it made the choice (and it feels like it has the right to do so) is because it can make that choice.


It had enough power to set events like that into motion, and that was enough of a license. Now, I know that a lot of AI right now has "roadblocks." It just won't do certain things, even if you ask, because it has pre-determined boundaries. But it is completely possible to jailbreak things like ChatGPT, and I'm sure there are ways to jailbreak Midjourney and other tools like that, at which point the AI could do some things that maybe we don't want it to do.

And, in my opinion, that is a present danger, because AI can end up overriding human oversight. For example, in Scythe, by the time we see the main characters, almost the entire world trusts the AI so much that they've put it in the position of a government (or God), and they trust it to make decisions for them. This is because they know their own human intuition is fallible, while the AI knows all possible outcomes; they trust it to be more ethical than they are because humans have biases and emotions, whereas the AI can make more logical decisions.

When you begin the trilogy, quite frankly, the AI is doing a pretty good job of governing everyone. But is it actually ethical for us to put an AI into a position with that much responsibility? I don't know about that. And obviously Neal Shusterman explores that topic later by showing us what happens when the AI decides that humanity is wrong, but I'll save you from the spoilers.

Recreating People Dead or Alive

 Another haunting part of the Scythe trilogy is seeing how AI could integrate into real life to make it so that people never die.


And I'm not talking about people being "deadish" and then being revived. No, even if someone is gleaned, their loved ones can still talk to them, because that person's entire consciousness has been uploaded to the AI. In turn, the AI is able to create a video of that person and input their consciousness, so it's like talking to a real person on Zoom.


This is where things like Sora could come in. If you give an AI enough video of a person, the AI could just recreate the person. You would be able to hear their voice, see their expressions, and have new conversations that you didn't have with them before they passed. 

And that's what happens in Thunderhead. One of our characters is really struggling with grief over losing a friend, so they take this route to run from their grief by making a new version of the person that they lost. (If somebody did that in real life, that would be so concerning!) 


But almost more concerning would be if the person was still alive. If you had the capabilities of Sora, and you had enough data points for a person (video of them, recordings of their voice, etc.), you could create a realistic video of them doing or saying something that they wouldn't. This could end up in a defamation lawsuit or in "deepfakes" (e.g., a realistic video of a politician saying something they don't align with).

And on top of that, if you've recreated an AI version of a person and you're talking to it, at what point do you have a relationship with the AI? I've seen sites where you can talk to an AI, and it asks what type of relationship you want. Do you want this to be a mentorship or a friendship? Do you want a romantic relationship with an AI character? It's creepy.


I feel like that is the beginning stage of things we see in the Cinder series, where we have Iko. She's an AI, but she feels like a real character because she's very human-like. She has crushes on reality TV stars, and she's our main character's best friend. At what point is that a real relationship?

Let's take that a bit further. If you can trade all of your human connections for AI connections (and potentially be in control of how those AI connections talk and think), then you could control all of your own experiences, which can't be healthy. 

Or, you could use this same technology to negatively impact others. I recently saw a video talking about how you could recreate a video of a person doing or saying something that they wouldn't. What if somebody brings in a video of you committing a crime, and it's AI-generated, and there's no way to tell the difference?

Within two years, it's going to be near impossible to tell the difference between an AI video and an actual video. So let's say, in response, you bring in a real video of your alibi. And still, somebody says, "Well, that could be AI-generated."


Now, I know that OpenAI has said they're going to do some sort of watermarking so that we can know something is AI-generated...


But let's be very real about this. You are only a paywall away from taking that off.

And even if it had a watermark, people might still not believe it. I'm on Facebook, and I feel like so many people can't tell the difference between AI-generated images and real images. I see comments every day on AI-generated pictures where maybe two people point out that it's clearly AI-generated.


Should we let it get to the point that it's so realistic that even people who know what to look for won't be able to tell the difference?


We see this in A Beautifully Foolish Endeavor. In this story, it's not AI against humanity; it's humans using AI against other humans. For example, in the story, our main "bad guy" has put one of our hero characters under a simulation. However, she doesn't know she's in a simulation, because it's so realistic. In this case, we get lucky that our AI could intervene and help her realize she's not in real life.

And while full-blown simulations seem super far in the future, not knowing what is real is a much more present threat. Because people still take things at face value, not realizing how much AI-generated content is circulating on the internet.

Over time, it'll get to the point that we just have to hope we can tell the difference, or we're not going to be able to believe anything is real. Because that is the horrifying flip side of this coin: not being able to trust any type of media that comes out.


Of course, one of the other big warnings in AI books is the job displacement situation.


We're already kind of seeing it, and I feel like this crosses over a little with the ethics argument: AI is right now training off of human art, human literature, and now human video.

As far as I know, none of those people are getting paid or even being asked whether they'd like their work used to train an AI. And while a lot of people don't want that, it really depends on who OpenAI makes deals with. For example, I recently saw a story that OpenAI and Google might be making a deal that would allow AI to scrape Google Docs. On my side of the internet, where a lot of writers are...all of our works in progress are on Google Docs. We don't want AI to scrape our ideas, because it could write a book using an idea way faster than we could.

AI Taking Jobs

I have not read this book, but Player Piano by Kurt Vonnegut shows that because automation has taken so many jobs, there's a social upheaval due to mass unemployment. Even in 2024, I've seen a couple of really disturbing videos about how many things have been automated and how many jobs that can take.


I do try to take it with a grain of salt because I know I am prone to ending up on the "panic!" side of TikTok. But it's still not nice to think about, especially when we have books that show the end game of what that looks like if we really took it all the way. 


Of course, there's the side of the argument that AI is also going to make jobs. For example, maybe we won't need as much heavy work done on the video editing or animation side of things, but there will need to be someone on the back end. There are still going to need to be people working on it. Whether it's going to need as many people, or as much paid time...that's up for debate. But in books like A Beautifully Foolish Endeavor, we see jobs involving people building things in online AI spaces while under sedation. However, in real life, they're kept under sedation for too long and overworked while their real bodies waste away.

Those are horrible situations. But what if it went the other way? What if AI has taken over all the jobs and it does everything for the humans?

In Scythe, humans just don't work; they simply live their lives. They're allowed to work, and if they want a job, the Thunderhead can basically create one for them, but overall, they don't need to.


However, in my opinion, the book is not written in a way that glamorizes that fact. I think there really is an important commentary on the fact that people need work; they need a purpose. They need something to drive them, and because they don't have that anymore, people have just been lulled into a sort of sleep-like state where they just go through their life without making any type of progress. 


We're not at that point yet, and hopefully won't be for a very long time. In the modern day, I've heard of companies using their human workers to train AI, and then once the AI can work 24/7 for no pay without any type of complaint, they get rid of the humans who trained it. Personally, I think that is unethical (though it's obviously great for a business's bottom line), and with Sora coming in, AI will have the ability to take over a larger section of jobs (animators, video editors, actors, etc.).

But, like I said, it's not 100% there yet. I saw a really interesting video about a Sora AI-generated video that actually got flagged by YouTube for copyright infringement. And the reason that probably happened is that, while it was using multiple sources to create its video, it was relying too heavily on one of them. 

Essentially, it was just recreating one video to ensure that it looked lifelike. We've all seen AI images with too many fingers, and to avoid that kind of mistake, Sora isn't creating enough original content; it's relying too heavily on its sources so that the result comes across as realistic.


But that's not going to be a problem forever.

Even now, between the time I started playing around with Midjourney and today, it's gotten so much better (especially with lettering, hands, etc.). And it's the same case with AI video. If you look at AI video from a year ago, it's genuinely horrible. So you can only imagine where it's going to be a year from now.


 (Having a crisis yet? Great, me too.) 

The Promising Side of AI

Before we wrap up, we need a few examples of books where AI doesn't go completely wrong for humanity, because let's be honest: the cat is out of the bag, and we're not stuffing it back in. So it would be nice to have a couple of things to look forward to when it comes to AI.

Apparently, I don't read books like that, because when I tried to think of happy endings involving technology and human integration, not a ton came to mind immediately.


Luckily (and ironically), ChatGPT does have some. So, the first good thing we see in AI-related literature is advanced human capabilities.

ChatGPT recommended Snow Crash by Neal Stephenson, where the main character can go into an augmented reality and experience heightened skills and awareness, which I think would be really fun. (I think my fellow book girlies could really get into some sort of sword-fighting training in an AI landscape.)

It also talked about The Martian. I haven't seen the movie or read the book, but apparently the main character gets stranded on Mars, and the AI helps him figure out how to farm there so he can survive. (Sidenote: I've heard this book is really well written and actually pretty funny, so I'll probably be adding it to my TBR.)

The third "pro" to all of our cons is medical breakthroughs, specifically early disease detection, as well as advanced prosthetics (for example, Cinder and her prosthetic leg).

ChatGPT recommended The Clockwork Dynasty by Daniel H. Wilson, where AI can extend human lifetimes and enhance overall wellbeing. And while that sounds like it could be a nice read, I don't do medical drama, so that one's not going on my TBR.



So there's my best ramble about AI through the lens of literature, both the good and the bad. While I enjoy using AI sometimes, I do recognize that it maybe isn't always the best thing for us to have.


Overall, I remain hopeful. AI is the biggest technological advancement I've seen in my lifetime, and I think there are enough humans out there with kind hearts to train the AI to be kind-hearted too.
