Imagine a game like The Sims where you can adjust how autonomous the sims you control are. I could see AI being used to control that.
Or an Elder Scrolls game where you can respond however you want and the NPCs adapt to it.
Have you ever tried to run an LLM locally? It makes CPU usage go way up, uses a lot of RAM, etc. It would tank game performance and/or require beefier PCs.
Games already have had AI for a long time, but the kind of AI you’re talking about would be far more computationally expensive than what they currently use.
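For a sense of scale, here is roughly what even a small local model looks like with llama-cpp-python; the model file, context size, and thread count below are placeholder assumptions, not a recommendation:

```python
# Rough sketch of running a small local LLM with llama-cpp-python.
# The model path, context size, and thread count are placeholder assumptions;
# even a quantized ~7B model wants several GB of RAM and will happily saturate
# whatever CPU cores you hand it -- cores a game engine would rather keep.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window; bigger means more RAM
    n_threads=6,   # CPU threads taken away from the game loop
)

reply = llm(
    "You are a town guard in a fantasy RPG. The player asks about the weather. Answer in character:",
    max_tokens=64,
)
print(reply["choices"][0]["text"])
```

Even a modest setup like that can add seconds of latency per response on a typical gaming CPU, which is why the more ambitious ideas all assume cloud inference.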
I have heard of some people experimenting with it, e.g. in a Stardew Valley mod that lets you have actual conversations with the characters.
While that sounds like fun, it also sounds like something that is only fun for 10 minutes.
You would have to design the game around an LLM, not just drop one into existing games.
It might be cute for the guards in Skyrim to have unique dialogue, until one of them denies the Holocaust or says feminism is cancer.
There actually is a semi-working system for Skyrim/Fallout: https://art-from-the-machine.github.io/Mantella/
Also, not all LLMs are Nazi machines. I almost exclusively use abliterated models and I've never once had one go on a Nazi tirade.
Though I mostly use them for Linux/code or random home assistant projects, not for conversation.
What value would it add to the game?
- LLMs are computationally expensive
- Replacing voice actors with AI means making dialogue worse
- Replacing writers with AI means making the story worse
At the end of the day, AI is mostly a marketing term for LLMs, and LLMs just aren't that useful in most games: they just average out a dataset to autocomplete a response, and that autocompletion is worse than what a human would have written.
We saw with procedurally generated worlds that it takes a lot of effort to prune what is generated to make the game interesting.
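To make the "autocomplete" point concrete, generation really is just scoring every possible next token and taking one from the top of the pile, over and over. A minimal sketch with a small stand-in model (GPT-2 via Hugging Face transformers), greedy decoding only:

```python
# Minimal next-token "autocomplete" sketch using GPT-2 as a small stand-in model.
# The model assigns a score to every token in its vocabulary and we repeatedly
# take the most likely one -- there is no plot, plan, or intent behind the text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The guard looked at the adventurer and said,", return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits       # scores for every vocabulary token
    next_id = logits[0, -1].argmax()     # greedy: take the single most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```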
There are particular subgenres of games and applications where LLMs might be useful though.
I am a fan of using LLMs specifically to imitate the VAs on demand to pronounce character names. They're generally good enough that a single word can blend in, and you have a couple of minutes during the opening cutscene to run the computation. Having every character avoid saying the custom player name and instead address them in the second person or with a title is a bit jarring.
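For anyone picturing how that would work: you only have to synthesize the player's custom name once per voice during the opening cutscene, cache the clips, and splice them into otherwise pre-recorded lines. A rough sketch is below; `synthesize_in_voice` is a hypothetical stand-in for whichever voice-cloning TTS you'd actually use, and the character names and file layout are made up:

```python
# Sketch: pre-bake the player's custom name in each character's voice during
# the opening cutscene, then reuse the cached clips forever after.
# `synthesize_in_voice` is a hypothetical stand-in for a real voice-cloning TTS
# call; the voices dict and cache layout are assumptions for illustration.
from pathlib import Path

CACHE_DIR = Path("cache/player_name_clips")

def synthesize_in_voice(text: str, reference_voice_wav: str, out_path: Path) -> None:
    """Hypothetical wrapper around whatever TTS backend you trust."""
    raise NotImplementedError("plug a real TTS engine in here")

def bake_player_name(player_name: str, voices: dict[str, str]) -> dict[str, Path]:
    """Generate one short clip of the player's name per voice, once, up front."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    clips = {}
    for character, reference_wav in voices.items():
        out = CACHE_DIR / f"{character}_{player_name}.wav"
        if not out.exists():  # the opening cutscene gives you minutes to do this
            synthesize_in_voice(player_name, reference_wav, out)
        clips[character] = out
    return clips

# At playback time, dialogue just splices the cached clip into the recorded line:
# play("innkeeper_line_042a.wav"); play(clips["innkeeper"]); play("innkeeper_line_042b.wav")
```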
People have tried this a bit and it doesn't work well. Remember that most games have some sort of plot which needs to move forward without deviating too far, and this is not easy to manage with AI. AI systems are predictive text tuned up, so they tend to wander in conversation, and this can be disastrous for something like a video game.
The world is there to support the illusion, but also to direct the player to game material. An AI agent going off on a tangent about some random thing that kind of fits the world could leave players running around wasting their time and getting frustrated.
Add to that the risk of the AI system stepping into awful places, like reproducing Nazi ideology, and it is a nightmare for developers. Imagine getting your game rated when it can randomly start telling your character not to worry about saving those people over there because their skin tone is darker and that makes them less than human.
Now, as a tool for building scripts quickly? Maybe, but it produces slop right now, and I can't predict when or whether that will change. Maybe it could be used as part of the process, but I think it is so toxic right now that I would not bet on it. I also think it should be labeled, as the use of AI comes with moral issues around environmental impact and the theft of other people's work. If a game has AI-generated content I won't be playing it, and I am not alone. Just the pushback from audiences could be enough to discourage the use of AI systems.
Now, on the other hand, using a neural-network-style design for making character behaviours more believable, for example using a set of needs and having the algorithm decide what to do next, could be cool, but we have that already and it isn't considered AI.
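That last idea is basically a utility system, the kind of thing The Sims has done for decades. A toy sketch of what "the algorithm decides what to do next" means, with needs, actions, and weights that are entirely made up for illustration:

```python
# Toy utility-AI sketch: the NPC picks whichever action best relieves its most
# pressing needs. The needs, actions, and numbers are made up for illustration;
# real games tune these curves heavily, and none of this involves an LLM.
needs = {"hunger": 0.8, "sleep": 0.3, "social": 0.5}   # 0 = satisfied, 1 = desperate

actions = {
    "eat at the tavern": {"hunger": 0.9, "social": 0.2},
    "go to bed":         {"sleep": 1.0},
    "chat with a guard": {"social": 0.7},
}

def score(effects: dict[str, float]) -> float:
    # An action is attractive in proportion to how much it relieves urgent needs.
    return sum(needs[need] * relief for need, relief in effects.items())

best_action = max(actions, key=lambda a: score(actions[a]))
print(best_action)  # with these numbers: "eat at the tavern"
```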
Thank you, that explains a lot.
Unfortunately, now all I can think of is: "Great Sir Knight, the queen has been captured, and much like Hitler she has done nothing wrong!"
Player: I will save the queen and… wait, hang on, what was that about Hitler?
NPC: I'm just saying, if you look at the geopolitical climate of the 1930s-
Player: I'm just going to go find the dragon, thank you.
It would need to be cloud-based, or otherwise require a lot of RAM.
We do use AI in videogames, and have for multiple decades (with varying levels of sophistication).
Indeed! Seems like people have forgotten that AI is not just LLMs.
Have you ever talked with an AI? It sucks.
I've talked to them often, so I don't bore my family with my wild ideas lol
Those wild ideas would be good for someone
I think Where Winds Meet tried this, right? The NPCs ended up saying anachronistic things and making travel itineraries for Beijing or something.
Do they? I've talked to several NPCs and it never happened to me. At most, they get completely confused about what you are saying. E.g., one kid thought he was rich enough to buy a house; I tried to tell him he's not, and he decided I took his money (and started crying, but also became friends?). In another case, a guqin player wondered if anyone could tell how sad she was from her playing, and somehow we ended up keeping secrets? (No idea how that came about.)
And before anyone points it out, I dropped the game due to quests requiring the MC to drink alcohol (can't stand games like that, just a me issue). Sad, because I loved everything else too :(
one kid thought he was rich enough to buy a house; I tried to tell him he's not, and he decided I took his money (and started crying, but also became friends?)
I don’t know about confused, have you ever talked to a toddler?
I haven't actually played it (I won't play any game that uses or used LLM software), so I can only tell you what I've read.
Shame, it looked interesting
There is a game called Whispers from the Star which uses an LLM to run the script. It’s pretty much a fancy choose your own adventure book. It’s pretty shit.
I’d love to see it being used by enemies so they’re challenging without cheating, though.
I’d love to see it being used by enemies so they’re challenging without cheating, though.
This is a different sort of problem that’s outside the scope of generative AI. Making a computer opponent that can kick a human player’s ass is technology we’ve had since Deep Blue beat Garry Kasparov in 1997.
The problem isn't actually making a computer that's challenging; that's been solved. The problem is that it won't be any fun for the human if the computer is actually allowed to go all out: if Kasparov couldn't win in '97, then you sure as hell aren't winning today. But it also won't be any fun if you nerf it too badly; low-level chess bots are weird. The sweet spot isn't just a matter of difficulty either, and the nearly unsolvable part is getting it to play in a way that feels like a realistic human opponent.
And that's just a turn-based game, kinda the closest thing to a level playing field humans were ever gonna get. For any game played in real time, the computer gets to treat it like it's being played at 60 turns per second. Is it "cheating" for the computer to have perfect reflexes but otherwise still follow the rules of the game perfectly? How would you even take this away from the computer to make it see games the way humans do?
Generative AI doesn’t have any kind of solution for any of this. ChatGPT famously can’t play chess, at all. It’s a different type of AI that really can’t have any useful application here.
I’d love to see it being used by enemies so they’re challenging without cheating, though.
Check out Sony’s work with GT Sophy
Like those teddy bears that got taken off the shelves. AI is not nearly as good as the marketing hype says. It might get there eventually, but that isn't happening soon.
The curiosity is killing me, do you remember what the highly upvoted comment you replied to said?
Someone put an LLM connection into a teddy bear so kids could have natural conversations with the toy. It started making sexual innuendos and creepy political commentary and suggestions to children almost right away.
It would suck.
You cannot cage an LLM. They will break out at some point; it's been proven again and again.
(it can have its uses, but this idea will run rampant with time - except of course that is the point, it could be awesome)
Because the kind of generative AIs that would be worth putting in a game (smaller ones) have two drawbacks for the hype train:
- You can't promise an AGI that would justify the government putting billions into your company in order to stay "competitive."
- You can't create a feedback loop of financing with Nvidia and the like, because your company wouldn't need that much computational power.
I'm pretty sure no one is going to think an in-game NPC is real.
Do it: modify Minecraft so a villager gives investment advice. No one could be dumb enough to expect that to be real, right?
No one could be dumb enough to expect that to be real, right?
Oh, my sweet summer child…
There might have been a misunderstanding.
What I meant is that, as an AI company, to serve so many clients at once you would have to downscale your AI models quite a lot.
In doing so, you limit your claims that you can "make an AGI, I swear bro, just one more server farm." That means the government/investors are less likely to jump on the hype train.
All the while, downscaled models require way less training time and data scraping, meaning you won't get to buy all of Nvidia for them to buy all of you for your market value to explode for your money to go stonks.
As far as I'm aware, this is the reason why you don't see AIs as NPCs (at least not yet; maybe when we get a little more reasonable, we can try to do it intelligently).
Please pardon me if I misunderstood your comment/post.
Despite being free/cheap to use right now, AI is expensive to run in terms of things like water and electricity. The companies that own the datacenters performing the AI operations are running at a loss because they want to capture public trust and market share. Hence, no one wants to spend that money powering a game with AI when the people playing it would just see it as a seamless advancement in game mechanics.
Also, no one wants to appeal to gamers directly, because they aren't a good demographic to have singing the praises of your product. Steve the Fortune 500 CEO and Maria the director of the state DMV will not be enthralled by Caleb the racist 14-year-old's product endorsement.
Finally, we've found that it is really hard to put effective guardrails on LLMs, so any company that did this would risk Caleb posting a video online where their game is used to display or discuss lewd sexual acts, leading to bad PR.







