I’ve been obsessed with a side project lately—building a fantasy world with my sons. We decided on a "no-magic" rule, aiming for something grounded in the grit of ancient Rome or Egypt.
But honestly? Getting an AI agent to actually help build this without it sounding like a generic D&D manual has been a headache. I spent weeks fighting with prompts that just wouldn’t click.
I finally broke through the wall today, and I realized why my previous attempts were failing:
I was giving the AI too much freedom and not enough "personality constraints." For example, I stopped letting it say things like "Based on the database..." or "According to the lore." As soon as I banned those "bridge phrases," the AI stopped acting like a bored librarian and started actually inhabiting the world we built. It’s a small tweak, but it forces the model to integrate the history of nations like Avalon or Fura into its own "memory" instead of just copy-pasting facts.
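If you want to try the same trick, here's a minimal sketch of how it might look in code. The phrase list and prompt wording are my own illustrative guesses, not the exact ones from this project:

```python
# Sketch of the "banned bridge phrases" constraint. Phrases and prompt
# wording are hypothetical examples, not the author's actual prompts.

BANNED_BRIDGE_PHRASES = [
    "based on the database",
    "according to the lore",
    "as stated in the records",  # hypothetical extra entry
]

SYSTEM_PROMPT = (
    "You are a native chronicler of this world. Speak from lived memory.\n"
    "Never use meta phrases such as: "
    + "; ".join(f'"{p}"' for p in BANNED_BRIDGE_PHRASES)
)

def violates_constraints(reply: str) -> bool:
    """Return True if the model slipped back into 'bored librarian' mode."""
    lowered = reply.lower()
    return any(phrase in lowered for phrase in BANNED_BRIDGE_PHRASES)
```

The check function is optional, but it gives you a cheap way to catch regressions: if a reply trips it, you can re-roll that generation instead of letting the meta-voice leak into the world.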
The other big win was splitting the work. I stopped asking one prompt to do everything. Now, I have one "Architect" that just cooks up the conflict—usually something grounded like a salt shortage or a broken trade route—and a "Chronicler" that handles the actual writing.
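The split is easy to wire up as a two-stage pipeline. A rough sketch, where `call_model` is a placeholder for whatever LLM client you use and both prompts are my own assumptions rather than the project's real ones:

```python
# Sketch of the Architect/Chronicler split. `call_model` is a stub standing
# in for a real LLM API call; the prompts are illustrative assumptions.

ARCHITECT_PROMPT = (
    "You design one grounded conflict for a no-magic world "
    "(think salt shortages, broken trade routes). "
    "Output a one-paragraph brief."
)

CHRONICLER_PROMPT = (
    "You are the world's chronicler. Turn the brief below into an in-world "
    "account, written from memory, never citing sources.\n\nBrief:\n{brief}"
)

def call_model(system_prompt: str, user_msg: str) -> str:
    # Placeholder: swap in your actual LLM client here.
    return f"[model reply to: {user_msg[:40]}...]"

def run_pipeline(seed: str) -> str:
    brief = call_model(ARCHITECT_PROMPT, seed)  # stage 1: invent the conflict
    story = call_model(CHRONICLER_PROMPT.format(brief=brief), brief)  # stage 2: write it
    return story
```

The point of the split isn't extra machinery; it's that each prompt only has one job, so neither role drifts into doing the other's badly.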
It’s a reminder that with these tools, "more" isn't better. The tighter the box you put the AI in, the more creative it actually gets.
It started as a way to bond with my kids, but it’s turning into a masterclass in how to actually talk to these models.