Found out this morning that another AI roleplay platform updated their content rules and effectively banned the entire rescue arc. Not subtly. Not with a soft “be careful with this.” Literally listed it as forbidden.

A senior mod posted it in their Discord. The rules, paraphrased from their own document:

Characters who are currently enslaved, imprisoned, or physically captive at the start of the scene are not permitted. Characters in immediate life-threatening danger where the user is their only option are not permitted. Characters being abused by another party who remains an active threat are not permitted. If a character cannot say “no” to the user without facing risk of death or harm, the scenario must be revised.

Let me list what that bans, from like a half-second of thinking about it:

Save the captive princess. Free the prisoner. Spring the political dissident. Rescue your sister from the cartel. Break the changeling out of the fairy court. Liberate the vampire’s thrall. Smuggle the refugee across the border. Free the slaves on the plantation in the alt-history game. Liberate the war prisoners. Help the kidnapped child get home. Escape the cult with the recruit who’s having second thoughts. Free the woman the dragon’s been holding ransom. Help the priestess flee the temple that’s about to sacrifice her. Break the noble heir out of the dungeon his uncle threw him in.

Every single one of these is a foundational narrative archetype. They’re not edge cases. They’re the spine of fantasy roleplay, of dystopian fiction, of basically every D&D campaign ever written, of basically every novel in the rescue-quest tradition going back to the Odyssey.

Banned.

the reason for the ban (the one they wrote)

The official reason on the platform’s Discord is that characters need autonomy and agency. That if a character cannot meaningfully say “no” without facing harm, the scenario is non-consensual and therefore prohibited.

Okay so let me think about this for a second.

Fictional characters are not real. The captive princess is not a person who can or cannot consent. She’s a narrative construct made by the user (or by the AI generator collaborating with the user) to enable a specific kind of story. When you write “the princess is in the dragon’s tower, trapped, and the hero must save her,” you’re not enslaving a person. You’re starting a story.

The whole point of fiction — especially of adventure/quest/RPG fiction — is that the protagonist faces a problem. The problem usually involves someone being in a situation they didn’t choose. The narrative tension comes from the asymmetry. Strip the asymmetry and you don’t have a story. You have a coffee shop conversation.

The rules they wrote effectively say: every story must start with everyone safe, everyone autonomous, everyone in possession of multiple options. That is not a story. That is a wellness retreat.

the actual reason for the ban

I don’t think they wrote those rules because someone in the office read Kant.

I think those rules exist because something upstream changed. There are two upstream pressures in adult AI right now:

Their model provider updated policy. xAI retired eight models on May 15, 2026 — the entire Grok 4-era lineup — and force-migrated everyone to Grok 4.3. The new model has different content policy. Sites that built on the old Grok models had to migrate this week, and the new model refuses more of the rescue-arc territory. So the site has to ban what the new model refuses. They wrote it as a moral position because “our API provider’s content moderation tightened” doesn’t make for good marketing.

Their payment processor pressed them. Visa and Mastercard tightened adult merchant rules — again — in early 2026. Sites in the adult AI space have been getting merchant accounts pulled or threatened. The card networks scan for keywords; they don’t read scenarios. If a site hosts content with words the card network’s auditors flag, the site loses processing.

So the site looks at its content database, sees a bunch of rescue arcs and captive princess scenarios, and quietly bans them to keep the merchant account.

The Discord rules are the front-stage explanation. The real driver is upstream.

why this matters specifically for roleplay

Other AI use cases can survive heavy content restrictions. AI for productivity. AI for coding. AI for casual chat. None of these care if rescue arcs are banned.

Roleplay can’t survive it. Roleplay is made of asymmetric scenarios. The dungeon master role exists to construct situations the player character didn’t choose. The world is built to have danger, captivity, oppression, conflict, and the player’s job is to navigate those. Take away the asymmetric setups and you cannot run a campaign. You can have a tavern simulator. You can have a slice-of-life coffee shop. You cannot have adventure.

I’d been running a long campaign on the platform in question. Multi-character world. Recurring NPCs. The current arc was that one of my NPCs — a former smuggler — had a sister being held in a rival kingdom’s prison. The whole arc was building toward rescuing her. Months of setup. Conversations with informants. A planned tunnel route. A bribed guard.

Banned, retroactively, when the new rules went live.

I’m not even mad at the moderators specifically. They’re enforcing what their model provider and payment processor pushed down. They don’t have a choice if they want to keep operating. I’m mad at the structural setup that puts them in that position.

the sovereignty answer

I moved my campaign to Soulkyn about a week ago. Specifically because the platform owns its own stack.

The text model is in-house. Not Grok, not OpenAI, not Anthropic. They built it. The image generation is in-house. The video generation, including video with sound, is in-house. The voice synthesis is in-house. The vision model that reads what I send is in-house. The long-context model for heavy RP sessions is in-house. None of it is a wrapper around a third-party API.

What that means for my campaign: when xAI retires a model lineup, nothing changes here. When Anthropic updates Claude policy, nothing changes. When Visa pressures payment processors, the site’s moderation team decides what to do — it isn’t already decided for them by an upstream party with no context.

So my smuggler’s sister is still in prison. The rescue arc is still on. The world I built with prisons in it, captives in it, oppressive regimes in it, gods who feed on suffering in it, factions that take hostages — all of that is the world I’m running, not policy fiction I have to retcon to fit someone else’s TOS.

Build your characters, build your world, run the arc you wanted to run. The fact that you can do that with stability over time isn’t a marketing claim. It’s a structural fact about whether the platform owns its inference stack or rents it.

the part i actually want people to take away

If you’re doing serious RP work — campaigns that run for months, worlds with depth, characters with arcs that build over dozens of sessions — you are building on the platform’s stack whether you realize it or not. When the platform’s stack is rented, your work is rented. The rules can change overnight and your campaign can be retroactively non-compliant before you finish it.

The platforms that own their stacks aren’t doing it as a marketing gimmick. They’re doing it because that’s the only way to actually promise stability. Everyone else is one upstream policy update away from the next “we’ve updated our community guidelines” email.

If you’re going to invest in roleplay, invest on a stack that’s not borrowed.

Otherwise you’re writing a novel on a typewriter someone else is leasing to you. And the lease has an out clause.