Playing with Purpose

By day, I am an AI Product Manager at Vinted. By training, I am a philosopher. But above all, I am a father – and a terrifyingly stubborn optimist.

If you want the honest truth about why this studio exists, it comes down to a healthy dose of fear and a very specific kind of hope.

In my doctoral research, I study what happens to human minds in the age of Artificial Intelligence. Specifically, I investigate a chilling paradox: as AI becomes more efficient at making decisions for us, it actively erodes our phronesis – the ancient Greek concept of practical wisdom. Not theoretical knowledge. Not raw intelligence. But the hard-won, embodied capacity to perceive a situation clearly, weigh competing goods, and act well under uncertainty.

Phronesis is not a gift. It is a muscle. And like any muscle, it only develops through exercise – through friction, failure, and consequence.

Working inside the tech industry, I see firsthand how commercial algorithms are structurally optimized to do the opposite. They smooth away friction. They pre-empt your decisions. They learn your preferences so thoroughly that you are never asked to choose anything genuinely difficult again. These systems don't just help us; they substitute for our judgment, short-circuiting the deep, sustained attention that moral development actually requires.

The result is a quiet atrophy. We are becoming less capable of sitting with ambiguity, less tolerant of narratives that don't resolve cleanly, less practiced at imagining consequences before we act. And layered beneath all of it is a reading crisis that almost nobody is talking about seriously enough. When people stop reading deeply – when they stop inhabiting other minds, following complex arguments, and living inside cause-and-effect over hundreds of pages – they lose the very cognitive architecture that makes phronesis possible.

A society that cannot read deeply cannot think carefully. And a society that cannot think carefully is extraordinarily easy to manipulate.

When I look at my child, I refuse to accept that as inevitable.

As a product manager, I am paid to eliminate friction. Smooth onboarding. Frictionless checkout. Zero cognitive load. And in most contexts, that is exactly right.

But as a philosopher, I know that friction is precisely where learning happens. The moment of resistance – the puzzle that won't yield, the choice that implicates you, the consequence you didn't anticipate – is the moment the mind reaches beyond itself. Remove all friction and you remove all growth.

This is the insight that founded Mitos Games: what if we built something that was irresistibly entertaining, and deliberately, architecturally demanding?

Not homework dressed up as a game. Not a quiz with a story bolted on. But a genuine work of interactive literature where the mechanics themselves are the pedagogy – where every system in the engine is quietly building the very cognitive capacities that the rest of the digital world is slowly dismantling.

To do that, I needed an engine. So I built one. The Mitos Engine is the technological translation of everything above. Its governing philosophy is deceptively simple:

A scene is a moment. A node is an element of that moment. Flags and stats are the player's history made visible.

But beneath that simplicity is a layered architecture designed to do one thing above all else: make the player live with their choices.
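
To make that concrete, here is the spirit of the core data model as a simplified TypeScript sketch. The field names are illustrative rather than the engine's actual schema; only the concepts – scenes, nodes, flags, stats – are the engine's own.

```typescript
// Simplified sketch of the data model. Field names are illustrative;
// only the concepts (scenes, nodes, flags, stats) come from the engine.

type NodeType = "text" | "choice" | "stat_check" | "puzzle";

interface StoryNode {
  id: string;
  type: NodeType;
  requiresFlags?: string[];              // a node can read your history...
  setsFlags?: string[];                  // ...and write to it
  statEffects?: Record<string, number>;  // e.g. { charm: +1, trust: -2 }
}

interface Scene {
  id: string;          // a scene is a moment
  nodes: StoryNode[];  // a node is an element of that moment
}

// The player's history made visible:
interface PlayerState {
  flags: Set<string>;            // permanent facts ("betrayed_confidence")
  stats: Record<string, number>; // suspicion, dignity, charm, paranoia, trust
}
```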

Memory That Doesn't Forgive

The engine tracks the player's moral history through two distinct instruments: flags and stats.

Flags are the permanent facts of your story. Did you betray a confidence? Did you learn the secret of the diamonds? Did you insult the priest? Once set, a flag reshapes the world around it – quietly, often invisibly. Certain doors close. Darker paths open. The engine never lectures you about what you did. It simply remembers.

Stats accumulate the texture of your character. Not just gold or score, but suspicion, dignity, charm, paranoia, trust. These aren't decorative numbers. They are the slow accumulation of every small decision you made when you thought nobody was watching. A stat_check node fires silently in the background – no UI, no announcement – and routes you to the scene your history has earned. High charm? You might smoothly bribe the guard. Low trust? You will be thrown out, and you will have to live with understanding exactly why.
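
Under the hood, a stat_check node can be thought of as a silent router. The sketch below is a simplification – the node shape here is an assumption, only stat_check itself and its silent-routing behaviour are as described above:

```typescript
// Assumed shape of a stat_check node; the field names are illustrative.

interface StatCheckNode {
  type: "stat_check";
  stat: string;       // e.g. "charm" or "trust"
  threshold: number;
  passScene: string;  // where you go if your history measures up
  failScene: string;  // the scene your history has earned otherwise
}

function resolveStatCheck(node: StatCheckNode,
                          stats: Record<string, number>): string {
  const value = stats[node.stat] ?? 0;
  // No UI, no announcement -- the player simply arrives somewhere.
  return value >= node.threshold ? node.passScene : node.failScene;
}

// High charm? You bribe the guard. Low trust? You're thrown out:
// resolveStatCheck({ type: "stat_check", stat: "charm", threshold: 7,
//                    passScene: "bribe_guard", failScene: "thrown_out" },
//                  player.stats);
```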

This is phronesis training in structural form. Aristotle argued that practical wisdom develops through habituation – through repeated experience of acting well and poorly and feeling the difference. The engine creates exactly that feedback loop, at the level of narrative consequence rather than abstract instruction.

Consequence Delay

The richest moment in interactive fiction is not the dramatic choice. It is the moment, hours later, when a choice you had nearly forgotten quietly resurfaces – and the world is different because of it.

This is what I call consequence delay, and it is the feature I am most proud of in the engine's design.

Most game systems reward and punish immediately. Choose the kind option, get +5 reputation, move on. That is operant conditioning, not moral development. It teaches you to optimise, not to reason.

The Mitos Engine is built to plant narrative seeds in Chapter 1 and let them germinate quietly through Chapters 2 and 3, paying off with devastating emotional weight in Chapter 4. The flag you set when you chose to trust a stranger doesn't do anything obvious at the time. But three chapters later, in a moment of crisis, the engine reads it – and a single line of dialogue changes. A door stands open that would otherwise be closed. Or a character who might have helped you turns away.
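
As data, consequence delay is almost embarrassingly simple – which is part of why it works. Here is a hypothetical seed-and-payoff pair; the scene ids, flag names, and dialogue are invented for illustration:

```typescript
// A hypothetical seed and payoff. All names and lines are invented.

// Chapter 1: the choice that does nothing obvious at the time.
const seed = {
  sceneId: "ch1_station_platform",
  choice: "Trust the stranger",
  setsFlags: ["trusted_stranger_ch1"],
};

// Chapter 4: the engine reads the flag, and a single line changes.
const payoff = {
  sceneId: "ch4_crisis",
  variants: [
    { requiresFlags: ["trusted_stranger_ch1"],
      line: "You kept your word once. The door is open." },
    { requiresFlags: [] as string[],
      line: "The stranger looks through you and turns away." },
  ],
};

// The first variant whose requirements the player's history satisfies wins.
function pickVariant(flags: Set<string>) {
  return payoff.variants.find(v =>
    v.requiresFlags.every(f => flags.has(f)));
}
```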

The player feels this as story. But structurally, it is an exercise in understanding that actions have extended, non-obvious consequences – which is precisely the kind of reasoning that populism and manipulation depend on people being unable to do.

Sidequests as Moral Texture

The main story is the spine. Sidequests are where character is actually revealed.

The engine supports a full parallel sidequest architecture: optional story branches that run alongside the main chapter flow, triggered by flags, stat thresholds, or things the player has already discovered. A sidequest doesn't just give you a bonus – it reveals something. The best sidequests answer questions the player didn't know they were asking. Why is this character so frantic? What happened in this building before the story began?

Crucially, sidequests write their conclusions back into the main story's flag state. Complete the right sidequest, and a persuasion option appears three chapters later that no other player will see. Ignore it, and you will never know what you missed. This is how real moral texture works: not through explicit choices labelled "GOOD" and "EVIL," but through attention, curiosity, and the willingness to slow down and look.
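
Mechanically, a sidequest is a gate plus a write-back. A simplified sketch of that loop, with illustrative field and flag names:

```typescript
// Simplified sidequest gate and write-back. Names are illustrative.

interface Sidequest {
  id: string;
  triggerFlags?: string[];                 // things already discovered
  statThresholds?: Record<string, number>; // e.g. { curiosity: 5 }
  rewardFlags: string[];                   // written back into the main story
}

function isAvailable(q: Sidequest, flags: Set<string>,
                     stats: Record<string, number>): boolean {
  const flagsOk = (q.triggerFlags ?? []).every(f => flags.has(f));
  const statsOk = Object.entries(q.statThresholds ?? {})
    .every(([stat, min]) => (stats[stat] ?? 0) >= min);
  return flagsOk && statsOk;
}

function completeSidequest(q: Sidequest, flags: Set<string>): void {
  // A persuasion option three chapters later can now key off these flags --
  // but only for the player who slowed down and looked.
  for (const f of q.rewardFlags) flags.add(f);
}
```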

Puzzles as Judgment

Every puzzle in the engine is a moment of active judgment, not passive consumption.

The puzzle architecture is intentionally open-ended: anagrams, ciphers, drag-and-drop sequence reconstruction, dialogue bluffs conducted under pressure with a limited number of wrong answers before consequences fire. But the design principle behind all of them is the same: the player must bring something to the problem. They must reason. They must hypothesize. They must sometimes fail.

Crucially, failure is not a dead end. The engine uses a failNext pointer to route failed puzzles into consequence scenes. "I couldn't crack the safe" becomes a valid story outcome – one with its own weight, its own doors closed. There is no infinite retry that eventually dissolves the challenge into inevitability. The difficulty is real, and so is the cost of not meeting it.
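
In engine terms, the failNext pointer is what makes failure canonical. Here is a sketch of that routing – the node shape is simplified and its other field names are illustrative; only failNext itself is the engine's own name:

```typescript
// Simplified puzzle node; "failNext" is the engine's pointer, the rest
// of the shape is illustrative.

interface PuzzleNode {
  id: string;
  kind: "anagram" | "cipher" | "sequence" | "dialogue_bluff";
  maxAttempts: number; // limited wrong answers before consequences fire
  next: string;        // scene on success
  failNext: string;    // consequence scene -- failure is a valid outcome
}

function resolvePuzzle(node: PuzzleNode, attemptsUsed: number,
                       solved: boolean): string {
  if (solved) return node.next;
  if (attemptsUsed >= node.maxAttempts) {
    // No infinite retry: "I couldn't crack the safe" becomes canon,
    // with its own weight and its own doors closed.
    return node.failNext;
  }
  return node.id; // attempts remain -- stay on the puzzle
}
```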

This directly counters the passive consumption model of modern media. Every puzzle is a small act of cognitive resistance.

Multi-Character Perspectives

One of the deepest capabilities in the engine is its multi-character architecture – and what it implies for building exactly the kind of moral imagination that phronesis requires.

In pick_one mode, the player chooses a protagonist before the story begins, locking in a unique constellation of starting stats and flags. Playing as Ostap Bender – brilliant, charming, ruthlessly pragmatic – means seeing the world through the eyes of a man for whom every social situation is a confidence game to be won. Playing as Vorobyaninov – paranoid, greedy, burdened by status anxiety – means experiencing identical scenes as genuinely threatening, the same dialogue reading as sinister rather than playful.
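
Concretely, picking a protagonist is picking a starting state. The stat values and flag names below are invented, but this is the shape of the constellation:

```typescript
// pick_one mode as data: each protagonist locks in a starting
// constellation. Stat values and flag names here are invented.

const protagonists = {
  ostap: {
    stats: { charm: 9, paranoia: 2, dignity: 4 },
    flags: ["sees_every_room_as_a_con"],
  },
  vorobyaninov: {
    stats: { charm: 3, paranoia: 8, dignity: 7 },
    flags: ["haunted_by_lost_status"],
  },
};

// Identical scenes diverge because identical checks resolve differently:
// the paranoia stat_check Ostap sails past is the same one that makes
// Vorobyaninov read the dialogue as sinister.
```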

This is not aesthetic variety. This is structured perspective-taking – the deliberate practice of inhabiting a mind that processes the world differently from your own. Aristotle's phronesis is inseparable from the capacity to perceive a situation from multiple vantage points before acting. The engine makes that practice architectural.

In sequential mode, the engine goes further: the author can force POV shifts mid-story using character_switch nodes. You are Ostap scheming in a Moscow office – and then suddenly you are Vorobyaninov, across the city, watching the same plan from the outside. World flags ensure that universal truths survive these switches: what one character discovers, the world registers. What one character sets in motion, the other must navigate.
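
Under the hood this needs surprisingly little machinery: a character_switch node plus two scopes of state. Here is a simplified sketch – the scope and field names are illustrative; the split between world and character is the point:

```typescript
// Simplified sketch: character_switch plus a world/character state split.
// Scope and field names are illustrative.

interface CharacterSwitchNode {
  type: "character_switch";
  toCharacter: string;  // e.g. from "ostap" to "vorobyaninov"
  entryScene: string;   // where the new point of view picks up
}

interface CharacterState {
  flags: Set<string>;
  stats: Record<string, number>;
}

interface WorldState {
  worldFlags: Set<string>;                    // universal truths that survive switches
  characters: Record<string, CharacterState>; // what each mind privately carries
}

// What Ostap discovers is written to worldFlags, so when the POV shifts,
// Vorobyaninov's scenes can react to a plan he never saw set in motion.
```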

Cross-Game Memory: Your Moral History Follows You

The most ambitious feature of the engine is the one that operates across the boundaries of individual games entirely.

At the end of The Twelve Chairs, carryover nodes snapshot the player's most defining decisions before the final screen: Did you show mercy? Who did you betray? How many chairs did you find? When the same player opens the sequel, The Little Golden Calf, the engine reads that CarryoverStore before the first chapter even loads. The sequel's world is shaped, quietly and specifically, by who you were in the previous game.
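
The mechanism is a snapshot at one end and a read at the other. Sketched out below with invented flag and stat names – carryover nodes and the CarryoverStore are the engine's real terms for the hand-off, the shape is a simplification:

```typescript
// Cross-game snapshot sketch. "Carryover" and "CarryoverStore" are the
// engine's terms; the snapshot shape and names below are illustrative.

interface CarryoverStore {
  gameId: string;                        // e.g. "the_twelve_chairs"
  definingFlags: string[];               // "showed_mercy", "betrayed_partner"
  definingStats: Record<string, number>; // e.g. { chairs_found: 7 }
}

// At the final screen, a carryover node snapshots the defining decisions:
function snapshot(flags: Set<string>, stats: Record<string, number>,
                  keep: { flags: string[]; stats: string[] }): CarryoverStore {
  return {
    gameId: "the_twelve_chairs",
    definingFlags: keep.flags.filter(f => flags.has(f)),
    definingStats: Object.fromEntries(
      keep.stats.map(s => [s, stats[s] ?? 0] as const)),
  };
}

// The sequel reads the store before its first chapter loads and seeds
// the new world's flags and stats from who you were.
```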

This is the full arc of consequence delay extended to an entire literary series. Your moral history is not erased at the end of each book. It follows you. The con artist you became in one story is the person who walks into the next one.

This is, to my knowledge, a genuinely novel approach to interactive literature – and it is a direct structural expression of the studio's deepest belief: that character is not chosen in a single moment, but accumulated across time through thousands of small decisions.

All of this technology exists in service of a deceptively simple ambition: to make people better at being people.

Not through instruction. Not through moralizing. But through the oldest technology humans have for practicing wisdom at a safe distance from real consequences – story.

The interactive layer doesn't replace what makes great literature great. It deepens it. It makes you responsible for the story. It makes you feel, viscerally, the weight of choices that in a passive novel you could observe from a comfortable remove. It asks you, again and again, to exercise the very faculty that the rest of your digital life is quietly encouraging you to outsource.

And it does all of this while making you laugh, scheme, and eagerly tap to the next page to find out what your terrible decision just set in motion.

We are, above everything else, a fun studio. We make games about con artists hunting for diamonds hidden in upholstery, absurd Soviet bureaucracy, ciphered addresses and blown cover stories. Our goal is for you to want to keep reading – and to emerge, hours later, having exercised something important without ever being told to.

We believe the best way to armor a mind is to play with it.

Welcome to the studio. We're glad you're here.

Let's play.

Justas