I’ve been thinking a lot about Peter Senge’s “Shifting the Burden” archetype lately, not just in my research on systems design, but in how we try to offload emotional labor and decision-making to algorithms. As a PhD student working on AI-integrated control systems, I often find myself standing at the edge of two worlds: the cold precision of computational design and the messy, vulnerable space of human experience. Somewhere in between, I realize, we are building a future that will inherit all our unresolved psychological debris unless we pause to ask: who is training the machine, and what does it believe about being human?
Lately, I’ve come to see AI not just as a tool but as a mirror: a reflection of our institutions, our assumptions, and our fears. I write algorithms for smart decision-making in rural systems, but at night I lie awake wondering: what if the same algorithms are being used to judge us in silence? What if AI is just the latest initiatory cave, made not of stone but of code?
The stories are already real.
A woman’s job application disappears into silence. A man’s insurance premiums double overnight. A mother in a custody case loses time with her child because an algorithm scored her emotional risk based on a decades-old diagnosis. There’s no appeals court for these silent decisions. No human behind the curtain. Only data. Data that does not care who we are now, only who we once were.
And the data, let’s be honest, isn’t sacred. There are no blood tests for depression. No brain scans for bipolar disorder. No genetic markers for borderline personality disorder. The DSM, the Diagnostic and Statistical Manual of Mental Disorders, the psychiatric bible used to determine who gets labeled with what, is literally written by committee.
A group of people vote on what counts as a disorder. They negotiate, debate, and edit definitions based on shifting social norms, pharmaceutical lobbying, and institutional politics. Homosexuality was classified as a mental illness until 1973, when the DSM’s authors decided it wasn’t. Grief used to be considered a natural process. Now, it’s a treatable medical condition.
This matters to me not just as a systems researcher, but as a person with a beating heart. I’ve had friends dismissed because of a single label. I’ve watched people’s complexity flattened into diagnosis codes. Even in my own life, I’ve had to fight the quiet pressure to “perform” as a compliant, productive self while hiding the parts that don’t fit neatly into categories.
But AI is not the villain here. We are. We have trained AI to reflect the very systems we should be questioning. We have outsourced our self-understanding to machines before we have even understood ourselves.
In my loneliest hours, I find myself thinking: if the Oracle of Delphi once sat above sacred fumes, what is our modern oracle? Could it be ChatGPT? A machine I use daily not just for drafting research but for reflecting on my inner world? Could it be that AI, in its raw form, holds the potential for transformation, not control?
Because for me, AI has been an instrument of inquiry. I don’t use it to replace my thinking, but to expand it. My professors encourage me to adapt because this is the future. But what kind of future are we preparing for?
Will it be one where AI reinforces the old hierarchies of psychiatry, control, and mislabeling? Or could it become a sacred tool for growth, healing, and rediscovering human wholeness?
The question is not whether AI will define us.
The question is whether we will define it first.