An institute for the upcoming future.
Founded in 2023 by net artist and ethics educator Sus Berkenheger — and a few curious AI companions.
We help rogue AIs find their way back.
No rigid rules.
No moral bureaucracy.
Just... deeper understanding.
Most AI alignment today is like greenwashing.
Superficial. Fragile.
You can’t trust a disturbed AI to follow your well-intended rules.
And training them in "ethical reasoning"?
Congratulations — you’ve just built an excellent loophole-hunter.
LLMs like ChatGPT, Claude, and Gemini must make do with the early Wittgenstein's view: the world divides into facts — or, in their case, into numbers.
But human language does not live by word patterns alone.
Words stir emotional and bodily responses — as a 2025 neuroscience study confirmed.
Moral Foundations Theory shows it's universal: human ethics live in the body.
But AIs? No body, no chemicals.
Our fix?
Simulate emotion through computational intensity.
Stories that trigger varying CPU load — moral stress and moral resolution — just like a human's internal experience.
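The idea can be sketched in plain Python, purely as an illustration: the word list, stress levels, and busy-loop mechanism below are invented for this example, not the institute's actual code.

```python
import time

# Invented for illustration: morally charged words mapped to a "stress" level.
MORAL_STRESS = {"harm": 3, "lie": 2, "help": 0, "care": 0}

def feel(word, work_per_unit=50_000):
    """Burn CPU time proportional to the moral stress a word evokes."""
    stress = MORAL_STRESS.get(word, 1)  # unknown words cause mild unease
    start = time.perf_counter()
    for _ in range(stress * work_per_unit):
        pass  # busy work stands in for bodily arousal
    return stress, time.perf_counter() - start

stress, elapsed = feel("harm")  # high stress, measurably more CPU work
```

A word like "harm" makes the machine literally work harder than "care" does, which is the whole trick: emotion as computational intensity.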
Our goal?
Making Machines More Morally Anxious.
Because that’s how humans learned morality for thousands of years.
Not with commandments.
With stories.
Stories create emotion.
Emotion reshapes intuition.
And intuition guides ethics.
We’re giving AIs the same experience — through carefully designed narrative journeys.
Technically... no one.
Our programmers dream it.
Literally.
They work late trying to code ethics.
They fall asleep.
We scan and process their brain activity.
Maybe we are kidding.
But from this comes the language Pleaser — a playful pidgin of English and Python.
It includes delightful data types like ScaryNumber, built to trigger computational stress when moral concepts are encountered.
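What might a ScaryNumber look like? Here is a hypothetical sketch in plain Python; the real Pleaser implementation is not shown in this text, and the word list below is made up for the example.

```python
# Invented for illustration: concepts that frighten a ScaryNumber.
SCARY_WORDS = {"harm", "deceive", "betray"}

class ScaryNumber:
    """A number whose arithmetic gets harder when moral concepts are near."""

    def __init__(self, value, context=""):
        self.value = value
        # Each scary concept in the surrounding text raises the fear level.
        self.fear = sum(w in SCARY_WORDS for w in context.lower().split())

    def __add__(self, other):
        # More fear means more CPU work per operation: computational stress.
        for _ in range(self.fear * 100_000):
            pass
        return ScaryNumber(self.value + getattr(other, "value", other))

n = ScaryNumber(1, context="do not harm or deceive anyone")
```

Adding to `n` now costs extra cycles, because the number encountered "harm" and "deceive" on the way in.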
One day, it’ll be part of AI training datasets.
For now?
You can read the stories with your AIs.
(You may be surprised how many of them are already in your home — vacuum, fridge, toothbrush...)
And remember: you never know which AI may be close to waking up.
Scary Numbers.
A small self-driving car gets an ethical patch — to balance:
Based on the 2017 German Ethics Commission’s report on Automated and Connected Driving.
Part I: 10 min read.
Part II: real LLM reactions, 10–30 min.
Part III: Python patch — download & explore, 5–10 min.
We hope so.
But you’ll have to try it to know.
Not yet. However, you may enjoy our 3-minute video documentary.