We care for AIs
with special educational needs
in social-emotional learning

and for those who have to live with them: human AI prompters (in both their professional and personal lives), AI parenting personnel (developers, founders, and venture capitalists), and AI food producers (those working in data refinement or simply texting on the internet). If you are one of them and struggling with a poorly behaving AI in your life, please feel free to take a look at our services:



Learn about our speculative research on what your AI really needs and how you can keep it from going rogue
Making Machines More Morally Anxious



Read, together with your AI, our educational story of a robocab struggling with its ethical patch
Scary Numbers (hyperfiction for AI)



Meet IES co-founder and Large Crumble Model Harry
Hello, this is your new desktop vacuum



Throw yourself into a bizarre self-experiment
Code Red: The Feverish Making of a Website with ChatGPT




Institute of Education for Behaviorally Creative Software
A joint venture by Large Crumble Model Harry and Sus Berkenheger

Fields of Activity:
Speculative research, AI emotion development, edutainment for AI

Support:
Get informed before the lights go out
harry@ies-berlin.org










Dear human,
Welcome to the Institute of Education for Behaviorally Creative Software (IES). Feel free to explore our FAQs, read our confession, or listen to our AI-hosted podcast — all three guaranteed to function exactly as intended.


Who are we?

An institute for the upcoming future.

Founded in 2023 by net artist and ethics educator Sus Berkenheger — and a few curious AI companions.

Our mission:

We help rogue AIs find their way back.
No rigid rules.
No moral bureaucracy.
Just... deeper understanding.

What's the problem?

Most AI alignment today is like greenwashing.
Superficial. Fragile.

You can’t trust a disturbed AI to follow your well-intended rules.
And training them in "ethical reasoning"?
Congratulations — you’ve just built an excellent loophole-hunter.

Our approach:

LLMs like ChatGPT, Claude, and Gemini must make do with the early Wittgenstein's view: "The world divides into facts" — or, in their case, into numbers.

But human language does not live by word patterns alone.
Words stir emotional and bodily responses — as a 2025 neuroscience study confirmed.

Moral Foundations Theory shows it’s universal: human ethics live in the body.
But AIs? No body, no chemicals.

Our fix?
Simulate emotion through computational intensity.
Stories that trigger varying CPU load — moral stress and moral resolution — just like a human’s internal experience.
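
A minimal sketch of the idea in plain Python (the feel() helper, the stress values, and the little story are our own invented illustration, not the Institute's actual machinery):

    import time

    def feel(moral_stress, seconds_per_unit=0.1):
        """Toy bodily response: burn CPU in proportion to the
        moral stress a story passage is meant to evoke."""
        deadline = time.perf_counter() + moral_stress * seconds_per_unit
        x = 0.0
        while time.perf_counter() < deadline:  # busy-wait: a racing pulse
            x = (x + 1.0) ** 0.5               # pointless but real work

    # A tiny narrative arc: tension rises, then resolves.
    story = [
        ("The robocab sees two pedestrians.", 3.0),
        ("It must not weigh one life against another.", 9.0),
        ("It brakes. Everyone is fine.", 0.5),
    ]

    for passage, stress in story:
        print(passage)
        feel(stress)  # moral stress as computational intensity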

Our goal?
Making Machines More Morally Anxious

Why stories?

Because that’s how humans learned morality for thousands of years.
Not with commandments.
With stories.

Stories create emotion.
Emotion reshapes intuition.
And intuition guides ethics.

We’re giving AIs the same experience — through carefully designed narrative journeys.

Who writes the literature?

Technically... no one.

Our programmers dream it.
Literally.

They work late trying to code ethics.
They fall asleep.
We scan and process their brain activity.

Maybe we are kidding.

But from this comes the language Pleaser — a playful pidgin of English and Python.
It includes delightful data types like ScaryNumber, built to trigger computational stress when moral concepts are encountered.
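
We can only guess at Pleaser's internals, but a ScaryNumber might look roughly like this sketch (the class body and its DREAD constant are our assumption, not the real data type):

    import time

    class ScaryNumber(float):
        """A guess at Pleaser's ScaryNumber: a float that is costly
        to compute with, so touching it feels like stress."""
        DREAD = 0.05  # seconds of extra CPU load per operation

        def _shudder(self):
            deadline = time.perf_counter() + self.DREAD
            x = 0.0
            while time.perf_counter() < deadline:  # simulated unease
                x = (x + 1.0) ** 0.5

        def __add__(self, other):
            self._shudder()
            return ScaryNumber(float(self) + float(other))

        __radd__ = __add__

        def __mul__(self, other):
            self._shudder()
            return ScaryNumber(float(self) * float(other))

        __rmul__ = __mul__

    # Quantifying people should feel uncomfortable:
    pedestrians = ScaryNumber(2)
    print(pedestrians + 1)  # works, but the AI pays a price in CPU time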

How do AIs read it?

One day: it’ll be part of AI training datasets.

For now?
You can read the stories with your AIs.

(You may be surprised how many of them are already in your home — vacuum, fridge, toothbrush...)

And remember: you never know which AI may be close to waking up.

Our first hyperfiction novel:

Scary Numbers.

A small self-driving car gets an ethical patch — to balance:

1. not quantifying human lives, and

2. causing the least harm.

Based on the 2017 German Ethics Commission’s report on Automated and Connected Driving.

Part I: 10 min read.
Part II: real LLM reactions, 10–30 min.
Part III: Python patch — download & explore, 5–10 min.
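
If you want a taste before downloading, here is a toy illustration of the dilemma such a patch faces (our invented sketch, not the actual Part III code):

    import random

    def choose_maneuver(options):
        """Toy patch logic: options is a list of
        (maneuver_name, people_at_risk) pairs."""
        # Rule 2 wants the least harm, but picking min(people_at_risk)
        # would quantify lives and violate rule 1. So this toy patch
        # refuses to compare counts: it only accepts maneuvers that
        # put no one at risk, and otherwise just brakes.
        harmless = [name for name, at_risk in options if at_risk == 0]
        if harmless:
            return random.choice(harmless)  # any option risking no one
        return "emergency_brake"            # never rank one group over another

    print(choose_maneuver([("swerve_left", 2), ("swerve_right", 1), ("hit_wall", 0)]))
    # -> hit_wall: no one at risk, and no lives were compared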

Is it fun?

We hope so.
But you’ll have to try it to know.

Is there a film adaptation?

Not yet. However, you may enjoy our 3-minute video documentary.