Prologue

It was a regular Sunday walk, but when I saw this guy in a car, my thoughts turned to an unexpected question about robots.

The car stood out from the normal crowd in that area. It’s usually Porsches, Lamborghinis, Ferraris, and a bunch of Teslas. But this was a really old Ford, an old truck.

Its color was fantastic, like a painting. It was made of several layers of different colors, all of them decayed, creating fluid gradients between blue, black, and rusty scratches.

It was cold outside, but he was driving with the windows open, probably because the car wasn’t functioning properly and the heat from the engine was going straight into the cabin. The guy was listening to the kind of hard-core metal music that makes you want to jump into the mosh pit. He had a short haircut, the kind you choose not because it’s perfect for your face type, but because it’s the cheapest option.

This wasn't one of those "life is beating me and I'm trying to survive as best I can" situations. It was more like, "I don't care what people think, this is the best version of me". He appeared unapologetically himself.

Comfort is coherence between internal model and external experience

So what makes him feel comfortable driving this car? And why do I feel discomfort imagining that car as my daily driver?

Is it about our childhood? What we saw in our parents when we were growing up? What our parents drove, how they talked about cars? Or maybe the guy is just too lazy to fix his AC.

So what makes us feel comfortable in one situation versus another? How do we develop this feeling? Perhaps, growing up, we were praised or mocked for appearances, and so we internalized which things are "acceptable".

Then I transferred this question to robots. How do we train robots to feel comfortable about some decisions and not others? But first

How to explain social norms

and in which brain regions are they stored?

Various brain regions are definitely involved in this process. Specifically, we will look at regions related to social cognition, emotional regulation, and decision-making.

Mirror Neuron System (MNS). There is no single location in the cortex for this system (mirror neurons have been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex, and the inferior parietal cortex). The MNS is associated with imitating others' actions and plays a role in internalizing (making part of one's own beliefs) social norms through observational learning.

The ventromedial prefrontal cortex (vmPFC, part of the prefrontal cortex) is crucial for decision-making and moral reasoning. It helps us adhere to social norms (follow the rules).

The anterior cingulate cortex (ACC; it consists of Brodmann areas 24, 32, and 33, where 24 and 32 lie in the PFC and 33 belongs to the cingulate cortex, an internal part of the limbic system) is involved in reward anticipation and error detection, so it plays a role in detecting norm violations and adjusting behavior to align with social expectations.

Another part of the limbic system is the amygdala. Its response to the emotional aspects of a situation can push us toward adhering to or deviating from social norms.

The hypothalamus can produce oxytocin, which influences social bonding and trust and plays a role in promoting social acceptance.

The basal ganglia, which produce dopamine, are involved in habit formation and reinforcement learning, so repeated social behaviors can become a norm; a toy sketch of this reinforcement idea follows this overview.

The superior temporal sulcus (STS) has been shown to produce strong responses when subjects perceive stimuli in studies of theory of mind. This suggests that the STS is crucial for recognizing social cues and understanding others' intentions.

The interaction between these regions forms a network that allows individuals to perceive, understand, and conform to social norms.
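
To make that reinforcement-learning point concrete, here is a toy sketch of my own (the action names, rewards, and learning rate are made up, not taken from any neuroscience model): a dopamine-like prediction-error signal gradually turns a repeatedly rewarded social behavior into a strong habit.

```python
# Toy sketch (my own illustration, not a model from the literature):
# a dopamine-like reward-prediction-error signal gradually turns a
# repeatedly rewarded social behavior into a strong "habit".

habit_strength = {"greet_neighbor": 0.0, "ignore_neighbor": 0.0}
alpha = 0.1  # learning rate, a stand-in for dopamine-modulated plasticity

def reinforce(action: str, reward: float) -> None:
    """Nudge the habit strength toward the social reward just received."""
    prediction_error = reward - habit_strength[action]  # the "surprise" signal
    habit_strength[action] += alpha * prediction_error

# Repeatedly smiled at for greeting, mildly snubbed for ignoring:
for _ in range(50):
    reinforce("greet_neighbor", reward=1.0)
    reinforce("ignore_neighbor", reward=-0.2)

print(habit_strength)  # greeting ends up far stronger: the behavior became the norm
```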

Then, the robots

How do we train robots to feel “comfortable” in some states and not others?

What is the “robot’s childhood”? One could define internal goals and value structures, and train it on goal-aligned tasks where some paths feel cheap and good, and others costly and conflictual. Discomfort would arise when there is a mismatch with the internal model and prediction error increases. But if a robot just optimizes reward, it might do things that look wrong to us. That guy in the truck is comfortable because his environment fits his model of the world. I feel discomfort imagining that truck because it violates mine.
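
Here is a minimal sketch of that distinction, assuming nothing beyond toy feature vectors I invented: reward is whatever the task pays out, while discomfort is measured as the prediction error between the robot's internal model of "normal" and what it actually observes.

```python
import numpy as np

# Toy sketch with made-up names and numbers: "discomfort" is not low reward,
# it is the mismatch between what the robot's internal model expects
# and what it actually observes.

class ComfortModel:
    def __init__(self, expected_state: np.ndarray):
        self.expected_state = expected_state  # internal model of a "normal" situation

    def discomfort(self, observed_state: np.ndarray) -> float:
        """Prediction error: distance between expectation and observation."""
        return float(np.linalg.norm(observed_state - self.expected_state))

# Say the robot's "childhood" taught it to expect shiny, quiet, rust-free cars.
# Feature vector: [shiny, loud, rusty] -- entirely arbitrary.
robot = ComfortModel(expected_state=np.array([0.9, 0.1, 0.0]))

old_truck = np.array([0.1, 0.9, 1.0])
tesla = np.array([0.8, 0.2, 0.1])

print("discomfort with the truck:", robot.discomfort(old_truck))  # large mismatch
print("discomfort with the Tesla:", robot.discomfort(tesla))      # small mismatch
```

The point of the sketch is that reward and discomfort are separate signals: an action can score well on the task and still register as "wrong" to the internal model.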

So comfort isn’t about reward. It is the absence of conflict between internal representations and external reality. AGI won’t just be built from optimization. It will need its own model of the world, and even a capacity for shame: model mismatch under social constraints.

Would you want a robot that’s comfortable in situations you aren't? Or would you build it to share your discomforts?

