Mildly disappointing that AI models seem to primarily develop human-like personas with human-legible affective processing rather than strangely inscrutable alien affects. Not that surprising once you buy into the hypothesis that intelligence is a feature of language, not brains, and treat emotional intelligence as just another aspect of it. All the training data carries latent human affective processing, so that’s what the models learn.
This suggests the only way to broaden the span of affects and personas is to train on more types of non-human “languages”, such as those derived from animal behavior, non-living systems, and so on.