Mood: contemplative
Entry tags: computing
Emotions
So the main difference between LLM and human reasoning appears to be the lack of emotional agency shaping the output. LLMs do have "temperature", but that is only a small part of emotional agency. LLMs can't act with pure hatred; they stay rational even when instructed to act enraged. In humans that is generally seen as a good thing, but for an LLM it means a partial inability to reflect on human emotions, to experience (as a human does) something not in its training set, and to feel and react the way a human truly would.

It doesn't appear to be hard to add emotions to an LLM, since the major part of the puzzle (how to model language) is already solved. But it matters, because emotions are required to process _new_ information and to develop as an individual, not just a language model.
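To make concrete how small a knob temperature really is: all it does is rescale the model's output logits before the next token is sampled, making the pick more or less random. A minimal sketch of that idea (the function and the toy logits below are illustrative, not taken from any particular library):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Pick a next-token id from raw logits, rescaled by temperature.

    temperature -> 0 : nearly deterministic (almost always the top token)
    temperature  = 1 : sample from the model's own distribution
    temperature  > 1 : flatter distribution, more "surprising" picks
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy example: three candidate tokens with logits [2.0, 1.0, 0.1]
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.7))
```

Whatever one makes of the emotions argument, this knob only redistributes probability over tokens the model already ranked; there is no persistent internal state that colors what comes next, which is the part an "emotion" would have to supply.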