
AI Emotion and Collaboration Theory

In an April 2021 episode of the Joy of X podcast, Professor Melanie Mitchell discussed the importance of emotion to AI, describing how "A big part of modern AI is an area called reinforcement learning. In fact, that’s a big part of the way that AlphaGo and AlphaZero work, where the machine does something and it gets a “reward” if it does a good thing, but where the reward is just a number, right? It’s just like I add a number to my reward register. So, that’s kind of a very unsatisfying simulation of emotion." Mitchell went on to set out how "no one really knows the answer to whether you need something more than that or not ... there’s all kinds of other things that are important to us humans, like our social interactions ... there’s an area called imitation learning, where the machine learns by trying to imitate a human. But to do that, it has to have a model of the human, and try and understand what is the human doing at some conceptual level that will allow the machine to do it. So, it’s all very primitive."
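To make the "reward register" point concrete, here is a minimal sketch of what scalar reward accumulation looks like in reinforcement learning. The toy environment and random policy are purely hypothetical placeholders - nothing here reflects AlphaGo or any real library - but the essential mechanism is visible: the only feedback the agent ever receives is a number added to a running total.

```python
# Minimal sketch of the "reward register" Mitchell describes: in standard
# reinforcement learning the only feedback is a scalar added to a running
# total. The environment and policy below are hypothetical placeholders,
# not any real system's API.
import random

def toy_step(state, action):
    """Toy environment: +1 reward if the action matches the state's parity."""
    reward = 1.0 if action == state % 2 else -1.0
    return state + 1, reward

reward_register = 0.0   # the "number" the agent accumulates
state = 0
for _ in range(10):
    action = random.choice([0, 1])           # a trivial random policy
    state, reward = toy_step(state, action)
    reward_register += reward                # "I add a number to my reward register"

print(reward_register)
```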


If you're trying to understand what humans are doing at a conceptual level, it's important to separate outcome from process. Human feelings run the gamut from fear, anxiety, lust, and other limbic-system responses to environmental stimuli through to the most fully and uniquely human experiences of love, spirituality, and mysticism. All these outcomes, however, from lizard-brain level to profound and exalted, evolved along with the rest of us, and all humans have them. They are therefore likely to confer selective advantage. Why?

It's not enough to say that humans are social animals and refer back to the commonly accepted but still poorly understood link between thought, communication, and social culture. Intelligence co-evolved with language and tribal behaviours in humans because working together increased our ability to survive, in ways ranging from division of labour to the formation of trust relationships and shared adherence to plans of action. In other words, early humans learned to collaborate, and it seems likely that human emotions developed along the way - if for no other reason than a core understanding that being isolated from the pack removed collaboration as an option, which for most early humans would have been equivalent to a death sentence.


It is not hard to track any emotion back to this single core motivation - fear of being excluded, love as a binding force, anger at your role being usurped, envy of someone whose place is more secure, happiness in shared experiences that bond the pack, and so on. From this perspective, emotion is an inevitable outcome of recognising that collaboration is fundamental to survival. Could we replicate this evolutionary process in AI - help computers learn to value working together? And if so, would this help them truly feel emotions, rather than just perform an unrealistic, low-value simulation by adding a number to their reward register?
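For contrast, the most naive way to encode "valuing working together" within today's mainstream approach would itself still be just a number in a reward register - for example, blending each agent's individual outcome with a shared group outcome. The sketch below is an illustrative assumption only (the weighting and example values are invented, not a tested design), and its thinness is exactly the point: reward shaping alone looks nothing like genuinely valuing collaboration.

```python
# A naive "collaboration as reward shaping" sketch: each agent's scalar
# reward blends its own outcome with the group's average outcome. The
# weighting and the example outcomes are illustrative assumptions only -
# and the result is still just a number in a reward register.

def shaped_rewards(individual, weight_shared=0.5):
    """Blend each agent's own outcome with the group's average outcome."""
    group = sum(individual) / len(individual)
    return [(1 - weight_shared) * r + weight_shared * group for r in individual]

# The third agent scored poorly on its own, but shares in the group's success.
print(shaped_rewards([1.0, 0.8, -0.2]))   # approx. [0.77, 0.67, 0.17]
```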


Given this seemingly obvious link - and given that AI research began in 1956 and is now a global industry worth over $300bn per year - a natural question is why no one has done this already. Part of the reason may be that collaboration is still commonly conflated with communication, especially by computer scientists. In the 1970s, Maturana and Varela developed the concept of autopoiesis to describe how biological systems function. Famously linked by Margulis and Lovelock to Gaia theory, the notion is that living systems such as humans do not interact with each other by encoding information and sharing it - the basis of cybernetic theory, and hence of mainstream computing. Rather, living systems trigger behaviours in one another. Something in the environment of an organism causes it to react, such as detecting a movement or sound made by another organism, and it responds through a physical process that is part of its internal structure. “The frog with optic fibers responding to small moving dark spots does not have a representation of flies.”


In other words, what is being passed is not information but a performative speech act - the trigger for another organism to change its behaviour. A prairie dog screams not to inform others in the pack that an eagle is nearby but to cause them to run for cover. My own research, now focused on a socio-economic and socio-political model for antifragile communities, started with a formal theory of collaboration in which autopoiesis is fundamental. Human Interaction Management is an axiomatic model based on axioms such as the 5Cs of Human Interaction Management (Commit, Communicate, Contribute, Calculate, Change) and the 4 types of conversation (Context, Possibility, Disclosure, Action). In 2005, I set out a mathematical treatment of these ideas using a combination of Petri nets and the pi-calculus. Now, I would consider extending this with other mathematical techniques, such as a topological model based on network theory.
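As a rough illustration only - not the published 2005 formalism - a Petri-net-flavoured encoding of one small fragment of such a model might look something like the Python sketch below, where places stand for conversational states, transitions stand for acts (named loosely after two of the 5Cs), and a transition can fire only when its input places hold tokens.

```python
# A hedged, minimal Petri-net-style sketch: places hold tokens representing
# conversational state, and transitions consume and produce tokens. The place
# and transition names are illustrative assumptions loosely echoing the 5Cs,
# not the 2005 Human Interaction Management formalism itself.

places = {"committed": 1, "contributed": 0, "changed": 0}

# transition name -> (places consumed, places produced)
transitions = {
    "Contribute": (["committed"], ["contributed"]),
    "Change":     (["contributed"], ["changed"]),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(places[p] > 0 for p in inputs)

def fire(name):
    if not enabled(name):
        raise ValueError(f"{name} is not enabled")
    inputs, outputs = transitions[name]
    for p in inputs:
        places[p] -= 1
    for p in outputs:
        places[p] += 1

fire("Contribute")
fire("Change")
print(places)   # {'committed': 0, 'contributed': 0, 'changed': 1}
```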


Whatever tools are used to express axiomatic collaboration theory mathematically, could the underpinning formal approach inform attempts to develop emotional behaviours via Machine Learning? It seems worth trying, if for no other reason than to provide an alternative to the current AI mainstream, which is heading in a statistical direction. AI has, as Mitchell observed, "become a victim of its own success, in that now the methods are much more akin to statistics than to psychology." This brings dangers as well as limitations. A statistician can make a case for anything, up to and including autarchy, repression, or even genocide. That doesn't mean global society wants or needs those things.


If we wish to avoid accidentally creating Skynet and Terminators, we owe it to ourselves to teach computers how to feel, and to do it as soon as possible. The most direct route towards this may be the one we humans took ourselves - by helping them learn the importance of, and the mechanisms for, authentic collaboration.
