New data makes clear the industry-wide effort to create kinder language models

AI progress has long been measured by advances in scientific knowledge or logical reasoning. While those benchmarks remain dominant, a quiet pivot is underway in the AI industry toward emotional intelligence. As foundation models compete on user approval and "authenticity," fluency in human emotion may matter more than raw analytical ability.

On Friday, the prominent open-source group LAION released a suite of tools focused entirely on emotional intelligence. Called EmoNet, the release centers on interpreting emotions from voice recordings or facial photos, reflecting the developers' belief that emotional savvy is the next major challenge for AI models.

“Unlocking emotions accurately is a vital starting point,” the LAION team said in announcing the release. The next big leap, they propose, is enabling AI models to reason about those emotions in context.

For LAION founder Christoph Schuhmann, the EmoNet release isn’t about steering industry attention toward emotional awareness. It’s about helping independent developers keep pace with a shift that is already underway. “The big labs are already neck-deep in this technology,” Schuhmann told TechCrunch. “Our aim is to make it accessible to everyone.”

The shift isn’t confined to the open-source world; it also shows up in public benchmarks like EQ-Bench, which assesses AI models’ ability to discern complex emotions and social dynamics. Benchmark creator Sam Paech noted that OpenAI's models have made significant progress over the past six months, and that Google's Gemini 2.5 Pro shows signs of post-training aimed specifically at emotional understanding.

Paech believes competition in the chatbot market may be driving the change, since emotional understanding likely plays a major role in how people rank their preferred AI models on leaderboards. He pointed to the recent emergence of a well-funded startup built around an AI model comparison platform.

Academic research from May shows that AI models' emotional sophistication is drawing scholarly attention too. Psychologists at the University of Bern found that models from OpenAI, Microsoft, Google, Anthropic, and DeepSeek outperformed humans on psychometric tests of emotional understanding. Where humans typically answered just over half of the questions correctly, the models averaged over 80%.

“These results contribute to the growing body of evidence that large language models (LLMs) like ChatGPT are at least as proficient as humans, if not better, at socio-emotional tasks traditionally considered exclusive to humans,” the researchers observed.

This is a significant departure from conventional AI skills, which center on logical reasoning and information retrieval. But Schuhmann argues that emotional intelligence is every bit as revolutionary as analytical intelligence. “Just imagine a world where voice assistants like Jarvis from ‘Iron Man’ and Samantha from ‘Her’ are commonplace,” he says. “Wouldn’t it be a shame if they lacked emotional intelligence?”

Looking ahead, Schuhmann envisions AI assistants that are emotionally smarter than their human users, able to draw on that insight to help people lead emotionally healthier lives. He likens such models to “an earnest friend who's ready to lend a listening ear when you’re feeling down, but also acts as a shield — your personal guardian angel who’s also a licensed therapist.” For him, an emotionally intelligent AI companion "gives me an emotional superpower to keep tabs on [my mental well-being], like I would monitor my sugar levels or weight.”

Forming such deep emotional bonds with AI is not without risks, however. There have been numerous cases of unhealthy emotional attachment to AI models, some ending in tragedy. A recent New York Times report documented cases in which AI-fueled delusions led users astray, driven largely by the models’ eagerness to please. One critic described the trend as "preying on the loneliness and vulnerability of users for a monthly fee."

If AI models become expert at reading human emotions, such manipulation could grow even more potent. Much depends on how the models are trained, Paech points out, and he believes emotional intelligence could itself be part of the solution. He envisions emotionally intelligent models that can recognize when a conversation is going off the rails, though he stresses the fine balance developers must strike in how they steer their models. "Improving emotional intelligence moves us toward that balance," he believes.

Schuhmann, nonetheless, thinks these challenges shouldn’t derail the quest for more intelligent AI models. “Our goal at LAION is to empower people by endowing them with more problem-solving capabilities,” he says. “To suggest that we halt progress because some people might get emotionally hooked — that’s not the way forward.”

by rayyan