Nola by Studio Drift (used with permission)

Discovering Emotional Engines

Lucia Komljen
Nov 30, 2017 · 6 min read


What will it take for us to trust emotionally intelligent machines with data & decisions?

“It’s your voice of reason — keeps you in check, pushes you, but also knows what’s best for you.” — Adnan, UK

The Shape of Things to Come
Everyday services will soon embrace a layer of functionality akin to the human quality of emotional intelligence, enabling them to be responsive to what their users are feeling. We can also assume the emergence of another breed of services that will seek to regulate or manipulate emotion, as and when — and to what end — the user requires or desires them to do so.
Both of these manifestations of emotionally intelligent technologies will, in theory, also be able to act on their users’ behalf, automating daily tasks and decisions. Moreover, both promise a future in which their adopters will find themselves in a constant state of equilibrium, as their ever-attuned objects and services tirelessly hyper-personalise their environments.

That, at least, is the vision for the future as seen through the lens of the technology. To resist its obvious trappings, we wanted to explore emotional intelligence through the lens of the people who may, one day, be using versions of these technologies.

Therefore, to help us identify opportunities for human-centric innovation in this space while building our understanding of the moral and ethical implications this would entail, we wanted to know — what do people think and feel about emotionally intelligent technology? What do they need or want it to do? And crucially — what will it need to do, or prove, to make sure it can be trusted with data and decisions?

Method & Initial Learnings
We recruited an optimistic, 20-strong mix of young men and women in the UK and the US, all highly familiar with virtual personal assistants (VPAs, arguably a predecessor of emotionally intelligent ‘response’ services), to engage in topical discussions surrounding, and design tasks utilising, emotionally intelligent technology. In turn, a 1,200-strong panel across Spain, Germany and the US was consulted to quantify sentiment and needs.

As we discovered early on, our hypothesis that emotional intelligence would be a layer on all services with automation capabilities, and thereby an evolution of AI, was rather naive. In its application across everyday services, the benefit of functional intelligence was perceived to relate to savings (of time, money and effort), while the benefit of emotional intelligence related to well-being. Accordingly, it should not be assumed that AI-powered services must necessarily evolve to become emotionally intelligent, nor that any service with emotional intelligence needs to be able to fully automate tasks and decisions.

For example, emotional intelligence was rejected outright in the context of e-commerce, where it was seen to exploit a highly vulnerable data set. Current e-commerce mechanics have already made it painfully easy to buy without thinking (rationally, or at all), and the aftermath of irrational spending sprees was frequently associated with guilt and regret. Unsurprisingly, the prospect of an emotionally aware agent sitting on top of an e-commerce platform evoked fears of losing financial control, and our participants dismissed it.

Response
An explosion of choice across content platforms has increased the time spent on search and discovery, and many of today’s recommendation and curation efforts seem to compound, rather than solve, the overload.

The prospect of letting their emotions filter content in real time, under the guise of ‘response’, was therefore seen as a welcome evolution by our respondents. In fact, it was even viewed as having the potential to bypass the trappings of content-based filter bubbles (“It can open doors to find new things that might not have been found or experienced before”). Its potential to enhance or mitigate acute emotions in the process also garnered interest.

Regulation
Considering the compounding stresses of modern life, emotionally intelligent technologies’ potential to regulate emotion was met with great enthusiasm. This is hardly surprising, given that our contemporary age is often referred to as an anxious one (1 in 5 people in the UK and 4 in 15 globally can attest to this). Career-, financial-, social- and health-related stresses are playing out against a backdrop of faltering governance, ideological polarisation, and ever-widening wealth gaps. Our previous work on the future of connectivity highlights the role connected technologies play in this stress cycle as well (see ‘absorption’ and ‘overload’).

Appetite for emotional regulation from technology has already been created through the popularity of yoga, secular adaptations of mindfulness (including clinical applications like MBSR, Mindfulness-Based Stress Reduction), and ‘braintech’, an emerging genre of hardware and software gaining traction and investment to the tune of half a billion dollars thus far.

Our participants revealed that beyond real-time regulation (specifically around stress and anxiety), they desired help from the technology to build psychological resilience, cultivate willpower in the face of unhelpful / nutritionally poor / unhealthy temptations, and to break bad habits — thus making the case for programs that complement data-based emotion ‘triage’ with features that facilitate mind transformation over time.

The Mechanics of Building & Sustaining Trust
Emotionally intelligent services will require people to generate and disclose a highly sensitive set of personal data. Our participants spoke about the vulnerability they felt when tracking their own emotions over several days as part of an experiment we conducted — not least because it had the potential to undermine their investments in creating idealised versions of themselves for the world to see across multiple social networks (online and in real life).

Therefore, any innovation efforts around emotion data must ensure this vulnerability is acknowledged and respected in the design of how data is collected, handled and secured. Choices over how much or how little a user shares, proactive transparency (where, what, why and how, communicated simply and openly), time-limited storage, and corporate accountability will dictate whether a prospective user discloses their data in the first place.

Trust is built and sustained by a myriad of factors, and the experience of a product over time plays a key role. Lessons from human-to-human relationships provide a useful framework for how a service experience might unfold (mirroring → knowing → caring). Designing a service with a relevant archetype in mind, complete with desired characteristics and personality traits, will encourage people to open up (mothers, gurus, therapists and pets were the dominant archetypes to emerge from our participants’ design efforts).

Towards ‘Emotional Medicine’
“My dream is…[for] a new field of medicine to be established…something like emotional medicine.” — Avi Yaron, med-tech pioneer and CEO of Joy Ventures, speaking at Wired 2015

The opportunities for applying emotional intelligence to improve content discovery are vast, given the sheer number of platforms and the multi-billion-dollar industries behind them. Likewise, positioning it as a means to amplify human potential, as some early products and services like Thync or Feel have demonstrated, is lucrative as people strive to make the most of their time and increase their productivity.

When taking into account its abilities to measure, make sense of and influence emotions, the potential for this technology to have a far greater socio-cultural purpose is evident, not least through the eyes of its prospective future users. To achieve this, the medical community’s quest to connect mind and body must formally give way to a new mindset in the field at large. Avi Yaron’s vision is a sound start — while ‘mental health’ is limited to issues that require solutions, ‘emotional medicine’ (or well-being) is an ongoing maintenance process, for everyone.

Ethics
Social networks and behavioural analytics companies have recently come under fire for inferring emotion from behavioural data for commercial and/or political gain, unbeknownst to the majority of their users/targets, and with grave socio-cultural consequences. Ethics and legislation typically lag behind; yet once they have caught up and been translated into a format and language easily understood by regular citizens, they will play a key part in ensuring these practices are transparent and easy to opt out of.

As with all new technologies, it’s tempting to rush progress and learn through trial and error. But there is also a strong case to be made for first truly understanding how people want this highly sensitive data set to be handled, and indeed how it feels to generate it.

Finally, the context in which emotions manifest must be understood from multiple angles (namely, the brain, consciousness and the mind) before accelerating innovation in this space. In his recent interview with Exponential View, Yuval Harari aptly warns: “if we don’t understand the internal ecosystem, the result may be that we destabilise or unbalance it the same way we have unbalanced the external ecosystem.”


Lucia Komljen

Sociocultural researcher, innovation strategist, ex-generative AI startup CSO. A collection of essays based on my research into what people expect from tech.