We build our lives not around humans, but around machines. To cross a road, open a door, or dry our hands, we press buttons or trigger sensors so that a machine can recognize and read our movements. Our cities are built not for walking, but for driving, and our machines are taught to communicate with each other. Technological sciences have thus far focused on ‘functionality’ and on finding ways to replace humans in the workforce. Yet a new movement that applies behavioral science to make machines human-centric hopes to change the path of technological innovation. Behavioral Artificial Intelligence aims to develop machines that aid rather than replace people, that improve communication, and, most importantly, that read and understand human behavior and respond accordingly.

This is no simple feat, particularly given our inherent irrationality: only 53% of our decisions reflect our intentions. The field of behavioral analysis seeks to address this ‘intention-action gap’ by analyzing empirical data about human cognition and behavior, rather than relying on assumptions of rational behavior. Combining behavioral analysis with the technological sciences allows rich data to be collected and processed to identify predictable patterns in our seemingly irrational behavior. This has led to AI systems that interact directly with humans or that need to understand human behavior to inform further decision-making.

Such a system could, for example, be implemented in autonomous vehicles to read pedestrian body language and make accurate predictions about future behavior. An AI trained purely on a physics model may determine the velocity of a runner crossing the street and make physical calculations accordingly. If it is also equipped with a behavioral intelligence model, however, it may be able to read the runner’s body language and determine whether they plan to slow down or stop entirely. This added layer of insight could improve coordination on roads and make them safer.
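To make the difference concrete: a purely kinematic predictor extrapolates from velocity alone, while a behaviorally informed one scales that extrapolation by an intent estimate drawn from pose cues. The sketch below is purely illustrative, not a real perception system; the pose features (`torso_lean_deg`, `head_toward_vehicle`) and the weights are hypothetical assumptions standing in for what a trained behavioral model would learn.

```python
from dataclasses import dataclass

@dataclass
class PedestrianObservation:
    positions: list            # (time_s, distance_m) samples along the crossing axis
    torso_lean_deg: float      # hypothetical pose feature: forward lean of the torso
    head_toward_vehicle: bool  # hypothetical gaze cue: pedestrian looking at traffic

def physics_prediction(obs: PedestrianObservation, horizon_s: float = 1.0) -> float:
    """Extrapolate future position from the last estimated velocity alone."""
    (t0, x0), (t1, x1) = obs.positions[-2], obs.positions[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * horizon_s

def behavioral_adjustment(obs: PedestrianObservation) -> float:
    """Return a 0..1 intent score scaling expected velocity.

    The thresholds and weights are illustrative assumptions only.
    """
    score = 1.0
    if obs.torso_lean_deg < 5.0:   # upright posture: runner may be easing off
        score -= 0.4
    if obs.head_toward_vehicle:    # checking traffic: runner may yield
        score -= 0.3
    return max(score, 0.0)

def predict_position(obs: PedestrianObservation, horizon_s: float = 1.0) -> float:
    """Kinematic extrapolation damped by the behavioral intent score."""
    (t0, x0), (t1, x1) = obs.positions[-2], obs.positions[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * behavioral_adjustment(obs) * horizon_s
```

For a runner moving at 3 m/s who is upright and looking at the vehicle, the physics-only model projects the full stride ahead, while the behaviorally adjusted model projects a much shorter advance, reflecting the inference that the runner intends to slow or stop.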

These systems do, however, come with risks. It is important that they make predictions based on behavior rather than on the characteristics of individuals, which could otherwise introduce bias. Furthermore, given the increased use of visual sensors, the models should have no facial recognition capacity, should not capture identity, and must comply with the General Data Protection Regulation (GDPR). Aside from the GDPR, protection in this area is virtually non-existent, and, as is often the case with AI technologies, regulation is lagging far behind. Worryingly, the field is still so new that there has been very little expert discussion of the dangers this type of technology could present.

Yet when we face shortages such as the global lack of mental health workers, with half of the world’s countries having only four mental health workers per 100,000 people, AI can seem like our only hope, and the potential dangers are dismissed. One study found that by tracking the facial expressions of individuals asked to watch videos of stereotypically ‘happy’ things, an AI could accurately diagnose depression. Another found that by decoding therapy sessions and comparing the techniques used with the outcomes patients reported, a system could not only give clinicians immediate feedback but could potentially be used to train an AI therapy chatbot. Such chatbots could provide some of the essential elements of care to people who cannot afford therapy or simply have no access to it in their part of the world. Although research on these chatbots is still at an early stage, preliminary results have shown successful patient outcomes. Before chatbots are universally adopted, however, the next steps include creating universal standards of reporting, creating a universal evaluation standard for chatbots, and increasing transparency.

Analyzing behavioral patterns gives us remarkable insight into the way the human mind works. AI systems trained on these patterns can help us detect behavioral anomalies and trace their causes and solutions, not to mention harmonize interactions between humans and machines. However, freely providing our behavioral patterns could also allow them to be used as a powerful tool against us. Ensuring that morality and ethics remain at the forefront of all computer-science-driven approaches is essential going forward. We must also remember that it is our responsibility to recognize how valuable our individual behavioral patterns are and to be careful with whom we share them.

Written by Celene Sandiford, smartR AI
