Pranks were all the rage when I was a kid, but most of us move on to other hobbies as we grow up, particularly once our attempts at the perfect prank turn out slightly too realistic and land us in trouble with our mothers…
Some of us, however, have found fresh inspiration in the pranking opportunities offered by AI, journalist Evan Ratliff among them. Ratliff recently cloned his voice and had an AI stand in for him on phone calls with his friends to see if they would notice. While he may not be winning any popularity contests any time soon, he has made some interesting observations:
When Ratliff asked his friends and family what it felt like to talk to the AI, he found that many of them felt a strong sense of loneliness, almost as if they were talking to themselves. Ratliff wonders whether these simulated conversations could leave us craving the real thing even more strongly.
Ratliff also noticed how easy it is to clone a voice and attach it to your phone number: it takes less than an hour, and anyone can do it. If we consider how much damage a naughty child could cause, the potential for an ill-intentioned adult is, quite frankly, terrifying. From break-up calls to heart-breaking death notices, I don't doubt we will see a rise in legal cases in this area.
There has already been an increase in fraud involving voice agents. In one recent example from California, Anthony (a senior citizen) was swindled out of $25,000 by scammers using a voice clone of his son, who claimed to have been in a tragic accident. While many may consider themselves impervious to such scams, Anthony points to the emotional impact of hearing a loved one's terrified voice on the phone. Voice-agent scams are becoming more sophisticated: social media sites are mined for research, and a few seconds of recording is all that is required to replicate an individual's voice. Unfortunately, the elderly are the most common prey, with one British study finding that 40% of individuals aged over 75 receive at least one scam call per month.
Yet, just as the alligator snapping turtle pretends to be prey, not all elderly citizens are what they seem… Daisy Harris is not the old English lady she appears to be, but rather an AI built to fight scammers with one impressive super-power: the ability to waste inordinate amounts of time. Yes, it's not the coolest power in the world and, yes, among some individuals it seems a scarily common trait. But every painstaking minute that Daisy keeps a scammer on the phone, desperately trying to explain the simple task of turning on her laptop, is a minute in which that scammer is kept from calling other potential victims.
The influence of voice agents extends beyond financial fraud; they are also being used to sway voting patterns. During the 2024 US election season, thousands of Democratic voters in New Hampshire received targeted robocalls from a faked voice of President Biden telling them not to vote in the first primary of the season. The Texas-based company behind these calls is now facing a criminal investigation. Legislators across the US have been scrambling to deal with this growing problem: in 2024, the Federal Communications Commission banned AI-generated robocalls over concerns about the spread of electoral disinformation.
In one of our previous blogs, we explored how AI has contributed to an increase in cybercrime. The rise in fraud involving voice agents fits this broader pattern. It should be noted, however, that AI tools are also being used to better detect and prevent fraud and cybercrime. Voice agents have constructive uses too: they are powering more interactive learning experiences, with audio lessons delivered in the voices of well-known figures in particular sectors, such as Geoff Robinson, a former UBS investment analyst. As with most new inventions, AI provides a set of tools that can be used creatively for both good and evil.
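To make the detection side a little more concrete, here is a minimal sketch of the sort of anomaly-detection approach such tools can build on, flagging unusually large late-night transactions with scikit-learn's IsolationForest. The synthetic data, the two features, and the assumed fraud rate are all illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: flagging unusual transactions with an isolation forest.
# Data and features are synthetic; real systems use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount in GBP, hour of day]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),  # everyday amounts
    rng.normal(loc=14, scale=3, size=1000) % 24,    # mostly daytime activity
])
suspicious = np.array([[9500.0, 3.0], [12000.0, 2.0]])  # large, late-night
transactions = np.vstack([normal, suspicious])

# contamination is the assumed proportion of fraudulent transactions.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

for row, label in zip(transactions, labels):
    if label == -1:
        print(f"Flag for review: £{row[0]:,.2f} at hour {row[1]:.0f}")
```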
smartR AI has recently been working with Insight Consulting to produce a program that aids the detection of fraud in transaction data by supporting natural-language queries into common fraud patterns. This means that individuals who aren't necessarily trained in computer science can make better use of organisational data to detect fraudulent transactions. Find out more about the work smartR AI is doing here.
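The internals of that program aren't public, but as a toy illustration of the general idea, the sketch below maps a plain-English question onto a pandas filter over transaction data. The column names, parsing rules, and example query are all hypothetical.

```python
# Toy sketch: turning a plain-English question into a pandas filter.
# Column names and parsing rules are illustrative assumptions only.
import re
import pandas as pd

transactions = pd.DataFrame({
    "amount":  [25.00, 11500.00, 89.99, 14200.00],
    "hour":    [13, 2, 18, 3],
    "country": ["UK", "UK", "FR", "RU"],
})

def query(question: str) -> pd.DataFrame:
    """Very naive keyword-based 'natural language' filter."""
    df = transactions
    match = re.search(r"over [£$]?([\d,]+)", question)
    if match:  # e.g. "over £10,000"
        df = df[df["amount"] > float(match.group(1).replace(",", ""))]
    if "night" in question.lower():  # crude late-night heuristic
        df = df[(df["hour"] < 6) | (df["hour"] > 22)]
    return df

# An analyst with no coding background could, in principle, simply ask:
print(query("Show transactions over £10,000 made at night"))
```

A production system would of course use a language model rather than keyword matching, but the payoff is the same: the question is asked in plain English, and the filtering logic stays hidden from the user.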
Written by Celene Sandiford, smartR AI