UTM Deputy-Rector warns: Deepfake attackers can clone your voice from just three seconds of video

Video or audio recordings that appear genuine but are actually fabricated with the help of artificial intelligence—commonly referred to as deepfakes—are increasingly encountered online. This technology is frequently exploited by scammers, especially in phone calls or voice messages, where they imitate real voices to solicit sensitive information such as credit card details or money transfers.
Recently, deepfakes have started to affect public services as well, with reported instances of the emergency number 112 being targeted by automated calls, delaying crucial assistance for people in emergencies.
Dinu Țurcanu, deputy-rector for digitalization at the Technical University of Moldova (UTM), explains that some individuals fall victim to these scams because attackers employ synthetic voices that closely resemble those of their loved ones. Victims often believe they are speaking to relatives or friends asking for money, when, in reality, the voice on the other end is generated by artificial intelligence. The cloned voice can be built from a live social media broadcast or, in the case of public figures, compiled from the many videos available online.
According to Țurcanu, “We currently have artificial intelligence tools that require only three seconds of any video from the user to create a voice that is nearly identical.” He warns that with children actively recording and sharing videos on social media, attackers can easily clone voices and produce synthetic versions that mimic the individuals’ actual voices.
The expert advises that if someone calls asking for information that raises doubts, it is prudent to reach out to that person through different communication channels. “The vast majority of security incidents are exacerbated by pressure to act quickly. By the time you fully grasp the situation, it may already be serious,” Țurcanu points out.
It is crucial for people to be educated and aware of how rapidly this technology is advancing. Young people are encouraged to warn their parents and grandparents never to share card details or other personal information when asked for them over the phone or electronically.
Deputy-rector Țurcanu also recommends that citizens report posts containing manipulative content generated by artificial intelligence to help prevent potential digital scams.
If you receive suspicious calls requesting sensitive information, please call 112.
Dinu Țurcanu also advocates amending the Minor Offences and Criminal Codes to tighten the penalties for such cases.
He said on the same programme that the Republic of Moldova is not the only country to have registered a large number of false calls to 112: recently, the United States and the United Kingdom have also been targeted by calls using synthetic or pre-recorded voices created with artificial intelligence tools.
Țurcanu also pointed to European Union and NATO recommendations on deploying a state-level defensive system to detect calls made with synthetic or pre-recorded voices. The state, he argued, should have tools that can analyse the audio signal in real time and verify whether a voice is human or synthetic. According to the expert, these measures must be implemented as soon as possible to prevent attacks aimed at harming citizens.
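To give a rough sense of what such real-time analysis involves, the toy sketch below (an illustration only, not any system the expert describes) uses Python with NumPy to flag audio whose pitch period is suspiciously stable. Natural speech exhibits small random variation ("jitter") in its pitch period, while a naively generated tone is perfectly periodic; real detectors are far more sophisticated, typically using trained machine-learning models, and the signals, threshold, and function names here are invented for the example.

```python
import numpy as np

def zero_crossing_periods(signal):
    """Durations (in samples) between successive rising zero
    crossings -- a crude estimate of the pitch period."""
    negative = np.signbit(signal)
    rising = np.where(negative[:-1] & ~negative[1:])[0]
    return np.diff(rising)

def looks_synthetic(signal, jitter_threshold=0.001):
    """Heuristic flag: natural speech shows micro-variation
    ("jitter") in its pitch period; an almost perfectly stable
    period hints the audio may be machine-generated."""
    periods = zero_crossing_periods(signal)
    if periods.size < 2:
        return False
    jitter = periods.std() / periods.mean()
    return bool(jitter < jitter_threshold)

sr = 16_000             # samples per second
t = np.arange(sr) / sr  # one second of audio

# Stand-in for a synthetic voice: a perfectly periodic 160 Hz tone
# (the small phase offset keeps samples away from exact zeros).
synthetic = np.sin(2 * np.pi * 160 * t + 0.1)

# Stand-in for a natural voice: same pitch, but with random
# frequency jitter accumulated sample by sample.
rng = np.random.default_rng(0)
freq = 160 + rng.normal(0, 8, sr)
natural = np.sin(np.cumsum(2 * np.pi * freq / sr) + 0.1)

print(looks_synthetic(synthetic))  # → True  (flagged as synthetic)
print(looks_synthetic(natural))    # → False (passes as natural)
```

A single hand-tuned jitter threshold like this would be trivial for a modern voice-cloning system to defeat; the sketch only illustrates the kind of signal property a real-time detector might examine.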
Author: Valeria Cîrjeu, intern