A deepfake ‘epidemic’ is coming: AI is both the virus and the vaccine

Alexey Antonov, Kaspersky data science team leader — TYRONE JASPER C. PIAD

NEGOMBO, Sri Lanka — Social media was ablaze when American Vice President Kamala Harris supposedly said in a video that she was chosen as the Democratic nominee for the US presidential race because incumbent Joe Biden was senile.

Harris, in the same video, which was also shared by tech mogul Elon Musk on X, appeared to describe herself as a “diversity hire” and as someone not competent to run the economic superpower. Expectedly, the presidential candidate’s camp subsequently issued a denial, saying the video had been manipulated.

This is an example of a deepfake, a cybersecurity threat in which malicious hackers digitally alter content into whatever they want it to be. Motivations may vary but, ultimately, the goal is to spread false information.

Alexey Antonov, data science team leader at cybersecurity firm Kaspersky, tells the Inquirer that a “deepfake epidemic” is on the horizon as more cybercriminals become familiar with the technology.

Give it five or 10 years, he says, noting that the technology is already widely available. One can simply download the software from the internet, Antonov points out; cybercriminals are literally only a few clicks away from launching their attacks.

Moreover, Antonov says that artificial intelligence (AI) is only making manipulated video, audio and images more convincing and, as a result, more deceptive to the public.

“AI-related attacks are growing rapidly nowadays,” he observes. “Deepfakes, which have spread widely this year, are just one example.”

The manipulated Harris video was political in nature and was aimed at a mass audience. However, Antonov says that deepfakes can also be used for less high-profile cyberattacks that are damaging nonetheless.

For example, a hacker can extract voice messages from a stolen mobile phone, use them to create a false recording, and send it to people in the victim’s contact list. The manipulated audio usually asks for financial help.

Antonov also says that hackers, using deepfakes, pretend to be a friend or family member asking for money. If the recipient of the manipulated voice message fails to discern that it is fake, they will simply send the money without hesitation, he explains. After all, the victim believes they are helping out a friend or a family member.

He notes that deepfakes are also being used in love scams, another money-making scheme usually targeting vulnerable individuals who are yearning for romance.

Using deepfakes, cybercriminals can engage in catfishing, or the creation of a fictitious online persona that is usually more conventionally attractive than the real person behind the fake identity.

Apart from deepfakes, Antonov says that hackers are deploying AI tools to create text scams, which have become rampant in the Philippines in recent years.

He explains that AI can mimic an individual’s writing style, which makes text scams more deceptive.

AI on the defense

While hackers are leveraging AI for nefarious agendas, the same tool is also being used to beef up defenses against cyberattacks.

“If you try and use it (AI) for good, it will do good. If you design it for evil purposes, it will do evil,” says Vitaly Kamluk, cybersecurity expert at Kaspersky’s Global Research and Analysis Team (GReAT) in the Asia-Pacific region.

At Kaspersky, Antonov says they choose to use AI to detect malicious attacks and emerging threats.

They have found that the daily number of potential malware samples detected rose to 411,000 this year from 403,000 last year. Malware, short for malicious software, is designed to illegally access an organization’s computer systems in order to inflict damage or disrupt operations.
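
To illustrate the defensive side, below is a minimal, hypothetical sketch of machine-learning-assisted malware triage in Python. It assumes simple static file features and an off-the-shelf scikit-learn classifier; it is not Kaspersky’s actual detection pipeline, and the marker strings, feature set and training samples are invented for illustration.

```python
# Hypothetical sketch of ML-assisted malware triage (not Kaspersky's pipeline):
# train a classifier on simple static features extracted from file contents.
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte string; packed or encrypted payloads score high."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def extract_features(data: bytes) -> list[float]:
    """Toy static features: length, entropy, and counts of suspicious markers (invented list)."""
    markers = [b"CreateRemoteThread", b"VirtualAlloc", b"powershell -enc"]
    return [float(len(data)), byte_entropy(data)] + [float(data.count(m)) for m in markers]

# Illustrative training data only; a real system uses millions of labeled samples.
samples = [b"hello world report.docx contents", b"MZ\x90\x00VirtualAlloc powershell -enc ..."]
labels = [0, 1]  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([extract_features(s) for s in samples], labels)

suspect = b"MZ\x90\x00CreateRemoteThread..."
print("malicious probability:", clf.predict_proba([extract_features(suspect)])[0][1])
```

Real-world detection systems combine far richer signals, such as behavioral analysis, emulation and threat-intelligence feeds, with models trained on millions of labeled samples.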

The company’s research has also revealed that the usual targets of cyberattacks last year were governments (27.9 percent), manufacturing firms (17 percent), financial institutions (12.2 percent) and information technology companies (8.8 percent).

Igor Kuznetsov, director of GReAT at Kaspersky, says these sectors have identified ransomware as their topmost cybersecurity concern. A ransomware attack happens when hackers hold an entity’s computer network hostage until a ransom is paid.

In the Philippines, the number of web-based attacks aimed at enterprises surged more than threefold to 1.69 million events last year, according to monitoring by Kaspersky.

Integrating AI

Web attacks come in different forms, including the typical phishing emails embedded with suspicious web addresses that link to fake websites. The goal of cyberattackers is to illegally obtain sensitive and personal data that they can use to take over one’s bank, e-wallet and other accounts.
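
As a hypothetical illustration of the kind of automated checks an email filter might apply to such web addresses, the Python sketch below flags a few common red flags; the watchlists and thresholds are invented for the example and do not describe any specific Kaspersky product.

```python
# Hypothetical, simplified heuristics for spotting suspicious URLs in a message;
# real products combine many more signals (reputation feeds, ML models, sandboxing).
import re
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"zip", "xyz", "top"}          # assumption: example watchlist
LOOKALIKE_BRANDS = ["paypal", "gcash", "bdo"]    # assumption: example brand names

def looks_suspicious(url: str) -> list[str]:
    """Return the reasons a URL might be flagged for closer inspection."""
    reasons = []
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        reasons.append("raw IP address instead of a domain name")
    if host.split(".")[-1] in SUSPICIOUS_TLDS:
        reasons.append("uncommon top-level domain")
    for brand in LOOKALIKE_BRANDS:
        if brand in host and not host.endswith(brand + ".com"):
            reasons.append(f"brand name '{brand}' in an unrelated domain")
    if host.count(".") >= 4:
        reasons.append("unusually deep subdomain nesting")
    return reasons

print(looks_suspicious("http://paypal.secure-login.example.xyz/verify"))
```

Commercial filters layer such heuristics with URL-reputation databases and machine-learning models rather than relying on simple rules alone.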

AI is here to stay, and institutions have been working to implement this advanced technology. Adrian Hia, Kaspersky managing director for the Asia-Pacific region, reminds them to use AI responsibly.

“For many organizations, the integration of AI is inevitable, with its invaluable ability to process large data sets, but stakeholders need to be aware of data compliance, especially when combined with the use of AI,” Hia explains.

“Policies need to be implemented into how confidential data is treated and what aspects of that data are accessible by AI while remaining compliant to the laws and regulations that an organization is beholden to in the area they are operating in,” he adds.

A study by global advisory firm Deloitte noted that 62 percent of surveyed Filipino business and technology leaders expressed “excitement” over the use of AI. But 30 percent said they were uncertain about the technology, citing the lack of technical talent and skills as the “biggest barrier” to adoption.
