AI technology misuse outpaces commonplace checks

Wan Lixin

A time-honored trick for frustrating telecom fraudsters used to be this: before lending money to someone who has requested a loan via WeChat, verify the request by making a telephone call or, even better, a video call to the would-be borrower.

Sadly, in the brave age of AI, this trick can no longer keep up with technology's efficiency at perfecting the art of simulation and dissimulation.

A milestone victim is a man surnamed Guo, the legal representative of a technology company in Fuzhou, Fujian Province, who was recently cheated out of 4.3 million yuan (US$610,000) in a matter of 10 minutes.

With the help of AI, the fraudster asked to borrow the sum through the WeChat account of one of Guo's friends. Guo took the precaution of verifying the request with a video call, and transferred the money only after he believed he had confirmed the caller was indeed his friend.

The truth is that Guo had been deceived by an image and voice synthesized by AI.

This deception starkly illustrates the growing challenge of safeguarding one's assets and legal rights in the AI age.

A couple of months ago, when I first tried my hand at ChatGPT, it occurred to me that telecom scams would soon become a far more cost-efficient enterprise: nearly all professional scammers could be replaced by a chatbot program working at almost no cost, around the clock.

Although AI simulations offer an economical shortcut in producing television dramas, we should not underestimate the potential for their abuse in myriad other areas.

The abuse of the technology could spawn a chain of criminal practices, including the fabrication of pornographic images by grafting someone's face onto another person's body. By comparison, a student producing a term paper or thesis with ChatGPT appears almost innocuous.

Deepfakes involving a person's identity lead to a host of other risks. Facial recognition, for instance, is widely used to complete electronic payments, and leaks of facial data would pose a series of financial risks, including unauthorized money transfers.

According to the national regulations governing deep synthesis services published last December, service providers must obtain the prior and specific consent of those whose images are to be edited. One can well imagine the challenges of enforcing such rules.

Before we are fully prepared to navigate this uncharted territory, it might be advisable to heed the experts advocating caution, or to put the technology on hold until we are ready.

