Countering security concerns posed by latest AI technologies

Hu Min

A number of national standards are being drafted to safeguard cyber-security and crack down on illegal applications of artificial intelligence technologies, the ongoing World Artificial Intelligence Conference has heard.

These standards cover areas such as synthetic audio and video based on artificial intelligence, machine learning algorithm security, and artificial intelligence biometric identification technology.

Emerging AI technologies such as deepfakes are being applied in filmmaking, medical treatment and virtual reality, where they also have the potential to be abused, experts warned.

Celebrities have fallen victim to deepfake videos in which their faces were digitally grafted onto porn performers; Scarlett Johansson is among the actresses targeted.

AI technologies have also been used in scams such as telecom fraud.

The abuse of technologies even threatens social and national security, attendees said.

"Deepfake technologies lead to difficulty in proof collection and image identification by police and judicial authorities and become 'weapons' of information war," said Tian Tian, CEO of RealAI, an AI company incubated at Tsinghua University.

Security concerns posed by new technologies such as deepfake are a bottleneck for the AI industry, said Tian.

The AI industry is shifting from rapid growth to high-quality development. Demand for applications in complex, high-value scenarios such as finance and medical treatment is growing fast, but various security problems have emerged along the way.

In addition to regulations and standards, technological approaches are being taken to address the security challenges posed by new AI technologies.

At WAIC, RealAI launched DeepReal, a counter-deepfake platform that can detect fake content, including swapped faces.

With rapid and highly accurate image identification, it can be applied in areas such as police investigation, cyberspace security, personal reputation protection and the prevention of cyber fraud.

"Safe, reliable and trust-worthy artificial intelligence is the future trend, and security should be the base of AI development in the next phase," said Zhang Bo, an academician with the Chinese Academy of Sciences and director of the Institute for Artificial Intelligence of Tsinghua University.
