Chinese university aims to bring trust, resilience to next-generation AI

Xinhua

From voice assistants to face recognition, and from defeating master Go players to crushing professional gamers in the strategy game StarCraft, the world has witnessed exciting progress in the development of artificial intelligence.

As AI is applied to higher-stakes functions, such as self-driving cars, automated surgical assistants, hedge fund management and power grid controls, how can we ensure it is trustworthy?

China's prestigious Tsinghua University has announced it will step up basic research on third-generation AI, in the hope of building trust and preventing abuse and malicious behavior of AI models.

Zhang Bo, director of the Tsinghua Institute for Artificial Intelligence and an academician of the Chinese Academy of Sciences, unveiled the plan at the opening of the Center for Fundamental Theories under the Institute for Artificial Intelligence on Monday.

Tsinghua researchers have been discussing the future of AI since 2014 and expect it to enter the third stage of its development in the coming years, said Zhang.

First-generation AI was knowledge-driven: researchers encoded their own expertise as explicit logical rules for the model to follow. These systems were capable of solving well-defined problems, but incapable of learning.

In the second generation, AI started to learn. Machines learn by training a system on one data set and then testing it on another. The system gradually becomes more accurate and efficient.
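The train-then-test workflow described above can be sketched with a toy example. Everything here is illustrative: the data is synthetic and the "model" is a simple nearest-centroid classifier, not any system Tsinghua uses.

```python
import random

random.seed(0)

# Synthetic labeled data: a point's label is 1 if its first coordinate
# exceeds 0.5, and 0 otherwise.
points = [(random.random(), random.random()) for _ in range(200)]
data = [(p, int(p[0] > 0.5)) for p in points]

# Split into a training set and a held-out test set.
random.shuffle(data)
train, test = data[:150], data[150:]

# "Train" a nearest-centroid classifier: average the points of each class.
def centroid(samples):
    xs = [p[0] for p, _ in samples]
    ys = [p[1] for p, _ in samples]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

c0 = centroid([s for s in train if s[1] == 0])
c1 = centroid([s for s in train if s[1] == 1])

def predict(p):
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return int(d1 < d0)

# Evaluate on the unseen test set, not the data the model was fit on.
accuracy = sum(predict(p) == label for p, label in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

The key point is the split: accuracy measured on data the system has never seen is what tells you whether it has genuinely learned a pattern rather than memorized its training examples.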

Zhang said the weakness of second-generation AI lies in its lack of explainability and robustness.

AI robustness refers to maintaining acceptably high performance even in worst-case scenarios.

Although AI has already outperformed humans in certain areas like image recognition, nobody understands why these systems are doing so well.

Machine learning and deep learning, the most common branches of AI in recent years, suffer from the so-called "AI black box": people find it hard to interpret AI-based decisions and cannot predict when, or how, a model will fail.

Meanwhile, even accurate AI models can be vulnerable to "adversarial attacks" in which subtle differences are introduced to input data to manipulate AI "reasoning".

For instance, an AI system might mistake a sloth for a racing car if some unnoticeable changes are made to a photo of a sloth.
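The mechanism behind such attacks can be illustrated on a toy linear classifier. The weights, input and class names below are entirely hypothetical; real attacks on deep networks (such as the well-known fast gradient sign method) work analogously, nudging every input feature slightly in the direction that most damages the model's score. Because images have many features, thousands of individually imperceptible nudges can add up to a flipped prediction.

```python
# A hypothetical linear "image classifier": score(x) = w . x + b,
# where a positive score means class "sloth" and a negative one does not.
w = [0.5, -0.5, 0.5, 0.5, -0.5, 0.5, 0.5, -0.5]
b = -0.3
x = [0.8] * 8  # a clean input, correctly scored as "sloth"

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Adversarial perturbation: shift each feature by a small eps against the
# sign of its weight, i.e. in the direction that lowers the score most.
eps = 0.2
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x))      # positive: classified as "sloth"
print(score(x_adv))  # negative: the same image, now misclassified
```

Each feature moved by only 0.2, yet the prediction flips, because the per-feature damage accumulates across all eight features; in a million-pixel image the same trick works with perturbations far too small for a human to notice.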

Researchers therefore need to improve and verify the robustness of AI models, leaving no room for adversarial examples or attacks to manipulate results.

If AI technologies are deployed in security-sensitive or safety-critical scenarios, the next generation needs to be comprehensible and more robust, said Zhang.

Zhu Jun, director of the new center, said it will carry out interdisciplinary studies and expects to attract talent from around the world, providing them with a relaxed academic environment.

He said Tsinghua University plans to host a high-level, fully open AI meeting every year.

"If anything helps innovation, we'll give it a try," said Zhu.

"It's hard to predict the progress of research on fundamental theories. It could be explosive and trail-blazing."
