Taking responsibility for a better artificial intelligence future

Li Qian
The next step in artificial intelligence is to develop responsible AI, according to experts at the World Artificial Intelligence Forum in Shanghai.

Computers make decisions based on big data, while humans take more factors into consideration.

For example, AI may conclude that killing people is the best way to eliminate the COVID-19 virus, or that digging up a road is the best response to a traffic accident on it. By contrast, humans would try to develop vaccines for the virus and install guardrails to prevent further accidents.

John Hopcroft, an American theoretical computer scientist and Turing Award winner, said people need to find a way to understand why AI makes such decisions and try to reduce the negative impact that new technologies bring.

Wang Guoyu, a professor at Fudan University, said one solution is to design AI with a conscience and moral sensitivity.

AI should be responsible for its decisions, reflect what people think and help create a fair world.

Hopcroft agreed.

AI looks independent and impartial, but it reflects social discrimination. For example, because most high-ranking officials are male, AI trained on such data will steer companies toward selecting men as senior managers, he said.

IBM AI Ethics Global Leader Francesca Rossi said responsible AI must make fair decisions and be able to explain them.

The forum was held by the Shanghai Institute for Science of Science. About 30 experts, including 20 from foreign countries, attended.

