The story appears on Page A7, July 11, 2018.


Solving disinformation puzzle is a challenge

GIVEN the vulnerability of digital channels to purveyors of “fake news,” the debate over how to counter disinformation continues to intensify. But if there is one thing the search for solutions has made clear, it is that there is no silver bullet.

Instead of one comprehensive fix, what is needed is a set of steps that address the problem from multiple angles. The modern information ecosystem is like a Rubik’s Cube, where a different move is required to “solve” each individual square. When it comes to digital disinformation, at least four dimensions must be considered.

First, who is sharing the disinformation? Disinformation spread by foreign actors can be treated very differently, both legally and normatively, from disinformation spread by citizens.

Second, why is the disinformation being shared? “Misinformation” — inaccurate information that is spread unintentionally — is quite different from disinformation, which is spread deliberately. Preventing well-intentioned actors from unwittingly sharing false information could be addressed, at least partly, through news literacy campaigns or fact-checking initiatives. Stopping bad actors from purposely sharing such information is more complicated, and depends on their specific goals.

For example, for those who are motivated by profit, new ad policies that disrupt revenue models may help. But such policies would not stop those who share disinformation for political or social reasons. If those actors are operating as part of organized networks, interventions may need to disrupt the entire network to be effective.

Third, how is the disinformation being shared? If actors are sharing content via social media, changes to platforms’ policies and/or government regulation could be enough. But they must be specific.

For example, to stop bots from being used to amplify content artificially, platforms may require that users disclose their real identities. To limit sophisticated microtargeting — the use of consumer data and demographics to predict individuals’ interests and behaviors, in order to influence their thoughts or actions — platforms may have to change their data-sharing and privacy policies, as well as implement new advertising rules.

This is a kind of arms race. Bad actors will quickly circumvent any changes that digital platforms implement.

New techniques — such as using blockchain to help authenticate original photographs — will continually be required. But there is little doubt that digital platforms are better equipped to adapt their policies regularly than government regulators are.

Yet digital platforms cannot manage disinformation alone, not least because, by some estimates, social media accounts for only around 40 percent of traffic to the most egregious “fake news” sites, with the other 60 percent arriving “organically” or via “dark social” (such as messaging or emails between friends). These pathways are more difficult to manage.

The final — and perhaps most important — dimension of the disinformation puzzle is: What is being shared? Experts tend to focus on entirely “fake” content, which is easier to identify. But digital platforms naturally have incentives to curb such content, simply because people generally do not want to look foolish by sharing altogether false stories.

Misleading and incendiary

People do, however, like to read and share information that aligns with their perspectives; they like it even more if it triggers strong emotions — especially outrage. Because users engage heavily with this type of content, digital platforms have an incentive to showcase it.

Such content is not just polarizing; it is often misleading and incendiary, and there are signs that it can undermine constructive discourse.

But where is the line between dangerous disagreement based on distortion and vigorous political debate driven by conflicting worldviews? And who, if anybody, should draw it?

Even if these ethical questions were answered, identifying problematic content at scale poses serious practical challenges. Many of the most worrisome examples of disinformation have focused not on any particular election or candidate, but on exploiting societal divisions along, say, racial lines. And because such content often is not paid advertising, it would not be addressed by new rules regulating campaign advertising.

If the solutions to disinformation are unclear in the US, the situation is likely even thornier in the international context, where the problem may be more decentralized, another reason why no comprehensive solution is possible.

But while each measure addresses only a narrow slice of the problem (improved ad policies may solve 5 percent of it, and different microtargeting policies perhaps 20 percent), taken together these measures can add up to real progress. The result will be an information environment that, while imperfect, includes only a relatively small amount of problematic content.

The good news is that experts will now have access to privacy-protected data from Facebook to help them understand (and improve) the platform’s impact on politics around the world.

One hopes that other digital platforms — such as Google, Twitter, Reddit, and Tumblr — will follow suit. With the right insights, and a commitment to fundamental, if incremental, change, the social and political impact of digital platforms can be made safe — or at least safer — for all countries.

 

Kelly Born is a program officer for the Madison Initiative at the William and Flora Hewlett Foundation. Copyright: Project Syndicate, 2018. www.project-syndicate.org. Shanghai Daily condensed the article for space.




 
