Online privacy must improve after Facebook scandal
I, like millions of others, have willingly given up some of my privacy to Facebook to achieve a sense of connection across cultures, time zones and generations. But revelations of the alleged sale and misuse of Facebook data by Cambridge Analytica have left me feeling betrayed.
Having lived in five countries in the past 20 years, I have seen friends and colleagues disappear from my daily life, but our relationships were maintained through our online interactions. Our posts and emoji reactions revealed new layers of our personalities, values and beliefs. Without being in the same room, we built understanding, inspired one another, shared joys and commiserated.
I shared my stories, preferences and images with contacts to build relationships in exchange for targeted advertising that I could choose to ignore.
It seemed a reasonable trade-off to allow Facebook to mine that personal information for the purposes of targeted advertising. I bore the cost of the commercial use of my data for marketing that benefited the seller and me. I laughed when a Facebook ad tried to sell me a T-shirt, bragging: “I run on feminism, caffeine and social justice.” When companies offer me products or services that I need or like, we both win.
As a professor of social innovation, I support the use of data for non-profit academic research that helps create a better understanding of the society we live in. But when it was reported that Cambridge Analytica had used that data in an apparent effort to sway political campaigns, I felt a line had been crossed. By giving third-party apps access to my data, I could have exposed myself and — until 2014, when Facebook updated its data use policy — hundreds of my contacts to the risks of such manipulation.
The revelations prompted an online campaign to delete Facebook, and the social network’s shares plunged after the scandal hit.
I considered abandoning my network of 1,000 Facebook contacts. But I didn’t go through with it. While I still value my interaction with my network on Facebook, I have a new sense of responsibility that does not allow me to look the other way.
Facebook’s most valuable asset is the knowledge it accumulates about its 2.13 billion monthly active users. Every emoji, post and friend connection gives the social network more information about a user’s preferences, including shopping choices and political views. That data is then used by advertisers who provide Facebook almost all of its annual revenue, which stood at US$41 billion for 2017.
Advertisers pay for that data to target their messages to the right audience segments. Facebook and other ad-driven tech companies have faced some scrutiny over their business practices, particularly from privacy regulators and campaigners, but the Cambridge Analytica scandal has brought those concerns to the masses.
It’s worth noting that other industries have been berated over such business practices. Whether it is the pharmaceutical industry’s approach in the early 2000s to marketing AIDS medication in low-income countries, the impact of mining practices on the environment or the contribution of highly processed foods to childhood obesity and diabetes, corporations are under pressure from investors and consumers to account for the impact of their behaviour on society.
After the Cambridge Analytica scandal broke, browser maker Mozilla released a Firefox extension that promises to limit the extent to which Facebook can track a user’s web activity.
There are plenty of other options that provide better protection of personal data online.
While tools like Firefox offer a further degree of privacy, there is no perfect answer to reducing the level of ad targeting and data harvesting throughout the web. Given the growing atmosphere of mistrust, innovation is urgently needed to address the risks of online data sharing on Facebook, Google and beyond.
Professor Vanina Farber holds the elea Chair for Social Innovation at IMD.