Trust in the wake of digitalization

Improper use of personal data and a careless attitude to data security have caused a number of major scandals in recent years. If the pace of the digital economy is not to slow, companies need to pay attention to consumer confidence.


According to Stefan Larsson, Associate Professor at LTH, Lund University, it has to be easier for private individuals to understand where their personal data end up.

According to sociology researcher Diego Gambetta, trust has two main enemies: poor character and insufficient information. We have seen examples of both in the history of digitalization to date.

More and more operators are logging their users’ habits and behavior in order to be able to present offers and information that match their needs and interests. However, they often keep their customers in the dark about precisely what information they collect and for what purposes. 

In recent years, improper processing of personal data has resulted in a sharp dip in confidence in both businesses and government authorities. The best-known media example is probably the case of Cambridge Analytica, where personal data from 87 million Facebook users were harvested through a personality quiz app. Cambridge Analytica was then able to use this information to influence the outcome of the 2016 American presidential election through targeted advertisements. In a secretly recorded video from the British TV station Channel 4, senior executives at Cambridge Analytica can be heard taking credit for Donald Trump’s victory in the election.

Throw in the debate about whether Facebook’s algorithms create filter bubbles and the spread of fake news on the platform, and it is hardly surprising that the hashtag #DeleteFacebook went viral for a few months when more than a few angry users decided to close their accounts.

While it is true that permission to share these data with third parties was formally covered by the membership agreement, the Information Commissioner’s Office – the British equivalent of the Swedish Data Protection Authority – concluded that Facebook had failed not only to protect user data, but also to be transparent in its interactions with its users. For this reason, it fined the company GBP 500,000.

Facebook has long maintained that it plays a neutral role as a platform owner rather than a media company, despite being a distribution channel for news and information that reaches millions of people every day, helping to shape their view of the world.

“In many cases, operators who run large-scale digital platforms do not consider themselves producers of content that carries an obligation of responsibility. However, the design and moderation of these platforms generate a range of effects we need to understand better, even when they are performed automatically through algorithms,” explains Stefan Larsson, Associate Professor at the Department of Technology and Society, LTH, Lund University, and head of the digital society program at the Swedish think tank Fores. 

There has previously been some talk of “filter bubbles,” but the conversation today centers more on what is known as “fake news,” propaganda and influence campaigns, and increasingly on questions of responsibility for the societal inequalities that digital platforms risk replicating or even actively reinforcing.

“How you set the limits of responsibility is crucial, and it is a discussion we need to have more often – and not just as a technical matter,” continues Stefan Larsson.

Service providers with a direct relationship to users and consumers are forced to improve when they come under scrutiny and when the market and the legal apparatus make demands. A greater challenge where personal data are concerned is posed by the third and fourth parties operating in the shadows of an increasingly complex data market, players whose processing consumers find even harder to discover in the first place. Here, you cannot rely on individuals to act on something they cannot see, and it is here that the supervisory authorities need to expand their activities.


Operators who run large-scale digital platforms do not consider themselves producers of content that carries an obligation of responsibility.

Risky security

The issue of data security has rapidly moved up the agenda as companies have become increasingly conscious of how much damage they risk if their security proves inadequate. The private sector is not alone in experiencing missteps of this kind – there are more than a few examples of breaches of confidence in the public sector, too.

One of the worst occurred in Sweden, where sensitive information from the Swedish Transport Agency was made available to staff in other countries when the agency attempted to reduce its IT costs through outsourcing. The entire register of driver’s licenses – with photos – and information about the Swedish road network were suddenly laid bare to staff without the relevant security clearance. The breach of regulations was so serious that it was judged to be a threat to national security.

In Denmark, the Danish Data Protection Agency revealed in 2017 that of eight public authorities investigated, seven were in breach of the Danish Personal Data Act – several of them seriously – and they were strongly encouraged to rectify these breaches without delay. 

A change in trust

So who can we actually trust? From the perspective of society as a whole, trust has neither disappeared nor diminished. But it has changed. Trust in institutions and authorities is declining, but we do have confidence in other people – especially people we know, as well as other consumers whose reviews and user experiences play an increasingly significant role in how we choose to consume. 

Another area in which trust is growing is that of new services and innovations. We take a leap of faith every time we choose to rely on new technology. Using AI for the first time or getting into a driverless car requires both courage and trust, just like the first time we made a purchase by entering our credit card details on an internet shopping site. 

If the outcome is good, new behaviors can quickly become an almost banal part of our everyday lives. However, technology can also let us down if ethics are not built into it from the start. This is often a “make or break” issue for companies.

Need to be proactive

“Being hacked and having your users’ data leaked can be enough to have your brand ruined and to lose the confidence of your customers,” explains Stefan Larsson, mentioning dating apps as a prime example. 

“In the right context, it may not be sensitive for users that a company holds their personal data, but it can become extremely sensitive if information about their finances, health or sexual orientation finds its way into other forums.”

For consumers, studying the terms and conditions of a platform or digital service often demands a great deal of effort, and establishing a comprehensive picture of where their personal data end up has been close to impossible.

When two parties are to enter into an agreement and one has less information than the other, economists refer to this as “information asymmetry.” The economist and Nobel laureate Kenneth Arrow was the first to use this concept, in 1963, in the context of health care, where doctors are usually far more knowledgeable about the value and effectiveness of a given treatment than their patients.

A similar situation applies between companies and consumers on the internet, where users generally find it difficult to establish how and where their data are stored and shared, in spite of the agreement that exists between the parties. 

One weakness of the system in place today is that laws and practice in the field of “informed consent” are out of synch, as Stefan Larsson explains.

“We as people are cognitively limited in our capacity to read and assimilate information through agreements which are fundamentally not designed for consumers. Making your data known to one service, and then seeing them being passed on to 14 other operators is tough to watch, and even tougher to relate to.”  


GDPR sets out sanctions which require companies to review their data and to delete all information they do not need. Everyone is more aware now.

New legislation strengthens the consumer’s position

With the introduction of the GDPR, the EU’s new General Data Protection Regulation, the position of private individuals has been strengthened in relation to companies, which are now required to keep a check on their data processing.

“This may involve greater transparency, but it has to be simple.” 

The “right to be forgotten” in GDPR is based on people actually knowing who is in possession of their data. But it’s hard for private individuals to build up a complete picture. Another difficulty has to do with establishing an overview of which data have actually been collected. 

“GDPR sets out sanctions which oblige companies to check through their data and to delete all information they do not need. Everyone is more aware now. Before GDPR was introduced, companies had more incentive – at little or no risk – to collect as much data as possible, regardless of the purpose.”
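To make that obligation concrete, here is a minimal, purely hypothetical sketch in Python of the kind of review the regulation encourages: every stored record is checked against a documented purpose and a retention limit, and anything that no longer qualifies is marked for erasure. The record layout, purposes and retention periods are invented for illustration and are not legal guidance.

# Hypothetical GDPR-style data review: keep only records that still have a
# documented purpose and are within their retention period. Everything here
# (fields, purposes, retention limits) is invented for illustration.
from dataclasses import dataclass
from datetime import date, timedelta

# Invented retention limits per documented processing purpose.
RETENTION = {
    "active_customer_account": timedelta(days=365 * 5),
    "newsletter_consent": timedelta(days=365 * 2),
}

@dataclass
class PersonalDataRecord:
    subject_id: str
    purpose: str        # documented reason for holding the data
    collected_on: date

def review(records, today=None):
    """Split records into those that may be kept and those that must be erased."""
    today = today or date.today()
    keep, erase = [], []
    for r in records:
        limit = RETENTION.get(r.purpose)
        expired = limit is None or r.collected_on + limit < today
        (erase if expired else keep).append(r)
    return keep, erase

records = [
    PersonalDataRecord("u1", "active_customer_account", date(2021, 3, 1)),
    PersonalDataRecord("u2", "newsletter_consent", date(2015, 6, 1)),
    PersonalDataRecord("u3", "no_documented_purpose", date(2019, 1, 1)),
]
keep, erase = review(records, today=date(2024, 1, 1))
print(f"keep {len(keep)} record(s), erase {len(erase)} record(s)")

Run against the three invented records above, the routine keeps the active customer account and flags the other two for erasure: one because its retention period has lapsed, and one because it has no documented purpose at all.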

However, even though the new legislation has raised awareness of the role data play, the actual implementation is progressing only slowly. The supervisory authorities in the European countries have limited resources, and implementation will have to pass through a grey area before it becomes clear how the law is to be interpreted and applied. 

Need for new services

Stefan Larsson is convinced that a market will emerge for new services that give consumers greater opportunities to take control.

“For example, imagine an AI solution that reviews agreements and which you can talk to and ask questions,” he suggests. 

Stefan Larsson would also like to see more work on certification schemes or standards for personal data, along the lines of the Nordic Swan Ecolabel or “Good Environmental Choice,” so as to reinforce trust and transparency in data-driven markets. That would, it is hoped, make it easier for consumers to avoid obscure companies where they cannot be sure who ends up with their data, and to take the relevant precautions.

With the advent of digitalization and trail-blazing technology, we are facing major challenges in a number of areas – not just data processing. How, for example, are we to distribute responsibility when a driverless car crashes, or when a company’s AI system unexpectedly starts showing prejudice? One example of the latter is image recognition software that is better at interpreting images of white males than of women and people of color, because the training data are unevenly distributed between the various groups or reflect cultural biases.

“If you have an image database that is imbalanced with regard to gender and/or ethnicity, this will naturally be mirrored; the key issue here is how you deal with it. The underlying data are crucial, at the same time as the question of transparency and insight is particularly complex when it comes to ‘self-learning’ algorithms. Moreover, if we are to benefit from and feel confident in the richness and potential that artificial intelligence – and, perhaps above all, machine learning – is offering us right now, we must also become better at understanding the ethical, social and legal challenges this inevitably entails.” 
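The mechanism is easy to demonstrate with a small, synthetic sketch in Python, not drawn from any real system: a simple classifier is trained on data dominated by one group and then evaluated separately per group, and the under-represented group, whose feature pattern differs slightly, ends up with noticeably worse accuracy. All groups, features and numbers below are invented.

# Illustrative only: imbalanced training data lead to unequal accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic "image features" for one group; the true decision boundary
    # is offset by `shift`, standing in for group-dependent statistics.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Heavily imbalanced training set: 95% group A, 5% group B.
Xa, ya = make_group(1900, shift=0.0)   # over-represented group
Xb, yb = make_group(100, shift=1.5)    # under-represented group
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets reveal the gap: clearly higher accuracy for group A
# than for group B, because the model has mostly learned group A's pattern.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"accuracy on {name}: {clf.score(X_test, y_test):.2f}")

In this toy setup, rebalancing the training data makes the classifier split the difference between the two groups and shrinks the gap, which is the practical point behind the comment above: an imbalance in the underlying data is mirrored in the results unless it is actively dealt with.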

In order to succeed, companies need to take responsibility and focus on confidence. Or, as Satya Nadella, CEO of Microsoft, puts it: “Digital trust is crucial. Strive to improve it a little every day.”