Main factors affecting the basis of digital trust
Three main factors directly affect the foundation of digital trust:
- Disinformation: Decision-making becomes a lot more complex if we cannot trust the truthfulness of the information available.
- Public Administrations: Trust in our institutions has a positive impact on our trust in the wider economic and social context, and shapes the decisions we take going forward.
- New technologies: Trusting them enables us to transact and communicate online and, more generally, helps us grow personally and professionally.
One of the greatest current scourges of trust is disinformation. By disinformation, we mean the deliberate creation and/or spread of false information, intended to harm others and, in some cases, to profit at their expense.
The ABC framework (Actor (who), Behavior (how), and Content (what)), put forward by Camille François, Innovation Director at Graphika, is useful for studying disinformation.
- Actor: Manipulative actors who clearly intend to interfere with democratic processes or the information ecosystem.
- Behavior: Deceitful techniques and ploys that make it seem as though many users share a certain opinion when, in fact, a bot is generating the content.
- Content: Harmful content designed to damage a person’s or an organization’s reputation and to influence public debate.
A very severe issue brought about by disinformation is that authentic social movements lose credibility: any social movement questioning the status quo can be dismissed as orchestrated by trolls, delegitimizing its claims.
The role of Big Tech in the fight against misinformation
Big Tech companies have reacted with greater transparency, publishing data on troll activity and continually blocking suspicious accounts. But there is still a long road ahead. Some big names in the industry know that they must become involved in regaining the trust of digital users, and several are already taking steps in this direction.
Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, summarizes Facebook’s strategy against disinformation in this video. Google is another corporation that plays a key role in this field, as recognized in Google’s policy against disinformation, explained in their White Paper “How Google Fights Disinformation“.
As citizens, do we want to hear the truth, or only whatever resonates with our ideology, way of thinking, and values?
Seeking to understand the information pollution phenomenon, Mary Blankenship, in her 2020 work “How Misinformation Spreads Through Twitter”, put forward three categories for detecting the different ways information is manipulated to deceive or coerce an audience.
- Misinformation: false information is shared with no intent to harm.
- Disinformation: false information is shared with intent to harm.
- Malinformation: genuine information is shared with intent to harm.
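The three categories above reduce to a decision rule along two axes: whether the information is false, and whether it is shared with intent to harm. A minimal sketch of that rule follows; the function name and boolean parameters are illustrative assumptions, not part of any published tool.

```python
def classify_information(is_false: bool, intends_harm: bool) -> str:
    """Classify shared information along the taxonomy's two axes:
    falseness of the content and intent to harm."""
    if is_false and not intends_harm:
        return "misinformation"   # false, shared with no intent to harm
    if is_false and intends_harm:
        return "disinformation"   # false, shared with intent to harm
    if not is_false and intends_harm:
        return "malinformation"   # genuine, shared with intent to harm
    return "information"          # genuine, no intent to harm


# Example: a false rumor forwarded in good faith
print(classify_information(is_false=True, intends_harm=False))  # misinformation
```

The point of the sketch is that the three labels are not about topic or platform but about the combination of two independent properties, which is why genuine information (malinformation) can still be a form of manipulation.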
The role of Public Administrations
When trust in Public Administrations is high, citizens feel safe and protected, and their trust in other public and private institutions increases. In much the same way, when trust in Public Administrations declines, it drags the remaining institutions down with it, like a house of cards.
Experts have pointed to one potential cause: some public services strike closer to home, offering experiences that develop an affinity, a bond, and thus generate emotional loyalty.
It is therefore essential for the public sector to lead the efforts to regain trust, since the rest of the stakeholders, private sector included, will benefit from the process. The OECD proposes several key measures to achieve this:
- Proactiveness: propose or regulate public services.
- Reliability: anticipate changes and protect citizens.
- Integrity: use powers and public resources ethically.
- Transparency: consult, listen, engage and explain to citizens.
- Equity: improve living conditions for all citizens.
What role does technology play around trust?
Before we can trust a technology, we must make sure it is secure, responsible, and protective of privacy and data.
The key factors around technology and trust are:
- The misuse of personal data.
- The fear of losing jobs due to automation.
When laying the groundwork for trusting technology, a problem that emerges right away is that technologies replicate their creators’ biases: algorithms can be racist or sexist, for example. We saw this with Tay, a chatbot launched by Microsoft in 2016, which tweeted “utterly inappropriate and objectionable words and images”, as acknowledged by the company, which then shut it down and apologized.
AI bias cannot be eliminated completely, but it is essential to know how to reduce it and to work actively to prevent it. One way of tackling the problem is “Explainable AI”, which makes AI applications understandable and traceable so that they are no longer black boxes.
Misuse of data
There is an intense debate about personal data privacy and the ownership of that data: if my data holds value, I want to be rewarded when third parties use it for profit.
Data privacy, and the use made of personal data, requires legislation and regulation. But because digitalization and globalization enable international interactions among many actors, the use of data also requires international cooperation and coordination to progress towards a fairer, more transparent, and more egalitarian society.
Automation and work
A study by KPMG reveals that 67% of employees think that technology might eventually replace them, and according to a Gallup survey, 70% believe Artificial Intelligence will destroy more jobs than it creates in the long term. That fear leads to mistrust of the technology.
There seems to be a consensus that the jobs of the future will require different skills and may demand higher educational attainment. Between 75 and 375 million people around the world may need to change their job category and acquire new skills by 2030, according to the McKinsey report “Jobs lost, jobs gained: Workforce transitions in a time of automation”.
How can we make technology more trustworthy?
To encourage trust in technology, and in artificial intelligence in particular, regulation is required to prevent violations of our rights. There are quite a few initiatives in this field. In short, one could say that the design and development of technology should follow five principles, as proposed by Anna Jobin:
- Justice and fairness