The fight against disinformation was finally being addressed systematically by all parties – tech platforms, news providers, and governments.
Each had developed its own arsenal, and these could be combined into a single weaponry against the lies, distortions and deceptions that were causing so much harm. We were starting to pool resources and form a coalition to fight a common enemy.
The Trusted News Initiative (TNI), which I managed for two years at the BBC, was one of the places that helped bring this coalition together. The TNI brought together some of the most trusted news sources in the world – including the BBC, the Washington Post, Agence France-Presse, the Canadian Broadcasting Corporation, and Reuters – with the major tech platforms Facebook, Google, Microsoft, and Twitter.
This informal alliance relied on a shared approach to classifying disinformation and identifying its most harmful forms, which fell into two main categories.
The first category is fake news that directly and immediately threatens democracy and the democratic process. This includes the online disinformation surrounding the protests at the US Capitol on January 6, 2021. It was also true of the 2019 Indian elections, when false poll results purporting to come from the BBC appeared on Indian social media.
The second category is fake news that threatens life. As the Covid pandemic affected millions of people around the globe, online instructions for drinking bleach spread, claiming it was a cure. As conspiracy theories linked Covid to the rollout of the new 5G network, people were encouraged to attack telecoms infrastructure crucial to emergency services. We worked to identify misleading posts, alert TNI members, and remove the posts before they caused damage.
A game changer
Just as we began to feel a little optimistic, an unprecedented intervention occurred. OpenAI launched ChatGPT in November 2022, and shortly afterwards Microsoft announced plans to link Bing with this so-called "generative AI" model. It was a game changer. Despite the many opportunities it offers, generative AI threatens to overwhelm our current defences against fake information.
Generative AI allows computers to create new content by mining data using large language models (LLMs). The implications will be vast. Previously, when you asked a search engine a question, it provided a range of links. Some could be inaccurate, and others full of misinformation – but it was up to you to decide what to believe.
ChatGPT, however, tries to give a complete answer rather than a selection. That content can include text, images, sound or video. The computer, not the user, evaluates the links, then provides a brief answer to the query along with the key facts it believes support that answer. On current evidence, sources of information are at best presented as footnotes.
The dangers of “hallucinations”
At first glance, the answer appears self-contained and trustworthy. The reader does not know whether it comes from the BBC or from QAnon, and no alternative viewpoints are offered. The answer may even be a "hallucination": a completely wrong answer.
Nicholas Diakopoulos, an associate professor of communication studies and computer science at Northwestern University, Illinois, found inaccuracies in seven of 15 news-related queries he sent to ChatGPT. If generative AI learns from the data at hand, could disinformation claiming that Covid vaccines are more dangerous than the flu be presented as an "answer"? If so, those who fight disinformation face a new enemy.
Generative AI is said to improve the accuracy of its answers over time. But can tech platforms be certain that the problem of "hallucination" will go away? And can they honestly argue that any error rate for the most harmful forms of disinformation is acceptable?
News publishers face a different set of challenges. ChatGPT may draw its answers from news sources, but the credit given to news organisations is minimal. This is not just a financial issue but a challenge to the entire news ecosystem. How can users trust a source of news if they are not even aware of it?
Regulation is slow
Interventions in the form of regulation can be helpful, but they are no panacea. Regulation can even be used to suppress truth: we have seen how laws against "disinformation" have been used in Russia to suppress free speech.