
ChatGPT Has Opened a New Front in the Fake News Wars

The fight against disinformation was at last being tackled systematically by all parties: tech platforms, news providers and governments.

Each had developed its own arsenal, and those arsenals could be combined into a single armoury to combat the lies, distortions and deceptions that were causing so much harm. We were starting to pool resources and form a coalition to fight a common enemy.

The Trusted News Initiative (TNI), which I managed for two years at the BBC, was one of the forums that helped bring this coalition together. It brought together some of the world's most trusted news organizations, including the BBC, the Washington Post, Agence France-Presse, the Canadian Broadcasting Corporation and Reuters, with the major tech platforms Facebook, Google, Microsoft and Twitter.

This informal alliance relied on a shared approach to classifying disinformation and identifying its most harmful forms, which fell into two main categories.

The first category is fake news that directly and immediately threatens the democratic process. It includes the online disinformation surrounding the protests at the US Capitol on January 6, 2021, and the Indian elections of 2019, when false poll results purporting to be from the BBC appeared on Indian social media.

The second category is fake news that endangers life. As the Covid pandemic affected millions of people around the globe, instructions for drinking bleach spread online, claiming it was a cure. And as conspiracy theories linked Covid with the rollout of the new 5G network, people were encouraged to attack telecoms infrastructure crucial to emergency services. We worked to identify misleading posts, alert TNI members and have them removed before they could do damage.

A game changer

Just as we began to feel a little optimistic, an unprecedented intervention occurred. OpenAI launched ChatGPT in November 2022, and shortly afterwards Microsoft announced plans to link Bing to this so-called “generative AI” model. It is a game changer. Generative artificial intelligence, for all the opportunities it offers, threatens to overwhelm our current defences against fake information.

Generative AI allows computers to create new content by mining vast amounts of data with large language models (LLMs). The implications are vast. Until now, if you asked a search engine a question, it returned a range of links. Some could be inaccurate, others full of misinformation, and it was up to you to decide what to believe.

ChatGPT, however, tries to give a complete answer rather than a selection of links, and that answer can include images, sound, video or text. It is the computer, not the user, that evaluates the sources; it then provides a concise answer to the query, along with the key facts it believes support that answer. On the current evidence, sources of information are at best presented as footnotes.

The dangers of “hallucinations”

At first glance, the answer appears self-contained and trustworthy. The reader does not know whether the underlying source is the BBC or QAnon, and no alternative viewpoints are offered. It is also possible that the answer is a “hallucination”: a confidently presented but completely wrong answer.

Nicholas Diakopoulos, an associate professor of communication studies and computer science at Northwestern University in Illinois, found inaccuracies in seven of the 15 news queries he put to ChatGPT. And if generative AI learns from whatever data is at hand, could disinformation claiming that Covid vaccines are more dangerous than the flu be presented as an “answer”? If so, those who fight disinformation face a new enemy.

Generative AI is said to improve the accuracy of its answers over time. But can tech platforms be certain that the problem of “hallucinations” will go away? And can they honestly argue that any error rate is acceptable when it comes to the most harmful forms of disinformation?

News publishers face a different set of challenges. ChatGPT may draw its answers from news sources, but the credit given to news organizations is minimal. This is not just a financial issue; it is a challenge for the entire news ecosystem. How can users trust a source of news if they are not even aware of it?

Regulation is slow

Regulatory interventions can help, but they are no panacea. Regulation can even be used to suppress the truth: we have seen how laws ostensibly targeting disinformation have been used in Russia to suppress free speech.
