Get ready for more fake news and propaganda because of AI  

ChatGPT has already changed the future

ChatGPT is an artificial intelligence tool that can provide information in simple, understandable “human” language. It was trained on text collected from the internet, totaling some 300 billion words. The end result is a chatbot with encyclopedic knowledge and great fluency in human speech.

However, the persuasive appearance of “human” responses can be a powerful tool in the wrong hands. Until now, spreading disinformation has required significant human effort. An artificial intelligence like ChatGPT would make it easy for trolls to take their business to the next level, as research from Georgetown University has shown.

Such a sophisticated tool will reshape the landscape of social media, supporting or attacking specific policies and politicians, and spreading fake news and propaganda. It also poses a significant risk to elections.

The ability of language models to rival human-written content, at such a low cost, will give a clear advantage to propagandists who choose to use them. These advantages can widen the reach of malicious actors, enable new tactics for influencing public opinion, and make campaign messaging far more targeted and effective.

And it’s not just the quantity of misinformation that will increase, but also its quality. AI systems will improve the persuasive quality of content, making messages even more difficult for the user to recognize as part of a coordinated disinformation campaign.

Language models will produce a large volume of content that is original every time, freeing propagandists from relying on copying and pasting the same text across social media.

There are also risks in the areas of user security and spam.

People who spread spam rely on the most gullible users clicking their links. The more people a link reaches, the better their odds of finding a victim. With AI, that reach will multiply many times over. Even if Facebook and Twitter stop three quarters of the spam, there will still be ten times more content than before that can mislead people online.
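The arithmetic behind that claim can be made explicit. As a rough sketch (the 40× multiplier is an assumption chosen to match the figures in the paragraph, not a number from the original research):

```python
# Spam-volume arithmetic: if AI tools multiply the amount of spam
# generated by some factor, and platforms filter 3/4 of it, how much
# survives relative to today's baseline?

baseline = 1.0        # today's spam volume, in arbitrary units
ai_multiplier = 40    # ASSUMED increase in generated spam with AI tools
filter_rate = 0.75    # fraction of spam the platforms manage to block

surviving = baseline * ai_multiplier * (1 - filter_rate)
print(surviving / baseline)  # -> 10.0, i.e. ten times today's volume
```

In other words, even very effective filtering loses the race if generation becomes cheap enough: the surviving fraction of a vastly larger pool still dwarfs the original total.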

The researchers predict that social media platforms will be flooded with fake profiles as language models rapidly mature.

Something like ChatGPT could push fake accounts to a scale we have never seen before, and it will become much harder to distinguish these accounts from real people.