How ChatGPT Can Be Used in the Information War

Seemingly overnight, platforms that monetize user attention became the primary brokers of our news. The safeguards we had established in traditional journalism disappeared, and the new model came with an inherent conflict of interest. Because user attention was the commodity, the headlines that captured it were rewarded, and the structure favored sensational, provocative headlines over in-depth, fact-based journalism.

The ramifications have been grave. There are unintended consequences, such as political polarization and heightened levels of depression and anxiety in some groups. There are also more overtly nefarious acts by tech-savvy communities that exploit algorithms and spread false stories, or fake news, to manipulate the public into supporting an agenda (think Cambridge Analytica).

So, when I learned that ChatGPT and other generative AI tools built on language models are being used by news publishers and content creators to draft content, I was concerned.

Let’s take a look at the technology, what’s at stake in the media, and how we can protect ourselves from both unanticipated consequences and nefarious actors.