The Role of AI in the Battle Against Disinformation

Disinformation, misinformation, fake news. These terms have spread widely over the past few years, made more prominent by the 24/7 news cycle and the rapidly advancing technology landscape powered by Artificial Intelligence (AI) and Machine Learning (ML). We’re experiencing an age of information overload, in which murmurs (or shouts) of ‘fake news’ and ‘propaganda’ only instill more wariness and fear in anyone trying to seek out the truth. With tiny computers in our pockets, everyone is a journalist, news anchor, or broadcaster, creating an environment rife with potential for false information.

Valkyrie recently discussed the issue of disinformation and AI at our quarterly State of Science (SoSci) event held at our lab in downtown Austin, Texas. Chief Science Officer Betsy Hilliard led the discussion with CEO Charlie Burgoyne and Principal Scientist Craig Corcoran on the state of AI and ML today in the battle against disinformation.

What exactly is disinformation?

Perhaps the most important thing to do before we dig into this topic is to define disinformation. As the old adage goes, it’s the thought (or in this case, the intent) that counts. The defining feature of disinformation is that it is false or inaccurate information shared with the intent to deceive. Without the intent to deceive, inaccurate or false information is simply misinformation: still potentially harmful, but less nefarious.

Intentions matter, but inaccurate information of any kind is likely to cause harm, regardless of intent. This is part of why it is so important for all of us to take an active role in learning how to identify and minimize informational inaccuracies. With evolving AI/ML capabilities enabling fully generated images, text, and even videos, that task can seem to slip further and further out of reach.

Is AI helping or hurting the disinformation situation?

While it is easy to blame AI for the capabilities that make disinformation easier than ever to generate and disseminate, the technology is not the enemy. As Betsy Hilliard pointed out, “The same models that are used to generate misinformation and disinformation can also be used to detect things generated by it.” In other words, a model can be turned against itself, so to speak, to detect content generated by models like it. A large language model, for example, may flag content that it recognizes it could have created itself.
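A minimal sketch of this “turn the model against itself” idea follows, using perplexity under an open language model as a rough signal. The gpt2 checkpoint and the threshold are illustrative assumptions, not a vetted detector from the discussion.

```python
# Sketch: score how "unsurprising" a passage is to a language model.
# Text the model finds very predictable (low perplexity) is one weak signal
# that a similar model could have generated it. The checkpoint and threshold
# below are illustrative choices, not a production detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
# Low perplexity alone does not prove machine authorship; treat it as one
# feature among many (provenance, metadata, network behavior, etc.).
print(f"perplexity={score:.1f}", "-> possibly machine-generated" if score < 20 else "-> no flag")
```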

Knowledge graphs (KGs) present a unique opportunity to combat disinformation. KGs provide the ability to build a body of knowledge that is trustworthy and proven, which can then be used to fact-check a piece of information or content in question. As Corcoran pointed out, “This is still to be determined, but we may be able to leverage a type of technology in the form of a plug-in that uses a knowledge graph to send pop-ups for content it flags as potentially false.”
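To make the idea concrete, here is a toy sketch of the kind of lookup such a plug-in might perform. The triples, claim format, and flag_claim helper are hypothetical stand-ins for a real, curated knowledge graph with proper entity linking.

```python
# Toy sketch: fact-check a claimed (subject, relation, object) triple against
# a small trusted knowledge graph. The data and helper below are hypothetical
# illustrations; a real system would use a curated KG and entity resolution.
TRUSTED_KG = {
    ("austin", "capital_of", "texas"),
    ("water", "boils_at_sea_level_celsius", "100"),
}

def flag_claim(subject: str, relation: str, obj: str) -> str:
    """Return a rough verdict for a claimed triple."""
    if (subject, relation, obj) in TRUSTED_KG:
        return "supported"
    # If the graph asserts a different object for the same subject/relation,
    # the claim contradicts trusted knowledge and can be flagged.
    if any(s == subject and r == relation for s, r, _ in TRUSTED_KG):
        return "contradicted: flag as potentially false"
    return "unknown: no trusted facts to compare against"

print(flag_claim("austin", "capital_of", "texas"))     # supported
print(flag_claim("austin", "capital_of", "oklahoma"))  # contradicted
print(flag_claim("austin", "founded_in", "1839"))      # unknown
```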

Knowledge Graphs provide context to AI-generated content

Any one particular piece of content may be hard to label as disinformation, but examining that content in the context of a knowledge graph can provide clues about its validity. Another type of graph, called a network, can be used to understand behavior. Using a network that tracks behavior can help distinguish disinformation from misinformation by locating the source; a botnet is much more indicative of disinformation than a single source, and distinguishing between the two informs the strategy for correcting disinformation and stopping its spread.
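A rough sketch of the kind of behavioral signal a network view provides: many distinct accounts posting identical content within seconds of one another looks more like coordinated amplification than a single person sharing a mistaken belief. The sample posts and thresholds below are invented for illustration.

```python
# Sketch: group posts by identical text, then look at how many distinct
# accounts posted it and how tightly the timestamps cluster. A burst of
# identical posts from many accounts hints at coordinated disinformation,
# whereas a lone account is more consistent with ordinary misinformation.
# The data and thresholds are illustrative assumptions.
from collections import defaultdict

posts = [
    {"account": "user_a", "text": "Claim X is true!", "ts": 1000},
    {"account": "bot_01", "text": "Claim X is true!", "ts": 1001},
    {"account": "bot_02", "text": "Claim X is true!", "ts": 1002},
    {"account": "bot_03", "text": "Claim X is true!", "ts": 1003},
    {"account": "user_b", "text": "I think claim Y might be right?", "ts": 5000},
]

by_text = defaultdict(list)
for p in posts:
    by_text[p["text"]].append(p)

for text, group in by_text.items():
    accounts = {p["account"] for p in group}
    spread = max(p["ts"] for p in group) - min(p["ts"] for p in group)
    coordinated = len(accounts) >= 3 and spread <= 60  # many accounts, tight burst
    label = "possible coordinated amplification" if coordinated else "single/organic source"
    print(f"{text!r}: {len(accounts)} accounts over {spread}s -> {label}")
```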

This brings us to a crucial concept: context.

Context is Key

In the same way that studying behavior in networks provides a deeper understanding of intent and validity, context is the best tool for shedding light on content you are unsure about. Take the recent example of a viral video of the Kansas City Chiefs’ head coach, Andy Reid. The video shows a realistic enough portrayal of the coach in a press-conference setting, disparaging one of his players. It looked and sounded real enough to dupe a large audience, including several sports reporters. The only ones the video didn’t fool? The Kansas City Chiefs and their fans.

Those who knew the coach could immediately tell that the video was not real footage of him. The unique, implicit context these players had enabled them to see what the general public missed: this particular coach isn’t a man of many words, they shared, and he wouldn’t be so long-winded. Their close personal understanding of the coach’s character, tendencies, and mannerisms gave them a perception of the video that someone without that implicit context would not have.

But we don’t need this story to know that context is important; anyone could probably spot a deep fake of their own boss or close friend, and most of us could probably even detect a deep fake video of a stranger if it were viewed next to one or more real videos. In this way, comparing a suspected deep fake video or image to known, true videos and images is a practical, human way of leveraging the benefits of a network graph. Our brains catch patterns (i.e., context) better than we can teach them to, so certain disparities, like the blurriness of the speaker’s mouth or the way their hands move, signal to us that something may be a little off.
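As a loose computational analogue of that human comparison, one can check a suspect frame against frames from footage known to be authentic. The sketch below uses a simple difference hash with Pillow; the file names are hypothetical, and a hash match only catches near-duplicates, so this is a weak heuristic rather than real deepfake detection.

```python
# Sketch: compare a suspect image against reference images known to be real,
# using a simple difference hash. A small Hamming distance means the suspect
# frame closely matches known footage; a large distance only means it is not
# a near-duplicate, not that it is fake. File names are hypothetical.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> list[bool]:
    """Grayscale, shrink, and compare adjacent pixels left-to-right."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = list(img.getdata())
    return [
        px[row * (hash_size + 1) + col] > px[row * (hash_size + 1) + col + 1]
        for row in range(hash_size)
        for col in range(hash_size)
    ]

def hamming(a: list[bool], b: list[bool]) -> int:
    return sum(x != y for x, y in zip(a, b))

suspect = dhash("suspect_frame.png")
references = [dhash(p) for p in ["press_conf_2023.png", "press_conf_2024.png"]]
closest = min(hamming(suspect, ref) for ref in references)
print("near-duplicate of known footage" if closest <= 10 else "no close match to trusted frames")
```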

What is the human role in the fight against disinformation?

Can you teach your brain context? We can learn what to look out for in videos and images, but fakes will only continue to become more advanced, and those little tricks will stop working. There is a deeper level of responsibility that needs to be discussed. Just as AI professionals need to be responsible in the models they build, all of us as researchers and consumers of information bear a level of responsibility for how we search for, interact with, and share information.

In today’s internet age, we have the unique privilege of an abundance of information and sources, but with that privilege comes the responsibility for fact-checking and education, especially before sharing. Hilliard advocates paying close attention to sources and being careful about which sources you trust. General education about a topic or space is always a good place to start; being aware of a broad space of context makes it easier to detect false information when you encounter it.

As news media becomes increasingly omnipresent, akin to what Burgoyne describes as “cultural tinnitus,” its impact on consumers diminishes as fatigue sets in. Despite this, the threat of disinformation remains potent, underscoring the significance of alternative communication platforms like social media. With more individuals relying on platforms such as TikTok and Instagram for information, it becomes imperative to equip ourselves with tools and strategies to counter misinformation in these less-regulated spaces. As the communication landscape evolves, the responsibility of verifying and fact-checking information falls on each of us. This proactive approach is essential to curbing the spread of misinformation.
