Professor trains AI to detect foreign interference online

Modern technologies like social media make it easier than ever for enemies of the United States to emotionally manipulate U.S. citizens.

U.S. officials warn that foreign adversaries are producing massive amounts of false and misleading information online to sway U.S. public opinion. In July of this year alone, the Justice Department announced that it had disrupted a Kremlin-backed disinformation campaign that used nearly a thousand fake social media accounts to spread false narratives.

While AI is often used offensively in disinformation campaigns to generate large volumes of misleading content, it is now also playing an important role in defense.

Mark Finlayson, a professor in FIU’s College of Engineering and Computing, is an expert in training AI to understand stories. He has spent more than two decades studying this topic.

The U.S. Department of Defense and the Department of Homeland Security have funded Finlayson’s research on disinformation.

Compelling – but false – stories

Storytelling is important for spreading disinformation.

“A heartfelt narrative or personal anecdote is often more compelling to an audience than the facts,” says Finlayson. “Stories are particularly effective in overcoming resistance to an idea.”

For example, a climate activist might be more successful in convincing an audience about plastic pollution by telling the personal story of a rescued sea turtle with a straw stuck up its nose, rather than just citing statistics, Finlayson says. The story makes the problem understandable.

“We explore the different ways stories are used to advance an argument,” he explains. “It’s a challenging problem because stories in social media posts can be as short as a single sentence, and these posts sometimes just allude to well-known stories without explicitly retelling them.”

Suspicious handles

Finlayson’s team is also studying how AI can analyze usernames, or handles, on social media profiles. Azwad Islam, a Ph.D. student who co-authored a recent paper with Finlayson, explains that usernames often contain important clues about a user’s identity and intentions.

The paper was published at the International Conference on Web & Social Media (ICWSM), a top artificial intelligence conference.

“Handles reveal a lot about users and how they want to be perceived,” explains Islam. “For example, a person claiming to be a New York journalist might choose the username ‘@AlexBurnsNYT’ instead of ‘@NewYorkBoy’ because it sounds more credible. However, both usernames indicate that the user is a male with an affiliation with New York.”

The FIU team introduced a tool that can analyze a user handle and reliably infer a person’s claimed name, gender, location and even personality, when that information is implied in the handle.
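
The paper describes the team’s full method; purely as an illustrative sketch in Python (the word lists and token-splitting rules below are invented for illustration, not taken from the published tool), the core idea of mining a handle for identity clues might look like this:

import re

# Illustrative stand-ins; a real system would draw on far richer name, location and persona resources.
KNOWN_FIRST_NAMES = {"alex", "maria", "john"}
LOCATION_HINTS = {"nyt": "New York (NYT affiliation)", "nyc": "New York City", "fl": "Florida"}

def decompose_handle(handle: str) -> dict:
    """Split a handle such as '@AlexBurnsNYT' into tokens and look for identity clues."""
    tokens = re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+", handle.lstrip("@"))
    clues = {"handle": handle, "possible_name": [], "possible_location": []}
    for tok in tokens:
        if tok.lower() in KNOWN_FIRST_NAMES:
            clues["possible_name"].append(tok)
        if tok.lower() in LOCATION_HINTS:
            clues["possible_location"].append(LOCATION_HINTS[tok.lower()])
    return clues

print(decompose_handle("@AlexBurnsNYT"))
# {'handle': '@AlexBurnsNYT', 'possible_name': ['Alex'], 'possible_location': ['New York (NYT affiliation)']}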

Although a username alone cannot confirm whether an account is fake, it can be crucial in assessing an account’s overall authenticity – especially as AI’s ability to understand stories continues to develop.

“By interpreting usernames as part of an account’s larger narrative, we believe usernames could become a critical tool in identifying sources of disinformation,” says Islam.

Questionable cultural cachet

Objects and symbols can carry different meanings in different cultures. If an AI model is unaware of these differences, it can make serious errors when interpreting a story. Foreign adversaries can also exploit these symbols to make their messages more convincing to a target audience.

Anurag Acharya, a former Ph.D. student of Finlayson’s, worked on this problem. He found that training AI with diverse cultural perspectives improves its understanding of stories.

“A story might say, ‘The woman was overjoyed in her white dress.’ An AI model trained exclusively on stories about Western weddings could read that and say, ‘That’s great!’ But if my mother saw that sentence, she would be very offended, because we only wear white to funerals,” says Acharya, who comes from a family with Hindu roots.
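
As a toy illustration of the point (the symbol-to-meaning table below is invented, not drawn from the published work), the same object can carry opposite connotations depending on the cultural context a model has learned:

# Invented toy mapping: the same symbol signals opposite things in different cultural contexts.
SYMBOL_CONNOTATIONS = {
    ("white dress", "Western"): "wedding, joy",
    ("white dress", "Hindu"): "funeral, mourning",
}

def interpret(symbol: str, culture: str) -> str:
    return SYMBOL_CONNOTATIONS.get((symbol, culture), "unknown in this context")

print(interpret("white dress", "Western"))  # wedding, joy
print(interpret("white dress", "Hindu"))    # funeral, mourning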

It is critical that AI understands these nuances so that it can detect when foreign adversaries are leveraging cultural messages and symbols to achieve greater malicious impact.

Acharya and Finlayson recently published a paper on this topic, presented at a workshop at the annual meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), a leading AI conference.

Helping AI find order in chaos

Another difficulty in understanding stories is that the sequence of events a story relates is rarely presented in a clean, linear fashion. Instead, events often appear in fragments interwoven with other storylines. For a human reader this creates dramatic effect; for AI models, such complex relationships can cause confusion.

Finlayson’s research on timeline extraction has significantly improved AI’s understanding of event sequences within narratives.

“Reversals and rearrangements of events can occur in a story in many different and complex ways. This is one of the key topics we have been working on with AI. We helped the AI understand how to represent different events in the real world and how they might influence each other,” says Finlayson.

“This is a good example of something that is easy for humans to understand but challenging for machines. An AI model must be able to accurately order the events in a story. This is important not only for detecting disinformation, but also to support many other applications.”
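
To make the ordering problem concrete, here is a minimal sketch (my own illustration, not Finlayson’s pipeline): once a model has extracted pairwise “happened-before” relations from a narrative told out of order, the real-world timeline can be recovered by topologically sorting those relations.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A story might narrate events out of order: the arrest first, then the tip, then the fraud.
# Suppose a model has extracted these relations: event -> events known to happen before it.
happened_before = {
    "the tip came in": {"the fraud began"},
    "the suspect was arrested": {"the tip came in"},
}

# A topological sort recovers the real-world timeline from the pairwise relations.
timeline = list(TopologicalSorter(happened_before).static_order())
print(timeline)
# ['the fraud began', 'the tip came in', 'the suspect was arrested']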

The FIU team’s advances in helping AI understand stories are designed to help intelligence analysts combat disinformation with new levels of efficiency and accuracy.