Researching Resonance with $7.5M MURI grant

Indiana University’s Yong Yeol Ahn leads a multi-institution team studying how large language models underpinning artificial intelligence systems are being used to strengthen social bonds – and inflame vitriolic divisions.

The campaign begins with a simple ad on your favorite streaming platform – a heartwarming video of a small-town farmer fighting to preserve his family’s legacy. The ad stirs something deep inside you — a connection to your roots, a sense of pride in hard work and tradition.

The message resonates.

Across the country, another viewer sees a starkly different ad. This one highlights an activist challenging corporate greed, standing against systemic injustice. It evokes her own values of fairness and resistance.

Again, the message resonates.

Both ads promote the same cause, but the messages are tailored so precisely that they feel personal, almost intimate, as if whoever made them knows exactly how you think and feel.

That’s because they do. Or, rather, it does.

Behind the scenes, artificial intelligence (AI) analyzed mountains of data — your viewing habits, online conversations, even your tone in social media posts — to craft that small-town farmer to deliver a message that deeply moved you, and only you. The AI-generated farmer was even designed to look like that guy you call “Paw-Paw” in all the photos you post on social media.

Increasingly today, sophisticated technology is at work, shaping what you see, hear, and engage with. This is the land of large language models (LLMs) — advanced AI systems that simulate human-like communication. Unlike the recommendation engines behind your search results and streaming queues, LLMs can now produce tailored messages, not just suggesting content but mimicking specific personas and tapping into your deeper psychological triggers to resonate with your values and emotions.

This is the focus of the latest research by Yong Yeol Ahn, professor of informatics and computing at Indiana University’s Luddy School of Informatics, Computing, and Engineering.

$7.5M MURI grant is helping IU study resonance in online communications.

Ahn recently received a $7.5 million grant from the U.S. Department of Defense to lead a multi-institutional team of experts who will assess the role AI plays in strengthening the sociological concept of resonance in online communications — including misinformation and radicalizing messages. His goal is to develop models that explain how people’s beliefs and ideas spread, both to individuals and groups.
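Researchers often sketch belief spread with simple opinion-dynamics simulations. The following is a minimal illustration of one classic toy model, a bounded-confidence update in which agents move toward each other’s opinions only when those opinions are already close — a common shorthand for how polarization can persist. It is an illustrative sketch only, not the team’s actual model; the parameter names and values here are invented for the example.

```python
import random

def step(opinions, epsilon=0.2, mu=0.3):
    """One bounded-confidence update: two randomly chosen agents
    move toward each other only if their opinions already lie
    within epsilon of one another."""
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift  # agent i moves toward agent j
        opinions[j] -= shift  # agent j moves toward agent i
    return opinions

random.seed(1)
opinions = [random.random() for _ in range(100)]  # opinions on a 0-1 scale
for _ in range(20000):
    step(opinions)

# After many updates, opinions collect into a few separated clusters
# rather than a single consensus: agents never reconcile across gaps
# wider than epsilon.
clusters = sorted({round(o, 1) for o in opinions})
```

In this toy setting, lowering `epsilon` (agents only listen to very similar views) yields more, smaller clusters — a cartoon of the "meaningful conversations across the aisle" problem Ahn describes.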

“I have been frustrated by the strong polarization in our societies and the fact that strong beliefs can severely compromise people’s ability to have meaningful conversations across the aisle,” says Ahn, who has been at the Luddy School for more than 13 years. “People often focus on obvious fake news, but it’s the subtle biases that may have much bigger impacts.”

From physics to the pitch to politics

To understand how Ahn ended up studying AI and misinformation, you have to go on a journey that extends from the furthest reaches of the universe all the way back to a soccer pitch in his homeland of South Korea.

Ahn’s bachelor’s and master’s degrees are both in physics, as is his Ph.D., all from the Korea Advanced Institute of Science and Technology. Studying the theories behind the origins of the universe was exciting, Ahn says — but then there was the brain.

As Ahn neared the completion of his undergraduate degree, he started to devour books on neuroscience, including Gödel, Escher, Bach by Douglas Hofstadter. The book won the National Book Award and the Pulitzer Prize for its exploration of the links between formal systems. It hypothesized that if life can grow out of formal networks of cells and consciousness can emerge from systems of firing neurons, then computers, too, might attain human-like intelligence, also known as artificial intelligence.

The book was published in 1979.

An interest sparked by books was solidified by a global sporting event: the 2002 FIFA World Cup, hosted by South Korea and Japan. The Korean national team had never won a game in the history of the World Cup. But in 2002, a rapidly growing and unified swell of fans — dubbed the Red Wave for the color of the team’s jerseys — watched as the host-country squad bested traditional soccer powerhouses Portugal, Italy, and Spain en route to a historic semifinal appearance.

Ahn was fascinated as he watched national pride swell. What he was seeing in his fellow South Koreans was akin to his new area of interest: network science. This field serves as a hub for a variety of other disciplines, taking input from biology, neuroscience, computer science, and social sciences in pursuit of explanations for how complex systems evolve and adapt.

The World Cup created a unifying message that resonated with Koreans of all localities, ages, and classes. From that shared World Cup experience came a new political champion for the country, Roh Moo-hyun. Ahn calls him an “Obama-like candidate” who was able to harness that unifying energy and ride it to an upset victory, becoming the country’s ninth president.

Which brings us back to that ad with the small-town farmer who looked so much like your Paw-Paw and the role AI plays more than two decades after South Korea’s surprise World Cup run.

‘Significant societal threat’

Traditional information consumption models have changed. Gone is the supreme influence of the daily newspaper and the nightly news broadcast. Here is an era of corporation-controlled algorithms on a mission to keep your eyes glued to their content — and the ads they sell. From that model comes a hijacking of our brains through AI that feeds us the words and images most likely to resonate with us personally, as individuals.

AI can detect the type of messenger you are most receptive to, or the words most likely to get you to act, and generate a marketing image and message specifically for you and only you. It then does the same for every other individual in the target audience.

That the $7.5 million grant comes from the Department of Defense and focuses on creating safeguards against AI-driven misinformation shows how seriously this topic is being taken at the highest levels. The six IU researchers led by Ahn are collaborating with researchers at Boston University, Stanford, and UC Berkeley. The group includes numerous Ph.D. and undergraduate students from IU Bloomington.

IU co-principal investigators on the project are assistant professor Jisun An, professor Alessandro Flammini, assistant professor Gregory Lewis, and Luddy Distinguished Professor Filippo Menczer, all of the Luddy School in Bloomington. Haewoon Kwak, associate professor at the Luddy School, will serve as senior personnel.

“It was surreal to see that we were chosen,” Ahn says. “The scale, competitiveness, and potential impact of this project made the recognition particularly impactful to me.”

The IU-driven project was one of just 30 across the country funded by the Defense Department’s Multidisciplinary University Research Initiative (MURI), which supports defense-related research projects. Make no mistake, Ahn says: This is most definitely an issue of national defense.

“Now, with AI, you’re introducing the potential ability to mine data about individual people and quickly generate targeted messages that could appeal to them — applying big data to individuals,” Ahn says. “This could cause even greater disruptions than we’ve already experienced.”

The deluge of information and radicalizing messages pose a significant societal threat.

Yong Yeol Ahn

Examples of the disruptions Ahn refers to are memes that turn the stock market on its head, or the conspiracy theory that landed Edgar Maddison Welch in prison.

Welch came from North Carolina to Washington, D.C., in 2016 firmly believing a Satanic child sex abuse ring was centered in the basement of the Comet Ping Pong pizzeria. Welch heard about the conspiracy theory online and spent three days reading about it and watching videos, according to his statement that was part of his plea deal.

He then drove to Washington and entered the pizza parlor, which was full of customers. Brandishing a 3-foot-long AR-15, with a loaded revolver in a holster on his hip, he fired numerous rounds before being subdued.

No evidence of child sex abuse — or a basement underneath the Comet Ping Pong pizzeria — was found.

‘Huge elephant in the room’

The Defense Department grant will support a five-year study run from IU’s Observatory on Social Media. The project will lean on surveys and experiments to dive into human belief dynamics. LLMs will then examine the results to extract useful insights about current social dynamics.

The team also plans to study people’s physiological responses to online information, both AI-generated and not, with tools such as heart rate monitors to better understand its biological impact.

The goal?

Ahn puts it simply: “Make society better.”

The irony of a study about AI-driven misinformation using AI isn’t lost on the researchers. The team will have AI create “model agents” — virtual people who share information and react to messages inside a simulation — to help more accurately model the way information flows between groups, as well as the effect that information has on the “people” inside the model.
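In the roughest terms, an agent-based simulation of this kind gives each virtual person a personal threshold and asks whether a given message "resonates" enough for them to pass it on. The sketch below is purely illustrative — the class, parameters, and thresholds are invented for this example and are not the project’s design:

```python
import random

class Agent:
    """A simulated audience member with a personal receptivity threshold."""

    def __init__(self, receptivity):
        self.receptivity = receptivity
        self.shared = False

    def maybe_share(self, appeal):
        # The agent reshares only if the message "resonates", i.e. its
        # appeal meets or exceeds this agent's receptivity threshold.
        if not self.shared and appeal >= self.receptivity:
            self.shared = True
        return self.shared

def simulate(n_agents=1000, appeal=0.6, seed=7):
    """Return the fraction of the population that reshared a message."""
    random.seed(seed)
    agents = [Agent(random.random()) for _ in range(n_agents)]
    return sum(a.maybe_share(appeal) for a in agents) / n_agents
```

With receptivity drawn uniformly, a message of appeal 0.6 reaches roughly 60 percent of agents; tailoring the appeal to each agent individually (the personalization the article describes) drives that fraction toward 100 percent.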

It underscores the truth, Ahn says, that AI is merely a tool, and tools can be used for good or evil.

“AI can nudge people to communicate better with others with contrasting opinions. Although current AI systems suffer from hallucination problems, better AI systems can potentially help people to fact-check information or identify common grounds between disagreeing parties,” Ahn says. “Studying collective social phenomena without thinking about AI right now may be ignoring a huge elephant in the room.”