USC’s Dr. Kristina Lerman explains why it’s very hard to go viral and how our brains can’t truly process all the information we’re seeing on social media.
By S.C. Stuart
When your phone is buzzing with alert after alert, and everyone on your Twitter feed seems to be furiously chatting about a particular topic, it can be easy to assume the discussion is captivating the entire world. In reality, your social bubble is just that — a bubble — and you should be mindful not to get lost in the endless chatter, according to Dr. Kristina Lerman, a USC scientist.
Dr. Lerman is a Research Associate Professor in Computer Science and Principal Scientist at USC’s Information Sciences Institute. Her focus is on examining what our online lives reveal about society — for better (social justice, community strength) or worse (gangs and terror-related insurgencies).
We spoke to Dr. Lerman at her office in Marina Del Rey, California. Here are edited and condensed excerpts from our conversation.
Dr. Lerman, can you bust some myths about social media for us? For starters, in your paper ‘A Meme is Not a Virus,’ your computational analysis of social interaction shows that even the most outrageous ‘break the internet’ moment only reaches 5 percent of the total network.
It’s true. Through our research, we’ve found that when we comprehensively measure the spread of information — to understand the influence of a meme, for example — even the largest cascade reaches only 5 percent of the network. So, yes, it’s very hard to go viral. But it feels like memes go viral; it often looks to us like everyone is talking about something. I ascribe that to another interesting network effect that we call the “majority illusion” — an example of the class-size bias effect. The meme “appears” more popular because of your position within the network of your friends or contacts.
Because every human being is at the center of their own universe.
[Laughs] Right. Something like that.
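The majority illusion Dr. Lerman describes can be seen in a toy network. This is a hypothetical sketch for illustration, not code from her research: in a star-shaped network where only the well-connected hub is “talking about the meme,” every peripheral node sees 100 percent of its neighbors discussing it, even though almost no one in the network actually is.

```python
def majority_illusion_star(n_spokes):
    # Node 0 is the hub; nodes 1..n_spokes are spokes connected only to the hub.
    neighbors = {0: list(range(1, n_spokes + 1))}
    for spoke in range(1, n_spokes + 1):
        neighbors[spoke] = [0]

    active = {0}  # only the hub is "talking about the meme"

    # Global prevalence: what fraction of the whole network is active.
    global_share = len(active) / (n_spokes + 1)

    # Local view: fraction of each spoke's neighbors that are active
    # (always 1.0 here, since each spoke's only neighbor is the hub).
    local_shares = [
        sum(1 for v in neighbors[u] if v in active) / len(neighbors[u])
        for u in range(1, n_spokes + 1)
    ]
    return global_share, local_shares

global_share, local_shares = majority_illusion_star(99)
print(f"Globally active: {global_share:.0%}")              # 1% of the network
print(f"Seen by each spoke locally: {local_shares[0]:.0%}")  # 100%
```

With 99 spokes, only 1 percent of nodes are active, yet from every spoke’s vantage point the topic looks universal — the position of the active node in the network, not its prevalence, drives the perception.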
Your research has shown that not only do we perceive information as more popular than it is, but we also barely take in the information itself, accepting data at face value without truly evaluating it.
The brain doesn’t have enough capacity to process much information in real time. Through adaptive evolution, humans tend to guess at what’s going on and only pay attention to the most salient information.
Your work has also shown that the more connected we are, the less we comprehend — a problem of signal-to-noise ratio.
Yes. The web is an extreme place for this. It displays information in 2D so, in the West, we start reading from the top left and go down. This has been shown through eye-tracking experiments. Of course, there’s a bias because we’re talking about Westerners who read in that way. But our brains require an enormous amount of energy to process information and at some point, we run out of energy.
The more connected you are to other people providing a constant stream of input — as on social media — the less, over time, you’ll take in. Our research has morphed over the years from modeling how far a story reaches within the network to seeing if we can predict how and where popular stories will spread.
Can you tell us about the work USC’s Information Sciences Institute is doing with DARPA?
We are interested in how people make decisions when they are online. If you get certain information, how much does it affect your own decision-making? How much do networks skew your own perception of how useful that information is? In short, the influence of online social dynamics. We will be analyzing everything from message content and network spread to cognitive bias, sentiment analysis, and the variance and speed of transmission.
Once the large-scale computational model is built, you’ll be able to test the network with new ideas to see how it reacts.
Yes, we’ll introduce “shocks” — a.k.a. “agents” — into the system, to understand how the information is absorbed, shared, and processed.
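One standard way to model how a message propagates after such a shock is an independent-cascade simulation: seed a few agents with the message, then give each newly activated node one chance to pass it to each neighbor. This is a generic sketch of that family of models, not ISI’s actual system; the network, seed choice, and transmission probability here are all hypothetical.

```python
import random

def independent_cascade(neighbors, seeds, p, rng):
    """One independent-cascade run: each newly activated node gets a
    single chance to activate each of its neighbors with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in neighbors.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# A small random network (hypothetical, for illustration only).
rng = random.Random(42)
n = 200
neighbors = {
    u: [v for v in range(n) if v != u and rng.random() < 0.02]
    for u in range(n)
}

# Introduce a "shock": seed the message at node 0 and watch it spread.
reached = independent_cascade(neighbors, seeds=[0], p=0.1, rng=rng)
print(f"Cascade reached {len(reached)} of {n} nodes")
```

Running many such simulations with different seeds and probabilities is how one tests how a network absorbs, shares, and processes new information.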
Is there a flip side project to this? For example, are you working on agents that introduce positive messaging to improve behavior — what global governments call nudges?
We’ve not been asked to do that at this time, but I am fascinated by this. I’m particularly interested in using AI around issues of poverty and racism. Nudges, to change behaviors, are very interesting. I’d like to continue mining data about human behavior and developing experiments to see what impact certain “inputs” will have. For example, getting people to make better financial decisions around saving, or encouraging students to stay in school by giving them positive influence inputs from older students. It’s a fascinating area.
Agreed. Back to DARPA for a moment. I’m aware you can’t speculate on why DARPA would want to build a large-scale computational model of the web itself, but it’s likely this work builds on your earlier $6.5 million EFFECT (Effectively Forecasting Evolving Cyber Threats) project for the intelligence services, with IARPA. What can you tell us about that?
This was a fascinating — and highly complex — problem. IARPA charged us with developing an algorithm to predict cyber attacks. We collaborated with Raytheon BBN, Lockheed Martin, and other academic institutions to launch sensors into the Dark Web to identify malevolent signals. We’re preparing for a final meeting now. Frankly speaking, it has proved to be extremely difficult.
That’s understandable. I interviewed IARPA program managers for their latest “forecast the future” challenge, and they pointed out that if it was easy, there wouldn’t be cyber attacks that take down networks.
The biggest issue is data. If you only have data on the attacks that are a “success” — i.e. achieved their goal — you don’t have enough to work with. To identify meaningful patterns and develop an algorithm that can start predicting, you need ALL data.
True. Finally, you received your Ph.D. in physics in 1995, during the early development of the web, yet almost a decade before Facebook. How did you first become interested in applying a scientific, network-based approach to our burgeoning digital communications?
I first used the web back in 1993 when I was living in France, and had access to a computer at the École normale supérieure in Paris. I ended up “getting lost” — following one link to another and so on — and was fascinated by the experience. Then, on returning to the US, while studying physics at UCSB, we had Unix-based NeXT workstations, and I continued to use the web on those.
It was while working on a paper on computing the similarity between documents — represented as particles in a high-dimensional space, with similarity measured by distance — that I suddenly realized one could do the same with information emerging on the web. That’s where my research on network effects really started, and I continue to be absorbed by where this is all leading us.
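The document-as-particle idea Dr. Lerman describes amounts to embedding each document as a vector — one axis per word — and measuring how close two documents lie. A minimal bag-of-words sketch of that idea (my own illustration, not her paper’s method):

```python
import math
from collections import Counter

def bag_of_words(text):
    # Represent a document as a point in a high-dimensional space:
    # one axis per distinct word, coordinate = word count.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine of the angle between the two document vectors:
    # 1.0 means identical direction, 0.0 means no shared words.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(c * c for c in a.values())) * \
           math.sqrt(sum(c * c for c in b.values()))
    return dot / norm if norm else 0.0

d1 = bag_of_words("information spreads across the network")
d2 = bag_of_words("memes spread across the social network")
d3 = bag_of_words("quantum field theory lecture notes")

print(round(cosine_similarity(d1, d2), 2))  # shared vocabulary -> positive
print(round(cosine_similarity(d1, d3), 2))  # no shared words -> 0.0
```

The same geometric intuition carries over to the web: pieces of information become points in a space, and “distance” between them becomes a measurable, analyzable quantity.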
Readers in Australia can catch Dr. Lerman this week at the ACM WSDM (Web Search and Data Mining) conference, where she’ll serve as program chair.
Originally published at www.pcmag.com.