Once upon a time, we believed there were two kinds of news: good news and bad news. Then the 2016 election rolled around, and we got a new category: "fake news." More and more of our social media feeds were taken up by spam accounts pushing misleading information or outright lies that many people nevertheless believed were true. But why did (and does) this automated campaign of deceit work on so many of us? A new study published in the journal Nature Human Behaviour says the bots are only partly to blame.

While "fake news" may be a buzzword, it's certainly no joke. The information we take in can change the way we think, behave, and vote. So scientists are working as fast as they can to understand, and ideally defuse, the phenomenon before it gains any more traction.

Some studies have found that viral ideas arise at the intersection of busy social networks and limited attention spans. In a perfect world, only factually accurate, carefully reported and fact-checked stories would go viral. But that isn’t necessarily the case. Misinformation and hoaxes spread across the internet, and especially social media, like a forest fire in dry season.

To find out why, researchers created a virtual model of information-sharing networks. Into this network, they dropped two kinds of stories: high-quality (true) and low-quality (fake or hoax). Then they populated the network with agents representing real users, news outlets, and spam bots. To keep the virtual news feeds close to real life, the spam bots were both more numerous and more prolific than the genuine posters.

The results confirmed what any Facebook user already knows: Whether or not a story goes viral has very little to do with whether it's actually true. "Better [stories] do not have a significantly higher likelihood of becoming popular compared with low-quality information," the authors write. "The observation that hoaxes and fake news spread as virally as reliable information in online social media … is not too surprising in light of these findings."

Within the model, a successful viral story required two elements: a network already flooded with information, and users' limited attention spans. The more bot posts in a network, the more users were overwhelmed, and the more likely it was that fake news would spread.
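The mechanism is easy to reproduce in miniature. The sketch below is a hypothetical toy simulation in the spirit of the study's model, not the authors' actual code: posts of random quality enter a shared feed, and users reshare only from the handful of most recent posts they have the attention to see, preferring higher quality when they can see it. The function names, parameters, and the `new_post_rate` knob (standing in for information load from bots) are all illustrative assumptions.

```python
import random
from collections import Counter

def simulate(steps=20000, attention=10, new_post_rate=0.5, rng=None):
    """Toy information-sharing model. Each tick, either a new post of
    random quality arrives (probability new_post_rate, a proxy for
    bot-driven information load) or a user reshares one of the
    `attention` most recent posts, picked with probability
    proportional to quality. Returns each post's quality and shares."""
    rng = rng or random.Random(0)
    feed = []              # (post_id, quality), newest last
    quality = {}           # post_id -> quality in [0, 1)
    shares = Counter()     # post_id -> times reshared
    next_id = 0
    for _ in range(steps):
        if not feed or rng.random() < new_post_rate:
            q = rng.random()
            quality[next_id] = q
            feed.append((next_id, q))
            next_id += 1
        else:
            window = feed[-attention:]      # limited attention span
            r = rng.random() * sum(q for _, q in window)
            for pid, q in window:           # quality-weighted pick
                r -= q
                if r <= 0:
                    shares[pid] += 1
                    feed.append((pid, q))   # reshare resurfaces the post
                    break
    return quality, shares

def pearson(xs, ys):
    """Plain Pearson correlation, used to measure how strongly
    popularity tracks quality."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)
```

Running the toy model at a low versus high `new_post_rate` and correlating each post's quality with its share count tends to show the pattern the paper describes: the more flooded the feed, the weaker the link between a story's quality and its popularity.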

Even conscientious media consumers can be taken in by false information if they're in a rush, the authors write. "The amount of attention one devotes to assessing information, ideas and opinions encountered in online social media depends not only on the individual but also on [their] circumstances at the time of assessment; the same user may be hurried one time and careful another."

So what's the solution? "One way to increase the discriminative power of online social media would be to reduce information load by limiting the number of posts in the system," they say. "Currently, bot accounts controlled by software make up a significant portion of online profiles, and many of them flood social media with high volumes of low-quality information to manipulate public discourse. By aggressively curbing this kind of abuse, social media platforms could improve the overall quality of information to which we are exposed."