Last fall, we featured an extensive interview with Petter Törnberg of the University of Amsterdam, who studies the underlying mechanisms of social media that give rise to its worst aspects: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. He wasn’t optimistic about social media’s future.

Törnberg’s research showed that, while numerous platform-level intervention strategies have been proposed to combat these issues, none are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Törnberg has been very busy since then, producing two new papers and one new preprint building on this realization that social media is structured quite differently from the physical world, with unexpected downstream consequences. The first new paper, published in PLoS ONE, focuses specifically on the echo chamber effect, using the same method: combining standard agent-based modeling with large language models (LLMs)—essentially creating little AI personas to simulate online social media behavior.

  • FaceDeer@fedia.io · 2 days ago

    Conversely, if just 10 percent of users in a given social media community largely agree with your stances, you will be more tolerant toward diverse opinions that contradict your own. “There’s a certain chance that some users will end up in communities where it’s very homogenous and 99 percent of users are disagreeing with them,” said Törnberg. “That will cause them to leave, and you get this feedback effect just because of the structure of interaction. But if you have a filter bubble effect, where everyone is shown 10 percent of their own type, that creates a possibility for you to find the people who you agree with within the community. And that stabilizes the entire dynamics so it doesn’t tip over to one side or the other and become extreme or overly homogenous.”

    Ooh, this is interesting. It suggests the possibility of automation: since most social media allows for upvoting and downvoting, it should be possible to automatically determine which users are “agreeable” and which are “disagreeable” and filter thread contents to push them toward this 10 percent threshold.

    Probably wouldn’t work on the Threadiverse, though; there’s not a large enough population here yet.
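
    The commenter's filtering idea could be sketched roughly as follows. This is a hypothetical illustration, not any platform's actual API: the `agreeability` scoring rule (fraction of the viewer's past votes on an author that were upvotes) and the `filter_thread` sampling scheme are assumptions made for the sketch, and a real implementation would need far more careful handling of sparse vote histories and ranking.

    ```python
    from dataclasses import dataclass
    import random

    # Hypothetical sketch: label authors "agreeable" or "disagreeable" from the
    # viewer's vote history, then sample a thread so that roughly 10 percent of
    # the comments shown come from agreeable authors. All names are invented.

    @dataclass
    class Comment:
        author: str
        text: str

    def agreeability(votes: dict[str, list[int]], author: str) -> float:
        """Fraction of the viewer's votes on this author that were upvotes
        (+1 up, -1 down). Authors with no vote history default to neutral 0.5."""
        history = votes.get(author, [])
        if not history:
            return 0.5
        ups = sum(1 for v in history if v > 0)
        return ups / len(history)

    def filter_thread(comments, votes, target=0.10, limit=10, seed=0):
        """Pick up to `limit` comments so about `target` of them come from
        authors the viewer tends to upvote (agreeability above 0.5)."""
        rng = random.Random(seed)
        agree = [c for c in comments if agreeability(votes, c.author) > 0.5]
        disagree = [c for c in comments if agreeability(votes, c.author) <= 0.5]
        n_agree = max(1, round(target * limit)) if agree else 0
        picked = rng.sample(agree, min(n_agree, len(agree)))
        picked += rng.sample(disagree, min(limit - len(picked), len(disagree)))
        rng.shuffle(picked)  # avoid always surfacing agreeable comments first
        return picked
    ```

    With a thread of 5 comments from authors the viewer has upvoted and 45 from others, a `limit=10` selection would surface one agreeable comment and nine disagreeable ones, matching the 10 percent mix described in the quoted passage.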