Critical thinking education trumps banning and censorship in battle against disinformation, study suggests

A new study conducted by researchers from Michigan State University suggests that the battle against online disinformation cannot be won by mere content moderation or banning those who spread fake news. Instead, the key lies in early and continuous education that teaches individuals to critically evaluate information and remain open to changing their minds. …

  • theluddite@lemmy.ml · 58 points · 1 year ago

    This study is an agent-based simulation:

    The researchers used a type of math called “agent-based modeling” to simulate how people’s opinions change over time. They focused on a model where individuals can believe the truth, the fake information, or remain undecided. The researchers created a network of connections between these individuals, similar to how people are connected on social media.

    They used the binary agreement model to understand the “tipping point” (the point where a small change can lead to significant effects) and how disinformation can spread.
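    For concreteness, here is a minimal sketch of the standard binary agreement (naming-game) rules. This is the textbook formulation, not anything taken from the paper: it uses a fully mixed population instead of a network, and every parameter name here is made up.

    ```python
    import random

    # Each agent holds a set of beliefs drawn from {"A", "B"}:
    # {"A"} = the truth, {"B"} = the fake information, {"A", "B"} = undecided/mixed.
    # A committed minority holds "A" and never changes its mind.

    def run_binary_agreement(n_agents=1000, committed_fraction=0.10,
                             steps=200_000, seed=0):
        rng = random.Random(seed)
        agents = [{"B"} for _ in range(n_agents)]         # everyone else starts on B
        committed = set(range(int(committed_fraction * n_agents)))
        for i in committed:
            agents[i] = {"A"}

        for _ in range(steps):
            speaker, listener = rng.sample(range(n_agents), 2)
            belief = rng.choice(sorted(agents[speaker]))  # speaker voices one belief
            if belief in agents[listener]:
                # Agreement: both collapse to the shared belief (unless committed).
                if listener not in committed:
                    agents[listener] = {belief}
                if speaker not in committed:
                    agents[speaker] = {belief}
            elif listener not in committed:
                # Disagreement: the listener now entertains both beliefs.
                agents[listener].add(belief)

        return sum(a == {"A"} for a in agents) / n_agents

    if __name__ == "__main__":
        for f in (0.05, 0.10, 0.15):
            print(f, run_binary_agreement(committed_fraction=f))
    ```

    Sweeping committed_fraction up and down is how you would go hunting for the kind of tipping point the article is talking about.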

    Personally, I love agent-based models. I think agent modeling is a very, very powerful tool for systems insight, but I don’t like this article’s interpretation, nor am I convinced its author really groks what agent-based modeling is. It’s a very different kind of “study” than what most people mean when they use that word, and interpreting the insights is its own can of worms.

    Just a heads up, for those of you casually scrolling by.

    • dr_scientist@lemmy.world · 15 points · 1 year ago

      Upvoted for correct use of the word ‘grok’, but I definitely want to learn more about agent-based modelling, if for no other reason than that truth inoculation is one of the more vital battles of our time.

      • theluddite@lemmy.ml · 4 points · 1 year ago

        The basic gist is that you define “agents” as individual actors following a very specific set of rules, usually quite stripped down and simple, and then simulate lots of agents acting together to see what the emergent properties are.

        Generally, your agents have rules, you set up the initial conditions, and then you have a “time step,” or a “turn,” in which the agents interact.

        For example, you might simulate a social media platform by saying that each person (the agent) has two rules:

        • Each person will make one tweet, composed of a gpt2-generated sentence
        • Each person will like/boost any other person’s tweet they see whose text most resembles one of their own, using something like a string-distance similarity cutoff

        Each time step might look like this (a rough code sketch follows the list):

        • Each user makes their random tweet
        • Each user is presented with N random tweets from the whole pool of tweets, weighted by how boosted they are (a twice-boosted tweet appears in the bag three times, so to speak, whereas a non-boosted tweet is there just once)
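        Here is roughly what those rules might look like in code. Random word salad stands in for the gpt2 sentences, difflib’s SequenceMatcher stands in for the string-similarity measure, and every number (feed size, cutoff, vocabulary) is invented purely for illustration.

        ```python
        import difflib
        import random

        VOCAB = ["cats", "dogs", "rain", "code", "tea", "moon", "trains", "bread"]
        N_USERS, N_FEED, N_STEPS, SIM_CUTOFF = 100, 5, 20, 0.5
        rng = random.Random(42)

        def random_tweet():
            return " ".join(rng.choices(VOCAB, k=4))   # stand-in for a gpt2 sentence

        def similarity(a, b):
            return difflib.SequenceMatcher(None, a, b).ratio()

        tweets = []                                   # {"author", "text", "boosts"}
        own_tweets = {u: [] for u in range(N_USERS)}  # each user's own past tweets

        for step in range(N_STEPS):
            # Rule 1: every user posts one tweet per turn.
            for user in range(N_USERS):
                text = random_tweet()
                tweets.append({"author": user, "text": text, "boosts": 0})
                own_tweets[user].append(text)

            # Rule 2: every user sees N_FEED tweets drawn from the pool, weighted
            # by boosts (a twice-boosted tweet is in the bag three times), and
            # boosts any tweet similar enough to one of their own.
            weights = [1 + t["boosts"] for t in tweets]
            for user in range(N_USERS):
                for t in rng.choices(tweets, weights=weights, k=N_FEED):
                    if t["author"] == user:
                        continue
                    best = max(similarity(t["text"], own) for own in own_tweets[user])
                    if best >= SIM_CUTOFF:
                        t["boosts"] += 1

        top = max(tweets, key=lambda t: t["boosts"])
        print("most boosted:", repr(top["text"]), "with", top["boosts"], "boosts")
        ```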

        At the end, you could see how, even under these conditions, some tweets go viral. And this is what I mean when I say interpreting the results of agent-based modeling is tricky: you sort of purposefully craft your agents to get the result you want.

        This can be a bit confusing, because it’s a bit backwards from how the hypothesis-experiment-conclusion thing normally works, but agent modeling is more an interpretive act than a descriptive one. The point is to see if you can recreate an emergent, complex behavior with simple rules. The model I made up just now, which I haven’t actually run and might not work at all (though I’m tempted now…), wouldn’t explain tweets going viral, but it might give some insight into how going viral is baked into the very structure of twitter, independent of the content of the tweet.

        One of the classic uses of these sorts of models is what these authors did: you look for tipping points. Roughly speaking, a tipping point is when a change in a system produces qualitatively different behavior. So, in our example, we might notice that once a tweet has a certain number of retweets, it gets retweeted forever, or something like that. We might change the initial conditions or rules of the game (how many tweets per turn, how many tweets each user sees, etc.) and glean some insight into how those affect that condition.
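        To make that sweep concrete, here is a toy-of-the-toy: drop the text similarity entirely (every tweet a user sees gets a boost) and just vary how many tweets each user sees between weight refreshes, watching how concentrated the boosts become. This only illustrates the mechanics of a parameter sweep; I’m not claiming this particular knob has a sharp tipping point.

        ```python
        import random

        def top_tweet_share(n_feed, n_tweets=200, n_total_boosts=5000, seed=1):
            """Fraction of all boosts captured by the single most-boosted tweet."""
            rng = random.Random(seed)
            boosts = [0] * n_tweets
            handed_out = 0
            while handed_out < n_total_boosts:
                weights = [1 + b for b in boosts]   # boosted tweets get shown more
                for t in rng.choices(range(n_tweets), weights=weights, k=n_feed):
                    boosts[t] += 1
                handed_out += n_feed
            return max(boosts) / sum(boosts)

        for n_feed in (1, 2, 5, 10, 20):
            print("feed size", n_feed, "-> top tweet share",
                  round(top_tweet_share(n_feed), 3))
        ```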

        People love to do agent-based modeling for markets, as you might imagine. I think markets are silly, so I also think many of these models can be quite silly, especially the ones intended to predict things to make money, which tend to use very, very complex agents (imo a strange application of the idea of agent modeling). But some of them are very good. Again, I personally think markets are dumb, so my bias is going to show, but people do really good agent-based models using coin flips or energy exchange in ideal gases to show that inequality is baked into markets.
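        The coin-flip/ideal-gas version is easy to sketch. In this generic kinetic-exchange toy (not any particular published model), everyone starts with identical wealth, random pairs repeatedly pool what they have and split it at a random fraction, and the Gini coefficient still climbs well above zero:

        ```python
        import random

        def gini(wealth):
            """Gini coefficient: 0 = perfect equality, 1 = one agent has everything."""
            w = sorted(wealth)
            n = len(w)
            return 2 * sum((i + 1) * x for i, x in enumerate(w)) / (n * sum(w)) - (n + 1) / n

        def random_exchange(n_agents=1000, n_trades=200_000, seed=7):
            rng = random.Random(seed)
            wealth = [1.0] * n_agents                 # perfectly equal start, Gini = 0
            for _ in range(n_trades):
                i, j = rng.sample(range(n_agents), 2)
                pot = wealth[i] + wealth[j]
                split = rng.random()                  # the "coin flip": a random split
                wealth[i], wealth[j] = split * pot, (1 - split) * pot
            return wealth

        print("Gini after random, symmetric exchanges:", round(gini(random_exchange()), 3))
        ```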

    • Cabrio@lemmy.world · 1 point · 1 year ago

      To summarise, agent-based analysis is closer to comparative analysis than to objective analysis, and is best used for analysing systems with too many variables for comprehensive objective analysis.