The creator of an open source project that scraped the internet to track the ever-changing popularity of words across human languages says they are sunsetting the project because generative AI spam has poisoned the internet to the point that the project no longer has any utility.

Wordfreq is a program that tracked how people used more than 40 different languages by analyzing millions of sources across Wikipedia, movie and TV subtitles, news articles, books, websites, Twitter, and Reddit. The system could be used to study how language evolved as slang and popular culture shifted, and was a resource for academics who research such things. In a note on the project’s GitHub, creator Robyn Speer wrote that the project “will not be updated anymore.”
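For readers unfamiliar with the tool: wordfreq ships as a Python library with a small query API. The sketch below shows the kind of lookup researchers used it for, assuming the package is installed from PyPI; the function names follow the project’s documentation, and the printed values are illustrative rather than actual outputs.

```python
# Minimal sketch of querying wordfreq (assumes `pip install wordfreq`).
from wordfreq import word_frequency, zipf_frequency, top_n_list

# Estimated frequency of a word as a fraction of all words in that language
print(word_frequency("sunset", "en"))

# The same estimate on the human-friendly Zipf scale (common words land around 6-7)
print(zipf_frequency("the", "en"))

# The most common words in a language, drawn from the now-frozen word lists
print(top_n_list("es", 10))
```

Because the data will no longer be updated, these numbers now reflect a snapshot of the pre-AI-spam web rather than current usage.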

  • grue@lemmy.world · 2 months ago

    The project creator doesn’t mince words:

    wordfreq was built by collecting a whole lot of text in a lot of languages. That used to be a pretty reasonable thing to do, and not the kind of thing someone would be likely to object to. Now, the text-slurping tools are mostly used for training generative AI, and people are quite rightly on the defensive. If someone is collecting all the text from your books, articles, Web site, or public posts, it’s very likely because they are creating a plagiarism machine that will claim your words as its own.

    So I don’t want to work on anything that could be confused with generative AI, or that could benefit generative AI.

    OpenAI and Google can collect their own damn data. I hope they have to pay a very high price for it, and I hope they’re constantly cursing the mess that they made themselves.

      • kn33@lemmy.world · 2 months ago

        Yeah, it seems really restrained for someone who has to end a project they’ve put so much effort into.

    • Randomgal@lemmy.ca · 2 months ago

      NGL sounds like a butthurt dude. Emotional arguments without logic.

      • Croquette@sh.itjust.works · 2 months ago

        I’d be fucking butthurt as well if my pet project was being destroyed by mega corpos for a shitty generative thief AI.

      • SirQuackTheDuck@lemmy.world · 2 months ago · edited

        Imagine being an author whose sole income is writing books.

        Here comes an AI that ~~stole~~ indexed your work and is asked by a customer of OpenAI to summarise your books. It does so perfectly and the issuer is able to use your results freely, since they think it’s AI generated and doesn’t require attribution.

        You receive nothing in return.

        Good luck making a living.

        Edit: stole to indexed, added edit note

        • Gorillazrule@lemmy.dbzer0.com · 2 months ago

          This is such a nothing argument. If all you’re talking about is a summary of a book, people have been able to get that long before AI. I can go to the Wikipedia entry of any book right now and look at a plot summary. The author does not get paid for me looking at the summary on Wikipedia. There are numerous other sites where you can find summaries of books. And if you’re asking an AI for a summary of a specific book by a specific author, what attribution would you like to see? The user already knows the source because they’re specifically asking for a summary of that source.

          A bigger concern would be the AI reproducing your works and using them in responses.