• Kwakigra@beehaw.org
    1 year ago

    When I saw how coherent and consistent conservative-style posting was on the GPT-2 model, it made me wonder how many online conversations I’d had over the preceding years were actually with an algorithm spitting out platitudes and clichés exactly as a real-life conservative would.

  • CanadaPlus@lemmy.sdf.org
    1 year ago

    Ah yes, the old AI alignment vs. AI ethics slapfight.

    How about we agree that both are concerning?

    • lemmyng@beehaw.org
      1 year ago

      Both are concerning, but as a former academic, to me neither of them is as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct data sets for research, and the assumption is that the data is largely human-generated. That balance is about to shift, and it’s going to cause significant damage to future research. Even if everyone agreed to make a change right now, the well is already poisoned. We’re talking the equivalent of the burning of the Library of Alexandria for linguistics research.