• ampcold@beehaw.org
    1 year ago

    I don’t understand why any journalism site would advertise that it is using AI. It just says they don’t care about facts, research, or quality of writing. Journalism is not simply spewing out a handful of paragraphs about a random subject. It is research that can take weeks or months: double-checking facts, verifying sources, and putting it all together into a well-written article. AI-generated text has none of that. Quite the opposite.

    • Thrashy@beehaw.org
      1 year ago

      Because a significant chunk of what gets passed off as journalism on such sites is just writing copy – for example, regurgitating press releases, or repackaging the work of another outlet that actually did do the legwork of investigative journalism. I don’t think there’s anything inherently wrong with using AI tools to speed up the task of summarizing some other text for republishing, but I do question the value of such work in the first place.

      It’s going to be a long, long time until artificial intelligence can do the work of a true investigative journalist.

        • eri@sopuli.xyz
          1 year ago

          The Wikipedia article on news agencies is pretty good: “Although there are many news agencies around the world, three global news agencies, Agence France-Presse (AFP), the Associated Press (AP), and Reuters have offices in most countries of the world, cover all areas of media, and provide the majority of international news printed by the world’s newspapers.” Scroll down and you’ll also find a list of some smaller news agencies, which tend to focus on local news.

        • Thrashy@beehaw.org
          1 year ago

          I don’t know if there are many major outlets that are primarily investigative in the era of the 24/7 news cycle and the accompanying need to always have something fresh on the front page, but at least in the English-speaking world the various newspapers of record (think places like the New York Times or The Guardian) still have decent newsrooms and publish original investigative pieces. In audio formats, NPR and the constellation of associated organizations like the Center for Investigative Reporting do excellent work as well. There are also organizations like Bellingcat that specialize in deep-dive investigations using open-source intelligence, presented in a “just-the-facts” format without editorialization.

    • Stepos Venzny@beehaw.org
      1 year ago

      Because you have to have specific knowledge about how AI works to know this is a bad idea. If you don’t have specific knowledge about it, it just sounds futuristic because AI is like a Star Trek thing.

      This current AI craze is largely as big a deal as it is because so few people, including the people using it, have any idea what it is. A cousin of mine works for a guy who asked an AI about a problem, and it cited an article on how to fix it. The boss asks my cousin to implement the solution proposed in that article. My cousin searches for it and discovers the article doesn’t actually exist, so he says so. After many rounds of back and forth, of the boss saying “this is the name of the article, this is who wrote it” and my cousin saying “that isn’t a real thing, and while that author did write about some related topics, there’s no actionable information there,” the boss becomes convinced that this is a John Henry situation where my cousin is trying to make himself look more capable than the AI he feels threatened by. The argument ends with a shrug and an “Okay, if it’s so important to you, we can do something else, even though this totally would have worked.”

      There really needs to be large-scale education on what language models actually do, to stop people from using them for the wrong purposes.