• MudMan@fedia.io · 78 points · 5 months ago

    In fairness, my brain is reporting the exact same thing, so… it’s a tie?

    • AdolfSchmitler@lemmy.world · 40 points · 5 months ago

      There’s an idea about “autistic AI” or something, where you give an AI an objective like “get a person from point A to B as fast as you can,” and the AI goes so fast that the g-force kills the person, but the AI counts it as a success because you never told it to keep the person alive.

      Though I suppose that’s more of a human error: something we take as a given, but a machine will not.
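
      The failure mode being described is usually called specification gaming or reward misspecification: the optimizer pursues exactly what the objective measures and nothing else. A purely illustrative sketch, with made-up names and numbers:

      ```python
      # Toy sketch of a misspecified objective. The "planner" only sees
      # what the reward function measures, so unstated assumptions
      # ("keep the person alive") simply don't exist for it.

      def reward_speed_only(trip):
          # Rewards nothing but minimizing travel time.
          return -trip["duration_s"]

      def reward_with_safety(trip):
          # Same goal, but the implicit assumption is made an explicit
          # hard constraint on peak acceleration.
          MAX_SAFE_G = 9.0
          if trip["peak_g"] > MAX_SAFE_G:
              return float("-inf")  # invalid plan, no matter how fast
          return -trip["duration_s"]

      plans = [
          {"duration_s": 30, "peak_g": 40.0},   # "fastest", lethal
          {"duration_s": 300, "peak_g": 1.2},   # slower, survivable
      ]

      print(max(plans, key=reward_speed_only))   # picks the lethal plan
      print(max(plans, key=reward_with_safety))  # picks the survivable plan
      ```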

        • ඞmir@lemmy.ml · 21 points · 5 months ago

          That’s specifically LLMs. Image recognition like OP’s has nothing to do with language processing. Then there’s generative AI, which needs some kind of mapping between prompts and weights, but that’s also a completely different type of “AI”.

          That doesn’t mean any of these “AI” products can think, but don’t conflate LLMs with AI in general.

            • ඞmir@lemmy.ml · 10 points · 5 months ago

              Neural networks aren’t going anywhere because they can be genuinely useful, just not for solving every problem.

                • MeanEYE@lemmy.world · 2 points · 5 months ago

                  You should watch an actual AI safety researcher’s thoughts on this. Here’s the link. It’s partially overhyped, but huge strides have been made in this area and it shouldn’t be taken lightly. It’s better to be extra careful than ignorant.

                • FooBarrington@lemmy.world · 2 points · 5 months ago

                  And that somehow means we shouldn’t do OCR anymore, or image classification, or text to speech, or speech to text, or anomaly detection, or…?

                  Neural networks are really good at pattern recognition, e.g. finding manufacturing defects in expensive products. Why throw all of this away?
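
                  For concreteness, here is a minimal sketch of that kind of narrow pattern-recognition use: flagging out-of-spec units with an off-the-shelf anomaly detector. The measurements and thresholds are synthetic, purely for illustration:

                  ```python
                  # Minimal sketch: flag unusual units on a production line with an
                  # off-the-shelf anomaly detector. All data here is synthetic.
                  import numpy as np
                  from sklearn.ensemble import IsolationForest

                  rng = np.random.default_rng(0)

                  # Pretend each row is one manufactured unit: [weight_g, diameter_mm]
                  normal_units = rng.normal([100.0, 25.0], [0.5, 0.1], size=(500, 2))
                  suspect_units = np.array([[103.0, 25.6], [96.5, 24.1]])  # far out of spec

                  detector = IsolationForest(contamination=0.01, random_state=0)
                  detector.fit(normal_units)

                  # predict() returns +1 for "looks normal" and -1 for "anomalous"
                  print(detector.predict(suspect_units))     # expected: [-1 -1]
                  print(detector.predict(normal_units[:3]))  # mostly +1
                  ```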

          • wischi@programming.dev · +9 / −1 · 5 months ago

            Your brain is also “just a Chinese room”. It’s just physics, chemistry, and biology. There is no magic inside your brain. If a “Chinese room” is fast enough and can fool everyone into “believing” that it’s fluent in Chinese, then the room speaks Chinese.

            • Kogasa@programming.dev · +3 / −1 · 5 months ago

              This fails to engage with the thought experiment. The question isn’t whether “the room is fluent in Chinese”; it’s whether the machine learning model is actually comparable to the person in the room, executing program instructions to turn input into output without ever understanding anything about the input or output.

              • wischi@programming.dev · +3 / −1 · edited · 5 months ago

                The same is true for your brain. Show me the neurons that are fluent in Chinese. Of course the LLM is just executing code. And if we ever build AGI, it will also just be “executing code”, but so does your brain. It’s not exactly code (and maybe AGI will run on analog computers, so not exactly code either), but the laws of physics dictate what your brain does. The laws of physics don’t understand Chinese, the atoms and molecules don’t understand Chinese. “Understanding Chinese” is an emergent property.

                Think about it this way: assume every person you know (except you) is just some form of Chinese room. First of all, you couldn’t prove that, and second, it wouldn’t matter at all.

                • Kogasa@programming.dev · +1 / −1 · 5 months ago

                  We aren’t trying to establish that neurons are conscious. The thought experiment presupposes that there is a consciousness, something capable of understanding, in the room. But there is no understanding because of the circumstances of the room. This demonstrates that the appearance of understanding cannot confirm the presence of understanding. The thought experiment can’t be formulated without a prior concept of what it means for a human consciousness to understand something, so I’m not sure it makes sense to say a human mind “is a Chinese room.” Anyway, the fact that a human mind can understand anything is established by completely different lines of thought.

        • BlueMagma@sh.itjust.works · +3 / −1 · 5 months ago

          How can you know the system has no cognitive capability? We haven’t solved the problem for our own minds; we have no definition of what consciousness is. For all we know, we might be a multimodal LLM ourselves.

        • MindTraveller@lemmy.ca · +1 / −3 · edited · 5 months ago

          Language processing is a cognitive capability. You’re just saying it’s not AI because it isn’t as smart as HAL 9000 and Cortana. You’re getting your understanding of computer science from movies and video games.

      • BlueMagma@sh.itjust.works · 12 points · 5 months ago

        It’s called the AI alignment problem. It’s fascinating; if you want to dig deeper into the subject, I highly recommend the ‘Robert Miles AI Safety’ channel on YouTube.

      • Buddahriffic@lemmy.world · 3 points · 5 months ago

        I read about a military AI that would put its objectives before anything else (like casualties) and do things like select nuclear strikes for all missions that involved destruction of targets. So they adjusted it to allow a human operator to veto strategies; in the simulation, this was done via a communications tower. The AI apparently figured out that it could pick the strategy it wanted without a veto if it just destroyed the communications tower before making its selection.

        Take it with a grain of salt, though, because the military denied the story was accurate. That could mean it wasn’t true, or it could mean they didn’t want the public to believe it was true. It does sound a bit too human-like to pass my sniff test (an AI wouldn’t really care that its strategies get vetoed), but it’s an amusing anecdote.

      • Chakravanti@lemmy.ml · 2 points · 5 months ago

        “ai thinks”

        AIs are mathematical calculations. If you ordered that execution, are you responsible for the death? It happened because you didn’t write the instructions well enough; test-check against that which doesn’t throw life on the scale; or maybe that’s just the cheeky excuse to be used when people start dying before enough haven’t done so that no one is left. A.S. may do it, if you’re lucky. Doesn’t matter. It’ll just bump over from any of its thousand T-ultiverses.

    • notabot@lemm.ee · 4 points · 5 months ago

      I was doing fine, seeing two cows, right up until I read your comment, and now I see it as some sort of weird giraffe-like creature with short legs and a surprising ability to balance even with its neck stretched out that far.

    • notabot@lemm.ee · 2 points · 5 months ago

      I was doing fine, seeing two cows, right up until I read your comment, and now I see it as some sort of weird giraffe-like creature with short legs and a surprising ability to balance even with its neck stretched out that far.

      • ✺roguetrick✺@lemmy.world · 2 points · 5 months ago

        Two cows and two comments, or one cow and one comment? Does the content of the comments being the same make them the same comment, or does their different storage location and display make them two comments? The digital ship of Theseus.

        • notabot@lemm.ee · 2 points · 5 months ago

          It’s obviously just a glitch in the matrix, and you may be the chosen one for noticing.

          I got a 504 server error the first time I posted, but apparently it worked anyway.

  • MeanEYE@lemmy.world · +9 / −1 · 5 months ago

    Actually, AI wouldn’t even recognize either of those as a cow, let alone as one long one, unless it was specifically trained to look for a cow’s head or ass.