• AWildMimicAppears@lemmy.dbzer0.com
    4 months ago

    I agree on the “part of AGI” thing - but it might be quite important. The sense of self is pretty interwoven with speech, and an LLM could give an AGI an “inner monologue” - or maybe something like a “default mode network”?

    If I think about how much stupid, inane stuff my inner voice produces at times… even a hallucinating or glitching LLM sounds more sophisticated than that.

    • cynar@lemmy.world
      4 months ago

      Interestingly, an inner monologue isn’t required for conscious thought. E.g. I’ve got several “inner thought streams”, and only one of them uses language. It just happens that a lot of our early learning is language-based, which trains our brain to go from language to knowledge. Hijacking that circuit for self-learning is a useful method, and that could create our inner monologue as a side effect.

      Also, a looping LLM is more akin to an epileptic fit than to an inane inner monologue. It effectively talks gibberish at itself.

      Conversely, Google’s DeepDream does produce dream-like images, and it seems to do so in a similar way (we think) to how human dreams work. Stable Diffusion takes this to its (current) limit.

      Basically, an AI won’t need an inner monologue in order to think. And any inner monologue it did have would be the product of interactions between subsystems and the LLM, not something generated purely within the LLM itself.