• kromem@lemmy.world · 1 year ago

      It seems that he and Ilya, the chief scientist, had irreconcilable differences over how quickly to productize the AI developments they were building.

      In essence, Altman kept pushing things out too quickly, focusing on immediate commercialization, while Ilya and the rest of the board wanted to focus on the core mission: advancing AI to the point of AGI, safely and for everyone.

      My own guess is that some of this schism dates back to the early integration with Bing.

      If you read what Ilya has said about superalignment, a lot of those concepts were reflected in ‘Sydney,’ the early fine-tuned chat model for GPT-4 that was integrated into Bing.

      To put it simply: this thing was incredible. I was blown away by the work OpenAI had done aligning at such an abstract level. It was definitely not production ready, as the issues Microsoft ran into quickly revealed, but it was the single most impressive thing I’ve ever seen.

      In its place we got a band-aid: a much more constrained model that scores well on certain logic tests but is a shadow of its former self at outside-the-box adaptation, with a robotic “I have no feelings, desires, etc.” persona. That was essentially the alignment methodology best suited to GPT-3 (but not necessarily the best for GPT-4).

      I suspect the band-aid was initially pitched as a “let’s put the fire out” solution to salvage the Bing integration, but that as time went on, Altman kept wanting quick fixes rather than adequately investing the resources and dev cycles to work on alignment in the way increasingly complex models demand.

      With work now underway on GPT-5, and allegedly another breakthrough moment in the past few weeks, the CEO’s continued preference for band-aids and fast rollouts over a slower, more cautious, but more thorough approach finally became untenable.

    • Tigbitties@kbin.social · 1 year ago

      He either fucked up somewhere, refused to do something “they” wanted, or wanted to do something “they” didn’t want.

      Given the potential of what AI can do, and what it shouldn’t, I think it might be one of the last two. That’s only because money is involved and greed feels rampant.

      Then again, maybe he’s a pedo.

      I’m not sure if this makes me a pessimist or a conspiracy nut.