The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”: