The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”:
Yes, poorly generated boilerplate data. Amazing.
Said by someone who’s never written a line of code.
Is autocorrect always right? No, but we all still use it.
And I never said “poorly generated”; I specifically said “good and clean”. And that was in the context of writing larger segments of code on its own. I did clarify afterwards that it’s good for writing things like boilerplate code. So no, I never said “poorly generated boilerplate”. You were just putting words in my mouth.
Boilerplate code that’s workable can get you much further along on a task than writing it all yourself. The beauty of boilerplate is that there aren’t really a whole lot of different ways to do it. Sure, there are fancier ways, but generally anything other than easy-to-read code is frowned upon. Fortunately, LLMs are actually great at the boilerplate stuff.
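To make that concrete, here’s a minimal, hypothetical illustration of what “boilerplate” means here: the repetitive, low-variance code (a constructor plus `__repr__` and `__eq__` for a simple record class) where there’s essentially one sensible way to write it. The `Point` class is my own example, not anything from the thread.

```python
# Hypothetical example of boilerplate: repetitive methods with an
# obvious, standard shape -- the kind of code completion tools like
# Copilot tend to autocomplete reliably.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # Conventional debug representation.
        return f"Point(x={self.x}, y={self.y})"

    def __eq__(self, other):
        # Standard value-equality pattern.
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

print(Point(1, 2) == Point(1, 2))  # True
print(repr(Point(1, 2)))           # Point(x=1, y=2)
```

None of these methods involve a design decision; they just follow convention, which is exactly why they’re easy for an assistant to fill in and easy for a reviewer to check.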
Just about every programmer who’s tried GitHub Copilot agrees that it’s not taking over programming jobs anytime soon, but it does a fine job as a coding assistant tool.
I know of at least three separate coding/tech-related podcasts with multiple hosts that have come to the same conclusion in the past six months or so.
If you’re interested, the ones I’m thinking of are Coder Radio, Linux After Dark, Linux Downtime, and 2.5 Admins.
Your reply also demonstrates the ridiculous mindset people have about this stuff. There’s this mentality that if it’s not literally a self-aware AI, then it’s spam and worthless. Yeah, it does a fairly basic and mundane thing in the real world. But that mundane thing has measurable utility that makes certain workloads easier or more efficient.
Sorry it didn’t blow your mind.
Your first line shows you are a clown and nothing you say is worth reading. 😂
Are you sure you’ve been working in AI for 10 years? Obviously not.
Ditto