Elon’s Death
For a brief moment, I was so excited… Rip deer.
Oh my… I thought the big companies had a mutual understanding that throwing shit at each other is a huge no-no. There’s enough shit to drown them all in it time and again, so this should be hilarious to watch unfold.
I didn’t say it would be easy, but anything TSMC currently produces would likely find a new buyer even with no US customers, so in the short run the loser would not be TSMC. In the long run, it’s pointless to speculate, since the US would probably try to level Taiwan rather than let China have the semiconductor sector to itself… Let’s hope it doesn’t come to that, though.
At least they are consistent, so I guess that’s something…
Not really. China would just buy it all if given the chance and the US companies would be fucked, since TSMC is practically a monopoly within its field at the moment.
This reads like a desperate attempt at proving to investors that their “AI” is useful for handling factual information, with a hint of plain old corporate bribery to avoid further lawsuits from publishers. Grifters gonna grift, and all that.
And do some more stock buybacks and raise dividends, of course.
Careful now, you’ll open the door for fElon Musk 2028. We don’t want that, do we?
Ok, great to know. Nuance doesn’t cross the internet well, so your intention wasn’t clear, given all the uninformed hype & grifters around AI. Being somewhat blunt helps get the intended point across better. ;)
You can play with words all you like, but that’s not going to change the fact that LLMs fail at reasoning. See this Wired article, for example.
I have to disagree with that. To quote the comment I replied to:
AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued
Where’s the “turn of phrase” in this, lol? It could hardly read any more clearly that they assume this “AI” can “figure” stuff out, which is simply false for LLMs. I’m not trying to attack anyone here, but spreading misinformation is not ok.
Or, hear me out, there was NO figuring of any kind, just some magic LLM autocomplete bullshit. How hard is this to understand?
Analysts: “Is this ‘car’ in the room with us right now?”
Musk: “I have concepts of a car.”
I’d put my money on that account being hacked/sold and gaining a new life as a bot in some disinformation network, ready to spew bias and bullshit when the time and topic are right. There’s no other way to explain the comment history before 9 months ago, then a long silence, and then a restart just a few weeks ago with a complete change in character.
Idealistically and realistically: Totally and absolutely cool. If anything, they have a moral imperative to keep the project going, since there are users that depend on it, and doing that requires money. As such, people will need to be informed of how to contribute, so a pop-up doing just that is a good way to achieve this. Why would this not be ok, even idealistically?
Perhaps LLMs can be used to gain some working vocabulary in a subject you aren’t familiar with. I’d say anything more than that is a gamble, since there’s no guarantee that hallucinations have not taken place. Remember that to spot incorrect info, you need to already be well acquainted with the matter at hand, which is the polar opposite of just starting to learn the basics.
You do realize that a very thorough manual is but a `man bash` away? Perhaps it’s not the most accessible source available, but it makes up for that in completeness.
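For anyone put off by its sheer length: the default pager (`less`) makes the manual searchable, so you rarely have to scroll it linearly. A quick sketch (the section header and variable name below are real entries in the bash man page; the `--pager` long option is from man-db, so check `man man` on other implementations):

```shell
# Open the bash manual positioned at a specific section
# (less -p jumps to the first match of the pattern).
man --pager='less -p "^SHELL GRAMMAR"' bash

# Or pull out what you need non-interactively, e.g. the
# lines documenting the HISTCONTROL variable:
man bash | grep -n -A 4 'HISTCONTROL'
```

Inside `less`, `/pattern` searches forward and `n` repeats the search, which covers most lookups.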
This is one of the reasons I’ve disabled UEFI variable access by default with the `noefi` kernel parameter, the other reason being the LogoFAIL exploit: https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface#Disable_UEFI_variable_access
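For reference, a sketch of how that parameter ends up on the kernel command line when booting with GRUB (the existing parameters shown are placeholders, and the paths are the usual defaults; adjust for your distro and bootloader):

```shell
# /etc/default/grub — append noefi to the default kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet noefi"

# Then regenerate the GRUB config so the change takes effect:
#   grub-mkconfig -o /boot/grub/grub.cfg

# After a reboot, confirm the parameter was applied:
#   grep -o noefi /proc/cmdline
```

Note that `noefi` disables the kernel’s EFI runtime services as a whole, so tools that write EFI variables (e.g. `efibootmgr`) will stop working until you boot without it.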
Hot take: let this brain rot format die already. If you cannot hold your attention for longer than a few minutes at a time, you might want to work on that instead of embracing it.