- cross-posted to:
- autism@lemmy.world
Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
[…] in blog posts and videos and published memoirs, autistic teens and young adults described living for a decade or more without any way to communicate, while people around them assumed they were intellectually deficient.
On a related note… only 5% of hearing parents with a deaf child will learn sign language.
That’s awful. I don’t know why sign language isn’t made into an official state language that everyone has to learn to some basic level of proficiency.
Amen! And it would benefit literally everybody. You can communicate across a room or in loud environments. It’s so useful!
You could talk about blind people without them knowing
i bet these bastards would somehow learn to interpret the changes in air pressure you’d create when signing… that’s how you create a supervillain.
But then other people could listen to what we’re saying!
This gets me wondering: in sign languages, are there different words for “hearing” (i.e. watching someone sign to you) vs. “seeing” (i.e. looking at something that isn’t signing)?
Because we only recently stopped telling parents not to teach it to hard of hearing children.
…That’s disgusting.
Also, those parents could be close relatives, so they could fuck up their offspring’s genome by transferring recessive genes to them from both sides, causing a recessive phenotype to be expressed, which most of the time is a disability. Shouldn’t expect much from such parents.
Weird non-sequitur there
“Something trained only on form” — as all LLMs are, by definition — “is only going to get form; it’s not going to get meaning. It’s not going to get to understanding.”
I had lengthy and intricate conversations with ChatGPT about philosophy and religious concepts. It allowed me to playfully peek into Spinoza’s worldview, with a few errors.
I have no problem accepting that it is form, but I cannot deny it conveys meaning as if it understands.
The article is very opinionated and dismissive in that regard. It even goes so far as to predict what future research and engineering cannot achieve, which makes it untrustworthy.
We cannot even pin down what we mean by intelligence and meaning. And despite being far too long, the article never mentions emergent capabilities or quotes any of the many contrary scientific views.
Apart from the unnecessarily long anecdotes about autistic and disabled people, did anybody learn anything from this article? I feel it’s an uncritical parroting of what people like to think anyway, to feel superior and secure.
LLMs are definitely not intelligent. If you understand how they work, you’ll realise why that is. LLMs reflect the intelligence in the work which they are trained on. No more, no less.
That’s especially fun when you ask the same question in two different languages and get different results or even just gibberish in the other, usually non-English language. It clearly has more training data in English than it does for some other languages.
That very much depends on what you define as “intelligent”. We lack a clear definition.
I agree: These early generations of specific AIs are clearly not on the same level as human intelligence.
And still, we can already have more intelligent conversations with them than with most humans.
It’s not a fair comparison though. It’s as if we’d compare the language region of a toddler with a complete brain of an adult. Let’s see what the next few years bring.
I’m not making that point, just mentioning it can be made on an academic level: there’s a paper about the surprising emergent capabilities of GPT-4, titled “Sparks of AGI”.
That might seem plausible until you read deeply into the latest cognitive science. Nowadays, the growing consensus is around the “predictive coding” theory of cognition, and the idea is that human cognition also works by minimizing prediction error. We have models in our brains that reflect the input we’ve been trained on. I think anyone who understands both human cognition and LLMs cannot confidently say that LLMs are or are not intelligent yet.
I’ve read a few texts from the same source and they read as quite childish.
It felt like reading essays from very young children: there is some degree of coherence, some information is there but it lacks actual advancement on the subject.
The ability to speak does not make you intelligent.
The ability to speak seems like an obvious sign of some kind of intelligence and complexity, but I don’t remember anyone ever arguing that the inability to speak means a lack of intelligence. We know plenty of intelligent species that lack the ability to speak a language as complex as human language, but we don’t consider them unintelligent because of that.
“LLMs are not intelligent at all”
Sucks to lose your job to a potentially more competent AI that lacks any intelligence, then.
Did you say “sucks to lose your job to a manufacturing robot that lacks any intelligence” to the countless people in manufacturing jobs left destitute by robotics starting in the 1960s?
I did. Change is hard, but it’s hard to argue that we would be better off without robotics. It does a lot of work for us and we can all have better stuff because of it. In a better world we’d be excited because it means we all have less work to do, but the upper class just keeps finding more stuff for us to do.
but I don’t remember anyone ever arguing that inability to speak means lack of intelligence.
You don’t learn heuristics by means of an argument.
Here is an alternative Piped link(s): https://piped.video/watch?v=TUq6rGdfJSo
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
This whole thread is absurd.
Chatgpt has a form of intelligence depending on your definition of intelligence. It may also be considered conscious in a very alien and undeveloped way. It is definitely not sentient.
Kind of like having the stochastic word generating part of a brain and nothing else.
You can still shape it into something capable of intelligent and directed activity.
People are really bad at accepting the level of nuance necessary for this topic.
It is useful and fantastic for what it already is. People are just really bad at understanding what it is.
A lot of people are deeply invested in the notion that human intelligence is unique and special and impossible to replicate. Either their personal sense of worth is bound up in that notion (see for example many of the artists who get very angry when people call AI generated images “art”) or it’s simply a threat to their jobs and economic wellbeing. The result is a powerful need to convince themselves that there’s a special something that’s missing from ChatGPT and its ilk that will “never” be replicated by machines.
It’s true that ChatGPT isn’t intelligent in the same way that human brains are intelligent. But it is intelligent, in ways that are useful. And “never” is a bad bet to make for the rest of those capabilities.
ChatGPT is not intelligent. Not in the sense where we use that word anywhere else, including the animal kingdom. The transformer is an extraordinarily clever and sophisticated algorithm, though.
As I said:
It’s true that ChatGPT isn’t intelligent in the same way that human brains are intelligent.
There isn’t just one kind of intelligence.
My sense in reading the article was not that the author thinks artificial general intelligence is impossible, but that we’re a lot farther away from it than recent events might lead you to believe. The whole article is about the human tendency to conflate language ability and intelligence, and the author is making the argument both that natural language does not imply understanding of meaning and that those financially invested in current “AI” benefit from the popular assumption that it does. The appearance or perception of intelligence increases the market value of AIs, even if what they’re doing is more analogous to the actions of a very sophisticated parrot.
Edit all of which is to say, I don’t think the article is asserting that true AI is impossible, just that there’s a lot more to it than smooth language usage. I don’t think she’d say never, but probably that there’s a lot more to figure out—a good deal more than some seem to think—before we get Skynet.
Based upon what, your feelings?
Based on its abilities. Based on studies that researchers have done on it. It has learned to do more than just regurgitate bits of its training material. It has learned from the patterns in it and can extrapolate new information.
Do you think this is not possible?
You are simply factually incorrect. You need to read more than just fanboy sources.
what? what part? what “fanboy sources”?
i mean, i’m a fanboy of things like Earl K. Miller’s recent presentation on thought as an emergent property.
or general belief in different neural functions in tandem allowing us to react to the environment in ‘intelligent’ ways
you can see at the end how certain neuronal events can be related to something like transformers.
at what point from amoeba to human do you consider “intelligence” to be a valid description of what is happening?
do you understand how obscure alien intelligences can be?
what are your non-fanboy “sources”?
Here is an alternative Piped link(s): https://piped.video/Ak5DdazBOow
https://piped.video/9qOaII_PzGY
Which about five minutes ago was precisely what the term ‘artificial intelligence’ meant, but since tech companies managed to dumb down and rebrand ‘AI’ to mean “anything utilizing a machine-learning algorithm”,
Oh look, the universal signal for “please ignore me, I am a simpleton”.
I had no idea that all these years (up until “five minutes ago”) I had been playing video games against human-level artificial intelligences 🙄
I’d respond to the rest of it, but there’s no point, she doesn’t have the first clue what she’s talking about. To quote Shakespeare: “it is a tale told by an idiot, full of sound and fury, signifying nothing.”
The term AI has been rebranded multiple times, but the latest usage is definitely a marketing term used to boost investor sentiment.
Investors are falling for it too. It’s funny because I think they were finally coming to realize the broken promises of untold wealth to be gained via tech’s early offerings, and starting to realize that their business model is pretty limited (surveillance capitalism with the end goal of ad tech… Which is why Google is willing to end the open web to stop adblockers), and then conveniently “GenAI” is elevated to prominence as what we call “AI” and everyone loses their minds again with fantasy instead of asking how this thing could possibly make any money.
Will I get the free version of this thing to reword “fuck you” in a passive aggressive, more corporate friendly way so I can send off an email to the company? Sure. Will I subscribe to a service that provides that capability? Fuck no.
I do have to say though that I think a lot of programmers appear to have a type of writer’s block that I just don’t understand, and perhaps the only money generating method for these things other than continuing to push ads may be their purchase as corporate tools. I think the integrations, though faddy and obvious, will be the first things to sail out the door when someone gets the bill, because the APIs often bill via token and these piles of garbage require quite a bit of token to be exchanged to come to any kind of satisfactory result.
EDIT: I wanted to add two things:
- I think it’s funny that you used the “sound and fury” quote in regards to an article written about ChatGPT because that’s largely what I think ChatGPT is…
- I find it hilarious that the first, obvious money making application of ChatGPT is to generate junky bullshit websites that undermine the ad-tech business models of the major tech companies…which is a thing we’re already seeing.
Yes, now that it has become a marketable product, the term “AI” feels like a buzzword due to overuse. But in actual fact it is still being used (by most vendors) consistent with how it has been used for like 40 years. ML, video game opponents, chess engines: all of these have been referred to as “AI” for at least that long. Anyone who thinks that calling GPT or Stable Diffusion “AI” started “five minutes ago” (or even that it is in any way novel) has to be someone whose only exposure to the concept of “AI” has been through Sci-Fi movies, and not the actual, real field of AI that has been developing for decades. It is, therefore, a very clear signal that the person knows fuck all about the subject and so cannot possibly form a valid opinion. It’s just a generic angry response.
And frankly, I think ChatGPT would do a better job of that than the author of the article. At least we wouldn’t be wasting actual, valuable, human brain resources making this drivel.
New technology comes out and all people seem interested in is bashing it instead of figuring it how to use it to make our lives easier.
Most of it comes from the way our society is structured to require everyone to have a job or starve pretty much, but if AI is making so many jobs obsolete shouldn’t we be trying to change that instead of pretending AI won’t keep advancing?
I view it by building up to the technology.
Is a book sentient? It is capable of providing recorded knowledge in the form of sequence of symbols on a specific subject at a level of proficiency far above the reader’s. But no, it’s static information that originated from a human.
Is a library sentient? It allows for systematic retrieval of knowledge on a vast amount of subjects far beyond what any human is capable of knowing. But no, it’s just a static categorization of documents curated by a human.
Is a search engine sentient? It allows for automatic retrieval of highly relevant knowledge based on a query from a human. But no, it’s just token based pattern matching to find similar documents.
So why is an LLM suddenly sentient? It’s able to produce highly relevant sequences of words based on recorded knowledge specifically tailored to the sequences of words around it, but it’s just a probability engine to find highly relevant token sequences that match the context around it.
The underlying mechanism simply has no concept of a world view or a mental model of the metaphysical world around it. It’s basically a magic book that allows you to retrieve information from any document ever written, in a way tailored to a document you wrote.
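What “probability engine over token sequences” means can be made concrete with a toy bigram model (a deliberately tiny stand-in for a real LLM, trained on a made-up sentence): it only records how often one token follows another, then continues a prompt with the most likely successor.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which token follows which in the
# training text, then continue a prompt with the most frequent successor.
def train_bigrams(text):
    tokens = text.split()
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def continue_text(counts, start, length=3):
    out = [start]
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])  # greedy: pick the likeliest
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat slept")
print(continue_text(model, "the"))  # prints "the cat sat on"
```

The model has no representation of cats, mats, or sitting; it reproduces regularities in the text it was trained on. Real LLMs condition on far longer contexts in far more dimensions, but the “no world model, only statistics over tokens” point is the same.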
Yes. LLMs generate texts. They don’t use language. Using a language requires an understanding of the subject one is going to express. LLMs don’t understand.
I guess you’re right, but find this a very interesting point nevertheless.
How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?
For the sake of the comparison, we should talk about the presumed intelligence of other people, not our (“my”) own.
In the case of current LLMs, we can tell. These LLMs are not black boxes to us. It is hard to follow the threads of their decisions because these decisions are just some hodgepodge of statistics and randomness, not because they are very intricate thoughts.
We can’t compare the outputs, probably, but we can compare the learning. Imagine a human who consumed all the literature, ethics, history, and every other kind of text the way these LLMs did; no amount of trick questions would trick him into believing in racial cleansing or any such disconcerting ideas. LLMs read so much, and learned so little.
This gets to the core of the issue. LLMs are a model of the statistical relationships between words in texts, in a very large number of dimensions. The intelligence they appear to exhibit is that which existed in their source material in the first place. They don’t have a model of the world itself. Consider how Midjourney can produce photorealistic images of people, yet very often gets hands wrong. How is that? It’s because when you train on images, you get a statistical representation of what hands look like without the world model that lets you know that hands only have five fingers and how they’re arranged. AIs like this are very clever copiers. They are not intelligent.
While that’s true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.
I do have to wonder if at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and – maybe more importantly – start layering specialized models on top of each other that handle specific tasks then hand the result back to another model, creating feedback loops. I’m imagining a neural network that is trained on something extremely abstract: figuring out, from the raw input data, what specialist model would be best suited to process that data, then based on the result, what model would be best suited to refine that data. Something we train to basically be an executive function with a bunch of sub-models available to it.
Could something like that become conscious without realizing it’s “communicating” with us? The program executing the LLM might reflexively process data without any concept that it’s text, but still be emergently complex enough when reflecting its own processes to the point of self awareness. It wouldn’t realize the data represents a link to other conscious beings.
As a metaphor, you could teach a very smart dog how to respond to certain, basic arithmetic problems. They would get stuff wrong the moment you prompted them to do something out of their training, and they wouldn’t understand they were doing math even when they got it “right”, but they would still be sentient, if not sapient, despite that.
It’s the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.
But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton, but has an inner life that has not recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it’s executing a program, the same way we aren’t consciously aware of the chemical reactions our brain is executing to make us think.
I don’t believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven’t started to be heavily layered and interconnected the way I think they’ll end up.
At the very least it makes for a fun Sci-fi premise.
I can mostly follow, just want to exclude the last paragraph which contains assumptions about a black box.
That being said, how is the human brain different from what you describe?
You think by processing the probabilistic association between word sequences? Humans think through world models; we have imagination, a physical and metaphysical simulation of the world around us. Absolutely none of that is involved in how LLMs work. There’s a lot to be said about the utility of knowledge embedded in symbol associations, and having a magic book that can retrieve pre-existing information in context is incredibly useful. I think it will have an impact on the level of the printing press and the internet. But just because it’s incredibly useful at retrieving knowledge doesn’t mean it works anything like a human brain.
Sorry, I could have been more clear. I did not mean to equate current LLMs with human brains. The question was rather:
Can’t we describe the working of (other) human brains in a very similar fashion as you did before? Or where exactly is the difference which sets us apart?
world models, we have imagination, a physical and metaphysical simulation of the world around us
AIs which can and need to interact with the physical world have those, too. Naturally, an AI which is restricted to language has much less necessity and opportunity to develop these, much like our brain area for smell is probably not so good at estimating velocities and catching a ball.
I think your approach of demystifying technology is valid and worthwhile. I’m just not sure if it does what you maybe think it does; highlight the difference to our intelligence.
We know the math and the mechanisms of how LLMs work. The only thing we don’t understand is the significance and capabilities of the probabilistic associations it prescribes to symbol sequences.
While we don’t know how a human brain works in detail, we do know how a human brain tackles problem solving because we’re sentient beings and we can be introspective about how we think through a problem.
We can look at how vectors flow through a neural network (remember, LLMs don’t even have a concept of words; they transform tokens into vectors and then build mathematical associations between them, it’s all numbers) and we can see through the data that there’s nothing resembling a world simulation in how it actually works.
Also keep in mind that the LLMs you interact with don’t even learn from your interactions. The data is all baked in at training time. If you turn the temperature of the LLM output generation to zero it will come up with the same probability answer every time. The more you learn about how they work under the hood, it becomes more and more clear that there is no there there when it comes to sentience.
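The temperature point can be illustrated with the standard softmax-with-temperature sampling formula (a generic sketch, not any specific model’s internals): at temperature zero the sampler always picks the highest-scoring token, so the output is identical on every run, while higher temperatures let lower-probability tokens through.

```python
import math
import random

# Sample a token index from raw scores (logits) using softmax with temperature.
def sample_token(logits, temperature, rng):
    if temperature == 0:
        # Zero temperature: always pick the argmax -- fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                           # inverse-CDF sampling
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
# At temperature 0 every call returns token 0; at temperature 2.0 the
# other tokens get sampled some of the time.
```

So the apparent “creativity” between runs is injected randomness at the sampling step, not the model changing its mind; the learned probabilities themselves are frozen at training time.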
I will say that I do think the capabilities and significance of symbol association and pattern matching have been wildly underestimated. Word sequences need to follow a pattern to make sense, and if you stumble upon the right sequence of words, that sequence could be incredibly impactful; it doesn’t really matter how you came up with it. If you were to pull words out of a hat at random, there’s an infinitesimally small possibility that you’d get a sequence of words that happens to expose the secrets of the universe. LLMs improve on that immensely in that they use probability to reduce the sequence space to the set of word sequences that make sense, and in that reduced space are generative sequences that may produce real value; and we can keep making that space more and more relevant and useful.
The real indicator is Language + Puns.
Language makes a poor heuristic for intelligence because language is multidimensional and cannot be easily quantified or qualified. Language is expressed in so many varied ways. It is even different between cultures and expressions of language even vary as well. Indeed the article you’re referring to is quite good.
I’m shocked, I thought it was sentient!
Indeed, whales can’t speak and are very smart 🧠 /s
Wait until you learn the language of deer!
Here are some correlations between language skills and other intelligence factors or evaluations (e.g. IQ) via a study (I recently integrated the info into the article: Neurogenetics – language GWAS).
However, I largely agree – see for example this argument / its sources