
BootinUp

(47,143 posts)
Thu May 25, 2023, 10:40 PM May 2023

An AI thought experiment by Emily Bender

Nonetheless, when we see a language model producing seemingly coherent output and we think about its training data, if those data come from a language we speak, it’s difficult to keep in focus the fact that the computer is only manipulating the form — and the form doesn’t “carry” the meaning, except to someone who knows the linguistic system.

To try to bring the difference between form and meaning into focus, I like to lead people through a thought experiment. Think of a language that you do not speak which is furthermore written in a non-ideographic writing system that you don’t read. For many (but by no means all) people reading this post, Thai might fit that description, so I’ll use Thai in this example.

Imagine you are in the National Library of Thailand (Thai wikipedia page). You have access to all the books in that library, except any that have illustrations or any writing not in Thai. You have unlimited time, and your physical needs are catered to, but no people to interact with. Could you learn to understand written Thai? If so, how would you achieve that? (Please ponder for a moment, before reading on.)


https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83
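Bender's point about manipulating form alone can be sketched with a toy program (my illustration, not hers): a character-level bigram model that reproduces the statistical shape of its training text while having no access to meaning whatsoever.

```python
import random
from collections import defaultdict

# Toy illustration: a character-level bigram model "learns" only the
# form of its training text -- which characters tend to follow which --
# and never anything about what the text means.
def train_bigrams(text):
    """Record, for each character, the characters that follow it."""
    followers = defaultdict(list)
    for a, b in zip(text, text[1:]):
        followers[a].append(b)
    return followers

def generate(followers, start, length, seed=0):
    """Emit text by repeatedly sampling a plausible next character."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = followers.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return "".join(out)

corpus = "the cat sat on the mat and the dog sat on the log "
model = train_bigrams(corpus)
print(generate(model, "t", 40))
```

The output looks locally like the training text (every adjacent character pair it emits occurs somewhere in the corpus), yet the program has nothing that could be called understanding; any meaning a reader finds in it is supplied by the reader.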
17 replies
An AI thought experiment by Emily Bender (Original Post) BootinUp May 2023 OP
The connection I can make from this thought experiment is 1WorldHope May 2023 #1
I can't really imagine it too well. BootinUp May 2023 #2
There is a more recent and more honest documentary, I saw it on someone's Apple TV app. . 1WorldHope May 2023 #3
This thought experiment is pretty much self-serving, and its conclusion, although correct, is moot. Beastly Boy May 2023 #4
It's fairly clear that people misunderstand BootinUp May 2023 #5
Turns out Emily's experiment is not a novel idea. See John Searle's Chinese room BootinUp May 2023 #7
I think her point is we are the ones making sense of the botted cachukis May 2023 #6
Interesting. Ty BootinUp May 2023 #10
I'm not sure the thought experiment here accurately reproduces the situation of a computer Martin68 May 2023 #8
The point of the experiment is to show BootinUp May 2023 #9
If you can't tell the difference then it's like not knowing if Schrodinger's box contains Martin68 May 2023 #11
That would be like the Turing Test BootinUp May 2023 #12
When I posted my previous response, I took for granted what appeared self-evident to me. Beastly Boy May 2023 #13
I see that mine is the minority opinion. I am aware that my test is basically the same as Martin68 May 2023 #14
In my view, the Thai Library experiment demonstrates the difference between Beastly Boy May 2023 #15
Sometimes my replies are very short BootinUp May 2023 #16
If the reliability issues persist then it will be clear that AI is not intelligent because those Martin68 May 2023 #17

1WorldHope

(684 posts)
1. The connection I can make from this thought experiment is
Thu May 25, 2023, 11:19 PM
May 2023

... that I think of a deaf person trying to learn English without pictures or signs. I was so lucky to have fallen into my career as an ASL interpreter. I spent many hours thinking about how little sense written language makes if you can't hear it spoken and have no visual context. Now, imagine being deaf and blind and learning language.

BootinUp

(47,143 posts)
2. I can't really imagine it too well.
Thu May 25, 2023, 11:27 PM
May 2023

The best I can do is remember the old black-and-white Helen Keller movie, The Miracle Worker.

1WorldHope

(684 posts)
3. There is a more recent and more honest documentary; I saw it on someone's Apple TV app.
Fri May 26, 2023, 12:19 AM
May 2023

Watching her journey really gives you a perspective you don't consider every day.

Beastly Boy

(9,323 posts)
4. This thought experiment is pretty much self-serving, and its conclusion, although correct, is moot.
Fri May 26, 2023, 02:11 AM
May 2023

It's a hypothetical curiosity which pretty much proves the obvious: the term "artificial intelligence" in the context of large language models is a misnomer.

The thought experiment itself is contrived: it assumes that "you", a being who is in principle capable of understanding language in both form and meaning, inexplicably find yourself in a hermetically enclosed library where you have no access to an interpreter that may endow form with meaning.

But a library is never a self-contained, hermetically enclosed environment. The sole purpose of a library is to preserve the form of a language for the benefit of "you" interpreting it, regardless of its spatial confines. The library itself was never meant to interpret the formal manifestations of the language it contains, even though in most cases a real-life library contains some tools that make it possible to decipher the meaning of the various recorded examples of language.

But in both the real-life library and the made-up library of this thought experiment, it is the user, external to the system and all it contains, who is in charge of assigning meaning to the bits of language he/she encounters.

The same is true for language models like ChatGPT. It has access to a vast library of formal records of language and its uses. It dispenses certain strings of characters only when prompted by a user, and only in ways suggested by the prompt. It is not meant to be an arbiter of meaning. The only "intelligence" it possesses is to provide several variations of output in response to the same prompt, which some may interpret as a sign of intelligence or even self-awareness.

This doesn't mean that an LLM is incapable of carrying meaning. Oblivious to the fact, it inevitably does. But not for its own consumption. A formal response to a prompt in itself presumes the presence of someone who is familiar enough with the linguistic system to generate the prompt in the first place, and hence is capable of interpreting the meaning contained in the response.

BootinUp

(47,143 posts)
5. It's fairly clear that people misunderstand
Fri May 26, 2023, 07:29 AM
May 2023

what LLMs are doing and are capable of doing. The thought experiment is an excellent way to correct the misunderstanding.

cachukis

(2,238 posts)
6. I think her point is we are the ones making sense of the botted
Fri May 26, 2023, 08:49 AM
May 2023

words. When I taught semantics and logic, I pointed out how there is always a little bit of truth in everything said. The issue for the reader was to discern how valid the argument was and what was not presented.

Does the position make sense? If it does, you will be transformed by it. If it doesn't, you will be transformed by it. Will your castle be made of sand or stone?

We are readers and ask questions. Realize, there are zillions who are not.

Martin68

(22,794 posts)
8. I'm not sure the thought experiment here accurately reproduces the situation of a computer
Fri May 26, 2023, 10:42 AM
May 2023

software AI that has access to the Thai library described here. A powerful computer can examine billions of pages of text in a relatively short time, using algorithms that detect patterns in a way the human brain could never hope to match; no human being could do that within a lifetime. Granted, the AI doesn't "understand" what it is processing, but it is powerful enough to analyze the patterns of language across millions of examples of written text and develop the ability to simulate an intelligent human being by combining text in ways that are convincingly "meaningful." Google searches already do something like this in a very simplistic way, but with the algorithms of AI such AI-produced text could be indistinguishable from that of a human being.
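The kind of pattern detection described here can be sketched concretely (my illustration, not the poster's): counting which words co-occur in the same sentence. The statistics are computed purely from form; nothing in the program knows what any word means.

```python
from collections import Counter
from itertools import combinations

# Count, for every pair of words, how many sentences contain both.
# This is pure pattern detection over form, with no notion of meaning.
def cooccurrence_counts(sentences):
    counts = Counter()
    for sentence in sentences:
        words = sorted(set(sentence.lower().split()))
        for pair in combinations(words, 2):
            counts[pair] += 1
    return counts

corpus = [
    "the library preserves the form of a language",
    "a language model manipulates form not meaning",
    "meaning requires a reader who knows the language",
]
stats = cooccurrence_counts(corpus)
# "a" and "language" co-occur in all three sentences.
print(stats[("a", "language")])  # -> 3
```

Scaled up to billions of pages and far richer statistics, this is the sort of regularity-mining that lets a model combine text convincingly without understanding any of it.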

The elephant in the room is whether any kind of "intelligence" and "free will" could emerge from such an AI. But if we put AI in charge of our grid, our medical decisions, our voting system, and whatever else, it could initiate actions that could be extremely dangerous. Like HAL in "2001: A Space Odyssey."

By the way, I don't know if anyone here has ever read the original Tarzan books by Edgar Rice Burroughs, but he had Tarzan learn to read, write, and speak English by working by himself with a few children's textbooks he discovered. It's very similar to the concept described here, except with English rather than Thai.

BootinUp

(47,143 posts)
9. The point of the experiment is to show
Fri May 26, 2023, 11:17 AM
May 2023

that to simulate is the best they will ever do. I know it may be hard for some to accept. Check the wiki article on John Searle's Chinese Room that I posted above; this concept has been discussed by many philosophers and others interested in or working on AI. But Searle is right, and these computations will never result in consciousness or intelligence or a brain.

Martin68

(22,794 posts)
11. If you can't tell the difference then it's like not knowing if Schrodinger's box contains
Fri May 26, 2023, 06:50 PM
May 2023

Last edited Sat May 27, 2023, 04:21 PM - Edit history (1)

a person or an AI.

BootinUp

(47,143 posts)
12. That would be like the Turing Test
Fri May 26, 2023, 07:18 PM
May 2023

Here’s a snippet from the wiki page on the Turing Test:

Since Turing introduced his test, it has been both highly influential and widely criticised, and has become an important concept in the philosophy of artificial intelligence.[7][8][9] Some of its criticisms, such as John Searle's Chinese room, are themselves controversial.

https://en.m.wikipedia.org/wiki/Turing_test

While passing the Turing Test would be very impressive, I think it’s important not to mischaracterize it. I get fatigued by all the tech hype that is thrown around to attract investors and sales. I don’t think it is helpful.

Beastly Boy

(9,323 posts)
13. When I posted my previous response, I took for granted what appeared self-evident to me.
Sat May 27, 2023, 11:14 AM
May 2023

I didn't realize how controversial this position has been historically and still is.

Turns out, both the Chinese Room experiment and the Thai Library experiment were a necessary response to the widely unquestioned proposition presented in the Turing test for gauging the threshold at which artificial intelligence can be considered on par with human intelligence. They both expose flaws in Turing's proposition. In the context of this larger debate, the Thai Library experiment is justified in its contrivance and is certainly not moot.

Thanks for the links.

Martin68

(22,794 posts)
14. I see that mine is the minority opinion. I am aware that my test is basically the same as
Sat May 27, 2023, 11:33 AM
May 2023

that of Turing. But I'll give this one more shot. My point is (as I believe Turing's was) that if you can't tell the difference with any test that you can devise, then the point is most certainly moot. Let me give another example from the real world. In the field of psychology there is no universal agreement on the existence of the unconscious or subconscious mind, because it cannot be located in the brain, nor can it be directly observed or tested. Based on experience and numerous logical arguments for its existence, I believe the subconscious mind is an essential part of consciousness. For the same reason I think it is possible to ascribe consciousness to an AI that is sophisticated enough to engage in a two-way dialog, produce logical arguments, and solve problems. We aren't there yet, but I can imagine the day when we are at that point. I believe there will be no way to prove or disprove the existence of a conscious AI entity.

I share your weariness with commercial hype, but that is a red herring. Corporations want the public to believe what they have created is much more than it actually is at this point. They always do. That is beside the point I am trying to make.

Beastly Boy

(9,323 posts)
15. In my view, the Thai Library experiment demonstrates the difference between
Sat May 27, 2023, 01:28 PM
May 2023

emulating thoughts and generating thoughts, something that the Turing experiment doesn't take into account.

Why is this distinction important in evaluating artificial intelligence? The Thai Library experiment proposes that intelligence is made up of two mutually complementary components: form and meaning. To make an analogy with your example of the subconscious mind being an essential part of consciousness, meaning is likewise an essential and indispensable attribute of intelligence. Form, as the experiment demonstrates, is only useful to represent thoughts so they can be consumed by others and, separated from meaning, is in itself not a sign of intelligence. The experiment hypothetically isolates form from meaning to illustrate this.

The computational nature of artificial intelligence, at least at its current stage of development, means it excels in the formal representation of thoughts but does not assign meaning to them. LLMs like ChatGPT are not capable of taking the meaning of what they generate into account at all. They merely parrot, sometimes convincingly enough to fool most humans into confusing the parroting with intelligent responses. And they only respond to human prompts: there is no independent thought process involved in their operations. The capacity of AI to fool humans into thinking it possesses human-like intelligence without actually possessing it is what is missing from Turing's proposition.

BootinUp

(47,143 posts)
16. Sometimes my replies are very short
Sat May 27, 2023, 02:05 PM
May 2023

and quick for various reasons. I apologize if I gave you the impression your post was not appreciated in some way. The point you are raising is at the heart of the debate, so it must be worth considering. I found the following paragraph somewhat illuminating on the subject.

https://plato.stanford.edu/entries/chinese-room/#SyntSema

In the 1990s, Searle began to use considerations related to these to argue that computational views are not just false, but lack a clear sense. Computation, or syntax, is “observer-relative”, not an intrinsic feature of reality: “…you can assign a computational interpretation to anything” (Searle 2002b, p. 17), even the molecules in the paint on the wall. Since nothing is intrinsically computational, one cannot have a scientific theory that reduces the mental, which is not observer-relative, to computation, which is. “Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago, but I did not.” (Searle 2002b, p.17, originally published 1993).


Tests of whether a system can fool an observer can never, by themselves, make me feel that AI is reaching some new level, because the data sets and possible test scenarios are far too huge. There are already scientific papers claiming that reliability problems and other issues related to the training data will persist (cannot be eliminated); this is shown by statistical analysis. That is my interpretation of the paper, which I would have to find if you are interested.

Martin68

(22,794 posts)
17. If the reliability issues persist then it will be clear that AI is not intelligent because those
Sat May 27, 2023, 04:20 PM
May 2023

issues are directly related to a lack of cohesiveness due to the absence of meaning. I guess we'll have to wait and see. As a scientist I am unwilling to rule out the possibility at this stage in AI's development.
