
BootinUp

(46,928 posts)
16. Sometimes my replies are very short
Sat May 27, 2023, 02:05 PM

and quick, for various reasons. I apologize if I gave you the impression that your post was not appreciated in some way. The point you are raising is at the heart of the debate, so it is certainly worth considering. I found the following paragraph somewhat illuminating on the subject.

https://plato.stanford.edu/entries/chinese-room/#SyntSema

In the 1990s, Searle began to use considerations related to these to argue that computational views are not just false, but lack a clear sense. Computation, or syntax, is “observer-relative”, not an intrinsic feature of reality: “…you can assign a computational interpretation to anything” (Searle 2002b, p. 17), even the molecules in the paint on the wall. Since nothing is intrinsically computational, one cannot have a scientific theory that reduces the mental, which is not observer-relative, to computation, which is. “Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago, but I did not.” (Searle 2002b, p.17, originally published 1993).


Tests of whether a system can fool an observer can never, by themselves, convince me that it has reached some new level of AI, because the data sets and the space of possible test scenarios are far too large. There are already scientific papers arguing that reliability problems and other issues tied to the training data will persist (cannot be eliminated), and that this is shown by statistical analysis. That is my interpretation of the paper, at least, which I would have to find if you are interested.
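
To make the observer-relativity point concrete, here is a toy sketch in Python (my own illustration, not from Searle or the SEP entry). The idea is that one and the same sequence of physical states can be read as two entirely different computations, depending on the mapping an observer chooses to impose:

    # Pretend these are successive states of some physical system
    # (the molecules in the paint on the wall, if you like).
    states = ["A", "B", "C", "D"]

    # Observer 1 maps the states to bits and reads the sequence
    # as an alternating binary signal.
    as_bits = {"A": 0, "B": 1, "C": 0, "D": 1}
    print([as_bits[s] for s in states])  # [0, 1, 0, 1]

    # Observer 2 maps the very same states to input/output triples
    # and reads the sequence as a trace of an AND gate.
    as_gate = {"A": (0, 0, 0), "B": (0, 1, 0), "C": (1, 0, 0), "D": (1, 1, 1)}
    for s in states:
        x, y, out = as_gate[s]
        print(f"AND({x}, {y}) = {out}")

    # Neither reading is intrinsic to the states themselves; each
    # computation exists only relative to the mapping its observer imposed.

Neither interpretation is any more "in" the physical states than the other, which is exactly why Searle says computation is observer-relative rather than an intrinsic feature of reality.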