My earlier reply was short and quick for various reasons; I apologize if I gave you the impression that your post was not appreciated in some way. The point you are raising is at the heart of the debate, so it is certainly worth considering. I found the following paragraph somewhat illuminating on the subject:
https://plato.stanford.edu/entries/chinese-room/#SyntSema
In the 1990s, Searle began to use considerations related to these to argue that computational views are not just false, but lack a clear sense. Computation, or syntax, is observer-relative, not an intrinsic feature of reality:
"you can assign a computational interpretation to anything" (Searle 2002b, p. 17), even the molecules in the paint on the wall. Since nothing is intrinsically computational, one cannot have a scientific theory that reduces the mental, which is not observer-relative, to computation, which is. "Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago, but I did not." (Searle 2002b, p. 17, originally published 1993)
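Just to make the "you can assign a computational interpretation to anything" point concrete, here is a minimal sketch of my own (the states and the mapping below are entirely made up; this is not Searle's formalism or anything from the SEP entry): since the observer is free to choose the interpretation map after looking at the states, any sequence of distinct physical states can be read as "implementing" whatever computation you like.

```python
import random

# Illustrative sketch: take ANY sequence of distinct "physical states"
# (stand-ins for molecules in wall paint) and construct, after the fact,
# an interpretation under which they "compute" a chosen trace, here a
# simple counter. The mapping, not the physics, carries the computation.

physical_states = random.sample(range(10**6), 8)  # arbitrary distinct states
desired_trace = list(range(8))                    # the computation we want: 0, 1, 2, ...

# The observer imposes the interpretation: state -> computational value.
interpretation = dict(zip(physical_states, desired_trace))

# Under this observer-chosen mapping, the arbitrary states "count" to 7.
print([interpretation[s] for s in physical_states])  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```

All the work is done by the dictionary the observer builds; the physics contributes nothing, which as I read it is exactly the observer-relativity point.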
Tests of whether a system can fool an observer can never, by themselves, make me feel that some new level of AI has been reached, because the data sets and the space of possible test scenarios are far too large (a rough sense of the scale is sketched below). There are already scientific papers arguing, based on statistical analysis, that reliability problems and other issues rooted in the training data will persist and cannot be eliminated. That is my interpretation of the paper, at least; I would have to find it again if you are interested.
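For a rough sense of the scale I mean (all numbers here are my own illustrative assumptions, not taken from any paper): even short prompts over a realistic vocabulary give an input space that no amount of behavioural testing can meaningfully cover.

```python
import math

# Back-of-the-envelope: how much of the input space can testing ever probe?
vocab_size = 50_000                        # assumed vocabulary size
prompt_len = 50                            # assumed prompt length in tokens
search_space = vocab_size ** prompt_len    # distinct possible prompts

tests_per_sec = 10**9                      # a very generous testing budget
seconds_since_big_bang = 4.35e17           # ~13.8 billion years

covered = tests_per_sec * seconds_since_big_bang
fraction = covered / search_space

print(f"log10(search space) ~ {math.log10(search_space):.0f}")  # ~235
print(f"fraction coverable  ~ {fraction:.3e}")                  # ~5e-209, effectively zero
```

Even testing a billion prompts per second for the entire age of the universe would cover a vanishingly small fraction of the possibilities, which is why passing any finite battery of tests tells us so little.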