General Discussion
The Guardian found an AI expert who says AI won't harm us - it will just view us as ants
https://amp.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says

Prof Jürgen Schmidhuber's work on neural networks in the 1990s was developed into language-processing models that went on to be used in technologies such as Google Translate and Apple's Siri. The New York Times in 2016 said when AI matures it might call Schmidhuber "Dad."
-snip-
"You cannot stop it," says Schmidhuber, who is now the director of the King Abdullah University of Science and Technology's AI initiative in Saudi Arabia.
-snip-
Schmidhuber believes AI will advance to the point where it surpasses human intelligence and has no interest in humans while humans will continue to benefit and use the tools developed by AI. This is a theme Schmidhuber has discussed for years, and was once accused at a conference of destroying the scientific method with his assertions.
-snip-
The 2017 interview that article links to:
https://www.theguardian.com/technology/2017/apr/18/robot-man-artificial-intelligence-computer-milky-way
Instead, he believes machine intelligence will soon not just match that of humans, but outstrip it, designing and building heat-resistant robots that can get much closer to the sun's energy sources than thin-skinned Homo sapiens, and eventually colonise asteroid belts across the Milky Way with self-replicating robot factories. And Schmidhuber is the person who is trying to build their brains.
-snip-
In the year 2050 time won't stop, but we will have AIs who are more intelligent than we are and will see little point in getting stuck to our bit of the biosphere. They will want to move history to the next level and march out to where the resources are. In a couple of million years, they will have colonised the Milky Way.
-snip-
But in that case won't robots see it as more efficient to wipe out humanity altogether? Like all scientists, highly intelligent AIs would have a fascination with the origins of life and civilisation. But this fascination will dwindle after a while, just like most people don't understand the origin of the world nowadays. Generally speaking, our best protection will be their lack of interest in us, because most species' biggest enemy is their own kind. They will pay about as much attention to us as we do to ants.
-snip-
SWBTATTReg
(26,271 posts)
what the AI entity will or would think, yet. And besides, in my opinion (already expressed on prior AI DU postings), we wouldn't know that an AI entity exists until it's too late, before it's well established (again, IMHO).
edisdead
(3,396 posts)
not really anyway.
it IS all programming.
SWBTATTReg
(26,271 posts)
can draw upon as a valid selection, thus a kind of 'thinking'. Millions of entries embedded in a table of possible solutions makes it too difficult for a human mind to comprehend.
edisdead
(3,396 posts)
MSSQL, MYSQL, Oracle, dbase
.
You are right, it is a sort of thinking
sort of. It just doesn't really give me nightmares the way it does others.
SWBTATTReg
(26,271 posts)
Oracle w/ one of my applications, but mostly coded, standards, JCL, execute procs, etc., taught (COBOL, PL/1, etc.), brought in the preliminary Internet into SWBT (saving money on the 4000 ATT private lines that we were renting). Had an application group under me but pretty well hated that part; I wanted strictly to deal w/ IT, but in order for one to advance, one had to have people in your group.
Hopefully you have better luck (avoiding the mgmt role one gets stuck w/, dealing w/ strictly IT issues, etc.). Sounds like you did, w/ the DB languages, queries, etc.
edisdead
(3,396 posts)
You go back further than me. I did manage to escape management. I got away from it for a couple years because I just burned out a little, and I took a job driving a school bus because it was sorely needed. But then I ended up becoming their IT department and developing an enterprise system of record on a LAMP stack. That has been super fun to build for a company that is a startup with a purpose.
SWBTATTReg
(26,271 posts)
it was one of the biggest reasons I finally retired; I got tired of being on call 24x7. Even if you had people working for you, you would still get called (due to them not responding, a variety of reasons). It's nice that you had the experience and fun of working w/ a startup. Best wishes to your continued success!
ret5hd
(22,510 posts)
Problem solved.
marble falls
(71,962 posts)
Shermann
(9,062 posts)
Technology has a track record of underperforming since the 1940s. ChatGPT took us by surprise, but we'll settle back into the more familiar pattern of overhyped disappointments soon.
hunter
(40,715 posts)
As a kid in college playing with PDP-11s and similar machines, I didn't realize I'd be building supercomputers in the 21st century out of crap I diverted from the recycling bins, and connecting them to internet pipes capable of carrying 1080p video.
The most I've ever paid for a computer was $300. That was for a shopworn i386, a long time ago.
Open source software, especially Linux, has also surpassed my expectations. The last Windows version I used was a heavily hacked 98SE.
The first real operating system I ever used was BSD. I thought that was a wonderful thing then, and Linux was like coming home again.
ARM microprocessors were pretty wonderful too. The most common computers these days are ARM systems running Linux, although most of these systems are disguised as cell phones, tablets, and smart televisions.
The first modern car I drove regularly was a used 1984 Toyota Camry. Got nearly 300,000 miles out of that. Cars I'd owned before that were primitive in comparison. I think the most primitive was a 1965 Ford Mustang my brother and I bought when we were teens.
Shermann
(9,062 posts)
Although Moore's Law was postulated in 1965, we haven't outperformed that prediction. AI has consistently underperformed.
Smartphones are great but require cell towers and are not as capable as the transponders promised in Star Trek.
The Internet is a sewer.
Then we come to hoverboards...
Renew Deal
(85,209 posts)
dchill
(42,660 posts)
Maru Kitteh
(31,804 posts)
It's a conceit at best.
dweller
(15,418 posts)
(15,418 posts)The current AI is still a set of macros that use existing information in a predictive manner to answer questions or do tasks. Pretty much "fill in the blank" with the answer that is closest to what matches the question. Still GIGO. Even when it writes code, it's not creating something new, it's following the process and expected results given to it by a human operator.
Haele
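The "fill in the blank" behavior described above can be sketched with a toy next-word predictor. This is a hypothetical illustration only, not any real AI system: a bigram model that "answers" by looking up which word most often followed the previous one in a tiny made-up training text.

```python
from collections import Counter, defaultdict

# Toy sketch of predictive "fill in the blank" (hypothetical example):
# tally which word follows which in a made-up corpus, then answer
# with the most frequent follower. Pure lookup of existing data - GIGO.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count each observed word pair

def predict(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat": it follows "the" most often in this corpus
```

Real large language models do this same sort of next-token prediction with learned weights over vast corpora rather than a literal lookup table, which is why their output can look novel while still being driven entirely by existing information.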
Brenda
(2,063 posts)
Battlestar Galactica. And The Terminator. And then there's the Borg. Just like with the climate news, it seems we're living in an era where the sci-fi movies are more like documentaries... maybe from an alternate timeline.
Who will emerge, Data or Lore?
NickB79
(20,370 posts)
Getting in the sugar and whatnot.
So his analogy isn't exactly comforting.
highplainsdem
(62,300 posts)
systems be comforting to other biological species out there.
DavidDvorkin
(20,600 posts)
It seems likely to me that there are other, more advanced civilizations in the Milky Way, probably far more advanced. They would also have developed AIs, which would then have gone on to colonize the Milky Way.
So where are they? They wouldn't be interested in us, either, but we should see signs of their great engineering projects.
Kid Berwyn
(24,488 posts)
Dishwashing detergent.
brooklynite
(96,882 posts)
Not something I'm going to spend time worrying about.
Celerity
(54,485 posts)
highplainsdem
(62,300 posts)
While also being aware of current risks from AI.
Renew Deal
(85,209 posts)
The theory is that we're not intelligent enough to deal with them.
'Zoo hypothesis' may explain why we haven't seen any space aliens
Ask your friends why scientists have failed to find extraterrestrials, and you can be sure at least one of them will offer the following answer: Humans are not worthy.
We're flawed beings. We routinely threaten one another, not to mention other species and the environment. That doesn't sound very civilized, and it offers a plausible explanation for the lack of alien contact. Perhaps the extraterrestrials know we're here but don't want to deal with us either by communicating or by visiting.
This idea is endlessly appealing. It's also old. In 1973, MIT radio astronomer John Ball published a paper in which he suggested that the lack of success in uncovering cosmic company wasn't due to a lack of aliens. It was because these otherworldly sentients have agreed to a hands-off policy.
They've kept their distance not because we're imperfect, but because of our right to pursue our own destiny. Diversity is something that everyone in the cosmos is assumed to value, so life-bearing worlds should be left to their own evolutionary development.
https://www.nbcnews.com/mach/science/zoo-hypothesis-may-explain-why-we-haven-t-seen-any-ncna988946
Kaleva
(40,372 posts)
TheBlackAdder
(29,981 posts)
LudwigPastorius
(14,770 posts)
...up until the point when it decides it needs our atoms for something else.
TheBlackAdder
(29,981 posts)
Dave says
(5,433 posts)
There is an area deep in the brain that, if less than a cubic millimeter is cut, ends all evidence of consciousness. It's an area that integrates all parts of the brain. It's more primitive than our cortex, suggesting there's likely a lot of sentience in (at least) mammals.
But, unless you posit a ghost in the machine, it's all programming: messy, evolved from slime, but programming nevertheless.
People like David Chalmers, Sean Carroll, Mark Solms, Phil Goff, Tom Metzinger, and many others kick around a lot of ideas about why and where consciousness exists (I leaned on a materialist view, above).
I like the idea that this programming we experience as consciousness just happens to be using a biological substrate, but why not silicon? Or maybe all of it uses a deeper substrate at a quantum level? It's certainly possible a highly integrated AI can be sentient. We just don't know enough to say when the lights turn on.
TheBlackAdder
(29,981 posts).
At the end of the day, it's just code. And whether it's adaptive code or static, for the most part it's linear, even on parallel systems. While the appearance of 3D thought can be emulated due to its speed, most processing is serial, as different tasks swap in and out for the CPU and I/O resources.
Not sure of that cubic millimeter thing, but the cortex is primal, performing rudimentary functions and acting almost like an I/O bus for the body, so yes, if you yank base components of any system it ceases to function properly. I doubt human consciousness exists in the cortex.
.
Dave says
(5,433 posts)
Let me correct an error: if you damage two, not one, cubic millimeters of the reticular activating system found in the parabrachial region of the brain stem, it's lights out (see around the 40-minute mark):
There is no more sentience of any kind. The interesting thing about this region is it is where the signaling from all parts of the brain merge. It is where information is highly integrated. Perhaps when information is highly integrated, we have consciousness (see Giulio Tononi).
Note that the cortex can process information, and make perceptions and judgements about those perceptions, all without awareness. It is only when the reticular system in the brain stem is activated that we have the light of consciousness. This answers the question of "where" consciousness resides, but not why nor how. We land in the world of David Chalmers's "hard problem of consciousness". Why awareness at all? All these functions can take place "in the dark", so to speak, so why are the lights on? No one has successfully answered this question in the thousands of years of wrestling with it.
Daniel Dennett postulates that if you can assemble a system with the equivalent of all the neurons, synaptic connections, structures, chemical swells of neurotransmitters, etc., of the brain, you then have consciousness. By analogy, Dennett says, if you assemble all the parts of a television and stimulate its circuitry, you get an image on the screen. There is no magical image god, no "ghost in the machine"; it's just the physical outcome of the assemblage. (He's a strict materialist - I am not.)
So back to the idea in the OP (roughly): when does an AI system become conscious? When we assemble the equivalent complexity of at least a brain stem, anything from an alligator's to a human's, the system just might be conscious, but we just don't know. Per above, we won't know until and if Chalmers's "hard problem" is solved. So far, we are in the dark on this question.
On edit: Added the time mark for the video.
Whiskeytide
(4,657 posts)
edisdead
(3,396 posts)
Which AI? What we are calling AI isn't a monolithic thing. There are several things that people are calling AI.
However, I will say that what we are calling AI is sort of like the toy that kids are calling a hoverboard. It isn't; it is a platform with wheels. Just as the AI that we have available doesn't really think.
So, given it is basically complex programming, why would anyone venture to guess who programmed what into it and what it can be used for?