General Discussion
In reply to the discussion: Steve Wozniak: The Future of AI Is 'Scary and Very Bad for People'

Fumesucker
5. This piece was written 22 years ago...

And it's still a bit ahead of its time...
https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
_What is The Singularity?_
The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.
The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)
What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
<snip>
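The "amazingly steady curve" the excerpt leans on is Moore's-law-style exponential growth in hardware. Purely as a back-of-envelope illustration (the two-year doubling period is an assumption, not anything Vinge specifies), here is what that compounding implies over his 1993-to-2005 and 1993-to-2030 horizons:

```python
# Back-of-envelope sketch of Moore's-law-style growth.
# Illustrative assumption: raw hardware capacity doubles every 2 years.

def doublings(years, period=2.0):
    """Number of doublings that fit in a span of years."""
    return years / period

def growth_factor(years, period=2.0):
    """Overall multiplicative growth over the span."""
    return 2 ** doublings(years, period)

# Vinge wrote in 1993 and named a 2005-2030 window for the event.
for horizon in (12, 37):  # years from 1993 to 2005, and to 2030
    print(f"{horizon} years -> ~{growth_factor(horizon):,.0f}x raw capacity")
```

Twelve years of doubling gives a 64-fold increase; 37 years gives several hundred thousand-fold. The point of the sketch is only that a "steady curve" of this kind compounds into qualitatively different machines within a single prediction window.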
172 replies
#2 · kelly1mm · Mar 2015: Basically we go 'Star Trek' with basic income guarantee or 'Soylent Green' with capitalism as it
#16 · Xithras · Mar 2015: That's the real question. AI is a threat to human society as it is currently structured
#135 · kelly1mm · Mar 2015: You may be referring to the Ferengi and their quest for Gold Pressed Latinum. Fun Fact: The
#94 · workinclasszero · Mar 2015: 'Star Trek' with basic income guarantee or 'Soylent Green' with capitalism as it"
#34 · RKP5637 · Mar 2015: It is, and many humans already function as organic robots in the system controlled by silicon, later
#37 · RKP5637 · Mar 2015: Yep, humans, an inferior byproduct of an extraterrestrial experiment to populate a spinning ball of
#39 · RKP5637 · Mar 2015: Yes, a significant EMP "if" data is unprotected. As someone else said, redundancy is key. Many
#65 · RKP5637 · Mar 2015: Quite true, core data would be intact, but peripheral access would be destroyed/limited.
#108 · workinclasszero · Mar 2015: Oh yes you are right. Just look at what happens in Florida when hurricane warnings go up
#112 · bananas · Mar 2015: I'm gonna start pronouncing it "pee-nee-yak" to sound like ENIAC and UNIVAC. nt
#79 · ND-Dem · Mar 2015: Why do you think this excerpt supports your point? Just curious, just wanted to see
#83 · Fumesucker · Mar 2015: There was a quote from one of the researchers in the last line of the piece..
#120 · bemildred · Mar 2015: That is an interesting point, about the general superiority of our unconscious minds.
#78 · JonLP24 · Mar 2015: Well a lack of one would make something they'd have in common with sociopaths more specifically
#23 · Chathamization · Mar 2015: Yep. Productivity growth is actually fairly low at the moment, yet people are talking about the
#32 · Logical · Mar 2015: At some point AI will be more powerful than humans. No doubt about it. And robots will....
#138 · Logical · Mar 2015: LOL, so unless you have 100% proof, you don't believe anything might happen......
#153 · Logical · Mar 2015: How can you think it will not happen? It has advanced constantly! Like my question to.....
#159 · Logical · Mar 2015: LOL, still no answer? Why ask me if I want to continue? If you want to quit just say so. Slowly.....
#161 · Logical · Mar 2015: So you claim you do not think at some point AI will reach a point that it is like....
#167 · Logical · Mar 2015: Who in the hell asks for proof of a technology prediction with no end date? I guess only you! You...
#45 · Chathamization · Mar 2015: "At some point"...true, and at some point the sun will be gone. "In the long run we're all dead."N/T
#51 · Major Hogwash · Mar 2015: We already use computers to design chips that we couldn't think of for ourselves.
#27 · Fumesucker · Mar 2015: I wouldn't be so sure, your smartphone is far more sophisticated than a Star Trek communicator
#68 · hunter · Mar 2015: How do we know you are not an Artificial Intelligence messing with us??? Hmmmmm???
#21 · Hassin Bin Sober · Mar 2015: Great. No fucking flying car but we skip right to slave to evil computers.
#42 · Electric Monk · Mar 2015: AeroMobil has working prototypes and says they'll have one on the market by 2017, actually
#31 · GliderGuider · Mar 2015: The project is mostly complete at this point. Welcome to the Cybernetic Civilization.
#64 · Fumesucker · Mar 2015: I'm not sure I'll see a true AI in my lifetime but my kids more than likely will..
#70 · Chathamization · Mar 2015: We're still struggling with modeling the 302 neurons of C. elegans. We're not going to get to the
#111 · Chathamization · Mar 2015: Part of the reason why these predictions are so silly is because we don't even understand the brain
#122 · Chathamization · Mar 2015: Or even the flexibility of a mouse, something still far beyond our grasp
#99 · Fumesucker · Mar 2015: One of the reasons I stay on DU, it's fairly low bandwidth and my connection isn't always quick
#129 · Auggie · Mar 2015: Climate change is scarier, it's happening now, and it's worse for people than AI