
bananas

(27,509 posts)
8. A number of organizations are looking at global existential risks
Sun Oct 26, 2014, 06:28 PM

The Bulletin of the Atomic Scientists' Doomsday Clock isn't just about nuclear war:

http://thebulletin.org/overview

The Doomsday Clock is an internationally recognized design that conveys how close we are to destroying our civilization with dangerous technologies of our own making. First and foremost among these are nuclear weapons, but the dangers include climate-changing technologies, emerging biotechnologies, and cybertechnology that could inflict irrevocable harm, whether by intention, miscalculation, or by accident, to our way of life and to the planet.


Nassim Taleb, famous for "The Black Swan", is creating an institute at NYU:
http://nassimtaleb.org/tag/extreme-risk-institute/

Extreme Risk Institute

Nassim Taleb is starting the new academic year with a new role. Along with Charles Tapiero, Taleb will be co-director of the EXTREME RISK INITIATIVE, which is expected to develop into an Extreme Risk Institute within the NYU School of Engineering. Here is the official description from his Facebook Page:

In spite of the importance of extreme/hidden risks, there has not been a rigorous methodology to deal with them; statistical or mathematical approaches have not been formally reconciled with real-world decision-making the way engineering has traditionally integrated mathematics and real world heuristics. Extreme risks require both more mathematical and more practical rigor.

The “Extreme Risks Initiative” (ERI) is an interdisciplinary open research agenda at the NYU School of Engineering, based on research axes defined by its members and on global research collaborations. Its approaches sit at the intersection of the technical and the practical, grounded in a rigorous merger of theory and practice across disciplinary lines. These may include financial and economic engineering, urban risk engineering, transportation networks, bio-systems, and global and environmental problems. A selected series of research axes, along with publications drawing on members’ initiatives, is included in the ERI working paper series and in its current research enterprises.


Martin Rees and others created the Centre for the Study of Existential Risk at Cambridge:
http://en.wikipedia.org/wiki/Centre_for_the_Study_of_Existential_Risk

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price (a philosophy professor at Cambridge), Martin Rees (a cosmologist, astrophysicist, and former President of the Royal Society) and Jaan Tallinn (a computer programmer and co-founder of Skype).[1] According to its website, CSER's advisors include philosopher Peter Singer, computer scientist Stuart J. Russell, statistician David Spiegelhalter, and cosmologists Stephen Hawking and Max Tegmark.[2] According to their website their "goal is to steer a small fraction of Cambridge’s great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future."[2][3]


Their website:
http://cser.org/

Safeguarding our passage through the 21st Century

The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of human extinction-level risks that may emerge from technological advances. We aim to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly developing technological power.

