Concerning AI | Existential Risk From Artificial Intelligence

Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?

https://concerning.ai

An average episode of this podcast runs 29 minutes. 75 episodes have been released so far. A new episode appears every two weeks.

Total length of all episodes: 1 day, 11 hours, 37 minutes

0060: Peter Scott’s Timeline For Artificial Intelligence Risks


Timeline For Artificial Intelligence Risks
Peter’s Superintelligence Year predictions (5%, 50%, and 95% chance): 2032 / 2044 / 2059
You can get in touch with Peter at HumanCusp.com and Peter@HumanCusp.com
For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3


February 13, 2018 · 42m

0059: Unboxing the Spectre of a Meltdown


SpectreAttack.com
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0059-2018-01-14.mp3


January 30, 2018 · 20m

0058: Why Disregard the Risks?


There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be.
Wikipedia’s list of cognitive biases
AlphaZero
Virtual Reality
Recorded January 7, 2018; originally posted to Concerning.AI
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0058-2018-01-07.mp3


January 16, 2018 · 37m

0057: Waymo is Everybody?


If the Universe Is Teeming With Aliens, Where Is Everybody?
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0057-2017-11-12.mp3


January 2, 2018 · 18m

0056: Julia Hu of Lark, an AI Health Coach


Julia Hu, founder and CEO of Lark, an AI health coach, is our guest for this episode. Her tech is really cool and is clearly making a positive difference in lots of people's lives right now. Longer term, she doesn't see much to worry about.


December 19, 2017 · 46m

0055: Sean Lane


Ted had a fascinating conversation with Sean Lane, founder and CEO of Crosschx.


December 5, 2017 · 40m

0054: Predictions of When


We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when we might see superintelligence, etc. Back in January, we made up a three-number system for talking about our own predictions and asked our community on Facebook to play along […]


November 21, 2017 · 29m

0053: Listener Feedback


Great voice memos from listeners led to interesting conversations.


November 7, 2017 · 36m

0052: Paths to AGI #4: Robots Revisited


We continue our mini-series about paths to AGI.
Sam Harris’s podcast about the nature of consciousness
Robot or Not podcast
See also:
0050: Paths to AGI #3: Personal Assistants
0047: Paths to AGI #2: Robots
0046: Paths to AGI #1: Tools
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0052-2017-10-08.mp3


October 24, 2017 · 43m

0051: Rodney Brooks Says Not To Worry


Rodney Brooks’s article: The Seven Deadly Sins of Predicting the Future of AI


October 10, 2017 · 40m