Total length of all episodes: 1 day 11 hours 37 minutes
Timeline For Artificial Intelligence Risks
Peter's Superintelligence Year predictions (5% chance, 50%, 95%): 2032/2044/2059
You can get in touch with Peter at HumanCusp.com and Peter@HumanCusp.com
For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3
SpectreAttack.com
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0059-2018-01-14.mp3
There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be.
Wikipedia's list of cognitive biases
AlphaZero
Virtual Reality
Recorded January 7, 2018; originally posted to Concerning.AI
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0058-2018-01-07.mp3
If the Universe Is Teeming With Aliens, Where Is Everybody?
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0057-2017-11-12.mp3
Julia Hu, founder and CEO of Lark, an AI health coach, is our guest this episode. Her tech is really cool and clearly making a positive difference in lots of people's lives right now. Longer term, she doesn't see much to worry about.
We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when we might see superintelligence, etc. Back in January, we made up a 3-number system for talking about our own predictions and asked our community on Facebook to play along […]
We continue our mini-series about paths to AGI.
Sam Harris's podcast about the nature of consciousness
Robot or Not podcast
See also:
0050: Paths to AGI #3: Personal Assistants
0047: Paths to AGI #2: Robots
0046: Paths to AGI #1: Tools
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0052-2017-10-08.mp3
Rodney Brooks article: The Seven Deadly Sins of Predicting the Future of AI