80,000 Hours Podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.

https://80000hours.org/podcast/

An average episode of this podcast lasts 2h17m. 237 episode(s) have been released so far. A new episode comes out roughly every 9 days.

Total length of all episodes: 21 days 18 hours 14 minutes

Great power conflict (Article)


Today’s release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.

If you want to check out the links, footnotes and figures in today’s article, you can find those here...


September 22, 2023 · 1h19m

#163 – Toby Ord on the perils of maximising the good that you do


Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do...


September 8, 2023 · 3h7m

The 80,000 Hours Career Guide (2023)


An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon and on Audible.

If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.


September 4, 2023 · 4h41m

#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI


Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar)...


September 1, 2023 · 59m

#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite


"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892.

However, the number of human manual operators peaked in 1920, 30 years after this...


August 23, 2023 · 3h30m

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment


"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that...


August 14, 2023 · 2h36m

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less


In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...


August 8, 2023 · 2h51m

We now offer shorter 'interview highlights' episodes


Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren’t necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode — but we think these will be a nice upgrade on skipping episodes entirely...


August 5, 2023 · 6m

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk


Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others...


August 1, 2023 · 3h13m

#157 – Ezra Klein on existential risk from AI and what DC could do about it


In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D...


July 24, 2023 · 1h18m