Originally released in June 2022.
If a business has spent $100 million developing a product, it's a fair bet that it doesn't want that product stolen in two seconds and uploaded to the web where anyone can use it for free.
This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars...
Originally released in November 2018.
After dropping out of a machine learning PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option.
He decided to apply to OpenAI and spent about six weeks preparing for the interview before landing the job...
Originally released in August 2022.
Today’s release is a professional reading of our new problem profile on preventing an AI-related catastrophe, written by Benjamin Hilton.
We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks...
Article originally published February 2022.
In this episode of 80k After Hours, Perrin Walker reads our career review of China-related AI safety and governance paths.
Here’s the original piece if you’d like to learn more.
You might also want to check out Benjamin Todd and Brian Tse's article on improving China–Western coordination on global catastrophic risks...