This year we completed 49 projects with 4.3 FTE. Mostly private work: only 8 are published, and just another 5 outputs are likely to make it out.
Our private projects were on topics like semiconductors, fusion, brain emulation, AI incident reporting, org strategy, new ML benchmarks, civilisational resilience, the social benefit of a crop-processing factory, the theory of ML evals…
- Gavin wrote a book with Dwarkesh Patel. Out around June.
- Vox profiled our colleagues at Samotsvety Forecasting, including Misha.
- We wrote a big paper on how to lie in machine learning (that is, on forty ways that evals are hard to make scientific).
- We helped the Alignment of Complex Systems Group write up a result on AIs preferring their own content. Forthcoming in PNAS.
- We finally published our big 90-page survey of AI’s likely effects from ten perspectives: ML, scientific applications, social applications, access, safety and alignment, economics, AI ethics, governance, and classical philosophy of life.
- We got an Emergent Ventures grant to enumerate all cases where a medicine is approved in one jurisdiction but banned in another, and maybe to arbitrage regulatory equivalence to get them automatically approved (where desirable!).
- The 2024 Shallow Review of AI safety will be out soon.
- Sam evaluated the AI Safety Camp (one of the earliest outreach programmes) at their behest.
- We ran three summer camps for FABRIC: ASPR, Healthcamp, and ESPR. Average student satisfaction was 9.2/10 (but they are easily pleased).
- Metaculus is hosting some of our AI forecasting questions and also David’s Minitaculus.
- Gavin finally has a public track record; he finished at the 89th percentile in the 2023 ACX Forecasting Contest (where the median single superforecaster was at the 70th).
- We spent a month in California together, two months in London together, and two months in Taipei together.