We’re forming a council of AI researchers who will judge resolutions on technical forecasting questions, and build an open-source set of standards available to forecasting organisations.
Current members are:
- Daniel Filan (Center for Human-Compatible AI)
- Chris Cundy (Stanford University)
- Gavin Leech (University of Bristol)
- William Saunders (Ought)
Why is this important?
Many impactful groups work on AI forecasting, but these efforts all face the problem of operationalising clear questions.
For example, suppose in 2000 you had used “superhuman Othello from self-play” as a benchmark of AI progress, and forecast that it would be possible by 2020. It seems you were correct: very plausibly, the AlphaZero architecture would work for this. Strictly speaking, however, your forecast was wrong, because no one has actually bothered to build a powerful Othello agent.
So when a calibrated forecaster faces this kind of question, considerations about who will bother to pursue which kinds of project can “screen off” the underlying technical-progress questions we actually want answered.
This situation could be avoided by resolving questions with a council of experts who judge which feats are counterfactually plausible, and having forecasters predict the council’s verdict instead of the underlying event.
What do council members commit to?
Roughly 5–20 hours of work per year, compensated with a symbolic honorarium.
Members discuss and vote on question resolutions in ~quarterly online meetings, with Parallel handling all operations and logistics.
Should I apply?
We think this opportunity would be a good fit if you:
- Have at least an ongoing PhD in AI (or equivalent experience)
- Are excited about contributing to infrastructure for the public good
- Have some familiarity with forecasting (though we will provide training for members with little prior experience)
If you’d like to join the council, you can apply here.