Parallel

Parallel’s goal is to build the social and technical infrastructure needed for accurate predictions on decision-relevant questions in science and technology. In particular, we want to understand and forecast the future of AI and ML.

This started as a relatively simple project: list questions on prediction markets that we expected to be particularly relevant to AI technology timelines. However, we quickly realized how many pieces of the forecasting “ecosystem” were missing, pieces we would need in order to draw useful insight from markets.

Open Issues:

  • Operationalization: How do you generate forecastable questions that are tied to real-world events and still carry a strong ‘signal’ about what is going to happen? How can you decompose a big question (when will we create AGI?) into tightly ‘operationalized’ smaller questions?
  • Worldviews: How can we organize, display, and draw inferences from thousands of predictions and questions?
  • Mechanism Design: What’s the right structure to reward forecasters for their time and contributions while still promoting the sharing of ideas? How can we leverage market mechanisms to incentivize smart people to share their beliefs? (One classical building block, proper scoring rules, is sketched after this list.)
  • Enabling Specialization: How can we build out the “supply chain for forecasting” and enable users to efficiently outsource tasks like research or question decomposition?
  • Scaling: Getting 10 people to share their ideas and models on a forecast is relatively easy; how do you scale that to 1,000 people while still promoting the best ideas?
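On the mechanism-design question, one classical building block is a proper scoring rule: a payment scheme under which reporting your true belief maximizes your expected reward. Below is a minimal sketch in Python illustrating the idea with the log score; the function names and numbers are illustrative, not anything Parallel has built.

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Reward for forecasting probability p on a binary question.

    Proper: if your true belief is q, the expected score
    q*log(p) + (1-q)*log(1-p) is maximized at p = q, so honest
    reporting is the reward-maximizing strategy.
    """
    p = min(max(p, 1e-9), 1 - 1e-9)  # clip away 0/1 to keep log finite
    return math.log(p) if outcome else math.log(1 - p)

def expected_score(q: float, p: float) -> float:
    """Expected log score for reporting p when your true belief is q."""
    return q * log_score(p, True) + (1 - q) * log_score(p, False)

if __name__ == "__main__":
    q = 0.7  # the forecaster's true belief (illustrative)
    for report in (0.5, 0.6, 0.7, 0.8, 0.9):
        print(f"report {report:.1f}: expected score {expected_score(q, report):+.4f}")
```

Running the loop shows expected reward peaking at the honest report of 0.7. Market mechanisms such as Hanson’s logarithmic market scoring rule build on the same logic.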

Our strategy is to build out this missing infrastructure. We expect this to look like a combination of software and organizations: real- and fake-money prediction markets and tournaments (ai.metaculus), new software to organize and display the aggregated forecasts, and codified knowledge (dictionary, resolution council).

If you have use cases or suggestions, or just want to chat about forecasting, please reach out.

Team

Backers

This project would not be possible without the generous support of the Berkeley Existential Risk Initiative and the EA Long-Term Future Fund.