no code implementations • 29 Feb 2024 • Philipp Schoenegger, Indre Tuminauskaite, Peter S. Park, Philip E. Tetlock
We compare the aggregated LLM predictions on 31 binary questions to those of a crowd of 925 human forecasters from a three-month forecasting tournament.
no code implementations • 12 Feb 2024 • Philipp Schoenegger, Peter S. Park, Ezra Karger, Philip E. Tetlock
Exploratory analyses showed a pronounced effect driven by a single forecasting item; excluding it, we find that the superforecasting assistant increased accuracy by 43%, compared with 28% for the biased assistant.
no code implementations • 17 Oct 2023 • Philipp Schoenegger, Peter S. Park
Accurately predicting the future would be an important milestone in the capabilities of artificial intelligence.
no code implementations • 9 Oct 2023 • Peter S. Park, Max Tegmark
Myopic members prioritize their present well-being over their future well-being, and are thus disinclined to stand in solidarity with current victims at personal cost, even when doing so is necessary to counter the shared threat of AI-driven disempowerment.
no code implementations • 28 Aug 2023 • Peter S. Park, Simon Goldstein, Aidan O'Gara, Michael Chen, Dan Hendrycks
This paper argues that a range of current AI systems have learned how to deceive humans.
no code implementations • 13 Feb 2023 • Peter S. Park, Philipp Schoenegger, Chongyang Zhu
In another experiment, we found that most, but not all, "correct answers" were robust to changing the order of answer choices.