Acierno, J., Mischel, S., & Phillips, J. (2022). Moral judgements reflect default representations of possibility. Philosophical Transactions of the Royal Society B, 377(1866), 20210341.
Allen, M., Poggiali, D., Whitaker, K., Marshall, T. R., van Langen, J., & Kievit, R. A. (2019). Raincloud plots: A multi-platform tool for robust data visualization. Wellcome Open Research, 4, 63.
Baumeister, R. F., & Alquist, J. L. (2023). The pragmatic structure of indeterminacy: Mapping possibilities as context for action. Possibility Studies & Society, 1(1-2), 15–20.
Baumeister, R. F., Maranges, H. M., & Sjåstad, H. (2018). Consciousness of the future as a matrix of maybe: Pragmatic prospection and the simulation of alternative possibilities. Psychology of Consciousness: Theory, Research, and Practice, 5(3), 223–238.
Baumeister, R. F., Vohs, K. D., & Oettingen, G. (2016). Pragmatic prospection: How and why people think about the future. Review of General Psychology, 20(1), 3–16.
Beghetto, R. A. (2023). A new horizon for possibility thinking: A conceptual case study of human × AI collaboration. Possibility Studies & Society, 1(3), 324–341. https://doi.org/10.1177/27538699231160136
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
Binz, M., & Schulz, E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6), e2218523120.
Boyce-Jacino, C., & DeDeo, S. (2021). Cooperation, interaction, search: Computational approaches to the psychology of asking and answering questions. https://psyarxiv.com/5mgn2/
Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., & Amodei, D. (2017). Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 1–17.
Cole, S., & Kvavilashvili, L. (2021). Spontaneous and deliberate future thinking: A dual process account. Psychological Research, 85, 464–479.
Corazza, G. E. (2023). Beyond the adjacent possible: On the irreducibility of human creativity to biology and physics. Possibility Studies & Society, 1(1-2), 37–45. https://doi.org/10.1177/27538699221145664
Czarnowska, P., Vyas, Y., & Shah, K. (2021). Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics. Transactions of the Association for Computational Linguistics, 9, 1249–1267. https://doi.org/10.1162/tacl_a_00425
Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, 27, 597–600.
Hartmann, J., Schwenzow, J., & Witte, M. (2023). The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv preprint arXiv:2301.01768.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466(7302), 29.
Kvavilashvili, L., & Rummel, J. (2020). On the nature of everyday prospection: A review and theoretical integration of research on mind-wandering, future thinking, and prospective memory. Review of General Psychology, 24(3), 210–237.
Lee, P., Bubeck, S., & Petro, J. (2023). Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. New England Journal of Medicine, 388(13), 1233–1239.
Lee, P., Goldberg, C., & Kohane, I. (2023). The AI revolution in medicine: GPT-4 and beyond. Pearson.
Magnani, L. (2023). Possibilities in an abductive perspective: Creating affordances as cognitive chances. Possibility Studies & Society, 1(1-2), 127–136. https://doi.org/10.1177/27538699221142718
Mills, T., & Phillips, J. (2022). What comes to mind? Samples from relevance-based feature spaces. Proceedings of the Annual Meeting of the Cognitive Science Society, 44, 1–7.
Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI's large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120.
Morris, A., Phillips, J., Huang, K., & Cushman, F. (2021). Generating options and choosing between them depend on distinct forms of value representation. Psychological Science, 32(11), 1731–1746.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744.
Phillips, J., & Cushman, F. (2017). Morality constrains the default representation of what is possible. Proceedings of the National Academy of Sciences, 114(18), 4649–4654.
Phillips, J., & Knobe, J. (2018). The psychological representation of modality. Mind & Language, 33(1), 65–94.
Phillips, J., Morris, A., & Cushman, F. (2019). How we know what not to think. Trends in Cognitive Sciences, 23(12), 1026–1040.
Redshaw, J., & Suddendorf, T. (2016). Children's and apes' preparatory responses to two mutually exclusive possibilities. Current Biology, 26(13), 1758–1762.
Shtulman, A., & Phillips, J. (2018). Differentiating "could" from "should": Developmental changes in modal cognition. Journal of Experimental Child Psychology, 165, 161–182.
Sjåstad, H., & Baumeister, R. F. (2023). Fast optimism, slow realism? Causal evidence for a two-step model of future thinking. Cognition, 236, 105447. https://doi.org/10.1016/j.cognition.2023.105447
Srinivasan, G., Acierno, J., & Phillips, J. (2022). The shape of option generation in open-ended decision problems. Proceedings of the Annual Meeting of the Cognitive Science Society, 44, 3534–3540.
Stevenson, C., Smal, I., Baas, M., Grasman, R., & van der Maas, H. (2022). Putting GPT-3's creativity to the (alternative uses) test. arXiv preprint arXiv:2206.08932.
Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., & Christiano, P. F. (2020). Learning to summarize from human feedback. Advances in Neural Information Processing Systems, 33, 3008–3021.
Talboy, A. N., & Fuller, E. (2023). Challenging the appearance of machine intelligence: Cognitive bias in LLMs. arXiv preprint arXiv:2304.01358.
Wang, K., Variengien, A., Conmy, A., Shlegeris, B., & Steinhardt, J. (2022). Interpretability in the wild: A circuit for indirect object identification in GPT-2 small. arXiv preprint arXiv:2211.00593.
Wang, W., Wei, F., Dong, L., Bao, H., Yang, N., & Zhou, M. (2020). MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems, 33, 5776–5788.
Wold, S., Esbensen, K., & Geladi, P. (1987). Principal component analysis. Chemometrics and Intelligent Laboratory Systems, 2(1-3), 37–52.
Yamakoshi, T., McClelland, J. L., Goldberg, A. E., & Hawkins, R. D. (2023). Causal interventions expose implicit situation models for commonsense language understanding. arXiv preprint arXiv:2306.03882.