RUNNING HEAD: WORLD AFTER COVID

Expert Predictions of Societal Change: Insights from the World after COVID Project

Igor Grossmann 1, Oliver Twardus 1, Michael E. W. Varnum 2, Eranda Jayawickreme 3, John McLevey 1

1 University of Waterloo, Waterloo, ON, N2L 3G1, Canada
2 Arizona State University, Tempe, AZ, 85287
3 Wake Forest University, Winston-Salem, NC, 27109

in press, American Psychologist

GitHub repo: github.com/grossmania/wac

Correspondence to Igor Grossmann, PAS 3047, University of Waterloo, Waterloo, ON N2L 3G1, [email protected].
Table 1
Characteristics of the Expert Sample in the World after COVID Project

Characteristic                                                n     %
Country
  United States                                              34    59.6
  Australia                                                   4     7.0
  Canada                                                      4     7.0
  United Kingdom                                              4     7.0
  Hong Kong                                                   3     5.3
  Germany                                                     2     3.5
  Israel                                                      1     1.8
  Japan                                                       1     1.8
  Russia                                                      1     1.8
  South Korea                                                 1     1.8
  Spain                                                       1     1.8
  Switzerland                                                 1     1.8
Career stage
  Assistant Professor                                         1     1.8
  Associate Professor                                         8    14.0
  Full Professor                                             43    75.4
  Emeritus/Retired                                            5     8.8
Gender
  Male                                                       37    65.0
  Female                                                     20    35.0
Fellowship
  American Academy of Arts and Sciences                       8    14.0
  American Association for the Advancement of Science         3     5.3
  German National Academy of Sciences Leopoldina              2     3.5
  Academy of the Social Sciences in Australia                 1     1.8
  Chatham House – Royal Institute of International Affairs    1     1.8
  National Academy of Education                               1     1.8
Field
  Disaster & risk management                                  2     3.5
  Behavioral science / business & leadership, clinical
  psychology, computer science, consumer behavior,
  developmental psychology, history, health psychology,
  moral psychology, neuropsychiatry, personality,
  political science, psychobiology, sociology            1 each    1.8 each

Note: We classified experts into broad categories (e.g., social psychology) if we could not determine a specific sub-field (e.g., moral psychology).
Mixed method analyses
In devising our analytical procedure, we were inspired by the Delphi method (Rowe and Wright, 2001) and the Expert Elicitation Procedure (Morgan, 2014), two common ways to systematically evaluate expert judgments. These methods employ interviews following a structured protocol in which experts are unaware of the responses of other experts, as were the interviewees in the World after COVID project. That said, given the limited time our interviewees could commit during the pandemic, the interviews in the World after COVID project diverge from these approaches in that they do not feature the iterative, discussion-based element common to these methods.
Table 2
Preamble and questions posed to participants.

Preamble: My colleagues and I are interested in psychological and social change within a few years after the pandemic (e.g., political changes, changes in attitudes or behavior toward certain groups, changes in mental health). We are also interested in the wisdom people will need to master the pandemic -- i.e., attitudes, behaviors, or general strategies people can use to successfully navigate the challenges ahead. The specific set of questions I am asking each participant in this project is below:

Q1: If you were to predict the domain or aspect of social life where we might observe the most significant positive societal and/or psychological change in response to the pandemic, what would it be?

Q2: What kind of wisdom will people need to capitalize on for the positive change you refer to above?

Q3: If you were to predict the domain or aspect of social life where we might observe the most significant negative societal and/or psychological change in response to the pandemic, what would it be?

Q4: What kind of wisdom will people need to master to overcome this major negative societal change after the pandemic?

Q5: What one piece of wisdom do you think it is important to give people now to help them make it through the pandemic?
In the initial step, the first author along with a research assistant reviewed interview transcripts to identify unique themes in response to each question, such that each theme (i) was present at least twice across interviews and (ii) showed minimal semantic overlap. Next, three independent raters, one of whom was not familiar with the identity of the interviewees, categorized responses to each of the five questions from the 57 narratives for the prevalence of these themes. Interrater reliability was good (Cohen's κ = .75), with disagreements resolved via group discussion with the first author. When additional non-reducible themes were identified during this discussion phase, independent raters categorized all statements for the presence of these newly identified themes. In the end, we identified twenty themes for predictions concerning positive societal changes, twenty-two themes for predictions concerning negative societal changes, and thirty-three themes for recommendations. The full codebook with representative responses is available in the public repository on GitHub (github.com/grossmania/wac) and all technical details are in the on-line supplement.
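The interrater agreement above (Cohen's κ = .75) corrects raw percent agreement for agreement expected by chance. As a minimal sketch of that computation, assuming binary present/absent theme codes from two raters (the rater vectors below are illustrative stand-ins, not project data):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes (e.g., theme present = 1 / absent = 0)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items coded identically
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal code distribution
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical binary theme codes across ten responses
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(round(cohen_kappa(a, b), 2))  # → 0.62
```

Here raw agreement is 80%, but κ discounts the agreement the raters' marginal base rates would produce by chance.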
Estimating uncertainty, sentiment, and outside view
Inspired by expert elicitation procedures (Morgan, 2014), we quantified the degree of uncertainty in expert predictions. Because the World after COVID project did not include questionnaire-based metrics of uncertainty (which are in turn subject to response bias), we employed a combination of mixed method and natural-language processing techniques. Additionally, we quantified whether experts presented their predictions in an individualized fashion, without consideration of contextual information or base rates for the trends they forecast (a tendency sometimes referred to as an "inside view"; Kahneman, 2011), or whether they explicitly acknowledged contextual information or commented on base rates when making their predictions (the "outside view" framing; Kahneman, 2011).
For the mixed method analyses, we focused on the prediction-related questions for positive and negative consequences. Because dialectical framing involved responses across positive and negative predictions, each participant received only one code (yes = 1 / no = 0). For the outside view, we separately categorized positive and negative predictions, which allowed us to compare the likelihood of invoking outside-view information across questions. More information on coding is available on GitHub (github.com/grossmania/wac) and in the on-line supplement. Two research assistants who had not coded the prior categories independently categorized responses on both categories. Inter-rater reliability (Cohen's κ) was medium to large (rules of thumb for Cohen's
living in a moment), and preparedness for future pandemics. Three of these themes showed connections across clusters: appreciation of nature and living in the moment, social connectedness and gratitude, and work-life balance and reconsideration of habits. Negative predictions in Panel B clustered into five groups concerning irrational dystopia (e.g., prejudice & authoritarianism, irrationality & misperception of the world), inequality and family strains, mistrust, economic hardships, and ill-being. Again, three themes showed connections across clusters: loneliness and estrangement/alienation, intimate relations and economic hardships, and pessimism/despair. Notably, both positive and negative clusters included at least one social/societal theme, whereas no individual-focused themes formed a cluster on their own.
Figure 1. Predictions for the most significant positive and negative consequences in response to the pandemic. % = percentage of experts mentioning a given theme. Scores to the right of the dashed line = predictions for the most significant positive change.
Table 3
Most frequently mentioned themes: Definitions and example quotes.

Solidarity
  Definition: Prioritizing the needs of vulnerable groups (e.g., elders), thinking more "we" and less "me" within your group, taking care of each other, doing what is best for all. Overcoming how we compartmentalize people as trustworthy or not. Interpersonal cohesion, trust, "we-ness", sense that we are in this together.
  Quote: "We need to shift their emphasis to the common good, doing what will help not just ourselves or our tribe, but what's good for all."
  - Robert Sternberg, cognitive psychology & human development

Political engagement & structural change
  Definition: Proactive efforts to bring social change and raising awareness of societal shortcomings; engaging in policy making/civic government, activism, structural change, to bring about social change (incl. support for telehealth, police accountability).
  Quote: "My hope is that we will end up seeing a lot of positive change from this, in terms of policy, in terms of civic and government involvement."
  - Jean Twenge, social psychology

Social connectedness
  Definition: Paying attention to the importance of interpersonal relationships, maintaining relationships.
  Quote: "Compared with before the pandemic, people during this pandemic feel more connected with the community."
  - Melody Chao, social psychology

Prejudice & racism
  Definition: Developing biased opinions of others, xenophobia, discrimination on the basis of race.
  Quote: "There is an opportunity, unfortunately, for increased racism in certain cases. Consequently, stereotypes, prejudices against other group members might arise."
  - Lisa Feldman Barrett, emotions & neuroscience

Political conflict
  Definition: Geopolitical tensions, intergroup conflict, internal political violence, political polarization.
  Quote: "We are all asking what comes next? What does this mean for the way our society is structured? Those are difficult questions. And who poses the answers to those questions? Is it the case that autocratic leaders might answer those questions for us, might give us an easy answer where the answer is essentially, it's because of them, it's because of the out group. We know that people can be really easily pulled into these kinds of intergroup conflicts."
  - Leaf Van Boven, social & political psychology
Heterogeneity of Predictions
Next, we quantified whether scores across themes were reducible to common
components. For positive predictions, results of the MCA suggested seven dimensions, ranging
from 11.6% to 6.4% of variance—less than a 40% reduction in information compared to original
themes in Figure 1. Each of these dimensions chiefly represented 1-2 items (when examining
WORLD AFTER COVID 17
squared cosine ≥ .4), and the first dimension explained only 11.6 % of the variance. For negative
predictions, results suggested nine dimensions, ranging from 12% to 5.2% of variance—less than
a 30% reduction in information compared to original 22 themes in Figure 1. Again, each of these
dimensions chiefly represented 1-2 items (when examining squared cosine ≥ .4), and the first
dimension explained only 12 % of the variance. Given that each theme by itself explains 5% of
the variance for predictions, these results (along with cluster analyses in Figure 2) suggest a
negligible degree of reducibility of themes to common overarching categories.
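The dimensionality results above come from multiple correspondence analysis; the exact pipeline is in the GitHub repository. As an illustrative sketch only, MCA's per-dimension shares of variance (inertia) can be obtained from a singular value decomposition of the standardized indicator matrix of binary theme codes (the random `codes` matrix below is a stand-in, not project data):

```python
import numpy as np

def mca_variance(binary_codes):
    """Percentage of total inertia ("variance") per MCA dimension, via
    correspondence analysis of the full indicator matrix of 0/1 theme codes."""
    X = np.asarray(binary_codes, dtype=float)
    Z = np.hstack([X, 1 - X])           # indicator columns: theme present, theme absent
    Z = Z[:, Z.sum(axis=0) > 0]         # drop empty categories to avoid zero masses
    P = Z / Z.sum()
    r = P.sum(axis=1, keepdims=True)    # row masses
    c = P.sum(axis=0, keepdims=True)    # column masses
    S = (P - r @ c) / np.sqrt(r @ c)    # standardized residuals
    sv = np.linalg.svd(S, compute_uv=False)
    eig = sv ** 2                       # principal inertias per dimension
    return 100 * eig / eig.sum()

# Hypothetical 0/1 codes: 8 experts x 4 themes (illustrative only)
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(8, 4))
print(np.round(mca_variance(codes)[:3], 1))  # inertia shares of the leading dimensions
```

A flat profile of shares across many dimensions, as reported above, indicates that no small set of components summarizes the themes.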
Figure 2. Network model of positive predictions (Panel A) and negative predictions (Panel B).
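The clustering summarized in Figure 2 rests on how often themes co-occur within the same expert's response; the actual network model is in the repository. The sketch below illustrates the general idea with a simple co-occurrence graph whose connected components serve as crude clusters (theme labels and the `min_count` threshold are hypothetical):

```python
from itertools import combinations
from collections import defaultdict

def cooccurrence_clusters(expert_themes, min_count=2):
    """Cluster themes that co-occur across expert responses: link two themes
    when at least `min_count` experts mention both, then return the connected
    components of the resulting graph."""
    weight = defaultdict(int)
    for themes in expert_themes:
        for a, b in combinations(sorted(set(themes)), 2):
            weight[(a, b)] += 1
    # Adjacency restricted to sufficiently frequent co-occurrences
    adj = defaultdict(set)
    for (a, b), w in weight.items():
        if w >= min_count:
            adj[a].add(b)
            adj[b].add(a)
    # Connected components via depth-first search
    seen, clusters = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur] - seen)
        clusters.append(comp)
    return clusters

# Hypothetical coded responses: each inner list is one expert's themes
responses = [
    ["solidarity", "social connectedness", "gratitude"],
    ["solidarity", "social connectedness"],
    ["mistrust", "political conflict"],
    ["mistrust", "political conflict", "gratitude"],
]
print([sorted(c) for c in cooccurrence_clusters(responses)])
# → [['social connectedness', 'solidarity'], ['mistrust', 'political conflict']]
```

Published network models typically use weighted edges and community-detection algorithms rather than raw connected components; this sketch only conveys the co-occurrence logic.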
Cross-Temporal Variability in Themes
Heterogeneity of predictions was also evident over time. Even though predictions concerned the period several years post-pandemic, different themes emerged in World after COVID interviews conducted in summer, early fall, and late fall. The types of themes that were prevalent appeared to correspond to the salience of current events. Figure 3 and Figure S3 in the on-line supplement show cross-temporal variability in the prevalence of top themes. Experts were more likely to predict greater will for political and structural societal change, as well as greater prejudice and racism, after the death of George Floyd in Minneapolis in May 2020 and the subsequent anti-police brutality protests. As new lockdowns were imposed in the US and other countries in the fall of 2020, topics such as social inequality became more dominant in expert reflections. Finally, in the week preceding and following the highly polarized US Presidential election in early November, topics related to solidarity and political conflict were more prevalent1. Because experts were explicitly instructed to provide forecasts for a timeframe of several years after the pandemic, this event-contingent fluctuation in forecasts may reflect focalism in expert predictions (Wilson et al., 2000) or Bayesian information updating based on pressing societal events of the moment (Griffiths and Tenenbaum, 2006). Regardless of the cause(s), the cross-temporal variability in the most frequent themes further highlights the heterogeneity in predictions.

1 Cross-temporal variability in positive predictions was very similar for experts from the US and elsewhere. For negative predictions, the cross-temporal variability appeared US-specific (see Figure S4). Overall, location did not significantly qualify the cross-temporal trends in positive or negative predictions.
Figure 3. Distribution of top predictions for the most significant positive and negative consequences in response to the pandemic across summer/fall 2020. % = relative percentage of experts mentioning a given theme during a given period. Responses are binned by month to ensure a comparable number of participants in each temporal segment. As Figure S3 in the on-line supplement shows, similar results appeared when performing a generalized linear mixed model analysis (binomial distribution: theme mentioned/not mentioned) with interviewees' codes as random factors to account for interdependence. Whereas positive themes significantly varied over time, χ2(df = 2) = 9.62, p = .008, negative themes did not, χ2(df = 2) = 3.71, p = .156.
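The χ² tests in the caption come from generalized linear mixed models with random effects for interviewees. A simpler, fixed-effects approximation of the same question, whether a theme's prevalence differs across monthly bins, is a Pearson chi-square on a mentioned/not-mentioned by month contingency table (the counts below are hypothetical):

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for an r x c
    contingency table (e.g., theme mentioned / not mentioned, by month)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            expected = r * c / total      # expected count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    dof = (len(rows) - 1) * (len(cols) - 1)
    return stat, dof

# Hypothetical counts: mentions / non-mentions of one theme in three monthly bins
table = [[12, 8, 3],    # theme mentioned
         [7, 11, 16]]   # theme not mentioned
stat, dof = chi_square(table)
print(round(stat, 2), dof)  # → 8.89 2
```

Unlike the mixed model used in the paper, this version ignores the nesting of codes within interviewees, so it should be read only as an illustration of the contingency-table logic.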
Dialecticism, Outside View, and Affective Sentiment in Predictions
Participants' forecasts showed a fair degree of dialecticism. Twenty-seven experts (48% of the sample) explicitly framed their predictions in a dialectical fashion, emphasizing that the forecasts are multidetermined and that the same issue can have both positive and negative consequences2. Six experts prefaced their interviews by expressing the uncertain and multidetermined nature of their predictions from the start. Further analyses showed that less than a third of interviewees used outside-view framing (also see Figure S5 in the online supplement); i.e., most experts did not discuss information from possibly relevant events (pandemics or other global crises), reflecting on the current pandemic as an idiosyncratic event that does not fit into existing theoretical models of societal change.
To account for a possible demand effect when characterizing dialecticism in predictions, we turned to computational sentiment analyses. Figure 4 shows the distribution of sentence-level compound sentiment scores per expert response. The basic logic here is simple: a dialectical response is more likely to entail both positive and negative sentiments (Spencer-Rodgers, Williams, et al., 2010). For prediction questions, we expect responses to tend towards the sentiment implied by the question (e.g., more positive sentiment if asked to predict positive outcomes). Responses that tend towards the sentiment opposite to that implied by the question, or where variability around the mean crosses 0, would suggest greater dialecticism. Conversely, scores with less variability that are aligned with the implied sentiment would suggest that the expert is not dialectical. See the online supplement for more details on the procedure.
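As an illustration of this logic only (the actual sentence-level sentiment model is documented in the supplement), the sketch below scores sentences with a toy valence lexicon, then flags a response as dialectical when the mean sentence sentiment plus or minus one standard error crosses zero. The lexicon and example text are hypothetical:

```python
import math
import re

# Toy valence lexicon (hypothetical); the actual analysis used a full sentiment model
LEXICON = {"hope": 1.0, "connected": 0.8, "good": 0.7, "growth": 0.6,
           "fear": -0.9, "loss": -0.8, "conflict": -0.7, "mistrust": -0.8}

def sentence_score(sentence):
    """Mean valence of lexicon words in a sentence, in [-1, 1]; 0 if none match."""
    words = re.findall(r"[a-z]+", sentence.lower())
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def response_summary(response):
    """Per-response mean sentence sentiment, its standard error, and a
    dialecticism flag: does mean +/- 1 SE cross zero?"""
    scores = [sentence_score(s)
              for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]
    n = len(scores)
    mean = sum(scores) / n
    var = sum((x - mean) ** 2 for x in scores) / (n - 1) if n > 1 else 0.0
    se = math.sqrt(var / n) if n > 1 else 0.0
    dialectical = (mean - se) < 0 < (mean + se)
    return mean, se, dialectical

text = "I hope we come out more connected. But I also fear mistrust and conflict."
mean, se, flag = response_summary(text)
print(round(mean, 2), round(se, 2), flag)  # → 0.05 0.85 True
```

The example response mixes a strongly positive and a strongly negative sentence, so its mean sentiment sits near zero with wide variability, exactly the signature of dialecticism described above.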
As Figure 4 shows, many responses revealed sentiments that align with the question asked. On average, participants showed a tendency towards positive sentiment when asked to predict potential positive outcomes and a tendency towards negative sentiment when asked to predict potential negative outcomes. However, and consistent with our qualitative analyses, in both cases many experts showed a dialectical tendency in their responses (negative sentiment for positive predictions and positive sentiment for negative predictions), and an even greater number showed sentiment ambivalence (tending towards 0). These tendencies were pronounced for positive predictions, suggesting dialecticism is not simply a demand effect.

2 Generalized linear mixed model analyses showed no significant association between dialectical framing and prevalence of specific positive, χ2(df = 19) = 5.52, ns, or negative themes, χ2(df = 21) = 13.74, ns.
Figure 4. Summaries of compound sentiment scores for sentences nested within individual expert responses to prediction questions. The x-axis shows a normalized, weighted composite score ranging from -1 to 1, where -1 stands for a strong negative focus and 1 stands for a strong positive focus. Scores for each expert are on the y-axis. The red points are means and the gray bars are standard errors. Responses that tend towards the sentiment opposite to that implied by the question, or where variability around the mean crosses 0, suggest greater dialecticism. Conversely, scores with less variability that are aligned with the implied sentiment would suggest that the expert is not dialectical. Overall sentiment (not nested within expert responses) is shown with black stars, with gray bars to indicate two standard errors.
Experts’ Recommendations
As Figure 5 shows, 33 distinct themes emerged from analyses of experts' advice regarding the type of wisdom needed for the post-COVID world. These themes ranged from greater clarity in governmental communication, to critical thinking, to bipartisan cooperation. As with predictions, supplementary MCA analyses indicated only a negligible degree of data reduction. Beyond the heterogeneity in recommendations, two observations were noteworthy.
First, social connectedness, political/structural change, and solidarity—three common
themes for most significant positive change (Figure 1)—also appeared as most frequent
recommendations (see Figure 5). The central role of social connectedness among
recommendations for navigating the COVID-19 pandemic dovetails with results from several
empirical studies outlining the protective role of social connectedness for mental wellbeing (e.g.,
Metts et al., 2021), including studies showing how connectedness may modulate effects of major
disasters for one’s cognitive (Hikichi et al., 2017) and mental health (Bryant et al., 2017). These
themes represent experts' hopes for strategies that people and societies could enact during the pandemic and sustain in the world thereafter.
Second, network analyses showed two broader domains across clusters. As Table 4 and
supplementary Figure S6 show, expert recommendations emphasized moral concerns (e.g.,
prosociality, cooperation, pursuit of truth) and meta-cognitive fundamentals (e.g.,
acknowledgement of uncertainty, intellectual humility, perspective-taking, self-distancing,
balance of long- and short-term interests). Though we did not anticipate this convergence, it
appears consistent with the recently developed Common Wisdom Model (Grossmann,
Weststrate, et al., 2020), a point we will return to in the discussion.
Figure 5. Recommendations for sustaining positive changes, mitigating negative changes, and weathering the pandemic. % = percentage of experts mentioning a given theme.
Table 4
Moral and meta-cognitive themes were central to expert recommendations for the world after COVID.

Prosocial behavior
  Definition: Helping, sharing, donating, co-operating, supporting others, and volunteering; increased tolerance of others; being considerate of others.
  Quote: "I am hoping that we are going to see a lot more community engagement from individuals after this pandemic. Early on in the process, we saw a lot of people joining mutual aid groups or other kinds of voluntary associations, people helping their neighbors offering to go and get food or medications for them, people actually started to speak much more to those people who live around them."
  - Daisy Fancourt, psychobiology

Bipartisanship and political cooperation
  Definition: Process by which political groups and/or nations work together toward a shared goal.
  Quote: "One place that we might see benefit, or something positive coming from this very difficult time is an appreciation, greater appreciation, of the need for collaboration at transnational levels."
  - James Gross, emotions & emotion regulation

Critical thinking
  Definition: Asking questions and thinking critically about information in a way that avoids bias; unbiased reasoning.
  Quote: "I think some people, and a lot of kids, are being exposed to how you think about information. How do you think about data? How do you think about evidence, and more to the point, how do you do it in a scientific way? What does science look like?"
  - David Dunning, judgment & decision-making

Acknowledge uncertainty / flexibility
  Definition: Accepting uncertainty, adapting quickly to new circumstances, being flexible.
  Quote: "There are elements of my life and the world that are going to be a bit more uncertain right now. And that ability to get used to uncertainty is very difficult to have, to give yourself a little bit of self-compassion and your family members and your friends who are all struggling with this same problem."

Intellectual humility
  Definition: Acknowledging there may be gaps in knowledge and mistakes made, questioning what we know.
  Quote: "How do we approach this situation? What do we know? How could we be better? It's not that we don't know the answer to what are the values we might aspire to, but rather that we're not really aware of them in our day to day life, so the wisdom is really more about becoming more persistently aware of those things that we care about and the way we want to structure our lives."
  - Leaf Van Boven, social & political psychology

Perspective-taking
  Definition: Being appreciative of diverse perspectives, considering the perspective of others.
  Quote: "Realizing that our problems are relatively small compared to what others are enduring might be a first step to motivating us to help change the conditions that have made this pandemic so bad for some. It could also be that having some perspective where we realize that for those of us for whom this is true that our problems are small compared to people in other groups."
  - Valerie Tiberius, philosophy of virtues & wellbeing

Self-distancing
  Definition: Taking a step back and looking at ourselves within the social context. A form of mindfulness exercise allowing you to put yourself/your issues into a broader perspective (e.g., of the issues concerning the whole planet).
  Quote: "Not social distancing, as we have seen, but self-distancing, and I think it is learning that you must learn that some of the ideals or values that define us, maybe aren't that essential to who we are. And once we got to self-distancing, we may be better able to interact with people on the other side, whatever the other side looks like."
  - Edouard Machery, epistemology & philosophy of science

Long-term orientation
  Definition: Not losing sight of the effects of what we do on future generations, balancing long-term and short-term perspectives with regard to goals and outcomes.
  Quote: "You have to look at long term interests, as well as short term ones. And that has proven hard for people to do. You have to look and say, what effect will this have on not only me when I'm older, but on my children and my grandchildren and other people's kids as well."
  - Robert Sternberg, cognitive psychology & human development
Discussion
What will the world after COVID look like? Will friction increase in relationships as couples spend more time together, or will partners draw closer? Will society become more politically polarized, or more unified? Will people become more generous, or less willing to share? Inspired by expert elicitation procedures, we sought to capture common themes and quantify uncertainty in expectations for a post-COVID world from a number of leading psychologists and other experts in human behavior, creating a time capsule of their predictions and recommendations.
Common themes
When asked to predict the most significant positive changes after the pandemic, experts in the World after COVID project converged on the idea that people will strengthen their social ties and will reevaluate existing societal structures and personal habits. When asked what the most significant negative changes might be, these experts highlighted social issues concerning mistrust, political conflict, and alienation. These predictions often fell within the broad domain of social or societal relations. Why might this be? Though speculative, it is possible that many experts viewed the pandemic not only as leading to negative societal developments but also as providing opportunities to disrupt patterns of socio-economic inequality (Piff et al., 2018), political conflict (Greenaway and Cruwys, 2019), and other troubling societal trends, many of which had been on the rise before the COVID-19 pandemic.
Behavioral science experts also characterized the most significant psychological changes following the pandemic in terms of social issues rather than individual-centered issues (e.g., changes in habits or mental health). The relative salience of social themes in predictions was not due to experts' own fields of research (see Figure S7 in the on-line supplement), raising the question of whether experts generally construe cultural change in terms of interpersonal dynamics rather than individual-centered processes, or whether the dominance of social issues is unique to the pandemic context. It is noteworthy that the focus on social issues in the World after COVID project dovetails with other consolidated efforts to reflect on possible effects of the pandemic (Rosenfeld et al., 2021), which justify focusing on social issues by pointing to the inherently social nature of the pandemic itself (i.e., disease transmission via human-human contact) and the corresponding mitigation efforts (e.g., social distancing and stay-at-home orders).
Turning to expert recommendations for weathering the pandemic and fostering a better future, we also observed several common themes. Specifically, our experts converged on themes emphasizing prosociality and meta-cognition (e.g., balancing short-term and long-term goals, critical thinking, perspective-taking, self-distancing, or acknowledgment of uncertainty). As briefly noted earlier, it is noteworthy that these broad categories fit with the core tenets of the recently emerged Common Wisdom Model in the psychological and cognitive sciences (CWM; Grossmann et al., 2020), a construct often invoked in the context of making meaning and navigating challenging social issues (e.g., Glück et al., 2019; Grossmann, 2017; Grossmann and Brienza, 2018). Central to the CWM are moral aspirations (such as cooperation; Curry et al., 2019) and meta-cognition (e.g., Flavell, 1979). Notably, most experts in the World after COVID project had never heard of psychological wisdom scholarship, and supplementary analyses showed that familiarity with the CWM did not qualify expert responses. It is possible that in the context of major societal challenges such as the COVID-19 pandemic, meta-cognition helps people make better sense of the various constraints and obstacles involved in dealing with the challenge at hand.
Diversity of predictions
Even though a few common themes for predictions and recommendations emerged, most reflections on the world after COVID were distinct and showed little overlap between experts. Critically, in contrast to other consolidated efforts that focus solely on the negative consequences of the pandemic (e.g., Rosenfeld et al., 2021; Van Bavel et al., 2020), the present work highlights the fact that behavioral science experts envision a range of positive consequences, a possible antidote to focalism (Wilson and Gilbert, 2003) and negativity biases (Rozin and Royzman, 2001) in expert forecasting (Mellers et al., 2015).
Our results showed that predictions for the post-pandemic world were highly variable: most themes did not converge across experts and varied over time. Not only did we observe little consensus, but close to half of our sample also communicated their predictions in a dialectical fashion. In other words, they were uncertain or ambivalent about the consequences of the pandemic. At the same time, less than a third of experts applied an "outside view" (Kahneman, 2011) when communicating their predictions; i.e., most experts did not provide context or acknowledge base-rate trends.
This heterogeneity in expert predictions may appear somewhat obvious in hindsight. Our sample included experts from different countries and different disciplines, so perhaps convergence among their predictions was bound to be relatively low. For instance, the diversity of our sample (including scholars from multiple societies, experiencing the pandemic in different ways) might have led to this heterogeneity in predictions. However, our supplementary analyses suggest that predictions and recommendations do not show greater convergence when the sample is restricted to US-based experts. Furthermore, we did not observe greater convergence when restricting the sample to social psychologists (as opposed to other psychologists or experts from other disciplines; see Figure S7 in the on-line supplement).
Without the benefit of hindsight, however, we naively expected greater convergence in predictions for several reasons. First, the structured interviews in the World after COVID project provided standardized prompts. Second, expert predictions were constrained to the societal and psychological changes perceived by our participants as most significant, and the prompts included a standard description and set of examples of possible changes (see Table 2). Third, the grounded approach taken in the mixed method analyses aimed to consolidate overlapping themes into overarching categories; the sheer number of initial themes emerging from the mixed method analysis was a testament to the heterogeneity in expert predictions. Finally, given the number of high-profile joint statements about the possible psychological and societal consequences of the pandemic published in top outlets in the field (e.g., Rosenfeld et al., 2021; Seitz et al., 2020; Van Bavel et al., 2020), we anticipated at least some convergence in scientific opinions. Yet what we observed in the present work suggests there is more variation in psychologists' expectations for the post-pandemic times than one might have suspected.
The Role of Epistemic Humility in Predicting Post-COVID Outcomes
The COVID-19 pandemic is destined to be a topic studied by historians, epidemiologists, and other academics for years to come. It will also likely continue to receive a fair amount of attention from psychologists and others in adjacent disciplines. Benefitting from hindsight, psychological scientists and those in related fields will likely attempt to devise models that could help prevent or mitigate the negative social and health impacts of future pandemics. Initial evidence appears to support this projection: since the start of the pandemic, research related to COVID-19 in the social sciences has grown exponentially. On PsyArXiv alone, there were 13,567 preprints mentioning COVID-19 as of January 15, 2021.
Taking a different approach, in the present project we sought to create a record of how
leading psychologists and others in related disciplines have been thinking about the pandemic
and its effects in situ. Without a crystal ball, the experts in the present project had to rely on their
intuitions and prior theoretical knowledge to predict possible consequences of the pandemic. The
“time capsule” approach of the World after COVID project will allow scholars to see in a broad
sense the extent to which these visions do or do not come to pass. We hope among other things
that this rich dataset will provide a useful tool for fostering improved predictions for key societal
and psychological trends and for nurturing intellectual humility (Mellers et al., 2019). The data is
publicly available (https://github.com/grossmania/wac) and we invite other scholars interested in
research synthesis, post-COVID reflections, and science communication to peruse it for their
needs.
The diversity of opinions and the degree of uncertainty expressed by respondents in
World after COVID project suggest that attempts by social and behavioral scientists to provide
single-voice guidelines for pandemic crisis mitigation (e.g., Seitz et al., 2020; Van Bavel et al.,
2020) may need to be taken with some caution (IJzerman et al., 2020). Papers with 30 or more
co-authors attempting to present a cohesive narrative make it difficult to accommodate dissenting
opinions, except in footnotes (e.g., Rosenfeld et al., 2021), and may create a somewhat
misleading impression of uniformity as the authors must reach consensus. Psychologists should be
especially mindful of this variability when communicating with the public, the media, and
policymakers (Recchia et al., 2021). And these stakeholders in turn should consider seeking out
multiple independent opinions from such experts when seeking guidance.
Improving Prediction in Psychological Science
Although the present project did not aim to assess the accuracy of experts’ judgment of
societal change (cf. Hutcherson et al., 2021), it is nonetheless worth considering how such
accuracy might be enhanced. For example, Grzanka and Cole (2021) have recently suggested
several ways to increase epistemic diversity in psychological science including greater
participation by under-represented groups, and greater attention to researchers’ blind spots and
assumptions. Greater epistemic diversity may provide an effective way to foster more accurate
predictions about societal events (Mellers et al., 2015).
Another way to enhance accuracy involves predictive modeling of scientific phenomena
that one aims to explain (Yarkoni and Westfall, 2017), especially in the context of studying
societal change (e.g., Henrich and Muthukrishna, 2021; Varnum and Grossmann, 2017).
Through this project and other endeavors, including a formal forecasting tournament among
social scientists being run by some of the present authors in parallel (i.e., the Behavioral and
Social Science Forecasting Collaborative; osf.io/6wgbj), we hope to improve prediction in
psychological science, especially regarding important real-world outcomes.
Psychologists can also become better at prediction by recognizing the multi-determined
nature of societal phenomena and by acknowledging uncertainty (Mellers et al., 2015; Recchia et
al., 2021). The good news, from our perspective, is that the present results, and those from
another line of work (Hutcherson et al., 2021) suggest that many psychological scientists tend to
do so already.
Insights from prior forecasting initiatives further suggest that accuracy of expert
predictions can be strengthened by heightening epistemic accountability in the process of making
a prediction. For instance, it is possible that expert opinions will be less biased when asked to
think of a future-oriented activity in a “pre-mortem” fashion (e.g., ranking the biggest effects of
the pandemic from an imagined 2030 post-pandemic perspective) rather than in a purely prospective fashion
(Klein, 2007). At the same time, such pre-mortem reflection can foster biases, too (e.g., positive
or negative assumptions about the direction of societal progress). To avoid incorrect
assumptions, pre-mortem based expert groups may in fact benefit from diversity of opinions on
core assumptions about societal change.
On the practical side, psychologists’ expert recommendations from the World after
COVID project may advance the discourse on how to successfully adapt to the COVID-19
pandemic and its aftermath. Currently, much of that discourse is dominated by broad
generalizations (e.g., Wicke & Bolognesi, 2020). However, the heterogeneity, cross-temporal
variability, and uncertainty present in reflections of leading psychologists (and other experts in
human behavior) suggest that a more nuanced view might be appropriate. By acknowledging the
uncertainty and heterogeneity in predictions (also see Recchia et al., 2021), we can be better
prepared to flexibly navigate the societal challenges ahead.
References
Ackerman, J. M., Tybur, J. M., and Blackwell, A. D. (2021). What Role Does Pathogen-
Avoidance Psychology Play in Pandemics? Trends in Cognitive Sciences, 25(3), 177–186.
https://doi.org/10.1016/j.tics.2020.11.008
Blackwood, L., Livingstone, A. G., and Leach, C. W. (2013). Regarding Societal Change.
Journal of Social and Political Psychology, 1(1), 105–111.
https://doi.org/10.5964/jspp.v1i1.282
Bryant, R. A., Gallagher, H. C., Gibbs, L., Pattison, P., MacDougall, C., Harms, L., Block, K.,
Baker, E., Sinnott, V., Ireton, G., Richardson, J., Forbes, D., and Lusher, D. (2017). Mental
Health and Social Networks After Disaster. American Journal of Psychiatry, 174(3), 277–
Yaniv, I. (2011). Group diversity and decision quality: Amplification and attenuation of the
framing effect. International Journal of Forecasting, 27(1), 41–49.
https://doi.org/10.1016/j.ijforecast.2010.05.009
Yarkoni, T., and Westfall, J. (2017). Choosing prediction over explanation in psychology:
Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122.
https://doi.org/10.1177/1745691617693393
Zaki, J. (2020, April). Fighting coronavirus feels like a war. That might bring us together.
Washington Post.
Online Supplement
For
Expert Predictions of Societal Change: Insights from the World after COVID Project
Igor Grossmann1, Oliver Twardus1, Michael E. W. Varnum2, Eranda Jayawickreme3, John
McLevey1
1 University of Waterloo, Waterloo, ON, N2L 3G1, Canada
2 Arizona State University, Tempe, AZ, 85287
3 Wake Forest University, Winston-Salem, NC, 27109
Supplementary methods
Ancillary results
Supplementary Figures 1-6
Supplementary methods
Notes on recruitment and interview procedure
Prospective participants received an email invitation to partake in the multimedia project. As outlined in the verbatim text of the email (see Appendix S1), the invitation clarified the focus on how the “current pandemic will alter our societies,” as well as “their advice regarding what kind of wisdom will be needed to make the world a better place after the pandemic is over.” Prospective participants were further notified that the interviews would be put together for a release in the public domain (www.WorldafterCovid.info), and the invitation included a link to the preamble and five questions (see Table 2 in the main text). Prior to the interview, the World after COVID team further ensured that participants had an opportunity to revise/restate any parts of the interview.
The preamble provided a range of examples of psychological change in society to ensure broad coverage of ideas (e.g., politics, inter-group attitudes, mental health). We aimed to provide a general set of ideas about the types of changes social and behavioral scientists could talk about, without constraining the domains to a particular sub-field of psychology. Further, the preamble offered a definition of the type of advice/wisdom sought from the experts – i.e., attitudes, behaviors, or general strategies people can use to successfully navigate the challenges (see Table 2 in the main text). This way, recommendations focused on a range of behavioral and psychological responses to expected changes ahead. Notably, to standardize responses, the preamble instructions asked experts to focus on the same time frame for their predictions, zeroing in on the time a “few years after the pandemic.”
After the preamble, we presented participants with five questions. Experts received these questions along with the preamble ahead of their scheduled interview to ensure sufficient time to prepare their responses. A dramatic societal event like a pandemic can provoke a range of reactions. To ensure participants did not project negative experiences they had over the pandemic onto their post-pandemic predictions (Wilson and Gilbert, 2003), all interviews started with a question about positive consequences of the pandemic, prior to a question about negative consequences. Furthermore, the first prediction question was followed by a question about recommendations. This way, participants could naturally elaborate on their predictions and more easily connect the context of predictions to recommendations. By adding a question about recommendations between the questions about positive and negative predictions, the interviewer also reduced the likelihood of demand effects—i.e., the same theme being mentioned in response to the questions about positive and negative consequences merely due to the immediate proximity of these questions in the interview schedule.3
3 It is possible that some participants showed “dialecticism” (mentioning the same theme in response to questions about both positive and negative consequences) merely due to perceived demand. But we hope that receiving questions in advance and having time to reflect on them provided experts with ample opportunity to deliberate on themes they wanted to mention in response to each question, such that any dialecticism is not an artifact of the interview schedule but an intentional product of deliberation. Moreover, this concern is not applicable to the computational sentiment/NLP analyses of the first question, which also showed a substantial degree of affective ambivalence.
Details on Multi-step Cross-validation Method for Quantifying Interviews
First, two scholars reviewed the first set of 30 interviews (June-July) to identify unique themes for each question in an iterative fashion. The guidelines we followed were:
Each theme should be present at least twice across interviews.
Themes should have minimum overlap while still being allowed to show natural dependencies (e.g., “importance of social connections” and “social support”).
In this initial phase, two independent raters, one of whom was blind to the identity of the interviewees, coded statements for the prevalence of pre-determined themes. Initial reliability was good (κ = .68), with disagreements resolved via group discussion with the senior scholar. Following an iterative procedure, any additional themes that were identified but were not covered by the original categories were added to the codebook. We then recoded the transcripts for the presence of the new categories.
In the second stage, after the remaining interviews were completed in September–December 2020, another two coders (one of whom had coded the initial set of 30 interviews) coded the new batch of interviews. Once again, agreement was high (Cohen’s κ = .87), with disagreements resolved in a group discussion. In this stage, coders identified several additional themes, which were again added to the codebook, resulting in another round of re-coding transcripts for the presence of these new themes. This procedure was repeated four times, until we identified the final list of themes for each question. Here, we also included some single-occurrence themes if they were fully distinct and addressed the questions.
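The inter-rater agreement statistic used throughout this supplement (Cohen’s κ) can be computed directly from two coders’ labels. The following minimal Python sketch illustrates the calculation; the function name and the presence/absence codes are ours for illustration, not data from the project:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if coders labeled independently
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical presence (1) / absence (0) codes for one theme
# across ten interviews, from two independent coders
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # prints 0.78
```

Note that raw percent agreement here is 90%, yet κ is lower (.78) because some agreement is expected by chance given the marginal frequencies.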
Coding open-ended interviews is inherently subjective. Even in the presence of high reliability between coders (as in our case), the validity of the coding may be compromised due to various additional factors (e.g., a particular sentiment in a response, agreement with the opinion raised in the interview response). To address this issue, we introduced a novel top-down cross-validation approach:
A new, unbiased person blind to the identity of interviewees reviews the codes and respective transcripts, with the task of identifying one key sentence [or key phrases, in case the theme is not captured by a single sentence] from each person’s response to represent their code, guided by the codebook definitions.
Two further individuals, including the first author of the project, review these key sentences and flag any categories that require adjustment.
The idea behind this cross-validation approach is that this top-down, “bird’s eye” view allows for greater clarity when matching codes and themes compared to the classical grounded analysis. By matching each code to a core statement/phrase(s), one introduces extra rigor when evaluating each code. Indeed, in the process of such cross-validation, several minor inconsistencies were spotted and corrected prior to conducting subsequent analyses.
Classification of predicted themes into social/societal- vs. individual-centered
The broad open-ended questions asked interviewees to discuss what positive and negative societal and/or psychological changes they expected to occur in response to the pandemic. As a result, interviewee interpretations of the questions could vary significantly, which presents another possible explanation for the heterogeneity in expert predictions. Although widely understood, concepts such as societal change are rarely defined (Blackwood et al., 2013). To better understand how experts characterized post-COVID psychological change in society, we categorized responses as mentioning societal/social change, individual-centered change, or both (mixed responses). We defined each category as follows:
Social/societal-based change was defined as change focused on social values, social structure, interpersonal relations, and organizations.
Individual-centered change was defined as psychological change focused on individual habits, behaviours, and mental health.
We focused once again on the forecast-related questions (Q1: positive consequences / Q3: negative consequences) to identify what kind of change interviewees considered in their answers. The coding procedure involved reviewing the previously identified narrative themes for these questions and categorizing each theme, based on its definition, as individual-centered, societal, or both (only individual-centered change = 1, only societal change = 2, both types of change = 3). Two research assistants (only one of whom had engaged in coding prior categories) independently categorized the themes. Inter-rater reliability was high, Cohen’s κ = 0.80 (rules of thumb for Cohen’s κ suggest .5 as a medium effect size). Disagreements were resolved in a discussion with the senior author and another co-author on the project. Expert responses to Q1 and Q3 were then scored based on the values assigned to the themes they contained (only individual-centered change considered = 1, only societal change considered = 2, both types of change considered = 3), to determine which types of change participants considered in their answers (see Table S1 for results).
Coding for dialectical reasoning and outsider viewpoint considerations
Prior research on forecasting suggests that certain cognitive processes may be more conducive to accurate forecasting of geopolitical events (Mellers et al., 2015) and of emotions toward close others in social conflict situations (Grossmann et al., 2021). Specifically, research suggests that superior forecasters tend to show a greater likelihood of embracing:
more complex, dialectical reasoning (aspects of which are central to the notions of integrative complexity (Tetlock, 1985) and wisdom (Grossmann, Weststrate, et al., 2020)) – i.e., recognizing uncertainty and qualifying forecasts by expressing the multi-determined nature of predictions and considering both positive and negative aspects of the same forecast;
an outsider viewpoint and consideration of base-rate information (rather than a focus on the focal event alone; Kahneman & Tversky, 1982).
Past work has employed a range of human-based coding strategies to characterize epistemic concerns that share a family resemblance with the notion of dialectical reasoning – i.e., consideration/acceptance of seeming contradictions (for reviews, see Grossmann, 2018; Peng and Nisbett, 1999). In particular, there is a substantive body of scholarship on coding open-ended reflections and justifications for the presence of integrative complexity – a measure of complex thinking that includes evaluative differentiation (i.e., consideration of a number of distinct and contradictory dimensions of a problem) and conceptual integration (i.e., development of complex connections among differentiated characteristics) (Suedfeld et al., 1992; Suedfeld and Tetlock, 1977; Tetlock, 1985). Here, the evaluative differentiation component is closely connected to the idea of dialectical thinking and has also been linked to superior accuracy in forecasting tournaments (Mellers et al., 2015). Indeed, some work on integrative complexity goes even further in the direction of an overlap with dialectical reasoning (Conway et al., 2008; Tetlock and Tyler, 1996), introducing a distinction between dialectical complexity (grappling with cognitive tensions between seemingly contradictory perspectives) and elaborative complexity (reducing tensions by generating reinforcing reasons for taking strong stands). An example of such dialectical reasoning/dialectical complexity is when a person considers the same outcome of a global pandemic as having both positive and negative consequences.
Typically, integrative complexity as well as features of dialectical reasoning are assessed by well-trained human coders (Tetlock et al., 2014), though newer approaches using automated counts of specific words that may indicate cognitive complexity exist (Conway et al., 2014). As Tetlock and colleagues (2014) pointed out, there is a trade-off between accuracy and efficiency/reliability. On the one hand, human coders can be noisy and biased. On the other hand, automated algorithms can miss nuances in written responses that human coders could pick up. Ultimately, in the spirit of a multi-method approach to valid inferences in psychological research (Campbell and Fiske, 1959), we included both human-based and automated indices of relevant characteristics (see the greater discussion of automated indices below).
Notably, the interview schedule in the World after COVID project allowed us to elegantly capture dialectical reasoning: Because experts provided responses to questions about the most significant positive and negative consequences, human-based coders could categorize responses as invoking dialectical thinking/evaluative differentiation if participants explicitly acknowledged the multi-determined nature of pandemic consequences or if they explicitly mentioned the same outcome as having both positive and negative consequences. This approach is similar to how human-based coding was employed to code dialectical thinking (e.g., Grossmann et al., 2010) or integrative complexity in the past (e.g., Tetlock, 1985), with the exception of a more constrained focus on responses to specific questions rather than the free-format rationales for one’s judgment employed in prior research. The narrower format of open-ended questions allows both for greater precision in establishing reliability across coders and for straightforward automated analysis of dialecticism beyond “bag-of-words” approaches—i.e., scoring of texts based on the percentage of words from pre-defined word dictionaries—used in prior scholarship (e.g., Conway et al., 2014).
We also used human-based coders to characterize experts’ likelihood of employing an outsider viewpoint in their reflections on the prediction questions (Questions 1 and 3). Specifically, two new coders independently categorized participants’ responses into two groups. Independent raters scored narratives of experts who mentioned base-rate information (e.g., considering how prior pandemics or other societal crises unfolded; empirical research on the association between a forecasted trend and related factors) as reflecting an “outsider view.” Raters scored narratives of experts who exclusively focused on the focal event alone and did not consider examples from the past, general base rates, or empirical examples as reflecting an “insider view.” We note that this procedure is novel and has not been validated in prior research. Moreover, because coding concerned the explicit mentioning of base rates and contextual information, it is possible that experts engaged in “outside view” reasoning in preparation for their interviews—a common limitation of narrative methods. Consequently, we treated any scoring of insider vs. outsider viewpoints as exploratory. As with dialectical thinking, we compared the frequencies and types of forecasts among these two groups.
Automated computational analyses of dialecticism
To augment human-based coding of dialecticism, we also built on natural language processing (NLP) techniques to evaluate the sentiment orientation of each coded narrative. Here, the split elicitation format of separate, directed questions (positive consequences in Question 1 versus negative consequences in Question 3) allowed us to uniquely evaluate emotional dialecticism by examining relative valence (from minus 1, standing for a purely negative sentiment, to plus 1, standing for a purely positive sentiment). Emotional dialecticism (Grossmann et al., 2016; Spencer-Rodgers, Peng, et al., 2010) refers to the co-occurrence of positive and negative sentiment and can be considered a sentiment-based equivalent of dialectical reasoning. Because our questions were directional in nature (i.e., we expected the question about positive consequences to elicit positive sentiment and the question about negative consequences to elicit negative sentiment), by examining relative deviation from the sentiment expected by the question, we can conceptualize dialecticism in response to each question. For the first question (positive consequences), greater deviation from plus 1 would show a more ambivalent response, and narratives with an overall sentiment at or below zero would show a mixed affect response. For the third question (negative consequences), greater deviation from minus 1 would show a more ambivalent response, and narratives with an overall sentiment at or above zero would indicate a mixed affect response.
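Under this logic, classifying a response reduces to comparing its mean compound sentiment against the pole implied by the question. A minimal Python sketch of that scoring rule (the function name, variable names, and sentence-level scores are ours for illustration, not project data or code):

```python
def dialecticism_scores(compound_scores, expected_pole):
    """Summarize sentence-level compound sentiment for one response.

    compound_scores: sentence-level scores in [-1, 1]
    expected_pole: +1 for the positive-consequences question (Q1),
                   -1 for the negative-consequences question (Q3)
    Returns (mean sentiment,
             ambivalence = distance of the mean from the expected pole,
             mixed-affect flag = mean at or beyond zero, against the pole).
    """
    mean = sum(compound_scores) / len(compound_scores)
    ambivalence = abs(expected_pole - mean)  # 0 = fully pole-consistent
    mixed = (mean <= 0) if expected_pole > 0 else (mean >= 0)
    return mean, ambivalence, mixed

# Hypothetical Q3 (negative consequences) response whose sentences
# mix negative and positive sentiment
mean, amb, mixed = dialecticism_scores([-0.6, 0.3, 0.2, -0.1],
                                       expected_pole=-1)
print(round(mean, 2), round(amb, 2), mixed)  # prints -0.05 0.95 False
```

In this toy case the mean hovers just below zero: strongly ambivalent relative to the expected pole of minus 1, though not formally a mixed-affect response under the at-or-above-zero criterion.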
As outlined in the main text, we first segmented each expert response to each of the questions into sentences and then conducted a rule-based sentiment analysis (Hutto and Gilbert, 2014) at the sentence level. The rule-based model for sentiment analysis we used consists of a list of lexical features (along with their sentiment intensity measures), specifically attuned to sentiment in narrative contexts. It combines lexical features with rules embodying grammatical and syntactical conventions for expressing and emphasizing sentiment intensity. Prior validation work indicates that this rule-based sentiment model outperforms human raters and is more generalizable than other sentiment techniques, including the LIWC (Pennebaker et al., 2007), ANEW, the General Inquirer, SentiWordNet, and machine-learning-oriented techniques relying on Naive Bayes, Maximum Entropy, and Support Vector Machine (SVM) algorithms. Further details about this technique are presented by Hutto and Gilbert (2014). Using this technique, we summarized the overall sentiment of each sentence using a normalized, weighted composite score (“compound sentiment”) ranging from minus 1 (a strong negative focus) to plus 1 (a strong positive focus).
The present approach to measuring dialecticism via NLP techniques shares some overlap with existing automated approaches to assessing dialectical complexity in open-ended responses (e.g., Conway et al., 2014): both techniques rely on specific linguistic features to quantify overall sentiment in open-ended narratives. However, a few differences are noteworthy. Whereas prior automated approaches to quantifying integrative complexity largely rely on a pre-defined dictionary of words, assigning each sentence a percentage score based on the presence of words from a given category, our split elicitation procedure allowed us to focus on the overall sentiment and logically infer dialecticism based on the nature of the targeted (positive/negative) questions, as outlined above. Because we examined overall sentiment, we could rely on more robust estimators, such as the rule-based sentiment analysis discussed above.
In addition to the sentiment analysis conducted separately for each expert, we also performed a parallel sentiment analysis across experts. Here, we used the same rule-based algorithm for characterizing sentiment (Hutto & Gilbert, 2014), but in post-processing obtained composite scores representing the distribution of each expert’s sentiment relative to the other experts.
Details on homogeneity of prediction analyses
We examined homogeneity in predicted themes for each question in two ways. First, we examined the frequencies of themes mentioned. To control for the number of participants, frequency scores were divided by the number of experts, with Figures 1 and 5 in the main text showing percentages of experts mentioning a given theme. By examining variability in the relative frequencies of themes, we provide an overall ranking and the relative salience of themes across expert reflections on possible societal and psychological changes ahead.
Second, we examined co-occurrences of themes: Are experts mentioning theme A also more likely to mention theme B? Specifically, we sought to use co-occurrences to explore whether themes follow a systematic pattern and can be reduced to a smaller number of underlying dimensions. Here, we aimed to use a data reduction procedure suitable for categorical data. The logic behind using a data reduction technique is to explore whether the vast number of themes mentioned by experts can be reduced to a smaller number of broad categories, as well as to quantify the degree of reducibility of the data to such categories. Because the data are categorical (theme present vs. not present), we relied on Multiple Correspondence Analysis (MCA)—a data reduction technique for nominal categorical data that represents data as points in a low-dimensional Euclidean space. MCA quantifies nominal data by assigning numerical values to the cases and categories so that objects within the same category are close together and objects in different categories are far apart. Each object is as close as possible to the category points of the categories that apply to it. Intuitively, MCA can be viewed as a nominal-level counterpart to principal component analysis (PCA), or as an extension of ordinary correspondence analysis to a larger number of categories.
Typically, in MCA one carries out a correspondence analysis on a design matrix with cases as rows and categories of variables as columns. In our analyses, we represented categories for each theme from the content analyses as columns (mentioned = 1 / not mentioned = 0) and participants as rows. Here, MCA explores co-occurrences of themes across participants, identifying a low-dimensional representation of such co-occurrences. As with PCA, one can examine the percentage of variance in the data accounted for by the low-dimensional representation. Additionally, one can examine the quality of the low-dimensional representation. Because the goal of correspondence analysis is to reproduce the distances between points in a low-dimensional space, one can examine the extracted dimensions in terms of their fit to the data, similarly to the interpretation of communality in factor analysis. Low quality suggests that the chosen number of dimensions does not represent the data well. To assess quality, we focus on the cos²—the squared correlation with each dimension, which can be interpreted as the correlation of the respective point with the representative dimension.
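The computational core of such an analysis can be sketched as a correspondence analysis of the participant-by-theme matrix via a singular value decomposition. The numpy-based sketch below is a simplified stand-in, not the project’s code (which used standard statistical software), and it applies CA to the “mentioned” indicator columns only rather than the full two-column-per-theme MCA coding; the toy matrix and names are ours:

```python
import numpy as np

def correspondence_analysis(Z):
    """Simple correspondence analysis of a binary indicator matrix Z
    (participants x themes). Returns row principal coordinates and the
    proportion of total inertia ("variance") per dimension."""
    Z = np.asarray(Z, dtype=float)
    P = Z / Z.sum()                     # correspondence matrix
    r = P.sum(axis=1)                   # row masses
    c = P.sum(axis=0)                   # column masses
    # Standardized residuals from the independence model
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, d, Vt = np.linalg.svd(S, full_matrices=False)
    inertia = d**2
    rows = (U * d) / np.sqrt(r)[:, None]  # row principal coordinates
    return rows, inertia / inertia.sum()

# Toy data: 6 participants x 4 themes (mentioned = 1 / not mentioned = 0)
Z = [[1, 0, 1, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 1, 0, 1],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
rows, explained = correspondence_analysis(Z)
print(explained.round(2))  # proportion of inertia per dimension
```

Reading off `explained` parallels examining the percentage of variance accounted for by each extracted dimension, as described above.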
Cluster analyses
We further probed the co-occurrences of themes via hierarchical cluster analyses using the igraph package in R (Csardi and Nepusz, 2006). We used the same data from the categorical content analyses, with participants in rows and themes (mentioned = 1 / not mentioned = 0) in individual columns. To ensure the identified clusters of themes were not due to differences in narrative size, we first computed partial Spearman correlations between themes, with the number of words (tokens) in each response as a covariate, and used the resulting partial correlation matrices for further cluster analyses. We further pruned correlations to avoid overplotting and to eliminate negligible dependencies; based on estimates for individual difference research (e.g., Funder and Ozer, 2019), we set correlations < .17 to zero. We used the igraph package to convert these values into a dissimilarity matrix and to perform hierarchical cluster analyses.
In our hierarchical cluster analysis, each object was initially assigned to its own cluster, and the algorithm then proceeded iteratively, at each stage joining the two most similar clusters, continuing until there was just a single cluster. At each stage, distances between clusters were recomputed by the Lance-Williams dissimilarity update formula according to the complete linkage method (the default in the igraph package). We subsequently plotted the clusters in different colors on top of the network graphs, with the width of network edges corresponding to the top ten largest correlations in the network.
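The pipeline just described (partial Spearman correlation controlling for narrative length → pruning at .17 → dissimilarity → complete-linkage clustering) can be sketched as follows. The original analyses used the igraph package in R; this stand-in uses numpy/scipy instead, and the toy data, threshold handling, and names are ours:

```python
import numpy as np
from scipy.stats import rankdata
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def partial_spearman_matrix(themes, covariate):
    """Pairwise Spearman correlations between theme columns,
    partialling out one covariate (e.g., narrative length)."""
    R = np.column_stack([rankdata(col) for col in themes.T])
    z = rankdata(covariate)
    z_c = z - z.mean()
    Rc = R - R.mean(axis=0)
    beta = Rc.T @ z_c / (z_c @ z_c)        # OLS slope per theme
    resid = Rc - np.outer(z_c, beta)       # covariate removed
    return np.corrcoef(resid.T)

# Toy data: 8 participants x 3 themes, plus narrative lengths (tokens)
themes = np.array([[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 0],
                   [1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]], float)
length = np.array([120, 90, 200, 150, 110, 95, 300, 80])

C = partial_spearman_matrix(themes, length)
C[np.abs(C) < .17] = 0                     # prune negligible dependencies
np.fill_diagonal(C, 1.0)
D = 1 - C                                  # correlations -> dissimilarity
np.fill_diagonal(D, 0.0)
link = linkage(squareform(D, checks=False), method="complete")
clusters = fcluster(link, t=2, criterion="maxclust")
print(clusters)
```

With real data, `clusters` would group themes that tend to be mentioned together, net of how long each expert’s response was.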
Classification by field of expertise
Due to the broad range of subject-matter expertise among participants and their differing degrees of familiarity with the literature on social and cultural psychology, we investigated whether participants who were more familiar with social and cultural psychological concepts differed in their responses from those who lacked this familiarity. We categorized interviewees into two groups: (1) socio-cultural experts – i.e., participants whose field of study was social psychology, cultural psychology, or moral psychology, or who had published work that overlapped with these domains; (2) non-socio-cultural experts – i.e., participants who were experts in a domain outside of social, moral, or cultural psychology and who had not published work related to these domains. Domains of expertise were determined by examining experts’ biographies and publication records.
Designating familiarity with wisdom scholarship
Given the broad lay definitions of wisdom, we explored whether experts who were familiar with the wisdom-related literature differed in their responses from those who lacked such familiarity. We sorted interviewees into two groups: (1) familiar – i.e., participants who had either published empirical or theoretical work on the topic of wisdom or who had published a commentary on a Psychological Inquiry target issue on wisdom (Grossmann et al., 2020); and (2) unfamiliar – i.e., participants who did not have any wisdom-related publications and had not worked on any wisdom-related projects. At least ten interviewees explicitly said in the correspondence preceding the interviews that they were unfamiliar with wisdom scholarship, in which case we referred them to the broad definition of wisdom as recommendations for “attitudes, behaviors, or general strategies people can use to successfully navigate the challenges ahead.”
Ancillary results
How many predicted themes were social/societal- vs. individual-centered?
Table S1
Number of societal- and individual-centered predictions

Type of Change      Positive Consequences    Negative Consequences
                    n       %                n       %
Individual-based    4       7.0              5       8.8
Societal            23      40.4             28      49.1
Both                30      52.6             24      42.1
Because more than half (54%) of the participants in the World after COVID project were experts in social and cultural psychology or related fields (see Figure S1), we also examined how predictions varied by experts’ fields, comparing participants with expertise in social and cultural psychology (SC) to participants with expertise in other domains (e.g., history, mental health, psychology of aging). As Figure S7 indicates, the distribution of themes among socio-cultural and non-socio-cultural experts looked quite similar for both positive consequences, χ2 (df = 19) = 0.72, ns., and negative consequences, χ2 (df = 21) = 0.71, ns. The only noticeable non-significant trend was a slightly greater tendency among socio-cultural experts to predict mistrust and estrangement/alienation as negative consequences.
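The group comparison rests on a Pearson chi-square statistic computed from an expert-group-by-theme contingency table. A minimal sketch with a hypothetical function name and hypothetical counts (the reported tests were run on the actual theme frequencies, with df = (rows − 1)(columns − 1)):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table
    (rows = expert groups, columns = predicted themes)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# hypothetical counts: two expert groups x three themes
counts = [[10, 6, 4],
          [8, 7, 5]]
stat = chi_square_stat(counts)
```

When the two groups mention themes in identical proportions, the statistic is exactly zero; larger values indicate diverging theme distributions.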
Sentiment analyses across expert responses
Figure S2 provides a view of tendencies across expert responses, rather than within responses, by comparing the empirical cumulative distributions of compound sentiment scores for each question. The y-axis marks the 25th, 50th, and 75th percentiles. As expected, the average sentiment for each question tends toward the sentiment implied by the question. At the same time, we see evidence of dialectical framing. Whereas 25% of responses to the question about negative consequences are strongly negative (compound sentiment scores < -0.25), a roughly equal proportion of responses (25%) to the same question show positive sentiment (compound sentiment scores > 0). Furthermore, another 10% of the sample shows average sentiment scores just below zero. Both crossing zero and hovering around zero suggest a dialectical framing of negative predictions. The general pattern also holds for the question about positive consequences, albeit somewhat less pronounced: on average, these responses are close to neutral, and about 15% cross over 0 in the direction opposite to that implied by the question. Notably, questions concerning recommended wisdom all lean in the positive direction. One should note that results from question-specific sentiment analyses are on a different level of analysis than results from analyses concerning invocation of the same theme for questions about positive and negative consequences. Together, these analyses corroborate the observation of substantial dialecticism and uncertainty in experts’ predictions.
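The percentile readings reported above can be recovered directly from the empirical cumulative distribution of compound scores. A minimal sketch with a hypothetical helper and hypothetical VADER-style compound scores (the actual scores come from the coded interview responses):

```python
def ecdf_fraction(scores, threshold, strict=True):
    """Fraction of compound sentiment scores below (or at) a threshold,
    i.e., the empirical cumulative distribution evaluated at that point."""
    hits = sum((s < threshold) if strict else (s <= threshold)
               for s in scores)
    return hits / len(scores)

# hypothetical compound scores for the negative-consequences question;
# VADER compound scores range from -1 (strongly negative) to 1 (positive)
neg = [-0.8, -0.6, -0.4, -0.3, -0.1, -0.05, 0.1, 0.2, 0.4, 0.6]

share_strongly_negative = ecdf_fraction(neg, -0.25)            # scores < -0.25
share_positive = 1 - ecdf_fraction(neg, 0, strict=False)       # scores > 0
```

Comparing these two shares within one question is what reveals the dialectical framing: a question can pull responses strongly in one direction while a sizable fraction of responses still crosses zero the other way.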
How do predictions relate to recommended wisdom?
A key question concerns the type of advice/wisdom scientists recommended for working through the positive and negative consequences of the pandemic. To address this question, we can examine dependencies between themes mentioned for a particular type of outcome and subsequent mentions of themes needed for this outcome. Given that some participants mentioned the same theme both as a consequence of the pandemic and as the advice/wisdom needed to sustain positive consequences (e.g., critical thinking, living in the moment, political/structural change) or to mitigate negative consequences, it would be trivial to see relationships between these themes. In comparison, it would be more interesting to detect associations across themes concerning predictions (Questions 1 and 3) and wisdom (Questions 2, 4, and 5).
One way to address this question is to examine the interactive heatmap, visualized in the on-line supplement to the World after COVID project (https://grossmania.github.io/wac/wac_analyses.html#relationship-of-forecasts-given-advice).
For positive consequences, we saw a set of non-trivial correlations (r > .3):
maintenance of greater science interest in the future was aligned with the recommendation to promote critical thinking.
greater social connectedness and optimism/positivity were aligned with the recommendation to improve communication.
greater health & wellbeing was aligned with the recommendations to improve work-life balance, to focus more on living in the moment, and to heighten the ability to compromise/balance diverse interests.
greater resilience was aligned with the recommendation to focus more on sympathy & compassion.
learning from the pandemic in the future was aligned with the recommendation to promote political cooperation.
greater solidarity was aligned with the recommendation to heighten awareness of shared humanity.
greater transition to new technologies and reconsideration of habits were aligned with the recommendation to foster personal resilience.
strongest relationship: greater resilience, embracing new technologies, and reconsideration of habits were aligned with the recommendation to acknowledge uncertainty.
For negative consequences, we saw a set of non-trivial correlations (r > .3):
combating estrangement/alienation, loneliness, and decline in wellbeing was aligned with the recommendation to increase social connectedness;
combating economic hardship and a rise in despair was aligned with the recommendation to foster social support;
combating rising social inequality was aligned with the recommendation to promote solidarity;
combating decline in autobiographic memory, estrangement/alienation, and well-being was aligned with the recommendation to heighten awareness of the context for one’s life experiences;
combating irrationality was aligned with the recommendations to live in the moment/mindfulness, gratitude, and the meta-cognitive strategy of self-distancing;
combating low trust in science was aligned with the recommendation to engage in the meta-cognitive strategy of self-distancing;
combating political conflict was aligned with the recommendation to heighten appreciation of the concept of shared humanity;
strongest relationships: combating problems in intimate relations, educational inequality, and child development issues were aligned with the recommendation to heighten one’s ability to balance/reach a compromise across diverse interests.
A different, and perhaps more intuitive, way to examine the associations between themes mentioned for predictions and wisdom is to examine connections between “edges” formed in the network of questions concerning predictions and the recommended wisdom. Once again, interactive visualizations of these edge-bundling plots are available in the on-line supplement to the World after COVID project (https://grossmania.github.io/wac/wac_analyses.html#relationship-of-forecasts-given-advice). As for the network analyses in the main text, we pruned smaller correlations (r < .17) prior to analyses. By inspecting all wisdom themes, one can examine which of them speak to a broader range of predicted changes. For instance, the wisdom themes of acknowledgement of uncertainty, solidarity, and self-distancing had the largest number of connecting points to predictions of positive change, suggesting that experts in the World after COVID project viewed these wisdom themes as most effective in sustaining the expected positive consequences of the pandemic overall. In a similar vein, the wisdom themes of self-distancing, perspective-taking, patience, gratitude, and the will for political/structural change were most likely to occur in response to a range of expected negative consequences of the pandemic.
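The pruning and degree-counting logic can be sketched as follows. The function name, theme pairs, and correlation values below are hypothetical (the actual edge weights come from the partial Spearman correlation matrices):

```python
def prune_and_degree(correlations, threshold=0.17):
    """Drop edges with |r| below the threshold and count, for each wisdom
    theme, how many prediction themes it remains connected to."""
    degree = {}
    for (wisdom_theme, prediction_theme), r in correlations.items():
        if abs(r) >= threshold:
            degree[wisdom_theme] = degree.get(wisdom_theme, 0) + 1
    return degree

# hypothetical (wisdom theme, prediction theme) correlations
corrs = {
    ("acknowledge uncertainty", "greater resilience"): 0.41,
    ("acknowledge uncertainty", "embrace new tech"): 0.33,
    ("self-distancing", "combating irrationality"): 0.28,
    ("patience", "economic hardship"): 0.12,   # pruned (r < .17)
}
prune_and_degree(corrs)
# -> {'acknowledge uncertainty': 2, 'self-distancing': 1}
```

Wisdom themes with the highest remaining degree are the ones described above as speaking to the broadest range of predicted changes.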
Appendix S1.
Invitation template for experts.
Dear XXXX,
As you are a world-renowned expert in [domain], I am hoping to get about 10-15 minutes of your time for an innovative video project designed to help individuals and societies navigate more gracefully through our current moment.
Speculation abounds about how the current pandemic will alter our societies. Journalists, politicians, pundits, and regular people are making myriad predictions about what the “new normal” will look like. Largely missing from this conversation are the voices of social and psychological scientists. To this end, I am conducting a series of 5-min video interviews with leading social and psychological scientists in which I ask them to share their thoughts on what effects COVID-19 will have on society. I also ask them to share their advice regarding what kind of wisdom will be needed to make the world a better place after the pandemic is over. My colleagues and I will subsequently edit these interviews and share them on an interactive website designed to showcase scientists’ perspectives on what a possible post-pandemic world may look like.
I very much hope you would be interested in contributing your thoughts on this topic! We can schedule a short 10-15 min video call (Zoom/Skype/other platform of your choice) in which I will ask you to respond to 5 standard questions (see here: https://bit.ly/FuturePandemic). I envision responses to each question being captured in 2-3 sentences. Alternatively, you could pre-record your responses to these questions at a higher resolution and send them to me directly.
Now, more than ever, the world needs to hear from social scientists. Would you have 10-15 minutes for a short video interview on these questions sometime during the next 2-3 weeks? We can schedule the interview at whatever time is most convenient for you.
Figure S1. Distribution of fields of expertise among interviewees. Size of each term represents its relative frequency in the sample, ranging from 20 (social psychology) to 1 (Risk Governance and Sustainability). Experts were classified into broad categories (e.g., social psychology) if no specific sub-field (e.g., relationship science, moral psychology) could be determined.
Figure S2. Empirical cumulative distributions of compound sentiment scores across expert responses. The x-axis scores range from -1 to 1, where -1 stands for a strong negative focus and 1 represents a strong positive focus. The y-axis marks the 25th, 50th, and 75th percentiles.
Figure S3. Likelihood of mentioning a specific forecasting theme as a function of time of interview and theme type. Separate analyses for the top 3 positive and negative consequences. Estimates are from a generalized linear mixed model (binomial distribution: theme mentioned/not mentioned) with interviewees’ codes as random factors to account for interdependence. The error band represents the 95% confidence interval around the estimate. Whereas positive themes significantly varied over time, χ2(df = 2) = 9.62, p = .008, negative themes did not, χ2(df = 2) = 3.71, p = .156.
Figure S4. Likelihood of mentioning a specific forecasting theme as a function of time of interview, theme type, and location (Non-US vs. US). Separate analyses for the top 3 positive and negative consequences. Estimates are from a generalized linear mixed model (binomial distribution: theme mentioned/not mentioned) with interviewees’ codes as random factors to account for interdependence. The error band represents the 95% confidence interval around the estimate. Location did not significantly qualify the Theme × Time interaction, χ2(df = 2) < 1.85, ps > .396.
Figure S5. Percentage of participants putting the forecasts in a broader context and acknowledging base-rate information rather than focusing on the pandemic alone. 0 = did not acknowledge broader context; 1 = acknowledged broader context for one of the forecasts; 2 = acknowledged broader context for both forecasts.
Figure S6. Network model of recommendations with Fruchterman-Reingold layout: Nodes which share more connections are closer to each other. Visible edges between themes reflect stronger correlations. Similar-colored themes reflect groups from hierarchical cluster analyses. Results are based on partial Spearman correlation matrix of themes (mentioned/not mentioned), controlling for number of words in interviewees’ responses for a given question. Panel A. Recommendations to sustain positive changes. Panel B. Recommendations to prevent negative changes. Panel C. Recommendations to make it through the pandemic.
[Figure S6 network graphs, Panels A–C. Node labels denote recommendation themes, including Social Connectedness, Social Support, Solidarity, Critical Thinking, Evidence-based Judgement, Acknowledge Uncertainty, Live in the Moment, Perspective-taking, Self-distancing, Sympathy & Compassion, Political Cooperation, Political/Structural Change, Balance Diverse Interests, Shared Humanity, Personal Resilience, Embrace New Tech, Improved Communication, Long-term Orientation, Patience, Gratitude, Optimism/Positivity, and Socio-economic Equality.]
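The edge weights in Figure S6 are partial correlations of binary theme indicators, controlling for response length. A minimal sketch of that length adjustment, using Pearson correlations of residuals for simplicity (the reported analyses used Spearman, i.e., rank-based, partial correlations; function names and inputs below are hypothetical):

```python
def residualize(y, x):
    """Residuals of y after a simple linear regression on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return [yi - (my + slope * (xi - mx)) for xi, yi in zip(x, y)]

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = (sum((ai - ma) ** 2 for ai in a)
           * sum((bi - mb) ** 2 for bi in b)) ** 0.5
    return num / den

def partial_corr(theme_a, theme_b, word_count):
    """Correlation between two binary theme indicators (mentioned = 1,
    not mentioned = 0) after removing the linear effect of response length."""
    return pearson(residualize(theme_a, word_count),
                   residualize(theme_b, word_count))
```

Controlling for word count guards against spurious theme co-occurrence driven purely by longer, more wide-ranging responses.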
Figure S7. Predicted themes among experts with expertise in social/cultural psychology (SC) and experts in other fields (non-SC).
Supplementary References
Ackerman, J. M., Tybur, J. M., and Blackwell, A. D. (2021). What Role Does Pathogen-Avoidance Psychology Play in Pandemics? Trends in Cognitive Sciences, 25(3), 177–186. https://doi.org/10.1016/j.tics.2020.11.008
Blackwood, L., Livingstone, A. G., and Leach, C. W. (2013). Regarding Societal Change. Journal of Social and Political Psychology, 1(1), 105–111. https://doi.org/10.5964/jspp.v1i1.282
Bryant, R. A., Gallagher, H. C., Gibbs, L., Pattison, P., MacDougall, C., Harms, L., Block, K., Baker, E., Sinnott, V., Ireton, G., Richardson, J., Forbes, D., and Lusher, D. (2017). Mental Health and Social Networks After Disaster. American Journal of Psychiatry, 174(3), 277–285. https://doi.org/10.1176/appi.ajp.2016.15111403
Buehler, R., Griffin, D. W., and Ross, M. (1994). Exploring the planning fallacy: Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67(3), 366–381.
Campbell, D. T., and Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105.
Charmaz, K., and Belgrave, L. L. (2015). Grounded Theory. In The Blackwell Encyclopedia of Sociology. John Wiley & Sons, Ltd. https://doi.org/10.1002/9781405165518.wbeosg070.pub2
Conway, L. G., Conway, K. R., Gornick, L. J., and Houck, S. C. (2014). Automated Integrative Complexity. Political Psychology, 35(5), 603–624. https://doi.org/10.1111/pops.12021
Conway, L. G., Thoemmes, F., Allison, A. M., Towgood, K. H., Wagner, M. J., Davey, K., Salcido, A., Stovall, A. N., Dodds, D. P., Bongard, K., and Conway, K. R. (2008). Two ways to be complex and why they matter: Implications for attitude strength and lying. Journal of Personality and Social Psychology, 95(5), 1029–1044. https://doi.org/10.1037/a0013336
Csardi, G., and Nepusz, T. (2006). The igraph software package for complex network research. InterJournal, Complex Systems, 1695. https://igraph.org
Curry, O. S., Mullins, D. A., and Whitehouse, H. (2019). Is It Good to Cooperate? Testing the Theory of Morality-as-Cooperation in 60 Societies. Current Anthropology, 60(1), 47–69. https://doi.org/10.1086/701478
FeldmanHall, O., and Shenhav, A. (2019). Resolving uncertainty in a social world. Nature Human Behaviour, 3(5), 426–435. https://doi.org/10.1038/s41562-019-0590-x
Fincher, C. L., and Thornhill, R. (2012). Parasite-stress promotes in-group assortative sociality: The cases of strong family ties and heightened religiosity. Behavioral and Brain Sciences, 35(2), 61–79. https://doi.org/10.1017/S0140525X11000021
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911. https://doi.org/10.1037/0003-066X.34.10.906
Funder, D. C., and Ozer, D. J. (2019). Evaluating Effect Size in Psychological Research: Sense and Nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168. https://doi.org/10.1177/2515245919847202
Glück, J., Sternberg, R. J., and Nusbaum, H. C. (2019). Not Today, and Probably Not Tomorrow Either: Obstacles to Wisdom and How We May Overcome Them. In Applying Wisdom to Contemporary World Problems (pp. 445–464). Springer International Publishing. https://doi.org/10.1007/978-3-030-20287-3_16
Greenaway, K. H., and Cruwys, T. (2019). The source model of group threat: Responding to internal and external threats. American Psychologist, 74(2), 218–231. https://doi.org/10.1037/amp0000321
Griffiths, T. L., and Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17(9), 767–773. https://doi.org/10.1111/j.1467-9280.2006.01780.x
Grossmann, I. (2017). Wisdom in Context. Perspectives on Psychological Science, 12(2), 233–257. https://doi.org/10.1177/1745691616672066
Grossmann, I. (2018). Dialecticism across the Lifespan: Towards a Deeper Understanding of Ontogenetic and Cultural Origins of Dialectical Thinking and Emotional Experience. In J. Spencer-Rogers and K. Peng (Eds.), The psychological and cultural foundations of East Asian cognition: Contradiction, change, and holism (pp. 135–180). Oxford University Press.
Grossmann, I., and Brienza, J. (2018). The Strengths of Wisdom Provide Unique Contributions to Improved Leadership, Sustainability, Inequality, Gross National Happiness, and Civic Discourse in the Face of Contemporary World Problems. Journal of Intelligence, 6(2), 22. https://doi.org/10.3390/jintelligence6020022
Grossmann, I., Dorfman, A., Oakes, H., Santos, H. C., Vohs, K. D., and Scholer, A. A. (2021). Training for Wisdom: The Distanced-Self-Reflection Diary Method. Psychological Science, 32(3), 381–394. https://doi.org/10.1177/0956797620969170
Grossmann, I., Hutcherson, C., Cassidy, C., Jayawickreme, E., Kara-Yakoubian, M., Varnum, M. E. W., and Twardus, O. (2020). World After COVID – The role of wisdom in a changing world. https://worldaftercovid.info/
Grossmann, I., Huynh, A. C., and Ellsworth, P. C. (2016). Emotional complexity: Clarifying definitions and cultural correlates. Journal of Personality and Social Psychology, 111(6), 895–916. https://doi.org/10.1037/pspp0000084
Grossmann, I., Na, J., Varnum, M. E. W., Park, D. C., Kitayama, S., and Nisbett, R. E. (2010). Reasoning about social conflicts improves into old age. Proceedings of the National Academy of Sciences of the United States of America, 107(16), 7246–7250. https://doi.org/10.1073/pnas.1001715107
Grossmann, I., Weststrate, N. M., Ardelt, M., Brienza, J. P., Dong, M., Ferrari, M., Fournier, M. A., Hu, C. S., Nusbaum, H. C., and Vervaeke, J. (2020). The Science of Wisdom in a Polarized World: Knowns and Unknowns. Psychological Inquiry, 31(2), 103–133. https://doi.org/10.1080/1047840X.2020.1750917
Heider, F. (1958). The naive analysis of action. In The psychology of interpersonal relations. John Wiley & Sons.
Henrich, J., and Muthukrishna, M. (2021). The Origins and Psychology of Human Cooperation. Annual Review of Psychology, 72(1), 207–240. https://doi.org/10.1146/annurev-psych-081920-042106
Hikichi, H., Tsuboya, T., Aida, J., Matsuyama, Y., Kondo, K., Subramanian, S. V, and Kawachi, I. (2017). Social capital and cognitive decline in the aftermath of a natural disaster: a natural experiment from the 2011 Great East Japan Earthquake and Tsunami. The Lancet Planetary Health, 1(3), e105–e113. https://doi.org/10.1016/S2542-5196(17)30041-4
Hutcherson, C., Sharpinskyi, K., Varnum, M. E. W., Rotella, A., Wormley, A., Tay, L., and Grossmann, I. (2021). Behavioral scientists and laypeople misestimate societal effects of COVID-19. https://doi.org/10.31234/osf.io/g8f9s
Hutto, C., and Gilbert, E. (2014). VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text. Proceedings of the International AAAI Conference on Web and Social Media. https://ojs.aaai.org/index.php/ICWSM/article/view/14550
IJzerman, H., Lewis, N. A., Przybylski, A. K., Weinstein, N., DeBruine, L., Ritchie, S. J., Vazire, S., Forscher, P. S., Morey, R. D., Ivory, J. D., and Anvari, F. (2020). Use caution when applying behavioural science to policy. Nature Human Behaviour, 4(11), 1092–1094. https://doi.org/10.1038/s41562-020-00990-w
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kahneman, D., and Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237–251.
Kelly, G. A. (1955). The psychology of personal constructs. Volume 1: A theory of personality. WW Norton and Company.
Klein, G. (2007). Performing a Project Premortem. Harvard Business Review, 85(9), 18–19.
Mellers, B., Stone, E., Murray, T., Minster, A., Rohrbaugh, N., Bishop, M., Chen, E., Baker, J., Hou, Y., Horowitz, M., Ungar, L., and Tetlock, P. E. (2015). Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions. Perspectives on Psychological Science, 10(3), 267–281. https://doi.org/10.1177/1745691615577794
Mellers, B., Tetlock, P. E., and Arkes, H. R. (2019). Forecasting tournaments, epistemic humility and attitude depolarization. Cognition, 188, 19–26. https://doi.org/10.1016/j.cognition.2018.10.021
Metts, A., Zinbarg, R., Hammen, C., Mineka, S., and Craske, M. G. (2021). Extraversion and interpersonal support as risk, resource, and protective factors in the prediction of unipolar mood and anxiety disorders. Journal of Abnormal Psychology, 130(1), 47–59. https://doi.org/10.1037/abn0000643
Morgan, M. G. (2014). Use (and abuse) of expert elicitation in support of decision making for public policy. Proceedings of the National Academy of Sciences, 111(20), 7176–7184. https://doi.org/10.1073/pnas.1319946111
Nisbett, R. E., and Borgida, E. (1975). Attribution and the psychology of prediction. Journal of Personality and Social Psychology, 32, 932–943.
Peng, K., and Nisbett, R. E. (1999). Culture, dialectics, and reasoning about contradiction. American Psychologist, 54(9), 741–754.
Pennebaker, J. W., Booth, R. J., and Francis, M. E. (2007). Linguistic inquiry and word count: LIWC [Computer software]. Austin, TX: LIWC.net.
Piff, P. K., Kraus, M. W., and Keltner, D. (2018). Unpacking the Inequality Paradox: The Psychological Roots of Inequality and Social Class. In Advances in Experimental Social Psychology (Vol. 57, pp. 53–124). Academic Press Inc. https://doi.org/10.1016/bs.aesp.2017.10.002
Recchia, G., Freeman, A. L. J., and Spiegelhalter, D. (2021). How well did experts and laypeople forecast the size of the COVID-19 pandemic? PLOS ONE, 16(5), e0250935. https://doi.org/10.1371/journal.pone.0250935
Rosenfeld, D. L., Balcetis, E., Bastian, B., Berkman, E. T., Bosson, J. K., Brannon, T. N., Burrow, A. L., Cameron, C. D., Chen, S., Cook, J. E., Crandall, C., Davidai, S., Dhont, K., Eastwick, P. W., Gaither, S. E., Gangestad, S. W., Gilovich, T., Gray, K., Haines, E. L., … Tomiyama, A. J. (2021). Psychological Science in the Wake of COVID-19: Social, Methodological, and Meta-Scientific Considerations. Perspectives on Psychological Science.
Rowe, G., and Wright, G. (2001). Expert Opinions in Forecasting: The Role of the Delphi Technique. In J. S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners (pp. 125–144). Springer. https://doi.org/10.1007/978-0-306-47630-3_7
Rozin, P., and Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5(4), 296-320.
Schaller, M., and Park, J. H. (2011). The behavioral immune system (and why it matters). Current Directions in Psychological Science. https://doi.org/10.1177/0963721411402596
Seitz, B. M., Aktipis, A., Buss, D. M., Alcock, J., Bloom, P., Gelfand, M., Harris, S., Lieberman, D., Horowitz, B. N., Pinker, S., Wilson, D. S., and Haselton, M. G. (2020). The pandemic exposes human nature: 10 evolutionary insights. Proceedings of the National Academy of Sciences, 117(45), 27767–27776. https://doi.org/10.1073/pnas.2009787117
Spencer-Rodgers, J., Peng, K., and Wang, L. (2010). Dialecticism and the co-occurrence of positive and negative emotions across cultures. Journal of Cross-Cultural Psychology, 41(1), 109–115. https://doi.org/10.1177/0022022109349508
Spencer-Rodgers, J., Williams, M. J., and Peng, K. (2010). Cultural differences in expectations of change and tolerance for contradiction: A decade of empirical research. Personality and Social Psychology Review, 14(3), 296–312. https://doi.org/10.1177/1088868310362982
Suedfeld, P., and Tetlock, P. E. (1977). Integrative Complexity of Communications in International Crises. The Journal of Conflict Resolution, 21(1), 169–184. https://doi.org/10.1177/002200277702100108
Suedfeld, P., Tetlock, P. E., and Streufert, S. (1992). Conceptual/integrative complexity. In C. P. Smith (Ed.), Motivation and personality (pp. 393–400). Cambridge University Press. https://doi.org/10.1017/CBO9780511527937.028
Tadmor, C. T., Tetlock, P. E., and Kaiping Peng. (2009). Acculturation Strategies and IntegrativeComplexity: The Cognitive Implications of Biculturalism. Journal of Cross-Cultural Psychology, 40(1), 105–139. https://doi.org/10.1177/0022022108326279
Tetlock, P. E. (1985). Integrative complexity of American and Soviet foreign policy rhetoric: A time-series analysis. Journal of Personality and Social Psychology, 49(6), 1565–1585. https://doi.org/10.1037/0022-3514.49.6.1565
Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.
Tetlock, P. E., Metz, S. E., Scott, S. E., and Suedfeld, P. (2014). Integrative Complexity Coding Raises Integratively Complex Issues. Political Psychology, 35(5), 625–634. https://doi.org/10.1111/pops.12207
Tetlock, P. E., and Tyler, A. (1996). Churchill’s Cognitive and Rhetorical Style: The Debates over Nazi Intentions and Self-Government for India. Political Psychology, 17(1), 149. https://doi.org/10.2307/3791947
Tversky, A., and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science. https://doi.org/10.1126/science.185.4157.1124
University of Chicago News. (2020). COVID 2025: How the pandemic is changing our world. https://news.uchicago.edu/story/covid-2025-how-pandemic-changing-our-world
Van Bavel, J., Boggio, P. S., Capraro, V., Cichocka, A., Cikara, M., Crockett, M. J., Crum, A. J., Douglas, K. M., Druckman, J. N., Drury, J., Ellemers, N., Finkel, E. J., Gelfand, M., Han, S., Haslam, S. A., Jetten, J., Kitayama, S., Mobbs, M., Napper, L. E., … Willer, R. (2020). Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour, 4(5), 460–471. https://doi.org/10.1038/s41562-020-0884-z
Varnum, M. E. W., and Grossmann, I. (2017). Cultural Change: The How and the Why. Perspectives on Psychological Science, 12(6), 956–972. https://doi.org/10.1177/1745691617699971
Wilson, T. D., and Gilbert, D. T. (2003). Affective forecasting. In M. P. Zanna (Ed.), Advances in Experimental Social Psychology: Vol. 35 (pp. 345–411). Academic Press. https://doi.org/10.1016/s0065-2601(03)01006-2
Wilson, T. D., Wheatley, T., Meyers, J. M., Gilbert, D. T., and Axsom, D. (2000). Focalism: A source of durability bias in affective forecasting. Journal of Personality and Social Psychology, 78(5), 821–836. https://doi.org/10.1037/0022-3514.78.5.821
Yaniv, I. (2011). Group diversity and decision quality: Amplification and attenuation of the framing effect. International Journal of Forecasting, 27(1), 41–49. https://doi.org/10.1016/j.ijforecast.2010.05.009
Yarkoni, T., and Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. https://doi.org/10.1177/1745691617693393
Zaki, J. (2020, April). Fighting coronavirus feels like a war. That might bring us together. Washington Post.