1. Questions
Attitudes and preferences are important for modeling travel behavior (Conway et al. 2020). Because of their correlation with relatively stable behaviors (e.g., mode choice), we hypothesize that transport-related attitudes are predictable over time. While some have found this to be the case (van de Coevering, Maat, and Wee 2021), others have reported instability over time frames ranging from one week to two years (Sunkanapalli, Pendyala, and Kuppam 2000; Adams et al. 2013; Thøgersen 2006).
We examine the stability of individual questions as well as of factors that represent underlying attitudes indicated by multiple questions. We hypothesize that stability will be higher for factors, since factors are less likely to reflect idiosyncratic responses to individual questions.
2. Methods
Our data come from the first two waves of the COVID Future survey, which were administered in April–October 2020 and November 2020–March 2021 (Chauhan et al. 2021), with individual responses recorded three and a half to eleven months apart. About 50% of paired responses were recorded four to six months apart, 35% six to eight months apart, 8% under four months apart, and 6% over eight months apart. We used a convenience sample to gather initial responses. This allowed us to quickly begin analyzing data, although the sample was unrepresentative. To achieve a larger, representative sample, we contacted respondents through survey organizations (Data Axle and Qualtrics) using a quota-sampling method. The 2,682 respondents analyzed here are fairly representative, though older, more educated, more likely to be female, and slightly higher income than the population (Table S1). All respondents were from the United States, with 44 of 50 states and Washington, D.C. represented.[1]
The survey included Likert-scaled indicators of travel-related attitudes ranging from environmentalism to opinion on remote work. For each of the 22 attitudinal statements analyzed, we recorded the mean score for each of the two waves of the survey and the mean absolute difference in score, with a one-point difference indicating a one-point shift on a five-point scale. We also report the percentage of respondents who answered a question identically in both waves of the survey, as well as the percentage of respondents whose answer changed by zero or one points between waves. Finally, we report an intraclass correlation coefficient (ICC) for each question; the ICC measures the similarity of paired observations across the two waves. Following the recommendation of Koo and Li (2016), we calculate a two-way mixed-effects, absolute-agreement, single-rater ICC.
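The two-way mixed-effects, absolute-agreement, single-rater ICC (ICC(A,1) in McGraw and Wong's notation) can be computed directly from its ANOVA decomposition. The function below is an illustrative sketch, not the authors' code; it takes one question's paired Wave 1 and Wave 2 responses:

```python
import numpy as np

def icc_a1(wave1, wave2):
    """Two-way mixed-effects, absolute-agreement, single-rater ICC,
    as recommended by Koo and Li (2016) for test-retest designs."""
    x = np.column_stack([np.asarray(wave1, float), np.asarray(wave2, float)])
    n, k = x.shape  # n respondents, k = 2 waves
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-respondent means
    col_means = x.mean(axis=0)  # per-wave means
    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((x - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    # ICC(A,1) formula (McGraw and Wong 1996)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

Because this is an absolute-agreement ICC, a systematic shift between waves (captured by the column mean square) lowers the coefficient even when the rank order of respondents is preserved.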
Factor analysis is commonly used to identify underlying attitudinal constructs (Conway et al. 2020). We performed an exploratory factor analysis of the attitudinal statements to identify six factors from Wave 1 responses (Table S2).
Using the regression method, we estimated factor scores for both Wave 1 and Wave 2 responses. Factor analysis is generally applied to standardized data (Grice 2001). To make factor scores directly comparable, we “standardized” the Wave 2 responses using the mean and standard deviation of the Wave 1 data. We also standardized the computed factor scores to have mean 0 and standard deviation 1. Because the Wave 1 data were used for standardization, the mean and standard deviation of the Wave 2 factor scores differ slightly from 0 and 1, although they remain close. Given this standardization, a one-point change in a factor score can be interpreted as a one-standard-deviation change.
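A minimal sketch of this scoring pipeline, using synthetic stand-in data (the array sizes, random loadings, and variable names are illustrative assumptions, not the survey's actual indicators or EFA output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for Likert indicator matrices from each wave
wave1_raw = rng.integers(1, 6, size=(200, 6)).astype(float)
wave2_raw = rng.integers(1, 6, size=(200, 6)).astype(float)

# Standardize BOTH waves with Wave 1 moments so scores computed from
# them are directly comparable.
mu = wave1_raw.mean(axis=0)
sd = wave1_raw.std(axis=0, ddof=1)
z1 = (wave1_raw - mu) / sd
z2 = (wave2_raw - mu) / sd  # deliberately re-uses Wave 1 moments

def regression_scores(z, loadings, corr):
    """Thurstone regression-method factor scores: F = Z R^-1 L."""
    return z @ np.linalg.solve(corr, loadings)

# Loadings and the indicator correlation matrix would come from the EFA
# fitted on Wave 1; random placeholders are used here for illustration.
loadings = rng.normal(size=(6, 2))
corr = np.corrcoef(z1, rowvar=False)

f1 = regression_scores(z1, loadings, corr)
f2 = regression_scores(z2, loadings, corr)

# Standardize scores with the Wave 1 score moments; Wave 2 scores then
# have mean/SD only approximately 0/1, as noted in the text.
f_mu, f_sd = f1.mean(axis=0), f1.std(axis=0, ddof=1)
f1 = (f1 - f_mu) / f_sd
f2 = (f2 - f_mu) / f_sd
```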
In line with the methodology of Adams et al. (2013), we calculated an ICC for the Wave 1 and Wave 2 scores for each attitude, as well as the mean score difference and the mean absolute score difference between the two waves. We also calculated the percentage of respondents whose Wave 2 score was within one standard deviation of their Wave 1 score.
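These factor-level summaries are straightforward to compute from paired standardized scores; a hypothetical helper (not the authors' code) might look like:

```python
import numpy as np

def stability_metrics(s1, s2):
    """Per-factor stability summaries used in the text: mean signed
    difference, mean absolute difference, and the share of respondents
    whose Wave 2 score is within one SD of their Wave 1 score (scores
    are standardized, so one SD corresponds to one unit)."""
    d = np.asarray(s2, float) - np.asarray(s1, float)
    return {
        "mean_diff": d.mean(),
        "mean_abs_diff": np.abs(d).mean(),
        "pct_within_1sd": 100.0 * np.mean(np.abs(d) < 1.0),
    }
```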
We replicated all factor-level analyses with the factor analysis based on Wave 2 rather than Wave 1 data and found similar results (Table S3).
3. Findings
Travel-related attitudinal statements demonstrate moderate stability (Table 1). The mean absolute difference in score was less than one for all statements. For most statements, only 50–60% of respondents gave exactly the same answer in both waves, which could suggest that responses are unstable. However, about 90% of respondents selected an answer that was the same or adjacent on the Likert scale (for instance, moving from “strongly disagree” to “somewhat disagree”). These small changes are unlikely to materially affect conclusions, particularly once responses are factor-analyzed.
Koo and Li (2016) suggest ICC cutoffs of 0.5, 0.75, and 0.9 to signify moderate, high, and excellent stability, respectively. By these criteria, all statements demonstrate moderate stability except one, which demonstrates high stability.
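A small hypothetical helper mapping an ICC onto these bands, using this paper's labels (values below 0.5 fall outside all three bands and are labeled "poor" here):

```python
def koo_li_category(icc):
    """Classify an ICC using the Koo and Li (2016) cutoffs of
    0.5, 0.75, and 0.9, with the band labels used in this paper."""
    if icc >= 0.9:
        return "excellent"
    if icc >= 0.75:
        return "high"
    if icc >= 0.5:
        return "moderate"
    return "poor"
```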
Figure 1 illustrates the stability of attitudinal statements with heatmaps for the questions with the highest and lowest ICC values. Darker cells indicate larger numbers of respondents. Both questions show a clustering of respondents along the bottom-left to top-right diagonal, which contains the five cells corresponding to identical answers in both waves of the survey.
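The counts behind such a heatmap can be tabulated directly from paired responses. A sketch with made-up five-point responses (illustrative data, not the survey's):

```python
import numpy as np

def agreement_table(wave1, wave2, levels=5):
    """Cross-tabulate paired Likert responses: cell [i, j] counts
    respondents who answered i+1 in Wave 1 and j+1 in Wave 2, so the
    main diagonal holds identical answers across waves."""
    table = np.zeros((levels, levels), dtype=int)
    for a, b in zip(wave1, wave2):
        table[a - 1, b - 1] += 1
    return table

counts = agreement_table([1, 1, 2, 5, 5], [1, 2, 2, 5, 4])
# Share of respondents on the diagonal (same answer in both waves)
same = np.trace(counts) / counts.sum()
```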
Our analysis shows that factor-analyzed attitudes are fairly stable over time (Table 2). Changes in factor scores are symmetrical around 0, showing no definite trend over time (Figure 2). The mean absolute changes in score are all fractions of a standard deviation. The changes in attitude that did occur were fairly small in magnitude; 79–91% of respondents recorded a change in factor score of less than one standard deviation.
The ICC values for factors also demonstrate stability. Using the guidelines recommended by Koo and Li (2016), two attitudes display high stability and four display moderate stability. Attitudinal factors are generally found to be more stable than statements (Adams et al. 2013), which our analysis corroborates. On average, factors had higher ICCs than questions did.
As an additional examination of factor-level stability, we plotted respondents’ Wave 1 scores against their Wave 2 scores for each factor (Figure 3). These scatter plots cluster around the identity line, indicating that most respondents’ Wave 2 scores are very close to their Wave 1 scores.
Some have found that attitude stability is lower for certain demographic groups (Visser and Krosnick 1998). We repeated our factor-level analysis for subsamples of respondents based on age (under 35, 35-64, 65 and older), gender, education level (with/without a bachelor’s degree), and residential location (urban/suburban/rural[2]). We also compared those who took our survey early in the pandemic (June 2020 or earlier) to those who took it later (July 2020 or later). Finally, we compared those who responded to the two waves of the survey less than 4.5 months apart to those who took longer between waves. While we do find that some groups—notably younger people, residents of rural areas, and those surveyed early in the pandemic—demonstrate lower attitudinal stability, all groups still show at least moderate stability (ICC) for all attitudinal factors (Table 3). We also find that respondents recruited through a Qualtrics opinion panel have higher stability than those recruited through direct email or from a convenience sample, possibly because they are regular survey-takers with developed skills in accurate self-assessment and resistance to survey fatigue.
We conclude that transport-related attitudes are fairly stable over four to eleven months and that, judged by ICCs, factors are more stable than individual questions. Our analyses also suggest that stability varies across attitudes. “Anti-working from home” and “Pro-videoconferencing” are the least stable across the general sample as well as most demographic subgroups, possibly because the pandemic has familiarized people with remote work and virtual communication technologies. Consistent with findings from psychology, increased familiarity may positively or negatively alter respondents’ perceptions (Hatemi 2013; Norton and Frost 2007).
However, respondents’ attitudes about the danger of COVID-19 were the most stable, despite greater familiarity with COVID-19 by the second wave of the survey. This may indicate agreement between newly learned information about COVID-19 and prior beliefs, that concern about COVID-19 reflects a more stable underlying risk-aversion attitude, or a mechanism of attitude change more complex than familiarity simply generating more variable attitudes.
The COVID-19 pandemic has resulted in many people experiencing major life changes. Psychological research suggests that such events are associated with attitude change (Hatemi 2013), and this relationship has been found for travel-related attitudes in particular (Janke and Handy 2019). COVID-19 constitutes a major life event for many, so the stability observed here could be unusually low. In fact, some other reviews of the stability of transport-related attitudes find higher question-level ICCs of 0.66 to 0.77 (Molina-Garcia, Castillo, and Sallis 2010), which suggests greater reliability of attitudinal indicators under more typical circumstances. Even in this atypical time, however, attitudes are quite stable.
Acknowledgements
Conceptualization: LM, MWC, DS; Methodology: LM, MWC, DS, PK; Software: MWC; Validation: LM, MWC; Formal analysis: LM, MWC; Investigation: MWC, DS, SD, KM, ER; Data curation: MWC; Writing - original draft: LM; Writing - review & editing: LM, MWC, DS, PK, SD, A(K)M, SK, RP, ER; Visualization: LM; Project administration: LM; Funding acquisition: DS, SD, KM, RP
[1] The sample did not include any respondents from Alaska, Delaware, Hawaii, Montana, North Dakota, or Wyoming.
[2] Areas considered urban have more than 2,213 households per square mile, and areas considered suburban have more than 102 households per square mile (Kolko 2015).