Editorial by Tarek Mostafa
Survey non-response represents a major challenge for data analysis. It results in smaller samples, lower statistical power, incomplete histories in a longitudinal context and, more worryingly, bias in sample composition. This virtual issue republishes eight articles focusing on issues of survey response.
Five articles explored the impact of survey practices on response, while the remaining three focused on the determinants of response and its consequences for sample representativeness. The eight articles cover longitudinal and cross-sectional surveys and a range of survey modes, including postal, web, and face-to-face surveys.
Díaz de Rada (2005) explored the impact of questionnaire design on response in a postal survey. The author found that the colour of the questionnaire was the only element affecting response; other design features such as size, envelopes and type of paper had no effect. In a similar vein, Kereakoglow, Gelman and Partridge (2013) examined the impact of esthetically enhanced materials on response in a postal survey. The study relied on a randomized controlled trial with a sample of physicians and found that the use of such materials had no significant positive impact on response.
The study by Heerwegh et al. (2005) analysed the impact of personalizing survey email invitations on response and data quality in a web survey. The findings showed that a personalized salutation had a positive and significant impact on response but left data quality largely unaffected. Another contribution, by Sappleton and Lourenço (2015), examined the effect of blank versus non-blank email subject lines on response in web surveys. The authors hypothesized that a blank subject line would induce a sense of curiosity in the recipient. However, the findings showed that email subject lines do not affect response.
Williams et al. (2014) examined response in the context of a two-phase postal survey, in which the first phase consisted of a screener survey and the second of a topical survey. The authors varied the content of the screener questionnaire and found that response rates varied according to the information requested from respondents (first names, etc.). Heerwegh and Loosveldt (2009) focused on the intention to participate in a web survey, relying on the theory of planned behaviour as their framework. More positive behavioural attitudes, more positive subjective norms, a stronger sense of moral obligation and a higher degree of perceived behavioural control were all found to significantly and positively influence the intention to participate in a survey.
The two remaining studies focused on the implications of non-response for sample representativeness. Michie and Marteau (1999) examined response in two different surveys and found that it was lower among ethnic minority groups and respondents with lower education. The study by Plewis (2007) explored non-response in the first two waves of the Millennium Cohort Study and found that differential non-response among disadvantaged and ethnic minority groups exists but is minimal in comparison with the oversampling of these two groups in the first wave of the survey. In this context, the use of non-response weights is unlikely to have a substantial effect above and beyond the sampling weights.
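The logic behind the weighting adjustments discussed above can be sketched briefly. In a standard class-based non-response adjustment, responders' design weights (the inverse of their selection probabilities) are inflated by the inverse of the estimated response rate within their weighting class. The sketch below uses purely illustrative data and class definitions; it is not the procedure of any of the studies discussed here.

```python
# Minimal sketch of a class-based non-response weighting adjustment.
# All units, weights, and the single weighting class variable are
# illustrative assumptions, not data from the studies above.

units = [
    {"design_weight": 2.0, "disadvantaged": True,  "responded": True},
    {"design_weight": 2.0, "disadvantaged": True,  "responded": False},
    {"design_weight": 1.0, "disadvantaged": False, "responded": True},
    {"design_weight": 1.0, "disadvantaged": False, "responded": True},
]

def response_rate(group):
    """Proportion of units in a weighting class that responded."""
    return sum(u["responded"] for u in group) / len(group)

# Estimate the response propensity within each weighting class, then
# inflate responders' design weights by the inverse of that rate.
# Non-respondents receive a final weight of zero.
for u in units:
    cls = [v for v in units if v["disadvantaged"] == u["disadvantaged"]]
    u["final_weight"] = (
        u["design_weight"] / response_rate(cls) if u["responded"] else 0.0
    )

for u in units:
    print(u["final_weight"])
```

The adjusted weights of responders in a class sum to the design-weight total the class would have had under full response, which is what lets the responding sample stand in for the full sample. Plewis's point is that when design weights already vary strongly (because of oversampling), this extra adjustment changes estimates comparatively little.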
All these papers have one element in common: they all raise awareness of the problem of survey non-response. Some deal with survey design and implementation, while others deal with sample representativeness.
Looking forward, interest in this topic is unlikely to decline. New developments in survey methods will raise new challenges for survey response and will require new adjustment techniques as well as new survey procedures.
The following articles are free to read online until the end of November 2016.
Vidal Díaz de Rada (2005)
This paper examines in some detail the influence of questionnaire design on the rate and quality of responses to mail surveys. To this end, an investigation using Dillman's total design method for mail surveys was carried out in Spain. To minimize the effort required in filling in the questionnaire, Dillman proposes that the questionnaire should be easy to fill in, and stresses the importance of its having an attractive and pleasing layout. This paper outlines a series of elements to take into consideration regarding the dimensions and size of the questionnaire, its cover pages, the type and colour of paper used, the ordering of the questions, and the envelope and stamps used to send off the questionnaire. It lays out the suggestions made by Dillman as well as those of others who have written on this theme.
Evaluating the effect of esthetically enhanced materials compared to standard materials on clinician response rates to a mailed survey
Sandra Kereakoglow, Rebecca Gelman & Ann H. Partridge (2013)
Evidence suggests that physicians have lower response rates than non-physicians to mailed surveys. It is important to identify methods that can increase physician and nurse participation in health-related survey research. In an effort to improve response rates among clinicians, we developed esthetically enhanced survey materials (booklets printed on glossy paper with color and graphic designs) as part of a large mailed-survey study of oncology doctors and nurses who were listed as members of a North American cooperative oncology group. We randomized these clinicians to receive either an enhanced (90%) or standard (10%) paper survey (standard white paper with no color or graphic designs, stapled together) about offering results of clinical trials to trial participants. Overall, 34% (793/2333) of the surveys were returned. There was no significant difference between the two groups; 33.7% (707/2100) responded to the enhanced materials and 36.9% (86/233) responded to the standard materials (p = .34). These results suggest that esthetically enhanced materials do not increase clinician response rates to mailed surveys, although further research is warranted in this area.
Dirk Heerwegh, Tim Vanhove, Koen Matthijs & Geert Loosveldt (2005)
Personalizing correspondence has often been shown to significantly increase response rates in mail surveys. This study experimentally tests whether personalizing email invitations has a corresponding effect on web survey response rates. It also investigates whether personalization influences data quality. The results of the study, using a large student sample, show that personalization significantly increases the web survey response rate by 8.6 percentage points. Data quality does not appear to be affected in any major way by personalizing the email invitations. However, the analyses do show that respondents in the personalization condition tend to respond with more social desirability bias to sensitive questions. It is therefore concluded that personalization has positive effects on the survey response rate, but one should carefully consider whether or not to personalize when a survey on sensitive topics is conducted.
Email subject lines and response rates to invitations to participate in a web survey and a face-to-face interview: the sound of silence
Natalie Sappleton & Fernando Lourenço (2015)
This paper investigates the effect of blank versus non-blank email subject lines on response to a solicitation to participate in an interview, and on participation in a web survey. Email use has grown substantially in recent years, presenting a significant opportunity to the empiricist seeking research respondents. However, response to emails may be low because growth in the sheer volume of messages that individuals receive per day has led to a sense of 'email overload', and faced with the challenge of personal email management, many recipients choose to ignore some messages, or do not read them all fully. Drawing on information gap theory, we expected that sending an invitation with a blank subject line would induce a sense of curiosity in recipients that would improve email response and willingness to participate in research studies. However, findings from research with two samples with different propensities to participate in research (academics and business owners) revealed that an email invitation with a blank subject line does not increase overall response rates to a web survey and a face-to-face interview over either an informative subject line or a provocative subject line, but that it does prompt a greater number of active refusals. Based on this finding, recommendations for researchers are outlined.
Douglas Williams, J. Michael Brick, Jill M. Montaquila & Daifeng Han (2014)
For surveys targeting specific population groups, the two-phase postal approach (a screener followed by a topical survey sent to eligible households) has been demonstrated to be more effective at identifying population domains of interest than random digit dial telephone methods, considering cost, coverage, and response. An important question is how best to motivate screener response from eligible households. In 2011, we conducted a large-scale field test to empirically test a number of methods for motivating response. We fielded screening surveys that varied content to influence relevance, and also switched screener questionnaires when following up nonrespondents to the initial postal survey – an approach we have labeled responsive tailoring. In another experiment, we tested the effect of asking for first names in the screener questionnaire. In this article, we describe the effects of these experimental treatments on response to both the screener and the topical survey.
Dirk Heerwegh & Geert Loosveldt (2009)
Even though web surveys have become increasingly popular, considerable efforts are necessary to obtain acceptable response rates. This explains the proliferation of experiments aimed at improving levels of participation in web surveys. The current study is not aimed at increasing web survey response by means of experimental research, but instead uses a general psychological theory to explain web survey response. More specifically, the theory of planned behaviour (TPB) is used to explain the intention to participate in a web survey. Previous studies have used this theory to explain survey participation in specific populations (students), but the current study extends that scope by targeting a general population. The results show that the TPB is capable of explaining people’s intentions to participate in a web survey in the context of a general population.
Ian Plewis (2007)
The advantages that birth cohort data offer researchers interested in the measurement and explanation of change across the life course are tempered by the problem of non‐response that becomes progressively larger as cohorts age. This article sets out the extent of this problem for the first two waves of the fourth in the series of UK birth cohorts: the Millennium Cohort Study. The response rate at Wave 1 is 72%, declining to 58% at Wave 2. Sample loss between Waves 1 and 2 was due to the failure to trace families who had moved, to contact families at a known address and to refusal. The correlates of these three kinds of non‐response are different. Although non‐respondents are systematically different from respondents at Waves 1 and 2, these differences in the propensity to respond are small compared with the unequal selection probabilities built into the sample design. It is, therefore, unlikely that weighting adjustments will have a substantial effect, over and above the effect of the sample design, on longitudinal analyses based on the first two waves of the study.
Susan Michie & Theresa Marteau (1999)
Low participation and high attrition rates in research studies may produce socially and psychologically biased study samples which limit the generalizability of findings. Although this problem is often acknowledged, published studies sometimes fail to assess and report non-response bias and make few efforts to prevent it. Two examples, one involving patients and another involving health professionals, are reported to illustrate the problem. The first was a randomized controlled trial of presenting a prenatal screening test, with questionnaires given to participants on four occasions. The final sample differed in ethnic group and use of screening, being more likely to be white and to undergo the test. The second example was a randomized controlled trial of communication skills training of obstetricians and midwives. Those completing the study differed from the others in holding more positive attitudes towards training and having better communication skills. Potential solutions to the problem of non-response bias are addressed.