frame) is available, then it is possible to proceed with the web‐only survey. However, if the list of e‐mail addresses is incomplete (bad sampling frame) or does not exist, the surveyor must decide whether an alternative sampling frame is available, for example, a sampling frame of telephone numbers or postal addresses. If an alternative sampling frame is available, a mixed‐mode approach should be adopted. The surveyor should select a contact mode (telephone or mail), approach the sampled interviewees to ask them to participate in the survey, and ask whether they can provide an e‐mail address. If the researcher intends to conclude the survey via the web, he or she can provide a personal computer and Internet access (with an e‐mail address) to those without them. In this case, data collection takes place via a web or mobile web survey. Thus, from this step of Figure 3.2, it is possible to follow the decision steps of the flowchart in Figure 3.1. If the researcher does not want to provide Internet access, or if interviewees do not agree to participate via the web or do not provide an e‐mail address, the interview should be administered using an alternative mode. In such a case, the surveyor must run a mixed‐mode survey with a web component (see Chapter 9). In this case too, from this step of Figure 3.2, it is possible to follow the decision steps of the flowchart in Figure 3.1.
Figure 3.2 Sub‐steps for deciding the mode of data collection
If no alternative sampling frame exists, only a non‐probability approach is possible, or an RDD telephone contact could be undertaken.3
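The decision steps described above can be sketched as a simple function. This is a simplified rendering of the flowchart logic, and the function and return labels are hypothetical, not part of the original figures:

```python
def choose_data_collection_mode(has_email_frame: bool,
                                has_alt_frame: bool,
                                provides_internet_access: bool,
                                agrees_web: bool) -> str:
    """Simplified sketch of the mode-selection decisions (Figures 3.1/3.2).

    All parameter names are illustrative; the real decision process
    involves more steps than are modelled here.
    """
    if has_email_frame:
        # Complete e-mail list of the target population exists.
        return "web-only survey"
    if not has_alt_frame:
        # No alternative frame of telephone numbers or postal addresses.
        return "non-probability approach or RDD telephone contact"
    # Contact sampled interviewees via the alternative frame and ask
    # for participation and an e-mail address.
    if provides_internet_access and agrees_web:
        return "web or mobile web survey"
    return "mixed-mode survey with a web component"
```

For instance, a complete e-mail frame short-circuits the remaining questions, while a missing alternative frame forces a non-probability or RDD route regardless of the other answers.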
There are considerable advantages associated with web/mobile web surveys compared with face‐to‐face surveys, mainly in terms of cost and timeliness. However, the web mode performs more poorly in terms of coverage and participation. For this reason, researchers sometimes consider using a mixed‐mode approach including the web, even if no problem exists in terms of the sampling frame (see experiments and comments in Jäckle, Lynn, and Burton, 2015).
If the conditions for a web survey are present, the surveyor returns to the steps and sub‐steps of the Figure 3.1 flowchart.
At this stage, the sub‐step to be faced is to Define the sampling frame and the sampling approach. This refers only to probability‐based surveys. The identification of the sampling frame requires a complete e‐mail list of the target population; thus, the entire target population should be online and listed (e‐mail list) without under‐coverage or duplications.4 Once the sampling frame and the sampling approach are decided, the sample selection takes place (see Chapter 4 about sampling problems and methods). Sampling strategies do not differ from those of traditional surveys. Other sub‐steps are the Questionnaire design and the Design paradata methodology, which can be worked on in parallel. Paradata are usually “administrative data about the survey,” gathered during the data collection. Web survey paradata include how many times the respondent accessed the questionnaire, item nonresponses, editing errors, and the time taken to complete the questionnaire. Thus, there are paradata about each observation in the survey. When considering mobile web surveys, a variety of devices could be used to complete the questionnaire; therefore paradata also include the type of device, possibly distinguishing the device on which the survey contact/questionnaire is opened from the one from which the completed questionnaire is submitted. Paradata are useful for understanding problems in the questionnaire: for example, questions that are not clear enough or are ambiguous can be identified. Unclear questions usually show high item nonresponse and longer response times, and/or they induce back‐and‐forth navigation in the questionnaire. When monitoring survey participation, paradata are useful for deploying additional strategies in the solicitation process. For example, by looking at the characteristics of the respondents during the data collection, it is possible to use adaptive design (see Chapter 8). This leads to higher response rates and limited costs.
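As an illustration of how question‐level paradata might be used to spot unclear questions, here is a minimal sketch. The record layout, field names, and thresholds are hypothetical, not a prescribed format:

```python
from statistics import median

# Hypothetical paradata records: one entry per (respondent, question),
# holding an item-nonresponse flag and the response time in seconds.
paradata = [
    {"question": "Q1", "nonresponse": False, "seconds": 8.0},
    {"question": "Q1", "nonresponse": False, "seconds": 10.0},
    {"question": "Q2", "nonresponse": True,  "seconds": 45.0},
    {"question": "Q2", "nonresponse": False, "seconds": 60.0},
    {"question": "Q2", "nonresponse": True,  "seconds": 50.0},
]

def flag_unclear(records, max_nonresponse=0.2, max_seconds=30.0):
    """Flag questions combining high item nonresponse with a long
    median response time -- the two symptoms of unclear wording
    mentioned in the text. Thresholds are illustrative only."""
    by_question = {}
    for rec in records:
        by_question.setdefault(rec["question"], []).append(rec)
    flagged = []
    for question, recs in by_question.items():
        nr_rate = sum(r["nonresponse"] for r in recs) / len(recs)
        med_time = median(r["seconds"] for r in recs)
        if nr_rate > max_nonresponse and med_time > max_seconds:
            flagged.append(question)
    return flagged

print(flag_unclear(paradata))  # ['Q2']
```

In this toy data set, Q2 is flagged because two of its three respondents skipped it and its median response time exceeds the threshold, while Q1 passes both checks.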
Paradata also provide insights into under‐coverage and over‐coverage. In summary, paradata are an interesting information source that can improve the survey process and help identify errors and their interrelations.
According to Callegaro (2013), in web surveys it is possible to distinguish between device‐type paradata and questionnaire navigation paradata. Device‐type paradata provide information regarding the kind of device used to complete the interview (e.g., tablet or desktop) and about its technical features (browser, screen resolution, IP address, and several other characteristics). Questionnaire navigation paradata describe the full set of activities undertaken in completing the questionnaire, for example, mouse clicks, forward and backward movements along the questionnaire, the number of error messages generated, the time spent per question, and the last question answered before dropping out (if dropout occurs). Other authors (for example, Heerwegh, 2011) distinguish between client‐side paradata (which include mouse clicks and everything else related to the respondent’s activities) and server‐side paradata (which include everything collected by the server hosting the survey). The literature also proposes other classifications, and technological evolution will offer new types of paradata. Capturing paradata is one of the main challenges. The software industry improves greatly and constantly; traditionally, most programs collected only a few server‐side paradata, and not every program registers client‐side paradata. Technological innovation and commitment to this important task have enlarged the offer of programs that collect paradata and translate the often unintelligible strings into useful data sets. Due to the high rate of innovation in this field, software quickly becomes outdated; some discussion of the topic can be found in Olson and Parkhurst (2013) and in Kreuter (2013).
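Device‐type paradata are often derived server‐side from the HTTP User‐Agent string the browser sends. The following is a deliberately rough sketch of that idea: real survey platforms rely on dedicated user‐agent parsing libraries, and the keyword checks below are only illustrative:

```python
def classify_device(user_agent: str) -> str:
    """Very rough device-type classification from a User-Agent string.

    A keyword scan like this misclassifies many real devices (e.g. some
    Android tablets); it only illustrates the kind of device-type
    paradata a survey server can derive from each HTTP request.
    """
    ua = user_agent.lower()
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    if "iphone" in ua or "mobile" in ua or "android" in ua:
        return "smartphone"
    return "desktop"
```

Storing the classification alongside the raw string for every questionnaire access would let the surveyor compare, for instance, the device used to open the invitation with the one from which the completed questionnaire is submitted.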
Currently, it is clear that paradata are useful for several different functions, such as monitoring nonresponse and measurement profiles, checking for measurement error and bias, improving questionnaire usability, and fixing many other problems. Because of their potential usefulness in helping to understand relationships between different errors and in improving the data collection process and the quality of results, paradata require a decisional sub‐step to plan their structure and collection.
Regarding the Design questionnaire sub‐step, several decisions must be made. First, decisions about question wording and response format take place. Question wording rules are similar to those of the other modes, except that the sentences should be especially simple and short; this recommendation is stronger for web surveys than for the other modes. As regards response format, even if most of the basic criteria are similar to those of traditional modes, the general rules and response formats in web surveys are different and specific to a self‐completed questionnaire and to the digital format used (see Chapter 7). The criteria for mobile web surveys are similar to those for web surveys, with some specific requirements due to the technical structure of the devices, especially the small size of the screen, particularly on smartphones. A questionnaire that is poorly structured or technically inadequate for mobile phones could critically affect the quality of the survey; errors could arise in terms of response rate, item nonresponse, and estimates. An important issue arises when a mixed‐mode approach is adopted. In this case, whether to adopt the optimal approach for each mode or a unimode approach is debated (see Dillman, Smyth, and Christian, 2014). Recently it has been suggested to achieve a balance between the basic presumption that survey questions should be as identical as possible between modes (unimode approach) and, at the same time, consider that