or money, unclear and late and inconsistent specifications, last‐minute changes, screens that are too small, and computers that are too slow.
1.2.8 CONCLUSIONS FROM MODERN‐DAY CHALLENGES
The description of modern‐day survey challenges leads to some conclusions:
Modern‐day surveys can be very hard.
No single person has all the answers.
New survey‐producing methods are necessary to address all the challenges within ever‐tightening constraints.
Small screen sizes often lead to adaptations of survey instruments such as using fewer points in a scale question.
With the proliferation of devices, it becomes harder to rely on unimode designs where all questions appear the same in all modes and devices (Dillman, Smyth, and Christian, 2014). Instead, the institute may strive for cognitive equivalence across all manifestations (de Leeuw, 2005).
1.2.9 THRIVING IN THE MODERN‐DAY SURVEY WORLD
Updated survey design methods may offer ways to cope with, and even thrive in, the modern‐day survey world. The idea is to use a powerful computer‐based specification to replace document specification and manual programming. This idea is described in the following:
Use a capable computer‐based specification system to define the questionnaire. A drag‐and‐drop specification may be adequate for simpler surveys, but for surveys that must handle more of the multis mentioned above, or that run to thousands of questions, drag‐and‐drop becomes too onerous.
Specification and survey methods research should use question structures (see below).
The institute should define its question‐presentation standards for each structure across all the multis. This requires some up‐front work and decisions.
When the specification is entered, the computer should generate the necessary source code and related configuration files for all multis. All these computer‐generated outputs should conform to the institute's standards (see the sketch after this list).
Use a survey‐taking system that has evolved to cope with the modern‐day world.
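As a rough illustration of the generation step mentioned above, the following sketch (plain Python, not Blaise or any particular product; the question definition and the functions render_web and render_cati are hypothetical names) shows how a single machine‐readable question definition can drive the generated output for several modes, so that the institute's presentation standards are applied by the generator rather than by hand:

QUESTION = {
    "name": "Satisfaction",
    "text": "How satisfied are you with the service?",
    "structure": "scale",  # the question structure this item uses
    "categories": ["Very dissatisfied", "Dissatisfied", "Neutral",
                   "Satisfied", "Very satisfied"],
}

def render_web(question):
    # Web (self-completion) manifestation: a radio-button group.
    lines = ["<p>" + question["text"] + "</p>"]
    for code, label in enumerate(question["categories"], start=1):
        lines.append('<label><input type="radio" name="%s" value="%d"> %s</label>'
                     % (question["name"], code, label))
    return "\n".join(lines)

def render_cati(question):
    # Interviewer (CATI) manifestation: a read-aloud prompt with answer codes.
    codes = ", ".join("%d = %s" % (i, label)
                      for i, label in enumerate(question["categories"], start=1))
    return "READ ALOUD: %s\nCODE ONE OF: %s" % (question["text"], codes)

print(render_web(QUESTION))
print(render_cati(QUESTION))

A small‐screen variant of the same structure, for example with fewer scale points, could be generated from the same definition, which is where the up‐front work on presentation standards pays off.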
1.3 Application
1.3.1 BLAISE
The historic developments with respect to surveys described in the previous section also took place in the Netherlands. In particular, the rapid developments in computer technology have had a major impact on the way Statistics Netherlands collected its data. Efforts to improve the collection and processing of survey data in terms of costs, timeliness, and quality have led to a powerful software system called Blaise. This system emerged in the 1980s, and it has evolved over time so that it is now also able to conduct web surveys and mixed‐mode surveys. This section gives an overview of the developments at Statistics Netherlands leading to the Internet version of Blaise.
The advance of computer technology since the late 1940s led to many improvements in the way Statistics Netherlands conducted its surveys. For example, starting in 1947 Statistics Netherlands used probability samples to replace its complete enumerations for surveys on income statistics and agriculture. The implementation of sophisticated sampling techniques such as stratification and systematic sampling is much easier and less labor intensive on a computer than with manual methods.
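As a rough sketch only (the frame of unit identifiers and the function name systematic_sample are illustrative, not any particular production system), the following few lines of Python show why systematic sampling is cheap to implement on a computer: after a random start, every k-th unit of the frame is selected.

import random

def systematic_sample(frame, n):
    # Select roughly n units: a random start, then every k-th unit.
    k = len(frame) // n                  # sampling interval
    start = random.randrange(k)          # random start within the first interval
    return frame[start::k][:n]

frame = list(range(1, 1001))             # identifiers of 1,000 population units
print(systematic_sample(frame, 10))      # e.g. [37, 137, 237, ...]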
Collecting and processing statistical data was a time‐consuming and expensive process. Data editing was an important component of this work. The aim of these data editing activities was to detect and correct errors in the individual records, questionnaires, or forms, and thereby improve the quality of the survey results. Since statistical offices attached much importance to this aspect of the survey process, a large part of their human and computer resources was spent on it.
To obtain more insight into the effectiveness of data editing, Statistics Netherlands carried out a Data Editing Research Project in 1984. Bethlehem (1987) describes how survey data were processed. The overall process included manual inspection of paper forms, preparation of the forms for high‐speed data entry (including correcting obvious errors or following up with respondents), data entry, and further correction.
The Data Editing Research Project discovered a number of problems:
Many people from different departments dealt with the information: respondents, subject‐matter specialists, data typists, and computer programmers.
Transfer of material from one person/department to another could be a source of error, misunderstanding, and delay.
Different computer systems were involved, from mainframes to minicomputers to desktop computers under MS‐DOS. Transfer of files from one system to another caused delay, and incorrect specification and documentation could produce errors.
Not all activities were aimed at quality improvement. Time was also spent merely on preparing forms for data entry rather than on correcting errors.
The cycle of data entry, automatic checking, and manual correction was in many cases repeated three times or more. Due to these cycles, data processing was very time consuming.
The structure of the data (the metadata) had to be specified in nearly every step of the data editing process. Although the metadata were essentially the same throughout, the “language” of this specification could be completely different for every department or computer system involved.
The conclusions of the Data Editing Research Project led to a general redesign of the survey processes of Statistics Netherlands. The idea was to improve the handling of paper questionnaire forms by integrating data entry and data editing tasks. The traditional batch‐oriented data editing activities, in which the complete data set was processed as a whole, were replaced by a record‐oriented process in which each record (form) was dealt with completely in one session.
More about the development of the Blaise system and its underlying philosophy can be found in Bethlehem and Hofman (2006).
The new group of activities was implemented in a so‐called CADI system. CADI stands for computer‐assisted data input. The CADI system was designed for use by the workers in the subject‐matter departments. Data could be processed in two ways by this system:
Heads‐up data entry. Subject‐matter employees worked through a pile of forms with a microcomputer, processing the forms one by one. First, they entered all data on a form, and then they activated the check option to test for all kinds of errors. Detected errors were reported on the screen. Errors could be corrected by consulting forms or by contacting the suppliers of the information. After elimination of all errors, a “clean” record was written to file. If employees could not produce a clean record, they could write the record to a separate file of “dirty” records to deal with later.
Heads‐down data entry. Data typists used the CADI system to enter data first, without much error checking. After completion, the CADI system checked all records in a batch run and flagged the incorrect ones. Then subject‐matter specialists handled these dirty records one by one and corrected the detected errors (a rough sketch of this check‐and‐route cycle follows this list).
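A minimal sketch of this record‐oriented check‐and‐route cycle is given below. It assumes CSV input with hypothetical Age and MaritalStatus columns and two illustrative edit checks; it is not the actual CADI software, only an illustration of checking each record as a whole and writing it to a clean or a dirty file.

import csv

def check_age(record):
    # Range check: return an error message, or None if the record passes.
    if not 0 <= int(record["Age"]) <= 120:
        return "Age outside range 0-120"
    return None

def check_marital(record):
    # Consistency check between age and marital status.
    if int(record["Age"]) < 15 and record["MaritalStatus"] == "Married":
        return "Person younger than 15 cannot be married"
    return None

CHECKS = [check_age, check_marital]

def process(in_path, clean_path, dirty_path):
    with open(in_path, newline="") as fin, \
         open(clean_path, "w", newline="") as fclean, \
         open(dirty_path, "w", newline="") as fdirty:
        reader = csv.DictReader(fin)
        clean = csv.DictWriter(fclean, fieldnames=reader.fieldnames)
        dirty = csv.DictWriter(fdirty, fieldnames=reader.fieldnames + ["Errors"])
        clean.writeheader()
        dirty.writeheader()
        for record in reader:
            errors = []
            for check in CHECKS:
                message = check(record)
                if message is not None:
                    errors.append(message)
            if errors:
                dirty.writerow({**record, "Errors": "; ".join(errors)})
            else:
                clean.writerow(record)

In heads‐up use, the same checks would run interactively as soon as a form had been keyed; in heads‐down use, they run afterwards over the whole keyed file, as in the batch step above.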
To be able to introduce CADI on a wide scale in the organization, a new standard package called Blaise was developed in 1986. The basis of the system was the Blaise language, which was used to create a formal specification of the structure and contents of the questionnaire.
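Actual Blaise syntax is not reproduced here. Purely as an illustration, the Python data structure below (with hypothetical field names and texts) suggests the kind of information such a formal specification captures: field names, question texts, answer types or ranges, and routing.

SPEC = {
    "fields": [
        {"name": "Age", "text": "What is your age?", "type": ("range", 0, 120)},
        {"name": "Working", "text": "Do you have a paid job?", "type": ("enum", ["Yes", "No"])},
        {"name": "Hours", "text": "How many hours per week do you work?", "type": ("range", 1, 80)},
    ],
    # Routing and checks live next to the data description, so one
    # specification can drive data entry, interviewing, and editing.
    "rules": [
        ("ask", "Age"),
        ("ask", "Working"),
        ("if", "Working == 'Yes'", [("ask", "Hours")]),
    ],
}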
The first version of the Blaise system ran on networks of microcomputers under MS‐DOS. It was intended for use by the people in the subject‐matter departments, so no expert computer knowledge was needed to use the Blaise system.
In