Good practice in the conduct and reporting of survey research

John Sitzia

Research Department, Worthing & Southlands Hospitals NHS Trust, Worthing, West Sussex, UK

Accepted January 16, 2003.

Abstract

Survey research is sometimes regarded as an easy research approach. However, as with any other research approach and method, it is easy to conduct a survey of poor quality rather than one of high quality and real value. This paper provides a checklist of good practice in the conduct and reporting of survey research. Its purpose is to assist the novice researcher to produce survey work to a high standard, meaning a standard at which the results will be regarded as credible. The paper first provides an overview of the approach and then guides the reader step-by-step through the processes of data collection, data analysis, and reporting. It is not intended to provide a manual of how to conduct a survey, but rather to identify common pitfalls and oversights to be avoided by researchers if their work is to be valid and credible.

What is survey research?

Survey research is common in studies of health and health services, although its roots lie in the social surveys conducted in Victorian Britain by social reformers to collect information on poverty and working-class life (e.g. Charles Booth [1] and Joseph Rowntree [2]), and indeed survey research remains most widely used in applied social research. The term ‘survey’ is used in a variety of ways, but generally refers to the selection of a relatively large sample of people from a pre-determined population (the ‘population of interest’; this is the wider group of people in whom the researcher is interested in a particular study), followed by the collection of a relatively small amount of data from those individuals. The researcher therefore uses information from a sample of individuals to make some inference about the wider population.

Data are collected in a standardized form. This is usually, but not necessarily, done by means of a questionnaire or interview. Surveys are designed to provide a ‘snapshot of how things are at a specific time’ [3]. There is no attempt to control conditions or manipulate variables; surveys do not allocate participants into groups or vary the treatment they receive. Surveys are well suited to descriptive studies, but can also be used to explore aspects of a situation, or to seek explanation and provide data for testing hypotheses. It is important to recognize that ‘the survey approach is a research strategy, not a research method’ [3]. As with any research approach, a choice of methods is available and the one most appropriate to the individual project should be used. This paper will discuss the most popular methods employed in survey research, with an emphasis upon difficulties commonly encountered when using these methods.

Descriptive research

Descriptive research is the most basic type of enquiry; it aims to observe (gather information on) certain phenomena, typically at a single point in time: the ‘cross-sectional’ survey. The aim is to examine a situation by describing important factors associated with that situation, such as demographic, socio-economic, and health characteristics, events, behaviours, attitudes, experiences, and knowledge. Descriptive studies are used to estimate specific parameters in a population (e.g. the prevalence of infant breast feeding) and to describe associations (e.g. the association between infant breast feeding and maternal age).

Analytical studies

Analytical studies go beyond simple description; their intention is to illuminate a specific problem through focused data analysis, typically by looking at the effect of one set of variables upon another set. These are longitudinal studies, in which data are collected at more than one point in time with the aim of illuminating the direction of observed associations. Data may be collected from the same sample on each occasion (cohort or panel studies) or from a different sample at each point in time (trend studies).

Evaluation research

This form of research collects data to ascertain the effects of a planned change.

Advantages and disadvantages of survey research

Advantages:

  • The research produces data based on real-world observations (empirical data).

  • The breadth of coverage of many people or events means that a survey is more likely than some other approaches to obtain data based on a representative sample, so the findings can be generalized to a population.

  • Surveys can produce a large amount of data in a short time for a fairly low cost. Researchers can therefore set a finite time-span for a project, which can assist in planning and delivering end results.

Disadvantages:

  • The significance of the data can be neglected if the researcher focuses on the breadth of coverage at the expense of an adequate account of what those data imply for relevant issues, problems, or theories.

  • The data that are produced are likely to lack details or depth on the topic being investigated.

  • A high response rate can be difficult to achieve, particularly when the survey is carried out by post, but also when it is conducted face-to-face or over the telephone.

Essential steps in survey research

Research question

Good research has the characteristic that its purpose is to address a single clear and explicit research question; conversely, the end product of a study that aims to answer a number of diverse questions is often weak. Weakest of all, however, are those studies that have no research question at all and whose design is simply to collect a wide range of data and then ‘trawl’ them for ‘interesting’ or ‘significant’ associations; this is a trap that novice researchers in particular fall into. In developing a research question, therefore, the following aspects should be considered [4]:

  • Be knowledgeable about the area you wish to research.

  • Widen the base of your experience, explore related areas, and talk to other researchers and practitioners in the field you are surveying.

  • Consider using techniques for enhancing creativity, for example brainstorming ideas.

  • Avoid the pitfalls of: allowing a decision regarding methods to decide the questions to be asked; posing research questions that cannot be answered; asking questions that have already been answered satisfactorily.

Research methods

The survey approach can employ a range of methods to answer the research question. Common survey methods include postal questionnaires, face-to-face interviews, and telephone interviews.

Postal questionnaires

This method involves sending questionnaires to a large sample of people covering a wide geographical area. Postal questionnaires are usually received ‘cold’, without any previous contact between researcher and respondent. The response rate for this method is usually low, ∼20%, depending on the content and length of the questionnaire. Because response rates are low, a large sample is required when using postal questionnaires, for two main reasons: first, to ensure that the demographic profile of survey respondents reflects that of the survey population; and secondly, to provide a sufficiently large data set for analysis.

Face-to-face interviews

Face-to-face interviews involve the researcher approaching respondents personally, either in the street or by calling at people’s homes. The researcher then asks the respondent a series of questions and notes their responses. The response rate is often higher than that of postal questionnaires because the researcher has the opportunity to ‘sell’ the research to a potential respondent. Face-to-face interviewing is more costly and time-consuming than a postal survey; however, the researcher can select the sample of respondents in order to balance the demographic profile of the sample.

Telephone interviews

Telephone surveys, like face-to-face interviews, allow a two-way interaction between researcher and respondent. Telephone surveys are quicker and cheaper than face-to-face interviewing. Whilst resulting in a higher response rate than postal surveys, telephone surveys often attract a higher level of refusals than face-to-face interviews as people feel less inhibited about refusing to take part when approached over the telephone.

Designing the research tool

Whether using a postal questionnaire or interview method, the questions asked have to be carefully planned and piloted. The design, wording, form, and order of questions can affect the type of responses obtained, and careful design is needed to minimize bias in results. When designing a questionnaire or question route for interviewing, the following issues should be considered: (1) planning the content of a research tool; (2) questionnaire layout; (3) interview questions; (4) piloting; and (5) covering letter.

Planning the content of a research tool

The topics of interest should be carefully planned and relate clearly to the research question. It is often useful to involve experts in the field, colleagues, and members of the target population in question design in order to ensure the validity of the coverage of questions included in the tool (content validity).

Researchers should conduct a literature search to identify existing, psychometrically tested questionnaires. A well designed research tool is simple, appropriate for the intended use, and acceptable to respondents, and it should include a clear and interpretable scoring system. A research tool must also demonstrate the psychometric properties of reliability (consistency from one measurement to the next), validity (accurate measurement of the concept), and, in a longitudinal study, responsiveness to change [5]. The development of research tools, such as attitude scales, is a lengthy and costly process. It is important that researchers recognize that the development of the research tool is equal in importance to data collection, and deserves equal attention. If a research instrument has not undergone a robust process of development and testing, the credibility of the findings may legitimately be called into question, and the findings may even be disregarded entirely. Surveys of patient satisfaction and similar topics are commonly weak in this respect; one review found that only 6% of patient satisfaction studies used an instrument that had undergone even rudimentary testing [6]. Researchers who are unable or unwilling to undertake this process are strongly advised to adopt an existing, robust research tool.
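
As an illustration of one element of psychometric testing, the sketch below computes Cronbach’s alpha, a widely used estimate of internal consistency (one aspect of reliability), for a small set of questionnaire items. The scores are invented, and alpha is only one of several checks that a full development and testing process would include.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented example: five respondents answering a four-item scale scored 1-5.
scores = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(np.array(scores)), 2))
```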

Questionnaire layout

Questionnaires used in survey research should be clear and well presented. The use of capital (upper case) letters only should be avoided, as this format is hard to read. Questions should be numbered and clearly grouped by subject. Clear instructions should be given and headings included to make the questionnaire easier to follow.

The researcher must think about the form of the questions, avoiding ‘double-barrelled’ questions (two or more questions in one, e.g. ‘How satisfied were you with your personal nurse and the nurses in general?’), questions containing double negatives, and leading or ambiguous questions. Questions may be open (where the respondent composes the reply) or closed (where pre-coded response options are available, e.g. multiple-choice questions). Closed questions with pre-coded response options are most suitable for topics where the possible responses are known. Closed questions are quick to administer and can be easily coded and analysed. Open questions should be used where possible replies are unknown or too numerous to pre-code. Open questions are more demanding for respondents but if well answered can provide useful insight into a topic. Open questions, however, can be time consuming to administer and difficult to analyse. Whether using open or closed questions, researchers should plan clearly how answers will be analysed.

Interview questions

Open questions are used more frequently in unstructured interviews, whereas closed questions typically appear in structured interview schedules. A structured interview is like a questionnaire that is administered face to face with the respondent. When designing the questions for a structured interview, the researcher should consider the points highlighted above regarding questionnaires. The interviewer should have a standardized list of questions, each respondent being asked the same questions in the same order. If closed questions are used the interviewer should also have a range of pre-coded responses available.

If carrying out a semi-structured interview, the researcher should have a clear, well thought out set of questions; however, the questions may take an open form and the researcher may vary the order in which topics are considered.

Piloting

A research tool should be tested on a pilot sample of members of the target population. This process will allow the researcher to identify whether respondents understand the questions and instructions, and whether the meaning of questions is the same for all respondents. Where closed questions are used, piloting will highlight whether sufficient response categories are available, and whether any questions are systematically missed by respondents.

When conducting a pilot, the same procedure as that to be used in the main survey should be followed; this will highlight potential problems such as a poor response rate.

Covering letter

All participants should be given a covering letter providing information such as the organization behind the study, the contact name and address of the researcher, details of how and why the respondent was selected, the aims of the study, any potential benefits or harm resulting from the study, and what will happen to the information provided. The covering letter should both encourage the respondent to participate in the study and meet the requirements of informed consent (see below).

Sample and sampling

The concept of sampling is intrinsic to survey research. It is usually impractical and uneconomical to collect data from every single person in a given population, so a sample of the population has to be selected [7]. This is illustrated in the following hypothetical example. A hospital wants to conduct a satisfaction survey of the 1000 patients discharged in the previous month; however, as it is too costly to survey every patient, a sample has to be selected. In this example, the researcher will have a list of the population members to be surveyed (the sampling frame). It is important to ensure that this list is both up to date and obtained from a reliable source.

The method by which the sample is selected from a sampling frame is integral to the external validity of a survey: the sample has to be representative of the larger population to obtain a composite profile of that population [8].

There are methodological factors to consider when deciding who will be in a sample: How will the sample be selected? What is the optimal sample size to minimize sampling error? How can response rates be maximized?

The choice of survey method influences how a sample is selected and the size of sample needed. There are two categories of sampling, random and non-random, each containing a number of selection techniques; the principal techniques are described here [9].

Random sampling

Generally, random sampling is employed when quantitative methods are used to collect data (e.g. questionnaires). Random sampling allows the results to be generalized to the larger population and statistical analysis to be performed if appropriate. The most stringent technique is simple random sampling, in which each individual within the chosen population is selected by chance and is equally likely to be picked as anyone else. Referring back to the hypothetical example, each patient is given a serial identifier and then an appropriate number of the 1000 population members are randomly selected. This is best done using a random number table, which can be generated using computer software (a free on-line randomizer can be found at http://www.randomizer.org/index.htm).
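
As a minimal sketch, simple random sampling from the hypothetical frame of 1000 discharged patients might look like the following; the patient identifiers and the sample size of 280 are illustrative assumptions, not figures taken from this paper.

```python
import random

# Hypothetical sampling frame: serial identifiers for the 1000 discharged patients.
sampling_frame = [f"patient-{i:04d}" for i in range(1, 1001)]

# Simple random sampling: every patient has the same chance of being selected.
# The sample size (280 here) is illustrative; in practice it would come from a
# sample size calculation (see 'Sample size' below).
random.seed(42)  # fixed seed so the selection can be reproduced and audited
sample = random.sample(sampling_frame, k=280)

print(sample[:5])  # first few selected identifiers
```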

Alternative random sampling techniques are briefly described here. In systematic sampling, individuals to be included in the sample are chosen at equal intervals from the population; using the earlier example, every fifth patient discharged from hospital would be included in the survey. In stratified sampling, the population is first divided into subgroups (strata) and a random sample is then drawn from the stratum or strata of interest; using our example, the hospital may decide to survey only older surgical patients. Bigger surveys may employ cluster sampling, in which groups (clusters) are randomly selected from a large population and everyone within the selected clusters is surveyed, a technique often used in national-scale studies.
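
The systematic approach can be sketched in the same way; the desired sample size, the resulting interval of five, and the random starting point below are illustrative assumptions.

```python
import random

# Systematic sampling: every kth patient from the frame, after a random start.
sampling_frame = [f"patient-{i:04d}" for i in range(1, 1001)]
desired_n = 200
k = len(sampling_frame) // desired_n  # interval of 5, i.e. every fifth patient

random.seed(1)
start = random.randrange(k)                   # random starting point within the first interval
systematic_sample = sampling_frame[start::k]

print(len(systematic_sample), systematic_sample[:3])
```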

Non-random sampling

Non-random sampling is commonly applied when qualitative methods (e.g. focus groups and interviews) are used to collect data, and is typically used for exploratory work. Non-random sampling deliberately targets particular individuals within a population. There are three main techniques. (1) Purposive sampling: a specific population is identified and only its members are included in the survey; using our example above, the hospital may decide to survey only patients who had an appendectomy. (2) Convenience sampling: the sample is made up of the individuals who are easiest to recruit. (3) Snowball sampling: the sample is identified as the survey progresses; as each individual is surveyed, he or she is invited to recommend others to be surveyed.
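
For comparison, purposive sampling amounts to filtering the population for the subgroup of interest; the discharge-record structure below is hypothetical and shown only for illustration.

```python
# Purposive sampling sketch: include only patients whose (hypothetical) discharge
# record lists the procedure of interest.
discharge_records = [
    {"id": "patient-0001", "procedure": "appendectomy"},
    {"id": "patient-0002", "procedure": "hip replacement"},
    {"id": "patient-0003", "procedure": "appendectomy"},
]

purposive_sample = [r["id"] for r in discharge_records if r["procedure"] == "appendectomy"]
print(purposive_sample)  # ['patient-0001', 'patient-0003']
```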

It is important to use the right method of sampling and to be aware of the limitations and statistical implications of each. The need to ensure that the sample is representative of the larger population was highlighted earlier and, alongside the sampling method, the degree of sampling error should be considered. Sampling error is the probability that any one sample is not completely representative of the population from which it has been drawn [9]. Although sampling error cannot be eliminated entirely, the sampling technique chosen will influence the extent of the error. Simple random sampling will give a closer estimate of the population than a convenience sample of individuals who just happened to be in the right place at the right time.
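
Under simple random sampling, the likely size of the sampling error for an estimated proportion can be quantified as a standard error. The sketch below uses the normal approximation, and the satisfaction figure and sample size are illustrative assumptions rather than results from any study.

```python
import math

# Approximate sampling error for an estimated proportion under simple random sampling.
p, n = 0.70, 280                    # e.g. 70% of 280 sampled patients report satisfaction
standard_error = math.sqrt(p * (1 - p) / n)
margin_95 = 1.96 * standard_error   # half-width of an approximate 95% confidence interval

print(f"SE = {standard_error:.3f}, 95% margin of error = +/-{margin_95:.3f}")
```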

Sample size

What sample size is required for a survey? There is no definitive answer to this question: large samples with rigorous selection are more powerful as they will yield more accurate results, but data collection and analysis will be proportionately more time consuming and expensive. Essentially, the target sample size for a survey depends on three main factors: the resources available, the aim of the study, and the statistical quality required of the survey. For ‘qualitative’ surveys using focus groups or interviews, the sample size needed will be smaller than if quantitative data are collected by questionnaire. If statistical analysis is to be performed on the data then sample size calculations should be conducted. This can be done using computer packages such as G*Power [10]; however, those with little statistical knowledge should consult a statistician. For practical recommendations on sample size, the set of survey guidelines developed by the UK Department of Health [11] should be consulted.

Larger samples give a better estimate of the population but it can be difficult to obtain an adequate number of responses. It is rare that everyone asked to participate in the survey will reply. To ensure a sufficient number of responses, include an estimated non-response rate in the sample size calculations.
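
As a generic illustration only (not a substitute for a study-specific calculation in a package such as G*Power or advice from a statistician), the sketch below applies the standard formula for estimating a proportion to within a chosen margin of error, and then inflates the result by an assumed response rate to decide how many people to approach.

```python
import math

def sample_size_for_proportion(margin_of_error: float, p: float = 0.5, z: float = 1.96) -> int:
    """Minimum n to estimate a proportion p to within +/- margin_of_error at ~95% confidence."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

n_required = sample_size_for_proportion(margin_of_error=0.05)    # worst case p = 0.5 gives 385
expected_response_rate = 0.65                                    # assumed, e.g. a postal survey
n_to_approach = math.ceil(n_required / expected_response_rate)   # inflate for non-response

print(n_required, n_to_approach)  # 385 usable responses means approaching about 593 people
```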

Response rates are a potential source of bias. The results from a survey with a large non-response rate could be misleading and only representative of those who replied. French [12] reported that non-responders to patient satisfaction surveys are less likely to be satisfied than people who reply. It is unwise to define a level above which a response rate is acceptable, as this depends on many local factors; however, an achievable and acceptable rate is ∼75% for interviews and 65% for self-completion postal questionnaires [9,13]. In any study, the final response rate should be reported with the results; potential differences between the respondents and non-respondents should be explicitly explored and their implications discussed.

There are techniques for increasing response rates: the questionnaire must be concise and easy to understand, reminders should be sent out, and the method of recruitment should be carefully considered. Sitzia and Wood [13] found that participants who were recruited by mail, or who had to respond by mail, had a lower mean response rate (67%) than participants who were recruited personally (mean response rate 76.7%). A useful review of methods to maximize response rates in postal surveys has recently been published [14].

Data collection

Researchers should approach data collection in a rigorous and ethical manner. The following information must be clearly recorded:

  • How, where, how many times, and by whom potential respondents were contacted.

  • How many people were approached and how many of those agreed to participate.

  • How did those who agreed to participate differ from those who refused with regard to characteristics of interest in the study? For example, how were they identified, where were they approached, and what were their gender, age, and features of their illness or health care?

  • How was the survey administered (e.g. telephone interview).

  • What was the response rate (i.e. the number of usable data sets as a proportion of the number of people approached).

Data analysis

The purpose of all analyses is to summarize data so that they are easily understood and provide the answers to the original questions: ‘In order to do this researchers must carefully examine their data; they should become friends with their data’ [15]. Researchers must be prepared to spend substantial time on the data analysis phase of a survey (and this should be built into the project plan). When analysis is rushed, important aspects of the data are often missed and sometimes the wrong analyses are conducted, leading to both inaccurate results and misleading conclusions [16]. However, and this point cannot be stressed strongly enough, researchers must not engage in data dredging, a practice that can arise especially in studies in which large numbers of dependent variables (outcomes) can be related to large numbers of independent variables. When large numbers of possible associations in a dataset are reviewed at P < 0.05, one in 20 of those associations will appear ‘statistically significant’ by chance alone; in datasets where only a few real associations exist, testing at this significance level will result in the large majority of the ‘significant’ findings being false positives [17].
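
The point about chance findings can be demonstrated with a short simulation: if many associations are tested at P < 0.05 when no real associations exist, roughly 1 in 20 will still appear ‘significant’. The data below are entirely artificial and serve only to illustrate this.

```python
import random

random.seed(0)
n_tests = 1000   # number of independent associations examined
alpha = 0.05

# Under the null hypothesis a P value is uniformly distributed between 0 and 1,
# so each test can be simulated by drawing a random number and comparing it to alpha.
false_positives = sum(1 for _ in range(n_tests) if random.random() < alpha)

print(f"{false_positives} of {n_tests} null associations appeared 'significant' "
      f"(about {int(n_tests * alpha)} expected by chance)")
```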

The method of data analysis will depend on the design of the survey and should have been carefully considered in the planning stages of the survey. Data collected by qualitative methods should be analysed using established methods such as content analysis [18], and where quantitative methods have been used appropriate statistical tests can be applied. Describing methods of analysis here would be unproductive as a multitude of introductory textbooks and on-line resources are available to help with simple analyses of data (e.g. [19, 20]). For advanced analysis a statistician should be consulted.

Reporting

When reporting survey research, it is essential that a number of key points are covered (though the length and depth of reporting will be dependent upon journal style). These key points are presented as a ‘checklist’ below:

  1. Explain the purpose or aim of the research, with the explicit identification of the research question.

  2. Explain why the research was necessary and place the study in context, drawing upon previous work in relevant fields (the literature review).

  3. Describe in (proportionate) detail how the research was done.
    1. State the chosen research method or methods, and justify why this method was chosen.

    2. Describe the research tool. If an existing tool is used, briefly state its psychometric properties and provide references to the original development work. If a new tool is used, include an entire section describing the steps undertaken to develop and test the tool, including the results of psychometric testing.

    3. Describe how the sample was selected and how data were collected, including:

    1. How were potential subjects identified?

    2. How many and what type of attempts were made to contact subjects?

    3. Who approached potential subjects?

    4. Where were potential subjects approached?

    5. How was informed consent obtained?

    6. How many agreed to participate?

    7. How did those who agreed differ from those who did not agree?

    8. What was the response rate?

  4. Describe and justify the methods and tests used for data analysis.

  5. Present the results of the research. The results section should be clear, factual, and concise.

  6. Interpret and discuss the findings. This ‘discussion’ section should not simply reiterate results; it should provide the author’s critical reflection upon both the results and the processes of data collection. The discussion should assess how well the study met the research question, should describe the problems encountered in the research, and should honestly judge the limitations of the work.

  7. Present conclusions and recommendations.

The researcher needs to tailor the research report to meet:

  • The expectations of the specific audience for whom the work is being written.

  • The general conventions for reporting research in the social sciences.

Ethics

Anyone involved in collecting data from patients has an ethical duty to respect each participant’s autonomy. Any survey should be conducted in an ethical manner that accords with best research practice. Two important ethical principles to uphold when conducting a survey are confidentiality and informed consent.

The respondent’s right to confidentiality should always be respected and any legal requirements on data protection adhered to. In the majority of surveys, the patient should be fully informed about the aims of the survey, and the patient’s consent to participate in the survey must be obtained and recorded.

A number of professional bodies provide guidance on the ethical conduct of research and surveys.

Conclusion

Survey research demands the same standards in research practice as any other research approach, and journal editors and the broader research community will judge a report of survey research with the same level of rigour as any other research report. This is not to say that survey research need be particularly difficult or complex; the point to emphasize is that researchers should be aware of the steps required in survey research, and should be systematic and thoughtful in the planning, execution, and reporting of the project. Above all, survey research should not be seen as an easy, ‘quick and dirty’ option; such work may adequately fulfil local needs (e.g. a quick survey of hospital staff satisfaction), but will not stand up to academic scrutiny and will not be regarded as having much value as a contribution to knowledge.

Footnotes

  • Address reprint requests to John Sitzia, Research Department, Worthing Hospital, Lyndhurst Road, Worthing BN11 2DH, West Sussex, UK. E-mail: [email protected]

References

  1. London School of Economics, UK. http://booth.lse.ac.uk/ (accessed 15 January 2003).
  2. Vernon A. A Quaker Businessman: Biography of Joseph Rowntree (1836–1925). London: Allen & Unwin, 1958.
  3. Denscombe M. The Good Research Guide: For Small-scale Social Research Projects. Buckingham: Open University Press, 1998.
  4. Robson C. Real World Research: A Resource for Social Scientists and Practitioner-researchers. Oxford: Blackwell Publishers, 1993.
  5. Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to their Development and Use. Oxford: Oxford University Press, 1995.
  6. Sitzia J. How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J Qual Health Care 1999; 11: 319–328.
  7. Bowling A. Research Methods in Health. Investigating Health and Health Services. Buckingham: Open University Press, 2002.
  8. American Statistical Association, USA. http://www.amstat.org (accessed 9 December 2002).
  9. Arber S. Designing samples. In: Gilbert N, ed. Researching Social Life. London: SAGE Publications, 2001.
  10. Heinrich Heine University, Dusseldorf, Germany. http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/index.html (accessed 12 December 2002).
  11. Department of Health, England. http://www.doh.gov.uk/acutesurvey/index.htm (accessed 12 December 2002).
  12. French K. Methodological considerations in hospital patient opinion surveys. Int J Nurs Stud 1981; 18: 7–32.
  13. Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care 1998; 10: 311–317.
  14. Edwards P, Roberts I, Clarke M et al. Increasing response rates to postal questionnaires: systematic review. Br Med J 2002; 324: 1183.
  15. Wright DB. Making friends with our data: improving how statistical results are reported. Br J Educ Psychol 2003; in press.
  16. Wright DB, Kelley K. Analysing and reporting data. In: Michie S, Abraham C, eds. Health Psychology in Practice. London: SAGE Publications, 2003; in press.
  17. Davey Smith G, Ebrahim S. Data dredging, bias, or confounding. Br Med J 2002; 325: 1437–1438.
  18. Morse JM, Field PA. Nursing Research: The Application of Qualitative Approaches. London: Chapman and Hall, 1996.
  19. Wright DB. Understanding Statistics: An Introduction for the Social Sciences. London: SAGE Publications, 1997.
  20. Sportscience, New Zealand. http://www.sportsci.org/resource/stats/index.html (accessed 12 December 2002).