Research

Survey Attributes, Development, Utilization, and Interpretation in Healthcare Research

John Ward, DC, MA, MS

 


Topics in Integrative Health Care 2016, Vol. 6(3)   ID: 6.3002



Published on January 8, 2016

Introduction: Surveys are important for the evaluation of healthcare data trends that affect patient populations.

Methods: This narrative review describes survey attributes, development, utilization, and interpretation from perspectives pertinent to researchers.

Results: Survey questions should be sensitive to cultural, psychological, and economic factors. Questions may be open-ended or closed-ended. Surveys may be distributed in-person, through telephone interviews, by mail, online, or through direct physical distribution. Newly created surveys should use age-appropriate, education level-appropriate, culturally sensitive, and time-sensitive questions. Researchers will need to select an appropriate mechanism for quantifying and analyzing responses. Newly created surveys should be pilot-tested on small sample populations that are representative of the intended larger population. There is no single agreed-upon way to report surveys, but the most common reporting deficiencies appear to be failing to: provide the survey questions, report on the validity or reliability of the instrument, provide the response rate, discuss how well the sample represents the entire population, and describe how missing data are handled.

Conclusions: Surveys are critical to discovering data trends and encouraging modifications in clinical treatment guidelines. The questions should focus on set domains, but be capable of accommodating all relevant answers. Survey questions should be tested to ensure they have reasonable validity for what they are trying to measure and that they are reliable if used multiple times with various populations.

Introduction

Surveys are important for research and evaluation of data trends within patient populations.1-4 The information they provide can lead to improvements in healthcare practice patterns through the development of treatment guidelines.5-9  
 
The three main categories of surveys are: descriptive, comparative, and predictive.10 Descriptive surveys provide characteristic information about a specific population. Comparative surveys are used to compare and contrast the attributes of two or more groups. Predictive surveys forecast the actions of a particular group of individuals based on their previous patterns in life. 
 
Advantages of using surveys are that they can: 1) reach many respondents with minimal effort, and 2) assess how multiple categorically different variables interact with one another.11-12 A disadvantage of surveys is that low response rates can threaten their external validity.11

Methods

Topically, the section subheadings of this narrative review were designed to be similar to most of the individual titles of Fink’s nine-volume textbook series on developing surveys.10,13-17 The specific aims of this article are to describe the attributes of surveys, and how they are developed, distributed, and analyzed.

How should survey questions be phrased?



Survey questions should be sensitive to cultural,13,18-23 psychological,13 and economic factors.13 Ideally, questions will make sense to the respondent,13,24 be relevant to important time periods in his/her life, and avoid expression of bias.13 Loaded questions should be avoided.13,25 An example of a loaded question would be for a doctor to ask a patient with a general headache, “How long have you been having migraines?” This question pressures the patient to accept that they have a particular type of headache even though they may have another type (e.g., a cluster, tension, or sinus headache). If possible, researchers should adopt questions from other surveys that have already been validated.13
 
Questions can be open-ended or closed-ended.12-13,25 Open-ended questions allow respondents to write their own answers. These can explain a respondent’s motivation when it is not clear to the researcher.13,25 A disadvantage of using open-ended questions is that the responses need to be coded by the researcher, which allows for subjective interpretation.25 Closed-ended questions offer predetermined answer choices and are faster for researchers to analyze. One disadvantage of closed-ended questions is that sometimes the most appropriate response to a question is missing from the given options for a participant.25

Types of survey distribution

Surveys may be administered in-person, through telephone interviews, by mail, online, or by direct physical distribution to participants for return at a later date. 
 
If surveys are administered in-person by multiple surveyors, then the surveyors should be trained to ask questions using a standard format or similar phrasing. For example, one researcher might ask a respondent, “Are you married?” while a different researcher asks, “Are you married or do you have a life partner?” If the primary purpose of the question is to determine whether the person lives alone, then the second researcher is asking the more informative question, because it takes same-sex couples into account. Quality checks of each surveyor should be performed to ensure standardized phrasing. One advantage of in-person surveys is that respondents can ask clarifying questions to ensure they understand the meaning of particular survey items. There are two significant disadvantages of in-person surveys. They have been found to result in more positive scores in some fields compared to other forms of survey administration.26-27 Additionally, studies have shown that research participants are less likely to report morbidity or socially inadequate behaviors during in-person surveys.28-30 As a result, human interaction during the administration of a survey can affect the findings produced.
 
Telephone surveys possess their own unique attributes, particularly when compared to mail-out surveys. Comparison studies between telephone surveys and mail-out surveys have shown that telephone surveys are less expensive,31-33 are more likely to be fully completed,28,34-35 and make it more likely that a person will agree to take the survey in the first place.36-38 A disadvantage of telephone surveys is that they typically involve random-digit-dialing (RDD) of landline telephone numbers and not cell phones.39 As a result, potential respondents who only use cell phones,40-41 or who have unlisted landlines,42 will be missed.
 
Mail surveys involve sending paper surveys directly to respondents. Two significant advantages of mail surveys are that they can contain lengthy question sets that a participant may fill out at their convenience, and they can include pictures.13 Two significant disadvantages of mail surveys that have been reported are greater levels of missing data and higher numbers of inconsistent answers.12,43
 
Online surveys are hosted directly on websites. This allows researchers to require participants to answer a given question before proceeding to the next question.39,44 There are several online survey platforms that can be used, including: REDCap, Survey Monkey, Zoomerang, Qualtrics, and QuesGen.25 Online surveys result in savings on postage and printing costs45-46 and can use skip logic, in which the display of subsequent questions depends on initial responses.12,25 For example, only respondents who answer “female” to a question about gender will later see a series of female-only reproductive questions in their survey. Unfortunately, online surveys often have low response rates.47-50 Other disadvantages are a bias toward younger, more computer-literate participants51-52 and the exclusion of individuals who do not have access to a computer.11
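As a loose illustration of skip logic, the following Python sketch gates a follow-up question on an earlier answer. The question text, field names, and the "show_if" convention are hypothetical and are not drawn from any particular survey platform.

    # Minimal sketch of survey skip logic: a question is shown only when an
    # earlier answer matches a trigger value. All identifiers are hypothetical.
    def administer(questions):
        """Ask each question in order, skipping any whose 'show_if'
        condition is not satisfied by the answers collected so far."""
        answers = {}
        for q in questions:
            condition = q.get("show_if")  # e.g., ("gender", "female")
            if condition and answers.get(condition[0]) != condition[1]:
                continue  # skip: the triggering answer was not given
            answers[q["id"]] = input(q["text"] + " ")
        return answers

    survey = [
        {"id": "gender", "text": "What is your gender (male/female)?"},
        {"id": "pregnancies", "text": "How many pregnancies have you had?",
         "show_if": ("gender", "female")},
    ]

    if __name__ == "__main__":
        print(administer(survey))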
 
Handout surveys are distributed by healthcare providers directly in their office for later deposit into a drop box or to be returned via mail. An advantage of in-office surveys is that they are cheaper than mail-out and telephone surveys.53 One disadvantage is that in-office handout surveys have the potential for measurement bias if office gatekeepers distribute the survey to selected patients, rather than according to a predetermined random pattern.54

Factors to consider when designing a new survey



Figure 1 lists seven questions that should be answered when designing a new survey.13 As survey questions are developed, researchers should look at existing surveys and attempt to emulate questions that are relevant to their topic. This provides a standard against which the results of the newly developed survey can be compared.


Fig. 1. Questions to answer as a survey is being developed.

Questions to ask prior to developing a survey:
1) What is the purpose?
2) Who are the respondents?
3) How will the survey be administered?
4) How will the responses be tallied (e.g., a rating system, or categorically unique answers)?
5) How much time will respondents be expected to spend on the survey?
6) Will the survey need to be translated into another language?
7) Will the respondents be anonymous?
 
Prior to creating a survey, it is important to understand the end purpose of the information that researchers are trying to gain. This requires investigators to be familiar with existing research on their topic and to develop a survey that helps identify relevant gaps in existing information. By identifying these gaps, future interventional studies can be developed to improve overall healthcare outcomes.
 
Questions must be developed with respondents in mind. The questions should be age-appropriate, education level-appropriate, culturally sensitive, and time-sensitive. Researchers need to make an effort to write questions that most respondents will be able to understand. Questions should not ask participants to recall information that happened too long ago, because this may increase the chance they will not remember the information accurately. Predetermined inclusion and exclusion criteria should also be established to ensure that the correct population is receiving the survey. If a population is not selected appropriately, the internal and external validity of the study may be affected. 
 
Surveys can be administered in-person, through telephone interviews, mailed, online, or through in-office handouts. Each option has its own strengths and weaknesses, some of which have already been discussed. 
 
Survey answers can be based on a rating scheme or they can involve nominal categories (e.g., patient race, religion). The Likert scale is an example of a rating scheme that uses a point value system.12,25 This scale can involve positively-scaled options or negatively-scaled options. The following example is of negatively-scaled Likert options: strongly agree (1), agree (2), neutral (3), disagree (4), and strongly disagree (5).12 With this form of scaling the more negative responses are worth a higher point value. When using Likert scales, some researchers suggest utilizing a combination of negatively-scaled and positively-scaled questions, which is referred to as a balanced scale, to reduce the response set effect. This effect is seen when participants choose one given answer many times in a row (e.g., strongly disagree) because they feel it likely applies to most questions.55 Other less-common rating scales include: Rasch model scaling, interval-ratio scaling, and semantic differential scaling.12
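To make the balanced-scale idea concrete, here is a minimal Python sketch of reverse-coding, assuming a 5-point scale like the one described above; the item names and the choice of which items are negatively scaled are hypothetical.

    # Sketch of scoring a balanced Likert scale: negatively-scaled items are
    # reverse-coded so that higher totals carry one consistent meaning.
    # Item names and the set of negatively-scaled items are hypothetical.
    SCALE_MAX = 5  # 5-point scale: strongly agree (1) ... strongly disagree (5)

    def reverse_code(value, scale_max=SCALE_MAX):
        """Flip a response on a 1..scale_max scale (1 <-> 5, 2 <-> 4, ...)."""
        return scale_max + 1 - value

    responses = {"item1": 2, "item2": 5, "item3": 1}  # one respondent's raw answers
    negatively_scaled = {"item2"}                     # items needing reverse-coding

    total = sum(reverse_code(v) if k in negatively_scaled else v
                for k, v in responses.items())
    print(total)  # 2 + 1 + 1 = 4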
 
When surveys are developed, researchers should, where possible, create domain-specific questions that can be answered in a reasonable amount of time.56-57 For example, the Short Form (SF)-36 has separate domains for physical health problems, emotional health problems, social activities, and other areas.58 If a survey includes too many questions, respondents may not want to complete it and may even answer questions inaccurately.
 
If a survey is intended to be distributed in multiple languages, then translation will be required. Ideally, researchers will translate the survey into the new language and then have it independently translated back to verify that the content is being relayed properly.
 
Surveys can either be anonymous or not. The benefit of anonymous surveys is that they allow researchers to ask more personal questions. For example, a researcher could ask participants about their sexual history.

Visual considerations in creating a survey



Surveys typically should utilize size 10 font or greater.13-14 If researchers anticipate that some of their survey respondents will have difficulty reading text (e.g., because of presbyopia), they can make the font even larger.25 When open-ended questions are used, the amount of open space left for responses is important. Studies have shown that how much respondents write is directly proportional to the size of the open space provided for writing on the survey.59-60 Finally, research has demonstrated that shorter surveys have higher response rates.56-57

How to test a survey on a small population



Prior to utilizing a new survey, it is ideal to test it on a small group of sample respondents. This can determine whether any of the questions are misunderstood. Testing a survey on a small group of respondents, either individually or in a focus group, can also identify unforeseen issues that may arise later when the survey is fully administered to a large population.12,14,61 Figure 2 illustrates some of the issues researchers should review when testing their survey.


Fig. 2. Survey testing questions a researcher should answer.

Questions that should be answered by survey testing:
1) Are the survey questions appropriate for respondents?
2) Are any questions misleading?
3) If the survey is given through interview, are the surveyors able to use the surveys appropriately?
4) Is the information obtained by the survey reliable?
5) Is the information obtained by the survey valid?

When pilot-testing a survey, researchers should choose a group of participants that is representative of the target population they ultimately want to sample. This can be done through probability sampling or non-probability sampling.12,14  
 
Probability sampling means that any person within a known target population of the study might be selected.12,14 It includes: simple random sampling, stratified random sampling, systematic sampling, and cluster sampling. Simple random sampling is typically performed by matching a list of participants in a given population against a predetermined random number list (e.g., recruit applicants #1, 5, 12, and 14).12,14 Thus, participants are recruited from the sample pool at random. Stratified random sampling involves dividing a population into subgroups, termed strata, and then taking a random sample from each subgroup.12,14,62 For example, a researcher may want to sample participants with low back pain as the main population. The researcher then develops strata based on age groups (e.g., dividing the population into 18-and-under, 19-30, 31-50, and 51+ years-of-age strata) and analyzes low back pain as it applies to the various age groups. Systematic sampling involves picking participants at set intervals; an example would be selecting every 5th person out of the total population for the study.12,14 Finally, cluster sampling involves selecting participants within some form of cluster (e.g., participants from one physical therapy clinic, as opposed to all physical therapy clinics in a city).
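The four probability sampling schemes just described can be sketched with Python's standard library as follows; the population list stands in for a sampling frame of participant identifiers, and all group sizes and strata labels are illustrative only.

    # Loose sketches of the four probability sampling schemes, using
    # illustrative numbers. "population" is a list of participant IDs.
    import random

    population = list(range(1, 101))  # e.g., 100 candidate participants

    # Simple random sampling: every member has an equal chance of selection.
    simple = random.sample(population, k=10)

    # Systematic sampling: every 5th person after a random starting point.
    start = random.randrange(5)
    systematic = population[start::5]

    # Stratified random sampling: random draws from predefined strata
    # (pretend these ID ranges correspond to the age strata in the text).
    strata = {"18_and_under": population[:20],
              "19_to_30": population[20:50],
              "31_plus": population[50:]}
    stratified = {name: random.sample(group, k=5)
                  for name, group in strata.items()}

    # Cluster sampling: pick whole clusters (e.g., clinics), keep all members.
    clusters = [population[i:i + 20] for i in range(0, 100, 20)]  # 5 "clinics"
    chosen_clusters = random.sample(clusters, k=2)
    cluster_sample = [p for c in chosen_clusters for p in c]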
 
Non-probability sampling means that only a particular set of individuals is chosen to participate in the survey.12,14 It can involve convenience sampling, snowball sampling, quota sampling, or focus groups.12,14 Convenience sampling utilizes individuals who are readily available for sampling based on opportunity; an example would be a classroom of student volunteers.12,14 Snowball sampling involves surveying participants and then asking them to recommend people they know in the same demographic being studied.12,14,63 Quota sampling involves determining a set number of participants in given subgroups that need to be sampled and then recruiting that quantity.12,14 For example, a researcher could choose to sample 50 men and 50 women with whiplash. Finally, focus groups typically involve structured interactions among 10-20 participants during which survey questions can be reviewed.64-66
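Quota sampling in particular reduces to a simple rule that can be sketched in a few lines of Python; the subgroup labels and quota sizes below are hypothetical.

    # Sketch of quota sampling: accept volunteers only while their
    # subgroup's quota is unfilled. Labels and quotas are hypothetical.
    quotas = {"men": 50, "women": 50}            # target counts per subgroup
    recruited = {group: [] for group in quotas}

    def offer(volunteer_id, group):
        """Recruit the volunteer if their subgroup still has open slots."""
        if group in quotas and len(recruited[group]) < quotas[group]:
            recruited[group].append(volunteer_id)
            return True
        return False

    offer("p001", "women")  # accepted while the "women" quota is open
    offer("p002", "men")    # accepted while the "men" quota is open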
 
When administering a survey, response rates may vary, and there is no set standard response rate. Unsolicited mail-out surveys can have response rates as low as 20%.13,17 Survey response rates may be increased by using incentives; studies have shown that monetary or gift incentives significantly increase survey response rates.12,56,67-74

Measurement of survey reliability and validity



Psychometrics is the study of the design, administration, and interpretation of quantitative assessments.16 It often involves the evaluation of reliability and validity. Quantifying these attributes is important for assessing the quality of a survey.
 
Reliability refers to the reproducibility of the data from a survey instrument. Test-retest reliability is assessed by having the same respondents repeat the survey at two different time points.16,75 Correlation coefficients (e.g., r values) can be calculated to compare the responses of a group of respondents at the two time points; values for r at or above 0.7 are considered good.16 The practice effect, or the bias from remembering one’s previous answers to survey questions, can be treated as a covariate when the same group is surveyed twice in close temporal succession. A covariate is a side variable that is not directly controlled but can impact the main dependent variable studied. To solidify this point, assume a researcher is tracking athlete caloric intake across different age groups. If the researcher measures data throughout the entire year, the findings could be affected during the winter holidays (e.g., around Thanksgiving); thus “season” could act as a covariate. The practice effect can be handled by changing the order of the survey questions or by altering a few words in each question without changing the context of the questions.16
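As a minimal sketch of the test-retest calculation, the following Python snippet correlates fabricated scores from the same respondents at two time points and applies the r of 0.7-or-above rule of thumb from the text (statistics.correlation requires Python 3.10 or later):

    # Sketch of test-retest reliability: correlate the same respondents'
    # survey scores at two time points. All scores are fabricated.
    import statistics

    time1 = [12, 15, 20, 22, 30, 18, 25]  # scores at the first administration
    time2 = [13, 14, 21, 20, 31, 17, 26]  # the same respondents, retested

    r = statistics.correlation(time1, time2)  # Pearson r
    print(f"test-retest r = {r:.2f}:", "good" if r >= 0.7 else "questionable")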
 
Validity refers to how well a survey measures what it is intended to measure. The different kinds of validity include face, content, criterion, and construct validity.16 Face validity involves showing a survey to untrained individuals and asking them if the survey appears to reasonably address its topic.16,25 This is the least stringent form of validity.16 Face validity can be assessed by providing respondents with a survey and then asking them afterward whether they feel the survey focused on a particular topic; if the survey did not focus on the intended topic, the researchers could improve the questions. Content validity involves having one or more subject matter experts review a survey to determine if it only includes what it should.16 This form of validity involves multiple experts rating how strongly they agree that a given statement in a survey is essential to the intended topic domain. For example, it could involve raters reviewing a statement and deciding that it is ‘essential’, ‘useful, but not essential’, or ‘not essential’. Criterion validity provides quantitative evidence as to how well one survey compares to another.16 For example, a new low back pain survey could be given to low back pain patients along with the more established Roland-Morris Low Back Pain and Disability Questionnaire,76 and the researcher could then compare scores on the new survey against the more established one. Construct validity is the most valuable form of validity, but it often takes years to establish.16 It is essentially a measure of the degree to which a survey measures what it claims to measure after being analyzed with multiple populations over several years.
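The ‘essential / useful, but not essential / not essential’ expert ratings described above are often quantified with Lawshe’s content validity ratio, CVR = (ne - N/2) / (N/2), where ne is the number of experts rating an item essential and N is the total number of experts. The text does not name a specific method, so this sketch is one common choice, with fabricated ratings:

    # Sketch of Lawshe's content validity ratio for one survey item.
    # CVR ranges from -1 to 1; higher values mean broader expert agreement.
    def content_validity_ratio(ratings):
        n = len(ratings)
        n_essential = sum(1 for r in ratings if r == "essential")
        return (n_essential - n / 2) / (n / 2)

    # Ten hypothetical experts rate one survey item.
    ratings = ["essential"] * 8 + ["useful, but not essential"] * 2
    print(f"CVR = {content_validity_ratio(ratings):.2f}")  # CVR = 0.60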

How to report survey data



At a minimum, the authors of a survey journal article should introduce the significance of their topic, describe the methods and results of their survey, review their conclusions, and discuss the implications, recommendations, and future directions of research.17,77 Bennett et al reviewed 117 surveys and concluded that there is no standard way of reporting survey research.77 However, the most common weaknesses they observed were failing to: provide survey questions (35%), report on validity or reliability of the instrument (19%), provide the response rate (25%), discuss how well the sample represents the entire population (11%), and describe how missing data were handled (11%).77
 
Survey data can be reported graphically by utilizing percentages, pie charts, bar graphs, line charts, shaded regional maps, and other means. Data tables generally should report the sample size (n) and overall percentage for each population sampled, per question. Finally, if bar graphs are utilized, researchers should avoid truncating the Y-axis, which can lead readers to misinterpret the data.17 Assume a researcher wants to compare two groups of students with a survey on which scores range from 0 to 100%. Group A scores a 75% average, while group B has a 90% average. If a researcher starts the Y-axis at 70% for their graph, the difference in performance between groups A and B will appear large (Figure 3). However, if the researcher starts the Y-axis at 0, the true lowest axis value for the data, the relationship between groups A and B will be more accurately demonstrated.


Fig. 3. Illustration of how Y-axis scaling can impact data visualization when reporting the exact same data from two groups.
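A rough sketch of how a figure like Figure 3 could be reproduced, assuming matplotlib is available; the two group means come from the example above.

    # Plot the same two group means with a truncated versus a zero-based
    # Y-axis to show how axis scaling changes the visual impression.
    import matplotlib.pyplot as plt

    groups, scores = ["Group A", "Group B"], [75, 90]
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

    ax1.bar(groups, scores)
    ax1.set_ylim(70, 100)               # truncated axis exaggerates the gap
    ax1.set_title("Y-axis starts at 70%")

    ax2.bar(groups, scores)
    ax2.set_ylim(0, 100)                # zero-based axis shows true proportion
    ax2.set_title("Y-axis starts at 0%")

    for ax in (ax1, ax2):
        ax.set_ylabel("Mean score (%)")
    plt.tight_layout()
    plt.show()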


Conclusions

Surveys are valuable for discovering data trends in populations. Many factors must be considered when developing and implementing new surveys, including an understanding of the cultural, psychological, and economic factors that may affect the population studied. Based on survey findings, future interventions can be developed to improve healthcare outcomes. One example is how the Institute for Clinical Systems Improvement (ICSI) developed adult acute and subacute low back pain health care guidelines based on a review of several health surveys.78

Acknowledgement

The author of this manuscript would like to thank Claire Noll, M.L.I.S. for assistance with editing.

References

1. Owen-Smith A, McCarty F, Hankerson-Dyson D, DiClemente R. Prevalence and predictors of complementary and alternative medicine use in African-Americans with acquired immune deficiency syndrome. Focus Altern Complement Ther 2012;17:33-42.
2. Chan C, Mok N, Yeung E. Aerobic exercise training in addition to conventional physiotherapy for chronic low back pain: a randomized controlled trial. Arch Phys Med Rehabil 2011;92:1681-1685.
3. Vincent H, George S, Seay A, Vincent K, Hurley R. Resistance exercise, disability, and pain catastrophizing in obese adults with back pain. Med Sci Sports Exerc 2014;46:1693-1701.
4. Azimi P, Ghandehari H, Sadeghi S, Azhari S, Aghaei H, Mohmmadi H, Montazeri A. Severity of symptoms, physical functioning and satisfaction in patients with lumbar spinal stenosis: a validation study of the Iranian version of the Swiss Spinal Stenosis Score. J Neurosurg Sci 2014;58:177-182.
5. Goertz M, Thorson D, Bonsell J, Bonte B, Campbell R, Haake B, et al. Institute for Clinical Systems Improvement: Adult Acute and Subacute Low Back Pain, 15th ed. Bloomington, MN: ICSI; 2012.
6. Chou R, Qaseem A, Snow V, Casey D, Cross Jr. T, Shekelle P, et al. Diagnosis and treatment of low back pain: a joint clinical practice guideline from the American College of Physicians and the American Pain Society. Ann Intern Med 2007;147:478-491.
7. Airaksinen O, Brox J, Cedraschi C, Hildebrandt J, Klaber-Moffett J, Kovacs F, et al. Chapter 4: European guidelines for the management of chronic nonspecific low back pain. Eur Spine J 2006;15:S192-300.
8. Anderson-Peacock E, Blouin J, Bryans R, Danis N, Furlan A, Marcoux H, et al. Chiropractic clinical practice guideline: evidence-based treatment of adult neck pain not due to whiplash. J Can Chiropr Assoc 2005;49:158-209.
9. Childs J, Cleland J, Elliott J, Teyhen D, Wainner R, Whitman J, et al. Neck pain: clinical practice guidelines linked to the international classification of functioning, disability, and health from the orthopaedic section of the American Physical Therapy Association. J Orthop Sports Phys Ther 2008;38:A1-34.
10. Fink A. How to Design Surveys: The Survey Kit 5. Thousand Oaks, CA: Sage Publications; 1995.
11. DePoy E, Gitlin L. Introduction to Research: Understanding and Applying Multiple Strategies, 4th ed. St. Louis, MO: Elsevier-Mosby; 2011.
12. Neutens J, Rubinson L. Research Techniques for the Health Sciences, 4th ed. San Francisco, CA: Benjamin Cummings; 2010.
13. Fink A. How to Ask Survey Questions: The Survey Kit 2. Thousand Oaks, CA: Sage Publications; 1995.
14. Fink A. How to Sample in Surveys: The Survey Kit 6. Thousand Oaks, CA: Sage Publications; 1995.
15. Fink A. How to Analyze Survey Data: The Survey Kit 8. Thousand Oaks, CA: Sage Publications; 1995.
16. Litwin M. How to Measure Survey Reliability and Validity: The Survey Kit 7. Thousand Oaks, CA: Sage Publications; 1995.
17. Fink A. How to Report on Surveys: The Survey Kit 9. Thousand Oaks, CA: Sage Publications; 1995.
18. Kleinman A, Eisenberg L, Good B. Culture, illness and care: clinical lessons from anthropologic and cross-cultural research. Ann Int Med 1978;88:251-258.
19. Pachter L. Culture and clinical care: folk illness beliefs and behaviors and their implications for health care delivery. JAMA 1994;271:690-694.
20. Jasti S, Siega-Riz A, Bentley M. Dietary supplement use in the context of health disparities: cultural, ethnic and demographic determinants of use. J Nutr 2003;133:S2010-2013.
21. Hsiao A, Wong M, Goldstein M, Yu H, Andersen R, Brown E, et al. Variation in complementary and alternative medicine (CAM) use across racial/ethnic groups and the development of ethnic-specific measures of CAM use. J Altern Complement Med 2006;12:281-290.
22. Kakai H, Maskarinec G, Shumay D, Tatsumura Y, Tasaki K. Ethnic differences in choices of health information by cancer patients using complementary and alternative medicine: an exploratory study with correspondence analysis. Soc Sci Med 2003;56:851-862.
23. McLaughlin L, Braun K. Asian and Pacific Islander cultural values: considerations for health care decision making. Health Soc Work 1998;23:116-126.
24. Bains S, Egede L. Association of health literacy with complementary and alternative medicine use: a cross-sectional study in adult primary care patients. BMC Complement Altern Med 2011;11:138.
25. Hulley S, Cummings S, Browner W, Grady D, Newman T. Designing Clinical Research, 4th ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2013.
26. Walker A, Restuccia J. Obtaining information on patient satisfaction with hospital care: mail versus telephone. Health Serv Res 1984;19:291-306.
27. de Vries H, Elliott M, Hepner K, Keller S, Hays R. Equivalence of mail and telephone responses to the CAHPS hospital survey. Health Serv Res 2005;40:2120-2139.
28. Brøgger J, Bakke P, Eide G, Gulsvik A. Comparison of telephone and postal survey modes on respiratory symptoms and risk factors. Am J Epidemiol 2002;155:572-576.
29. Gmel G. The effect of mode of data collection and of non-response on reported alcohol consumption: a split-sample study in Switzerland. Addiction 2000;95:123-134.
30. Galobardes B, Sunyer J, Antó J, Castellsagué J, Soriano J, Tobias A. Effect of the method of administration, mail or telephone, on the validity and reliability of a respiratory health questionnaire. The Spanish centers of the European asthma study. J Clin Epidemiol 1998;51:875-881.
31. Rissel C, Ward J, Jorm L. Estimates of smoking and related behavior in an immigrant Lebanese community: does survey method matter? Aust N Z J Public Health 1999;23:534-537.
32. Perkins J, Sanson-Fisher R. An examination of self- and telephone-administered modes of administration for the Australian SF-36. J Clin Epidemiol 1998;51:969-973.
33. Pederson L, Baskerville J, Ashley M, Lefcoe N. Comparison of mail questionnaire and telephone interview as data gathering strategies in a survey of attitudes toward restrictions on cigarette smoking. Can J Public Health 1985;76:179-182.
34. van Ooijen M, Ivens U, Johansen C, Skov T. Comparison of a self-administered questionnaire and a telephone interview of 146 Danish waste collectors. Am J Ind Med 1997;31:653-658.
35. Labarère J, François P, Bertrand D, Peyrin J, Robert C, Fourny M. Outpatient satisfaction: validation of a French-language questionnaire: data quality and identification of associated factors. Clin Perform Qual Health Care 1999;7:63-69.
36. Ngo-Metzger Q, Kaplan S, Sorkin D, Clarridge B, Phillips R. Surveying minorities with limited-English proficiency: does data collection method affect data quality among Asian Americans? Med Care 2004;42:893-900.
37. McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, et al. Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technol Assess 2001;5:1-256.
38. Rhee K, Allen R, Bird J. Telephone vs. mail response to an emergency department patient satisfaction survey. Acad Emerg Med 1998;5:1121-1123.
39. Sinclair M, O’Toole J, Malawaraarachchi M, Leder K. Comparison of response rates and cost-effectiveness for a community-based survey: postal, internet and telephone modes with generic or personalized recruitment approaches. BMC Med Res Methodol 2012;12:132.
40. Blumberg S, Luke J. Wireless substitution: early release of estimates from the National Health Interview Survey, July-December 2009. Centers for Disease Control and Prevention 2010:1-17.
41. Blumberg S, Luke J. Wireless substitution: early release of estimates from the National Health Interview Survey, January-June 2011. Centers for Disease Control and Prevention 2011:1-19.
42. Guterbock T, Diop A, Ellis J, Holmes J, Le K. Who needs RDD? Combining directory listings with cell phone exchanges for an alternative telephone sampling frame. Soc Sci Res 2011;40:860-872.
43. Groves R. Research on survey data quality. Public Opin Q 1987;51:S156-172.
44. Galliher J, Stewart T, Pathak P, Werner J, Dickinson L, Hickner J. Data collection outcomes comparing paper forms with PDA forms in an office-based patient survey. Ann Fam Med 2008;6:154-160.
45. Cabanoglu C, Warde B, Moreo P. A comparison of mail, fax, and web-based survey methods. Int J Mark Res 2001;43:441-452.
46. Jacobsen K. Introduction to Health Research Methods: A Practical Guide. Sudbury, MA: Jones & Bartlett Learning; 2012.
47. VanGeest J, Johnson T, Welch V. Methodologies for improving response rates in surveys of physicians: a systematic review. Eval Health Prof 2007;30:303-321.
48. Braithwaite D, Emery J, De Lusignan S, Sutton S. Using the internet to conduct surveys of health professionals: a valid alternative? Fam Pract 2003;20:545-551.
49. Kim H, Hollowell C, Patel R, Bales G, Clayman R, Gerber G. Use of new technology in endourology and laparoscopy by American urologists: internet and postal survey. Urology 2000;56:760-765.
50. Hollowell C, Patel R, Bales G, Gerber G. Internet and postal survey of endourologic practice patterns among American urologists. J Urol 2000;163:1779-1782.
51. Scott A, Jeon S, Joyce C, Humphreys J, Kalb G, Witt J, Leahy A. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Med Res Methodol 2011;11:126.
52. Aitken C, Power R, Dwyer R. A very low response rate in an online survey of medical practitioners. Aust N Z J Public Health 2008;32:288-289.
53. Gribble R, Haupt C. Quantitative and qualitative differences between handout and mailed patient satisfaction surveys. Med Care 2005;43:276-281.
54. Anastario M, Rodriguez H, Gallagher P, Cleary P, Shaller D, Rogers W, et al. A randomized trial comparing mail versus in-office distribution of the CAHPS clinician and group survey. Health Serv Res 2010;45:1345-1359.
55. Chong A, Fok S. Validation of the Chinese expanded euthanasia attitude scale. Death Studies 2013;37:89-98.
56. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ 2002;324:1183.
57. Nakash R, Hutton J, Jørstad-Stein E, Gates S, Lamb S. Maximizing response to postal questionnaires—a systematic review of randomized trials in health research. BMC Med Res Methodol 2006;6:5.
58. Cruser des A, Maurer D, Hensel K, Brown S, White K, Stoll S. A randomized, controlled trial of osteopathic manipulative treatment for acute low back pain in active duty military personnel. J Man Manip Ther 2012;20:5-15.
59. Christian L, Dillman D. The influence of graphical and symbolic language manipulations on responses to self-administered questions. Public Opin Q 2004;68:57-80.
60. Israel G. Effects of answer space size on responses to open-ended questions in mail surveys. J Off Stat 2010;26:271-285.
61. Aicken C, Cassell J, Estcourt C, Keane F, Brook G, Rait G, et al. Rationale and development of a survey tool for describing and auditing the composition of, and flows between, specialist and community clinical services for sexually transmitted infections. BMC Health Serv Res 2011;11:30.
62. Jackson C, Taubenberger S, Botelho E, Joseph J, Tennstedt S. Complementary and alternative therapies for urinary symptoms: use in a diverse population sample qualitative study. Urol Nurs 2012;32:149-157.
63. Biernacki P, Waldford D. Snowball sampling: problems and techniques of chain referral sampling. Soc Meth Res 1981;2:141-163.
64. Barnett M, Cotroneo M, Purnell J, Martin D, Mackenzie E, Fishman A. Use of CAM in local African-American communities community-partnered research. J Natl Med Assoc 2003;95:943-950.
65. Airhihenbuwa C, Kumanyika S, Agurs T, Lowe A, Saunders D, Morssink C. Cultural aspects of African American eating patterns. Ethn Health 1996;1:245-260.
66. Owen-Smith A, Sterk C, McCarty F, Hankerson-Dyson D, DiClemente R. Development and evaluation of a complementary and alternative medicine use survey in African-Americans with acquired immune deficiency syndrome. J Altern Complement Med 2010;16:569-577.
67. Yammarino F, Skinner S, Childers T. Understanding mail survey response behavior: a meta-analysis. Public Opin Q 1991;55:613-639.
68. Church A. Estimating the effect of incentives on mail survey response rates: a meta-analysis. Public Opin Q 1993;57:62-79.
69. Thorpe C, Ryan B, McLean S, Burt A, Stewart M, Brown J, et al. How to obtain excellent response rates when surveying physicians. Fam Pract 2009;26:65-68.
70. VanGeest J, Johnson T, Welch V. Methodologies for improving response rates in surveys of physicians: a systematic review. Eval Health Prof 2007;30:303-321.
71. Hopkins K, Gullickson A. Response rates in survey research: a meta-analysis of the effects of monetary gratuities. J Exp Educ 1992;61:52-62.
72. Oden L, Price J. Effects of a small monetary incentive and follow-up mailings on return rates of a survey to nurse practitioners. Psychol Rep 1999;85:1154-1156.
73. Everett S, Price J, Bedell A, Telljohann S. The effect of monetary incentive in increasing return rate of a survey to family physicians. Eval Health Prof 1997;20:207-214.
74. Martinez-Ebers V. Using monetary incentives with hard-to-reach populations in panel surveys. Int J Public Opin Res 1997;9:77-86.
75. Larson N, Neumark-Sztainer D, Story M, Van den Berg P, Hannan P. Identifying correlates of young adults’ weight behavior: survey development. Am J Health Behav 2011;35:712-715.
76. Macedo L, Maher C, Latimer J, Hancock M, Machado L, McAuley J. Responsiveness of the 24-, 18- and 11-item versions of the Roland Morris Disability Questionnaire. Eur Spine J 2011;20:458-63.
77. Bennett C, Khangura S, Brehaut J, Graham I, Moher D, Potter B, Grimshaw J. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Med 2011;8:e1001069.
78. Goertz M, Thorson D, Bonsell J, Bonte B, Campbell R, Haake B, et al. Institute for Clinical Systems Improvement. Adult acute and subacute low back pain health care guidelines. Nov 2012 update.