Southern Online Journal of Nursing Research


Issue 2, Vol. 5
May 2004


Accruing the Sample in Survey Research


Saunjoo L. Yoon, PhD, RN1; Claydell H. Horne, PhD, RN2

 1Assistant Professor; 2Adjunct Associate Professor

University of Florida College of Nursing, Gainesville





Although sampling is a challenge in any type of research, survey research is particularly vulnerable to bias based upon the sample. The purposes of this article are to discuss 1) the use of random sampling in survey research, 2) methods to enhance acquisition of the sample, and 3) the strengths and limitations of four types of sampling methods including telephone, mail, face-to-face (personal) interview, and electronic (Internet) surveys. Examples of survey types are presented and strategies are proposed to improve the sampling process and yield more generalizable findings.


Keywords: survey research, sample acquisition, mail survey, telephone survey, interview survey, electronic survey



Data from an entire population of interest are rarely feasible to obtain; therefore, careful sampling is necessary to obtain unbiased estimates of the desired population. Selection of the study population should be based on previous research and the current research question. Frequently, the findings from survey research are not generalizable beyond the selected sample either because a convenience sample is used or because the response rates from the sample are low. There are, however, ways to make the findings of survey research more generalizable and, therefore, more beneficial. Thus, the purposes of this article are threefold: 1) to discuss the use of random sampling in survey research, 2) to discuss methods to enhance acquisition of the sample, and 3) to discuss the strengths and limitations of four types of sampling methods including telephone, mail, face-to-face (personal) interview, and electronic (Internet) surveys. Examples of survey types are presented and for each type, strategies are proposed to improve the sampling process and yield more generalizable findings. Although important, the calculation of sample size is not a focus of this discussion.


Sampling Frames for Survey Research

Random sampling and convenience sampling are the two most common sampling techniques used by researchers to conduct quantitative studies, while purposive or judgmental sampling is used for qualitative studies. Random sampling can be defined as “a selection process in which each element in the population has an equal, independent chance of being selected.”1p295

In contrast, convenience sampling or accidental sampling, a type of nonprobability sampling, involves the use of “the most conveniently available people as study participants.”1p292 Random sampling allows the researcher to estimate the degree of expected error and to more accurately generalize the findings to the population of interest, thereby increasing the credibility of research findings in survey research.2-3 Despite the advantages of using random sampling, many nursing researchers use convenience sampling because, as its name suggests, it is convenient, economical, and less time-consuming. Another deterrent to random sampling is the ethical problem of placing participants at risk if they are randomized into a control group.2 The advantages of convenience sampling must be weighed against its disadvantages, which include possible sampling bias, a less representative sample of the population, and limited generalizability of the results.1-2 Therefore, for non-experimental research such as a survey design, it is well worth the effort and cost for researchers to use random sampling methods whenever possible.
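The definition above, selection in which each element has an equal, independent chance of being chosen, can be illustrated with a short sketch. The sampling frame below is hypothetical (invented license numbers, not a real data source); the point is only that the draw is made without replacement from the full frame.

```python
import random

# Hypothetical sampling frame: 10,000 license numbers from a state
# licensing agency list (illustrative values only).
sampling_frame = [f"RN-{i:05d}" for i in range(1, 10001)]

random.seed(42)  # fixed seed so the draw can be reproduced

# Simple random sample without replacement: every element of the frame
# has an equal, independent chance of being selected.
sample = random.sample(sampling_frame, k=500)

print(len(sample), len(set(sample)))  # 500 drawn, all distinct
```

Because `random.sample` draws without replacement, no subject can be selected twice, and every element of the frame is equally likely to appear in the final sample.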

Sample Sources for Survey Research

Once researchers have identified their population of interest, they should determine what sources they will use for sample selection. Public records such as national or state licensing agencies, Departments of Safety and Motor Vehicles, voter registration lists, and telephone directories are usually available for a minimal charge at national, state, and local levels.  

When choosing a source for subject selection, researchers should be knowledgeable about the types of information provided. It is important to identify sample limitations, such as cost, unavailable demographic data, and questions about whether the sample composition is representative of the study population. Although the cost of obtaining the listing may be a limitation, most publicly available lists are reasonably priced according to the cost to the agency. Researchers should determine that the necessary demographic information such as age, gender, or race is available and that the individuals on the list can be sorted by demographic characteristics. For example, the voter registration office in a given area can supply information on the percentage of registered voters, as well as on age, gender, and race. Thus, researchers can be confident that subject selection is representative in these areas. If a researcher is studying adult women over 18 years of age in a given geographical area and decides to use the Department of Safety and Motor Vehicles as a source, it would be important to know the percentage of women over 18 years of age without a current driver’s license. Also, the researcher would need to determine whether the cohort of people with driver’s licenses has a racial representation similar to the population of interest.

If a mailed survey is to be used, the source should provide a current address, which is often problematic in our mobile society. Software programs can be purchased to validate addresses for large lists of individuals. It is important to understand that unidentified, undeliverable addresses may adversely affect response rates. Researchers often cannot tell the difference between a non-respondent and a person who did not receive the survey because of an outdated address.

If a phone survey is planned, the researcher must have current phone numbers and must know the percentage of residences in the United States without telephones and the percentage of residences with unlisted telephone numbers. Information on telephone access in the United States and in individual states is readily available from the U.S. Census Bureau (see Table 1). Telephone directory data tend to be a representative source, since only 1.2% of owner-occupied homes and 4.9% of renter-occupied homes within the U.S. are without telephones.4 A future concern related to dependence on telephone directories for generalizable data is the trend toward mobile phones instead of conventional landline phones, since cell phone usage continues to rise.5-6


Table 1.    Number of owned and rented households with telephone access in the United States*

                      Owner Occupied                         Renter Occupied
                      With Telephone/ %   Without/ %         With Telephone/ %   Without/ %

United States         68,987,521/ 98.8    828,992/ 1.2       33,921,875/ 95.1    1,741,713/ 4.9
Alabama               1,226,085/ 97.4     32,601/ 2.6        437,438/ 91.4       40,956/ 8.6
Alaska                134,801/ 97.3       3,702/ 2.7         80,115/ 96.4        2,982/ 3.6
Arizona               1,260,415/ 97.4     33,222/ 2.6        570,591/ 93.9       37,099/ 6.1
Arkansas              700,923/ 96.9       22,535/ 3.1        285,026/ 89.3       34,212/ 10.7
California            6,510,872/ 99.5     35,365/ 0.5        4,823,969/ 97.3     132,664/ 2.7
Colorado              1,109,537/ 99.4     6,768/ 0.6         525,824/ 97.0       16,109/ 3.0
Connecticut           867,844/ 99.8       1,898/ 0.2         419,431/ 97.1       12,497/ 2.9
Delaware              214,704/ 99.4       1,342/ 0.6         79,985/ 96.7        2,705/ 3.3
Dist of Columbia      100,449/ 99.2       767/ 0.8           141,605/ 96.3       5,517/ 3.7
Florida               4,402,889/ 99.1     38,822/ 0.9        1,795,149/ 94.7     101,069/ 5.3
Georgia               1,995,712/ 98.3     33,581/ 1.7        914,517/ 93.6       62,559/ 6.4
Hawaii                226,016/ 99.2       1,767/ 0.8         168,964/ 96.3       6,493/ 3.7
Idaho                 336,335/ 98.9       3,578/ 1.1         124,213/ 95.7       5,519/ 4.3
Illinois              3,057,188/ 99.0     31,936/ 1.0        1,395,663/ 92.9     106,992/ 7.1
Indiana               1,644,229/ 98.5     24,854/ 1.5        623,502/ 93.4       43,721/ 6.6
Iowa                  824,174/ 99.1       7,253/ 0.9         305,365/ 96.1       12,484/ 3.9
Kansas                709,593/ 98.7       9,280/ 1.3         299,022/ 93.7       19,996/ 6.3
Kentucky              1,093,344/ 97.2     31,954/ 2.8        422,691/ 90.8       42,658/ 9.2
Louisiana             1,099,706/ 97.8     25,289/ 2.2        486,859/ 91.7       44,199/ 8.3
Maine                 368,076/ 99.2       2,844/ 0.8         143,288/ 97.3       3,992/ 2.7
Maryland              1,334,659/ 99.5     6,935/ 0.5         614,199/ 96.1       25,066/ 3.9
Massachusetts         1,504,549/ 99.8     3,699/ 0.2         916,050/ 97.9       19,282/ 2.1
Michigan              2,758,204/ 98.7     35,142/ 1.3        927,710/ 93.5       64,605/ 6.5
Minnesota             1,404,118/ 99.4     8,606/ 0.6         469,368/ 97.3       13,035/ 2.7
Mississippi           725,837/ 95.9       31,314/ 4.1        252,065/ 87.1       37,218/ 12.9
Missouri              1,519,237/ 98.5     23,073/ 1.5        611,243/ 93.7       41,041/ 6.3
Montana               243,609/ 98.3       4,091/ 1.7         104,912/ 94.5       6,055/ 5.5
Nebraska              445,849/ 99.2       3,457/ 0.8         207,497/ 95.7       9,381/ 4.3
Nevada                453,225/ 99.1       4,020/ 0.9         280,863/ 95.6       13,057/ 4.4
New Hampshire         329,505/ 99.6      1,278/ 0.4          139,838/ 97.2       3,985/ 2.8
New Jersey            2,003,488/ 99.6     7,810/ 0.4         1,004,955/ 95.4     48,392/ 4.6
New Mexico            451,750/ 95.2       22,685/ 4.8        187,258/ 92.0       16,278/ 8.0
New York              3,720,170/ 99.5     19,077/ 0.5        3,203,986/ 96.6     113,627/ 3.4
North Carolina        2,139,976/ 98.5     32,294/ 1.5        897,100/ 93.5       62,643/ 6.5
North Dakota          169,766/ 99.1       1,544/ 0.9         82,767/ 96.4        3,075/ 3.6
Ohio                  3,040,933/ 99.0     31,581/ 1.0        1,307,123/ 95.2     66,136/ 4.8
Oklahoma              895,820/ 97.6       22,321/ 2.4        384,458/ 90.6       39,694/ 9.4
Oregon                850,560/ 99.3       6,330/ 0.7         461,354/ 96.8       15,479/ 3.2
Pennsylvania          3,382,468/ 99.3     23,699/ 0.7        1,328,855/ 96.9     41,981/ 3.1
Rhode Island          244,374/ 99.7       776/ 0.3           158,113/ 96.8       5,161/ 3.2
South Carolina        1,081,657/ 97.7     25,962/ 2.3        388,422/ 91.1       37,813/ 8.9
South Dakota          195,142/ 98.6       2,765/ 1.4         86,647/ 93.8        5,691/ 6.2
Tennessee             1,538,993/ 98.6     22,468/ 1.4        627,483/ 93.5       43,961/ 6.5
Texas                 4,639,854/ 98.4     77,440/ 1.6        2,518,591/ 94.1     157,469/ 5.9
Utah                  497,222/ 99.1       4,437/ 0.9         193,257/ 96.8       6,365/ 3.2
Vermont               168,513/ 99.3       1,264/ 0.7         68,486/ 96.7        2,371/ 3.3
Virginia              1,819,243/ 99.0     18,715/ 1.0        822,274/ 95.5       38,941/ 4.5
Washington            1,457,489/ 99.4     9,496/ 0.6         782,799/ 97.3       21,614/ 2.7
West Virginia         537,792/ 97.1       15,834/ 2.9        163,903/ 89.6       18,952/ 10.4
Wisconsin             1,417,655/ 99.4     9,005/ 0.6         632,681/ 96.2       25,203/ 3.8
Wyoming               132,972/ 98.1       2,516/ 1.9         54,401/ 93.6        3,719/ 6.4
Puerto Rico           748,632/ 81.4       171,079/ 18.6      212,023/ 62.1       129,591/ 37.9

* U.S. Census Bureau4



Issues Related to Non-Response to Survey Research

Non-response is a major concern to researchers conducting survey research using any survey mode because the non-response rate may affect the representativeness of the sample. Although there is no documented minimum acceptable response rate in the literature, the Office of Management and Budget of the federal government requests that government-contracted surveys achieve a minimum 75% response rate.7p42 However, wide ranges of response rates are recognized in different survey studies.3 Dillman3p150 suggests and discusses five elements for survey studies to achieve high response rates. They are as follows:

1.      a respondent-friendly questionnaire,

2.      up to five contacts with the questionnaire recipient,

3.      inclusion of stamped return envelopes,

4.      personalized correspondence, and

5.      a token financial incentive that is sent with the survey request.


Other elements affecting response rates relate to the type of survey mode used such as mail, telephone, face-to-face (personal interview), and Internet (web) surveys; geographical locations (e.g., rural vs. urban); and salience.7-9 Each type of survey mode has its own strengths and limitations; therefore, a mixture of these survey modes may increase the response rate.3,7 When survey modes are mixed, one must consider time, cost, and compatibility of the survey modes. To increase the response rate in survey research, it is imperative for the researcher to choose the best survey mode(s) for the selected sample from the very beginning of the study design.


Selected Survey Modes: Strategies to Increase Response Rates

Telephone Survey

A telephone survey can be an economical and convenient mode for accessing a large number of persons in a short time frame; it is flexible and not geographically bound. The phone numbers of subjects can come from a generated list or from random-digit-dialing (RDD). An advantage of the generated list is that, if addresses are known, an advance letter can be mailed prior to the telephone call and small advance incentives can be given. It is difficult to achieve a high response rate with RDD.7p47 Limitations of telephone surveys include the general public's distaste for calls from persons they do not know; the high number of calls (6-10) to one household needed to make contact;7p46 the restriction of information to that appropriate for telephone conversations; limited privacy; and the exclusion of persons with certain disabilities. Specifically, persons who are hard of hearing (a large percentage of the oldest old) and persons with mild cognitive impairment are disadvantaged by telephone surveys, whereas these groups may be able to effectively complete a mailed survey or a face-to-face interview survey.

There are ways to increase response rates with telephone surveys. If possible, use a combined mail and phone mode with an advance letter and incentive. Quickly identify the caller as the person who sent the advance letter so that the caller will not be falsely viewed as a telemarketer. If RDD is used, start the conversation with an identification and a succinct purpose for the call. Be courteous; do not start with “small talk” such as “how are you today?” Speak clearly and slowly, give an approximate time that the telephone interview will take, and ask permission to proceed. Keep questions simple and give explicit directions for responses before asking the question. The survey should be formatted for short answers such as yes or no or the end points of a Likert scale; a full Likert scale is difficult for most telephone respondents. Response rates for telephone interviews are highly dependent upon the interviewer’s skills.
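The random-digit-dialing approach mentioned above can be sketched as generating a random four-digit suffix for a randomly chosen area-code/exchange prefix, which is what gives unlisted numbers the same chance of selection as listed ones. The exchanges and function name below are hypothetical, for illustration only.

```python
import random

def random_digit_dial(exchanges, n, seed=None):
    """Generate n telephone numbers by attaching a random four-digit
    suffix to a randomly chosen area-code/exchange prefix, so that
    listed and unlisted numbers are equally likely to be dialed."""
    rng = random.Random(seed)
    return [f"{rng.choice(exchanges)}-{rng.randint(0, 9999):04d}"
            for _ in range(n)]

# Hypothetical exchanges serving the survey's geographical area.
exchanges = ["352-273", "352-392", "352-846"]
numbers = random_digit_dial(exchanges, n=5, seed=1)
print(numbers)
```

Note that many generated numbers will be unassigned or non-residential, which is one reason RDD requires many dial attempts per completed contact.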

Mailed Survey

A common approach to survey research is the mailed survey. Its strengths include easy access to participants, written questions that are easier to communicate than verbal ones, responses that can be more detailed and contemplated, and the participant's ability to respond at a more convenient time. However, the mailed survey is not without limitations, such as convincing a person to participate without the human element of an interviewer, the inability to differentiate between undelivered mail and non-respondents, and the need for the respondent to have skills in reading and writing.

Response rates to mailed surveys using random sampling are usually low (around 30%1,12-13); therefore, additional measures are needed to increase this rate. The more that is done to encourage the respondent to complete and mail the survey immediately, the more likely it is to be returned. Dillman reports response rates over 70% on mailed surveys when some of the following techniques are used.3

Time Considerations

The researcher should choose a time for mailing surveys when people are more likely to respond and less likely to put the survey aside, never to resurface. Year-end holidays are not good times for surveys to arrive in the mail; therefore, it is poor planning to mail surveys during the months of November or December. Also, times of high-volume mail, such as the first of the year or the first of the month, should be avoided. A strategic plan is to mail the surveys so that they will arrive on a Tuesday or Wednesday. If people do not complete the survey immediately, they frequently use the weekend to catch up on mail that arrives in the middle of the week. Generally, summer is not an ideal time to mail surveys, because people are often away on vacation. They are not as likely to complete a survey that comes with a volume of other mail upon their return home. Researchers need to know the migratory trends of the geographical area they are surveying, since some people spend months at a time in another location.
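Picking a mailing date that lands arrival on a Tuesday or Wednesday is a small scheduling calculation. A minimal sketch with the standard library follows; the two-day transit time is an assumed figure, not a postal guarantee, and the function name is illustrative.

```python
from datetime import date, timedelta

def mailing_date(earliest, transit_days=2):
    """Return the first date on or after `earliest` whose estimated
    arrival (`transit_days` later) falls on a Tuesday or Wednesday
    (weekday() values 1 and 2)."""
    d = earliest
    while (d + timedelta(days=transit_days)).weekday() not in (1, 2):
        d += timedelta(days=1)
    return d

send = mailing_date(date(2004, 5, 6))  # earliest possible: a Thursday
arrive = send + timedelta(days=2)
print(send, arrive.strftime("%A"))
```

The same loop could be extended to skip November, December, and other high-volume mail periods noted above.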

Method of Mailing and Preparation

First-class mail should be used for greater efficiency. Although bulk mail is less expensive, it may not be delivered for up to three weeks, and busy people are not as likely to open bulk mail upon receipt or sometimes even at all. First-class mail carries a sense of increased importance. A postage stamp that is attractive and not controversial will generate more response than metered mail. To encourage prompt return, a self-addressed, stamped envelope should be included. Again, a postage stamp is preferred for the return mail. A request should be made for the return of mail that cannot be forwarded or for the forwarded mail address to be sent to the researcher so that address records can be updated and the researcher can account for undelivered mail that would otherwise have to be considered a non-response.

The introductory letter is of great importance. Rather than a generic letter, use the person’s name. Databases and computer merge capabilities make this personal touch easy. Another personal touch is added by individually signing each letter. The letter should be short while still informative. Be clear and concise about your request and include a convincing reason for the person to participate. Also mention the approximate time it will take the recipient to complete the survey. State a reasonable time for return, such as two weeks after mailing. If too much time is allowed, there is greater likelihood that the response will be put off and forgotten. If too little time is given and the person cannot immediately complete and mail the survey, this too may discourage response. Finally, anonymity or confidentiality of information must be assured. If possible, the letter will get better response if it is written or sent by someone whose name the respondent recognizes. Of course the letter should be on official letterhead, typed, neat, and professionally written. A small incentive included with the mailing will also increase the likelihood of a returned survey. Some suggestions are a miniature pencil to use in survey completion, a gift certificate, a small laminated calendar or other useful tool such as a calorie counter, a small book of postage stamps, or a small amount of cash.

Survey Characteristics

The survey must be attractive and professional in appearance. Simplicity is the best rule. In addition to providing quality measurements, the directions and questions must be clearly and simply stated.3,7 The length of the questionnaire should be short. If the questionnaire appears to be long, create appropriate subsets of questions for certain subsamples of your respondents.7 Surveys should be as short as possible to enhance response rates and should include only those questions essential for addressing the hypotheses and research questions. If items do not address the purpose, hypotheses, and research questions, they should not be included. Item choices should have little narrative, with the option of “not applicable” or “other” included when appropriate. Begin the survey with simple non-threatening information. Some of the more sensitive demographics, such as age, income, ethnicity, and religious preference, should be placed at the end of the survey. After completing most of the survey, a respondent who may have initially hesitated to answer sensitive demographic information will be more likely to complete the survey if this information is at the end; if such information is at the beginning, however, a person may not even start the survey. A non-threatening method of asking age is to ask date of birth; income information is better asked by giving income categories for selection. A good model for race information is the census survey.10 Allow a write-in space for race so that each person can feel included in the manner in which they choose. Sometimes, it is the little things that make a person feel excluded and refuse to participate. Provide simple directions for returning the survey. Otherwise, a completed survey may never be put into the mail for return.

Anonymous versus Confidential Surveys

Researchers must determine if the survey will be anonymous or confidential. In anonymous surveys, the respondents remain unidentifiable; in confidential surveys, the respondents are identifiable but are assured that information will be kept private. The most important issue is that the participants understand the privacy issues and are clear about the researcher’s intent. Researchers must be honest and forthright in the issues of anonymity and confidentiality. If the information sought is of a sensitive nature, anonymity will likely result in better response rates; however, the disadvantage of anonymity is that the first respondents cannot be identified and omitted in subsequent mail-outs. Fowler7 suggests an alternative strategy for follow-up when anonymity is desired. A postcard with the respondent identifier is sent with the survey, and a request is made to place the card in the mail when the survey is completed. Fowler’s card reads as follows:

Dear Researcher, I am sending this postcard at the same time that I am putting my completed questionnaire in the mail. Since my questionnaire is completely anonymous, this postcard will tell you that you need not send me a further reminder to return the questionnaire.7p50


Fowler reports that the number of postcards usually equals the number of returned surveys.

Subsequent Mail-Outs

If participants can be identified by returned surveys, Dillman3 recommends that a postcard reminder be mailed to all non-respondents about 10 days following the first mail-out. Then, ten days after the postcard, a letter, questionnaire, and return stamped envelope are sent to non-respondents. If the survey is confidential, an identification code is used, rather than the recipient’s name, to identify participants. The code and its purpose are briefly explained to participants. If the surveys are anonymous, it may be necessary to send a reminder card to all potential participants, remembering to thank those who returned the survey. It may also be advantageous to send another copy of the survey to all, in case the person has misplaced the first mailing. However, since respondents are not identified in anonymous surveys and each individual must receive the survey again, the disadvantages of re-sending the survey are added cost and the chance that some subjects may unintentionally submit two surveys.
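The follow-up timetable above (postcard roughly 10 days after the first mail-out, replacement packet 10 days after the postcard) can be sketched as a simple schedule calculation; the start date and function name are illustrative.

```python
from datetime import date, timedelta

def followup_schedule(first_mailout):
    """Dillman-style follow-up: a reminder postcard about 10 days after
    the first mail-out, then a replacement packet (letter, questionnaire,
    stamped return envelope) 10 days after the postcard."""
    postcard = first_mailout + timedelta(days=10)
    second_packet = postcard + timedelta(days=10)
    return {"first mail-out": first_mailout,
            "postcard reminder": postcard,
            "second mail-out": second_packet}

schedule = followup_schedule(date(2004, 5, 4))
for step, when in schedule.items():
    print(f"{step}: {when}")
```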

Prediction of Response Rates From Mailed Surveys

An average response rate from a random sample for mailed surveys has been reported to be around 30%.12-13 The suggestions provided above can increase this rate. There tend to be fast responders, scattered respondents during the return period, and a few responders after the requested time for return. This pattern repeats itself with the second mailing, but in smaller numbers. The authors’ history of survey research yields some predictions for response rates. About 30% of the total number of respondents will return the surveys within the first week after mailing. The time between the end of week one and the effects of the second mail-out will yield 50% of the total return. The second mailing of the survey to non-respondents will yield another 20%. A close projection of the total response rate may be made from the early respondents. For example, if 1,000 surveys are mailed and 180 are returned within the first week, you can project that your return rate will be about 60% (see Figure 1).



Figure 1.    Equation for Predicting Response Rate

Step 1. Number of first-week respondents : 30% :: total number of respondents (x) : 100%

Step 2. RR = x (total number of respondents) / total number of potential subjects (surveys mailed)

Step 1 calculation: 180 × 100% = 30% × x, or 180 = 0.3x, thus x = 600 [ratio and proportion]

Step 2 calculation: 600 / 1,000 = 60% return rate
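The ratio-and-proportion calculation in Figure 1 reduces to two divisions, sketched below. The 30% first-week fraction is the authors' historical estimate, and the function name is illustrative.

```python
def project_response_rate(first_week_returns, surveys_mailed,
                          first_week_fraction=0.30):
    """Project the final response rate from early returns.

    first_week_fraction is the historical share of all eventual
    respondents who reply within the first week (about 30%)."""
    projected_respondents = first_week_returns / first_week_fraction  # Step 1
    return projected_respondents / surveys_mailed                     # Step 2

# Worked example from the text: 1,000 surveys mailed, 180 back in week one.
rate = project_response_rate(180, 1000)
print(f"{rate:.0%}")  # 60%
```

The projection is only as good as the 30% assumption; a researcher whose own return history differs can substitute a different first-week fraction.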



Face-to-Face (personal) Interview Survey

Another mode of survey research is the face-to-face interview. The strengths of a face-to-face interview include enhanced descriptive data and the opportunity to apply longer and more complex instruments. Also, the face-to-face interview may enhance data collection from special populations such as older adults and those with disabilities, particularly in areas such as response rate and missing data. Some types of responses, such as use of a Likert scale, yield better data from a face-to-face survey when compared to a telephone survey. There are limitations to the face-to-face interview in that it is more costly and time-consuming than a telephone or mail survey, and anonymity is impossible. Also, ‘social desirability response bias’, that is, the tendency to misrepresent responses according to desirable social values, may hinder the survey results of the face-to-face interview, particularly if the research questions pertain to sensitive information.1 It is important to provide a permissive environment and to assure respondents’ confidentiality to increase the likelihood of an honest response.1p359

An example of a multimodal sampling process follows (see Figure 2). A random selection of the sample, as described above, can be followed by telephone and/or mail contact to seek agreement and to make arrangements for the interview. If using mail contact, the researcher sends letters to the sample with postage-paid, self-addressed return postcards (see Figure 3). The researcher then can make arrangements via telephone to conduct the face-to-face interviews with people who return postcards indicating their willingness to participate.


Figure 3.    Example of Postcard for Subject Response


Electronic Survey

The use of the Internet for survey research is increasing. With Internet access on the rise, electronic transmission of surveys is a consideration for future research. Currently, however, only select populations have high Internet usage (e.g., college students and higher-education faculty and staff). Persons who use the Internet more are much more likely to complete a web-based survey than those who rarely use the Internet. Therefore, persons who rarely use the Internet may be unintentionally excluded from electronic surveys, which would be a limitation of the survey findings of this method.

The proportion of households with Internet access more than doubled between 1997 and 2000. As of August 2000, 51% of U.S. households had one or more computers and 42% of U.S. households had at least one member who used the Internet at home,11 indicating that computer availability and Internet access are becoming synonymous. Currently, households with Internet access are more likely to have higher incomes and to be married-couple households and families with a school-age child. Access varies by geographical area, with the West the most likely and the South the least likely to have Internet access. Households in metropolitan areas are more likely to have Internet access, while nonmetropolitan areas are the least likely. Whites and Asians are more likely than Blacks and Hispanics to have Internet access.11 Potential strengths and limitations are well documented (see Table 2).7

Some of the major strengths of the electronic mode are low cost, fast returns, and provision of adequate time for thoughtful answers. Limitations include restricted sample selection, resulting in a generalizability issue, and the difficulty of identifying a specific sample population. Electronic surveys also limit control over who actually completes the survey and the number of times one participant may submit the information. While certain study groups may be an ideal population for an electronic survey, others are not representative of the population at the current time. Response rates for electronic surveys are usually dependent upon the use of mechanisms that “decrease human effort in the person-computer interaction”14p2 including the assurance that the systems function as designed. For example, if a web page is not accessible when the participant decides to respond, the chances are high that this person will become a non-respondent. There is much potential for the electronic survey mode; however, many areas of concern such as security and sampling issues still exist. Other detailed suggestions and discussions about complex issues of using electronic mail and the web can be found in the book by Schonlau, Fricker, and Elliott.15


Table 2.   Benefits and limitations of telephone, mail, face-to-face interview, and electronic survey modes

Telephone

Benefits:
Human contact
Lower cost per interview than personal contact
Availability of random-digit-dialing (RDD)
Faster data retrieval
Not geographically bound
Improved response rates with multimodal use

Limitations:
High number of calls to make contact
Hearing dependent
Lacks visual component
No advance incentive possible
RDD yields lower response rate
Limited interviewer observations
Questionnaire constraints
May lack privacy for response
Decreased use of land lines
Not anonymous
Dislike for and confusion with telemarketer calls

Mail

Benefits:
Can be widely distributed
Self-administered in own time
Visual component available
Can use spacing, graphics, print to enhance
Advance incentive possible
Choice of anonymity
Relatively low cost in personnel

Limitations:
Lacks human contact
Respondents need language skills
Lack of control over who responds
Dependent on accurate mailing addresses

Face-to-Face Interview

Benefits:
Human contact
Rapport and confidence established
Effective method for enhancing cooperation
Non-verbal cues available
Enhanced descriptive data
Complex instructions can be used
Access to persons with disabilities
Advance incentive possible
Longer surveys are possible

Limitations:
Costly in relation to personnel time
Requires trained interviewers
Geographical limitations
Anonymity is impossible
Social desirability response bias

Electronic

Benefits:
Self-administered in own time
Visual component
Less time consuming, thus lower costs
Potential for fast return of data
Choice of anonymity

Limitations:
Lacks human contact
No advance incentive possible
Limited to Internet users
Respondents need language skills
Need for visual acuity
Lack of control over who responds
Concern by respondents for privacy


Bias stemming from survey research sampling can be reduced if sources for the sampling population are carefully selected and the sampling process is correctly done. Randomization can be achieved from a larger population if it has similar characteristics to the population of interest for the study. Researchers may select telephone, mail, face-to-face interview or Internet to deliver the survey. The strengths and limitations of each method must be assessed to choose the best approach. Consideration should be given to multimodal approaches. Survey research can be a critical and valuable part of a research project.



1.     Polit, D. F., & Beck, C. T. (2004). Nursing research: Principles and methods (7th ed.). Philadelphia: Lippincott.

2.    Talbot, L. A. (1995). Populations and samples. In L. A. Talbot (Ed.), Principles and practice of nursing research. St Louis: Mosby.

3.    Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons Inc.

4.    U.S. Census Bureau. (2001). QT-H9. Occupancy, Telephone Service, Housing Facilities, and Meals Included in Rent: 2000 Data Set: Census 2000 Summary File 3 (SF 3) - Sample Data. Retrieved May 8, 2003 from

5.    Anonymous. (April 29, 2003). Cell phone usage overtakes landlines in the US. Retrieved from

6.    Hannon, D. (April 19, 2001). Wireless Outlook: Industry experts see cell phone usage plateau. Retrieved from

7.    Fowler, F. J. (2002). Survey research methods (3rd ed.). Thousand Oaks, CA: Sage.

8.    Dillman, D. A., & Carley-Baxter, L. R., (2000). Structural determinants of mail survey response rates over a 12- year period, 1988-1999. Retrieved December 22, 2003 from

9.    Dillman, D. A., Phelps, G., Tortora, R., Swift, K., Johrell, J., & Berck, J. (2000). Response rate measurement differences in mixed mode surveys using mail, telephone, interactive voice response and the Internet. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, May 2000, Montreal, Canada. Retrieved December 22, 2003 from

10.  U.S. Census Bureau. (2000). Racial and ethnic classifications used in Census 2000 and beyond. Retrieved May 8, 2003 from

11.   U.S. Census Bureau. (2001, September). Home computers and internet use in the United States: August 2000. Current Population Reports. U.S. Census Bureau, U.S. Department of Commerce. Retrieved from

12.   Waltz, C. F., Strickland, O. L., & Lenz, E. R. (1991). Measurement in nursing research (2nd ed.). Philadelphia: F.A. Davis Company.

13.  Sand-Jecklin, K., & Badzek, L. (2003). Nurses and nutraceuticals: Knowledge and use. Journal of Holistic Nursing, 21, 383-397.

14.  Bowker, D., & Dillman, D. A. (2000). An experimental evaluation of left and right oriented screens for web questionnaires. Presentation to the annual meeting of the American Association for Public Opinion Research, Portland, Oregon. Retrieved October 2000 from

15.  Schonlau, M., Fricker, R. D., & Elliott, M. N. (2002). Conducting research surveys via e-mail and the web. Santa Monica, CA: RAND.



Copyright, Southern Nursing Research Society, 2004
