Artemis Takes Aim

User surveys

User surveys are conducted to 1) measure the use of library resources, 2) measure the level of user satisfaction in obtaining needed resources, 3) evaluate the quality of service provided, and 4) obtain constructive criticism on improvements to the delivery of library services (Reitz, 2004). User surveys are widely used because they are economical to administer and because they provide a direct means of obtaining data from the receivers of the service. To yield accurate data from which to analyze and evaluate library service, attention must first fall on the design of the user survey.


Elements that shape user survey design include: 1) selecting the target population to be surveyed, 2) developing the questions to be asked, 3) choosing the mode of administration, 4) creating the form or instrument in which questions are to be delivered, 5) conducting the survey, 6) organizing the received data in preparation for analysis, and 7) analyzing the data (Mathison, 2005).


Methods and instruments used for conducting user surveys may include a variety of tools, all with the aim of providing users a mode for communicating their “perception of how well the library use experience answered their information problem, improved their productivity or the quality of their output” (Miller, 2004). The methods most widely used are questionnaires and interview forms.


Guidelines for constructing questionnaires and interview questions aim to increase the reliability and accuracy of results by formatting questions to “ensure that the respondents understand the question and can be reasonably expected to have the knowledge to answer them. Each question must be related to the survey’s objectives and must be precise and unambiguous” (Miller, 2004).


Thomas F. Burgess offers a model for the design of questionnaires. He suggests that questionnaire design be divided into three elements: a) determining the questions to be asked, b) selecting the question type for each question and specifying the wording, and c) designing the question sequence and overall questionnaire layout (Burgess, 2001). Burgess writes that “a key link needs to be established between the research aims and the individual questions via the research issues” (Burgess, 2001); the questions should be designed to answer the aims of the research directly. Burgess lists a variety of question types to be included in the construction of the questionnaire, including open-ended and closed questions, single- and multiple-response questions, and ranked and rated responses (Burgess, 2001). Questions must be designed to garner precise responses, yet remain brief enough to promote quick reading and answering. The layout of the questions must likewise be conducive to moving through the inquiries posed and promote completion of the questionnaire or interview. Attention to demographics also ensures that a wide range of respondents is captured. Pilot programs are suggested prior to wide implementation of user surveys (Miller, 2004).


User surveys rely on indicators for measuring satisfaction. On a questionnaire, users may rate their satisfaction with the reference service provided on a scale; however, the “indicator … may not be a valid representation of ‘satisfaction’ because different users may assess the same level of reference services differently on the Likert scale” (Feather & Sturges, 2003). Analysis of the data must therefore consider carefully whether the intended object of measurement has been captured. As such, user surveys are used as part of a wider system of library service evaluation that includes a wide range of methods for collecting the data to be analyzed.





Burgess, T. F. (2001). A general introduction to the design of questionnaires for survey research. University of Leeds, Information Systems Services.


Feather, J., & Sturges, P. (Eds.). (2003). International encyclopedia of information and library science (2nd ed.). New York: Routledge.



Mathison, S. (Ed.). (2005). Encyclopedia of evaluation. Thousand Oaks, CA: Sage.


Miller, L. (2004). User satisfaction surveys. Australasian Public Libraries and Information Services, 17(3), 125-133. Retrieved November 10, 2006, from Library and Information Science Abstracts database.


Reitz, J. M. (2004). Dictionary for library and information science. Westport, CT: Libraries Unlimited.





Lettycia Terrones

November 2006


