Artemis Takes Aim

Evaluation

Definition

Evaluation is the determination of the value, significance, worth, or condition of something or someone (OED). With regard to reference service, evaluation is used to critique the usefulness, efficacy, and efficiency of reference transactions. Evaluating reference services is necessary in order to produce “better reference librarians, better use of resources (including online services) and certainly better results for the library users” (Katz 1984, pp. 3-8).

Methods

Evaluation of reference services is the subject of much debate within the library and information science field. Everyone sees the need for it, but there is no consensus on the best way to accomplish it (Katz 1984, pp. 3-8). Part of the problem is that the field has few standards for what constitutes a reference transaction or what best indicates quality. Evaluators could examine the reference staff, the reference transaction, or the reference resources. Evaluation can measure how well reference staff meet the needs of the specific populations using the library, how well they use the resources available, how satisfied users are with the reference transaction, how accurate the answers provided are, and how useful those answers are.

Because reference service mixes interpersonal interaction with resource use, evaluation can take many forms, including recording the number of questions, recording time spent with questioners, measuring user satisfaction, rating qualities desirable in reference staff, and compiling unanswerable questions (Stone 1942). These methods are considered obtrusive tests, in that reference staff self-report or are aware that evaluation is occurring, and each has its advantages and disadvantages. Another approach is the unobtrusive test, in which the library and reference staff do not know they are under evaluation. This test is usually performed by researchers who visit or otherwise engage reference staff, ask a question, and record their findings. The unobtrusive test has problems as well: asking the same question of various reference staff does not take into account the individuality of each library (i.e., the question may not be one its patrons would ever ask), and it is difficult to create exactly equivalent lists of questions while taking into account the population each library serves (Hubbertz 2005).

Most recently, researchers have focused on measuring accuracy, utility, and user satisfaction as an evaluation of reference service, supplemented by various other measures such as budget, library collection, and staff ability (Richardson 2002). Richardson (2002) finds that none of these measures is sufficient on its own: because accuracy, utility, and user satisfaction are unrelated to one another, results for any one of them present an inaccurate picture of the reference transaction as a whole.

Current trends in evaluation of reference services consider what can be termed “virtual reference.” Virtual reference can include reference services delivered through email, instant messaging, chat, or co-browsing, among other channels (MARS 2004).

Resources for Evaluation of Reference Services:

Katz, B., & Fraley, R. (Eds.). (1984). Evaluation of Reference Services. New York: Haworth Press.

Novotny, E. (2001). “Evaluating Electronic Reference Services: Issues, Approaches and Criteria.” The Reference Librarian 74: 103-120.

Ronan, J., Reakes, P., & Cornwall, G. (2002/2003). “Evaluating Online Real-Time Reference in an Academic Library: Obstacles and Recommendations.” The Reference Librarian 79/80: 225-240.

Whitlatch, J. (2000). Evaluating Reference Services: A Practical Guide. Chicago: ALA.

Sources:

Hubbertz, A. (2005). “The Design and Interpretation of Unobtrusive Evaluations.” Reference & User Services Quarterly 44.4: 327-335.

Katz, B. (1984). “Why We Need to Evaluate Reference Services: Several Answers.” In B. Katz & R. Fraley (Eds.), Evaluation of Reference Services. New York: Haworth Press.

MARS Digital Reference Guidelines Ad Hoc Committee. (2004). “Guidelines for Implementing and Maintaining Virtual Reference Service.” Retrieved 21 November 2006 from http://www.ala.org/ala/rusa/rusaprotools/referenceguide/virtrefguidelines.htm.

Oxford English Dictionary Online. Retrieved 21 November 2006 from http://www.oed.com.

Richardson, J. (2002). “Reference is Better than We Thought.” Library Journal 127.7: 41-42.

Stone, E. (1942). “Methods of Evaluating Reference Service.” Library Journal 67: 296-298.

Carrie E. Williams
