“It was information based”: Student Reasoning when Distinguishing Between Scholarly and Popular Sources

May 16, 2018
In the Library with the Lead Pipe

In Brief:
We asked students to find an article and answer the following questions: Is this a popular or scholarly article? How can you tell? We analyzed student answers to better understand the reasoning used to distinguish between scholarly and popular sources. Our results suggest that framing sources as “scholarly or popular” is confusing rather than clarifying for students.

by Amy Jankowski, Alyssa Russo, Lori Townsend

Introduction

Scholarly and popular sources are a longstanding construct in library instruction. A quick Google search brings up an abundance of LibGuides and tutorials on the subject. However, we have found that teaching students to identify and classify information sources using a rigid binary categorization is problematic. In an effort to better understand the ways students conceptualize and evaluate sources, we stepped back to ask: what kind of reasoning do students apply when distinguishing between scholarly and popular sources?

Scholarly and popular sources in information literacy instruction and assessment

The Information Literacy Competency Standards for Higher Education (Association of College & Research Libraries, 2000) specifically address scholarly and popular sources; one student learning outcome states that an information literate student, “Identifies the purpose and audience of potential resources (e.g., popular vs. scholarly, current vs. historical)” (p. 8). The explicit inclusion of the ability to differentiate “popular vs. scholarly” sources reinforces the prominence of this binary paradigm. The subsequent Framework for Information Literacy for Higher Education (American Library Association, 2015), however, does not specifically address scholarly and/or popular sources. Instead, it presents complex core concepts that underlie information creation, accessibility, and broader context, with numerous threads connecting to aspects of scholarly and popular sources.

Many information literacy studies include discussion of scholarly and popular sources, underscoring their prevalence in library instruction practice. In several studies, scholarly and popular sources are primarily presented to undergraduate students through discussion of specific, mutually exclusive characteristics, such as author qualifications, the presence of a bibliography, and editorial or peer-review processes (Chapman, Pettway, & Scheuler, 2002; Ferrer‐Vinent & Carello, 2008; Fleming-May, Mays, & Radom, 2015; Knight, 2002; Lowe, Booth, Tagge, & Stone, 2014; Shao & Purpur, 2016).

Studies also suggest that students struggle with understanding and articulating what exactly a scholarly or popular source is (Fleming-May et al., 2015; Radom & Gammons, 2014). Kim and Sin (2011), as well as List and Alexander (2018), found that while students effectively articulate source evaluation criteria, they do not reliably use these criteria when selecting sources. Relatedly, Carter and Aldridge (2016) examined the words that students use to explain their evaluation of sources and found that students tend to focus on content—primarily through vague or inaccurate terms and circular reasoning—which correlated with ineffective assessments.

Problematizing the way librarians talk about scholarly and popular sources

Studies by Insua, Lantz, and Armstrong (2018) and Fisher and Seeber (2017) have discussed how the discrete scholarly and popular binary hinders students’ development of a more complex understanding of sources through in-depth evaluative engagement. Seeber (2016) further problematizes the oppositional scholarly versus popular binary, emphasizing that this framing, in which scholarly sources are positioned as “better” than popular sources, centralizes the library and consequently alienates students. He suggests a deliberate change in phrasing from “scholarly versus popular” to “scholarly and popular and ___,” which eliminates competition, presents terms on equal footing, and brings other types of information sources into the discussion.

Students overestimate their abilities

Gross and Latham (2009, 2011) consistently found that students, particularly those with below-proficient information literacy skills, showed a tendency to overestimate their abilities. Two additional studies suggest students overestimate their abilities to correctly identify information sources as scholarly or popular. Bandyopadhyay (2013) found that while a majority of students were able to correctly identify a research article as such when it was presented as a single item, only 26.7% correctly identified two research articles when presented in a group of four articles. Molteni and Chan (2015) studied student confidence as it pertains to aspects of the research process and found that though 74% of students rated their confidence in differentiating between primary and secondary materials as “good” or higher, students were able to correctly identify a scholarly or popular source only 51% of the time.

Recognizing online information formats is difficult for students

More broadly, studies suggest that students have difficulty recognizing information formats encountered online. Buhler and Cataldo (2016) investigated student perceptions of online information resources through a survey, which provided sample sources and asked respondents to identify the corresponding online format or source type. The authors found a high level of source misidentification across formats and respondent demographics. Leeder (2016) tested students’ abilities to identify scholarly and non-scholarly sources in an online format, including blogs, trade journals, scholarly research articles, and book reviews. He found that students misidentified these formats 60% of the time.

Methods

Participants and setting

The purpose of this qualitative analysis was to understand how first-year undergraduate students determined whether an article they found in a library database was popular or scholarly. The research population for this study consisted of students enrolled at the University of New Mexico (UNM) in English Composition III (ENGL 120). UNM is a large, Hispanic-serving institution with a Carnegie classification of highest research activity. ENGL 120 is an undergraduate course that most incoming first-year students take to fulfill UNM Core Curriculum requirements. As part of a flipped classroom, many students were required to complete the online ENGL 120 library tutorial. We used a convenience sampling method focusing on the 1,745 students enrolled in ENGL 120 during the spring semester of 2016. From that population, a sample of 955 students was included in this study. All materials and procedures were approved by the UNM Office of the Institutional Review Board.

Materials

The online ENGL 120 library tutorial was developed using Qualtrics, an online survey platform. In one module students were asked to find an article about their research topic in a library database and answer the following questions about that article: “Is this a popular or scholarly article? How can you tell?” Responses were collected in Qualtrics and later exported into spreadsheets and Overview, an open-source document mining application, for analysis.
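
To make the export step concrete, here is a minimal sketch in Python of reading such an export for analysis. The file name and column names are hypothetical, since the actual Qualtrics export layout isn’t detailed here:

    import csv

    # Hypothetical file and column names; a real Qualtrics export will differ.
    with open("engl120_tutorial_responses.csv", newline="", encoding="utf-8") as f:
        responses = [
            {
                "citation": row["article_citation"],           # source the student found
                "determination": row["popular_or_scholarly"],  # "Is this a popular or scholarly article?"
                "reasoning": row["how_can_you_tell"],          # "How can you tell?"
            }
            for row in csv.DictReader(f)
        ]

    print(f"Loaded {len(responses)} student responses")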

Procedure and data analysis

Each of the 955 student responses consisted of two basic elements: 1) citation information for the source the student found, along with an answer to the prompt “Is this a popular or scholarly article?”; and 2) an answer to a second prompt, “How can you tell?” We considered the two elements separately.

First, we determined each student’s choice about whether their source was scholarly or popular, and then we made our own determination for comparison. We deleted responses that didn’t contain enough information for us to identify the source. Students were asked to choose between Scholarly and Popular; we, however, classified sources as Scholarly, Popular, or Other. A fourth classification, Unclear, was used when we couldn’t tell how a student was classifying the source they found.

Second, we developed codes based on student responses to the “How can you tell?” question. Code labels are shorthand that describe meaning in the text of each student response. An initial batch of 100 student responses was analyzed to develop a set of emergent codes that would eventually be applied to the rest of the data. Batch by batch, enough codes emerged to sufficiently describe all of the reasoning presented in student responses. The research team deliberated about the codes throughout this iterative process by adding, deleting, and modifying the list so that the code list was manageable, yet specific enough to capture interesting occurrences in the data. Table 1 contains the final list of codes, with definitions and examples.

Table 1: Codes¹

Code Label | Code Definition | Example (student responses)
anatomy | individual elements/parts of the article or their arrangement within the article—e.g., abstract, works cited, images, advertisements, volume info, length, etc.—or whether the article exists in print or online | “It is a scholarly article, i can tell because the authors name is there along with the volume number along with page numbers.”
audience | mention of audience, who the article was written for | “Magazines are usually aimed at the general audience, which leans towards popular as opposed to scholarly.”
authority | who wrote/produced/published the article, credibility of the source, credentials, affiliations (“published by the NY Times”) | “I know this is what it is because it source seems pretty legit.”
currency | the time since publication, +/- | “It is quite old.”
labeled | an icon or written label indicates source type (scholarly, periodical, news, etc.) | “It is a review. The page tells you before you click on the title”
language | anything about language, e.g., big words, writing style, tone | “Popular. The article is not written to sound eloquent or free of slang, it is written in a more laid back fashion.”
multiple authors | mentions that the article is written by more than one author | “written by an epidemiologist and two professors…”
named format | type of publication—e.g., encyclopedia, journal, academic journal, newspaper, magazine, etc., or an instance of that type of publication (newspaper article, academic journal article, blog post)—as justification for decision or part of reasoning (NOT just mentioned in passing) | “I think it’s a popular article because it was published in a news magazine.”
peer review | mentions peer-review or the process of peer-review | “Scholarly article because of the wide amount of peer revision included in this article.”
popularity | actually popular—number of views, popularity of a topic, many views/shares/citations, ranking in search results | “I dont’ think it is a popular article because it looks pretty unvisited.”
purpose | reason or aim for which an information source exists or was created | “Scholarly, beacuase this type of article is used for discoveries in the scientific community.”
qualities | something about the nature of the information in the article not covered by a more specific code—e.g., viewpoint, importance, objectivity—or valuing the information in the article (credible, reliable, good, in depth, etc.) | “No. It’s not very informative, and it’s obviously not informational”
research | presence or absence of research; response must indicate some understanding of research (experiments, investigation, talks about researchers), mention of outside research/evidence/sources and/or the methodology used to gather the information | “Scholarly, becasue it . . . describes the methods in which the data was gathered and what conclusions can be drawn from the data.”
search | searching for the article, how easy/hard it was to find, limiting the search, using specific keywords, anything to do with the search process, mention of search results, issues of access to article online | “It is a scholarly article. I limited my search to where only scholarly journals were given.”
title | mentions specific publication title | “Popular because it’s in Men’s Health magizine, and it’s about bread.”
topic | describes subject or topic covered in chosen article, or mentions topic as part of reasoning | “Popular because batman is not a scholarly topic”

Third, we coded student responses. Multiple codes were sometimes assigned to fully represent the meaning in each student response. For example,

Student response:

“This is a scholarly article because it was written by an author who specializes in adolescent psychiatry. The article was peer reviewed a couple of times. The article was a little longer than a popular article. It also had a bibliography at the end of the reading.”
Codes assigned: authority, peer review, anatomy

All responses were initially coded independently by at least two of the authors. Both sets of independent coding were compared among the whole group, and any coding differences were discussed until consensus was reached. During this process, codes were further negotiated and refined, which sometimes required going back to re-code earlier batches.
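
As an illustration of this consensus step, here is a minimal sketch of how two coders’ independent code sets could be compared to surface disagreements for discussion. The response IDs and data structures are ours, not from the study:

    # Hypothetical structure: response id -> set of codes assigned by one coder.
    coder_a = {"r001": {"authority", "peer review", "anatomy"}, "r002": {"labeled"}}
    coder_b = {"r001": {"authority", "peer review"}, "r002": {"labeled", "search"}}

    def disagreements(a, b):
        """Yield (response id, codes only coder A applied, codes only coder B applied)."""
        for rid in sorted(set(a) | set(b)):
            only_a = a.get(rid, set()) - b.get(rid, set())
            only_b = b.get(rid, set()) - a.get(rid, set())
            if only_a or only_b:
                yield rid, only_a, only_b

    # Each disagreement was discussed by the whole group until consensus was reached.
    for rid, only_a, only_b in disagreements(coder_a, coder_b):
        print(rid, "A only:", sorted(only_a), "| B only:", sorted(only_b))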

Eventually, all of the student responses and codes were combined into one master spreadsheet and uploaded into Overview, a data mining application that allowed us to group responses by code or visualize data in a word cloud. These tools, as well as concept mapping applications, aided our thematic analysis.

Limitations

Using an online tutorial for our data collection came with a few limitations. The short-answer question format allowed students to be unclear in their scholarly or popular article determination. For example, several students began their answer with only a “yes” before giving their reasoning. Further, we did not offer an “I’m not sure” option, so students were forced to make a choice, which might have led to hedging or uncertainty in some responses. Additionally, many responses were ambiguous or extremely short; for this reason, 111 responses (11.6%) were excluded from our analysis.

Methods Coda: Defining scholarly, popular, and the “other” situation

Initially, we intended to determine when students were correct or incorrect in identifying sources as scholarly or popular, and to determine what types of reasoning correlated with correct and incorrect responses. The dichotomy of scholarly and popular sources is generally considered common knowledge among reference and instruction librarians, but we struggled to articulate the precise meaning of or division between these categories. For example, is everything published in an academic journal scholarly? If not, how do we categorize a news brief, book review, or editorial published in an academic journal? What about an article that isn’t research based but is specialized beyond a popular audience? Are there one or more definable intermediary categories of source types that fall outside the scholarly and popular divide, and if so, where exactly do we delineate the divisions?

In pursuing these questions, we used a definition for scholarly from a University of Illinois LibGuide titled “How Do I… Determine if a Source is Scholarly?” (n.d.) which states, “Scholarly sources . . . are written by experts in a particular field and serve to keep others interested in that field up to date on the most recent research, findings, and news”. Within this definition, editorials, book reviews, news reports, and other non-peer reviewed content published in journals may qualify as scholarly. We also established a very basic definition that describes popular sources as those that are intended to be read by a general audience. We drew a hard division in that scholarly content required a clear connection to original research, whereas non-research-based information, trends, or innovations for a specific professional audience would be categorized as other. The commonality in each definition is not form or specific attributes, but instead the purpose for which sources exist in the world and the community for whom they are intended.

Through our effort to standardize definitions for scholarly and popular formats, and considering the relative frequency with which students selected a diversity of other formats, we realized that judging whether students were correct or incorrect in their identification of scholarly or popular sources was less interesting and meaningful than the reasoning that brought them to these decisions. We therefore shifted our analytical focus away from correct and incorrect judgments and toward the reasoning behind students’ own scholarly or popular determinations.

Results

The results of this study are based on two different sets of data. The first (Dataset A) included all of the student responses where a source could be identified and reasoning was given, even those responses where it wasn’t clear whether the student classified the source as scholarly or popular. We used this data when analyzing student reasoning and generating codes and themes. Dataset A consists of 844 student responses. The second set (Dataset B) included only those student responses where it was clear how the student classified the source as scholarly or popular; this set was used in counting the number of scholarly/popular identified responses. Dataset B consists of 637 student responses. While this is a qualitative study, we did use counts to identify codes that were more strongly associated with scholarly or popular classifications made by students. These counts are not statistically significant correlations; we used them only to suggest or infer broad trends in the qualitative data. Each table below notes which dataset it uses.
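
The counts behind the tables below amount to simple frequency tallies over the coded responses. As a sketch of that arithmetic in Python (assuming, hypothetically, that each coded response is stored as a set of code labels, as in the earlier sketch):

    from collections import Counter

    # Hypothetical sample; Dataset A contains 844 coded responses.
    coded_responses = [
        {"authority", "peer review", "anatomy"},
        {"labeled"},
        {"labeled", "search"},
    ]

    n = len(coded_responses)
    counts = Counter(code for codes in coded_responses for code in codes)
    for code, count in counts.most_common():
        print(f"{code}: {count} ({count / n:.0%} of responses)")

On the full dataset, this is the calculation that yields, for example, 191 authority responses out of 844, or 23%, in Table 2.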

Table 2 shows the number of times each code was used, which identifies the most and least common reasoning used by students in their responses.

Table 2 (Dataset A)

Code | Number of times code used | % of responses with this code
authority | 191 | 23%
named format | 152 | 18%
research | 130 | 15%
qualities | 127 | 15%
popularity | 113 | 13%
anatomy | 104 | 12%
search | 101 | 12%
topic | 90 | 11%
labeled | 79 | 9%
title | 67 | 8%
audience | 51 | 6%
language | 42 | 5%
peer review | 41 | 5%
multiple authors | 34 | 4%
purpose | 29 | 3%
currency | 19 | 2%

Table 3 shows how often students identified a source as scholarly or popular and which codes were most commonly used with each.

Table 3 (Dataset B)

Determination | Count | % of Total | Most common codes
Scholarly | 467 | 73% | labeled, research
Popular | 170 | 27% | popularity, currency
Total | 637 | 100% |

As this is a qualitative study, the bulk of our substantive results are documented in the discussion of themes that follows. Table 4 shows the themes we identified in our analysis of the data and the codes that make up those themes.

Table 4 (Dataset A)

Theme | Codes
Access/systems | labeled, search
Authority | authority, multiple authors, peer review, title
Content | language, qualities, research, topic
Form | anatomy, named format
Popularity | popularity

Themes

Through qualitative coding and analysis, we were able to identify five broader themes in our data to further explore aspects of student reasoning. Each theme represents an evident trend, which we discuss conceptually and through examples.

Access/systems

One trend in reasoning related to how students were accessing information sources, primarily in terms of library systems or databases. The specific codes we identified related to this trend are search and labeled, which we grouped into the broader theme of Access/systems. The Access/systems codes indicate that students commonly rely on explicit indicators and faceted search capabilities within library systems or databases to help them make determinations about whether information sources qualify as scholarly or popular.

Looking at student reasoning classified under the labeled code, we frequently see students attribute specific labels to particular source types. Labeled is used much more frequently when students identify a source as scholarly rather than popular, which we can potentially attribute to a gap in student understanding of which labels are associated with popular sources (e.g., periodical).

Access/systems, where students refer to a label or icon as an explicit indicator of source type:

“Icon to the left of the title says “Academic Journal,” Therefore I assume it is a scholarly article.”

“Scholarly article (academic journal), because it says before the title of the article about what format this is.”

“opinion popular, and I can tell because it states it on the document.”

Through the search code, we see students connecting aspects of database searching to source type determination as well, particularly in relation to faceted searching or filters, through which students indicate to the database what type of source they want to find. We also see students conflate top search results with popularity and popular source type, as discussed under the Popularity theme.

Access/systems, where students refer to aspects of search—filters, keywords, faceted searching—as indicators of source type:

“It is a scholarly article. I know this because in my choice of singling out I checked that I only wanted scholarly articles.”

“Scholarly article, because I checked off scholarly articles that are peer reviewed.”

“Yes because it can be found using a lot of key words.”

Access/systems, where students associate a specific database with a certain source type or authority:

“Scholarly. Found it through UNM libraries”

“It is a scholarly article because I had to go to a specific search engine to find it.”

“This is a popular article because of the ability to access it from a regular web search like on google.”

In a small handful of instances, a student response was associated with some element of Access/systems but fell under a code other than labeled and search. These responses refer to aspects of how an information resource is accessed or how it is made available to an audience, suggesting a trend in which students take a resource’s accessibility or (un)availability into account as a way to determine source type.

Access/systems, where students associate an element of accessibility with a particular source type:

“Scholarly, because sometimes the article isnt available”

“I think it’s a popular article and not a scholarly article because it mentions it is peer reviewed and can be fully viewed online.”

“It’s a scholarly article. I can tell because it’s listed as an academic journal. This isn’t something that someone would find in a everyday magazine.”

Under the Access/systems theme, much of what students rely on are information systems created by or for libraries. When students rely on library systems to make source type or quality determinations, those systems, through resource labels and faceted search structure, control students’ success.

This also suggests that the systems we create to make library materials accessible may work to impede students’ deeper analysis and understanding of sources. Our library systems create categorical shortcuts for students unfamiliar with complex, discipline-based source formats, and the ways in which students interact with information sources are increasingly removed from the context of their broader geography—both physical (i.e., neighboring books on a shelf or adjacent articles in a daily newspaper) and digital (i.e., the browsable collection of articles in a journal issue or on a magazine’s homepage). Students may identify a source by recognizing a label; however, items in our systems are detached from their larger parent format, and the richness of context found in the whole information package or system is often lost.

Authority

Authority emerged as a theme encompassing discussion of the individuals and processes responsible for writing, producing, publishing, and providing access to articles. The specific codes we identified related to this trend include authority, multiple authors, peer review, and title. The authority code was most frequently applied to responses that mentioned author affiliation. Descriptors like “well known,” “major journal,” or “research institute” were occasionally included to demonstrate the authority of the affiliate. Students also took the presence of multiple authors as a sign of a scholarly article, although this shortcut could be misleading.

Authority, where students refer to author affiliation or multiple authors:

“Yes. It was published in a sports medicine journal by the division of orthopedic surgery at Duke.”

“This is a scholarly article as it is not associated with any popular entities, rather the entity listed is the international space station”

“It is a scholarly article because of the many authors that helped create it.”

“Scholarly because there are many authors many are professors”

Author expertise and the importance of a review process emerged as additional aspects of Authority. Students pointed to the peer review status of their article, described other editorial processes, and also pointed out when review processes were missing.

Authority, where students reference author expertise or an article’s review process:

“scholarly article because it was written by an author who specializes in adolescent psychiatry.”

“. . . she seems to be a popular author with many articles published .”

“This is a scholarly article it was wriiten by a professor and was peer reviewed”

“The information has to be reviewed and edited in order to be allowed in the magazine.”

Some students discussed Authority in terms of where they found their article online, such as mentioning website domains and scholarly search engines. Most frequently, these types of responses expressed an appeal to the library’s authority. Other comments indicated a limited understanding of information systems, mixing and matching terms like “website,” “search engine,” and “database.”

Authority, where students reference the library’s authority:

“Yes, its on UNM’s website thing.”

“Scholarly article because it is on a database website. ”

Participants also expressed Authority by naming specific publication titles as indicators of credibility. Recognizable titles, such as The New York Times or The Wall Street Journal, were described in terms of being a “big company,” “popular news company,” “reliable source,” or “reputable source.” The problem is that students went on to equate this credibility with scholarly status, which can be misleading.

Authority, where publication title does not help students identify popular articles:

“Scholarly , it came from NY Times, a very reliable source.”

“This is a scholarly article as it is from a reputable source of the US News World Report.”

In addition to misleading students, the implication that credibility and scholarly articles are synonymous reinforces the false information dichotomy that associates scholarly information with better information.

Content

The Content theme concerns responses where student reasoning centers on the type of information an article contains, including how that information is communicated. The Content theme focuses on the communication of the ideas contained in the source, which students interact with through reading. Content emerged across several codes: language, qualities, research, and topic.

Students sometimes used the presence of research or evidence as a basis for determining whether an article was scholarly or popular. This reasoning was more likely to be used by students when arguing that a source was scholarly.

Research, concerned with evidence or methodology:

“Scholarly; it used much more facts and figures than it did opinion. It was backed by hard research, not by opinion.”

“I would say this atricle is popular becasue there is not data presented. It has a formal tone, but uses language not specific to the field of study, making it easier for a general audience to understand.”

“Scholarly Article, because a commission board did extremely in depth research to produce this article.”

Research, where a study or research is referred to:

“yes, because it is a study performed on mice. People are interested in that”

“No. It is simply about a professional baseball player and how he became one of the best over time. There is no scientific experiments or data.”

Student reasoning around topic often asserted that certain topics are inherently more scholarly or popular than others, sometimes focusing on the approach taken to a particular topic as well. The topic code sometimes dovetailed with the popularity and currency codes when students used the perceived popularity or current nature of a topic to inform their choice.

Topic, where currency, popularity, or approach contributed to a determination of popular:

“I believe this article is popular because it mainly focuses on racial discrimination and the obstacles that African-Americans of all backgrounds had to overcome to become who they are today. This is a crucial, trending issue in today’s society, affecting many individuals.”

“This is both a popular and scholarly article. It’s popular because it discusses a relevant media topic, such as Star Wars, and is scholarly because it discusses gender roles surrounding Star Wars.”

Topic, where the topic or approach to the topic was considered scholarly:

“Its a scholarly article because it was written about the National Health Service”

“I would argue that it is a scholarly article because it discusses the U.S war on drugs and the legal aspects of it.”

Another common approach in the Content theme was to argue that some characteristic of the information and language demonstrated whether the source was scholarly or popular. Students taking this approach often described the language or content with adjectives—e.g., accurate, factual, in-depth, opinionated, reliable, informational, detailed, scientific, formal. Students often used value-laden language to distinguish between the content in scholarly and popular sources.

Qualities, where students described or characterized content:

“It is more of a popular artile than a scholarly one because it does not provide any information just helps you to think in a different way.”

“Scholarly, because the information is dry and too complicated to be directed toward a broad audience”

“scholarly, it was information based”

Language, where students characterized the language used:

“This is a scholarly article because in the first few sentences of the article, they use very large words such as stymied and oligarchs.”

“It is a scholarly article because it is published in a research article and because it explains things using scientific language.”

Student responses often associated scholarly articles with unbiased or credible information, and popular articles with opinionated, less credible information and entertainment. The reasoning associated with the qualities code in particular was often vague or relatively meaningless. Students were often unable to articulate reasoning that accurately typified the information contained in their source.

The Content theme highlights how students struggle to make logical and evidence-based assertions about the quality and purpose of sources based on the information contained in those sources and how that information is communicated.

Form

The Form theme emerged around student responses that used visual, structural, or other format-related cues to make decisions about their sources. This theme consists of the anatomy and named format codes. Students using this reasoning often referred to the types of characteristics librarians present in tables or lists that typify scholarly and popular sources.

Form, where student reasoning involved identifying specific elements:

“Scholarly, it has an abstract and hypothesis.“

“Scholarly because it had different volumes.”

“this is a popular article as it has no work cited page and seems to come from an old publication called New Republic”

“Popular because they are on Facebook and twitter and their are also comments at the end.”

Form, where students referred to a specific named format:

“I believe this is a popular article because it is originally from a volume of Futures for Children which is a newletter/magazine type of publication.”

“Its a Wall Street Journal so it could be counted as a scholarly journal, but it is a big news journal that people can trust to use for research.”

“It is a scholarly article because it from an engineering website.”

Student reasoning in this theme illustrates how the use of these types of indicators may encourage a superficial interaction with sources. The named format code, in particular, was associated with somewhat circular reasoning, such as “scholarly because its from an academic journal.” Though this reasoning is technically true, scholarly and academic are often used as near synonyms, so it’s a bit like saying it’s overcast because it’s cloudy.

Popularity

A fundamental misunderstanding of the term popular in relation to information sources frequently appeared in students’ responses. Many students’ discussion of popular articles was expressed in terms of popularity, that is, the idea of being well-liked by many people; popularity emerged as a code as well as an independent theme.

When a student stated that they found a popular article, we assumed that they found a non-scholarly article. However, when a student inserted the word “very” before “popular,” multiple interpretations opened up: the student may have meant that they found a very non-scholarly magazine article, or they may have meant that the source was well known and widely read. Instances like these underscore the context-sensitive nature of language and echo issues that Carter and Aldridge (2016) discussed, such as their observation that students rely on composition vocabulary to evaluate information, “despite explicit instructions to consider what they had learned from the librarian” (p. 27).

Popularity, where students conflate popular sources with popularity:

“it was published in the New York Times which is a very popular magazine with a wide variety of readers. [emphasis added]”

“Yes, because it was in a journal that is super popular. [emphasis added]”

Popularity was not limited to instances when students selected a popular article. Popularity emerged in several cases where students were unclear in their determination of whether the article was popular or scholarly. In fact, some students asserted that their article was both scholarly and popular.

Popularity, where students do not clearly decide whether their source is scholarly or popular, yet popularity emerges in their response:

“Yes, it has a works cited page and is one of the first choices that pops up on the database. It is also peer-reviewed.”

“It is not as popular as i thought it was going to be. There arent a whole lot of articles written on it.”

“Yes, I do feel as if this scholarly article is popular due to it being the first one that popped up on the list. As well as it being very information to the point where I feel as if many people have had the honor of reading it.”

Students most frequently indicated popularity by referring to their article’s ranking within the results page. Subject matter, particularly if the article covered a trending topic, was a specific indicator that students cited when identifying popular articles. In a similar fashion, some students wrote about the relationship between their article and its audience as an indicator for identifying popular articles. Participants commonly expressed that an article was popular because it appealed to everybody. Alternatively, some students described this relationship by asserting that their article could only be popular to a specific audience. Students also expressed the idea of popularity by commenting that their article had a lot of views, or that it had been used in other studies.

Popularity, where students refer to ranking in the search results, topic, and audience:

“It is one of the first articles so it implies that it’s been view alot.”

“Popular, because recycling and paper waste has become a huge topic discussed.”

“It might be popular for people who research about sea level rise.”

“It gives you a percentage of how often it is referred to in other scholarly articles.”

Conclusion and a call to action

In summary, we identified five themes in our data: Access/systems, Authority, Content, Form, and Popularity. Labels and faceted search tools featured in library information systems can impede students’ deeper analysis and understanding of sources. Students tended to associate scholarly articles with credibility and popular articles with less credible, subjective information. Students also struggled to make evidence-based assertions about the quality and purpose of sources based on the content of those sources. Finally, a fundamental misunderstanding of the term popular as meaning popularity also misled students’ decisions.

We believe that librarians can use these findings to inform our practice. First, we can admit that the way many of us have been teaching “scholarly vs. popular” relies on heuristics that are shorthand for what librarians already understand. Heuristics are mental shortcuts or rules of thumb that guide us through a decision-making process and help us make decisions quickly. But if students don’t understand the concepts upon which these heuristics are based, the heuristics can mislead them into relying on surface-level clues and encourage a bias toward familiar or simple information. Starting students with heuristics instead of a closer examination of the purpose of information formats may encourage misunderstandings.

Second, we can change the way we talk about information. In their study of how librarians and writing instructors talk about the research process and information literacy in the classroom, Holliday and Rogers (2013) found that “the words we use have consequences, some of them long-lasting,” and that adjusting our language “re-directs our own practice as teachers, especially in where we focus our instructional attention” (p. 268). In our own teaching, we no longer talk about “scholarly vs. popular” when characterizing information sources. This raises the question: how, then, do we talk about information? At UNM, we have dubbed this issue the container conundrum.²

Detailing the particulars of our developing approach is beyond the scope of this paper, but we can broadly state that it is based on a “format” threshold concept that is heavily informed by genre theory, from the field of rhetoric. We encourage students to examine three aspects of distinct information formats: purpose (why does this thing exist in the world and who made it), process (how is it created, both intellectually as well as physically, including quality control processes), and product (what typifies its final form, how do we recognize it, what elements are expected). We are also experimenting with techniques that conform more closely to the approach fact-checkers take in making accurate evaluations of sources rather than the deep reading techniques of the humanities or long librarian checklists like the CRAAP test.

We would encourage our readers to find alternative ways to help students make sense of information sources. Situating sources in a broader evaluative framework that takes into account the nuances of audience, purpose, and other real-world context is likely to lead students to more authentic understandings of the information landscape.


Thank you to our brilliant colleagues David Hurley and Jorge Ricardo López-McKnight, who helped to initially envision and lay the groundwork for this project. Thank you as well to Mark Emmons who consulted with us through our data analysis, and Susanne Clement, who helped with content review. And to Kevin Seeber, our external reviewer; Kellee Warren, our internal reviewer; and Denisse Solis, our publishing editor, we offer our enthusiastic thanks for your time and thoughtful insights through the peer-review, editorial, and publication process.


References

American Library Association. (2015). Framework for Information Literacy for Higher Education. Retrieved from http://bit.ly/2oZTPzb

Association of College & Research Libraries. (2000). Information Literacy Competency Standards for Higher Education. Retrieved from http://bit.ly/2Gp08m2

Bandyopadhyay, A. (2013). Measuring the disparities between biology undergraduates’ perceptions and their actual knowledge of scientific literature with clickers. The Journal of Academic Librarianship, 39(2), 194–201. http://bit.ly/2L5l70N

Buhler, A., & Cataldo, T. (2016). Identifying e-resources: An exploratory study of university students. Library Resources & Technical Services, 60(1), 23–37. http://bit.ly/2Gp08Cy

Carter, T. M., & Aldridge, T. (2016). The collision of two lexicons: Librarians, composition instructors and the vocabulary of source evaluation. Evidence Based Library and Information Practice, 11(1), 23–39. http://bit.ly/2Ir0a2r

Chapman, J. M., Pettway, C. K., & Scheuler, S. A. (2002). Teaching journal and serials information to undergraduates: Challenges, problems and recommended instructional approaches. The Reference Librarian, 38(79–80), 363–382. http://bit.ly/2Gp099A

Ferrer‐Vinent, I. J., & Carello, C. A. (2008). Embedded library instruction in a first‐year biology laboratory course. Science & Technology Libraries, 28(4), 325–351. http://bit.ly/2InqkmU

Fisher, Z., & Seeber, K. (2017). Finding foundations: A model for information literacy assessment of first-year students. In the Library with the Lead Pipe. Retrieved from http://bit.ly/2Gp09q6

Fleming-May, R. A., Mays, R., & Radom, R. (2015). “I never had to use the library in high school”: A library instruction program for at-risk students. Portal: Libraries and the Academy, 15(3), 433–456. http://bit.ly/2IoP6D9

Gross, M., & Latham, D. (2009). Undergraduate perceptions of information literacy: Defining, attaining, and self-assessing skills. College & Research Libraries, 70(4), 336–350. http://bit.ly/2Gp0adE

Gross, M., & Latham, D. (2011). What’s skill got to do with it?: Information literacy skills and self‐views of ability among first‐year college students. Journal of the American Society for Information Science and Technology, 63(3), 574–583. http://bit.ly/2L5l9Wt

Holliday, W., & Rogers, J. (2013). Talking about information literacy: The mediating role of discourse in a college writing classroom. Portal: Libraries and the Academy, 13(3), 257–271. http://bit.ly/2GoKnLy

How do I… determine if a source is scholarly? (n.d.). Retrieved April 2, 2018 from http://bit.ly/2IpL6SI

Insua, G. M., Lantz, C., & Armstrong, A. (2018). In their own words: Using first-year student research journals to guide information literacy instruction. Portal: Libraries and the Academy, 18(1), 141–161. http://bit.ly/2GoKoPC

Kim, K.-S., & Sin, S.-C. J. (2011). Selecting quality sources: Bridging the gap between the perception and use of information sources. Journal of Information Science, 37(2), 178–188. http://bit.ly/2IoNwAW

Knight, L. A. (2002). The role of assessment in library user education. Reference Services Review, 30(1), 15–24. http://bit.ly/2GoKpTG

Leeder, C. (2016). Student misidentification of online genres. Library & Information Science Research, 38(2), 125–132. http://bit.ly/2ImkPog

List, A., & Alexander, P. A. (2018). Corroborating students’ self-reports of source evaluation. Behaviour & Information Technology, 37(3), 198–216. http://bit.ly/2GoKqHe

Lowe, M., Booth, C., Tagge, N., & Stone, S. (2014). Integrating an information literacy quiz into the learning management system. Communications in Information Literacy, 8(1), 115–130. http://bit.ly/2L5letf

Molteni, V. E., & Chan, E. K. (2015). Student confidence/overconfidence in the research process. The Journal of Academic Librarianship, 41(1), 2–8. http://bit.ly/2GpbTIU

Radom, R., & Gammons, R. W. (2014). Teaching information evaluation with the five Ws: An elementary method, an instructional scaffold, and the effect on student recall and application. Reference & User Services Quarterly, 53(4), 334–347. http://bit.ly/2L5lfNP

Seeber, K. P. (2016, February). It’s not a competition: Questioning the rhetoric of “scholarly versus popular” in library instruction. Presented at the Critical Librarianship & Pedagogy Symposium, The University of Arizona. Retrieved from http://bit.ly/2GoKruM

Shao, X., & Purpur, G. (2016). Effects of information literacy skills on student writing and course performance. The Journal of Academic Librarianship, 42(6), 670–678. http://bit.ly/2L4swx8

  1. All example student responses are quoted exactly as written by students. Spelling and grammatical errors are not edited or noted with “[sic],” in an effort to authentically represent student comments without drawing specific attention to minor mistakes.
  2. We’ll be presenting on this topic at the 2018 Library Instruction West Conference.
