Disinformation in the Library Catalog


Technicalities 42(2) Sept/Oct 2022: 11-15

Readers of a certain age will recall the Gourman Report [1], an early incarnation of the now ubiquitous rankings of colleges, universities, and graduate programs that have grown into an industry. Anyone even thinking about attending a college or university has surely run into one of these guides, the most popular being the U.S. News and World Report Best Colleges ranking, which began in 1983 and whose cover promises to help you “Find the Best Colleges… For You!” Almost as bad as consulting one of these guides is not consulting one of these guides, as they are so widely used that you could probably be accused of malpractice if you did not recommend one to your college-going kid.

Well, there actually is a worse thing than using one of these books, and that is using one of the Gourman reports. The first version of Jack Gourman’s creation was published in 1967, with later editions in 1977 and multiple editions from 1982 to 1985. The most recent edition came out in 1997, so I was going to write that topic sentence in the past tense: the worst thing was to use one of his books. But in a remarkable display of the longevity of dubious information, I spotted a 2016 online conversation discussing the books (in fact praising them) and saying you could continue to use them even though they were now dated because, after all, rankings do not change that much.

The fact that the books were not originally published with a reputable partner was one clue that something was amiss. They were largely published by “National Education Standards” until the last edition, which was published with the Princeton Review (that pillar of another part of the college selection game, test preparation) and distributed by Random House. That the Princeton Review decided to get into the college ranking game with Gourman is remarkable given that problems with the Gourman reports were well documented in the mid-1980s, including in a series of papers by David S. Webster in Change [2] and RQ [3], where he concluded that “Gourman’s books, individually and as a group, are virtually without merit.”[4]

Lies and Statistics

In his review of the 1967-1985 books, Webster reports a series of anomalies that cast Gourman’s whole project in doubt. Perhaps most damning is that Gourman “was exceptionally vague about which data he used and how he analyzed them.”[5] It was said that he used perception data, that is, that he asked leading experts to rank the programs. As Herbert White put it, “Perception studies are not studies of quality; they record perceived quality, and that perception may or may not be valid when compared to other qualitative rankings. Such rankings, attempted most directly by Jack Gourman, have also been criticized, not only for their methodology but, more specifically, because that methodology is often not disclosed or explained.”[6]

It is one problem to rely on perception data, but quite another to rely on dubiously sourced and possibly fabricated perception data. If you are going to use perception data, then the quality of the data rests on the quality of the person making the judgment. We need to know whose perceptions we are relying upon. And in the case of Gourman, it appeared that rather than relying on informed opinions, he was simply stating his own opinion. “No one who has reviewed … any of Gourman’s ten books … has ever been able to find a single college or university administrator, faculty member, or student who recalls ever having been contacted for information by letter, phone call, or personal visit from Gourman or any employee…”[7]

There were of course other concerns. In his 1977 ranking of academic departments, every school was scored exactly 0.01 lower than the next higher-rated one: for example, 4.88, 4.87, 4.86, 4.85, 4.84, and so on, and frequently, for twenty or thirty consecutive departments, there were no ties and no values skipped. In the case of law schools, Gourman consecutively listed fifty in perfect descending order, from 3.50 to 3.01, with no ties. Then he reeled off forty more law schools, again without ties or skipped values, from 2.90 to 2.51.[8]

More numerical shenanigans came in later editions. Gourman’s 1980 rankings of graduate departments

provided four subscores, for “curriculum,” “faculty instruction,” “faculty research,” and “library resources”…. An institution’s overall rating, on his usual 5-point scale, was the simple average of its subscore ratings…. Every single institution held exactly the same ordinal rank position in all four… of these subscore ratings as it did in overall rank. That is, if a school’s English department was ranked twenty-fifth, it was ranked twenty-fifth in all four subscores.[9]

Abigail Hubbard pointed out another problem with Gourman’s scores in a letter to RQ:

Our library, the Houston Academy of Medicine—Texas Medical Center Library, is very unique in that it is a consortium library serving approximately twenty different client groups, including two major medical schools: Baylor College of Medicine and The University of Texas Medical School—Houston. Mr. Gourman’s most recent report indicates the following ratings for library resources for these two schools:[10]

Baylor College of Medicine                                   4.38
University of Texas Medical School—Houston         3.49
Very interesting, considering the library resources for the two schools are IDENTICAL.


Despite these well-documented irregularities, Gourman’s reports remained popular at the bookstore and in the library. “While the Gourman rankings are generally discounted and even ridiculed, they are at the same time heavily used. Reference desk staff members in academic libraries confirm this, and the physical condition of these volumes attests to frequent scrutiny.”[11] Ouch!

The Truth is Clear, Our Response Less So

The Gourman reports have questionable value, and yet what librarians are supposed to do about it is an even bigger question. The books continued to be used despite their documented shortcomings; they were in the ready-reference section at the University of Iowa in 1988, where a very kind librarian handed them to me with a smile. One obvious option would be to dump the books. Our Texan epistolarian Ms. Hubbard made an early call for better information literacy: “In addition to more careful collection development for reference collections, perhaps a more discriminating consumer is also in order,” though she also notes that the book’s very inclusion in the library collection provides some testimony to its reliability.[12]

I think this is an interesting wrinkle, and basically the answer that librarians, journalists, and Mark Zuckerberg have all given: the patron/reader/consumer is responsible for discerning the truth. Zuckerberg, in a speech at Georgetown University in October 2019, in the run-up to the presidential election where truthfulness and disinformation were major campaign issues, stood at a large wooden lectern and said that Facebook would not fact-check politicians’ speech or their political ads because such comments added to our discourse and were in the public interest to hear. Even if the statements were false. “Stuff” gets added to the collection or the platform; we are not responsible; that is the role of the reader. In my personal poll of “trustworthy occupations” I would rank “billionaire social media platform operators” quite low.

My friend Jonathan Furner is a discerning and articulate professor. He says that the paradigm we have followed in knowledge organization and librarianship is that of relevance: we allow users to decide for themselves what they think is worthwhile to read. But perhaps we are turning to a role for the veridical. I would add that “the relevance paradigm” is a heuristic one in the sense of “enabling someone to learn something for themselves.” Such a pedagogy works best when the consequences of failing are low and the probability of success is high.

That describes the situation of a discerning patron using a well-designed catalog in the context of a well-curated collection of resources. Librarians never really had to validate resources too carefully; for published materials, that role would have fallen primarily to publishers. But it raises an interesting point: what should we do about it? Cutter says we should assist users in the choice of a book according to its character, and we have no reservation about describing a book as a work of fiction or ascribing its topic. But if you want to indicate that a book appears to be the product of fraud, suddenly things get very complicated.

Others do not seem to have as many qualms about the veridical. Both science and Science have rigorous truth standards and have developed procedures for maintaining them. Journalists struggle more, but reputable newsrooms and editorial desks are generally trying, though they have many failures and are constantly denigrated for their efforts. Both have mechanisms for reviewing their work and retracting statements. This is more difficult in science and scholarship, where speculation and open conjecture are part of the work, and identifying error is a major function of the discourse.

In 2014, Science published an article by a UCLA graduate student about the ability of door-to-door canvassers to persuade voters:

Can a single conversation change minds on divisive social issues, such as same-sex marriage? A randomized placebo-controlled trial assessed whether gay (n = 22) or straight (n = 19) messengers were effective at encouraging voters (n = 972) to support same-sex marriage and whether attitude change persisted and spread to others in voters’ social networks. The results… show that both gay and straight canvassers produced large effects initially, but only gay canvassers’ effects persisted in 3-week, 6-week, and 9-month follow-ups.[13]

The study reported such large and long-term effects that it raised the curiosity of one reader, which turned to doubt and then suspicion that the data for the study had been faked. Now when you click on the article in the online version of the journal, there is a banner in bright red letters across the top of the page where the article appears: “RETRACTED 28 MAY 2015; EDITORIAL EXPRESSION OF CONCERN 20 MAY 2015; SEE LAST PAGES”[14] and two new pages are appended to the article explaining the retraction. One sentence could just as well describe the Gourman reports: “Independent researchers have noted certain statistical irregularities in the responses…. [The author] has not produced the original survey data from which someone else could independently confirm the validity of the reported findings.”[15]

What Should a Cataloger Do?

Could we or would we do something equivalent in our cataloging records? First, I am recalling for the first time in years that we used to write subjects in red letters at the top of catalog cards… there is a memory. My sympathy goes out to the people who used to have to type those strings on a special typewriter equipped with a red ribbon. Obviously, marking out falsehoods or retractions in red letters is not standard cataloging practice, and not just because we have previously reserved red text for subject headings.

In an older piece that I consider something of a classic, Ross Atkinson described bibliographic citations as “intertext.” One element of an intertextual model of a citation is to “[recognize] the essential play of contexts that occurs within the citation itself” and that “there exists a contextual relationship within any citation that permits the understanding and use of one element to be defined or influenced by another.”[16] Title, place and name of publisher, name of author, the physical description, etc. are all interpreted together, with one element compared to each of the others, to develop an understanding of the nature of the represented document. Atkinson gives an example of revising one’s opinion when a document “purport[s] to examine the philosophical foundations of the Enlightenment [but] turns out to consist of seven pages rather than the seven hundred pages I was anticipating from the title.”[17] Such interpretations are also informed by other dimensions of citation-as-intertext, including the reader’s contextual familiarity with other texts and the various uses that a reader may make of the text.

Our example of the Gourman reports offers pretty subtle signaling, by these standards, that there might be something amiss with the books. The clearest indicator might have been the publisher, “National Education Standards” (NES). Webster, our main critic of the reports, “personally investigated the NES locations in Los Angeles. He never located a physical sign of the publisher’s existence…. All of Gourman’s Reports were self-published but under the ruse of official-sounding entities.”[18] How many readers would recognize that something might be afoot from that publisher statement?

Ultimately what we have here is a catalog failure. Our heuristic model, as we said, requires discerning patrons using well-designed catalogs in the context of well-curated collections. Let us not blame the collection developers, and certainly not our discerning patrons. Only one reader was so bothered by the publisher statement that he actually tried to verify its authenticity, and that only following an interview with the author, not because something in the citation seemed amiss or discordant. I think the problem here is that the catalog is not particularly well designed. We have not attested explicitly to the nature of the document, and our description in this case was so subtle that it eluded the attention of users for years.

One solution might be to issue a “Librarian Expression of Concern,” similar to the “Editorial Expression of Concern” utilized by Science. I am sure this would alarm many of my colleagues, who might feel that such a job could be endless and that it would violate the trust we show in our users to discern what is relevant to their own particular use. But perhaps there are intermediate solutions between empaneling librarians to pass judgment on the veracity of a book and the laissez-faire attitude of a “caveat lector.”

Another letter writer to RQ, Edmund F. SantaVicca, wrote to say that “I would like to suggest that librarians offering any of the Gourman reports to their patrons insert a copy of Webster’s article—as complementary addenda to the Gourman volumes.”[19] Could a system be built in which the entry in the catalog for a book is immediately followed by reviews that discuss the book? We are already beginning to build out those kinds of relationships in our catalogs.
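To make the idea concrete, here is a minimal sketch, in Python purely for illustration, of a catalog entry that carries its critical reviews along with it. Every class, field name, and identifier here is hypothetical rather than drawn from any real catalog schema or API; the bibliographic details are simply those already cited in this column.

```python
# A hypothetical sketch: a catalog entry that links to published reviews of
# the work it describes, so that critique surfaces at the moment of discovery.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Review:
    citation: str   # human-readable citation for the review
    verdict: str    # a one-line summary a patron can scan at a glance


@dataclass
class CatalogEntry:
    title: str
    publisher: str
    year: int
    reviews: List[Review] = field(default_factory=list)

    def display(self) -> str:
        """Render the entry with its reviews immediately following it,
        in the spirit of SantaVicca's suggestion to shelve Webster's
        critique alongside the Gourman volumes."""
        lines = [f"{self.title}. {self.publisher}, {self.year}."]
        lines += [f"  Review: {r.citation} -- {r.verdict}" for r in self.reviews]
        return "\n".join(lines)


gourman = CatalogEntry(
    title="The Gourman Report: A Rating of American and International Universities",
    publisher="National Education Standards",
    year=1977,
    reviews=[
        Review(
            citation="Webster, RQ 25, no. 3 (1986): 323-31",
            verdict="'virtually without merit'",
        )
    ],
)

print(gourman.display())
```

The point is not the particular data structure but the display behavior: the review is shown with the entry, rather than left for the patron to hunt down.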

Another option would be to continue the work enshrined in RDA of recording the full name of the publisher, in hopes of being able to retrieve all the books issued by a given publisher. Extending this solution in a linked environment, for example, could permit the user to click on the publisher’s name to see what else has been published by that particular organization. Thus, by clicking on “National Education Standards” the user could see that the organization has published no books other than Gourman’s own, and perhaps that it has no board of directors, along with other Wikipedia-like statements about its history and activities. Or lack thereof.
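As a similarly hedged sketch, and assuming some linked-data source the catalog could query (the tiny in-memory graph, field names, and helper function below are all invented for illustration, not taken from any real system), the publisher click-through might amount to nothing more than a short traversal:

```python
# A hypothetical sketch of the "click the publisher" idea: the catalog record
# links to a publisher entity, and the entity links back out to everything it
# has published, plus a few Wikipedia-like statements. The graph below is a
# stand-in for whatever linked-data source a real catalog would consult.

publisher_graph = {
    "National Education Standards": {
        "published": [
            "The Gourman Report: A Rating of American and International "
            "Universities (1977)",
            # ...plus the later Gourman editions, and nothing by anyone else
        ],
        "board_of_directors": [],         # none identified, per the column
        "verified_street_address": None,  # Webster found no physical sign of NES
    },
}


def describe_publisher(name: str) -> str:
    """What a patron might see after clicking a publisher's name."""
    entity = publisher_graph.get(name)
    if entity is None:
        return f"No information available about {name}."
    lines = [f"{name} has published {len(entity['published'])} title(s) on record:"]
    lines += [f"  - {title}" for title in entity["published"]]
    if not entity["board_of_directors"]:
        lines.append("No board of directors on record.")
    if entity["verified_street_address"] is None:
        lines.append("No verified street address on record.")
    return "\n".join(lines)


print(describe_publisher("National Education Standards"))
```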

So, the vision here, once again, is for a catalog linked into a wider data environment that allows for deeper end-user engagement and exploration, supported by open content but within a framework of dedicated, trained, and ethical editors, including (especially) librarians. It is only by providing the right kind of information that we can begin to ensure that users make good decisions regarding facts that enter into public discourse. Social media platforms do not have this; they obscure sources of information and provide no context for the various claims on their platforms. Our task and work are slightly different, but as knowledge and information professionals, why would we be so surprised that our own tools fail to meet some of the basic requirements of an effective information service? Caveat emptor, indeed.

 Works Cited

[1] See, for example, Gourman, Jack. The Gourman Report: A Rating of American and International Universities. Los Angeles: National Education Standards, 1977.

[2] Webster, David. "Who Is Jack Gourman: And Why Is He Saying All Those Things About My College?" Change: The Magazine of Higher Learning 16, no. 8 (1984): 14-19, doi:10.1080/00091383.1984.9940502.

[3] Webster, David S. "Jack Gourman's Rankings of Colleges and Universities: A Guide for the Perplexed." RQ 25, no. 3 (1986): 323-31, http://www.jstor.org/stable/25827650.

[4] Webster, p. 323.

[5] Webster, p. 324.

[6] White, Herbert S. "Perceptions by Educators and Administrators of the Ranking of Library School Programs: An Update and Analysis." The Library Quarterly 57, no. 3 (1987): 252-253, doi:10.1086/601902.

[7] Webster, p. 324.

[8] Webster, p. 327.

[9] Webster, p. 329.

[10] Hubbard, Abigail. "Letter." RQ 26, no. 1 (1986): 135-36, http://www.jstor.org/stable/25827819.

[11] White, p. 253.

[12] Hubbard, p. 136.

[13] LaCour, Michael J., and Donald P. Green. "When Contact Changes Minds: An Experiment on Transmission of Support for Gay Equality." Science 346, no. 6215 (2014): 1366, doi: 10.1126/science.1256151.

[14] Ibid.

[15] Ibid.

[16] Atkinson, Ross. "The Citation as Intertext: Toward a Theory of the Selection Process." Library Resources and Technical Services 28, no. 2 (1984): 111.

[17] Ibid.

[18] Lacy, Tim. “The Gourman Fraud and the Commodification of Higher Education.” S-USIH: Society for U.S. Intellectual History. Feb. 25, 2018. https://s-usih.org/2018/02/the-gourman-fraud-and-the-commodification-of-higher-education/ (accessed Feb. 4, 2022).

[19] SantaVicca, Edmund F. "Letter." RQ 26, no. 1 (1986): 135-36, http://www.jstor.org/stable/25827819.