University Rankings: How Well Do They Measure Library Service Quality?

Brian Jackson

portal: Libraries and the Academy, Vol. 15, No. 2 (April 2015), pp. 315–330. DOI: 10.1353/pla.2015.0026

abstract: University rankings play an increasingly large role in shaping the goals of academic institutions and departments, while removing universities themselves from the evaluation process. This study compares the library-related results of two university ranking publications with scores on the LibQUAL+™ survey to identify if library service quality—as measured within the LibQUAL+™ dimensions affect of service, information control, and library as place—is related to the standings. The results suggest that some indicators used to rank universities favor libraries with more highly rated physical facilities, while largely ignoring the impact that other services have on library quality.

Introduction

Communicating the value of academic libraries is a central purpose of the assessment and data-gathering activities in which libraries engage. Much has been written about how libraries can measure and use inputs, outputs, and outcomes to express to stakeholders the quality of library services. When data are used to convey library value, they are usually gathered and framed internally, by those who work in and are familiar with library operations. External bodies, though, may also evaluate library quality, university rankings being an obvious and contentious example. Rankings are contentious partly because they incorporate only a limited number of indicators, upon which there may be disagreement as to their validity as quality measures. The extensive consideration given to measuring service quality in academic libraries plays little role in university rankings, where more comprehensive considerations of library value may be sacrificed for brevity. Despite these and other problems associated with university rankings, there is evidence that they have an impact on student recruitment and that they partially shape administrative decision-making.1 It is useful, therefore, to understand what rankings measure and to define any relationships the standings have with service quality measures designed within the library community.

To that end, this study will compare the results of an internal evaluation, LibQUAL+™, with those of two university ranking publications. LibQUAL+™ is a widespread standardized tool for collecting user feedback on library service quality. The core of LibQUAL+™ is the collection from users of minimum, perceived, and desired scores on twenty-two questions covering three dimensions of library services: affect of service, information control, and library as place.
By measuring users’ minimum acceptable level of service quality, perceived or actual level of service quality, and desired or ideal level of service quality, LibQUAL+™ generates gap scores that measure both the degree to which perceived service meets minimum standards (adequacy gap) and the degree to which perceived service meets desired levels (superiority gap). While the effectiveness of gap scores in measuring satisfaction has been questioned,2 LibQUAL+™ offers the benefits of relative ease of implementation, comprehensiveness, reliability, and stable comparisons across time and institutions. The LibQUAL+™ Canada consortium, which provides Canadian libraries the option of running the survey simultaneously with other institutions in Canada, has been formed on a triennial basis since 2007. The consortium offers the added benefit of consistent, localized comparisons.3

The two most widely distributed university ranking and rating publications in Canada, the Maclean’s Guide to Canadian Universities and the Globe and Mail’s Canadian University Report, both factor measures of libraries into their overall scores. Maclean’s Guide to Canadian Universities, published annually since 1993 by Maclean’s magazine (circulation: 313,007),4 divides established universities in Canada into three categories: medical/doctoral, comprehensive, and primarily undergraduate. The guide rates them based largely on quantitative measures related to faculty research outputs and awards, economic indicators, campus resources, and libraries. The qualitative exception is the reputation component, which is determined based on a survey of community members “whose professions put them in a position to form opinions about how well universities are meeting the needs of students and how ready their graduates are to embark on successful careers.”5 The library component, which receives a weight of 12 to 15 percent of the overall rank, varying by year, is determined based on the percentage of institutional budgets dedicated to libraries (hereafter referred to as expenses), the percentage of library budgets spent on new acquisitions (acquisitions), the number of holdings per full-time equivalent student (holdings per student), and, for libraries in the medical/doctoral category, the total number of holdings (total holdings). Libraries are ranked separately within each of these categories and given an overall standing based on a formula that weights each category. While the education literature and university administrators have widely criticized the Maclean’s rankings,6 they are the longest running and most consistent university ranking in the Canadian popular media.

The Canadian University Report (CUR, formerly Canadian University Report Card), published by the Globe and Mail newspaper (circulation—weekday: 291,571, Saturday: 354,850),7 is based on an annual survey of undergraduate students. The CUR, which divides Canadian universities into four categories based on the size of student enrollment, measures student satisfaction across a number of attributes that include teaching, campus services, registration, facilities, community, and libraries, among others. Each attribute receives a letter grade from A+ to D, which is translated from student responses to Likert scale questions. Both the Canadian University Report and the Guide to Canadian Universities circulate widely.
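Before turning to how these publications are used, the gap-score arithmetic described at the start of this section can be made concrete with a minimal worked example. The symbols and numbers below are introduced here purely for illustration (they are not drawn from this study's data), with M, P, and D denoting mean minimum, perceived, and desired ratings on the nine-point response scale LibQUAL+™ uses:

\[ \text{adequacy gap} = \bar{P} - \bar{M}, \qquad \text{superiority gap} = \bar{P} - \bar{D} \]
\[ \bar{M} = 6.2,\quad \bar{P} = 7.1,\quad \bar{D} = 8.0 \;\Longrightarrow\; \text{adequacy gap} = 7.1 - 6.2 = 0.9,\quad \text{superiority gap} = 7.1 - 8.0 = -0.9 \]

In this hypothetical case, perceived service exceeds users' minimum acceptable level (a positive adequacy gap) but still falls short of their desired level (a negative superiority gap).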
The media report on both publications, and universities use them as promotional tools, activities that undoubtedly affect the reputation and perceived prestige of institutions in the wider community8 and may influence the enrollment decisions of students.9 Institutions and libraries, therefore, have an interest in responding to rankings either by planning to optimize their scores or by communicating to stakeholders their reasons for not doing so. To respond appropriately, library decision makers must understand what measures are directly incorporated into rankings, how these measures relate to various areas of library service, and the degree to which they convey service quality. As a relatively comprehensive tool for measuring library service quality, LibQUAL+™ provides a means for examining potential relationships between areas of library service and published rankings.

Literature Review

LibQUAL+™

A vast body of literature discusses the development, structure, reliability and validity, implementation, analysis, and institutional response to the LibQUAL+™ survey. Of relevance to this study are a number of papers that explore relationships between library attributes and LibQUAL+™ scores. Fred Heath, Colleen Cook, Martha Kyrillidou, and Bruce Thompson compared LibQUAL+™ gap and dimension scores with scores on the Association of Research Libraries (ARL) Membership Criteria Index, which is based on the size of collections, budget, and staff. The authors found a low-to-moderate relationship between overall adequacy scores and the Index but only insignificant correlates between superiority gap and perceived scores. Dimensionally, information access (since modified and renamed information control) showed a relatively strong but not significant relationship with the Index, which is not surprising given the weight attributed to collection measures in the Index.10 Similarly, Jessica Kayongo and Sherri Jones compared the LibQUAL+™ information control scores of ARL members with data collected by the ARL. The most significant relationships they found were between faculty information control scores and materials expenditures and between overall information control scores and service hours, indicating significant differences among the expectations of user groups.11

Ben Hunter and Robert Perret compared LibQUAL+™ scores with data from the Association of College and Research Libraries’ (ACRL) Library Trends and Statistics database, while Damon Jaggars, Shanna Smith, and Fred Heath looked at LibQUAL+™ and library size.12 Both studies found significant relationships between library size, measured by resource availability and Carnegie classification, and minimum and desired scores on the information control dimension. Adequacy and superiority gap scores, though, lacked any demonstrable relation to available resources, suggesting that, while expectations for collections might be higher at institutions with larger resource bases, other factors determine satisfaction with collections.
Additionally, while Jaggars, Smith, and Heath found that faculty at master's level universities had higher expectations in the affect of service and library as place dimensions than did those at research level institutions, Hunter and Perret found no relationships between measures of expectation for service or the physical library and any ACRL measures, including those that might be expected, such as the number of reference transactions, total staff, and presentations to groups. These findings corroborated those of Douglas Joubert and Tamera Lee, who found that the total number of staff working in health science libraries is not related to LibQUAL+™ service scores. They did find, though, that the ratio of staff to users had a significant impact on affect of service scores.13

LibQUAL+™ scores have also been compared with results on other assessment tools. Eric Ackermann combined LibQUAL+™ scores with results from the local Undergraduate Exit Survey at Radford University in Radford, VA, using a meta-analysis process designed to improve accuracy by reducing sampling error.14 While Ackermann focused on the process of data analysis, the technique described can be used to compare LibQUAL+™ scores with those of other widely used assessment tools, including the National Survey of Student Engagement and the Higher Education Research Institute Faculty Survey. Looking directly at the LibQUAL+™ questions, Susan McKnight compared the library attributes measured by LibQUAL+™ with those identified by users during a series of Customer Value Discovery workshops, the purpose of which was to identify the aspects of library service most valued by customers. McKnight found that LibQUAL+™ addresses most of the same core values that library users identify, unprompted.15 McKnight's findings indicate that LibQUAL+™ scores are representative of library service quality, as judged by users, and may be validly used as a measure of such for comparative purposes.

University Rankings

While much has been done to compare LibQUAL+™ scores with data that the library community values and regularly collects, fewer studies examine the extent to which library service quality is reflected in evaluation conducted external to the academy. One reason for this may be that a number of prominent rankings do not include indicators directly related to libraries. Clearly, though, the library still plays a role in the educational, research, and reputational factors that figure into most ranking systems.16 There is value in developing an understanding of rankings, regardless of the degree to which libraries are included, as indicators of the specific attributes of universities that impact student recruitment and institutional prestige.

Much of the literature on university rankings focuses on problems inherent in the notion of ranking universities, in the methods used to score universities, and in the interpretation of results by the media and the public.17 Stewart Page and Ken Cramer have paid particular attention to the ways in which Maclean's uses data in its rankings and the potential impact this has on students.
They have noted that the ranks assigned to individual attributes by Maclean's bear little relationship with overall status and with one another, pointing out that all universities will receive higher and lower scores, depending on the attribute in question, and that an overall rank does not indicate a university's fit for any particular student.18 They have also found that there is little to no correlation between the rankings provided by Maclean's and surveys of student satisfaction, such as that used by the CUR.19 While the authors have expressed doubt that the attributes scored by Maclean's are significant elements in student decisions to attend a particular university,20 the question remains as to whether published rankings actually have an impact on student applications.

A number of studies have reported some influence of rankings on university choice. In the Canadian context, Richard Mueller and Duane Rockerbie found that universities that improved by one rank in the Maclean's list experienced a 1.3 percent increase in applications.21 In a Canadian University Survey Consortium survey that asked first-year students to rate the degree to which various factors impacted their enrollment decisions, Maclean's was rated as a very important factor by 19 percent of respondents and the CUR by 13 percent of respondents.22 Both publications influenced engineering and business students more heavily. Outside of Canada, James Monks and Ronald Ehrenberg compared student enrollment decisions with results of the U.S. News & World Report's school rankings. The authors found that a lower (that is, worse) rank correlates with a higher rate of acceptance of student applications, a decrease in the percentage of accepted students who attend, and lower average SAT scores for incoming students.23 Similarly, Amanda Griffith and Kevin Rask found that U.S. News & World Report rankings significantly influence the university enrollment decisions of high-achieving students, particularly those who are self-funded.24

Student recruitment is only one way in which rankings influence academic institutions. There is evidence that the standings play a significant role in shaping the administration of higher education. In a series of questionnaires and interviews with university leaders worldwide, Ellen Hazelkorn found that rankings are used in strategic planning, both implicitly and explicitly, and that they influence resource allocations, drive marketing initiatives, and motivate support for research at the expense of teaching.25 Because most rankings are based on a standard and narrow set of criteria, they may push institutions "into following the template of the globally dominant universities that lead the rankings: research-intensive institutions with selective admissions policies, conducting funded research in many disciplines, with particular focuses on science and technology and elite professional schools."26 One effect of rankings, then, has been to shift universities into a reactive approach to evaluation and to partly remove the players themselves, including libraries, from the evaluation process.

Because of the influence of rankings on students and administrators and because those who are most familiar with libraries may be excluded from the ranking process, it is important that the measures used to evaluate libraries be understood.
Regardless of the degree of attention universities pay to the standings themselves, it is appropriate that universities and libraries respond in meaningful ways when institutional reputations and ability to recruit students may be influenced without input from those working in universities.

Methods

LibQUAL+™ scores for each of the three dimensions were obtained from the LibQUAL+™ Web site for all postsecondary libraries in Canada that ran the survey in either 2007 or 2010 and were included in either the Maclean's rankings or the CUR for those years. The 2008 and 2011 editions of the CUR were used to represent 2007 and 2010 data, because the surveys were conducted in the spring of the earlier years and the reports published in the fall. In the case of two institutions, the CUR provided distinct scores for separate campuses, but LibQUAL+™ scores did not. The study excluded these institutions. Table 1 outlines the total number of libraries that fit these criteria, the percentage of the total number of institutions that the sample represents, and the overlap of institutions included in each sample year. With half to more than two-thirds of ranked Canadian universities, depending on the year and ranking, included in the study, the sample is large enough to draw some conclusions about potential relationships between the standings and LibQUAL+™.

Table 1. Sample size and overlap, by publication and year

Ranking                       Year    Sample size    % of total ranked    Overlap between study years
Canadian University Report    2007    31             58                   75%
                              2010    32             52
Maclean's                     2007    32             65                   80%
                              2010    34             69

The LibQUAL+™ scores selected for comparison include perceived, adequacy gap, and superiority gap scores. If relationships exist between rankings and LibQUAL+™ results, the use of these three scores should indicate if those relationships reflect user satisfaction (perceived scores) or the degree to which libraries meet users' expectations (adequacy and superiority) of library service quality.27 This is an important consideration because some factors, such as library size, may have an impact on both rank and user expectations.28

Because both ranks and LibQUAL+™ scores are ordinal types of data, Spearman's rank order correlation coefficient (Spearman's rho) was selected as the best measure of correlation between the variables.29 In the case of Maclean's data, which include both ranks and the absolute values upon which ranks are based, only the rankings were used because only these variables contribute to overall university status. Because Maclean's ranks institutions within categories, Spearman's rho was calculated separately for each grouping.30 The number of libraries in each category ranged from ten to twelve, an acceptable sample range for the application of Spearman's rho.31

Canadian University Report data were treated in a similar way. Although the CUR does not provide a numerical rank based on scores, a university's position in comparison to other institutions is arguably as important as the provided grade. To account for relative position within the CUR, relationships between CUR grades and undergraduate adequacy, superiority, and perceived LibQUAL+™ scores overall and in each dimension were measured using Spearman's rho.
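The correlation procedure described above can be sketched in a few lines of Python using SciPy. This is an illustrative reconstruction rather than the author's code; the variable names and numeric values are hypothetical, and only the general approach (rank-order correlation between a published rank and a LibQUAL+™ score for one group of libraries) follows the description in this section.

from scipy.stats import spearmanr

# Hypothetical data for one Maclean's category (ten libraries): the published
# rank on one indicator and the rank of each library's LibQUAL+ library as
# place adequacy gap. spearmanr converts paired observations to ranks
# (handling ties) and returns the coefficient with a two-sided p-value.
holdings_per_student_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
library_as_place_adequacy_rank = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

rho, p_value = spearmanr(holdings_per_student_rank, library_as_place_adequacy_rank)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.4f}")

Repeating a calculation of this kind for each indicator and each dimensional score within a category yields matrices of coefficients like those reported in Tables 2 and 3.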
The scores of all institutions were included without categorical distinction because the criteria used by the CUR to measure library quality do not differ with university size or mission.

Results

Results of the Spearman's rank correlation test on Maclean's data indicate a trend toward relationships among the ranks for holdings per student and LibQUAL+™ library as place scores. Among the 2007 data (Table 2), superiority scores for the library as place dimension were very significantly correlated (p < 0.01) with the rankings for the holdings per student category in medical/doctoral schools. Weaker relationships (p < 0.05) involving holdings per student data also appeared with library as place perceived scores at comprehensive institutions and adequacy scores at medical/doctoral schools.

Table 2. 2007 Spearman's rank correlation coefficients between Maclean's indicators and LibQUAL+™ dimensional ranks, by university category. The table reports coefficients for each Maclean's indicator (expenditure, acquisitions, holdings per student, and, for medical/doctoral universities, total holdings) within each university category against perceived, adequacy, and superiority ranks in the affect of service, information control, and library as place dimensions. ** ρ < 0.01; * ρ < 0.05.

This trend was more evident in observations from 2010 data. Spearman's tests on the 2010 results (Table 3) reveal a trend in which very statistically significant relationships (p < 0.01) occur primarily between the ranking based on holdings per student and all three LibQUAL+™ dimensions. These relationships appeared most frequently for scores at primarily undergraduate universities, with significant relationships occurring with all three affect of service scores and adequacy and superiority scores for the library as place dimension. Scores based on holdings per student were closely related to adequacy and superiority scores in the library as place dimension at medical/doctoral institutions and with superiority information control scores at comprehensive universities. Weaker but still significant relationships (p < 0.05) existed between total holdings and all three library as place scores at medical/doctoral libraries and holdings per student and some information control scores at both comprehensive and medical/doctoral schools. University expenditures on libraries are related to adequacy scores in all three dimensions at primarily undergraduate institutions.
Table 3. 2010 Spearman's rank correlation coefficients between Maclean's indicators and LibQUAL+™ dimensional ranks, by university category. The table follows the same structure as Table 2, reporting coefficients for each indicator and university category against perceived, adequacy, and superiority ranks in each LibQUAL+™ dimension. ** ρ < 0.01; * ρ < 0.05.

The Spearman's rank correlation test using data from the CUR (Table 4) showed very significant relationships (p < 0.01) between 2007 LibQUAL+™ perceived, adequacy, and superiority scores in the information control dimension and for service adequacy scores. Service superiority and all three types of library as place scores were also related to CUR rankings, but to a lesser degree (p < 0.05). A shift occurred with the results of analysis of the 2010 data in which the most significant correlations (p < 0.01) appeared within the library as place dimension, with all three types of scores being highly correlated to CUR standings. The three affect of service scores and adequacy scores in the information control dimension were also related (p < 0.05) to CUR scores.

Discussion

This study sought to identify relationships between the quality of a library's services, as measured by LibQUAL+™, and its score on published ranking systems. It asked if the relatively small volume of data presented by external rankings to represent library quality reflects the results of internally framed and comprehensive quality measures and which, if any, service areas are most closely related to the standings. While the results are mixed, the library as place dimension has the strongest and most consistent relationship with the rankings.

Of the four measures used by Maclean's, only one, holdings per student, has a consistent relationship with LibQUAL+™ scores. One might expect a measure of holdings to correlate with the information control dimension but, in this study, the strongest and most frequent relationships occurred between the holdings per student category and library as place scores in 2007 and 2010 and affect of service scores in 2010. Looking deeper, the LibQUAL+™ scores that incorporate expectations (adequacy and superiority scores) in the library as place dimension aligned most closely with the rankings. An examination of the methodology used by Maclean's offers a potential explanation for these results.
Although the Maclean's description of the way in which it collects data is not entirely clear, it appears as if the holdings per student category includes only print holdings.32 One possible explanation for the results, then, may be that libraries able to accommodate more print materials per student may tend to create greater expectations of the physical library due to a larger library size or due to the general impressions generated by a library with a high print resource to student ratio. Larger numbers of print books could conceivably impact scores in the service dimension, as well. Greater selection and availability may lead to fewer disappointing interactions with staff regarding holds, recalls, and fines. If, as it appears, electronic collections are not included with the holdings per student data, it could explain why those data do not correlate with information control LibQUAL+™ scores, which include a number of questions specifically related to availability and access to electronic collections.

Table 4. Spearman's rank correlation coefficients between CUR scores and LibQUAL+™ dimensional scores, by year

            Affect of service                    Information control                  Library as place
            Perceived  Adequacy  Superiority     Perceived  Adequacy  Superiority     Perceived  Adequacy  Superiority
All 2007    0.335      0.512**   0.446*          0.571**    0.636**   0.564**         0.421*     0.426*    0.383*
All 2010    0.379*     0.420*    0.364*          0.188      0.405*    0.180           0.584**    0.684**   0.657**

** ρ < 0.01; * ρ < 0.05

Results of the Spearman's rank correlation test using CUR scores fluctuated significantly between 2007 and 2010, with the strongest relationships moving from information control to library as place. This change is attributable to the difference in the way that the CUR presented the scores between 2007 and 2010. In 2007, reported scores were based only on one question that asked respondents to rate satisfaction with the library overall. In 2010, though, reported scores were based on the mean scores of three questions that asked about the availability of library resources, study space, and hours of operation.

This change in relationships between CUR and LibQUAL+™ scores is important for several reasons. First, the newspaper did not publish details of the changes in reporting methods with the scores in print or on its Web site. Information regarding methodological changes was available only in internal reports provided to participating institutions, which are presumably not read as widely as the print and online features, even if they are made available to the public.33 If the reputation of a library and its appeal to prospective students can be influenced by these scores, it is important for both libraries and potential users to understand changes in data gathering and reporting over time. Libraries cannot claim improvement based on CUR scores, or act upon them, if reporting criteria vary from year to year. Second, if the information control dimension is closely related to scores on the 2007 CUR question regarding overall satisfaction with the library, it suggests that information control, which focuses on both availability and access to print and electronic collections, is more closely related to overall satisfaction than are service and the physical library.
This corroborates previous findings that suggest information control scores are more closely tied to LibQUAL+™ overall satisfaction scores than are other dimensions.34 Finally, the addition of specific questions regarding collections, study space, and hours served to strengthen relationships between perceptions of space and CUR library scores. As only one CUR question regarding study space bears resemblance to questions in the library as place dimension on the LibQUAL+™ survey, the strength of the relationship may be unexpected. The three CUR survey questions, though, asked students to "Please indicate how satisfied you are with the different aspects of the library at your institution. If there is more than one library on campus, please think of the main one or the one that you use most often," followed by each of the three areas in question: availability of books/articles/periodicals, study space, and library hours of operation. By referring to physical campus libraries, the question regarding materials could prime respondents to think of print sources in the library. If that is the case, the mean responses to the three questions combined, like the Maclean's rankings, may favor institutions based on the quality or size of facilities, with less regard for other library services.

The strength of the relationships between library spaces and both the Maclean's holdings per student indicator and CUR scores raises the concern that some methodological decisions made by each publication advantage libraries with physical facilities gauged to be superior by users, at the expense of other factors that may be equally or more important to student enrollment and resource allocation decisions. Other choices of input measures, specifically within the Maclean's rankings, have no relationship with internal measures of library service quality. If, as widespread use suggests, LibQUAL+™ is considered an effective and comprehensive tool for measuring service quality, and the variables used by Maclean's to evaluate libraries bear little relationship with LibQUAL+™ results, then Maclean's generally fails to capture the elements of library service that the library community deems most important to libraries and their users. The CUR may capture some important elements of library services, but its scores are by no means as representative of overall quality as they imply.

Conclusion

University ranking systems attempt to convey quality using a relatively small number of variables. For both the Maclean's Guide and the CUR, the validity of their scores comes into question when they claim to measure the quality of libraries based on limited information. Neither survey truly attempts to incorporate the number and quality of electronic resources in library collections, nor do they include the myriad other services that contribute to library service quality. In this study, the perceived quality of library spaces has the greatest relationship with the published scores. While it stands to reason that user notions about what a physical library should be, which often includes large numbers
of print books,35 can have an impact on service level scores, and satisfaction with library spaces may influence service quality perceptions in other areas,36 the quality of library spaces and their capacity to hold books cannot be said to be definitive of the overall quality of library services. Many factors contribute to satisfaction with library services, and the way that these factors interact with one another is complex. It is difficult to capture these interactions in any one survey or evaluation of library services, but this difficulty is rarely communicated when libraries are scored and ranked.

These results may not be news to university and library administrators, most of whom are likely familiar with the limitations of rankings. Such standings, though, continue to have an influence on university executives, faculty, and students. These stakeholders may or may not be concerned about the ranking of library indicators specifically, but most ranking systems incorporate a weighting system in which all indicators contribute to an overall standing. The rank's component parts, including libraries, influence any action that is inspired even in part by one of these systems. For that reason, it is important that stakeholders understand rankings. As experts in library quality, librarians have a role in defining for other interested parties how rankings actually measure libraries. Simply accepting rankings without comment, begrudgingly or otherwise, ignores the impact that they may have on future users and the institution itself and on the ability of libraries to get across a more comprehensive message of library value.

Brian Jackson is an assistant professor and librarian at Mount Royal University in Calgary, Alberta, Canada; he may be reached by e-mail at: bjackson@mtroyal.ca.

Notes

1. James Monks and Ronald G. Ehrenberg, "U.S. News & World Report's College Rankings: Why Do They Matter?" Change: The Magazine of Higher Learning 31, 6 (November–December 1999): 43–51; Ellen Hazelkorn, Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (New York: Palgrave Macmillan, 2011).
2. Michael J. Roszkowski, John S. Baky, and David B. Jones, "So Which Score on the LibQUAL+™ Tells Me If Library Users Are Satisfied?" Library & Information Science Research 27, 4 (2005): 427–28.
3. Sam Kalb, "Benchmarking on a National Scale: The 2007 LibQUAL+™ Canada Experience," Performance Measurement and Metrics 11, 2 (2010): 163.
4. Alliance for Audited Media, "Top 10 Canadian Magazines by Paid & Verified Circulation," last modified June 30, 2013, accessed October 3, 2013, http://www.auditedmedia.com/news/research-and-data/top-10-canadian-magazines-for-june-2013.aspx.
5. Mary Dwyer, "Measuring Excellence," Maclean's, November 22, 2010, 173.
6. Stewart Page and Ken Cramer, "An Update on the Use of Ranks in Calibrating and Marketing Higher Education," Journal of Marketing for Higher Education 13, 1/2 (2004): 87–99; Stewart Page, "Ranking of Canadian Universities: A New Marketing Tool," Journal of Marketing for Higher Education 10, 2 (2008): 59–69; Stewart Page, Kenneth M. Cramer, and Laura Page, "Canadian University Rankings: Buyer Beware Once Again," Interchange 41, 1 (2010): 81–89; Daniel Drolet, "Many Quit Maclean's Survey," University Affairs 47, 6 (June–July 2006): 28–29.
7. Alliance for Audited Media, "Total Circ: Circulation Averages for the Six Months Ended 3/31/13," last modified March 31, 2013, accessed October 3, 2013, http://abcas3.auditedmedia.com/ecirc/newstitlesearchcan.asp.
8. Ross Williams, "Methodology, Meaning and Usefulness of Rankings," Australian Universities' Review 50, 2 (2008): 51; Nicholas A. Bowman and Michael N. Bastedo, "Anchoring Effects in World University Rankings: Exploring Biases in Reputation Scores," Higher Education 61, 4 (2010): 432.
9. Monks and Ehrenberg, "Why Do They Matter?" 43–51; Amanda Griffith and Kevin Rask, "The Influence of the US News and World Report Collegiate Rankings on the Matriculation Decision of High-Ability Students: 1995–2004," Economics of Education Review 26, 2 (2007): 244–55; Richard E. Mueller and Duane Rockerbie, "Determining Demand for University Education in Ontario by Type of Student," Economics of Education Review 24, 4 (2005): 469–83, doi:10.1016/j.econedurev.2004.09.002.
10. Fred Heath, Colleen Cook, Martha Kyrillidou, and Bruce Thompson, "ARL [Association of Research Libraries] Index and Other Validity Correlates of LibQUAL+™ Scores," portal: Libraries and the Academy 2, 1 (2002): 39–40.
11. Jessica Kayongo and Sherri Jones, "Faculty Perception of Information Control Using LibQUAL+™ Indicators," Journal of Academic Librarianship 34, 2 (2008): 135–38.
12. Ben Hunter and Robert Perret, "Can Money Buy Happiness? A Statistical Analysis of Predictors for User Satisfaction," Journal of Academic Librarianship 37, 5 (2011): 402–8; Damon Jaggars, Shanna Smith, and Fred Heath, "Does Size Matter? The Effect of Resource Base on Faculty Service Quality Perceptions in Academic Libraries," in Proceedings of the 2008 Library Assessment Conference: Building Effective, Sustainable, Practical Assessment, Seattle, Washington, August 4–7, 2008, ed. Steve Hiller, Martha Kyrillidou, and Jim Self (Annapolis Junction, MD: ARL Publications, 2009), 317–21.
13. Douglas J. Joubert and Tamera P. Lee, "Empowering Your Institution Through Assessment," Journal of the Medical Library Association 95, 1 (2007): 49–50.
14. Eric Ackermann, "Library Assessment on a Budget: Using Effect Size Meta-Analysis to Get the Most Out of the Library-Related Survey Data Available Across Campus," Performance Measurement and Metrics 9, 3 (2008): 192–201.
15. Susan McKnight, "Are There Common Academic Library Customer Values?" Library Management 29, 6/7 (2008): 617.
16. Younghee Noh, "The Impact of University Library Resources on University Research Achievement Outputs," Aslib Proceedings 64, 2 (2012): 129; Sharon Weiner, "The Contribution of the Library to the Reputation of a University," Journal of Academic Librarianship 35, 1 (2009): 8.
17. Examples include Simon Marginson, Global University Rankings: Where to From Here? (Singapore: Asia-Pacific Association for International Education, 2007); Philip G. Altbach, "The Globalization of College and University Rankings," Change: The Magazine of Higher Learning 44, 1 (2012): 26–31; Williams, "Methodology, Meaning and Usefulness of Rankings," 51–58; Brian Pusser and Simon Marginson, "University Rankings in Critical Perspective," Journal of Higher Education 84, 4 (2013): 544–68; Sarah Amsler and Chris Bolsmann, "University Ranking as Social Exclusion," British Journal of Sociology of Education 33, 2 (2012): 283–301, doi: 10.1080/01425692.2011.649835; John P. A. Ioannidis, Nikolaos A. Patsopoulos, Fotini K. Kavvoura, Athina Tatsioni, Evangelos Evangelou, Ioanna Kouri, Despina G.
Contopoulos-Ioannidis, and George Liberopoulos, "International Ranking Systems for Universities and Institutions: A Critical Appraisal," BMC Medicine 5, 30 (2007), doi: 10.1186/1741-7015-5-30.
18. Page and Cramer, "An Update on the Use of Ranks," 91–92; Page, "Ranking of Canadian Universities," 65–67.
19. Page, Cramer, and Page, "Buyer Beware Once Again," 85; Page and Cramer, "An Update on the Use of Ranks," 96–97.
20. Kenneth M. Cramer and Stewart Page, "Calibrating Canadian Universities: Rankings for Sale Once Again," Canadian Journal of School Psychology 22, 1 (2007): 9.
21. Mueller and Rockerbie, "Determining Demand for University Education," 482.
22. Prairie Research Associates, 2013 First-Year University Student Survey: Master Report (Winnipeg, MB: Canadian University Survey Consortium, 2013), http://www.cusc-ccreu.ca/publications/2013_CUSC_FirstYear_master%20report.pdf.
23. Monks and Ehrenberg, "Why Do They Matter?" 49.
24. Griffith and Rask, "Matriculation Decision of High-Ability Students," 250.
25. Hazelkorn, Rankings and the Reshaping of Higher Education, 93–113.
26. Pusser and Marginson, "University Rankings in Critical Perspective," 558.
27. Roszkowski, Baky, and Jones, "So Which Score on the LibQUAL+™ Tells Me If Library Users Are Satisfied?" 424.
28. Jaggars, Smith, and Heath, "Does Size Matter?" 317–21.
29. Jaehwa Choi, Michelle Peters, and Ralph O. Mueller, "Correlational Analysis of Ordinal Data: From Pearson's r to Bayesian Polychoric Correlation," Asia Pacific Education Review 11, 4 (December 2010): 459–66.
30. It is important to note that the purpose of this exercise is not to rank libraries based on LibQUAL+™ scores, but to determine if relationships exist between library service quality as measured by LibQUAL+™ and published rankings.
31. Choi, Peters, and Mueller, "Correlational Analysis of Ordinal Data," 465.
32. Dwyer, "Measuring Excellence," 173. Maclean's methodological description states that the figures provided for the expenses and acquisitions categories capture spending on electronic resources, while no such information is provided for the holdings per student and total holdings categories, where such a description would be more appropriate. The expense category is the percentage of institutional budgets devoted to library budgets overall, so the statement that it accounts for spending on electronic sources is meaningless because it would include all library expenses.
33. Simon Fraser University in Burnaby, BC, for example, makes these reports available on the Web site of its Institutional Research and Planning section: http://www.sfu.ca/irp/surveys/urc.html.
34. Hunter and Perret, "Can Money Buy Happiness?" 406.
35. Heather Lea Jackson and Trudi Bellardo Hahn, "Serving Higher Education's Highest Goals: Assessment of the Academic Library as Place," College & Research Libraries 72, 5 (2011): 437.
36. Eugene Harvey and Maureen Lindstrom, "LibQUAL+® and the Information Commons Initiative at Buffalo State College: 2003 to 2009," Evidence Based Library and Information Practice 8, 2 (2013); Jennifer Gerke and Jack M. Maness, "The Physical and the Virtual: The Relationship Between Library as Place and Electronic Collections," College & Research Libraries 71, 1 (2010): 20–31.