The number of publications produced by the Finnish universities of applied sciences (UAS) has increased dramatically over the last ten years: roughly three to four times as many publications of various types are now released annually as a decade ago. What is even more interesting than the growing number of papers is the impact and quality of these publications. How are the quality and impact of publications defined and measured in the first place?
Currently, quality and impact are not incentivised in the core funding model set by the Ministry of Education and Culture. In this article, we examine the elements of publication quality and impact in the UAS context and consider whether these could be included in the core funding model. We assess established metrics for publication quality and impact, along with their pros and cons. Finally, we discuss recent developments in the field, including artificial intelligence and its effect on publishing.
State of the Art in Metrics for Quality and Impact Assessment in Scientific Journals
The impact of scientists in academia is typically measured by citation-based metrics. The quality of published research has often been estimated by the citation count, the journal impact factor (JIF), and peer review. The citation count, indicating how often a publication is referenced by other research papers, has been used as a primary measure of scientific impact and as an indicator of the overall quality of a research paper (Tahamtan & Bornmann 2019). A higher citation count typically implies that the scientific work has had a significant influence on subsequent research.
The impact factor (IF), or journal impact factor (JIF), aims to evaluate the relative importance of scientific journals. The JIF is calculated each year by dividing the number of citations received in that year by papers published in the two preceding years by the number of "citable items" published during those two years (Garfield 2006). JIF-style metrics have limitations, since they are highly susceptible to skew from "blockbuster" or subsequently retracted papers (Rossner, Van Epps & Hill 2007).
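Written out, the calculation is a simple ratio; the numbers in the worked example below are invented purely for illustration and do not refer to any real journal.

```latex
\[
\mathrm{JIF}_{Y} \;=\;
\frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
     {\text{citable items published in years } Y-1 \text{ and } Y-2}
\]
% Illustrative example with invented numbers: a journal that published 200 citable
% items in 2021-2022, and whose items from those years received 500 citations in
% 2023, would have JIF_2023 = 500 / 200 = 2.5.
```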
Peer review is the accepted best practice for determining which papers are published in academic journals, and it operates as the predominant process for assessing the validity, quality, and originality of scientific articles (Sovacool et al. 2022). The limitations of the peer-review process include reviewer bias, lack of agreement among reviewers, vulnerability to various forms of system gaming (such as 'lottery behaviour' by authors, predatory journals, and self-peer-review scams), and the time lag to publication and the resulting delay in the dissemination of scientific findings (Carneiro et al. 2020). Despite these limitations and criticism, peer review is still generally perceived as key to quality control in research (Rossner, Van Epps & Hill 2007).
Citation counts (and the JIF), however, do not necessarily reflect the broader impact of research, such as its educational, cultural, environmental, and economic impact (e.g., Holmberg, Bowman, Bowman, Didegah & Kortelainen 2019), and accessible means of assessing this broader impact have not been readily available (Dinsmore, Allen & Dolby 2014). Over the past years, the shift of academic literature from paper journals to online platforms has led to the rise of alternative metrics, or altmetrics (Dinsmore, Allen & Dolby 2014). Altmetrics include, for instance, social media mentions, downloads, and online discussions pertaining to a publication. Most scientific journals and altmetrics platforms (e.g., Altmetric.com, PlumX) measure activities and engagement surrounding research publications across different online platforms. For instance, PlumX measures the following (a simple aggregation sketch follows the list):
- usage, i.e., how often a research output is viewed, downloaded, or accessed online;
- captures, i.e., how many users have bookmarked, saved, or otherwise captured the research output in their reference management tools or social bookmarking services;
- mentions of the research in social media platforms, blog posts, news articles, policy documents, etc.;
- social media engagement;
- citations in non-traditional sources, and
- online discussions.
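A minimal sketch of how such category counts could be combined into a single engagement figure is given below; the record fields, weights, and numbers are hypothetical illustrations and are not taken from the PlumX service or its API.

```python
from dataclasses import dataclass

@dataclass
class AltmetricRecord:
    """Hypothetical per-publication altmetric counts, grouped into
    PlumX-style categories."""
    usage: int          # views, downloads, online accesses
    captures: int       # bookmarks and saves in reference managers
    mentions: int       # blog posts, news articles, policy documents
    social_media: int   # posts, shares, likes on social platforms
    citations: int      # citations in non-traditional sources

def engagement_score(record: AltmetricRecord,
                     weights: dict[str, float] | None = None) -> float:
    """Combine the category counts into one illustrative engagement figure.

    The equal default weights are an assumption for demonstration only;
    real altmetrics providers use their own, more elaborate scoring.
    """
    weights = weights or {
        "usage": 1.0, "captures": 1.0, "mentions": 1.0,
        "social_media": 1.0, "citations": 1.0,
    }
    return sum(getattr(record, field) * w for field, w in weights.items())

# Invented example numbers for a single publication:
example = AltmetricRecord(usage=320, captures=45, mentions=6,
                          social_media=80, citations=2)
print(engagement_score(example))  # 453.0 with equal weights
```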
Altmetrics are not meant to replace traditional research metrics (McEvoy & Latour 2023), but they may be useful for mapping the networks where research is being disseminated and discussed and for tracking where and how researchers engage with the public, thereby hinting at the societal influence of research (Holmberg, Bowman, Bowman, Didegah & Kortelainen 2019). While top journals contain higher quality content and are therefore cited more, social media users also tend to choose higher quality content: articles in high impact journals are read more on Mendeley, tweeted more (now posted on X), posted more on Facebook, and mentioned more in blog and news posts (Bowman & Holmberg 2017). Moreover, altmetrics are becoming increasingly recognised as a reliable indicator of article reach and success, and many funding agencies now look at altmetrics to provide additional information about a research paper. Taken together, altmetrics can be viewed as playing a complementary role in visualising the impact of research alongside typical bibliometrics (McEvoy & Latour 2023).
Quality and Impact Assessment for Other Publications
The most common publication types in the Finnish UASs are articles for the general public, articles in compilations, and articles in professional journals (Vipunen 2023). As described in the introduction, the number of these has increased dramatically over the last decade. The quality of these non-JuFo-level articles is usually ensured by an editorial board, a body which each UAS organises itself. There are no commonly and mutually accepted acceptance criteria shared between UASs. This has led to criticism that some of the published material does not fulfil the characteristics of a university-level publication (Luokkanen, Sakko, Lassila-Merisalo, Laaksonen & Friman 2023).
The situation with impact is even trickier. There is no commonly accepted definition of the impact of a UAS publication, nor are there established means to measure it. However, Priem, Taraborelli, Groth & Neylon (2010) have suggested that the impact of publications rests on four pillars: usage (downloads, views), peer review (expert opinion), citations, and altmetrics (storage, links, bookmarks, conversations) (see the previous chapter).
Suggestions for New Quality and Impact Indicators in Publishing for UASs
In the current core funding model of the UASs, a two percent (2%) share is allocated to publications, public artistic and design activities, audiovisual material, and ICT software. The performance indicator is the number of the above-mentioned outputs, weighted with a coefficient for open publications. In practice, this means there is no incentive for quality or impact. This is the opposite of the core funding model of the universities, where refereed scientific publications, published openly, play a key role.
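To make the mechanism concrete, the current indicator can be sketched as a weighted count of outputs; the open-publication coefficient below is a placeholder, not the value actually used by the Ministry.

```latex
\[
I_{\text{current}} \;=\; \sum_{p \,\in\, \text{outputs}} w_p,
\qquad
w_p =
\begin{cases}
k_{\text{open}} & \text{if output } p \text{ is openly available,}\\[2pt]
1 & \text{otherwise.}
\end{cases}
\]
% k_open is a placeholder for the Ministry's coefficient for open publications.
% Note that no term depends on the quality or impact of output p;
% only the count and openness matter.
```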
Artificial intelligence (AI) is expected to help increase the number, and hopefully also the quality, of publications. In the UAS context, however, it can also be seen as a threat to editorial board work, since resources are limited, for example for checking references. Peer-review processes for scientific articles take time, and volunteers are sometimes difficult to find. This raises the question of whether so-called quality publications will end up behind a paywall in the future because they are more expensive to produce.
One possible solution for ensuring and increasing the quality of UAS publications would be to add a peer-review procedure to professional publications. The review board could consist of merited experts from higher education institutions in different RDI fields. Another possibility would be cross-evaluation or a shared editorial board between different UASs, for example between certain companion UASs such as 3UAS: a board of experts from Metropolia and Haaga-Helia would evaluate articles submitted from Laurea, and vice versa. Cross-review between editorial boards could also be established across Finnish UASs more widely. In addition, UASs could use a quality label such as "Editorial excellence" (e.g., Dhand 2023) for excellent work in reviewing professional papers. Some academic journals recognise a small list of "outstanding" reviewers every one to two years with a certificate (Sovacool et al. 2022); perhaps something for the editorial boards of UAS journals to consider as well. According to Sovacool and colleagues (2022), this serves as a way of acknowledging excellent reviewers and of building morale and support for the journal.
Taken together, to ensure the quality of UAS papers, we suggest that the core funding model include impact parameter(s) for publications in addition to the number of articles. Altmetrics could be one way to describe the public engagement with UAS publications and are therefore, in this context, preferable to traditional scientific publication metrics. New pillars in the model, such as usage (downloads, views) and altmetrics (links, conversations, etc.), are needed to capture the new dimensions that impactful RDI work at the UASs can offer in the 2020s. In addition, the peer-review processes of UAS papers need to be developed further.
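Read as an extension of the count-based indicator sketched earlier, the suggestion could take a form such as the following; the pillars, their normalisation, and the weights are illustrative assumptions rather than a proposal for specific values.

```latex
\[
I_{\text{proposed}}(p) \;=\;
w_{\text{count}} \cdot c_p
\;+\; w_{\text{usage}} \cdot u_p
\;+\; w_{\text{alt}} \cdot a_p
\]
% c_p : the openness-weighted count term from the current model,
% u_p : a normalised usage figure for publication p (downloads, views),
% a_p : a normalised altmetrics figure (links, conversations, mentions),
% w_* : weights to be negotiated in the funding model (placeholders here).
```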
References
- Bowman, T. D., & Holmberg, K. 2017. On the differences between citations and altmetrics: An investigation of factors driving altmetrics vs. citations for Finnish articles. JASIST, 1–18.
- Carneiro, C. F. D., Queiroz, V. G. S., Moulin, T. C., Carvalho, C. A. M., Haas, C. B., Rayêe, D., Henshall, D. E., De-Souza, E. A., Amorim, F. E., Boos, F. Z., Guercio, G. D., Costa, I. R., Hajdu, K. L., van Egmond, L., Modrák, M., Tan, P. B., Abdill, R. J., Burgess, S. J., Guerra, S. F. S., … Amaral, O. B. 2020. Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. Research Integrity and Peer Review, 5(1).
- Dhand, R. 2023. Editorial Excellence: Recognizing and Nurturing Good Editors. Springer Nature.
- Dinsmore, A., Allen, L., & Dolby, K. 2014. Alternative Perspectives on Impact: The Potential of ALMs and Altmetrics to Inform Funders about Research Impact. PLoS Biology, 12(11).
- Garfield, E. 2006. The History and Meaning of the Journal Impact Factor. JAMA, 295(1), 90–93.
- Holmberg, K., Bowman, S., Bowman, T., Didegah, F., & Kortelainen, T. 2019. What Is Societal Impact and Where Do Altmetrics Fit into the Equation? Journal of Altmetrics, 2(1).
- Luokkanen, S., Sakko, S., Lassila-Merisalo, M., Laaksonen, P., & Friman, M. 2023. Ratkaiseeko raha kaiken – laadun merkitys (ammatti)korkeakoulujen julkaisutoiminnassa. UAS Journal, 2.
- McEvoy, N. L., & Latour, J. M. 2023. From impact factors to Altmetrics: What numbers are important in publishing your paper? Nursing in Critical Care.
- Priem, J., Taraborelli, D., Groth, P., & Neylon, C. 2010. Altmetrics: a manifesto. Altmetrics.org.
- Rossner, M., Van Epps, H., & Hill, E. 2007. Show Me the Data. Journal of General Physiology, 131(1), 3–4.
- Sovacool, B. K., Axsen, J., Delina, L. L., Boudet, H. S., Rai, V., Sidortsov, R., Churchill, S. A., Jenkins, K. E. H., & Galvin, R. 2022. Towards codes of practice for navigating the academic peer review process. Energy Research and Social Science, 89.
- Tahamtan, I., & Bornmann, L. 2019. What do citation counts measure? An updated review of studies on citations in scientific documents published between 2006 and 2018. Scientometrics, 121(3), 1635–1684.
- Vipunen, Education Statistics Finland. Cited 27.9.2023.