Possibilities of Quality and Impact Assessment of RDI Publications in the Universities of Applied Sciences

Text | Pia Saari, Alina Leminen

The number of publications in the Finnish universities of applied sciences (UAS) has increased dramatically over the last ten years: three to four times as many publications of various types are now released annually. Even more interesting than the growing number of papers, however, is their quality and impact. How are the quality and impact of publications defined and measured in the first place?

Photo by Isaac Smith on Unsplash

Currently, quality and impact are not incentivised in the core funding model set by the Ministry of Education and Culture. In this article, we examine the elements of publication quality and impact in the UAS context and consider how these could be included in the core funding model. We review the established metrics for publication quality and impact, along with their pros and cons. Finally, we discuss recent developments in the field, including artificial intelligence and its effect on publishing.

State of the Art in the Metrics for Quality and Impact Assessment in Scientific Journals

The impact of scientists in academia is typically measured by citation-based metrics. The quality of published research has commonly been estimated via the citation count, the journal impact factor (JIF), and peer review. The citation count, indicating how often a publication is referenced by other research papers, has been used as a primary measure of scientific impact and as an indicator of the overall quality of a research paper (Tahamtan & Bornmann 2019). A higher citation count typically implies that the scientific work has had a significant influence on subsequent research.

The impact factor (IF), or journal impact factor (JIF), aims to evaluate the relative importance of scientific journals. The JIF is calculated each year by dividing the number of citations received in that year by papers published in the two preceding years by the number of “citable items” published during those two years (Garfield 2006). JIF-style metrics have limitations, since they are highly susceptible to skew from “blockbuster” or subsequently retracted papers (Rossner, Van Epps & Hill 2007).
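As a worked illustration of this definition (the numbers are hypothetical): a journal whose 2021–2022 papers received 150 citations in 2023, and which published 60 citable items in those two years, would have a 2023 JIF of 2.5.

```latex
% JIF for year Y (illustrative; 150 and 60 are made-up numbers)
\[
  \mathrm{JIF}_{Y} = \frac{C_{Y}}{N},
  \qquad
  \mathrm{JIF}_{2023} = \frac{150}{60} = 2.5
\]
% where C_Y = citations received in year Y to items published in years Y-1 and Y-2,
%       N   = "citable items" published in years Y-1 and Y-2.
```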

Peer review is the accepted best practice for determining which papers are published in academic journals, and it operates as the predominant process for assessing the validity, quality, and originality of scientific articles for publication (Sovacool et al. 2022). The limitations of the peer-review process include reviewer bias, lack of agreement among reviewers, vulnerability to various forms of system gaming such as ‘lottery behavior’ by authors and self-peer-review scams by predatory journals, and the time lag of publication with the resulting delay in the dissemination of scientific findings (Carneiro et al. 2020). Despite these limitations and the criticism, peer review is still generally perceived as key to quality control for research (Rossner, Van Epps & Hill 2007).

Citation counts (and the JIF), however, may not necessarily reflect the broader impact of research, such as its educational, cultural, environmental, and economic impact (e.g., Holmberg, Bowman, Bowman, Didegah & Kortelainen 2019), and accessible means of assessing this impact have not been readily available (Dinsmore, Allen & Dolby 2014). Over the past years, the shift of academic literature from paper journals to online platforms has led to the rise of alternative metrics, or altmetrics (Dinsmore, Allen & Dolby 2014). Altmetrics include, for instance, social media mentions, downloads, and online discussions pertaining to a publication. Most scientific journals and altmetrics platforms (e.g., Altmetric.com, PlumX) measure activities and engagement surrounding research publications across different online platforms. For instance, PlumX measures (a simple aggregation sketch follows the list):

  1. usage, i.e., how often a research output is viewed, downloaded, or accessed online;
  2. captures, i.e., how many users have bookmarked, saved, or otherwise captured the research output in their reference management tools or social bookmarking services;
  3. mentions of the research in social media platforms, blog posts, news articles, policy documents, etc.;
  4. social media engagement;
  5. citations in non-traditional sources; and
  6. online discussions.
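To make the idea concrete, here is a minimal sketch of how such category counts might be combined into a single engagement summary. The category names mirror the list above, but all weights, counts, and function names are illustrative assumptions, not the actual PlumX methodology or API:

```python
from dataclasses import dataclass

@dataclass
class AltmetricCounts:
    """PlumX-style engagement categories for one research output (illustrative)."""
    usage: int = 0        # views, downloads, online accesses
    captures: int = 0     # bookmarks, saves in reference managers
    mentions: int = 0     # blog posts, news articles, policy documents
    social: int = 0       # likes, shares, posts on social media
    citations: int = 0    # citations in non-traditional sources
    discussions: int = 0  # online discussion threads

def engagement_summary(c: AltmetricCounts) -> dict:
    """Summarise raw counts; the weights are arbitrary placeholders."""
    weights = {"usage": 0.1, "captures": 0.5, "mentions": 1.0,
               "social": 0.3, "citations": 2.0, "discussions": 0.8}
    weighted = sum(getattr(c, field) * w for field, w in weights.items())
    total = sum(getattr(c, field) for field in weights)
    return {"total_events": total, "weighted_score": round(weighted, 1)}

# Hypothetical counts for a single article
article = AltmetricCounts(usage=1200, captures=45, mentions=8,
                          social=310, citations=3, discussions=12)
print(engagement_summary(article))
# {'total_events': 1578, 'weighted_score': 259.1}
```

A real platform would of course normalise such counts by field and publication age; the point of the sketch is only that heterogeneous engagement signals can be rolled up into comparable summaries.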

Altmetrics are not meant to replace traditional research metrics (McEvoy & Latour 2023), but they may be useful in mapping the networks where research is disseminated and discussed, and in tracking where and how researchers engage with the public, thereby hinting at the societal influence of research (Holmberg, Bowman, Bowman, Didegah & Kortelainen 2019). While top journals contain higher quality content and are therefore cited more, social media users also tend to choose higher quality content: high-impact journals are read more on Mendeley, tweeted more (now posted on X), shared more on Facebook, and mentioned more often in blog and news posts (Bowman & Holmberg 2017). Moreover, altmetrics are becoming increasingly recognised as a reliable indicator of article reach and success, and many funding agencies now look at altmetrics for additional information about a research paper. Taken together, altmetrics can be viewed as playing a complementary role in visualising the impact of research alongside traditional bibliometrics (McEvoy & Latour 2023).

Quality and Impact Assessment for Other Publications

The most common publication types in the Finnish UASs are articles for the general public, articles in compilations, and articles in professional journals (Vipunen 2023). As described in the introduction, their number has increased dramatically over the last decade. The quality of these non-JuFo-level articles is typically ensured by an editorial board, a body that each UAS organises itself. There are no commonly and mutually accepted acceptance criteria for articles across UASs. This has led to criticism that some of the published materials do not fulfil the characteristics of a university-level publication (Luokkanen, Sakko, Lassila-Merisalo, Laaksonen & Friman 2023).

The situation for impact is even trickier: there is no commonly accepted definition of the impact of a UAS publication, nor established means to measure it. However, Priem, Taraborelli, Groth & Neylon (2010) have suggested that the impact of publications consists of four pillars: usage (downloads, views), peer review (expert opinion), citations, and altmetrics (storage, links, bookmarks, conversations) (see the previous section).

Suggestions for New Quality and Impact Indicators in Publishing for UASs

The current core funding model of the UASs allocates a two percent (2%) share to publications, public artistic and design activities, audiovisual material, and ICT software. The performance indicator is the number of the above-mentioned outputs, weighted with a coefficient for open publications. In practice, this means there are no incentives for quality or impact. This is the opposite of the core funding model of the universities, where refereed scientific publications, especially open ones, play a key role.

Artificial intelligence (AI) is expected to help increase the number, and hopefully also the quality, of publications. In the UAS context, however, it can be seen as a threat to editorial board work, since resources are limited, for example for checking references. Peer-review processes for scientific articles take time, and volunteers are sometimes difficult to find. This raises the question of whether so-called quality publications will end up behind a paywall in the future because they are more expensive to produce.

One possible solution for ensuring and increasing the quality of UAS publications would be to introduce a peer-review procedure for professional publications. The review board could consist of merited experts from higher education institutions in different RDI fields. Another possibility would be cross-evaluation or a shared editorial board between different UASs, for example among certain companion UASs such as 3UAS: a board of experts from Metropolia and Haaga-Helia would evaluate articles submitted from Laurea, and vice versa. Cross-review between editorial boards could also be established across Finnish UASs. In addition, UASs could use a quality label such as “Editorial Excellence” (e.g., Dhand 2023) for excellent-quality work in reviewing professional papers. Some academic journals recognise a small list of “outstanding” reviewers every one to two years with a certificate (Sovacool et al. 2022); perhaps something to consider for the editorial boards of UAS journals as well. According to Sovacool and colleagues (2022), this serves as a way of acknowledging excellent reviewers and building morale and support for the journal.

Taken together, to ensure the quality of UAS papers, we suggest that the core funding model include impact parameter(s) for publications in addition to the number of articles. Altmetrics could be one of the metrics describing the public engagement with UAS publications, and in this context they are preferable to traditional scientific publication metrics. New pillars for the model, such as usage (downloads, views) and altmetrics (links, conversations, etc.), are needed to capture the new dimensions that impactful RDI work at the UASs can offer in the 2020s; a hypothetical sketch of such an indicator is given below. In addition, the peer-review processes of UAS papers need to be developed further.
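As a purely illustrative sketch, assume the count-based indicator of the current model is kept and extended with usage and altmetrics pillars. Every weight, coefficient, and name below is a hypothetical placeholder, not an official parameter of the Ministry of Education and Culture’s model:

```python
def publication_indicator(n_publications: int, n_open: int,
                          downloads: int, altmetric_events: int,
                          open_coeff: float = 1.2,
                          usage_weight: float = 0.0005,
                          altmetric_weight: float = 0.01) -> float:
    """Hypothetical composite indicator for the core funding model.

    Combines the current count-based score (with a coefficient for open
    publications) with usage and altmetrics pillars. All weights are
    illustrative placeholders.
    """
    count_score = (n_publications - n_open) + open_coeff * n_open
    usage_score = usage_weight * downloads                  # usage pillar (downloads, views)
    altmetric_score = altmetric_weight * altmetric_events   # altmetrics pillar
    return count_score + usage_score + altmetric_score

# Example: a UAS reporting 400 publications (350 open access),
# 200 000 downloads and 5 000 altmetric events in one year
print(publication_indicator(400, 350, 200_000, 5_000))  # 620.0
```

In this sketch the publication count still dominates, while usage and altmetrics contribute a meaningful share; how the pillars should actually be weighted, and how the counts would be audited, are exactly the policy questions such a model would need to settle.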

References

URN http://urn.fi/URN:NBN:fi-fe20231129149853
