REFLECTIONS ON THE NATIONAL POLICY TO IMPROVE THE IMPACT OF NATIONAL SCIENTIFIC PUBLICATIONS AND THE NEW MODEL OF CLASSIFICATION OF SCIENTIFIC JOURNALS
Main Text
In August 2016, the Administrative Department of Science, Technology and Innovation of Colombia (Colciencias), through its Directorate for the Promotion of Research, released Document No. 1601, entitled "National Policy to Improve the Impact of National Scientific Publications", which identifies as a problem the national scientific community's low contribution to the creation and production of knowledge worldwide. The governing body of the National System for Science, Technology and Innovation (SNCTI, for its initials in Spanish) attributes this situation to three main factors:
The low number and impact of scientific documents published by national authors.
The low impact of the scientific journals published in the country.
The little recognition and low citation levels that publications by national authors and national scientific journals receive within the international scientific community (Colciencias, 2016).
In response to the problems detected, the Directorate for Promotion, together with a committee of experts on editorial matters, designed the new Model of Classification of Scientific Journals, which is structured in two stages: a diagnostic stage and a second stage that will yield the official results, valid for two years. Both stages comprise three phases: (i) fulfillment of editorial management, (ii) validation, evaluation and visibility, and (iii) impact of the scientific journal.
In the first phase, fulfillment of editorial management, each journal must place itself in one of the disciplines, areas and large areas of knowledge defined by the Organisation for Economic Co-operation and Development (OECD), according to the topics covered in its various issues. The aim is to unify and homogenize the different disciplines incorporated within the same large area of knowledge, so as to establish and measure the upper and lower bounds of each quartile and thereby place the journals in the different categories (A1, A2, B and C). Thus, for example, the journal Diálogos de Saberes publishes research articles mainly concerned with Law and is therefore grouped within the large area of Social Sciences, alongside journals devoted to psychology, economics and business, educational sciences, sociology, political science, social and economic geography, and journalism and communication, among others.
This means that the citation dynamics of the scientific and academic communities that make up the different disciplines of the large area of knowledge are of great importance for the categorization of a journal in the National Indexing System. Because journals belonging to the same large area are measured together, the quartiles are shaped by the disciplines in which researchers cite one another most: the greater the number of citations in an area or discipline, the greater the H5 values, which in turn raises the number of citations needed to reach the 25-49% and 50-74.9% bands of the large area of knowledge and thus to be placed at least in categories B and C.
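To see how this pooling works arithmetically, the sketch below (in Python) ranks a set of invented journals from different Social Sciences disciplines by hypothetical H5 values and assigns categories using assumed percentile bands (A1 for the 75th percentile and above, A2 for the 50th, B for the 25th, C below). Both the figures and the exact band boundaries are illustrative assumptions, not the official Publindex procedure.

```python
# Illustrative sketch only: hypothetical H5 values and a simplified
# reading of the Model's percentile bands. Not the official algorithm.

def percentile_rank(value, values):
    """Share of journals in the large area with an H5 below `value`."""
    below = sum(1 for v in values if v < value)
    return 100.0 * below / len(values)

def category(pct):
    """Assumed mapping from percentile band to category."""
    if pct >= 75:
        return "A1"
    if pct >= 50:
        return "A2"
    if pct >= 25:
        return "B"
    return "C"

# One OECD "large area" (Social Sciences) mixing disciplines with very
# different citation habits; all names and numbers are invented.
social_sciences_h5 = {
    "Psychology journal P": 28,
    "Economics journal E": 24,
    "Education journal D": 15,
    "Sociology journal S": 12,
    "Political science journal V": 10,
    "Law journal L": 6,
}

h5_values = list(social_sciences_h5.values())
for name, h5 in social_sciences_h5.items():
    pct = percentile_rank(h5, h5_values)
    print(f"{name}: H5={h5}, percentile={pct:.0f} -> {category(pct)}")
```

Because the percentile is computed over the whole large area, a law journal whose H5 is respectable within its own discipline can still fall into the lowest band once high-citation disciplines dominate the distribution, which is precisely the distortion discussed next.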
For this reason, the decision to group journals by large areas rather than by disciplines is considered a design flaw of the Measurement Model, all the more so because the dynamics of interaction between academic communities differ widely. Whereas in sciences such as psychology and economics researchers routinely cite one another, given the importance of experimental antecedents, in other disciplines citation levels are much lower. A grouping such as the one proposed by the Model therefore not only ignores the findings of the sociology of science, which reveal the different structures of scientific communities and their differences in the construction of science, but can also endanger the channels of dissemination of scientific knowledge in those disciplines where citation levels are lower and for which the quartiles required for categories B or C become unreachable, given the high citation levels of other disciplines in the same area.
The situation described may reduce the means of scientific dissemination by concentrating, in some disciplines, the scientific journals through which researchers can present their results, which will discourage intellectual production and scientific work by lowering the likelihood of recognition within the respective academic and scientific community. In other cases, it may leave journals devoted to very specific topics, particularly those cultivated by a small number of scientists or experts, without indexing, thereby removing pluralism from science.
With regard to phase (ii), which reviews the evaluation process and visibility, it is worth highlighting the requirements the Model imposes on journals to guarantee exogamy, for example: that a significant percentage of the published articles be the product of research supported by institutions other than the publishing institution; that editors publish no more than one article per year in the journal they edit; that at least 80% of the members of the editorial/scientific committee be affiliated with institutions other than the publishing institution; and, finally, that at least 60% of the evaluators of the articles published in the journal's issues belong neither to the editorial/scientific committee nor to the publishing institution. All of these requirements are considered to strengthen the rigor and scientific quality of the editorial processes, contributing to the quality of the works published in journals edited by national institutions.
Notwithstanding the importance of the foregoing requirements, it should be noted that this phase also evaluates the visibility of the journal, for which the Model demands fulfillment of at least one of the following conditions: (a) inclusion in at least one Bibliographic Citation Index (IBC, for its initials in Spanish); (b) in at least one Bibliographic Index (IB); or (c) in at least one Bibliographic Database with a Scientific Selection Committee (BBCCS). However, although the role of these channels of scientific communication in the dissemination of journals is understood, it is hard to understand why the IBC, IB and BBCCS that satisfy this condition must be exclusively those dictated by Colciencias in its list of Indexing and Abstracting Services (SIR, for its initials in Spanish), all the more so because no methodology was offered to identify the SIRs that would make up the list. This suggests a degree of arbitrariness in the process, especially since its effect may be to strip the publication under consideration of its recognition as a scientific journal, given that failure to comply with this condition immediately excludes the journal from indexing.
On the other hand, with regard to phase (iii), which evaluates the impact of the articles published in the journal in terms of citations received, the measurement model ignores how a publication affects the state of the science, and of the discipline, to which it is ascribed. In the specific field of Law, the impact of a scientific article is greater insofar as it has transformative effects on the state of the art of that science: a publication that has merely been cited in other articles will not have the same impact as one that has been treated as a doctrinal source of Law for the solution of a legal problem, invoked in a judicial ruling, or used as a reference in the construction of a public policy aimed at solving a social problem.
Finally, it is important to bear in mind that the Model, like the Policy to Improve the Impact of Publications, disregarded the ten principles of the Leiden Manifesto for Research Metrics. We therefore call on the responsible authorities to analyze and rethink the Model in light of those principles, taking into account the need to protect and encourage the creation of science that is adapted to the needs of the context in which it is produced, and that is evaluated with parameters that go beyond quantitative indicators that dehumanize science and reduce it to a series of factors that do not necessarily reflect the quality and validity of its results, or its richness, understood as its contribution to meeting needs that lead to a better quality of individual and community life.
These principles indicate:
1. Quantitative evaluation should support qualitative, expert assessment. Quantitative metrics can challenge bias tendencies in peer review and facilitate deliberation. This should strengthen peer review, because making judgements about colleagues is difficult without a range of relevant information. However, assessors must not be tempted to cede decision-making to the numbers. Indicators must not substitute for informed judgement. Everyone retains responsibility for their assessments.
2. Measure performance against the research missions of the institution, group or researcher. Programme goals should be stated at the start, and the indicators used to evaluate performance should relate clearly to those goals. The choice of indicators, and the ways in which they are used, should take into account the wider socio-economic and cultural contexts. Scientists have diverse research missions. Research that advances the frontiers of academic knowledge differs from research that is focused on delivering solutions to societal problems. Review may be based on merits relevant to policy, industry or the public rather than on academic ideas of excellence. No single evaluation model applies to all contexts.
3. Protect excellence in locally relevant research. In many parts of the world, research excellence is equated with English language publication. Spanish law, for example, states the desirability of Spanish scholars publishing in high-impact journals. The impact factor is calculated for journals indexed in the US-based and still mostly English-language Web of Science. These biases are particularly problematic in the social sciences and humanities, in which research is more regionally and nationally engaged. Many other fields have a national or regional dimension - for instance, HIV epidemiology in sub-Saharan Africa. This pluralism and societal relevance tends to be suppressed to create papers of interest to the gatekeepers of high impact: English-language journals. The Spanish sociologists that are highly cited in the Web of Science have worked on abstract models or study US data. Lost is the specificity of sociologists in high-impact Spanish language papers: topics such as local labour law, family health care for the elderly or immigrant employment. Metrics built on high-quality non-English literature would serve to identify and reward excellence in locally relevant research.
4. Keep data collection and analytical processes open, transparent and simple. The construction of the databases required for evaluation should follow clearly stated rules, set before the research has been completed. This was common practice among the academic and commercial groups that built bibliometric evaluation methodology over several decades. Those groups referenced protocols published in the peer reviewed literature. This transparency enabled scrutiny. For example, in 2010, public debate on the technical properties of an important indicator used by one of our groups (the Centre for Science and Technology Studies at Leiden University in the Netherlands) led to a revision in the calculation of this indicator. Recent commercial entrants should be held to the same standards; no one should accept a black-box evaluation machine. Simplicity is a virtue in an indicator because it enhances transparency. But simplistic metrics can distort the record (see principle 7). Evaluators must strive for balance - simple indicators true to the complexity of the research process.
5. Allow those evaluated to verify data and analysis. To ensure data quality, all researchers included in bibliometric studies should be able to check that their outputs have been correctly identified. Everyone directing and managing evaluation processes should assure data accuracy, through self-verification or third-party audit. Universities could implement this in their research information systems and it should be a guiding principle in the selection of providers of these systems. Accurate, high-quality data take time and money to collate and process. Budget for it.
6. Account for variation by field in publication and citation practices. Best practice is to select a suite of possible indicators and allow fields to choose among them. A few years ago, a European group of historians received a relatively low rating in a national peer-review assessment because they wrote books rather than articles in journals indexed by the Web of Science. The historians had the misfortune to be part of a psychology department. Historians and social scientists require books and national language literature to be included in their publication counts; computer scientists require conference papers be counted. Citation rates vary by field: top-ranked journals in mathematics have impact factors of around 3; top-ranked journals in cell biology have impact factors of about 30. Normalized indicators are required, and the most robust normalization method is based on percentiles: each paper is weighted on the basis of the percentile to which it belongs in the citation distribution of its field (the top 1%, 10% or 20%, for example); a brief computational sketch of this percentile weighting follows this list. A single highly cited publication slightly improves the position of a university in a ranking that is based on percentile indicators, but may propel the university from the middle to the top of a ranking built on citation averages.
7. Base assessment of individual researchers on a qualitative judgement of their portfolio. The older you are, the higher your h-index, even in the absence of new papers. The h-index varies by field: life scientists top out at 200; physicists at 100, and social scientists at 20-30. It is database dependent: there are researchers in computer science who have an h-index of around 10 in the Web of Science but of 20-30 in Google Scholar (a minimal sketch of the h-index calculation also follows this list). Reading and judging a researcher's work is much more appropriate than relying on one number. Even when comparing large numbers of researchers, an approach that considers more information about an individual's expertise, experience, activities and influence is best.
8. Avoid misplaced concreteness and false precision. Science and technology indicators are prone to conceptual ambiguity and uncertainty and require strong assumptions that are not universally accepted. The meaning of citation counts, for example, has long been debated. Thus, best practice uses multiple indicators to provide a more robust and pluralistic picture. If uncertainty and error can be quantified, for instance using error bars, this information should accompany published indicator values. If this is not possible, indicator producers should at least avoid false precision. For example, the journal impact factor is published to three decimal places to avoid ties. However, given the conceptual ambiguity and random variability of citation counts, it makes no sense to distinguish between journals on the basis of very small impact factor differences. Avoid false precision: only one decimal is warranted.
9. Recognize the systemic effects of assessment and indicators. Indicators change the system through the incentives they establish. These effects should be anticipated. This means that a suite of indicators is always preferable - a single one will invite gaming and goal displacement (in which the measurement becomes the goal). For example, in the 1990s, Australia funded university research using a formula based largely on the number of papers published by an institute. Universities could calculate the ‘value’ of a paper in a refereed journal; in 2000, it was Aus$800 (around US$480 in 2000) in research funding. Predictably, the number of papers published by Australian researchers went up, but they were in less-cited journals, suggesting that article quality fell.
10. Scrutinize indicators regularly and update them. Research missions and the goals of assessment shift and the research system itself co-evolves. Once-useful metrics become inadequate; new ones emerge. Indicator systems have to be reviewed and perhaps modified. Realizing the effects of its simplistic formula, Australia in 2010 introduced its more complex Excellence in Research for Australia initiative, which emphasizes quality.
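Principle 6 above describes percentile-based field normalization. As a purely illustrative aid, the following sketch shows how characterizing each paper by its percentile within its own field's citation distribution makes papers from fields with very different citation rates comparable; the citation counts are invented for the example.

```python
# Illustrative sketch of the percentile normalization described in
# principle 6. All citation counts below are invented.

def citation_percentile(citations, field_distribution):
    """Percentile of a paper within its field's citation distribution."""
    below = sum(1 for c in field_distribution if c < citations)
    return 100.0 * below / len(field_distribution)

# Hypothetical citation counts for papers in two fields with very
# different citation rates.
mathematics = [0, 1, 1, 2, 3, 4, 6, 9, 15, 40]
cell_biology = [2, 5, 8, 12, 20, 30, 45, 60, 90, 300]

# A mathematics paper with 9 citations and a cell-biology paper with 45
# citations occupy comparable positions within their own fields, even
# though their raw counts differ by a factor of five.
print(citation_percentile(9, mathematics))    # 70.0
print(citation_percentile(45, cell_biology))  # 60.0
```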
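Principle 7 mentions the h-index and its dependence on the database used. The following sketch, again with invented citation counts, shows the standard h-index calculation and how the same researcher's figure can differ when computed over two databases with different coverage.

```python
# Illustrative sketch of the h-index mentioned in principle 7: the
# largest h such that the researcher has h papers with at least h
# citations each. All citation counts below are invented.

def h_index(citations):
    """Largest h such that at least h papers have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical counts for one researcher in two databases whose
# coverage of that researcher's output differs.
database_a_counts = [25, 18, 12, 9, 7, 6, 4, 3, 1, 0]
database_b_counts = [60, 40, 31, 25, 20, 15, 12, 11, 10, 10, 8, 5, 3]

print(h_index(database_a_counts))  # 6
print(h_index(database_b_counts))  # 10
```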
Copyright & License
This is an open-access article distributed under the terms of the Creative Commons Attribution License
Author
Didier Andrés Ávila Roncancio
Undergraduate studies in Law at Universidad Libre de Bogotá. Studies in the Colombian stock market. Candidate for a master's degree in Economic Law at Pontificia Universidad Javeriana. Member of the socio-legal research group of Universidad Libre, Bogotá, Colombia, recognized by Colciencias in category A. Editorial Assistant of the journal Diálogos de Saberes, Bogotá, Colombia.