Authors:
(1) Hamid Reza Saeidnia, Department of Information Science and Knowledge Studies, Tarbiat Modares University, Tehran, Islamic Republic of Iran;
(2) Elaheh Hosseini, Department of Information Science and Knowledge Studies, Faculty of Psychology and Educational Sciences, Alzahra University, Tehran, Islamic Republic of Iran;
(3) Shadi Abdoli, Department of Information Science, Université de Montreal, Montreal, Canada;
(4) Marcel Ausloos, School of Business, University of Leicester, Leicester, UK and Bucharest University of Economic Studies, Bucharest, Romania.
RQ 4: Future of Scientometrics, Webometrics, and Bibliometrics with AI
RQ 5: Ethical Considerations of Scientometrics, Webometrics, and Bibliometrics with AI
Conclusion, Limitations, and References
Despite the positive aspects outlined above, the use of artificial intelligence (AI) in scientometrics, webometrics, and bibliometrics raises important ethical considerations that must be carefully addressed.
AI algorithms often require access to large amounts of data, including personal and sensitive information [73]. It is crucial to ensure that proper data protection measures are in place to safeguard privacy and prevent unauthorized access [74]. Data anonymization and encryption techniques should be employed, and compliance with relevant data protection regulations should be followed [75].
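As a minimal illustration of the anonymization measures mentioned above, the sketch below pseudonymizes direct identifiers in a bibliographic record with a keyed hash. The record fields, the key, and the function name are hypothetical; in practice the key would be stored securely outside the dataset, and full compliance would also require encryption at rest and adherence to the applicable regulations.

```python
import hashlib
import hmac

# Hypothetical secret key; in a real deployment it must be stored
# securely, separate from the data it protects.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an author name or ORCID)
    with a keyed SHA-256 hash, so records can still be linked
    across analyses without exposing the original value."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Example bibliographic record (illustrative data only).
record = {"author": "Jane Doe", "orcid": "0000-0002-1825-0097",
          "citations": 42}

# Pseudonymize the identifying fields; keep the metric fields intact.
anonymized = {**record,
              "author": pseudonymize(record["author"]),
              "orcid": pseudonymize(record["orcid"])}
```

A keyed hash (HMAC) is used rather than a plain hash so that an attacker without the key cannot rebuild the mapping by hashing a list of known author names.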
AI algorithms can be prone, deliberately or inadvertently, to bias, which can result in unfair or discriminatory outcomes [17, 76]. It is important to ensure that AI models are trained on diverse and representative datasets to avoid perpetuating existing biases [77]. Regular monitoring and auditing of AI systems should be conducted to identify and address any biases that may arise [78].
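One simple form the auditing mentioned above can take is a demographic-parity check: comparing the rate at which an AI system produces a favorable outcome (here, whether a hypothetical ranker surfaces a paper) across groups. The group labels and data are illustrative assumptions, not a prescription for how groups should be defined.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, where `selected`
    indicates e.g. whether an AI ranker surfaced the paper.
    Returns the per-group selection rate."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest pairwise difference in selection rate; values near 0
    suggest parity, large values flag groups for closer auditing."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Illustrative audit log: group A is selected 2/3 of the time,
# group B only 1/3 of the time.
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit)
gap = parity_gap(rates)  # 1/3 here; a gap this large would merit review
```

A parity gap is only one of several fairness metrics; which one is appropriate depends on the task, so an audit in practice would report several and involve domain experts in interpreting them.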
Sometimes, AI algorithms can be complex and opaque, making it difficult to understand how they arrive at their decisions [79]. Thus, it is important to promote transparency and explainability in AI models used in scientometrics, webometrics, and bibliometrics. Researchers and users should have access to information about the data used, the algorithms employed, and the decision-making processes of the AI systems [76, 79].
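The transparency items named above (the data used, the algorithm employed, and the decision-making process) can be published alongside a model as structured metadata. The sketch below shows one possible "model card" for a hypothetical citation ranker; every field value is an assumed example.

```python
import json

# Hypothetical transparency record for an AI-assisted bibliometric
# tool; field names mirror the disclosure items discussed in the text.
model_card = {
    "name": "citation-impact-ranker",            # assumed tool name
    "training_data": "open bibliographic records, 2010-2020 (assumed)",
    "algorithm": "gradient-boosted trees (assumed)",
    "decision_process": ("ranks papers by predicted citation impact; "
                         "top contributing features are exposed with "
                         "each score"),
    "known_limitations": ["English-language bias",
                          "field-size effects on citation counts"],
}

# Serializing the card makes it easy to publish with the model.
print(json.dumps(model_card, indent=2))
```

Keeping such a record machine-readable lets researchers and users inspect it programmatically rather than relying on informal documentation.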
As AI systems become more autonomous, it is essential to establish clear lines of accountability and responsibility [80]. Developers, researchers, and users should be aware of their roles and responsibilities in ensuring the responsible and ethical use of AI in these fields. This includes addressing any potential biases, errors, or unintended consequences that may arise from the use of AI.
In cases where personal data is involved, obtaining informed consent from individuals is crucial [78]. Researchers and organizations should have robust consent management processes in place to ensure that individuals understand how their data will be used and have the ability to provide or withdraw consent.
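A consent management process of the kind described above must, at minimum, record grants and withdrawals and exclude data for anyone who has withdrawn. The class below is a minimal sketch under that assumption; the identifiers and method names are hypothetical, and a production system would also need audit logging and persistent storage.

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Minimal sketch of consent management: records when a person
    granted or withdrew consent, and answers whether their data may
    currently be used."""

    def __init__(self):
        # person_id -> (granted: bool, timestamp of last change)
        self._status = {}

    def grant(self, person_id):
        self._status[person_id] = (True, datetime.now(timezone.utc))

    def withdraw(self, person_id):
        self._status[person_id] = (False, datetime.now(timezone.utc))

    def has_consent(self, person_id):
        # Default to False: no recorded consent means no use.
        granted, _ = self._status.get(person_id, (False, None))
        return granted

registry = ConsentRegistry()
registry.grant("researcher-001")
registry.withdraw("researcher-001")
usable = registry.has_consent("researcher-001")  # False after withdrawal
```

Defaulting to "no consent" for unknown individuals reflects the opt-in principle: data is excluded unless consent has been explicitly recorded.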
Furthermore, the use of AI in scientometrics, webometrics, and bibliometrics may have implications for employment and society as a whole. It is important to consider the potential impact on jobs, the distribution of resources, and the broader societal implications. Measures should be taken to mitigate any negative effects and ensure a fair and equitable transition.

Regular monitoring and evaluation of AI systems should be conducted to assess their performance, identify any biases or ethical concerns, and make necessary improvements. This ongoing monitoring and evaluation process should involve interdisciplinary collaboration and engagement with stakeholders.
Addressing these ethical considerations requires a multidisciplinary approach involving researchers, policymakers, ethicists, and stakeholders from various fields. Open dialogue, transparency, and ongoing evaluation are essential to ensure that AI is used responsibly and ethically in scientometrics, webometrics, and bibliometrics.