AI Policies.
In Science and Philosophy, we are aware that “in the vast majority of articles, as AI language models appear to be expanding at an exponential rate and their capabilities tend to develop at an unprecedented pace, [this has led to] significant controversies and dilemmas in terms of how they will change academic writing and knowledge production.”
For this reason, we have divided the policy on Artificial Intelligence into the following statements.
Authorship: AI is not an author; it is a research assistant that is useful for explaining articles, exploring data, and formatting citations.
Risks: These depend on (a) biases in the underlying data and (b) incorrect, inaccurate, or misleading information of critical importance, particularly hallucinated references appearing in academic works and completely fabricated data used for empirical purposes.
Such risks raise concerns about the reliability of these tools in scientific writing. AI-generated texts show a high factual error rate, which calls into question their use in certain areas of research, such as bibliometric analysis.
Quality: AI-generated texts tend to be lower in quality and comprehensiveness; at the same time, it is increasingly difficult to distinguish AI-generated texts from human-written ones, which raises concerns about research accuracy and about quality being compromised for the sake of productivity.
Ethics: AI raises concerns about the definition of authorship, accountability, and the application of ethical standards in academic publishing. The legal dimension of these concerns includes risks such as copyright violations and plagiarism arising from information that is unauthorised, unverified, or incorrectly generated by ChatGPT.
Concerns about “creativity” can no longer be limited to writing ability but now extend to the use of ChatGPT or other large language models (LLMs) to write creatively, which ultimately questions the very essence of authorship and academic writing and may become a central issue in the era of AI-assisted research.
Academic Innovation and Integration: The use of ChatGPT for research leads to an increase in AI hallucinations, especially in references, which is alarming given growing concerns about the limited capacity, and in some cases the limited rigor, of peer review.
We recommend cautious use of AI in scientific writing, under rigorous human oversight, to preserve publication standards and reliability; this means strict adherence to copyright laws and academic regulations when using AI tools, in order to limit potential harm.
It should also be noted that academic integrity, although endangered by the excessive use of ChatGPT in writing, can be protected, since tools such as ChatGPT also make it easier to detect and flag data errors, false data, or fabricated results.
Self-regulation, critical thinking, and ethical interaction with AI in educational contexts are encouraged; any use of AI inconsistent with the policy proposed here will be grounds for article rejection.
Reference
Lendvai GF. ChatGPT in academic writing: A scientometric analysis of the literature published between 2022 and 2023. Journal of Empirical Research on Human Research Ethics. 2025;20(3):131-148. doi: 10.1177/15562646251350203


