Can artificial intelligence be the author of my scientific article? What you need to know if you write with AI

Artificial intelligence (AI) tools have recently seen a notable increase in their relevance within the academic sphere. ChatGPT, Gemini, Claude, and other natural language platforms have become part of the daily practices of many researchers and authors, especially in tasks related to writing, editing, and even generating scientific articles. However, this technological advance raises fundamental questions regarding the ethics and integrity of scientific research and publication: Can AI be considered the author of a scientific article? Should it be cited? And what obligations do authors who use these tools have?

This debate should not be underestimated. The lines separating legitimate technological assistance from ethical responsibility are being redrawn in academic publishing. In order to guide the scientific community, various organizations have established clear positions that all authors, editors, and reviewers should be aware of.

Artificial intelligence cannot be an author

It is imperative to clarify that artificial intelligence cannot be considered the author of any academic work. The Committee on Publication Ethics (COPE) asserts in its position statement “Authorship and AI Tools”1 that authorship entails legal, ethical, and intellectual responsibility, attributes that AI, by its very nature, does not possess.

Similarly, the World Association of Medical Editors (WAME)2 maintains that chatbots such as ChatGPT do not meet the criteria for authorship established by the International Committee of Medical Journal Editors (ICMJE). According to these guidelines, only human individuals can be considered authors, as they must be able to publicly take responsibility for the publication's content.

For its part, the journal JAMA3 emphasizes that no AI tool can sign manuscripts, submit corrections, respond to reviewers, or validate data. Therefore, authorship is presented as a strictly human condition.

Declaring the use of artificial intelligence: an ethical duty

The inability of AI to be an author does not preclude its use in the scientific writing process. Indeed, many AI-based tools can be very useful for rewriting texts, checking grammar, generating initial structures, or even suggesting titles. However, the use of such platforms must be approached with transparency.

The COPE guidelines recommend that any significant use of artificial intelligence tools be indicated in the article, either in the methods section, in a footnote, or in the acknowledgments, specifying which tool was used, for what purpose, and at what stage of the work. This statement does not imply a transfer of authorship, but rather a practice of editorial honesty.

For example, if an author has used artificial intelligence to generate preliminary ideas for organizing content, this should be clearly indicated. Similarly, if artificial intelligence has been used to write a preliminary abstract, this should be noted accordingly. This measure aims to avoid ambiguity, protect the integrity of editorial processes, and strengthen trust in published academic texts.

Responsibility remains human

A fundamental aspect to consider in this debate is intellectual responsibility. When an author decides to use an artificial intelligence tool, they must remember that all content generated or edited with its assistance falls under their direct responsibility. This process involves meticulous data verification, confirmation of the originality of the works, and correction of any inaccuracies. Furthermore, ensuring the accuracy and legitimacy of the information disseminated is essential.

As with any other technical tool or non-authored contributor, artificial intelligence can provide support but cannot replace academic judgment or intellectual authorship. The human author must detect and correct errors generated by these platforms, which occasionally produce inaccurate or invented references, before submitting the article to a journal.

WAME emphasizes authors' duty to assume their responsibilities, even when the content has been generated by artificial intelligence. Using these tools without critical supervision is equivalent to publishing a work without proper review, which is contrary to good scientific practice.

What if the journal does not allow the use of artificial intelligence?

Currently, many scientific journals are implementing their own policies regarding the use of artificial intelligence. Some platforms do not allow any content generated by artificial intelligence, while others accept it under certain conditions, provided that it is explicitly stated.

Therefore, before submitting an article for consideration in a journal, it is imperative to scrutinize the specific editorial policies. It should be noted that failure to comply with these guidelines may result in the manuscript's rejection or, if considered academic misconduct, lead to more serious problems.

As with other aspects of scientific publishing, such as conflicts of interest or originality of content, it is crucial to implement strategies that ensure editorial honesty.

How should artificial intelligence be cited in a scientific article?

Another common question among authors is whether artificial intelligence should be cited. Here there is broad consensus: AI should be mentioned as a tool, not credited as an author, much as statistical software is acknowledged.

It is advisable to include a brief note specifying which tool was used, its version, and the purpose it served. For example: “ChatGPT (version X) was used as a support tool for the initial structuring of the draft.” This type of mention provides clarity and avoids misunderstandings.

Final reflection

The use of artificial intelligence in academic writing is not, in itself, a misguided practice. On the contrary, when used appropriately, it can become a valuable tool for improving the efficiency, clarity, and quality of scientific manuscripts. Nevertheless, the author's responsibility for verifying content and guaranteeing academic integrity remains unwavering.

The scientific community must be based on transparency, ethics, and rigor. Technological tools are undergoing rapid change, but the principles remain unchanged. The adoption of clear practices, the declaration of the use of artificial intelligence, and the assumption of full responsibility for content remain the basis of honest and reliable scientific publishing.

References

1 COPE Council. COPE position statement: Authorship and AI tools. https://doi.org/10.24318/cCVRZBms

2 Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF Jr, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Colombia Médica. 2023;54(3):e1015868. http://doi.org/10.25100/cm.v54i3.5868

3 Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge. JAMA. 2023;329(8):637–639. doi:10.1001/jama.2023.1344
