Use of AI

Policy on the Use of Generative AI Tools

The International and Comparative Law Review (ICLR) recognizes the growing use of artificial intelligence (AI) tools, including Large Language Models (LLMs) and other generative models, in academic research and writing. While such tools can enhance the efficiency and clarity of scholarly work, their use must adhere to strict ethical and transparency standards to uphold the integrity of academic publishing.

Recommendations for Authors

Authors submitting to ICLR must comply with the following guidelines when using AI tools:

  1. Ethical Use of AI:
    • AI tools may be used to assist with language editing, formatting, or generating diagrams and graphical abstracts.
    • AI must not be used to formulate original research conclusions, arguments, or substantive scholarly contributions.
  2. Transparency and Acknowledgment:
    • If AI tools are used in the preparation of the manuscript (e.g., structuring the text, generating figures, or summarizing content), their use must be explicitly acknowledged in the manuscript.
    • The acknowledgment should name the specific tool and state the purpose for which it was used (e.g., "This manuscript was prepared with the assistance of [AI Tool Name], used for language editing purposes.").
  3. Author Accountability:
    • Authors are fully responsible for the accuracy, originality, and integrity of all content generated or edited with the assistance of AI tools.
    • AI-generated content must be reviewed and verified to ensure it meets academic and ethical standards.

This policy aligns with the recommendations of Palacký University Olomouc on the ethical use of generative AI tools. For further guidance, authors are encouraged to review the full recommendations at: https://www.upol.cz/nc/en/news/news/clanek/ups-recommendations-for-the-use-of-generative-ai-models/