
AI Tools in Academic Writing: Guidelines for Ethical Use

This article outlines guidelines for the responsible use of AI tools in academic writing and research, distinguishing between assistive and generative AI tools. It emphasizes the importance of human oversight, disclosure of AI-generated content, and the need for rigorous verification of AI outputs to maintain academic integrity.
Main Points

1. Clear distinction between assistive and generative AI tools
2. Emphasis on the importance of human oversight and accountability
3. Comprehensive guidelines for ethical AI usage in academic submissions

Unique Insights

1. The article highlights the potential biases in AI-generated content and the need for careful review.
2. It discusses the evolving landscape of AI tools and their implications for academic integrity.

Practical Applications

• The article provides actionable guidelines for authors on how to ethically integrate AI tools into their writing and research processes.

Key Topics

1. Distinction between assistive and generative AI tools
2. Disclosure requirements for AI-generated content
3. Ethical considerations in using AI in academic writing

Key Insights

1. Guidelines for ethical AI use in academic submissions
2. Focus on maintaining academic integrity while using AI tools
3. Recommendations for verifying AI-generated content

Learning Outcomes

1. Understand the distinction between assistive and generative AI tools.
2. Learn the ethical implications and responsibilities when using AI in academic writing.
3. Gain insights into best practices for disclosing AI-generated content.

Introduction to AI Use in Writing and Research

The American Institute of Mathematical Sciences (AIMS) acknowledges the growing role of Artificial Intelligence (AI) in academic writing and research. AI tools, such as ChatGPT, offer transformative potential by assisting with idea generation, overcoming writer's block, and streamlining editing processes. However, AIMS emphasizes the importance of understanding the limitations of these technologies and using them responsibly to uphold academic and scientific integrity. Human oversight and accountability remain crucial to ensure the accuracy and reliability of published content. These guidelines support authors in effectively integrating AI tools into their work while maintaining the highest standards of scholarly publishing.

Assistive AI Tools vs. Generative AI Tools

It's essential to differentiate between assistive and generative AI tools. Assistive AI tools, such as Grammarly, Curie, and LanguageTool, enhance content authored by the user by providing suggestions, corrections, and improvements. These tools refine and improve clarity in independently created content. Generative AI tools, like ChatGPT and DALL-E, produce original content, including text, images, and translations. Content primarily created by generative AI is considered 'AI-generated,' even with subsequent human modifications. Understanding this distinction is vital for proper disclosure and responsible use.

Disclosure Requirements for AI Use

While the use of assistive AI tools does not require disclosure, AIMS mandates that all content, including AI-assisted content, undergo rigorous human review to ensure quality and authenticity. Authors must disclose any AI-generated content in their submissions, including text, images, or translations. This disclosure allows the editorial team to make informed publishing decisions. A 'Use of AI tools declaration' section is provided for this purpose, requiring authors to specify the tool used and its application in the work.

Guidelines for Using Generative AI Tools

When using generative AI tools, authors must adhere to specific guidelines to maintain academic integrity:

1. Disclose the use of AI tools in the submission.
2. Carefully verify the accuracy, validity, and appropriateness of AI-generated content, as Large Language Models (LLMs) can produce incorrect or misleading information.
3. Meticulously check sources and citations, ensuring proper referencing.
4. Appropriately cite AI-generated content following established referencing conventions.
5. Avoid plagiarism and copyright infringement by confirming that the submission contains no plagiarized material.
6. Be aware of potential biases in AI-generated text and ensure inclusivity and impartiality.
7. Acknowledge the limitations of LLMs, including potential inaccuracies and knowledge gaps.
8. Remember that AI tools cannot be recognized as co-authors; the author remains responsible for the work.
9. Stay updated on the latest developments and ethical challenges related to AI-generated content.

Prohibited Uses of Generative AI

Certain uses of generative AI are strictly prohibited. Authors must not use generative AI to create or modify core research data artificially. Sharing sensitive personal or proprietary information on AI platforms like ChatGPT is also prohibited, as it may expose confidential data or intellectual property. Editors and reviewers must maintain the confidentiality of the peer review process and must not share information about submitted manuscripts or peer review reports in generative AI tools. Reviewers are also prohibited from using AI tools to generate review reports.

COPE Guidelines and AI Tool Usage

AIMS follows the Committee on Publication Ethics (COPE) guidelines regarding the use of AI tools. COPE emphasizes that AI tools cannot be listed as authors of a paper, as they cannot take responsibility for the submitted work, assert conflicts of interest, or manage copyright and license agreements. Authors must disclose the use of any generative AI tools in the writing of a manuscript, the production of images or graphical elements, or the collection and analysis of data. Authors are fully responsible for the content of their manuscript, including any portion produced by an AI tool, and are liable for any breach of publication ethics.

Ensuring Ethical and Responsible AI Use

The guidelines provided by AIMS aim to ensure the responsible and ethical use of AI tools in writing and research, preserving the integrity and quality of academic and scientific publications. By adhering to these guidelines, authors can leverage the benefits of AI while maintaining the highest standards of scholarly work. Disclosure, verification, and awareness of limitations are key to responsible AI integration.

Further Resources on AI in Scholarly Work

For further information on AI in scholarly work, authors can refer to resources such as the World Association of Medical Editors (WAME) recommendations on chatbots and scholarly manuscripts, the Committee on Publication Ethics (COPE)'s position statement on Authorship and AI tools, and the STM white paper on generative AI in scholarly communication. These resources provide additional insights and guidance on navigating the evolving landscape of AI in academic publishing.

Original link: https://www.aimsciences.org/index/GuidelinesforAI
