Generative AI Policy

Pedagogical Perspective (PedPer)

ISSN: 2822-4841  |  DOI Prefix: 10.29329

Quick Summary

Pedagogical Perspective (PedPer) (eISSN: 2822-4841) acknowledges that generative AI tools (e.g., large language models, AI-assisted writing/translation tools, and AI image tools) may be used in scholarly workflows. This policy sets requirements for authors, reviewers, editors, and editorial staff to ensure transparency, confidentiality, and research integrity, in alignment with COPE’s position statement on AI in authorship.

1) Core Principles

  • Human accountability: Authors, reviewers, and editors remain fully responsible for the accuracy, originality, ethical compliance, and integrity of all content and decisions.
  • Transparency: All permitted AI use must be disclosed as described below.
  • Confidentiality & data protection: Manuscripts and peer-review materials must not be shared with third-party AI systems in ways that breach confidentiality, copyright, or data protection.

2) Policy for Authors

Permitted uses

Authors may use generative AI tools for:

  • Language polishing (grammar, clarity, style);
  • Translation support;
  • Non-substantive formatting assistance;
  • Coding/data-processing assistance (provided outputs are verified and documented);
  • Literature search support (provided all references are independently verified by the authors).

In all cases, authors must review all outputs, verify their accuracy, and disclose the use.

Prohibited / unacceptable uses

Generative AI tools must not be used to:

  • Fabricate or falsify data, results, participant information, or observations;
  • Generate fake or inaccurate references, citations, quotations, or DOIs;
  • Produce unattributed paraphrases of copyrighted text (plagiarism);
  • Create or modify images/figures in a misleading way or alter the interpretation of results;
  • Generate substantial scientific content (e.g., research questions, theoretical arguments, claims, interpretations, conclusions) without appropriate human authorship and without disclosure;
  • Produce peer review responses, revision letters, or editorial correspondence on behalf of the authors without disclosure.

Authorship

AI tools cannot be listed as authors. Only humans can meet authorship criteria and take responsibility for the work. This is consistent with COPE’s position statement and PedPer’s Authorship & Contributorship policy.

Mandatory AI Use Disclosure

At submission, authors must include an AI Use Disclosure in the Declarations section of the manuscript and in the Title Page Form, stating:

  • The tool(s) used (name and version, if known);
  • The purpose (e.g., language editing, translation, code assistance);
  • Which sections or tasks were supported;
  • Confirmation that authors reviewed and verified all outputs.

If no generative AI tools were used, authors should include a statement to that effect (e.g., “No generative AI tools were used in the preparation of this manuscript.”).

Suggested short disclosure:

“Generative AI tools were used for language editing/translation assistance. The authors reviewed and take full responsibility for the final content.”

Suggested detailed disclosure:

“We used [Tool name, version] to support [purpose]. AI assistance was limited to [sections/tasks]. All outputs were verified by the authors, who take full responsibility for the integrity and accuracy of the manuscript.”

Privacy and sensitive information

Authors must not upload confidential, proprietary, or personal/sensitive data (including identifiable participant information) to AI tools unless they have a lawful basis and the tool’s terms allow such use. Any AI use involving sensitive data must be clearly described and justified. See also the Privacy Statement Policy.

3) Policy for Reviewers

No AI processing of manuscripts

Reviewers must not upload, paste, or process any part of the submitted manuscript, supplementary files, or peer-review correspondence in any generative AI tool or third-party system. This protects confidentiality, copyright, and data security.

Reviewer accountability

Peer-review reports must reflect the reviewer’s own expert judgement. AI-generated peer review reports are not acceptable. For full details, see the Peer Review Policy.

4) Policy for Editors and Editorial Office

Confidentiality and decision integrity

Editors and staff must not share manuscripts, reviewer reports, or editorial correspondence containing manuscript content with generative AI tools in ways that breach confidentiality, copyright, or data protection.

Limited permitted uses

Editors/staff may use AI tools for:

  • Language polishing of general editorial communications;
  • Workflow support (planning, checklists);
  • Metadata consistency checks.

Manuscript content must not be shared with AI tools, and editorial decisions must remain human-led and documented.

5) Figures, Images, and Multimedia

  • AI-generated or AI-modified images are permitted only if clearly disclosed and not misleading.
  • Any AI-assisted image creation/modification must be described in the figure legend and/or methods section (tool, purpose, and what was changed).
  • Image manipulation that changes the interpretation of results is prohibited.

6) Generative AI as a Research Subject

PedPer welcomes research about generative AI in education (e.g., studies examining the use, effectiveness, or ethical implications of AI tools in teaching and learning). Such studies are evaluated on their scholarly merit like any other submission and are not affected by this policy’s restrictions on AI use in manuscript preparation.

Authors of such studies should clearly distinguish between (a) the AI tools they investigated as part of the research and (b) any AI tools they used in preparing the manuscript itself. Both should be disclosed, but in separate statements.

7) AI Detection and Screening

PedPer may use AI detection tools and similarity screening software (e.g., iThenticate) as part of its editorial assessment process. However, AI detection tools have known limitations (including false positives and false negatives). Therefore:

  • AI detection scores are used as a diagnostic indicator, not as the sole basis for editorial decisions;
  • Authors will be given the opportunity to respond to any AI detection concerns before a decision is reached;
  • The editorial team evaluates all evidence in context, consistent with the principles of fairness and due process.

8) Compliance and Consequences

PedPer may apply similarity checks, AI detection screening, and integrity assessment to any submission. Undisclosed or inappropriate AI use may result in editorial action.

PedPer distinguishes between inadvertent non-disclosure (honest error) and deliberate concealment. The editorial response is proportionate to the nature and severity of the violation.

9) Policy Evolution

Generative AI technologies and their role in scholarly publishing are evolving rapidly. PedPer will review and update this policy periodically to reflect developments in technology, international best practices, and guidance from organisations such as COPE. Authors are encouraged to check this page for the most current version before submission.

Last updated: 2025

Related Policies

  • Authorship & Contributorship Policy
  • Peer Review Policy
  • Privacy Statement Policy

Contact

For questions about generative AI use: info@pedagogicalperspective.com