Policy on Artificial Intelligence (AI)
The Transportation Research Record policy on AI is adapted from the Sage Publications Artificial Intelligence Policy.
Authors may use AI tools in the preparation of their manuscripts, provided this is done responsibly and in accordance with ethical research standards. Use of AI must be disclosed during the submission process and, where appropriate, within the manuscript itself. Disclosure requirements are outlined below.
Assistive AI (no disclosure required)
These tools identify issues related to spelling, grammar, punctuation, and basic sentence structure. Authors may use assistive AI tools (e.g., Grammarly, Microsoft Editor) solely to improve their manuscript in this limited capacity without disclosure.
Use of tools such as ChatGPT to improve clarity or rewrite text should be treated as generative AI and must be disclosed, even if the intent is editing, as these tools produce new phrasing rather than simply correcting errors. Some tools (e.g., Grammarly) include both assistive and generative features. Authors should disclose use as appropriate to the function applied. If there is any uncertainty, authors are encouraged to include a disclosure.
Generative AI (disclosure required)
Generative AI tools produce or substantially modify content, including text, references, images, code, or other outputs that may influence the research methodology, analysis, results, or conclusions. Examples include tools such as ChatGPT, Claude, Google Gemini, Microsoft Copilot, GitHub Copilot, and image-generation tools such as DALL·E or Midjourney. Authors must disclose any such use at submission and describe which tool was used and for what purpose. Disclosure should also be included in the manuscript (e.g., in the methods or acknowledgements), where appropriate.
Authors should cite original sources rather than AI tools as primary references. AI tools may be cited when they were used to create outputs, such as images or visualizations, that are presented as contributions in their own right.
As generative AI tools may produce inaccurate or fabricated content (e.g., incorrect facts or non-existent references), authors are responsible for verifying the accuracy of all outputs and checking original sources.
Examples of generative AI use that require disclosure include:
- Assistance with literature review or compilation of relevant sources
- Translation of materials as part of the research process
- Use of AI-generated code or software (e.g., GitHub Copilot) to support research activities
- Assistance with data visualization
- Generation of illustrations or infographics (e.g., DALL·E, Midjourney)
- Code that has been enhanced or checked using AI tools
- Assistance with compiling or formatting references
- Advanced language editing or rewriting of text (e.g., using tools such as ChatGPT, Claude, or Microsoft Copilot to substantially revise, rephrase, or restructure sections of a manuscript)
Prohibited use of AI
AI tools must not be used in ways that compromise the integrity, validity, or originality of the research or review process. Inappropriate or undisclosed use of AI may result in rejection of the submission or other editorial action.
AI must not be used to generate, alter, or present content in a misleading or unethical manner, or to replace essential elements of the research process that require human judgement and accountability.
Examples of prohibited use include:
- Generation of false, misleading, or inaccurate content, data, or results
- Fabrication of references or citation of non-existent sources
- Use of AI to generate data, analyses, or findings without appropriate methodological transparency
- Conducting interviews or generating participant data using AI in place of human subjects
- Use of AI to analyze or interpret research data without clear methodological description, appropriate human oversight, and verification of outputs. This applies to all forms of analysis, qualitative and quantitative alike.
- Plagiarism or failure to appropriately attribute sources
- Presentation of AI-generated images or outputs as original research data or findings
- Use of AI tools to write peer review reports or editorial decisions, or otherwise breach confidentiality in the review process
- Listing AI tools such as ChatGPT as an author on a submission
Undisclosed use of generative AI will be treated as a breach of this policy. All authors and volunteers are expected to adhere to these standards, and concerns about potential misuse should be raised with the journal.
Reviewers are not permitted to use ChatGPT or other generative AI tools to assist in the review of manuscripts or to upload manuscript content or proprietary information belonging to authors into such tools. Assistive AI may be used in a limited capacity to improve spelling, grammar, or basic clarity of the reviewer’s own writing, provided this use is consistent with the Assistive AI guidance above. If reviewers are found to have used generative AI tools in the review process, the review may be discarded, and they may not be invited to review manuscripts in the future.