
Generative AI in Research

The university encourages the use of Generative AI (GenAI) in research in ways that promote trust in science, grounded in the foundational ethical principles of reproducibility and transparency. These guidelines provide best practices for GenAI use in a rapidly evolving landscape and cover the research lifecycle, including applying for funding, conducting research, and disseminating and sharing research results.

All uses of GenAI must adhere to the standards of the Professional Conduct Policy (UHAP 7.01.04) and the Code of Academic Integrity.

Requirements:

  1. Researchers are ultimately responsible for their research, including content developed by GenAI.
  2. Follow the Information Resource Classification Standards published by the Information Security Office when deciding which data types may be shared with GenAI tools.
  3. Follow the requirements of relevant funders:
    1. The National Science Foundation (NSF) has notified the research community that reviewers are prohibited from uploading proposal content or records into non-approved GenAI tools. Additionally, under the December 8, 2025 Proposal and Award Policies and Procedures Guide (PAPPG), GenAI use should be explained in the project description; otherwise, the project may be flagged for research misconduct.
    2. The National Institutes of Health (NIH) published a notice prohibiting the use of GenAI in the peer review process. In its Open Mike blog, NIH further explains that using GenAI to write a proposal may raise misconduct concerns and that writers do so at their own risk.

Citing and Disclosing GenAI use

If you rely on GenAI output, you should cite it. The Committee on Publication Ethics' Authorship and AI tools webpage provides guidance on citing GenAI use, and both the American Psychological Association (APA) and the Chicago Manual of Style offer citation-style recommendations.

Disclosure of AI use has become a near-consensus in the scholarly community, though its format and substance vary. The scheme suggested by Resnik and Hosseini can serve as a reference:

Table 1. Disclosing AI use in research and writing.

Disclosure is mandatory when, for example, using AI

  • To formulate questions or hypotheses, design and conduct experiments.
  • To draft parts of the paper, summarize, paraphrase, significantly revise or synthesize textual content.
  • To translate parts or the whole paper.
  • To collect, analyze, interpret or visualize data (quantitative or qualitative).
  • To extract data for review of the literature (systematic or not) and identify knowledge gaps.
  • To generate synthetic data and images reported in the paper or used in research.

Disclosure is optional when, for example, using AI

  • To edit existing text for grammar, spelling or organization.
  • To find references or verify the relevance of human-found references.
  • To find and generate examples for existing content.
  • To brainstorm and offer suggestions for the organization of a paper or the title of a paper/section.
  • To validate and/or offer feedback on existing ideas, text and code.

Disclosure is unnecessary when, for example, using AI

  • To suggest words or phrases that enhance clarity/readability of an existing sentence.
  • In part of a larger operation where AI is not generating or synthesizing content or making research decisions; for example, when AI is integrated into other systems/machines.
  • As a digital assistant, for example, to help organize and maintain a project's digital assets and workflows.

Resnik, D. B., & Hosseini, M. (2026). Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary? Accountability in Research, 33(2). https://doi.org/10.1080/08989621.2025.2481949


Using GenAI for tasks such as assisting non-native speakers with translation, transcribing spoken language to written language, or formatting documents would not be considered substantive use, and disclosure would be optional.

Conclusion

GenAI tools are evolving quickly. Users should stay up to date on good practice through reputable sources, such as the Responsible AI office.