Generative AI in Research
The university encourages the use of Generative AI (GenAI) in research in ways that promote trust in science, guided by the foundational ethical principles of reproducibility and transparency. These guidelines are intended to provide best practices for using GenAI in a rapidly evolving landscape and cover the full lifecycle of research, including applying for funding, conducting research, and disseminating and sharing research results.
All uses of GenAI must adhere to the standards of the Professional Conduct Policy (UHAP 7.01.04) and the Code of Academic Integrity.
Requirements:
- Researchers are ultimately responsible for their research, including content developed by GenAI.
- Follow the Information Resource Classification Standards published by the Information Security Office when deciding which data types may be shared with GenAI tools.
- Follow the requirements of relevant funders:
- The National Science Foundation (NSF) has notified the research community that reviewers are prohibited from uploading proposal content or records into non-approved GenAI tools. Additionally, under the December 8, 2025 Proposal and Award Policies and Procedures Guide (PAPPG), GenAI use should be explained in the project description; otherwise the project may be flagged for research misconduct.
- The National Institutes of Health (NIH) published a notice prohibiting the use of GenAI in the peer review process. NIH further explains in its Open Mike blog that using GenAI to write a proposal may raise several misconduct-related concerns, and that writers who do so proceed at their own risk.
Citing and Disclosing GenAI use
If you rely on GenAI output, you should cite it. The Committee on Publication Ethics' Authorship and AI tools webpage provides guidance on citing GenAI use, and citation style recommendations are available from the American Psychological Association (APA) and the Chicago Manual of Style.
Disclosing AI use has become near-consensus practice in the scholarly community, although its format and substance vary. The scheme suggested by Resnik and Hosseini can be used as a reference:
- Disclosure is mandatory when, for example, using AI …
- Disclosure is optional when, for example, using AI …
- Disclosure is unnecessary when, for example, using AI …
Resnik, D. B., & Hosseini, M. (2026). Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary? Accountability in Research, 33(2). https://doi.org/10.1080/08989621.2025.2481949
Using GenAI for tasks such as assisting non-native speakers with translation, transcribing spoken language to written text, or formatting documents would not be considered substantive use, and disclosure in such cases would be optional.
Conclusion
GenAI tools are evolving quickly. Users should stay up to date on good practice through reputable sources, such as the Responsible AI office.