Queen’s promotes the responsible use of Generative AI across our community and has defined five guiding principles to help students, staff, and faculty make informed, effective decisions about integrating it into their work.
Each principle below includes quick self-assessment questions to help users judge whether a use is prohibited, permitted, encouraged, or required. These checks help confirm that the benefits outweigh the risks and that use aligns with responsible practice. People remain accountable for all AI-supported activity, and AI should enhance human work even when it automates routine tasks.
In some cases, expert AI or algorithmic impact assessments may be needed to confirm that a use is safe and appropriate.
Generative AI tools may change how people work across the university, but they do not alter the value or impact of that work. When used thoughtfully, they can support tasks and help shift effort toward higher-priority activities. Community members remain responsible for ensuring that AI supports human judgment and capability rather than replacing it.
Ask whether the use of AI in this context will:
- Increase productivity in a meaningful way.
- Create content or opportunities that strengthen human outcomes.
- Make or adjust decisions in a careful and accountable manner.
- Handle repetitive or tedious tasks so people can focus on other priorities.
The use of AI tools can introduce data security and privacy risks, and we must ensure that any benefits do not compromise the integrity or confidentiality of university information. Community members are responsible for understanding how their data is used, stored, retained, and disclosed, and for ensuring that appropriate safeguards are in place throughout the life of the system.
Ask whether the use of AI involves:
- Processing information classified as confidential or internal under the Queen’s Data Classification Standard.
- Clear terms that specify who owns the data and information submitted.
- Clear terms that specify who owns any outputs created.
- Clear explanations of how the data, its processing, and the supporting systems will be protected.
AI-generated output can contain bias, errors, or false information. Content must be reviewed and checked against reliable sources. Community members are responsible for the quality, accuracy, and appropriateness of any outputs or decisions that result from these systems and for ensuring they do not reflect gender, cultural, or other biases. Anyone using an AI tool is accountable for the outputs they choose to use.
Ask whether the output will be reviewed for:
- Accuracy and completeness to avoid misleading or false information.
- Relevance and appropriateness for the intended use.
- Freedom from gender, cultural, systemic, and other forms of bias.
Equally important is the quality of the input used to generate AI content. Community members are responsible for ensuring that the data they provide is accurate, relevant, appropriate, and free from discriminatory assumptions.
Ask whether the input will be reviewed to ensure:
- No misleading or false information is included.
- It is relevant and appropriate for the task.
- It is free from gender, cultural, and other discriminatory bias.
People will respond differently to the use of AI tools, and they should be informed when these tools are involved. Transparency supports trust and helps ensure that concerns are acknowledged. Community members are responsible for communicating openly about how they use AI to create content or make decisions that may affect others.
Ask whether:
- There is a requirement to cite or disclose the use of AI for this purpose.
- The outcomes or decisions produced have impacts on community members.
- Those affected have been informed that AI was used.
- A process exists for community members to provide feedback on outcomes or decisions made with AI.
Accountability for the use of AI tools rests with community members. Responsible practice supports research impact, academic integrity, and effective operations across the university. Ongoing education and training help ensure that people make informed choices and use these systems appropriately. Community members are responsible for ensuring that their use of AI is ethical, necessary, and aligned with university values.
Ask whether:
- The use of AI for this purpose violates any university policy.
- It has been authorized or approved.
- Clear guidelines or instructions have been provided.
- It supports or is integrated into an existing operational, research, administrative, or academic process.
- Training is required for you or others before using AI for this purpose.
Additional guidance will be developed by offering suggested responses to each self-assessment question, as illustrated below:
| SELF-ASSESSMENT QUESTION | GUIDING ANSWER |
| --- | --- |
| Guiding Principle: Safeguarding Data | |
| Does the use of AI involve processing information classified as confidential or internal under the Queen’s Data Classification Standard? | If the answer is “Yes, confidential and/or internal information is processed”, the generative artificial intelligence use case is prohibited unless explicitly authorized by an appropriate university authority, based on the results of a thorough algorithmic impact assessment (AIA)*. If the answer is “Unsure”, community members are required to review and ensure compliance with relevant data classification, handling, and sharing guidelines, or to request support. For more information on how to classify this data, please review the Queen’s Data Classification Standard. |

*Note that it may be permitted to use AI tools that process confidential information, so long as the appropriate assessments and safeguards are observed.