We hope this message finds you well as we prepare for the upcoming semester.
We're writing today to discuss an increasingly relevant topic in our digital age: the use of generative Large Language Models (LLMs), such as OpenAI's GPT-4, in the academic setting. These models, powered by cutting-edge artificial intelligence, have become prevalent in research, teaching, and even student study tools due to their ability to generate comprehensive and contextually appropriate responses.
While LLMs can certainly enhance our pedagogical and research methodologies, they also raise several critical considerations around academic integrity, authorship, and the protection of sensitive information. We'd like to provide some recommendations for their responsible use:
Inclusion in Your Course Syllabus: To foster transparency and manage expectations, we urge you to include a section in your syllabus outlining your stance on AI usage in your course. The undergraduate and graduate honor councils have requested that you do so, and we echo that request. This section can cover whether students are permitted to use such tools for assignments and the extent of their allowed use. Please also note the honor council guidance that any use of an LLM (referred to broadly in the undergraduate honor council update as AI) must be cited.
Protection of Sensitive Information: Another significant concern is the potential risk of exposing sensitive or protected information through these LLMs. We cannot emphasize enough the importance of not using LLMs when dealing with sensitive data, including student records, FERPA- and HIPAA-protected information, personally identifiable information, or research data covered by non-disclosure agreements. As a general rule, if information is password protected, it should not be shared with these tools. These AI models cannot be relied upon to understand or respect data privacy standards, so we must take this precaution ourselves.
Impact on Learning and Assessment: Finally, we should remember that the goal of education is to facilitate student understanding and growth. Overreliance on LLMs could have the unintended consequence of diminishing the critical thinking skills we aim to foster. On the other hand, students will graduate into jobs and a world shaped by these tools, which also present real opportunities. It is important to articulate to students the role these tools should play in their learning journey.
As always, we remain committed to fostering a dynamic, innovative, and ethical academic environment. We encourage your thoughts and feedback on this important topic. Let's work together to make the most of the technology at our disposal while safeguarding our academic integrity and the privacy of our community.
Amy Dittmar, Howard R. Hughes Provost and Executive Vice President for Academic Affairs
Paul Padley, Vice President for IT and Chief Information Officer
Note: The first draft of this document was generated by ChatGPT (GPT-4); the final version differs significantly from that first draft, but as we noted above, we must cite this use.