The short answer: we have not adopted a university-wide policy.
The long answer: rather, the university defers to individual colleges, disciplines, and departments to set their own policies, so that the ever-evolving scale and variety of tools can be matched to the value-driven, ethical learning outcomes created by our faculty.
What this means is that students need transparency. Please communicate your own course expectations, beliefs, and even values regarding generative AI. What usage do you encourage? What usage is prohibited? What are the consequences? And why?
Our Academic Integrity Policy remains the standard bearer: a student's submitted work is presumed to be their own. Using LLMs or any other AI tool without attribution or citation is prohibited.
While we do not provide specific syllabus policy language, the following questions may help you develop your own. Dr. Emily Bender shared them in her opening remarks at a Fall 2024 Congressional roundtable, “AI in the Workplace: New Crisis or Longstanding Challenge?”
In short, how does using AI tools empower students' humanity? How does using AI reject or limit that humanity? Most importantly, we urge faculty to begin from a place of trust; doing so will only advance campus dialogue and enrich our learning communities.
Should faculty want guidance on a specific AI tool or assessment, please reach out to our CTLE Director, Persis Driver.