In this mini-lesson, learn how to locate and interpret institutional policies on AI use, reflect on when you can and can’t use AI, and be transparent about how you use AI in your work.
Whether and how you can ethically use GenAI will vary depending on the context. For staff, W&M is formulating institutional policies and, in the interim, has some general guidelines (see below). For faculty and researchers, journals and funders increasingly maintain policy pages outlining how they permit or forbid AI use in publication, reviewing, and grant writing. For students, each class syllabus should contain an AI use policy statement (and if it doesn’t, ask your professor).
And of course, these are simply baseline policy guidelines. Given the ethical considerations we’ve seen in previous lessons, you can also reflect on your own personal values, beliefs, and commitments to guide your use. For example, you may want to consider the following:
- Who or what might be harmed if I use GenAI for this purpose?
- Is GenAI necessary or appropriate for this particular use?
- How much thinking am I ceding to the AI tool, and how much human input is there?
- How is using GenAI helping or harming my skills?
And while there are no foolproof AI detectors, W&M’s Honor Code and Code of Ethics apply to all we do.
Read
Discuss
What are some policy guidelines that apply to your ethical use of AI as seen in the GenAI Guidelines for Use for W&M? What are some personal values and beliefs you apply to your own AI use?
29 replies on “thing 8: Your ethical AI use”
For my daily work, I use University, state, and federal policy as guidelines. If I were to use AI tools to facilitate my work, I would continue to be compliant and non-discriminatory, and to vet the information generated. Personally, I would ensure I am not being discriminatory or disclosing private information.
It looks like the rules are the same as for using any other source: whatever you used should be cited, otherwise it is plagiarism. I can live with that.
As always, cite what you did not come up with, and don’t pass off someone else’s work as yours. The guidelines make sense, such as maintaining compliance and citations. I am always worried about sharing information with AI, and I will apply this in my personal use as well.
I will definitely post the website showing how students should acknowledge the use of AI. I’m still thinking about which assignments (or parts of assignments) it’s OK to use AI on and which it’s not.
The policy guideline that most applies to my position is to continuously assess GenAI tools for bias, accuracy, and impact as well as checking for accuracy and security prior to implementing. I work with a lot of different technologies and these technologies may be updated regularly. Some libraries or functions that were recommended in the past may now be deprecated due to a discovered exploit or security issue. Generative AI that was trained on the previous information may just tell me to use a twenty year old deprecated library that has many security issues. I feel it is my job to let it know what current libraries I want it to use in my prompt so it doesn’t just grab things from anywhere.
Echoing other comments about using private, sensitive, and confidential information: safeguarding confidential information requires constant vigilance, and I wouldn’t want to use AI to help with my job at the expense of student safety and security.
I appreciate the information around citing AI usage, and the information around our policies. As more people use AI for various tasks, will citing it remain standard practice? Safety first, as a general rule, regarding student and applicant privacy/data.
I don’t use GenAI to do my job, and I don’t want my students to become overly dependent on it. My preference is that they don’t use it at all, and I state that in the syllabus. It is, of course, very difficult to determine whether student work was assisted by AI. I like the idea of a declaration like the examples in the third source. I can imagine some assignments where some AI use would be permissible if declared.
I’ve mostly used Grammarly, which now uses AI (before it didn’t). I’ve never really used it beyond correcting grammar and spelling. I hadn’t realized that you could now do prompts in it, but the Acknowledging AI tools and technologies article led me down that rabbit hole. I haven’t had any real need or use for it, but I might try it out on an email or something light.
Will these acknowledgements be limited to prompts, or would I have to write a statement saying that I used Grammarly to check for grammar and spelling, which we never really had to do before?
I like the declarations, especially for students, because it shows how AI can be used as a tool to augment your work vs. doing the work for you, as well as what is permissible. It also shows the thought process of the writer better, which is intriguing but also a little disconcerting. You’re now lifting the veil a little bit more into how a person thinks or approaches a question/problem.
I think the guidelines are consistent with other policies for use and citation of resources. The biggest guidelines I refer to for my work are maintaining security and transparency. As GenAI usage continues to become more prevalent in our work, I think the easiest thing we can do is have open dialogue about our usage. Especially with all the nuances and ethical concerns, prioritizing transparency is key. And since we are in higher education, modeling this transparency and other safe practices for students and the public is a high priority.
The guidelines for using AI seem to follow other best practices for online tools, so that is easy to remember. I had not thought about how AI output might be declared or cited – I guess my old manual is probably rather out of date at this point! I think making it an accepted practice that AI usage be declared, and noting how it was used, would go a long way to building trust around AI. I hope this becomes a common practice.
I am careful not to include sensitive information in a prompt. I maintain a human in the loop to ensure that output is accurate. For me personally, if I am writing a thank you note or card of sympathy, I tend to use my own words rather than relying on AI because I prefer my own heartfelt sentiment in those instances.
The most important limitation for use of AI in my role would be using Personal Identifiable Information (PII) in queries. Using PII or other sensitive information in AI queries would go against our policies. Other personal values that I would apply to using AI generated information would be to think twice about how it benefits my learning. Since much of the generated information would likely need validation, for my own learning style, I’d learn best by researching from start to finish on my own. A better use of AI for the research I am considering would be to query what questions to ask (in order to focus research).
The guidelines were all pretty basic for anyone that has had to do any sort of research paper before. Very similar to journal articles and such. I prefer to use it to quickly write a draft email when I have writer’s block, or it is a day when I have to go to the office and end up getting pulled in all directions and lose time to reply to people (yay never ending emails).
I think that AI can be very helpful for someone in a role like mine, especially those who struggle to be concise or to make their information flow well in emails. I never put the email into AI; I always tell it what I want to say and it just zhuzhes it up a bit.
I have seen prohibitions on the use of AI for some journal reviews and manuscripts, which suppressed my interest in learning more. I have also gotten some AI-generated responses to scientific questions that were blatantly misleading to me, but may seem plausible to students. The level of documentation needed for the queries is rather cumbersome for my work or the work of my students. Finding the right balance for use of AI in my courses will be tough. My inclination is to forbid AI, but then I obligate myself to consider whether it has been used, lest students get rewarded for taking AI shortcuts. I have already had a disappointing number of Honor Code referrals in my career.
I mostly use GenAI to revise/polish up emails and create blurbs for our newsletter. Whenever I’m generating either, I am taking into account the university’s policies. I make sure that I’m adhering to confidentiality policies and that the responses I use are not discriminatory. These also align with my own personal values: respect individuals equally and honor their privacy.
It is good to have clear policies around AI for everyone at W&M. As we expand our use, we need to be careful not to use any personal student information in prompts. Even though there are aspects of AI that make me feel uncomfortable, it is important that we all model how to use AI responsibly for students and cite/declare along the way.
The declarations page was especially helpful. I try to track where I’ve used AI as I work, and I’ll color AI text red in my document so it doesn’t get accidentally “absorbed”. Writing is really about how we wrestle with the ideas and think through how to communicate. AI can really hijack that process and allows us to skip over the wrestling part. I like the idea of documenting the prompts used, because then you can see some of the thought process.
I don’t use AI in my daily work- not even with writing emails. However, I am trying to think through ways to incorporate AI in the classroom, at least from the standpoint of how to critically engage with it. I think each department could benefit from having a written policy.
Security, transparency, and compliance all apply to my ethical use of GenAI. After logging into CoPilot, I found it interesting that Microsoft has different privacy rules for enterprise/education accounts than for personal ones. That certainly opens a door for exploitation of people’s privacy and data if they aren’t aware of the difference. That said, I am no more confident that an enterprise/education account would escape a data breach by nefarious actors than a personal one, and a breach concerns me more than any one company misusing my data. As for transparency, many of the journals I publish in are Elsevier journals, and I like the simplicity of their GenAI policy. Basically, “these technologies should only be used to improve readability and language of the work,” and everything else is not allowed except under very special circumstances or with explicit approval from the editor.
The most interesting part of this for me was the section on “Tools to Help Manage AI Documentation.” These practices provide a level of rigor for research that allows the GenAI tool output to be examined in much the same way I would go back and look at a journal article or an interview transcript.
I eagerly await more detailed guidelines for staff! I am most concerned about data privacy with the use of GenAI, and currently use only Microsoft AI tools explicitly approved by WM IT. I have attempted to use AI in emails and in Excel, but in both cases, doing so hasn’t yet been efficient. I think one day the tech will be worth the investment, but it’s not quite there yet for my work.
I generally don’t use AI much in my job. If I do use it, it’s to help write something that I’m having trouble getting started. I am glad to see that the university is taking steps to make sure students and faculty have guidelines to use when operating in this new reality.
The data security and Safe Use of Artificial Intelligence Across the Commonwealth policies come into play in my position. In personal use of AI, the Do No Harm and privacy concerns are in the forefront of my mind.
I appreciate having these guidelines as reference points. I particularly like the questions framed here, as they are oriented toward ethics rather than simple legal compliance. I worry about the effects of Generative AI on the environment and on my own thinking capabilities, and those are things that are not good fits for compliance frameworks. Though I think we could discuss voluntarily putting a throttle on acceptable use cases to avoid unnecessary energy usage, and setting some organizational guidelines. I know other environmental organizations in Virginia have done this (The Nature Conservancy, for one).
Relatedly, I recently had some work quoted by an AI news generator that did NOT follow these guidelines, which was very odd: https://www.ainvest.com/news/chinese-investors-shift-capital-indonesia-tariffs-china-goods-2508/
It did include a disclaimer in tiny print at the bottom that it was all AI generated and not human-reviewed, which at least gives me something to point to when explaining why their quote of me is not in my own voice.
I think the W&M policies are clear. It makes sense to follow security procedures and be transparent when using AI. It also makes sense to me to always have a human in the loop to evaluate accuracy and security. In my job, I mainly use AI to generate ideas for letters or presentations in my office. Nothing I do is published, though I was very interested in how citations should be made. A whole new world compared to when I was in College for sure!
So far I have been basically avoiding using AI, partly so I don’t accidentally stumble into some ethics sinkhole. Things are changing so quickly that it is almost impossible for any guidelines to keep up. Luckily, my research area does not work with any sensitive data. I am also involved with scientific publishing, and there are a lot of ethical concerns there.
I think the rules for AI are essentially the same as for other forms of content: if we use something, cite it! With AI, though, we also need to check that the sources actually exist, because there are many hallucinations. I would not use data or answers if the tool could not give me an accurate source. But after verifying the source, I’d feel comfortable using it, because I essentially did the necessary research to get the results!