thing 14: AIs and Brain Damage?

In this tutorial, we’ll examine a recent study on the effects that AI use has on human brains.

Watch/Read

“MIT just proved how ChatGPT impacts our brains” video (6:30)

A piece on Time.com summarizing the MIT research discussed in the video above, but with an additional focus on the effects of AI use on the developing brains of young people (6-minute read or listen to audio for 7:38)

Discussion

In the comments below, share 2-3 sentences on whether you think the potential damaging cognitive impact from AI use should be a factor in a W&M policy on AI for students.

29 replies on “thing 14: AIs and Brain Damage?”

While I think it is important to share these findings, I think it is too early in this type of research to initiate change in W&M policies. In addition to professors informing their students of the limitations of AI use in the courses they teach, perhaps a statement should be issued to students regarding the preliminary research findings, encouraging intentional use of GenAI. I also think it is important to address to what degree relying on AI affects a student’s view of their own abilities, and what emotional/psychological effects result from the lack of a student’s own input.

As research continues to be published, I would argue that it’s important to keep a watchful eye on it but to not let it overwhelmingly influence policy. I think it is incredibly important to acknowledge the limitations of the MIT study. While I understand the urgency in sharing the results and while they logically make sense, I disagree that we can take it all as irrefutable fact in its current state. It would be unwise to make dramatic changes to policy simply because one article said so. I do think that educators can use this information as a jumping-off point for discussions with students about the use of AI, but I think we should be cautious as information around AI changes seemingly every day.

I don’t think it should be a factor for W&M AI policy for students. This study is very early, not peer-reviewed, and could have been better designed to give a clearer picture. The Brain-to-LLM group did better in the 4th round; what if they had run it two more times? Would that have changed? Also, this study is very specific to essays. I don’t think essay writing with AI will ever be allowed, so I don’t think this will even be an issue for our students.

I think W&M professors should be aware of the potential damaging cognitive impact from AI as they navigate these uncharted waters of AI in academia. Faculty know their students and should observe how AI may influence student learning outcomes. More data is needed, I think, before we can address policy.

I think it is too early to use these results to impact W&M policies — peer review and replicability are important. My concern is that these results may have confirmation bias, namely that there is an expectation that relying on AI will reduce “how well” we think, and that is what we are seeing. I think that research that illuminates how to use the tools effectively will be more useful and impactful long-term.

So, I’m not sure if framing the problem as “potential damaging cognitive impact from AI use” is the right approach. AI is here, it’s here to stay, and it’s only going to get more efficient (quantum computing). In less than 18 years, we’re going to have students who are native to AI. I think the real question is “What skills do we want our students to come out with?” If it’s critical thinking, what assignments do we normally assign, and how does AI potentially impact them? Same for writing and researching. From there, we can then craft an AI policy for students.

The study results are very limited and not yet peer-reviewed. However, I do believe the urgency in learning more about AI’s impact on brain use and brain development is justifiable. AI is here to stay and is developing fast, becoming incorporated into more and more parts of our lives. It is imperative that we understand if, and what, we are giving up by accepting AI assistance. I do believe keeping an eye out for new research results, and ensuring that students and faculty are aware of them, is in W&M’s best interest. Eventually, policy might have to be adapted to address AI.

I agree with those above that more research is needed before W&M develops an overarching AI policy. I do think this study is useful to share with students and to check in with them: do they think using GenAI is impacting their critical thinking skills? Why or why not? Do they even care about such a thing?

Yes, I think the potential cognitive impact of AI use should be considered in a W&M policy for students. While AI can support learning, overreliance could discourage critical thinking and problem-solving. But it’s too early to make a concrete decision: we are only a year or two into this, and that simply is not long enough for scientists to conduct the research needed to figure out exactly how harmful it is.

I think one study by itself does not tell us a lot, but it does point to some cause for concern. As I tell my students: GenAI is a crutch, and like all crutches, you want to think through just how much you are relying on that crutch, and whether your ability to function without it is eroding.

I agree with others here. I don’t think we should change our policies as an institution at this time. Studies are still ongoing, and if we changed our AI policies because of this, we’d have to do the same for a lot of other things.

I don’t think W&M should base policy on this study yet since it’s still early and not peer-reviewed. That said, the possibility of AI affecting how students think and process information is worth keeping an eye on. Instead of rushing to strict rules, the school could focus on educating students about both the benefits and risks of using AI so they can make smarter choices.

Awareness is important, but I’m not sure how it should play into policy. This is an early study, and the group of participants wasn’t large, so I don’t feel it is conclusive. W&M students are very smart. I think that if professors have honest discussions with students about their concerns and ask them to limit AI use in assignments, that will be effective.

I think that more research and data need to be generated on this subject, including longer-term studies, before we make large changes to W&M policy. Obviously, copying and pasting directly from a chatbot should not be allowed, but writing essays is not always the best way to judge mastery in every subject.

In an ideal world, our smart students would be smart enough to realize the value of learning and not fall into the trap of using shortcuts that reduce their educational benefits. But in that ideal world we wouldn’t need an honor code either. So I do think it is the responsibility of the college to put forward policies that give faculty permission to organize the learning process in the optimal way. The situation is clearly fluid, and these policies will need constant updating, but I am afraid they are necessary to help our students avoid temptation. The challenge, of course, is to allow the positive uses of new technologies at the same time.

While it is something to monitor as the research develops, it should not impact W&M policy. As with a number of other comments, I think it is too early to get overly concerned about this study and the overall impact of using AI.

Yes, I think the potential impact of AI use should be considered in a W&M policy, especially since, while AI can be a powerful tool for efficiency and support, overreliance may hinder students’ development of independent skills, critical thinking, and confidence. A balanced policy could encourage critical engagement with AI while still allowing it as a helpful resource, since it’s obvious AI is here to stay.

Yes, I think this information should influence our policy on AI. If our goal as an educational institution is to educate and to teach students how to think, and think critically, we should highlight ways that might be diminished by using AI.

I think it’s important to acknowledge the MIT study, but it’s too early to base W&M policy on it. Results need to be peer-reviewed and replicated. For now, it seems better to use this research as a starting point for conversation.

Very interesting to watch the video on how AI use leads to a decrease in brain activity. The question is, does activity equal intelligence? Does less activity just mean you came to a similar conclusion faster? Also, what is the control? I’d be interested in a control where participants could use just a search engine, or something like CliffsNotes, to help with these tasks, and then compare the results. This also doesn’t consider the benefits of learning through AI; it can be a tool that helps you understand complex subjects in a more digestible way. Without further research and studies, it seems obvious that this should not dictate policy.

I think that there is a distinction between using these tools for brain-growth activities, creative writing, and other tasks that require you to involve yourself in the material, versus basic short responses such as discussion boards, where most people weren’t actively participating in the first place. I do believe that with repeated stunting, particularly in youth, there could be significant atrophy of these critical skills later in life, and that in many places society will be playing catch-up to perform these tasks without assistance.

I think this type of information is very preliminary. We need to be clear about our policy on AI for W&M students and communicate that these are some of the potential effects, which are currently not well understood.

I do think the impact of AI on cognition (in this case, specific to writing) should eventually inform W&M’s AI use policy. But I agree with the other comments that these findings come a little too early to incorporate into a policy, especially as they are limited to essay writing.

I am not impressed or convinced. Basically, what they found is that if you cheat and let someone else do your assignment, you will not learn from it (and won’t even know what’s in the work you’ve submitted). Quoting an old episode of SNL, “I could tell you that for a fried bologna sandwich.” In terms of teaching guidelines, I think it is important to make sure that assignments which could be done effortlessly with AI are still required to be done by students without AI help, while more complicated assignments could be allowed with AI help but involve more advanced (and creative) components. On a personal note, I am glad I am retired. 🙂

I 100% think that AI will have a largely negative impact on students at W&M. While their products will look and sound more polished, students’ ability to understand and rearticulate material will be greatly diminished. Part of the learning process, and part of how we develop respect for things, is struggling through them. AI removes that barrier, for better or (mostly) for worse.

Every tool has an impact on your brain and how it works, so it’s not surprising that a tool like ChatGPT would influence how people store and recall information. I think it is much too preliminary to base any policy on. There are many ways to check for understanding of the material by students, and efforts are probably better focused on thinking about the purpose behind the assignment/test question/etc. to ensure that the design reflects that purpose.

I agree with many of the posts: more research is needed. Students will use AI, just as they have used web search engines rather than card catalogs at libraries to find information (I’m old, I know), but that doesn’t have to mean cognitive decline. It all depends on how AI is used, and on teaching that fact-checking is encouraged or even required, which may help counteract the potential cognitive decline.