thing 4: GenAI Concerns

Introduction

As AI continues to evolve, users should be able to recognize and address the broad ethical and social issues that accompany its global impact. This module will provide you with an overview of some of the most pressing concerns. 

Watch:

GenAI Considerations (5 minutes)

Read:

For this activity, you’ll choose one of the GenAI concerns mentioned in the video that is most important to you personally or professionally and read a short (~5 minute) article about it.

Misinformation and disinformation: How GenAI is boosting the spread of disinformation and propaganda

Risk of Bias: Unmasking the Bias in Facial Recognition Algorithms

Academic Integrity: How do we maintain academic integrity in the ChatGPT era?

Privacy and Data: ‘A new privacy threat:’ Protecting personal data in the world of artificial intelligence

Environmental Impact: Explained: Generative AI’s environmental impact


Discussion

Which reading did you choose? What was your biggest takeaway from this module?

41 replies on “thing 4: GenAI Concerns”

I chose to read ‘Explained: Generative AI’s Environmental Impact’. The impact on our environment, specifically the impact on electrical grids and water usage, should be of major concern as our usage of (dependency on) GenAI in ‘everyday’ situations continues to grow. I was alarmed to read about the frequency with which new models become available and the subsequent impact on the environment as those new models need to be trained. The amount of waste and the cost of maintaining the data centers and hardware will have to be addressed in conjunction with future improvements in GenAI.

I chose the reading about privacy and data. As someone who is interested in cybersecurity, it is fascinating how some people have taken all of these AI tools and let them do their thing without wondering what kind of data they are collecting or what that data is being used for. The article mentioned Internet of Things (IoT) products like Google Home and Alexa collecting data inside people’s homes with no way to know if or when they are recording. The microphone on a Google Home device can be turned off, but that doesn’t necessarily mean it’s not still listening.

There are also other devices with cameras that send data to a remote server like Ring doorbells and some automated vacuums. It really makes me wonder what they are going to do with that data. They could easily analyze the videos to see when the person leaves and comes home or what kind of items they own.

I chose the environmental impact reading. The information was surprising. I didn’t realize how much energy is being used to create and run AI. I’m going to think twice about my unnecessary responses to gen AI next time I’m about to reply with, “thank you.” I went on to find articles about that very topic, and when I think about all the times all the users simply say thank you and receive, “you’re welcome,” I realize it must really add up in terms of energy usage.

I chose “Misinformation and disinformation”. This is indeed a problem, but it existed before AI and is not going away any time soon. The antidote is supporting freedom of speech, which should allow access to all trustworthy sources of information, not only those which support the government’s agenda. The paper itself could have benefited from more examples from recent US history.

I actually read three of the articles (and will read the other two at a later date). I was aware of the issues related to misinformation, risk of bias, academic integrity and privacy, but I have never considered the environmental impact. That’s the reading I got the most out of. It was interesting to consider how much electricity (fossil fuel) and water is needed to run data processing centers, not to mention other negative impacts on the environment. Something that wasn’t mentioned was all the trees (and other greenery) that are cut down to make a place to build them. Last, the reading mentioned the increased resources needed to produce a GPU relative to a CPU. I didn’t know what a GPU was, so I did a Google search (I did not ask ChatGPT!).

I read “Risk of Bias: Unmasking the Bias in Facial Recognition Algorithms” because I had heard about the concern before and wanted to know more. Being aware of what causes the bias is the first step to understanding that, when training AI, it is essential to counteract bias by selecting data that actually reflects the whole population. Otherwise, a big part of the population will be overlooked or misrepresented.

I read the environmental impact article. This is one of my greatest concerns with GenAI. While I appreciate the video’s and the article’s perspective of encouraging users to consider how their individual behavior contributes to the power demands of running GenAI, it feels a bit like consumer recycling: actions at the individual level have such a limited outcome compared to the actions of corporations. Amazon, OpenAI, etc. are going to build these data centers whether or not I use ChatGPT. And even if I turn off Google’s AI search results summary, Google is still running AI on all of my searches, so the energy usage is the same. Students, workers, and even we faculty are hearing on a regular basis that we need to become GenAI savvy and master these new tools at a time when climate change is reaching new extremes. I wish the environmental impact were at the center of the conversation.

Thank you for highlighting the varying levels of impact between individuals and corporations. I completely agree that it feels like the conversation around consumer recycling, and it carries several of the same concerns about how we get corporations to listen to their constituents and the general public. I didn’t originally think about this during my read-through, so I appreciate your mentioning it.

I chose to read the article on the environmental impact of GenAI and found the specific details of the negative impacts troubling. A bigger takeaway from this module as a whole was the sheer number of big concerns about GenAI. The five options are huge and need to be addressed, but I think there are other concerns we didn’t even touch on, such as how much younger generations may actually be learning if they grow up with tools like GenAI. It makes me think a lot about how I want to approach my usage moving forward and highlights the importance of having these critical discussions.

I read the mis/disinformation article because of my background in international relations. Even before the explosion of AI, mis/disinformation was a huge problem. On the extreme end, you have the Rohingya massacre fueled by Facebook. On the more quotidian end, you have disinformation machines operating from Macedonia trying to influence voting in the US. As a society, we weren’t handling that well at all. Now, disinformation through AI will have us questioning facts even when they come from reputable sources. On top of all of that, the US (and perhaps it’s a larger trend globally) is seeing people actively denying expertise. We will be seeing the death of expertise coming from various sources.

I read the environmental piece because, as an environmental economist, that is a very important aspect to me. But what I’m probably most concerned about is the economic aspect: right now all these tools are “free,” but the energy requirements are huge. At some point, the AI is not going to be free, and once we are all dependent on it, I worry about the cost and the distributional consequences. Will rich people be able to afford AI while poor people cannot, and will that lead to an ever-widening gap between the haves and have-nots?

I selected the AI and Academic Integrity article. Even if the student cites the source (see my first post about APA), the fact that AI-generated results often contain misinformation and hallucinations should be a focal reason not to use them in any scholarly project. While AI might serve as a very crude meta-analysis of sorts (compiling a lot of data on a particular topic), that data has not undergone any peer review, which is a crucial element in the creation of scholarly research.

I read the article, “A New Privacy Threat:…” It is concerning how little privacy we actually have now that we have so much technology on our phones and in our homes, etc. It will be interesting to see how this all plays out in the future.

I’m concerned about all of these elements, but the environmental impact is a big piece of why I have not used generative AI much in my personal life or with my family. I read Environmental Impact: Explained: Generative AI’s Environmental Impact. The impacts through energy use (especially at a time when the administration is moving against renewable energy), water use (especially in areas out west with real water security issues), and pollution from data centers (like in Memphis, where Colossus has negatively impacted the communities in which it was built) are very concerning. I wish there were a way to turn off AI (for example, in a Google search) to avoid this environmental impact when I’m not even trying to use generative AI.

I read “‘A new privacy threat:’ Protecting personal data in the world of artificial intelligence”. I would like to see guardrails set up around our data and information so that it cannot be so easily collected. It feels like we have traded our privacy for convenience and ease of access to various functions. Discussions of, and respect for, privacy (which would include our data not being harvested for whatever use) need to be at the forefront.

I opted to read about how GenAI is boosting the spread of disinformation and propaganda. I can’t claim to be surprised by anything I read. Information operations are alive and well in the U.S., whether originating from other countries or from within, and GenAI is becoming a tool to perpetuate propaganda and misinformation or to misrepresent attributions for it. Back to what I expressed as my original concern for GenAI: folks need to be discerning and on their game in order to spot these campaigns that intentionally spread misinformation.

I chose to read the academic integrity article. The biggest takeaway is that I am already doing a lot to insulate my courses from the impact of GenAI. The next takeaway is that the models are constantly being improved, so adjustments may be needed in the future to meet my learning objectives, which could perhaps use some help in phrasing from GenAI to better resonate with students.

I chose to read about generative AI’s environmental impact which was sobering. I knew that it required a great deal of resources, but the amount quoted in the article is staggering. In addition to everything else we should be concerned about regarding the environment, we now should be concerned about the electricity and water that AI consumes. We talk a lot about misinformation, hallucination, and bias that AI is capable of. I think the public needs to be better informed about the unsustainable path this industry is on and the concern we should have for future generations.

I chose Explained: Generative AI’s Environmental Impact, and like other replies, I didn’t realize the energy required to perform what we see as simple typing. I’ve even used an AI image generator just for fun and now feel guilty that I’ve wasted resources on something idiotic. I think what’s scary is that this is something we are becoming so dependent on; it’s being roped into everyday life, and the effects on the environment will get swept under the rug: out of sight, out of mind. It’s like iPhones: the metals needed to make them, the manual labor it takes to assemble, package, and ship them… and we line up in droves to replace models we’ve had for a couple of years, just to get the newest and best version. The human penchant for what’s new, what’s right now, means that something is going to suffer, especially if it’s not in front of our faces.

I chose to read Explained: Generative AI’s Environmental Impact because of my high level of concern about the natural environment. The general public is at such a disadvantage in not having all the risks set out next to the tool, like a warning label on many other valuable tools! It may be that we as a civilization will have to reach a very negative milestone before we see a need to warn folks and to make plans for ways to use this tool while affecting the environment less, but by then I am afraid it will be too late.

I read the article on generative AI’s environmental impact. It was very eye-opening, and I feel naive for not being aware of the amount of energy being used or the waste produced. And I don’t want to live in a world full of data centers.

I chose the article on misinformation and disinformation. This is already a major problem, made worse by material on the internet and social media that is manipulating popular opinion. Our lawmakers have yet to come up with comprehensive federal legislation to protect citizens, and I am not sure they could even agree on the legislation needed. People are more likely to believe what they want to hear rather than the verifiable facts. AI is only holding up a mirror to our society, where experts are no longer held in esteem. 🙁

I read about the environmental impacts, which I have heard mentioned many times but did not fully understand. It was striking to me how much the power needs of data centers doubled between 2022 and 2023, largely due to the growth of AI. It put things into perspective for me to learn that an AI prompt requires about five times the energy of a general internet search to answer.

I read the article on privacy. As more people and companies adopt AI, we essentially lose control of our information. My doctor’s office is using AI to take notes. I found (in MyChart) where I had said something to the doctor in jest, but that conversation showed up in a summary as a real concern. Where does all that go? Who has access? What could it be used for? It’s moving too fast with no guardrails.

I chose Privacy and Data. My main take-away is that nothing posted online in today’s world can be considered private information. Companies are looking to collect as much data as possible by any means necessary.

I chose the article on environmental factors. I was really surprised to find out how much energy it takes to generate a response to an everyday person’s prompt. As the article states, not many people think about this issue because it’s not something that’s talked about. Most conversations just focus on the creative potential and the possible effects on job markets. However, we need to consider the environmental factors as well.

I chose to read about the Privacy and Data concern. My primary takeaway is that in the current environment, individuals do not have much agency in how their data is used (or indeed, in what constitutes their data). This is a policy issue; policymakers and the emerging field of AI ethicists will be essential to avoiding harm to individuals at scale.

I read two of the articles: the ones about misinformation and disinformation and about environmental impact. Misinformation and disinformation was already a concern of mine, and the article caused even greater concern. However, I was not aware of the environmental impact of using AI and was very surprised to learn of the energy needed. Reading the article helped me understand why, but the scale of the impact is still quite surprising. It will make me think twice about using AI rather than a Google search or another option.

I chose “How do we maintain academic integrity in the ChatGPT era?” This article really resonated with me. I think the author did a good job summarizing the false dichotomy of all-or-nothing adoption of gen AI over the past couple of years, and it made a convincing point that we need to adapt our courses to the reality and ubiquity of gen AI. There were a lot of good ideas on how to proceed, some of which seem appropriate for my courses, and I am excited to learn more and try them out.

I chose the environmental impacts piece and was amazed at the volume of electricity and water needed. These data centers are not going away, but incentives to promote greener energy are. It’s a bit depressing, especially knowing that you have to check behind AI for bad info. We humans were the originators of misinformation, and building a Cadillac when a VW would do, and AI is learning from us. It makes me sad.

I read Gallant’s piece on Academic Integrity, and really enjoyed her approach. It isn’t an either/or proposition, and she correctly places the problems of AI as the same problems that have bedeviled teachers and pedagogy forever: time, motivation, support in the classroom and by administrators. I think the challenging thing is having structural support for faculty to have the time and space to tackle these questions effectively. I think W&M is starting to do that fairly well, but I worry about large research universities with their existing overreliance on overworked grad students, since those big schools are going to teach more students in aggregate than our little liberal arts school.

I chose “A New Privacy Threat.” I am interested in how we can use AI for research and what that means in terms of protecting data about people and intellectual property.

I read the article about privacy. This is something that concerns me greatly as we move forward. My key takeaway is that my suspicions about data collection being used in ways that we may not agree with are correct.

I was unaware of the extent of AI’s environmental impact. It will be an interesting topic to bring up in work discussions on utilizing AI for major work processes.

I chose Environmental Impact: Explained: Generative AI’s Environmental Impact. What do I take away from this? Just like with previous discoveries and inventions, once again we all jump in feet first without taking the time to think about what we are doing to the environment. We need to slow down and look at all of the factors, or the only power and water available on earth will be for generative AI, leaving us, the plants, and the animals without. What good is the knowledge it produces if we fail to protect the world so eager to use it?

I read the article on academic integrity. I like how the author stresses this isn’t an either/or situation, and really liked the recommendation of making coursework more meaningful. Getting student buy-in and letting them know what’s in it for them (other than they need to pass the course to get their degree) goes a long way, but unfortunately, I don’t know how good we are at relaying that information.

I chose the article on environmental impact. I was aware of disinformation and bias, but hadn’t really given thought to the environment. Obviously that was a big gap in my understanding! I was amazed to learn about the amount of electricity and also water needed to power generative AI. As we look toward ways to be more sustainable, this will need to be included. It feels daunting and overwhelming to consider. I know AI won’t be going away, but wonder at the benefit vs the cost, in many areas!

I chose the unit on academic integrity, although all five of the concerns are very real to me. I think the effect of AI on teaching is the most fundamental one, because AI is a disruptive tool, and it will affect globally what kind of training students will need in the future. For example, compared to 50 years ago, education has mostly abandoned memorization and pen-and-paper calculations as main educational activities. Figuring out what students really need to learn to become valuable and fulfilled professionals is the question.

I chose “Misinformation and Disinformation.” My biggest takeaway is how concerning it is that misinformation and disinformation can be spread so easily. I would like to see more legislation enacted to block their spread. I realize that legislation will not stop them completely, but it would at least make spreading them more difficult.

I chose “How do we maintain academic integrity in the ChatGPT era?”. I think my biggest takeaway is that while AI is getting a lot smarter and can genuinely help us finish many tasks, it still lacks the ability to truly mimic our specific tone and way of speaking. So it’s quite easy for a learned person to find out if a student is cheating. At the same time, the purpose of studying is self-betterment. If we cheat using AI, we lose valuable opportunities to learn something new, and we lose our competitiveness.
