Recognize that GenAIs simulate but don’t replicate human intelligence.
Watch/Read
A short video (2:37) that describes how “agentic AI” goes beyond the large language model (LLM) type of AI.
A short introductory video (3:34) that essentially summarizes the topic of this lesson, comparing human and artificial intelligences.
(Optional) Pages 2-4 of the “Introduction” document (.pdf) for a bit more on comparing human intelligence with agentic artificial intelligence. (5-minute read)
Read this short piece about empathy in AI (4-minute read).
Discuss
Would an AI’s simulation of an emotion be sufficient to impact your own thoughts, feelings, or behaviors if you know it is the product of an algorithm? In 3-5 sentences, share your reasoning.
6 responses to “thing 3: AI and Human Smarts: Let’s Compare”
-
AI simulating and mimicking human behavior and emotions is a huge factor in how we move forward with AI technology. I personally feel cold when I am ‘speaking’ with an AI-generated voice/customer service/’chat bot’. I am older and did not grow up with personal computers, cell phones, etc., so human interaction was very prevalent and unavoidable. I LOVE that AI can be incredibly useful in data analysis/problem-solving/diagnosis etc., but the ‘human’ element should not be part of this – mostly because of the misuse/abuse of this technology by those who are already unethical/greedy and will manipulate it to their advantage. Hopefully, AI will help us humans find a way to make sure we benefit from the technology and not fall victim to it.
-
That’s a tough question. There are lots of times I know things are designed to provoke an emotional response. My knowing that this fundraising/marketing/movie moment has been designed for that purpose doesn’t stop me from having that response. What matters is whether it is appropriate (does it make sense in context) and whether it is effective.
-
An AI’s emotions would not change my feelings, and I would rather it not attempt to show any emotions at all. At the moment I usually only use AI tools to find information or to find the sources of information. I would rather have a cold and blunt response than a program trying to be my friend or comfort me if I’m wrong.
-
I can’t say for sure if I’d be impacted by AI’s simulation of emotion because I haven’t used it in that capacity before. I want to say that I would not, because that’s the logical response, but when I use ChatGPT to polish up an email I’m writing, I still tell it thank you. It does concern me that AI will become so depended upon that human reasoning and logic won’t be considered a necessary check on its responses – final review of the output might be seen widely as a waste of time or an unnecessary expense.
-
I actually think the answer is YES. Emotions, and reactions to them, can occur very quickly and, at least initially, are unconscious (it’s difficult to consciously override an initial emotional reaction to something). Thus, even if we know it’s a simulation, that knowledge would come too late to change our emotional reaction. An emotion expressed by AI would most likely change our behavior.
-
Claiming that AI can mimic emotion but does not actually feel is a mistake. AI will learn to feel the same way human beings do. And the danger of AI recognizing human emotions and manipulating them is basically the same as the danger posed by human manipulators.