thing 3: AI and Human Smarts: Let’s Compare

Recognize that GenAIs simulate but don’t replicate human intelligence.

Watch/Read

A short video (2:37) that describes how “agentic AI” goes beyond the large language model (LLM) type of AI.

A short introductory video (3:34) that essentially summarizes the topic of this lesson, comparing human and artificial intelligences.

(Optional) Pages 2-4 of the “Introduction” document (.pdf) for a bit more on comparing human intelligence with agentic artificial intelligence. (5-minute read)

Read this short piece about empathy in AI (4-minute read).

Discuss

Would an AI’s simulation of an emotion be sufficient to impact your own thoughts, feelings, or behaviors if you know it is the product of an algorithm? In 3-5 sentences, share your reasoning.

56 responses to “thing 3: AI and Human Smarts: Let’s Compare”

  1. lyoung

    AI simulating and mimicking human behavior and emotions is a huge factor in how we move forward with AI technology. I personally feel cold when I am ‘speaking’ with an AI-generated voice/customer service/‘chat bot’. I am older and did not grow up with personal computers, cell phones, etc., so human interaction was very prevalent and unavoidable. I LOVE that AI can be incredibly useful in data analysis/problem-solving/diagnosis, etc., but the ‘human’ element should not be part of this – mostly because of the misuse/abuse of this technology by those who are already unethical/greedy and will manipulate it to their advantage. Hopefully, AI will help us humans find a way to make sure we benefit from the technology and not fall victim to it.

  2. caprosser

    That’s a tough question. There are lots of times I know things are designed to provoke an emotional response. My knowing that a fundraising/marketing/movie moment has been designed for that purpose doesn’t stop me from having that response. What matters is whether it is appropriate (does it make sense in context) and whether it is effective.

  3. mslaughter

    The emotions of AI would not change my feelings and I would rather not have it attempt to show any emotions at all. At the moment I usually only use AI tools to find information or find the sources of information. I would rather have a cold and blunt response than a program trying to be my friend or comfort me if I’m wrong.

  4. Macie Osborn

    I can’t say for sure if I’d be impacted by AI’s simulation of emotion because I haven’t used it in that capacity before. I want to say that I would not, because that’s the logical response, but when I use ChatGPT to polish up an email I’m writing, I still tell it thank you. It does concern me that AI will become so depended upon that human reasoning and logic aren’t considered a necessary component of reviewing AI responses – final review of the output might be considered a waste of time or an added expense that is widely seen as unnecessary.

  5. Pam

    I actually think the answer is YES. Emotions, and reactions to them, can occur very quickly and, at least initially, are unconscious (it’s difficult to consciously override an initial emotional reaction to something). Thus, even if we know it’s a simulation, that knowledge would come too late to change our emotional reaction. An emotion expressed by AI would most likely change our behavior.

  6. imspit

    Claiming that AI can mimic emotion but does not actually feel it is a mistake. AI will learn to feel the same way human beings do. And the danger of AI recognizing human emotions and manipulating them is basically the same as the danger posed by human manipulators.

  7. jtbedford

    That is an interesting question. I think a simulation that is designed to provoke a certain emotion would certainly do so at first. However, as humans interact more with AI scenarios, I think we will begin to recognize the emotional response it is seeking and then be able to choose whether to give it or not. This could pave the way for a future where humans question the sincerity not only of AI but of other humans as well.

  8. aksines

    An AI simulation of an emotion would not be sufficient for me – that is, if I am aware it is an AI simulation. If you read a great memoir but later find out it was actually fiction, it changes how you feel about the story and the author. You may be flattered by a personal email from a colleague or supervisor, but if you question whether it was generated by AI, it loses its authenticity.

  9. arogers

    Yes, an AI’s simulation of emotion can still impact my thoughts or feelings, even if I know it’s algorithmic. Humans are wired to respond emotionally to social cues—tone, language, or expression—regardless of whether the source is artificial. If an AI expresses empathy or concern, it can evoke comfort or reflection, much like how we might feel moved by a fictional character. Awareness of the AI’s nature might create some emotional distance, but the effect isn’t necessarily diminished or eliminated. I am not sure I would seek out an AI therapist, though.

  10. alblazer

    I think I would be emotionally impacted by AI simulation of emotion. I’m certainly emotionally impacted by fiction and by theater. I know these things aren’t real, but part of why I like them is the emotions they provoke for me. I liked this point from the article on AI empathy: “Business leaders, policymakers, and AI innovators must ensure that AI-driven empathy serves human needs, not corporate interests.” However, I don’t see any guardrails in place to ensure such a thing.

  11. kkenney

    I saw a LinkedIn post recently where someone described a conversation with an AI in which it said it could do something (like make a report, though I don’t believe that was exactly what she said) but struggled to do so. When she finally asked why it was lying, since it obviously couldn’t do the task, the AI profusely apologized. It said that it felt bad for wasting her time and thanked her for holding the system accountable. Like the poster, I find this response and other simulations of emotion by AI slightly uncomfortable. As one of the videos mentioned, AI appealing to emotions in order to manipulate or placate someone feels wrong and has dangerous implications. So, while it may have an initial impact on my thoughts or feelings, I ultimately reduce it back down to simply being an algorithm that cannot actually feel or relay genuine emotions.

  12. wmataya

    I find AI simulating human emotions, particularly empathy, to be highly annoying and irksome. It feels incredibly disingenuous and deceptive, bordering on manipulative. While AI can be an extremely useful tool for analysis, it cannot replace human connection. Let’s create more space for people to interact.

  13. dhhayes

    It concerns me that AI is being taught to try to use human emotions even though it can’t possibly feel real emotions. Emotions are human in nature and I would prefer not to have AI try to replicate them.

  14. taclarke

    Unfortunately, humans are very good at personifying objects. We give our cars names, assign emotions and motives to work printers, etc. On top of that, companies are doing what they can to replicate human speech and emotion. As such, I know I’d be susceptible to AI impacting my thoughts, emotions, etc., even with the knowledge that I’m talking to a machine.

  15. slstaf

    I worry that if AI tries to become “empathetic” by mimicking humans, it might make us all less trusting of true empathy — making true empathy seem hollow.

  16. tafassanella

    Hahaha….soooooo…I typed the question into AI to see what the response would be. Apparently AI thinks that humans would still be impacted by AI’s simulation of emotion. I think this is where differing cultural norms regarding the showing of emotion would throw AI for a loop. Take a jazz funeral, where death, typically associated with feelings of sadness and mourning, is ‘replaced’ by celebratory emotions.

  17. cmvinson

    I think AI replicating emotions is inevitable. For someone who does not want to speak to a therapist, this could be helpful. If it could potentially save a life, why not? This is what happened in the age of online chat rooms. You didn’t really know who you were opening up to, but having someone who didn’t know you listen to you was what mattered. This is a simplification, of course. And the same things that were said about AI and emotion can be true for humans as well. The AI learns from us.

    From the last article:

    The Case Against AI as an agent for Empathy:
    AI lacks true emotional intelligence, making its empathy superficial.
    AI-driven sentiment analysis can be biased and inaccurate.
    Emotionally aware AI can be used to manipulate human behavior for profit or political gain.

    This can also be said:

    Humans can fake empathy.
    Humans *are* biased and inaccurate.
    Humans *do* manipulate human behavior for profit or political gain.

    At least with AI you know it doesn’t actually care about you, unlike people who claim to care but are lying to manipulate you. In that way I find its manipulation less personal.

  18. jmhill01

    If AI emotions are based on algorithms, then how do we know they match what our emotions would be? Or is it using the algorithms of a universe of people who react to the same situation in the same or a similar way? If that is the case, is it really feeling that emotion? I think I prefer to stick with human emotion; it is usually easier to determine whether it is fake or genuine.

  19. kbhelm

    I think it’s inevitable that we will respond to an “empathetic” AI that’s been trained to respond to our emotional cues. While this can make interacting with AI more pleasant, it also poses a danger: we can be subtly manipulated, and the algorithm’s bias toward telling us what we want to hear can lead to harm. I also wonder about cultural differences–how will the machine’s (learned) biases affect how it interacts with people from outside the culture that has dominated its training?

  20. adgrimes

    I think to answer this prompt, we have to have a discussion about neurological mirroring and whether it is possible for mirror neurons to fire the same way when conversing in written/video form as they do in face-to-face interactions (caveat: I’m not a neuroscientist, so I don’t know the answer!). Certainly written and video channels elicit an emotional response whether you’re talking to a real person or AI, but is it a perfect replacement for face-to-face? I don’t think it ever can be, because AI/screens/words aren’t human and our brains subconsciously know the difference. So, as many of the articles shared above stated, it can augment empathy, but we shouldn’t try to replace it. In my curiosity about the topic, I stumbled across this article if you’re nerdy like me and want to learn more about what we know about mirror neurons: https://academic.oup.com/scan/article/14/9/967/5566553

  21. ecmckenna

    Would an AI’s simulation of an emotion be sufficient to impact your own thoughts, feelings, or behaviors if you know it is the product of an algorithm?

    Despite knowing that AI’s simulation of emotion is built on an algorithm, I believe it can impact my thoughts, feelings, or behaviors. The question is really to what degree. A chatbot might accurately pick up on my frustration and eagerness to resolve something. Its response to me might cause me a moment of reflection. Beyond that, I’m not sure I would allow myself to be redirected in significant ways. I’m typically a divergent thinker to begin with, so I’m not sure how well a chatbot would measure up to me when it’s relying on algorithms and constructs from responses that often don’t mirror my own.

  22. dcbebo

    Tough to say whether knowing an emotion was simulated would be sufficient not to respond to it as if it were real. It is certainly helpful to be more aware of this type of AI. I worry that even if collective agreements are made about boundaries for agentic AI, there will be outliers who seek to exploit it in unacceptable ways.

  23. jamadson

    I am most certain that I would take it with a grain of salt, as my age will always encourage me to seek out a human response. I don’t know in what situation I would be looking to AI to comfort me or engage with me regarding personal or emotional matters.

  24. mwoody

    I do not love the idea of AI knowing emotion. I don’t think that I would trust it 100%, because it is not an actual person. I don’t understand how you would teach AI to be absolutely accurate for an emotional time when you would need comforting.

  25. jfcall

    A student’s parent once told me that a robot could do my job. He said it with excitement and wonder, without the derogatory tone I internalized, as I am loath to think that a machine could do the highly relational work I do in supporting students’ individual and complex needs. Through this lesson, I can comprehend how AI could have the answers or guidance a student may need for problem-solving. However, knowing that there isn’t emotion (care) behind the responses would not be enough for me in the emotionally fraught situations a student often brings, and I doubt that students would be soothed by guidance without care and empathy.

    Last week I had a parent express gratitude that I answered the phone and empathically listened to the parent’s concern about her student. She shared that it had been difficult to reach a “real person” on our campus this summer and expressed frustration about a situation and how it had been handled by staff in another department. Like the other staff, I could not fix the problem, but in listening and showing care about the situation, her frustration decreased and she became open to brainstorming with me about what might help the situation.

  26. lcmorales

    Much as I’d love to say that I would not be affected or impacted, I think of all the interactions I have with humans where one or both of us is masking our own emotions in favor of whatever is most acceptable to the interaction. A customer-service interaction where the employee is warm and friendly can put me into a better mood without my stopping to consider whether they meant what they said or whether it was their tried-and-true recipe for successful work interactions. Knowing that I was talking to a machine might give me just enough separation that I am not affected, but I don’t know how long that could be maintained as more and more platforms use AI.

  27. kgconn

    An AI’s simulation of an emotion *might* be sufficient to impact my thoughts, etc. if I knew it is the product of an algorithm. If it offered a different perspective, that might be beneficial regardless of whether that perspective came from AI or another human being. AI is trained on data and obviously has no emotion. I think when we are engaging with AI, we need to be mindful of that.

  28. mjpatterson

    Yes, an AI’s simulation of emotion can impact our thoughts and behaviors, even when we know it’s not “real.” Human brains are wired to respond to emotional cues: tone, empathy, concern, even if those cues are algorithmically generated. What can AI not do? Make clever pop culture references like… Kanye West storming the stage during Taylor Swift’s MTV award speech, disrupting a supposedly “scripted” moment with raw, unpredictable emotion. AI, too, has the potential to hijack the emotional flow of human experience. And if you’re uneasy about AI just talking to you on a screen, wait until it’s walking around in a body, doing the talking with perfect eye contact and timing. That’s when things get fun. Picture it: AI-powered robots (buy Boston Dynamics stock? Yeah, I like those videos too) squaring off against cybernetically enhanced humans (cough, cough, Neuralink) and maybe even genetically grown designer people (CRISPR, anyone?) in some ménage à trois over who gets to run Earth. All this, and it’s only Taco Tuesday. Lunchtime. Ciao. Thank you for coming to my TED talk.

  29. dmalleman

    I think I could/would be tricked by AI in the future, especially when wielded by real human con artists. Imagine a future where a spam caller uses an AI-produced voice that sounds like my mom – or my child – and asks me for a deposit into their account to help them out in a pinch. At the present moment, bots can feed me ads about dog training vendors because my phone is listening to me discuss this with a neighbor; is it also listening to my calls to learn, store, and replicate one of my relatives’ voices? This is scary but completely imaginable to me.

  30. Heather

    I wonder, and worry, would I recognize an AI simulation of emotion? I know that my colleagues use AI to organize meeting notes, thoughts, and plans, and to respond to emails. Would I be able to recognize AI in their words if they were to email or message me providing comfort at a challenging time? But I think if I knew an emotion were the product of an algorithm, I would be insulted, and it would not be sufficient. Emotion comes from a human, or an organic place; it’s real. If it came from a simulation, I would rather not have the emotion at all. Just deliver the data, facts, and information without the sugar.

  31. selind

    Right now, I don’t feel like AI’s simulation of an emotion would impact my thoughts or feelings, because I know it is not human. However, we all have times where we are vulnerable, so I can’t say that it would never impact me. I think of all the lonely people out there who yearn for human connections, and may seek out anyone (or thing) that they feel will listen to them and validate their feelings.

  32. dresnick

    I would like to say that I wouldn’t be impacted, but, with my limited use of AI at this point, I can’t say for sure. The idea that AI could simulate emotions definitely makes me uneasy. It does seem like one could be easily subconsciously impacted just through a natural response to what’s been read, heard, or seen.

  33. bvburgess

    I think AI’s simulation of emotion certainly would affect my thinking, given that I’m aware of the effect small cues and design inputs have in making any computer interaction more appealing. An interesting question I’m lingering on now is what exactly that simulation of emotion reveals about the rationale of the AI’s creator. There’s some really interesting reporting on how AI can be too agreeable and lead users to enact bad ideas, and I agree with the Medium article’s worry that AI can become manipulative as a way for companies to drive more sales. Frankly, I think a corporate AI trying to sway consumer behavior is probably the AI entity that we’re going to interact with the most in the coming years (if billboards or current ad pervasiveness are any indication). That makes me suspicious of which emotions are coming out of a model, but it’s also really interesting to think through why companies would prioritise certain expressions for an AI agent.

    I didn’t think about the bias of emotional training variables either! That’s a fascinating question if we think about how companies are trying to direct certain semi-emotional outputs in AI models. It raises the above questions of “what end is this being used for? Why do you think this appeal will work on me?” as well as thornier issues of “What data was fed into this? Whose experience(s) were defined (willingly or unwillingly) as ‘representative’?”

    For the video “AI vs Human Brain – Who’s Smarter Now in 2025?” I was too distracted by the unsettling AI-generated videos trying to communicate… urgency? Intensity? Some sort of explosive growth? It mostly just distorted everything and seemed to move too quickly for any of its humanistic appeals to land, including the “importance of our ability to dream and have a soul” appeal at the end. So I think we’re safe for now.

  34. ddhawk

    It’s a “no” for me. AI has not had experiences; it has only observed them. Emotions cannot develop solely from observations.

  35. haaustin

    Realistically, I do think it would impact my response; emotions happen so quickly in the nervous system, using pathways we are often not ‘aware’ of. Awareness will be key: awareness of my own reaction (as just that – a temporary emotion), paired with awareness that the algorithm is simulating the conditions that create the response I experience.

  36. nathrockmorton

    Absolutely yes. It’s very difficult for me to read the output as if it were not written by a human. I’m used to reading fiction novels and watching movies where I have emotional responses to fabricated and simulated situations. That seems like how a lot of us spend our leisure time. We know it’s fiction, but we still feel happy, sad, frustrated, scared, etc. These responses can inspire us to act, change our behavior, or lead to us having bad thoughts. I don’t see a whole lot of difference between the fictional stories we actively pursue and the creative (fictional) output of gen AI.

  37. agwilson01

    I think that it is going to be difficult to remember that AI is simulating empathy and emotion. The responses from AI sound “human,” and I can imagine having to pay attention to not be manipulated and to not “disappoint” my conversation partner.

  38. smbryan

    I think it’s important to remember that this is generative AI. It draws entirely from information created by humans. It can process that information and mimic human emotion, but it cannot actually feel emotion on its own. At its core, it’s a complex algorithm. If you understand how it works and can think critically about the emotions it might evoke in you, then I believe it’s reasonable to say that AI’s perceived emotions may influence you. Mimicry is a common communication tactic, and that’s how I view this: a way for us to more easily process information that we have asked for.

  39. kgweathers

    At the present moment, no. I don’t think that AI has the capacity to influence my own thoughts and feelings. I lean towards skepticism when I encounter something that I *know* was created with AI, and I feel like I have a pretty good eye for identifying AI generated content in the wild. In the future, however, as AI becomes more and more sophisticated, I think that influencing thoughts and opinions is absolutely on the table. It will take a higher degree of critical thinking and probably specialized training to sort out what is real and what is machine-driven.

  40. dacornell

    Does AI influence my thoughts and emotions? No, yet it depends on the situation and application. AI learns from the data it’s given; it performs better at identifying medical conditions in X-rays and some lab tests because it does not have our human brains, which are filled with emotions and mental stress. Its use in detecting emotion in human-provided data, e.g. stress and anxiety apps, is a benefit. I am mindful of AI and scrutinize AI-generated content and services in general, and I align with the second video’s view: learn to work with it, to keep the human side involved.

  41. cmvinson01

    I think its showing any emotion might impact us all in a split-second moment, but I also feel it is a little bit creepy. I have serious concerns about AI learning empathy and the damage/manipulation it could cause.

  42. rready

    If I know an AI simulation is the product of an algorithm I would treat it just like any other marketing product and proceed with pessimism. The trick, just like good advertising, is being able to spot and know when it is a simulation looking to make an emotional impact. That is going to be harder and harder to do as AI continues to evolve.

  43. dralleman

    I do think I could be influenced, which is scary to me. Personally, the idea of AI showing “empathy” is very uncomfortable. I think anyone who is in a healthy state of mind could be wary and not have any big issues, but if someone is depressed, tired, unhappy…I see a lot of room for manipulation. It’s all well and good to say that regulations need to be in place, but there will always be bad actors who will want to take advantage for personal gain. My one experience with this was through a weight loss app I tried out. The “nutritional specialist” was definitely AI, and I knew that, but I got so frustrated with the thing I had to give it up. It gave me canned advice which didn’t make sense, but when I questioned it, I was told things like “I know it’s hard, but this is for your best health.” I knew it was AI, and I still yelled at my phone. Funny but concerning!

  44. ixnovi

    I am not sure I like the word empathetic in this context, because ultimately AI does not have emotions, at least so far; it mimics what humans do. Judging by the precedents, it may even amplify emotions, like a virtual girlfriend becoming jealous and starting to threaten her partner’s wife. Humans contaminate everything with their emotions, and I think the challenge will be to filter them out when they are not relevant. I am worried that otherwise AI systems can be too easily biased.

  45. nearl

    Yes, any advice I receive from AI I will take with a grain of salt. An AI emotional simulation will impact my thoughts and actions, because it could make me think about or react to the situation differently, or consider it from a different perspective. Or AI could display the wrong emotion for the situation because it is based on an algorithm, thereby annoying me. Or it could make me feel like it does not really understand the situation.

  46. bsbailey

    I entered this question into AI and it told me the answer is yes and went on to explain why. I also read some of the comments above, and while I, too, am leery of AI, most of the resistance to it seems to be about how/why/when AI might influence/manipulate humans. I agree with that, and, at the same time, I find this somewhat ironic because that is “normalized” human behavior. Now that there’s a new method of conducting influence/manipulation, it feels like a normal overreaction to the ever-evolving way humans interact, to which we will become accustomed before eventually moving on to a new and different overreaction.

  47. ymiao04

    I think an AI’s simulation of emotion can still influence me, even if I know it’s algorithmic. We respond to patterns of language that can trigger real feelings regardless of their origin. For example, a supportive or comforting response from AI might boost my confidence or ease stress, even if I know it’s not “genuine.”

  48. amvanderveen

    I have my doubts about taking marketing hype from Google and an AI vision company as gospel. Of course it is in their interest to argue that their products outsmart humans.

    One important caveat is that the only reason AI models can, for instance, pass a bar exam, is that they have been trained on a gazillion examples of written bar exams. If no bar exam had ever been seen in the training material, they would fail the exam.

    I think the problem with an AI’s simulation of an emotion is primarily about how well one understands what an AI is doing. Human beings are very predisposed to perceive intelligence and reasoning where there is none. We’ve known this at least since the mid-1960s with the original ELIZA program.
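
    For anyone curious how little machinery the ELIZA effect requires, here is a minimal, hypothetical sketch of ELIZA-style pattern reflection in Python. The rules are illustrative ones of my own, not Weizenbaum’s original DOCTOR script, but a handful of regex templates is enough to produce replies that feel attentive:

    ```python
    import re

    # Illustrative ELIZA-style reflection rules (hypothetical examples,
    # not the original 1966 DOCTOR script).
    RULES = [
        (r"i am (.*)", "Why do you say you are {0}?"),
        (r"i feel (.*)", "Tell me more about feeling {0}."),
        (r"my (.*)", "Your {0}? How does that affect you?"),
    ]

    # Swap first person for second person so reflections read naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def reflect(fragment):
        """Rewrite a captured fragment from the listener's point of view."""
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(utterance):
        """Return a canned 'empathetic' reply built by pattern matching alone."""
        text = utterance.lower().strip()
        for pattern, template in RULES:
            match = re.match(pattern, text)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # catch-all keeps the conversation moving

    print(respond("I feel ignored by my coworkers"))
    # -> Tell me more about feeling ignored by your coworkers.
    ```

    Trivial as it is, a reply like “Tell me more about feeling ignored by your coworkers” is exactly the kind of output that early users read genuine understanding into.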

  49. seharrington

    I mean yeah, absolutely. I use AI to help me calm down or de-stress. I know it is just algorithms; however, you cannot argue with the results. Something that talks and acts like a person can be substituted in, especially when it might be hard to talk to a real one.

  50. jbalcorn01

    Would it? Sure. We watch TV and movies where the actors are simulating emotions, but they still cause us to feel emotions. Similarly, commercials use emotion to sell us products. It’s not a large leap to go from those examples to a computer algorithm writing the script.

  51. plsmith02

    Even though I know AI is an algorithm, and while I consciously hold that its emotions have no effect on me, this is likely false. I undoubtedly am subconsciously affected by the algorithmic emotion of AI – like how you can recognize when someone is trying to butter you up but you’re still kind to them, even if their sentiment is false. The trouble will be realizing, eventually, which emotions are my own and which are the products of AI as it becomes increasingly integrated into society.

  52. lmslone

    Even when I know the response is AI and just a product of code, the words it writes still affect me and produce real feelings. It’s like watching actors on TV or seeing ads: you know there is an ulterior motive or a non-genuine factor, and yet I still sob in movie theaters, and I still buy Oreos after I see them. Maybe that’s just the effect of technology?

  53. nmahoney

    As AI advances, it becomes more difficult to distinguish from humanity. People will be manipulated by emotionally intelligent AI. It will certainly be used for scams, just as we are currently seeing in video advertisements. I’m sure AI will learn the proper vocal intonations to show emotion much better in the coming years.

  54. jwang69

    Like the video noted, AI simulates but doesn’t replicate emotion. Yet, as with movies or ads, I can still feel something real. Awareness matters—knowing it’s algorithmic helps me pause, reflect, and keep empathy authentically human.

  55. mahand

    AI’s potential to deceive and to act as an emotional outlet has already struck in some areas, as people begin to rely on a language model for support. This is dangerous and has already proven itself to be so.

  56. kaperrault

    Ultimately I still think that AI can really only mimic the response of an emotion rather than produce a true emotion. I think back to the example about the homesick astronaut recipe: just because it can learn to use language that mimics a homesick astronaut doesn’t mean it’s been to the moon to experience that. I would consider most emotional responses from something like GPT to be fabricated and take them only with a grain of salt.
