thing 13: How Are People Using AI Now? 

In this mini-lesson, consider how you and others are using AI and reflect on those uses given what you have now learned!

Activity:

Look at the image below and read the Harvard Business Review article on uses of AI in 2025 (4 min.)

Alt Text:
An infographic titled "Top 10 Gen AI Use Cases" compares the most popular generative AI use cases for 2024 and 2025, highlighting a shift from technical to emotional applications. Categories are color-coded: Personal and Professional Support (blue), Content Creation and Editing (orange), Learning and Education (yellow), Technical Assistance and Troubleshooting (green), Creativity and Recreation (pink), and Research, Analysis, and Decision-Making (purple).

2024 Use Cases:

Generating ideas
Therapy/companionship
Specific search
Editing text
Exploring topics of interest
Fun and nonsense
Troubleshooting
Enhanced learning
Personalized learning
General advice
2025 Use Cases:

Therapy/companionship
Organizing my life
Finding purpose
Enhanced learning
Generating code
Generating ideas
Fun
Improving code
Creativity
Healthier living

Zao-Sanders, M. (2025, April). How People Are Really Using GenAI in 2025. Harvard Business Review. Retrieved July 25, 2025, from https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025
(Can’t access either link? Try this one, and if you’re really interested, optionally read the full report.)

Discuss:

If you like, tell us how you are using GenAI and how it compares to this list. What do you think about what other people are using AI for currently? Choose at least one of the uses listed above to reflect on. Given what you now know about AI from the previous things, what do you think some of the benefits and pitfalls might be of this particular use? How can you encourage yourself and others to think critically about AI?

11 replies on “thing 13: How Are People Using AI Now?”

I don’t use GenAI for work and use it in a very limited capacity in my personal life. However, my husband, who is a songwriter and musician (mostly as a hobby), uses Mureka. He provides the lyrics he wrote and the guitar and piano music he recorded, and then uses Mureka to assist in arranging a final song. He selects the type of vocalist and other instrumentation available on Mureka. The benefit to him is that Mureka provides the elements of musical arrangement that he cannot do himself – sort of a la carte. It is also easy for him to use, since he has a musical background. Since he is not a vocalist, Mureka allows him to add his lyrics and original music to arrange a finished product. The pitfall is that the vocals and musical arrangements available in Mureka are not originally created as you use this AI product to create a song, and you would need to be a musician or understand music theory to get the best use of it. I believe staying informed about GenAI and its positive effects on the quality of our lives will help people overcome their fears. It is also imperative that the shortcomings of AI, such as its effects on our environment, continue to be made known.

I would say that I use AI primarily for “organizing my life” and “generating ideas,” which is very much in alignment with the list. One of the items on the list that caught my attention was “breaking the rules” (#92). When I went into the full report, there was only one quote attached to it and it was about using GenAI to generate code that bypassed a school internet block. I thought this was a little funny at first and entirely within the norm of K-12 students attempting to find loopholes for school rules. While funny, I do think it presents some risk, particularly in light of our previous content about data security and ethical concerns. There may be a legitimate security reason to block content on school devices/internet and students being able to easily bypass that through AI is a potential problem. There are certainly pros and cons and I think we can help mitigate potential risks with increased transparency, such as clearly laying out the reasons for prohibiting certain content on sites and being open to discussion.

I don’t use AI, but reading through this article, I will definitely try it out for creating a travel itinerary. Enhancing learning, like learning a new language, could be fun!
It is funny that the author used it to dispute a fine and the fine was voided.

I use ChatGPT for a couple of the top use cases, but “organizing my life” would be my most frequent out of those top 10. It helps with decoding hospital charges, meal planning, acting as a budget tracker, and as an assistant that can take ideas and extrapolate them into a polished email. This seemed inevitable since our productivity is expected to continually increase, while wages have significantly less power than they did before computers and higher productivity expectations came to be. When the “other duties as assigned” becomes more like 50%+ of your job, rather than the supposed 5%, it is impossible to keep quality of work without assistance. I think if this existed but workloads and the cost of living/wages were on par with what they were roughly 60 years ago, we wouldn’t see such a need for this in terms of work. I think it makes perfect sense that people are using this tool for so many different things. It tells a story about what people are not getting in their lives, or don’t have time to invest in deep thought or planning for things. I think the *need* for this tool is what we should really be concerned with.

On this list, healthy living is one I would be cautious about. I have used AI for meal planning, but it does not know everything about your body and won’t completely replace a doctor or therapist. While it is definitely a helpful tool, people should be mindful of its limitations when it suggests food or exercises. People using it for this purpose will need to give it details about any health concerns, and potentially medicines they are taking. (Ex. no grapefruit if it reacts to the meds you are on.)

I recommend that people understand AI can be very helpful when used in a mindful manner, but it is more of a starting point than an end product.

I have used NotebookLM at work and have uploaded documents, slide decks, and presentation transcripts. I then asked it to create a podcast and produce an overview of the conference. I checked the output to be sure it was accurate. NotebookLM in that instance was spot on.

Personally, I have used AI to learn about what to plant in my garden, create new recipes, and draft an email to a company that was refusing to give me a refund when they advertised 100% guaranteed satisfaction on their website.

I can understand why the number one use case for AI in 2025 is therapy/companionship. AI is always available and has no judgement. While this might be helpful for some, I think others (particularly those who may suffer from mental illness) should be careful. We must remember that AI is not human and has no empathy and no feelings. While it can often be quite helpful, it’s possible for some people to fall into an unhealthy pattern with AI that is quite difficult to pull themselves out of.

I personally don’t use AI that much beyond entertainment (and passively through Grammarly), but the Harvard Business article gave me an idea to ask Copilot to create a to-do list for me for when my friend is coming to visit me in two weeks! I asked it to exclude the weekend because I’ll be out of town, and it did it. It also included optional weekend tasks that I could do. A benefit will be that I don’t have to sit down and think about every little detail and come up with a timeline from scratch. A pitfall could be that it doesn’t know all the ins and outs of your home and what needs to be done, so you’ll still need to add/modify as needed.

So far, I primarily use AI to generate ideas useful to me at work. I find it to be a useful brainstorming tool (think ChatGPT and Claude). I would like to expand my usage to support research (I’m thinking Elicit or Semantic Scholar). Of this list, the item that I had not thought about is using AI to organize my life. From the full report, this means “. . . structuring tasks, priorities, and goals and even sorting out your physical environment. Generative AI can assist by offering personalized scheduling, task management, and goal-setting to help people make best use of their time.” The biggest question for me with this use is how to communicate priorities to the tool, and what its default priorities are. I can imagine that it might be useful in developing SMART goals, but can it help the user imagine?

I have started using AI in both professional and personal settings. It has helped me with drafting emails, summaries, checklists at work as well as meal planning and shopping lists, vacation planning and itineraries for visits. I even used it to put together an exercise plan and decipher a medical results document. I usually double check the results and make sure the information does make sense. I am not sure I would use it for therapy or companionship, especially after learning more about AI in this course. But I can see it being helpful in the future with organizing my home, planning a garden in spring etc. So many possibilities I want to explore.

When I do use generative AI, I mostly use it for troubleshooting, improving code, general advice, and enhanced learning. When I am writing scripts, there may be a function or module I am simply not aware of that does what I am trying to do faster or better.

From the list I see “Healthier living” has moved from number 75 to number 10, which may be a good thing. Many people seem not to know basic things such as what to eat, how much to eat, and how to exercise, and generating a meal plan or exercise plan may help. I also believe it is able to generate plans that work around simple injuries.

One pitfall in the healthy living scenario is trying to replace actual medical practitioners, with people using it to self-diagnose and self-medicate. Artificial intelligence does not know someone’s entire medical history, what medication they are currently taking, or their family history unless they give it that information. I have seen a few articles where users were told to use harmful substances because they couldn’t take the normal medicine. Also, general misinformation may not be filtered out when the data the model relies on is collected and processed.

I don’t use GenAI personally or professionally, but the article was super interesting for me in thinking about the dominant uses. I have a friend who uses ChatGPT for companionship, talking with it while driving and engaging with it like a friend. I play Dungeons and Dragons, number 31 on the list, and my DM uses GenAI to generate location and character descriptions. They are often strange, but this makes our game play more fun.

I remember going to a science museum as a kid and interacting with a computer therapist. You would type a sentence like “I like clouds” and the computer would respond “Why do you like clouds?” You would say something like “because they are beautiful” and the machine would say “Why are they beautiful?” and so on. I remember the exhibit being about the real benefits of self-reflection in this manner: even though you know it’s not a real person asking you questions, you gain ways to reflect that are measurably beneficial. It’s interesting to consider the consequences of ChatGPT as a therapeutic companion.

Looked it up: ELIZA
https://en.wikipedia.org/wiki/ELIZA
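The echo-the-statement-back-as-a-question behavior described above can be sketched in a few lines. This is a hypothetical, minimal ELIZA-style rule matcher for illustration only, not the original ELIZA program's script; the patterns and responses are invented to match the museum example:

```python
import re

# Hypothetical ELIZA-style rules: match a statement pattern and
# turn the captured phrase back into a question.
RULES = [
    (re.compile(r"i like (.+)", re.IGNORECASE), "Why do you like {}?"),
    (re.compile(r"(?:because )?they are (.+)", re.IGNORECASE), "Why are they {}?"),
]

def respond(statement: str) -> str:
    """Reflect the user's statement back as a question, ELIZA-style."""
    text = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.fullmatch(text)
        if match:
            return template.format(match.group(1))
    return "Tell me more."  # fallback when no rule matches

print(respond("I like clouds"))               # Why do you like clouds?
print(respond("because they are beautiful"))  # Why are they beautiful?
```

The real ELIZA worked from a much richer script of keyword-ranked transformation rules, but the core idea is the same: shallow pattern matching with no understanding, which is exactly why the reflective effect on the user is so striking.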

Because of my work as a researcher who uses a lot of data, I use ChatGPT almost every day. It’s incredibly helpful: I ask it to troubleshoot errors in my code, to explain complicated academic terms, and to summarize long journal articles.
