thing 5: Prompt Power: Make AI Work for You

What is a prompt?

Wikipedia defines a prompt as “natural language text describing the task that an AI should perform.” Ages ago, during the generative AI boom (2022–2023), people discovered that a well-constructed prompt was a kind of secret hack to unlock better responses from tools like ChatGPT, Claude, Gemini, or Copilot. This led to the rise of prompt engineering: a term used to describe the strategies and techniques behind writing effective prompts. But we now understand that good prompting is really just about clear thinking, intentional phrasing, and lots of practice.

Your best prompts: 

  • Are clear instructions
  • Focus not just on what you want, but why you want it
  • Use structure and tone to shape the AI’s behavior
  • Improve through experimentation and reflection

There are many different frameworks and strategies out there that provide guidance for writing effective prompts. One that is particularly useful is the C.L.E.A.R. framework: it describes the qualities of a good prompt, encourages reflection on the process, and focuses on the iterative nature of the interaction with an AI tool.
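The qualities listed above can be made concrete. As a purely illustrative sketch (the `build_prompt` helper below is hypothetical and doesn’t call any AI service; it only assembles the text you would paste into a tool like Copilot or ChatGPT), the elements of an effective prompt (task, purpose, audience, format, and tone) might be combined like this:

```python
# Hypothetical sketch: assembling a prompt from the qualities discussed above.
# Nothing here contacts a real AI tool; it just builds a prompt string.

def build_prompt(task, purpose, audience, output_format, tone):
    """Combine the pieces of an effective prompt into one clear instruction."""
    return (
        f"{task} "
        f"The goal is to {purpose}. "
        f"Write for {audience}, "
        f"format the answer as {output_format}, "
        f"and use a {tone} tone."
    )

prompt = build_prompt(
    task="Summarize how generative AI is affecting teaching in higher education.",
    purpose="introduce the topic to newcomers",
    audience="college faculty new to the topic",
    output_format="three short paragraphs",
    tone="clear, plain-language",
)
print(prompt)
```

The point of the sketch is simply that each piece answers one of the reflection questions in the activity below; leaving a piece out is what makes a prompt vague.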

Read

Activity

Goal: You’ll write and refine your own prompts for a text-based task in Copilot. Let’s start vague, then improve it step by step.

AI Tools Used: Copilot

Step 1: Write a short, one-sentence prompt for Copilot that reflects something you might want help with. Don’t worry about making it good yet!

Stuck? Try these prompts

For Copilot: Say something about AI in education

Step 2: Review the output and reflect on your prompt. Identify what might be missing.
Look at your prompt and ask:

  • What’s unclear?
  • What’s the purpose?
  • What’s the output format?
  • Who’s the audience?
  • What’s the tone or style?

Step 3: Now try to revise your prompt. Use the C.L.E.A.R. framework for guidance!

Stuck? Try these for inspiration

For Copilot:
Write a 3-paragraph summary of how generative AI is affecting teaching and learning in higher education. Use clear, plain language for an audience of college faculty new to the topic.

Step 4: Reflect, and ask yourself:

  • What changed?
  • What do you think made it more effective?

Discussion

How could writing more effective prompts be used in your own work for research, design, or outreach?

A Note on Prompting and AI Output

Prompting isn’t perfect, and generative AI tools don’t always rely on real-time facts. Many LLMs generate responses based on patterns in the data they were trained on and not from live information or verified sources. Some tools are “grounded” in search engines or databases, but even these can produce content that sounds accurate but is misleading or just wrong.

Even strong prompts can result in:

  • Inaccurate or fabricated details (“hallucinations”)
  • Biased or stereotypical content
  • Responses that sound confident but may lack evidence or context

Bottom line: prompting helps you shape the response, but human judgment, background knowledge, and critical thinking remain essential. Use prompting to clarify how you want the AI to respond, but always verify what it gives you. Even great prompts need you to evaluate the response, reflect on what worked, and revise when needed.

39 replies on “thing 5: Prompt Power: Make AI Work for You”

More effective prompts allow for getting information in a timely manner. A work-related example of how GenAI could help is to provide detailed, up-to-date information for when we audit payments for the University and need to confirm that not only University policy, but state and federal policies are also followed. A more effective prompt will allow for accurate information in a timely manner regarding policy specifics.

Writing effective prompts would make the process faster and have the output closer to my original vision. The first prompt I gave it I asked for an email template that gave the user information on four things. The output was an email template with the things I wanted separated into their own sections and had emojis. Next I asked it to redo the template while removing the emojis, not having separators, and putting certain information into bullet pointed sections. It did a much better job that was closer to my original vision. I feel I could also ask for suggestions to make the information better or clearer since it has been trained on other emails that may be similar.

The clearer the initial prompt, the faster good results will come. It is helpful, and will be helpful in my work, to be able to draft a more precise prompt to get the results I want the first or second time. I’m looking forward to getting better through the practice of writing clearer prompts.

The importance of asking the right questions in the right form when dealing with AI was foreseen by Robert Sheckley in his “Ask a Foolish Question” story. I highly recommend it. 🙂

Can’t wait to read Robert Sheckley. Thanks for the citation.
I can see that the quality of the prompt directly relates to the quality of the AI response. I first asked it for recommendations for hiking through Zion National Park. It generated a huge list of all the trails. But when I told it more about me and what I was looking for, it gave me a perfect one-day itinerary. Can’t wait to go on my vacation!
As for work, I could definitely see an improvement in usefulness depending on the details of the prompt. I suppose I need to learn how to generate good prompts just like AI is learning to generate good answers.

The prompt makes all the difference. If you are looking for something specific to meet certain requirements, your prompt has to communicate those requirements and starting points along with what the output should look like and whom it is supposed to reach. If you are looking for ideas and creativity, keeping the prompt broad and then narrowing down with follow-ups seems to be the way to go. I had AI plan a girls trip to Sedona. I started with a general prompt (Plan a girls trip for 3-4 days to Sedona) and then narrowed it down to preferences (hiking, sights, jeep tour, food, etc). AI made vacation planning easy and it even linked the recommended places 🙂

I have to admit I was surprised by how good the output was. I asked it to create an assignment for my Economics of Climate Change class and then asked it to tailor it to an introductory economics class. I am a little relieved that I could find some issues with the way it structured the assignment, so maybe I’m not out of a job yet!

I asked copilot to draft an email for me. I like that copilot prompted me to ask about tone (more formal? more informal?) and asked for more information to make the email more personal. Sometimes I think too long about emails and agonize over tone and substance. GenAI could speed up this process and I could use it depending on who I’m talking to. If I’m emailing someone who I know personally, I will still write the email myself because I’m interested in preserving my voice, but if the recipient is not someone I know personally, GenAI could be useful.

Echoing the thanks to imspit for the book recommendation. I see the value in being both as specific as possible and broad/general with prompts. I searched for ‘chest exercises’ and then ‘chest exercises using dumbbells’ and then ‘lower chest exercises using dumbbells’ and then ‘lower chest exercises using heavy dumbbells.’ While the more specific prompts yielded more specific results, the general prompt also gave me ideas for additional specific prompts to ask.

I found the C.L.E.A.R. framework very interesting and a good model for thinking about how to write a prompt. The CLEAR framework includes five core principles, and I found concise and explicit especially interesting. Finding a balance so that the prompt is concise but also explicit will take some practice. Thinking about and making clear what I’m trying to accomplish, what I’m looking for in an output, and what details are the most important keeps me engaged with the work and helps drive what generative AI suggests.

Ironically, I think following the CLEAR method for writing AI prompts will make me a better communicator to fellow humans! I know I personally have issues with knowing who knows what, and when. Sometimes, that can lead to miscommunication. If I approach the situation with making sure everyone has the same background information/is on the same page and use the CLEAR method, I think a lot less confusion will happen. 😉

I searched for text for thank you cards and sympathy cards, two occasions for which I often find it difficult to find the right words. As I gave more specifics about the scenario and recipient, it generated better and more useful verbiage to use in these cards.

Step 1 of the exercise called for writing a short, one-sentence prompt. This worked very effectively as a prompt for my scenario. I was surprised that it needed little refinement, based on my knowledge of what I prompted the AI generator to address. For use in research or experimentation for how to build efficiencies in the work I perform, I would keep in mind the need to proof generated outcomes and then make refinements as necessary.

Using effective prompts not only allows for better output that can more easily be used in work, but it also reduces the amount of time one must spend going back and forth trying to clarify and contextualize content. In my role, I communicate with various constituents, and my communication looks different based on the population. Adding that contextualization and clarity in a prompt will allow me to receive more useful information for my work. Additionally, when considering the ethical implications, if we are able to minimize the amount of time we are spending with AI by having clearer prompts, then we are able to reduce possibilities of disinformation, negative environmental impact, etc.

Writing effective prompts is helpful for crafting higher quality and more relevant outputs (I asked for an outline for a transition document because we severely lack those). It is also a good practice because it will make you think more about your own communication skills subconsciously, which I think will make us better communicators (as Tiffany mentioned above!).

Copilot gave me misleading information about resources that could be used to describe scientific figures. Copilot also described something random when I asked it to summarize figure 1A from a literature article based on DOI. In contrast, Perplexity AI mined the paper effectively to provide a reasonable description of the figure. I did like that Copilot provided suggestions on prompt improvement. Might be helpful when I’m writing introductions for a series of related papers.

My initial one-sentence prompt resulted in more information than I needed because my prompt was not specific enough. I then applied the CLEAR framework to my second prompt. By including the specifics regarding length of output, audience, and type of language desired, the result was much more aligned with what I needed. Using the CLEAR framework definitely helps!

I used a simple prompt, but the output gave me the outline and suggestions I hadn’t thought of which made it helpful in using the CLEAR framework. But I kept thinking about how much energy I was wasting after completing thing 4.

Writing a good prompt can help me quickly write an email or come up with more concise bullets in presentations. Content still has to be fact-checked and I have seen some typos. The more detailed & specific the prompt, the better the outcome. I like how Copilot will ask follow up questions to help tailor the response.

Learning to write a good prompt makes a big difference in the results you get. I will often go into great detail about the target audience and desired tone of what I’m writing. I try to give the AI context and parameters to work in.

My team and I were brainstorming this morning, planning for the upcoming semester. When we hit a wall trying to create new and clear language for our work, our new GenZ colleague turned to ChatGPT. It led into a good discussion on how to clarify our language into helpful prompts just as we are trying to make our communication more accessible to all.

In this exercise, I asked for relaxing activities for my upcoming time off. Clarifying my prompt took away the outdoor explorations it suggested and moved toward more specific INDOOR local activities for this weekend. It also helped me refine my own thinking about how I want to spend my time.

I’m thinking about how to use AI in the classroom since it is inevitable that students will use it in some way. The question is how to use it ethically and for what means. It could be useful to get students to think through and practice prompt engineering so that they can critique the limits of Gen AI. This is coming from the perspective of someone who teaches primarily discussion-based courses with lots of writing assignments.

I asked copilot to draft an email for me about fall registration–didn’t love the output, but I could see using the perspective of the AI to perhaps review what I write for clarity, or to simplify language. Using copilot in that way could help me learn to write better prompts, which might generate content I like in future.

I think more effective prompts could be used everywhere. I think what’s important is knowing what I want or what criteria I want satisfied, then reverse engineering a prompt that will lead the gen AI there. I’m not saying gen AI isn’t useful in the instances where I don’t know what I want, but it will really help to write better prompts if I already have a good idea of what the final product should accomplish or look like.

AI was very helpful by guiding me to get to an effective prompt that eventually narrowed the scope of the response to get to the exact results I needed to assist

Figuring out the right prompts has always been an implicit part of my research, but in using non-AI internet searches or research in the archives. Even just finding my way through the maze of my files. But I find the generative range of AI much more challenging to get the right prompts, or at least the possibilities are more daunting.

I also find that if I’m researching in an archive, and accidentally give the wrong “prompt” by requesting a box that’s not exactly related to my research question, I still sometimes stumble across valuable information for the topic or historical period. With AI I’m more wary of things that surprise me, since I don’t have as reliable a way to verify or contextualize that data.

One of the items I have found most useful in writing prompts is the ability to ask for a particular tone, e.g., formal, conversational, etc. What I get out of the GenAI tools isn’t necessarily directly usable, but it can often steer me in the direction of how to reword something I am working on. I frequently find incorrect information, even when requesting a summary, so thinking about how to prompt to have the best chance of correct information is next on my list.

I write a lot of instructional material for my job to help students understand how to use certain pieces of equipment. AI would be SUPER useful in generating a baseline for instructions, but I would need to go back and finetune the output to make sure that what is being generated is accurate. In this exercise, Copilot understood my prompt, and the instructions made sense, but not for the specific equipment I wanted (I would have to be super specific in my prompt to get what I actually want).

I keep my eyes out for cheat sheets on AI prompts for writing basic agendas, review-of-work guides, and briefs. Well-formed ones output richer results, be it in the level of language, tone, or audience.

Personally, I think writing more effective prompts is similar to writing instruction manuals, and the C.L.E.A.R. Framework touches on it all. I do really like the fact that you can change the tone of your writing. That is extremely helpful.

A clear prompt seems important, but it is nice to see that you can modify it continually until you get it right. I especially like that you can set the tone for the information; that is extremely useful.

The response after my first prompt was actually pretty good, but I was able to quickly tweak some areas and build out others with subsequent prompts. I was originally treating my prompts more like a Google search, where I would just scroll down to find the link I was looking for, but if I spend some more time building out my initial prompts with more specificity, or define the role and the why, it will save me more time trying to get the right response/answer.

My initial prompt was a general question about how AI can help me in my role at the university. After seeing the generated answers, I noticed one particular area which I thought would be especially helpful right now. I tailored the next prompt to that task, and the outcome was a very good template for some letters I need to write. I can see that the more specific you are, the better the outcome. It’s cool that you can continue to refine your questions, to get to what you most want to achieve.

CLEAR is definitely a useful approach. I tend to approach prompts as google searches, looking for right keywords, but it must be different. I must confess I was pleasantly surprised how well ChatGPT was able to come up with the lecture plan on a given topic – pretty close to what I’d do myself.

I consider the CLEAR format to be a road map, and I always enjoy a road map that highlights the path to get me where I’m going in the most effective and efficient way.

Writing more effective prompts can make AI a much stronger tool for research, design, and outreach. For design, well-crafted prompts can spark creative ideas, produce varied prototypes, or refine messaging for clarity and impact. The real challenge is knowing what you want, which is truly difficult! I design posters, and expressing what I want accurately is difficult. But if I can, ChatGPT will be able to spit out the results I want. So ironically, the valuable idea would still have to come from humans.
