Newbies, Kids & Teens Prompt Engineering Tutorial
Recently, I hosted an in-person, holiday-themed prompt engineering session, and it ended up being a ton of fun. We enjoyed going through the prompts, comparing the results from different LLMs, honing our techniques, and laughing at some of the bizarre responses that AI will generate. The goal of the exercise was to lower the barrier to entry for AI usage and demystify some of the more complex terms in AI engineering.
We had so much fun that I decided to create this tutorial specifically for newbies, kids, and teens to learn the basics of prompt engineering in a fun and engaging way. Give it a try yourself, or try it with your kids and classrooms! I’d love to hear your feedback.
The Challenge
Mystery Detective Agency Challenge
A valuable diamond has gone missing from the local museum, and it’s up to you and your AI partner to find out who stole it. Along the way, you’ll face obstacles like incomplete witness statements, misleading alibis, and tricky suspects. Use advanced prompt engineering techniques to solve the case.
During this tutorial, I’ll guide you through various prompt engineering methods to help you unravel the mystery. After each task, share what you learned about your AI’s responses, what techniques worked or didn’t work, and which AI model you preferred.
Needed Tools and Information
Tools
In this exercise, please use your LLM of choice. If you’re looking for a great tool, try the LLM playground Chatbot Arena, a useful website for exploring and comparing the latest LLMs without having to sign up.
Background Information
What is an LLM?
An LLM (Large Language Model) is an AI model trained on vast amounts of text to understand and generate human-like responses. It doesn’t “think”; it uses patterns in its training data to predict the best answer. When you ask it a question, it doesn’t really “know” the answer like a person does, but it uses everything it has read to guess the best answer for you. It excels at stringing words together to sound smart or helpful.
Prompt Engineering Technique 1: Zero-Shot Prompting
What it is: It sounds fancy, but it’s really just asking AI to solve a task directly without any examples.
Why it’s useful: It’s the simplest way to get quick results. Kids can think of it as the YOLO technique ;)
Example Prompt(s):
"A diamond was stolen from the city museum. Create a detailed plan for investigating the theft, including how to interview suspects, gather evidence, and narrow down potential culprits."
"What is the best strategy to solve a museum heist involving multiple suspects and missing evidence?"
How To Get Started
If you’re not familiar with LLMs and/or don’t have a tool to use, start by opening Chatbot Arena and going to the “Arena (side-by-side)” tab. Scroll down to see the available LLMs; you can expand the list to see descriptions of all the models. Select 2 models to compare, enter example prompt 1, and see the results.
Tip: Explore the differences in large language models (LLMs)
When going through this exercise, take the time to look at the differences in output produced by the two models using the same prompt. Like humans, each LLM will take in the prompt given and return a different answer that it deems most likely to be correct. Think of LLMs like experts with unique backgrounds who have had different experiences and training. Each LLM will approach questions differently based on what they’ve learned and the rules they must follow.
Training Data: Each model has been trained by different data, so their knowledge and tone vary.
Design: Some are bigger or built differently, affecting how detailed or fast they are.
Focus: Models can specialize in specific tasks, like writing or coding.
Rules: Filters, settings and guardrails (or rules) can make LLMs answer differently.
Now Try Another Model
In the last exercise, I compared chatgpt-4o to gemini-1.5-pro. While each model showed some differences in formatting and emphasis, in this case they were relatively aligned on approach. Let’s try running it again with chatgpt-4o versus a different model.
Clear the history, replace the Gemini model with Anthropic’s Claude model, and re-run.
We notice at least two things in this latest run:
chatgpt-4o does not generate the same responses every time. LLMs generate different results because they use probabilities to choose the most likely next word in a response. This process is influenced by randomness, so even with the same input, it may pick different reasonable answers each time.
The Claude model was much more straightforward about the exercise, taking fewer creative liberties and expressing upfront concern about AI ethics. In the first response, it provided only 5 steps to investigate and reminded the user of its limitations. In the second run, it expanded slightly but again kept the response straightforward and without any creative liberties.
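To build intuition for why the same prompt produces different responses, here is a toy Python sketch of the sampling idea: the model predicts a probability for each possible next word and then picks one at random according to those probabilities. The word list and weights below are made up purely for illustration, not taken from any real model.

```python
import random

# Toy distribution a model might predict for the word after "The thief went".
next_words = ["home", "upstairs", "outside", "quiet"]
weights = [0.4, 0.3, 0.2, 0.1]

def sample_next_word(rng: random.Random) -> str:
    # random.choices picks according to the weights, so the same context
    # can yield a different continuation on every call.
    return rng.choices(next_words, weights=weights, k=1)[0]

rng = random.Random()  # unseeded: different runs give different words
print([sample_next_word(rng) for _ in range(5)])
```

Run it a few times and you’ll see the list change, which is exactly what happens when you re-run the same prompt against chatgpt-4o.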
Prompt Engineering Technique 2: Few-Shot Prompting
What it is: Providing a few examples to guide the AI’s response.
Why it’s useful: It helps the AI understand what kind of reasoning or format you’re looking for.
In this example, we want the LLM to provide concise and actionable responses. While the open-ended questions from technique 1 were enjoyable, they often produced lengthy replies with multiple options, which isn’t ideal when time is limited. Instead, we want the LLM to focus on giving clear, direct guidance. To achieve this, we’ll provide a few examples of scenarios with appropriate suggested actions, guiding the model to follow this pattern in its final response.
Example Prompt(s):
Scenario: A bicycle was stolen from a park.
Next Action(s): Interview witnesses, look for security footage, and identify suspects nearby.
Scenario: The suspect claimed to be at the movies but had no ticket stub.
Next Action(s): Check security cameras at the movie theater.
Scenario: A muddy shoe print near the museum’s window.
Next Action(s): Compare the print to shoes owned by staff and suspects and analyze the mud type to determine where it came from.
Scenario: A pair of gloves was found at the crime scene.
Identify the next action(s)
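If you’d like to see how a few-shot prompt is assembled mechanically, here is a minimal Python sketch. The `few_shot_prompt` helper is just for illustration; it stitches the example pairs above into one prompt and leaves the final “Next Action(s):” blank for the model to fill in.

```python
def few_shot_prompt(examples, new_scenario):
    """Build a few-shot prompt from (scenario, action) example pairs."""
    parts = []
    for scenario, action in examples:
        parts.append(f"Scenario: {scenario}\nNext Action(s): {action}")
    # The last scenario has no answer: that's the gap the model fills in.
    parts.append(f"Scenario: {new_scenario}\nNext Action(s):")
    return "\n\n".join(parts)

examples = [
    ("A bicycle was stolen from a park.",
     "Interview witnesses, look for security footage, and identify suspects nearby."),
    ("The suspect claimed to be at the movies but had no ticket stub.",
     "Check security cameras at the movie theater."),
]

prompt = few_shot_prompt(examples, "A pair of gloves was found at the crime scene.")
print(prompt)
```

Paste the printed prompt into your LLM of choice and it should answer in the same concise “Next Action(s)” style as the examples.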
Tip: Try the Same Question Again With Zero Shot
To highlight the effectiveness of few-shot prompting, which helps guide AI toward the desired response style, let’s test the last scenario again, this time without providing any examples. This shows how the model performs when asked to identify the next best action on its own. We can see below that without the examples to guide it, the LLMs took a lot more creative liberty with their responses, producing lengthy options versus the concise, actionable output we were looking for.
Prompt Engineering Technique 3: Chain-of-Thought (CoT)
What it is: Enables complex reasoning by breaking down the problem step-by-step in the prompt.
Why it’s useful: Great for complex decision-making.
In this example, we are evaluating the suspects’ alibis to see if there was any room in their schedules to stop by the museum and steal the diamond.
Example Prompt(s):
Question: Suspect A claims they went straight home after leaving a friend’s house at 1600 Pennsylvania Avenue NW, Washington, DC at 10:10 PM. They were seen on video footage entering their residence at 1234 U Street NW, Washington, DC at 10:40 PM. The museum is located at 1000 Jefferson Drive SW, Washington, DC. Could the suspect have stopped at the museum?
Answer:
Step 1: Calculate travel time from the friend’s house to the museum.
Distance: Approximately 2 miles.
Speed: Assuming an average driving speed of 30 mph.
Time: 2 miles ÷ 30 mph = 4 minutes.
Step 2: Calculate travel time from the museum to the suspect’s residence.
Distance: Approximately 3 miles.
Speed: Assuming an average driving speed of 30 mph.
Time: 3 miles ÷ 30 mph = 6 minutes.
Step 3: Total travel time with a museum stop:
4 minutes (friend’s house to museum) + 6 minutes (museum to home) = 10 minutes.
Step 4: Evaluate the available time:
The suspect left the friend’s house at 10:10 PM and arrived home at 10:40 PM, giving them 30 minutes of travel time.
With only 10 minutes required for the trip including a museum stop, the suspect had 20 minutes unaccounted for.
Conclusion: Based on the time and travel distances, the suspect had ample time to stop at the museum.
Question: Suspect B visited their mother at 500 12th Street SW, Washington, DC and left her house at 9:40 PM. They were seen on video footage stopping by the convenience store at 1500 Constitution Avenue NW, Washington, DC at 9:48 PM, and then entering their home at 2000 M Street NW, Washington, DC at 10:00 PM, as recorded by their ring camera. The museum is located at 1000 Jefferson Drive SW, Washington, DC. Could the suspect have stopped at the museum?
In the example above, we can see that the LLMs provide answers following the pattern given in the original question, showing their work and using a similar step-by-step reasoning structure.
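You can double-check the chain-of-thought arithmetic for Suspect A with a short Python sketch (travel time in minutes = distance ÷ speed × 60, using the same distances and 30 mph assumption as the worked answer):

```python
def travel_minutes(distance_miles: float, speed_mph: float = 30.0) -> float:
    """Travel time in minutes at a constant assumed speed."""
    return distance_miles * 60 / speed_mph

leg1 = travel_minutes(2)  # friend's house -> museum
leg2 = travel_minutes(3)  # museum -> suspect's home
total_with_stop = leg1 + leg2

window = 30  # minutes between 10:10 PM departure and 10:40 PM arrival
spare = window - total_with_stop

print(leg1, leg2, total_with_stop, spare)  # 4.0 6.0 10.0 20.0
```

This matches the model’s steps: a 10-minute route with the museum stop leaves 20 unaccounted-for minutes inside the 30-minute window.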
Prompt Engineering Technique 4: Prompt Chaining
What it is: Breaking a complex task into smaller, linked prompts. This technique allows each instruction to be fully focused on a smaller and more targeted problem space.
Why it’s useful: Simplifies multi-step problems by focusing on one step at a time.
Prompt(s):
(Prompt 1) "Describe the profiles of three suspects involved in the museum heist. Include their background, alibis, and possible motives. Keep each profile under 2 paragraphs."
(Prompt 2) "Based on these profiles, create a list of their suspicious actions and helpful clues. {Insert output from Prompt 1}"
(Prompt 3) "Use the clues and actions outlined below to write a detective’s report summarizing who is most likely guilty and why. {Insert output from Prompt 2}"
With this example you can see that prompt chaining breaks down complex tasks into smaller, sequential steps, making it easier for AI to handle intricate problems.
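The chaining pattern above can be sketched in a few lines of Python. The `ask_llm` function here is a hypothetical stand-in for whatever LLM you’re using; it is stubbed out so the chaining logic itself runs, and a real version would send the prompt to your model and return its text.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder for a real LLM call. This stub just echoes a short
    # acknowledgement so the chain can be demonstrated end to end.
    return f"[model response to: {prompt[:40]}...]"

# Each step's output is pasted into the next step's prompt.
step1 = ask_llm("Describe the profiles of three suspects involved in the museum heist. "
                "Include their background, alibis, and possible motives.")
step2 = ask_llm(f"Based on these profiles, create a list of their suspicious actions "
                f"and helpful clues.\n{step1}")
step3 = ask_llm(f"Use the clues and actions outlined below to write a detective's report "
                f"summarizing who is most likely guilty and why.\n{step2}")
print(step3)
```

The key point is that each prompt stays small and focused, with the previous answer supplying the context, which is exactly what you do by hand when copying one response into the next prompt.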
Thank You
Thank you for following along with this prompt engineering tutorial. I hope you discovered something new, or even just realized that AI can be approachable and fun for everyone. Whether you’re a beginner or a pro, these techniques are meant to inspire creativity and help you tackle problems with AI as a tool. I’d love to hear what you learned and see your results, so please share your experiences and findings below. If you would like to learn more about prompt engineering, please check out the Prompt Engineering Guide, which provided a lot of inspiration for this tutorial. Let’s keep the conversation going and continue exploring what AI can do together!