
June 25, 2025
Welcome to Agent Support—a developer advice column for those head-scratching moments when you’re building an AI agent! Each post answers a question inspired by real conversations in the AI developer community, offering practical advice and tips.
This time, we’re talking about one of the most misunderstood ingredients in agent behavior: the system prompt.
Let’s dive in!
💬 Dear Agent Support
I’ve written a few different prompts to guide my agent’s responses, but the output still misses the mark—sometimes it’s too vague, other times too detailed. What’s the best way to structure the instructions so the results are more consistent?
Great question! It gets right to the heart of prompt engineering.
When the output feels inconsistent, it’s often because the instructions aren’t doing enough to guide the model’s behavior. That’s where prompt engineering can make a difference. By refining how you frame the instructions, you can guide the model toward more reliable, purpose-driven output.
🧠 What Is Prompt Engineering (and Why It Matters for Agents)
Before we can fix the prompt, let’s define the craft.
Prompt engineering is the practice of designing clear, structured input instructions that guide a model toward the behavior you want. In agent systems, this usually means writing the system prompt, a behind-the-scenes instruction that sets the tone, context, and boundaries for how the agent should act.
While prompt engineering feels new, it’s rooted in decades of interface design, instruction tuning, and human-computer interaction research. The big shift? With large language models (LLMs), language becomes the interface. The better your instructions, the better your outcomes.
🧩 The Anatomy of a Good System Prompt
Think of your system prompt as a blueprint for how the agent should operate. It sets the stage before the conversation starts. A strong system prompt should:
- Define the role: Who is this agent? What’s their tone, expertise, or purpose?
- Clarify the goal: What task should the agent help with? What should it avoid?
- Establish boundaries: Are there any constraints? Should it cite sources? Stay concise?
Here’s a rough template you can build from:
“You are a helpful assistant that specializes in [domain]. Your job is to [task]. Keep responses [format/length/tone]. If you’re unsure, respond with ‘I don’t know’ instead of guessing.”
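To make that concrete, here’s a minimal sketch of how a system prompt built from that template travels with every request. It assumes the openai Python package and an OpenAI-compatible endpoint; the domain, model name, and prompts are placeholders, so swap in whatever your agent actually uses.

```python
# Minimal sketch (assumes the openai package and an OpenAI-compatible endpoint).
# The domain, model name, and prompts below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful assistant that specializes in Azure networking. "
    "Your job is to answer configuration questions step by step. "
    "Keep responses concise and in plain English. "
    "If you're unsure, respond with 'I don't know' instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use the model you selected for your agent
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # the blueprint
        {"role": "user", "content": "How do I peer two virtual networks?"},
    ],
)
print(response.choices[0].message.content)
```

Notice that the system prompt rides along as its own message with the "system" role, while the user’s question stays separate. That separation is what lets you reuse the same instructions across every turn of the conversation.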
🛠️ Why Prompts Fail (Even When They Sound Fine)
Common issues we see:
- Too vague (“Be helpful” isn’t helpful.)
- Overloaded with logic (Treating the system prompt like a config file.)
- Conflicting instructions (“Be friendly” + “Use legal terminology precisely.”)
Even well-written prompts can underperform if they’re mismatched with the model or task.
That’s why we recommend testing and refining early and often!
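As a hedged illustration (the prompts below are invented for this example, not taken from a real agent), here’s the difference between a prompt that mixes vague, conflicting instructions and one that resolves them by saying which behavior wins and when:

```python
# Invented example prompts, for illustration only.

# Vague and self-conflicting: "friendly" vs. "precise legal terminology",
# with no guidance on which one wins.
VAGUE_PROMPT = "Be helpful and friendly. Use legal terminology precisely. Keep it casual."

# Refined: a role, a task, a format, and a rule for handling the tension.
REFINED_PROMPT = (
    "You are a paralegal assistant. Explain contract clauses in plain English, "
    "quoting the exact legal term in parentheses the first time it appears. "
    "Keep answers under 150 words. If a question asks for legal advice, say you "
    "can't provide it and suggest consulting a lawyer."
)
```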
✏️ Skip the Struggle—Let the AI Toolkit Write It!
Writing a great system prompt takes practice. And even then, it’s easy to overthink it!
If you’re not sure where to start (or just want to speed things up), the AI Toolkit provides a built-in way to generate a system prompt for you. All you have to do is describe what the agent needs to do, and the AI Toolkit will generate a well-defined and detailed system prompt for your agent.
Here’s how to do it:
- Open the Agent Builder from the AI Toolkit panel in Visual Studio Code.
- Click the + New Agent button and provide a name for your agent.
- Select a Model for your agent.
- In the Prompts section, click Generate system prompt.
- In the Generate a prompt window that appears, provide basic details about your task and click Generate.
After the AI Toolkit generates your agent’s system prompt, it’ll appear in the System prompt field. I recommend reviewing the generated prompt and tweaking any parts that don’t match your intent!
Heads up: System prompts aren’t just behind-the-scenes setup; they’re submitted along with the user prompt every time you send a request. That means they count toward your total token limit, so longer prompts can impact both cost and response length.
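If you want a rough sense of that overhead before you commit to a long prompt, you can estimate the token count locally. This sketch assumes the tiktoken package and a cl100k_base-style tokenizer; different models tokenize differently, so treat the number as an estimate rather than an exact figure.

```python
# Rough token estimate for a system prompt.
# Assumes the tiktoken package and a cl100k_base-style tokenizer.
import tiktoken

system_prompt = (
    "You are a helpful assistant that specializes in Azure networking. "
    "Keep responses concise. If you're unsure, respond with 'I don't know'."
)

encoding = tiktoken.get_encoding("cl100k_base")
token_count = len(encoding.encode(system_prompt))
print(f"System prompt adds roughly {token_count} tokens to every request.")
```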
🧪 Test Before You Build
Once you’ve written (or generated) a system prompt, don’t skip straight to wiring it into your agent. It’s worth testing how the model responds with the prompt in place first.
You can do that right in the Agent Builder. Just submit a test prompt in the User Prompt field, click Run, and the model will generate a response using the system prompt behind the scenes. This gives you a quick read on whether the behavior aligns with your expectations before you start building around it.
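If you’d rather script the same sanity check outside the Agent Builder, a small loop over a few representative user prompts works too. This is a sketch under the same assumptions as before (the openai package, an OpenAI-compatible endpoint, and placeholder prompts and model name):

```python
# Sketch: run a handful of representative user prompts against one system prompt
# and eyeball the results. Assumes the openai package and an OpenAI-compatible
# endpoint; prompts and model name are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a helpful assistant that specializes in Azure networking. "
    "Keep responses concise. If you're unsure, respond with 'I don't know'."
)

TEST_PROMPTS = [
    "How do I peer two virtual networks?",
    "Explain that like I'm new to Azure.",
    "What's the capital of France?",  # off-topic probe: does the agent stay in scope?
]

for user_prompt in TEST_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    print(f"> {user_prompt}\n{response.choices[0].message.content}\n")
```

The off-topic probe at the end is deliberate: it’s a cheap way to see whether the system prompt keeps the agent in scope or lets it wander.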
🔁 Recap
Here’s a quick rundown of what we covered:
- Prompt engineering helps guide your agent’s behavior through language.
- A good system prompt sets the tone, purpose, and guardrails for the agent.
- Test, tweak, and simplify—especially if responses seem inconsistent or off-target.
- You can use the Generate system prompt feature within the AI Toolkit to quickly generate instructions for your agent.
📺 Want to Go Deeper?
Check out my latest video on how to define your agent’s behavior—it’s part of the Build an Agent Series, where I walk through the building blocks of turning an idea into a working AI agent.
The Prompt Engineering Fundamentals chapter from our aka.ms/AITKGenAI curriculum covers all the essentials—prompt structure, common patterns, and ways to test and improve your outputs. It also includes exercises so you can get some hands-on practice.
👉 Explore the full curriculum: aka.ms/AITKGenAI
And for all your general AI and AI agent questions, join us in the Azure AI Foundry Discord! You can find me hanging out there answering your questions about the AI Toolkit. I’m looking forward to chatting with you there!
And remember, great agent behavior starts with great instructions—and now you’ve got the tools to write them.