
Introduction
You’ve probably noticed a theme in my recent posts: tackling challenges with AI-powered solutions. In my latest project, I needed a fast way to classify and categorize GitHub issues using a predefined set of tags. The tag data was there, but the connections between issues and tags weren’t. To bridge that gap, I combined Azure OpenAI Service, Prompty, and the GitHub API to automatically extract and assign the right labels.
By automating issue tagging, I was able to:
- Streamline contributor workflows with consistent, timely labels that simplify triage
- Improve repository hygiene by keeping issues well-organized, searchable, and easy to navigate
- Eliminate repetitive maintenance so the team can focus on community growth and developer empowerment
- Scale effortlessly as the project expands, turning manual chores into intelligent automation
Challenge: 46 issues, no tags
The Prompty repository currently hosts 46 relevant, but untagged, issues. To automate labeling, I first defined a complete tag taxonomy. Then I built a solution using:
- Prompty for prompt templating and function calling
- Azure OpenAI (gpt-4o-mini) to classify each issue
- Azure AI Search for retrieval-augmented context (RAG)
- Python to orchestrate the workflow and integrate with GitHub
By the end, you’ll have an autonomous agent that fetches open issues, matches them against your custom taxonomy, and applies labels back on GitHub.
Prerequisites:
- An Azure account with Azure AI Search and Azure OpenAI enabled
- Python and Prompty installed
Clone the repo and install dependencies:
pip install -r requirements.txt
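The scripts also expect credentials and endpoints to be available as environment variables, loaded from the .env file referenced later in Step 4. A minimal configuration block might look like the sketch below; python-dotenv and the GITHUB_TOKEN variable name are assumptions, while the search variables match the names used in the retrieval code later on.

# Illustrative configuration loading; python-dotenv and the GitHub variable name are assumptions.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file referenced in Step 4

SEARCH_SERVICE_ENDPOINT = os.environ["SEARCH_SERVICE_ENDPOINT"]  # Azure AI Search endpoint
SEARCH_INDEX_NAME = os.environ["SEARCH_INDEX_NAME"]              # index created in Step 2
SEARCH_API_KEY = os.environ["SEARCH_API_KEY"]                    # Azure AI Search key
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]                        # PAT used to read and label issues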
Step 1: Define the prompt template
We’ll use Prompty to structure our LLM instructions. If you haven’t yet, install the Prompty VS Code extension and refer to the Prompty docs to get started. Prompty combines:
- Tooling to configure and deploy models
- Runtime for executing prompts and function calls
- Specification (YAML) for defining prompts, inputs, and outputs
Our Prompty file is configured to use gpt-4o-mini, and below is our sample input:
sample:
  title: Including Image in System Message
  tags: ${file:tags.json}
  description: An error arises in the flow, coming up starting from the "complete" block. It seems like it is caused by placing a static image in the system prompt, since removing it causes the issue to go away. Please let me know if I can provide additional context.
The inputs are the tags file (a JSON object whose tags array holds name/description entries, loaded via ${file:tags.json} and surfaced through RAG), plus the issue title and description, which we fetch from GitHub whenever a new issue is posted. Next, in our Prompty file, we gave the LLM its instructions as follows:
system:
You are an intelligent GitHub issue tagging assistant. Available tags: ${inputs}
{% if tags.tags %}
## Available Tags
{% for tag in tags.tags %}
name: {{tag.name}}
description: {{tag.description}}
{% endfor %}
{% endif %}
Guidelines:
1. Only select tags that exactly match the provided list above
2. If no tags apply, return an empty array []
3. Return ONLY a valid JSON array of strings, nothing else
4. Do not explain your choices or add any other text
Use your understanding of the issue and refer to documentation at https://prompty.ai to match appropriate tags. Tags may refer to:
- Issue type (e.g., bug, enhancement, documentation)
- Tool or component (e.g., tool:cli, tracer:json-tracer)
- Technology or integration (e.g., integration:azure, runtime:python)
- Conceptual elements (e.g., asset:template-loading)
Return only a valid JSON array of the issue title, description and tags. If the issue does not fit in any of the categories, return an empty array with: ["No tags apply to this issue. Please review the issue and try again."]
Example:
Issue Title: "App crashes when running in Azure CLI"
Issue Body: "Running the generated code in Azure CLI throws a Python runtime error."
Tag List: ["bug", "tool:cli", "runtime:python", "integration:azure"]
Output: ["bug", "tool:cli", "runtime:python", "integration:azure"]
user:
Issue Title: {{title}}
Issue Description: {{description}}
Once the Prompty file was ready, I right-clicked on it and selected the option to generate Prompty code, which produced base Python code to start from instead of building everything from scratch.
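The generated scaffold is essentially a thin wrapper around prompty.execute. A minimal sketch of how it might be called is shown below; the file name and the hard-coded inputs are placeholders taken from the sample above, and prompty[azure] is assumed to be installed.

import prompty
import prompty.azure  # registers the Azure OpenAI invoker (pip install "prompty[azure]")

# Run the template with the issue fields and the tag taxonomy as inputs.
result = prompty.execute(
    "basic.prompty",  # hypothetical file name for the template shown above
    inputs={
        "title": "Including Image in System Message",
        "tags": {"tags": []},  # normally loaded from tags.json or Azure AI Search
        "description": "An error arises in the flow, starting from the complete block...",
    },
)
print(result)  # expected to be a JSON array of tag names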
Step 2: Enrich with context using Azure AI Search
To generate labels for our issues, I created a sample set of about 20 tags, each with a name and a description of what it covers. I started in Azure AI Foundry, where I uploaded the data and created an index; indexing typically takes about an hour to complete.
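For reference, the taxonomy referenced as ${file:tags.json} presumably follows the shape the template iterates over (a tags array of name/description entries). The entries below are purely illustrative, with made-up descriptions.

# Illustrative shape of the tag taxonomy; tag names come from the prompt's examples,
# descriptions are placeholders invented for this sketch.
tags = {
    "tags": [
        {"name": "bug", "description": "Something is not working as expected"},
        {"name": "tool:cli", "description": "Issues with the Prompty CLI tooling"},
        {"name": "runtime:python", "description": "Problems with the Python runtime"},
        {"name": "integration:azure", "description": "Azure OpenAI / Azure AI integration"},
    ]
}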
Next, I implemented a retrieval function:
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import QueryType

def query_azure_search(query_text):
    """Query Azure AI Search for relevant documents and tags."""
    search_client = SearchClient(
        endpoint=SEARCH_SERVICE_ENDPOINT,
        index_name=SEARCH_INDEX_NAME,
        credential=AzureKeyCredential(SEARCH_API_KEY)
    )

    # Perform the search; materialize the results so they can be iterated more than once
    results = list(search_client.search(
        search_text=query_text,
        query_type=QueryType.SIMPLE,
        top=5  # Retrieve top 5 results
    ))

    # Extract content and tags from results
    documents = [doc["content"] for doc in results]
    tags = [doc.get("tags", []) for doc in results]  # Assuming "tags" is a field in the index

    # Flatten and deduplicate tags
    unique_tags = list(set(tag for tag_list in tags for tag in tag_list))
    return documents, unique_tags
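As a quick sanity check, the retriever can be exercised directly with an issue title (a hypothetical usage example):

# Hypothetical call: search on an issue title to pull back related docs and candidate tags.
documents, candidate_tags = query_azure_search("Including Image in System Message")
print(candidate_tags)  # deduplicated tag names returned from the index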
Step 3: Orchestrate the Workflow
In addition to the RAG step, I added functions in the basic.py file to:
- fetch_github_issues: calls the GitHub REST API to list open issues and filters out any that already have labels.
- run_with_rag: for each selected issue, calls query_azure_search to append any retrieved documents and candidate tags, runs the prompt to tag the issue, and parses the model's JSON output into a list of labels
- label_issue: patches the issue to apply a list of labels.
- process_issues: fetches all unlabelled issues, runs the RAG pipeline to generate tags, and calls label_issue to apply them
- scheduler loop: runs periodically to check for new issues and apply labels (see the sketch below)
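The sketch below shows one way these pieces might fit together, assuming the configuration variables from the earlier setup sketch (GITHUB_TOKEN) and the query_azure_search function from Step 2 are in scope; the repository name, template file name, and polling interval are placeholders, not the post's exact values.

import json
import time
import requests
import prompty
import prompty.azure  # Azure OpenAI invoker, assumes pip install "prompty[azure]"

GITHUB_REPO = "microsoft/prompty"  # hypothetical owner/repo
HEADERS = {
    "Authorization": f"Bearer {GITHUB_TOKEN}",
    "Accept": "application/vnd.github+json",
}

def fetch_github_issues():
    """List open issues and keep only the ones that have no labels yet."""
    resp = requests.get(
        f"https://api.github.com/repos/{GITHUB_REPO}/issues",
        headers=HEADERS,
        params={"state": "open"},
    )
    resp.raise_for_status()
    # The issues endpoint also returns pull requests, so filter those out too.
    return [i for i in resp.json() if not i["labels"] and "pull_request" not in i]

def run_with_rag(title, description):
    """Retrieve context from Azure AI Search, then ask the model for labels."""
    documents, candidate_tags = query_azure_search(title)  # defined in Step 2
    result = prompty.execute(
        "basic.prompty",  # hypothetical template file name
        inputs={"title": title, "tags": {"tags": candidate_tags}, "description": description},
    )
    return json.loads(result)  # the prompt is instructed to return a JSON array of tag names

def label_issue(issue_number, labels):
    """Patch the issue to apply a list of labels."""
    resp = requests.patch(
        f"https://api.github.com/repos/{GITHUB_REPO}/issues/{issue_number}",
        headers=HEADERS,
        json={"labels": labels},
    )
    resp.raise_for_status()

def process_issues():
    """Fetch unlabelled issues, run the RAG pipeline, and apply the resulting tags."""
    for issue in fetch_github_issues():
        labels = run_with_rag(issue["title"], issue.get("body") or "")
        if labels:
            label_issue(issue["number"], labels)

# Scheduler loop: poll for new issues every five minutes (interval is arbitrary).
while True:
    process_issues()
    time.sleep(300)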
Step 4: Validate and Run
- Ensure all .env variables are set (API keys, endpoints, token).
- Install dependencies and execute using: python basic.py
- Create a new GitHub issue and watch as your agent assigns tags in real time.
- A short demo video is included below to illustrate the workflow.
Next Steps
- Migrate from PATs to a GitHub App for tighter security
- Create a multi-agent application and add an evaluator agent to review tags before they are published
- Integrate with GitHub Actions or Azure Pipelines for CI/CD
Conclusion and Resources
By combining Prompty, Azure AI Search, and Azure OpenAI, you can fully automate GitHub issue triage—improving consistency, saving time, and scaling effortlessly. Adapt this pattern to any classification task in your own workflows! You can learn more using the following resources:
- Prompty documentation to learn more about Prompty
- Agents for Beginners course to learn how you can build your own agent