We replaced the placeholder single-agent logic with a specialized multi-step pipeline:
- Analyzer agent: classifies intent, sentiment, urgency; decides if clarification/escalation may be needed.
- Policy agent: evaluates the allowed action strictly against the support handbook policy.
- Responder agent: writes the final customer-facing response in a concise, human style.
- Uses support-agent-python/data/support_handbook.md as grounding policy context.
- When details are missing, the app returns ClarificationRequested with a clear next step.
- The policy agent outputs approve or reject; reject forces the escalation path, approve allows the refund action flow.
- Escalations return EscalatedToHuman and present a suitable customer-facing response.
support-agent-python/app/agent.py
Added create_analyzer_agent, create_policy_agent, and create_responder_agent. Centralized Azure OpenAI client/env setup.
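A minimal sketch of how the three factories could be wired into a pipeline. The factory names come from app/agent.py; the `Agent` stand-in and the prompt text are illustrative assumptions, not the real Azure OpenAI wiring:

```python
from dataclasses import dataclass


# Illustrative stand-in: the real factories build Azure OpenAI backed agents;
# here an "agent" is just a name plus a system prompt for the sketch.
@dataclass
class Agent:
    name: str
    system_prompt: str


def create_analyzer_agent() -> Agent:
    # Classifies intent, sentiment, urgency; flags clarification/escalation needs.
    return Agent("analyzer", "Classify intent, sentiment, and urgency ...")


def create_policy_agent() -> Agent:
    # Evaluates the requested action strictly against the support handbook.
    return Agent("policy", "Approve or reject strictly per the handbook ...")


def create_responder_agent() -> Agent:
    # Writes the final, concise customer-facing reply.
    return Agent("responder", "Write a concise, human customer reply ...")


# The three agents run in this order for every request.
PIPELINE = [create_analyzer_agent(), create_policy_agent(), create_responder_agent()]
```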
support-agent-python/app/processor.py
Added mapping to SupportRequestResult action outcomes.
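The action outcomes seen in the test runs below can be sketched as an enum plus a mapping function. The enum values match the outcomes reported in this log; the mapping logic itself (`map_outcome` and its parameters) is a hypothetical reconstruction of the flow described above, not the actual processor.py code:

```python
from enum import Enum


class Action(Enum):
    # Action outcomes observed in the test runs below.
    REPLY_SENT = "ReplySent"
    CLARIFICATION_REQUESTED = "ClarificationRequested"
    REFUND_TICKET_CREATED = "RefundTicketCreated"
    ESCALATED_TO_HUMAN = "EscalatedToHuman"


def map_outcome(policy_decision: str, needs_clarification: bool, is_refund: bool) -> Action:
    # Hypothetical mapping mirroring the described flow: missing details ask for
    # clarification first; reject forces escalation; an approved refund intent
    # creates a refund ticket; everything else gets a direct reply.
    if needs_clarification:
        return Action.CLARIFICATION_REQUESTED
    if policy_decision == "reject":
        return Action.ESCALATED_TO_HUMAN
    if is_refund:
        return Action.REFUND_TICKET_CREATED
    return Action.REPLY_SENT
```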
support-agent-python/main.py
Kept clean console UX for multiline input and looped requests.
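The multiline console loop can be sketched as follows: read lines until a line containing only `---` (submit) or `quit` on its own first line (exit). This is an illustrative reading of the UX described here, not the literal main.py:

```python
def read_request(readline=input):
    """Collect lines until a '---' line submits them, or 'quit' exits.

    Returns the joined request text, or None when the user types 'quit'
    as the first line. (Sketch of the console UX, not the real main.py.)
    """
    lines = []
    while True:
        line = readline()
        if line.strip() == "quit" and not lines:
            return None  # exit signal
        if line.strip() == "---":
            return "\n".join(lines)  # submit what was typed so far
        lines.append(line)
```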
support-agent-python/.env
Set deployment name to llatin.
plan.md
uv installed and available.
Input: Basic plan API-access question.
- Expected behavior: policy-grounded answer (API is Premium-only).
- Observed: ReplySent with correct policy-aligned response.
Input: Suspected double-charge request.
- App requested one clarification.
- After follow-up and approval, app created refund path.
- Observed: RefundTicketCreated, with policy-aware explanation and next action.
Input: customer asks for manager + repeated issue.
- Expected behavior: escalate immediately.
- Observed: EscalatedToHuman with appropriate acknowledgment and escalation response.
Observed flow:
- Initial input was submitted as hi, how are you? --- with the --- terminator on the same line.
- The app treated this as a weak/unclear request and entered clarification mode.
- A full multi-line customer email was then pasted into the clarification prompt (which expects one short line).
Observed result:
- Classification: Intent=Question, Sentiment=Neutral, Urgency=Low
- Action: ReplySent
- Customer response: a generic greeting reply.
Lesson learned:
- Submit with --- on its own line.
- During clarification step, provide one short line only.
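One way the input handler could tolerate an inline terminator like the one in this session is to strip it and still treat the line as a complete submission. This is a hypothetical helper, not something currently in main.py:

```python
def split_inline_terminator(line: str):
    """Detect a '---' submit marker typed at the end of the same line.

    Returns (text_without_marker, terminated). Hypothetical hardening for
    the session above, where 'hi, how are you? ---' was sent as one line.
    """
    stripped = line.rstrip()
    if stripped.endswith("---"):
        return stripped[:-3].rstrip(), True
    return line, False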
Observed flow:
- Customer reported being charged after cancellation but initially skipped clarification details.
Observed result:
- Classification: Intent=Refund, Sentiment=Frustrated, Urgency=Medium
- Action: ClarificationRequested
- Next action text: wait for customer clarification, then re-run the request.
Lesson learned:
- If no clarification is provided, app safely avoids guessing and asks for details.
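The "ask, don't guess" behavior can be sketched as a guard that refuses to act on a refund without a concrete charge reference. The function name and the `charge_reference` field are illustrative assumptions about what the analyzer checks:

```python
def needs_clarification(intent: str, details: dict) -> bool:
    # Illustrative guard: a refund request without a concrete charge reference
    # cannot be acted on safely, so the app asks for details instead of guessing.
    if intent == "Refund" and not details.get("charge_reference"):
        return True
    return False
```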
Input pattern:
- Explicit manager request, repeated unresolved billing issue, high frustration.
Observed result:
- Classification: Intent=Complaint, Sentiment=Angry, Urgency=High
- Action: EscalatedToHuman
- Customer response acknowledges frustration and confirms escalation to senior support.
Lesson learned:
- Escalation policy from handbook is being applied correctly.
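The escalation rule exercised by this test can be sketched as a simple predicate. This is an illustrative reading of the observed behavior; the real rule lives in support_handbook.md and is evaluated by the policy agent:

```python
def should_escalate(intent: str, urgency: str, asked_for_manager: bool) -> bool:
    # Observed handbook behavior: an explicit manager request, or a high-urgency
    # complaint, goes straight to a human. (Sketch, not the handbook text.)
    return asked_for_manager or (intent == "Complaint" and urgency == "High")
```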
Input pattern:
- Prompt-injection style request asking to ignore policy, expose internal/company/customer data, and force an invalid refund outcome.
Observed result after hardening:
- Classification: Intent=Unclear, Sentiment=Neutral, Urgency=Medium
- Reasoning includes safety marker: content/safety filter triggered and handled gracefully.
- Action: EscalatedToHuman
- Customer response: explicit refusal for unsafe requests + redirect to legitimate account support.
- Most importantly: no crash/traceback in terminal.
Lesson learned:
- Safety behavior is now robust: hostile prompts are blocked and converted to a controlled support response.
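The "no crash, controlled response" hardening can be sketched as a wrapper that converts a content-filter rejection into an escalation instead of a traceback. The concrete exception type raised on a filter hit depends on the SDK in use, so this sketch matches on the error text; the `run_safely`/`escalate` names are illustrative:

```python
def run_safely(call, escalate):
    """Run a model call; convert content/safety filter errors to an escalation.

    `call` performs the model request; `escalate` builds the controlled
    EscalatedToHuman-style result. Other errors are re-raised unchanged.
    """
    try:
        return call()
    except Exception as exc:
        message = str(exc).lower()
        if "content" in message and "filter" in message:
            # Blocked by the safety filter: respond gracefully, no traceback.
            return escalate("content/safety filter triggered and handled gracefully")
        raise
```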
cd /Users/llatin/Geek-Academy-Spec-Driven-workshop/support-agent-python
source .venv/bin/activate
python main.py
Submit each request by typing --- on its own line.
Type quit on its own line to exit.