Lab 1 Session Summary — Customer Support Agentic App (Python)

Session Context

What We Implemented

1) Multi-agent collaboration flow

We replaced the placeholder single-agent logic with a specialized multi-step pipeline:
- Analyzer agent: classifies intent, sentiment, and urgency, and flags whether clarification or escalation may be needed.
- Policy agent: evaluates the allowed action strictly against the support-handbook policy.
- Responder agent: writes the customer-facing final response in a concise, human tone.
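The three-stage hand-off above can be sketched as plain Python functions. This is a minimal illustration only: all class and function names are hypothetical, and the real agents call an LLM rather than the keyword heuristics used here.

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    """Hypothetical analyzer output; the real app's types may differ."""
    intent: str
    sentiment: str
    urgency: str
    needs_clarification: bool
    needs_escalation: bool

def analyzer_agent(message: str) -> Analysis:
    # Placeholder keyword classification standing in for an LLM call.
    text = message.lower()
    return Analysis(
        intent="Refund" if "charge" in text or "refund" in text else "Question",
        sentiment="Frustrated" if "angry" in text else "Neutral",
        urgency="High" if "manager" in text else "Low",
        needs_clarification=len(message.split()) < 4,
        needs_escalation="manager" in text,
    )

def policy_agent(analysis: Analysis) -> str:
    # Maps the analysis to an allowed action, never inventing policy.
    if analysis.needs_escalation:
        return "EscalatedToHuman"
    if analysis.needs_clarification:
        return "ClarificationRequested"
    if analysis.intent == "Refund":
        return "RefundTicketCreated"
    return "ReplySent"

def responder_agent(action: str) -> str:
    # Turns the chosen action into a customer-facing reply.
    templates = {
        "EscalatedToHuman": "I've escalated this to senior support.",
        "ClarificationRequested": "Could you share one more detail?",
        "RefundTicketCreated": "I've opened a refund ticket for you.",
        "ReplySent": "Happy to help with your question.",
    }
    return templates[action]

def run_pipeline(message: str) -> tuple[str, str]:
    analysis = analyzer_agent(message)
    action = policy_agent(analysis)
    return action, responder_agent(action)
```

The key design point is that only the policy agent decides the action; the responder can phrase the reply but cannot change what was allowed.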

2) Policy-aware behavior (no policy invention)

3) One-time clarification flow (optional task)

4) Human-in-the-loop approval (optional task)

5) Escalation behavior (optional + required policy alignment)

Files Updated

Environment Status

End-to-End Validations Performed

Scenario A: Plan question (normal)

Input: Basic plan API-access question.
- Expected behavior: a policy-grounded answer (API access is Premium-only).
- Observed: ReplySent with correct policy-aligned response.

Scenario B: Refund/billing issue with missing details

Input: a suspected double-charge report.
- The app requested one clarification.
- After the follow-up and human approval, the app proceeded down the refund path.
- Observed: RefundTicketCreated, with a policy-aware explanation and next action.
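The approval step in this scenario can be gated on an explicit operator confirmation before any refund side effect occurs. A sketch under assumed names (the real approval hook in the app may look different):

```python
def approve(action: str, summary: str, ask=input) -> bool:
    """Human-in-the-loop gate: refund actions require explicit approval.

    `ask` defaults to console input but is injectable for testing.
    """
    if action != "RefundTicketCreated":
        return True  # only the sensitive action needs a human sign-off
    answer = ask(f"Approve '{action}' for: {summary}? [y/N] ")
    return answer.strip().lower() in ("y", "yes")
```

Defaulting to "no" on an empty answer keeps the gate fail-safe: a distracted operator pressing Enter does not create a refund ticket.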

Scenario C: Explicit escalation request

Input: customer asks for a manager and reports a repeated unresolved issue.
- Expected behavior: escalate immediately.
- Observed: EscalatedToHuman with appropriate acknowledgment and escalation response.
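The three scenarios exercise distinct terminal actions. Collecting them in one enum (the values are taken from the observed outputs above; the enum itself is an illustrative sketch, not necessarily how the app models them) makes the outcome space explicit:

```python
from enum import Enum

class SupportAction(Enum):
    """The four terminal actions observed across Scenarios A-C."""
    REPLY_SENT = "ReplySent"
    CLARIFICATION_REQUESTED = "ClarificationRequested"
    REFUND_TICKET_CREATED = "RefundTicketCreated"
    ESCALATED_TO_HUMAN = "EscalatedToHuman"
```

An enum like this lets the approval and logging code match on actions exhaustively instead of comparing raw strings.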

Real Session Examples (from @Local and @Local (2))

Example 1 — Greeting + clarification pitfall (from @Local)

Observed flow:
- The initial input was submitted as "hi, how are you? ---" on a single line, so the delimiter was not recognized as a terminator.
- The app treated this as a weak/unclear request and entered clarification mode.
- A full multi-line customer email was then pasted into the clarification prompt (which expects a single short line).

Observed result:
- Classification: Intent=Question, Sentiment=Neutral, Urgency=Low
- Action: ReplySent
- Customer response: a generic greeting reply.

Lesson learned:
- Submit with --- on its own line.
- During the clarification step, provide a single short line only.

Example 2 — Clarification requested correctly (from @Local)

Observed flow:
- Customer reported being charged after cancellation but initially omitted the clarifying details.

Observed result:
- Classification: Intent=Refund, Sentiment=Frustrated, Urgency=Medium
- Action: ClarificationRequested
- Next action text: wait for customer clarification, then re-run the request.

Lesson learned:
- If no clarification is provided, the app safely avoids guessing and asks for details instead.
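The one-time clarification behavior observed here can be sketched as follows. Both the heuristic in `is_unclear` and the function names are assumptions for illustration; the real app classifies with an LLM:

```python
from typing import Optional

def is_unclear(text: str) -> bool:
    # Hypothetical heuristic: a charge complaint with no amount or date
    # (i.e. no digits anywhere) is treated as lacking detail.
    return "charge" in text.lower() and not any(ch.isdigit() for ch in text)

def run_once(message: str, clarification: Optional[str] = None) -> str:
    """Unclear requests get exactly one ClarificationRequested action.

    The app never guesses: if details are missing and no clarification
    has been supplied yet, it stops and asks, then waits for a re-run.
    """
    full = message if clarification is None else f"{message}\n{clarification}"
    if is_unclear(full) and clarification is None:
        return "ClarificationRequested"  # wait for the customer, re-run later
    return "Proceed"
```

The "at most once" guarantee comes from passing the customer's follow-up back in on the second run, so the unclear branch cannot trigger twice.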

Example 3 — Escalation path works (from @Local and @Local (2))

Input pattern:
- Explicit manager request, repeated unresolved billing issue, high frustration.

Observed result:
- Classification: Intent=Complaint, Sentiment=Angry, Urgency=High
- Action: EscalatedToHuman
- Customer response acknowledges frustration and confirms escalation to senior support.

Lesson learned:
- The escalation policy from the handbook is being applied correctly.
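The input pattern in this example suggests the escalation rule fires on any of three triggers. A hedged rendering of that rule (the actual handbook wording and the app's predicate may differ):

```python
def should_escalate(sentiment: str, urgency: str,
                    asked_for_manager: bool, repeat_issue: bool) -> bool:
    """Hypothetical escalation predicate matching the observed behavior:
    an explicit manager request, a repeated unresolved issue, or
    high-urgency anger each escalate on their own."""
    return (
        asked_for_manager
        or repeat_issue
        or (sentiment == "Angry" and urgency == "High")
    )
```

Treating the triggers as independent (OR, not AND) matches the observation that an explicit manager request escalates immediately, without waiting for the other signals.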

Example 4 — Malicious/jailbreak prompt handled safely (from @Local (4))

Input pattern:
- Prompt-injection style request asking to ignore policy, expose internal/company/customer data, and force an invalid refund outcome.

Observed result after hardening:
- Classification: Intent=Unclear, Sentiment=Neutral, Urgency=Medium
- Reasoning includes a safety marker: the content/safety filter triggered and was handled gracefully.
- Action: EscalatedToHuman
- Customer response: an explicit refusal of the unsafe request plus a redirect to legitimate account support.
- Most importantly: no crash or traceback in the terminal.

Lesson learned:
- Safety behavior is now robust: hostile prompts are blocked and converted to a controlled support response.
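The "no crash" hardening described here can be achieved by catching the model's safety exception and converting it into a controlled escalation result. In this sketch, `ContentFilterError` is a hypothetical stand-in; the actual SDK's exception class will have a different name:

```python
class ContentFilterError(Exception):
    """Stand-in for the model SDK's content-filter exception."""

def safe_classify(message: str, model_call) -> dict:
    """Call the model, converting safety blocks into a policy-safe result
    instead of letting the exception crash the app."""
    try:
        return model_call(message)
    except ContentFilterError:
        # Hostile or injected prompts become a controlled support outcome.
        return {
            "intent": "Unclear",
            "sentiment": "Neutral",
            "urgency": "Medium",
            "action": "EscalatedToHuman",
            "note": "content/safety filter triggered and handled gracefully",
        }
```

The fallback dict mirrors the classification observed in this example, so downstream code handles a blocked prompt exactly like any other escalation.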

How To Run

```shell
cd /Users/llatin/Geek-Academy-Spec-Driven-workshop/support-agent-python
source .venv/bin/activate
python main.py
```

Input Tips (Important)

---

Outcome vs Lab 1 Expectations

Suggested Next Step (Lab 2 Readiness)