Case Study Banking March 2024

Gen-AI Debt Collection POC

genai azure-openai poc banking-analytics

Summary

What changed, in short

The POC showed that AI-assisted personalized messaging is technically feasible in this banking context, and it mapped the governance work a production version would still need: prompt-version control, audit logging, segment-level guardrails, and a templated fallback for low-confidence output.

Outcome-first metrics

Outcome signal

Demonstrated feasibility of AI-assisted personalized messaging without overclaiming scale

Focus

POC for personalized messaging

Industry: Banking

My role

POC design and delivery, scope decisions, and honest scoping of the gap between feasibility and production readiness

Tools: Azure OpenAI, Python, Azure Databricks

Problem

What needed to change

Debt collection messaging was largely template-driven, which limited how well it could adapt to borrower context across segments. Any AI-assisted approach had to work inside banking governance constraints — limited data movement, strict auditability, and no assumption that model output reaches a customer without human review.

Context / Constraints

What shaped the work

The POC sat inside a regulated banking environment, so anything promising still had a long path to production. The real question was not "does it work?" but "does it work within the constraints we actually have, and what would it cost to harden?"

  • Starting point: manual, template-driven messaging with no personalization

Approach

How the work was handled

Built a POC on Azure OpenAI and banking-domain data, using Python and Azure Databricks for the data preparation layer. Kept the pipeline small enough to reason about end to end: a controlled prompt layer, a segment-aware context window, and an explicit human-review gate before any generated output left the environment.
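The controlled prompt layer described above can be sketched as a small prompt-assembly step. This is a hypothetical illustration: the segment names, fields, and output contract are assumptions, not the bank's actual schema, and the pinned prompt version is shown to make the audit trail concrete.

```python
from dataclasses import dataclass

# Pinned so every generated output can be traced back to a prompt version.
SYSTEM_PROMPT_VERSION = "v0.3"

SYSTEM_PROMPT = (
    "You draft debt-collection reminder messages for human review. "
    "Stay factual, neutral in tone, and under 80 words. "
    "Return only the draft message body."
)

@dataclass
class SegmentContext:
    """Segment-level context only -- no direct customer identifiers."""
    segment: str           # e.g. "early_stage_30dpd" (illustrative)
    days_past_due: int
    preferred_channel: str

def build_prompt(ctx: SegmentContext) -> list[dict]:
    """Assemble the bounded, segment-aware prompt messages."""
    user_content = (
        f"Segment: {ctx.segment}\n"
        f"Days past due: {ctx.days_past_due}\n"
        f"Channel: {ctx.preferred_channel}\n"
        "Draft one reminder message."
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_content},
    ]

messages = build_prompt(SegmentContext("early_stage_30dpd", 35, "email"))
```

Keeping the prompt this small is what makes it reviewable end to end: a compliance reviewer can read the whole contract in one sitting, and any change to it is a versioned, auditable event rather than a silent drift.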

Outcome

What changed in practice

The POC demonstrated that personalized messaging is technically feasible in this banking context, but it also surfaced the governance and review work a production version would need: prompt-version control, audit logging of inputs and outputs, segment-level guardrails, and a fallback to templated messaging when model confidence is low.

  • Built a proof of concept using Azure OpenAI and banking data.
  • Explored more personalized debt collection messaging strategies.
  • Presented the work as feasibility exploration rather than production transformation.

My Role

Where I contributed most

POC design and delivery, scope decisions, and honest scoping of the gap between feasibility and production readiness

Trade-offs / Lessons

Choices, constraints, and what mattered

  • Chose Azure OpenAI and Python to stay within the bank's existing Azure analytics environment rather than introducing new infrastructure.
  • Kept scope deliberately controlled: a POC with an explicit boundary, not a production rollout.
  • Focused on exploring personalized messaging strategy, not end-to-end automation.

Additional Notes

Extra implementation detail

What this POC actually proved

Feasibility POCs in regulated environments are worth more when they come back with an honest map of the gap, not a polished demo. The goal here was to test whether a segment-aware prompt could produce usable draft messaging, and to surface the production-readiness cost before anyone committed to scale.

How the workflow was shaped

  • Data preparation ran in Azure Databricks with domain-specific features only — no general customer PII in the prompt.
  • The prompt layer was deliberately small: a controlled system prompt, a segment-aware context window, and a bounded output format.
  • Every generated message passed through a human-review step. Nothing generated was delivered to a customer directly.
  • The prompt layer was isolated from the data layer so each could be audited, versioned, or swapped independently.
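The "domain-specific features only, no general customer PII" rule from the workflow above can be enforced mechanically with a feature allowlist at the boundary between the data layer and the prompt layer. The allowlist contents and record fields below are hypothetical:

```python
# Only these domain features may cross into the prompt layer (illustrative set).
ALLOWED_FEATURES = {"segment", "days_past_due", "balance_band", "prior_contact_count"}

def to_prompt_features(record: dict) -> dict:
    """Keep only allowlisted domain features; everything else (names,
    account numbers, contact details) is dropped before it can reach a prompt."""
    return {k: v for k, v in record.items() if k in ALLOWED_FEATURES}

raw = {
    "customer_name": "J. Doe",        # PII: must never reach the prompt
    "account_number": "XX00FAKE123",  # PII: must never reach the prompt
    "segment": "early_stage_30dpd",
    "days_past_due": 35,
    "balance_band": "1k-5k",
}
safe = to_prompt_features(raw)
```

An allowlist is the safer default here: a denylist fails open when a new PII column appears upstream, while an allowlist fails closed.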

What a production version would still need

The POC deliverable included, in writing, what would be required before anything shipped: prompt-version control, audit logging of every input and output, segment-level guardrails, and a fallback to templated messaging whenever the model returned low-confidence output. Naming those requirements explicitly is the point of a feasibility POC.
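Two of those requirements, audit logging and the low-confidence templated fallback, can be sketched together. The threshold, template text, and the confidence signal itself are assumptions for illustration; hashing the payloads is one way to keep the audit log itself free of message content and PII:

```python
import hashlib
import time

CONFIDENCE_THRESHOLD = 0.7  # illustrative cutoff, not a validated value
FALLBACK_TEMPLATE = "Your account is past due. Please contact us to arrange payment."

def select_message(generated: str, confidence: float) -> tuple[str, str]:
    """Return (message, source): the model draft if confident enough,
    otherwise the approved template."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return generated, "model"
    return FALLBACK_TEMPLATE, "template"

def audit_record(prompt: str, output: str, source: str, prompt_version: str) -> dict:
    """Audit entry holding hashes rather than raw text, so the log proves
    what was sent to and from the model without storing the content itself."""
    return {
        "ts": time.time(),
        "prompt_version": prompt_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "source": source,  # "model" or "template"
    }

msg, source = select_message("Draft reminder text...", confidence=0.42)
entry = audit_record("assembled prompt text", msg, source, prompt_version="v0.3")
```

Logging which path produced each message also gives the team a free metric: the fallback rate per segment is an early signal of where the model is least reliable.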

Highlights

  • Azure OpenAI POC built inside banking governance constraints rather than a sandbox.
  • Human-review gate between generation and any outgoing message — feasibility, not autonomy.
  • Deliverable included an honest production-readiness gap assessment, not just a working demo.
  • Scoped clearly as a POC with specific next-step requirements documented for the team.

Contact

Need similar help with reporting, model quality, or BI delivery?

Start with the current constraint, what needs to change, and where delivery risk is showing up now.