Frequently Asked Questions
**What does this skill do?**
The skill compares addresses found across borrower and entity documents against the anchor address on the loan application. It normalizes superficial formatting differences and highlights mismatches, inconsistencies, and patterns that may carry risk.
The output is a structured set of findings for human review, not a decision or recommendation.
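The normalization step described above can be sketched roughly as follows. This is an illustrative example only, not the skill's actual logic: the `normalize` function and the abbreviation table are hypothetical, and a production implementation would handle far more variation.

```python
import re

# Hypothetical abbreviation table for illustration; a real normalizer
# would cover many more street and unit designators.
ABBREVIATIONS = {"ST": "STREET", "AVE": "AVENUE", "APT": "UNIT", "STE": "UNIT"}

def normalize(address: str) -> str:
    """Uppercase, strip punctuation, and expand common abbreviations."""
    tokens = re.sub(r"[^\w\s]", " ", address.upper()).split()
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

# Two superficially different renderings of the same address compare equal:
a = normalize("123 Main St., Apt 4B")
b = normalize("123 MAIN STREET UNIT 4B")
print(a == b)  # True
```

Comparing normalized forms is what lets the skill ignore cosmetic differences (casing, punctuation, abbreviation style) while still surfacing substantive mismatches.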
**What kinds of issues does the skill flag?**
The skill flags issues such as:
- Addresses that don't align with the application anchor address
- Inconsistent use of unit numbers, suites, or secondary identifiers
- Shared addresses across borrowers, entities, or related documents
- Address patterns that may warrant closer review for occupancy or related-party risk
Not every flag indicates a problem — many simply surface areas that deserve a second look.
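One of the flags above, shared addresses across borrowers or entities, can be sketched as a simple grouping over normalized addresses. The data shape and field names here are hypothetical, purely to illustrate the idea:

```python
from collections import defaultdict

# Hypothetical document records; in practice these would come from the
# skill's document-extraction step, already normalized.
documents = [
    {"party": "Borrower A", "address": "500 OAK AVENUE"},
    {"party": "Borrower B", "address": "500 OAK AVENUE"},
    {"party": "Entity LLC", "address": "77 PINE STREET"},
]

# Group parties by address, then flag any address used by more than one party.
by_address = defaultdict(set)
for doc in documents:
    by_address[doc["address"]].add(doc["party"])

shared = {addr: parties for addr, parties in by_address.items() if len(parties) > 1}
print(shared)  # {'500 OAK AVENUE': {'Borrower A', 'Borrower B'}}
```

As the FAQ notes, such a flag is informational: a shared address may be entirely legitimate (co-borrowers at one residence) and requires human interpretation.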
**Does the skill make credit, approval, or fraud decisions?**
No. This skill does not make credit, approval, or fraud determinations, and it does not assign intent or fault.
It supports human judgment by surfacing address-related findings early, so underwriters and risk teams can decide how — or whether — to act.
**How accurate is it?**
The skill is designed to be consistent and thorough in comparing addresses across documents, which helps reduce fatigue-related misses in repetitive review tasks.
Like any automated review, it can make mistakes — especially with ambiguous, poorly scanned, or unusual documents. That's why all findings are intended to be sense-checked by a human, particularly when they carry material risk.
**Can it identify proxy or mail-drop addresses?**
The skill can surface address patterns that may warrant closer review, including the use of shared, proxy, or non-standard addresses.
Whether a specific address represents a mailbox, proxy location, or legitimate exception requires human judgment and often additional context outside the documents themselves.
**What if a flag seems wrong?**
Flags are informational, not directives. Underwriters remain in control of interpretation and next steps.
If a flag appears inconsistent with the rest of the file or the known context, it can be dismissed just like any other review artifact. The skill is designed to support judgment, not override it.
**Is the skill suitable for production use?**
Yes, when implemented as a purpose-built, governed skill within underwriting workflows. In that form, it can be combined with document processing, rules, and human review checkpoints.
The GPT or Gemini versions are intended for exploration and validation, not for production use.