Article

Explainable AI (XAI) for Credit Scoring Models in FinTech: SHAP-Value Interpretation within Digital Lending Ecosystems

Authors
  • Elvira Rantelabi
  • Christiano Hernando

Abstract

The rapid adoption of machine learning–driven credit scoring in financial technology (FinTech) has substantially improved efficiency and scalability in digital lending, yet it has simultaneously intensified concerns regarding model transparency, accountability, and regulatory compliance. This study investigates the integration of Explainable Artificial Intelligence (XAI) into credit scoring systems, with a specific focus on SHapley Additive exPlanations (SHAP) as a mechanism for interpreting automated lending decisions. Using empirically grounded data that emulate real-world digital lending environments, multiple predictive models are evaluated, including Logistic Regression, Random Forest, and Gradient Boosting. The results demonstrate that Logistic Regression achieves competitive discriminatory performance, with an AUC-ROC of approximately 0.72, while maintaining superior interpretability compared to more complex ensemble models. Global explainability analysis reveals that a concentrated set of economically meaningful variables, namely income, credit utilization, and credit history length, accounts for the majority of model-driven credit risk assessments. These features consistently dominate SHAP-based importance rankings, indicating strong alignment between machine learning outputs and established credit risk theory. Local explainability results further show that individual credit decisions can be decomposed into intuitive, feature-level contributions, enabling clear justification of approval and rejection outcomes at the borrower level. Empirical evidence also indicates that explanation patterns remain stable across borrower segments differentiated by income level, suggesting structural robustness and reduced risk of segment-specific bias.
From an operational and regulatory perspective, the findings confirm that embedding explainability directly into the credit decision pipeline enhances governance, auditability, and customer communication without materially compromising predictive performance. Overall, this study provides empirical support for positioning XAI as a functional requirement in modern digital lending systems, demonstrating that transparent and accountable credit scoring models can effectively balance analytical performance, ethical responsibility, and regulatory readiness in FinTech ecosystems.
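The local decomposition described above can be sketched in code. For a linear model such as Logistic Regression, SHAP values (assuming feature independence) have a closed form in log-odds space: the contribution of feature j is its coefficient times the feature's deviation from its mean. The sketch below illustrates this on synthetic data; the feature names (income, utilization, history length) and all numbers are illustrative assumptions, not the authors' dataset or results.

```python
# Minimal sketch of SHAP-style local attribution for a logistic
# regression credit model. For a linear model with independent
# features, the exact SHAP value of feature j in log-odds space is
#   phi_j = w_j * (x_j - E[x_j]).
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),  # income (illustrative)
    rng.uniform(0, 1, n),           # credit utilization
    rng.uniform(0, 30, n),          # credit history length (years)
])
# Synthetic default labels loosely tied to the features
raw = -0.00005 * X[:, 0] + 3.0 * X[:, 1] - 0.05 * X[:, 2]
y = (raw + rng.normal(0, 1, n) > np.median(raw)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def linear_shap(model, X_background, x):
    """Exact SHAP values for a linear model with independent features:
    phi_j = w_j * (x_j - mean(x_j)), expressed in log-odds space."""
    return model.coef_[0] * (x - X_background.mean(axis=0))

# Feature-level contributions for one applicant
phi = linear_shap(model, X, X[0])

# Local additivity check: base log-odds + sum of contributions
# reconstructs the applicant's predicted log-odds exactly.
base_logit = model.intercept_[0] + model.coef_[0] @ X.mean(axis=0)
applicant_logit = model.intercept_[0] + model.coef_[0] @ X[0]
assert np.isclose(base_logit + phi.sum(), applicant_logit)
```

The additivity property at the end is what makes borrower-level justification possible: each approval or rejection can be stated as a baseline plus a signed, feature-level contribution for income, utilization, and history length.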

Keywords: Explainable Artificial Intelligence, Credit Scoring, Financial Technology, Digital Lending, SHAP Values, Model Transparency, Regulatory Compliance

How to Cite:

Rantelabi, E. & Hernando, C., (2025) “Explainable AI (XAI) for Credit Scoring Models in FinTech: SHAP-Value Interpretation within Digital Lending Ecosystems”, FinTech Innovation Journal 1(4), 325-344. doi: https://doi.org/10.63913/ftij.v1i4.85


Published on
2025-11-01

Peer Reviewed