Working group 2 - Transparent versus Black Box Decision-Support Models in the Financial Industry

From EU COST Fin-AI

Working group members

For working group membership, please see here.

Current Activities

WG2 is currently preparing a review of the existing literature on AI (including machine learning) approaches as they are used in the finance industry and identifying the most important applications.

Due to the broadness of the topic, WG2 is preparing separate reviews on the following sub-topics:

  • Credit risk modeling
  • Asset pricing (Time-series & Cross-sectional Predictability)
  • Market risk analysis
  • Risk management
  • Portfolio choice/construction and performance evaluation
  • Sentiment/Textual analysis
  • Business process modeling

If you want to contribute, please fill in your name and email contact in the following spreadsheet under your topic of interest.

Scoping Review NLP Toolkit

To help with the review, we have prepared an NLP toolkit. The demo input file and results from the toolkit are available at the link.

The toolkit will collect articles from several databases and prepare initial meta-statistics for your convenience.

If you have any questions regarding the toolkit, or if you would like to send an input file for processing, please contact Eftim Zdravevski {eftim AT finki.ukim.mk} or Petre Lameski {lameski AT finki.ukim.mk}.
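
As an illustration of the kind of meta-statistics involved, the following minimal sketch tabulates a hypothetical CSV export of retrieved articles by year, database and keyword; the toolkit's actual input and output formats may differ.

```python
# Minimal sketch of the kind of meta-statistics a scoping review relies on,
# computed from a hypothetical CSV export of retrieved articles.
# The file name and columns (year, database, keywords) are assumptions.
import pandas as pd

articles = pd.read_csv("scoping_review_export.csv")

# Articles retrieved per publication year and per bibliographic database
print(articles.groupby("year").size())
print(articles.groupby("database").size())

# Most frequent author keywords (assumed to be semicolon-separated)
keyword_counts = (
    articles["keywords"]
    .dropna()
    .str.split(";")
    .explode()
    .str.strip()
    .str.lower()
    .value_counts()
)
print(keyword_counts.head(20))
```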

Motivation

Regulators do not accept non-transparent “black box” models developed for any aspect of risk exposure. For example, some AI approaches – typically based on machine learning techniques – have not yet received full acceptance by regulators even though they are successfully applied internally by banks. The resulting incentive towards model simplicity, however, bears some risks: 1) overly simplified models might not apply to the evaluation of some of the more complex modern financial products, creating an indirect and perhaps unintended barrier to financial innovation; 2) regulators and the public relying on overly simplified models might be left with inferior information about true risk exposures, including systemic risks. Additionally, while AI and machine learning tools hold the potential to improve risk management, they have been deployed only recently and therefore remain untested at addressing risk under shifting financial conditions.

A serious investment in transparent, interpretable and explainable AI in Finance is therefore urgently needed. Such research would encourage regulators to consider and apply more advanced AI-based models. Consequently, explainable artificial intelligence (XAI) is an emergent and very important research area. It not only aims at providing a rationale for model selection but also creates stability in model formulation, an important requirement for trust in models (Došilović et al. 2018; Biran et al. 2018).

Description of the Challenge (Main Aim)

Regulators need to ensure the transparency of the rules and criteria used to judge the admissibility of decision algorithms employed by financial institutions, in order to avoid possible negative impacts on the industry such as discrimination among market players. It is therefore important that regulators and policy-makers have conceptual tools and research at their disposal to make quick and well-motivated decisions on how to regulate the use of data science techniques.

Additionally, while AI and ML tools hold the potential to improve risk management, their recent deployment means that they remain untested at addressing risk under shifting financial conditions. Moreover, for more novel asset classes, such as those comprising exposure to crypto-assets, the lack of long time series compounds the difficulty of understanding how a given model performs. It is thus important to develop methodologies for making inferences on model performance in unstable environments and in the absence of long time series (e.g., along the lines of Athey and Kuang (2018)).
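
As one way to probe model stability in such settings, the minimal sketch below evaluates a forecasting model over successive time-ordered folds so that the dispersion of errors across sub-periods becomes visible; the dataset, columns and model are hypothetical placeholders rather than a methodology prescribed by the Action.

```python
# Minimal sketch: walk-forward (time-ordered) evaluation of a forecasting model,
# to check how predictive performance varies across shifting sub-periods.
# The data file, column names and model choice are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit

data = pd.read_csv("asset_returns.csv", parse_dates=["date"], index_col="date")
X = data.drop(columns=["target_return"])
y = data["target_return"]

fold_errors = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X.iloc[train_idx], y.iloc[train_idx])   # expanding training window
    pred = model.predict(X.iloc[test_idx])
    fold_errors.append(mean_squared_error(y.iloc[test_idx], pred))

# Wide dispersion of errors across folds signals that performance is unstable
# over time, something a single in-sample fit on a short series can easily hide.
print(pd.Series(fold_errors, name="fold_mse").describe())
```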

Another point of concern is that black-box models can (inadvertently or otherwise) introduce biases into decision making within the financial industry that can have important discriminatory effects, as stressed by Kusner et al. (2017). For example, credit scoring models might discriminate based on race and socio-cultural characteristics that are correlated with, but have no direct causal link to, individuals’ creditworthiness.
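
A simple group-level diagnostic can already make such effects visible. The minimal sketch below compares approval rates across a protected attribute in a hypothetical scored applicant pool; it is only a descriptive check, much weaker than the counterfactual fairness framework of Kusner et al. (2017).

```python
# Minimal sketch of a group-level disparate-impact check on credit-scoring output.
# This is only a descriptive diagnostic, not the counterfactual fairness analysis
# of Kusner et al. (2017); the file and column names are hypothetical.
import pandas as pd

scored = pd.read_csv("scored_applicants.csv")
scored["approved"] = scored["score"] >= 0.5   # assumed decision threshold

# Approval rate by protected group (e.g., a demographic attribute)
rates = scored.groupby("protected_group")["approved"].mean()
print(rates)

# "Four-fifths" rule of thumb: flag groups approved at under 80% of the best rate
if (rates / rates.max()).min() < 0.8:
    print("Potential disparate impact: check for correlated proxy features.")
```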

Progress beyond the state-of-the-art

With regard to all three challenges faced by WG2, key insights will come from research on the nexus between causality and prediction, which is currently being explored in pioneering literature at the crossroads of econometrics and data science, such as the work of Victor Chernozhukov (e.g., Belloni et al. (2017)) and of Susan Athey and Guido Imbens (e.g., Athey (2017), Athey and Imbens (2019)). The Action will build on and expand this line of research, leveraging the multidisciplinary nature of our network (economists and financial economists working alongside applied mathematicians, statisticians and computer scientists).
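
For concreteness, the following minimal sketch illustrates cross-fitted "partialling out" in the spirit of the double/debiased machine learning literature associated with these authors; it is not their code, and the dataset and variable names are hypothetical.

```python
# Minimal sketch of cross-fitted "partialling out" in the spirit of
# double/debiased machine learning: estimate the effect of a variable D on an
# outcome Y while controlling for many covariates X with flexible ML models.
# The dataset and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

data = pd.read_csv("panel.csv")
Y = data["outcome"].to_numpy(dtype=float)
D = data["treatment"].to_numpy(dtype=float)
X = data.drop(columns=["outcome", "treatment"]).to_numpy()

y_res = np.empty_like(Y)
d_res = np.empty_like(D)

for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m_y = RandomForestRegressor(random_state=0).fit(X[train], Y[train])
    m_d = RandomForestRegressor(random_state=0).fit(X[train], D[train])
    y_res[test] = Y[test] - m_y.predict(X[test])   # residualise the outcome
    d_res[test] = D[test] - m_d.predict(X[test])   # residualise the "treatment"

theta = d_res @ y_res / (d_res @ d_res)            # final-stage OLS slope
print(f"Estimated effect of D on Y: {theta:.4f}")
```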

During this Action, our working group will develop prototypes demonstrating how quantitative methods can improve the transparency of the described “black box” models. The working group will also publish policy papers suggesting new regulations and guidelines for the industry. Our objective is to lower, to the extent possible, the barriers to using more advanced methods.

In addition, our work will address the issues of limited data and the small-sample problems that arise when the events of interest occur infrequently (e.g., defaults, fraud), providing solutions that augment existing methods used in the financial industry. The WG will employ methods drawn from econometrics and statistics to transparently quantify and, to the extent possible, alleviate the impact of this problem on inference and prediction for financial decision making. This can be done, for example, by explicitly modeling the probability of data unavailability (e.g., using penalized logistic regressions and/or censored regressions), or by using estimation methods that allow for missing data (e.g., unbalanced panel data models).
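
As a small illustration of the penalized-logistic-regression idea applied to an infrequent event such as default, the sketch below fits an L1-penalised, class-weighted logistic regression; the dataset and column names are hypothetical placeholders.

```python
# Minimal sketch: L1-penalised, class-weighted logistic regression for a
# rare-event problem such as default prediction. Dataset and column names
# are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

loans = pd.read_csv("loan_level_data.csv")
X = loans.drop(columns=["default_flag"])
y = loans["default_flag"]          # 1 = default (rare), 0 = no default

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# class_weight="balanced" re-weights the rare class; the L1 penalty keeps the
# fitted model sparse and therefore easier to inspect.
clf = LogisticRegression(
    penalty="l1", solver="liblinear", class_weight="balanced", C=0.1
).fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print("Average precision:", average_precision_score(y_te, scores))
print("Non-zero coefficients:", int((clf.coef_ != 0).sum()))
```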

Objectives

The main objectives of the working group are:

  • The development of conceptual and methodological tools for establishing when black-box models are admissible and, to the extent possible, making them more transparent and/or replacing them with interpretable and explainable models. This will require (i) the classification of algorithms from a range of disciplinary domains (especially ML and econometrics) according to the predictability of the variables being modeled/forecast, (ii) the identification of methods for mapping the results of black-box models to explainable and interpretable ones, at least ex post (one possible approach is sketched after this list), and (iii) a better understanding of the conceptual and empirical nexus between the identification of causality within models and the interpretability/explainability of the models.
  • Establishing working relationships with regulators and practitioners’ communities, to receive essential input on how to pursue the previous objective and to share the results of the investigation.
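
One way objective (ii) above could be realised ex post is with a global surrogate: an interpretable model fitted to the predictions of the black box. The minimal sketch below trains a shallow decision tree to mimic a black-box classifier and reports its fidelity; the models and data are illustrative placeholders, not a method the working group has committed to.

```python
# Minimal sketch of an ex-post mapping from a black-box model to an interpretable
# one: fit a shallow decision tree (a "global surrogate") to the black-box
# predictions and report how faithfully it reproduces them.
# The black-box model and the data are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# Shallow tree trained to mimic the black-box decisions, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

fidelity = np.mean(surrogate.predict(X) == bb_pred)   # agreement with the black box
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))                         # human-readable decision rules
```
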
Minutes of meetings

The minutes of all WG2 meetings are available at the link.