Sequential deep learning for credit risk monitoring using tabular financial data

annotation 





Machine learning plays an important role in preventing financial loss in the banking industry. Perhaps the most pressing forecasting challenge is assessing credit risk (the risk of debt default). Such risks can lead to losses of billions of dollars annually. Today, most of the benefits of machine learning in predicting credit risk are due to gradient enhancement decision tree models. However, these benefits start to diminish if not supported by new data sources and / or high-tech, flexible features. In this article, we present our attempts to create a new approach to credit risk assessment using deep learning that does not involve complex monitoring, does not rely on new model inputs.We propose new methods for fetching credit card transactions for use with deep recurrent and causal convolutional neural networks that use temporal sequences of financial data, without special resource requirements. We show that our sequential approach to deep learning using a temporary convolutional network has outperformed the reference inconsistent tree model, achieving significant financial savings and early detection of credit risk. We also demonstrate the potential of our approach for use in a production environment, where the proposed sampling technique allows sequences to be stored efficiently in memory, using them for rapid online learning and production.that use time sequences of financial data, without special resource requirements. We show that our sequential approach to deep learning using a temporary convolutional network has outperformed the reference inconsistent tree model, achieving significant financial savings and early detection of credit risk. We also demonstrate the potential of our approach for use in a production environment, where the proposed sampling technique allows sequences to be stored efficiently in memory, using them for rapid online training and production.that use time series of financial data, without special resource requirements. We show that our sequential approach to deep learning using a temporary convolutional network has outperformed the reference inconsistent tree model, achieving significant financial savings and early detection of credit risk. We also demonstrate the potential of our approach for use in a production environment, where the proposed sampling technique allows sequences to be stored efficiently in memory, using them for rapid online learning and production.having achieved significant financial savings and early detection of credit risk. We also demonstrate the potential of our approach for use in a production environment, where the proposed sampling technique allows sequences to be stored efficiently in memory, using them for rapid online learning and production.having achieved significant financial savings and early detection of credit risk. We also demonstrate the potential of our approach for use in a production environment, where the proposed sampling technique allows sequences to be stored efficiently in memory, using them for rapid online training and production. 





KEYWORDS credit risk, tabular data, credit card transactions, recurrent neural networks, temporal convolutional networks 





1.   





  ,    ,   ,     (, ). ,   [24]. 





(GBDTs), , [10]. ,          .  , ,      , , . . -,   , . -,    . , - ( ) -  , , (, [6]). 





, ,    . ,  GBDT, ,    (TCN)  . , , . - . 





        [9, 23],   [3, 26] [1,19]. ,    , (RNN) TCN, . , ,      .  , , ,  « » ,  / . ,  ,     -.     - . 





 , , . , ( )   -. ,   . , , - . 





2.  





2.1.   





, . , () 1,5   . 





2.1.1. .   . , , ,    ()  , 127    . : ( , . .) ( , . .).  ,   . - . 





2.1.2.  .   15 , 2016 2017 .     , 45   . , ( ) . 





      6   ( )  , 2017 2018 ,     . , . 2%   . , - - , - , .





2.1.3.  . ,  , . , .   : 





  •    ( / ) 10 ,   ; 





  •  «»     , , , ( GBDT), , , : (s1, s2, . . . , sk ), si > sj , for all i > j,   : 





x - , xˆ-  ( . . 1); 





•    - [5]; 





•  ()  , [18]   :  





  1.     ; 





  2.     1, : 





G - , | G | - , k – -, ( k = 30), y - .  





• ,    . 





Figure 1: Compacting using GBDT partitioning.
1: GBDT.

2.2.   





   . . ,     ,    - . , , ,     . 





 – . -, , - . -, , 11 .   ( . 2).    ,       (  10 ). 





Figure 2: Sequence of selected transactions.
2: .

     . ,    – , , . , . 





     12 , . . 





2.3.   





, ,  GBDT .  GBDT , . ,  (  TabNet) . 





2.3.1 . (MLP) . MLP /   .   "" , MLP [13]. , , , . 





2.3.2. TabNet. TabNet -   ,  [2].   ,    .  TabNet  : . , .    , ,    , (. 3). , , .





Figure 3: Architecture of TabNet decision steps.
3: TabNet.

2.3.3.  . , GBDT: . RNNS , . 





 long short-term memory (LSTM) RNN [14],   RNN.   ,   . RNN,  zoneout [17]. , zoneout  RNN. , , zoneout   .  zoneout  , RNN . LSTM  zoneout  .4. 





Figure 4: A single-layer LSTM architecture for predicting credit risk (i.e., default on credit card debt) based on 12-month transaction data.
4: LSTM (. . ) 12- .

2.3.4   . RNN , , . ,  RNN  [21]. , [4, 7, 11, 15, 20]. , , ,  RNNs. , . , t t (. . 5) [25]. 





, , ,    (TCN) [4]. , TCNS  (), , ,    [27]. TCNS ,    , (. . 6), [12]. TCN  , .  deep TCN . 7. 





Figure 6: TCN block.
6: TCN.

2.4.   





2.4.1. . , /, ,  SigOpt [8]. , - . , 5- MLP, 3- LSTM ( 2 LSTM ) 6- TCN ( 2 TCN 4 ). ( ~0,2). 





2.4.2. . [16] [22]. - ( = 0,9, 0,999), , . - , . 





2.4.3.  . , . , 512 0,8, 1e-4, , . 





2.4.4 . .   ,   .   ( = (2*AUROC) - 1). , , - . - . 





3.  





GBDT 1.    (  15 000 ) , . 





Table 1: Effectiveness of individual and ensemble models for the general population and the highly indebted population (over $ 15,000)
1: ( 15 . .)

, LSTM TCN,   GBDT. MLP  TabNet  . , , LSTM TCN GBDT, , , . ,   , , , (  ). 





Table 2: Performance of selected models divided by default date.
2: , .

3.1.  .  , , . ,    , . 





LSTM TCN GBDT ( ,  , ) 2. ,  GBDT, -, ( 6 ),  , , ( 7-12 ) ( 13-18 ), , . 





3.2.  .  ( ). 8 , , , , , « » ,  . LSTM, TCN 12 , TCN LSTM .    , ,    . , 2016 . 





Figure 8: Performance increases as the number of monthly transactions in sequence increases.
8: .

3.3. -.  , , LSTM TCN -. , , 2017 . . 9, . , , . , . 





Figure 9: Online training (i.e., progressively adjusting the weights using the input data) gave better performance results compared to reinitializing the weights with small random values ​​before training.
9: - (. . ) .

3.4.  .  TCN LSTM ,   . NVDIA Tesla V100, ~30 , TCN - 512 , ~50   LSTM. , TCN , , LSTM. 





, LSTM , TCN. Bai et al. [4], LSTM / , . , TCN . ,   1 , , ( 10 ). , , , , (TCN) (LSTM). 





4. . 





, , , . . 





, . , TCN . , ,    -. 





, , LSTM TCN. , GBDT    ,  . , - ( ) . 





- , ,    , , . , , . , . 





[1] Peter Martey Addo, Dominique Guegan, and Bertrand Hassani. 2018. Credit

risk analysis using machine and deep learning models. Risks 6, 2 (2018), 38.

https://doi.org/10.3390/risks6020038







[2] Sercan O. Arik and Tomas Pfister. 2019. TabNet: Attentive Interpretable Tabular

Learning. (2019). arXiv:1908.07442







[3] Dmitrii Babaev, Alexander Tuzhilin, Maxim Savchenko, and Dmitrii Umerenkov. E.T.-Rnn: Applying deep learning to credit loan applications. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2183–2190. https://doi.org/10.1145/3292500.3330693





[4] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. 2018. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling. (2018). arXiv:1803.01271





[5] George EP Box and David R Cox. 1964. An analysis of transformations. Journal of the Royal Statistical Society: Series B (Methodological) 26, 2 (1964), 211–243.





[6] Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD Unternational Conference on Knowledge Discovery and Data Mining. 785–794.





[7] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning. 933–941.





[8] Ian Dewancker, Michael McCourt, and Scott Clark. 2015. Bayesian Optimization Primer.





[9] Dmitry Efimov, Di Xu, Alexey Nefedov, and Archana Anandakrishnan. 2019. Using Generative Adversarial Networks to Synthesize Artificial Financial Datasets. In 33rd Conference on Neural Information Processing Systems, Workshop on Robust AI in Financial Services.





[10] Jerome H. Friedman. 2001. Greedy function approximation: A gradient boosting machine. Annals of Statistics 29, 5 (2001), 1189–1232. https://doi.org/10.2307/ 2699986





[11] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning. 1243–1252.





[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Identity mappings in deep residual networks, Vol. 9908 LNCS. Springer Verlag, 630–645. https://doi.org/10.1007/978-3-319-46493-0_38 arXiv:1603.05027





[13] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv:1207.0580 (2012).





[14] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735–1780. https://doi.org/10.1162/neco.1997.9. 8.1735





[15] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099 (2016).





[16] Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).





[17] David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, and Chris Pal. 2017. Zoneout: Regularizing rnns by randomly preserving hidden activations. In Proceedings of the 5th International Conference on Learning Representations. arXiv:1606.01305





[18] Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to information retrieval. Cambridge University Press. 234–265 pages.





[19] Loris Nanni and Alessandra Lumini. 2009. An experimental comparison of ensemble of classifiers for bankruptcy prediction and credit scoring. Expert Systems with Applications 36, 2 (2009), 3028–3033. https://doi.org/10.1016/j.eswa. 2008.01.018





[20] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499 (2016).





[21] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning. 1310–1318.





[22] Lutz Prechelt. 1998. Early stopping-but when? In Neural Networks: Tricks of the trade. Springer, 55–69.





[23] Abhimanyu Roy, Jingyi Sun, Robert Mahoney, Loreto Alonzi, Stephen Adams, and Peter Beling. 2018. Deep learning detecting fraud in credit card transactions. In Proceedings of the Systems and Information Engineering Design Symposium,129–134. https://doi.org/10.1109/SIEDS.2018.8374722





[24] Lyn C Thomas, David B Edelman, and Jonathan N Crook. 2002. Credit scoring and its applications. SIAM.





[25] Alex Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, and Kevin J Lang. 1989. Phoneme recognition using time-delay neural networks. IEEE transactions on acoustics, speech, and signal processing 37, 3 (1989), 328–339.





[26] Chongren Wang, Dongmei Han, Qigang Liu, and Suyuan Luo. 2018. A deep learning approach for credit scoring of Peer-to-Peer lending using attention mechanism LSTM. IEEE Access 7 (2018), 2161-2168.





[27] Fisher Yu and Vladlen Koltun. 2016. Multi-scale context aggregation by dilated convolutions. In Proceedings of the 4th International Conference on Learning Representations. arXiv: 1511.07122








All Articles