Automatic Feature Interaction Learning via Self-Attentive Neural Networks

Abstract

Click-through rate (CTR) prediction, which aims to predict the probability that a user will click on an ad or an item, is critical for many online applications such as online advertising and recommender systems. The problem is very challenging because: 1) the input features (e.g., user id, user age, item id, item category) are usually sparse and high-dimensional; 2) effective prediction relies on high-order combinatorial features (a.k.a. cross features), which are very time-consuming for domain experts to hand-craft and impossible to enumerate exhaustively. Therefore, efforts have been made to find low-dimensional representations of the sparse, high-dimensional raw features and their meaningful combinations.





In this paper, we propose AutoInt, an effective and efficient method for automatically learning the high-order interactions of input features. Our proposed algorithm is very general and can be applied to both numerical and categorical input features. Specifically, we map both numerical and categorical features into the same low-dimensional space. Then a multi-head self-attentive neural network with residual connections is proposed to explicitly model the feature interactions in this low-dimensional space. With different layers of the multi-head self-attentive neural network, combinatorial features of different orders can be modeled. The whole model can be fit efficiently to large-scale raw data in an end-to-end manner. Experimental results on four real-world data sets show that our proposed approach not only outperforms existing state-of-the-art prediction approaches but also offers good model explainability. The code is available at .





1. Introduction 

Predicting the probability that a user clicks on an ad or an item (also known as click-through rate prediction) is a critical problem for many web applications such as online advertising and recommender systems [8, 10, 15]. The quality of the prediction has a direct impact on the final revenue of business providers. Because of its importance, the problem has attracted growing interest in both academia and industry.





Machine learning plays a key role in click-through rate prediction, which is usually formulated as supervised learning with user profiles and item attributes as input features [8, 11, 13, 21, 32]. The problem is challenging for several reasons. First, the input features are extremely sparse and high-dimensional [8, 11, 19, 32]. In real-world applications, a considerable share of users' demographics and items' attributes are discrete and/or categorical; to make supervised learning applicable, they are first converted to one-hot encodings, which easily results in feature vectors with millions of dimensions. In the well-known Criteo CTR data set, for example, the feature dimension is about 30 million and the sparsity exceeds 99.99%. With such sparse, high-dimensional inputs, machine learning models are easily overfitted. Second, as shown in an extensive literature [8, 11, 19, 32], high-order feature interactions are crucial for good performance. For example, the third-order combinatorial feature <Gender=Male, Age=10, productCategory=VideoGame> is very informative for predicting whether a young boy will click on a video-game ad. Finding such meaningful high-order combinatorial features, however, relies heavily on domain experts, and it is almost impossible to hand-craft all the meaningful combinations [8, 26]. One may ask: can we simply enumerate all possible high-order features and let the model select the useful ones? Unfortunately, enumeration exponentially increases the dimensionality and sparsity of the input, leading to even more severe overfitting. Therefore, approaches are needed that represent sparse, high-dimensional raw features in low-dimensional spaces and, at the same time, learn meaningful high-order combinatorial features automatically.





The most widely used approach to modeling feature interactions, factorization machines (FM) [26], combines polynomial regression with factorization techniques and has proved effective for various tasks [27, 28]. However, limited by its polynomial fitting time, FM is only efficient for modeling low-order feature interactions and impractical for capturing high-order ones. Recently, many works based on deep neural networks have been proposed to model high-order feature interactions [8, 11, 13, 38]; typically, multiple layers of non-linear neural networks are stacked to capture them. Such methods, however, suffer from two limitations. First, fully-connected neural networks have been shown to be inefficient at learning multiplicative feature interactions [4]. Second, since these models learn the interactions implicitly, they lack a good explanation of which feature combinations are actually meaningful. We therefore seek an approach that models different orders of feature combinations explicitly, while retaining good model explainability.





In this paper, we propose a novel approach to automatic feature interaction learning based on the multi-head self-attention mechanism [36]. The proposed approach learns effective low-dimensional representations of the sparse, high-dimensional input features and is applicable to both categorical and numerical inputs. Specifically, both categorical and numerical features are first embedded into the same low-dimensional space, which reduces the dimensionality of the input and, at the same time, allows different types of features to interact with each other via vector arithmetic (e.g., summation and inner product). We then propose a novel interacting layer that promotes interactions between different features. Within each interacting layer, every feature is allowed to interact with all the others and automatically identifies the relevant ones to form meaningful higher-order features via the multi-head self-attention mechanism [36]. Moreover, the multiple heads project a feature into several subspaces and can therefore capture different feature interactions in different subspaces. Such an interacting layer models one step of interaction between features; by stacking several of them, we can model feature interactions of different orders. Residual connections [12] are added to the interacting layers, so that combinations of different orders are preserved. Since the attention mechanism scores the combinations between features, the correlations between features can also be easily interpreted. We call the resulting model AutoInt.





To summarize, our main contributions are as follows:









  • we propose a novel approach based on a self-attentive neural network that can automatically learn high-order feature interactions explicitly and efficiently handle large-scale, high-dimensional sparse data;





  • we conduct extensive experiments on the task of CTR prediction; the results on several real-world data sets show that the proposed approach not only outperforms existing state-of-the-art prediction methods but also offers good model explainability.





The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 formally defines the problem. Section 4 presents the proposed approach for learning feature interactions. Section 5 reports the experimental results and their analysis. We conclude and discuss future work in Section 6.





2. Related Work

Our work is related to three lines of research: 1) CTR prediction in recommender systems and online advertising; 2) techniques for learning feature interactions; 3) the self-attention mechanism and residual networks in the deep learning literature.





2.1 CTR Prediction in Recommender Systems and Online Advertising

CTR prediction is essential to many Internet companies, and a variety of systems have been developed for it [8-10, 15, 21, 29, 43]. For example, Google uses the Wide&Deep [8] framework in its application recommender system, where the wide component takes cross features crafted with human expertise and the deep component is a feed-forward neural network on embedded features. Shan et al. [31] predicted ad click-through rates by factorizing the three-way <user, ad, context> interaction tensor. Oentaryo et al. [24] developed a hierarchical importance-aware factorization machine to predict responses in mobile advertising.





2.2 Learning Feature Interactions

Learning feature interactions is a fundamental problem and has therefore been studied extensively. The best-known model is factorization machines (FM) [26], which were proposed to capture first- and second-order feature interactions and have proved effective for many recommendation tasks [27, 28]. Various extensions followed. Field-aware factorization machines (FFM) [16] model fine-grained interactions between features from different fields, while GBFM [7] and AFM [40] account for the importance of different second-order feature interactions. All of these approaches, however, focus on low-order interactions.





More recent work models high-order feature interactions. For example, NFM [13] stacks deep neural networks on top of the output of second-order interactions to model higher-order features. Similarly, PNN [25], FNN [41], DeepCrossing [32], Wide&Deep [8] and DeepFM [11] use feed-forward neural networks to model high-order feature interactions. All of these approaches, however, learn the interactions implicitly and therefore lack good model explainability. In contrast, three lines of work learn feature interactions explicitly. First, Deep&Cross [38] and xDeepFM [19] take the outer product of features at the bit-wise and vector-wise level, respectively; although they perform explicit feature interactions, it is still non-trivial to explain which combinations are useful. Second, tree-based methods [39, 42, 44] combine the power of embedding-based and tree-based models, but they have to break the training procedure into multiple stages. Third, HOFM [5] provides efficient training algorithms for high-order factorization machines; however, HOFM requires too many parameters, and only its low-order (usually less than fifth-order) forms are practical. In contrast to this existing work, we explicitly model feature interactions with the attention mechanism in an end-to-end manner and can inspect the learned feature combinations via visualization.





2.3 Attention and Residual Networks

Our proposed model makes use of two recent techniques from the deep learning literature: the attention mechanism [2] and residual networks [12].





Attention was first proposed in the context of neural machine translation [2] and has proved effective in a wide range of tasks such as question answering [35], text summarization [30] and recommendation [14, 33, 43]. Vaswani et al. [36] further proposed multi-head self-attention to model the complicated dependencies between words in machine translation.





Residual networks [12] achieved state-of-the-art performance in the ImageNet contest. The residual connection, which can be written simply as y = F(x) + x, encourages gradient flow through intermediate layers and has become a popular structure for training very deep neural networks.





3. Problem Definition

We first formally define the click-through rate (CTR) prediction problem:





DEFINITION 1 (CTR prediction). Let x ∈ R^n denote the concatenation of the features of user u and item v, where categorical features are represented by one-hot encoding and n is the dimension of the concatenated features. The problem of click-through rate prediction is to predict the probability that user u clicks on item v given the feature vector x.





CTR prediction can be approached directly by treating x as the input of an off-the-shelf classifier such as logistic regression. However, since the original feature vector x is very sparse and high-dimensional, such a model is easily overfitted. It is therefore desirable to represent the raw input features in a low-dimensional continuous space. Moreover, as shown in the existing literature, it is crucial to exploit high-order combinatorial features to achieve good prediction performance [6, 8, 11, 23, 26, 32].
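To make the sparsity concrete, here is a minimal Python sketch of the one-hot input construction; the two-field schema and its vocabularies are hypothetical and purely illustrative:

    import numpy as np

    # Hypothetical toy schema: Gender in {Male, Female}, Age bucket in {<18, 18-24, 25+}.
    gender_vocab = {"Male": 0, "Female": 1}
    age_vocab = {"<18": 0, "18-24": 1, "25+": 2}

    def one_hot(index, size):
        """Return a one-hot vector of the given size."""
        v = np.zeros(size)
        v[index] = 1.0
        return v

    # x is the concatenation of the one-hot vectors of all fields; in real CTR
    # data this vector has millions of dimensions and is almost entirely zero.
    x = np.concatenate([one_hot(gender_vocab["Male"], len(gender_vocab)),
                        one_hot(age_vocab["18-24"], len(age_vocab))])
    print(x)  # [1. 0. 0. 1. 0.]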





Figure 1: Overview of our proposed AutoInt model. The details of the embedding layer and the interacting layer are shown in Figures 2 and 3, respectively.

We now formally define combinatorial features of higher order.





DEFINITION 2 (p-order combinatorial feature). Given an input feature vector x ∈ R^n, a p-order combinatorial feature is defined as g(x_{i1}, ..., x_{ip}), where each feature comes from a distinct field, p is the number of involved feature fields, and g(·) is a non-additive combination function such as multiplication [26] or outer product [19, 38]. For example, x_{i1} × x_{i2} is a second-order combinatorial feature involving x_{i1} and x_{i2}.





Traditionally, meaningful high-order combinatorial features are hand-crafted by domain experts. However, this is very time-consuming and hard to generalize to new domains, and it is practically impossible to hand-craft all meaningful combinations. We therefore aim to develop an approach that automatically discovers meaningful high-order combinatorial features and, at the same time, maps all raw features into a low-dimensional space. Formally, our problem is defined as follows:





DEFINITION 3 (problem definition). Given an input feature vector x ∈ R^n for click-through rate prediction, our goal is to learn a low-dimensional representation of x that models the high-order combinatorial features.





4. AutoInt: Automatic Feature Interaction Learning

In this section we first give an overview of AutoInt, the proposed approach that automatically learns feature interactions for CTR prediction. We then describe in detail how to learn a low-dimensional representation that models high-order combinatorial features without manual feature engineering.





4.1 Overview

The goal of our approach is to map the original sparse, high-dimensional feature vector into a low-dimensional space while modeling high-order feature interactions. As shown in Figure 1, our method takes the sparse feature vector x as input, followed by an embedding layer that projects all features (i.e., both categorical and numerical ones) into the same low-dimensional space. Next, the embeddings of all fields are fed into a novel interacting layer, implemented as a multi-head self-attentive neural network. In each interacting layer, high-order features are combined through the attention mechanism, and different kinds of combinations can be evaluated by the multiple heads, which map the features into different subspaces. By stacking several interacting layers, combinatorial features of different orders can be modeled.





The output of the final interacting layer, a low-dimensional representation of the input features that encodes the high-order combinatorial features, is used to estimate the click-through rate through a sigmoid function. Below we describe each component of the proposed method in detail.





Figure 2: Illustration of the input and embedding layers, where both categorical and numerical fields are represented by low-dimensional dense vectors.

4.2 Input Layer

We first represent the user's profile and the item's attributes as a sparse vector, which is the concatenation of all feature fields. Specifically,

x = [x_1; x_2; ...; x_M],    (1)





where M is the total number of feature fields and x_i is the representation of the i-th field: x_i is a one-hot vector if the i-th field is categorical (e.g., x_1 in Figure 2) and a scalar value if the i-th field is numerical (e.g., x_M in Figure 2).





4.3 Embedding Layer

Since the representations of categorical features are very sparse and high-dimensional, a common practice is to map them into a low-dimensional space (e.g., as in word embeddings). Specifically, we represent each categorical feature with a low-dimensional vector:

e_i = V_i x_i,    (2)





where V_i is an embedding matrix for field i and x_i is a one-hot vector. Categorical features can also be multi-valued, in which case x_i is a multi-hot vector. Take movie watching prediction as an example: there can be a feature field Genre that describes the types of a movie and may be multi-valued (e.g., «Drama» and «Romance»). To be compatible with multi-valued inputs, we further modify Equation 2 and represent a multi-valued feature field as the average of the corresponding embedding vectors:

e_i = (1/q) V_i x_i,    (3)





where q is the number of values the instance has in the i-th field and x_i is the multi-hot vector representation of this field.





To allow interactions between categorical and numerical features, we also represent numerical features in the same low-dimensional space. Specifically, a numerical feature is represented as

e_m = v_m x_m,    (4)





where v_m is an embedding vector for field m and x_m is a scalar value.





As a result, the output of the embedding layer is a concatenation of multiple embedding vectors, as shown in Figure 2.
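As a concrete illustration of Equations 2-4, the following minimal NumPy sketch embeds a one-hot categorical field, a multi-valued categorical field and a numerical field; all sizes and values are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 4                                  # illustrative embedding dimension

    V1 = rng.normal(size=(d, 4))           # embedding matrix of field 1 (vocab size 4)
    V2 = rng.normal(size=(d, 3))           # embedding matrix of field 2 (vocab size 3)
    v3 = rng.normal(size=(d,))             # embedding vector of a numerical field

    x1 = np.array([0., 1., 0., 0.])        # one-hot categorical value
    x2 = np.array([1., 0., 1.])            # multi-hot field with q = 2 selected values
    x3 = 2.5                               # raw numerical value

    e1 = V1 @ x1                           # Eq. 2: e_i = V_i x_i
    e2 = (V2 @ x2) / x2.sum()              # Eq. 3: average of the selected embeddings
    e3 = v3 * x3                           # Eq. 4: e_m = v_m x_m

    E = np.stack([e1, e2, e3])             # M x d matrix passed to the interacting layer
    print(E.shape)                         # (3, 4)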





4.4 Interacting Layer

Once the numerical and categorical features live in the same low-dimensional space, we can model high-order combinatorial features in that space. The key problem is to determine which features should be combined to form meaningful high-order features. Traditionally, this is done by domain experts, who create meaningful combinations based on their knowledge. In this paper, we tackle the problem with a novel approach, the multi-head self-attention mechanism [36].





The multi-head self-attentive network [36] has recently achieved remarkable performance in modeling complicated relations. For instance, it has shown its superiority in modeling arbitrary word dependencies in machine translation [36] and sentence embedding [20], and it has been successfully applied to capturing node similarities in graph embedding [37]. Here we extend this technique to model the correlations between different feature fields.





Specifically, we adopt the key-value attention mechanism [22] to determine which feature combinations are meaningful. Taking feature m as an example, we explain next how to identify the multiple meaningful high-order features involving it. We first define the correlation between feature m and feature k under a specific attention head h as

α^(h)_{m,k} = exp(ψ^(h)(e_m, e_k)) / Σ_{l=1}^{M} exp(ψ^(h)(e_m, e_l)),  ψ^(h)(e_m, e_k) = ⟨W^(h)_Query e_m, W^(h)_Key e_k⟩,    (5)





where ψ^(h)(·, ·) is an attention function that defines the similarity between features m and k. It can be a neural network or something as simple as the inner product ⟨·, ·⟩. In this work we use the inner product because of its simplicity and effectiveness.





W^(h)_Query, W^(h)_Key ∈ R^{d′×d} in Equation 5 are transformation matrices that map the original embedding space R^d into a new space R^{d′}. Next, we update the representation of feature m in subspace h by combining all relevant features, weighted by the coefficients α^(h)_{m,k}:

ẽ^(h)_m = Σ_{k=1}^{M} α^(h)_{m,k} (W^(h)_Value e_k),    (6)

where W^(h)_Value ∈ R^{d′×d}.





Since ẽ^(h)_m in Equation 6 is a combination of feature m and its relevant features (under head h), it represents a new combinatorial feature learned by our method. Moreover, a feature is likely to be involved in several different combinatorial features, which we achieve by using multiple heads that create different subspaces and learn distinct feature interactions separately. We collect the combinatorial features learned in all subspaces as follows:

ẽ_m = ẽ^(1)_m ⊕ ẽ^(2)_m ⊕ ... ⊕ ẽ^(H)_m,    (7)





where ⊕ is the concatenation operator and H is the number of attention heads.





Figure 3: The architecture of the interacting layer. Combinatorial features are conditioned on attention weights, i.e., α^(h)_m.

To preserve previously learned combinatorial features, including the raw individual (i.e., first-order) features, we add standard residual connections to our network. Formally,

e^Res_m = ReLU(ẽ_m + W_Res e_m),    (8)





where W_Res ∈ R^{d′H×d} is a projection matrix used in case of dimension mismatch [12], and ReLU(z) = max(0, z) is a non-linear activation function.





With such an interacting layer, the representation e_m of each feature is updated into a new representation e^Res_m, which encodes high-order features. We can stack several such layers, using the output of one interacting layer as the input of the next. In this way, combinatorial features of arbitrary order can be modeled.
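To make this concrete, below is a minimal NumPy sketch of one interacting layer implementing Equations 5-8; the toy shapes in the usage at the bottom are our own illustrative choices, not settings from the paper:

    import numpy as np

    def interacting_layer(E, W_q, W_k, W_v, W_res):
        """One interacting layer (Eqs. 5-8), illustrative NumPy sketch.

        E:      (M, d) field embeddings
        W_q, W_k, W_v: (H, d_prime, d) per-head projection matrices
        W_res:  (H * d_prime, d) residual projection
        returns (M, H * d_prime) updated field representations
        """
        H, d_prime, d = W_q.shape
        heads = []
        for h in range(H):
            Q = E @ W_q[h].T                    # (M, d')
            K = E @ W_k[h].T                    # (M, d')
            V = E @ W_v[h].T                    # (M, d')
            scores = Q @ K.T                    # psi^(h)(e_m, e_k) = <W_Q e_m, W_K e_k>
            scores -= scores.max(axis=1, keepdims=True)   # numerical stability
            alpha = np.exp(scores)
            alpha /= alpha.sum(axis=1, keepdims=True)     # Eq. 5: softmax over fields
            heads.append(alpha @ V)             # Eq. 6: combine relevant features
        E_tilde = np.concatenate(heads, axis=1) # Eq. 7: concatenate the H subspaces
        return np.maximum(E_tilde + E @ W_res.T, 0.0)     # Eq. 8: residual + ReLU

    # Usage with toy sizes (M = 3 fields, d = 4, H = 2 heads, d' = 3):
    rng = np.random.default_rng(0)
    E = rng.normal(size=(3, 4))
    W_q, W_k, W_v = (rng.normal(size=(2, 3, 4)) for _ in range(3))
    W_res = rng.normal(size=(2 * 3, 4))
    print(interacting_layer(E, W_q, W_k, W_v, W_res).shape)  # (3, 6)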





4.5 Output Layer

The output of the final interacting layer is the set of feature vectors {e^Res_m}^M_{m=1}, which includes both the raw individual features preserved by the residual connections and the combinatorial features learned via the multi-head attention mechanism. For the final CTR prediction, we simply concatenate all of them and apply a non-linear projection:

ŷ = σ(w^T (e^Res_1 ⊕ e^Res_2 ⊕ ... ⊕ e^Res_M) + b),    (9)





where w ∈ R^{d′HM} is a column projection vector that linearly combines the concatenated features, b is the bias, and σ(x) = 1/(1 + e^{−x}) transforms the value into the user's clicking probability.
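A minimal sketch of the output layer (Equation 9), under the same illustrative shapes as above; all sizes are hypothetical:

    import numpy as np

    def predict_ctr(E_res, w, b):
        """Eq. 9: concatenate the final field representations and apply a sigmoid.

        E_res: (M, d_prime * H) output of the last interacting layer
        w:     (M * d_prime * H,) projection vector; b: scalar bias
        """
        z = w @ E_res.reshape(-1) + b
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    E_res = rng.normal(size=(3, 6))                       # M = 3 fields, d'H = 6
    print(predict_ctr(E_res, rng.normal(size=18), 0.1))   # a probability in (0, 1)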





4.6 Training

Our loss function is the Log loss, which is defined as follows:

Logloss = −(1/N) Σ^N_{j=1} (y_j log ŷ_j + (1 − y_j) log(1 − ŷ_j)),    (10)





where y_j and ŷ_j are the ground truth of the user's click and the estimated CTR, respectively, j indexes the training samples, and N is the total number of training samples.





The model parameters are learned by minimizing the total Logloss using gradient descent.
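Equation 10 in code, as a small self-contained sketch (the clipping constant is our own numerical guard, not part of the paper):

    import numpy as np

    def logloss(y_true, y_pred, eps=1e-7):
        """Eq. 10: average binary cross-entropy over N samples."""
        y_pred = np.clip(y_pred, eps, 1.0 - eps)   # guard against log(0)
        return -np.mean(y_true * np.log(y_pred)
                        + (1.0 - y_true) * np.log(1.0 - y_pred))

    print(logloss(np.array([1, 0, 1]), np.array([0.9, 0.2, 0.7])))  # ~0.228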





4.7 Analysis of AutoInt

Modeling arbitrary-order combinatorial features. Given the feature interaction operations defined by Equations 5-8, we now analyze how low-order and high-order combinatorial features are modeled.





For simplicity, assume there are four feature fields (i.e., M = 4) denoted by x1, x2, x3 and x4. Within the first interacting layer, each individual feature interacts with all the others through the attention mechanism (Equation 5), so second-order combinatorial features such as g(x1, x2), g(x2, x3) and g(x3, x4) are captured, with the non-additive property ensured by the non-additivity of the function g(·) (Definition 2) and the activation function ReLU(·). Naturally, the new representation of x1, namely eRes1, encodes these learned second-order interactions, and the same holds for the representations of the other features.





Within the second interacting layer, higher-order combinatorial features are formed. For example, the combination of eRes1 and eRes3, which involves x1, x2 and x3, models the third-order feature g(x1, x2, x3), because eRes1 contains the interaction g(x1, x2) and eRes3 contains the individual feature x3 (carried through by the residual connection). Moreover, the maximum order of combinatorial features grows with each additional interacting layer: for instance, the fourth-order feature g(x1, x2, x3, x4) is captured by the combination of eRes1 and eRes3, which contain the second-order interactions g(x1, x2) and g(x3, x4), respectively. Therefore, a few interacting layers suffice to model very high-order interactions.





From the analysis above, we see that AutoInt learns feature interactions with the attention mechanism in a hierarchical manner, i.e., from low order to high order, and that all low-order interactions are carried through by the residual connections. This is promising and reasonable, because learning hierarchical representations has proven quite effective with deep neural networks in computer vision and speech processing [3, 18].










Space complexity. The embedding layer, which is a shared component of neural network-based methods [11, 19, 32], contains nd parameters, where n is the dimension of the sparse representation of the input features and d is the embedding size. Since an interacting layer contains the weight matrices {W(h)Query, W(h)Key, W(h)Value, WRes}, the number of parameters in an L-layer network is L × (3dd′ + d′Hd), which is independent of the number of feature fields M. In addition, the output layer has d′HM + 1 parameters. Hence, the interacting layers take O(Ldd′H) parameters in total; since H and d′ are usually small (e.g., H = 2 and d′ = 32 in our experiments), the memory cost is dominated by the embedding layer.
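A quick back-of-the-envelope check of these counts; H and d′ follow the values quoted above, while n, d, M and L are hypothetical placeholders:

    n, d, M = 1_000_000, 16, 39   # sparse input dim, embedding size, #fields (hypothetical)
    L, d_p, H = 3, 32, 2          # interacting layers, head dim d', #heads

    embedding = n * d                               # shared embedding layer
    interacting = L * (3 * d * d_p + d_p * H * d)   # {W_Query, W_Key, W_Value, W_Res} per layer
    output = d_p * H * M + 1                        # projection vector w and bias b

    print(embedding, interacting, output)           # the embedding layer dominates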










Time complexity. Within an interacting layer, the computational cost is two-fold. First, computing the attention weights for one head takes O(Mdd′ + M²d′) time. Afterwards, forming the combinatorial features under one head also takes O(Mdd′ + M²d′) time. With H heads, the whole layer takes O(MHd′(M + d)) time, which is acceptable because H, d′ and d are usually small. We report the practical run time of AutoInt in Section 5.2.





5. Experiments

In this section, we empirically evaluate the proposed method. We aim to answer the following research questions:





RQ1) How does the proposed AutoInt perform on the problem of CTR prediction? Is it efficient on large-scale sparse and high-dimensional data?





RQ2) What are the influences of different model configurations?





RQ3) What are the dependency structures between different features? Is the proposed model explainable?





RQ4) Will integrating implicit feature interactions further improve the performance? Before answering these questions, we first describe the experimental setup.





Table 1: Statistics of the evaluation data sets.

5.1 Experiment Setup

5.1.1 Data Sets. We use four public real-world data sets. Their statistics are summarized in Table 1.





Criteo. This is a benchmark data set for CTR prediction, containing 45 million users' click records on displayed ads. It has 26 categorical feature fields and 13 numerical feature fields.





Avazu. This data set contains users' mobile behaviors, i.e., whether a displayed mobile ad was clicked by a user or not. It has 23 feature fields spanning from user/device features to ad attributes.





KDD12. This data set was released for KDDCup 2012 and originally aimed at predicting the number of clicks. Since our work focuses on CTR prediction rather than the exact number of clicks, we treat it as a binary classification problem (1 for clicks > 0, 0 for no clicks), following FFM [16].





MovieLens-1M. This data set contains users' ratings of movies. For binarization, we treat samples with a rating below 3 as negative, because a low score indicates that the user does not like the movie, and samples with a rating above 3 as positive; neutral samples, i.e., those with a rating equal to 3, are removed.





Data preparation. First, we remove infrequent features (those appearing in fewer than threshold instances) and treat them as a single feature «<unknown>», where threshold is set to {10, 5, 10} for the Criteo, Avazu and KDD12 data sets, respectively. Second, since numerical features may have large variance and hurt machine learning algorithms, we normalize them by transforming a value z to log²(z) if z > 2, a trick proposed by the winner of the Criteo Competition. Third, we randomly select 80% of all samples for training and split the rest equally into validation and test sets.
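The numerical normalization can be sketched as follows; whether the result is additionally rounded to an integer is an assumption we leave out here:

    import numpy as np

    def squash(z):
        """Map a numerical value z to log(z)^2 when z > 2, otherwise keep it."""
        z = np.asarray(z, dtype=float)
        return np.where(z > 2, np.log(z) ** 2, z)

    print(squash([1.0, 2.0, 100.0]))   # [ 1.  2.  ~21.2]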





5.1.2 Evaluation Metrics. We use two popular metrics for model evaluation:





AUC. The Area Under the ROC Curve (AUC) measures the probability that a CTR predictor assigns a higher score to a randomly chosen positive item than to a randomly chosen negative one. A higher AUC indicates better performance.





Logloss. Since all models attempt to minimize the Logloss defined by Equation 10, we use it as a direct metric.





Note that a slightly higher AUC or a lower Logloss at the 0.001 level is regarded as a significant improvement for CTR prediction, as has been pointed out in previous works [8, 11, 38].
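Both metrics are available off the shelf, e.g. in scikit-learn; the labels and scores below are made up purely for illustration:

    import numpy as np
    from sklearn.metrics import roc_auc_score, log_loss

    y_true = np.array([1, 0, 1, 0, 1])
    y_pred = np.array([0.81, 0.30, 0.65, 0.42, 0.90])   # hypothetical CTR scores

    print(roc_auc_score(y_true, y_pred))   # AUC: higher is better
    print(log_loss(y_true, y_pred))        # Logloss: lower is better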





5.1.3 Competing Models. We compare the proposed approach with three classes of models: (A) linear approaches that use only individual features; (B) factorization machine-based methods that account for second-order combinatorial features; (C) techniques that can capture high-order feature interactions. The class is indicated in parentheses after each model name.





LR (A). LR models only the linear combination of raw individual features.





FM [26] (B). FM uses factorization techniques to model second-order feature interactions.





AFM [40] (B). AFM is one of the state-of-the-art models capturing second-order feature interactions. It extends FM by using the attention mechanism to distinguish the importance of different second-order combinatorial features.





DeepCrossing [32] (C). DeepCrossing uses deep fully-connected neural networks with residual connections to learn non-linear feature interactions in an implicit fashion.





NFM [13] (C). NFM stacks deep neural networks on top of a second-order feature interaction layer. Higher-order interactions are captured implicitly by the non-linearity of the neural networks.





CrossNet [38] (C). The Cross Network, which is the core of the Deep&Cross model, takes the outer product of the concatenated feature vector at the bit-wise level to model feature interactions explicitly.





CIN [19] (C). The Compressed Interaction Network, which is the core of the xDeepFM model, takes the outer product of the stacked feature matrix at the vector-wise level.





HOFM [5] (C). HOFM provides efficient kernel-based algorithms for training high-order factorization machines. Following the settings of Blondel et al. [5] and NFM [13], we build a third-order factorization machine using HOFM.





Note that we compare with CrossNet and CIN, the cores of Deep&Cross and xDeepFM respectively, rather than with the full hybrid models, which additionally integrate a plain DNN (see the joint-training comparison in Section 5.5).





5.1.4 Implementation Details. We implement all methods with TensorFlow [1]. For AutoInt, the embedding dimension d is 16 and the batch size is 1024. We use three interacting layers by default, with d′ = 32 hidden units and two attention heads per layer. To prevent overfitting, we select the dropout rate [34] from {0.1 - 0.9} by grid search on MovieLens-1M; we found that dropout is unnecessary on the three large data sets. For NFM, we use one hidden layer of size 200 on top of the Bi-Interaction layer, as recommended by its authors. For CN and CIN, we use the same number of interaction layers as for AutoInt. For DeepCrossing, the number of hidden units is 100. Once the network structures are fixed, we tune the remaining hyper-parameters of the baselines by grid search. Finally, all deep neural network-based models are optimized with Adam [17].









Table 2: Effectiveness comparison of different algorithms. We highlight that our proposed model almost outperforms all baselines across all four data sets and both metrics. Further analysis is provided in Section 5.2.

5.2 Quantitative Results (RQ1)

Evaluation of effectiveness. We summarize the results averaged over 10 independent runs in Table 2. We have the following observations: 1) FM and AFM, which explore second-order feature interactions, consistently outperform LR on all data sets, indicating that individual features are insufficient for CTR prediction; 2) an interesting observation is the inferiority of some models that capture high-order feature interactions: although DeepCrossing and NFM use deep neural networks as a core component for learning high-order interactions, they do not guarantee improvement over FM and AFM, which may be attributed to the fact that they learn feature interactions implicitly (by contrast, CIN performs better); 3) HOFM significantly outperforms FM on Criteo and MovieLens-1M, showing that modeling third-order feature interactions can be beneficial; 4) AutoInt achieves the best performance among the baselines on three of the four data sets.





On the Avazu data set, CIN performs slightly better than AutoInt in terms of AUC, but AutoInt obtains a lower Logloss. Note that AutoInt shares the same overall structure as DeepCrossing except for the interacting layer, which demonstrates the benefit of the attention mechanism for learning explicit combinatorial features.





Table 4: Efficiency comparison of different algorithms in terms of model size on the KDD12 data set. "DC" and "CN" are short for DeepCrossing and CrossNet, respectively. The model size of HOFM is not reported because it ran out of memory on KDD12. Further analysis is provided in Section 5.2.

Evaluation of model size. The comparison of model sizes on KDD12 is presented in Table 4. As expected, LR, the simplest model, has the fewest parameters. FM and NFM are comparatively compact as well, with NFM requiring additional parameters for its hidden layer. CIN, despite its good prediction performance, relies on a considerably larger number of parameters. In contrast, AutoInt is compact, with a model size comparable to DeepCrossing and NFM.





Evaluation of training speed. As shown in Table 3, CIN, the best-performing baseline, takes noticeably longer per training epoch than AutoInt.





To summarize, our proposed AutoInt is both effective and efficient. Compared with CIN, the strongest baseline, AutoInt is faster and more memory-efficient.





Table 3: Efficiency comparison of different algorithms in terms of run time on the Criteo data set. «DC» and «CN» are short for DeepCrossing and CrossNet, respectively. Further analysis is provided in Section 5.2.

5.3 Analysis (RQ2). To gain further insights into the proposed model, we conduct ablation studies and compare several variants of AutoInt.

5.3.1 Influence of Residual Structure. The standard AutoInt uses residual connections, which carry through all learned combinatorial features and therefore allow modeling very high-order combinations. To justify their contribution, we remove the residual connections from the standard model while keeping all other structures unchanged. As presented in Figure 4, the performance decreases on all data sets when residual connections are removed. In particular, the full model outperforms the variant by a large margin on KDD12 and MovieLens-1M, indicating that residual connections are crucial for modeling high-order feature interactions in our approach.





Figure 4: Performance of AutoInt with and without residual connections on all data sets. AutoInt w denotes the standard model with residual connections, and AutoInt w/o denotes the variant without them.

5.3.2 Influence of Network Depths. Our model learns high-order feature combinations by stacking multiple interacting layers (Section 4). We are therefore interested in how the performance changes with the number of interacting layers, i.e., with the order of the combinatorial features.





The results are summarized in Figure 5. When one interacting layer is used, i.e., when feature interactions are taken into account, the performance improves dramatically over the variant without interaction, showing that combinatorial features are very informative. As the number of interacting layers increases further, the performance keeps improving. Once the number of layers reaches three, the performance stabilizes, indicating that adding ever higher-order combinations brings little additional information.





Figure 5: Performance w.r.t. the number of interacting layers. The results on the Criteo and Avazu data sets show similar trends and are omitted.
Figure 6: Performance w.r.t. the embedding dimension. The results on the Criteo and Avazu data sets show similar trends and are omitted.

5.3.3 Influence of Different Dimensions. Next, we vary the embedding dimension d and check how the performance changes (Figure 6). On KDD12, the performance improves continuously as the embedding size grows, since larger models are used for prediction. The results differ on MovieLens-1M: when the dimension reaches 24, the performance starts to degrade. The reason is that this data set is small and the model becomes overfitted when too many parameters are used.





5.4 Explainable Recommendations (RQ3)

A good recommender system should not only give accurate predictions but also offer explanations. In this part, we therefore show how AutoInt explains its prediction results, taking the MovieLens-1M data set as an example.





We first look at the attention weights of the feature interactions learned for an individual sample, i.e., a case-level explanation. Figure 7 (left) presents the attention map for one such sample. We can see that AutoInt identifies the meaningful combinatorial feature <Gender=Male, Age=[18-24), MovieGenre=Action&Triller> (the red dotted rectangle). This is quite reasonable, since young men are likely to prefer action and thriller movies.





Table 5: Results of integrating implicit feature interactions. We indicate the base model for each method. The last two columns give the average changes in AUC and Logloss compared to the corresponding base models ("+": increase, "-": decrease).

We are also interested in which feature interactions matter globally across the whole data set. To this end, we average the attention weights over all samples to measure the correlations between feature fields; the result is shown in Figure 7 (right). We can see that <Gender, Genre>, <Age, Genre>, <RequestTime, ReleaseTime> and <Gender, Age, Genre> (the solid green rectangles) are strongly correlated, which constitute explainable, meaningful rules for recommendation in this domain.
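Such global heat maps are straightforward to produce once the attention weights have been averaged over samples; below is a hypothetical matplotlib sketch (the attention matrix here is random, purely for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    fields = ["Gender", "Age", "Occupation", "Postal Code",
              "RequestTime", "ReleaseTime", "Genre"]
    # Hypothetical field-to-field attention matrix, e.g. alpha averaged over
    # heads and samples; rows are normalized like the softmax in Eq. 5.
    alpha = np.random.default_rng(0).random((7, 7))
    alpha /= alpha.sum(axis=1, keepdims=True)

    plt.imshow(alpha, cmap="hot")
    plt.xticks(range(len(fields)), fields, rotation=45, ha="right")
    plt.yticks(range(len(fields)), fields)
    plt.colorbar(label="attention weight")
    plt.tight_layout()
    plt.show()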





Figure 7: Heat maps of attention weights for both case-level and global-level feature interactions on MovieLens-1M. The axes represent the feature fields <Gender, Age, Occupation, Postal Code, RequestTime, ReleaseTime, Genre>. We highlight some of the learned combinatorial features in rectangles.

5.5 Integrating Implicit Interactions (RQ4)

Feed-forward neural networks are capable of modeling implicit feature interactions and have been widely integrated into existing CTR prediction methods [8, 11, 19]. To investigate whether integrating implicit interactions further improves the performance, we combine AutoInt with a two-layer feed-forward neural network by joint training. The joint model is called AutoInt+, and we compare it with the following algorithms:





  • Wide&Deep [8]. Wide&Deep integrates the outputs of logistic regression and feed-forward neural networks;





  • DeepFM [11]. DeepFM combines a trainable second-order factorization machine with feed-forward neural networks;





  • Deep&Cross [38]. Deep&Cross extends CrossNet by integrating feed-forward neural networks;





  • xDeepFM [19]. xDeepFM extends CIN by integrating feed-forward neural networks.





Table 5 presents the results (averaged over 10 runs) of the joint-training models. We have two observations: 1) the performance of our method improves on all data sets when it is jointly trained with feed-forward neural networks, which means that integrating implicit feature interactions boosts the predictive ability of our model; however, as the last two columns show, the magnitude of the improvement is the smallest among all models, indicating that our individual model AutoInt is already quite powerful; 2) after integrating implicit feature interactions, AutoInt+ outperforms all competing methods and achieves new state-of-the-art performance on the evaluated CTR prediction data sets.





6. Conclusion and Future Work

In this paper, we propose a novel CTR prediction model based on the self-attention mechanism, which can automatically learn high-order feature interactions in an explicit fashion. The key to our method is the newly introduced interacting layer, which allows each feature to interact with all the others and determines their relevance through learning. Experimental results on four real-world data sets demonstrate the effectiveness and efficiency of the proposed model; in particular, it achieves the best AUC and Logloss scores among the compared methods while offering good explainability.





In the future, we are interested in incorporating contextual information into our method and improving its performance for online recommender systems. In addition, we plan to extend AutoInt to general machine learning tasks such as regression, classification and ranking.









[1] MartΓ­n Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, et al. 2016. TensorFlow: A System for Large-Scale Machine Learning.. In OSDI, Vol. 16. 265–283.





[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.





[3] Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence 35, 8 (2013), 1798–1828.





[4] Alex Beutel, Paul Covington, Sagar Jain, Can Xu, Jia Li, Vince Gatto, and Ed H Chi. 2018. Latent Cross: Making Use of Context in Recurrent Recommender Systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM, 46–54.





[5] Mathieu Blondel, Akinori Fujino, Naonori Ueda, and Masakazu Ishihata. 2016. Higher-order factorization machines. In Advances in Neural Information Processing Systems. 3351–3359.





[6] Mathieu Blondel, Masakazu Ishihata, Akinori Fujino, and Naonori Ueda. 2016. Polynomial Networks and Factorization Machines: New Insights and Efficient Training Algorithms. In International Conference on Machine Learning. 850–858.





[7] Chen Cheng, Fen Xia, Tong Zhang, Irwin King, and Michael R Lyu. 2014. Gradient boosting factorization machines. In Proceedings of the 8th ACM Conference on Recommender systems. ACM, 265–272.





[8] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM, 7–10.





[9] Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 191–198.





[10] Thore Graepel, Joaquin QuiΓ±onero Candela, Thomas Borchert, and Ralf Herbrich. 2010. Web-scale Bayesian Click-through Rate Prediction for Sponsored Search Advertising in Microsoft’s Bing Search Engine. In Proceedings of the 27th International Conference on International Conference on Machine Learning. 13–20.





[11] Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A Factorization-machine Based Neural Network for CTR Prediction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press, 1725–1731.





[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 770–778.





[13] Xiangnan He and Tat-Seng Chua. 2017. Neural factorization machines for sparse predictive analytics. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval. ACM, 355–364.





[14] Xiangnan He, Zhankui He, Jingkuan Song, Zhenguang Liu, Yu-Gang Jiang, and Tat-Seng Chua. 2018. NAIS: Neural attentive item similarity model for recommendation. IEEE Transactions on Knowledge and Data Engineering 30, 12 (2018), 2354–2366.





[15] Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers, et al. 2014. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising. ACM, 1–9.





[16] Yuchin Juan, Yong Zhuang, Wei-Sheng Chin, and Chih-Jen Lin. 2016. Fieldaware factorization machines for CTR prediction. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 43–50.





[17] Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.





[18] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y Ng. 2011. Unsupervised learning of hierarchical representations with convolutional deep belief networks. Commun. ACM 54, 10 (2011), 95–103.





[19] Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, and Guangzhong Sun. 2018. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1754– 1763.





[20] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In International Conference on Learning Representations.





[21] H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, et al. 2013. Ad Click Prediction: A View from the Trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1222–1230.





[22] Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-Value Memory Networks for Directly Reading Documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 1400–1409.





[23] Alexander Novikov, Mikhail Trofimov, and Ivan Oseledets. 2016. Exponential machines. arXiv preprint arXiv:1605.03795 (2016).





[24] Richard J Oentaryo, Ee-Peng Lim, Jia-Wei Low, David Lo, and Michael Finegold. 2014. Predicting response in mobile advertising with hierarchical importance-aware factorization machine. In Proceedings of the 7th ACM international conference on Web search and data mining. ACM, 123–132.





[25] Yanru Qu, Han Cai, Kan Ren, Weinan Zhang, Yong Yu, Ying Wen, and Jun Wang. 2016. Product-based neural networks for user response prediction. In Data Mining (ICDM), 2016 IEEE 16th International Conference on. IEEE, 1149–1154.





[26] Steffen Rendle. 2010. Factorization machines. In Data Mining (ICDM), 2010 IEEE 10th International Conference on. IEEE, 995–1000.





[27] Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing personalized markov chains for next-basket recommendation. In Proceedings of the 19th international conference on World wide web. ACM, 811–820.





[28] Steffen Rendle, Zeno Gantner, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2011. Fast context-aware recommendations with factorization machines. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval. ACM, 635–644.





[29] Matthew Richardson, Ewa Dominowska, and Robert Ragno. 2007. Predicting clicks: estimating the click-through rate for new ads. In Proceedings of the 16th international conference on World Wide Web. ACM, 521–530.





[30] Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 379–389.





[31] Lili Shan, Lei Lin, Chengjie Sun, and Xiaolong Wang. 2016. Predicting ad clickthrough rates via feature-based fully coupled interaction tensor factorization. Electronic Commerce Research and Applications 16 (2016), 30–42.





[32] Ying Shan, T Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. 2016. Deep crossing: Web-scale modeling without manually crafted combinatorial features. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 255–262.





[33] Weiping Song, Zhiping Xiao, Yifan Wang, Laurent Charlin, Ming Zhang, and Jian Tang. 2019. Session-based Social Recommendation via Dynamic Graph Attention Networks. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. ACM, 555–563.





[34] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1 (2014), 1929–1958.





[35] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems. 2440–2448.





[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 6000–6010.





[37] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph Attention Networks. In International Conference on Learning Representations.





[38] Ruoxi Wang, Bin Fu, Gang Fu, and Mingliang Wang. 2017. Deep & Cross Network for Ad Click Predictions. In Proceedings of the ADKDD’17. ACM, 12:1–12:7.





[39] Xiang Wang, Xiangnan He, Fuli Feng, Liqiang Nie, and Tat-Seng Chua. 2018. TEM: Tree-enhanced Embedding Model for Explainable Recommendation. In Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 1543–1552.





[40] Jun Xiao, Hao Ye, Xiangnan He, Hanwang Zhang, Fei Wu, and Tat-Seng Chua. 2017. Attentional factorization machines: learning the weight of feature interactions via attention networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press, 3119–3125.





[41] Weinan Zhang, Tianming Du, and Jun Wang. 2016. Deep learning over multi-field categorical data. In European conference on information retrieval. Springer, 45–57.





[42] Qian Zhao, Yue Shi, and Liangjie Hong. 2017. GB-CENT: Gradient Boosted Categorical Embedding and Numerical Trees. In Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 1311–1319.





[43] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep Interest Network for ClickThrough Rate Prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1059–1068.





[44] Jie Zhu, Ying Shan, JC Mao, Dong Yu, Holakou Rahmanian, and Yi Zhang. 2017. Deep embedding forest: Forest-based serving with deep embedding features. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1703–1711.















