The gradient with respect to $\sigma_j^{-2}$ is

$$\sum_{k=1}^{n} e_1(\mathbf{x}_k)\,\frac{\partial f_{\mathrm{FM}}(\mathbf{x}_k)}{\partial \sigma_j^{-2}} \;-\; \alpha \sum_{k=1}^{n} e_2(\mathbf{x}_k)\,\frac{\partial f_{\mathrm{FM}}(\mathbf{x}_k)}{\partial \sigma_j^{-2}} \qquad (4.73)$$

and the $\sigma_j$ are updated by gradient descent as

$$\sigma_j^{-2} \leftarrow \sigma_j^{-2} - \eta\left[\sum_{k=1}^{n} e_1(\mathbf{x}_k)\,\frac{\partial f_{\mathrm{FM}}(\mathbf{x}_k)}{\partial \sigma_j^{-2}} - \alpha \sum_{k=1}^{n} e_2(\mathbf{x}_k)\,\frac{\partial f_{\mathrm{FM}}(\mathbf{x}_k)}{\partial \sigma_j^{-2}}\right] \qquad (4.74)$$

where $\eta$ is the constant step size.
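The update in (4.74) can be sketched in a few lines of NumPy. This is a minimal illustration, not the original implementation: the error terms $e_1(\mathbf{x}_k)$, $e_2(\mathbf{x}_k)$ and the partial derivatives $\partial f_{\mathrm{FM}}(\mathbf{x}_k)/\partial\sigma_j^{-2}$ are assumed to be precomputed arrays, since their definitions come from equations not reproduced here.

```python
import numpy as np

def update_inv_sq_widths(inv_sq_sigma, e1, e2, d_f, alpha, eta):
    """One constant-step gradient-descent update of sigma_j^{-2}.

    inv_sq_sigma : shape (m,)   current values of sigma_j^{-2}
    e1, e2       : shape (n,)   per-sample error terms e_1(x_k), e_2(x_k)
    d_f          : shape (n, m) partials d f_FM(x_k) / d sigma_j^{-2}
    alpha, eta   : trade-off weight and constant step size
    """
    # Gradient as in (4.73): sum over samples k, one entry per width j.
    grad = d_f.T @ e1 - alpha * (d_f.T @ e2)
    # Update as in (4.74): step against the gradient with constant step eta.
    return inv_sq_sigma - eta * grad
```

Working with $\sigma_j^{-2}$ directly (rather than $\sigma_j$) keeps the update a plain vector operation; the widths themselves can be recovered afterwards as `sigma = inv_sq_sigma ** -0.5` if needed.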