References

[1] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., An image is worth 16x16 words: Transformers for image recognition at scale, arXiv preprint arXiv:2010.11929 (2020).
[2] C. Godard, O. Mac Aodha, G. J. Brostow, Unsupervised monocular depth estimation with left-right consistency, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 270–279.
[3] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al., Training language models to follow instructions with human feedback, arXiv preprint arXiv:2203.02155 (2022).
[4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., Language models are few-shot learners, Advances in Neural Information Processing Systems 33 (2020) 1877–1901.
[5] C. Blundell, J. Cornebise, K. Kavukcuoglu, D. Wierstra, Weight uncertainty in neural network, in: International Conference on Machine Learning, PMLR, 2015, pp. 1613–1622.
[6] W. J. Maddox, P. Izmailov, T. Garipov, D. P. Vetrov, A. G. Wilson, A simple baseline for Bayesian uncertainty in deep learning, Advances in Neural Information Processing Systems 32 (2019).
[7] K. Osawa, S. Swaroop, M. E. E. Khan, A. Jain, R. Eschenhagen, R. E. Turner, R. Yokota, Practical deep learning with Bayesian principles, Advances in Neural Information Processing Systems 32 (2019).
[8] D. Eswaran, S. Günnemann, C. Faloutsos, The power of certainty: A Dirichlet-multinomial model for belief propagation, in: Proceedings of the 2017 SIAM International Conference on Data Mining, SIAM, 2017, pp. 144–152.
[9] B. Lakshminarayanan, A. Pritzel, C. Blundell, Simple and scalable predictive uncertainty estimation using deep ensembles, Advances in Neural Information Processing Systems 30 (2017).
[10] Y. Gal, Z. Ghahramani, Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, in: International Conference on Machine Learning, PMLR, 2016, pp. 1050–1059.
[11] X. Li, Y. Dai, Y. Ge, J. Liu, Y. Shan, L.-Y. Duan, Uncertainty modeling for out-of-distribution generalization, arXiv preprint arXiv:2202.03958 (2022).
[12] D. Hendrycks, T. Dietterich, Benchmarking neural network robustness to common corruptions and perturbations, arXiv preprint arXiv:1903.12261 (2019).
[13] P. W. Koh, S. Sagawa, H. Marklund, S. M. Xie, M. Zhang, A. Balsubramani, W. Hu, M. Yasunaga, R. L. Phillips, I. Gao, et al., WILDS: A benchmark of in-the-wild distribution shifts, in: International Conference on Machine Learning, PMLR, 2021, pp. 5637–5664.
[14] Y. Gal, Uncertainty in deep learning, Ph.D. thesis, University of Cambridge (2016).
[15] A. Kendall, Y. Gal, What uncertainties do we need in Bayesian deep learning for computer vision?, Advances in Neural Information Processing Systems 30 (2017).
[16] M. Hong, J. Liu, C. Li, Y. Qu, Uncertainty-driven dehazing network, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, 2022, pp. 906–913.
[17] J. Hornauer, V. Belagiannis, Gradient-based uncertainty for monocular depth estimation, in: Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XX, Springer, 2022, pp. 613–630.
[18] C. Guo, G. Pleiss, Y. Sun, K. Q. Weinberger, On calibration of modern neural networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 1321–1330.
[19] J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, N. D. Lawrence, Dataset Shift in Machine Learning, MIT Press, 2008.
[20] S. G. Finlayson, A. Subbaswamy, K. Singh, J. Bowers, A. Kupke, J. Zittrain, I. S. Kohane, S. Saria, The clinician and dataset shift in artificial intelligence, New England Journal of Medicine 385 (3) (2021) 283–286.
[21] H. Guo, H. Wang, Q. Ji, Uncertainty-guided probabilistic transformer for complex action recognition, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 20052–20061.
[22] D. Hendrycks, S. Basart, N. Mu, S. Kadavath, F. Wang, E. Dorundo, R. Desai, T. Zhu, S. Parajuli, M. Guo, et al., The many faces of robustness: A critical analysis of out-of-distribution generalization, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 8340–8349.
[23] B. Charpentier, D. Zügner, S. Günnemann, Posterior network: Uncertainty estimation without OOD samples via density-based pseudo-counts, Advances in Neural Information Processing Systems 33 (2020) 1356–1367.
[24] Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. Dillon, B. Lakshminarayanan, J. Snoek, Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift, Advances in Neural Information Processing Systems 32 (2019).
[25] I. Kononenko, Bayesian neural networks, Biological Cybernetics 61 (5) (1989) 361–370.
[26] O. Sagi, L. Rokach, Ensemble learning: A survey, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8 (4) (2018) e1249.
[27] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, R. R. Salakhutdinov, Improving neural networks by preventing co-adaptation of feature detectors, arXiv preprint arXiv:1207.0580 (2012).
[28] D. Rezende, S. Mohamed, Variational inference with normalizing flows, in: International Conference on Machine Learning, PMLR, 2015, pp. 1530–1538.
[29] I. Kobyzev, S. J. Prince, M. A. Brubaker, Normalizing flows: An introduction and review of current methods, IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (11) (2020) 3964–3979.
[30] J. P. Agnelli, M. Cadeiras, E. G. Tabak, C. V. Turner, E. Vanden-Eijnden, Clustering and classification through normalizing flows in feature space, Multiscale Modeling & Simulation 8 (5) (2010) 1784–1802.
[31] D. P. Kingma, P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, Advances in Neural Information Processing Systems 31 (2018).
[32] L. Dinh, D. Krueger, Y. Bengio, NICE: Non-linear independent components estimation, arXiv preprint arXiv:1410.8516 (2014).
[33] A. Amini, A. Soleimany, S. Karaman, D. Rus, Spatial uncertainty sampling for end-to-end control, arXiv preprint arXiv:1805.04829 (2018).
[34] Y. Gal, J. Hron, A. Kendall, Concrete dropout, Advances in Neural Information Processing Systems 30 (2017).
[35] D. Molchanov, A. Ashukha, D. Vetrov, Variational dropout sparsifies deep neural networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 2498–2507.
[36] T. Pearce, M. Zaki, A. Brintrup, N. Anastassacos, A. Neely, Uncertainty in neural networks: Bayesian ensembling, stat 1050 (2018) 12.
[37] M. Biloš, B. Charpentier, S. Günnemann, Uncertainty on asynchronous time event prediction, Advances in Neural Information Processing Systems 32 (2019).
[38] A. Malinin, M. Gales, Predictive uncertainty estimation via prior networks, Advances in Neural Information Processing Systems 31 (2018).
[39] A. Malinin, M. Gales, Reverse KL-divergence training of prior networks: Improved uncertainty and adversarial robustness, Advances in Neural Information Processing Systems 32 (2019).
[40] M. Sensoy, L. Kaplan, M. Kandemir, Evidential deep learning to quantify classification uncertainty, Advances in Neural Information Processing Systems 31 (2018).
[41] A. Amini, W. Schwarting, A. Soleimany, D. Rus, Deep evidential regression, Advances in Neural Information Processing Systems 33 (2020) 14927–14937.
[42] G. Parisi, R. Shankar, Statistical field theory (1988).
[43] M. Jordan, The exponential family: Conjugate priors (2009).
[44] L. Dinh, J. Sohl-Dickstein, S. Bengio, Density estimation using Real NVP, arXiv preprint arXiv:1605.08803 (2016).
[45] G. Papamakarios, T. Pavlakou, I. Murray, Masked autoregressive flow for density estimation, Advances in Neural Information Processing Systems 30 (2017).
[46] C.-W. Huang, D. Krueger, A. Lacoste, A. Courville, Neural autoregressive flows, in: International Conference on Machine Learning, PMLR, 2018, pp. 2078–2087.
[47] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, M. Welling, Improved variational inference with inverse autoregressive flow, Advances in Neural Information Processing Systems 29 (2016).
[48] J. M. Hernández-Lobato, R. Adams, Probabilistic backpropagation for scalable learning of Bayesian neural networks, in: International Conference on Machine Learning, PMLR, 2015, pp. 1861–1869.
[49] N. Silberman, D. Hoiem, P. Kohli, R. Fergus, Indoor segmentation and support inference from RGBD images, ECCV (5) 7576 (2012) 746–760.
[50] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III, Springer, 2015, pp. 234–241.
[51] J. Tompson, R. Goroshin, A. Jain, Y. LeCun, C. Bregler, Efficient object localization using convolutional networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 648–656.
[52] V. Kuleshov, N. Fenner, S. Ermon, Accurate uncertainties for deep learning using calibrated regression, in: International Conference on Machine Learning, PMLR, 2018, pp. 2796–2804.
[53] I. J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples, arXiv preprint arXiv:1412.6572 (2014).
[54] X. Huang, X. Cheng, Q. Geng, B. Cao, D. Zhou, P. Wang, Y. Lin, R. Yang, The ApolloScape dataset for autonomous driving, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 954–960.