[5] N. Farsad, N. Shlezinger, A. J. Goldsmith, and Y. C. Eldar, “Data-driven
symbol detection via model-based machine learning,” arXiv preprint
arXiv:2002.07806, 2020.
[6] Q. Mao, F. Hu, and Q. Hao, “Deep learning for intelligent wireless
networks: A comprehensive survey,” IEEE Commun. Surveys Tuts.,
vol. 20, no. 4, pp. 2595–2621, 2018.
[7] L. Dai, R. Jiao, F. Adachi, H. V. Poor, and L. Hanzo, “Deep learning
for wireless communications: An emerging interdisciplinary paradigm,”
IEEE Wireless Commun., vol. 27, no. 4, pp. 133–139, 2020.
[8] W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems:
Applications, trends, technologies, and open research problems,” IEEE
Network, vol. 34, no. 3, pp. 134–142, 2019.
[9] T. Raviv, S. Park, O. Simeone, Y. C. Eldar, and N. Shlezinger, “Adaptive
and flexible model-based AI for deep receivers in dynamic channels,”
IEEE Wireless Commun., early access, 2023.
[10] L. Chen, S. T. Jose, I. Nikoloska, S. Park, T. Chen, and O. Simeone,
“Learning with limited samples: Meta-learning and applications to com-
munication systems,” Foundations and Trends® in Signal Processing,
vol. 17, no. 2, pp. 79–208, 2023.
[11] S. T. Jose and O. Simeone, “Free energy minimization: A unified
framework for modelling, inference, learning, and optimization,” IEEE
Signal Process. Mag., vol. 38, no. 2, pp. 120–125, 2021.
[12] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On calibration
of modern neural networks,” in International Conference on Machine
Learning. PMLR, 2017, pp. 1321–1330.
[13] O. Simeone, Machine learning for engineers. Cambridge University
Press, 2022.
[14] N. Shlezinger, J. Whang, Y. C. Eldar, and A. G. Dimakis, “Model-based
deep learning,” Proc. IEEE, vol. 111, no. 5, pp. 465–499, 2023.
[15] N. Shlezinger, Y. C. Eldar, and S. P. Boyd, "Model-based deep learning:
On the intersection of deep learning and optimization," IEEE Access,
vol. 10, pp. 115384–115398, 2022.
[16] N. Shlezinger and Y. C. Eldar, “Model-based deep learning,” Founda-
tions and Trends® in Signal Processing, vol. 17, no. 4, pp. 291–416,
2023.
[17] V. Monga, Y. Li, and Y. C. Eldar, “Algorithm unrolling: Interpretable,
efficient deep learning for signal and image processing,” IEEE Signal
Process. Mag., vol. 38, no. 2, pp. 18–44, 2021.
[18] N. Shlezinger and T. Routtenberg, “Discriminative and generative learn-
ing for the linear estimation of random signals [lecture notes],” IEEE
Signal Process. Mag., vol. 40, no. 6, pp. 75–82, 2023.
[19] T. Raviv, S. Park, O. Simeone, Y. C. Eldar, and N. Shlezinger, “Online
meta-learning for hybrid model-based deep receivers,” IEEE Trans.
Wireless Commun., vol. 22, no. 10, pp. 6415–6431, 2023.
[20] N. Shlezinger, N. Farsad, Y. C. Eldar, and A. J. Goldsmith, “Data-driven
factor graphs for deep symbol detection,” in Proc. IEEE ISIT, 2020, pp.
2682–2687.
[21] N. Shlezinger, R. Fu, and Y. C. Eldar, “DeepSIC: Deep soft interference
cancellation for multiuser MIMO detection,” IEEE Trans. Wireless
Commun., vol. 20, no. 2, pp. 1349–1362, 2021.
[22] N. Shlezinger, N. Farsad, Y. C. Eldar, and A. Goldsmith, "ViterbiNet:
A deep learning based Viterbi algorithm for symbol detection," IEEE
Trans. Wireless Commun., vol. 19, no. 5, pp. 3319–3331, 2020.
[23] T. Raviv, N. Raviv, and Y. Be’ery, “Data-driven ensembles for deep and
hard-decision hybrid decoding,” in Proc. IEEE ISIT, 2020, pp. 321–326.
[24] T. Van Luong, N. Shlezinger, C. Xu, T. M. Hoang, Y. C. Eldar, and
L. Hanzo, "Deep learning based successive interference cancellation
for the non-orthogonal downlink," IEEE Trans. Veh. Technol., vol. 71,
no. 11, pp. 11876–11888, 2022.
[25] P. Jiang, T. Wang, B. Han, X. Gao, J. Zhang, C.-K. Wen, S. Jin, and G. Y.
Li, “AI-aided online adaptive OFDM receiver: Design and experimental
results,” IEEE Trans. Wireless Commun., vol. 20, no. 11, pp. 7655–7668,
2021.
[26] L. V. Jospin, H. Laga, F. Boussaid, W. Buntine, and M. Bennamoun,
“Hands-on Bayesian neural networks—a tutorial for deep learning
users,” IEEE Comput. Intell. Mag., vol. 17, no. 2, pp. 29–48, 2022.
[27] D. T. Chang, “Bayesian neural networks: Essentials,” arXiv preprint
arXiv:2106.13594, 2021.
[28] H. Wang and D.-Y. Yeung, “A survey on Bayesian deep learning,” ACM
Computing Surveys (CSUR), vol. 53, no. 5, pp. 1–37, 2020.
[29] M. Zecchin, S. Park, O. Simeone, M. Kountouris, and D. Gesbert,
“Robust Bayesian learning for reliable wireless AI: Framework and
applications,” IEEE Trans. on Cogn. Commun. Netw., vol. 9, no. 4, pp.
897–912, 2023.
[30] K. M. Cohen, S. Park, O. Simeone, and S. Shamai, “Bayesian active
meta-learning for reliable and efficient AI-based demodulation,” IEEE
Trans. Signal Process., vol. 70, pp. 5366–5380, 2022.
[31] E. Nachmani, E. Marciano, L. Lugosch, W. J. Gross, D. Burshtein, and
Y. Be’ery, “Deep learning methods for improved decoding of linear
codes,” IEEE J. Sel. Topics Signal Process., vol. 12, no. 1, pp. 119–131,
2018.
[32] E. Nachmani, Y. Be’ery, and D. Burshtein, “Learning to decode linear
codes using deep learning,” in Annual Allerton Conference on Commu-
nication, Control, and Computing (Allerton), 2016.
[33] W.-J. Choi, K.-W. Cheong, and J. M. Cioffi, “Iterative soft interference
cancellation for multiple antenna systems,” in Proc. IEEE WCNC, 2000.
[34] J. Pearl, Probabilistic reasoning in intelligent systems: networks of
plausible inference. Elsevier, 2014.
[35] M. Tomlinson, C. J. Tjhai, M. A. Ambroze, M. Ahmed, and M. Jibril,
Error-Correction Coding and Decoding: Bounds, Codes, Decoders,
Analysis and Applications. Springer Nature, 2017.
[36] Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation:
Representing model uncertainty in deep learning,” in International
Conference on Machine Learning. PMLR, 2016, pp. 1050–1059.
[37] S. Boluki, R. Ardywibowo, S. Z. Dadaneh, M. Zhou, and X. Qian,
“Learnable Bernoulli dropout for Bayesian deep learning,” in Interna-
tional Conference on Artificial Intelligence and Statistics. PMLR, 2020,
pp. 3905–3916.
[38] L. Liu, C. Oestges, J. Poutanen, K. Haneda, P. Vainikainen, F. Quitin,
F. Tufvesson, and P. De Doncker, “The COST 2100 MIMO channel
model,” IEEE Wireless Commun., vol. 19, no. 6, pp. 92–99, 2012.
[39] M. P. Naeini, G. Cooper, and M. Hauskrecht, "Obtaining well calibrated
probabilities using Bayesian binning," in Proceedings of the AAAI
Conference on Artificial Intelligence, vol. 29, no. 1, 2015.
[40] P. Becker, H. Pandya, G. Gebhardt, C. Zhao, C. J. Taylor, and
G. Neumann, “Recurrent Kalman networks: Factorized inference in
high-dimensional deep feature spaces,” in International Conference on
Machine Learning, 2019, pp. 544–552.
[41] G. Revach, N. Shlezinger, X. Ni, A. L. Escoriza, R. J. van Sloun, and
Y. C. Eldar, "KalmanNet: Neural network aided Kalman filtering for
partially known dynamics," IEEE Trans. Signal Process., vol. 70, pp.
1532–1547, 2022.
[42] D. H. Shmuel, J. P. Merkofer, G. Revach, R. J. van Sloun, and
N. Shlezinger, “SubspaceNet: Deep learning-aided subspace methods
for DoA estimation,” arXiv preprint arXiv:2306.02271, 2023.
[43] J. Knoblauch, J. Jewson, and T. Damoulas, “Generalized variational
inference: Three arguments for deriving new posteriors,” arXiv preprint
arXiv:1904.02063, 2019.
[44] A. Masegosa, “Learning under model misspecification: Applications to
variational and ensemble methods,” Advances in Neural Information
Processing Systems, vol. 33, pp. 5479–5491, 2020.
[45] W. R. Morningstar, A. Alemi, and J. V. Dillon, "PACm-Bayes: Narrowing
the empirical risk gap in the misspecified Bayesian regime," in International
Conference on Artificial Intelligence and Statistics. PMLR, 2022,
pp. 8270–8298.
[46] V. Vovk, A. Gammerman, and G. Shafer, Algorithmic learning in a
random world. Springer, 2005, vol. 29.
[47] A. N. Angelopoulos and S. Bates, “A gentle introduction to confor-
mal prediction and distribution-free uncertainty quantification,” arXiv
preprint arXiv:2107.07511, 2021.
[48] K. M. Cohen, S. Park, O. Simeone, and S. Shamai, “Calibrating AI
models for wireless communications via conformal prediction,” IEEE
Trans. Mach. Learn. Commun. Netw., vol. 1, pp. 296–312, 2023.