Masahiro Fujisawa (藤澤 将広), Ph.D.
Assistant Professor at Machine Learning & Systems Laboratory, Graduate School of Information Science and Technology, The University of Osaka.
Visiting Scientist at RIKEN Center for Advanced Intelligence Project (RIKEN AIP).
Visiting Scientist at Lattice Lab., Toyota Motor Corporation.
I received my Ph.D. in Science (Machine Learning) from the University of Tokyo in 2023, under the supervision of Prof. Issei Sato, with Prof. Masashi Sugiyama as co-advisor.
About Me
I am a researcher dedicated to enhancing the reliability of machine learning by exploring the theoretical foundations of robustness, generalization, and uncertainty quantification through Bayesian statistics and learning theory.
My research translates these theories into practice. I explore the robustness of machine learning, from ensuring provably robust model alignment for LLMs (NeurIPS25) to developing outlier-robust approximate Bayesian computation (AISTATS21). I also investigate the theoretical underpinnings of generalization and calibration, using PAC-Bayes and information theory to analyze calibration error (e.g., NeurIPS24; ICML25) and the role of latent variables in VAEs (NeurIPS25). My work further extends to the foundational analysis of core Bayesian methods, such as scalable variational inference (JMLR21).
Research Keywords
- Provably Robust Methods
- Generalization Analysis
- Calibration
- Bias Analysis for Metrics
- Bayesian Methods & Others
- Applications
  - AI for Science, such as Ecology (Preprint); Data Analysis (Sports, Marketing)
CV
My CV is here.
News
Nov 15, 2025: We received the Best Presentation Award and were selected as a finalist for the Excellent Presentation Award at IBIS2025! I’d like to express my sincere appreciation to my brilliant collaborators, Futoshi, Masaki, and Mike.
Oct 15, 2025: Our new paper, L2-Regularized Empirical Risk Minimization Guarantees Small Smooth Calibration Error, coauthored with Prof. Futoshi Futami (The University of Osaka / RIKEN AIP / The University of Tokyo), is now publicly available on arXiv.
Sep 18, 2025: Our two papers, Scalable Valuation of Human Feedback through Provably Robust Model Alignment and Information-theoretic Generalization Analysis for VQ-VAEs: A Role of Latent Variables, have been accepted to NeurIPS 2025! Many thanks to the reviewers for their valuable feedback, and to the ACs, SACs, and PCs for their hard work amidst their busy schedules. And huge thanks to my collaborators, Masaki, Mike, and Futoshi!
