Posts by Collection

Portfolio

Publications

A General Framework for Accurate and Private Mean Estimation

Published in IEEE Signal Processing Letters, 2022

In this letter, we present a differentially private algorithm that accurately estimates the mean of an underlying population with a given cumulative distribution function.
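
The paper's framework is more general, but the basic recipe behind private mean estimation can be sketched with the standard clip-and-noise Laplace mechanism. The function private_mean, the clipping range, and the budget epsilon below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def private_mean(samples, lo, hi, epsilon, rng=None):
    """Estimate the mean of `samples` under epsilon-differential privacy.

    Values are clipped to the known range [lo, hi]; Laplace noise is
    calibrated to the sensitivity of the clipped mean, (hi - lo) / n.
    (Illustrative baseline, not the paper's estimator.)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(samples, dtype=float), lo, hi)
    n = x.size
    sensitivity = (hi - lo) / n          # worst-case change from one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return x.mean() + noise

# Example: a population with a known range, privacy budget epsilon = 1.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)
print(private_mean(data, lo=-5.0, hi=15.0, epsilon=1.0, rng=rng))
```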

Recommended citation: Z. Yang, X. Xu and Y. Gu, A General Framework for Accurate and Private Mean Estimation, in IEEE Signal Processing Letters, vol. 29, pp. 2293-2297, 2022, doi: 10.1109/LSP.2022.3219356.

Distributionally Robust Policy Gradient for Offline Contextual Bandits

Published in AISTATS, 2023

In this paper, we employ a distributionally robust policy gradient method, DROPO, to account for the distributional shift between the static logging policy and the learning policy. Our approach conservatively estimates the conditional reward distribution and updates the policy accordingly.
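
DROPO's distributionally robust estimator is more involved than what fits here; as a rough sketch, the loop below runs a plain importance-sampling policy gradient on synthetic logged bandit data, with an ad-hoc penalty term standing in for the conservative reward estimate. The linear softmax policy, the penalty form, and all names are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 5000, 8, 4                 # logged samples, feature dim, actions

# Synthetic logged data: contexts X, actions A from a uniform logging
# policy, rewards R, and logging propensities mu(a|x) = 1/K.
X = rng.normal(size=(n, d))
A = rng.integers(K, size=n)
R = rng.normal(loc=(A == (X[:, 0] > 0)).astype(float), scale=0.5)
mu = np.full(n, 1.0 / K)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

theta = np.zeros((d, K))             # linear softmax policy parameters
lr, penalty = 0.1, 0.05              # penalty: crude stand-in for pessimism
onehot = np.eye(K)[A]
for _ in range(500):
    P = softmax(X @ theta)                       # pi_theta(. | x)
    w = P[np.arange(n), A] / mu                  # importance weights
    pessimistic = R - penalty * w                # shrink reward where w is large
    # grad of log pi(a|x) for a linear softmax policy: outer(x, e_a - P)
    G = X.T @ ((w * pessimistic)[:, None] * (onehot - P)) / n
    theta += lr * G

# Importance-sampling estimate of the learned policy's value.
print("estimated policy value:", np.mean(w * R))
```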

Recommended citation: Z. Yang, Y. Guo, P. Xu, A. Liu, and A. Anandkumar. Distributionally robust policy gradient for offline contextual bandits. In International Conference on Artificial Intelligence and Statistics, pages 6443–6462. PMLR, 2023.

Bias-Variance Trade-off in Physics-Informed Neural Networks with Randomized Smoothing for High-Dimensional PDEs

Published in SIAM Journal of Computational Physics (under review), 2023

In this paper, we present a comprehensive analysis of biases in randomized-smoothing physics-informed neural networks (RS-PINN), attributing them to the nonlinearity of the Mean Squared Error (MSE) loss as well as the intrinsic nonlinearity of the PDE itself. We also propose tailored bias correction techniques, delineating their application based on the order of PDE nonlinearity. The derivation of an unbiased RS-PINN allows for a detailed examination of its advantages and disadvantages compared to the biased version.
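
The MSE-loss bias can be seen in isolation with a toy one-dimensional example: squaring a Monte Carlo estimate of the smoothed solution inflates it by Var(u)/N, and subtracting the sample variance removes that term. The stand-in solution np.tanh and all constants below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.tanh                          # stand-in for a network solution u(x)
x, sigma, N = 0.3, 0.1, 64           # point, smoothing width, MC samples

def mc_draws():
    """Samples u(x + sigma*z) whose mean estimates the smoothed u_sigma(x)."""
    return u(x + sigma * rng.normal(size=N))

trials = 20_000
draws = np.stack([mc_draws() for _ in range(trials)])
m = draws.mean(axis=1)               # one Monte Carlo mean per trial

# Squaring an MC mean, as a squared-residual loss does, is biased upward:
# E[m^2] = (E[m])^2 + Var(u)/N.  Subtracting the sample variance / N removes it.
naive = (m ** 2).mean()
corrected = (m ** 2 - draws.var(axis=1, ddof=1) / N).mean()
reference = m.mean() ** 2            # (u_sigma(x))^2 up to negligible bias
print(f"naive {naive:.6f}  corrected {corrected:.6f}  reference {reference:.6f}")
```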

Recommended citation: Z. Hu, Z. Yang, Y. Wang, G.E. Karniadakis, K. Kawaguchi, Bias-variance trade-off in physics-informed neural networks with randomized smoothing for high-dimensional PDEs, 2023, arXiv preprint arXiv:2311.15283.

Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization

Published in NeurIPS, 2024

In this paper, we introduce Forward Gradient Unrolling with Forward Gradient, abbreviated as $(FG)^2U$, which achieves an unbiased stochastic approximation of the meta gradient for bi-level optimization.
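
The paper couples this with unrolled inner optimization; the sketch below demonstrates only the forward-gradient building block: a Jacobian-vector product along a random tangent v yields (grad · v) v, an unbiased estimate of the gradient. The quadratic test function and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
M = rng.normal(size=(d, d))
M = M @ M.T                          # SPD matrix for a toy quadratic objective
w = rng.normal(size=d)

grad = M @ w                         # exact gradient of f(w) = 0.5 w^T M w

# Forward gradient: draw a random tangent v, compute the directional
# derivative (a Jacobian-vector product, here available in closed form),
# and report (grad . v) v.  Since E[v v^T] = I for standard normal v,
# the estimator is unbiased: E[(grad . v) v] = grad.
def forward_gradient(v):
    jvp = grad @ v                   # directional derivative of f at w along v
    return jvp * v

est = np.stack([forward_gradient(rng.standard_normal(d)) for _ in range(100_000)])
print("max abs error of the averaged estimate:", np.abs(est.mean(axis=0) - grad).max())
```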

Recommended citation: Q. Shen, Y. Wang, Z. Yang, X. Li, H. Wang, Y. Zhang, J. Scarlett, Z. Zhu, and K. Kawaguchi, Memory-efficient gradient unrolling for large-scale bi-level optimization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Talks

Teaching

TA for Introduction to Computational Mathematics

Undergraduate course, Johns Hopkins University, Department of Applied Mathematics and Statistics, 2024

As a teaching assistant for Introduction to Computational Mathematics, I assist with lectures, lead discussion sessions, and grade assignments and exams.