Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Published in IEEE Signal Processing Letters, 2022
Abstract: In this letter, we present a differentially private algorithm that accurately estimates the mean of an underlying population with a given cumulative distribution function. Our algorithm improves on prior algorithms in two respects. First, it is capable of handling more general classes of probability distributions, including those with very heavy tails. Second, for light-tailed distributions, it achieves better accuracy with fewer samples.
Recommended citation: Z. Yang, X. Xu and Y. Gu, "A General Framework for Accurate and Private Mean Estimation," in IEEE Signal Processing Letters, vol. 29, pp. 2293-2297, 2022, doi: 10.1109/LSP.2022.3219356.
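The paper's framework is not reproduced here, but the standard building block for private mean estimation — clip each sample to bound one record's influence, then add Laplace noise calibrated to that sensitivity — can be sketched as follows. The clipping radius and the use of the plain Laplace mechanism are illustrative assumptions, not the algorithm from the letter:

```python
import numpy as np

def dp_mean(samples, clip, epsilon, rng=None):
    """Epsilon-differentially private mean estimate (generic sketch).

    Clipping each sample to [-clip, clip] bounds the change one record can
    cause in the empirical mean by 2*clip/n; Laplace noise scaled to that
    sensitivity masks any individual's contribution.
    """
    rng = np.random.default_rng(rng)
    x = np.clip(np.asarray(samples, dtype=float), -clip, clip)
    n = len(x)
    sensitivity = 2.0 * clip / n
    noise = rng.laplace(scale=sensitivity / epsilon)
    return x.mean() + noise
```

The key trade-off the letter addresses appears here directly: a small clipping radius biases the estimate for heavy-tailed data, while a large radius inflates the noise needed for privacy.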
Published in AISTATS, 2023
Abstract: Learning an optimal policy from offline data is notoriously challenging, as it requires evaluating the learning policy using data pre-collected by a static logging policy. We study the policy optimization problem in offline contextual bandits using policy gradient methods. We employ a distributionally robust policy gradient method, DROPO, to account for the distributional shift between the static logging policy and the learning policy in the policy gradient. Our approach conservatively estimates the conditional reward distribution and updates the policy accordingly. We show that our algorithm converges to a stationary point at rate O(1/T), where T is the number of time steps. We conduct experiments on real-world datasets under various logging-policy scenarios to compare our proposed algorithm with baseline methods in offline contextual bandits. We also propose a variant of our algorithm, DROPO-exp, to further improve performance when a limited amount of online interaction is allowed. Our results demonstrate the effectiveness and robustness of the proposed algorithms, especially under heavily biased offline data.
Recommended citation: Z. Yang, Y. Guo, P. Xu, A. Liu, and A. Anandkumar. Distributionally robust policy gradient for offline contextual bandits. In International Conference on Artificial Intelligence and Statistics, pages 6443–6462. PMLR, 2023.
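The distributional shift the abstract describes is usually handled with importance weighting: each logged reward is reweighted by the ratio of the learning policy's action probability to the logging policy's. The sketch below shows a plain inverse-propensity-scored (IPS) policy gradient for a softmax-linear policy — the non-robust baseline that DROPO improves on, not DROPO itself, and all function and variable names are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ips_policy_gradient(theta, contexts, actions, rewards, logging_probs):
    """Gradient of the IPS objective V(theta) = mean_i [pi_theta(a_i|x_i)
    / mu(a_i|x_i) * r_i] for a softmax-linear policy (generic sketch)."""
    probs = softmax(contexts @ theta)          # pi_theta(a|x), shape (n, A)
    n = len(actions)
    w = probs[np.arange(n), actions] / logging_probs  # importance weights
    onehot = np.zeros_like(probs)
    onehot[np.arange(n), actions] = 1.0
    # grad log pi for softmax-linear policies: x outer (onehot(a) - pi)
    glogpi = contexts[:, :, None] * (onehot - probs)[:, None, :]
    return ((w * rewards)[:, None, None] * glogpi).mean(axis=0)
```

When the logging policy rarely plays the actions the learning policy prefers, these importance weights explode; a distributionally robust objective like the one in the paper is one way to keep the update conservative in exactly that regime.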
Published in SIAM Journal of Computational Physics (under review), 2023
Abstract: Physics-Informed Neural Networks (PINNs) have triggered a paradigm shift in scientific computing, leveraging mesh-free properties and robust approximation capabilities. While proving effective for low-dimensional partial differential equations (PDEs), the computational cost of PINNs remains a hurdle in high-dimensional scenarios. This is particularly pronounced when computing high-order and high-dimensional derivatives in the physics-informed loss. Randomized Smoothing PINN (RS-PINN) introduces Gaussian noise for stochastic smoothing of the original neural net model, enabling the use of Monte Carlo methods for derivative approximation, which eliminates the need for costly automatic differentiation. Despite its computational efficiency, especially in the approximation of high-dimensional derivatives, RS-PINN introduces biases in both loss and gradients, negatively impacting convergence, especially when coupled with stochastic gradient descent (SGD) algorithms. We present a comprehensive analysis of biases in RS-PINN, attributing them to the nonlinearity of the Mean Squared Error (MSE) loss as well as the intrinsic nonlinearity of the PDE itself. We propose tailored bias correction techniques, delineating their application based on the order of PDE nonlinearity. The derivation of an unbiased RS-PINN allows for a detailed examination of its advantages and disadvantages compared to the biased version. Specifically, the biased version has a lower variance and runs faster than the unbiased version, but it is less accurate due to the bias. To optimize the bias-variance trade-off, we combine the two approaches in a hybrid method that balances the rapid convergence of the biased version with the high accuracy of the unbiased version. In addition to methodological contributions, we present an enhanced implementation of RS-PINN. 
Extensive experiments on diverse high-dimensional PDEs, including Fokker-Planck, Hamilton-Jacobi-Bellman (HJB), viscous Burgers’, Allen-Cahn, and Sine-Gordon equations, illustrate the bias-variance trade-off and highlight the effectiveness of the hybrid RS-PINN. Empirical guidelines are provided for selecting biased, unbiased, or hybrid versions, depending on the dimensionality and nonlinearity of the specific PDE problem.
Recommended citation: Z. Hu, Z. Yang, Y. Wang, G.E. Karniadakis, K. Kawaguchi, Bias-variance trade-off in physics-informed neural networks with randomized smoothing for high-dimensional PDEs, 2023, arXiv preprint arXiv:2311.15283.
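The core mechanism the abstract relies on — Gaussian smoothing of the network so that derivatives become Monte Carlo expectations instead of automatic-differentiation passes — can be illustrated in a few lines via Stein's identity, grad f_sigma(x) = E[eps (f(x + sigma*eps) - f(x - sigma*eps))] / (2*sigma) with eps ~ N(0, I). This is a generic sketch of the estimator, not the paper's RS-PINN implementation or its bias corrections:

```python
import numpy as np

def smoothed_grad(f, x, sigma=0.1, num_samples=20000, rng=None):
    """Monte Carlo gradient of the Gaussian-smoothed surrogate
    f_sigma(x) = E[f(x + sigma*eps)], eps ~ N(0, I), via Stein's identity.

    Antithetic pairs (+eps, -eps) reduce variance; no automatic
    differentiation of f is needed.  f must map (k, d) arrays to (k,).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    eps = rng.standard_normal((num_samples, x.size))
    diff = f(x + sigma * eps) - f(x - sigma * eps)
    return (eps * diff[:, None]).mean(axis=0) / (2.0 * sigma)
```

The bias the paper analyzes is visible already at this level: the estimator targets the gradient of the smoothed surrogate f_sigma rather than of f itself, and pushing such estimates through a nonlinear loss or a nonlinear PDE operator compounds the discrepancy.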
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.