Phillip E. Pope | PhD Student | Department of Computer Science, University of Maryland, College Park | pepope (at) cs (dot) umd (dot) edu | Google Scholar | GitHub
I am a PhD student advised by David Jacobs and Hong-Zhou Ye, and a member of the Center for Machine Learning and the Institute for Advanced Computer Studies at the University of Maryland, College Park.
My research is on machine learning for quantum chemistry. Specifically, I am working on speeding up solutions of the Kohn-Sham equations by learning better initializations of the charge density. The long-term vision of my work is to accelerate the search for new catalysts for the green economy. My work has also spanned a number of other machine learning topics, including robustness, data manifolds, explainability, and generative models.
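For context, here is a minimal sketch of the Kohn-Sham equations referenced above, in standard textbook notation (atomic units); this is general background rather than a description of any specific method:

\[
\Big[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{eff}}[\rho](\mathbf{r}) \Big]\, \psi_i(\mathbf{r}) = \varepsilon_i\, \psi_i(\mathbf{r}),
\qquad
\rho(\mathbf{r}) = \sum_{i \in \mathrm{occ}} |\psi_i(\mathbf{r})|^2 .
\]

Because the effective potential \(v_{\mathrm{eff}}\) depends on the charge density \(\rho\), these equations are solved by self-consistent-field iteration, and a better initial guess for \(\rho\) can reduce the number of iterations needed to reach convergence.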
Towards Combinatorial Generalization for Catalysts: A Kohn-Sham Charge-Density Approach Pope, P., Jacobs, D. Published at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023) VIDEO
The Intrinsic Dimension of Images and Its Impact on Learning Pope, P., Zhu, C., Abdelkader, A., Goldblum, M., Goldstein, T. Published at The Ninth International Conference on Learning Representations (ICLR 2021) Awarded Spotlight Presentation (3.8% overall acceptance rate) VIDEO
A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes Moayeri, M., Pope, P., Balaji, Y., Feizi, S. Published at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2022) Awarded Oral Presentation (4.2% overall acceptance rate)
Explainability Methods for Graph Convolutional Neural Networks Pope, P.*, Kolouri, S.*, Rostami, M., Martin, C., Hoffmann, H. Published at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2019) Awarded Oral Presentation (5.5% overall acceptance rate) VIDEO
Stochastic Training is Not Necessary for Generalization Geiping, J., Goldblum, M., Pope, P., Moeller, M., Goldstein, T. Published at The Tenth International Conference on Learning Representations (ICLR 2022)
Influence Functions in Deep Learning Are Fragile Basu, S.*, Pope, P.*, Feizi, S. Published at The Ninth International Conference on Learning Representations (ICLR 2021)
Adversarial Robustness of Flow-Based Generative Models Pope, P.*, Balaji, Y.*, Feizi, S. Published at The Twenty-Third International Conference on Artificial Intelligence and Statistics (AISTATS 2020)
Sliced-Wasserstein Autoencoders Kolouri, S., Pope, P., Martin, C., Rohde, G. Published at The Seventh International Conference on Learning Representations (ICLR 2019)