
Gupta, S., C. Wang, Y. Wang, T. Jaakkola, and S. Jegelka, 2024. Symmetries In-Context: Universal Self-Supervised Learning through Contextual World Models. Neural Information Processing Systems (NeurIPS) 2024. https://arxiv.org/abs/2405.18193
Kiani, B., T. Le, H. Lawrence, S. Jegelka, and M. Weber, 2024. On the Hardness of Learning Under Symmetries. International Conference on Learning Representations (ICLR) 2024. https://arxiv.org/abs/2401.01869
Le, T., L. Ruiz, and S. Jegelka, 2024. A Poincaré Inequality and Consistency Results for Signal Sampling on Large Graphs. International Conference on Learning Representations (ICLR) 2024. https://arxiv.org/abs/2311.10610
Burt, D.R., Y. Shen, and T. Broderick, 2024. Consistent Validation for Predictive Methods in Spatial Settings. In: ICML 2024 AI for Science Workshop. https://openreview.net/forum?id=dUGehG7cRf
Shen, Y., R. Berlinghieri, and T. Broderick, 2025. Multi-marginal Schrödinger Bridges with Iterative Reference Refinement. In: 28th International Conference on Artificial Intelligence and Statistics (AISTATS) 2025. https://openreview.net/forum?id=VcwZ3gtYFY
Zhao, M., Y. Cong, and L. Carin, 2020. On Leveraging Pretrained GANs for Generation with Limited Data. In: Proceedings of Machine Learning Research 119, 11340–11351.
Garg, V.K., S. Jegelka, and T. Jaakkola, 2020. Generalization and Representational Limits of Graph Neural Networks. In: Proceedings of Machine Learning Research 119, 3419–3430.
Stephenson, W.T., S. Ghosh, T.D. Nguyen, M. Yurochkin, S.K. Deshpande, and T. Broderick, 2022. Measuring the Robustness of Gaussian Processes to Kernel Choice. In: Proceedings of Machine Learning Research 151, 3308–3331.
Cheng, P., W. Hao, S. Dai, J. Liu, Z. Gan, and L. Carin, 2020. CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information. In: Proceedings of Machine Learning Research 119, 1779–1788.
Chuang, C.-Y., Y. Mroueh, K. Greenewald, A. Torralba, and S. Jegelka, 2021. Measuring Generalization with Optimal Transport. In: Neural Information Processing Systems (NeurIPS) 2021, December 6–14, 2021.
Berlinghieri, R., B.L. Trippe, D.R. Burt, R. Giordano, K. Srinivasan, T. Özgökmen, J. Xia, and T. Broderick, 2023. Gaussian Processes at the Helm(holtz): A More Fluid Model for Ocean Currents. In: Proceedings of Machine Learning Research 202, 2113–2163.
Tahmasebi, B. and S. Jegelka, 2023. The Exact Sample Complexity Gain from Invariances for Kernel Regression. In: Neural Information Processing Systems (NeurIPS) 2023, New Orleans, December 10–16, 2023. doi:10.48550/arXiv.2303.14269
Rajagopal, E., A.N.S. Babu, T. Ryu, P.J. Haley, Jr., C. Mirabito, and P.F.J. Lermusiaux, 2023. Evaluation of Deep Neural Operator Models toward Ocean Forecasting. In: OCEANS '23 IEEE/MTS Gulf Coast, 25–28 September 2023. doi:10.23919/OCEANS52994.2023.10337380
Chandramoorthy, N., A. Loukas, K. Gatmiry, S. Jegelka, 2022. On the Generalization of Learning Algorithms That Do Not Converge. In: Neural Information Processing Systems (NeurIPS) 2022, New Orleans, November 28–December 9, 2022.
Ryu, T., A.N.S. Babu, and P.F.J. Lermusiaux, 2022. Neural Closure Model for Dynamic Mode Decomposition Forecasts. In: Model Reduction and Surrogate Modelling (MORE) 2022, Berlin, September 19–23, 2022.
Kulkarni, C.S., A. Gupta, and P.F.J. Lermusiaux, 2020. Sparse Regression and Adaptive Feature Generation for the Discovery of Dynamical Systems. In: Darema, F., E. Blasch, S. Ravela, and A. Aved (eds.), Dynamic Data Driven Application Systems. DDDAS 2020. Lecture Notes in Computer Science 12312, 208–216. doi:10.1007/978-3-030-61725-7_25