I am a Machine Learning/AI researcher in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. My research interests broadly span machine learning, statistics, and signal processing, with applications in computer vision, healthcare, language modeling, and scientific data analysis. I collaborate with several researchers and practitioners to enable the use of machine learning and AI technologies to solve challenging real-world problems.
Here is a profile of my research at LLNL.
Check out this site for more details on my research, updates, preprints, and code releases.
And yes… you can call me Jay.
- News article about our recent PNAS paper on using deep learning for surrogate modeling in scientific applications.
- Feature article on AI-based analysis of clinical diagnosis models and COVID-19 infections.
- Listen to my podcast with DataSkeptic where I talked about some recent work on interpretability in healthcare AI [Apple] [Spotify].
- New paper alert: A preprint of our work at LLNL on designing accurate emulators for scientific processes that is currently under consideration at Nature Communications.
- Need to build chest X-ray based diagnostic models with limited data? Our new paper provides a bunch of tricks.
- New paper alert: A new approach to build reliable and interpretable deep models for Healthcare AI. In collaboration with IBM Research and Arizona State University.
- Listen to our presentation on Learn-by-Calibrating at IEEE ICASSP 2020.
- Paper on building accurate neural network surrogates for inertial confinement fusion published in Proceedings of the National Academy of Sciences.
- Coverage-based designs for hyper-parameter optimization in neural nets accepted for publication at IEEE Transactions on Neural Networks and Learning Systems.
- Our paper on MimicGAN, an easy and effective way to robustly project onto the image manifold, is accepted to IJCV Special Issue on GANs.
- New paper alert: How does prediction calibration affect the performance of lottery tickets? Our new paper answers this question!
- A regularized GAT model for robust semi-supervised learning under node injection attacks accepted as an oral talk at ICASSP 2020.
- Our paper Learn-by-Calibrating, which explores the use of a prior-free calibration objective for training regression models, accepted at ICASSP 2020.
- At AAAI presenting our work on building calibrated deep models for regression, time-series forecasting and object localization [poster].
- Presented our paper on weakly supervised instance labeling in histopathology images at ICMLA 2019 [Slides].
- Paper on uncertainty quantification with deep neural networks accepted at AAAI 2020.
- Released the technical report for my recently completed DOE-funded project on High-Dimensional Spectral Sampling.
- I will be presenting our paper on “Improving Deep Embeddings for Inferencing with Multi-layered Graphs” in the Deep Graph Learning: Methodologies and Applications Workshop at IEEE Big Data 2019.
- Received the DOE-ASCR Artificial Intelligence research grant to develop uncertainty quantification methods for deep learning.
- Four papers accepted for presentation at the NeurIPS 2019 workshops: ML for Physical Sciences, Deep Inverse, and Graph Representation Learning.
- New paper alert: Learning interpretable linear embeddings using function preserving projections.
- Co-organized the 2nd Applied Math Visioning workshop in New Mexico.
- Paper on weakly supervised instance labeling in histopathological images accepted for oral presentation at ICMLA 2019.
- Best paper award at the KDD Applied Data Science for Healthcare workshop.
- New paper alert: Do deep learning models in the clinical domain generalize under complex domain and task shifts? Our work explores disease landscapes to characterize this behavior.
- New preprint on unsupervised domain adaptation based on classical subspace analysis, with significant performance improvements over the state of the art.
- State-of-the-art results obtained in audio source separation via multi-scale feature learning using dilated dense U-Nets.
- New results on hyper-parameter optimization: coverage-based sample designs identify highly effective configurations.
- Updated version of the multi-layered graph attention models paper submitted.
- Co-organized the 1st DOE Applied Math Visioning Workshop and participated in exciting conversations about the future of ML/Data Science.
- NVIDIA blog features results from our recent work on generative models.
- Highlights from my research featured in the Computation Newsletter.